| id | title | categories | abstract |
|---|---|---|---|
cs/0612133
|
Tales of Huffman
|
cs.IT cs.CC math.IT
|
We study the new problem of Huffman-like codes subject to individual
restrictions on the code-word lengths of a subset of the source words. These
are prefix codes with minimal expected code-word length for a random source
where, additionally, the code-word lengths of a subset of the source words are
prescribed, possibly differently for every such source word. Based on a
structural analysis of properties of optimal solutions, we construct an
efficient dynamic programming algorithm for this problem, and for an integer
programming problem that may be of independent interest.
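The unconstrained problem this paper generalizes is classical Huffman coding. As a point of reference only (this is not the paper's constrained dynamic-programming algorithm), a minimal sketch of the standard merge procedure:

```python
import heapq
import itertools

def huffman_lengths(probs):
    """Optimal codeword lengths for a binary prefix code via the classical
    Huffman procedure: repeatedly merge the two least-probable items."""
    tiebreak = itertools.count()
    # Heap entries: (probability, tie-breaker, member symbol indices).
    heap = [(p, next(tiebreak), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1  # each merge adds one bit to every member
        heapq.heappush(heap, (p1 + p2, next(tiebreak), s1 + s2))
    return lengths
```

For probabilities (0.4, 0.3, 0.2, 0.1) this yields lengths (1, 2, 3, 3), which satisfy the Kraft equality; the constrained variant studied in the paper additionally prescribes some of these lengths.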
|
cs/0612136
|
Experiments on predictability of word in context and information rate in
natural language
|
cs.IT math.IT
|
Based on data from a large-scale experiment with human subjects, we conclude
that the logarithm of the probability of guessing a word in context (its
unpredictability) depends linearly on the word length. This result holds for
both poetry and prose, even though for prose the subjects do not know the
length of the omitted word. We hypothesize that this effect reflects a
tendency of natural
language to have an even information rate.
|
cs/0612137
|
Turning Cluster Management into Data Management: A System Overview
|
cs.DB
|
This paper introduces the CondorJ2 cluster management system. Traditionally,
cluster management systems such as Condor employ a process-oriented approach
with little or no use of modern database system technology. In contrast,
CondorJ2 employs a data-centric, 3-tier web-application architecture for all
system functions (e.g., job submission, monitoring and scheduling; node
configuration, monitoring and management, etc.) except for job execution.
Employing a data-oriented approach allows the core challenge (i.e., managing
and coordinating a large set of distributed computing resources) to be
transformed from a relatively low-level systems problem into a more abstract,
higher-level data management problem. Preliminary results suggest that
CondorJ2's use of standard 3-tier software represents a significant step
forward in the design and implementation of large clusters (1,000 to 10,000
nodes).
|
cs/0701002
|
Relay Assisted F/TDMA Ad Hoc Networks: Node Classification, Power
Allocation and Relaying Strategies
|
cs.IT math.IT
|
This paper considers the design of relay assisted F/TDMA ad hoc networks with
multiple relay nodes each of which assists the transmission of a predefined
subset of source nodes to their respective destinations. Considering the sum
capacity as the performance metric, we solve the problem of optimally
allocating the total power of each relay node between the transmissions it is
assisting. We consider four different relay transmission strategies, namely
regenerative decode-and-forward (RDF), nonregenerative decode-and-forward
(NDF), amplify-and-forward (AF) and compress-and-forward (CF). We first obtain
the optimum power allocation policies for the relay nodes that employ a uniform
relaying strategy for all nodes. We show that the optimum power allocation for
the RDF and NDF cases are modified water-filling solutions. We observe that for
a given relay transmit power, NDF always outperforms RDF whereas CF always
provides higher sum capacity than AF. When CF and NDF are compared, it is
observed that either of CF or NDF may outperform the other in different
scenarios. This observation suggests that the sum capacity can be further
improved by having each relay adapt its relaying strategy in helping different
source nodes. We investigate this problem next and determine the optimum power
allocation and relaying strategy for each source node that relay nodes assist.
We observe that optimum power allocation for relay nodes with hybrid relaying
strategies provides higher sum capacity than pure RDF, NDF, AF or CF relaying
strategies.
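The RDF/NDF allocations above are described as modified water-filling solutions. For illustration, the classical (unmodified) water-filling baseline can be sketched as follows, with assumed names: per-link gains `gains` and a total power budget, solving for the water level by bisection:

```python
def water_filling(gains, total_power, tol=1e-9):
    """Classical water-filling: allocate P_i = max(0, mu - 1/g_i) so that
    sum_i P_i = total_power, finding the water level mu by bisection."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu  # water level too high: allocation exceeds the budget
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

Stronger links receive more power, and weak links may receive none; the paper's modified variants adjust this rule to the relay setting.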
|
cs/0701003
|
Magnification Laws of Winner-Relaxing and Winner-Enhancing Kohonen
Feature Maps
|
cs.NE cs.IT math.IT
|
Self-Organizing Maps are models for unsupervised representation formation of
cortical receptor fields by stimuli-driven self-organization in laterally
coupled winner-take-all feedforward structures. This paper examines
modifications of the original Kohonen model, motivated by a potential
function, with respect to their ability to set up a neural mapping of maximal
mutual information. Enhancing the winner update, instead of relaxing it,
results in an algorithm that generates an infomax map corresponding to a
magnification exponent of one. Although there may be more than one algorithm
achieving the same
magnification exponent, the magnification law is an experimentally accessible
quantity and therefore suitable for quantitative description of neural
optimization principles.
|
cs/0701006
|
The Trapping Redundancy of Linear Block Codes
|
cs.IT math.IT
|
We generalize the notion of the stopping redundancy in order to study the
smallest size of a trapping set in Tanner graphs of linear block codes. In this
context, we introduce the notion of the trapping redundancy of a code, which
quantifies the relationship between the number of redundant rows in any
parity-check matrix of a given code and the size of its smallest trapping set.
Trapping sets with certain parameter sizes are known to cause error-floors in
the performance curves of iterative belief propagation decoders, and it is
therefore important to identify decoding matrices that avoid such sets. Bounds
on the trapping redundancy are obtained using probabilistic and constructive
methods, and the analysis covers both general and elementary trapping sets.
Numerical values for these bounds are computed for the [2640,1320] Margulis
code and the class of projective geometry codes, and compared with some new
code-specific trapping set size estimates.
|
cs/0701011
|
Infinite-Alphabet Prefix Codes Optimal for $\beta$-Exponential Penalties
|
cs.IT cs.DS math.IT
|
Let $P = \{p(i)\}$ be a measure of strictly positive probabilities on the set
of nonnegative integers. Although the countably infinite number of inputs prevents use
of the Huffman algorithm, there are nontrivial $P$ for which known methods find
a source code that is optimal in the sense of minimizing expected codeword
length. For some applications, however, a source code should instead minimize
one of a family of nonlinear objective functions, $\beta$-exponential means,
those of the form $\log_a \sum_i p(i) a^{n(i)}$, where $n(i)$ is the length of
the $i$th codeword and $a$ is a positive constant. Applications of such
minimizations include a problem of maximizing the chance of message receipt in
single-shot communications ($a<1$) and a problem of minimizing the chance of
buffer overflow in a queueing system ($a>1$). This paper introduces methods for
finding codes optimal for such exponential means. One method applies to
geometric distributions, while another applies to distributions with lighter
tails. The latter algorithm is applied to Poisson distributions. Both are
extended to minimizing maximum pointwise redundancy.
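The objective itself is easy to state in code. A minimal evaluation of the $\beta$-exponential mean $\log_a \sum_i p(i) a^{n(i)}$ for a given code (function and argument names are assumed for illustration):

```python
import math

def beta_exponential_mean(probs, lengths, a):
    """The exponential-mean objective log_a( sum_i p(i) * a**n(i) ).
    As a -> 1 it approaches the ordinary expected codeword length."""
    return math.log(sum(p * a ** n for p, n in zip(probs, lengths)), a)
```

For a uniform pair with lengths (1, 2) and a = 2 this gives log2(0.5·2 + 0.5·4) = log2(3); with a close to 1 it approaches the expected length 1.5, recovering the usual objective.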
|
cs/0701012
|
$D$-ary Bounded-Length Huffman Coding
|
cs.IT cs.DS math.IT
|
Efficient optimal prefix coding has long been accomplished via the Huffman
algorithm. However, there is still room for improvement and exploration
regarding variants of the Huffman problem. Length-limited Huffman coding,
useful for many practical applications, is one such variant, in which codes are
restricted to the set of codes in which none of the $n$ codewords is longer
than a given length, $l_{\max}$. Binary length-limited coding can be done in
$O(n l_{\max})$ time and $O(n)$ space via the widely used Package-Merge
algorithm. In this paper the Package-Merge approach is generalized without
increasing complexity in order to introduce a minimum codeword length,
$l_{\min}$, to allow for objective functions other than the minimization of
expected codeword length, and to be applicable to both binary and nonbinary
codes; nonbinary codes were previously addressed using a slower dynamic
programming approach. These extensions have various applications -- including
faster decompression -- and can be used to solve the problem of finding an
optimal code with limited fringe, that is, finding the best code among codes
with a maximum difference between the longest and shortest codewords. The
previously proposed method for solving this problem was nonpolynomial time,
whereas solving it with the novel algorithm requires only
$O(n (l_{\max}-l_{\min})^2)$ time and $O(n)$ space.
|
cs/0701013
|
Attribute Value Weighting in K-Modes Clustering
|
cs.AI
|
In this paper, the traditional k-modes clustering algorithm is extended by
weighting attribute value matches in the dissimilarity computation. The use of
an attribute-value weighting technique makes it possible to generate clusters
with stronger intra-similarities and therefore to achieve better clustering
performance. Experimental results on real-life datasets show that these
value-weighting-based k-modes algorithms are superior to the standard k-modes
algorithm with respect to clustering accuracy.
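A minimal sketch of the idea, under one plausible weighting scheme (not necessarily the paper's exact one): a mismatch costs 1 as in standard k-modes, while a match is discounted by the within-cluster frequency of the matched value:

```python
def weighted_dissimilarity(obj, mode, value_freq):
    """Frequency-weighted k-modes dissimilarity between a categorical object
    and a cluster mode.  value_freq[j] maps values of attribute j to their
    relative frequency inside the cluster (hypothetical representation)."""
    d = 0.0
    for j, (x, m) in enumerate(zip(obj, mode)):
        if x != m:
            d += 1.0  # plain mismatch, as in standard k-modes
        else:
            d += 1.0 - value_freq[j].get(x, 0.0)  # strong matches cost less
    return d
```

Matches on values that dominate a cluster contribute little to the distance, which is what pulls the clusters toward stronger intra-similarity.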
|
cs/0701016
|
The Second Law and Informatics
|
cs.IT math.IT
|
A unification of thermodynamics and information theory is proposed. It is
argued that similarly to the randomness due to collisions in thermal systems,
the quenched randomness that exists in data files in informatics systems
contributes to entropy. Therefore, it is possible to define equilibrium and to
calculate temperature for informatics systems. The obtained temperature yields
correctly the Shannon information balance in informatics systems and is
consistent with the Clausius inequality and the Carnot cycle.
|
cs/0701017
|
Energy-Efficient Power Control in Impulse Radio UWB Wireless Networks
|
cs.IT math.IT
|
In this paper, a game-theoretic model for studying power control for wireless
data networks in frequency-selective multipath environments is analyzed. The
uplink of an impulse-radio ultrawideband system is considered. The effects of
self-interference and multiple-access interference on the performance of
generic Rake receivers are investigated for synchronous systems. Focusing on
energy efficiency, a noncooperative game is proposed in which users in the
network are allowed to choose their transmit powers to maximize their own
utilities, and the Nash equilibrium for the proposed game is derived. It is
shown that, due to the frequency-selective multipath, the noncooperative
solution is achieved at different signal-to-interference-plus-noise ratios,
depending on the channel realization and the type of Rake receiver employed. A
large-system analysis is performed to derive explicit expressions for the
achieved utilities. The Pareto-optimal (cooperative) solution is also discussed
and compared with the noncooperative approach.
|
cs/0701018
|
Performance Analysis of Algebraic Soft-Decision Decoding of Reed-Solomon
Codes
|
cs.IT math.IT
|
We investigate the decoding region for Algebraic Soft-Decision Decoding (ASD)
of Reed-Solomon codes in a discrete, memoryless, additive-noise channel. An
expression is derived for the error correction radius within which the
soft-decision decoder produces a list that contains the transmitted codeword.
The error radius for ASD is shown to be larger than that of Guruswami-Sudan
hard-decision decoding for a subset of low-rate codes. These results are also
extended to multivariable interpolation in the sense of Parvaresh and Vardy. An
upper bound is then presented for ASD's probability of error, where an error is
defined as the event that the decoder selects an erroneous codeword from its
list. This new definition gives a more accurate bound on the probability of
error of ASD than the results available in the literature.
|
cs/0701019
|
Flow-optimized Cooperative Transmission for the Relay Channel
|
cs.IT math.IT
|
This paper describes an approach for half-duplex cooperative transmission in
a classical three-node relay channel. Assuming availability of channel state
information at nodes, the approach makes use of this information to optimize
distinct flows through the direct link from the source to the destination and
the path via the relay, respectively. It is shown that such a design can
effectively harness diversity advantage of the relay channel in both high-rate
and low-rate scenarios. When the rate requirement is low, the proposed design
gives a second-order outage diversity performance approaching that of
full-duplex relaying. When the rate requirement becomes asymptotically large,
the design still gives a close-to-second-order outage diversity performance.
The design also achieves the best diversity-multiplexing tradeoff possible for
the relay channel. With optimal long-term power control over the fading relay
channel, the proposed design achieves a delay-limited rate performance that is
only 3.0dB (5.4dB) worse than the capacity performance of the additive white
Gaussian channel in low- (high-) rate scenarios.
|
cs/0701024
|
Secure Communication over Fading Channels
|
cs.IT cs.CR math.IT
|
The fading broadcast channel with confidential messages (BCC) is
investigated, where a source node has common information for two receivers
(receivers 1 and 2), and has confidential information intended only for
receiver 1. The confidential information needs to be kept as secret as possible
from receiver 2. The broadcast channel from the source node to receivers 1 and
2 is corrupted by multiplicative fading gain coefficients in addition to
additive Gaussian noise terms. The channel state information (CSI) is assumed
to be known at both the transmitter and the receivers. The parallel BCC with
independent subchannels is first studied, which serves as an
information-theoretic model for the fading BCC. The secrecy capacity region of
the parallel BCC is established. This result is then specialized to give the
secrecy capacity region of the parallel BCC with degraded subchannels. The
secrecy capacity region is then established for the parallel Gaussian BCC, and
the optimal source power allocations that achieve the boundary of the secrecy
capacity region are derived. In particular, the secrecy capacity region is
established for the basic Gaussian BCC. The secrecy capacity results are then
applied to study the fading BCC. Both the ergodic and outage performances are
studied.
|
cs/0701025
|
Free deconvolution for signal processing applications
|
cs.IT math.IT
|
Situations in many fields of research, such as digital communications,
nuclear physics and mathematical finance, can be modelled with random matrices.
When the matrices get large, free probability theory is an invaluable tool for
describing the asymptotic behaviour of many systems. It will be shown how free
probability can be used to aid in source detection for certain systems. Sample
covariance matrices for systems with noise are the starting point in our source
detection problem. Multiplicative free deconvolution is shown to be a method
which can aid in expressing limit eigenvalue distributions for sample
covariance matrices, and to simplify estimators for eigenvalue distributions of
covariance matrices.
|
cs/0701026
|
Analysis of Sequential Decoding Complexity Using the Berry-Esseen
Inequality
|
cs.IT math.IT
|
This study presents a novel technique to estimate the computational complexity
of sequential decoding using the Berry-Esseen theorem. Unlike the theoretical
bounds determined by the conventional central limit theorem argument, which
often holds only for sufficiently large codeword length, the new bound obtained
from the Berry-Esseen theorem is valid for any blocklength. The accuracy of the
new bound is then examined for two sequential decoding algorithms, an
ordering-free variant of the generalized Dijkstra's algorithm (GDA)(or
simplified GDA) and the maximum-likelihood sequential decoding algorithm
(MLSDA). Empirically investigating codes of small blocklength reveals that the
theoretical upper bound for the simplified GDA almost matches the simulation
results as the signal-to-noise ratio (SNR) per information bit ($\gamma_b$) is
greater than or equal to 8 dB. However, the theoretical bound may become
markedly higher than the simulated average complexity when $\gamma_b$ is small.
For the MLSDA, the theoretical upper bound is quite close to the simulation
results for both high SNR ($\gamma_b\geq 6$ dB) and low SNR ($\gamma_b\leq 2$
dB). Even for moderate SNR, the simulation results and the theoretical bound
differ by at most 0.8 on a $\log_{10}$ scale.
|
cs/0701027
|
The source coding game with a cheating switcher
|
cs.IT math.IT
|
Berger's paper `The Source Coding Game', IEEE Trans. Inform. Theory, 1971,
considers the problem of finding the rate-distortion function for an
adversarial source comprised of multiple known IID sources. The adversary,
called the `switcher', was allowed only causal access to the source
realizations and the rate-distortion function was obtained through the use of a
type covering lemma. In this paper, the rate-distortion function of the
adversarial source is described, under the assumption that the switcher has
non-causal access to all source realizations. The proof utilizes the type
covering lemma and simple conditional, random `switching' rules. The
rate-distortion function is once again the maximization of the R(D) function
for a region of attainable IID distributions.
|
cs/0701028
|
Statistical keyword detection in literary corpora
|
cs.CL cs.IR physics.soc-ph
|
Understanding the complexity of human language requires an appropriate
analysis of the statistical distribution of words in texts. We consider the
information retrieval problem of detecting and ranking the relevant words of a
text by means of statistical information referring to the "spatial" use of the
words. Shannon's entropy of information is used as a tool for automatic keyword
extraction. By using The Origin of Species by Charles Darwin as a
representative text sample, we show the performance of our detector and compare
it with other proposals in the literature. The randomly shuffled text receives
special attention as a tool for calibrating the ranking indices.
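One entropy-based score of this kind can be sketched as follows (an assumed form for illustration, not the paper's exact estimator): the Shannon entropy of a word's occurrence distribution across equal-sized parts of the text. Relevant words tend to cluster "spatially" and thus score low, while function words spread evenly and score high:

```python
import math

def spatial_entropy(text_words, word, parts=4):
    """Shannon entropy (bits) of a word's occurrence distribution across
    equal parts of the text; low entropy marks clustered keyword candidates."""
    n = len(text_words)
    counts = [0] * parts
    for i, w in enumerate(text_words):
        if w == word:
            counts[min(i * parts // n, parts - 1)] += 1
    total = sum(counts)
    if total == 0:
        return 0.0
    probs = [c / total for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)
```

A shuffled copy of the text, as the abstract notes, provides the natural calibration: shuffling destroys clustering and drives every word's score toward the maximum entropy.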
|
cs/0701030
|
New Constructions of a Family of 2-Generator Quasi-Cyclic Two-Weight
Codes and Related Codes
|
cs.IT math.IT
|
Based on cyclic simplex codes, a new construction of a family of 2-generator
quasi-cyclic two-weight codes is given. New optimal binary quasi-cyclic [195,
8, 96], [210, 8, 104] and [240, 8, 120] codes, and good quasi-cyclic ternary
[195, 6, 126], [208, 6, 135] and [221, 6, 144] codes are thus obtained.
Furthermore, binary
quasi-cyclic self-complementary codes are also constructed.
|
cs/0701034
|
Performance of Rake Receivers in IR-UWB Networks Using Energy-Efficient
Power Control
|
cs.IT math.IT
|
This paper studies the performance of partial-Rake (PRake) receivers in
impulse-radio ultrawideband wireless networks when an energy-efficient power
control scheme is adopted. Due to the large bandwidth of the system, the
multipath channel is assumed to be frequency-selective. By making use of
noncooperative game-theoretic models and large-system analysis tools, explicit
expressions are derived in terms of network parameters to measure the effects
of self-interference and multiple-access interference at a receiving access
point. Performance of the PRake receivers is thus compared in terms of achieved
utilities and loss to that of the all-Rake receiver. Simulation results are
provided to validate the analysis.
|
cs/0701036
|
Compression-based methods for nonparametric density estimation, on-line
prediction, regression and classification for time series
|
cs.IT math.IT
|
We address the problem of nonparametric estimation of characteristics for
stationary and ergodic time series. We consider finite-alphabet time series and
real-valued ones and the following four problems: i) estimation of the
(limiting) probability (or estimation of the density for real-valued time
series), ii) on-line prediction, iii) regression and iv) classification (or
so-called problems with side information). We show that so-called archivers (or
data compressors) can be used as a tool for solving these problems. In
particular, firstly, it is proven that any so-called universal code (or
universal data compressor) can be used as a basis for constructing
asymptotically optimal methods for the above problems. (By definition, a
universal code can "compress" any sequence generated by a stationary and
ergodic source asymptotically down to the Shannon entropy of the source.) And,
secondly, we show experimentally that estimates, which are based on practically
used methods of data compression, have a reasonable precision.
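In the same archiver-as-tool spirit, a minimal off-the-shelf sketch using zlib as the data compressor (the normalized compression distance shown here is a related construction, not the paper's own estimators):

```python
import zlib

def clen(data: bytes) -> int:
    """Length of the zlib-compressed representation, a crude entropy proxy."""
    return len(zlib.compress(data, 9))

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: small when a and b share structure,
    because compressing their concatenation then costs little extra."""
    ca, cb, cab = clen(a), clen(b), clen(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)
```

Comparing a sequence against candidate sources by such compressed lengths is the practical face of the universal-code argument: a good compressor approaches the entropy of whichever source actually generated the data.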
|
cs/0701038
|
Approximate Eigenstructure of LTV Channels with Compactly Supported
Spreading
|
cs.IT math.IT
|
In this article we obtain estimates on the approximate eigenstructure of
channels with a spreading function supported only on a set of finite measure
$|U|$. Because in typical applications, such as wireless communication, the spreading
function is a random process corresponding to a random Hilbert--Schmidt channel
operator $\BH$ we measure this approximation in terms of the ratio of the
$p$--norm of the deviation from variants of the Weyl symbol calculus to the
$a$--norm of the spreading function itself. This generalizes recent results
obtained for the case $p=2$ and $a=1$. We provide a general approach to this
topic and then consider operators with $|U|<\infty$ in more detail. We show the
relation to pulse shaping and weighted norms of ambiguity functions. Finally we
derive several necessary conditions on $|U|$, such that the approximation error
is below certain levels.
|
cs/0701039
|
On the Complexity of the Numerically Definite Syllogistic and Related
Fragments
|
cs.LO cs.AI cs.CC
|
In this paper, we determine the complexity of the satisfiability problem for
various logics obtained by adding numerical quantifiers, and other
constructions, to the traditional syllogistic. In addition, we demonstrate the
incompleteness of some recently proposed proof-systems for these logics.
|
cs/0701040
|
Curve Tracking Control for Legged Locomotion in Horizontal Plane
|
cs.RO
|
We derive a hybrid feedback control law for the lateral leg spring (LLS)
model so that the center of mass of a legged runner follows a curved path in
horizontal plane. The control law enables the runner to change the placement
and the elasticity of its legs to move in a desired direction. Stable motion
along a curved path is achieved using curvature, bearing and relative distance
between the runner and the curve as feedback. Constraints on leg parameters
determine the class of curves that can be followed. We also derive an optimal
control law that stabilizes the orientation of the runner's body relative to
the velocity of the runner's center of mass.
|
cs/0701041
|
A Coding Theorem for a Class of Stationary Channels with Feedback
|
cs.IT math.IT
|
A coding theorem is proved for a class of stationary channels with feedback
in which the output Y_n = f(X_{n-m}^n, Z_{n-m}^n) is the function of the
current and past m symbols from the channel input X_n and the stationary
ergodic channel noise Z_n. In particular, it is shown that the feedback
capacity is equal to $$ \lim_{n\to\infty} \sup_{p(x^n||y^{n-1})} \frac{1}{n}
I(X^n \to Y^n), $$ where I(X^n \to Y^n) = \sum_{i=1}^n I(X^i; Y_i|Y^{i-1})
denotes the Massey directed information from the channel input to the output,
and the supremum is taken over all causally conditioned distributions
p(x^n||y^{n-1}) = \prod_{i=1}^n p(x_i|x^{i-1},y^{i-1}). The main ideas of the
proof are the Shannon strategy for coding with side information and a new
elementary coding technique for the given channel model without feedback, which
is in a sense dual to Gallager's lossy coding of stationary ergodic sources. A
similar approach gives a simple alternative proof of coding theorems for finite
state channels by Yang-Kavcic-Tatikonda, Chen-Berger, and
Permuter-Weissman-Goldsmith.
|
cs/0701042
|
Sending a Bivariate Gaussian Source over a Gaussian MAC with Feedback
|
cs.IT math.IT
|
We consider the problem of transmitting a bivariate Gaussian source over a
two-user additive Gaussian multiple-access channel with feedback. Each of the
transmitters observes one of the source components and tries to describe it to
the common receiver. We are interested in the minimal mean squared error at
which the receiver can reconstruct each of the source components.
In the ``symmetric case'' we show that, below a certain signal-to-noise ratio
threshold which is determined by the source correlation, feedback is useless
and the minimal distortion is achieved by uncoded transmission. For the general
case we give necessary conditions for the achievability of a distortion pair.
|
cs/0701043
|
Adaptive Alternating Minimization Algorithms
|
cs.IT math.IT math.OC
|
The classical alternating minimization (or projection) algorithm has been
successful in the context of solving optimization problems over two variables.
The iterative nature and simplicity of the algorithm has led to its application
to many areas such as signal processing, information theory, control, and
finance. A general set of sufficient conditions for the convergence and
correctness of the algorithm is quite well-known when the underlying problem
parameters are fixed. In many practical situations, however, the underlying
problem parameters are changing over time, and the use of an adaptive algorithm
is more appropriate. In this paper, we study such an adaptive version of the
alternating minimization algorithm. As a main result of this paper, we provide
a general set of sufficient conditions for the convergence and correctness of
the adaptive algorithm. Perhaps surprisingly, these conditions seem to be the
minimal ones one would expect in such an adaptive setting. We present
applications of our results to adaptive decomposition of mixtures, adaptive
log-optimal portfolio selection, and adaptive filter design.
|
cs/0701047
|
On vocabulary size of grammar-based codes
|
cs.IT cs.CL math.IT
|
We discuss inequalities holding between the vocabulary size, i.e., the number
of distinct nonterminal symbols in a grammar-based compression for a string,
and the excess length of the respective universal code, i.e., the code-based
analog of algorithmic mutual information. The aim is to strengthen inequalities
which were discussed in a weaker form in linguistics but shed some light on
redundancy of efficiently computable codes. The main contribution of the paper
is a construction of universal grammar-based codes for which the excess lengths
can be bounded easily.
|
cs/0701048
|
Energy Conscious Interactive Communication for Sensor Networks
|
cs.IT math.IT
|
In this work, we are concerned with maximizing the lifetime of a cluster of
sensors engaged in single-hop communication with a base-station. In a
data-gathering network, the spatio-temporal correlation in sensor data induces
data-redundancy. Also, the interaction between two communicating parties is
well-known to reduce the communication complexity. This paper proposes a
formalism that exploits these two opportunities to reduce the number of bits
transmitted by a sensor node in a cluster, hence enhancing its lifetime. We
argue that our approach has several inherent advantages in scenarios where the
sensor nodes are acutely energy and computing-power constrained, but the
base-station is not so. This provides us an opportunity to develop
communication protocols, where most of the computing and communication is done
by the base-station.
The proposed framework casts the sensor nodes and base-station communication
problem as the problem of multiple informants with correlated information
communicating with a recipient and attempts to extend extant work on
interactive communication between an informant-recipient pair to such
scenarios. Our work makes four major contributions. Firstly, we explicitly show
that in such scenarios interaction can help in reducing the communication
complexity. Secondly, we show that the order in which the informants
communicate with the recipient may determine the communication complexity.
Thirdly, we provide the framework to compute the $m$-message communication
complexity in such scenarios. Lastly, we prove that in a typical sensor network
scenario, the proposed formalism significantly reduces the communication and
computational complexities.
|
cs/0701050
|
A Simple Proof of the Entropy-Power Inequality via Properties of Mutual
Information
|
cs.IT math.IT
|
While most useful information theoretic inequalities can be deduced from the
basic properties of entropy or mutual information, Shannon's entropy power
inequality (EPI) seems to be an exception: available information theoretic
proofs of the EPI hinge on integral representations of differential entropy
using either Fisher's information (FI) or minimum mean-square error (MMSE). In
this paper, we first present a unified view of proofs via FI and MMSE, showing
that they are essentially dual versions of the same proof, and then fill the
gap by providing a new, simple proof of the EPI, which is solely based on the
properties of mutual information and sidesteps both the FI and MMSE representations.
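For reference, the quantities involved are simple to evaluate in the Gaussian case, where the EPI $N(X+Y) \ge N(X) + N(Y)$ holds with equality for independent Gaussians. A numerical sanity check (illustrative only, not part of the paper's proof):

```python
import math

def entropy_power_gaussian(var):
    """Entropy power N(X) = exp(2*h(X)) / (2*pi*e).  For a Gaussian with
    variance v, h(X) = 0.5*log(2*pi*e*v), so N(X) reduces to v itself."""
    h = 0.5 * math.log(2.0 * math.pi * math.e * var)
    return math.exp(2.0 * h) / (2.0 * math.pi * math.e)
```

Since independent Gaussians add variances, N(X+Y) = N(X) + N(Y) exactly in this case, which is why Gaussians are the extremal distributions for the EPI.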
|
cs/0701051
|
Coding, Scheduling, and Cooperation in Wireless Sensor Networks
|
cs.IT math.IT
|
We consider a single-hop data gathering sensor cluster consisting of a set of
sensors that need to transmit data periodically to a base-station. We are
interested in maximizing the lifetime of this network. Even though the setting
of our problem is very simple, it turns out that the solution is far from easy.
The complexity arises from several competing system-level opportunities
available to reduce the energy consumed in radio transmission. First, sensor
data is spatially and temporally correlated. Recent advances in distributed
source-coding allow us to take advantage of these correlations to reduce the
number of transmitted bits, with concomitant savings in energy. Second, it is
also well-known that channel-coding can be used to reduce transmission energy
by increasing transmission time. Finally, sensor nodes are cooperative, unlike
nodes in an ad hoc network that are often modeled as competitive, allowing us
to take full advantage of the first two opportunities for the purpose of
maximizing cluster lifetime. In this paper, we pose the problem of maximizing
lifetime as a max-min optimization problem subject to the constraint of
successful data collection and limited energy supply at each node. By
introducing the notion of instantaneous decoding, we are able to simplify this
optimization problem into a joint scheduling and time allocation problem. We
show that even with this simplification, the problem remains NP-hard. We
provide some algorithms, heuristics and insight for various scenarios. Our
chief contribution is to illustrate both the challenges and gains provided by
joint source-channel coding and scheduling.
|
cs/0701052
|
Time Series Forecasting: Obtaining Long Term Trends with Self-Organizing
Maps
|
cs.LG math.ST stat.TH
|
Kohonen self-organising maps are a well-known classification tool, commonly
used in a wide variety of problems, but with limited application in the
time-series forecasting context. In this paper, we propose a forecasting method
specifically designed for multi-dimensional long-term trends prediction, with a
double application of the Kohonen algorithm. Practical applications of the
method are also presented.
|
cs/0701053
|
A Case For Amplify-Forward Relaying in the Block-Fading Multi-Access
Channel
|
cs.IT math.IT
|
This paper demonstrates the significant gains that multi-access users can
achieve from sharing a single amplify-forward relay in slow fading
environments. The proposed protocol, namely the multi-access relay
amplify-forward, allows for a low-complexity relay and achieves the optimal
diversity-multiplexing trade-off at high multiplexing gains. Analysis of the
protocol reveals that it uniformly dominates the compress-forward strategy and
further outperforms the dynamic decode-forward protocol at high multiplexing
gains. An interesting feature of the proposed protocol is that, at high
multiplexing gains, it resembles a multiple-input single-output system, and at
low multiplexing gains, it provides each user with the same
diversity-multiplexing trade-off as if there is no contention for the relay
from the other users.
|
cs/0701055
|
Bounds on Space-Time-Frequency Dimensionality
|
cs.IT math.IT
|
We bound the number of electromagnetic signals which may be observed over a
frequency range of width $2W$ for a time $T$ within a region of space enclosed
by a sphere of radius $R$. Our result implies that broadband fields in space cannot be
arbitrarily complex: there is a finite amount of information which may be
extracted from a region of space via electromagnetic radiation.
Three-dimensional space allows a trade-off between large carrier frequency
and bandwidth. We demonstrate applications in super-resolution and broadband
communication.
|
cs/0701056
|
Space-Time-Frequency Degrees of Freedom: Fundamental Limits for Spatial
Information
|
cs.IT math.IT
|
We bound the number of electromagnetic signals which may be observed over a
frequency range $[F-W,F+W]$ and a time interval $[0,T]$ within a sphere of radius
$R$. We show that such constrained signals may be represented by a series
expansion whose terms decay exponentially to zero beyond a threshold. Our
result implies there is a finite amount of information which may be extracted
from a region of space via electromagnetic radiation.
|
cs/0701057
|
Cooperative Optimization for Energy Minimization: A Case Study of Stereo
Matching
|
cs.CV cs.AI
|
Oftentimes, individuals working together as a team can solve hard problems
beyond the capability of any individual in the team. Cooperative optimization
is a newly proposed general method for attacking hard optimization problems
inspired by cooperation principles in team playing. It has an established
theoretical foundation and has demonstrated outstanding performances in solving
real-world optimization problems. Under some general settings, a cooperative
optimization algorithm has a unique equilibrium and converges to it at an
exponential rate, regardless of initial conditions and insensitive to
perturbations. It also possesses a number of global optimality conditions for
identifying global optima so that it can terminate its search process
efficiently. This paper offers a general description of cooperative
optimization, addresses a number of design issues, and presents a case study to
demonstrate its power.
|
cs/0701058
|
Precoding in Multiple-Antenna Broadcast Systems with a Probabilistic
Viewpoint
|
cs.IT math.IT
|
In this paper, we investigate the minimum average transmit energy that can be
obtained in multiple-antenna broadcast systems with the channel inversion
technique. The achievable gain can be significantly higher than the
conventional gains obtained by methods like the perturbation technique of
Peel et al. In order to obtain this gain, we introduce a Selective Mapping
(SLM) technique (based on random coding arguments). We propose to implement the
SLM method by using nested lattice codes in a trellis precoding framework.
|
cs/0701059
|
Enhancing Sensor Network Lifetime Using Interactive Communication
|
cs.IT math.IT
|
We are concerned with maximizing the lifetime of a data-gathering wireless
sensor network consisting of a set of nodes directly communicating with a
base-station. We model this scenario as the m-message interactive communication
between multiple correlated informants (sensor nodes) and a recipient
(base-station). With this framework, we show that m-message interactive
communication can indeed enhance network lifetime. Both worst-case and
average-case performances are considered.
|
cs/0701060
|
Duadic Group Algebra Codes
|
cs.IT math.IT quant-ph
|
Duadic group algebra codes are a generalization of quadratic residue codes.
This paper settles an open problem raised by Zhu concerning the existence of
duadic group algebra codes. These codes can be used to construct degenerate
quantum stabilizer codes that have the nice feature that many errors of small
weight do not need error correction; this fact is illustrated by an example.
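The quadratic residue codes that duadic codes generalize are built by splitting the nonzero residues modulo an odd prime $p$ into squares and non-squares. A small sketch of that split (the helper name is ours, for illustration only):

```python
def quadratic_residues(p):
    """Nonzero quadratic residues modulo an odd prime p."""
    return sorted({(x * x) % p for x in range(1, p)})

# For p = 7 the split is {1, 2, 4} (residues) vs {3, 5, 6} (non-residues);
# these two exponent sets define the pair of quadratic residue codes of length 7.
Q = quadratic_residues(7)
N = sorted(set(range(1, 7)) - set(Q))
```

Duadic codes replace this particular splitting by more general "splittings" of the group, which is where the existence question settled in the paper arises.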
|
cs/0701061
|
Conjugate Gradient Projection Approach for Multi-Antenna Gaussian
Broadcast Channels
|
cs.IT math.IT
|
It has been shown recently that dirty-paper coding is the optimal
strategy for maximizing the sum rate of multiple-input multiple-output Gaussian
broadcast channels (MIMO BC). Moreover, by the channel duality, the nonconvex
MIMO BC sum rate problem can be transformed to the convex dual MIMO
multiple-access channel (MIMO MAC) problem with a sum power constraint. In this
paper, we design an efficient algorithm based on conjugate gradient projection
(CGP) to solve the MIMO BC maximum sum rate problem. Our proposed CGP algorithm
solves the dual sum power MAC problem by utilizing the powerful concept of
Hessian conjugacy. We also develop a rigorous algorithm to solve the projection
problem. We show that CGP enjoys provable convergence, nice scalability, and
great efficiency for large MIMO BC systems.
|
cs/0701062
|
Network Coding over a Noisy Relay: a Belief Propagation Approach
|
cs.IT math.IT
|
In recent years, network coding has been investigated as a method to obtain
improvements in wireless networks. A typical assumption of previous work is
that relay nodes performing network coding can decode the messages from sources
perfectly. On a simple relay network, we design a scheme to obtain network
coding gain even when the relay node cannot perfectly decode its received
messages. In our scheme, the operation at the relay node resembles message
passing in belief propagation, sending the logarithm likelihood ratio (LLR) of
the network coded message to the destination. Simulation results demonstrate
the gain obtained over different channel conditions. The goal of this paper is
not to give a theoretical result, but to point to a possible interaction of
network coding with user cooperation in a noisy scenario. The extrinsic
information transfer (EXIT) chart is shown to be a useful engineering tool to
analyze the performance of joint channel coding and network coding in the
network.
|
cs/0701063
|
Hierarchical Decoupling Principle of a MIMO-CDMA Channel in Asymptotic
Limits
|
cs.IT math.IT
|
We analyze an uplink of a fast flat fading MIMO-CDMA channel in the case
where the data symbol vector for each user follows an arbitrary distribution.
The spectral efficiency of the channel with CSI at the receiver is evaluated
analytically with the replica method. The main result is that the hierarchical
decoupling principle holds in the MIMO-CDMA channel, i.e., the MIMO-CDMA
channel is decoupled into a bank of single-user MIMO channels in the many-user
limit, and each single-user MIMO channel is further decoupled into a bank of
scalar Gaussian channels in the many-antenna limit for a fading model with a
limited number of scatterers.
|
cs/0701065
|
Can Punctured Rate-1/2 Turbo Codes Achieve a Lower Error Floor than
their Rate-1/3 Parent Codes?
|
cs.IT math.IT
|
In this paper we concentrate on rate-1/3 systematic parallel concatenated
convolutional codes and their rate-1/2 punctured child codes. Assuming
maximum-likelihood decoding over an additive white Gaussian noise channel, we
demonstrate that a rate-1/2 non-systematic child code can exhibit a lower error
floor than that of its rate-1/3 parent code, if a particular condition is met.
However, assuming iterative decoding, convergence of the non-systematic code
towards low bit-error rates is problematic. To alleviate this problem, we
propose rate-1/2 partially-systematic codes that can still achieve a lower
error floor than that of their rate-1/3 parent codes. Results obtained from
extrinsic information transfer charts and simulations support our conclusion.
|
cs/0701066
|
Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization
|
cs.IT math.IT
|
In this paper, we propose to study and optimize a very general class of LDPC
codes whose variable nodes belong to finite sets with different orders. We
named this class of codes Hybrid LDPC codes. Although efficient optimization
techniques exist for binary LDPC codes and, more recently, for non-binary LDPC
codes, they both exhibit drawbacks for different reasons. Our goal is to
capitalize on the advantages of both families by building codes with binary (or
small finite set order) and non-binary parts in their factor graph
representation. The class of Hybrid LDPC codes is obviously larger than
existing types of codes, which gives more degrees of freedom to find good codes
where existing codes show their limits. We give two examples in which hybrid
LDPC codes demonstrate their advantages.
|
cs/0701067
|
On Four-group ML Decodable Distributed Space Time Codes for Cooperative
Communication
|
cs.IT math.IT
|
A construction of a new family of distributed space time codes (DSTCs) having
full diversity and low Maximum Likelihood (ML) decoding complexity is provided
for the two phase based cooperative diversity protocols of Jing-Hassibi and the
recently proposed Generalized Non-orthogonal Amplify and Forward (GNAF)
protocol of Rajan et al. The salient feature of the proposed DSTCs is that they
satisfy the extra constraints imposed by the protocols and are also four-group
ML decodable, which leads to a significant reduction in ML decoding complexity
compared to all existing DSTC constructions. Moreover, these codes have uniform
distribution of power among the relays as well as in time. Also, simulation
results indicate that these codes perform better in comparison with the only
known DSTC with the same rate and decoding complexity, namely the Coordinate
Interleaved Orthogonal Design (CIOD). Furthermore, they perform very close to
DSTCs from field extensions which have the same rate but higher decoding
complexity.
|
cs/0701068
|
Distributed Space-Time Codes for Cooperative Networks with Partial CSI
|
cs.IT math.IT
|
Design criteria and full-diversity Distributed Space Time Codes (DSTCs) for
the two phase transmission based cooperative diversity protocol of Jing-Hassibi
and the Generalized Nonorthogonal Amplify and Forward (GNAF) protocol are
reported, when the relay nodes are assumed to have knowledge of the phase
component of the source to relay channel gains. It is shown that under this
partial channel state information (CSI), several well-known space time
codes for the colocated MIMO (Multiple Input Multiple Output) channel become
amenable for use as DSTCs. In particular, the well known complex orthogonal
designs, generalized coordinate interleaved orthogonal designs (GCIODs) and
unitary weight single symbol decodable (UW-SSD) codes are shown to satisfy the
required design constraints for DSTCs. Exploiting the relaxed code design
constraints, we propose DSTCs obtained from Clifford Algebras which have low ML
decoding complexity.
|
cs/0701070
|
On formulas for decoding binary cyclic codes
|
cs.IT math.IT
|
We address the problem of the algebraic decoding of any cyclic code up to the
true minimum distance. For this, we use the classical formulation of the
problem, which is to find the error locator polynomial in terms of the
syndromes of the received word. This is usually done with the Berlekamp-Massey
algorithm in the case of BCH codes and related codes, but for the general case,
there is no generic algorithm to decode cyclic codes. Even in the case of the
quadratic residue codes, which are good codes with a very strong algebraic
structure, there is no available general decoding algorithm. For this
particular case of quadratic residue codes, several authors have worked out, by
hand, formulas for the coefficients of the locator polynomial in terms of the
syndromes, using the Newton identities. This work has to be done for each
particular quadratic residue code, and becomes more and more difficult as the
length grows. Furthermore, it is error-prone. We propose to automate these
computations, using elimination theory and Gröbner bases. We prove that, by
computing appropriate Gröbner bases, one automatically recovers formulas for
the coefficients of the locator polynomial, in terms of the syndromes.
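For contrast with the Gröbner-basis approach, the Berlekamp-Massey algorithm mentioned above can be stated compactly. The following is a textbook GF(2) sketch (not the authors' method): it returns the shortest LFSR connection polynomial generating a bit sequence, which is the role the error locator plays for BCH syndromes:

```python
def berlekamp_massey_gf2(s):
    """Shortest LFSR connection polynomial over GF(2) generating bit sequence s.
    Returns coefficients [1, c1, ..., cL]; L is the linear complexity of s."""
    C, B = [1], [1]   # current and previous connection polynomials
    L, m = 0, 1       # current LFSR length, gap since last length change
    for n in range(len(s)):
        # discrepancy d = s[n] + sum_{i=1..L} C[i]*s[n-i]  (mod 2)
        d = s[n]
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]
        if d == 0:
            m += 1
        else:
            T = C[:]
            # C(x) <- C(x) + x^m * B(x)
            C = C + [0] * (len(B) + m - len(C))
            for i, b in enumerate(B):
                C[i + m] ^= b
            if 2 * L <= n:            # length must grow
                L, B, m = n + 1 - L, T, 1
            else:
                m += 1
    return C[:L + 1]
```

For example, the all-ones sequence has connection polynomial $1+x$ (each bit repeats the previous one), and an alternating sequence has $1+x^2$.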
|
cs/0701072
|
Tagging, Folksonomy & Co - Renaissance of Manual Indexing?
|
cs.IR
|
This paper gives an overview of current trends in manual indexing on the Web.
Along with a general rise of user generated content there are more and more
tagging systems that allow users to annotate digital resources with tags
(keywords) and share their annotations with other users. Tagging is frequently
seen in contrast to traditional knowledge organization systems or as something
completely new. This paper shows that tagging is better seen as a popular
form of manual indexing on the Web. The difference between controlled and free
indexing blurs with sufficient feedback mechanisms. A revised typology of
tagging systems is presented that includes different user roles and knowledge
organization systems with hierarchical relationships and vocabulary control. A
detailed bibliography of current research in collaborative tagging is included.
|
cs/0701077
|
Asynchronous Distributed Searchlight Scheduling
|
cs.MA cs.RO
|
This paper develops and compares two simple asynchronous distributed
searchlight scheduling algorithms for multiple robotic agents in nonconvex
polygonal environments. A searchlight is a ray emitted by an agent which cannot
penetrate the boundary of the environment. A point is detected by a searchlight
if and only if the point is on the ray at some instant. Targets are points
which can move continuously with unbounded speed. The objective of the proposed
algorithms is for the agents to coordinate the slewing (rotation about a point)
of their searchlights in a distributed manner, i.e., using only local sensing
and limited communication, such that any target will necessarily be detected in
finite time. The first algorithm we develop, called the DOWSS (Distributed One
Way Sweep Strategy), is a distributed version of a known algorithm described
originally in 1990 by Sugihara et al \cite{KS-IS-MY:90}, but it can be very
slow in clearing the entire environment because only one searchlight may slew
at a time. In an effort to reduce the time to clear the environment, we develop
a second algorithm, called the PTSS (Parallel Tree Sweep Strategy), in which
searchlights sweep in parallel if guards are placed according to an environment
partition belonging to a class we call PTSS partitions. Finally, we discuss how
DOWSS and PTSS could be combined with deployment, or extended to
environments with holes.
|
cs/0701078
|
Low SNR Capacity of Fading Channels -- MIMO and Delay Spread
|
cs.IT math.IT
|
Discrete-time Rayleigh fading multiple-input multiple-output (MIMO) channels
are considered, with no channel state information at the transmitter and
receiver. The fading is assumed to be correlated in time and independent from
antenna to antenna. Peak and average transmit power constraints are imposed,
either on the sum over antennas, or on each individual antenna. In both cases,
an upper bound and an asymptotic lower bound, as the signal-to-noise ratio
approaches zero, on the channel capacity are presented. The limit of normalized
capacity is identified under the sum power constraints, and, for a subclass of
channels, for individual power constraints. These results carry over to a SISO
channel with delay spread (i.e. frequency selective fading).
|
cs/0701079
|
Practical Binary Adaptive Block Coder
|
cs.IT cs.DS math.IT
|
This paper describes the design of a low-complexity algorithm for adaptive
encoding/decoding of binary sequences produced by memoryless sources. The
algorithm implements universal block codes constructed for a set of contexts
identified by the number of non-zero bits among the previous bits of a sequence. We
derive a precise formula for the asymptotic redundancy of such codes, which refines
a previous well-known estimate by Krichevsky and Trofimov, and provide
experimental verification of this result. In our experimental study we also
compare our implementation with existing binary adaptive encoders, such as
JBIG's Q-coder, and MPEG AVC (ITU-T H.264)'s CABAC algorithms.
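The Krichevsky-Trofimov estimate referenced above assigns a sequential probability based on past bit counts, and the ideal code length follows directly from it. A small illustrative sketch (function name ours, not the paper's implementation):

```python
import math

def kt_codelength(bits):
    """Ideal code length (in bits) assigned to a binary sequence by the
    Krichevsky-Trofimov estimator P(next=1) = (n1 + 1/2) / (n0 + n1 + 1)."""
    n0 = n1 = 0
    total = 0.0
    for b in bits:
        p1 = (n1 + 0.5) / (n0 + n1 + 1)
        total += -math.log2(p1 if b else 1.0 - p1)
        if b:
            n1 += 1
        else:
            n0 += 1
    return total
```

The redundancy over the empirical entropy behaves like $(1/2)\log_2 n + O(1)$; e.g. the all-zeros sequence of length 1024 costs about 5.8 bits rather than 0, and a balanced sequence of length 1024 costs about 1029 bits rather than 1024.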
|
cs/0701080
|
Analysis of the Sufficient Path Elimination Window for the
Maximum-Likelihood Sequential-Search Decoding Algorithm for Binary
Convolutional Codes
|
cs.IT math.IT
|
A common problem in sequential-type decoding is that at signal-to-noise
ratios (SNR) below the one corresponding to the cutoff rate, the average
decoding complexity per information bit and the required stack size grow
rapidly with the information length. In order to alleviate the problem in the
maximum-likelihood sequential decoding algorithm (MLSDA), we propose to
directly eliminate the top path whose end node is $\Delta$-trellis-level prior
to the farthest one among all nodes that have been expanded thus far by the
sequential search. Following a random coding argument, we analyze the
early-elimination window $\Delta$ that results in negligible performance
degradation for the MLSDA. Our analytical results indicate that the required
early-elimination window for negligible performance degradation is just twice
the constraint length for rate one-half convolutional codes. For rate
one-third convolutional codes, the required early-elimination window even
reduces to the constraint length. The suggested theoretical level thresholds
almost coincide with the simulation results. As a consequence of the small
early-elimination window required for near maximum-likelihood performance, the
MLSDA with the early-elimination modification avoids considerable computational
burden, as well as memory requirements, by directly eliminating a large number of
top paths, which makes the MLSDA with early elimination very suitable for
applications that dictate a low-complexity software implementation with near
maximum-likelihood performance.
|
cs/0701083
|
A Backtracking-Based Algorithm for Computing Hypertree-Decompositions
|
cs.DS cs.AI
|
Hypertree decompositions of hypergraphs are a generalization of tree
decompositions of graphs. The corresponding hypertree-width is a measure of the
cyclicity, and therefore the tractability, of the encoded computation problem.
Many NP-hard decision and computation problems are known to be tractable on
instances whose structure corresponds to hypergraphs of bounded
hypertree-width. Intuitively, the smaller the hypertree-width, the faster the
computation problem can be solved. In this paper, we present the new
backtracking-based algorithm det-k-decomp for computing hypertree
decompositions of small width. Our benchmark evaluations have shown that
det-k-decomp significantly outperforms opt-k-decomp, the only exact hypertree
decomposition algorithm so far. Even compared to the best heuristic algorithm,
we obtained competitive results as long as the hypergraphs are not too large.
|
cs/0701084
|
Pseudo-codeword Landscape
|
cs.IT cond-mat.stat-mech math.IT
|
We discuss the performance of Low-Density-Parity-Check (LDPC) codes decoded
by means of Linear Programming (LP) at moderate and large
Signal-to-Noise-Ratios (SNR). Utilizing a combination of the previously
introduced pseudo-codeword-search method and a new "dendro" trick, which allows
us to reduce the complexity of the LP decoding, we analyze the dependence of
the Frame-Error-Rate (FER) on the SNR. Under Maximum-A-Posteriori (MAP)
decoding the dendro-code, having only checks with connectivity degree three,
performs identically to its original code with high-connectivity checks. For a
number of popular LDPC codes performing over the Additive-White-Gaussian-Noise
(AWGN) channel we found that either an error-floor sets in at a relatively low
SNR, or otherwise a transient asymptote, characterized by a faster decay of FER
with increasing SNR, precedes the error-floor asymptote. We explain these
regimes in terms of the pseudo-codeword spectra of the codes.
|
cs/0701085
|
Variations on the Fibonacci Universal Code
|
cs.IT cs.CR math.IT
|
This note presents variations on the Fibonacci universal code, that may also
be called the Gopala-Hemachandra code, that can have applications in source
coding as well as in cryptography.
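For readers unfamiliar with the base scheme, the standard Fibonacci code maps each positive integer to its Zeckendorf representation (a sum of non-consecutive Fibonacci numbers) followed by an extra 1, so every codeword ends in the unique pattern "11". A minimal encoder sketch; the paper's Gopala-Hemachandra variations shift the underlying sequence and are not implemented here:

```python
def fibonacci_encode(n):
    """Fibonacci (Zeckendorf) universal codeword for a positive integer n."""
    assert n >= 1
    fibs = [1, 2]                          # F(2), F(3), ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    while fibs[-1] > n:                    # keep only Fibonacci numbers <= n
        fibs.pop()
    bits = [0] * len(fibs)
    r = n
    for i in range(len(fibs) - 1, -1, -1):
        if fibs[i] <= r:                   # greedy Zeckendorf selection
            bits[i] = 1
            r -= fibs[i]
    return ''.join(map(str, bits)) + '1'   # trailing '1' creates the '11' stop pattern
```

For example, 1 encodes to "11", 2 to "011", and 11 = 8 + 3 to "001011"; the self-delimiting "11" suffix is what makes the code universal and useful in source coding.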
|
cs/0701086
|
Loop Calculus and Belief Propagation for q-ary Alphabet: Loop Tower
|
cs.IT cond-mat.stat-mech math.IT
|
Loop Calculus introduced in [Chertkov, Chernyak '06] constitutes a new
theoretical tool that explicitly expresses the symbol Maximum-A-Posteriori
(MAP) solution of a general statistical inference problem via a solution of the
Belief Propagation (BP) equations. This finding brought a new significance to
the BP concept, which in the past was thought of as just a loop-free
approximation. In this paper we continue the discussion of the Loop Calculus. We
introduce an invariant formulation which allows us to generalize the Loop Calculus
approach to a q-ary alphabet.
|
cs/0701087
|
Artificiality in Social Sciences
|
cs.MA
|
This text provides an introduction to the modern approach to artificiality
and simulation in the social sciences. It presents the relationship between
complexity and artificiality, before introducing the field of artificial
societies, which greatly benefited from the rapid increase in computer power,
giving the social sciences formalization and experimentation tools previously
available to the "hard" sciences alone. It shows that, as "a new way of doing
social sciences", artificial societies should undoubtedly contribute to a
renewed approach in the study of sociality and should play a significant part
in the elaboration of original theories of social phenomena.
|
cs/0701089
|
Constructive Dimension and Turing Degrees
|
cs.CC cs.IT math.IT
|
This paper examines the constructive Hausdorff and packing dimensions of
Turing degrees. The main result is that every infinite sequence S with
constructive Hausdorff dimension dim_H(S) and constructive packing dimension
dim_P(S) is Turing equivalent to a sequence R with dim_H(R) >= (dim_H(S) /
dim_P(S)) - epsilon, for arbitrary epsilon > 0. Furthermore, if dim_P(S) > 0,
then dim_P(R) >= 1 - epsilon. The reduction thus serves as a *randomness
extractor* that increases the algorithmic randomness of S, as measured by
constructive dimension.
A number of applications of this result shed new light on the constructive
dimensions of Turing degrees. A lower bound of dim_H(S) / dim_P(S) is shown to
hold for the Turing degree of any sequence S. A new proof is given of a
previously-known zero-one law for the constructive packing dimension of Turing
degrees. It is also shown that, for any regular sequence S (that is, dim_H(S) =
dim_P(S)) such that dim_H(S) > 0, the Turing degree of S has constructive
Hausdorff and packing dimension equal to 1.
Finally, it is shown that no single Turing reduction can be a universal
constructive Hausdorff dimension extractor, and that bounded Turing reductions
cannot extract constructive Hausdorff dimension. We also exhibit sequences on
which weak truth-table and bounded Turing reductions differ in their ability to
extract dimension.
|
cs/0701090
|
Ergodic Capacity of Discrete- and Continuous-Time, Frequency-Selective
Rayleigh Fading Channels with Correlated Scattering
|
cs.IT math.IT
|
We study the ergodic capacity of a frequency-selective Rayleigh fading
channel with correlated scattering, which finds application in the area of UWB.
Under an average power constraint, we consider a single-user, single-antenna
transmission. Coherent reception is assumed with full CSI at the receiver and
no CSI at the transmitter. We distinguish between a continuous- and a
discrete-time channel, modeled either as a random process or as a random vector
with a generic covariance. As a practically relevant example, we examine an
exponentially attenuated Ornstein-Uhlenbeck process in detail. Finally, we give
numerical results, discuss the relation between the continuous- and the
discrete-time channel model and show the significant impact of correlated
scattering.
|
cs/0701091
|
Iterative LDPC decoding using neighborhood reliabilities
|
cs.IT math.IT
|
In this paper we study the impact of the processing order of the nodes of a
bipartite graph on the performance of iterative message-passing decoding.
To this end, we introduce the concept of neighborhood reliabilities of the
graph's nodes. Node reliabilities are calculated at each iteration and are then
used to obtain a processing order within a serial or serial/parallel scheduling.
The basic idea is that by processing the most reliable data first, the decoder
is reinforced before processing the less reliable data. Using neighborhood
reliabilities, the Min-Sum decoder of LDPC codes approaches the performance of
the Sum-Product decoder.
|
cs/0701092
|
The Multiplexing Gain of MIMO X-Channels with Partial Transmit
Side-Information
|
cs.IT math.IT
|
In this paper, we obtain the scaling laws of the sum-rate capacity of a MIMO
X-channel (a channel with two independent senders and two independent receivers,
and messages from each transmitter to each receiver) at high signal-to-noise ratios (SNR).
The X-channel has sparked recent interest in the context of cooperative
networks and it encompasses the interference, multiple access, and broadcast
channels as special cases. Here, we consider the case with partially
cooperative transmitters in which only partial and asymmetric side-information
is available at one of the transmitters. It is proved that when there are M
antennas at all four nodes, the sum-rate scales like 2Mlog(SNR) which is in
sharp contrast to [\lfloor 4M/3 \rfloor,4M/3]log(SNR) for non-cooperative
X-channels \cite{maddah-ali,jafar_degrees}. This further proves that, in terms
of sum-rate scaling at high SNR, partial side-information at one of the
transmitters and full side-information at both transmitters are equivalent in
the MIMO X-channel.
|
cs/0701093
|
Throughput Scaling Laws for Wireless Networks with Fading Channels
|
cs.IT math.IT
|
A network of $n$ wireless communication links is considered. Fading is
assumed to be the dominant factor affecting the strength of the channels
between nodes. The objective is to analyze the achievable throughput of the
network when power allocation is allowed. By proposing a decentralized on-off
power allocation strategy, a lower bound on the achievable throughput is
obtained for a general fading model. In particular, under Rayleigh fading
conditions the achieved sum-rate is of order $\log n$, which is, by a constant
factor, larger than what is obtained with a centralized scheme in the work of
Gowaikar et al. Similar to most previous works on large networks, the
proposed scheme assigns a vanishingly small rate to each link. However, it is
shown that by allowing the sum-rate to decrease by a factor $\alpha<1$, this
scheme is capable of providing a non-zero rate-per-link of order $\Theta(1)$. To
obtain a larger non-zero rate-per-link, the proposed scheme is modified to a
centralized version. It turns out that for the same number of active links the
centralized scheme achieves a much larger rate-per-link. Moreover, at large
values of rate-per-link, it achieves a sum-rate close to $\log n$, i.e., the
maximum achieved by the decentralized scheme.
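A back-of-the-envelope account of the $\log n$ scaling (our illustration, not the paper's derivation): under Rayleigh fading, link power gains are i.i.d. exponential, and the expected maximum of $n$ such gains equals the harmonic number $H_n \approx \ln n$, which is the multiuser-diversity effect an on-off strategy exploits:

```python
import math

def harmonic(n):
    """H_n = sum_{k=1}^{n} 1/k, which equals E[max of n i.i.d. Exp(1) gains]."""
    return sum(1.0 / k for k in range(1, n + 1))

# H_n - ln(n) converges to the Euler-Mascheroni constant (~0.5772), so the
# expected best-link gain, and hence the rate log(1 + SNR * gain) of the
# strongest active link, grows like log n as the network size grows.
```

So scheduling the strongest links already yields a sum-rate on the order of $\log n$, matching the scaling the abstract describes.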
|
cs/0701095
|
Propositional theories are strongly equivalent to logic programs
|
cs.AI cs.LO
|
This paper presents a property of propositional theories under the answer
sets semantics (called Equilibrium Logic for this general syntax): any theory
can always be reexpressed as a strongly equivalent disjunctive logic program,
possibly with negation in the head. We provide two different proofs for this
result: one involving a syntactic transformation, and one that constructs a
program starting from the countermodels of the theory in the intermediate logic
of here-and-there.
|
cs/0701097
|
MacWilliams Identity for the Rank Metric
|
cs.IT math.IT
|
This paper investigates the relationship between the rank weight distribution
of a linear code and that of its dual code. The main result of this paper is
that, similar to the MacWilliams identity for the Hamming metric, the rank
weight distribution of any linear code can be expressed analytically in terms
of that of its dual code. Remarkably, our new identity has a similar form to
the MacWilliams identity for the Hamming metric. Our new identity provides a
significant analytical tool for the analysis of the rank weight distribution
of linear codes. We use a linear-space-based approach in the proof of our new
identity, and adapt this approach to provide an alternative proof of the
MacWilliams identity for the Hamming metric. Finally, we determine the
relationship between moments of the rank distribution of a linear code and
those of its dual code, and provide an alternative derivation of the rank
weight distribution of maximum rank distance codes.
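The classical Hamming-metric identity that the rank-metric result parallels can be checked by brute force on a toy code. Below, a sketch (helper names ours) verifying $W_{C^\perp}(x,y) = \frac{1}{|C|} W_C(x+y,\,x-y)$ for the $[3,1]$ binary repetition code and its dual, the $[3,2]$ even-weight code:

```python
from itertools import product

def weight_enum(code, n):
    """Hamming weight distribution A_w of a set of binary n-tuples."""
    A = [0] * (n + 1)
    for c in code:
        A[sum(c)] += 1
    return A

def W(A, x, y):
    """Evaluate the weight enumerator W(x, y) = sum_w A_w x^(n-w) y^w."""
    n = len(A) - 1
    return sum(a * x ** (n - w) * y ** w for w, a in enumerate(A))

n = 3
C = [(0, 0, 0), (1, 1, 1)]                      # [3,1] repetition code
dual = [v for v in product((0, 1), repeat=n)    # all words orthogonal to C
        if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in C)]
A, B = weight_enum(C, n), weight_enum(dual, n)  # A = [1,0,0,1], B = [1,0,3,0]

# MacWilliams: |C| * W_dual(x, y) == W_C(x + y, x - y) at every point
for x, y in [(1, 1), (2, 1), (3, 2), (5, -1)]:
    assert len(C) * W(B, x, y) == W(A, x + y, x - y)
```

The paper's contribution is the analogous transform with rank weights in place of Hamming weights; the brute-force check above only illustrates the shape of the classical identity.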
|
cs/0701098
|
Packing and Covering Properties of Rank Metric Codes
|
cs.IT math.IT
|
This paper investigates packing and covering properties of codes with the
rank metric. First, we investigate packing properties of rank metric codes.
Then, we study sphere covering properties of rank metric codes, derive bounds
on their parameters, and investigate their asymptotic covering properties.
|
cs/0701099
|
On the Feedback Capacity of Power Constrained Gaussian Noise Channels
with Memory
|
cs.IT math.IT
|
For a stationary additive Gaussian-noise channel with a rational noise power
spectrum of a finite-order $L$, we derive two new results for the feedback
capacity under an average channel input power constraint. First, we show that a
very simple feedback-dependent Gauss-Markov source achieves the feedback
capacity, and that Kalman-Bucy filtering is optimal for processing the
feedback. Based on these results, we develop a new method for optimizing the
channel inputs for achieving the Cover-Pombra block-length-$n$ feedback
capacity by using a dynamic programming approach that decomposes the
computation into $n$ sequentially identical optimization problems where each
stage involves optimizing $O(L^2)$ variables. Second, we derive the explicit
maximal information rate for stationary feedback-dependent sources. In general,
evaluating the maximal information rate for stationary sources requires solving
only a few equations by simple non-linear programming. For first-order
autoregressive and/or moving average (ARMA) noise channels, this optimization
admits a closed form maximal information rate formula. The maximal information
rate for stationary sources is a lower bound on the feedback capacity, and it
equals the feedback capacity if the long-standing conjecture, that stationary
sources achieve the feedback capacity, holds.
|
cs/0701100
|
Delayed Feedback Capacity of Stationary Sources over Linear Gaussian
Noise Channels
|
cs.IT math.IT
|
We consider a linear Gaussian noise channel used with delayed feedback. The
channel noise is assumed to be an ARMA (autoregressive and/or moving average)
process. We reformulate the Gaussian noise channel into an intersymbol
interference channel with white noise, and show that the delayed-feedback of
the original channel is equivalent to the instantaneous-feedback of the derived
channel. By generalizing results previously developed for Gaussian channels
with instantaneous feedback and applying them to the derived intersymbol
interference channel, we show that conditioned on the delayed feedback, a
conditional Gauss-Markov source achieves the feedback capacity and its Markov
memory length is determined by the noise spectral order and the feedback delay.
A Kalman-Bucy filter is shown to be optimal for processing the feedback. The
maximal information rate for stationary sources is derived in terms of channel
input power constraint and the steady state solution of the Riccati equation of
the Kalman-Bucy filter used in the feedback loop.
|
cs/0701102
|
Coding Solutions for the Secure Biometric Storage Problem
|
cs.IT cs.CR math.IT
|
The paper studies the problem of securely storing biometric passwords, such
as fingerprints and irises. With the help of coding theory, Juels and
Wattenberg derived in 1999 a scheme in which similar input strings are
accepted as the same biometric, while at the same time nothing can be learned
from the stored data. They called their scheme a "fuzzy commitment scheme". In
this paper we will
revisit the solution of Juels and Wattenberg and we will provide answers to two
important questions: What type of error-correcting codes should be used and
what happens if biometric templates are not uniformly distributed, i.e. the
biometric data come with redundancy. Answering the first question leads us to
the search for low-rate, large-minimum-distance error-correcting codes that
come with efficient decoding algorithms up to the designed distance. To answer
the second question we relate the required rate to a quantity connected to the
"entropy" of the string, estimating a sort of "capacity" in the flavor of the
converse of Shannon's noisy coding theorem. Finally, we deal with side
problems arising in a practical implementation and propose a possible solution
to the main one, which, as far as we know, seems so far to have prevented
real-life applications of the fuzzy scheme.
|
cs/0701103
|
Analysis and design of raptor codes for joint decoding using Information
Content evolution
|
cs.IT math.IT
|
In this paper, we present an analysis of the convergence of raptor codes
under joint decoding over the binary-input additive white Gaussian noise
channel (BIAWGNC), and derive an optimization method. We use Information Content
evolution under Gaussian approximation, and focus on a new decoding scheme that
proves to be more efficient: the joint decoding of the two code components of
the raptor code. In our general model, the classical tandem decoding scheme
appears to be a subcase, and thus, the design of LT codes is also possible.
|
cs/0701104
|
Why is a new Journal of Informetrics needed?
|
cs.DL cs.DB
|
In our study we analysed 3,889 records which were indexed in the Library and
Information Science Abstracts (LISA) database in the research field of
informetrics. We can show the core journals of the field via a Bradford (power
law) distribution and corroborate on the basis of the restricted LISA data set
that it was the appropriate time to found a new specialized journal dedicated
to informetrics. According to Bradford's Law of scattering (a purely
quantitative calculation), Egghe's Journal of Informetrics (JOI), whose first
issue is to appear in 2007, most probably comes at the right time.
|
cs/0701105
|
A Delta Debugger for ILP Query Execution
|
cs.PL cs.LG
|
Because query execution is the most crucial part of Inductive Logic
Programming (ILP) algorithms, a lot of effort is invested in developing faster
execution mechanisms. These execution mechanisms typically have a low-level
implementation, making them hard to debug. Moreover, other factors such as the
complexity of the problems handled by ILP algorithms and size of the code base
of ILP data mining systems make debugging at this level a very difficult job.
In this work, we present the trace-based debugging approach currently used in
the development of new execution mechanisms in hipP, the engine underlying the
ACE Data Mining system. This debugger uses the delta debugging algorithm to
automatically reduce the total time needed to expose bugs in ILP execution,
thus making the manual debugging step much lighter.
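The delta debugging algorithm the debugger relies on (Zeller's classic ddmin)
can be sketched as follows; this is a generic illustration of input
minimization, not the hipP implementation, and `test` is a hypothetical
predicate returning True whenever a candidate input still exposes the bug:

```python
def ddmin(test, inp):
    """Reduce a failure-inducing input list to a 1-minimal sublist.

    `test(candidate)` must return True when `candidate` still exposes
    the bug; `inp` is assumed to be a failing input, test(inp) == True.
    """
    n = 2  # granularity: number of chunks the input is split into
    while len(inp) >= 2:
        chunk = max(1, len(inp) // n)
        subsets = [inp[i:i + chunk] for i in range(0, len(inp), chunk)]
        reduced = False
        for i, subset in enumerate(subsets):
            complement = [x for j, s in enumerate(subsets) if j != i for x in s]
            if test(subset):           # one chunk alone still fails: recurse on it
                inp, n, reduced = subset, 2, True
                break
            if test(complement):       # dropping one chunk keeps the failure
                inp, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(inp):          # finest granularity reached: 1-minimal
                break
            n = min(len(inp), n * 2)   # otherwise refine and retry
    return inp
```

For instance, if a bug is triggered exactly when items 3 and 7 are both
present, `ddmin(lambda xs: 3 in xs and 7 in xs, list(range(10)))` shrinks the
input to `[3, 7]`, which is what makes the manual step lighter: the developer
only inspects the minimized trace.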
|
cs/0701112
|
(l,s)-Extension of Linear Codes
|
cs.IT math.CO math.IT
|
We construct new linear codes with high minimum distance d. In at least 12
cases these codes improve the minimum distance of the previously known best
linear codes for fixed parameters n,k. Among these new codes there is an
optimal ternary [88,8,54] code.
We develop an algorithm, which starts with already good codes C, i.e. codes
with high minimum distance d for given length n and dimension k over the field
GF(q). The algorithm is based on the newly defined (l,s)-extension. This is a
generalization of the well-known method of adding a parity bit in the case of a
binary linear code of odd minimum weight. (l,s)-extension tries to extend the
generator matrix of C by adding l columns with the property that at least s of
the l letters added to each of the codewords of minimum weight in C are
different from 0. If one finds such columns, the minimum distance of the
extended code is d+s, provided that the second smallest weight in C was at
least d+s. The question of whether such columns exist can be settled using a
Diophantine system of equations.
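The column-search condition can be illustrated with a brute-force sketch over
GF(2); the paper settles existence via a Diophantine system, so the exhaustive
search and the [7,4] Hamming example below are only illustrative:

```python
from itertools import product

def codeword(m, G):
    """Encode message m with generator matrix G over GF(2)."""
    return tuple(sum(a * b for a, b in zip(m, col)) % 2 for col in zip(*G))

def extend_ls(G, l=1, s=1):
    """Look for l extra generator columns such that every minimum-weight
    codeword of the code generated by G picks up at least s nonzero added
    letters; returns (d, extended generator) or (d, None) if none exist."""
    k = len(G)
    msgs = [m for m in product((0, 1), repeat=k) if any(m)]
    d = min(sum(codeword(m, G)) for m in msgs)
    min_wt = [m for m in msgs if sum(codeword(m, G)) == d]
    for cols in product(product((0, 1), repeat=k), repeat=l):
        # added letters of each minimum-weight codeword under this extension
        added = [sum(sum(a * b for a, b in zip(m, c)) % 2 for c in cols)
                 for m in min_wt]
        if all(x >= s for x in added):
            ext = [tuple(row) + tuple(c[i] for c in cols)
                   for i, row in enumerate(G)]
            return d, ext
    return d, None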
|
cs/0701114
|
The problem determination of Functional Dependencies between attributes
Relation Scheme in the Relational Data Model. El problema de determinar
Dependencias Funcionales entre atributos en los esquemas en el Modelo
Relacional
|
cs.DB cs.DS
|
An alternative definition of the concept is given of functional dependence
among the attributes of the relational schema in the Relational Model, this
definition is obtained in terms of the set theory. For that which a theorem is
demonstrated that establishes equivalence and on the basis theorem an algorithm
is built for the search of the functional dependences among the attributes. The
algorithm is illustrated by a concrete example
|
cs/0701115
|
Browser-based distributed evolutionary computation: performance and
scaling behavior
|
cs.DC cs.NE
|
The challenge of ad-hoc computing is to find the way of taking advantage of
spare cycles in an efficient way that takes into account all capabilities of
the devices and interconnections available to them. In this paper we explore
distributed evolutionary computation based on the Ruby on Rails framework,
which overlays a Model-View-Controller on evolutionary computation. It allows
anybody with a web browser (that is, mostly everybody connected to the
Internet) to participate in an evolutionary computation experiment. Using a
straightforward farming model, we consider different factors, such as the size
of the population used. We are mostly interested in how they impact on
performance, but also the scaling behavior when a non-trivial number of
computers is applied to the problem. Experiments show the impact of different
packet sizes on performance, as well as a quite limited scaling behavior, due
to the characteristics of the server. Several solutions for that problem are
proposed.
|
cs/0701116
|
The Impact of CSI and Power Allocation on Relay Channel Capacity and
Cooperation Strategies
|
cs.IT math.IT
|
Capacity gains from transmitter and receiver cooperation are compared in a
relay network where the cooperating nodes are close together. Under
quasi-static phase fading, when all nodes have equal average transmit power
along with full channel state information (CSI), it is shown that transmitter
cooperation outperforms receiver cooperation, whereas the opposite is true when
power is optimally allocated among the cooperating nodes but only CSI at the
receiver (CSIR) is available. When the nodes have equal power with CSIR only,
cooperative schemes are shown to offer no capacity improvement over
non-cooperation under the same network power constraint. When the system is
under optimal power allocation with full CSI, the decode-and-forward
transmitter cooperation rate is close to its cut-set capacity upper bound, and
outperforms compress-and-forward receiver cooperation. Under fast Rayleigh
fading in the high SNR regime, similar conclusions follow. Cooperative systems
provide resilience to fading in channel magnitudes; however, capacity becomes
more sensitive to power allocation, and the cooperating nodes need to be closer
together for the decode-and-forward scheme to be capacity-achieving. Moreover,
to realize capacity improvement, full CSI is necessary in transmitter
cooperation, while in receiver cooperation optimal power allocation is
essential.
|
cs/0701117
|
Maximum Entropy in the framework of Algebraic Statistics: A First Step
|
cs.IT cs.SC math.IT
|
Algebraic statistics is a recently evolving field, where one would treat
statistical models as algebraic objects and thereby use tools from
computational commutative algebra and algebraic geometry in the analysis and
computation of statistical models. In this approach, calculation of parameters
of statistical models amounts to solving set of polynomial equations in several
variables, for which one can use celebrated Grobner bases theory. Owing to the
important role of information theory in statistics, this paper as a first step,
explores the possibility of describing maximum and minimum entropy (ME) models
in the framework of algebraic statistics. We show that ME-models are toric
models (a class of algebraic statistical models) when the constraint functions
(that provide the information about the underlying random variable) are integer
valued functions, and the set of statistical models that results from
ME-methods are indeed an affine variety.
|
cs/0701118
|
Optimal Order of Decoding for Max-Min Fairness in $K$-User Memoryless
Interference Channels
|
cs.IT math.IT
|
A $K$-user memoryless interference channel is considered where each receiver
sequentially decodes the data of a subset of transmitters before it decodes the
data of the designated transmitter. Therefore, the data rate of each
transmitter depends on (i) the subset of receivers which decode the data of
that transmitter, (ii) the decoding order, employed at each of these receivers.
In this paper, a greedy algorithm is developed to find the users which are
decoded at each receiver and the corresponding decoding order such that the
minimum rate of the users is maximized. It is proven that the proposed
algorithm is optimal.
|
cs/0701119
|
The framework for simulation of dynamics of mechanical aggregates
|
cs.CE
|
A framework for simulation of dynamics of mechanical aggregates has been
developed. This framework enables us to build model of aggregate from models of
its parts. Framework is a part of universal framework for science and
engineering.
|
cs/0701120
|
Algorithmic Complexity Bounds on Future Prediction Errors
|
cs.LG cs.AI cs.IT math.IT
|
We bound the future loss when predicting any (computably) stochastic sequence
online. Solomonoff finitely bounded the total deviation of his universal
predictor $M$ from the true distribution $mu$ by the algorithmic complexity of
$mu$. Here we assume we are at a time $t>1$ and already observed $x=x_1...x_t$.
We bound the future prediction performance on $x_{t+1}x_{t+2}...$ by a new
variant of algorithmic complexity of $mu$ given $x$, plus the complexity of the
randomness deficiency of $x$. The new complexity is monotone in its condition
in the sense that this complexity can only decrease if the condition is
prolonged. We also briefly discuss potential generalizations to Bayesian model
classes and to classification problems.
|
cs/0701123
|
Feasible Depth
|
cs.CC cs.IT math.IT
|
This paper introduces two complexity-theoretic formulations of Bennett's
logical depth: finite-state depth and polynomial-time depth. It is shown that
for both formulations, trivial and random infinite sequences are shallow, and a
slow growth law holds, implying that deep sequences cannot be created easily
from shallow sequences. Furthermore, the E analogue of the halting language is
shown to be polynomial-time deep, by proving a more general result: every
language to which a nonnegligible subset of E can be reduced in uniform
exponential time is polynomial-time deep.
|
cs/0701124
|
Group Secret Key Generation Algorithms
|
cs.IT cs.CR math.IT
|
We consider a pair-wise independent network where every pair of terminals in
the network observes a common pair-wise source that is independent of all the
sources accessible to the other pairs. We propose a method for secret key
agreement in such a network that is based on well-established point-to-point
techniques and repeated application of the one-time pad. Three specific
problems are investigated. 1) Each terminal's observations are correlated only
with the observations of a central terminal. All these terminals wish to
generate a common secret key. 2) In a pair-wise independent network, two
designated terminals wish to generate a secret key with the help of other
terminals. 3) All the terminals in a pair-wise independent network wish to
generate a common secret key. A separate protocol for each of these problems is
proposed. Furthermore, we show that the protocols for the first two problems
are optimal and the protocol for the third problem is efficient, in terms of
the resulting secret key rates.
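The repeated application of the one-time pad in problem 1 (a central terminal
sharing an independent pairwise key with every other terminal) can be sketched
as follows; the function names and the use of uniformly random byte strings as
stand-ins for the distilled pairwise keys are illustrative assumptions, not
the paper's protocol details:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings (the one-time pad)."""
    return bytes(x ^ y for x, y in zip(a, b))

def distribute_group_key(pairwise_keys):
    """The central terminal draws a fresh group key and broadcasts it padded
    with each terminal's pairwise key; each broadcast looks uniformly random
    to anyone lacking the matching pairwise key."""
    n = len(pairwise_keys[0])
    group_key = secrets.token_bytes(n)
    broadcasts = [xor_bytes(group_key, k) for k in pairwise_keys]
    return group_key, broadcasts

def recover_group_key(broadcast, pairwise_key):
    """Terminal i removes its own pad to obtain the group key."""
    return xor_bytes(broadcast, pairwise_key)
```

Each pairwise key is consumed once per group-key bit, so in this sketch the
achievable group key length is limited by the shortest pairwise key.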
|
cs/0701125
|
Universal Algorithmic Intelligence: A mathematical top->down approach
|
cs.AI cs.LG
|
Sequential decision theory formally solves the problem of rational agents in
uncertain worlds if the true environmental prior probability distribution is
known. Solomonoff's theory of universal induction formally solves the problem
of sequence prediction for unknown prior distribution. We combine both ideas
and get a parameter-free theory of universal Artificial Intelligence. We give
strong arguments that the resulting AIXI model is the most intelligent unbiased
agent possible. We outline how the AIXI model can formally solve a number of
problem classes, including sequence prediction, strategic games, function
minimization, reinforcement and supervised learning. The major drawback of the
AIXI model is that it is uncomputable. To overcome this problem, we construct a
modified algorithm AIXItl that is still effectively more intelligent than any
other time t and length l bounded agent. The computation time of AIXItl is of
the order t x 2^l. The discussion includes formal definitions of intelligence
order relations, the horizon problem and relations of the AIXI theory to other
AI approaches.
|
cs/0701126
|
Optimal Throughput-Diversity-Delay Tradeoff in MIMO ARQ Block-Fading
Channels
|
cs.IT math.IT
|
In this paper, we consider an automatic-repeat-request (ARQ) retransmission
protocol signaling over a block-fading multiple-input, multiple-output (MIMO)
channel. Unlike previous work, we allow for multiple fading blocks within each
transmission (ARQ round), and we constrain the transmitter to fixed rate codes
constructed over complex signal constellations. In particular, we examine the
general case of average input-power-constrained constellations as well as the
practically important case of finite discrete constellations. This scenario is
a suitable model for practical wireless communications systems employing
orthogonal frequency division multiplexing techniques over a MIMO ARQ channel.
Two cases of fading dynamics are considered, namely short-term static fading
where channel fading gains change randomly for each ARQ round, and long-term
static fading where channel fading gains remain constant over all ARQ rounds
pertaining to a given message. As our main result, we prove that for the
block-fading MIMO ARQ channel with discrete input signal constellation
satisfying a short-term power constraint, the optimal signal-to-noise ratio
(SNR) exponent is given by a modified Singleton bound, relating all the system
parameters. To demonstrate the practical significance of the theoretical
analysis, we present numerical results showing that practical
Singleton-bound-achieving maximum distance separable codes achieve the optimal
SNR exponent.
|
cs/0701127
|
A novel set of rotationally and translationally invariant features for
images based on the non-commutative bispectrum
|
cs.CV cs.AI
|
We propose a new set of rotationally and translationally invariant features
for image or pattern recognition and classification. The new features are cubic
polynomials in the pixel intensities and provide a richer representation of the
original image than most existing systems of invariants. Our construction is
based on the generalization of the concept of bispectrum to the
three-dimensional rotation group SO(3), and a projection of the image onto the
sphere.
|
cs/0701129
|
Space-time codes with controllable ML decoding complexity for any number
of transmit antennas
|
cs.IT math.IT
|
We construct a class of linear space-time block codes for any number of
transmit antennas that have controllable ML decoding complexity with a maximum
rate of 1 symbol per channel use. The decoding complexity for $M$ transmit
antennas can be varied from ML decoding of $2^{\lceil \log_2M \rceil -1}$
symbols together to single symbol ML decoding. For ML decoding of $2^{\lceil
\log_2M \rceil - n}$ ($n=1,2,...$) symbols together, a diversity of
$\min(M,2^{\lceil \log_2M \rceil-n+1})$ can be achieved. Numerical results show
that the performance of the constructed code when $2^{\lceil \log_2M \rceil-1}$
symbols are decoded together is quite close to the performance of ideal rate-1
orthogonal codes (that are non-existent for more than 2 transmit antennas).
|
cs/0701131
|
Effective Beam Width of Directional Antennas in Wireless Ad Hoc Networks
|
cs.IT math.IT
|
It is known at a qualitative level that directional antennas can be used to
boost the capacity of wireless ad hoc networks. Lacking is a measure to
quantify this advantage and to compare directional antennas of different
footprint patterns. This paper introduces the concept of the effective beam
width (and the effective null width as its dual counterpart) as a measure which
quantitatively captures the capacity-boosting capability of directional
antennas. Beam width is commonly defined to be the directional angle spread
within which the main-lobe beam power is above a certain threshold. In
contrast, our effective beam width definition lumps the effects of the (i)
antenna pattern, (ii) active-node distribution, and (iii) channel
characteristics, on network capacity into a single quantitative measure. We
investigate the mathematical properties of the effective beam width and show
how the convenience afforded by these properties can be used to analyze the
effectiveness of complex directional antenna patterns in boosting network
capacity, with fading and multi-user interference taken into account. In
particular, we derive the extent to which network capacity can be scaled with
the use of phased array antennas. We show that a phased array antenna with N
elements can boost transport capacity of an Aloha-like network by a factor of
order N^1.620.
|
cs/0701135
|
Complex networks and human language
|
cs.CL
|
This paper introduces how human languages can be studied in light of recent
development of network theories. There are two directions of exploration. One
is to study networks existing in the language system. Various lexical networks
can be built based on different relationships between words, being semantic or
syntactic. Recent studies have shown that these lexical networks exhibit
small-world and scale-free features. The other direction of exploration is to
study networks of language users (i.e. social networks of people in the
linguistic community), and their role in language evolution. Social networks
also show small-world and scale-free features, which cannot be captured by
random or regular network models. In the past, computational models of language
change and language emergence often assume a population to have a random or
regular structure, and there has been little discussion how network structures
may affect the dynamics. In the second part of the paper, a series of
simulation models of diffusion of linguistic innovation are used to illustrate
the importance of choosing realistic conditions of population structure for
modeling language change. Four types of social networks are compared, which
exhibit two categories of diffusion dynamics. While the questions about which
type of networks are more appropriate for modeling still remains, we give some
preliminary suggestions for choosing the type of social networks for modeling.
|
cs/0701136
|
Citation Advantage For OA Self-Archiving Is Independent of Journal
Impact Factor, Article Age, and Number of Co-Authors
|
cs.IR cs.DL
|
Eysenbach has suggested that the OA (Green) self-archiving advantage might
just be an artifact of potential uncontrolled confounding factors such as
article age (older articles may be both more cited and more likely to be
self-archived), number of authors (articles with more authors might be more
cited and more self-archived), subject matter (the subjects that are cited
more, self-archive more), country (same thing), number of authors, citation
counts of authors, etc. Chawki Hajjem (doctoral candidate, UQaM) had already
shown that the OA advantage was present in all cases when articles were
analysed separately by age, subject matter or country. He has now done a
multiple regression analysis jointly testing (1) article age, (2) journal
impact factor, (3) number of authors, and (4) OA self-archiving as separate
factors for 442,750 articles in 576 (biomedical) journals across 11 years, and
has shown that each of the four factors contributes an independent,
statistically significant increment to the citation counts. The
OA-self-archiving advantage remains a robust, independent factor. Having
successfully responded to his challenge, we now challenge Eysenbach to
demonstrate -- by testing a sufficiently broad and representative sample of
journals at all levels of the journal quality, visibility and prestige
hierarchy -- that his finding of a citation advantage for Gold OA (articles
published OA on the high-profile website of the only journal he tested (PNAS)
over Green OA articles in the same journal (self-archived on the author's
website) was not just an artifact of having tested only one very high-profile
journal.
|
cs/0701137
|
The Open Access Citation Advantage: Quality Advantage Or Quality Bias?
|
cs.IR cs.DL
|
Many studies have now reported the positive correlation between Open Access
(OA) self-archiving and citation counts ("OA Advantage," OAA). But does this
OAA occur because (QB) authors are more likely to self-selectively self-archive
articles that are more likely to be cited (self-selection "Quality Bias": QB)?
or because (QA) articles that are self-archived are more likely to be cited
("Quality Advantage": QA)? The probable answer is both. Three studies [by (i)
Kurtz and co-workers in astrophysics, (ii) Moed in condensed matter physics,
and (iii) Davis & Fromerth in mathematics] had reported the OAA to be due to QB
[plus Early Advantage, EA, from self-archiving the preprint before publication,
in (i) and (ii)] rather than QA. These three fields, however, (1) have less of
a postprint access problem than most other fields and (i) and (ii) also happen
to be among the minority of fields that (2) make heavy use of prepublication
preprints. Chawki Hajjem has now analyzed preliminary evidence based on over
100,000 articles from multiple fields, comparing self-selected self-archiving
with mandated self-archiving to estimate the contributions of QB and QA to the
OAA. Both factors contribute, and the contribution of QA is greater.
|
cs/0701139
|
Time and the Prisoner's Dilemma
|
cs.GT cs.AI
|
This paper examines the integration of computational complexity into game
theoretic models. The example focused on is the Prisoner's Dilemma, repeated
for a finite length of time. We show that a minimal bound on the players'
computational ability is sufficient to enable cooperative behavior.
In addition, a variant of the repeated Prisoner's Dilemma game is suggested,
in which players have the choice of opting out. This modification enriches the
game and suggests dominance of cooperative strategies.
Competitive analysis is suggested as a tool for investigating sub-optimal
(but computationally tractable) strategies and game theoretic models in
general. Using competitive analysis, it is shown that for bounded players, a
sub-optimal strategy might be the optimal choice, given resource limitations.
|
cs/0701143
|
Dirac Notation, Fock Space and Riemann Metric Tensor in Information
Retrieval Models
|
cs.IR math-ph math.MP
|
Using Dirac notation as a powerful tool, we investigate the three classical
Information Retrieval (IR) models and some of their extensions. We show that
almost all such models can be described by vectors in Occupation Number
Representations (ONR) of Fock spaces with various specifications on, e.g.,
occupation number, inner product or term-term interactions. As important case
studies, a Concept Fock Space (CFS) is introduced for the Boolean model, and
the basic formulas for the Singular Value Decomposition (SVD) of the Latent
Semantic Indexing (LSI) model are expressed in terms of Dirac notation. Based
on the SVD, a Riemannian metric tensor is introduced, which not only can be
used to calculate the relevance of documents to a query, but may also be used
to measure the closeness of documents in data clustering.
|
cs/0701146
|
State constraints and list decoding for the AVC
|
cs.IT math.IT
|
List decoding for arbitrarily varying channels (AVCs) under state constraints
is investigated. It is shown that rates within $\epsilon$ of the randomized
coding capacity of AVCs with input-dependent state can be achieved under
maximal error with list decoding using lists of size $O(1/\epsilon)$. Under
average error an achievable rate region and converse bound are given for lists
of size $L$. These bounds are based on two different notions of
symmetrizability and do not coincide in general. An example is given that shows
that for list size $L$ the capacity may be positive but strictly smaller than
the randomized coding capacity. This behavior differs from the situation
without state constraints.
|
cs/0701149
|
Power-Bandwidth Tradeoff in Dense Multi-Antenna Relay Networks
|
cs.IT math.IT
|
We consider a dense fading multi-user network with multiple active
multi-antenna source-destination pair terminals communicating simultaneously
through a large common set of $K$ multi-antenna relay terminals in the full
spatial multiplexing mode. We use Shannon-theoretic tools to analyze the
tradeoff between energy efficiency and spectral efficiency (known as the power-
bandwidth tradeoff) in meaningful asymptotic regimes of signal-to-noise ratio
(SNR) and network size. We design linear distributed multi-antenna relay
beamforming (LDMRB) schemes that exploit the spatial signature of multi-user
interference and characterize their power-bandwidth tradeoff under a system
wide power constraint on source and relay transmissions. The impact of multiple
users, multiple relays and multiple antennas on the key performance measures of
the high and low SNR regimes is investigated in order to shed new light on the
possible reduction in power and bandwidth requirements through the usage of
such practical relay cooperation techniques. Our results indicate that
point-to-point coded multi-user networks supported by distributed relay
beamforming techniques yield enhanced energy efficiency and spectral
efficiency, and with appropriate signaling and sufficient antenna degrees of
freedom, can achieve asymptotically optimal power-bandwidth tradeoff with the
best possible (i.e., as in the cutset bound) energy scaling of $K^{-1}$ and the
best possible spectral efficiency slope at any SNR for large number of relay
terminals.
|
cs/0701150
|
Contains and Inside relationships within combinatorial Pyramids
|
cs.CV
|
Irregular pyramids are made of a stack of successively reduced graphs
embedded in the plane. Such pyramids are used within the segmentation framework
to encode a hierarchy of partitions. The different graph models used within the
irregular pyramid framework encode different types of relationships between
regions. This paper compares different graph models used within the irregular
pyramid framework according to a set of relationships between regions. We also
define a new algorithm, based on a pyramid of combinatorial maps, which allows
us to determine whether one region contains another using only local
computations.
|
cs/0701152
|
Characterization of Rate Region in Interference Channels with
Constrained Power
|
cs.IT math.IT
|
In this paper, an $n$-user Gaussian interference channel in which the powers
of the transmitters are subject to upper bounds is studied. We obtain a
closed-form expression for the rate region of such a channel based on the
Perron-Frobenius theorem. While the boundary of the rate region for the case of
unconstrained power is a well-established result, this is the first result for
the case of constrained power. We extend this result to the time-varying
channels and obtain a closed-form solution for the rate region of such
channels.
|
cs/0701155
|
Data Cube: A Relational Aggregation Operator Generalizing Group-By,
Cross-Tab, and Sub-Totals
|
cs.DB
|
Data analysis applications typically aggregate data across many dimensions
looking for anomalies or unusual patterns. The SQL aggregate functions and the
GROUP BY operator produce zero-dimensional or one-dimensional aggregates.
Applications need the N-dimensional generalization of these operators. This
paper defines that operator, called the data cube or simply cube. The cube
operator generalizes the histogram, cross-tabulation, roll-up, drill-down, and
sub-total constructs found in most report writers. The novelty is that cubes
are relations. Consequently, the cube operator can be embedded in more complex
non-procedural data analysis programs. The cube operator treats each of the N
aggregation attributes as a dimension of N-space. The aggregate of a particular
set of attribute values is a point in this space. The set of points forms an
N-dimensional cube. Super-aggregates are computed by aggregating the N-cube to
lower dimensional spaces. This paper (1) explains the cube and roll-up
operators, (2) shows how they fit in SQL, (3) explains how users can define new
aggregate functions for cubes, and (4) discusses efficient techniques to
compute the cube. Many of these features are being added to the SQL Standard.
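The N-dimensional generalization can be sketched with a toy relational CUBE in
Python; the `ALL` placeholder for rolled-up dimensions follows the paper's
formulation, while the sales schema is a hypothetical example:

```python
from itertools import combinations

def cube(rows, dims, measure, agg=sum):
    """Toy CUBE operator: aggregate `measure` over every subset of `dims`,
    marking rolled-up dimensions with 'ALL'.  The result is itself a
    relation: one entry (key tuple -> aggregate) per point of the N-cube."""
    out = {}
    for r in range(len(dims) + 1):
        for keep in combinations(dims, r):
            groups = {}
            for row in rows:
                key = tuple(row[d] if d in keep else 'ALL' for d in dims)
                groups.setdefault(key, []).append(row[measure])
            for key, vals in groups.items():
                out[key] = agg(vals)
    return out

sales = [
    {'model': 'chevy', 'year': 1994, 'units': 5},
    {'model': 'chevy', 'year': 1995, 'units': 7},
    {'model': 'ford',  'year': 1994, 'units': 2},
]
result = cube(sales, ('model', 'year'), 'units')
```

Here `result[('ALL', 'ALL')]` is the grand total (14), while
`result[('chevy', 'ALL')]` and `result[('ALL', 1994)]` are the one-dimensional
roll-ups, illustrating how super-aggregates arise from aggregating the N-cube
to lower-dimensional spaces.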
|
cs/0701156
|
Data Management: Past, Present, and Future
|
cs.DB
|
Soon most information will be available at your fingertips, anytime,
anywhere. Rapid advances in storage, communications, and processing allow us to
move all information into Cyberspace. Software to define, search, and visualize
online information is also a key to creating and accessing online information.
This article traces the evolution of data management systems and outlines
current trends. Data management systems began by automating traditional tasks:
recording transactions in business, science, and commerce. This data consisted
primarily of numbers and character strings. Today these systems provide the
infrastructure for much of our society, allowing fast, reliable, secure, and
automatic access to data distributed throughout the world. Increasingly these
systems automatically design and manage access to the data. The next steps are
to automate access to richer forms of data: images, sound, video, maps, and
other media. A second major challenge is automatically summarizing and
abstracting data in anticipation of user requests. These multi-media databases
and tools to access them will be a cornerstone of our move to Cyberspace.
|
cs/0701157
|
A Critique of ANSI SQL Isolation Levels
|
cs.DB
|
ANSI SQL-92 defines Isolation Levels in terms of phenomena: Dirty Reads,
Non-Repeatable Reads, and Phantoms. This paper shows that these phenomena and
the ANSI SQL definitions fail to characterize several popular isolation levels,
including the standard locking implementations of the levels. Investigating the
ambiguities of the phenomena leads to clearer definitions; in addition new
phenomena that better characterize isolation types are introduced. An important
multiversion isolation type, Snapshot Isolation, is defined.
|
cs/0701158
|
Queues Are Databases
|
cs.DB
|
Message-oriented-middleware (MOM) has become a small industry. MOM offers
queued transaction processing as an advance over pure client-server transaction
processing. This note makes four points: Queued transaction processing is less
general than direct transaction processing. Queued systems are built on top of
direct systems. You cannot build a direct system atop a queued system. It is
difficult to build direct, conversational, or distributed transactions atop a
queued system. Queues are interesting databases with interesting concurrency
control. It is best to build these mechanisms into a standard database system
so other applications can use these interesting features. Queue systems need
DBMS functionality. Queues need security, configuration, performance
monitoring, recovery, and reorganization utilities. Database systems already
have these features. A full-function MOM system duplicates these database
features. Queue managers are simple TP-monitors managing server pools driven by
queues. Database systems are encompassing many server pool features as they
evolve to TP-lite systems.
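The note's claim that queues are databases with interesting concurrency control can be sketched concretely: a durable queue is just a table, and dequeue is a SELECT plus DELETE inside one transaction, so the DBMS's own recovery and concurrency control do the work. This is a minimal illustration using SQLite, not code from the note; the table and function names are assumptions.

```python
# Minimal sketch: a queue as an ordinary database table (names illustrative).
import sqlite3

def make_queue(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS queue ("
                 "id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT)")

def enqueue(conn, body):
    with conn:  # one transaction; rollback on error leaves the queue intact
        conn.execute("INSERT INTO queue (body) VALUES (?)", (body,))

def dequeue(conn):
    # Reading the oldest message and deleting it in a single transaction
    # gives exactly-once consumption under the DBMS's concurrency control.
    with conn:
        row = conn.execute(
            "SELECT id, body FROM queue ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        conn.execute("DELETE FROM queue WHERE id = ?", (row[0],))
        return row[1]
```

A standalone MOM product would have to re-implement the durability, security, and monitoring that the database underneath this sketch already provides.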
|
cs/0701159
|
Supporting Finite Element Analysis with a Relational Database Backend,
Part I: There is Life beyond Files
|
cs.DB cs.CE
|
In this paper, we show how to use a Relational Database Management System in
support of Finite Element Analysis. We believe it is a new way of thinking
about data management in well-understood applications to prepare them for two
major challenges - size and integration (globalization). Neither extreme size
nor integration (with other applications over the Web) was a design concern 30
years ago when the paradigm for FEA implementation was first formed. On the
other hand, database technology has come a long way since its inception and it
is past time to highlight its usefulness to the field of scientific computing
and computer-based engineering. This series aims to widen the list of
applications for database designers, and to help FEA users and application
developers reap some of the benefits of database development.
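The paper's proposal of moving FEA data out of flat files and into relational tables can be sketched briefly: mesh topology becomes a node table, an element table, and a connectivity table, and mesh queries become joins. The schema below is an illustrative assumption, not the one from the paper.

```python
# Illustrative sketch: a finite element mesh stored relationally (schema assumed).
import sqlite3

SCHEMA = """
CREATE TABLE node (node_id INTEGER PRIMARY KEY, x REAL, y REAL, z REAL);
CREATE TABLE element (elem_id INTEGER PRIMARY KEY, etype TEXT);
CREATE TABLE element_node (          -- which nodes each element connects
    elem_id   INTEGER REFERENCES element,
    local_idx INTEGER,               -- node's position within the element
    node_id   INTEGER REFERENCES node,
    PRIMARY KEY (elem_id, local_idx));
"""

def load_mesh(conn, nodes, elements):
    conn.executescript(SCHEMA)
    conn.executemany("INSERT INTO node VALUES (?,?,?,?)", nodes)
    for elem_id, etype, node_ids in elements:
        conn.execute("INSERT INTO element VALUES (?,?)", (elem_id, etype))
        conn.executemany("INSERT INTO element_node VALUES (?,?,?)",
                         [(elem_id, i, n) for i, n in enumerate(node_ids)])

def elements_touching(conn, node_id):
    # e.g. every element incident to a node: a join, not a file scan
    return [r[0] for r in conn.execute(
        "SELECT DISTINCT elem_id FROM element_node "
        "WHERE node_id = ? ORDER BY elem_id", (node_id,))]
```

Once the mesh lives in tables like these, size (indexing, out-of-core access) and integration (Web access, sharing with other applications) come from the database system rather than from per-application file formats.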
|