| id | title | categories | abstract |
|---|---|---|---|
0811.0139
|
Entropy, Perception, and Relativity
|
cs.LG
|
In this paper, I expand Shannon's definition of entropy into a new form of entropy that allows integration of information from different random events. Shannon's notion of entropy is a special case of my more general definition. I define probability using a so-called performance function, which is de facto an exponential distribution. Assuming that my general notion of entropy reflects the true uncertainty about a probabilistic event, I argue that our perceived uncertainty differs from it. I claim that our perception is the result of two opposing forces, similar to the two famous antagonists in Chinese philosophy: Yin and Yang. Based on this idea, I show that our perceived uncertainty matches the true uncertainty at points determined by the golden ratio. I demonstrate that the well-known sigmoid function, which we typically employ in artificial neural networks as a non-linear threshold function, describes the actual performance. Furthermore, I provide a motivation for time dilation in Einstein's Special Relativity, essentially claiming that although time dilation conforms with our perception, it does not correspond to reality. At the end of the paper, I show how to apply this theoretical framework to practical applications. I present recognition rates for a pattern recognition problem, and also propose a network architecture that can take advantage of general entropy to solve complex decision problems.
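Purely as an illustration of the standard building blocks this abstract invokes (not of the paper's own generalized entropy or performance function), Shannon entropy and the sigmoid threshold function can be sketched in a few lines of Python:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(p) = -sum_i p_i log p_i, in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def sigmoid(x):
    """The logistic function used as a non-linear threshold in neural networks."""
    return 1.0 / (1.0 + math.exp(-x))

# Entropy is maximal for a uniform distribution over a fixed support.
assert shannon_entropy([0.5, 0.5]) > shannon_entropy([0.9, 0.1])
```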
|
0811.0146
|
Effect of Tuned Parameters on a LSA MCQ Answering Model
|
cs.LG cs.AI stat.ML
|
This paper presents the current state of a work in progress whose objective is to better understand the effects of the factors that significantly influence the performance of Latent Semantic Analysis (LSA). A difficult task, answering (French) biology Multiple Choice Questions, is used to test the semantic properties of the truncated singular space and to study the relative influence of the main parameters. Dedicated software has been designed to fine-tune the LSA semantic space for the Multiple Choice Question task. With optimal parameters, the performance of our simple model is, quite surprisingly, equal or superior to that of 7th- and 8th-grade students. This indicates that the semantic spaces were quite good despite their low dimensions and the small sizes of the training data sets. In addition, we present an original entropy-based global weighting of the answer terms of each question, which was necessary to achieve the model's success.
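As background (our illustration, not the paper's code): LSA builds a semantic space by applying a truncated SVD to a globally weighted term-document matrix. The sketch below uses the classical log-entropy global weighting often paired with LSA; the paper's original entropy weighting of answer terms may differ in detail.

```python
import numpy as np

def log_entropy_weight(tf):
    """Classical log-entropy weighting of a term-document count matrix.

    tf[i, j] = count of term i in document j. A term spread uniformly over
    all n documents gets global weight 0; a term concentrated in a single
    document gets weight 1.
    """
    n = tf.shape[1]
    gf = tf.sum(axis=1, keepdims=True)                  # global frequencies
    p = tf / np.maximum(gf, 1e-12)
    plogp = np.where(p > 0, p * np.log(np.maximum(p, 1e-12)), 0.0)
    g = 1.0 + plogp.sum(axis=1) / np.log(n)
    return np.log1p(tf) * g[:, None]

def lsa_space(weighted, k):
    """Truncated SVD: keep the k largest singular triplets."""
    u, s, vt = np.linalg.svd(weighted, full_matrices=False)
    return u[:, :k], s[:k], vt[:k]

tf = np.array([[1., 1., 1.],    # uniform term -> weighted to zero
               [3., 0., 0.]])   # concentrated term -> full weight
w = log_entropy_weight(tf)
u, s, vt = lsa_space(w, k=1)
```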
|
0811.0152
|
Theoretical Analysis of Compressive Sensing via Random Filter
|
cs.IT math.IT
|
In this paper, the theoretical analysis of compressive sensing via random filters, first outlined by J. Romberg ["Compressive sensing by random convolution," submitted to the SIAM Journal on Imaging Sciences on July 9, 2008], is refined and generalized to the design of general random filters for compressive sensing. This universal CS measurement consists of two parts: one comes from the convolution of the unknown signal with a random waveform followed by random time-domain subsampling; the other comes from direct time-domain subsampling of the unknown signal. It is shown that the proposed approach is a universally efficient data-acquisition strategy, meaning that an n-dimensional signal that is S-sparse in any sparse representation can be exactly recovered from S log n measurements with overwhelming probability.
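A toy NumPy sketch of the two-part measurement described above (sizes are made up, and the actual operator design in the paper may differ): circular convolution of an S-sparse signal with a random waveform followed by random time-domain subsampling, plus direct time-domain subsampling, for a total on the order of S log n samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 256, 5                            # signal length and sparsity (toy values)
m = 2 * int(round(s * np.log(n)))        # on the order of S log n measurements

# An S-sparse signal in the canonical basis.
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

# Part 1: circular convolution with a random waveform, then random subsampling.
h = rng.standard_normal(n)
y_conv = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real
part1 = y_conv[rng.choice(n, m // 2, replace=False)]

# Part 2: direct time-domain subsampling of the unknown signal.
part2 = x[rng.choice(n, m // 2, replace=False)]

measurements = np.concatenate([part1, part2])
```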
|
0811.0174
|
A Bit of Information Theory, and the Data Augmentation Algorithm
Converges
|
cs.IT math.IT stat.CO
|
The data augmentation (DA) algorithm is a simple and powerful tool in
statistical computing. In this note, basic information theory is used to prove a
nontrivial convergence theorem for the DA algorithm.
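As a concrete (hypothetical) instance of the DA algorithm, not taken from the note: it alternates an imputation step (draw the missing data given the parameter) and a posterior step (draw the parameter given the completed data). A minimal NumPy sketch for a two-component Gaussian mixture with known components and an unknown weight:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data from the mixture p*N(-2,1) + (1-p)*N(2,1), with true p = 0.3.
z_true = rng.random(500) < 0.3
y = np.where(z_true, rng.normal(-2.0, 1.0, 500), rng.normal(2.0, 1.0, 500))

def npdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

p, draws = 0.5, []
for _ in range(2000):
    # I-step: impute the latent component labels given the current p.
    w = p * npdf(y, -2.0)
    z = rng.random(500) < w / (w + (1.0 - p) * npdf(y, 2.0))
    # P-step: draw p from its complete-data Beta posterior (uniform prior).
    p = rng.beta(1 + z.sum(), 1 + (~z).sum())
    draws.append(p)

posterior_mean = np.mean(draws[500:])   # close to the true weight 0.3
```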
|
0811.0196
|
Reduced-Complexity Reed--Solomon Decoders Based on Cyclotomic FFTs
|
cs.IT math.IT
|
In this paper, we reduce the computational complexities of partial and dual
partial cyclotomic FFTs (CFFTs), which are discrete Fourier transforms where
spectral and temporal components are constrained, based on their properties as
well as a common subexpression elimination algorithm. Our partial CFFTs achieve
smaller computational complexities than previously proposed partial CFFTs.
Utilizing our CFFTs in both transform- and time-domain Reed--Solomon decoders,
we achieve significant complexity reductions.
|
0811.0210
|
Novel Blind Signal Classification Method Based on Data Compression
|
cs.IT math.IT
|
This paper proposes a novel algorithm for signal classification problems. We consider a non-stationary random signal whose samples can be classified into several different classes, and where the samples in each class are independently and identically distributed with an unknown probability distribution. The problem to be solved is to estimate the probability distributions of the classes and the correct membership of the samples in the classes. We propose a signal classification method based on the data-compression principle that accurate estimation in classification problems induces optimal signal models for data compression. The method formulates the classification problem as an optimization problem in which a so-called "classification gain" is maximized. In order to circumvent the difficulties of integer optimization, we propose an algorithm based on continuous relaxation. It is proven in this paper that the continuous relaxation incurs asymptotically vanishing optimality loss. We show by simulation that the proposed algorithm is effective and robust and has low computational complexity. The proposed algorithm can be applied to various multimedia signal segmentation, analysis, and pattern recognition problems.
|
0811.0241
|
Joint Transmitter-Receiver Design for the Downlink Multiuser Spatial
Multiplexing MIMO System
|
cs.IT math.IT
|
This paper proposes a joint transmitter-receiver design that minimizes the weighted sum power under post-processing signal-to-interference-and-noise ratio (post-SINR) constraints for all subchannels. Simulation results demonstrate that the algorithm not only satisfies the post-SINR constraints but also easily adjusts the power distribution among the users by changing the weights accordingly. Hence, the algorithm can be used to alleviate adjacent-cell interference by reducing the transmit power to the edge users without performance penalty.
|
0811.0285
|
Some results on communicating the sum of sources over a network
|
cs.IT math.IT
|
We consider the problem of communicating the sum of $m$ sources to $n$
terminals in a directed acyclic network. Recently, it was shown that for a
network of unit capacity links with either $m=2$ or $n=2$, the sum of the
sources can be communicated to the terminals if and only if every
source-terminal pair is connected in the network. We show in this paper that
for any finite set of primes, there exists a network where the sum of the
sources can be communicated to the terminals only over finite fields of
characteristic belonging to that set. As a corollary, this gives networks where
the sum cannot be communicated over any finite field even though every source
is connected to every terminal.
|
0811.0310
|
Edhibou: a Customizable Interface for Decision Support in a Semantic
Portal
|
cs.AI cs.HC
|
The Semantic Web is becoming more and more a reality, as the required
technologies have reached an appropriate level of maturity. However, at this
stage, it is important to provide tools facilitating the use and deployment of
these technologies by end-users. In this paper, we describe EdHibou, an automatically generated, ontology-based graphical user interface that integrates into a semantic portal. The particularity of EdHibou is that it makes use of OWL reasoning capabilities to provide intelligent features, such as decision support, on top of the underlying ontology. We present an application of
EdHibou to medical decision support based on a formalization of clinical
guidelines in OWL and show how it can be customized thanks to an ontology of
graphical components.
|
0811.0325
|
Energy Benefit of Network Coding for Multiple Unicast in Wireless
Networks
|
cs.IT math.IT
|
We show that the maximum possible energy benefit of network coding for
multiple unicast on wireless networks is at least 3. This improves the
previously known lower bound of 2.4 from [1].
|
0811.0335
|
Cooperative interface of a swarm of UAVs
|
cs.AI cs.HC cs.MA
|
After presenting the broad context of authority sharing, we outline how
introducing more natural interaction in the design of the ground operator
interface of UV systems should help in allowing a single operator to manage the
complexity of his/her task. Introducing new modalities is one of the means of realizing our vision of next-generation GOIs. A more fundamental
aspect resides in the interaction manager which should help balance the
workload of the operator between mission and interaction, notably by applying a
multi-strategy approach to generation and interpretation. We intend to apply
these principles to the context of the Smaart prototype, and in this
perspective, we illustrate how to characterize the workload associated with a
particular operational situation.
|
0811.0340
|
Document stream clustering: experimenting an incremental algorithm and
AR-based tools for highlighting dynamic trends
|
cs.AI
|
We address here two major challenges presented by dynamic data mining: 1) the stability challenge: we have implemented a rigorous incremental density-based clustering algorithm, independent of any initial conditions and of the ordering of the data-vector stream; 2) the cognitive challenge: we have implemented a stringent selection process for association rules between clusters at time t-1 and time t, directly generating the main conclusions about the dynamics of a data stream. We illustrate these points with an application to a scientific information database of 2,600 documents covering two years.
|
0811.0359
|
Embedding Non-Ground Logic Programs into Autoepistemic Logic for
Knowledge Base Combination
|
cs.LO cs.AI
|
In the context of the Semantic Web, several approaches to the combination of
ontologies, given in terms of theories of classical first-order logic and rule
bases, have been proposed. They either cast rules into classical logic or limit
the interaction between rules and ontologies. Autoepistemic logic (AEL) is an
attractive formalism that makes it possible to overcome these limitations, by serving as
a uniform host language to embed ontologies and nonmonotonic logic programs
into it. For the latter, so far only the propositional setting has been
considered. In this paper, we present three embeddings of normal and three
embeddings of disjunctive non-ground logic programs under the stable model
semantics into first-order AEL. While the embeddings all correspond with
respect to objective ground atoms, differences arise when considering
non-atomic formulas and combinations with first-order theories. We compare the
embeddings with respect to stable expansions and autoepistemic consequences,
considering the embeddings by themselves, as well as combinations with
classical theories. Our results reveal differences and correspondences of the
embeddings and provide useful guidance in the choice of a particular embedding
for knowledge combination.
|
0811.0405
|
Predicting the popularity of online content
|
cs.CY cs.IR physics.soc-ph
|
We present a method for accurately predicting the long-term popularity of online content from early measurements of user access. Using two content-sharing portals, YouTube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of YouTube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while YouTube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions whose attention decays quickly, whereas predictions for evergreen content are prone to larger errors.
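The abstract's modeling idea can be caricatured with a constant-offset model on log popularity: fit a single multiplicative factor between early and long-term counts on training items, then extrapolate. The sketch and its numbers below are illustrative assumptions, not the paper's data or exact model.

```python
import numpy as np

def fit_log_offset(early, late):
    """Least-squares offset beta for the model ln(late) ~ ln(early) + beta."""
    return float(np.mean(np.log(late) - np.log(early)))

def predict_late(early, beta):
    """Extrapolate long-term popularity from an early access count."""
    return float(np.exp(np.log(early) + beta))

# Toy training set: long-term counts are roughly 20x the early counts.
early = np.array([10.0, 50.0, 200.0, 5.0])
late = 20.0 * early * np.array([1.1, 0.9, 1.0, 1.05])

beta = fit_log_offset(early, late)
forecast = predict_late(100.0, beta)    # roughly 2000
```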
|
0811.0413
|
Robust Linear Processing for Downlink Multiuser MIMO System With
Imperfectly Known Channel
|
cs.IT math.IT
|
This paper proposes a robust downlink multiuser MIMO scheme that exploits the channel mean and antenna correlations to alleviate the performance penalty due to the mismatch between the true and estimated CSI.
|
0811.0417
|
Parametric Channel Estimation by Exploiting Hopping Pilots in Uplink
OFDMA
|
cs.IT math.IT
|
This paper proposes a parametric channel estimation algorithm applicable to the uplink of OFDMA systems with pseudo-random subchannelization. It exploits the hopping pilots to enable ESPRIT estimation of the delay subspace of the multipath fading channel, and utilizes the global pilot tones to interpolate on the data subcarriers. Hence, it considerably outperforms traditional local channel interpolators.
|
0811.0419
|
Doppler Spread Estimation by Subspace Tracking for OFDM Systems
|
cs.IT math.IT
|
This paper proposes a novel maximum Doppler spread estimation algorithm for OFDM systems with the comb-type pilot pattern. By tracking the drifting delay subspace of the multipath channel, the time-correlation function is measured with high accuracy, which in turn considerably improves the estimation accuracy of the maximum Doppler spread.
|
0811.0430
|
An Analysis of the Bias-Property of the Sample Auto-Correlation Matrices
of Doubly Selective Fading Channels for OFDM Systems
|
cs.IT math.IT
|
This paper derives the analytic expression of the sample auto-correlation matrix from the least-squares channel estimate of doubly selective fading channels for OFDM systems. According to this expression, the sample auto-correlation matrix is biased, which causes model mismatch and therefore deteriorates the performance of channel estimation. Numerical results confirm the bias property and the corresponding analysis.
|
0811.0431
|
On the Cramer-Rao Lower Bound for Frequency Correlation Matrices of
Doubly Selective Fading Channels for OFDM Systems
|
cs.IT math.IT
|
The analytic expressions of the CRLB and the maximum likelihood estimator for the sample frequency correlation matrices in doubly selective fading channels for OFDM systems are reported in this paper. According to the analytical and numerical results, the number of samples dominates the average mean square error, while the SNR and the Doppler spread have a negligible effect.
|
0811.0433
|
On the Cramer-Rao Lower Bound for Spatial Correlation Matrices of Doubly
Selective Fading Channels for MIMO OFDM Systems
|
cs.IT math.IT
|
The analytic expressions of the CRLB and the maximum likelihood estimator for spatial correlation matrices in time-varying multipath fading channels for MIMO OFDM systems are reported in this paper. The analytical and numerical results reveal that the number of samples and the order of frequency selectivity have a dominant impact on the CRLB. Moreover, the number of pilot tones, the SNR, and the normalized maximum Doppler spread together influence the effective order of frequency selectivity.
|
0811.0452
|
Doppler Spread Estimation by Tracking the Delay-Subspace for OFDM
Systems in Doubly Selective Fading Channels
|
cs.IT math.IT
|
A novel maximum Doppler spread estimation algorithm for OFDM systems with
comb-type pilot pattern is presented in this paper. By tracking the drifting
delay subspace of time-varying multipath channels, a Doppler dependent
parameter can be accurately measured and then expanded and transformed into a non-linear high-order polynomial equation, from which the maximum Doppler spread is readily solved by Newton's method. Its performance
is demonstrated by simulations.
|
0811.0453
|
CoZo+ - A Content Zoning Engine for textual documents
|
cs.CL cs.IR
|
Content zoning can be understood as a segmentation of textual documents into
zones. This is inspired by [6], who initially proposed an approach for the
argumentative zoning of textual documents. With the prototypical CoZo+ engine,
we focus on content zoning towards an automatic processing of textual streams
while considering only the actors as the zones. We gain information that can be
used to realize an automatic recognition of content for pre-defined actors. We
understand CoZo+ as a necessary pre-step towards an automatic generation of
summaries and to make intellectual ownership of documents detectable.
|
0811.0543
|
Incomplete decode-and-forward protocol using distributed space-time
block codes
|
cs.IT math.IT
|
In this work, we explore the introduction of distributed space-time codes in
decode-and-forward (DF) protocols. A first protocol, named the Asymmetric DF, is presented. It is based on two phases of different lengths, defined so that signals can be fully decoded at the relays. This strategy brings full diversity, but the symbol rate is not optimal. To solve this problem, a second protocol, named the Incomplete DF, is defined. It is based on incomplete decoding at the relays, which reduces the length of the first phase. This last strategy brings both full diversity and full symbol rate. The outage probability and the simulation results show that the Incomplete DF performs better than any existing DF protocol and than the non-orthogonal amplify-and-forward (NAF) strategy using the same space-time codes. Moreover, the diversity-multiplexing gain tradeoff (DMT) of this new DF protocol is proven to be the same as that of the NAF.
|
0811.0579
|
UNL-French deconversion as transfer & generation from an interlingua
with possible quality enhancement through offline human interaction
|
cs.CL
|
We present the architecture of the UNL-French deconverter, which "generates"
from the UNL interlingua by first "localizing" the UNL form for French, within
UNL, and then applying slightly adapted but classical transfer and generation
techniques, implemented in GETA's Ariane-G5 environment, supplemented by some
UNL-specific tools. Online interaction can be used during deconversion to
enhance output quality and is now used for development purposes. We show how
interaction could be delayed and embedded in the postedition phase, which would
then interact not directly with the output text, but indirectly with several
components of the deconverter. Interacting online or offline can improve the
quality not only of the utterance at hand, but also of the utterances processed
later, as various preferences may be automatically changed to let the
deconverter "learn".
|
0811.0602
|
Classification dynamique d'un flux documentaire : une évaluation statique préalable de l'algorithme GERMEN
|
cs.AI
|
Data-stream clustering is an ever-expanding subdomain of knowledge extraction. Most past and present research effort aims at efficient scaling up for huge data repositories. Our approach focuses on qualitative improvement, mainly for "weak signal" detection and the precise tracking of topical evolutions in the framework of information watch, though scalability is intrinsically guaranteed in a possibly distributed implementation. Our GERMEN algorithm exhaustively picks up the whole set of density peaks of the data at time t by identifying the local perturbations induced by the current document vector, such as changing cluster borders or new/vanishing clusters. Optimality follows from the uniqueness 1) of the density landscape for any value of our zoom parameter, and 2) of the cluster allocation operated by our border-propagation rule. This results in rigorous independence from the data presentation order and from any initialization parameter. We present here, as a first step, only the assessment of a static view resulting from one year of the CNRS/INIST Pascal database in the field of geotechnics.
|
0811.0603
|
Query Refinement by Multi Word Term expansions and semantic synonymy
|
cs.IR
|
We have developed a system, TermWatch (https://stid-bdd.iut.univ-metz.fr/TermWatch/index.pl), which combines the linguistic extraction of terms and their structuring into a terminological network with a clustering algorithm. In this paper, we explore its ability to integrate the most promising aspects of studies on query refinement: the choice of meaningful text units to cluster (domain terms), the choice of tight semantic relations with which to cluster terms, and the structuring of terms into a network enabling a better perception of domain concepts. We ran this experiment on the 367,645 English abstracts of the PASCAL 2005-2006 bibliographic database (http://www.inist.fr) and compared the structured terminological resource automatically built by TermWatch to the English segment of the TermScience resource (http://termsciences.inist.fr/), which contains 88,211 terms.
|
0811.0623
|
Algorithmic complexity and randomness in elastic solids
|
cs.CC cs.IT math.IT
|
A system comprising an elastic solid and its response to an external random force sequence is shown to behave according to the principles of the theory of algorithmic complexity and randomness. The solid distorts the randomness of an input force sequence in a way proportional to its algorithmic complexity. We demonstrate this by numerical analysis of a one-dimensional vibrating elastic solid (the system) to which we apply a maximally random input force. The level of complexity of the system is controlled via external parameters. The output response is the field of displacements observed at several positions on the body. The algorithmic complexity and stochasticity of the resulting output displacement sequence are measured and compared against the complexity of the system. The results show that the higher the complexity of the system, the more randomness-deficient the output sequence. This agrees with the theory introduced in [16], which states that physical systems such as this behave as algorithmic selection rules that act on random actions in their surroundings.
|
0811.0637
|
Optimality of Myopic Sensing in Multi-Channel Opportunistic Access
|
cs.NI cs.IT math.IT
|
We consider opportunistic communications over multiple channels where the state ("good" or "bad") of each channel evolves as an independent and identically distributed Markov process. A user, with limited sensing and access
capability, chooses one channel to sense and subsequently access (based on the
sensed channel state) in each time slot. A reward is obtained when the user
senses and accesses a "good" channel. The objective is to design the optimal
channel selection policy that maximizes the expected reward accrued over time.
This problem can be generally cast as a Partially Observable Markov Decision
Process (POMDP) or a restless multi-armed bandit process, to which optimal
solutions are often intractable. We show in this paper that the myopic policy,
with a simple and robust structure, achieves optimality under certain
conditions. This result finds applications in opportunistic communications in
fading environment, cognitive radio networks for spectrum overlay, and
resource-constrained jamming and anti-jamming.
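The myopic policy in question has a one-line form: sense the channel with the highest current belief of being "good". A small sketch (our notation, not the paper's code) for identical two-state Markov channels with transition probabilities p11 = P(good→good) and p01 = P(bad→good):

```python
import numpy as np

def myopic_choice(beliefs):
    """Sense the channel currently believed most likely to be good."""
    return int(np.argmax(beliefs))

def update_belief(omega, p11, p01, sensed, observed_good=False):
    """One-slot belief update for a two-state Markov channel.

    A sensed channel's belief resets to the transition probability out of
    the observed state; unsensed beliefs are propagated through the chain.
    """
    if sensed:
        return p11 if observed_good else p01
    return omega * p11 + (1.0 - omega) * p01

p11, p01 = 0.8, 0.2        # positively correlated channel (p11 >= p01)
beliefs = [0.5, 0.6, 0.4]
chosen = myopic_choice(beliefs)   # channel 1
beliefs = [update_belief(w, p11, p01, sensed=(i == chosen), observed_good=True)
           for i, w in enumerate(beliefs)]
```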
|
0811.0705
|
The Design of Sparse Antenna Array
|
cs.IT math.IT
|
The aim of antenna array synthesis is to achieve a desired radiation pattern with the minimum number of antenna elements. In this paper the antenna synthesis problem is studied from a totally new perspective. One of the key principles of compressive sensing is that the signal to be sensed should be sparse or compressible. This coincides with the requirement of a minimum number of elements in the antenna array synthesis problem. In this paper the number of antenna elements in the array is efficiently reduced via compressive sensing, which shows a great improvement over existing antenna synthesis methods. Moreover, the desired radiation pattern can be achieved in a computation time even shorter than that of existing methods. Numerical examples are presented to show the high efficiency of the proposed method.
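One common way to pose such a sparsity-driven synthesis is l1-regularized least squares over a fine grid of candidate element positions: most weights are driven to zero and the surviving elements form the sparse array. The sketch below uses a random stand-in for the array-factor matrix and plain ISTA; the paper's actual formulation may differ.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, d, lam=0.1, iters=3000):
    """ISTA for min_w 0.5*||A w - d||^2 + lam*||w||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w = soft(w - A.T @ (A @ w - d) / L, lam / L)
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))         # stand-in for the pattern matrix
w_true = np.zeros(100)
w_true[[5, 40, 77]] = [1.0, -1.5, 0.8]     # 3 active elements out of 100
d = A @ w_true                             # desired pattern samples

w = ista(A, d)                             # sparse weights: few nonzeros
```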
|
0811.0717
|
Visualization of association graphs for assisting the interpretation of
classifications
|
stat.AP cs.DL cs.IR
|
Given a query on the PASCAL database maintained by the INIST, we design user
interfaces to visualize and browse two types of graphs extracted from
abstracts: 1) the graph of all associations between authors (co-author graph),
2) the graph of strong associations between authors and terms automatically
extracted from abstracts and grouped using linguistic variations. For this purpose, we adapt the TermWatch system, which comprises a term extractor, a relation identifier that yields the terminological network, and a clustering module. The
results are output on two interfaces: a graphic one mapping the clusters in a
2D space and a terminological hypertext network allowing the user to
interactively explore results and return to source texts.
|
0811.0719
|
Web Usage Analysis: New Science Indicators and Co-usage
|
cs.IR stat.AP
|
A new type of statistical analysis of science and technical information (STI) in the Web context is presented. We propose a set of indicators about Web users, visualized bibliographic records, and e-commercial transactions. In addition, we introduce two Web usage factors. Finally, we give an overview of co-usage analysis. For these tasks, we introduce a computer-based system, called Miri@d, which produces descriptive statistical information about Web users' searching behaviour and about what is effectively used from a free-access digital bibliographical database. The system is conceived both as a server of statistical data computed beforehand and as an interactive server for online statistical work. The results will be made available to analysts, who can use this descriptive statistical information as raw data for their indicator design tasks, and as input for multivariate data analysis, clustering analysis, and mapping. Managers can also exploit the results to improve management and decision-making.
|
0811.0726
|
Improved Capacity Scaling in Wireless Networks With Infrastructure
|
cs.IT math.IT
|
This paper analyzes the impact and benefits of infrastructure support in
improving the throughput scaling in networks of $n$ randomly located wireless
nodes. The infrastructure uses multi-antenna base stations (BSs), in which the
number of BSs and the number of antennas at each BS can scale at arbitrary
rates relative to $n$. Under the model, capacity scaling laws are analyzed for
both dense and extended networks. Two BS-based routing schemes are first
introduced in this study: an infrastructure-supported single-hop (ISH) routing
protocol with multiple-access uplink and broadcast downlink and an
infrastructure-supported multi-hop (IMH) routing protocol. Then, their
achievable throughput scalings are analyzed. These schemes are compared against
two conventional schemes without BSs: the multi-hop (MH) transmission and
hierarchical cooperation (HC) schemes. It is shown that a linear throughput
scaling is achieved in dense networks, as in the case without BS support. In
contrast, the proposed BS-based routing schemes can, under realistic network
conditions, improve the throughput scaling significantly in extended networks.
The gain comes from the following advantages of these BS-based protocols.
First, more nodes can transmit simultaneously in the proposed scheme than in
the MH scheme if the number of BSs and the number of antennas are large enough.
Second, by improving the long-distance signal-to-noise ratio (SNR), the received signal power can be larger than that of the HC scheme, enabling better throughput scaling in extended networks. Furthermore, by deriving the corresponding information-theoretic cut-set upper bounds, it is shown for extended networks that a combination of the four schemes IMH, ISH, MH, and HC is order-optimal in all operating regimes.
|
0811.0731
|
Cognitive OFDM network sensing: a free probability approach
|
cs.IT cs.AI math.IT math.PR
|
In this paper, a practical power detection scheme for OFDM terminals, based
on recent free probability tools, is proposed. The objective is for the receiving terminal to determine the transmission power and the number of the surrounding base stations in the network. However, the system dimensions of the network model turn energy detection into an under-determined problem. The focus of this paper is then twofold: (i) to discuss the maximum amount of information that an OFDM terminal can gather from the surrounding base stations in the network, and (ii) to propose a practical solution for blind cell detection using the free deconvolution tool. The efficiency of this solution is measured through simulations, which show better performance than classical power detection methods.
|
0811.0741
|
Data Mining-based Fragmentation of XML Data Warehouses
|
cs.DB
|
With the multiplication of XML data sources, many XML data warehouse models have been proposed to handle data heterogeneity and complexity in a way relational data warehouses fail to achieve. However, XML-native database systems currently suffer from limited performance, both in terms of manageable data volume and response time. Fragmentation helps address both these issues. Derived horizontal fragmentation is typically used in relational data warehouses and can definitely be adapted to the XML context. However, the number of fragments produced by classical algorithms is difficult to control. In this paper, we propose a k-means-based fragmentation approach that makes it possible to control the number of fragments through its $k$ parameter. We experimentally compare its efficiency to classical derived horizontal fragmentation algorithms adapted to XML data warehouses and show its superiority.
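To fix ideas, here is a minimal NumPy sketch of the principle only: cluster the fragmentation units by their feature vectors with k-means, so that the number of fragments is exactly the chosen k. The feature construction and the paper's actual algorithm are not reproduced; the matrix below is a made-up example of predicate-usage vectors.

```python
import numpy as np

def kmeans_fragments(X, k, iters=50, seed=0):
    """Plain k-means; each cluster label defines one horizontal fragment,
    so k directly controls the number of fragments."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each unit to its nearest center, then recompute centers.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Rows: candidate fragmentation units; columns: usage of two query predicates.
X = np.array([[1.0, 0.0], [1.0, 0.1], [0.9, 0.0], [0.0, 1.0], [0.1, 1.0]])
fragments = kmeans_fragments(X, k=2)
```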
|
0811.0764
|
A Bayesian Framework for Collaborative Multi-Source Signal Detection
|
cs.IT cs.AI math.IT math.PR
|
This paper introduces a Bayesian framework to detect multiple signals
embedded in noisy observations from a sensor array. For various states of
knowledge on the communication channel and the noise at the receiving sensors,
a marginalization procedure based on recent tools of finite random matrix
theory, in conjunction with the maximum entropy principle, is used to compute
the hypothesis selection criterion. Quite remarkably, explicit expressions for
the Bayesian detector are derived which enable to decide on the presence of
signal sources in a noisy wireless environment. The proposed Bayesian detector
is shown to outperform the classical power detector when the noise power is
known and provides very good performance for limited knowledge on the noise
power. Simulations corroborate the theoretical results and quantify the gain
achieved using the proposed Bayesian framework.
|
0811.0777
|
A random coding theorem for "modulo-two adder" source network
|
cs.IT math.IT
|
This paper has been withdrawn by the author, due to a crucial error in the proof of the main Theorem (Sec. 3). In particular, in deriving the bound on the
probability of error (Eq. 10) the contribution of those pairs (x', y') that are
not equal to (x, y) has not been considered. By adding the contribution of
these pairs, one can verify that a region of rates similar to the Slepian-Wolf
region will emerge.
The author would like to acknowledge a critical review of the paper by Mr.
Paul Cuff of Stanford University who first pointed out the error.
|
0811.0778
|
A maximum entropy approach to OFDM channel estimation
|
cs.IT math.IT math.PR
|
In this work, a new Bayesian framework for OFDM channel estimation is
proposed. Using Jaynes' maximum entropy principle to derive prior information,
we successively tackle the situations when only the channel delay spread is a
priori known, then when it is not known. Exploitation of the time-frequency
dimensions is also considered in this framework, to derive the optimal channel
estimate associated with some performance measure under any state of knowledge.
Simulations corroborate the optimality claim: the proposed estimators always
perform as well as or better than classical estimators.
|
0811.0823
|
Distributed Constrained Optimization with Semicoordinate Transformations
|
cs.NE cs.AI
|
Recent work has shown how information theory extends conventional
full-rationality game theory to allow bounded rational agents. The associated
mathematical framework can be used to solve constrained optimization problems.
This is done by translating the problem into an iterated game, where each agent
controls a different variable of the problem, so that the joint probability
distribution across the agents' moves gives an expected value of the objective
function. The dynamics of the agents is designed to minimize a Lagrangian
function of that joint distribution. Here we illustrate how the updating of the
Lagrange parameters in the Lagrangian is a form of automated annealing, which
focuses the joint distribution more and more tightly about the joint moves that
optimize the objective function. We then investigate the use of
``semicoordinate'' variable transformations. These separate the joint state of
the agents from the variables of the optimization problem, with the two
connected by an onto mapping. We present experiments illustrating the ability
of such transformations to facilitate optimization. We focus on the special
kind of transformation in which the statistically independent states of the
agents induce a mixture distribution over the optimization variables. Computer
experiments illustrate this for $k$-sat constraint satisfaction problems and for
unconstrained minimization of $NK$ functions.
|
0811.0935
|
A New Training Protocol for Channel State Estimation in Wireless Relay
Networks
|
cs.IT math.IT
|
The accuracy of channel state information (CSI) is critical for improving the
capacity of wireless networks. In this paper, we introduce a training protocol
for wireless relay networks that uses channel estimation and feedforwarding
methods. The feedforwarding method is the distinctive feature of the proposed
protocol. As we show, each relay feedforwards the imperfect CSI to the
destination in a way that provides a higher network capacity and a faster
transfer of the CSI than existing protocols. In addition, we show the
impact of effective CSI accuracy on wireless relay network capacity
by comparing networks with the perfect effective CSI, imperfect effective CSI,
and noisy imperfect effective CSI available at the destination.
|
0811.0942
|
\'Etude longitudinale d'une proc\'edure de mod\'elisation de
connaissances en mati\`ere de gestion du territoire agricole
|
cs.AI
|
This paper gives an introduction to this issue, and presents the framework
and the main steps of the Rosa project. Four teams of researchers, agronomists,
computer scientists, psychologists and linguists were involved for five
years in this project, which aimed at the development of a knowledge-based
system. The purpose of the Rosa system is the modelling and the comparison of
farm spatial organizations. It relies on a formalization of agronomical
knowledge and thus induces a joint knowledge building process involving both
the agronomists and the computer scientists. The paper describes the steps of
the modelling process as well as the filming procedures set up by the
psychologists and linguists in order to make explicit and to analyze the
underlying knowledge building process.
|
0811.0952
|
Raptor Codes and Cryptographic Issues
|
cs.IT math.IT
|
In this paper two cryptographic methods are introduced. In the first method,
the presence of a subgroup of persons of a certain size can be verified before
an action takes place. For this we use fragments of Raptor codes delivered to
the group members. In the other method, the selection of a subset of objects
can be kept secret; moreover, the original selection can be proven afterwards.
|
0811.0971
|
Mining Complex Hydrobiological Data with Galois Lattices
|
cs.AI q-bio.QM
|
We have used Galois lattices for mining hydrobiological data. These data are
about macrophytes, which are macroscopic plants living in water bodies. These
plants are characterized by several biological traits, each of which has several
modalities. Our aim is to cluster the plants according to their common traits
and modalities and to find out the relations between traits. Galois lattices
are efficient methods for such an aim, but apply only to binary data. In this
article, we detail a few approaches we used to transform complex
hydrobiological data into binary data and compare the first results obtained
thanks to Galois lattices.
|
0811.0980
|
Self-organized criticality and adaptation in discrete dynamical networks
|
nlin.AO cond-mat.dis-nn cs.NE
|
It has been proposed that adaptation in complex systems is optimized at the
critical boundary between ordered and disordered dynamical regimes. Here, we
review models of evolving dynamical networks that lead to self-organization of
network topology based on a local coupling between a dynamical order parameter
and rewiring of network connectivity, with convergence towards criticality in
the limit of large network size $N$. In particular, two adaptive schemes are
discussed and compared in the context of Boolean Networks and Threshold
Networks: 1) active nodes lose links, frozen nodes acquire new links; 2) nodes
with correlated activity connect, de-correlated nodes disconnect. These simple
local adaptive rules lead to co-evolution of network topology and dynamics.
Adaptive networks are strikingly different from random networks: They evolve
inhomogeneous topologies and broad plateaus of homeostatic regulation,
dynamical activity exhibits $1/f$ noise and attractor periods obey a scale-free
distribution. The proposed co-evolutionary mechanism of topological
self-organization is robust against noise and does not depend on the details of
dynamical transition rules. Using finite-size scaling, it is shown that
networks converge to a self-organized critical state in the thermodynamic
limit. Finally, we discuss open questions and directions for future research,
and outline possible applications of these models to adaptive systems in
diverse areas.
|
0811.1000
|
Hard and Soft Spherical-Bound Stack decoder for MIMO systems
|
cs.IT math.IT
|
Classical ML decoders for MIMO systems, such as the sphere decoder, the
Schnorr-Euchner algorithm, and the Fano and stack decoders, suffer from high
complexity for large numbers of antennas and large constellation sizes. We
propose in this paper a novel sequential algorithm which combines the stack
algorithm search strategy and the sphere decoder search region. The proposed
decoder that we call the Spherical-Bound-Stack decoder (SB-Stack) can then be
used to resolve lattice and large size constellations decoding with a reduced
complexity compared to the classical ML decoders.
The SB-Stack decoder will be further extended to support soft-output
detection over linear channels. It will be shown that the soft SB-Stack decoder
outperforms other MIMO soft decoders in terms of performance and complexity.
|
0811.1083
|
A role-free approach to indexing large RDF data sets in secondary memory
for efficient SPARQL evaluation
|
cs.DB cs.DS
|
Massive RDF data sets are becoming commonplace. RDF data is typically
generated in social semantic domains (such as personal information management)
wherein a fixed schema is often not available a priori. We propose a simple
Three-way Triple Tree (TripleT) secondary-memory indexing technique to
facilitate efficient SPARQL query evaluation on such data sets. The novelty of
TripleT is that (1) the index is built over the atoms occurring in the data
set, rather than at a coarser granularity, such as whole triples occurring in
the data set; and (2) the atoms are indexed regardless of the roles (i.e.,
subjects, predicates, or objects) they play in the triples of the data set. We
show through extensive empirical evaluation that TripleT exhibits multiple
orders of magnitude improvement over the state of the art on RDF indexing, in
terms of both storage and query processing costs.
|
0811.1108
|
Resource Allocation for Downlink Cellular OFDMA Systems: Part I -
Optimal Allocation
|
cs.IT math.IT
|
In this pair of papers (Part I and Part II in this issue), we investigate the
issue of power control and subcarrier assignment in a sectorized two-cell
downlink OFDMA system impaired by multicell interference. As recommended for
WiMAX, we assume that the first part of the available bandwidth is likely to be
reused by different base stations (and is thus subject to multicell
interference) and that the second part of the bandwidth is shared in an
orthogonal way between the different base stations (and is thus protected from
multicell interference).
Although the problem of multicell resource allocation is nonconvex in this
scenario, we provide in Part I the general form of the global solution. In
particular, the optimal resource allocation turns out to be "binary" in the
sense that, except for at most one pivot-user in each cell, any user receives
data either in the reused bandwidth or in the protected bandwidth, but not in
both. The determination of the optimal resource allocation essentially reduces
to the determination of the latter pivot-position.
|
0811.1112
|
Resource Allocation for Downlink Cellular OFDMA Systems: Part II -
Practical Algorithms and Optimal Reuse Factor
|
cs.IT math.IT
|
In a companion paper, we characterized the optimal resource allocation in
terms of power control and subcarrier assignment, for a downlink sectorized
OFDMA system. In our model, the network is assumed to be one dimensional for
the sake of analysis. We also assume that a certain part of the available
bandwidth is likely to be reused by different base stations while the
other part of the bandwidth is shared in an orthogonal way between these base
stations. The optimal resource allocation characterized in Part I is obtained
by minimizing the total power spent by the network under the constraint that
all users' rate requirements are satisfied. When optimal resource allocation is
used, any user receives data either in the reused bandwidth or in the protected
bandwidth, but not in both (except for at most one pivot-user in each cell). We
also proposed an algorithm that determines the optimal values of users' resource
allocation parameters. The optimal allocation algorithm proposed in Part I
requires a large number of operations. In the present paper, we propose a
distributed practical resource allocation algorithm with low complexity. We
study the asymptotic behavior of both this simplified resource allocation
algorithm and the optimal resource allocation algorithm of Part I as the number
of users in each cell tends to infinity. Our analysis allows us to prove that the
proposed simplified algorithm is asymptotically optimal. As a byproduct of our
analysis, we characterize the optimal value of the frequency reuse factor.
|
0811.1250
|
Adaptive Base Class Boost for Multi-class Classification
|
cs.LG cs.IR
|
We develop the concept of ABC-Boost (Adaptive Base Class Boost) for
multi-class classification and present ABC-MART, a concrete implementation of
ABC-Boost. The original MART (Multiple Additive Regression Trees) algorithm has
been very successful in large-scale applications. For binary classification,
ABC-MART recovers MART. For multi-class classification, ABC-MART considerably
improves MART, as evaluated on several public data sets.
|
0811.1254
|
Coding Theory and Algebraic Combinatorics
|
math.CO cs.IT math.IT
|
This chapter introduces and elaborates on the fruitful interplay of coding
theory and algebraic combinatorics, with most of the focus on the interaction
of codes with combinatorial designs, finite geometries, simple groups, sphere
packings, kissing numbers, lattices, and association schemes. In particular,
special interest is devoted to the relationship between codes and combinatorial
designs. We describe and recapitulate important results in the development of
the state of the art. In addition, we give illustrative examples and
constructions, and highlight recent advances. Finally, we provide a collection
of significant open problems and challenges concerning future research.
|
0811.1260
|
The Application of Fuzzy Logic to Collocation Extraction
|
cs.CL
|
Collocations are important for many tasks of Natural language processing such
as information retrieval, machine translation, computational lexicography etc.
So far many statistical methods have been used for collocation extraction.
Almost all of these methods form a classical crisp set of collocations. We
propose a fuzzy logic approach to collocation extraction that forms a fuzzy set
of collocations in which each word combination has a certain grade of membership
for being a collocation. Fuzzy logic provides an easy way to express natural
language in fuzzy logic rules. Two existing methods, mutual information and the
t-test, have been utilized as inputs to the fuzzy inference system. The
resulting membership function can be easily visualized and demonstrated. To show
the utility of the fuzzy logic approach, some word pairs have been examined as
an example. The working data are based on a corpus of about one million words
drawn from several novels in Project Gutenberg, available at www.gutenberg.org.
The proposed method has all the advantages of the two methods while overcoming
their drawbacks, and hence provides better results than either method.
|
0811.1317
|
Secrecy in Cooperative Relay Broadcast Channels
|
cs.IT math.IT
|
We investigate the effects of user cooperation on the secrecy of broadcast
channels by considering a cooperative relay broadcast channel. We show that
user cooperation can increase the achievable secrecy region. We propose an
achievable scheme that combines Marton's coding scheme for broadcast channels
and Cover and El Gamal's compress-and-forward scheme for relay channels. We
derive outer bounds for the rate-equivocation region using auxiliary random
variables for single-letterization. Finally, we consider a Gaussian channel and
show that both users can have positive secrecy rates, which is not possible for
scalar Gaussian broadcast channels without cooperation.
|
0811.1319
|
Modeling Social Annotation: a Bayesian Approach
|
cs.AI
|
Collaborative tagging systems, such as Delicious, CiteULike, and others,
allow users to annotate resources, e.g., Web pages or scientific papers, with
descriptive labels called tags. The social annotations contributed by thousands
of users can potentially be used to infer categorical knowledge, classify
documents or recommend new relevant information. Traditional text inference
methods do not make the best use of social annotations, since they do not take into
account variations in individual users' perspectives and vocabulary. In
previous work, we introduced a simple probabilistic model that takes the interests
of individual annotators into account in order to find hidden topics of
annotated resources. Unfortunately, that approach had one major shortcoming:
the number of topics and interests must be specified a priori. To address this
drawback, we extend the model to a fully Bayesian framework, which offers a way
to automatically estimate these numbers. In particular, the model allows the
number of interests and topics to change as suggested by the structure of the
data. We evaluate the proposed model in detail on synthetic and real-world
data by comparing its performance to Latent Dirichlet Allocation on the topic
extraction task. For the latter evaluation, we apply the model to infer topics
of Web resources from social annotations obtained from Delicious in order to
discover new resources similar to a specified one. Our empirical results
demonstrate that the proposed model is a promising method for exploiting social
knowledge contained in user-generated annotations.
|
0811.1500
|
Linear Processing and Sum Throughput in the Multiuser MIMO Downlink
|
cs.IT math.IT
|
We consider linear precoding and decoding in the downlink of a multiuser
multiple-input, multiple-output (MIMO) system, wherein each user may receive
more than one data stream. We propose several mean squared error (MSE) based
criteria for joint transmit-receive optimization and establish a series of
relationships linking these criteria to the signal-to-interference-plus-noise
ratios of individual data streams and the information theoretic channel
capacity under linear minimum MSE decoding. In particular, we show that
achieving the maximum sum throughput is equivalent to minimizing the product of
MSE matrix determinants (PDetMSE). Since the PDetMSE minimization problem does
not admit a computationally efficient solution, a simplified scalar version of
the problem is considered that minimizes the product of mean squared errors
(PMSE). An iterative algorithm is proposed to solve the PMSE problem, and is
shown to provide near-optimal performance with greatly reduced computational
complexity. Our simulations compare the achievable sum rates under linear
precoding strategies to the sum capacity for the broadcast channel.
|
0811.1520
|
Modeling Microscopic Chemical Sensors in Capillaries
|
cs.RO physics.bio-ph q-bio.TO
|
Nanotechnology-based microscopic robots could provide accurate in vivo
measurement of chemicals in the bloodstream for detailed biological research
and as an aid to medical treatment. Quantitative performance estimates of such
devices require models of how chemicals in the blood diffuse to the devices.
This paper models microscopic robots and red blood cells (erythrocytes) in
capillaries using realistic distorted cell shapes. The models evaluate two
sensing scenarios: robots moving with the cells past a chemical source on the
vessel wall, and robots attached to the wall for longer-term chemical
monitoring. Using axial symmetric geometry with realistic flow speeds and
diffusion coefficients, we compare detection performance with a simpler model
that does not include the cells. The average chemical absorption is
quantitatively similar in both models, indicating the simpler model is an
adequate design guide to sensor performance in capillaries. However,
determining the variation in forces and absorption as cells move requires the
full model.
|
0811.1570
|
Constructions of Subsystem Codes over Finite Fields
|
quant-ph cs.IT math.IT
|
Subsystem codes protect quantum information by encoding it in a tensor factor
of a subspace of the physical state space. Subsystem codes generalize all major
quantum error protection schemes, and therefore are especially versatile. This
paper introduces numerous constructions of subsystem codes. It is shown how one
can derive subsystem codes from classical cyclic codes. Methods to trade the
dimensions of the subsystem and co-subsystem are introduced that maintain or improve
the minimum distance. As a consequence, many optimal subsystem codes are
obtained. Furthermore, it is shown how given subsystem codes can be extended,
shortened, or combined to yield new subsystem codes. These subsystem code
constructions are used to derive tables of upper and lower bounds on the
subsystem code parameters.
|
0811.1618
|
Airport Gate Assignment: New Model and Implementation
|
cs.AI
|
Airport gate assignment is of great importance in airport operations. In this
paper, we study the Airport Gate Assignment Problem (AGAP), propose a new model
and implement the model with the Optimization Programming Language (OPL). With
the objective of minimizing the number of conflicts between any two adjacent
aircraft assigned to the same gate, we build a mathematical model with logical
and binary constraints, which provides an efficient evaluation criterion for
airlines to assess their current gate assignments. To illustrate the
feasibility of the model, we construct experiments with data obtained from
Continental Airlines at Houston George Bush Intercontinental Airport (IAH),
which indicate that our model is both efficient and effective.
Moreover, we interpret experimental results, which further demonstrate that our
proposed model can provide a powerful tool for airline companies to estimate
the efficiency of their current work of gate assignment.
|
0811.1629
|
Stability Bound for Stationary Phi-mixing and Beta-mixing Processes
|
cs.LG
|
Most generalization bounds in learning theory are based on some measure of
the complexity of the hypothesis class used, independently of any algorithm. In
contrast, the notion of algorithmic stability can be used to derive tight
generalization bounds that are tailored to specific learning algorithms by
exploiting their particular properties. However, as in much of learning theory,
existing stability analyses and bounds apply only in the scenario where the
samples are independently and identically distributed. In many machine learning
applications, however, this assumption does not hold. The observations received
by the learning algorithm often have some inherent temporal dependence.
This paper studies the scenario where the observations are drawn from a
stationary phi-mixing or beta-mixing sequence, a widely adopted assumption in
the study of non-i.i.d. processes that implies a dependence between
observations weakening over time. We prove novel and distinct stability-based
generalization bounds for stationary phi-mixing and beta-mixing sequences.
These bounds strictly generalize the bounds given in the i.i.d. case and apply
to all stable learning algorithms, thereby extending the use of
stability-bounds to non-i.i.d. scenarios.
We also illustrate the application of our phi-mixing generalization bounds to
general classes of learning algorithms, including Support Vector Regression,
Kernel Ridge Regression, and Support Vector Machines, and many other kernel
regularization-based and relative entropy-based regularization algorithms.
These novel bounds can thus be viewed as the first theoretical basis for the
use of these algorithms in non-i.i.d. scenarios.
|
0811.1693
|
Protection Schemes for Two Link Failures in Optical Networks
|
cs.IT cs.NI math.IT
|
In this paper we develop network protection schemes against two link failures
in optical networks. The motivation behind this work is the fact that the
majority of all available links in an optical network suffer from single and
double link failures. In the proposed network protection schemes, NPS2-I and
NPS2-II, we deploy network coding and reduced capacity on the working paths to
provide backup protection paths. In addition, we demonstrate the encoding and
decoding aspects of the proposed schemes.
|
0811.1711
|
Artificial Intelligence Techniques for Steam Generator Modelling
|
cs.AI
|
This paper investigates the use of different Artificial Intelligence methods
to predict the values of several continuous variables from a Steam Generator.
The objective was to determine how the different artificial intelligence
methods performed in making predictions on the given dataset. The artificial
intelligence methods evaluated were Neural Networks, Support Vector Machines,
and Adaptive Neuro-Fuzzy Inference Systems. The types of neural networks
investigated were Multi-Layer Perceptrons and Radial Basis Function networks. Bayesian
and committee techniques were applied to these neural networks. Each of the AI
methods considered was simulated in Matlab. The results of the simulations
showed that all the AI methods were capable of predicting the Steam Generator
data reasonably accurately. However, the Adaptive Neuro-Fuzzy Inference system
outperformed the other methods in terms of accuracy and ease of
implementation, while still achieving a fast execution time as well as a
reasonable training time.
|
0811.1770
|
A Class of Transformations that Polarize Symmetric Binary-Input
Memoryless Channels
|
cs.IT math.IT
|
A generalization of Ar\i kan's polar code construction using transformations
of the form $G^{\otimes n}$ where $G$ is an $\ell \times \ell$ matrix is
considered. Necessary and sufficient conditions are given for these
transformations to ensure channel polarization. It is shown that a large class
of such transformations polarize symmetric binary-input memoryless channels.
|
0811.1790
|
Robust Regression and Lasso
|
cs.IT cs.LG math.IT
|
Lasso, or $\ell^1$ regularized least squares, has been explored extensively
for its remarkable sparsity properties. It is shown in this paper that the
solution to Lasso, in addition to its sparsity, has robustness properties: it
is the solution to a robust optimization problem. This has two important
consequences. First, robustness provides a connection of the regularizer to a
physical property, namely, protection from noise. This allows a principled
selection of the regularizer, and in particular, generalizations of Lasso that
also yield convex optimization problems are obtained by considering different
uncertainty sets.
Secondly, robustness can itself be used as an avenue to exploring different
properties of the solution. In particular, it is shown that robustness of the
solution explains why the solution is sparse. The analysis as well as the
specific results obtained differ from standard sparsity results, providing
different geometric intuition. Furthermore, it is shown that the robust
optimization formulation is related to kernel density estimation, and based on
this approach, a proof that Lasso is consistent is given using robustness
directly. Finally, a theorem is presented showing that sparsity and algorithmic
stability contradict each other, and hence that Lasso is not stable.
|
0811.1825
|
A Divergence Formula for Randomness and Dimension
|
cs.CC cs.IT math.IT
|
If $S$ is an infinite sequence over a finite alphabet $\Sigma$ and $\beta$ is
a probability measure on $\Sigma$, then the {\it dimension} of $ S$ with
respect to $\beta$, written $\dim^\beta(S)$, is a constructive version of
Billingsley dimension that coincides with the (constructive Hausdorff)
dimension $\dim(S)$ when $\beta$ is the uniform probability measure. This paper
shows that $\dim^\beta(S)$ and its dual $\Dim^\beta(S)$, the {\it strong
dimension} of $S$ with respect to $\beta$, can be used in conjunction with
randomness to measure the similarity of two probability measures $\alpha$ and
$\beta$ on $\Sigma$. Specifically, we prove that the {\it divergence formula}
\[
\dim^\beta(R) = \Dim^\beta(R) = \frac{\CH(\alpha)}{\CH(\alpha) + \D(\alpha||\beta)}
\]
holds whenever $\alpha$ and $\beta$ are computable, positive
probability measures on $\Sigma$ and $R \in \Sigma^\infty$ is random with
respect to $\alpha$. In this formula, $\CH(\alpha)$ is the Shannon entropy of
$\alpha$, and $\D(\alpha||\beta)$ is the Kullback-Leibler divergence between
$\alpha$ and $\beta$. We also show that the above formula holds for all
sequences $R$ that are $\alpha$-normal (in the sense of Borel) when
$\dim^\beta(R)$ and $\Dim^\beta(R)$ are replaced by the more effective
finite-state dimensions $\dimfs^\beta(R)$ and $\Dimfs^\beta(R)$. In the course
of proving this, we also prove finite-state compression characterizations of
$\dimfs^\beta(S)$ and $\Dimfs^\beta(S)$.
|
0811.1868
|
Necessary Conditions for Discontinuities of Multidimensional Size
Functions
|
cs.CG cs.CV math.AT
|
Some new results about multidimensional Topological Persistence are
presented, proving that the discontinuity points of a k-dimensional size
function are necessarily related to the pseudocritical or special values of the
associated measuring function.
|
0811.1878
|
Action Theory Evolution
|
cs.AI cs.LO
|
Like any other logical theory, domain descriptions in reasoning about actions
may evolve, and thus need revision methods to adequately accommodate new
information about the behavior of actions. The present work is about changing
action domain descriptions in propositional dynamic logic. Its contribution is
threefold: first, we revisit the semantics of action theory contraction
developed in previous work, giving more robust operators that express minimal
change based on a notion of distance between Kripke-models. Second we give
algorithms for syntactical action theory contraction and establish their
correctness w.r.t. our semantics. Finally we state postulates for action theory
contraction and assess the behavior of our operators w.r.t. them. Moreover, we
also address the revision counterpart of action theory change, showing that it
benefits from our semantics for contraction.
|
0811.1885
|
The Expressive Power of Binary Submodular Functions
|
cs.DM cs.AI cs.CV
|
It has previously been an open problem whether all Boolean submodular
functions can be decomposed into a sum of binary submodular functions over a
possibly larger set of variables. This problem has been considered within
several different contexts in computer science, including computer vision,
artificial intelligence, and pseudo-Boolean optimisation. Using a connection
between the expressive power of valued constraints and certain algebraic
properties of functions, we answer this question negatively.
Our results have several corollaries. First, we characterise precisely which
submodular functions of arity 4 can be expressed by binary submodular
functions. Next, we identify a novel class of submodular functions of arbitrary
arities which can be expressed by binary submodular functions, and therefore
minimised efficiently using a so-called expressibility reduction to the Min-Cut
problem. More importantly, our results imply limitations on this kind of
reduction and establish for the first time that it cannot be used in general to
minimise arbitrary submodular functions. Finally, we refute a conjecture of
Promislow and Young on the structure of the extreme rays of the cone of Boolean
submodular functions.
|
0811.2016
|
Land Cover Mapping Using Ensemble Feature Selection Methods
|
cs.LG
|
Ensemble classification is an emerging approach to land cover mapping whereby
the final classification output is a result of a consensus of classifiers.
Intuitively, an ensemble system should consist of base classifiers which are
diverse, i.e., classifiers whose decision boundaries err differently. In this
paper ensemble feature selection is used to impose diversity in ensembles. The
features of the constituent base classifiers for each ensemble were created
through an exhaustive search algorithm using different separability indices.
For each ensemble, the classification accuracy was derived as well as a
diversity measure purported to quantify the in-ensemble diversity. The
correlation between ensemble classification accuracy and diversity measure was
determined to establish the interplay between the two variables. From the
findings of this paper, diversity measures as currently formulated do not
provide an adequate means upon which to constitute ensembles for land cover
mapping.
|
0811.2117
|
Disjunctive Databases for Representing Repairs
|
cs.DB
|
This paper addresses the problem of representing the set of repairs of a
possibly inconsistent database by means of a disjunctive database.
Specifically, the class of denial constraints is considered. We show that,
given a database and a set of denial constraints, there exists a (unique)
disjunctive database, called canonical, which represents the repairs of the
database w.r.t. the constraints and is contained in any other disjunctive
database with the same set of minimal models. We propose an algorithm for
computing the canonical disjunctive database. Finally, we study the size of the
canonical disjunctive database in the presence of functional dependencies for
both repairs and cardinality-based repairs.
|
0811.2201
|
Fast Maximum-Likelihood Decoding of the Golden Code
|
cs.IT math.IT
|
The golden code is a full-rate full-diversity space-time code for two
transmit antennas that has a maximal coding gain. Because each codeword conveys
four information symbols from an M-ary quadrature-amplitude modulation
alphabet, the complexity of an exhaustive-search decoder is proportional to
M^4. In this paper we present a new fast algorithm for maximum-likelihood
decoding of the golden code that has a worst-case complexity of only O(2M^2.5).
We also present an efficient implementation of the fast decoder that exhibits a
low average complexity. Finally, in contrast to the overlaid Alamouti codes,
which lose their fast decodability property on time-varying channels, we show
that the golden code is fast decodable on both quasistatic and rapid
time-varying channels.
|
0811.2250
|
Semantics and Evaluation of Top-k Queries in Probabilistic Databases
|
cs.DB
|
We study here fundamental issues involved in top-k query evaluation in
probabilistic databases. We consider simple probabilistic databases in which
probabilities are associated with individual tuples, and general probabilistic
databases in which, additionally, exclusivity relationships between tuples can
be represented. In contrast to other recent research in this area, we do not
limit ourselves to injective scoring functions. We formulate three intuitive
postulates that the semantics of top-k queries in probabilistic databases
should satisfy, and introduce a new semantics, Global-Topk, that satisfies
those postulates to a large degree. We also show how to evaluate queries under
the Global-Topk semantics. For simple databases we design dynamic-programming
based algorithms, and for general databases we show polynomial-time reductions
to the simple cases. For example, we demonstrate that for a fixed k the time
complexity of top-k query evaluation is as low as linear, under the assumption
that probabilistic databases are simple and scoring functions are injective.
|
0811.2356
|
The List-Decoding Size of Reed-Muller Codes
|
cs.IT cs.DM math.IT
|
In this work we study the list-decoding size of Reed-Muller codes. Given a
received word and a distance parameter, we are interested in bounding the size
of the list of Reed-Muller codewords that are within that distance from the
received word. Previous bounds of Gopalan, Klivans and Zuckerman \cite{GKZ08}
on the list size of Reed-Muller codes apply only up to the minimum distance of
the code. In this work we provide asymptotic bounds for the list-decoding size
of Reed-Muller codes that apply for {\em all} distances. Additionally, we study
the weight distribution of Reed-Muller codes. Prior results of Kasami and
Tokura \cite{KT70} on the structure of Reed-Muller codewords up to twice the
minimum distance imply bounds on the weight distribution of the code that
apply only until twice the minimum distance. We provide accumulative bounds for
the weight distribution of Reed-Muller codes that apply to {\em all} distances.
|
0811.2403
|
Composite CDMA - A statistical mechanics analysis
|
cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT
|
Code Division Multiple Access (CDMA) in which the spreading code assignment
to users contains a random element has recently become a cornerstone of CDMA
research. The random element in the construction is particularly attractive as it
provides robustness and flexibility in utilising multi-access channels, whilst
not making significant sacrifices in terms of transmission power. Random codes
are generated from some ensemble; here we consider the possibility of combining
two standard paradigms, sparsely and densely spread codes, in a single
composite code ensemble. The composite code analysis includes a replica
symmetric calculation of performance in the large system limit, and
investigation of finite systems through a composite belief propagation
algorithm. A variety of codes are examined with a focus on the high
multi-access interference regime. In both the large size limit and finite
systems we demonstrate scenarios in which the composite code has typical
performance exceeding sparse and dense codes at equivalent signal-to-noise
ratio.
|
0811.2518
|
Gaussian Belief Propagation: Theory and Application
|
cs.IT math.IT
|
The canonical problem of solving a system of linear equations arises in
numerous contexts in information theory, communication theory, and related
fields. In this contribution, we develop a solution based upon Gaussian belief
propagation (GaBP) that does not involve direct matrix inversion. The iterative
nature of our approach allows for a distributed message-passing implementation
of the solution algorithm. In the first part of this thesis, we address the
properties of the GaBP solver. We characterize the rate of convergence, enhance
its message-passing efficiency by introducing a broadcast version, and discuss
its relation to classical solution methods, with numerical examples. We present
a new method for forcing the GaBP algorithm to converge to the correct solution
for arbitrary column dependent matrices.
In the second part we give five applications to illustrate the applicability
of the GaBP algorithm to very large computer networks: Peer-to-Peer rating,
linear detection, distributed computation of support vector regression,
efficient computation of Kalman filter and distributed linear programming.
Using extensive simulations on up to 1,024 CPUs in parallel on an IBM BlueGene
supercomputer, we demonstrate the attractiveness and applicability of the GaBP
algorithm, using real network topologies with up to millions of nodes and
hundreds of millions of communication links. We further relate to several other
algorithms and explore their connection to the GaBP algorithm.
|
0811.2525
|
Amendment to "Performance Analysis of the V-BLAST Algorithm: An
Analytical Approach." [1]
|
cs.IT math.IT
|
An analytical technique for the outage and BER analysis of the nx2 V-BLAST
algorithm with the optimal ordering has been presented in [1], including
closed-form exact expressions for average BER and outage probabilities, and
simple high-SNR approximations. The analysis in [1] is based on the following
essential approximations: 1. The SNR was defined in terms of total
after-projection signal and noise powers, and the BER was analyzed based on
their ratio. This corresponds to a non-coherent (power-wise) equal-gain
combining of both the signal and the noise, and it is not optimum since it does
not provide the maximum output SNR. 2. The definition of the total
after-projection noise power at each step ignored the fact that the
after-projection noise vector had correlated components. 3. The after-combining
noises at different steps (and hence the errors) were implicitly assumed to be
independent of each other. Under non-coherent equal-gain combining, that is not
the case. It turns out that the results in [1] also hold true without these
approximations, subject to minor modifications only. The purpose of this note
is to show this and also to extend the average BER results in [1] to the case
of BPSK-modulated V-BLAST with more than two Rx antennas (eq. 18-20).
Additionally, we emphasize that the block error rate is dominated by the first
step BER at the high-SNR mode (eq. 14 and 21).
|
0811.2551
|
Modeling Cultural Dynamics
|
cs.MA cs.AI q-bio.NC
|
EVOC (for EVOlution of Culture) is a computer model of culture that enables
us to investigate how various factors such as barriers to cultural diffusion,
the presence and choice of leaders, or changes in the ratio of innovation to
imitation affect the diversity and effectiveness of ideas. It consists of
neural network based agents that invent ideas for actions, and imitate
neighbors' actions. The model is based on a theory of culture according to
which what evolves through culture is not memes or artifacts, but the internal
models of the world that give rise to them, and they evolve not through a
Darwinian process of competitive exclusion but a Lamarckian process involving
exchange of innovation protocols. EVOC shows an increase in mean fitness of
actions over time, and an increase and then decrease in the diversity of
actions. Diversity of actions is positively correlated with population size and
density, and with barriers between populations. Slowly eroding borders increase
fitness without sacrificing diversity by fostering specialization followed by
sharing of fit actions. Introducing a leader that broadcasts its actions
throughout the population increases the fitness of actions but reduces
diversity of actions. Increasing the number of leaders reduces this effect.
Efforts are underway to simulate the conditions under which an agent
immigrating from one culture to another contributes new ideas while still
fitting in.
|
0811.2609
|
Noise-Resilient Group Testing: Limitations and Constructions
|
cs.DM cs.IT math.CO math.IT
|
We study combinatorial group testing schemes for learning $d$-sparse Boolean
vectors using highly unreliable disjunctive measurements. We consider an
adversarial noise model that only limits the number of false observations, and
show that any noise-resilient scheme in this model can only approximately
reconstruct the sparse vector. On the positive side, we take this barrier to
our advantage and show that approximate reconstruction (within a satisfactory
degree of approximation) allows us to break the information theoretic lower
bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of
$d$-sparse vectors of length $n$ via non-adaptive measurements, by a
multiplicative factor $\tilde{\Omega}(d)$.
Specifically, we give simple randomized constructions of non-adaptive
measurement schemes, with $m=O(d \log n)$ measurements, that allow efficient
reconstruction of $d$-sparse vectors up to $O(d)$ false positives even in the
presence of $\delta m$ false positives and $O(m/d)$ false negatives within the
measurement outcomes, for any constant $\delta < 1$. We show that, information
theoretically, none of these parameters can be substantially improved without
dramatically affecting the others. Furthermore, we obtain several explicit
constructions, in particular one matching the randomized trade-off but using $m
= O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions
that allow fast reconstruction in time $\mathrm{poly}(m)$, which would be sublinear in
$n$ for sufficiently sparse vectors. The main tool used in our construction is
the list-decoding view of randomness condensers and extractors.
|
0811.2637
|
The Design of Compressive Sensing Filter
|
cs.CE cs.IT math.IT
|
In this paper, the design of universal compressive sensing filter based on
normal filters including the lowpass, highpass, bandpass, and bandstop filters
with different cutoff frequencies (or bandwidth) has been developed to enable
signal acquisition with sub-Nyquist sampling. Moreover, to control flexibly the
size and the coherence of the compressive sensing filter, as an example, the
microstrip filter based on defected ground structure (DGS) has been employed to
realize the compressive sensing filter. Of course, the compressive sensing
filter also can be constructed along the identical idea by many other
structures, for example, the man-made electromagnetic materials, the plasma
with different electron density, and so on. With the proposed architecture,
n-dimensional signals that are S-sparse in an arbitrary orthogonal frame can be
exactly reconstructed from on the order of S log(n) measurements with
overwhelming probability, which is consistent with the bounds given by
theoretical analysis.
|
0811.2690
|
A framework for the local information dynamics of distributed
computation in complex systems
|
nlin.CG cs.IT math.IT nlin.AO nlin.PS physics.data-an
|
The nature of distributed computation has often been described in terms of
the component operations of universal computation: information storage,
transfer and modification. We review the first complete framework that
quantifies each of these individual information dynamics on a local scale
within a system, and describes the manner in which they interact to create
non-trivial computation where "the whole is greater than the sum of the parts".
We describe the application of the framework to cellular automata, a simple yet
powerful model of distributed computation. This is an important application,
because the framework is the first to provide quantitative evidence for several
important conjectures about distributed computation in cellular automata: that
blinkers embody information storage, particles are information transfer agents,
and particle collisions are information modification events. The framework is
also shown to contrast the computations conducted by several well-known
cellular automata, highlighting the importance of information coherence in
complex computation. The results reviewed here provide important quantitative
insights into the fundamental nature of distributed computation and the
dynamics of complex systems, as well as impetus for the framework to be applied
to the analysis and design of other systems.
|
0811.2696
|
AG Codes from Polyhedral Divisors
|
math.AG cs.IT math.IT
|
A description of complete normal varieties with lower dimensional torus
action has been given by Altmann, Hausen, and Suess, generalizing the theory of
toric varieties. Considering the case where the acting torus T has codimension
one, we describe T-invariant Weil and Cartier divisors and provide formulae for
calculating global sections, intersection numbers, and Euler characteristics.
As an application, we use divisors on these so-called T-varieties to define new
evaluation codes called T-codes. We find estimates on their minimum distance
using intersection theory. This generalizes the theory of toric codes and
combines it with AG codes on curves. As the simplest application of our general
techniques we look at codes on ruled surfaces coming from decomposable vector
bundles. Already this construction gives codes that are better than the related
product code. Further examples show that we can improve these codes by
constructing more sophisticated T-varieties. These results suggest looking
further for good codes on T-varieties.
|
0811.2841
|
Universally Utility-Maximizing Privacy Mechanisms
|
cs.DB cs.GT
|
A mechanism for releasing information about a statistical database with
sensitive data must resolve a trade-off between utility and privacy. Privacy
can be rigorously quantified using the framework of {\em differential privacy},
which requires that a mechanism's output distribution is nearly the same
whether or not a given database row is included or excluded. The goal of this
paper is strong and general utility guarantees, subject to differential
privacy.
We pursue mechanisms that guarantee near-optimal utility to every potential
user, independent of its side information (modeled as a prior distribution over
query results) and preferences (modeled via a loss function).
Our main result is: for each fixed count query and differential privacy
level, there is a {\em geometric mechanism} $M^*$ -- a discrete variant of the
simple and well-studied Laplace mechanism -- that is {\em simultaneously
expected loss-minimizing} for every possible user, subject to the differential
privacy constraint. This is an extremely strong utility guarantee: {\em every}
potential user $u$, no matter what its side information and preferences,
derives as much utility from $M^*$ as from interacting with a differentially
private mechanism $M_u$ that is optimally tailored to $u$.
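As an illustrative sketch of the geometric mechanism described above (the function name and parameter defaults are ours, not the paper's), a count is released with two-sided geometric noise whose ratio is alpha = exp(-epsilon); such noise can be drawn exactly as the difference of two i.i.d. geometric variables:

```python
import math
import random

def geometric_mechanism(true_count, epsilon, rng=random):
    """Release a count under epsilon-differential privacy by adding
    two-sided geometric ("discrete Laplace") noise: P(noise = z) is
    proportional to alpha**abs(z), with alpha = exp(-epsilon)."""
    alpha = math.exp(-epsilon)

    def geom():
        # Inversion sampling: P(G = k) = (1 - alpha) * alpha**k, k >= 0.
        u = 1.0 - rng.random()  # u in (0, 1]
        return int(math.floor(math.log(u) / math.log(alpha)))

    # The difference of two i.i.d. geometric variables is exactly
    # two-sided geometric with the same ratio alpha.
    return true_count + geom() - geom()
```

The paper's point is that every user, whatever its prior and loss function, can post-process this single discrete output to be near-optimal for its own objective.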
|
0811.2850
|
Codes against Online Adversaries
|
cs.IT math.IT
|
In this work we consider the communication of information in the presence of
an online adversarial jammer. In the setting under study, a sender wishes to
communicate a message to a receiver by transmitting a codeword x=x_1,...,x_n
symbol-by-symbol over a communication channel. The adversarial jammer can view
the transmitted symbols x_i one at a time, and can change up to a p-fraction of
them. However, the decisions of the jammer must be made in an online or causal
manner. More generally, for a delay parameter 0<d<1, we study the scenario in
which the jammer's decision on the corruption of x_i must depend solely on x_j
for j < i - dn. In this work, we initiate the study of codes for online
adversaries, and present a tight characterization of the amount of information
one can transmit in both the 0-delay and, more generally, the d-delay online
setting. We prove tight results for both additive and overwrite jammers when
the transmitted symbols are assumed to be over a sufficiently large field F.
Finally, we extend our results to a jam-or-listen online model, where the
online adversary can either jam a symbol or eavesdrop on it. We again provide a
tight characterization of the achievable rate for several variants of this
model. The rate-regions we prove for each model are informational-theoretic in
nature and hold for computationally unbounded adversaries. The rate regions are
characterized by "simple" piecewise linear functions of p and d. The codes we
construct to attain the optimal rate for each scenario are computationally
efficient.
|
0811.2853
|
Generating Random Networks Without Short Cycles
|
cs.DS cs.IT math.IT
|
Random graph generation is an important tool for studying large complex
networks. Despite abundance of random graph models, constructing models with
application-driven constraints is poorly understood. In order to advance
state-of-the-art in this area, we focus on random graphs without short cycles
as a stylized family of graphs, and propose the RandGraph algorithm for
randomly generating them. For any constant k, when m=O(n^{1+1/[2k(k+3)]}),
RandGraph generates an asymptotically uniform random graph with n vertices, m
edges, and no cycle of length at most k using O(n^2m) operations. We also
characterize the approximation error for finite values of n. To the best of our
knowledge, this is the first polynomial-time algorithm for the problem.
RandGraph works by sequentially adding m edges to an empty graph with n
vertices. Recently, such sequential algorithms have been successful for random
sampling problems. Our main contributions to this line of research include
introducing a new approach for sequentially approximating edge-specific
probabilities at each step of the algorithm, and providing a new method for
analyzing such algorithms.
|
0811.2868
|
Approximate Sparse Decomposition Based on Smoothed L0-Norm
|
cs.MM cs.IT math.IT
|
In this paper, we propose a method to address the problem of source
estimation for Sparse Component Analysis (SCA) in the presence of additive
noise. Our method is a generalization of a recently proposed method (SL0),
which has the advantage of directly minimizing the L0-norm instead of L1-norm,
while being very fast. SL0 is based on minimization of the smoothed L0-norm
subject to As=x. In order to better estimate the source vector for noisy
mixtures, we then suggest removing the constraint As=x by relaxing exact
equality to an approximation (we call our method Smoothed L0-norm Denoising, or
SL0DN). The final result can then be obtained by minimization of a proper
linear combination of the smoothed L0-norm and a cost function for the
approximation. Experimental results demonstrate the significant improvement of
the modified method in noisy cases.
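For context, a minimal sketch of the baseline SL0 iteration that this work generalizes (parameter names and values are our own illustrative choices, not the authors' code): the L0-norm of s is approximated by n minus a sum of Gaussian kernels exp(-s_i^2 / (2 sigma^2)), and sigma is annealed while each gradient step is projected back onto the feasible set As = x:

```python
import numpy as np

def sl0(A, x, sigma_min=0.01, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Sketch of SL0: seek a sparse s with As = x by minimizing a
    smoothed L0-norm, n - sum_i exp(-s_i**2 / (2 sigma**2)), while
    gradually shrinking sigma toward zero."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                        # minimum-L2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = s * np.exp(-s**2 / (2 * sigma**2))
            s = s - mu * delta            # descent step on the smoothed L0-norm
            s = s - A_pinv @ (A @ s - x)  # project back onto {s : As = x}
        sigma *= sigma_decrease
    return s
```

The noisy variant proposed here (SL0DN) drops the hard projection and instead minimizes a weighted combination of the smoothed L0-norm and a data-fidelity cost for the approximation As ≈ x.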
|
0811.2904
|
Secondary Indexing in One Dimension: Beyond B-trees and Bitmap Indexes
|
cs.DB cs.DS
|
Let S be a finite, ordered alphabet, and let x = x_1 x_2 ... x_n be a string
over S. A "secondary index" for x answers alphabet range queries of the form:
Given a range [a_l;a_r] over S, return the set I_{[a_l;a_r]} = {i | x_i \in
[a_l;a_r]}. Secondary indexes are heavily used in relational databases and
scientific data analysis. It is well-known that the obvious solution, storing a
dictionary for the position set associated with each character, does not always
give optimal query time. In this paper we give the first theoretically optimal
data structure for the secondary indexing problem. In the I/O model, the amount
of data read when answering a query is within a constant factor of the minimum
space needed to represent I_{[a_l;a_r]}, assuming that the size of internal
memory is (|S| log n)^{delta} blocks, for some constant delta > 0. The space
usage of the data structure is O(n log |S|) bits in the worst case, and we
further show how to bound the size of the data structure in terms of the 0-th
order entropy of x. We show how to support updates achieving various time-space
trade-offs.
We also consider an approximate version of the basic secondary indexing
problem where a query reports a superset of I_{[a_l;a_r]} containing each
element not in I_{[a_l;a_r]} with probability at most epsilon, where epsilon >
0 is the false positive probability. For this problem the amount of data that
needs to be read by the query algorithm is reduced to O(|I_{[a_l;a_r]}|
log(1/epsilon)) bits.
|
0811.3055
|
Exact phase transition of backtrack-free search with implications on the
power of greedy algorithms
|
cs.AI cs.DM cs.DS
|
Backtracking is a basic strategy to solve constraint satisfaction problems
(CSPs). A satisfiable CSP instance is backtrack-free if a solution can be found
without encountering any dead-end during a backtracking search, implying that
the instance is easy to solve. We prove an exact phase transition of
backtrack-free search in some random CSPs, namely in Model RB and in Model RD.
This is the first time an exact phase transition of backtrack-free search can
be identified on some random CSPs. Our technical results also have interesting
implications on the power of greedy algorithms, on the width of random
hypergraphs and on the exact satisfiability threshold of random CSPs.
|
0811.3301
|
Faster Retrieval with a Two-Pass Dynamic-Time-Warping Lower Bound
|
cs.DB cs.CV
|
The Dynamic Time Warping (DTW) is a popular similarity measure between time
series. The DTW fails to satisfy the triangle inequality and its computation
requires quadratic time. Hence, to find closest neighbors quickly, we use
bounding techniques. We can avoid most DTW computations with an inexpensive
lower bound (LB Keogh). We compare LB Keogh with a tighter lower bound (LB
Improved). We find that LB Improved-based search is faster. As an example, our
approach is 2-3 times faster on random-walk and shape time series.
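As an illustrative sketch of the LB Keogh bound discussed above (our own minimal version, not the authors' implementation): an upper/lower envelope is built around one series from a warping window of radius r, and only the points of the other series that escape the envelope are charged:

```python
def lb_keogh(query, candidate, r):
    """LB Keogh lower bound on the (squared) DTW distance for a
    warping window of radius r: points of `query` outside the
    upper/lower envelope of `candidate` contribute their squared
    distance to the nearest envelope boundary."""
    n = len(candidate)
    total = 0.0
    for i, q in enumerate(query):
        window = candidate[max(0, i - r):min(n, i + r + 1)]
        upper, lower = max(window), min(window)
        if q > upper:
            total += (q - upper) ** 2
        elif q < lower:
            total += (q - lower) ** 2
    return total
```

In nearest-neighbor search, the expensive quadratic-time DTW is computed only when this cheap bound falls below the best distance found so far.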
|
0811.3328
|
chi2TeX: Semi-automatic Translation from ChiWriter to LaTeX
|
cs.SE cs.CV
|
Semi-automatic translation of a math-filled book from the obsolete ChiWriter
format to LaTeX: is it possible? We propose a criterion for deciding whether to
use automatic or manual translation, and provide illustrations.
|
0811.3475
|
Robust Network Coding in the Presence of Untrusted Nodes
|
cs.IT cs.NI math.IT
|
While network coding can be an efficient means of information dissemination
in networks, it is highly susceptible to "pollution attacks," as the injection
of even a single erroneous packet has the potential to corrupt each and every
packet received by a given destination. Even when suitable error-control coding
is applied, an adversary can, in many interesting practical situations,
overwhelm the error-correcting capability of the code. To limit the power of
potential adversaries, a broadcast transformation is introduced, in which nodes
are limited to just a single (broadcast) transmission per generation. Under
this broadcast transformation, the multicast capacity of a network is changed
(in general reduced) from the number of edge-disjoint paths between source and
sink to the number of internally-disjoint paths. Exploiting this fact, we
propose a family of networks whose capacity is largely unaffected by a
broadcast transformation. This results in a significant achievable transmission
rate for such networks, even in the presence of adversaries.
|
0811.3476
|
Error correcting code using tree-like multilayer perceptron
|
cond-mat.stat-mech cond-mat.dis-nn cs.IT math.IT
|
An error correcting code using a tree-like multilayer perceptron is proposed.
An original message $\boldsymbol{s}^0$ is encoded into a codeword
$\boldsymbol{y}_0$ using a tree-like committee machine (committee tree) or a
tree-like parity machine (parity tree). Based on these architectures, several
schemes featuring monotonic or non-monotonic units are introduced. The codeword
$\boldsymbol{y}_0$ is
then transmitted via a Binary Asymmetric Channel (BAC) where it is corrupted by
noise. The analytical performance of these schemes is investigated using the
replica method of statistical mechanics. Under some specific conditions, some
of the proposed schemes are shown to saturate the Shannon bound at the infinite
codeword length limit. The influence of the monotonicity of the units on the
performance is also discussed.
|
0811.3536
|
Stiffness Analysis of 3-Axis Machine Tools with an Overconstrained
Parallel Architecture
|
cs.RO
|
The paper presents a new stiffness modelling method for overconstrained
parallel manipulators, which is applied to 3-d.o.f. translational mechanisms.
It is based on a multidimensional lumped-parameter model that replaces the link
flexibility by localized 6-d.o.f. virtual springs. In contrast to other works,
the method includes a FEA-based link stiffness evaluation and employs a new
solution strategy of the kinetostatic equations, which allows computing the
stiffness matrix for the overconstrained architectures and for the singular
manipulator postures. The advantages of the developed technique are confirmed
by application examples, which deal with comparative stiffness analysis of two
translational parallel manipulators.
|
0811.3585
|
The Capacity of Ad hoc Networks under Random Packet Losses
|
cs.IT cs.NI math.IT
|
We consider the problem of determining asymptotic bounds on the capacity of a
random ad hoc network. Previous approaches assumed a link layer model in which
if a transmitter-receiver pair can communicate with each other, i.e., the
Signal to Interference and Noise Ratio (SINR) is above a certain threshold,
then every transmitted packet is received error-free by the receiver.
Using this model, the per node capacity of the network was shown to be
$\Theta(\frac{1}{\sqrt{n\log{n}}})$. In reality, for any finite link SINR,
there is a non-zero probability of erroneous reception of the packet. We show
that in a large network, as the packet travels an asymptotically large number
of hops from source to destination, the cumulative impact of packet losses over
intermediate links results in a per-node throughput of only $O(\frac{1}{n})$.
We then propose a new scheduling scheme to counter this effect. The proposed
scheme provides tight guarantees on end-to-end packet loss probability, and
improves the per-node throughput to
$\Omega\bigl(\frac{1}{\sqrt{n}\,(\log n)^{\frac{\alpha+2}{2(\alpha-2)}}}\bigr)$,
where $\alpha>2$ is the path loss exponent.
|
0811.3617
|
Distributed Scalar Quantization for Computing: High-Resolution Analysis
and Extensions
|
cs.IT math.IT
|
Communication of quantized information is frequently followed by a
computation. We consider situations of \emph{distributed functional scalar
quantization}: distributed scalar quantization of (possibly correlated) sources
followed by centralized computation of a function. Under smoothness conditions
on the sources and function, companding scalar quantizer designs are developed
to minimize mean-squared error (MSE) of the computed function as the quantizer
resolution is allowed to grow. Striking improvements over quantizers designed
without consideration of the function are possible and are larger in the
entropy-constrained setting than in the fixed-rate setting. As extensions to
the basic analysis, we characterize a large class of functions for which
regular quantization suffices, consider certain functions for which asymptotic
optimality is achieved without arbitrarily fine quantization, and allow limited
collaboration between source encoders. In the entropy-constrained setting, a
single bit per sample communicated between encoders can have an
arbitrarily-large effect on functional distortion. In contrast, such
communication has very little effect in the fixed-rate setting.
|
0811.3691
|
Temporal Support of Regular Expressions in Sequential Pattern Mining
|
cs.DB
|
Classic algorithms for sequential pattern discovery return all frequent
sequences present in a database but, in general, only a few of them are
interesting for the user. Languages based on regular expressions (RE) have been
proposed to restrict frequent sequences to the ones that satisfy user-specified
constraints. Although the support of a sequence is computed as the number of
data-sequences satisfying a pattern with respect to the total number of
data-sequences in the database, once regular expressions come into play, new
approaches to the concept of support are needed. For example, users may be
interested in computing the support of the RE as a whole, in addition to the
one of a particular pattern. Also, when the items are frequently updated, the
traditional way of counting support in sequential pattern mining may lead to
incorrect (or, at least incomplete), conclusions. The problem gets more
involved if we are interested in categorical sequential patterns. In light of
the above, in this paper we propose to revise the classic notion of support in
sequential pattern mining, introducing the concept of temporal support of
regular expressions, intuitively defined as the number of sequences satisfying
a target pattern, out of the total number of sequences that could have possibly
matched such pattern, where the pattern is defined as a RE over complex items
(i.e., not only item identifiers, but also attributes and functions).
|
0811.3777
|
The Relationship between Tsallis Statistics, the Fourier Transform, and
Nonlinear Coupling
|
cs.IT math.IT math.PR
|
Tsallis statistics (or q-statistics) in nonextensive statistical mechanics is
a one-parameter description of correlated states. In this paper we use a
translated entropic index: $1 - q \to q$. The essence of this translation is
to improve the mathematical symmetry of the q-algebra and make q directly
proportional to the nonlinear coupling. A conjugate transformation is defined
$\hat q \equiv \frac{-2q}{2+q}$, which provides a dual mapping between
the heavy-tail q-Gaussian distributions, whose translated q parameter is
between $-2 < q < 0$, and the compact-support q-Gaussians, between
$0 < q < \infty$. This conjugate transformation is used to extend the definition of
the q-Fourier transform to the domain of compact support. A conjugate q-Fourier
transform is proposed which transforms a q-Gaussian into a conjugate $\hat q$
-Gaussian, which has the same exponential decay as the Fourier transform of a
power-law function. The nonlinear statistical coupling is defined such that the
conjugate pair of q-Gaussians have equal strength but either couple
(compact-support) or decouple (heavy-tail) the statistical states. Many of the
nonextensive entropy applications can be shown to have physical parameters
proportional to the nonlinear statistical coupling.
|
0811.3887
|
Transmit Diversity v. Spatial Multiplexing in Modern MIMO Systems
|
cs.IT math.IT
|
A contemporary perspective on the tradeoff between transmit antenna diversity
and spatial multiplexing is provided. It is argued that, in the context of most
modern wireless systems and for the operating points of interest, transmission
techniques that utilize all available spatial degrees of freedom for
multiplexing outperform techniques that explicitly sacrifice spatial
multiplexing for diversity. In the context of such systems, therefore, there
essentially is no decision to be made between transmit antenna diversity and
spatial multiplexing in MIMO communication. Reaching this conclusion, however,
requires that the channel and some key system features be adequately modeled
and that suitable performance metrics be adopted; failure to do so may bring
about starkly different conclusions. As a specific example, this contrast is
illustrated using the 3GPP Long-Term Evolution system design.
|
0811.4033
|
Computation of Grobner basis for systematic encoding of generalized
quasi-cyclic codes
|
cs.IT cs.DM math.AC math.IT
|
Generalized quasi-cyclic (GQC) codes form a wide and useful class of linear
codes that includes, among others, quasi-cyclic codes, finite-geometry (FG)
low-density parity-check (LDPC) codes, and Hermitian codes. Although it is known
that the systematic encoding of GQC codes is equivalent to the division
algorithm in the theory of Grobner basis of modules, there has been no
algorithm that computes Grobner basis for all types of GQC codes. In this
paper, we propose two algorithms to compute Grobner basis for GQC codes from
their parity check matrices: echelon canonical form algorithm and transpose
algorithm. Both algorithms require sufficiently small number of finite-field
operations with the order of the third power of code-length. Each algorithm has
its own characteristic; the first algorithm is composed of elementary methods,
and the second algorithm is based on a novel formula and is faster than the
first one for high-rate codes. Moreover, we show that a serial-in serial-out
encoder architecture for FG LDPC codes is composed of linear feedback shift
registers whose total size is linear in the code length; encoding a binary
codeword of length n takes fewer than 2n adders and 2n memory elements.
Keywords: automorphism group, Buchberger's algorithm, division algorithm,
circulant matrix, finite geometry low density parity check (LDPC) codes.
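The abstract's equivalence between systematic encoding and a division algorithm can be illustrated in its simplest special case. The sketch below is hypothetical and much simpler than the GQC/module setting of the paper: it performs systematic encoding of a binary cyclic code (one generator polynomial) by polynomial division over GF(2), which is the one-dimensional analogue of the Grobner-basis division the abstract refers to.

```python
# Hypothetical illustration: systematic encoding via the division algorithm
# for a cyclic code (the single-generator special case; the paper treats the
# general GQC/module setting). Polynomials are encoded as int bitmasks.

def poly_mod(dividend: int, generator: int) -> int:
    """Remainder of GF(2) polynomial division."""
    glen = generator.bit_length()
    while dividend.bit_length() >= glen:
        dividend ^= generator << (dividend.bit_length() - glen)
    return dividend

def systematic_encode(msg: int, generator: int, n: int, k: int) -> int:
    """Systematic codeword: message in the top k positions, parity below."""
    shifted = msg << (n - k)
    return shifted | poly_mod(shifted, generator)

# (7,4) cyclic Hamming code with generator g(x) = x^3 + x + 1 (0b1011).
cw = systematic_encode(0b1101, 0b1011, n=7, k=4)
assert poly_mod(cw, 0b1011) == 0   # every codeword is divisible by g(x)
```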
|
0811.4139
|
Artin automorphisms, Cyclotomic function fields, and Folded
list-decodable codes
|
math.NT cs.IT math.IT
|
Algebraic codes that achieve list decoding capacity were recently constructed
by a careful ``folding'' of the Reed-Solomon code. The ``low-degree'' nature of
this folding operation was crucial to the list decoding algorithm. We show how
such folding schemes conducive to list decoding arise out of the
Artin-Frobenius automorphism at primes in Galois extensions. Using this
approach, we construct new folded algebraic-geometric codes for list decoding
based on cyclotomic function fields with a cyclic Galois group. Such function
fields are obtained by adjoining torsion points of the Carlitz action of an
irreducible $M \in \mathbb{F}_q[T]$. The Reed-Solomon case corresponds to the simplest
such extension (corresponding to the case $M=T$). In the general case, we need
to descend to the fixed field of a suitable Galois subgroup in order to ensure
the existence of many degree one places that can be used for encoding.
Our methods shed new light on algebraic codes and their list decoding, and
lead to new codes achieving list decoding capacity. Quantitatively, these codes
provide list decoding (and list recovery/soft decoding) guarantees similar to
folded Reed-Solomon codes but with an alphabet size that is only
polylogarithmic in the block length. In comparison, for folded RS codes, the
alphabet size is a large polynomial in the block length. This has applications
to fully explicit (with no brute-force search) binary concatenated codes for
list decoding up to the Zyablov radius.
|
0811.4162
|
Optimal Encoding Schemes for Several Classes of Discrete Degraded
Broadcast Channels
|
cs.IT math.IT
|
Consider a memoryless degraded broadcast channel (DBC) in which the channel
output is a single-letter function of the channel input and the channel noise.
As examples, for the Gaussian broadcast channel (BC) this single-letter
function is regular Euclidean addition and for the binary-symmetric BC this
single-letter function is Galois-Field-two addition. This paper identifies
several classes of discrete memoryless DBCs for which a relatively simple
encoding scheme, which we call natural encoding, achieves capacity. Natural
Encoding (NE) combines symbols from independent codebooks (one for each
receiver) using the same single-letter function that adds distortion to the
channel. The alphabet size of each NE codebook is bounded by that of the
channel input.
Inspired by Witsenhausen and Wyner, this paper defines the conditional
entropy bound function $F^*$, studies its properties, and applies them to show
that NE achieves the boundary of the capacity region for the multi-receiver
broadcast Z channel. Then, this paper defines the input-symmetric DBC,
introduces permutation encoding for the input-symmetric DBC, and proves its
optimality. Because it is a special case of permutation encoding, NE is
capacity achieving for the two-receiver group-operation DBC. Combining the
broadcast Z channel and group-operation DBC results yields a proof that NE is
also optimal for the discrete multiplication DBC. Along the way, the paper also
provides explicit parametric expressions for the two-receiver binary-symmetric
DBC and broadcast Z channel.
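The Natural Encoding idea described above is easy to sketch for the binary-symmetric case, where the channel's single-letter function is GF(2) addition. The snippet below is illustrative only: the codebooks are random placeholders, not the capacity-achieving ones constructed in the paper.

```python
import random

# Illustrative sketch of Natural Encoding (NE) for a binary-symmetric
# broadcast channel: symbols from independent per-receiver codebooks are
# combined with the channel's own single-letter function, GF(2) addition.
# Random codebooks stand in for the paper's capacity-achieving ones.

n = 8  # block length
rng = random.Random(0)
codebook1 = [[rng.randint(0, 1) for _ in range(n)] for _ in range(4)]
codebook2 = [[rng.randint(0, 1) for _ in range(n)] for _ in range(4)]

def natural_encode(c1, c2):
    """Combine per-receiver codewords with the channel's operation (XOR)."""
    return [a ^ b for a, b in zip(c1, c2)]

x = natural_encode(codebook1[2], codebook2[1])
# The NE output alphabet never exceeds the channel input alphabet:
assert all(bit in (0, 1) for bit in x)
```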
|
0811.4163
|
Packing and Covering Properties of Subspace Codes for Error Control in
Random Linear Network Coding
|
cs.IT math.IT
|
Codes in the projective space and codes in the Grassmannian over a finite
field - referred to as subspace codes and constant-dimension codes (CDCs),
respectively - have been proposed for error control in random linear network
coding. For subspace codes and CDCs, a subspace metric was introduced to
correct both errors and erasures, and an injection metric was proposed to
correct adversarial errors. In this paper, we investigate the packing and
covering properties of subspace codes with both metrics. We first determine
some fundamental geometric properties of the projective space with both
metrics. Using these properties, we then derive bounds on the cardinalities of
packing and covering subspace codes, and determine the asymptotic rates of
optimal packing and optimal covering subspace codes with both metrics. Our
results not only provide guiding principles for the code design for error
control in random linear network coding, but also illustrate the difference
between the two metrics from a geometric perspective. In particular, our
results show that optimal packing CDCs are optimal packing subspace codes up to
a scalar for both metrics if and only if their dimension is half of their
length (up to rounding). In this case, CDCs suffer from only limited rate loss
as opposed to subspace codes with the same minimum distance. We also show that
optimal covering CDCs can be used to construct asymptotically optimal covering
subspace codes with the injection metric only.
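The two metrics compared in this abstract have standard closed forms: for subspaces $U, V$, the subspace metric is $\dim U + \dim V - 2\dim(U \cap V)$ and the injection metric is $\max(\dim U, \dim V) - \dim(U \cap V)$. The sketch below (standard definitions, not code from the paper) computes both over GF(2) using $\dim(U \cap V) = \dim U + \dim V - \dim(U + V)$.

```python
# Subspace and injection metrics between row spaces of binary matrices,
# each row an int bitmask over GF(2).

def gf2_rank(rows):
    """Rank over GF(2) by Gaussian elimination keyed on the pivot bit."""
    pivots = {}  # highest set bit -> basis row
    for r in rows:
        while r:
            top = r.bit_length() - 1
            if top not in pivots:
                pivots[top] = r
                break
            r ^= pivots[top]
    return len(pivots)

def subspace_metrics(U, V):
    dU, dV = gf2_rank(U), gf2_rank(V)
    d_sum = gf2_rank(list(U) + list(V))   # dim(U + V)
    d_int = dU + dV - d_sum               # dim(U intersect V)
    return dU + dV - 2 * d_int, max(dU, dV) - d_int  # (subspace, injection)

# Two 2-dimensional subspaces of GF(2)^4 sharing a 1-dimensional intersection:
assert subspace_metrics([0b1000, 0b0100], [0b0100, 0b0010]) == (2, 1)
```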
|
0811.4186
|
Search Result Clustering via Randomized Partitioning of Query-Induced
Subgraphs
|
cs.IR cs.DS
|
In this paper, we present an approach to search result clustering based on
partitioning of the underlying link graph. We define the notion of a
"query-induced subgraph" and formulate search result clustering as the problem
of efficiently partitioning a given subgraph into topic-related clusters. We
also propose a novel algorithm for approximative partitioning of such a graph,
which yields cluster quality comparable to that obtained by deterministic
algorithms while requiring less computation time, making it suitable for
practical implementations. Finally, we present a practical clustering search
engine developed as part of this research and use it to evaluate the
real-world performance of the proposed concepts.
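The notion of a query-induced subgraph can be made concrete: restrict the full link graph to the pages returned for a query, then partition that subgraph. The sketch below is a hedged illustration; the partitioning pass is a generic randomized label-propagation step, a stand-in for (not a reproduction of) the paper's approximative partitioning algorithm.

```python
import random

def query_induced_subgraph(link_graph, results):
    """Restrict the web link graph to the pages returned for a query."""
    results = set(results)
    return {u: {v for v in link_graph.get(u, ()) if v in results}
            for u in results}

def randomized_partition(graph, rounds=5, seed=0):
    """Generic randomized label propagation (stand-in partitioner):
    each page starts in its own cluster and repeatedly adopts the label
    of a randomly chosen neighbour, so labels spread only along links."""
    rng = random.Random(seed)
    label = {u: u for u in graph}
    for _ in range(rounds):
        for u in rng.sample(sorted(graph), len(graph)):  # random visit order
            nbrs = graph[u]
            if nbrs:
                label[u] = label[rng.choice(sorted(nbrs))]
    return label
```

Because labels only travel along edges, pages in disconnected parts of the query-induced subgraph can never end up in the same cluster, which matches the intuition of topic-related link clusters.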
|
0811.4191
|
Performance of Hybrid-ARQ in Block-Fading Channels: A Fixed Outage
Probability Analysis
|
cs.IT math.IT
|
This paper studies the performance of hybrid-ARQ (automatic repeat request)
in Rayleigh block fading channels. The long-term average transmitted rate is
analyzed in a fast-fading scenario where the transmitter only has knowledge of
channel statistics, and, consistent with contemporary wireless systems, rate
adaptation is performed such that a target outage probability (after a maximum
number of H-ARQ rounds) is maintained. H-ARQ allows for early termination once
decoding is possible, and thus is a coarse, and implicit, mechanism for rate
adaptation to the instantaneous channel quality. Although the rate with H-ARQ
is not as large as the ergodic capacity, which is achievable with rate
adaptation to the instantaneous channel conditions, even a few rounds of H-ARQ
make the gap to ergodic capacity reasonably small for operating points of
interest. Furthermore, the rate with H-ARQ provides a significant advantage
compared to systems that do not use H-ARQ and only adapt rate based on the
channel statistics.
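The mechanism described above lends itself to a simple Monte Carlo check. The sketch below is illustrative, not the paper's analysis: it models hybrid-ARQ with incremental redundancy over Rayleigh block fading, where decoding succeeds once the accumulated mutual information reaches the initial rate R, and estimates the long-term average rate and the outage probability after the maximum number of rounds.

```python
import math
import random

def harq_throughput(R, snr_db, max_rounds, trials=20000, seed=1):
    """Monte Carlo estimate of (throughput, outage probability) for
    incremental-redundancy H-ARQ over Rayleigh block fading."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    bits, channel_uses, outages = 0.0, 0, 0
    for _ in range(trials):
        acc = 0.0
        for m in range(1, max_rounds + 1):
            h = rng.expovariate(1.0)           # Rayleigh fading power gain
            acc += math.log2(1 + snr * h)      # accumulated mutual information
            if acc >= R:                       # early termination on success
                bits += R
                break
        else:
            outages += 1
        channel_uses += m                      # rounds actually used
    return bits / channel_uses, outages / trials
```

Running this with, say, R = 2 bits/use at 10 dB shows the outage probability dropping sharply as the maximum number of H-ARQ rounds grows, which is the coarse, implicit rate adaptation the abstract describes.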
|