| id | title | categories | abstract |
|---|---|---|---|
0806.1797
|
A new generalization of the proportional conflict redistribution rule
stable in terms of decision
|
cs.AI
|
In this chapter, we present and discuss a new generalized proportional
conflict redistribution rule. The Dezert-Smarandache extension of the
Dempster-Shafer theory has relaunched studies on combination rules,
especially for the management of conflict. Many combination rules have been
proposed in the last few years. We study here different combination rules and
compare them in terms of decision on a didactic example and on generated data.
Indeed, in real applications, we need a reliable decision and it is the final
results that matter. This chapter shows that a fine proportional conflict
redistribution rule must be preferred for the combination in the belief
function theory.
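The flavor of proportional conflict redistribution can be sketched in a few lines. The following toy PCR5-style combination of two sources (hypothetical mass values; a minimal two-source sketch, not the generalized rule discussed in the chapter) redistributes each partial conflict proportionally to the two masses that produced it:

```python
from itertools import product

def pcr5(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass).
    Agreeing pairs go to their intersection (conjunctive rule); each
    conflicting pair's product is redistributed back to the two focal
    elements, proportionally to their masses."""
    out = {}
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:
            out[inter] = out.get(inter, 0.0) + mx * my
        elif mx + my > 0:
            out[x] = out.get(x, 0.0) + mx ** 2 * my / (mx + my)
            out[y] = out.get(y, 0.0) + my ** 2 * mx / (mx + my)
    return out

A, B = frozenset("A"), frozenset("B")
m = pcr5({A: 0.6, B: 0.4}, {A: 0.2, B: 0.8})
```

With these numbers the combined mass still sums to one, and the conflict between the two sources is split back onto A and B in proportion to their responsibility for it.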
|
0806.1798
|
Human expert fusion for image classification
|
cs.CV cs.AI
|
In image classification, merging the opinion of several human experts is very
important for different tasks such as the evaluation or the training. Indeed,
the ground truth is rarely known before the scene imaging. We propose here
different models in order to fuse the information given by two or more
experts. The considered unit for the classification, a small tile of the image,
can contain one or more kinds of the considered classes given by the experts. A
second problem that we have to take into account is the degree of certainty
the expert has for each pixel of the tile. In order to solve these problems, we
define five models in the context of the Dempster-Shafer Theory and in the
context of the Dezert-Smarandache Theory, and we study the possible decisions
with these models.
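As a minimal illustration of this kind of fusion (a sketch with made-up seabed classes and masses, not one of the five models of the paper), Dempster's rule combines two experts' opinions over a frame {rock, sand} and normalizes away their conflict:

```python
def dempster(m1, m2):
    """Dempster's rule: conjunctive combination of two mass functions
    (dicts mapping frozensets to masses), normalized by 1 - conflict."""
    combined, conflict = {}, 0.0
    for x, mx in m1.items():
        for y, my in m2.items():
            inter = x & y
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mx * my
            else:
                conflict += mx * my
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

ROCK, SAND = frozenset({"rock"}), frozenset({"sand"})
THETA = ROCK | SAND  # mass on the whole frame expresses ignorance
expert1 = {ROCK: 0.7, THETA: 0.3}
expert2 = {ROCK: 0.5, SAND: 0.3, THETA: 0.2}
fused = dempster(expert1, expert2)
```

Mass that either expert leaves on the whole frame models their per-tile uncertainty; the fused result concentrates belief on "rock" here.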
|
0806.1802
|
A new combination rule distributing the conflict - Applications in Sonar
imagery and Radar target classification
|
cs.AI
|
In recent years, there have been many studies on the problem of the conflict
arising from information combination, especially in evidence theory. The
solutions for managing the conflict can be summarised in three different
approaches: first, we can try to suppress or reduce the conflict before the
combination step; secondly, we can manage the conflict so that it has no
influence in the combination step, and then take the conflict into account in
the decision step; thirdly, we can take the conflict into account in the
combination step. The first approach is certainly the best, but it is not
always feasible. It is difficult to say which of the second and third
approaches is better. However, what matters most is the results produced in
applications. We propose here a new combination rule that distributes the
conflict proportionally over the elements that generate it. We compare these
different combination rules on real data in sonar imagery and radar target
classification.
|
0806.1806
|
Perfect Derived Propagators
|
cs.AI
|
When implementing a propagator for a constraint, one must decide about
variants: When implementing min, should one also implement max? Should one
implement linear equations both with and without coefficients? Constraint
variants are ubiquitous: implementing them requires considerable (if not
prohibitive) effort and decreases maintainability, but will deliver better
performance.
This paper shows how to use variable views, previously introduced for an
implementation architecture, to derive perfect propagator variants. A model for
views and derived propagators is introduced. Derived propagators are proved to
be indeed perfect in that they inherit essential properties such as correctness
and domain and bounds consistency. Techniques for systematically deriving
propagators such as transformation, generalization, specialization, and
channeling are developed for several variable domains. We evaluate the massive
impact of derived propagators. Without derived propagators, Gecode would
require 140000 rather than 40000 lines of code for propagators.
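The core idea can be sketched with a toy bounds-reasoning model (hypothetical classes, not Gecode's actual C++ architecture): a minus view presents a variable as its negation, so a min propagator applied through minus views becomes a max propagator with no extra propagator code.

```python
class Var:
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def set_lo(self, v): self.lo = max(self.lo, v)
    def set_hi(self, v): self.hi = min(self.hi, v)

class Minus:
    """Variable view presenting x as -x; offers the same interface as Var,
    so any propagator written against that interface works through it."""
    def __init__(self, x): self.x = x
    @property
    def lo(self): return -self.x.hi
    @property
    def hi(self): return -self.x.lo
    def set_lo(self, v): self.x.set_hi(-v)
    def set_hi(self, v): self.x.set_lo(-v)

def prop_min(z, xs):
    """Bounds propagator for z = min(xs)."""
    z.set_hi(min(x.hi for x in xs))
    z.set_lo(min(x.lo for x in xs))
    for x in xs:
        x.set_lo(z.lo)

def prop_max(z, xs):
    """Derived propagator: max(xs) = -min(-xs), obtained through views."""
    prop_min(Minus(z), [Minus(x) for x in xs])

z, a, b = Var(0, 10), Var(2, 7), Var(4, 9)
prop_max(z, [a, b])  # narrows z to [4, 9]
```

The derived propagator inherits the bounds-consistency of `prop_min` for free, which is the sense in which such derivations are "perfect".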
|
0806.1816
|
Cardinality heterogeneities in Web service composition: Issues and
solutions
|
cs.SE cs.DB
|
Data exchanges between Web services engaged in a composition raise several
heterogeneities. In this paper, we address the problem of data cardinality
heterogeneity in a composition. Firstly, we build a theoretical framework to
describe different aspects of Web services that relate to data cardinality, and
secondly, we solve this problem by developing a solution for cardinality
mediation based on constraint logic programming.
|
0806.1819
|
A Low-Complexity, Full-Rate, Full-Diversity 2 X 2 STBC with Golden
Code's Coding Gain
|
cs.IT math.IT
|
This paper presents a low-ML-decoding-complexity, full-rate, full-diversity
space-time block code (STBC) for a 2 transmit antenna, 2 receive antenna
multiple-input multiple-output (MIMO) system, with coding gain equal to that of
the best and well known Golden code for any QAM constellation. Recently, two
codes have been proposed (by Paredes, Gershman and Alkhansari and by Sezginer
and Sari), which enjoy a lower decoding complexity relative to the Golden code,
but have a smaller coding gain. The $2\times 2$ STBC presented in this paper
has lower decoding complexity for non-square QAM constellations than the
Golden code, while having the same decoding complexity for square
QAM constellations. Compared with the Paredes-Gershman-Alkhansari and
Sezginer-Sari codes, the proposed code has the same decoding complexity for
non-rectangular QAM constellations. Simulation results, which compare the
codeword error rate (CER) performance, are presented.
|
0806.1834
|
A Low-decoding-complexity, Large coding Gain, Full-rate, Full-diversity
STBC for 4 X 2 MIMO System
|
cs.IT math.IT
|
This paper proposes a low decoding complexity, full-diversity and full-rate
space-time block code (STBC) for 4 transmit and 2 receive ($4\times 2$)
multiple-input multiple-output (MIMO) systems. For such systems, the best code
known is the DjABBA code and recently, Biglieri, Hong and Viterbo have proposed
another STBC (BHV code) which has lower decoding complexity than DjABBA but
does not have full-diversity like the DjABBA code. The code proposed in this
paper has the same decoding complexity as the BHV code for square QAM
constellations but has full-diversity as well. Compared to the best code in the
DjABBA family of codes, our code has lower decoding complexity, a better coding
gain and hence a better error performance as well. Simulation results
confirming these claims are presented.
|
0806.1918
|
Analysis of Social Voting Patterns on Digg
|
cs.CY cs.IR
|
The social Web is transforming the way information is created and
distributed. Blog authoring tools enable users to publish content, while sites
such as Digg and Del.icio.us are used to distribute content to a wider
audience. With content fast becoming a commodity, interest in using social
networks to promote and find content has grown, both on the side of content
producers (viral marketing) and consumers (recommendation). Here we study the
role of social networks in promoting content on Digg, a social news aggregator
that allows users to submit links to and vote on news stories. Digg's goal is
to feature the most interesting stories on its front page, and it aggregates
opinions of its many users to identify them. Like other social networking
sites, Digg allows users to designate other users as ``friends'' and see what
stories they found interesting. We studied the spread of interest in news
stories submitted to Digg in June 2006. Our results suggest that the pattern
of the spread of interest in a story on the network is indicative of how
popular the story will become. Stories that spread mainly outside of the
submitter's neighborhood go on to be very popular, while stories that spread
mainly through the submitter's social neighborhood prove not to be very
popular. This effect is visible already in the early stages of voting, and one
can make a prediction about the potential audience of a story simply by
analyzing where the initial votes come from.
|
0806.1919
|
Non-linear index coding outperforming the linear optimum
|
cs.IT math.IT
|
The following source coding problem was introduced by Birk and Kol: a sender
holds a word $x\in\{0,1\}^n$, and wishes to broadcast a codeword to $n$
receivers, $R_1,...,R_n$. The receiver $R_i$ is interested in $x_i$, and has
prior \emph{side information} comprising some subset of the $n$ bits. This
corresponds to a directed graph $G$ on $n$ vertices, where $(i,j)$ is an edge
iff $R_i$ knows the bit $x_j$. An \emph{index code} for $G$ is an encoding scheme
which enables each $R_i$ to always reconstruct $x_i$, given his side
information. The minimal word length of an index code was studied by
Bar-Yossef, Birk, Jayram and Kol (FOCS 2006). They introduced a graph
parameter, $\mathrm{minrk}_2(G)$, which completely characterizes the length of
an optimal \emph{linear} index code for $G$. The authors of BBJK showed that in
various cases linear codes attain the optimal word length, and conjectured that
linear index coding is in fact \emph{always} optimal.
In this work, we disprove the main conjecture of BBJK in the following strong
sense: for any $\epsilon > 0$ and sufficiently large $n$, there is an
$n$-vertex graph $G$ so that every linear index code for $G$ requires codewords
of length at least $n^{1-\epsilon}$, and yet a non-linear index code for $G$
has a word length of $n^\epsilon$. This is achieved by an explicit
construction, which extends Alon's variant of the celebrated Ramsey
construction of Frankl and Wilson.
In addition, we study optimal index codes in various, less restricted,
natural models, and prove several related properties of the graph parameter
$\mathrm{minrk}(G)$.
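For orientation, a tiny example of a linear index code (a sketch, unrelated to the lower-bound construction of the paper): when the side-information graph is complete, a single XOR of all bits is a valid one-bit index code, since each receiver can cancel everything it already knows.

```python
from itertools import product

def decode(broadcast, known_bits):
    """Each receiver XORs the broadcast bit with its side information."""
    b = broadcast
    for v in known_bits:
        b ^= v
    return b

n = 3
ok = True
for x in product((0, 1), repeat=n):
    c = x[0] ^ x[1] ^ x[2]  # one-bit linear index code for the complete graph
    for i in range(n):
        known = [x[j] for j in range(n) if j != i]  # R_i knows all other bits
        ok = ok and decode(c, known) == x[i]
```

Without side information, $n$ bits would be needed; the graph structure is what a good index code exploits, and the paper shows non-linear codes can exploit it far better than linear ones.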
|
0806.1945
|
Complexity in atoms: an approach with a new analytical density
|
nlin.CD cs.IT math.IT physics.atom-ph quant-ph
|
In this work, the calculation of complexity in atomic systems is considered.
In order to capture the increase of this statistical magnitude with the atomic
number due to relativistic effects, recently reported in [A. Borgoo, F. De
Proft, P. Geerlings, K.D. Sen, Chem. Phys. Lett., 444 (2007) 186], a new
analytical density to describe neutral atoms is proposed. This density is
inspired by the Tietz potential model. The parameters of this density are
determined from the normalization condition and from a variational calculation
of the energy, which is a functional of the density. The density is
non-singular at the origin and its specific form is selected so as to fit the
results coming from non-relativistic Hartree-Fock calculations. The main
ingredients of the energy functional are the non-relativistic kinetic energy,
the nuclear-electron attraction energy and the classical term of the electron
repulsion. The relativistic correction to the kinetic energy and the Weizsacker
term are also taken into account. The Dirac and the correlation terms are shown
to be less important than the other terms and they have been discarded in this
study. When the statistical measure of complexity is calculated in position
space with the analytical density derived from this model, the increasing trend
of this magnitude as the atomic number increases is also found.
|
0806.1984
|
Classification of curves in 2D and 3D via affine integral signatures
|
cs.CV
|
We propose a robust classification algorithm for curves in 2D and 3D, under
the special and full groups of affine transformations. To each plane or spatial
curve we assign a plane signature curve. Curves, equivalent under an affine
transformation, have the same signature. The signatures introduced in this
paper are based on integral invariants, which behave much better on noisy
images than classically known differential invariants. The comparison with
other types of invariants is given in the introduction. Though the integral
invariants for planar curves were known before, the affine integral invariants
for spatial curves are proposed here for the first time. Using the inductive
variation of the moving frame method we compute affine invariants in terms of
Euclidean invariants. We present two types of signatures, the global signature
and the local signature. Both signatures are independent of parameterization
(curve sampling). The global signature depends on the choice of the initial
point and does not allow us to compare fragments of curves, and is therefore
sensitive to occlusions. The local signature, although slightly more
sensitive to noise, is independent of the choice of the initial point and is
not sensitive to occlusions in an image. It helps establish local equivalence
of curves. The robustness of these invariants and signatures in their
application to the problem of classification of noisy spatial curves extracted
from a 3D object is analyzed.
|
0806.2006
|
Classifier fusion for sonar image classification
|
cs.CV cs.AI
|
In this paper, we present some high-level information fusion approaches for
numeric and symbolic data. We study the interest of such methods, particularly
for classifier fusion. A comparative study is made in the context of seabed
characterization from sonar images. The classification of the kind of sediment
is a difficult problem because of the complexity of the data. We compare the
high-level information fusion approaches and report the obtained performance.
|
0806.2007
|
Experts Fusion and Multilayer Perceptron Based on Belief Learning for
Sonar Image Classification
|
cs.CV cs.AI
|
Sonar images provide a rapid view of the seabed in order to characterize it.
However, in such an uncertain environment, the real seabed is unknown, and the
only information we can obtain is the interpretation of different human
experts, sometimes in conflict. In this paper, we propose to manage this
conflict in order to provide a robust reality for the learning step of
classification algorithms. The classification is conducted by a multilayer
perceptron, taking into account the uncertainty of the reality in the learning
stage. The results of this seabed characterization are presented on real sonar
images.
|
0806.2008
|
Generalized proportional conflict redistribution rule applied to Sonar
imagery and Radar targets classification
|
cs.CV cs.AI
|
In this chapter, we present two applications in information fusion in order
to evaluate the generalized proportional conflict redistribution rule presented
in the chapter \cite{Martin06a}. Most of the time the combination rules are
evaluated only on simple examples. We study here different combination rules
and compare them in terms of decision on real data. Indeed, in real
applications, we need a reliable decision and it is the final results that
matter. Two applications are presented here: a fusion of human expert opinions
on the kind of underwater sediments depicted on sonar images, and a classifier
fusion for radar target recognition.
|
0806.2035
|
Nodal distances for rooted phylogenetic trees
|
q-bio.PE cs.CE cs.DM
|
Dissimilarity measures for (possibly weighted) phylogenetic trees based on
the comparison of their vectors of path lengths between pairs of taxa have
been present in the systematics literature since the early seventies. But, as
far as rooted phylogenetic trees go, these vectors can only separate
non-weighted binary trees, and therefore these dissimilarity measures are
metrics only on this class. In this paper we overcome this problem by
splitting, in a suitable way, each path length between two taxa into two
lengths. We prove that the resulting split path-length matrices single out
arbitrary
rooted phylogenetic trees with nested taxa and arcs weighted in the set of
positive real numbers. This allows the definition of metrics on this general
class by comparing these matrices by means of metrics in spaces of real-valued
$n\times n$ matrices. We conclude this paper by establishing some basic facts
about the metrics for non-weighted phylogenetic trees defined in this way using
$L^p$ metrics on these spaces of matrices.
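The splitting idea can be sketched on toy three-taxon trees with unit arc weights (ignoring the nested taxa and general weights that the paper handles): each pairwise path length is split into the two distances down from the least common ancestor, and the resulting matrices are compared entrywise with an $L^1$ metric.

```python
def dist_up(parent, node, anc):
    """Number of arcs from node up to its ancestor anc."""
    d = 0
    while node != anc:
        node = parent[node]
        d += 1
    return d

def lca(parent, a, b):
    anc = {a}
    while a in parent:
        a = parent[a]
        anc.add(a)
    while b not in anc:
        b = parent[b]
    return b

def split_matrix(parent, taxa):
    """For each taxon pair, split the path length into the two
    root-ward distances to the least common ancestor."""
    return {(i, j): (dist_up(parent, i, lca(parent, i, j)),
                     dist_up(parent, j, lca(parent, i, j)))
            for i in taxa for j in taxa if i < j}

# two rooted trees on {a, b, c}: ((a,b),c) and (a,(b,c)), as parent maps
t1 = {"a": "u", "b": "u", "u": "r", "c": "r"}
t2 = {"b": "v", "c": "v", "v": "r", "a": "r"}
taxa = ["a", "b", "c"]
m1, m2 = split_matrix(t1, taxa), split_matrix(t2, taxa)
d = sum(abs(x - y) for p in m1 for x, y in zip(m1[p], m2[p]))  # L^1 metric
```

Summing the two split lengths per pair recovers the ordinary path-length vector; keeping them separate is what lets the matrices distinguish non-binary and weighted rooted trees.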
|
0806.2084
|
On the existence of compactly supported reconstruction functions in a
sampling problem
|
cs.IT math.FA math.IT math.NA
|
Assume that samples of a filtered version of a function in a shift-invariant
space are available. This work deals with the existence of a sampling formula
involving these samples and having reconstruction functions with compact
support. Thus, low computational complexity is involved and truncation errors
are avoided. This is done in the light of the generalized sampling theory by
using the oversampling technique: more samples than strictly necessary are
used. For a suitable choice of the sampling period, a necessary and sufficient
condition is given in terms of the Kronecker canonical form of a matrix pencil.
Compared with other characterizations in the mathematical literature, the one
given here has an important advantage: it can be reliably computed by using
the GUPTRI form of the matrix pencil. Finally, a practical method for
computing the
compactly supported reconstruction functions is given for the important case
where the oversampling rate is minimum.
|
0806.2139
|
Beyond Nash Equilibrium: Solution Concepts for the 21st Century
|
cs.GT cs.AI cs.CR cs.DC
|
Nash equilibrium is the most commonly-used notion of equilibrium in game
theory. However, it suffers from numerous problems. Some are well known in the
game theory community; for example, the Nash equilibrium of repeated prisoner's
dilemma is neither normatively nor descriptively reasonable. However, new
problems arise when considering Nash equilibrium from a computer science
perspective: for example, Nash equilibrium is not robust (it does not tolerate
``faulty'' or ``unexpected'' behavior), it does not deal with coalitions, it
does not take computation cost into account, and it does not deal with cases
where players are not aware of all aspects of the game. Solution concepts that
try to address these shortcomings of Nash equilibrium are discussed.
|
0806.2140
|
Defaults and Normality in Causal Structures
|
cs.AI
|
A serious defect with the Halpern-Pearl (HP) definition of causality is
repaired by combining a theory of causality with a theory of defaults. In
addition, it is shown that (despite a claim to the contrary) a cause according
to the HP condition need not be a single conjunct. A definition of causality
motivated by Wright's NESS test is shown to always hold for a single conjunct.
Moreover, conditions that hold for all the examples considered by HP are given
that guarantee that causality according to (this version) of the NESS test is
equivalent to the HP definition.
|
0806.2198
|
Capacity-achieving CPM schemes
|
cs.IT math.IT
|
The pragmatic approach to coded continuous-phase modulation (CPM) is proposed
as a capacity-achieving low-complexity alternative to the serially-concatenated
CPM (SC-CPM) coding scheme. In this paper, we first perform a selection of the
best spectrally-efficient CPM modulations to be embedded into SC-CPM schemes.
Then, we consider the pragmatic capacity (a.k.a. BICM capacity) of CPM
modulations and optimize it through a careful design of the mapping between
input bits and CPM waveforms. The schemes thus obtained are cascaded with an
outer serially-concatenated convolutional code to form a pragmatic
coded-modulation system. The resulting schemes exhibit performance very close
to the CPM capacity without requiring iterations between the outer decoder and
the CPM demodulator. As a result, the receiver exhibits reduced complexity and
increased flexibility due to the separation of the demodulation and decoding
functions.
|
0806.2216
|
An Intelligent Multi-Agent Recommender System for Human Capacity
Building
|
cs.AI cs.HC
|
This paper presents a Multi-Agent approach to the problem of recommending
training courses to engineering professionals. The recommendation system is
built as a proof of concept and limited to the electrical and mechanical
engineering disciplines. Through user modelling and data collection from a
survey, collaborative filtering recommendation is implemented using intelligent
agents. The agents work together in recommending meaningful training courses
and updating the course information. The system uses a user's profile and
keywords from courses to rank courses. A ranking accuracy for courses of 90% is
achieved while flexibility is achieved using an agent that retrieves
information autonomously using data mining techniques from websites. This
manner of recommendation is scalable and adaptable. Further improvements can be
made using clustering and recording user feedback.
|
0806.2312
|
Continuing Progress on a Lattice QCD Software Infrastructure
|
hep-lat cs.CE
|
We report on the progress of the software effort in the QCD Application Area
of SciDAC. In particular, we discuss how the software developed under SciDAC
enabled the aggressive exploitation of leadership computers, and we report on
progress in the area of QCD software for multi-core architectures.
|
0806.2356
|
Development of Hybrid Intelligent Systems and their Applications from
Engineering Systems to Complex Systems
|
cs.AI cs.MA
|
In this study, we introduce the general frame of MAny Connected Intelligent
Particles Systems (MACIPS). Connections and interconnections between particles
give rise to complex behavior in such an otherwise simple system (a system
within a system). Contributions of natural computing, under information
granulation theory, are the main topic of this broad framework. Along this
line, we organize different algorithms involving a few prominent intelligent
computing and approximate reasoning methods, such as the self-organizing
feature map (SOM) [9], the neuro-fuzzy inference system [10], rough set theory
(RST) [11], collaborative clustering, genetic algorithms and ant colony
systems. We have applied these algorithms to several engineering systems,
especially systems arising in civil engineering and mineral processing. In
another direction, we investigated how our algorithms can model
government-society interaction, where the government adopts various styles of
behavior: rigid (absolute) or flexible. From this, the transition of such a
society from order to disorder, driven by changes in connectivity parameters
(noise), is inferred. In addition, one may find an indirect mapping between
financial systems and eventual market fluctuations with MACIPS. In the
following sections, we briefly outline the main topics of the proposal;
details of the proposed algorithms can be found in the references.
|
0806.2513
|
The Perfect Binary One-Error-Correcting Codes of Length 15: Part
I--Classification
|
cs.IT math.IT
|
A complete classification of the perfect binary one-error-correcting codes of
length 15 as well as their extensions of length 16 is presented. There are 5983
such inequivalent perfect codes and 2165 extended perfect codes. Efficient
generation of these codes relies on the recent classification of Steiner
quadruple systems of order 16. Utilizing a result of Blackmore, the optimal
binary one-error-correcting codes of length 14 and the (15, 1024, 4) codes are
also classified; there are 38408 and 5983 such codes, respectively.
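The sphere-packing meaning of "perfect" can be checked directly on one representative, the classical [15, 11] Hamming code (a standard construction; the classification covers all 5983 inequivalent codes): its $2^{11}$ codewords, surrounded by radius-1 Hamming balls, exactly tile $\{0,1\}^{15}$.

```python
from itertools import product

n, r = 15, 4
# parity-check matrix of the [15, 11] Hamming code:
# column j is the binary expansion of j + 1, so every nonzero column appears once
H = [[((j + 1) >> i) & 1 for j in range(n)] for i in range(r)]

def syndrome(w):
    return tuple(sum(row[j] * w[j] for j in range(n)) % 2 for row in H)

codewords = [w for w in product((0, 1), repeat=n) if syndrome(w) == (0,) * r]
min_weight = min(sum(w) for w in codewords if any(w))  # = min distance (linear code)
perfect = len(codewords) * (1 + n) == 2 ** n           # radius-1 balls tile {0,1}^15
```

Minimum distance 3 makes the code one-error-correcting, and the exact tiling is what makes it perfect; the classification problem is enumerating all inequivalent codes sharing these parameters, most of which are non-linear.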
|
0806.2533
|
Asymptotic Analysis of the Performance of LAS Algorithm for Large-MIMO
Detection
|
cs.IT math.IT
|
In our recent work, we reported an exhaustive study of the simulated bit
error rate (BER) performance of a low-complexity likelihood ascent search (LAS)
algorithm for detection in large multiple-input multiple-output (MIMO) systems
with a large number of antennas that achieve high spectral efficiencies. Though
simulations showed the algorithm coming increasingly close to
maximum-likelihood (ML) performance, no BER analysis was
reported. Here, we extend our work on LAS and report an asymptotic BER analysis
of the LAS algorithm in the large system limit, where $N_t,N_r \to \infty$ with
$N_t=N_r$, where $N_t$ and $N_r$ are the number of transmit and receive
antennas. We prove that the error performance of the LAS detector in V-BLAST
with 4-QAM in i.i.d. Rayleigh fading converges to that of the ML detector as
$N_t,N_r \to \infty$.
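The ascent itself can be sketched as a greedy bit-flipping search (a schematic version for real-valued ±1 symbols with a hypothetical matched-filter start, not the exact update schedule analyzed in the paper):

```python
import numpy as np

def las_detect(H, y, x0):
    """Greedy likelihood ascent: flip one +/-1 symbol at a time as long as
    the flip lowers the squared residual ||y - Hx||^2."""
    x = x0.copy()
    cost = np.sum((y - H @ x) ** 2)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] = -x[i]                      # try flipping symbol i
            c = np.sum((y - H @ x) ** 2)
            if c < cost:
                cost, improved = c, True      # keep the flip
            else:
                x[i] = -x[i]                  # revert
    return x

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 8))                   # toy 8x8 real channel
x_true = rng.choice([-1.0, 1.0], size=8)
y = H @ x_true + 0.1 * rng.normal(size=8)
x0 = np.where(H.T @ y >= 0, 1.0, -1.0)        # matched-filter initial vector
x_hat = las_detect(H, y, x0)
res0 = float(np.sum((y - H @ x0) ** 2))
res_hat = float(np.sum((y - H @ x_hat) ** 2))
```

Each iteration can only decrease the residual, so the search terminates at a local optimum; the paper's contribution is proving that, as the system grows, this local optimum matches ML performance.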
|
0806.2555
|
Frequency of Correctness versus Average-Case Polynomial Time and
Generalized Juntas
|
cs.CC cs.GT cs.MA
|
We prove that every distributional problem solvable in polynomial time on the
average with respect to the uniform distribution has a frequently
self-knowingly correct polynomial-time algorithm. We also study some features
of probability weight of correctness with respect to generalizations of
Procaccia and Rosenschein's junta distributions [PR07b].
|
0806.2581
|
A chain dictionary method for Word Sense Disambiguation and applications
|
cs.CL
|
A large class of unsupervised algorithms for Word Sense Disambiguation (WSD)
is that of dictionary-based methods. Various algorithms have as the root Lesk's
algorithm, which exploits the sense definitions in the dictionary directly. Our
approach uses the lexical base WordNet for a new algorithm originated in
Lesk's, namely "chain algorithm for disambiguation of all words", CHAD. We show
how translation from a language into another one and also text entailment
verification could be accomplished by this disambiguation.
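A minimal Lesk-style sketch (toy glosses, not actual WordNet entries, and without the chaining over consecutive words that CHAD adds) picks the sense whose dictionary definition overlaps the context most:

```python
def simplified_lesk(word, context, glosses):
    """Return the sense of `word` whose gloss shares the most words
    with the surrounding context."""
    ctx = set(context.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in glosses[word].items():
        overlap = len(ctx & set(gloss.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

# made-up glosses standing in for dictionary definitions
glosses = {"bank": {
    "bank#river": "sloping land beside a body of water",
    "bank#finance": "an institution that accepts deposits and lends money",
}}
sense = simplified_lesk("bank", "he sat on the bank of the river water", glosses)
```

Here "water" and "of" in the context overlap the first gloss, so the river sense wins; a chain algorithm reuses each resolved sense as context for the next word.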
|
0806.2643
|
On the Capacity Equivalence with Side Information at Transmitter and
Receiver
|
cs.IT math.IT
|
In this paper, a channel that is contaminated by two independent Gaussian
noises $S \sim N(0,Q)$ and $Z_0 \sim N(0,N_0)$ is considered. The capacity of
this channel is computed when independent noisy versions of $S$ are known to
the transmitter and/or receiver. It is shown that the channel capacity is
greater than the capacity when $S$ is completely unknown, but is less than the
capacity when $S$ is perfectly known at the transmitter or receiver. For
example, if there is one noisy version of $S$ known at the transmitter only,
the capacity is $0.5\log(1+\frac{P}{Q(N_1/(Q+N_1))+N_0})$, where $P$ is the
input power
constraint and $N_1$ is the power of the noise corrupting $S$. Further, it is
shown that the capacity with knowledge of any independent noisy versions of $S$
at the transmitter is equal to the capacity with knowledge of the statistically
equivalent noisy versions of $S$ at the receiver.
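Numerically, the ordering of these capacities can be checked for hypothetical parameter values (the middle expression is the formula quoted above; the other two treat $S$ as pure extra noise and as perfectly known, respectively; base-2 logs are used here, which does not affect the ordering):

```python
import math

# arbitrary illustrative values for power constraint and noise variances
P, Q, N0, N1 = 10.0, 4.0, 1.0, 2.0

c_unknown = 0.5 * math.log2(1 + P / (Q + N0))                  # S treated as noise
c_noisy   = 0.5 * math.log2(1 + P / (Q * N1 / (Q + N1) + N0))  # noisy S known
c_known   = 0.5 * math.log2(1 + P / N0)                        # S perfectly known
```

The residual uncertainty about $S$ given its noisy version has variance $Q N_1/(Q+N_1)$, which interpolates between $Q$ (as $N_1 \to \infty$) and $0$ (as $N_1 \to 0$), reproducing the two extreme capacities.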
|
0806.2674
|
On Certain Large Random Hermitian Jacobi Matrices with Applications to
Wireless Communications
|
cs.IT math.IT
|
In this paper we study the spectrum of certain large random Hermitian Jacobi
matrices. These matrices are known to describe certain communication setups. In
particular we are interested in an uplink cellular channel which models mobile
users experiencing a soft-handoff situation under joint multicell decoding.
Considering rather general fading statistics we provide a closed form
expression for the per-cell sum-rate of this channel in high-SNR, when an
intra-cell TDMA protocol is employed. Since the matrices of interest are
tridiagonal, their eigenvectors can be considered as sequences with second
order linear recurrence. Therefore, the problem is reduced to the study of the
exponential growth of products of two-by-two matrices. For the case where $K$
users are simultaneously active in each cell, we obtain a series of lower and
upper bounds on the high-SNR power offset of the per-cell sum-rate, which are
considerably tighter than previously known bounds.
|
0806.2682
|
Weighted Superimposed Codes and Constrained Integer Compressed Sensing
|
cs.IT math.IT
|
We introduce a new family of codes, termed weighted superimposed codes
(WSCs). This family generalizes the class of Euclidean superimposed codes
(ESCs), used in multiuser identification systems. WSCs allow for discriminating
all bounded, integer-valued linear combinations of real-valued codewords that
satisfy prescribed norm and non-negativity constraints. By design, WSCs are
inherently noise tolerant. Therefore, these codes can be seen as special
instances of robust compressed sensing schemes. The main results of the paper
are lower and upper bounds on the largest achievable code rates of several
classes of WSCs. These bounds suggest that with the codeword and weighting
vector constraints at hand, one can improve the code rates achievable by
standard compressive sensing.
|
0806.2726
|
L2 Orthogonal Space Time Code for Continuous Phase Modulation
|
cs.IT math.IT
|
To combine the high power efficiency of Continuous Phase Modulation (CPM)
with either high spectral efficiency or enhanced performance in low Signal to
Noise conditions, some authors have proposed to introduce CPM in a MIMO frame,
by using Space Time Codes (STC). In this paper, we address the code design
problem of Space Time Block Codes combined with CPM and introduce a new design
criterion based on L2 orthogonality. This L2 orthogonality condition, with the
help of a simplifying assumption, leads, in the 2x2 case, to a new family of
codes. These codes generalize the Wang and Xia code, which was based on
pointwise orthogonality. Simulations indicate that the new codes achieve full
diversity and a slightly better coding gain. Moreover, one of the codes can be
interpreted as two antennas fed by two conventional CPMs using the same data
but with different alphabet sets. Inspection of these alphabet sets also leads
to a simple explanation of the (small) spectrum broadening of Space Time Coded
CPM.
|
0806.2738
|
Identification of information tonality based on Bayesian approach and
neural networks
|
cs.IT math.IT
|
A model of the identification of information tonality, based on a Bayesian
approach and neural networks, is described. In the context of this paper,
tonality means the positive or negative tone of both the whole information and
its parts which relate to particular concepts. The method, whose application
is presented in the paper, is based on statistical regularities connected with
the presence of definite lexemes in texts. A distinctive feature of the method
is its simplicity and versatility. At present, ideologically similar
approaches are widely used to control spam.
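A minimal sketch of the lexeme-statistics idea (a naive Bayes classifier with Laplace smoothing over toy documents; the paper's actual model also involves neural networks):

```python
import math
from collections import Counter

def train(docs):
    """docs: (text, label) pairs. Returns per-label word counts and doc counts."""
    counts = {"pos": Counter(), "neg": Counter()}
    totals = Counter()
    for text, label in docs:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one (Laplace) smoothing over the joint vocabulary."""
    vocab = set(counts["pos"]) | set(counts["neg"])
    best, best_lp = None, -math.inf
    for label in counts:
        lp = math.log(totals[label] / sum(totals.values()))   # log prior
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / denom)    # log likelihood
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("good great excellent", "pos"), ("great service", "pos"),
        ("bad awful terrible", "neg"), ("terrible service", "neg")]
counts, totals = train(docs)
```

The presence statistics of particular lexemes carry essentially all the signal here, which is why the abstract notes the kinship with spam filtering.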
|
0806.2760
|
L2 OSTC-CPM: Theory and design
|
cs.IT math.IT
|
The combination of space-time coding (STC) and continuous phase modulation
(CPM) is an attractive field of research because both STC and CPM bring many
advantages for wireless communications. Zhang and Fitz [1] were the first to
apply this idea by constructing a trellis based scheme. But for these codes the
decoding effort grows exponentially with the number of transmitting antennas.
This was circumvented by orthogonal codes introduced by Wang and Xia [2].
Unfortunately, based on Alamouti code [3], this design is restricted to two
antennas. However, by relaxing the orthogonality condition, we prove here that
it is possible to design L2-orthogonal space-time codes which achieve full rate
and full diversity with low decoding effort. In part one, we generalize the
two-antenna code proposed by Wang and Xia [2] from pointwise to
L2-orthogonality and in part two we present the first L2-orthogonal code for
CPM with three antennas. In this report, we detail these results and focus on
the properties of these codes. Of special interest is the optimization of the
bit error rate which depends on the initial phase of the system. Our simulation
results illustrate the systemic behavior of these conditions.
|
0806.2843
|
MultiKulti Algorithm: Migrating the Most Different Genotypes in an
Island Model
|
cs.NE cs.DC
|
Migration policies in distributed evolutionary algorithms have not been an
active research area until recently. However, in the same way as operators
have an impact on performance, the choice of migrants is bound to have an
impact too. In this paper we propose a new policy (named multikulti) for
choosing the individuals that are going to be sent to other nodes, based on
multiculturality: the individual sent should be as different as possible from
the receiving population. We have tested this policy on different discrete
optimization problems and found that, on average or in median, it outperforms
classical policies such as sending the best or a random individual.
|
0806.2850
|
Decoding Beta-Decay Systematics: A Global Statistical Model for Beta^-
Halflives
|
nucl-th astro-ph cond-mat.dis-nn cs.LG stat.ML
|
Statistical modeling of nuclear data provides a novel approach to nuclear
systematics complementary to established theoretical and phenomenological
approaches based on quantum theory. Continuing previous studies in which global
statistical modeling is pursued within the general framework of machine
learning theory, we implement advances in training algorithms designed to
improve generalization, in application to the problem of reproducing and
predicting the halflives of nuclear ground states that decay 100% by the beta^-
mode. More specifically, fully-connected, multilayer feedforward artificial
neural network models are developed using the Levenberg-Marquardt optimization
algorithm together with Bayesian regularization and cross-validation. The
predictive performance of models emerging from extensive computer experiments
is compared with that of traditional microscopic and phenomenological models as
well as with the performance of other learning systems, including earlier
neural network models as well as the support vector machines recently applied
to the same problem. In discussing the results, emphasis is placed on
predictions for nuclei that are far from the stability line, and especially
those involved in the r-process nucleosynthesis. It is found that the new
statistical models can match or even surpass the predictive performance of
conventional models for beta-decay systematics and accordingly should provide a
valuable additional tool for exploring the expanding nuclear landscape.
|
0806.2890
|
Learning Graph Matching
|
cs.CV cs.LG
|
As a fundamental problem in pattern recognition, graph matching has
applications in a variety of fields, from computer vision to computational
biology. In graph matching, patterns are modeled as graphs and pattern
recognition amounts to finding a correspondence between the nodes of different
graphs. Many formulations of this problem can be cast in general as a quadratic
assignment problem, where a linear term in the objective function encodes node
compatibility and a quadratic term encodes edge compatibility. The main
research focus in this area has been designing efficient algorithms for
approximately solving the quadratic assignment problem, since it is NP-hard. In
this paper we turn our attention to a different question: how to estimate
compatibility functions such that the solution of the resulting graph matching
problem best matches the expected solution that a human would manually provide.
We present a method for learning graph matching: the training examples are
pairs of graphs and the `labels' are matches between them. Our experimental
results reveal that learning can substantially improve the performance of
standard graph matching algorithms. In particular, we find that simple linear
assignment with such a learning scheme outperforms Graduated Assignment with
bistochastic normalisation, a state-of-the-art quadratic assignment relaxation
algorithm.
|
0806.2925
|
Neural networks in 3D medical scan visualization
|
cs.AI cs.GR
|
For medical volume visualization, one of the most important tasks is to
reveal clinically relevant details from the 3D scan (CT, MRI ...), e.g. the
coronary arteries, without obscuring them with less significant parts. These
volume datasets contain different materials which are difficult to extract and
visualize with 1D transfer functions based solely on the attenuation
coefficient. Multi-dimensional transfer functions allow a much more precise
classification of data which makes it easier to separate different surfaces
from each other. Unfortunately, setting up multi-dimensional transfer functions
can become a fairly complex task, generally accomplished by trial and error.
This paper explains neural networks, and then presents an efficient way to
speed up the visualization process by semi-automatic transfer function generation.
We describe how to use neural networks to detect distinctive features shown in
the 2D histogram of the volume data and how to use this information for data
classification.
|
0806.2943
|
Modern Set
|
math.GM cs.IT math.IT
|
In this paper, we intend to generalize classical set theory as much as
possible. We will do this by freeing sets from the regular properties of
classical sets, e.g., the law of excluded middle, the law of non-contradiction,
the distributive law, the commutative law, etc.
|
0806.2991
|
On Information Rates of the Fading Wyner Cellular Model via the Thouless
Formula for the Strip
|
cs.IT math.IT
|
We apply the theory of random Schr\"odinger operators to the analysis of
multi-user communication channels similar to the Wyner model, that are
characterized by short-range intra-cell broadcasting. With $H$ the channel
transfer matrix, $HH^\dagger$ is a narrow-band matrix and in many aspects is
similar to a random Schr\"odinger operator. We relate the per-cell sum-rate
capacity of the channel to the integrated density of states of a random
Schr\"odinger operator; the latter is related to the top Lyapunov exponent of a
random sequence of matrices via a version of the Thouless formula. Unlike
related results in classical random matrix theory, limiting results do depend
on the underlying fading distributions. We also derive several bounds on the
limiting per-cell sum-rate capacity, some based on the theory of random
Schr\"odinger operators, and some derived from information theoretical
considerations. Finally, we get explicit results in the high-SNR regime for
some particular cases.
|
0806.3023
|
A Random Search Framework for Convergence Analysis of Distributed
Beamforming with Feedback
|
cs.DC cs.IT math.IT
|
The focus of this work is on the analysis of transmit beamforming schemes
with a low-rate feedback link in wireless sensor/relay networks, where nodes in
the network need to implement beamforming in a distributed manner.
Specifically, the problem of distributed phase alignment is considered, where
neither the transmitters nor the receiver has perfect channel state
information, but there is a low-rate feedback link from the receiver to the
transmitters. In this setting, a framework is proposed for systematically
analyzing the performance of distributed beamforming schemes. To illustrate the
advantage of this framework, a simple adaptive distributed beamforming scheme
that was recently proposed by Mudambai et al. is studied. Two important
properties for the received signal magnitude function are derived. Using these
properties and the systematic framework, it is shown that the adaptive
distributed beamforming scheme converges both in probability and in mean.
Furthermore, it is established that the time required for the adaptive scheme
to converge in mean scales linearly with respect to the number of sensor/relay
nodes.
|
0806.3031
|
Multi Site Coordination using a Multi-Agent System
|
cs.MA
|
A new approach to the coordination of decisions in a multi-site system is
proposed. This approach is based on the multi-agent concept and on the
principle of a distributed network of enterprises. For this purpose, each
enterprise is defined as autonomous and performs simultaneously at the local
and global levels. The basic component of our approach is the so-called Virtual
Enterprise Node (VEN), where the enterprise network is represented as a set of
tiers (as in a product breakdown structure). Within the network, each partner
constitutes a VEN, which is in contact with several customers and suppliers.
Exchanges between the VENs ensure the autonomy of decision and guarantee the
consistency of information and material flows. Only two complementary VEN
agents are necessary: one for external interactions, the Negotiator Agent (NA),
and one for the planning of internal decisions, the Planner Agent (PA). If
supply problems occur in the network, two other agents are defined: the Tier
Negotiator Agent (TNA), working at the tier level only, and the Supply Chain
Mediator Agent (SCMA), working at the level of the enterprise network. These two
agents are active only when a perturbation occurs; otherwise, the VENs
process the flow of information alone. With this new approach, managing an
enterprise network becomes much more transparent and resembles managing a
single enterprise in the network. The use of a Multi-Agent System (MAS) allows
physical distribution of the decisional system, and provides a heterarchical
organizational structure with decentralized control that guarantees the
autonomy of each entity and the flexibility of the network.
|
0806.3032
|
Multi-agents architecture for supply chain management
|
cs.MA
|
The purpose of this paper is to propose a new approach to supply chain
management. This approach is based on the virtual enterprise paradigm and the
use of the multi-agent concept. Each entity (such as an enterprise) is autonomous
and must pursue local and global goals in relation with its environment. The basic
component of our approach is the Virtual Enterprise Node (VEN). The supply chain
is viewed as a set of tiers (corresponding to the levels of production), in
which each partner of the supply chain (VEN) is in relation with several
customers and suppliers. Each VEN belongs to one tier. The main customer gives
global objectives (quantity, cost and delay) to the supply chain. The Mediator
Agent (MA) is in charge of managing the supply chain so as to respect those
objectives at the global level. These objectives are handed over to the Negotiator
Agents at the tier level (NAT). These two agents are only active if a
perturbation occurs; otherwise, information flows are exchanged only between
VENs. This architecture allows supply chain management that is completely
transparent as seen from any single enterprise of the supply chain. The use of a
Multi-Agent System (MAS) allows physical distribution of the decisional system.
Moreover, the hierarchical organizational structure with decentralized
control guarantees, at the same time, the autonomy of each entity and the
flexibility of the whole.
|
0806.3115
|
Using rational numbers to key nested sets
|
cs.DB
|
This report details the generation and use of tree node ordering keys in a
single relational database table. The keys for each node are calculated from
the keys of its parent, in such a way that the sort order places every node in
the tree before all of its descendants and after all siblings having a lower
index. The calculation from parent keys to child keys is simple, and reversible
in the sense that the keys of every ancestor of a node can be calculated from
that node's keys without having to consult the database.
Proofs of the above properties of the key encoding process and of its
correspondence to a finite continued fraction form are provided.
|
0806.3133
|
Shannon Meets Carnot: Mutual Information Via Thermodynamics
|
cs.IT math.IT
|
In this contribution, the Gaussian channel is represented as an equivalent
thermal system, allowing its input-output mutual information to be expressed in terms
of thermodynamic quantities. This thermodynamic description of the mutual
information is based upon a generalization of the $2^{nd}$ thermodynamic law
and provides an alternative proof to the Guo-Shamai-Verd\'{u} theorem, giving
an intriguing connection between this remarkable theorem and the most
fundamental laws of nature - the laws of thermodynamics.
|
0806.3227
|
A Non-differential Distributed Space-Time Coding for Partially-Coherent
Cooperative Communication
|
cs.IT math.IT
|
In a distributed space-time coding scheme, based on the relay channel model,
the relay nodes co-operate to linearly process the transmitted signal from the
source and forward them to the destination such that the signal at the
destination appears as a space-time block code. Recently, code design
criteria for achieving full diversity in a partially-coherent environment have
been proposed, along with codes based on differential encoding and decoding
techniques. For such a setup, in this paper, a non-differential encoding
technique and a construction of distributed space-time block codes from unitary
matrix groups at the source and a set of diagonal unitary matrices at the
relays are proposed. It is shown that the performance of our scheme is
independent of the choice of unitary matrices at the relays. When the group is
cyclic, a necessary and sufficient condition on the generator of the cyclic
group to achieve full diversity and minimize the pairwise error probability
is proved. Various choices of the generator of the cyclic group that reduce the ML
decoding complexity at the destination are presented. It is also shown that, at
the source, if non-cyclic abelian unitary matrix groups are used, then
full diversity cannot be obtained. The presented scheme is also robust to the
failure of any subset of relay nodes.
|
0806.3243
|
Analysis of Verification-based Decoding on the q-ary Symmetric Channel
for Large q
|
cs.IT math.IT
|
We discuss and analyze a list-message-passing decoder with verification for
low-density parity-check (LDPC) codes on the q-ary symmetric channel (q-SC).
Rather than passing messages consisting of symbol probabilities, this decoder
passes lists of possible symbols and marks some lists as verified. The density
evolution (DE) equations for this decoder are derived and used to compute
decoding thresholds. If the maximum list size is unbounded, then we find that
any capacity-achieving LDPC code for the binary erasure channel can be used to
achieve capacity on the q-SC for large q. The decoding thresholds are also
computed via DE for the case where each list is truncated to satisfy a maximum
list size constraint. Simulation results are also presented to confirm the DE
results. During the simulations, we observed differences between two
verification-based decoding algorithms, introduced by Luby and Mitzenmacher,
that were implicitly assumed to be identical. In this paper, we provide an
analysis of the node-based algorithms from that paper and verify that it
matches simulation results. The probability of false verification (FV) is also
considered and techniques are discussed to mitigate the FV. Optimization of the
degree distribution is also used to improve the threshold for a fixed maximum
list size. Finally, the proposed algorithm is compared with a variety of other
algorithms using both density evolution thresholds and simulation results.
|
0806.3246
|
Broadcasting with side information
|
cs.IT math.IT
|
A sender holds a word x consisting of n blocks x_i, each of t bits, and
wishes to broadcast a codeword to m receivers, R_1,...,R_m. Each receiver R_i
is interested in one block, and has prior side information consisting of some
subset of the other blocks. Let \beta_t be the minimum number of bits that has
to be transmitted when each block is of length t, and let \beta be the limit
\beta = \lim_{t \to \infty} \beta_t/t. In words, \beta is the average
communication cost per bit in each block (for long blocks). Finding the coding
rate \beta, for such an informed broadcast setting, generalizes several coding
theoretic parameters related to Informed Source Coding on Demand, Index Coding
and Network Coding.
In this work we show that usage of large data blocks may strictly improve
upon the trivial encoding which treats each bit in the block independently. To
this end, we provide general bounds on \beta_t, and prove that for any constant
C there is an explicit broadcast setting in which \beta = 2 but \beta_1 > C.
One of these examples answers a question of Lubetzky and Stav.
In addition, we provide examples with the following counterintuitive
direct-sum phenomena. Consider a union of several mutually independent
broadcast settings. The optimal code for the combined setting may yield a
significant saving in communication over concatenating optimal encodings for
the individual settings. This result also provides new non-linear coding
schemes which improve upon the largest known gap between linear and non-linear
Network Coding, thus improving the results of Dougherty, Freiling, and Zeger.
The proofs use ideas related to Witsenhausen's rate, OR graph products,
colorings of Cayley graphs and the chromatic numbers of Kneser graphs.
|
0806.3277
|
On McMillan's theorem about uniquely decipherable codes
|
math.CO cs.IT math.IT
|
Karush's proof of McMillan's theorem is recast as an argument involving
polynomials with non-commuting indeterminates, certain evaluations of which
yield the Kraft sums of codes, proving a strengthened version of McMillan's
theorem.
|
0806.3284
|
Optimal hash functions for approximate closest pairs on the n-cube
|
cs.IT math.IT
|
One way to find closest pairs in large datasets is to use hash functions. In
recent years locality-sensitive hash functions for various metrics have been
given: projecting an n-cube onto k bits is a simple hash function that performs
well. In this paper we investigate alternatives to projection. For various
parameters hash functions given by complete decoding algorithms for codes work
better, and asymptotically random codes perform better than projection.
|
0806.3317
|
Differential Transmit Diversity Based on Quasi-Orthogonal Space-Time
Block Code
|
cs.IT math.IT
|
By using joint modulation and customized constellation set, we show that
Quasi-Orthogonal Space-Time Block Code (QO-STBC) can be used to form a new
differential space-time modulation (DSTM) scheme to provide full transmit
diversity with non-coherent detection. Our new scheme can provide higher code
rate than existing DSTM schemes based on Orthogonal STBC. It also has a lower
decoding complexity than the other DSTM schemes, such as those based on Group
Codes, because it only requires a joint detection of two complex symbols. We
derive the design criteria for the customized constellation set and use them to
construct a constellation set that provides a wide range of spectral efficiency
with full diversity and maximum coding gain.
|
0806.3320
|
Unitary Differential Space-Time Modulation with Joint Modulation
|
cs.IT math.IT
|
We develop two new designs of unitary differential space-time modulation
(DSTM) with low decoding complexity. Their decoder can be separated into a few
parallel decoders, each of which has a decoding search space of less than
sqrt(N) if the DSTM codebook contains N codewords. Both designs are based on
the concept of joint modulation, which means that several information symbols
are jointly modulated, unlike the conventional symbol-by-symbol modulation. The
first design is based on Orthogonal Space-Time Block Code (O-STBC) with joint
constellation constructed from spherical codes. The second design is based on
Quasi-Orthogonal Space-Time Block Code (QO-STBC) with specially designed
pair-wise constellation sets. Both the proposed unitary DSTM schemes have
considerably lower decoding complexity than many prior DSTM schemes, including
those based on Group Codes and Sp(2) which generally have a decoding search
space of N for a codebook size of N codewords, and much better decoding
performance than the existing O-STBC DSTM scheme. Between the two designs, the
proposed DSTM based on O-STBC generally has better decoding performance, while
the proposed DSTM based on QO-STBC has lower decoding complexity when 8
transmit antennas are used.
|
0806.3321
|
Achieving Near-Capacity at Low SNR on a Multiple-Antenna Multiple-User
Channel
|
cs.IT math.IT
|
We analyze the sensitivity of the capacity of a multi-antenna multi-user
system to the number of users being served. We show analytically that, for a
given desired sum-rate, the extra power needed to serve a subset of the users
at low SNR (signal-to-noise ratio) can be very small, and is generally much
smaller than the extra power needed to serve the same subset at high SNR. The
advantages of serving only subsets of the users are many: multi-user algorithms
have lower complexity, reduced channel-state information requirements, and,
often, better performance. We provide guidelines on how many users to serve to
get near-capacity performance with low complexity. For example, we show how in
an eight-antenna eight-user system we can serve only four users and still be
approximately 2 dB from capacity at very low SNR.
|
0806.3322
|
Power-Balanced Orthogonal Space-Time Block Code
|
cs.IT math.IT
|
In this paper, we propose two new systematic ways to construct amicable
orthogonal designs (AOD), with the aim of facilitating the construction of
power-balanced orthogonal space-time block codes (O-STBC) with favorable
practical attributes. We also show that an AOD can be constructed from an
Amicable Family (AF), and such a construction is crucial for achieving a
power-balanced O-STBC. In addition, we develop design guidelines on how to
select the "type" parameter of an AOD so that the resultant O-STBC will have
better power-distribution and code-coefficient attributes. Among the new
O-STBCs obtained, one is shown to be optimal in terms of power distribution
attributes. In addition, one of the proposed construction methods is shown to
generalize some other construction methods proposed in the literature.
|
0806.3324
|
Optimizing Quasi-Orthogonal STBC Through Group-Constrained Linear
Transformation
|
cs.IT math.IT
|
In this paper, we first derive the generic algebraic structure of a
Quasi-Orthogonal STBC (QO-STBC). Next we propose Group-Constrained Linear
Transformation (GCLT) as a means to optimize the diversity and coding gains of
a QO-STBC with square or rectangular QAM constellations. Compared with QO-STBC
with constellation rotation (CR), we show that QO-STBC with GCLT requires only
half the number of symbols for joint detection, hence lower maximum-likelihood
decoding complexity. We also derive analytically the optimum GCLT parameters
for QO-STBC with square QAM constellation. The optimized QO-STBCs with GCLT are
able to achieve full transmit diversity, and have negligible performance loss
compared with QO-STBCs with CR at the same code rate.
|
0806.3325
|
On the Search for High-Rate Quasi-Orthogonal Space-Time Block Code
|
cs.IT math.IT
|
A Quasi-Orthogonal Space-Time Block Code (QO-STBC) is attractive because it
achieves higher code rate than Orthogonal STBC and lower decoding complexity
than nonorthogonal STBC. In this paper, we first derive the algebraic structure
of QO-STBC, then we apply it in a novel graph-based search algorithm to find
high-rate QO-STBCs with code rates greater than 1. From the four-antenna codes
found using this approach, it is found that the maximum code rate is limited to
5/4 with symbolwise diversity level of four, and 4 with symbolwise diversity
level of two. The maximum likelihood decoding of these high-rate QO-STBCs can
be performed on two separate sub-groups of symbols. The rate-5/4 codes are the
first known QO-STBCs with code rate greater than 1 that have full symbolwise
diversity.
|
0806.3328
|
Limited Feedback for Multi-Antenna Multi-user Communications with
Generalized Multi-Unitary Decomposition
|
cs.IT math.IT
|
In this paper, we propose a decomposition method called Generalized
Multi-Unitary Decomposition (GMUD), which is useful in multi-user MIMO
precoding. This decomposition transforms a complex matrix H into PRQ, where R
is a special matrix whose first row contains only a non-zero user-defined value
at the left-most position, and P and Q are a pair of unitary matrices. The major
attraction of our proposed GMUD is that we can obtain multiple solutions for P and
Q. With GMUD, we propose a precoding method for a MIMO multi-user system that
does not require full channel state information (CSI) at the transmitter. The
proposed precoding method uses the multiple-unitary-matrices property to
compensate for inaccurate feedback information, as the transmitter can steer the
transmission beams of individual users such that the inter-user interference is
kept to a minimum.
|
0806.3329
|
Beamforming Matrix Quantization with Variable Feedback Rate
|
cs.IT math.IT
|
We propose an improved beamforming matrix compression by Givens Rotation with
the use of variable feedback rate. The variable feedback rate means that the
number of bits used to represent the quantized beamforming matrix is based on
the value of the matrix. Compared with the fixed feedback rate scheme, the
proposed method has better performance without additional feedback bandwidth.
|
0806.3332
|
Compressed Sensing of Analog Signals in Shift-Invariant Spaces
|
cs.IT math.IT
|
A traditional assumption underlying most data converters is that the signal
should be sampled at a rate exceeding twice the highest frequency. This
statement is based on a worst-case scenario in which the signal occupies the
entire available bandwidth. In practice, many signals are sparse so that only
part of the bandwidth is used. In this paper, we develop methods for low-rate
sampling of continuous-time sparse signals in shift-invariant (SI) spaces,
generated by m kernels with period T. We model sparsity by treating the case in
which only k out of the m generators are active, however, we do not know which
k are chosen. We show how to sample such signals at a rate much lower than m/T,
which is the minimal sampling rate without exploiting sparsity. Our approach
combines ideas from analog sampling in a subspace with a recently developed
block diagram that converts an infinite set of sparse equations to a finite
counterpart. Using these two components we formulate our problem within the
framework of finite compressed sensing (CS) and then rely on algorithms
developed in that context. The distinguishing feature of our results is that in
contrast to standard CS, which treats finite-length vectors, we consider
sampling of analog signals for which no underlying finite-dimensional model
exists. The proposed framework allows much of the recent literature on CS to be
extended to the analog domain.
|
0806.3474
|
Information field theory for cosmological perturbation reconstruction
and non-linear signal analysis
|
astro-ph cs.IT hep-th math.IT physics.data-an stat.CO
|
We develop information field theory (IFT) as a means of Bayesian inference on
spatially distributed signals, the information fields. A didactical approach is
attempted. Starting from general considerations on the nature of measurements,
signals, noise, and their relation to a physical reality, we derive the
information Hamiltonian, the source field, propagator, and interaction terms.
Free IFT reproduces the well known Wiener-filter theory. Interacting IFT can be
diagrammatically expanded, for which we provide the Feynman rules in position-,
Fourier-, and spherical harmonics space, and the Boltzmann-Shannon information
measure. The theory should be applicable in many fields. However, here, two
cosmological signal recovery problems are discussed in their IFT-formulation.
1) Reconstruction of the cosmic large-scale structure matter distribution from
discrete galaxy counts in incomplete galaxy surveys within a simple model of
galaxy formation. We show that a Gaussian signal, which should resemble the
initial density perturbations of the Universe, observed with a strongly
non-linear, incomplete and Poissonian-noise affected response, as the processes
of structure and galaxy formation and observations provide, can be
reconstructed thanks to the virtue of a response-renormalization flow equation.
2) We design a filter to detect local non-linearities in the cosmic microwave
background, which are predicted from some Early-Universe inflationary
scenarios, and expected due to measurement imperfections. This filter is the
optimal Bayes' estimator up to linear order in the non-linearity parameter and
can be used even to construct sky maps of non-linearities in the data.
|
0806.3537
|
Statistical Learning of Arbitrary Computable Classifiers
|
cs.LG
|
Statistical learning theory chiefly studies restricted hypothesis classes,
particularly those with finite Vapnik-Chervonenkis (VC) dimension. The
fundamental quantity of interest is the sample complexity: the number of
samples required to learn to a specified level of accuracy. Here we consider
learning over the set of all computable labeling functions. Since the
VC-dimension is infinite and a priori (uniform) bounds on the number of samples
are impossible, we let the learning algorithm decide when it has seen
sufficient samples to have learned. We first show that learning in this setting
is indeed possible, and develop a learning algorithm. We then show, however,
that bounding sample complexity independently of the distribution is
impossible. Notably, this impossibility is entirely due to the requirement that
the learning algorithm be computable, and not due to the statistical nature of
the problem.
|
0806.3628
|
Four-node Relay Network with Bi-directional Traffic Employing Wireless
Network Coding with Pre-cancellation
|
cs.IT math.IT
|
Network coding has the potential to improve the overall throughput of a
network by combining different streams of data and forwarding them. In wireless
networks, the wireless channel provides an excellent medium for physical-layer
network coding, as signals from different transmitters are combined
automatically by the wireless channel. In such scenarios, it would be
interesting to investigate protocols and algorithms which can optimally relay
information. In this paper, we look at a four-node two-way or bidirectional
relay network, and propose a relay protocol which can relay information
efficiently in this network.
|
0806.3629
|
Bi-Directional Multi-Antenna Relay Communications with Wireless Network
Coding
|
cs.IT math.IT
|
In this paper, we consider a two-way or bidirectional communications system
with a relay equipped with multiple antennas. We show that when the downlink
channel state information is not known at the relay, the benefit of having
additional antennas at the relay can only be obtained by using decode and
forward (DF) but not amplify and forward (AF). The gain becomes significant
when we employ transmit diversity together with wireless network coding. We
also demonstrate how the performance of such a system can be improved by
performing antenna selection at the relay. Our results show that if downlink
channel state information is known at the relay, network coding may not provide
additional gain over a simple antenna-selection scheme.
|
0806.3630
|
Comparative Study of SVD and QRS in Closed-Loop Beamforming Systems
|
cs.IT math.IT
|
We compare two closed-loop beamforming algorithms, one based on singular
value decomposition (SVD) and the other based on equal diagonal QR
decomposition (QRS). SVD has the advantage of parallelizing the MIMO channel,
but each of the sub-channels has different gain. QRS has the advantage of
having equal diagonal value for the decomposed channel, but the subchannels are
not fully parallelized, hence requiring successive interference cancellation or
other techniques to perform decoding. We consider a closed-loop system where
the feedback information is a unitary beamforming matrix. Due to the discrete
and limited modulation set, SVD may have inferior performance to QRS when no
modulation set selection is performed. However, if the selection of modulation
set is performed optimally, we show that SVD can outperform QRS.
|
0806.3631
|
Comparative Study of Open-loop Transmit Diversity Schemes for Four
Transmit Antennas in Coded OFDM Systems
|
cs.IT math.IT
|
We compare four open-loop transmit diversity schemes in a coded Orthogonal
Frequency Division Multiplexing (OFDM) system with four transmit antennas,
namely cyclic delay diversity (CDD), Space-Time Block Code (STBC, Alamouti code
is used) with CDD, Quasi-Orthogonal STBC (QO-STBC) and
Minimum-Decoding-Complexity QOSTBC (MDC-QOSTBC). We show that in a coded system
with low code rate, a scheme with spatial transmit diversity of second order
can achieve similar performance to that with spatial transmit diversity of
fourth order due to the additional diversity provided by the phase shift
diversity with channel coding. In addition, we compare the decoding
complexity and other features of the four schemes mentioned above, such as the
requirement for the training signals, hybrid automatic retransmission request
(HARQ), etc. The discussions in this paper can be readily applied to future
wireless communication systems, such as mobile systems beyond 3G, IEEE 802.11
wireless LAN, or IEEE 802.16 WiMAX, that employ more than two transmit antennas
and OFDM.
|
0806.3633
|
A Continuous Vector-Perturbation for Multi-Antenna Multi-User
Communication
|
cs.IT math.IT
|
The sum-rate of the broadcast channel in a multi-antenna multi-user
communication system can be achieved by using precoding and adding a regular
perturbation to the data vector. The perturbation can be removed by the modulus
function, thus transparent to the receiver, but the information of the
precoding matrix is needed to decode the symbols. This paper proposes a new
technique to improve the multi-antenna multi-user system by adding a
continuous perturbation to the data vector, without requiring knowledge of the
precoding matrix at the receiver. The perturbation vector is treated as
interference at the receiver and is thus transparent to it. The continuous
vector perturbation is derived by
maximizing the signal-to-interference plus noise ratio or minimizing the
minimum mean square error of the received signal.
|
0806.3646
|
Round Trip Time Prediction Using the Symbolic Function Network Approach
|
cs.NE cs.SC
|
In this paper, we develop a novel approach to model the Internet round trip
time using a recently proposed symbolic type neural network model called
symbolic function network. The developed predictor is shown to have good
generalization performance and simple representation compared to the multilayer
perceptron based predictors.
|
0806.3650
|
Recursive Code Construction for Random Networks
|
cs.IT math.IT
|
A modification of Koetter-Kschischang codes for random networks is presented
(these codes were also studied by Wang et al. in the context of authentication
problems). The new codes have higher information rate, while maintaining the
same error-correcting capabilities. An efficient error-correcting algorithm is
proposed for these codes.
|
0806.3653
|
Opportunistic Interference Alignment in MIMO Interference Channels
|
cs.GT cs.IT math.IT
|
We present two interference alignment techniques such that an opportunistic
point-to-point multiple input multiple output (MIMO) link can reuse, without
generating any additional interference, the same frequency band of a similar
pre-existing primary link. In this scenario, we exploit the fact that, under
power constraints, although each radio independently maximizes its rate by
water-filling over its channel transfer matrix singular values, frequently not
all of them are used. Therefore, by aligning the interference of the
opportunistic radio, it is possible to transmit at a significant rate while
ensuring zero interference on the pre-existing link. We propose a linear
pre-coder for a perfect interference alignment and a power allocation scheme
which maximizes the individual data rate of the secondary link. Our numerical
results show that significant data rates are achieved even for a reduced number
of antennas.
|
0806.3681
|
On the d-dimensional Quasi-Equally Spaced Sampling
|
cs.IT math.IT
|
We study a class of random matrices that appear in several communication and
signal processing applications, and whose asymptotic eigenvalue distribution is
closely related to the reconstruction error of an irregularly sampled
bandlimited signal. We focus on the case where the random variables
characterizing these matrices are d-dimensional vectors, independent, and
quasi-equally spaced, i.e., they have an arbitrary distribution and their
averages are vertices of a d-dimensional grid. Although a closed form
expression of the eigenvalue distribution is still unknown, under these
conditions we are able (i) to derive the distribution moments as the matrix
size grows to infinity, while its aspect ratio is kept constant, and (ii) to
show that the eigenvalue distribution tends to the Marcenko-Pastur law as
d->infinity. These results can find application in several fields; as an
example, we show how they can be used for the estimation of the mean square
error provided by linear reconstruction techniques.
|
0806.3710
|
How Is Meaning Grounded in Dictionary Definitions?
|
cs.CL cs.DB
|
Meaning cannot be based on dictionary definitions all the way down: at some
point the circularity of definitions must be broken in some way, by grounding
the meanings of certain words in sensorimotor categories learned from
experience or shaped by evolution. This is the "symbol grounding problem." We
introduce the concept of a reachable set -- a larger vocabulary whose meanings
can be learned from a smaller vocabulary through definition alone, as long as
the meanings of the smaller vocabulary are themselves already grounded. We
provide simple algorithms to compute reachable sets for any given dictionary.
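To make the notion concrete, a reachable set can be computed as a simple fixed point: a word becomes reachable once every word in its definition is reachable. The sketch below is an illustration of the idea, not the authors' algorithms; the toy dictionary is hypothetical.

```python
def reachable_set(definitions, grounded):
    """Compute the words learnable by definition alone from a
    grounded core vocabulary.

    definitions: dict mapping each word to the set of words used
                 in its definition.
    grounded:    set of words whose meanings are assumed known.

    Iterate to a fixed point: a word joins the reachable set once
    every word in its definition is already reachable.
    """
    reachable = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, defn in definitions.items():
            if word not in reachable and defn <= reachable:
                reachable.add(word)
                changed = True
    return reachable

# Hypothetical toy dictionary: each word maps to its defining words.
defs = {
    "pet":    {"animal", "home"},
    "dog":    {"animal", "pet"},
    "puppy":  {"young", "dog"},
    "exotic": {"foreign", "unusual"},
}
print(sorted(reachable_set(defs, {"animal", "home", "young"})))
# -> ['animal', 'dog', 'home', 'pet', 'puppy', 'young']
# "exotic" stays out: its definition uses ungrounded words.
```

The fixed-point loop runs in at most |vocabulary| passes, so the cost is polynomial in the dictionary size.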
|
0806.3765
|
Cross-concordances: terminology mapping and its effectiveness for
information retrieval
|
cs.DL cs.IR
|
The German Federal Ministry for Education and Research funded a major
terminology mapping initiative, which found its conclusion in 2007. The task of
this terminology mapping initiative was to organize, create and manage
'cross-concordances' between controlled vocabularies (thesauri, classification
systems, subject heading lists) centred around the social sciences but quickly
extending to other subject areas. 64 crosswalks with more than 500,000
relations were established. In the final phase of the project, a major
evaluation effort to test and measure the effectiveness of the vocabulary
mappings in an information system environment was conducted. The paper reports
on the cross-concordance work and evaluation results.
|
0806.3787
|
Computational Approaches to Measuring the Similarity of Short Contexts :
A Review of Applications and Methods
|
cs.CL
|
Measuring the similarity of short written contexts is a fundamental problem
in Natural Language Processing. This article provides a unifying framework by
which short context problems can be categorized both by their intended
application and proposed solution. The goal is to show that various problems
and methodologies that appear quite different on the surface are in fact very
closely related. The axes by which these categorizations are made include the
format of the contexts (headed versus headless), the way in which the contexts
are to be measured (first-order versus second-order similarity), and the
information used to represent the features in the contexts (micro versus macro
views). The unifying thread that binds together many short context applications
and methods is the fact that similarity decisions must be made between contexts
that share few (if any) words in common.
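To make the first-order/second-order distinction concrete, here is a minimal sketch (toy data of our own, not from the article): first-order similarity compares the words the contexts contain, while second-order similarity compares the co-occurrence profiles of those words, so two contexts sharing no words can still score high.

```python
import math

def cosine(u, v):
    """Cosine similarity between sparse vectors (dicts: feature -> weight)."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def first_order(ctx):
    """First-order representation: bag of the words in the context."""
    vec = {}
    for w in ctx:
        vec[w] = vec.get(w, 0.0) + 1.0
    return vec

def second_order(ctx, cooc):
    """Second-order representation: average of the co-occurrence
    vectors of the context words (cooc: word -> {feature: weight})."""
    vec = {}
    for w in ctx:
        for f, x in cooc.get(w, {}).items():
            vec[f] = vec.get(f, 0.0) + x / len(ctx)
    return vec

# Two contexts with no words in common ...
a = ["physician", "examined", "patient"]
b = ["doctor", "treated", "invalid"]
print(cosine(first_order(a), first_order(b)))  # 0.0: no shared words

# ... whose words nevertheless co-occur with similar features
# (hypothetical co-occurrence counts):
cooc = {
    "physician": {"hospital": 1.0, "medicine": 1.0},
    "doctor":    {"hospital": 1.0, "medicine": 1.0},
    "examined":  {"test": 1.0},
    "treated":   {"medicine": 1.0},
    "patient":   {"hospital": 1.0},
    "invalid":   {"hospital": 1.0},
}
print(cosine(second_order(a, cooc), second_order(b, cooc)) > 0.5)
```

The second-order score here is high precisely because the decision is made through the words' shared neighbors rather than through shared surface forms.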
|
0806.3799
|
A Sublinear Algorithm for Sparse Reconstruction with l2/l2 Recovery
Guarantees
|
cs.IT math.IT
|
Compressed Sensing aims to capture attributes of a sparse signal using very
few measurements. Cand\`{e}s and Tao showed that sparse reconstruction is
possible if the sensing matrix acts as a near isometry on all
$\boldsymbol{k}$-sparse signals. This property holds with overwhelming
probability if the entries of the matrix are generated by an iid Gaussian or
Bernoulli process. There has been significant recent interest in an alternative
signal processing framework: exploiting deterministic sensing matrices that
with overwhelming probability act as a near isometry on $\boldsymbol{k}$-sparse
vectors with uniformly random support, a geometric condition that is called the
Statistical Restricted Isometry Property or StRIP. This paper considers a
family of deterministic sensing matrices satisfying the StRIP that are based on
second-order Reed-Muller codes (binary chirps) and a $\boldsymbol{k}$-sparse reconstruction
algorithm with sublinear complexity. In the presence of stochastic noise in the
data domain, this paper derives bounds on the $\boldsymbol{\ell_2}$ accuracy of
approximation in terms of the $\boldsymbol{\ell_2}$ norm of the measurement
noise and the accuracy of the best $\boldsymbol{k}$-sparse approximation, also
measured in the $\boldsymbol{\ell_2}$ norm. This type of $\boldsymbol{\ell_2
/\ell_2}$ bound is tighter than the standard $\boldsymbol{\ell_2 /\ell_1}$ or
$\boldsymbol{\ell_1/ \ell_1}$ bounds.
|
0806.3802
|
Efficient and Robust Compressed Sensing using High-Quality Expander
Graphs
|
cs.IT math.IT
|
Expander graphs have been recently proposed to construct efficient compressed
sensing algorithms. In particular, it has been shown that any $n$-dimensional
vector that is $k$-sparse (with $k\ll n$) can be fully recovered using
$O(k\log\frac{n}{k})$ measurements and only $O(k\log n)$ simple recovery
iterations. In this paper we improve upon this result by considering expander
graphs with expansion coefficient beyond 3/4 and show that, with the same
number of measurements, only $O(k)$ recovery iterations are required, which is
a significant improvement when $n$ is large. In fact, full recovery can be
accomplished by at most $2k$ very simple iterations. The number of iterations
can be made arbitrarily close to $k$, and the recovery algorithm can be
implemented very efficiently using a simple binary search tree. We also show
that by tolerating a small penalty on the number of measurements, and not on
the number of recovery iterations, one can use the efficient construction of a
family of expander graphs to come up with explicit measurement matrices for
this method. We compare our result with other recently developed
expander-graph-based methods and argue that it compares favorably both in terms
of the number of required measurements and in terms of the recovery time
complexity. Finally, we show how our analysis extends to give a robust
algorithm that finds the position and sign of the $k$ significant elements of
an almost $k$-sparse signal and then, using very simple optimization
techniques, finds in sublinear time a $k$-sparse signal which approximates the
original signal with very high precision.
|
0806.3849
|
Separability in the Ambient Logic
|
cs.LO cs.MA cs.PL
|
The Ambient Logic (AL) has been proposed for expressing properties of
process mobility in the calculus of Mobile Ambients (MA), and as a basis for
query languages on semistructured data. We study some basic questions
concerning the discriminating power of AL, focusing on the equivalence on
processes induced by the logic ($=_L$). As underlying calculi, besides MA we
consider a subcalculus in which an image-finiteness condition holds and that we
prove to be Turing complete. Synchronous variants of these calculi are studied
as well. In these calculi, we provide two operational characterisations of
$=_L$: a coinductive one (as a form of bisimilarity) and an inductive one
(based on structural properties of processes). After showing $=_L$ to be
strictly finer than barbed congruence, we establish axiomatisations of $=_L$ on
the subcalculus of MA (both the asynchronous and the synchronous version),
enabling us to relate $=_L$ to structural congruence. We also present some
(un)decidability results that are related to the above separation properties
for AL: the undecidability of $=_L$ on MA and its decidability on the
subcalculus.
|
0806.3885
|
Conceptualization of seeded region growing by pixels aggregation. Part
1: the framework
|
cs.CV
|
Adams and Bischof proposed in 1994 a novel region growing algorithm
called seeded region growing by pixels aggregation (SRGPA). This paper
introduces a framework to implement an algorithm using SRGPA. This framework is
built around two concepts: localization and organization of applied actions.
This conceptualization gives a quick implementation of algorithms, a direct
translation between the mathematical idea and the numerical implementation, and
an improvement in algorithm efficiency.
|
0806.3887
|
Conceptualization of seeded region growing by pixels aggregation. Part
2: how to localize a final partition invariant about the seeded region
initialisation order
|
cs.CV
|
In the previous paper, we conceptualized the localization and the
organization of seeded region growing by pixels aggregation (SRGPA), but we did
not address the issue of collisions between two distinct regions during the
growing process. In this paper, we propose two implementations to manage two
classical growing processes: one without a boundary region to divide the other
regions and one with. Unfortunately, as noticed by Mehnert and Jackway (1997),
this partition depends on the seeded region initialisation order (SRIO). We
propose a growing process, invariant to the SRIO, in which the boundary region
is the set of ambiguous pixels.
|
0806.3928
|
Conceptualization of seeded region growing by pixels aggregation. Part
3: a wide range of algorithms
|
cs.CV
|
In the two previous papers of this series, we created a library, called
Population, dedicated to seeded region growing by pixels aggregation (SRGPA),
and we proposed different growing processes to obtain a partition with or
without a boundary region dividing the other regions, or a partition invariant
to the seeded region initialisation order. Building on this work, we implement
several algorithms from the field of SRGPA using this library and these
growing processes.
|
0806.3938
|
Cooperation with Complement is Better
|
cs.MA physics.soc-ph
|
In a setting where heterogeneous agents interact to accomplish a given set of
goals, cooperation is of utmost importance, especially when agents cannot
achieve their individual goals by exclusive use of their own efforts. Even when
we consider friendly environments and benevolent agents, cooperation involves
several issues: with whom to cooperate, reciprocation, how to address credit
assignment and complex division of gains, etc. We propose a model where
heterogeneous agents cooperate by forming groups, and the formation of larger
groups is promoted. The benefit of an agent is proportional to the performance
and the size of its group, and there is time pressure to form a group. We
investigate how preferring similar or complementary agents in group formation
affects an agent's success. Preferring complementary agents is found to be
better, yet there is no need to push the strategy to the extreme since the
effect of complementary partners saturates.
|
0806.3939
|
Conceptualization of seeded region growing by pixels aggregation. Part
4: Simple, generic and robust extraction of grains in granular materials
obtained by X-ray tomography
|
cs.CV
|
This paper proposes a simple, generic and robust method to extract the grains
from experimental three-dimensional images of granular materials obtained by
X-ray tomography. This extraction has two steps: segmentation and splitting.
For the segmentation step, if there is sufficient contrast between the
different components, a classical threshold procedure followed by a succession
of morphological filters can be applied. If not, and if the boundary needs to
be localized precisely, a watershed transformation controlled by labels is
applied. The basis of this transformation is to localize one label inside
the component and another label in the complement of the component. A "soft"
threshold followed by an opening is applied to the initial image to localize a
label in a component. For any segmentation procedure, visualisation reveals a
problem: some pairs of grains, close to each other, become connected.
So if a classical cluster procedure is applied to the segmented binary image,
these numerically connected grains are considered a single grain. To overcome
this problem, we apply a procedure introduced by L. Vincent in 1993. This
grain extraction is tested on various complex porous media and granular
materials, to predict various properties (diffusion, electrical conductivity,
deformation field) in good agreement with experimental data.
|
0806.3949
|
Use of a Quantum Computer and the Quick Medical Reference To Give an
Approximate Diagnosis
|
quant-ph cs.AI
|
The Quick Medical Reference (QMR) is a compendium of statistical knowledge
connecting diseases to findings (symptoms). The information in QMR can be
represented as a Bayesian network. The inference problem (or, in more medical
language, giving a diagnosis) for the QMR is to, given some findings, find the
probability of each disease. Rejection sampling and likelihood weighted
sampling (a.k.a. likelihood weighting) are two simple algorithms for making
approximate inferences from an arbitrary Bayesian net (and from the QMR
Bayesian net in particular). Heretofore, the samples for these two algorithms
have been obtained with a conventional "classical computer". In this paper, we
will show that two analogous algorithms exist for the QMR Bayesian net, where
the samples are obtained with a quantum computer. We expect that these two
algorithms, implemented on a quantum computer, can also be used to make
inferences (and predictions) with other Bayesian nets.
|
0806.3978
|
Information In The Non-Stationary Case
|
q-bio.NC cs.IT math.IT q-bio.QM stat.ME
|
Information estimates such as the ``direct method'' of Strong et al. (1998)
sidestep the difficult problem of estimating the joint distribution of response
and stimulus by instead estimating the difference between the marginal and
conditional entropies of the response. While this is an effective estimation
strategy, it tempts the practitioner to ignore the role of the stimulus and the
meaning of mutual information. We show here that, as the number of trials
increases indefinitely, the direct (or ``plug-in'') estimate of marginal
entropy converges (with probability 1) to the entropy of the time-averaged
conditional distribution of the response, and the direct estimate of the
conditional entropy converges to the time-averaged entropy of the conditional
distribution of the response. Under joint stationarity and ergodicity of the
response and stimulus, the difference of these quantities converges to the
mutual information. When the stimulus is deterministic or non-stationary the
direct estimate of information no longer estimates mutual information, which is
no longer meaningful, but it remains a measure of variability of the response
distribution across time.
|
0806.4020
|
Design, Development and Testing of Underwater Vehicles: ITB Experience
|
cs.RO
|
The last decade has witnessed increasing worldwide interest in the research
of underwater robotics with particular focus on the area of autonomous
underwater vehicles (AUVs). Underwater robotics technology has enabled
humans to access the depths of the ocean to conduct environmental surveys,
resource mapping, as well as scientific and military missions. This capability
is especially valuable for countries with major water or oceanic resources. As
an archipelagic nation with more than 13,000 islands, Indonesia has one of the
most abundant living and non-organic oceanic resources. The needs for the
mapping, exploration, and environmental preservation of the vast marine
resources are therefore imperative. The challenge of the deep water exploration
has been the complex issues associated with hazardous and unstructured undersea
and sea-bed environments. The paper reports the design, development and testing
efforts of underwater vehicle that have been conducted at Institut Teknologi
Bandung. Key technology areas have been identified and step-by-step development
is presented in conjunction with the need to meet the challenge of underwater
vehicle operation. A number of future research directions are also highlighted.
|
0806.4021
|
Linear Parameter Varying Model Identification for Control of
Rotorcraft-based UAV
|
cs.RO
|
A rotorcraft-based unmanned aerial vehicle exhibits more complex properties
compared to its full-size counterparts due to its increased sensitivity to
control inputs and disturbances and higher bandwidth of its dynamics. As an
aerial vehicle with vertical take-off and landing capability, the helicopter
specifically poses a difficult problem of transition between forward flight and
unstable hover and vice versa. The LPV control technique explicitly takes into
account the change in performance due to the real-time parameter variations.
The technique therefore theoretically guarantees the performance and robustness
over the entire operating envelope. In this study, we investigate a new
approach implementing model identification for use in the LPV control
framework. The identification scheme employs recursive least square technique
implemented on the LPV system representing the dynamics of the helicopter
during a transition. The airspeed, as the scheduling parameter trajectory, is
not assumed to vary slowly. Removing the slow-parameter-variation requirement
allows the algorithm to be applied to aggressive maneuvering
without the need for expensive computation. The technique is tested
numerically and will be validated in the autonomous flight of a small scale
helicopter.
|
0806.4168
|
Established Clustering Procedures for Network Analysis
|
physics.soc-ph cs.SI physics.data-an stat.AP
|
In light of the burgeoning interest in network analysis in the new millennium,
we bring to the attention of contemporary network theorists a two-stage
double-standardization and hierarchical clustering (single-linkage-like)
procedure devised in 1974. In its many applications over the next
decade--primarily to the migration flows between geographic subdivisions within
nations--the presence of ``hubs'' was often revealed. These are, typically,
``cosmopolitan/non-provincial'' areas--such as the French capital, Paris--which
send and receive people relatively broadly across their respective nations.
Additionally, this two-stage procedure--which ``might very well be the most
successful application of cluster analysis'' (R. C. Dubes)--has detected many
(physically or socially) isolated groups (regions) of areas, such as those
forming the southern islands, Shikoku and Kyushu, of Japan, the Italian islands
of Sardinia and Sicily, and the New England region of the United States.
Further, we discuss a (complementary) approach developed in 1976, involving the
application of the max-flow/min-cut theorem to raw/non-standardized flows.
|
0806.4200
|
The Secrecy Rate Region of the Broadcast Channel
|
cs.IT math.IT
|
In this paper, we consider a scenario where a source node wishes to broadcast
two confidential messages for two respective receivers, while a wire-tapper
also receives the transmitted signal. This model is motivated by wireless
communications, where individual secure messages are broadcast over open media
and can be received by any illegitimate receiver. The secrecy level is measured
by equivocation rate at the eavesdropper. We first study the general
(non-degraded) broadcast channel with confidential messages. We present an
inner bound on the secrecy capacity region for this model. The inner bound
coding scheme is based on a combination of random binning and Gelfand-Pinsker
binning. This scheme matches Marton's inner bound on the broadcast channel
without the confidentiality constraint. We further study the
situation where the channels are degraded. For the degraded broadcast channel
with confidential messages, we present the secrecy capacity region. Our
achievable coding scheme is based on Cover's superposition scheme and random
binning. We refer to this scheme as Secret Superposition Scheme. In this
scheme, we show that randomization in the first layer increases the secrecy
rate of the second layer. This capacity region matches the capacity region of
the degraded broadcast channel without security constraint. It also matches the
secrecy capacity for the conventional wire-tap channel. Our converse proof is
based on a combination of the converse proof of the conventional degraded
broadcast channel and Csisz\'ar's lemma. Finally, we assume that the channels are
Additive White Gaussian Noise (AWGN) and show that secret superposition scheme
with Gaussian codebook is optimal. The converse proof is based on the
generalized entropy power inequality.
|
0806.4210
|
Agnostically Learning Juntas from Random Walks
|
cs.LG
|
We prove that the class of functions g:{-1,+1}^n -> {-1,+1} that only depend
on an unknown subset of k<<n variables (so-called k-juntas) is agnostically
learnable from a random walk in time polynomial in n, 2^{k^2}, epsilon^{-k},
and log(1/delta). In other words, there is an algorithm with the claimed
running time that, given epsilon, delta > 0 and access to a random walk on
{-1,+1}^n labeled by an arbitrary function f:{-1,+1}^n -> {-1,+1}, finds with
probability at least 1-delta a k-junta that is (opt(f)+epsilon)-close to f,
where opt(f) denotes the distance of a closest k-junta to f.
|
0806.4264
|
Online network coding for optimal throughput and delay -- the
three-receiver case
|
cs.IT math.IT
|
For a packet erasure broadcast channel with three receivers, we propose a new
coding algorithm that makes use of feedback to dynamically adapt the code. Our
algorithm is throughput optimal, and we conjecture that it also achieves an
asymptotically optimal average decoding delay at the receivers. We consider
heavy traffic asymptotics, where the load factor \rho approaches 1 from below
with either the arrival rate (\lambda) or the channel parameter (\mu) being
fixed at a number less than 1. We verify through simulations that our algorithm
achieves an asymptotically optimal decoding delay of O(1/(1-\rho)).
|
0806.4293
|
Scalar Quantization for Audio Data Coding
|
cs.MM cs.IT math.IT
|
This paper is concerned with scalar quantization of transform coefficients in
an audio codec. The generalized Gaussian distribution (GGD) is used as an
approximation of one-dimensional probability density function for transform
coefficients obtained by a modulated lapped transform (MLT) or modified
discrete cosine transform (MDCT) filterbank. The rationale for the model is provided in
comparison with theoretically achievable rate-distortion function. The
rate-distortion function computed for the random sequence obtained from a real
sequence of samples from a large database is compared with that computed for
random sequence obtained by a GGD random generator. A simple algorithm of
constructing the Extended Zero Zone (EZZ) quantizer is proposed. Simulation
results show that the EZZ quantizer yields a negligible loss in terms of coding
efficiency compared to optimal scalar quantizers. Furthermore, we describe an
adaptive version of the EZZ quantizer which works efficiently with low bitrate
requirements for transmitting side information.
|
0806.4341
|
On Sequences with Non-Learnable Subsequences
|
cs.AI cs.LG
|
The remarkable results of Foster and Vohra were a starting point for a series
of papers showing that any sequence of outcomes can be learned (with no
prior knowledge) using some universal randomized forecasting algorithm and
forecast-dependent checking rules. We show that for the class of all
computationally efficient outcome-forecast-based checking rules, this property
is violated. Moreover, we present a probabilistic algorithm generating, with
probability close to one, a sequence with a subsequence which simultaneously
miscalibrates all partially weakly computable randomized forecasting
algorithms. Following Dawid's prequential framework, we consider partial
recursive randomized algorithms.
|
0806.4391
|
Prediction with Expert Advice in Games with Unbounded One-Step Gains
|
cs.LG cs.AI
|
The games of prediction with expert advice are considered in this paper. We
present a modification of the Kalai-Vempala algorithm of following the
perturbed leader for the case of unrestrictedly large one-step gains. We show
that in the general case the cumulative gain of any probabilistic prediction
algorithm can be much worse than the gain of some expert of the pool.
Nevertheless, we give a lower bound for this cumulative gain in the general
case and construct a universal algorithm with optimal performance; we also
prove that in the case when the one-step gains of experts in the pool have
``limited deviations'' the performance of our algorithm is close to that of
the best expert.
|
0806.4415
|
On the inner and outer bounds of 3-receiver broadcast channels with
2-degraded message sets
|
cs.IT math.IT
|
We consider a broadcast channel with 3 receivers and 2 messages (M0, M1)
where two of the three receivers need to decode messages (M0, M1) while the
remaining one just needs to decode the message M0. We study the best known
inner and outer bounds under this setting, in an attempt to find the
deficiencies with the current techniques of establishing the bounds. We produce
a simple example where we are able to explicitly evaluate the inner bound and
show that it differs from the general outer bound. For a class of channels
where the general inner and outer bounds differ, we use a new argument to show
that the inner bound is tight.
|
0806.4422
|
Computationally Efficient Estimators for Dimension Reductions Using
Stable Random Projections
|
cs.LG
|
The method of stable random projections is a tool for efficiently computing
the $l_\alpha$ distances using low memory, where $0<\alpha \leq 2$ is a tuning
parameter. The method boils down to a statistical estimation task and various
estimators have been proposed, based on the geometric mean, the harmonic mean,
the fractional power, etc.
  This study proposes the optimal quantile estimator, whose main operation is
selection, which is considerably less expensive than taking fractional powers,
the main operation in previous estimators. Our experiments report that the
optimal quantile estimator is nearly one order of magnitude more
computationally efficient than previous estimators. For large-scale learning
tasks in which storing and computing pairwise distances is a serious
bottleneck, this estimator should be desirable.
In addition to its computational advantages, the optimal quantile estimator
exhibits nice theoretical properties. It is more accurate than previous
estimators when $\alpha>1$. We derive its theoretical error bounds and
establish the explicit (i.e., no hidden constants) sample complexity bound.
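The quantile idea can be sketched for the Gaussian case $\alpha = 2$ (our illustration, not the paper's estimator: the constant $0.6745 \approx \Phi^{-1}(0.75)$ is specific to median-based estimation with 2-stable projections, and the paper's optimal quantile choice is not reproduced here):

```python
import random
import statistics

def quantile_estimate_l2(x, y, k=2000, seed=0):
    """Estimate the l2 distance between x and y from k Gaussian
    (2-stable) random projections.  Each projected difference is
    N(0, d^2) with d = ||x - y||_2, and median(|N(0, d^2)|) is
    0.6745 * d, so dividing the sample median by 0.6745 estimates d.
    The main operation is selection (the median), not a fractional
    power."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(k):
        r = [rng.gauss(0.0, 1.0) for _ in x]  # one projection row
        diffs.append(abs(sum(ri * (xi - yi)
                             for ri, xi, yi in zip(r, x, y))))
    return statistics.median(diffs) / 0.6745

x = [1.0, 2.0, 3.0, 4.0]
y = [0.0, 2.0, 1.0, 4.0]
print(5 ** 0.5)                  # true l2 distance, sqrt(5)
print(quantile_estimate_l2(x, y))
```

In practice one would store only the k projected values per data point, which is the point of the dimension reduction: the pairwise estimate needs the projections, never the original vectors.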
|
0806.4423
|
On Approximating the Lp Distances for p>2
|
cs.LG
|
Applications in machine learning and data mining require computing pairwise
Lp distances in a data matrix A. For massive high-dimensional data, computing
all pairwise distances of A can be infeasible. In fact, even storing A or all
pairwise distances of A in memory may also be infeasible. This paper
proposes a simple method for p = 2, 4, 6, ... We first decompose the l_p (where
p is even) distances into a sum of 2 marginal norms and p-1 ``inner products''
at different orders. Then we apply normal or sub-Gaussian random projections to
approximate the resultant ``inner products,'' assuming that the marginal norms
can be computed exactly by a linear scan. We propose two strategies for
applying random projections. The basic projection strategy requires only one
projection matrix but it is more difficult to analyze, while the alternative
projection strategy requires p-1 projection matrices but its theoretical
analysis is much easier. In terms of accuracy, at least for p=4, the basic
strategy is always more accurate than the alternative strategy if the data are
non-negative, which is common in reality.
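For p = 4 the decomposition is just the binomial theorem: ||x-y||_4^4 splits into the two marginal norms ||x||_4^4 and ||y||_4^4 plus p-1 = 3 cross terms <x^3,y>, <x^2,y^2>, <x,y^3>. The sketch below (our illustration, not the paper's code) verifies the identity and shows how normal random projections estimate one such inner product:

```python
import random

def l4_distance_decomposed(x, y):
    """Binomial decomposition of ||x - y||_4^4:
       sum (xi-yi)^4 = ||x||_4^4 - 4<x^3,y> + 6<x^2,y^2>
                       - 4<x,y^3> + ||y||_4^4.
    The two marginal norms are exact linear scans; in the paper's
    method the three cross terms are what random projections
    approximate."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    x2 = [a * a for a in x]; x3 = [a ** 3 for a in x]
    y2 = [b * b for b in y]; y3 = [b ** 3 for b in y]
    marg_x = dot(x3, x)                      # ||x||_4^4, exact
    marg_y = dot(y3, y)                      # ||y||_4^4, exact
    cross = -4 * dot(x3, y) + 6 * dot(x2, y2) - 4 * dot(x, y3)
    return marg_x + cross + marg_y

def approx_inner(u, v, k=4000, seed=1):
    """Normal random projections: E[(r.u)(r.v)] = <u, v> when the
    entries of r are iid N(0, 1); average over k projections."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(k):
        r = [rng.gauss(0.0, 1.0) for _ in u]
        acc += (sum(ri * ui for ri, ui in zip(r, u))
                * sum(ri * vi for ri, vi in zip(r, v)))
    return acc / k

x = [1.0, 0.5, -2.0]
y = [0.0, 1.5, -1.0]
exact = sum((a - b) ** 4 for a, b in zip(x, y))
print(abs(l4_distance_decomposed(x, y) - exact) < 1e-9)  # identity holds
print(approx_inner(x, y))  # estimates <x, y> = 2.75
```

Replacing each cross term by its projection-based estimate gives the paper's basic strategy; the alternative strategy would draw a fresh projection matrix per order.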
|
0806.4451
|
Counteracting Byzantine Adversaries with Network Coding: An Overhead
Analysis
|
cs.IT cs.CR math.IT
|
Network coding increases throughput and is robust against failures and
erasures. However, since it allows mixing of information within the network, a
single corrupted packet generated by a Byzantine attacker can easily
contaminate the information delivered to multiple destinations.
In this paper, we study the transmission overhead associated with three
different schemes for detecting Byzantine adversaries at a node using network
coding: end-to-end error correction, packet-based Byzantine detection scheme,
and generation-based Byzantine detection scheme. In end-to-end error
correction, it is known that we can correct up to the min-cut between the
source and destinations. However, if we use Byzantine detection schemes, we can
detect polluted data, drop them, and therefore, only transmit valid data. For
the dropped data, the destinations perform erasure correction, which is
computationally lighter than error correction. We show that, with enough
attackers present in the network, Byzantine detection schemes may improve the
throughput of the network since we choose to forward only reliable information.
When the probability of attack is high, a packet-based detection scheme is the
most bandwidth efficient; however, when the probability of attack is low, the
overhead involved with signing each packet becomes costly, and the
generation-based scheme may be preferred. Finally, we characterize the tradeoff
between generation size and overhead of detection in bits as the probability of
attack increases in the network.
|
0806.4468
|
On Ergodic Sum Capacity of Fading Cognitive Multiple-Access and
Broadcast Channels
|
cs.IT math.IT
|
This paper studies the information-theoretic limits of a secondary or
cognitive radio (CR) network under spectrum sharing with an existing primary
radio network. In particular, the fading cognitive multiple-access channel
(C-MAC) is first studied, where multiple secondary users transmit to the
secondary base station (BS) under both individual transmit-power constraints
and a set of interference-power constraints each applied at one of the primary
receivers. This paper considers the long-term (LT) or the short-term (ST)
transmit-power constraint over the fading states at each secondary transmitter,
combined with the LT or ST interference-power constraint at each primary
receiver. In each case, the optimal power allocation scheme is derived for the
secondary users to achieve the ergodic sum capacity of the fading C-MAC, as
well as the conditions for the optimality of the dynamic
time-division-multiple-access (D-TDMA) scheme in the secondary network. The
fading cognitive broadcast channel (C-BC) that models the downlink transmission
in the secondary network is then studied under the LT/ST transmit-power
constraint at the secondary BS jointly with the LT/ST interference-power
constraint at each of the primary receivers. It is shown that D-TDMA is indeed
optimal for achieving the ergodic sum capacity of the fading C-BC for all
combinations of transmit-power and interference-power constraints.
|
0806.4484
|
On empirical meaning of randomness with respect to a real parameter
|
cs.LG cs.AI
|
We study the empirical meaning of randomness with respect to a family of
probability distributions $P_\theta$, where $\theta$ is a real parameter, using
algorithmic randomness theory. In the case when for a computable probability
distribution $P_\theta$ an effectively strongly consistent estimate exists, we
show that Levin's a priori semicomputable semimeasure of the set of all
$P_\theta$-random sequences is positive if and only if the parameter $\theta$
is a computable real number. Different methods for generating
``meaningful'' $P_\theta$-random sequences with noncomputable $\theta$ are
discussed.
|
0806.4510
|
On Field Size and Success Probability in Network Coding
|
cs.IT math.IT
|
Using tools from algebraic geometry and Groebner basis theory we solve two
problems in network coding. First we present a method to determine the smallest
field size for which linear network coding is feasible. Second we derive
improved estimates on the success probability of random linear network coding.
These estimates take into account which monomials occur in the support of the
determinant of the product of Edmonds matrices. Therefore we finally
investigate which monomials can occur in the determinant of the Edmonds matrix.
|
0806.4511
|
The model of quantum evolution
|
cs.AI
|
This paper has been withdrawn by the author due to extremely unscientific
errors.
|
0806.4572
|
Problems of robustness for universal coding schemes
|
cs.IT cs.OH math.IT
|
The Lempel-Ziv universal coding scheme is asymptotically optimal for the
class of all stationary ergodic sources. A problem of robustness of this
property under small violations of ergodicity is studied. A notion of
deficiency of algorithmic randomness is used as a measure of disagreement
between data sequence and probability measure. We prove that universal
compressing schemes from a large class are non-robust in the following sense:
if the randomness deficiency grows arbitrarily slowly on initial fragments of
an infinite sequence then the property of asymptotic optimality of any
universal compressing algorithm can be violated. Lempel-Ziv compressing
algorithms are robust on infinite sequences generated by ergodic Markov chains
when the randomness deficiency of their initial fragments of length $n$ grows
as $o(n)$.
|
0806.4627
|
SP2Bench: A SPARQL Performance Benchmark
|
cs.DB cs.PF
|
Recently, the SPARQL query language for RDF has reached the W3C
recommendation status. In response to this emerging standard, the database
community is currently exploring efficient storage techniques for RDF data and
evaluation strategies for SPARQL queries. A meaningful analysis and comparison
of these approaches necessitates a comprehensive and universal benchmark
platform. To this end, we have developed SP^2Bench, a publicly available,
language-specific SPARQL performance benchmark. SP^2Bench is settled in the
DBLP scenario and comprises both a data generator for creating arbitrarily
large DBLP-like documents and a set of carefully designed benchmark queries.
The generated documents mirror key characteristics and social-world
distributions encountered in the original DBLP data set, while the queries
implement meaningful requests on top of this data, covering a variety of SPARQL
operator constellations and RDF access patterns. As a proof of concept, we
apply SP^2Bench to existing engines and discuss their strengths and weaknesses
that follow immediately from the benchmark results.
|