| id | title | categories | abstract |
|---|---|---|---|
0808.2515
|
Provably efficient instanton search algorithm for LP decoding of LDPC
codes over the BSC
|
cs.IT cond-mat.stat-mech math.IT
|
We consider Linear Programming (LP) decoding of a fixed Low-Density
Parity-Check (LDPC) code over the Binary Symmetric Channel (BSC). The LP
decoder fails when it outputs a pseudo-codeword which is not a codeword. We
design an efficient algorithm termed the Instanton Search Algorithm (ISA)
which, given a random input, generates a set of flips called the BSC-instanton.
We prove that: (a) the LP decoder fails for any set of flips with support
vector including an instanton; (b) for any input, the algorithm outputs an
instanton in a number of steps upper-bounded by twice the number of flips in
the input. Repeated a sufficient number of times, the ISA yields the set of
unique instantons of different sizes.
|
0808.2530
|
Fair Scheduling in Networks Through Packet Election
|
cs.IT math.IT
|
We consider the problem of designing a fair scheduling algorithm for
discrete-time constrained queuing networks. Each queue has dedicated exogenous
packet arrivals. There are constraints on which queues can be served
simultaneously. This model effectively describes important special instances
like network switches, interference in wireless networks, bandwidth sharing for
congestion control and traffic scheduling in road roundabouts. Fair scheduling
is required because it provides isolation to different traffic flows; isolation
makes the system more robust and enables providing quality of service. Existing
work on fairness for constrained networks concentrates on flow based fairness.
As a main result, we describe a notion of packet based fairness by establishing
an analogy with the ranked election problem: packets are voters, schedules are
candidates and each packet ranks the schedules based on its priorities. We then
obtain a scheduling algorithm that achieves the described notion of fairness by
drawing upon the seminal work of Goodman and Markowitz (1952). This yields the
familiar Maximum Weight (MW) style algorithm. As another important result we
prove that the algorithm obtained is throughput optimal. There is no reason a
priori why this should be true, and the proof requires non-traditional methods.
|
0808.2548
|
Negative Beta Encoder
|
cs.IT math.IT
|
A new class of analog-to-digital (A/D) and digital-to-analog (D/A) converters
using a flaky quantiser, called the $\beta$-encoder, has been shown to have
exponential bit rate accuracy while possessing a self-correction property for
fluctuations of the amplifier factor $\beta$ and the quantiser threshold $\nu$.
The probabilistic behavior of such a flaky quantiser is explained as the
deterministic dynamics of the multi-valued R\'enyi map. That is, a sample $x$
is always confined to a contracted subinterval while successive approximations
of $x$ are performed using $\beta$-expansion even if $\nu$ may vary at each
iteration. This viewpoint enables us to get the decoded sample, which is equal
to the midpoint of the subinterval, and its associated characteristic equation
for recovering $\beta$ which improves the quantisation error by more than
$3{dB}$ when $\beta>1.5$. The invariant subinterval under the R\'enyi map shows
that $\nu$ should be set to around the midpoint of its associated greedy and
lazy values.
Furthermore, a new A/D converter is introduced called the negative
$\beta$-encoder, which further improves the quantisation error of the
$\beta$-encoder. A two-state Markov chain describing the $\beta$-encoder
suggests that a negative eigenvalue of its associated transition probability
matrix reduces the quantisation error.
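A minimal sketch of the greedy $\beta$-expansion quantiser underlying this family of encoders (not the paper's negative $\beta$-encoder; function names and the choice $\beta=1.7$ are illustrative). The self-correction property shows up directly: any threshold `nu` in the "flaky" interval $[1, 1/(\beta-1)]$ leaves the decoded midpoint estimate exponentially accurate.

```python
def beta_encode(x, beta=1.7, n_bits=24, nu=1.0):
    # Greedy beta-expansion of x in [0, 1/(beta-1)): bits b_i in {0, 1} with
    # x = sum_i b_i * beta**-i + (residual) * beta**-n.  The invariant
    # u in [0, 1/(beta-1)) survives for ANY threshold nu in [1, 1/(beta-1)],
    # which is the self-correction property for a flaky quantiser.
    bits = []
    u = x
    for _ in range(n_bits):
        u *= beta
        b = 1 if u >= nu else 0
        bits.append(b)
        u -= b
    return bits

def beta_decode(bits, beta=1.7):
    # Lower endpoint of the confinement interval, plus half its width
    # (midpoint decoding): error <= beta**-n / (2 * (beta - 1)).
    lo = sum(b * beta ** -(i + 1) for i, b in enumerate(bits))
    return lo + beta ** -len(bits) / (2 * (beta - 1))
```

With 24 bits and $\beta=1.7$ the reconstruction error is below $2\cdot10^{-6}$ regardless of where `nu` sits inside the flaky interval.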
|
0808.2562
|
Spectrum Sensing Algorithms for Cognitive Radio Based on Statistical
Covariances
|
cs.IT math.IT
|
Spectrum sensing, i.e., detecting the presence of primary users in a licensed
spectrum, is a fundamental problem in cognitive radio. Since the statistical
covariances of the received signal and noise are usually different, they can be
used to differentiate the case where the primary user's signal is present from
the case where there is only noise. In this paper, spectrum sensing algorithms
are proposed based on the sample covariance matrix calculated from a limited
number of received signal samples. Two test statistics are then extracted from
the sample covariance matrix. A decision on the signal presence is made by
comparing the two test statistics. Theoretical analysis for the proposed
algorithms is given. Detection probability and associated threshold are found
based on statistical theory. The methods do not require prior knowledge of the
signal, the channel, or the noise power. Also, no synchronization is
needed. Simulations based on narrowband signals, captured digital television
(DTV) signals and multiple antenna signals are presented to verify the methods.
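One known instantiation of this idea is a covariance-absolute-value style detector; a sketch follows (numpy assumed; the smoothing length `L` and threshold `gamma` are illustrative, since in practice the threshold comes from the false-alarm analysis).

```python
import numpy as np

def cav_statistics(samples, L=8):
    """Form an L x L sample covariance matrix from delayed signal vectors and
    extract two test statistics: total absolute covariance (t1) and its
    diagonal part (t2).  For white noise the off-diagonal entries vanish on
    average, so t1/t2 -> 1; a correlated primary signal pushes t1/t2 > 1."""
    x = np.asarray(samples, dtype=float)
    N = len(x) - L + 1
    X = np.stack([x[n:n + L] for n in range(N)])  # delayed signal vectors
    R = X.T @ X / N                               # sample covariance matrix
    t1 = np.abs(R).sum() / L
    t2 = np.abs(np.diag(R)).sum() / L
    return t1, t2

def detect(samples, L=8, gamma=1.2):
    # gamma is illustrative; the analysis referenced above sets it from the
    # desired false-alarm probability.
    t1, t2 = cav_statistics(samples, L)
    return t1 / t2 > gamma
```

Note that the decision uses only the received samples: no signal, channel, or noise-power knowledge enters the statistic.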
|
0808.2659
|
Distributed Source Coding using Abelian Group Codes
|
cs.IT math.IT
|
In this work, we consider a distributed source coding problem with a joint
distortion criterion depending on the sources and the reconstruction. This
includes as a special case the problem of computing a function of the sources
to within some distortion and also the classic Slepian-Wolf problem,
Berger-Tung problem, Wyner-Ziv problem, Yeung-Berger problem and the
Ahlswede-Korner-Wyner problem. While the prevalent trend in information theory
has been to prove achievability results using Shannon's random coding
arguments, structured random codes offer rate gains over unstructured
random codes for many problems. Motivated by this, we present a new achievable
rate-distortion region for this problem for discrete memoryless sources based
on "good" structured random nested codes built over abelian groups. We
demonstrate rate gains for this problem over traditional coding schemes using
random unstructured codes. For certain sources and distortion functions, the
new rate region is strictly bigger than the Berger-Tung rate region, which has
been the best known achievable rate region for this problem till now. Further,
there is no known unstructured random coding scheme that achieves these rate
gains. Achievable performance limits for single-user source coding using
abelian group codes are also obtained as parts of the proof of the main coding
theorem. As a corollary, we also prove that nested linear codes achieve the
Shannon rate-distortion bound in the single-user setting.
|
0808.2670
|
Solving the apparent diversity-accuracy dilemma of recommender systems
|
cs.IR physics.soc-ph
|
Recommender systems use data on past user preferences to predict possible
future likes and interests. A key challenge is that while the most useful
individual recommendations are to be found among diverse niche objects, the
most reliably accurate results are obtained by methods that recommend objects
based on user or object similarity. In this paper we introduce a new algorithm
specifically to address the challenge of diversity and show how it can be used
to resolve this apparent dilemma when combined in an elegant hybrid with an
accuracy-focused algorithm. By tuning the hybrid appropriately we are able to
obtain, without relying on any semantic or context-specific information,
simultaneous gains in both accuracy and diversity of recommendations.
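A sketch of the kind of one-parameter hybrid described here, interpolating between a probabilistic-spreading (accuracy-oriented) and heat-spreading (diversity-oriented) kernel. The exponent placement follows my recollection of the published hybrid and should be treated as illustrative; numpy assumed.

```python
import numpy as np

def hybrid_scores(A, user, lam=0.5):
    """A: objects x users 0/1 rating matrix; user: index of the target user.
    lam = 1 recovers probabilistic spreading (accurate), lam = 0 heat
    spreading (diverse); intermediate lam trades the two off."""
    A = np.asarray(A, dtype=float)
    k_obj = A.sum(axis=1)                  # object degrees
    k_usr = A.sum(axis=0)                  # user degrees
    # W[a, b] ~ k_a**(lam - 1) * k_b**(-lam) * sum_i A[a, i] A[b, i] / k_usr[i]
    core = (A / k_usr) @ A.T
    W = (k_obj[:, None] ** (lam - 1)) * (k_obj[None, :] ** -lam) * core
    scores = W @ A[:, user]                # spread resource from user's items
    scores[A[:, user] > 0] = -np.inf       # never re-recommend collected items
    return scores
```

Ranking uncollected objects by `scores` gives the recommendation list; sweeping `lam` traces the accuracy-diversity trade-off without any semantic information.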
|
0808.2703
|
Low-Signal-Energy Asymptotics of Capacity and Mutual Information for the
Discrete-Time Poisson Channel
|
cs.IT math.IT
|
The first terms of the low-signal-energy asymptotics for the mutual
information in the discrete-time Poisson channel are derived and compared to an
asymptotic expression of the capacity. In the presence of non-zero additive
noise (either Poisson or geometric), the mutual information is concave at zero
signal-energy and the minimum energy per bit is not attained at zero capacity.
Fixed signal constellations which scale with the signal energy do not attain
the minimum energy per bit. The minimum energy per bit is zero when additive
Poisson noise is present and $\varepsilon\log 2$ when additive geometric noise of mean
$\varepsilon$ is present.
|
0808.2833
|
Efficient tests for equivalence of hidden Markov processes and quantum
random walks
|
cs.IT math.IT
|
While two hidden Markov process (HMP) resp. quantum random walk (QRW)
parametrizations can differ from one another, the stochastic processes arising
from them can be equivalent. Here a polynomial-time algorithm is presented
which can determine equivalence of two HMP parametrizations $\mathcal{M}_1,\mathcal{M}_2$ resp.
two QRW parametrizations $\mathcal{Q}_1,\mathcal{Q}_2$ in time $O(|\Sigma|\max(N_1,N_2)^{4})$,
where $N_1,N_2$ are the number of hidden states in $\mathcal{M}_1,\mathcal{M}_2$ resp. the
dimension of the state spaces associated with $\mathcal{Q}_1,\mathcal{Q}_2$, and $\Sigma$ is the
set of output symbols. Previously available algorithms for testing equivalence
of HMPs were exponential in the number of hidden states. In the case of QRWs,
algorithms for testing equivalence had not yet been presented. The core
subroutines of this algorithm can also be used to efficiently test hidden
Markov processes and quantum random walks for ergodicity.
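The polynomial-time flavour of such a test can be illustrated with the classical basis-closure technique for equivalence of probabilistic automata (Tzeng-style); this is a sketch of that standard method, not necessarily the paper's exact algorithm, and numpy is assumed.

```python
import numpy as np

def hmp_equivalent(pi1, T1, pi2, T2, tol=1e-9):
    """Basis-closure equivalence test for two HMPs given in observation-matrix
    form: T[sym][i, j] = P(emit sym and move i -> j | state i), pi the initial
    distribution, so P(w) = pi @ T[w_1] @ ... @ T[w_n] @ 1.  The basis has at
    most N1 + N2 vectors, so the loop terminates in polynomial time."""
    n1 = len(pi1)
    start = np.concatenate([pi1, pi2])
    basis, queue = [], [start]
    while queue:
        v = queue.pop()
        if basis:
            B = np.stack(basis)
            coef, *_ = np.linalg.lstsq(B.T, v, rcond=None)
            if np.linalg.norm(B.T @ coef - v) < tol:
                continue      # v already in the span: successors add nothing
        basis.append(v)
        for sym in T1:        # both models share the output alphabet
            queue.append(np.concatenate([v[:n1] @ T1[sym], v[n1:] @ T2[sym]]))
    # For the vector of word w, sum of the first block minus sum of the second
    # equals P1(w) - P2(w); equivalence iff this vanishes on the whole basis.
    return all(abs(b[:n1].sum() - b[n1:].sum()) < tol for b in basis)
```

A state-permuted copy of a model is declared equivalent; changing an emission distribution is caught after a single closure step.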
|
0808.2837
|
List Decoding of Burst Errors
|
cs.IT cs.DM math.IT
|
A generalization of the Reiger bound is presented for the list decoding of
burst errors. It is then shown that Reed-Solomon codes attain this bound.
|
0808.2904
|
Investigation of the Zipf-plot of the extinct Meroitic language
|
cs.CL
|
The ancient and extinct language Meroitic is investigated using Zipf's Law.
In particular, since Meroitic is still undeciphered, the Zipf law analysis
allows us to assess the quality of current texts and possible avenues for
future investigation using statistical techniques.
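The core of such an analysis is a log-log fit of frequency against rank; a minimal stdlib-only sketch (function name illustrative):

```python
from collections import Counter
import math

def zipf_slope(tokens):
    """Least-squares slope of log(frequency) vs log(rank).  A corpus obeying
    Zipf's law gives a slope near -1; systematic deviations flag problems
    with the underlying text, which is the diagnostic use described above."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```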
|
0808.2931
|
Spatial planning with constraints on translational distances between
geometric objects
|
cs.CG cs.RO
|
The main constraint on relative position of geometric objects, used in
spatial planning for computing the C-space maps (for example, in robotics, CAD,
and packaging), is the relative non-overlapping of objects. This is the
simplest constraint in which the minimum translational distance between objects
is greater than zero, or more generally, than some positive value. We present a
technique, based on the Minkowski operations, for generating the translational
C-space maps for spatial planning with more general and more complex
constraints on the relative position of geometric objects, such as constraints
on various types (not only on the minimum) of the translational distances
between objects. The developed technique can also be used, respectively, for
spatial planning with constraints on translational distances in a given
direction, and rotational distances between geometric objects, as well as for
spatial planning with given dynamic geometric situation of moving objects.
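For the simplest (non-overlapping) constraint, the translational C-obstacle of a convex obstacle for a convex robot is the Minkowski sum of the obstacle with the reflected robot; a stdlib-only sketch via vertex sums and a convex hull (valid for convex polygons only, names illustrative):

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the last turn is clockwise or collinear
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons given as vertex lists."""
    return convex_hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])

def reflect(P):
    # C-obstacle of obstacle O for robot R: minkowski_sum(O, reflect(R));
    # reference-point placements inside it violate non-overlapping.
    return [(-x, -y) for x, y in P]
```

The more general translational-distance constraints discussed above replace this single sum with further Minkowski operations, but the computational core is the same.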
|
0808.2964
|
Estimating the Lengths of Memory Words
|
cs.IT math.IT
|
For a stationary stochastic process $\{X_n\}$ with values in some set $A$, a
finite word $w \in A^K$ is called a memory word if the conditional probability
of $X_0$ given the past is constant on the cylinder set defined by
$X_{-K}^{-1}=w$. It is called a minimal memory word if no proper suffix of
$w$ is also a memory word. For example, in a $K$-step Markov process all words
of length $K$ are memory words but not necessarily minimal. We consider the
problem of determining the lengths of the longest minimal memory words and the
shortest memory words of an unknown process $\{X_n\}$ based on sequentially
observing the outputs of a single sample $\{\xi_1,\xi_2,...\xi_n\}$. We will
give a universal estimator which converges almost surely to the length of the
longest minimal memory word and show that no such universal estimator exists
for the length of the shortest memory word. The alphabet $A$ may be finite or
countable.
|
0808.2984
|
Building an interpretable fuzzy rule base from data using Orthogonal
Least Squares: Application to a depollution problem
|
cs.LG cs.AI
|
In many fields where human understanding plays a crucial role, such as
bioprocesses, the capacity of extracting knowledge from data is of critical
importance. Within this framework, fuzzy learning methods, if properly used,
can greatly help human experts. Amongst these methods, the aim of orthogonal
transformations, which have been proven to be mathematically robust, is to
build rules from a set of training data and to select the most important ones
by linear regression or rank revealing techniques. The OLS algorithm is a good
representative of those methods. However, it was originally designed so that it
only cared about numerical performance. Thus, we propose some modifications of
the original method to take interpretability into account. After recalling the
original algorithm, this paper presents the changes made to the original
method, then discusses some results obtained from benchmark problems. Finally,
the algorithm is applied to a real-world fault detection depollution problem.
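The numerical core of the rule-selection step is forward Orthogonal Least Squares: greedily pick the candidate column whose orthogonalized direction explains the largest share of the output energy. A minimal sketch (numpy assumed; this is the generic OLS step, not the paper's interpretability-aware modification):

```python
import numpy as np

def ols_select(Phi, y, n_select):
    """Forward OLS: Phi's columns are candidate regressors (rule activations),
    y the target.  Each round orthogonalizes the remaining columns against the
    chosen ones and keeps the column with the largest error reduction."""
    Phi = np.asarray(Phi, float)
    y = np.asarray(y, float)
    selected, Q = [], []
    for _ in range(n_select):
        best, best_err, best_w = None, -1.0, None
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            w = Phi[:, j].copy()
            for q in Q:                     # Gram-Schmidt against chosen cols
                w -= (q @ Phi[:, j]) / (q @ q) * q
            if w @ w < 1e-12:
                continue                    # numerically dependent column
            err = (w @ y) ** 2 / (w @ w)    # unnormalized error reduction
            if err > best_err:
                best, best_err, best_w = j, err, w
        if best is None:
            break
        selected.append(best)
        Q.append(best_w)
    return selected
```

The interpretability-oriented changes discussed above act on top of this loop (e.g., on which candidates are admissible), not on the orthogonalization itself.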
|
0808.3003
|
Codes Associated with Orthogonal Groups and Power Moments of Kloosterman
Sums
|
math.NT cs.IT math.IT
|
In this paper, we construct three binary linear codes $C(SO^{-}(2,q))$,
$C(O^{-}(2,q))$, $C(SO^{-}(4,q))$, respectively associated with the orthogonal
groups $SO^{-}(2,q)$, $O^{-}(2,q)$, $SO^{-}(4,q)$, with $q$ powers of two. Then
we obtain recursive formulas for the power moments of Kloosterman and
2-dimensional Kloosterman sums in terms of the frequencies of weights in the
codes. This is done via Pless power moment identity and by utilizing the
explicit expressions of Gauss sums for the orthogonal groups. We emphasize
that, when the recursive formulas for the power moments of Kloosterman sums are
compared, the present one is computationally more effective than the previous
one constructed from the special linear group $SL(2,q)$. We illustrate our
results with some examples.
|
0808.3109
|
n-ary Fuzzy Logic and Neutrosophic Logic Operators
|
cs.AI
|
We extend Knuth's 16 Boolean binary logic operators to fuzzy logic and
neutrosophic logic binary operators. Then we generalize them to n-ary fuzzy
logic and neutrosophic logic operators using the Smarandache codification of
the Venn diagram and a defined vector neutrosophic law. In this way, new
operators in neutrosophic logic/set/probability are built.
|
0808.3145
|
Approximate capacity of the two-way relay channel: A deterministic
approach
|
cs.IT math.IT
|
We study the capacity of the full-duplex bidirectional (or two-way) relay
channel with two nodes and one relay. The channels in the forward direction are
assumed to be different (in general) than the channels in the backward
direction, i.e. channel reciprocity is not assumed. We use the recently
proposed deterministic approach to capture the essence of the problem and to
determine a good transmission and relay strategy for the Gaussian channel.
Depending on the ratio of the individual channel gains, we propose to use
either a simple amplify-and-forward or a particular superposition coding
strategy at the relay. We analyze the achievable rate region and show that the
scheme achieves to within 3 bits of the cut-set bound for all values of channel
gains.
|
0808.3214
|
The discrete Fourier transform: A canonical basis of eigenfunctions
|
cs.IT cs.DM math.IT math.RT
|
The discrete Fourier transform (DFT) is an important operator which acts on
the Hilbert space of complex valued functions on the ring Z/NZ. In the case
where N=p is an odd prime number, we exhibit a canonical basis of eigenvectors
for the DFT. The transition matrix from the standard basis to the canonical
basis defines a novel transform which we call the "discrete oscillator
transform" (DOT for short). Finally, we describe a fast algorithm for computing
the DOT in certain cases.
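A quick numerical check (numpy assumed) of the structure that makes a canonical eigenbasis non-trivial: since $F^4 = I$ for the unitary DFT, every eigenvalue is a fourth root of unity, so the eigenspaces have large multiplicity and an eigenbasis is highly non-unique.

```python
import numpy as np

def dft_matrix(N):
    """Unitary DFT matrix acting on complex-valued functions on Z/NZ."""
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

F = dft_matrix(11)                # N = p an odd prime, as in the abstract
eigvals = np.linalg.eigvals(F)    # all lie in {1, -1, i, -i}
```

Any numerical eigensolver returns *some* basis; the contribution above is singling out a canonical one, whose change-of-basis matrix defines the DOT.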
|
0808.3230
|
Phase Transitions on Fixed Connected Graphs and Random Graphs in the
Presence of Noise
|
math.OC cs.IT math.IT
|
In this paper, we study the phase transition behavior emerging from the
interactions among multiple agents in the presence of noise. We propose a
simple discrete-time model in which a group of non-mobile agents form either a
fixed connected graph or a random graph process, and each agent, taking bipolar
value either +1 or -1, updates its value according to its previous value and
the noisy measurements of the values of the agents connected to it. We present
proofs for the occurrence of the following phase transition behavior: At a
noise level higher than some threshold, the system generates symmetric behavior
(vapor or melt of magnetization) or disagreement; whereas at a noise level
lower than the threshold, the system exhibits spontaneous symmetry breaking
(solid or magnetization) or consensus. The threshold is found analytically. The
phase transition occurs for any dimension. Finally, we demonstrate the phase
transition behavior and all analytic results using simulations. This result may
be found useful in the study of the collective behavior of complex systems
under communication constraints.
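A toy, stdlib-only version of such a model (complete graph, uniform measurement noise; the exact update rule and noise law here are illustrative, not the paper's):

```python
import random

def simulate(n=50, noise=0.0, steps=30, seed=1):
    """Each agent holds a value in {+1, -1} and updates to the sign of the sum
    of its neighbours' values, each observed through additive uniform noise in
    [-noise, noise].  Returns the final magnetization (mean value); ordered
    initial condition.  Low noise preserves order (consensus), high noise
    destroys it, mirroring the phase transition described above."""
    rng = random.Random(seed)
    vals = [1] * n
    for _ in range(steps):
        new = []
        for i in range(n):
            s = sum(v + rng.uniform(-noise, noise)
                    for j, v in enumerate(vals) if j != i)
            new.append(1 if s >= 0 else -1)
        vals = new
    return sum(vals) / n
```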
|
0808.3231
|
Multi-Instance Multi-Label Learning
|
cs.LG cs.AI
|
In this paper, we propose the MIML (Multi-Instance Multi-Label learning)
framework where an example is described by multiple instances and associated
with multiple class labels. Compared to traditional learning frameworks, the
MIML framework is more convenient and natural for representing complicated
objects which have multiple semantic meanings. To learn from MIML examples, we
propose the MimlBoost and MimlSvm algorithms based on a simple degeneration
strategy, and experiments show that solving problems involving complicated
objects with multiple semantic meanings in the MIML framework can lead to good
performance. Considering that the degeneration process may lose information, we
propose the D-MimlSvm algorithm which tackles MIML problems directly in a
regularization framework. Moreover, we show that even when we do not have
access to the real objects and thus cannot capture more information from real
objects by using the MIML representation, MIML is still useful. We propose the
InsDif and SubCod algorithms. InsDif works by transforming single-instances
into the MIML representation for learning, while SubCod works by transforming
single-label examples into the MIML representation for learning. Experiments
show that in some tasks they are able to achieve better performance than
learning the single-instances or single-label examples directly.
|
0808.3281
|
On the diagonalization of the discrete Fourier transform
|
cs.IT cs.DM math.IT math.RT
|
The discrete Fourier transform (DFT) is an important operator which acts on
the Hilbert space of complex valued functions on the ring Z/NZ. In the case
where N=p is an odd prime number, we exhibit a canonical basis of eigenvectors
for the DFT. The transition matrix from the standard basis to the canonical
basis defines a novel transform which we call the discrete oscillator transform
(DOT for short). Finally, we describe a fast algorithm for computing the
discrete oscillator transform in certain cases.
|
0808.3296
|
Confirmation Bias and the Open Access Advantage: Some Methodological
Suggestions for the Davis Citation Study
|
cs.DL cs.DB
|
Davis (2008) analyzes citations from 2004-2007 in 11 biomedical journals. 15%
of authors paid to make their articles Open Access (OA). The outcome is a significant OA
citation Advantage, but a small one (21%). The author infers that the OA
advantage has been shrinking yearly, but the data suggest the opposite. Further
analyses are necessary:
(1) Not just author-choice (paid) OA but Free OA self-archiving needs to be
taken into account rather than being counted as non-OA.
(2) The proportion of OA articles per journal per year needs to be reported and
taken into account.
(3) The Journal Impact Factor and the relation between the size of the OA
Advantage and the article 'citation-bracket' need to be taken into account.
(4) The sample-size for the highest-impact, largest-sample journal analyzed,
PNAS, is restricted and excluded from some of the analyses. The full PNAS
dataset is needed.
(5) The interaction between OA and time, 2004-2007, is based on retrospective
data from a June 2008 total cumulative citation count. The dates of both the
cited articles and the citing articles need to be taken into account.
The author proposes that author self-selection bias is the primary cause
of the observed OA Advantage, but this study does not test this or any of
the other potential causal factors. The author suggests that paid OA is not
worth the cost, per extra citation. But with OA self-archiving both the OA and
the extra citations are free.
|
0808.3418
|
Jamming in Fixed-Rate Wireless Systems with Power Constraints - Part II:
Parallel Slow Fading Channels
|
cs.IT cs.CR math.IT
|
This is the second part of a two-part paper that studies the problem of
jamming in a fixed-rate transmission system with fading. In the first part, we
studied the scenario with a fast fading channel, and found Nash equilibria of
mixed strategies for short term power constraints, and for average power
constraints with and without channel state information (CSI) feedback. We also
solved the equally important maximin and minimax problems with pure strategies.
Whenever we dealt with average power constraints, we decomposed the problem
into two levels of power control, which we solved individually. In this second
part of the paper, we study the scenario with a parallel, slow fading channel,
which usually models multi-carrier transmissions, such as OFDM. Although the
framework is similar to the one in Part I \cite{myself3}, dealing with the slow
fading requires more intricate techniques. Unlike in the fast fading scenario,
where the frames supporting the transmission of the codewords were equivalent
and completely characterized by the channel statistics, in our present scenario
the frames are unique, and characterized by a specific set of channel
realizations. This leads to more involved inter-frame power allocation
strategies, and in some cases even to the need for a third level of power
control. We also show that for parallel slow fading channels, the CSI feedback
helps in the battle against jamming, as evidenced by the significant
degradation in system performance when CSI is not sent back. We expect this
degradation to decrease as the number of parallel channels $M$ increases, until
it becomes marginal for $M\to \infty$ (which can be considered as the case in
Part I).
|
0808.3431
|
Jamming in Fixed-Rate Wireless Systems with Power Constraints - Part I:
Fast Fading Channels
|
cs.IT cs.CR math.IT
|
This is the first part of a two-part paper that studies the problem of
jamming in a fixed-rate transmission system with fading. Both transmitter and
jammer are subject to power constraints which can be enforced over each
codeword (short-term / peak) or over all codewords (long-term / average), hence
generating different scenarios. All our jamming problems are formulated as
zero-sum games, having the probability of outage as pay-off function and power
control functions as strategies. The paper aims at providing a comprehensive
coverage of these problems, under fast and slow fading, peak and average power
constraints, pure and mixed strategies, with and without channel state
information (CSI) feedback. In this first part we study the fast fading
scenario. We first assume full CSI to be available to all parties. For peak
power constraints, a Nash equilibrium of pure strategies is found. For average
power constraints, both pure and mixed strategies are investigated. With pure
strategies, we derive the optimal power control functions for both intra-frame
and inter-frame power allocation. Maximin and minimax solutions are found and
shown to be different, which implies the non-existence of a saddle point. In
addition we provide alternative perspectives in obtaining the optimal
intra-frame power control functions under the long-term power constraints. With
mixed strategies, the Nash equilibrium is found by solving the generalized form
of an older problem dating back to Bell and Cover \cite{bell}. Finally, we
derive a Nash equilibrium of the game in which no CSI is fed back from the
receiver. We show that full channel state information brings only a very slight
improvement in the system's performance.
|
0808.3453
|
Codes on hypergraphs
|
cs.IT math.IT
|
Codes on hypergraphs are an extension of the well-studied family of codes on
bipartite graphs. Bilu and Hoory (2004) constructed an explicit family of codes
on regular t-partite hypergraphs whose minimum distance improves earlier
estimates of the distance of bipartite-graph codes. They also suggested a
decoding algorithm for such codes and estimated its error-correcting
capability.
In this paper we study two aspects of hypergraph codes. First, we compute the
weight enumerators of several ensembles of such codes, establishing conditions
under which they attain the Gilbert-Varshamov bound and deriving estimates of
their distance. In particular, we show that this bound is attained by codes
constructed on a fixed bipartite graph with a large spectral gap.
We also suggest a new decoding algorithm of hypergraph codes that corrects a
constant fraction of errors, improving upon the algorithm of Bilu and Hoory.
|
0808.3502
|
Cooperative Protocols for Random Access Networks
|
cs.IT math.IT
|
Cooperative communications have emerged as a significant concept to improve
reliability and throughput in wireless systems. On the other hand, WLANs based
on random access mechanism have become popular due to ease of deployment and
low cost. Since cooperation introduces extra transmissions among the
cooperating nodes and therefore increases the number of packet collisions, it
is not clear whether there is any benefit from using physical layer cooperation
under random access. In this paper, we develop new low complexity cooperative
protocols for random access that outperform the conventional non-cooperative
scheme for a large range of signal-to-noise ratios.
|
0808.3504
|
On the Growth Rate of the Weight Distribution of Irregular
Doubly-Generalized LDPC Codes
|
cs.IT math.IT
|
In this paper, an expression for the asymptotic growth rate of the number of
small linear-weight codewords of irregular doubly-generalized LDPC (D-GLDPC)
codes is derived. The expression is compact and generalizes existing results
for LDPC and generalized LDPC (GLDPC) codes. Assuming that there exist check
and variable nodes with minimum distance 2, it is shown that the growth rate
depends only on these nodes. An important connection between this new result
and the stability condition of D-GLDPC codes over the BEC is highlighted. Such
a connection, previously observed for LDPC and GLDPC codes, is now extended to
the case of D-GLDPC codes.
|
0808.3511
|
Conditional probability based significance tests for sequential patterns
in multi-neuronal spike trains
|
q-bio.NC cond-mat.dis-nn cs.DB q-bio.QM stat.ME
|
In this paper we consider the problem of detecting statistically significant
sequential patterns in multi-neuronal spike trains. These patterns are
characterized by ordered sequences of spikes from different neurons with
specific delays between spikes. We have previously proposed a data mining
scheme to efficiently discover such patterns which are frequent in the sense
that the count of non-overlapping occurrences of the pattern in the data stream
is above a threshold. Here we propose a method to determine the statistical
significance of these repeating patterns and to set the thresholds
automatically. The novelty of our approach is that we use a compound null
hypothesis that includes not only models of independent neurons but also models
where neurons have weak dependencies. The strength of interaction among the
neurons is represented in terms of certain pair-wise conditional probabilities.
We specify our null hypothesis by putting an upper bound on all such
conditional probabilities. We construct a probabilistic model that captures the
counting process and use this to calculate the mean and variance of the count
for any pattern. Using this we derive a test of significance for rejecting such
a null hypothesis. This also allows us to rank-order different significant
patterns. We illustrate the effectiveness of our approach using spike trains
generated from a non-homogeneous Poisson model with embedded dependencies.
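The counting primitive underlying the test is the number of non-overlapping occurrences of an ordered, delayed pattern; a greedy stdlib-only sketch (the paper's data-mining scheme is more elaborate, and the jitter tolerance `tol` here is illustrative):

```python
def count_nonoverlapping(spikes, pattern, tol=0.5):
    """spikes: dict neuron -> sorted list of spike times.
    pattern: [(neuron_0, 0.0), (neuron_1, d_1), ...] with cumulative delays.
    An occurrence anchored at time t needs a spike of neuron_k within tol of
    t + d_k for every k; spikes consumed by one occurrence cannot be reused,
    which makes the counted occurrences non-overlapping."""
    used = {n: [False] * len(ts) for n, ts in spikes.items()}

    def find(neuron, target):
        for idx, t in enumerate(spikes.get(neuron, [])):
            if not used[neuron][idx] and abs(t - target) <= tol:
                return idx
        return None

    count = 0
    anchor = pattern[0][0]
    for a_idx, t0 in enumerate(spikes.get(anchor, [])):
        if used[anchor][a_idx]:
            continue
        hits, ok = [], True
        for neuron, delay in pattern:
            idx = find(neuron, t0 + delay)
            if idx is None:
                ok = False
                break
            hits.append((neuron, idx))
        if ok:
            for neuron, idx in hits:    # consume spikes only on full matches
                used[neuron][idx] = True
            count += 1
    return count
```

The significance test then compares this count against the mean and variance derived under the compound null hypothesis.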
|
0808.3563
|
What It Feels Like To Hear Voices: Fond Memories of Julian Jaynes
|
cs.CL
|
Julian Jaynes's profound humanitarian convictions not only prevented him from
going to war, but would have prevented him from ever kicking a dog. Yet
according to his theory, not only are language-less dogs unconscious, but so
too were the speaking/hearing Greeks in the Bicameral Era, when they heard
gods' voices telling them what to do rather than thinking for themselves. I
argue that to be conscious is to be able to feel, and that all mammals (and
probably lower vertebrates and invertebrates too) feel, hence are conscious.
Julian Jaynes's brilliant analysis of our concepts of consciousness
nevertheless keeps inspiring ever more inquiry and insights into the age-old
mind/body problem and its relation to cognition and language.
|
0808.3569
|
Offloading Cognition onto Cognitive Technology
|
cs.MA cs.CL
|
"Cognizing" (e.g., thinking, understanding, and knowing) is a mental state.
Systems without mental states, such as cognitive technology, can sometimes
contribute to human cognition, but that does not make them cognizers. Cognizers
can offload some of their cognitive functions onto cognitive technology,
thereby extending their performance capacity beyond the limits of their own
brain power. Language itself is a form of cognitive technology that allows
cognizers to offload some of their cognitive functions onto the brains of other
cognizers. Language also extends cognizers' individual and joint performance
powers, distributing the load through interactive and collaborative cognition.
Reading, writing, print, telecommunications and computing further extend
cognizers' capacities. And now the web, with its network of cognizers, digital
databases and software agents, all accessible anytime, anywhere, has become our
'Cognitive Commons,' in which distributed cognizers and cognitive technology
can interoperate globally with a speed, scope and degree of interactivity
inconceivable through local individual cognition alone. And as with language,
the cognitive tool par excellence, such technological changes are not merely
instrumental and quantitative: they can have profound effects on how we think
and encode information, on how we communicate with one another, on our mental
states, and on our very nature.
|
0808.3572
|
Model-Based Compressive Sensing
|
cs.IT math.IT
|
Compressive sensing (CS) is an alternative to Shannon/Nyquist sampling for
the acquisition of sparse or compressible signals that can be well approximated
by just K << N elements from an N-dimensional basis. Instead of taking periodic
samples, CS measures inner products with M < N random vectors and then recovers
the signal via a sparsity-seeking optimization or greedy algorithm. Standard CS
dictates that robust signal recovery is possible from M = O(K log(N/K))
measurements. It is possible to substantially decrease M without sacrificing
robustness by leveraging more realistic signal models that go beyond simple
sparsity and compressibility by including structural dependencies between the
values and locations of the signal coefficients. This paper introduces a
model-based CS theory that parallels the conventional theory and provides
concrete guidelines on how to create model-based recovery algorithms with
provable performance guarantees. A highlight is the introduction of a new class
of structured compressible signals along with a new sufficient condition for
robust structured compressible signal recovery that we dub the restricted
amplification property, which is the natural counterpart to the restricted
isometry property of conventional CS. Two examples integrate two relevant
signal models - wavelet trees and block sparsity - into two state-of-the-art CS
recovery algorithms and prove that they offer robust recovery from just M=O(K)
measurements. Extensive numerical simulations demonstrate the validity and
applicability of our new theory and algorithms.
|
0808.3616
|
Constructing word similarities in Meroitic as an aid to decipherment
|
cs.CL
|
Meroitic is the still undeciphered language of the ancient civilization of
Kush. Over the years, various techniques for decipherment, such as searching
for a bilingual text or for cognates in modern or other ancient languages of
the Sudan and surrounding areas, have not been successful. Using techniques
borrowed from information theory and natural-language statistics, we pair
similar words and attempt to use currently defined words to extract at least
partial meaning from unknown words.
|
0808.3689
|
Optimal Power Allocation for Fading Channels in Cognitive Radio
Networks: Ergodic Capacity and Outage Capacity
|
cs.IT math.IT
|
A cognitive radio network (CRN) is formed by either allowing the secondary
users (SUs) in a secondary communication network (SCN) to opportunistically
operate in the frequency bands originally allocated to a primary communication
network (PCN) or by allowing SCN to coexist with the primary users (PUs) in PCN
as long as the interference caused by SCN to each PU is properly regulated. In
this paper, we consider the latter case, known as spectrum sharing, and study
the optimal power allocation strategies to achieve the ergodic capacity and the
outage capacity of the SU fading channel under different types of power
constraints and fading channel models. In particular, besides the interference
power constraint at PU, the transmit power constraint of SU is also considered.
Since the transmit power and the interference power can be limited either by a
peak or an average constraint, various combinations of power constraints are
studied. It is shown that SU achieves a capacity gain under an average
transmit/interference power constraint relative to the corresponding peak
constraint. It is also shown that
fading for the channel between SU transmitter and PU receiver is usually a
beneficial factor for enhancing the SU channel capacities.
|
0808.3712
|
Critique du rapport signal \`a bruit en communications num\'eriques --
Questioning the signal to noise ratio in digital communications
|
cs.IT math.IT math.PR math.RA
|
The signal to noise ratio, which plays such an important r\^ole in
information theory, is shown to become pointless for digital communications
where the demodulation is achieved via new fast estimation techniques.
Operational calculus, differential algebra, noncommutative algebra and
nonstandard analysis are the main mathematical tools.
|
0808.3726
|
Highly accurate recommendation algorithm based on high-order
similarities
|
physics.data-an cs.IR
|
In this Letter, we introduce a modified collaborative filtering (MCF)
algorithm with remarkably higher accuracy than standard collaborative
filtering. In the MCF, instead of the standard Pearson coefficient, the
user-user similarities are obtained by a diffusion process. Furthermore, by
considering the second order similarities, we design an effective algorithm
that depresses the influence of mainstream preferences. The corresponding
algorithmic accuracy, measured by the ranking score, is further improved by
24.9% in the optimal case. In addition, two significant criteria of algorithmic
performance, diversity and popularity, are also taken into account. Numerical
results show that the algorithm based on second order similarity can outperform
the MCF simultaneously in all three criteria.
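The diffusion-based and second-order similarities described above can be sketched in a few lines. This is a toy sketch only: the mass-diffusion (ProbS-style) kernel, the linear second-order correction S + λS², and the sign of λ are assumptions that may differ in detail from the Letter's exact construction.

```python
import numpy as np

# Toy user-item adjacency (1 = user collected item): 4 users x 5 items.
A = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 0, 0, 1, 1]], dtype=float)

k_user = A.sum(axis=1)  # user degrees
k_item = A.sum(axis=0)  # item degrees

# Diffusion-based user-user similarity (assumed mass-diffusion kernel):
# s_ij = (1 / k(u_j)) * sum_l a_il * a_jl / k(o_l)
S = (A / k_item) @ A.T / k_user[None, :]

# Second-order correction; lam < 0 depresses mainstream preferences
# (form and sign assumed for illustration).
lam = -0.8
H = S + lam * S @ S

# Predicted scores of user 0; mask items the user already collected.
scores = H[0] @ A
scores[A[0] > 0] = -np.inf
top_item = int(np.argmax(scores))
```

The ranking score, diversity, and popularity metrics from the Letter would then be computed over such recommendation lists for all users.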
|
0808.3746
|
A game-theoretic version of Oakes' example for randomized forecasting
|
cs.LG cs.GT
|
Using the game-theoretic framework for probability, Vovk and Shafer have
shown that it is always possible, using randomization, to make sequential
probability forecasts that pass any countable set of well-behaved statistical
tests. This result generalizes work by other authors, who consider only tests
of calibration.
We complement this result with a lower bound. We show that Vovk and Shafer's
result is valid only when the forecasts are computed with unrestrictedly
increasing degree of accuracy.
When some level of discreteness is fixed, we present a game-theoretic
generalization of Oakes' example for randomized forecasting, that is, a test
failing any given method of deterministic forecasting; originally, this
example was presented for deterministic calibration.
|
0808.3756
|
Approaching Blokh-Zyablov Error Exponent with Linear-Time
Encodable/Decodable Codes
|
cs.IT cs.CC math.IT
|
Guruswami and Indyk showed in [1] that Forney's error exponent can be
achieved with linear coding complexity over binary symmetric channels. This
paper extends this conclusion to general discrete-time memoryless channels and
shows that Forney's and Blokh-Zyablov error exponents can be arbitrarily
approached by one-level and multi-level concatenated codes with linear
encoding/decoding complexity. The key result is a revision to Forney's general
minimum distance decoding algorithm, which enables a low complexity integration
of Guruswami-Indyk's outer codes into the concatenated coding schemes.
|
0808.3889
|
Open architecture for multilingual parallel texts
|
cs.CL
|
Multilingual parallel texts (abbreviated to parallel texts) are linguistic
versions of the same content ("translations"); e.g., the Maastricht Treaty in
English and Spanish are parallel texts. This document is about creating an open
architecture for the whole Authoring, Translation and Publishing Chain
(ATP-chain) for the processing of parallel texts.
|
0808.3959
|
A Simple Extension of the $\modulo$-$\Lambda$ Transformation
|
cs.IT math.IT
|
A simple lemma is derived that allows one to transform a general scalar
(non-Gaussian, non-additive) continuous-alphabet channel, as well as a general
multiple-access channel, into a modulo-additive noise channel. While in general
the transformation is information-lossy, it allows one to leverage linear
coding techniques and capacity results derived for networks comprised of
additive Gaussian nodes to more general networks.
|
0808.3971
|
Networked MIMO with Clustered Linear Precoding
|
cs.IT math.IT
|
A clustered base transceiver station (BTS) coordination strategy is proposed
for a large cellular MIMO network, which includes full intra-cluster
coordination to enhance the sum rate and limited inter-cluster coordination to
reduce interference for the cluster edge users. Multi-cell block
diagonalization is used to coordinate the transmissions across multiple BTSs in
the same cluster. To satisfy per-BTS power constraints, three combined precoder
and power allocation algorithms are proposed with different performance and
complexity tradeoffs. For inter-cluster coordination, the coordination area is
chosen to balance fairness for edge users and the achievable sum rate. It is
shown that a small cluster size (about 7 cells) is sufficient to obtain most of
the sum rate benefits from clustered coordination while greatly relieving the
channel feedback requirements. Simulations show that the proposed coordination
strategy efficiently reduces interference and provides a considerable sum rate
gain for cellular MIMO networks.
|
0808.4060
|
TrustMAS: Trusted Communication Platform for Multi-Agent Systems
|
cs.CR cs.MA
|
The paper presents TrustMAS - Trusted Communication Platform for Multi-Agent
Systems, which provides trust and anonymity for mobile agents. The platform
includes an anonymity technique based on a random-walk algorithm that provides
general-purpose anonymous communication for agents. All agents that take part
in the proposed platform benefit from the trust and anonymity provided for
their interactions. Moreover, TrustMAS includes StegAgents (SAs) that are able
to perform various forms of steganographic communication. To achieve that goal,
SAs may use methods in different layers of the TCP/IP model, or specialized
steganography-enabling middleware that allows hidden communication through all
layers of the model. In TrustMAS, steganographic channels are used to exchange
routing tables between StegAgents. Thus, all StegAgents in TrustMAS, with their
ability to exchange information over hidden channels, form a distributed
steganographic router (Stegrouter).
|
0808.4100
|
Codes and Noncommutative Stochastic Matrices
|
math.RA cs.IT math.IT
|
Given a matrix over a skew field fixing the column (1,...,1)^t, we give
formulas for a row vector fixed by this matrix. The same techniques are applied
to give noncommutative extensions of probabilistic properties of codes.
|
0808.4111
|
Relative Entropy and Statistics
|
cs.IT math.IT math.ST stat.TH
|
Formalising the confrontation of opinions (models) to observations (data) is
the task of Inferential Statistics. Information Theory provides us with a basic
functional, the relative entropy (or Kullback-Leibler divergence), an
asymmetrical measure of dissimilarity between the empirical and the theoretical
distributions. The formal properties of the relative entropy turn out to be
able to capture every aspect of Inferential Statistics, as illustrated here,
for simplicity, on dice (i.e., an i.i.d. process with finitely many outcomes):
refutability (strict or probabilistic): the asymmetry between data and models; small
deviations: rejecting a single hypothesis; competition between hypotheses and
model selection; maximum likelihood: model inference and its limits; maximum
entropy: reconstructing partially observed data; EM-algorithm; flow data and
gravity modelling; determining the order of a Markov chain.
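A minimal numerical illustration of the relative entropy as an asymmetric dissimilarity between an empirical and a theoretical dice distribution. The link between 2nD and the chi-square distribution (the G-test statistic) is a standard fact invoked here for illustration, not a claim from the abstract.

```python
import numpy as np

def relative_entropy(p_emp, q_model):
    """Kullback-Leibler divergence D(p || q) in nats; note the asymmetry
    between the empirical (p) and theoretical (q) arguments."""
    p = np.asarray(p_emp, dtype=float)
    q = np.asarray(q_model, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# 600 rolls of a die: empirical counts versus the fair-die hypothesis.
counts = np.array([90, 110, 95, 105, 120, 80])
p_hat = counts / counts.sum()
q_fair = np.full(6, 1 / 6)

D = relative_entropy(p_hat, q_fair)
# G-test statistic: 2 * n * D is asymptotically chi-square with 5 degrees
# of freedom under the fair-die hypothesis (probabilistic refutability).
G2 = 2 * counts.sum() * D
```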
|
0808.4122
|
Swapping Lemmas for Regular and Context-Free Languages
|
cs.CC cs.CL cs.FL
|
In formal language theory, one of the most fundamental tools, known as
pumping lemmas, is extremely useful for regular and context-free languages.
However, there are natural properties for which the pumping lemmas are of
little use. One such example concerns the notion of advice, which depends
only on the size of an underlying input. A standard pumping lemma encounters
difficulty in proving that a given language is not regular in the presence of
advice. We develop a substitute, called a swapping lemma for regular
languages, to demonstrate the non-regularity of a target language with advice.
For context-free languages, we also present a similar form of swapping lemma,
which serves as a technical tool to show that certain languages are not
context-free with advice.
|
0808.4133
|
Tableau-based decision procedure for the multi-agent epistemic logic
with operators of common and distributed knowledge
|
cs.LO cs.MA
|
We develop an incremental-tableau-based decision procedure for the
multi-agent epistemic logic MAEL(CD) (aka S5_n (CD)), whose language contains
operators of individual knowledge for a finite set Ag of agents, as well as
operators of distributed and common knowledge among all agents in Ag. Our
tableau procedure works in (deterministic) exponential time, thus establishing
an upper bound for MAEL(CD)-satisfiability that matches the (implicit)
lower-bound known from earlier results, which implies ExpTime-completeness of
MAEL(CD)-satisfiability. Therefore, our procedure provides a complexity-optimal
algorithm for checking MAEL(CD)-satisfiability, which, however, in most cases
is much more efficient. We prove soundness and completeness of the procedure,
and illustrate it with an example.
|
0808.4135
|
Achieving the Empirical Capacity Using Feedback Part I: Memoryless
Additive Models
|
cs.IT math.IT
|
We address the problem of universal communications over an unknown channel
with an instantaneous noiseless feedback, and show how rates corresponding to
the empirical behavior of the channel can be attained, although no rate can be
guaranteed in advance. First, we consider a discrete modulo-additive channel
with alphabet $\mathcal{X}$, where the noise sequence $Z^n$ is arbitrary and
unknown and may causally depend on the transmitted and received sequences and
on the encoder's message, possibly in an adversarial fashion. Although the
classical capacity of this channel is zero, we show that rates approaching the
empirical capacity $\log|\mathcal{X}|-H_{emp}(Z^n)$ can be universally
attained, where $H_{emp}(Z^n)$ is the empirical entropy of $Z^n$. For the more
general setting where the channel can map its input to an output in an
arbitrary unknown fashion subject only to causality, we model the empirical
channel actions as the modulo-addition of a realized noise sequence, and show
that the same result applies if common randomness is available. The results are
proved constructively, by providing a simple sequential transmission scheme
approaching the empirical capacity. In part II of this work we demonstrate how
even higher rates can be attained by using more elaborate models for channel
actions, and by utilizing possible empirical dependencies in its behavior.
|
0808.4146
|
Dynamic Connectivity in ALOHA Ad Hoc Networks
|
cs.IT cs.NI math.IT math.PR
|
In a wireless network the set of transmitting nodes changes frequently
because of the MAC scheduler and the traffic load. Previously, connectivity in
wireless networks was analyzed using static geometric graphs which, as we show,
leads to an overly constrained design criterion. The dynamic nature of the
transmitting set introduces additional randomness in a wireless system that
improves the connectivity, and this additional randomness is not captured by a
static connectivity graph. In this paper, we consider an ad hoc network with
half-duplex radios that uses multihop routing and slotted ALOHA for the MAC
contention and introduce a random dynamic multi-digraph to model its
connectivity. We first provide analytical results about the degree distribution
of the graph. Next, defining the path formation time as the minimum time
required for a causal path to form between the source and destination on the
dynamic graph, we derive the distributional properties of the connection delay
using techniques from first-passage percolation and epidemic processes. We
consider the giant component of the network formed when communication is
noise-limited (by neglecting interference). Then, in the presence of
interference, we prove that the delay scales linearly with the
source-destination distance on this giant component. We also provide simulation
results to support the theoretical results.
|
0808.4156
|
Rate-Distortion via Markov Chain Monte Carlo
|
cs.IT math.IT
|
We propose an approach to lossy source coding, utilizing ideas from Gibbs
sampling, simulated annealing, and Markov Chain Monte Carlo (MCMC). The idea is
to sample a reconstruction sequence from a Boltzmann distribution associated
with an energy function that incorporates the distortion between the source and
reconstruction, the compressibility of the reconstruction, and the point sought
on the rate-distortion curve. To sample from this distribution, we use a `heat
bath algorithm': Starting from an initial candidate reconstruction (say the
original source sequence), at every iteration, an index i is chosen and the
i-th sequence component is replaced by drawing from the conditional probability
distribution for that component given all the rest. At the end of this process,
the encoder conveys the reconstruction to the decoder using universal lossless
compression. The complexity of each iteration is independent of the sequence
length and only linearly dependent on a certain context parameter (which grows
sub-logarithmically with the sequence length). We show that the proposed
algorithms achieve optimum rate-distortion performance in the limit of a large
number of iterations and large sequence length, when employed on any stationary
ergodic source. Experimentation shows promising initial results. Employing our
lossy compressors on noisy data, with appropriately chosen distortion measure
and level, followed by a simple de-randomization operation, results in a family
of denoisers that compares favorably (both theoretically and in practice) with
other MCMC-based schemes, and with the Discrete Universal Denoiser (DUDE).
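The heat-bath iteration described above can be sketched as follows for a binary source. This is a toy sketch under loud assumptions: first-order empirical entropy stands in for the paper's context-based compressibility term, and a logarithmic cooling schedule is assumed; the actual scheme uses a sub-logarithmically growing context parameter and its own annealing.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(y, x, beta):
    """Assumed energy: n * (first-order empirical entropy, a compressibility
    proxy) + beta * Hamming distortion; beta selects the R-D curve point."""
    n = len(y)
    p = np.bincount(y, minlength=2) / n
    H = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return n * H + beta * np.count_nonzero(y != x)

x = rng.integers(0, 2, 200)   # source sequence
y = x.copy()                  # initial candidate: the source itself
beta = 2.0

for t in range(1, 3001):
    T = 1.0 / np.log(1 + t)   # cooling schedule (assumed)
    i = rng.integers(len(y))
    # Conditional Boltzmann law for component i given all the rest.
    E = np.array([energy(np.concatenate([y[:i], [a], y[i + 1:]]), x, beta)
                  for a in (0, 1)])
    w = np.exp(-(E - E.min()) / T)
    y[i] = rng.choice(2, p=w / w.sum())

distortion = np.count_nonzero(y != x) / len(y)
```

The encoder would then convey `y` to the decoder with a universal lossless compressor, as in the paper.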
|
0808.4160
|
Using Relative Entropy to Find Optimal Approximations: an Application to
Simple Fluids
|
cond-mat.stat-mech cs.IT math.IT math.PR physics.data-an
|
We develop a maximum relative entropy formalism to generate optimal
approximations to probability distributions. The central results consist in (a)
justifying the use of relative entropy as the uniquely natural criterion to
select a preferred approximation from within a family of trial parameterized
distributions, and (b) to obtain the optimal approximation by marginalizing
over parameters using the method of maximum entropy and information geometry.
As an illustration we apply our method to simple fluids. The "exact" canonical
distribution is approximated by that of a fluid of hard spheres. The proposed
method first determines the preferred value of the hard-sphere diameter, and
then obtains an optimal hard-sphere approximation by a suitably weighted average
over different hard-sphere diameters. This leads to a considerable improvement
in accounting for the soft-core nature of the interatomic potential. As a
numerical demonstration, the radial distribution function and the equation of
state for a Lennard-Jones fluid (argon) are compared with results from
molecular dynamics simulations.
|
0809.0009
|
Distributed Parameter Estimation in Sensor Networks: Nonlinear
Observation Models and Imperfect Communication
|
cs.MA cs.IT math.IT
|
The paper studies distributed static parameter (vector) estimation in sensor
networks with nonlinear observation models and noisy inter-sensor
communication. It introduces \emph{separably estimable} observation models that
generalize the observability condition in linear centralized estimation to
nonlinear distributed estimation. It studies two distributed estimation
algorithms in separably estimable models, the $\mathcal{NU}$ (with its linear
counterpart $\mathcal{LU}$) and the $\mathcal{NLU}$. Their update rule combines
a \emph{consensus} step (where each sensor updates its state by weighted
averaging with its neighbors' states) and an \emph{innovation} step (where
each sensor processes its current local observation). This makes the three
algorithms of the \textit{consensus + innovations} type, very different from
traditional consensus. The paper proves consistency (all sensors reach
consensus almost surely and converge to the true parameter value), efficiency,
asymptotic normality and provides convergence rate guarantees. The three
algorithms are characterized by appropriately chosen decaying weight sequences.
Algorithms $\mathcal{LU}$ and $\mathcal{NU}$ are analyzed in the framework of
stochastic approximation theory; algorithm $\mathcal{NLU}$ exhibits mixed
time-scale behavior and biased perturbations, and its analysis requires a
different approach that is developed in the paper.
|
0809.0016
|
An overview of the transmission capacity of wireless networks
|
cs.IT math.IT
|
This paper surveys and unifies a number of recent contributions that have
collectively developed a metric for decentralized wireless network analysis
known as transmission capacity. Although it is notoriously difficult to derive
general end-to-end capacity results for multi-terminal or ad hoc networks, the
transmission capacity (TC) framework allows for quantification of achievable
single-hop rates by focusing on a simplified physical/MAC-layer model. By using
stochastic geometry to quantify the multi-user interference in the network, the
relationship between the optimal spatial density and success probability of
transmissions in the network can be determined, and expressed -- often fairly
simply -- in terms of the key network parameters. The basic model and
analytical tools are first discussed and applied to a simple network with path
loss only, for which we present tight upper and lower bounds on transmission capacity
(via lower and upper bounds on outage probability). We then introduce random
channels (fading/shadowing) and give TC and outage approximations for an
arbitrary channel distribution, as well as exact results for the special cases
of Rayleigh and Nakagami fading. We then apply these results to show how TC can
be used to better understand scheduling, power control, and the deployment of
multiple antennas in a decentralized network. The paper closes by discussing
shortcomings in the model as well as future research directions.
|
0809.0032
|
A Variational Inference Framework for Soft-In-Soft-Out Detection in
Multiple Access Channels
|
cs.IT cs.LG math.IT
|
We propose a unified framework for deriving and studying soft-in-soft-out
(SISO) detection in interference channels using the concept of variational
inference. The proposed framework may be used in multiple-access interference
(MAI), inter-symbol interference (ISI), and multiple-input multiple-output
(MIMO) channels. Without loss of generality, we will focus our attention on
turbo multiuser detection, to facilitate a more concrete discussion. It is
shown that, with some loss of optimality, variational inference avoids the
exponential complexity of a posteriori probability (APP) detection by
optimizing a closely-related, but much more manageable, objective function
called variational free energy. In addition to its systematic appeal, there are
several other advantages to this viewpoint. First of all, it provides unified
and rigorous justifications for numerous detectors that were proposed on
radically different grounds, and facilitates convenient joint detection and
decoding (utilizing the turbo principle) when error-control codes are
incorporated. Secondly, efficient joint parameter estimation and data detection
is possible via the variational expectation maximization (EM) algorithm, such
that the detrimental effect of inaccurate channel knowledge at the receiver may
be dealt with systematically. We are also able to extend BPSK-based SISO
detection schemes to arbitrary square QAM constellations in a rigorous manner
using a variational argument.
|
0809.0070
|
Underwater Acoustic Networks: Channel Models and Network Coding based
Lower Bound to Transmission Power for Multicast
|
cs.IT math.IT
|
The goal of this paper is two-fold. First, to establish a tractable model for
the underwater acoustic channel useful for network optimization in terms of
convexity. Second, to propose a network coding based lower bound for
transmission power in underwater acoustic networks, and compare this bound to
the performance of several network layer schemes. The underwater acoustic
channel is characterized by a path loss that depends strongly on transmission
distance and signal frequency. The exact relationship among power, transmission
band, distance and capacity for the Gaussian noise scenario is a complicated
one. We provide a closed-form approximate model for 1) transmission power and
2) optimal frequency band to use, as functions of distance and capacity. The
model is obtained through numerical evaluation of analytical results that take
into account physical models of acoustic propagation loss and ambient noise.
Network coding is applied to determine a lower bound to transmission power for
a multicast scenario, for a variety of multicast data rates and transmission
distances of interest for practical systems, exploiting physical properties of
the underwater acoustic channel. The results quantify the performance gap in
transmission power between a variety of routing and network coding schemes and
the network coding based lower bound. We illustrate results numerically for
different network scenarios.
|
0809.0091
|
A functional view of upper bounds on codes
|
cs.IT math.IT
|
Functional and linear-algebraic approaches to the Delsarte problem of upper
bounds on codes are discussed. We show that Christoffel-Darboux kernels and
Levenshtein polynomials related to them arise as stationary points of the
moment functionals of some distributions. We also show that they can be derived
as eigenfunctions of the Jacobi operator. This motivates the choice of
polynomials used to derive linear programming upper bounds on codes in
homogeneous spaces.
|
0809.0099
|
Degrees of Freedom of the $K$ User $M \times N$ MIMO Interference
Channel
|
cs.IT math.IT
|
We provide an inner bound and an outer bound for the total number of degrees of
freedom of the $K$ user multiple input multiple output (MIMO) Gaussian
interference channel with $M$ antennas at each transmitter and $N$ antennas at
each receiver if the channel coefficients are time-varying and drawn from a
continuous distribution. The bounds are tight when the ratio
$\frac{\max(M,N)}{\min(M,N)}=R$ is equal to an integer. For this case, we show
that the total number of degrees of freedom is equal to $\min(M,N)K$ if $K \leq
R$ and $\min(M,N)\frac{R}{R+1}K$ if $K > R$. Achievability is based on
interference alignment. We also provide examples where using interference
alignment combined with zero forcing can achieve more degrees of freedom than
merely zero forcing for some MIMO interference channels with constant channel
coefficients.
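The stated degrees-of-freedom result for integer ratio R can be written directly as a small function:

```python
from fractions import Fraction

def dof(M, N, K):
    """Total degrees of freedom of the K-user M x N MIMO interference channel
    with time-varying coefficients, per the formula above; stated only for
    integer ratio R = max(M, N) / min(M, N)."""
    lo, hi = min(M, N), max(M, N)
    assert hi % lo == 0, "formula stated for integer ratio R only"
    R = hi // lo
    if K <= R:
        return Fraction(lo * K)          # min(M,N) * K
    return Fraction(lo * R, R + 1) * K   # min(M,N) * R/(R+1) * K

print(dof(2, 4, 2), dof(2, 4, 5))  # prints: 4 20/3
```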
|
0809.0103
|
On the nature of long-range letter correlations in texts
|
cs.CL cs.IT math.IT
|
The origin of long-range letter correlations in natural texts is studied
using random walk analysis and Jensen-Shannon divergence. It is concluded that
they result from slow variations in letter frequency distribution, which are a
consequence of slow variations in lexical composition within the text. These
correlations are preserved by random letter shuffling within a moving window.
As such, they do reflect structural properties of the text, but in a very
indirect manner.
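A minimal sketch of the Jensen-Shannon divergence between the letter distributions of two text windows, the quantity used in the analysis above (the toy strings below are illustrative only):

```python
import math
from collections import Counter

def js_divergence(s1, s2):
    """Jensen-Shannon divergence (in bits) between the letter
    frequency distributions of two strings."""
    c1, c2 = Counter(s1), Counter(s2)
    n1, n2 = sum(c1.values()), sum(c2.values())
    letters = set(c1) | set(c2)

    def H(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    p = [c1[a] / n1 for a in letters]
    q = [c2[a] / n2 for a in letters]
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return H(m) - (H(p) + H(q)) / 2

# Windows with similar lexical composition diverge less than dissimilar ones.
a = "the cat sat on the mat"
b = "the rat sat on the hat"
c = "zzzzqqqqxxxx"
print(js_divergence(a, b) < js_divergence(a, c))  # True
```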
|
0809.0116
|
Toward Expressive and Scalable Sponsored Search Auctions
|
cs.DB
|
Internet search results are a growing and highly profitable advertising
platform. Search providers auction advertising slots to advertisers on their
search result pages. Due to the high volume of searches and the users' low
tolerance for search result latency, it is imperative to resolve these auctions
fast. Current approaches restrict the expressiveness of bids in order to
achieve fast winner determination, which is the problem of allocating slots to
advertisers so as to maximize the expected revenue given the advertisers' bids.
The goal of our work is to permit more expressive bidding, thus allowing
advertisers to achieve complex advertising goals, while still providing fast
and scalable techniques for winner determination.
|
0809.0124
|
A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations
|
cs.CL cs.IR cs.LG
|
Recognizing analogies, synonyms, antonyms, and associations appear to be four
distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks
have been treated independently, using a wide variety of algorithms. These four
semantic classes, however, are a tiny sample of the full range of semantic
phenomena, and we cannot afford to create ad hoc algorithms for each semantic
phenomenon; we need to seek a unified approach. We propose to subsume a broad
range of phenomena under analogies. To limit the scope of this paper, we
restrict our attention to the subsumption of synonyms, antonyms, and
associations. We introduce a supervised corpus-based machine learning algorithm
for classifying analogous word pairs, and we show that it can solve
multiple-choice SAT analogy questions, TOEFL synonym questions, ESL
synonym-antonym questions, and similar-associated-both questions from cognitive
psychology.
|
0809.0158
|
Network Tomography Based on Additive Metrics
|
cs.NI cs.IT math.IT
|
Inference of the network structure (e.g., routing topology) and dynamics
(e.g., link performance) is an essential component in many network design and
management tasks. In this paper we propose a new, general framework for
analyzing and designing routing topology and link performance inference
algorithms using ideas and tools from phylogenetic inference in evolutionary
biology. The framework is applicable to a variety of measurement techniques.
Based on the framework we introduce and develop several polynomial-time
distance-based inference algorithms with provable performance. We provide
sufficient conditions for the correctness of the algorithms. We show that the
algorithms are consistent (return correct topology and link performance with an
increasing sample size) and robust (can tolerate a certain level of measurement
errors). In addition, we establish certain optimality properties of the
algorithms (i.e., they achieve the optimal $l_\infty$-radius) and demonstrate
their effectiveness via model simulation.
|
0809.0199
|
Dense Error Correction via L1-Minimization
|
cs.IT math.IT
|
This paper studies the problem of recovering a non-negative sparse signal $\x
\in \Re^n$ from highly corrupted linear measurements $\y = A\x + \e \in \Re^m$,
where $\e$ is an unknown error vector whose nonzero entries may be unbounded.
Motivated by an observation from face recognition in computer vision, this
paper proves that for highly correlated (and possibly overcomplete)
dictionaries $A$, any non-negative, sufficiently sparse signal $\x$ can be
recovered by solving an $\ell^1$-minimization problem: $\min \|\x\|_1 +
\|\e\|_1 \quad \text{subject to} \quad \y = A\x + \e.$ More precisely, if the
fraction $\rho$ of errors is bounded away from one and the support of $\x$
grows sublinearly in the dimension $m$ of the observation, then as $m$ goes to
infinity, the above $\ell^1$-minimization succeeds for all signals $\x$ and
almost all sign-and-support patterns of $\e$. This result suggests that
accurate recovery of sparse signals is possible and computationally feasible
even with nearly 100% of the observations corrupted. The proof relies on a
careful characterization of the faces of a convex polytope spanned together by
the standard crosspolytope and a set of iid Gaussian vectors with nonzero mean
and small variance, which we call the ``cross-and-bouquet'' model. Simulations
and experimental results corroborate the findings, and suggest extensions to
the result.
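The minimization above becomes a linear program once the error is split into positive and negative parts. A minimal sketch using scipy's `linprog` follows; the problem sizes, corruption fraction, and the Gaussian "bouquet" parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k = 40, 20, 3   # observations, dictionary size, sparsity (assumed)

# "Bouquet": highly correlated columns with nonzero mean and small variance.
A = rng.normal(0.5, 0.1, (m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)

e_true = np.zeros(m)
corrupt = rng.choice(m, m // 3, replace=False)   # corrupt a third of entries
e_true[corrupt] = rng.uniform(-10, 10, len(corrupt))
y = A @ x_true + e_true

# LP form of  min ||x||_1 + ||e||_1  s.t.  y = A x + e,  x >= 0:
# variables z = [x, e+, e-] with e = e+ - e-, all nonnegative, so the
# objective is simply the sum of all variables.
c = np.ones(n + 2 * m)
A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))

x_hat = res.x[:n]
print(np.linalg.norm(x_hat - x_true))  # small when recovery succeeds
```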
|
0809.0271
|
Randomised Variable Neighbourhood Search for Multi Objective
Optimisation
|
cs.AI
|
Various local search approaches have recently been applied to machine
scheduling problems under multiple objectives. Their foremost consideration is
the identification of the set of Pareto optimal alternatives. An important
aspect of successfully solving these problems lies in the definition of an
appropriate neighbourhood structure. It remains unclear how interdependencies
within the fitness landscape affect the resolution of the problem.
The paper presents a study of neighbourhood search operators for multiple
objective flow shop scheduling. Experiments have been carried out with twelve
different combinations of criteria. To derive exact conclusions, small problem
instances, for which the optimal solutions are known, have been chosen.
Statistical tests show that no single neighbourhood operator is able to equally
identify all Pareto optimal alternatives. Significant improvements however have
been obtained by hybridising the solution algorithm using a randomised variable
neighbourhood search technique.
|
0809.0360
|
The Complexity of Enriched Mu-Calculi
|
cs.LO cs.CL
|
The fully enriched μ-calculus is the extension of the propositional
μ-calculus with inverse programs, graded modalities, and nominals. While
satisfiability in several expressive fragments of the fully enriched
μ-calculus is known to be decidable and ExpTime-complete, it has recently
been proved that the full calculus is undecidable. In this paper, we study the
fragments of the fully enriched μ-calculus that are obtained by dropping at
least one of the additional constructs. We show that, in all fragments obtained
in this way, satisfiability is decidable and ExpTime-complete. Thus, we
identify a family of decidable logics that are maximal (and incomparable) in
expressive power. Our results are obtained by introducing two new automata
models, showing that their emptiness problems are ExpTime-complete, and then
reducing satisfiability in the relevant logics to these problems. The automata
models we introduce are two-way graded alternating parity automata over
infinite trees (2GAPTs) and fully enriched automata (FEAs) over infinite
forests. The former are a common generalization of two incomparable automata
models from the literature. The latter extend alternating automata in a similar
way as the fully enriched μ-calculus extends the standard μ-calculus.
|
0809.0406
|
Foundations of the Pareto Iterated Local Search Metaheuristic
|
cs.AI
|
The paper describes the proposition and application of a local search
metaheuristic for multi-objective optimization problems. It is based on two
main principles of heuristic search, intensification through variable
neighborhoods, and diversification through perturbations and successive
iterations in favorable regions of the search space. The concept is
successfully tested on permutation flow shop scheduling problems under multiple
objectives. While the obtained results are encouraging in terms of their
quality, another positive attribute of the approach is its simplicity, as it
requires the setting of only a few parameters. The implementation of the
Pareto Iterated Local Search metaheuristic is based on the MOOPPS computer
system of local search heuristics for multi-objective scheduling which has been
awarded the European Academic Software Award 2002 in Ronneby, Sweden
(http://www.easa-award.net/, http://www.bth.se/llab/easa_2002.nsf)
|
0809.0410
|
A Computational Study of Genetic Crossover Operators for Multi-Objective
Vehicle Routing Problem with Soft Time Windows
|
cs.AI
|
The article describes an investigation of the effectiveness of genetic
algorithms for multi-objective combinatorial optimization (MOCO) by presenting
an application for the vehicle routing problem with soft time windows. The
work is motivated by the question of whether and how the problem structure
influences the effectiveness of different configurations of the genetic
algorithm.
Computational results are presented for different classes of vehicle routing
problems, varying in their coverage with time windows, time window size,
distribution and number of customers. The results are compared with a simple,
but effective local search approach for multi-objective combinatorial
optimization problems.
|
0809.0416
|
Genetic Algorithms for multiple objective vehicle routing
|
cs.AI
|
The talk describes a general approach of a genetic algorithm for multiple
objective optimization problems. A particular dominance relation between the
individuals of the population is used to define a fitness operator, enabling
the genetic algorithm to address even problems with efficient, but
convex-dominated alternatives. The algorithm is implemented in a multilingual
computer program, solving vehicle routing problems with time windows under
multiple objectives. The graphical user interface of the program shows the
progress of the genetic algorithm and the main parameters of the approach can
be easily modified. In addition to that, the program provides powerful decision
support to the decision maker. The software has proved its excellence at the
finals of the European Academic Software Award EASA, held at the Keble college/
University of Oxford/ Great Britain.
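A dominance-based fitness operator of the kind described can be sketched as follows. The talk does not specify the exact relation, so the counts-of-dominators ranking below (each individual's fitness is the number of population members dominating it) is only an illustrative assumption:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def dominance_fitness(population):
    """Fitness = number of individuals dominating each one (lower is better).

    Non-dominated individuals get fitness 0, so the selection pressure does
    not rely on a weighted sum and can retain convex-dominated alternatives.
    """
    return [sum(dominates(q, p) for q in population) for p in population]

# Five individuals evaluated on two minimisation objectives.
pop = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
fit = dominance_fitness(pop)
```

Here the first three individuals are mutually non-dominated (fitness 0), while (3, 3) is dominated once and (5, 5) four times.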
|
0809.0444
|
Quantum classification
|
quant-ph cs.LG
|
Quantum classification is defined as the task of predicting the associated
class of an unknown quantum state drawn from an ensemble of pure states given a
finite number of copies of this state. By recasting the state discrimination
problem within the framework of Machine Learning (ML), we can use the notion of
learning reduction coming from classical ML to solve different variants of the
classification task, such as the weighted binary and the multiclass versions.
|
0809.0448
|
The Stock Market as a Game: An Agent Based Approach to Trading in Stocks
|
q-fin.TR cs.AI cs.GT
|
Just as war is sometimes fallaciously represented as a zero sum game -- when
in fact war is a negative sum game -- stock market trading, a positive sum game
over time, is often erroneously represented as a zero sum game. This is called
the "zero sum fallacy" -- the erroneous belief that one trader in a stock
market exchange can only improve their position provided some other trader's
position deteriorates. However, a positive sum game in absolute terms can be
recast as a zero sum game in relative terms. Similarly it appears that negative
sum games in absolute terms have been recast as zero sum games in relative
terms: otherwise, why would zero sum games be used to represent situations of
war? Such recasting may have heuristic or pedagogic interest, but it must be
made explicit or it risks generating confusion.
Keywords: Game theory, stock trading and agent based AI.
|
0809.0458
|
Agent Models of Political Interactions
|
cs.AI cs.GT
|
This paper looks at state interactions from an agent-based AI perspective,
viewing them as an example of emergent intelligent behavior, and exposes basic
principles of game theory.
|
0809.0490
|
Principal Graphs and Manifolds
|
cs.LG cs.NE stat.ML
|
In many physical, statistical, biological and other investigations it is
desirable to approximate a system of points by objects of lower dimension
and/or complexity. For this purpose, Karl Pearson invented principal component
analysis in 1901 and found 'lines and planes of closest fit to systems of
points'. The famous k-means algorithm solves the approximation problem too, but
by finite sets instead of lines and planes. This chapter gives a brief
practical introduction into the methods of construction of general principal
objects, i.e. objects embedded in the 'middle' of the multidimensional data
set. As a basis, the unifying framework of mean squared distance approximation
of finite datasets is selected. Principal graphs and manifolds are constructed
as generalisations of principal components and k-means principal points. For
this purpose, the family of expectation/maximisation algorithms with nearest
generalisations is presented. Construction of principal graphs with controlled
complexity is based on the graph grammar approach.
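As a minimal illustration of approximation by finite point sets, here is plain Lloyd's k-means in pure Python; the deterministic initialisation (spreading centres over the input order) is our own assumption for reproducibility, not part of the chapter:

```python
def kmeans(points, k, iters=10):
    """Lloyd's k-means: approximate a data set by k 'principal points'."""
    # Deterministic initialisation: spread the centres over the input order.
    centres = [points[i * (len(points) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centres[i])))
            clusters[j].append(p)
        # Update step: each centre moves to the mean of its cluster
        # (the expectation/maximisation alternation mentioned in the text).
        for i, c in enumerate(clusters):
            if c:
                centres[i] = tuple(sum(x) / len(c) for x in zip(*c))
    return centres

# Two well-separated blobs; the centres converge to the blob means.
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centres = sorted(kmeans(data, 2))
```

Principal components, principal curves, and principal graphs replace the finite centre set by lines, curves, or embedded graphs, but keep exactly this two-step alternation.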
|
0809.0522
|
The first-mover advantage in scientific publication
|
physics.soc-ph cs.DL cs.SI
|
Mathematical models of the scientific citation process predict a strong
"first-mover" effect under which the first papers in a field will, essentially
regardless of content, receive citations at a rate enormously higher than
papers published later. Moreover papers are expected to retain this advantage
in perpetuity -- they should receive more citations indefinitely, no matter how
many other papers are published after them. We test this conjecture against
data from a selection of fields and in several cases find a first-mover effect
of a magnitude similar to that predicted by the theory. Were we wearing our
cynical hat today, we might say that the scientist who wants to become famous
is better off -- by a wide margin -- writing a modest paper in next year's
hottest field than an outstanding paper in this year's. On the other hand,
there are some papers, albeit only a small fraction, that buck the trend and
attract significantly more citations than theory predicts despite having
relatively late publication dates. We suggest that papers of this kind, though
they often receive comparatively few citations overall, are probably worthy of
our attention.
|
0809.0536
|
How to Fully Exploit the Degrees of Freedom in the Downlink of MISO
Systems With Opportunistic Beamforming
|
cs.IT math.IT
|
The opportunistic beamforming in the downlink of multiple-input single-output
(MISO) systems forms $N$ transmit beams, usually, no more than the number of
transmit antennas $N_t$. However, the degrees of freedom in this downlink are as
large as $N_t^2$. That is, at most $N_t^2$ rather than only $N_t$ users can be
simultaneously served, and thus the scheduling latency can be significantly
reduced. In this paper, we focus on the opportunistic beamforming schemes with
$N_t<N\le N_t^2$ transmit beams in the downlink of MISO systems over Rayleigh
fading channels. We first show how to design beamforming matrices with the
maximum number of transmit beams and as little correlation between any pair of
them as possible, through Fourier, Grassmannian, and mutually unbiased bases
(MUB) based constructions in practice. Then, we analyze their system throughput
by exploiting the asymptotic theory of extreme order statistics. Finally, our
simulation results show the Grassmannian-based beamforming achieves the maximum
throughput in all cases with $N_t=2$, 3, 4. However, if we want to exploit
overall $N_t^2$ degrees of freedom, we shall resort to the Fourier and
MUB-based constructions in the cases with $N_t=3$, 4, respectively.
|
0809.0539
|
Signature Quantization in Fading CDMA With Limited Feedback
|
cs.IT math.IT
|
In this work, we analyze the performance of a signature quantization scheme
for reverse-link Direct Sequence (DS)- Code Division Multiple Access (CDMA).
Assuming perfect estimates of the channel and interference covariance, the
receiver selects the signature that minimizes interference power or maximizes
signal-to-interference plus noise ratio (SINR) for a desired user from a
signature codebook. The codebook index corresponding to the optimal signature
is then relayed to the user with a finite number of bits via a feedback
channel. Here we are interested in the performance of a Random Vector
Quantization (RVQ) codebook, which contains independent isotropically
distributed vectors. Assuming arbitrary transmit power allocation, we consider
additive white Gaussian noise (AWGN) channel first with no fading and
subsequently, with multipath fading. We derive the corresponding SINR in a
large system limit at the output of matched filter and linear minimum mean
squared error (MMSE) receiver. Numerical examples show that the derived large
system results give a good approximation to the performance of finite-size
system and that the MMSE receiver achieves close to a single-user performance
with only one feedback bit per signature element.
|
0809.0545
|
Frequency Locking of an Optical Cavity using LQG Integral Control
|
quant-ph cs.SY
|
This paper considers the application of integral Linear Quadratic Gaussian
(LQG) optimal control theory to a problem of cavity locking in quantum optics.
The cavity locking problem involves controlling the error between the laser
frequency and the resonant frequency of the cavity. A model for the cavity
system, which comprises a piezo-electric actuator and an optical cavity is
experimentally determined using a subspace identification method. An LQG
controller which includes integral action is synthesized to stabilize the
frequency of the cavity to the laser frequency and to reject low frequency
noise. The controller is successfully implemented in the laboratory using a
dSpace DSP board.
|
0809.0610
|
A framework for the interactive resolution of multi-objective vehicle
routing problems
|
cs.AI
|
The article presents a framework for the resolution of rich vehicle routing
problems which are difficult to address with standard optimization techniques.
We use local search on the basis on variable neighborhood search for the
construction of the solutions, but embed the techniques in a flexible framework
that allows the consideration of complex side constraints of the problem such
as time windows, multiple depots, heterogeneous fleets, and, in particular,
multiple optimization criteria. In order to identify a compromise alternative
that meets the requirements of the decision maker, an interactive procedure is
integrated in the resolution of the problem, allowing the modification of the
preference information articulated by the decision maker. The framework is
prototypically implemented in a computer system. First results of test runs on
multiple depot vehicle routing problems with time windows are reported.
|
0809.0635
|
Low ML-Decoding Complexity, Large Coding Gain, Full-Rate, Full-Diversity
STBCs for 2 X 2 and 4 X 2 MIMO Systems
|
cs.IT math.IT
|
This paper (Part of the content of this manuscript has been accepted for
presentation in IEEE Globecom 2008, to be held in New Orleans) deals with low
maximum likelihood (ML) decoding complexity, full-rate and full-diversity
space-time block codes (STBCs), which also offer large coding gain, for the 2
transmit antenna, 2 receive antenna ($2\times 2$) and the 4 transmit antenna, 2
receive antenna ($4\times 2$) MIMO systems. Presently, the best known STBC for
the $2\times2$ system is the Golden code and that for the $4\times2$ system is
the DjABBA code. Following the approach by Biglieri, Hong and Viterbo, a new
STBC is presented in this paper for the $2\times 2$ system. This code matches
the Golden code in performance and ML-decoding complexity for square QAM
constellations while it has lower ML-decoding complexity with the same
performance for non-rectangular QAM constellations. This code is also shown to
be \emph{information-lossless} and \emph{diversity-multiplexing gain} (DMG)
tradeoff optimal. This design procedure is then extended to the $4\times 2$
system and a code, which outperforms the DjABBA code for QAM constellations
with lower ML-decoding complexity, is presented. So far, the Golden code has
been reported to have an ML-decoding complexity of the order of $M^4$ for
square QAM of size $M$. In this paper, a scheme that reduces its ML-decoding
complexity to $M^2\sqrt{M}$ is presented.
|
0809.0662
|
Improving Local Search for Fuzzy Scheduling Problems
|
cs.AI
|
The integration of fuzzy set theory and fuzzy logic into scheduling is a
rather new aspect with growing importance for manufacturing applications,
resulting in various unsolved aspects. In the current paper, we investigate an
improved local search technique for fuzzy scheduling problems with fitness
plateaus, using a multi criteria formulation of the problem. We especially
address the problem of changing job priorities over time as studied at the
Sherwood Press Ltd, a Nottingham-based printing company, which is a
collaborator on the project.
|
0809.0680
|
The Prolog Interface to the Unstructured Information Management
Architecture
|
cs.SE cs.IR
|
In this paper we describe the design and implementation of the Prolog
interface to the Unstructured Information Management Architecture (UIMA) and
some of its applications in natural language processing. The UIMA Prolog
interface translates unstructured data and the UIMA Common Analysis Structure
(CAS) into a Prolog knowledge base, over which the developers write rules and
use resolution theorem proving to search and generate new annotations over the
unstructured data. These rules can explore all the previous UIMA annotations
(such as, the syntactic structure, parsing statistics) and external Prolog
knowledge bases (such as, Prolog WordNet and Extended WordNet) to implement a
variety of tasks for the natural language analysis. We also describe
applications of this logic programming interface in question analysis (such as,
focus detection, answer-type and other constraints detection), shallow parsing
(such as, relations in the syntactic structure), and answer selection.
|
0809.0686
|
Energy Scaling Laws for Distributed Inference in Random Fusion Networks
|
cs.IT cs.NI math.IT math.ST stat.TH
|
The energy scaling laws of multihop data fusion networks for distributed
inference are considered. The fusion network consists of randomly located
sensors distributed i.i.d. according to a general spatial distribution in an
expanding region. Among the class of data fusion schemes that enable optimal
inference at the fusion center for Markov random field (MRF) hypotheses, the
scheme with minimum average energy consumption is bounded below by the average
energy of fusion along the minimum spanning tree, and above by that of a
suboptimal
scheme, referred to as Data Fusion for Markov Random Fields (DFMRF). Scaling
laws are derived for the optimal and suboptimal fusion policies. It is shown
that the average asymptotic energy of the DFMRF scheme is finite for a class of
MRF models.
|
0809.0723
|
A Simple Mechanism for Focused Web-harvesting
|
cs.IR cs.CY
|
Focused web-harvesting is deployed to realize automated and comprehensive
index databases as an alternative way to achieve virtual topical data
integration. The web-harvesting has been implemented and extended by not only
specifying the targeted URLs, but also predefining human-edited harvesting
parameters to improve speed and accuracy. The harvesting parameter set
comprises three main components: first, the depth of the final pages
containing the desired information, counted from the first page at the
targeted URLs; secondly, the focus-point number, which determines the exact
box containing the relevant information; and lastly, the combination of
keywords used to recognize hyperlinks to relevant images or full texts
embedded in those final pages. All parameters are accessible and fully
customizable for
each target by the administrators of participating institutions over an
integrated web interface. A real implementation to the Indonesian Scientific
Index which covers all scientific information across Indonesia is also briefly
introduced.
|
0809.0727
|
Microcontroller-based System for Modular Networked Robot
|
cs.RO cs.CY
|
A prototype of a modular networked robot for autonomous monitoring, with full
control over the web through a wireless connection, has been developed. The
robot is equipped with a particular set of built-in analyzing tools and
appropriate sensors, depending on its main purposes, to enable independent and
real-time data acquisition and processing. The paper focuses on the
microcontroller-based system that realizes the modularity. The whole system is
divided into three modules: main unit, data acquisition, and data processing,
while the analyzed results and all aspects of the control and monitoring
systems are fully accessible over an integrated web interface. This concept
leads to some unique features: enhanced flexibility, since modules can be
partially replaced according to user needs; easy access over the web for
remote users; and low development and maintenance cost due to
software-dominated components.
|
0809.0728
|
A Spectrum-Shaping Perspective on Cognitive Radio
|
cs.IT math.IT
|
A new perspective on cognitive radio is presented, where the pre-existent
legacy service is either uncoded or coded and a pair of cognitive transceivers
need be appropriately deployed to coexist with the legacy service. The basic
idea underlying the new perspective is to exploit the fact that, typically, the
legacy channel is not fully loaded by the legacy service, thus leaving a
non-negligible margin to accommodate the cognitive transmission. The
exploitation of such a load margin is optimized by shaping the spectrum of the
transmitted cognitive signal. It is shown that non-trivial coexistence of
legacy and cognitive systems is possible even without sharing the legacy
message with the cognitive transmitter. Surprisingly, the optimized cognitive
transmitter is no longer limited by its interference power at the legacy
receiver, and can always transmit at its full available device power.
Analytical development and numerical illustration are presented, in particular
focusing on the logarithmic growth rate, {\it i.e.}, the prelog coefficient, of
cognitive transmission in the high-power regime.
|
0809.0733
|
There exists no self-dual [24,12,10] code over F5
|
math.CO cs.IT math.IT
|
Self-dual codes over F5 exist for all even lengths. The smallest length for
which the largest minimum weight among self-dual codes has not been determined
is 24, and the largest minimum weight is either 9 or 10. In this note, we show
that there exists no self-dual [24,12,10] code over F5, using the
classification of 24-dimensional odd unimodular lattices due to Borcherds.
|
0809.0737
|
Malleable Coding with Fixed Reuse
|
cs.IT math.IT
|
In cloud computing, storage area networks, remote backup storage, and similar
settings, stored data is modified with updates from new versions. Representing
information and modifying the representation are both expensive. Therefore it
is desirable for the data to not only be compressed but to also be easily
modified during updates. A malleable coding scheme considers both compression
efficiency and ease of alteration, promoting codeword reuse. We examine the
trade-off between compression efficiency and malleability cost, the difficulty
of synchronizing compressed versions, measured as the length of a reused prefix
portion. Through a coding theorem, the region of achievable rates and
malleability is expressed as a single-letter optimization. Relationships to
common information problems are also described.
|
0809.0745
|
Sparse Recovery by Non-convex Optimization -- Instance Optimality
|
cs.IT math.IT
|
In this note, we address the theoretical properties of $\Delta_p$, a class of
compressed sensing decoders that rely on $\ell^p$ minimization with 0<p<1 to
recover estimates of sparse and compressible signals from incomplete and
inaccurate measurements. In particular, we extend the results of Candes,
Romberg and Tao, and Wojtaszczyk regarding the decoder $\Delta_1$, based on
$\ell^1$ minimization, to $\Delta_p$ with 0<p<1. Our results are two-fold.
First, we show that under certain sufficient conditions that are weaker than
the analogous sufficient conditions for $\Delta_1$ the decoders $\Delta_p$ are
robust to noise and stable in the sense that they are (2,p) instance optimal
for a large class of encoders. Second, we extend the results of Wojtaszczyk to
show that, like $\Delta_1$, the decoders $\Delta_p$ are (2,2) instance optimal
in probability provided the measurement matrix is drawn from an appropriate
distribution.
|
0809.0753
|
Proposition of the Interactive Pareto Iterated Local Search Procedure -
Elements and Initial Experiments
|
cs.AI cs.HC
|
The article presents an approach to interactively solve multi-objective
optimization problems. While the identification of efficient solutions is
supported by computational intelligence techniques on the basis of local
search, the search is directed by partial preference information obtained from
the decision maker.
An application of the approach to biobjective portfolio optimization, modeled
as the well-known knapsack problem, is reported, and experimental results are
presented for benchmark instances taken from the literature. In brief, we obtain
encouraging results that show the applicability of the approach to the
described problem.
|
0809.0755
|
Bin Packing Under Multiple Objectives - a Heuristic Approximation
Approach
|
cs.AI
|
The article proposes a heuristic approximation approach to the bin packing
problem under multiple objectives. In addition to the traditional objective of
minimizing the number of bins, the heterogeneousness of the elements in each
bin is minimized, leading to a biobjective formulation of the problem with a
tradeoff between the number of bins and their heterogeneousness. An extension
of the Best-Fit approximation algorithm is presented to solve the problem.
Experimental investigations have been carried out on benchmark instances of
different size, ranging from 100 to 1000 items. Encouraging results have been
obtained, showing the applicability of the heuristic approach to the described
problem.
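An extension of Best-Fit along these lines can be sketched as follows. The scoring rule, the weight `w`, and the item encoding are illustrative assumptions, not the article's exact scheme; items are (size, type) pairs and a bin's heterogeneousness grows with the number of distinct types it holds:

```python
def biobjective_best_fit(items, capacity, w=0.5):
    """Best-Fit extension trading bin fill against heterogeneousness (sketch).

    Classic Best-Fit prefers the tightest fitting bin; here the score also
    penalises adding a type the bin does not already contain, so the two
    objectives are traded off through the weight w (a modelling assumption).
    """
    bins = []
    for size, typ in items:
        best, best_score = None, None
        for b in bins:
            if b['free'] < size:
                continue
            hetero_penalty = 0 if typ in b['types'] else 1
            score = w * (b['free'] - size) / capacity + (1 - w) * hetero_penalty
            if best_score is None or score < best_score:
                best, best_score = b, score
        if best is None:  # open a new bin only when nothing fits
            best = {'free': capacity, 'types': set(), 'items': []}
            bins.append(best)
        best['free'] -= size
        best['types'].add(typ)
        best['items'].append((size, typ))
    return bins

items = [(4, 'a'), (4, 'b'), (3, 'a'), (3, 'b'), (2, 'a')]
packed = biobjective_best_fit(items, capacity=10)
```

Varying `w` between 0 and 1 sweeps between the classic bin-minimising behaviour and a packing that groups equal types together, which is exactly the trade-off the biobjective formulation captures.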
|
0809.0757
|
An application of the Threshold Accepting metaheuristic for curriculum
based course timetabling
|
cs.AI
|
The article presents a local search approach for the solution of timetabling
problems in general, with a particular implementation for competition track 3
of the International Timetabling Competition 2007 (ITC 2007). The heuristic
search procedure is based on Threshold Accepting to overcome local optima. A
stochastic neighborhood is proposed and implemented, randomly removing and
reassigning events from the current solution.
The overall concept has been incrementally obtained from a series of
experiments, which we describe in each (sub)section of the paper. As a result,
we successfully derived a potential candidate solution approach for the finals
of track 3 of the ITC 2007.
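The Threshold Accepting rule itself is tiny. The sketch below applies it to a toy integer objective rather than the timetabling neighbourhood described above; the landscape, neighbour move, and threshold schedule are illustrative assumptions:

```python
import random

def threshold_accepting(cost, x0, thresholds, moves_per_level=100, seed=0):
    """Threshold Accepting (sketch): accept any neighbour whose objective is
    worse by less than the current threshold; thresholds decrease to 0, at
    which point only strict improvements are accepted."""
    rng = random.Random(seed)
    x = x0
    best_val, best = cost(x0), x0
    for t in thresholds:
        for _ in range(moves_per_level):
            y = x + rng.choice((-1, 1))          # random neighbour
            if cost(y) - cost(x) < t:            # TA acceptance rule
                x = y
                if cost(x) < best_val:
                    best_val, best = cost(x), x
    return best_val, best

# Toy landscape: a slope towards 17 with +4 penalty plateaus in between,
# so a pure descent gets stuck while TA can cross the plateaus.
f = lambda x: abs(x - 17) + 4 * (x % 5 != 2)
val, arg = threshold_accepting(f, 0, thresholds=[6, 3, 1, 0])
```

Unlike simulated annealing, acceptance is deterministic given the threshold, which makes the method simple to tune: only the threshold schedule matters.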
|
0809.0788
|
Peek Arc Consistency
|
cs.AI cs.CC cs.LO
|
This paper studies peek arc consistency, a reasoning technique that extends
the well-known arc consistency technique for constraint satisfaction. In
contrast to other more costly extensions of arc consistency that have been
studied in the literature, peek arc consistency requires only linear space and
quadratic time and can be parallelized in a straightforward way such that it
runs in linear time with a linear number of processors. We demonstrate that for
various constraint languages, peek arc consistency gives a polynomial-time
decision procedure for the constraint satisfaction problem. We also present an
algebraic characterization of those constraint languages that can be solved by
peek arc consistency, and study the robustness of the algorithm.
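The technique can be sketched on top of a plain AC-3 implementation: for every variable/value pair, tentatively fix the value, run ordinary arc consistency, and prune the value if the "peek" wipes out a domain. The 2-colouring example (a triangle, which plain AC accepts but peek AC refutes) is our own illustration, not one from the paper:

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Drop values of x lacking support on the (x, y) constraint."""
    rel = constraints.get((x, y))
    if rel is None:
        return False
    before = len(domains[x])
    domains[x] = {a for a in domains[x] if any((a, b) in rel for b in domains[y])}
    return len(domains[x]) != before

def arc_consistency(domains, constraints):
    """Plain AC-3; returns False iff some domain is wiped out."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

def peek_arc_consistency(domains, constraints):
    """Peek AC: tentatively fix each variable/value pair, run plain AC on a
    copy, and prune values whose peek is inconsistent."""
    for x in domains:
        for a in set(domains[x]):
            trial = {v: set(d) for v, d in domains.items()}
            trial[x] = {a}
            if not arc_consistency(trial, constraints):
                domains[x].discard(a)
                if not domains[x]:
                    return False
    return True

# A triangle is not 2-colourable: plain AC notices nothing,
# peek AC detects the inconsistency.
ne = {(a, b) for a in (0, 1) for b in (0, 1) if a != b}
cons = {(u, v): ne for u in 'XYZ' for v in 'XYZ' if u != v}
doms = {v: {0, 1} for v in 'XYZ'}
ac_ok = arc_consistency({v: set(d) for v, d in doms.items()}, cons)
peek_ok = peek_arc_consistency(doms, cons)
```

The independence of the peeks is also what makes the straightforward parallelisation mentioned in the abstract possible: each variable/value pair can be checked on its own processor.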
|
0809.0835
|
Approximating the volume of unions and intersections of high-dimensional
geometric objects
|
cs.CG cs.NE
|
We consider the computation of the volume of the union of high-dimensional
geometric objects. While showing that this problem is #P-hard already for very
simple bodies (i.e., axis-parallel boxes), we give a fast FPRAS for all objects
where one can: (1) test whether a given point lies inside the object, (2)
sample a point uniformly, (3) calculate the volume of the object in polynomial
time. All three oracles can be weak, that is, just approximate. This implies
that Klee's measure problem and the hypervolume indicator can be approximated
efficiently even though they are #P-hard and hence cannot be solved exactly in
time polynomial in the number of dimensions unless P=NP. Our algorithm also
allows efficient approximation of the volume of the union of convex bodies
given by weak membership oracles.
For the analogous problem of the intersection of high-dimensional geometric
objects we prove #P-hardness for boxes and show that there is no multiplicative
polynomial-time $2^{d^{1-\epsilon}}$-approximation for certain boxes unless
NP=BPP, but give a simple additive polynomial-time $\epsilon$-approximation.
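For axis-parallel boxes the three oracles are trivial, and the Karp-Luby style estimator that such an FPRAS builds on fits in a few lines. This sketch is our own illustration of the sampling idea with exact oracles, not the paper's full algorithm with weak oracles:

```python
import random
from math import prod

def union_volume(boxes, samples=20000, seed=0):
    """Monte Carlo estimate of the volume of a union of axis-parallel boxes.

    Each box is (lower_corner, upper_corner).  Only the three oracles from
    the abstract are used: exact volume, uniform sampling, membership test.
    """
    rng = random.Random(seed)
    vols = [prod(u - l for l, u in zip(lo, hi)) for lo, hi in boxes]
    total = sum(vols)
    hits = 0
    for _ in range(samples):
        # 1. Pick a box with probability proportional to its volume.
        r = rng.uniform(0, total)
        i = 0
        while r > vols[i]:
            r -= vols[i]
            i += 1
        # 2. Sample a point uniformly inside it.
        lo, hi = boxes[i]
        p = [rng.uniform(l, u) for l, u in zip(lo, hi)]
        # 3. Count the point only if box i is the FIRST box containing it,
        #    so every point of the union is counted exactly once.
        first = next(j for j, (l2, u2) in enumerate(boxes)
                     if all(a <= x <= b for x, a, b in zip(p, l2, u2)))
        hits += (first == i)
    return total * hits / samples

# Two overlapping 2x2 squares: exact union volume is 4 + 4 - 1 = 7.
boxes = [((0, 0), (2, 2)), ((1, 1), (3, 3))]
estimate = union_volume(boxes)
```

The hit probability is at least 1/len(boxes), so unlike naive rejection sampling over a bounding box, the relative error does not degrade with the dimension, which is the key to the hypervolume-indicator application.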
|
0809.0840
|
HEP data analysis using jHepWork and Java
|
cs.CE hep-ex hep-ph
|
The role of Java in high-energy physics and recent progress in the development
of a platform-independent data-analysis framework, jHepWork, are discussed. The
framework produces professional graphics and has many libraries for data
manipulation.
|
0809.0853
|
Estimating divergence functionals and the likelihood ratio by convex
risk minimization
|
math.ST cs.IT math.IT stat.TH
|
We develop and analyze $M$-estimation methods for divergence functionals and
the likelihood ratios of two probability distributions. Our method is based on
a non-asymptotic variational characterization of $f$-divergences, which allows
the problem of estimating divergences to be tackled via convex empirical risk
optimization. The resulting estimators are simple to implement, requiring only
the solution of standard convex programs. We present an analysis of consistency
and convergence for these estimators. Given conditions only on the ratios of
densities, we show that our estimators can achieve optimal minimax rates for
the likelihood ratio and the divergence functionals in certain regimes. We
derive an efficient optimization algorithm for computing our estimates, and
illustrate their convergence behavior and practical viability by simulations.
|
0809.0908
|
Reduced Complexity Demodulation and Equalization Scheme for Differential
Impulse Radio UWB Systems with ISI
|
cs.IT math.IT
|
In this paper, we consider the demodulation and equalization problem of
differential Impulse Radio (IR) Ultra-WideBand (UWB) Systems with
Inter-Symbol-Interference (ISI). The differential IR UWB systems have been
extensively discussed recently. The advantage of differential IR UWB systems
include simple receiver frontend structure. One challenge in the demodulation
and equalization of such systems with ISI is that the systems have a rather
complex model. The input and output signals of the systems follow a
second-order Volterra model. Furthermore, the noise at the output is data
dependent. In this paper, we propose a reduced-complexity joint demodulation
and equalization algorithm. The algorithm is based on reformulating the
nearest neighbor decoding problem as a mixed quadratic program and utilizing a
semi-definite relaxation. The numerical results show that the proposed
demodulation and equalization algorithm has low computational complexity, and
at the same time, has almost the same error probability performance as the
maximum likelihood decoding algorithm.
|
0809.0916
|
Irreversible Monte Carlo Algorithms for Efficient Sampling
|
cond-mat.stat-mech cs.IT math.IT math.PR stat.AP
|
Equilibrium systems evolve according to Detailed Balance (DB). This principle
guided the development of Monte Carlo sampling techniques, of which the
Metropolis-Hastings (MH) algorithm is the most famous representative. It is
also known that DB is sufficient but not necessary. We construct an
irreversible deformation of a given reversible algorithm that is capable of
dramatically improving sampling from a known distribution. Our transformation
modifies the transition rates while keeping the structure of transitions
intact. To illustrate the general scheme, we design an Irreversible version of
Metropolis-Hastings (IMH) and test it on the example of a spin cluster.
Standard MH for this model suffers from critical slowdown, while IMH is free
from it.
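For reference, the reversible MH baseline that the irreversible deformation starts from can be written in a few lines. The five-state ring target below is an illustrative assumption, not the paper's spin-cluster model, and the irreversible lifting itself is not reproduced here:

```python
import random

def metropolis_hastings(weights, steps=200000, seed=0):
    """Reversible Metropolis-Hastings on a ring of len(weights) states,
    with symmetric +-1 proposals; satisfies detailed balance by construction."""
    rng = random.Random(seed)
    n = len(weights)
    x = 0
    counts = [0] * n
    for _ in range(steps):
        y = (x + rng.choice((-1, 1))) % n            # symmetric proposal
        if rng.random() < min(1.0, weights[y] / weights[x]):
            x = y                                     # MH acceptance rule
        counts[x] += 1
    return [c / steps for c in counts]

# Target distribution pi proportional to the weights below (Z = 9).
weights = [1, 2, 3, 2, 1]
freq = metropolis_hastings(weights)
```

The deformation described in the abstract keeps exactly this transition structure (which states connect to which) but skews the rates so the chain acquires a net drift, suppressing the diffusive back-and-forth that causes critical slowdown.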
|
0809.0918
|
Intersecting random graphs and networks with multiple adjacency
constraints: A simple example
|
cs.IT math.IT math.PR
|
When studying networks using random graph models, one is sometimes faced with
situations where the notion of adjacency between nodes reflects multiple
constraints. Traditional random graph models are insufficient to handle such
situations.
A simple idea to account for multiple constraints consists in taking the
intersection of random graphs. In this paper we initiate the study of random
graphs so obtained through a simple example. We examine the intersection of an
Erdos-Renyi graph and of one-dimensional geometric random graphs. We
investigate the zero-one laws for the property that there are no isolated
nodes. When the geometric component is defined on the unit circle, a full
zero-one law is established and we determine its critical scaling. When the
geometric component lies in the unit interval, there is a gap in that the
obtained zero and one laws are found to express deviations from different
critical scalings. In particular, the first moment method requires a larger
critical scaling than in the unit circle case in order to obtain the one law.
This discrepancy is somewhat surprising given that the zero-one laws for the
absence of isolated nodes are identical in the geometric random graphs on both
the unit interval and unit circle.
|
0809.0922
|
Superposition for Fixed Domains
|
cs.AI cs.LO
|
Superposition is an established decision procedure for a variety of
first-order logic theories represented by sets of clauses. A satisfiable
theory, saturated by superposition, implicitly defines a minimal term-generated
model for the theory. Proving universal properties with respect to a saturated
theory directly leads to a modification of the minimal model's term-generated
domain, as new Skolem functions are introduced. For many applications, this is
not desired.
Therefore, we propose the first superposition calculus that can explicitly
represent existentially quantified variables and can thus compute with respect
to a given domain. This calculus is sound and refutationally complete in the
limit for a first-order fixed domain semantics. For saturated Horn theories and
classes of positive formulas, we can even employ the calculus to prove
properties of the minimal model itself, going beyond the scope of known
superposition-based approaches.
|
0809.0949
|
Efficient Implementation of the Generalized Tunstall Code Generation
Algorithm
|
cs.IT cs.DS math.IT
|
A method is presented for constructing a Tunstall code that is linear time in
the number of output items. This is an improvement on the state of the art for
non-Bernoulli sources, including Markov sources, which require a (suboptimal)
generalization of Tunstall's algorithm proposed by Savari and analytically
examined by Tabus and Rissanen. In general, if n is the total number of output
leaves across all Tunstall trees, s is the number of trees (states), and D is
the number of leaves of each internal node, then this method takes O((1+(log
s)/D) n) time and O(n) space.
|
0809.0961
|
MOOPPS: An Optimization System for Multi Objective Scheduling
|
cs.AI cs.HC
|
In the current paper, we present an optimization system solving
multi-objective production scheduling problems (MOOPPS). The identification of Pareto
optimal alternatives or at least a close approximation of them is possible by a
set of implemented metaheuristics. Necessary control parameters can easily be
adjusted by the decision maker as the whole software is fully menu driven. This
allows the comparison of different metaheuristic algorithms for the considered
problem instances. Results are visualized by a graphical user interface showing
the distribution of solutions in outcome space as well as their corresponding
Gantt chart representation.
The identification of a most preferred solution from the set of efficient
solutions is supported by a module based on the aspiration interactive method
(AIM). The decision maker successively defines aspiration levels until a single
solution is chosen.
After successfully competing in the finals in Ronneby, Sweden, the MOOPPS
software has been awarded the European Academic Software Award 2002
(http://www.bth.se/llab/easa_2002.nsf).
|
0809.1017
|
Entropy Concentration and the Empirical Coding Game
|
cs.IT cs.LG math.IT math.ST stat.ME stat.TH
|
We give a characterization of Maximum Entropy/Minimum Relative Entropy
inference by providing two `strong entropy concentration' theorems. These
theorems unify and generalize Jaynes' `concentration phenomenon' and Van
Campenhout and Cover's `conditional limit theorem'. The theorems characterize
exactly in what sense a prior distribution Q conditioned on a given constraint,
and the distribution P, minimizing the relative entropy D(P ||Q) over all
distributions satisfying the constraint, are `close' to each other. We then
apply our theorems to establish the relationship between entropy concentration
and a game-theoretic characterization of Maximum Entropy Inference due to
Topsoe and others.
|
0809.1039
|
High-SNR Analysis of Outage-Limited Communications with Bursty and
Delay-Limited Information
|
cs.IT math.IT
|
This work analyzes the high-SNR asymptotic error performance of
outage-limited communications with fading, where the number of bits that arrive
at the transmitter during any time slot is random but the delivery of bits at
the receiver must adhere to a strict delay limitation. Specifically, bit errors
are caused by erroneous decoding at the receiver or violation of the strict
delay constraint. Under certain scaling of the statistics of the bit-arrival
process with SNR, this paper shows that the optimal decay behavior of the
asymptotic total probability of bit error depends on how fast the burstiness of
the source scales down with SNR. If the source burstiness scales down too
slowly, the total probability of error is asymptotically dominated by
delay-violation events. On the other hand, if the source burstiness scales down
too quickly, the total probability of error is asymptotically dominated by
channel-error events. However, at the proper scaling, where the burstiness
scales linearly with 1/sqrt(log SNR) and at the optimal coding duration and
transmission rate, the occurrences of channel errors and delay-violation errors
are asymptotically balanced. In this latter case, the optimal exponent of the
total probability of error reveals a tradeoff that addresses the question of
how much of the allowable time and rate should be used for gaining reliability
over the channel and how much for accommodating the burstiness with delay
constraints.
|
0809.1043
|
On Unique Decodability
|
cs.IT math.IT
|
In this paper we propose a revisitation of the topic of unique decodability
and of some fundamental theorems of lossless coding. It is widely believed
that, for any discrete source X, every "uniquely decodable" block code
satisfies E[l(X_1 X_2 ... X_n)] >= H(X_1,X_2,...,X_n), where X_1, X_2,...,X_n
are the first n symbols of the source, E[l(X_1 X_2 ... X_n)] is the expected
length of the code for those symbols and H(X_1,X_2,...,X_n) is their joint
entropy. We show that, for certain sources with memory, the above inequality
only holds when a limiting definition of "uniquely decodable code" is
considered. In particular, the above inequality is usually assumed to hold for
any "practical code" due to a debatable application of McMillan's theorem to
sources with memory. We thus propose a clarification of the topic, also
providing an extended version of McMillan's theorem to be used for Markovian
sources.
|
0809.1053
|
An impossibility result for process discrimination
|
math.PR cs.IT math.IT math.ST stat.TH
|
Two series of binary observations $x_1,x_2,...$ and $y_1,y_2,...$ are
presented: at each time $n\in\N$ we are given $x_n$ and $y_n$. It is assumed
that the sequences are generated independently of each other by two
B-processes. We are interested in the question of whether the sequences
represent a typical realization of two different processes or of the same one.
We demonstrate that this is impossible to decide, in the sense that every
discrimination procedure is bound to err with non-negligible frequency when
presented with sequences from some B-processes. This contrasts with earlier
positive results on B-processes, in particular those showing that there are consistent
results on B-processes, in particular those showing that there are consistent
$\bar d$-distance estimates for this class of processes.
|