| id | title | categories | abstract |
|---|---|---|---|
0710.3561
|
Stationary probability density of stochastic search processes in global
optimization
|
cs.AI cond-mat.stat-mech cs.NE
|
A method for the construction of approximate analytical expressions for the
stationary marginal densities of general stochastic search processes is
proposed. From the marginal densities, regions of the search space that contain
the global optima with high probability can be readily identified. The density
estimation procedure involves a controlled number of linear operations, with a
computational cost per iteration that grows linearly with problem size.
|
0710.3621
|
Numerical removal of water-vapor effects from THz-TDS measurements
|
cs.CE physics.comp-ph
|
One source of disturbance in a pulsed T-ray signal is attributed to ambient
water vapor. Water molecules in the gas phase selectively absorb T-rays at
discrete frequencies corresponding to their molecular rotational transitions.
This results in prominent resonances spread over the T-ray spectrum, and in the
time domain the T-ray signal is observed as fluctuations after the main pulse.
These effects are generally undesired, since they may mask critical
spectroscopic data. Consequently, ambient water vapor is commonly removed from
the T-ray path by using a closed chamber during the measurement. Yet in some
applications a closed chamber is not feasible, which motivates the need for
another method to reduce these unwanted artifacts. This
paper presents a study on a computational means to address the problem.
Initially, a complex frequency response of water vapor is modeled from a
spectroscopic catalog. Using a deconvolution technique, together with fine
tuning of the strength of each resonance, parts of the water-vapor response are
removed from a measured T-ray signal, with minimal signal distortion.
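The deconvolution step described above can be sketched as a regularized spectral division. This is a minimal illustration, not the paper's implementation: the function name, the single global `strength` knob (the paper fine-tunes each resonance individually), and the regularization constant are all assumptions.

```python
import numpy as np

def remove_vapor_response(signal, h_vapor, strength=1.0, eps=1e-6):
    """Deconvolve a modeled water-vapor frequency response from a
    time-domain T-ray trace via regularized spectral division.

    signal   : measured T-ray signal (1-D real array)
    h_vapor  : modeled complex frequency response of water vapor,
               sampled at the rfft frequencies of `signal`
    strength : global scaling of the resonances (illustrative; the paper
               tunes the strength of each resonance separately)
    eps      : Tikhonov-style regularization limiting noise amplification
    """
    S = np.fft.rfft(signal)
    H = 1.0 + strength * (h_vapor - 1.0)       # scaled vapor response
    # Regularized division instead of a plain S / H
    S_clean = S * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(S_clean, n=len(signal))
```

The regularization keeps the division stable at frequencies where the modeled response is weak, at the cost of leaving a tiny residual of the resonances.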
|
0710.3757
|
Inferring the conditional mean
|
math.PR cs.IT math.IT
|
Consider a stationary real-valued time series $\{X_n\}_{n=0}^{\infty}$ with a
priori unknown distribution. The goal is to estimate the conditional
expectation $E(X_{n+1}|X_0,..., X_n)$ based on the observations $(X_0,...,
X_n)$ in a pointwise consistent way. It is well known that this is not possible
at all values of $n$. We will estimate it along stopping times.
|
0710.3760
|
Guessing the output of a stationary binary time series
|
math.PR cs.IT math.IT
|
The forward prediction problem for a binary time series
$\{X_n\}_{n=0}^{\infty}$ is to estimate the probability that $X_{n+1}=1$ based
on the observations $X_i$, $0\le i\le n$ without prior knowledge of the
distribution of the process $\{X_n\}$. It is known that this is not possible if
one estimates at all values of $n$. We present a simple procedure which will
attempt to make such a prediction infinitely often at carefully selected
stopping times chosen by the algorithm. The growth rate of the stopping times
is also exhibited.
|
0710.3773
|
Limitations on intermittent forecasting
|
math.PR cs.IT math.IT
|
Bailey showed that the general pointwise forecasting for stationary and
ergodic time series has a negative solution. However, it is known that for
Markov chains the problem can be solved. Morvai showed that there is a stopping
time sequence $\{\lambda_n\}$ such that
$P(X_{\lambda_n+1}=1|X_0,...,X_{\lambda_n}) $ can be estimated from samples
$(X_0,...,X_{\lambda_n})$ such that the difference between the conditional
probability and the estimate vanishes along these stopping times for all
stationary and ergodic binary time series. We will show it is not possible to
estimate the above conditional probability along a stopping time sequence for
all stationary and ergodic binary time series in a pointwise sense such that if
the time series turns out to be a Markov chain, the predictor will predict
eventually for all $n$.
|
0710.3775
|
On classifying processes
|
math.PR cs.IT math.IT
|
We prove several results concerning classifications, based on successive
observations $(X_1,..., X_n)$ of an unknown stationary and ergodic process, for
membership in a given class of processes, such as the class of all finite order
Markov chains.
|
0710.3777
|
A Deterministic Approach to Wireless Relay Networks
|
cs.IT cs.DM math.IT math.PR
|
We present a deterministic channel model which captures several key features
of multiuser wireless communication. We consider a model for a wireless network
with nodes connected by such deterministic channels, and present an exact
characterization of the end-to-end capacity when there is a single source and a
single destination and an arbitrary number of relay nodes. This result is a
natural generalization of the max-flow min-cut theorem for wireline networks.
Finally, to demonstrate the connection between the deterministic model and the
Gaussian model, we look at two examples: the single-relay channel and the
diamond network. We show that in each of these two examples, the
capacity-achieving scheme in the corresponding deterministic model naturally
suggests a scheme in the Gaussian model that is within 1 bit and 2 bits,
respectively, of the cut-set upper bound, for all values of the channel gains.
This is the first part of a
two-part paper; the sequel [1] will focus on the proof of the max-flow min-cut
theorem of a class of deterministic networks of which our model is a special
case.
|
0710.3781
|
Wireless Network Information Flow
|
cs.IT cs.DM math.IT math.PR
|
We present an achievable rate for general deterministic relay networks, with
broadcasting at the transmitters and interference at the receivers. In
particular we show that if the optimizing distribution for the
information-theoretic cut-set bound is a product distribution, then we have a
complete characterization of the achievable rates for such networks. For linear
deterministic finite-field models discussed in a companion paper [3], this is
indeed the case, and we have a generalization of the celebrated max-flow
min-cut theorem for such a network.
|
0710.3802
|
A Posteriori Equivalence: A New Perspective for Design of Optimal
Channel Shortening Equalizers
|
cs.IT math.IT
|
The problem of channel shortening equalization for optimal detection in ISI
channels is considered. The problem is to choose a linear equalizer and a
partial response target filter such that the combination produces the best
detection performance. Instead of using the traditional approach of MMSE
equalization, we directly seek all equalizer and target pairs that yield
optimal detection performance in terms of the sequence or symbol error rate.
This leads to a new notion of a posteriori equivalence between the equalized
and target channels with a simple characterization in terms of their underlying
probability distributions. Using this characterization we show the surprising
existence of an infinite family of equalizer and target pairs for which any
maximum a posteriori (MAP) based detector designed for the target channel is
simultaneously MAP optimal for the equalized channel. For channels whose input
symbols have equal energy, such as q-PSK, the MMSE equalizer designed with a
monic target constraint yields a solution belonging to this optimal family of
designs. Although these designs produce IIR target filters, the ideas are
extended to design good FIR targets. For an arbitrary choice of target and
equalizer, we derive an expression for the probability of sequence detection
error. This expression is used to design optimal FIR targets and IIR equalizers
and to quantify the FIR approximation penalty.
|
0710.3817
|
A Note on Comparison of Error Correction Codes
|
cs.IT math.IT math.ST stat.TH
|
Use of an error correction code in a given transmission channel can be
regarded as a statistical experiment. Therefore, powerful results from the
theory of comparison of experiments can be applied to compare the performances
of different error correction codes. We present results on the comparison of
block error correction codes using the representation of error correction code
as a linear experiment. In this case the code comparison is based on the
Loewner matrix ordering of respective code matrices. Next, we demonstrate the
bit-error rate code performance comparison based on the representation of the
codes as dichotomies, in which case the comparison is based on the matrix
majorization ordering of their respective equivalent code matrices.
|
0710.3861
|
Optimal encoding on a discrete lattice with translation-invariant
constraints using statistical algorithms
|
cs.IT math.IT
|
This paper presents a methodology for encoding information in the valuations
of a discrete lattice with translation-invariant constraints in an
asymptotically optimal way. The method is based on finding a statistical
description of such valuations and turning it into a statistical algorithm
that deterministically constructs a valuation with given statistics. Optimal
statistics allow valuations to be generated with a uniform distribution, which
yields the maximum information capacity. We show that the optimum can be
reached for one-dimensional models using the maximal entropy random walk, and
that in the general case we can in practice get as close to the capacity of
the model as we wish (numerically, a loss of 10^{-10} bit/node for the Hard
Square model). We also present a simpler alternative to the arithmetic coding
method, which can additionally be used as a cryptosystem and a data-correction
method.
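As a worked one-dimensional instance of the ideas above, the capacity of binary sequences with no two adjacent 1s (the 1-D analogue of the Hard Square constraint) and the corresponding maximal entropy random walk can be obtained from a transfer matrix. This is a textbook sketch, not the paper's code.

```python
import numpy as np

# 1-D hard constraint: binary sequences with no two adjacent 1s.
# Capacity (bits/node) is log2 of the largest transfer-matrix eigenvalue;
# the maximal entropy random walk (MERW) attains it.
T = np.array([[1.0, 1.0],   # after a 0 the next node may be 0 or 1
              [1.0, 0.0]])  # after a 1 the next node must be 0
eigvals, eigvecs = np.linalg.eig(T)
k = np.argmax(eigvals.real)
lam, psi = eigvals[k].real, np.abs(eigvecs[:, k].real)

capacity = np.log2(lam)     # log2 of the golden ratio, about 0.694 bits/node

# MERW transition probabilities: P[i, j] = T[i, j] * psi[j] / (lam * psi[i])
P = T * psi[None, :] / (lam * psi[:, None])
```

Because T psi = lam psi, each row of P sums to 1, and a walk driven by P produces valuations whose entropy per node equals the capacity.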
|
0710.3888
|
Cooperative Multi-Cell Networks: Impact of Limited-Capacity Backhaul and
Inter-Users Links
|
cs.IT math.IT
|
Cooperative technology is expected to have a great impact on the performance
of cellular or, more generally, infrastructure networks. Both multicell
processing (cooperation among base stations) and relaying (cooperation at the
user level) are currently being investigated. In this presentation, recent
results regarding the performance of multicell processing and user cooperation
under the assumption of limited-capacity interbase station and inter-user
links, respectively, are reviewed. The survey focuses on related results
derived for non-fading uplink and downlink channels of simple cellular system
models. The analytical treatment, facilitated by these simple setups, enhances
the insight into the limitations imposed by limited-capacity constraints on the
gains achievable by cooperative techniques.
|
0710.3974
|
Distributed source coding in dense sensor networks
|
cs.IT math.IT
|
We study the problem of the reconstruction of a Gaussian field defined in
[0,1] using N sensors deployed at regular intervals. The goal is to quantify
the total data rate required for the reconstruction of the field with a given
mean square distortion. We consider a class of two-stage mechanisms which a)
send information to allow the reconstruction of the sensor's samples within
sufficient accuracy, and then b) use these reconstructions to estimate the
entire field. To implement the first stage, the heavy correlation between the
sensor samples suggests the use of distributed coding schemes to reduce the
total rate. We demonstrate the existence of a distributed block coding scheme
that achieves, for a given fidelity criterion for the reconstruction of the
field, a total information rate that is bounded by a constant, independent of
the number $N$ of sensors. The constant in general depends on the
autocorrelation function of the field and the desired distortion criterion for
the sensor samples. We then describe a scheme which can be implemented using
only scalar quantizers at the sensors, without any use of distributed source
coding, and which also achieves a total information rate that is a constant,
independent of the number of sensors. While this scheme operates at a rate that
is greater than the rate achievable through distributed coding and entails
greater delay in reconstruction, its simplicity makes it attractive for
implementation in sensor networks.
|
0710.4046
|
Bit-interleaved coded modulation in the wideband regime
|
cs.IT math.IT
|
The wideband regime of bit-interleaved coded modulation (BICM) in Gaussian
channels is studied. The Taylor expansion of the coded modulation capacity for
generic signal constellations at low signal-to-noise ratio (SNR) is derived and
used to determine the corresponding expansion for the BICM capacity. Simple
formulas for the minimum energy per bit and the wideband slope are given. BICM
is found to be suboptimal in the sense that its minimum energy per bit can be
larger than the corresponding value for coded modulation schemes. The minimum
energy per bit using standard Gray mapping on M-PAM or M^2-QAM is given by a
simple formula and shown to approach -0.34 dB as M increases. Using the low SNR
expansion, a general trade-off between power and bandwidth in the wideband
regime is used to show how a power loss can be traded off against a bandwidth
gain.
|
0710.4051
|
On the capacity achieving covariance matrix for Rician MIMO channels: an
asymptotic approach
|
math.PR cs.IT math.IT
|
The capacity-achieving input covariance matrices for coherent block-fading
correlated MIMO Rician channels are determined. In this case, no closed-form
expressions for the eigenvectors of the optimum input covariance matrix are
available. An approximation of the average mutual information is evaluated in
this paper in the asymptotic regime where the number of transmit and receive
antennas converge to $+\infty$. New results related to the accuracy of the
corresponding large system approximation are provided. An attractive
optimization algorithm of this approximation is proposed and we establish that
it yields an effective way to compute the capacity achieving covariance matrix
for the average mutual information. Finally, numerical simulation results show
that, even for a moderate number of transmit and receive antennas, the new
approach provides the same results as direct maximization approaches of the
average mutual information, while being much more computationally attractive.
|
0710.4076
|
Some information-theoretic computations related to the distribution of
prime numbers
|
cs.IT math.IT math.NT math.PR
|
We illustrate how elementary information-theoretic ideas may be employed to
provide proofs for well-known, nontrivial results in number theory.
Specifically, we give an elementary and fairly short proof of the following
asymptotic result: The sum of (log p)/p, taken over all primes p not exceeding
n, is asymptotic to log n as n tends to infinity. We also give finite-n bounds
refining the above limit. This result, originally proved by Chebyshev in 1852,
is closely related to the celebrated prime number theorem.
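The asymptotic stated above is easy to check numerically: by Mertens' first theorem the difference between log n and the sum is bounded by 2 for all n >= 2 (and tends to a constant of roughly 1.33). A minimal sieve-based check:

```python
import math

def mertens_sum(n):
    """Sum of (log p)/p over all primes p <= n, via a sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(n) + 1):
        if is_prime[p]:
            # Mark all multiples of p starting at p*p as composite
            is_prime[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(math.log(p) / p for p in range(2, n + 1) if is_prime[p])
```

For n = 10^5, `math.log(n) - mertens_sum(n)` is already close to the limiting constant, illustrating the finite-n bounds mentioned in the abstract.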
|
0710.4105
|
A Note on the Secrecy Capacity of the Multi-antenna Wiretap Channel
|
cs.IT math.IT
|
Recently, the secrecy capacity of the multi-antenna wiretap channel was
characterized by Khisti and Wornell [1] using a Sato-like argument. This note
presents an alternative characterization using a channel enhancement argument.
This characterization relies on an extremal entropy inequality recently proved
in the context of multi-antenna broadcast channels, and is directly built on
the physical intuition regarding the optimal transmission strategy in this
communication scenario.
|
0710.4117
|
From the entropy to the statistical structure of spike trains
|
q-bio.NC cs.IT math.IT math.PR stat.AP
|
We use statistical estimates of the entropy rate of spike train data in order
to make inferences about the underlying structure of the spike train itself. We
first examine a number of different parametric and nonparametric estimators
(some known and some new), including the ``plug-in'' method, several versions
of Lempel-Ziv-based compression algorithms, a maximum likelihood estimator
tailored to renewal processes, and the natural estimator derived from the
Context-Tree Weighting method (CTW). The theoretical properties of these
estimators are examined, several new theoretical results are developed, and all
estimators are systematically applied to various types of synthetic data and
under different conditions.
Our main focus is on the performance of these entropy estimators on the
(binary) spike trains of 28 neurons recorded simultaneously for a one-hour
period from the primary motor and dorsal premotor cortices of a monkey. We show
how the entropy estimates can be used to test for the existence of long-term
structure in the data, and we construct a hypothesis test for whether the
renewal process model is appropriate for these spike trains. Further, by
applying the CTW algorithm we derive the maximum a posteriori (MAP) tree model
of our empirical data, and comment on the underlying structure it reveals.
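Of the estimators listed, the plug-in method is the simplest: take the empirical distribution of overlapping k-blocks and normalize the block entropy. A minimal sketch (the block length k and the overlapping-window choice are illustrative, not the paper's settings):

```python
from collections import Counter
import math

def plugin_entropy_rate(bits, k=4):
    """Plug-in (empirical-frequency) estimate of the entropy rate, in bits
    per symbol, from overlapping k-blocks of a binary sequence.
    Known to be biased downward for small samples."""
    n = len(bits) - k + 1
    counts = Counter(tuple(bits[i:i + k]) for i in range(n))
    h_block = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h_block / k
```

On i.i.d. fair-coin data the estimate approaches 1 bit/symbol, while any deterministic sequence drives it toward 0, which is the kind of contrast the hypothesis tests in the paper exploit.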
|
0710.4180
|
A quick search method for audio signals based on a piecewise linear
representation of feature trajectories
|
cs.MM cs.DB
|
This paper presents a new method for a quick similarity-based search through
long unlabeled audio streams to detect and locate audio clips provided by
users. The method involves feature-dimension reduction based on a piecewise
linear representation of a sequential feature trajectory extracted from a long
audio stream. Two techniques enable us to obtain a piecewise linear
representation: the dynamic segmentation of feature trajectories and the
segment-based Karhunen-Lo\`{e}ve (KL) transform. In principle, the proposed
search method guarantees the same search results as a search without the
proposed feature-dimension reduction. Experimental results indicate
significant improvements in search speed. For example, the proposed method
reduced the total search time to approximately 1/12 that of previous methods
and detected queries in approximately 0.3 seconds from a 200-hour audio
database.
|
0710.4182
|
Beyond Feedforward Models Trained by Backpropagation: a Practical
Training Tool for a More Efficient Universal Approximator
|
cs.NE
|
Cellular Simultaneous Recurrent Neural Network (SRN) has been shown to be a
function approximator more powerful than the MLP. This means that the
complexity of MLP would be prohibitively large for some problems while SRN
could realize the desired mapping with acceptable computational constraints.
The speed of training of complex recurrent networks is crucial to their
successful application. Present work improves the previous results by training
the network with extended Kalman filter (EKF). We implemented a generic
Cellular SRN and applied it for solving two challenging problems: 2D maze
navigation and a subset of the connectedness problem. The speed of convergence
has been improved by several orders of magnitude in comparison with the earlier
results in the case of maze navigation, and superior generalization has been
demonstrated in the case of connectedness. The implications of these
improvements are discussed.
|
0710.4187
|
Universal coding for correlated sources with complementary delivery
|
cs.IT math.IT
|
This paper deals with a universal coding problem for a certain kind of
multiterminal source coding system that we call the complementary delivery
coding system. In this system, messages from two correlated sources are jointly
encoded, and each decoder has access to one of the two messages to enable it to
reproduce the other message. Both fixed-to-fixed length and fixed-to-variable
length lossless coding schemes are considered. Explicit constructions of
universal codes and bounds of the error probabilities are clarified via
type-theoretical and graph-theoretical analyses. [[Keywords]] multiterminal
source coding, complementary delivery, universal coding, types of sequences,
bipartite graphs
|
0710.4231
|
Analyzing covert social network foundation behind terrorism disaster
|
cs.AI
|
This paper presents a method for analyzing the covert social network
foundation hidden behind a terrorism disaster. It solves a node discovery
problem: discovering a node that plays a relevant role in a social network but
has escaped monitoring of the presence and mutual relationships of nodes. The
method integrates the expert investigator's prior understanding, insight into
the nature of terrorists' social networks derived from complex graph theory,
and computational data processing. The social network responsible for the 9/11
attack in 2001 is used in a simulation experiment to evaluate the performance
of the method.
|
0710.4255
|
Analysis of a Mixed Strategy for Multiple Relay Networks
|
cs.IT math.IT
|
In their landmark paper Cover and El Gamal proposed different coding
strategies for the relay channel with a single relay supporting a communication
pair. These strategies are the decode-and-forward and compress-and-forward
approach, as well as a general lower bound on the capacity of a relay network
which relies on the mixed application of the previous two strategies. So far,
only parts of their work - the decode-and-forward and the compress-and-forward
strategy - have been applied to networks with multiple relays.
This paper derives a mixed strategy for multiple relay networks using a
combined approach of partial decode-and-forward with N +1 levels and the ideas
of successive refinement with different side information at the receivers.
After describing the protocol structure, we present the achievable rates for
the discrete memoryless relay channel as well as Gaussian multiple relay
networks. Using these results we compare the mixed strategy with some special
cases, e.g., multilevel decode-and-forward, distributed compress-and-forward
and a mixed approach where one relay node operates in decode-and-forward and
the other in compress-and-forward mode.
|
0710.4486
|
Non-linear estimation is easy
|
cs.CE cs.NA cs.PF math.AC math.NA math.OC
|
Non-linear state estimation and some related topics, like parametric
estimation, fault diagnosis, and perturbation attenuation, are tackled here via
a new methodology in numerical differentiation. The corresponding basic system
theoretic definitions and properties are presented within the framework of
differential algebra, which makes it possible to handle system variables and
their derivatives of any order. Several academic examples and their computer
simulations, with on-line estimations, illustrate our viewpoint.
|
0710.4516
|
The predictability of letters in written English
|
physics.soc-ph cs.CL stat.ML
|
We show that the predictability of letters in written English texts depends
strongly on their position in the word. The first letters are usually the
hardest to predict. This agrees with the intuitive notion that words are well
defined subunits in written languages, with much weaker correlations across
these units than within them. It implies that the average entropy of a letter
deep inside a word is roughly 4 times smaller than the entropy of the first
letter.
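The position-dependence claimed above is straightforward to measure on any word list. This sketch (run on a toy word list here, not the paper's corpus) computes the empirical entropy of the letter at a given position:

```python
from collections import Counter
import math

def positional_entropy(words, pos):
    """Empirical entropy (in bits) of the letter at 0-based position `pos`,
    over all words that are long enough to have that position."""
    letters = [w[pos] for w in words if len(w) > pos]
    n = len(letters)
    counts = Counter(letters)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Applied to a real corpus, the abstract's claim corresponds to `positional_entropy(words, 0)` being markedly larger than the entropy at positions deep inside the word.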
|
0710.4680
|
Energy Bounds for Fault-Tolerant Nanoscale Designs
|
cs.CC cs.IT math.IT
|
The problem of determining lower bounds for the energy cost of a given
nanoscale design is addressed via a complexity theory-based approach. This
paper provides a theoretical framework that is able to assess the trade-offs
existing in nanoscale designs between the amount of redundancy needed for a
given level of resilience to errors and the associated energy cost. Circuit
size, logic depth and error resilience are analyzed and brought together in a
theoretical framework that can be seamlessly integrated with automated
synthesis tools and can guide the design process of nanoscale systems comprised
of failure prone devices. The impact of redundancy addition on the switching
energy and its relationship with leakage energy is modeled in detail. Results
show that 99% error resilience is possible for fault-tolerant designs, but at
the expense of at least 40% more energy if individual gates fail independently
with probability of 1%.
|
0710.4725
|
Fault-Trajectory Approach for Fault Diagnosis on Analog Circuits
|
cs.NE
|
This paper discusses the suitability of the fault-trajectory approach for
fault diagnosis on analog networks. Recent works have shown promising results
for an ATPG method based on this concept for diagnosing faults on analog
networks. The method relies on evolutionary techniques: a genetic algorithm
(GA) is coded to generate a set of optimum frequencies capable of disclosing
faults.
|
0710.4734
|
Computational Intelligence Characterization Method of Semiconductor
Device
|
cs.AI cs.NE
|
Characterization of semiconductor devices is used to gather as much data
about the device as possible to determine weaknesses in design or trends in the
manufacturing process. In this paper, we propose a novel multiple trip point
characterization concept to overcome the constraint of single trip point
concept in the device characterization phase. In addition, we use
computational intelligence techniques (e.g., neural networks, fuzzy logic, and
genetic algorithms) to further manipulate these sets of multiple trip point
values and tests based on semiconductor test equipment. Our experimental
results demonstrate an excellent design-parameter variation analysis in the
device characterization phase, as well as detection of a set of worst-case
tests that provoke the worst-case variation, which the traditional approach
was not capable of detecting.
|
0710.4750
|
On the Analysis of Reed Solomon Coding for Resilience to
Transient/Permanent Faults in Highly Reliable Memories
|
cs.IT math.IT
|
Single Event Upsets (SEU) as well as permanent faults can significantly
affect the correct on-line operation of digital systems, such as memories and
microprocessors; a memory can be made resilient to permanent and transient
faults by using modular redundancy and coding. In this paper, different memory
systems are compared: these systems utilize simplex and duplex arrangements
with a combination of Reed Solomon coding and scrubbing. The memory systems and
their operations are analyzed by novel Markov chains to characterize
performance for dynamic reconfiguration as well as error detection and
correction under the occurrence of permanent and transient faults. For a
specific Reed Solomon code, the duplex arrangement makes it possible to cope
efficiently with the occurrence of permanent faults, while the use of
scrubbing copes with transient faults.
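A toy version of such a Markov analysis, with illustrative per-scrub-interval fault probabilities (the rates and the three-state structure are assumptions, not the paper's model), computes the expected number of scrub intervals before a memory word becomes uncorrectable via the fundamental matrix:

```python
import numpy as np

# States: 0 = fault-free, 1 = transient fault present, 2 = uncorrectable
# (absorbing). All probabilities are per scrub interval and illustrative.
p_t = 1e-4   # transient fault occurs               (assumed)
p_p = 1e-6   # permanent/uncorrectable fault occurs (assumed)
p_s = 0.99   # scrubbing corrects a transient fault (assumed)

P = np.array([
    [1 - p_t - p_p, p_t,           p_p],
    [p_s,           1 - p_s - p_p, p_p],
    [0.0,           0.0,           1.0],
])

Q = P[:2, :2]                       # transitions among transient states
N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix
mean_time_to_failure = N[0].sum()   # expected scrub intervals from state 0
```

With these rates the expected time to absorption is dominated by 1/p_p, the order of 10^6 scrub intervals, mirroring how the paper's chains quantify the benefit of scrubbing against transient faults.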
|
0710.4780
|
Querying XML Documents in Logic Programming
|
cs.PL cs.DB
|
Extensible Markup Language (XML) is a simple, very flexible text format
derived from SGML. Originally designed to meet the challenges of large-scale
electronic publishing, XML is also playing an increasingly important role in
the exchange of a wide variety of data on the Web and elsewhere. The XPath
language is the result of an effort to provide a means of addressing parts of
an XML document. In support of this primary purpose, it has also become a
query language against XML documents. In this paper we present a proposal for
the implementation of the
XPath language in logic programming. With this aim we will describe the
representation of XML documents by means of a logic program. Rules and facts
can be used for representing the document schema and the XML document itself.
In particular, we will present how to index XML documents in logic programs:
rules are supposed to be stored in main memory, whereas facts are stored in
secondary memory using two kinds of indexes: one for each XML tag, and another
for each group of terminal items. In addition, we will study how to query, by
means of the XPath language, a logic program representing an XML document.
This involves the specialization of the logic program with regard to the XPath
expression. Finally, we will also explain how to combine the indexing and the
top-down evaluation of the logic program. To appear in Theory and Practice of
Logic Programming (TPLP).
|
0710.4847
|
Bayesian sequential change diagnosis
|
math.PR cs.IT math.IT math.ST stat.TH
|
Sequential change diagnosis is the joint problem of detection and
identification of a sudden and unobservable change in the distribution of a
random sequence. In this problem, the common probability law of a sequence of
i.i.d. random variables suddenly changes at some disorder time to one of
finitely many alternatives. This disorder time marks the start of a new regime,
whose fingerprint is the new law of observations. Both the disorder time and
the identity of the new regime are unknown and unobservable. The objective is
to detect the regime-change as soon as possible, and, at the same time, to
determine its identity as accurately as possible. Prompt and correct diagnosis
is crucial for quick execution of the most appropriate measures in response to
the new regime, as in fault detection and isolation in industrial processes,
and target detection and identification in national defense. The problem is
formulated in a Bayesian framework. An optimal sequential decision strategy is
found, and an accurate numerical scheme is described for its implementation.
Geometrical properties of the optimal strategy are illustrated via numerical
examples. The traditional problems of Bayesian change-detection and Bayesian
sequential multi-hypothesis testing are solved as special cases. In addition, a
solution is obtained for the problem of detection and identification of
component failure(s) in a system with suspended animation.
|
0710.4903
|
Anonymous Networking amidst Eavesdroppers
|
cs.IT math.IT
|
The problem of security against timing-based traffic analysis in wireless
networks is considered in this work. An analytical measure of anonymity in
eavesdropped networks is proposed using the information theoretic concept of
equivocation. For a physical layer with orthogonal transmitter directed
signaling, scheduling and relaying techniques are designed to maximize
achievable network performance for any given level of anonymity. The network
performance is measured by the achievable relay rates from the sources to
destinations under latency and medium access constraints. In particular,
analytical results are presented for two scenarios:
For a two-hop network with maximum anonymity, achievable rate regions for a
general m x 1 relay are characterized when nodes generate independent Poisson
transmission schedules. The rate regions are presented for both strict and
average delay constraints on traffic flow through the relay.
For a multihop network with an arbitrary anonymity requirement, the problem
of maximizing the sum-rate of flows (network throughput) is considered. A
selective independent scheduling strategy is designed for this purpose, and
using the analytical results for the two-hop network, the achievable throughput
is characterized as a function of the anonymity level. The throughput-anonymity
relation for the proposed strategy is shown to be equivalent to an information
theoretic rate-distortion function.
|
0710.4905
|
Distributed Source Coding in the Presence of Byzantine Sensors
|
cs.IT math.IT
|
The distributed source coding problem is considered when the sensors, or
encoders, are under Byzantine attack; that is, an unknown group of sensors have
been reprogrammed by a malicious intruder to undermine the reconstruction at
the fusion center. Three different forms of the problem are considered. The
first is a variable-rate setup, in which the decoder adaptively chooses the
rates at which the sensors transmit. An explicit characterization of the
variable-rate achievable sum rates is given for any number of sensors and any
groups of traitors. The converse is proved constructively by letting the
traitors simulate a fake distribution and report the generated values as the
true ones. This fake distribution is chosen so that the decoder cannot
determine which sensors are traitors while maximizing the required rate to
decode every value. Achievability is proved using a scheme in which the decoder
receives small packets of information from a sensor until its message can be
decoded, before moving on to the next sensor. The sensors use randomization to
choose from a set of coding functions, which makes it probabilistically
impossible for the traitors to cause the decoder to make an error. Two forms of
the fixed-rate problem are considered, one with deterministic coding and one
with randomized coding. The achievable rate regions are given for both these
problems, and it is shown that lower rates can be achieved with randomized
coding.
|
0710.4975
|
Node discovery problem for a social network
|
cs.AI
|
Methods for solving a node discovery problem in a social network are
presented. Covert nodes are nodes that are not directly observable. They
transmit influence and affect the resulting collaborative activities among the
persons in a social network, but do not appear in the surveillance logs that
record the participants of those activities. Discovering the covert nodes
amounts to identifying the suspicious logs in which they would appear if they
became overt. The performance of the methods is
demonstrated with a test dataset generated from computationally synthesized
networks and a real organization.
|
0710.4982
|
First to Market is not Everything: an Analysis of Preferential
Attachment with Fitness
|
math.PR cs.SI
|
In this paper, we provide a rigorous analysis of preferential attachment with
fitness, a random graph model introduced by Bianconi and Barabasi. Depending on
the shape of the fitness distribution, we observe three distinct phases: a
first-mover-advantage phase, a fit-get-richer phase and an innovation-pays-off
phase.
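A minimal simulation of the attachment rule (each new node links to an existing node with probability proportional to fitness times degree, the Bianconi-Barabasi rule) can make the model concrete; the fitness values used in any run are an illustrative choice, not taken from the paper.

```python
import random

def preferential_attachment_with_fitness(n, fitness, seed=0):
    """Grow a graph node by node; each new node attaches to one existing
    node chosen with probability proportional to fitness * degree."""
    rng = random.Random(seed)
    degree = [1, 1]          # start from a single edge between nodes 0 and 1
    edges = [(0, 1)]
    for new in range(2, n):
        weights = [fitness[i] * degree[i] for i in range(new)]
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for target, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        edges.append((new, target))
        degree[target] += 1
        degree.append(1)     # the new node arrives with degree 1
    return degree, edges
```

A tree on n nodes results (n - 1 edges), so total degree is 2(n - 1); inspecting which nodes accumulate degree under different fitness distributions is what distinguishes the three phases.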
|
0710.4987
|
Universal source coding over generalized complementary delivery networks
|
cs.IT math.IT
|
This paper deals with a universal coding problem for a certain kind of
multiterminal source coding network called a generalized complementary delivery
network. In this network, messages from multiple correlated sources are jointly
encoded, and each decoder has access to some of the messages to enable it to
reproduce the other messages. Both fixed-to-fixed length and fixed-to-variable
length lossless coding schemes are considered. Explicit constructions of
universal codes and bounds on the error probabilities are derived using the
method of types and graph-theoretical analysis.
|
0710.5002
|
The entropy of keys derived from laser speckle
|
cs.CR cs.CV
|
Laser speckle has been proposed in a number of papers as a high-entropy
source of unpredictable bits for use in security applications. Bit strings
derived from speckle can be used for a variety of security purposes such as
identification, authentication, anti-counterfeiting, secure key storage, random
number generation and tamper protection. The choice of laser speckle as a
source of random keys is quite natural, given the chaotic properties of
speckle. However, this same chaotic behaviour also causes reproducibility
problems. Cryptographic protocols require either zero noise or very low noise
in their inputs; hence the issue of error rates is critical to applications of
laser speckle in cryptography. Most of the literature uses an error reduction
method based on Gabor filtering. Though the method is successful, it has not
been thoroughly analysed.
In this paper we present a statistical analysis of Gabor-filtered speckle
patterns. We introduce a model in which perturbations are described as random
phase changes in the source plane. Using this model we compute the second and
fourth order statistics of Gabor coefficients. We determine the mutual
information between perturbed and unperturbed Gabor coefficients and the bit
error rate in the derived bit string. The mutual information provides an
absolute upper bound on the number of secure bits that can be reproducibly
extracted from noisy measurements.
|
0710.5116
|
Combining haplotypers
|
cs.LG cs.CE q-bio.QM
|
Statistically resolving the underlying haplotype pair for a genotype
measurement is an important intermediate step in gene mapping studies, and has
received much attention recently. Consequently, a variety of methods for this
problem have been developed. Different methods employ different statistical
models, and thus implicitly encode different assumptions about the nature of
the underlying haplotype structure. Depending on the population sample in
question, their relative performance can vary greatly, and it is unclear which
method to choose for a particular sample. Instead of choosing a single method,
we explore combining predictions returned by different methods in a principled
way, and thereby circumvent the problem of method selection.
We propose several techniques for combining haplotype reconstructions and
analyze their computational properties. In an experimental study on real-world
haplotype data we show that such techniques can provide more accurate and
robust reconstructions, and are useful for outlier detection. Typically, the
combined prediction is at least as accurate as or even more accurate than the
best individual method, effectively circumventing the method selection problem.
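As an illustration only: one natural baseline for combining haplotype reconstructions is a per-position majority vote over the methods' outputs. The paper's actual combination techniques may be more sophisticated; this sketch just shows the shape of the problem.

```python
from collections import Counter

def combine_predictions(predictions):
    """Per-position majority vote over haplotype reconstructions returned
    by different methods (ties broken by first-seen value)."""
    length = len(predictions[0])
    assert all(len(p) == length for p in predictions)
    combined = []
    for pos in range(length):
        votes = Counter(p[pos] for p in predictions)
        combined.append(votes.most_common(1)[0][0])
    return "".join(combined)
```

An outlier method that disagrees at many positions is simply outvoted, which hints at why combination also helps with outlier detection.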
|
0710.5144
|
Forecasting for stationary binary time series
|
math.PR cs.IT math.IT
|
The forecasting problem for a stationary and ergodic binary time series
$\{X_n\}_{n=0}^{\infty}$ is to estimate the probability that $X_{n+1}=1$ based
on the observations $X_i$, $0\le i\le n$ without prior knowledge of the
distribution of the process $\{X_n\}$. It is known that this is not possible if
one estimates at all values of $n$. We present a simple procedure which will
attempt to make such a prediction infinitely often at carefully selected
stopping times chosen by the algorithm. We show that the proposed procedure is
consistent under certain conditions, and we estimate the growth rate of the
stopping times.
|
0710.5161
|
Decomposable Subspaces, Linear Sections of Grassmann Varieties, and
Higher Weights of Grassmann Codes
|
math.AG cs.IT math.IT
|
Given a homogeneous component of an exterior algebra, we characterize those
subspaces in which every nonzero element is decomposable. In geometric terms,
this corresponds to characterizing the projective linear subvarieties of the
Grassmann variety with its Plucker embedding. When the base field is finite, we
consider the more general question of determining the maximum number of points
on sections of Grassmannians by linear subvarieties of a fixed (co)dimension.
This corresponds to a known open problem of determining the complete weight
hierarchy of linear error correcting codes associated to Grassmann varieties.
We recover most of the known results as well as prove some new results. In the
process we obtain, and utilize, a simple generalization of the Griesmer-Wei
bound for arbitrary linear codes.
|
0710.5190
|
Identifying statistical dependence in genomic sequences via mutual
information estimates
|
q-bio.GN cs.IT math.IT
|
Questions of understanding and quantifying the representation and amount of
information in organisms have become a central part of biological research, as
they potentially hold the key to fundamental advances. In this paper, we
demonstrate the use of information-theoretic tools for the task of identifying
segments of biomolecules (DNA or RNA) that are statistically correlated. We
develop a precise and reliable methodology, based on the notion of mutual
information, for finding and extracting statistical as well as structural
dependencies. A simple threshold function is defined, and its use in
quantifying the level of significance of dependencies between biological
segments is explored. These tools are used in two specific applications. First,
for the identification of correlations between different parts of the maize
zmSRp32 gene. There, we find significant dependencies between the 5'
untranslated region in zmSRp32 and its alternatively spliced exons. This
observation may indicate the presence of as-yet unknown alternative splicing
mechanisms or structural scaffolds. Second, using data from the FBI's Combined
DNA Index System (CODIS), we demonstrate that our approach is particularly well
suited for the problem of discovering short tandem repeats, an application of
importance in genetic profiling.
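A minimal plug-in estimator of mutual information between paired symbol sequences illustrates the underlying quantity; the paper's threshold function and the windowing over biomolecule segments are not reproduced here.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from two paired symbol
    sequences of equal length."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts for X
    py = Counter(ys)             # marginal counts for Y
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts over n samples
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi
```

Two identical sequences give mutual information equal to the entropy of either one; a constant sequence paired with anything gives zero.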
|
0710.5194
|
Rate-Constrained Wireless Networks with Fading Channels:
Interference-Limited and Noise-Limited Regimes
|
cs.IT math.IT
|
A network of $n$ wireless communication links is considered in a Rayleigh
fading environment. It is assumed that each link can be active and transmit
with a constant power $P$ or remain silent. The objective is to maximize the
number of active links such that each active link can transmit with a constant
rate $\lambda$. An upper bound is derived that shows the number of active links
scales at most like $\frac{1}{\lambda} \log n$. To obtain a lower bound, a
decentralized link activation strategy is described and analyzed. It is shown
that for small values of $\lambda$, the number of supported links by this
strategy meets the upper bound; however, as $\lambda$ grows, this number
becomes far below the upper bound. To shrink the gap between the upper bound
and the achievability result, a modified link activation strategy is proposed
and analyzed based on some results from random graph theory. It is shown that
this modified strategy performs very close to the optimum. Specifically, this
strategy is \emph{asymptotically almost surely} optimum when $\lambda$
approaches $\infty$ or 0. It turns out that the optimality results are obtained in
an interference-limited regime. It is demonstrated that, by proper selection of
the algorithm parameters, the proposed scheme also allows the network to
operate in a noise-limited regime in which the transmission rates can be
adjusted by the transmission powers. The price for this flexibility is a
decrease in the throughput scaling law by a multiplicative factor of $\log \log
n$.
|
0710.5230
|
Generalized reliability-based syndrome decoding for LDPC codes
|
cs.IT math.IT
|
Aiming at bridging the gap between the maximum likelihood decoding (MLD) and
the suboptimal iterative decodings for short or medium length LDPC codes, we
present a generalized ordered statistic decoding (OSD) in the form of syndrome
decoding, to cascade with the belief propagation (BP) or enhanced min-sum
decoding. The OSD is invoked only when the decoding failures are obtained for
the preceded iterative decoding method. With respect to the existing OSD which
is based on the accumulated log-likelihood ratio (LLR) metric, we extend the
accumulative metric to the situation where the BP decoding is in the
probability domain. Moreover, after generalizing the accumulative metric to the
context of the normalized or offset min-sum decoding, the OSD shows appealing
tradeoff between performance and complexity. In the OSD implementation, when
deciding the true error pattern among many candidates, a proposed alternative
proves effective in reducing the number of real additions without performance
loss. Simulation results demonstrate that the cascade connection of
enhanced min-sum and OSD decodings outperforms the BP alone significantly, in
terms of either performance or complexity.
|
0710.5241
|
Connection Between System Parameters and Localization Probability in
Network of Randomly Distributed Nodes
|
cs.NI cs.DM cs.IT math.IT
|
This article deals with localization probability in a network of randomly
distributed communication nodes contained in a bounded domain. A fraction of
the nodes denoted as L-nodes are assumed to have localization information while
the rest of the nodes denoted as NL nodes do not. The basic model assumes each
node has a certain radio coverage within which it can make relative distance
measurements. We model both the case where radio coverage is fixed and the
case where it is determined by signal strength measurements in a log-normal
shadowing environment. We apply the probabilistic method to determine the
probability of NL-node localization as a function of the coverage area to
domain area ratio and the density of L-nodes. We establish analytical
expressions for this probability and the transition thresholds with respect to
key parameters whereby marked change in the probability behavior is observed.
The theoretical results presented in the article are supported by simulations.
|
0710.5333
|
Neutrosophic Relational Data Model
|
cs.DB
|
In this paper, we present a generalization of the relational data model based
on interval neutrosophic sets. Our data model is capable of manipulating
incomplete as well as inconsistent information; fuzzy relations and
intuitionistic fuzzy relations can only handle incomplete information.
Associated with each relation are two membership functions: a
truth-membership function T, which keeps track of the extent to which we
believe a tuple is in the relation, and a falsity-membership function F, which
keeps track of the extent to which we believe it is not. A neutrosophic
relation is inconsistent if there exists a tuple a such that T(a) + F(a) > 1.
To handle inconsistent situations, we propose an operator called "split" that
transforms inconsistent neutrosophic relations into pseudo-consistent ones, on
which the set-theoretic and relation-theoretic operations are then performed;
another operator called "combine" transforms the result back into a
neutrosophic relation. For this data model, we define algebraic operators that
generalize the usual operators such as intersection, union, selection, and
join on fuzzy relations. Our data model can underlie any database and
knowledge-base management system that deals with incomplete and inconsistent
information.
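The inconsistency condition T(a) + F(a) > 1 is easy to check mechanically. In this sketch a neutrosophic relation is encoded, purely for illustration, as a dict from tuples to (T, F) pairs; the paper's actual representation and its split/combine operators are not reproduced.

```python
def is_inconsistent(relation):
    """A neutrosophic relation stores, for each tuple, a truth-membership
    T and a falsity-membership F; the relation is inconsistent if any
    tuple a satisfies T(a) + F(a) > 1."""
    return any(t + f > 1 for (t, f) in relation.values())
```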
|
0710.5340
|
Bounds on the Network Coding Capacity for Wireless Random Networks
|
cs.IT cs.NI math.IT
|
Recently, it has been shown that the max flow capacity can be achieved in a
multicast network using network coding. In this paper, we propose and analyze a
more realistic model for wireless random networks. We prove that the capacity
of network coding for this model is concentrated around the expected value of
its minimum cut. Furthermore, we establish upper and lower bounds for wireless
nodes using the Chernoff bound. Our experiments show that our theoretical
predictions are well matched by simulation results.
|
0710.5376
|
Broadcasting Correlated Gaussians
|
cs.IT math.IT
|
We consider the transmission of a memoryless bivariate Gaussian source over
an average-power-constrained one-to-two Gaussian broadcast channel. The
transmitter observes the source and describes it to the two receivers by means
of an average-power-constrained signal. Each receiver observes the transmitted
signal corrupted by a different additive white Gaussian noise and wishes to
estimate the source component intended for it. That is, Receiver~1 wishes to
estimate the first source component and Receiver~2 wishes to estimate the
second source component. Our interest is in the pairs of expected squared-error
distortions that are simultaneously achievable at the two receivers.
We prove that an uncoded transmission scheme that sends a linear combination
of the source components achieves the optimal power-versus-distortion trade-off
whenever the signal-to-noise ratio is below a certain threshold. The threshold
is a function of the source correlation and the distortion at the receiver with
the weaker noise.
|
0710.5382
|
Some Reflections on the Task of Content Determination in the Context of
Multi-Document Summarization of Evolving Events
|
cs.CL
|
Despite its importance, the task of summarizing evolving events has received
little attention from researchers in the field of multi-document
summarization. In
a previous paper (Afantenos et al. 2007) we have presented a methodology for
the automatic summarization of documents, emitted by multiple sources, which
describe the evolution of an event. At the heart of this methodology lies the
identification of similarities and differences between the various documents,
in two axes: the synchronic and the diachronic. This is achieved by the
introduction of the notion of Synchronic and Diachronic Relations. Those
relations connect the messages found in the documents, thus resulting in a
graph which we call a grid. Although the creation of the grid completes the
Document Planning phase of a typical NLG architecture, the number of messages
contained in a grid can be very large, exceeding the required compression
rate. In this paper we provide some initial thoughts on a
probabilistic model which can be applied at the Content Determination stage,
and which tries to alleviate this problem.
|
0710.5386
|
Acquisition of Information is Achieved by the Measurement Process in
Classical and Quantum Physics
|
quant-ph cs.IT hep-th math.IT
|
No consensus seems to exist as to what constitutes a measurement, which is
still considered somewhat mysterious in many respects in quantum mechanics. At
successive stages, the mathematical theory of measure, metrology, and
measurement theory have tried to systematize this field, but significant
questions remain open about the nature of measurement, the characterization
of the observer, the reliability of measurement processes, and so on. The
present paper approaches these questions through information science. We start
from the common and intuitive idea that the measurement process fundamentally
acquires information. We then develop this idea through four formal
definitions and infer from them some corollaries regarding the measurement
process. Relativity emerges as the basic property of measurement in the
present logical framework, and this rather surprising result collides with the
intuition of physicists who take measurement as a myth. In closing, the paper
shows how measurement relativity is wholly consistent with certain effects
calculated in quantum mechanics and in Einstein's theory.
|
0710.5501
|
Discriminated Belief Propagation
|
cs.IT cs.AI math.IT
|
Near optimal decoding of good error control codes is generally a difficult
task. However, for a certain type of (sufficiently) good codes an efficient
decoding algorithm with near optimal performance exists. These codes are
defined via a combination of constituent codes with low complexity trellis
representations. Their decoding algorithm is an instance of (loopy) belief
propagation and is based on an iterative transfer of constituent beliefs. The
beliefs are thereby given by the symbol probabilities computed in the
constituent trellises. Even though weak constituent codes are employed,
close-to-optimal performance is obtained, i.e., the encoder/decoder pair
(almost)
achieves the information theoretic capacity. However, (loopy) belief
propagation only performs well for a rather specific set of codes, which limits
its applicability.
In this paper a generalisation of iterative decoding is presented. It is
proposed to transfer more values than just the constituent beliefs. This is
achieved by the transfer of beliefs obtained by independently investigating
parts of the code space. This leads to the concept of discriminators, which
are used to improve the decoder resolution within certain areas and define
discriminated symbol beliefs. It is shown that these beliefs approximate the
overall symbol probabilities. This leads to an iteration rule that (below
channel capacity) typically only admits the solution of the overall decoding
problem. Via a Gauss approximation a low complexity version of this algorithm
is derived. Moreover, the approach may then be applied to a wide range of
channel maps without significant complexity increase.
|
0710.5512
|
Risk Minimization and Optimal Derivative Design in a Principal Agent
Game
|
cs.CE
|
We consider the problem of Adverse Selection and optimal derivative design
within a Principal-Agent framework. The principal's income is exposed to
non-hedgeable risk factors arising, for instance, from weather or climate
phenomena. She evaluates her risk using a coherent and law invariant risk
measure and tries to minimize her exposure by selling derivative securities on her
income to individual agents. The agents have mean-variance preferences with
heterogeneous risk aversion coefficients. An agent's degree of risk aversion is
private information and hidden to the principal who only knows the overall
distribution. We show that the principal's risk minimization problem has a
solution and illustrate the effects of risk transfer on her income by means of
two specific examples. Our model extends earlier work of Barrieu and El Karoui
(2005) and Carlier, Ekeland and Touzi (2007).
|
0710.5547
|
Code Similarity on High Level Programs
|
cs.CV cs.DS
|
This paper presents a new approach to code similarity for high-level
programs. Our technique is based on Fast Dynamic Time Warping, which builds a
warp path, i.e., a relation between points subject to local restrictions. The
source code is represented as time series using the operators of the
programming language, which makes the comparison possible and enables the
detection of subsequences that represent similar code instructions. In
contrast with other code similarity algorithms, we do not perform feature
extraction. The experiments show that two source codes are similar when their
respective time series are similar.
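The core dynamic programming step can be illustrated with the classic (non-accelerated) DTW recurrence; the FastDTW variant used in the paper adds multi-resolution speedups and window restrictions not shown here.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two numeric time
    series, with |a_i - b_j| as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    # d[i][j] = min warp-path cost aligning a[:i] with b[:j]
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

Because warping absorbs repeated values, a series and a locally stretched copy of it are at distance zero, which is the property that lets similar-but-not-identical code fragments match.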
|
0710.5640
|
LDPC-Based Iterative Algorithm for Compression of Correlated Sources at
Rates Approaching the Slepian-Wolf Bound
|
cs.IT math.IT
|
This article proposes a novel iterative algorithm based on Low Density Parity
Check (LDPC) codes for compression of correlated sources at rates approaching
the Slepian-Wolf bound. The setup considered in the article looks at the
problem of compressing one source at a rate determined based on the knowledge
of the mean source correlation at the encoder, and employing the other
correlated source as side information at the decoder which decompresses the
first source based on the estimates of the actual correlation. We demonstrate
that depending on the extent of the actual source correlation estimated through
an iterative paradigm, significant compression can be obtained relative to
the case where the decoder does not use the implicit knowledge of the
existence of correlation.
|
0710.5666
|
The Entropy Photon-Number Inequality and its Consequences
|
quant-ph cs.IT math.IT
|
Determining the ultimate classical information carrying capacity of
electromagnetic waves requires quantum-mechanical analysis to properly account
for the bosonic nature of these waves. Recent work has established capacity
theorems for bosonic single-user, broadcast, and wiretap channels, under the
presumption of two minimum output entropy conjectures. Despite considerable
accumulated evidence that supports the validity of these conjectures, they have
yet to be proven. Here we show that the preceding minimum output entropy
conjectures are simple consequences of an Entropy Photon-Number Inequality,
which is a conjectured quantum-mechanical analog of the Entropy Power
Inequality (EPI) from classical information theory.
|
0710.5758
|
Grassmannian Beamforming for MIMO Amplify-and-Forward Relaying
|
cs.IT math.IT
|
In this paper, we derive the optimal transmitter/receiver beamforming
vectors and relay weighting matrix for the multiple-input multiple-output
amplify-and-forward relay channel. The analysis is accomplished in two steps.
In the first step, the direct link between the transmitter (Tx) and receiver
(Rx) is ignored and we show that the transmitter and the relay should map their
signals to the strongest right singular vectors of the Tx-relay and relay-Rx
channels. Based on the distributions of these vectors for independent
identically distributed (i.i.d.) Rayleigh channels, the Grassmannian codebooks
are used for quantizing and sending back the channel information to the
transmitter and the relay. The simulation results show that even a small
number of bits can considerably increase the link reliability in terms of bit
error
rate. For the second step, the direct link is considered in the problem model
and we derive the optimization problem that identifies the optimal Tx
beamforming vector. For i.i.d. Rayleigh channels, we show that the solution
to this problem is uniformly distributed on the unit sphere and we justify the
appropriateness of the Grassmannian codebook (for determining the optimal
beamforming vector), both analytically and by simulation. Finally, a modified
quantizing scheme is presented which introduces a negligible degradation in the
system performance but significantly reduces the required number of feedback
bits.
|
0710.5893
|
Codes from Zero-divisors and Units in Group Rings
|
cs.IT math.IT
|
We describe and present a new construction method for codes using encodings
from group rings. They consist primarily of two types: zero-divisor and
unit-derived codes. Previous codes from group rings focused on ideals; for
example cyclic codes are ideals in the group ring over a cyclic group. The
fresh focus is on the encodings themselves, which only under very limited
conditions result in ideals. We use the result that a group ring is isomorphic
to a certain well-defined ring of matrices, and thus every group ring element
has an associated matrix. This allows matrix algebra to be used as needed in
the study and production of codes, enabling the creation of standard generator
and check matrices. Group rings are a fruitful source of units and
zero-divisors from which new codes result. Many code properties, such as being
LDPC or self-dual, may be expressed as properties within the group ring thus
enabling the construction of codes with these properties. The methods are
general enabling the construction of codes with many types of group rings.
There is no restriction on the ring and thus codes over the integers, over
matrix rings or even over group rings themselves are possible and fruitful.
|
0711.0189
|
A Tutorial on Spectral Clustering
|
cs.DS cs.LG
|
In recent years, spectral clustering has become one of the most popular
modern clustering algorithms. It is simple to implement, can be solved
efficiently by standard linear algebra software, and very often outperforms
traditional clustering algorithms such as the k-means algorithm. At first
glance spectral clustering appears slightly mysterious, and it is not obvious
why it works at all or what it really does. The goal of this tutorial
is to give some intuition on those questions. We describe different graph
Laplacians and their basic properties, present the most common spectral
clustering algorithms, and derive those algorithms from scratch by several
different approaches. Advantages and disadvantages of the different spectral
clustering algorithms are discussed.
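As a concrete illustration of the ideas the tutorial covers, here is a minimal two-way spectral partition using the unnormalized graph Laplacian (NumPy only; the similarity matrix W is assumed given, and the sign-of-Fiedler-vector split is the simplest of the algorithms discussed).

```python
import numpy as np

def spectral_bipartition(W):
    """Split a weighted graph into two clusters by the sign pattern of the
    Fiedler vector (eigenvector of the second-smallest eigenvalue) of the
    unnormalized Laplacian L = D - W."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]
    return (fiedler > 0).astype(int)       # cluster label per node
```

On a graph made of two dense blocks joined by one weak edge, the Fiedler vector is nearly constant on each block with opposite signs, so the sign split recovers the blocks.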
|
0711.0237
|
Zero-rate feedback can achieve the empirical capacity
|
cs.IT math.IT
|
The utility of limited feedback for coding over an individual sequence of
DMCs is investigated. This study complements recent results showing how limited
or noisy feedback can boost the reliability of communication. A strategy with
fixed input distribution $P$ is given that asymptotically achieves rates
arbitrarily close to the mutual information induced by $P$ and the
state-averaged channel. When the capacity achieving input distribution is the
same over all channel states, this achieves rates at least as large as the
capacity of the state averaged channel, sometimes called the empirical
capacity.
|
0711.0261
|
Gradient Descent Bit Flipping Algorithms for Decoding LDPC Codes
|
cs.IT math.IT
|
A novel class of bit-flipping (BF) algorithms for decoding low-density
parity-check (LDPC) codes is presented. The proposed algorithms, which are
called gradient descent bit flipping (GDBF) algorithms, can be regarded as
simplified gradient descent algorithms. Based on gradient descent formulation,
the proposed algorithms are naturally derived from a simple non-linear
objective function.
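The flavor of such a decoder can be sketched. The bipolar objective f(x) = sum_k x_k y_k + sum_m prod_{j in check m} x_j and the min-delta flipping rule below are a plausible reconstruction of a GDBF-style decoder, not necessarily the paper's exact formulation.

```python
import math

def gdbf_decode(y, checks, max_iters=100):
    """Gradient-descent-style bit flipping over
    f(x) = sum_k x_k*y_k + sum_m prod_{j in check m} x_j, x in {-1,+1}^n.
    y: bipolar received values; checks: list of index tuples (parity checks).
    While some check fails, flip the bit with the smallest inverse function
    Delta_k = x_k*y_k + sum of the check values involving bit k."""
    x = [1 if v >= 0 else -1 for v in y]   # hard-decision start
    for _ in range(max_iters):
        check_vals = [math.prod(x[j] for j in c) for c in checks]
        if all(v == 1 for v in check_vals):
            break                          # all parity checks satisfied
        deltas = []
        for k in range(len(x)):
            d = x[k] * y[k]
            for c, v in zip(checks, check_vals):
                if k in c:
                    d += v
            deltas.append(d)
        x[min(range(len(x)), key=deltas.__getitem__)] *= -1
    return x
```

For a toy two-check code with one weakly received bit, the decoder flips exactly that bit and stops.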
|
0711.0277
|
Bandwidth Partitioning in Decentralized Wireless Networks
|
cs.IT math.IT
|
This paper addresses the following question, which is of interest in the
design of a multiuser decentralized network. Given a total system bandwidth of
W Hz and a fixed data rate constraint of R bps for each transmission, how many
frequency slots N of size W/N should the band be partitioned into in order to
maximize the number of simultaneous links in the network? Dividing the
available spectrum results in two competing effects. On the positive side, a
larger N allows for more parallel, noninterfering communications to take place
in the same area. On the negative side, a larger N increases the SINR
requirement for each link because the same information rate must be achieved
over less bandwidth. Exploring this tradeoff and determining the optimum value
of N in terms of the system parameters is the focus of the paper. Using
stochastic geometry, the optimal SINR threshold - which directly corresponds to
the optimal spectral efficiency - is derived for both the low SNR
(power-limited) and high SNR (interference-limited) regimes. This leads to the
optimum choice of the number of frequency bands N in terms of the path loss
exponent, power and noise spectral density, desired rate, and total bandwidth.
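The SINR-versus-N tension can be made explicit with the standard AWGN rate formula R = (W/N) log2(1 + SINR), which yields the per-link SINR threshold as a function of the number of slots; this is background arithmetic, not the paper's stochastic-geometry derivation.

```python
def required_sinr(N, R, W):
    """SINR threshold per link when a band of W Hz is split into N slots
    and each link must carry R bps:
    R = (W / N) * log2(1 + SINR)  =>  SINR = 2**(N*R/W) - 1."""
    return 2 ** (N * R / W) - 1
```

The threshold grows exponentially in N, which is the negative effect that the optimal choice of N must balance against having more non-interfering slots.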
|
0711.0350
|
Intermittent estimation of stationary time series
|
math.PR cs.IT math.IT
|
Let $\{X_n\}_{n=0}^{\infty}$ be a stationary real-valued time series with
unknown distribution. Our goal is to estimate the conditional expectation of
$X_{n+1}$ based on the observations $X_i$, $0\le i\le n$ in a strongly
consistent way. Bailey and Ryabko proved that this is not possible even for
ergodic binary time series if one estimates at all values of $n$. We propose a
very simple algorithm which will make prediction infinitely often at carefully
selected stopping times chosen by our rule. We show that under certain
conditions our procedure is strongly (pointwise) consistent, and $L_2$
consistent without any condition. An upper bound on the growth of the stopping
times is also presented in this paper.
|
0711.0351
|
Noise threshold for universality of 2-input gates
|
cs.IT cs.CC math.IT
|
Evans and Pippenger showed in 1998 that noisy gates with 2 inputs are
universal for arbitrary computation (i.e. can compute any function with bounded
error), if all gates fail independently with probability epsilon and
epsilon<theta, where theta is roughly 8.856%.
We show that formulas built from gates with 2 inputs, in which each gate
fails with probability at least theta, cannot be universal. Hence, there is a
threshold on the tolerable noise for formulas with 2-input gates and it is
theta. We conjecture that the same threshold also holds for circuits.
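The quoted value is consistent with the closed form theta = (3 - sqrt(7))/4 that appears in Evans and Pippenger's analysis of 2-input gates.

```python
import math

# Closed form consistent with the quoted threshold of roughly 8.856%.
theta = (3 - math.sqrt(7)) / 4
print(f"theta = {theta:.5%}")   # ~8.856%
```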
|
0711.0366
|
Shannon Theoretic Limits on Noisy Compressive Sampling
|
cs.IT math.IT
|
In this paper, we study the number of measurements required to recover a
sparse signal in ${\mathbb C}^M$ with $L$ non-zero coefficients from compressed
samples in the presence of noise. For a number of different recovery criteria,
we prove that $O(L)$ (an asymptotically linear multiple of $L$) measurements
are necessary and sufficient if $L$ grows linearly as a function of $M$. This
improves on the existing literature that is mostly focused on variants of a
specific recovery algorithm based on convex programming, for which
$O(L\log(M-L))$ measurements are required. We also show that $O(L\log(M-L))$
measurements are required in the sublinear regime ($L = o(M)$).
|
0711.0367
|
Nonparametric inference for ergodic, stationary time series
|
math.PR cs.IT math.IT
|
The setting is a stationary, ergodic time series. The challenge is to
construct a sequence of functions, each based on only finite segments of the
past, which together provide a strongly consistent estimator for the
conditional probability of the next observation, given the infinite past.
Ornstein gave such a construction for the case that the values are from a
finite set, and recently Algoet extended the scheme to time series with
coordinates in a Polish space.
The present study offers a different solution to the challenge. The
algorithm is simple and its verification is fairly transparent. Some extensions
to regression, pattern recognition, and on-line forecasting are mentioned.
|
0711.0471
|
Prediction for discrete time series
|
math.PR cs.IT math.IT
|
Let $\{X_n\}$ be a stationary and ergodic time series taking values from a
finite or countably infinite set ${\cal X}$. Assume that the distribution of
the process is otherwise unknown. We propose a sequence of stopping times
$\lambda_n$ along which we will be able to estimate the conditional probability
$P(X_{\lambda_n+1}=x|X_0,...,X_{\lambda_n})$ from data segment
$(X_0,...,X_{\lambda_n})$ in a pointwise consistent way for a restricted class
of stationary and ergodic finite or countably infinite alphabet time series
which includes among others all stationary and ergodic finitarily Markovian
processes. If the stationary and ergodic process turns out to be finitarily
Markovian (among others, all stationary and ergodic Markov chains are included
in this class), then $\lim_{n\to\infty} n/\lambda_n > 0$ almost surely.
If the stationary and ergodic process turns out to possess finite entropy rate,
then $\lambda_n$ is upper bounded by a polynomial, eventually almost surely.
|
0711.0472
|
Order estimation of Markov chains
|
math.PR cs.IT math.IT
|
We describe estimators $\chi_n(X_0,X_1,...,X_n)$, which when applied to an
unknown stationary process taking values from a countable alphabet ${\cal X}$,
converge almost surely to $k$ in case the process is a $k$-th order Markov
chain and to infinity otherwise.
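The abstract does not specify the form of $\chi_n$; as a loose illustration of the order-estimation task only, here is a generic BIC-style order selector (our own sketch, not the paper's estimator):

```python
from collections import Counter
from math import log

def bic_order(xs, max_order=4):
    """Pick a Markov order by penalized likelihood (BIC); illustrative only."""
    alphabet = sorted(set(xs))
    n = len(xs)
    best, best_k = None, 0
    for k in range(max_order + 1):
        ctx = Counter()    # counts of length-k contexts
        trans = Counter()  # counts of (context, next symbol) pairs
        for i in range(k, n):
            c = tuple(xs[i - k:i])
            ctx[c] += 1
            trans[(c, xs[i])] += 1
        # Maximum-likelihood log-likelihood under a k-th order model
        loglik = sum(m * log(m / ctx[c]) for (c, _), m in trans.items())
        # BIC penalty: number of free parameters times (log n)/2
        penalty = 0.5 * (len(alphabet) ** k) * (len(alphabet) - 1) * log(n)
        score = loglik - penalty
        if best is None or score > best:
            best, best_k = score, k
    return best_k
```

On a simulated binary chain whose next symbol depends strongly on the current one, this typically selects order 1; on an i.i.d. sequence it selects order 0.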
|
0711.0557
|
Kerdock Codes for Limited Feedback Precoded MIMO Systems
|
cs.IT math.IT
|
A codebook based limited feedback strategy is a practical way to obtain
partial channel state information at the transmitter in a precoded
multiple-input multiple-output (MIMO) wireless system. Conventional codebook
designs use Grassmannian packing, equiangular frames, vector quantization, or
Fourier based constructions. While the capacity and error rate performance of
conventional codebook constructions have been extensively investigated,
constructing these codebooks is notoriously difficult, relying on techniques
such as nonlinear search or iterative algorithms. Further, the resulting
codebooks may not have a systematic structure to facilitate storage of the
codebook and low search complexity. In this paper, we propose a new systematic
codebook design based on Kerdock codes and mutually unbiased bases. The
proposed Kerdock codebook consists of multiple mutually unbiased unitary basis
matrices with quaternary entries and the identity matrix. We propose to derive
the beamforming and precoding codebooks from this base codebook, eliminating
the requirement to store multiple codebooks. The proposed structure requires
little memory to store and, as we show, the quaternary structure facilitates
codeword search. We derive the chordal distance for two-antenna and
four-antenna codebooks, showing that the proposed codebooks compare favorably
with prior designs. Monte Carlo simulations are used to compare achievable
rates and error rates for different codebook sizes.
|
0711.0574
|
Singular Curves in the Joint Space and Cusp Points of 3-RPR parallel
manipulators
|
cs.RO
|
This paper investigates the singular curves in the joint space of a family of
planar parallel manipulators. It focuses on special points, referred to as cusp
points, which may appear on these curves. Cusp points play an important role in
the kinematic behavior of parallel manipulators since they make possible a
nonsingular change of assembly mode. The purpose of this study is twofold.
First, it exposes a method to compute joint space singular curves of 3-RPR
planar parallel manipulators. Second, it presents an algorithm for detecting
and computing all cusp points in the joint space of these same manipulators.
|
0711.0666
|
Discriminative Phoneme Sequences Extraction for Non-Native Speaker's
Origin Classification
|
cs.CL
|
In this paper we present an automated method for the classification of the
origin of non-native speakers. The origin of non-native speakers could be
identified by a human listener based on the detection of typical pronunciations
for each nationality. Thus we suppose the existence of several phoneme
sequences that might allow the classification of the origin of non-native
speakers. Our new method is based on the extraction of discriminative sequences
of phonemes from a non-native English speech database. These sequences are used
to construct a probabilistic classifier for the speakers' origin. The existence
of discriminative phone sequences in non-native speech is a significant result
of this work. The system that we have developed achieved a correct
classification rate of 96.3% and a significant error reduction compared to the
other techniques we tested.
|
0711.0694
|
Performance Bounds for Lambda Policy Iteration and Application to the
Game of Tetris
|
cs.AI cs.RO
|
We consider the discrete-time infinite-horizon optimal control problem
formalized by Markov Decision Processes. We revisit the work of Bertsekas and
Ioffe, which introduced $\lambda$ Policy Iteration, a family of algorithms
parameterized by $\lambda$ that generalizes the standard algorithms Value
Iteration and Policy Iteration, and has some deep connections with the Temporal
Differences algorithm TD($\lambda$) described by Sutton and Barto. We deepen
the original theory developed by the authors by providing convergence rate
bounds which generalize standard bounds for Value Iteration described for
instance by Puterman. Then, the main contribution of this paper is to develop
the theory of this algorithm when it is used in an approximate form and show
that this is sound. In doing so, we extend and unify the separate analyses
developed by Munos for Approximate Value Iteration and Approximate Policy
Iteration. Finally, we revisit the use of this algorithm in the training of
a Tetris playing controller as originally done by Bertsekas and Ioffe. We
provide an original performance bound that can be applied to such an
undiscounted control problem. Our empirical results are different from those of
Bertsekas and Ioffe (which were originally qualified as "paradoxical" and
"intriguing"), and conform much more closely to what one would expect from a learning
experiment. We discuss the possible reason for such a difference.
|
0711.0705
|
Feedback Capacity of the Compound Channel
|
cs.IT math.IT
|
In this work we find the capacity of a compound finite-state channel with
time-invariant deterministic feedback. The model we consider involves the use
of fixed length block codes. Our achievability result includes a proof of the
existence of a universal decoder for the family of finite-state channels with
feedback. As a consequence of our capacity result, we show that feedback does
not increase the capacity of the compound Gilbert-Elliot channel. Additionally,
we show that for a stationary and uniformly ergodic Markovian channel, if the
compound channel capacity is zero without feedback then it is zero with
feedback. Finally, we use our result on the finite-state channel to show that
the feedback capacity of the memoryless compound channel is given by
$\inf_{\theta} \max_{Q_X} I(X;Y|\theta)$.
|
0711.0708
|
A Rank-Metric Approach to Error Control in Random Network Coding
|
cs.IT math.IT
|
The problem of error control in random linear network coding is addressed
from a matrix perspective that is closely related to the subspace perspective
of K\"otter and Kschischang. A large class of constant-dimension subspace codes
is investigated. It is shown that codes in this class can be easily constructed
from rank-metric codes, while preserving their distance properties. Moreover,
it is shown that minimum distance decoding of such subspace codes can be
reformulated as a generalized decoding problem for rank-metric codes where
partial information about the error is available. This partial information may
be in the form of erasures (knowledge of an error location but not its value)
and deviations (knowledge of an error value but not its location). Taking
erasures and deviations into account (when they occur) strictly increases the
error correction capability of a code: if $\mu$ erasures and $\delta$
deviations occur, then errors of rank $t$ can always be corrected provided that
$2t \leq d - 1 + \mu + \delta$, where $d$ is the minimum rank distance of the
code. For Gabidulin codes, an important family of maximum rank distance codes,
an efficient decoding algorithm is proposed that can properly exploit erasures
and deviations. In a network coding application where $n$ packets of length $M$
over $F_q$ are transmitted, the complexity of the decoding algorithm is given
by $O(dM)$ operations in an extension field $F_{q^n}$.
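The correction guarantee $2t \leq d - 1 + \mu + \delta$ stated above is easy to tabulate; a toy check with made-up parameters:

```python
def correctable(t, d, erasures=0, deviations=0):
    """Rank-t errors are guaranteed correctable iff 2t <= d - 1 + mu + delta,
    where d is the minimum rank distance of the code (per the abstract)."""
    return 2 * t <= d - 1 + erasures + deviations

# With minimum rank distance d = 5: plain errors up to rank 2 are covered,
# and rank 3 becomes correctable once two erasures are also flagged.
print(correctable(2, 5))              # True
print(correctable(3, 5))              # False
print(correctable(3, 5, erasures=2))  # True
```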
|
0711.0711
|
Information-Theoretic Security in Wireless Networks
|
cs.IT cs.CR math.IT
|
This paper summarizes recent contributions of the authors and their
co-workers in the area of information-theoretic security.
|
0711.0784
|
Addendum to Research MMMCV; A Man/Microbio/Megabio/Computer Vision
|
cs.CV cs.CE
|
In an October 2007 research proposal for the University of Sydney, Australia,
the author suggested that the biovie-physical phenomenon of `electrodynamic
dependant biological vision' is governed by relativistic quantum laws and
biovision. The phenomenon, on the basis of `biovielectroluminescence', satisfies
man/microbio/megabio/computer vision (MMMCV), as a robust candidate for
physical and visual sciences. The general aim of this addendum is to present a
refined text of Sections 1-3 of that proposal and highlighting the contents of
its Appendix in form of a `Mechanisms' Section. We then briefly remind in an
article aimed for December 2007, by appending two more equations into Section
3, a theoretical II-time scenario as a time model well-proposed for the
phenomenon. The time model within the core of the proposal, plays a significant
role in emphasizing the principle points on Objectives no. 1-8, Sub-hypothesis
3.1.2, mentioned in Article [arXiv:0710.0410]. It also expresses the time
concept in terms of causing quantized energy f(|E|) of time |t|, emit in regard
to shortening the probability of particle loci as predictable patterns of
particle's un-occurred motion, a solution to Heisenberg's uncertainty principle
(HUP) into a simplistic manner. We conclude that, practical frames via a time
algorithm to this model, fixates such predictable patterns of motion of scenery
bodies onto recordable observation points of a MMMCV system. It even
suppresses/predicts superposition phenomena coming from a human subject and/or
other bio-subjects for any decision making event, e.g., brainwave quantum
patterns based on vision. Maintaining the existential probability of Riemann
surfaces of II-time scenarios in the context of biovielectroluminescence, makes
motion-prediction a possibility.
|
0711.0811
|
Combined Acoustic and Pronunciation Modelling for Non-Native Speech
Recognition
|
cs.CL
|
In this paper, we present several adaptation methods for non-native speech
recognition. We have tested pronunciation modelling, MLLR and MAP non-native
pronunciation adaptation and HMM models retraining on the HIWIRE foreign
accented English speech database. The ``phonetic confusion'' scheme we have
developed consists of associating with each spoken phone several sequences of
confused phones. In our experiments, we have used different combinations of
acoustic models representing the canonical and the foreign pronunciations:
spoken and native models, models adapted to the non-native accent with MAP and
MLLR. The joint use of pronunciation modelling and acoustic adaptation led to
further improvements in recognition accuracy. The best combination of the above
mentioned techniques resulted in a relative word error reduction ranging from
46% to 71%.
|
0711.1038
|
Am\'elioration des Performances des Syst\`emes Automatiques de
Reconnaissance de la Parole pour la Parole Non Native
|
cs.CL
|
In this article, we present an approach for non-native automatic speech
recognition (ASR). We propose two methods to adapt existing ASR systems to
non-native accents. The first method is based on the modification of acoustic
models through the integration of acoustic models from the mother tongue. The
phonemes of the target language are pronounced in a manner similar to the
native language of the speakers. We propose to combine the models of confused
phonemes so that the ASR system can recognize both concurrent
pronunciations. The second method we propose is a refinement of
pronunciation error detection through the introduction of graphemic
constraints. Indeed, non-native speakers may rely on the spelling of words
when uttering them. Thus, pronunciation errors might depend on the characters
composing the words. The average relative error rate reduction that we
observed is 22.5% for the sentence error rate and 34.5% for the word
error rate.
|
0711.1056
|
Bounds on the Number of Iterations for Turbo-Like Ensembles over the
Binary Erasure Channel
|
cs.IT math.IT
|
This paper provides simple lower bounds on the number of iterations which is
required for successful message-passing decoding of some important families of
graph-based code ensembles (including low-density parity-check codes and
variations of repeat-accumulate codes). The transmission of the code ensembles
is assumed to take place over a binary erasure channel, and the bounds refer to
the asymptotic case where we let the block length tend to infinity. The
simplicity of the bounds derived in this paper stems from the fact that they
are easily evaluated and are expressed in terms of some basic parameters of the
ensemble which include the fraction of degree-2 variable nodes, the target bit
erasure probability and the gap between the channel capacity and the design
rate of the ensemble. This paper demonstrates that the number of iterations
which is required for successful message-passing decoding scales at least like
the inverse of the gap (in rate) to capacity, provided that the fraction of
degree-2 variable nodes of these turbo-like ensembles does not vanish (hence,
the number of iterations becomes unbounded as the gap to capacity vanishes).
|
0711.1161
|
Joint Source-Channel Codes for MIMO Block Fading Channels
|
cs.IT math.IT
|
We consider transmission of a continuous amplitude source over an L-block
Rayleigh fading $M_t \times M_r$ MIMO channel when the channel state
information is only available at the receiver. Since the channel is not
ergodic, Shannon's source-channel separation theorem becomes obsolete and the
optimal performance requires a joint source-channel approach. Our goal is to
minimize the expected end-to-end distortion, particularly in the high SNR
regime. The figure of merit is the distortion exponent, defined as the
exponential decay rate of the expected distortion with increasing SNR. We
provide an upper bound and lower bounds for the distortion exponent with
respect to the bandwidth ratio between the channel and source bandwidths. For the
lower bounds, we analyze three different strategies based on layered source
coding concatenated with progressive, superposition or hybrid digital/analog
transmission. In each case, by adjusting the system parameters we optimize the
distortion exponent as a function of the bandwidth ratio. We prove that the
distortion exponent upper bound can be achieved when the channel has only one
degree of freedom, that is, L=1 and $\min\{M_t,M_r\}=1$. When we have more
degrees of freedom, our achievable distortion exponents meet the upper bound
for only certain ranges of the bandwidth ratio. We demonstrate that our
results, which were derived for a complex Gaussian source, can be extended to
more general source distributions as well.
|
0711.1295
|
On the performance of Golden space-time trellis coded modulation over
MIMO block fading channels
|
cs.IT math.IT
|
The Golden space-time trellis coded modulation (GST-TCM) scheme was proposed
in \cite{Hong06} for a high rate $2\times 2$ multiple-input multiple-output
(MIMO) system over slow fading channels. In this letter, we present the
performance analysis of GST-TCM over block fading channels, where the channel
matrix is constant over a fraction of the codeword length and varies from one
fraction to another, independently. In practice, it is not useful to design
such codes for specific block fading channel parameters and a robust solution
is preferable. We then show both analytically and by simulation that the
GST-TCM codes designed for slow fading channels are indeed robust to all block fading
channel conditions.
|
0711.1360
|
Analytical approach to bit-string models of language evolution
|
physics.soc-ph cs.CL
|
A formulation of bit-string models of language evolution, based on
differential equations for the population speaking each language, is introduced
and preliminarily studied. Connections with replicator dynamics and diffusion
processes are pointed out. The stability of the dominance state, where most of
the population speaks a single language, is analyzed within a mean-field-like
approximation, while the homogeneous state, where the population is evenly
distributed among languages, can be exactly studied. This analysis discloses
the existence of a bistability region, where dominance coexists with
homogeneity as possible asymptotic states. Numerical resolution of the
differential system validates these findings.
|
0711.1383
|
On Minimal Tree Realizations of Linear Codes
|
cs.IT math.IT
|
A tree decomposition of the coordinates of a code is a mapping from the
coordinate set to the set of vertices of a tree. A tree decomposition can be
extended to a tree realization, i.e., a cycle-free realization of the code on
the underlying tree, by specifying a state space at each edge of the tree, and
a local constraint code at each vertex of the tree. The constraint complexity
of a tree realization is the maximum dimension of any of its local constraint
codes. A measure of the complexity of maximum-likelihood decoding for a code is
its treewidth, which is the least constraint complexity of any of its tree
realizations.
It is known that among all tree realizations of a code that extends a given
tree decomposition, there exists a unique minimal realization that minimizes
the state space dimension at each vertex of the underlying tree. In this paper,
we give two new constructions of these minimal realizations. As a by-product of
the first construction, a generalization of the state-merging procedure for
trellis realizations, we obtain the fact that the minimal tree realization also
minimizes the local constraint code dimension at each vertex of the underlying
tree. The second construction relies on certain code decomposition techniques
that we develop. We further observe that the treewidth of a code is related to
a measure of graph complexity, also called treewidth. We exploit this
connection to resolve a conjecture of Forney's regarding the gap between the
minimum trellis constraint complexity and the treewidth of a code. We present a
family of codes for which this gap can be arbitrarily large.
|
0711.1401
|
Towards a Sound Theory of Adaptation for the Simple Genetic Algorithm
|
cs.NE cs.AI
|
The pace of progress in the fields of Evolutionary Computation and Machine
Learning is currently limited -- in the former field, by the improbability of
making advantageous extensions to evolutionary algorithms when their capacity
for adaptation is poorly understood, and in the latter by the difficulty of
finding effective semi-principled reductions of hard real-world problems to
relatively simple optimization problems. In this paper we explain why a theory
which can accurately explain the simple genetic algorithm's remarkable capacity
for adaptation has the potential to address both these limitations. We describe
what we believe to be the impediments -- historic and analytic -- to the
discovery of such a theory and highlight the negative role that the building
block hypothesis (BBH) has played. We argue based on experimental results that
a fundamental limitation which is widely believed to constrain the SGA's
adaptive ability (and is strongly implied by the BBH) is in fact illusory
and does not exist. The SGA therefore turns out to be more powerful than it is
currently thought to be. We give conditions under which it becomes feasible to
numerically approximate and study the multivariate marginals of the search
distribution of an infinite population SGA over multiple generations even when
its genomes are long, and explain why this analysis is relevant to the riddle
of the SGA's remarkable adaptive abilities.
|
0711.1466
|
Predicting relevant empty spots in social interaction
|
cs.AI
|
An empty spot refers to an empty, hard-to-fill space which can be found in the
records of social interaction, and provides a clue to the persons in the
underlying social network who do not appear in the records. This contribution
addresses the problem of predicting relevant empty spots in social interaction.
Homogeneous and inhomogeneous networks are studied as models underlying the
social interaction. A heuristic predictor-function approach is presented as a
new method to address the problem. A simulation experiment is carried out over
a homogeneous network. Test data in the form of baskets are generated from the
simulated communication. The precision of predicting the empty spots is
calculated to demonstrate the performance of the presented approach.
|
0711.1478
|
A constructive Borel-Cantelli Lemma. Constructing orbits with required
statistical properties
|
math.CA cs.IT math.DS math.IT math.PR math.ST stat.TH
|
In the general context of computable metric spaces and computable measures we
prove a kind of constructive Borel-Cantelli lemma: given a sequence
(constructive in some way) of sets $A_{i}$ with effectively summable measures,
there are computable points which are not contained in infinitely many $A_{i}$.
As a consequence of this we obtain the existence of computable points which
follow the \emph{typical statistical behavior} of a dynamical system (they
satisfy the Birkhoff theorem) for a large class of systems, having computable
invariant measure and a certain ``logarithmic'' speed of convergence of
Birkhoff averages over Lipschitz observables. This is applied to uniformly
hyperbolic systems, piecewise expanding maps, and systems on the interval with
an indifferent fixed point, and it directly implies the existence of computable
numbers which are normal with respect to any base.
|
0711.1565
|
Channel Code Design with Causal Side Information at the Encoder
|
cs.IT math.IT
|
The problem of channel code design for the $M$-ary input AWGN channel with
additive $Q$-ary interference where the sequence of i.i.d. interference symbols
is known causally at the encoder is considered. The code design criterion at
high SNR is derived by defining a new distance measure between the input
symbols of the Shannon's \emph{associated} channel. For the case of
a binary-input channel, i.e., M=2, it is shown that it is sufficient to use
only two (out of $2^Q$) input symbols of the \emph{associated} channel in the
encoding as far as the distance spectrum of the code is concerned. This reduces the
problem of channel code design for the binary-input AWGN channel with known
interference at the encoder to design of binary codes for the binary symmetric
channel where the Hamming distance among codewords is the major factor in the
performance of the code.
|
0711.1573
|
Outage-Efficient Downlink Transmission Without Transmit Channel State
Information
|
cs.IT math.IT
|
This paper investigates downlink transmission over a quasi-static fading
Gaussian broadcast channel (BC), to model delay-sensitive applications over
slowly time-varying fading channels. System performance is characterized by
outage achievable rate regions. In contrast to most previous work, here the
problem is studied under the key assumption that the transmitter only knows the
probability distributions of the fading coefficients, but not their
realizations. For scalar-input channels, two coding schemes are proposed. The
first scheme is called blind dirty paper coding (B-DPC), which utilizes a
robustness property of dirty paper coding to perform precoding at the
transmitter. The second scheme is called statistical superposition coding
(S-SC), in which each receiver adaptively performs successive decoding with the
process statistically governed by the realized fading. Both B-DPC and S-SC
schemes lead to the same outage achievable rate region, which always dominates
that of time-sharing, irrespective of the particular fading distributions. The
S-SC scheme can be extended to BCs with multiple transmit antennas.
|
0711.1605
|
Asymptotic Capacity of Wireless Ad Hoc Networks with Realistic Links
under a Honey Comb Topology
|
cs.IT math.IT
|
We consider the effects of Rayleigh fading and lognormal shadowing in the
physical interference model for all the successful transmissions of traffic
across the network. New bounds are derived for the capacity of a given random
ad hoc wireless network that reflect packet drop or capture probability of the
transmission links. These bounds are based on a simplified network topology
termed the honeycomb topology, under a given routing and scheduling scheme.
|
0711.1765
|
Kinematic calibration of orthoglide-type mechanisms
|
cs.RO
|
The paper proposes a novel calibration approach for the Orthoglide-type
mechanisms based on observations of the manipulator leg parallelism during
motions between the prespecified test postures. It employs a low-cost
measuring system composed of standard comparator indicators attached to the
universal magnetic stands. They are sequentially used for measuring the
deviation of the relevant leg location while the manipulator moves the TCP
along the Cartesian axes. Using the measured differences, the developed
algorithm estimates the joint offsets that are treated as the most essential
parameters to be adjusted. The sensitivity of the measurement methods and the
calibration accuracy are also studied. Experimental results are presented that
demonstrate the validity of the proposed calibration technique.
|
0711.1766
|
Achieving the Gaussian Rate-Distortion Function by Prediction
|
cs.IT math.IT
|
The "water-filling" solution for the quadratic rate-distortion function of a
stationary Gaussian source is given in terms of its power spectrum. This
formula naturally lends itself to a frequency domain "test-channel"
realization. We provide an alternative time-domain realization for the
rate-distortion function, based on linear prediction. This solution has some
interesting implications, including the optimality at all distortion levels of
pre/post filtered vector-quantized differential pulse code modulation (DPCM),
and a duality relationship with decision-feedback equalization (DFE) for
inter-symbol interference (ISI) channels.
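For a concrete feel of the water-filling solution that the time-domain realization starts from, here is a generic reverse water-filling routine for a discretized spectrum (our own textbook sketch; the bisection and parameters are not from the paper):

```python
import math

def gaussian_rd(variances, D):
    """Quadratic rate-distortion for parallel Gaussian components via
    reverse water-filling: find the water level theta with
    sum_i min(theta, sigma_i^2) = D, then R = sum_i 0.5*log2(sigma_i^2/D_i).
    Returns the rate in bits per vector of components."""
    assert 0 < D <= sum(variances)
    lo, hi = 0.0, max(variances)
    for _ in range(100):                      # bisect on the water level
        theta = (lo + hi) / 2
        if sum(min(theta, v) for v in variances) < D:
            lo = theta
        else:
            hi = theta
    theta = (lo + hi) / 2
    return sum(0.5 * math.log2(v / min(theta, v)) for v in variances)

# A single component recovers the scalar formula 0.5*log2(sigma^2/D):
print(round(gaussian_rd([4.0], 1.0), 3))  # prints 1.0
```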
|
0711.1814
|
Building Rules on Top of Ontologies for the Semantic Web with Inductive
Logic Programming
|
cs.AI cs.LG
|
Building rules on top of ontologies is the ultimate goal of the logical layer
of the Semantic Web. To this aim, an ad hoc markup language for this layer is
currently under discussion. It is intended to follow the tradition of hybrid
knowledge representation and reasoning systems such as $\mathcal{AL}$-log that
integrates the description logic $\mathcal{ALC}$ and the function-free Horn
clausal language \textsc{Datalog}. In this paper we consider the problem of
automating the acquisition of these rules for the Semantic Web. We propose a
general framework for rule induction that adopts the methodological apparatus
of Inductive Logic Programming and relies on the expressive and deductive power
of $\mathcal{AL}$-log. The framework is valid whatever the scope of induction
(description vs. prediction) is. Yet, for illustrative purposes, we also
discuss an instantiation of the framework which aims at description and turns
out to be useful in Ontology Refinement.
Keywords: Inductive Logic Programming, Hybrid Knowledge Representation and
Reasoning Systems, Ontologies, Semantic Web.
Note: To appear in Theory and Practice of Logic Programming (TPLP)
|
0711.1890
|
A Geometric Interpretation of Fading in Wireless Networks: Theory and
Applications
|
cs.IT math.IT
|
In wireless networks with random node distribution, the underlying point
process model and the channel fading process are usually considered separately.
A unified framework is introduced that permits the geometric characterization
of fading by incorporating the fading process into the point process model.
Concretely, assuming nodes are distributed in a stationary Poisson point
process in $\mathbb{R}^d$, the properties of the point processes that describe the path
loss with fading are analyzed. The main applications are connectivity and
broadcasting.
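As a rough illustration of incorporating fading into the point process, here is a small stdlib-only simulation of fading-scaled path loss seen at the origin (the density, disk radius, path-loss exponent, and the Knuth Poisson sampler are our own choices, not the paper's):

```python
import math
import random

random.seed(1)

def knuth_poisson(lam):
    """Sample Poisson(lam) by Knuth's product method (fine for moderate lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def path_loss_points(intensity=0.5, radius=5.0, alpha=4.0):
    """Points of a Poisson process on a disk, mapped to the fading-scaled
    path-loss values r**alpha / h at the origin, where the power gain h is
    Exp(1), i.e. Rayleigh fading. Illustrative only."""
    n = knuth_poisson(intensity * math.pi * radius ** 2)
    losses = []
    for _ in range(n):
        r = radius * math.sqrt(random.random())  # uniform point in the disk
        h = random.expovariate(1.0)              # Rayleigh power fading
        losses.append(r ** alpha / h)
    return sorted(losses)

pts = path_loss_points()
print(len(pts), "points; smallest effective path loss:", round(pts[0], 4))
```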
|
0711.1986
|
Performance bounds and codes design criteria for channel decoding with
a-priori information
|
cs.IT math.IT
|
In this article we focus on the problem of channel decoding in presence of
a-priori information. In particular, assuming that the a-priori information
reliability is not perfectly estimated at the receiver, we derive a novel
analytical framework for evaluating the decoder's performance. We derive
the important result that a "good code", i.e., a code which fully exploits
the potential benefit of a-priori information, must associate information
sequences with high Hamming weights with codewords with low Hamming weights.
Based on the proposed analysis, we study the performance of
convolutional codes, random codes, and turbo codes. Moreover, we consider the
transmission of correlated binary sources from independent nodes, a problem
which has several practical applications, e.g. in the case of sensor networks.
In this context, we propose a very simple joint source-channel turbo decoding
scheme where each decoder works by exploiting a-priori information given by the
other decoder. In the case of block fading channels, it is shown that the
inherent correlation between information signals provides a form of
non-cooperative diversity, thus allowing joint source-channel decoding to
outperform separation-based schemes.
|
0711.2023
|
Empirical Evaluation of Four Tensor Decomposition Algorithms
|
cs.LG cs.CL cs.IR
|
Higher-order tensor decompositions are analogous to the familiar Singular
Value Decomposition (SVD), but they transcend the limitations of matrices
(second-order tensors). SVD is a powerful tool that has achieved impressive
results in information retrieval, collaborative filtering, computational
linguistics, computational vision, and other fields. However, SVD is limited to
two-dimensional arrays of data (two modes), and many potential applications
have three or more modes, which require higher-order tensor decompositions.
This paper evaluates four algorithms for higher-order tensor decomposition:
Higher-Order Singular Value Decomposition (HO-SVD), Higher-Order Orthogonal
Iteration (HOOI), Slice Projection (SP), and Multislice Projection (MP). We
measure the time (elapsed run time), space (RAM and disk space requirements),
and fit (tensor reconstruction accuracy) of the four algorithms, under a
variety of conditions. We find that standard implementations of HO-SVD and HOOI
do not scale up to larger tensors, due to increasing RAM requirements. We
recommend HOOI for tensors that are small enough for the available RAM and MP
for larger tensors.
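As context for the HO-SVD baseline: it amounts to one SVD per mode unfolding plus a core contraction. A compact numpy sketch (our own, with illustrative ranks; none of the paper's implementations are shown):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def ho_svd(T, ranks):
    """Truncated HO-SVD: factor U_n from the SVD of each mode unfolding,
    then contract the core G = T x_1 U_1^T ... x_N U_N^T."""
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    G = T
    for mode, U in enumerate(Us):
        moved = np.moveaxis(G, mode, 0)                  # bring mode to front
        G = np.moveaxis(np.tensordot(U.T, moved, axes=1), 0, mode)
    return G, Us

def reconstruct(G, Us):
    """Multiply the core back by each factor to approximate the tensor."""
    T = G
    for mode, U in enumerate(Us):
        moved = np.moveaxis(T, mode, 0)
        T = np.moveaxis(np.tensordot(U, moved, axes=1), 0, mode)
    return T
```

With full ranks the reconstruction is exact; truncating the ranks gives the lossy decomposition whose fit (reconstruction accuracy) the paper measures.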
|
0711.2050
|
Two Families of Quantum Codes Derived from Cyclic Codes
|
cs.IT math.IT
|
We characterize the affine-invariant maximal extended cyclic codes. Then, by
the CSS construction, we derive from these codes a family of pure quantum
codes. Also, for $\mathrm{ord}_n(q)$ even, a new family of degenerate quantum
stabilizer codes is derived from the classical duadic codes. This answers an
open problem posed by Aly et al.
|
0711.2058
|
Computer Model of a "Sense of Humour". I. General Algorithm
|
q-bio.NC cs.AI
|
A computer model of a "sense of humour" is proposed. The humorous effect is
interpreted as a specific malfunction in the course of information processing
due to the need for the rapid deletion of the false version transmitted into
consciousness. The biological function of a sense of humour consists in
speeding up the bringing of information into consciousness and in fuller use of
the resources of the brain.
|
0711.2061
|
Computer Model of a "Sense of Humour". II. Realization in Neural
Networks
|
q-bio.NC cs.AI
|
The computer realization of a "sense of humour" requires the creation of an
algorithm for solving the "linguistic problem", i.e. the problem of recognizing
a continuous sequence of polysemantic images. Such algorithm may be realized in
the Hopfield model of a neural network after its proper modification.
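
As a point of reference, the unmodified classical Hopfield model with Hebbian storage can be written as follows (a generic sketch; the paper's modification for recognizing sequences of polysemantic images is not reproduced here):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: weights are the sum of outer products of the
    stored +/-1 patterns (rows), with a zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=10):
    """Synchronous sign updates until a fixed point (or step limit)."""
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1  # break ties toward +1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x
```

Starting from a corrupted version of a stored pattern, the dynamics relax to the nearest attractor, which is the retrieval mechanism the proposed modification builds on.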
|
0711.2087
|
Query Evaluation and Optimization in the Semantic Web
|
cs.DB cs.LO
|
We address the problem of answering Web ontology queries efficiently. An
ontology is formalized as a Deductive Ontology Base (DOB), a deductive database
that comprises the ontology's inference axioms and facts. A cost-based query
optimization technique for DOB is presented. A hybrid cost model is proposed to
estimate the cost and cardinality of basic and inferred facts. Cardinality and
cost of inferred facts are estimated using an adaptive sampling technique,
while techniques of traditional relational cost models are used for estimating
the cost of basic facts and conjunctive ontology queries. Finally, we implement
a dynamic-programming optimization algorithm to identify query evaluation plans
that minimize the number of intermediate inferred facts. We modeled a subset of
the Web ontology language OWL Lite as a DOB, and performed an experimental
study to analyze the predictive capacity of our cost model and the benefits of
the query optimization technique. Our study has been conducted over synthetic
and real-world OWL ontologies, and shows that the techniques are accurate and
improve query performance. To appear in Theory and Practice of Logic
Programming (TPLP).
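
The flavor of dynamic-programming plan search described here can be illustrated generically (a textbook subset-DP over join orders with hypothetical cardinality and selectivity estimates, not the paper's DOB-specific cost model):

```python
from itertools import combinations

def best_order(card, sel):
    """Subset dynamic programming: find the subgoal ordering that
    minimizes the total size of intermediate results.
    card[i]   -- estimated cardinality of subgoal i
    sel[i][j] -- estimated join selectivity between subgoals i and j"""
    n = len(card)
    # state: frozenset of joined subgoals -> (total cost, result size, order)
    best = {frozenset([i]): (0.0, float(card[i]), [i]) for i in range(n)}
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            s, choice = frozenset(subset), None
            for i in subset:
                rest = s - {i}
                cost_r, size_r, order_r = best[rest]
                out = size_r * card[i]
                for j in rest:
                    out *= sel[i][j]
                cand = (cost_r + out, out, order_r + [i])
                if choice is None or cand[0] < choice[0]:
                    choice = cand
            best[s] = choice
    return best[frozenset(range(n))]
```

In this toy setting, joining the small subgoals first and deferring the large one minimizes the intermediate result sizes, which mirrors the paper's objective of minimizing intermediate inferred facts.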
|
0711.2102
|
Patterns of i.i.d. Sequences and Their Entropy - Part II: Bounds for
Some Distributions
|
cs.IT math.IT
|
A pattern of a sequence is a sequence of integer indices with each index
describing the order of first occurrence of the respective symbol in the
original sequence. In a recent paper, tight general bounds on the block entropy
of patterns of sequences generated by independent and identically distributed
(i.i.d.) sources were derived. In this paper, precise approximations are
provided for the pattern block entropies for patterns of sequences generated by
i.i.d. uniform and monotonic distributions, including distributions over the
integers, and the geometric distribution. Numerical bounds on the pattern block
entropies of these distributions are provided even for very short blocks. Tight
bounds are obtained even for distributions that have infinite i.i.d. entropy
rates. The approximations are obtained using general bounds and their
derivation techniques. Conditional index entropy is also studied for
distributions over smaller alphabets.
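
The pattern transformation itself is straightforward (a sketch; the helper name is illustrative):

```python
def pattern(seq):
    """Replace each symbol by the 1-based order of its first
    occurrence in the sequence."""
    first_seen = {}
    out = []
    for s in seq:
        if s not in first_seen:
            first_seen[s] = len(first_seen) + 1
        out.append(first_seen[s])
    return out

print(pattern("abracadabra"))  # [1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1]
```

Note that the pattern discards the symbol identities, which is why its block entropy can stay finite even for sources with infinite i.i.d. entropy rates.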
|
0711.2104
|
On the Information Rates of the Plenoptic Function
|
cs.IT cs.CV math.IT math.PR
|
The {\it plenoptic function} (Adelson and Bergen, 91) describes the visual
information available to an observer at any point in space and time. Samples of
the plenoptic function (POF) are seen in video and in general visual content,
and represent large amounts of information. In this paper we propose a
stochastic model to study the compression limits of the plenoptic function. In
the proposed framework, we isolate the two fundamental sources of information
in the POF: the one representing the camera motion and the other representing
the information complexity of the "reality" being acquired and transmitted. The
sources of information are combined, generating a stochastic process that we
study in detail. We first propose a model for ensembles of realities that do
not change over time. The proposed model is simple in that it enables us to
derive precise coding bounds in the information-theoretic sense that are sharp
in a number of cases of practical interest. For this simple case of static
realities and camera motion, our results indicate that coding practice is in
accordance with optimal coding from an information-theoretic standpoint. The
model is further extended to account for visual realities that change over
time. We derive bounds on the lossless and lossy information rates for this
dynamic reality model, stating conditions under which the bounds are tight.
Examples with synthetic sources suggest that in the presence of scene dynamics,
simple hybrid coding using motion/displacement estimation with DPCM performs
considerably suboptimally relative to the true rate-distortion bound.
|
0711.2116
|
A numerical approach for 3D manufacturing tolerances synthesis
|
cs.CE
|
Making a product conform to the functional requirements specified by the
customer presupposes the ability to manage the manufacturing process chosen to
realise the parts. A simulation step is generally performed to verify that the
expected generated deviations fit with these requirements. It is then necessary
to assess the actual deviations of the process in progress. This is usually
done by the verification of the conformity of the workpiece to manufacturing
tolerances at the end of each set-up. It is thus necessary to determine these
manufacturing tolerances. This step is called "manufacturing tolerance
synthesis". In this paper, a numerical method is proposed to perform 3D
manufacturing tolerances synthesis. This method uses the result of the
numerical analysis of tolerances to determine the influential small
displacements of surfaces. These displacements are described by
small-displacement torsors. An
algorithm is then proposed to determine suitable ISO manufacturing tolerances.
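
The small-displacement-torsor bookkeeping underlying such an analysis can be sketched generically (standard torsor transport and composition; the function names are illustrative, not the paper's code):

```python
import numpy as np

def transport(torsor, A, B):
    """Transport a small-displacement torsor (omega, d_A) from point A
    to point B: the rotation part omega is invariant, while the
    translation picks up the term omega x AB."""
    omega, dA = torsor
    return omega, dA + np.cross(omega, np.asarray(B, float) - np.asarray(A, float))

def stack_up(torsors):
    """Compose a chain of deviation torsors expressed at the same point
    by summing rotation and translation parts."""
    omega = sum(t[0] for t in torsors)
    d = sum(t[1] for t in torsors)
    return omega, d
```

Transporting each surface deviation to a common point and summing the torsors is what allows the 3D stack-up of manufacturing deviations to be evaluated numerically.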
|