| id | title | categories | abstract |
|---|---|---|---|
1007.0904
|
Secure rate-adaptive reconciliation
|
cs.IT math.IT
|
We consider in this paper the problem of information reconciliation in the
context of secret key agreement between two legitimate parties, Alice and Bob.
Beginning the discussion with the secret key agreement model introduced by
Ahlswede and Csisz\'ar, the channel-type model with wiretapper, we study a
protocol based on error correcting codes. The protocol can be adapted to
changes in the communication channel extending the original source. The
efficiency of the reconciliation is only limited by the quality of the code
and, while transmitting more information than needed to reconcile Alice's and
Bob's sequences, it does not reveal any more information on the original source
than an ad-hoc code would have revealed.
|
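The protocol above is rate-adaptive and built on strong codes; as a much simpler illustration of the underlying idea of syndrome-based reconciliation, here is a toy sketch with a Hamming(7,4) parity-check matrix, where Alice's public syndrome lets Bob correct a single-bit discrepancy (the example strings are ours, not the paper's).

```python
# Minimal sketch of syndrome-based information reconciliation.
# Illustrative only: the paper's protocol is rate-adaptive; here a
# toy Hamming(7,4) code corrects one bit of discrepancy between
# Alice's and Bob's strings.

# Parity-check matrix H: column j is the 3-bit binary form of j+1,
# so the syndrome of a single-bit error names the flipped position.
H = [[(j + 1) >> i & 1 for j in range(7)] for i in range(3)]

def syndrome(word):
    """3-bit syndrome H * word^T over GF(2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def reconcile(alice, bob):
    """Bob corrects his word using only Alice's public syndrome."""
    s_a, s_b = syndrome(alice), syndrome(bob)
    # Syndrome of the error pattern alice XOR bob.
    s = [(a + b) % 2 for a, b in zip(s_a, s_b)]
    pos = s[0] + 2 * s[1] + 4 * s[2]     # 0 means "no discrepancy"
    fixed = list(bob)
    if pos:
        fixed[pos - 1] ^= 1              # flip the disagreeing bit
    return fixed

alice = [1, 0, 1, 1, 0, 0, 1]
bob   = [1, 0, 1, 0, 0, 0, 1]            # one-bit discrepancy
assert reconcile(alice, bob) == alice
```

Note that Alice's string need not be a codeword: the syndrome of the difference pattern alone identifies the disagreeing position, which is what makes syndrome transmission a reconciliation primitive.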
1007.0931
|
LDPC Code Design for Transmission of Correlated Sources Across Noisy
Channels Without CSIT
|
cs.IT math.IT
|
We consider the problem of transmitting correlated data after independent
encoding to a central receiver through orthogonal channels. We assume that the
channel state information is not known at the transmitter. The receiver has
access to both the source correlation and the channel state information. We
provide a generic framework for analyzing the performance of joint iterative
decoding, using density evolution. Using differential evolution, we design
punctured systematic LDPC codes to maximize the region of achievable channel
conditions, with joint iterative decoding. The main contribution of this paper
is to demonstrate that properly designed LDPC codes can perform well
simultaneously over a wide range of channel parameters.
|
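The paper tracks joint iterative decoding of correlated sources with density evolution; as a far simpler illustration of the tool itself, here is the textbook density-evolution recursion for a regular (dv, dc) LDPC ensemble on the binary erasure channel (not the paper's joint source-channel setting).

```python
# Textbook density evolution for a regular (dv, dc) LDPC ensemble on
# the binary erasure channel: x is the variable-to-check erasure
# probability, iterated to its fixed point.

def density_evolution_bec(eps, dv=3, dc=6, iters=200):
    """Return the fixed-point erasure probability of iterative decoding."""
    x = eps
    for _ in range(iters):
        # check-to-variable erasure, then variable-to-check erasure
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

# Below the (3,6) threshold (~0.4294) decoding succeeds; above it fails.
assert density_evolution_bec(0.40) < 1e-6
assert density_evolution_bec(0.45) > 0.1
```

Sweeping `eps` over a grid and recording where the fixed point hits zero is exactly how a decoding threshold, and hence a "region of achievable channel conditions", is mapped out.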
1007.0936
|
Linguistic complexity: English vs. Polish, text vs. corpus
|
cs.CL physics.soc-ph
|
We analyze the rank-frequency distributions of words in selected English and
Polish texts. We show that for the lemmatized (basic) word forms the
scale-invariant regime breaks after about two decades, while it might be
consistent for the whole range of ranks for the inflected word forms. We also
find that for a corpus consisting of texts written by different authors the
basic scale-invariant regime is broken more strongly than in the case of a
comparable corpus consisting of texts written by the same author. Similarly,
for a corpus consisting of texts translated into Polish from other languages
the scale-invariant regime is broken more strongly than for a comparable corpus
of native Polish texts. Moreover, we find that if the words are tagged with
their proper parts of speech, only verbs show a rank-frequency distribution
that is almost scale-invariant.
|
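The basic object analyzed above is the rank-frequency distribution of words. A minimal sketch of its computation from raw text (lemmatization and POS tagging, which the paper also uses, are omitted here):

```python
# Computing a rank-frequency (Zipf) distribution from raw text.
from collections import Counter

def rank_frequency(text):
    """Return (rank, frequency) pairs, rank 1 = most frequent word."""
    counts = Counter(text.lower().split())
    freqs = sorted(counts.values(), reverse=True)
    return list(enumerate(freqs, start=1))

pairs = rank_frequency("to be or not to be that is the question")
assert pairs[0] == (1, 2)      # "to" and "be" each occur twice
assert len(pairs) == 8         # eight distinct word forms
```

Plotting these pairs on log-log axes is what reveals the scale-invariant (power-law) regime and where it breaks.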
1007.0940
|
An axiomatic formalization of bounded rationality based on a
utility-information equivalence
|
cs.AI cs.GT
|
Classical decision theory is based on the maximum expected utility (MEU)
principle, but crucially ignores the resource costs incurred when determining
optimal decisions. Here we propose an axiomatic framework for bounded
decision-making that considers resource costs. Agents are formalized as
probability measures over input-output streams. We postulate that any such
probability measure can be assigned a corresponding conjugate utility function
based on three axioms: utilities should be real-valued, additive and monotonic
mappings of probabilities. We show that these axioms enforce a unique
conversion law between utility and probability (and thereby, information).
Moreover, we show that this relation can be characterized as a variational
principle: given a utility function, its conjugate probability measure
maximizes a free utility functional. Transformations of probability measures
can then be formalized as a change in free utility due to the addition of new
constraints expressed by a target utility function. Accordingly, one obtains a
criterion to choose a probability measure that trades off the maximization of a
target utility function and the cost of the deviation from a reference
distribution. We show that optimal control, adaptive estimation, and adaptive
control problems can be solved in this manner in a resource-efficient way. When
resource costs are ignored, the MEU principle is recovered. Our formalization
might thus provide a principled approach to bounded rationality that
establishes a close link to information theory.
|
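In the discrete case, the variational principle described above has a closed-form maximizer: the conjugate measure is an exponentially tilted (softmax) version of the reference distribution. A minimal sketch, where the inverse temperature `beta` and the toy utilities are our illustrative choices, not the paper's:

```python
# Maximizing the free utility  E_p[U] - (1/beta) * KL(p || p0)
# over distributions p gives the tilted distribution
#   p*(x)  proportional to  p0(x) * exp(beta * U(x)).
import math

def free_utility_maximizer(p0, U, beta=1.0):
    """Return the maximizing distribution p* on a finite set."""
    w = [p * math.exp(beta * u) for p, u in zip(p0, U)]
    z = sum(w)
    return [wi / z for wi in w]

p0 = [0.25, 0.25, 0.25, 0.25]      # reference distribution
U = [1.0, 2.0, 0.0, 0.0]           # target utility (toy values)
p = free_utility_maximizer(p0, U, beta=2.0)
assert abs(sum(p) - 1.0) < 1e-12
assert p[1] == max(p)              # mass shifts to high utility
```

As `beta` grows the MEU choice (the argmax of U) is recovered; as `beta` goes to zero the reference distribution is recovered, which is the trade-off between utility maximization and deviation cost described in the abstract.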
1007.0945
|
Human activity as the decision-based queueing process: statistical data
analysis of waiting times in scientific journals
|
physics.data-an cs.IR
|
We consider the editorial processing of papers in scientific journals as a
human activity process based on decision making. The functional form of the
probability distributions of random variables describing human dynamics is
studied using classical approaches from the theory of mass service systems, the
physics of critical phenomena, and statistical methods of data analysis. An
additional goal is to corroborate the scientometric application of the results
obtained.
Keywords: data analysis, statistics, mass service systems, human activity,
scientometrics
|
1007.0982
|
MIMO B-MAC Interference Network Optimization under Rate Constraints by
Polite Water-filling and Duality
|
cs.IT math.IT
|
We take two new approaches to design efficient algorithms for transmitter
optimization under rate constraints, to guarantee the Quality of Service in
general MIMO interference networks, which are combinations of multiple
interfering broadcast channels (BC) and multiaccess channels (MAC) and are
named B-MAC networks. Two related optimization problems, maximizing the minimum of
weighted rates under a sum-power constraint and minimizing the sum-power under
rate constraints, are considered. The first approach takes advantage of
existing efficient algorithms for SINR problems by building a bridge between
rate and SINR through the design of optimal mappings between them. The approach
can be applied to other optimization problems as well. The second approach
employs polite water-filling, which is the optimal network version of
water-filling that we recently found. It replaces most generic optimization
algorithms currently used for networks and reduces the complexity while
demonstrating superior performance even in non-convex cases. Both centralized
and distributed algorithms are designed and the performance is analyzed in
addition to numeric examples.
|
1007.1016
|
Bilateral filters: what they can and cannot do
|
cs.CV
|
Nonlinear bilateral filters (BF) deliver a fine blend of computational
simplicity and blur-free denoising. However, little is known about their
nature, noise-suppressing properties, and optimal choices of filter parameters.
Our study is meant to fill this gap by explaining the underlying mechanism of
bilateral filtering and providing a methodology for optimal filter selection.
Practical application to CT image denoising is discussed to illustrate our
results.
|
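For concreteness, here is a minimal 1-D bilateral filter sketch: each sample is replaced by a weighted average of its neighbours, where the weight combines a spatial Gaussian with a range (intensity-difference) Gaussian, so smoothing stops at sharp edges. The parameter values below are illustrative, not the paper's optimal choices.

```python
# Minimal 1-D bilateral filter: spatial closeness times intensity
# similarity determines each neighbour's weight.
import math

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

noisy_step = [0, 1, 0, 1, 100, 101, 100, 101]   # noisy edge
smoothed = bilateral_1d(noisy_step)
# The 100-unit jump survives: range weights suppress averaging across
# the edge, while the small noise on either side is smoothed out.
assert smoothed[3] < 50 < smoothed[4]
```

The 2-D version used for CT denoising is the same computation with a square neighbourhood instead of an interval.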
1007.1024
|
Model Counting in Product Configuration
|
cs.AI cs.LO cs.SC
|
We describe how to use propositional model counting for a quantitative
analysis of product configuration data. Our approach computes valuable meta
information such as the total number of valid configurations or the relative
frequency of components. This information can be used to assess the severity of
documentation errors or to measure documentation quality. As an application
example we show how we apply these methods to product documentation formulas of
the Mercedes-Benz line of vehicles. In order to process these large formulas we
developed and implemented a new model counter for non-CNF formulas. Our model
counter can process formulas whose CNF representations could not be processed
until now.
|
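A toy model counter makes the queries above concrete. This brute-force enumerator is only an illustration of the meta-information (total count, relative frequency of a component); the paper's counter handles formulas far beyond exhaustive enumeration, and the example constraint is ours.

```python
# Toy model counter for arbitrary (non-CNF) propositional formulas,
# by exhaustive enumeration over all assignments.
from itertools import product

def count_models(variables, formula):
    """Number of assignments (as dicts) satisfying `formula`."""
    return sum(
        formula(dict(zip(variables, bits)))
        for bits in product([False, True], repeat=len(variables))
    )

# Hypothetical configuration constraint: component a requires
# component b, and at least one of a, c must be chosen.
variables = ["a", "b", "c"]
formula = lambda m: (not m["a"] or m["b"]) and (m["a"] or m["c"])

total = count_models(variables, formula)
with_a = count_models(variables, lambda m: formula(m) and m["a"])
assert total == 4
assert with_a == 2          # relative frequency of a: 2/4 = 0.5
```

The ratio `with_a / total` is the "relative frequency of a component" that the abstract describes as a quality measure for product documentation.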
1007.1025
|
Inflection system of a language as a complex network
|
cs.CL nlin.AO
|
We investigate the inflection structure of a synthetic language, using Latin as
an example. We construct a bipartite graph in which one group of vertices
corresponds to dictionary headwords and the other group to inflected forms
encountered in a given text. Each inflected form is connected to its
corresponding headword, which in some cases is non-unique. The resulting sparse
graph decomposes into a large number of connected components, to be called word
groups. We then show how the concept of the word group can be used to construct
coverage curves of selected Latin texts. We also investigate a version of the
inflection graph in which all theoretically possible inflected forms are
included. The distribution of sizes of connected components of this graph
resembles the cluster distribution in lattice percolation near the critical
point.
|
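The word-group construction above can be sketched directly: build the bipartite form-headword graph and decompose it into connected components. The tiny Latin sample is our illustration (including one deliberately ambiguous form).

```python
# Word groups as connected components of the bipartite graph linking
# inflected forms to (possibly several) headwords.
from collections import defaultdict

def word_groups(links):
    """links: (inflected_form, headword) pairs -> list of components."""
    adj = defaultdict(set)
    for form, head in links:
        adj[("f", form)].add(("h", head))
        adj[("h", head)].add(("f", form))
    seen, groups = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                     # depth-first search
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node])
        seen |= comp
        groups.append(sorted({n for _, n in comp}))
    return groups

links = [
    ("amat", "amo"), ("amant", "amo"),      # forms of 'amo'
    ("est", "sum"),                          # form of 'sum'
    ("canis", "canis"), ("cani", "canis"),   # forms of 'canis'
    ("cani", "canus"),                       # 'cani' is ambiguous
]
groups = word_groups(links)
assert len(groups) == 3   # ambiguity merges canis and canus into one group
```

Coverage curves are then obtained by counting how many word groups are needed to cover a growing fraction of a text's tokens.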
1007.1033
|
A Theory of Network Equivalence, Parts I and II
|
cs.IT math.IT
|
A family of equivalence tools for bounding network capacities is introduced.
Part I treats networks of point-to-point channels. The main result is roughly
as follows. Given a network of noisy, independent, memoryless point-to-point
channels, a collection of communication demands can be met on the given network
if and only if it can be met on another network where each noisy channel is
replaced by a noiseless bit pipe with throughput equal to the noisy channel
capacity. This result was known previously for the case of a single-source
multicast demand. The result given here treats general demands -- including,
for example, multiple unicast demands -- and applies even when the achievable
rate region for the corresponding demands is unknown in the noiseless network.
In part II, definitions of upper and lower bounding channel models for general
channels are introduced. By these definitions, a collection of communication
demands can be met on a network of independent channels if it can be met on a
network where each channel is replaced by its lower bounding model and only if
it can be met on a network where each channel is replaced by its upper bounding
model. This work derives general conditions under which a network of noiseless
bit pipes is an upper or lower bounding model for a multiterminal channel.
Example upper and lower bounding models for broadcast, multiple access, and
interference channels are given. It is then shown that bounding the difference
between the upper and lower bounding models for a given channel yields bounds
on the accuracy of network capacity bounds derived using those models. By
bounding the capacity of a network of independent noisy channels by the network
coding capacity of a network of noiseless bit pipes, this approach represents
one step towards the goal of building computational tools for bounding network
capacities.
|
1007.1048
|
Registration of Brain Images using Fast Walsh Hadamard Transform
|
cs.CV
|
Many image registration techniques have been developed, with great
significance for data analysis in medicine, astrophotography, satellite imaging,
and a few other areas. This work proposes a method for medical image registration
using the Fast Walsh Hadamard transform. This algorithm registers images of the
same or different modalities. Each image is expanded in terms of Fast
Walsh Hadamard basis functions. Each basis function captures a particular
aspect of local structure, e.g., a horizontal edge or a corner. These
coefficients are normalized and used as numerals in a chosen number system
which allows one to form a unique number for each type of local structure. The
experimental results show that the Fast Walsh Hadamard transform achieves
better results than the conventional Walsh transform in the time domain. The
Fast Walsh Hadamard transform is also more reliable for medical image
registration, while consuming less time.
|
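The O(N log N) workhorse behind the scheme above is the in-place fast Walsh-Hadamard butterfly, sketched here in its standard unnormalized form (the registration pipeline around it is not reproduced):

```python
# In-place fast Walsh-Hadamard transform (natural/Hadamard ordering).
def fwht(a):
    """Transform a list whose length is a power of two; returns a."""
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly
        h *= 2
    return a

a = [1, 0, 1, 0, 0, 1, 1, 0]
t = fwht(list(a))
# The transform is self-inverse up to a factor of N:
assert fwht(list(t)) == [8 * v for v in a]
```

In the registration setting, such coefficients computed over local windows serve as signatures of local structure that can be normalized and compared across images.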
1007.1069
|
On the instantaneous frequency of Gaussian stochastic processes
|
cs.IT math.IT math.PR
|
This paper concerns the instantaneous frequency (IF) of continuous-time,
zero-mean, complex-valued, proper, mean-square differentiable nonstationary
Gaussian stochastic processes. We compute the probability density function for
the IF for fixed time, which extends a result known for wide-sense stationary
processes to nonstationary processes. For a fixed time the IF has either zero
or infinite variance. For harmonizable processes we obtain as a byproduct that
the mean of the IF, for fixed time, is the normalized first order frequency
moment of the Wigner spectrum.
|
1007.1079
|
Intelligent data analysis based on the complex network theory methods: a
case study
|
cs.IR physics.data-an
|
The development of modern information technologies permits to collect and to
analyze huge amounts of statistical data in different spheres of life. The main
problem is not to only to collect but to process all relevant information. The
purpose of our work is to show the example of intelligent data analysis in such
complex and non-formalized field as science. Using the statistical data about
scientific periodical it is possible to perform its comprehensive analysis and
to solve different practical problems. The combination of various approaches
including the statistical analysis, methods of the complex network theory and
different techniques that can be used for the concept mapping permits to
perform an intelligent data analysis in order to obtain underlying patterns and
hidden connections. Results of such analysis can be used for particular
practical problems like information retrieval within journal.
|
1007.1087
|
Competition of Wireless Providers for Atomic Users
|
cs.IT cs.GT math.IT
|
We study a problem where wireless service providers compete for heterogeneous
wireless users. The users differ in their utility functions as well as in the
perceived quality of service of individual providers. We model the interaction
of an arbitrary number of providers and users as a two-stage
multi-leader-follower game. We prove existence and uniqueness of the subgame
perfect Nash equilibrium for a generic channel model and a wide class of users'
utility functions. We show that the competition of resource providers leads to
a globally optimal outcome under mild technical conditions. Most users will
purchase the resource from only one provider at the unique subgame perfect
equilibrium. The number of users who connect to multiple providers at the
equilibrium is always smaller than the number of providers. We also present a
decentralized algorithm that globally converges to the unique system
equilibrium with only local information under mild conditions on the update
rates.
|
1007.1174
|
Group Based Interference Alignment
|
cs.IT math.IT
|
In the $K$-user single-input single-output (SISO) frequency-selective fading
interference channel, it is shown that the maximal achievable multiplexing gain
is almost surely $K/2$ by using interference alignment (IA). However, when the
signaling dimensions are limited, allocating all the resources to all users
simultaneously is not optimal. Therefore, a group-based interference alignment
(GIA) scheme is proposed and formulated as an unbounded knapsack problem.
Optimal and greedy search algorithms are proposed to obtain group patterns.
Analysis and numerical results show that the GIA scheme can obtain a higher
multiplexing gain when the resources are limited.
|
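Since the group-pattern selection is formulated as an unbounded knapsack problem, a generic dynamic-programming sketch of that problem may help; the (dimensions, gain) numbers below are made up for illustration and are not derived from the IA setting.

```python
# Generic unbounded-knapsack DP of the kind the GIA formulation
# reduces to: pick group patterns (items, reusable any number of
# times) to maximize total gain within limited signaling dimensions.
def unbounded_knapsack(capacity, items):
    """items: (weight, value) pairs; returns the best achievable value."""
    best = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for w, v in items:
            if w <= c:
                best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# hypothetical (dimensions used, multiplexing gain) per group pattern
patterns = [(3, 4), (4, 5), (7, 10)]
assert unbounded_knapsack(10, patterns) == 14   # pick (3,4) + (7,10)
```

The DP runs in O(capacity x items) time, which is the "optimal search" baseline a greedy pattern-selection heuristic would be compared against.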
1007.1209
|
Prime Factor Cyclotomic Fourier Transforms with Reduced Complexity over
Finite Fields
|
cs.IT math.IT
|
Discrete Fourier transforms~(DFTs) over finite fields have widespread
applications in error correction coding. Hence, reducing the computational
complexities of DFTs is of great significance, especially for long DFTs as
increasingly longer error control codes are chosen for digital communication
and storage systems. Since DFTs involve both multiplications and additions over
finite fields and multiplications are much more complex than additions,
recently proposed cyclotomic fast Fourier transforms (CFFTs) are promising due
to their low multiplicative complexity. Unfortunately, they have very high
additive complexity. Techniques such as common subexpression elimination (CSE)
can be used to reduce the additive complexity of CFFTs, but their effectiveness
for long DFTs is limited by their complexity. In this paper, we propose prime
factor cyclotomic Fourier transforms (PFCFTs), which use CFFTs as sub-DFTs via
the prime factor algorithm. When the length of DFTs is prime, our PFCFTs reduce
to CFFTs. When the length has co-prime factors, the sub-DFTs have much
shorter lengths, which allows us to use CSE to significantly reduce their
additive complexity. In comparison to previously proposed fast Fourier
transforms, our PFCFTs achieve reduced overall complexity when the length of
the DFT is at least 255, and the improvement significantly increases as the
length grows. This approach also enables us to propose efficient DFTs of very
long lengths (e.g., 4095-point), the first efficient DFTs of such lengths in the
literature. Finally, our PFCFTs are also advantageous for hardware
implementation due to their regular structure.
|
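The prime factor (Good-Thomas) decomposition at the heart of the construction can be sketched over the complex field: an N = N1*N2 point DFT with gcd(N1, N2) = 1 splits into N1- and N2-point sub-DFTs with no twiddle factors, via CRT index maps. This is only the index-mapping skeleton (written for clarity, not speed); the paper applies it over finite fields with CFFTs as the sub-DFTs.

```python
# Good-Thomas prime factor algorithm, complex-field sketch.
import cmath

def naive_dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                for n in range(N)) for k in range(N)]

def pfa_dft(x, N1, N2):
    N = N1 * N2
    t1, t2 = pow(N1, -1, N2), pow(N2, -1, N1)       # CRT inverses
    X = [0.0] * N
    for k1 in range(N1):
        for k2 in range(N2):
            k = (k1 * N2 * t2 + k2 * N1 * t1) % N   # output index map
            # cross terms vanish mod N, leaving twiddle-free sub-DFTs
            X[k] = sum(
                x[(n1 * N2 + n2 * N1) % N]          # input index map
                * cmath.exp(-2j * cmath.pi * n1 * k1 / N1)
                * cmath.exp(-2j * cmath.pi * n2 * k2 / N2)
                for n1 in range(N1) for n2 in range(N2))
    return X

x = [complex(i, 0) for i in range(15)]              # N = 3 * 5
ref, pfa = naive_dft(x), pfa_dft(x, 3, 5)
assert max(abs(a - b) for a, b in zip(ref, pfa)) < 1e-9
```

A real implementation would evaluate the inner sums as shared N1- and N2-point sub-DFTs rather than recomputing them per output, which is where the complexity savings come from.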
1007.1213
|
Composite Cyclotomic Fourier Transforms with Reduced Complexities
|
cs.IT math.IT
|
Discrete Fourier transforms~(DFTs) over finite fields have widespread
applications in digital communication and storage systems. Hence, reducing the
computational complexities of DFTs is of great significance. Recently proposed
cyclotomic fast Fourier transforms (CFFTs) are promising due to their low
multiplicative complexities. Unfortunately, there are two issues with CFFTs:
(1) they rely on efficient short cyclic convolution algorithms, which have not
been investigated thoroughly yet, and (2) they have very high additive
complexities when directly implemented. In this paper, we address both issues.
One of the main contributions of this paper is efficient bilinear 11-point
cyclic convolution algorithms, which allow us to construct CFFTs over
GF$(2^{11})$. The other main contribution of this paper is that we propose
composite cyclotomic Fourier transforms (CCFTs). In comparison to previously
proposed fast Fourier transforms, our CCFTs achieve lower overall complexities
for moderate to long lengths, and the improvement significantly increases as
the length grows. Our 2047-point and 4095-point CCFTs are also the first
efficient DFTs of such lengths, to the best of our knowledge. Finally, our CCFTs are also
advantageous for hardware implementations due to their regular and modular
structure.
|
1007.1234
|
Stochastic stability of continuous time consensus protocols
|
math.OC cs.SY nlin.AO q-bio.NC
|
A unified approach to studying convergence and stochastic stability of
continuous time consensus protocols (CPs) is presented in this work. Our method
applies to networks with directed information flow; both cooperative and
noncooperative interactions; networks under weak stochastic forcing; and those
whose topology and strength of connections may vary in time. The graph
theoretic interpretation of the analytical results is emphasized. We show how
the spectral properties, such as algebraic connectivity and total effective
resistance, as well as the geometric properties, such as the dimension and the
structure of the cycle subspace of the underlying graph, shape the stability of
the corresponding CPs. In addition, we explore certain implications of spectral
graph theory to CP design. In particular, we point out that expanders, i.e.,
sparse but highly connected graphs, generate CPs whose performance remains uniformly high
when the size of the network grows unboundedly. Similarly, we highlight the
benefits of using random versus regular network topologies for CP design. We
illustrate these observations with numerical examples and refer to the relevant
graph-theoretic results.
Keywords: consensus protocol, dynamical network, synchronization, robustness
to noise, algebraic connectivity, effective resistance, expander, random graph
|
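A continuous-time consensus protocol of the kind studied above can be sketched as x' = -Lx for a graph Laplacian L; on a connected undirected graph all states converge to the average of the initial conditions. The small cycle graph and Euler step below are our illustrative choices.

```python
# Euler simulation of the consensus protocol x' = -L x.
def laplacian(n, edges):
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1.0
        L[j][j] += 1.0
        L[i][j] -= 1.0
        L[j][i] -= 1.0
    return L

def simulate(x, L, dt=0.01, steps=2000):
    n = len(x)
    for _ in range(steps):
        dx = [-sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # a 4-cycle
x0 = [4.0, 0.0, 2.0, 2.0]
x = simulate(x0, laplacian(4, edges))
assert all(abs(xi - 2.0) < 1e-3 for xi in x)  # consensus at the mean
```

The convergence rate is governed by the algebraic connectivity (the smallest nonzero Laplacian eigenvalue), which is why expander topologies, whose algebraic connectivity stays bounded away from zero, keep performance high as the network grows.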
1007.1243
|
New Results on the Capacity of the Gaussian Cognitive Interference
Channel
|
cs.IT math.IT
|
The capacity of the two-user Gaussian cognitive interference channel, a
variation of the classical interference channel where one of the transmitters
has knowledge of both messages, is known in several parameter regimes but
remains unknown in general. In this paper, we consider the following achievable
scheme: the cognitive transmitter pre-codes its message against the
interference created at its intended receiver by the primary user, and the
cognitive receiver only decodes its intended message, similar to the optimal
scheme for "weak interference"; the primary decoder decodes both messages,
similar to the optimal scheme for "very strong interference". Although the
cognitive message is pre-coded against the primary message, by decoding it, the
primary receiver obtains information about its own message, thereby improving
its rate. We show: (1) that this proposed scheme achieves capacity in what we
term the "primary decodes cognitive" regime, i.e., a subset of the "strong
interference" regime that is not included in the "very strong interference"
regime for which capacity was known; (2) that this scheme is within one
bit/s/Hz, or a factor two, of capacity for a much larger set of parameters,
thus improving the best known constant-gap result; and (3) we provide insights
into the trade-off between interference pre-coding at the cognitive encoder and
interference decoding at the primary receiver, based on an analysis of the
approximate capacity results.
|
1007.1253
|
Efficient Sketches for the Set Query Problem
|
cs.DS cs.IT math.IT
|
We develop an algorithm for estimating the values of a vector x in R^n over a
support S of size k from a randomized sparse binary linear sketch Ax of size
O(k). Given Ax and S, we can recover x' with ||x' - x_S||_2 <= eps ||x -
x_S||_2 with probability at least 1 - k^{-\Omega(1)}. The recovery takes O(k)
time.
While interesting in its own right, this primitive also has a number of
applications. For example, we can:
1. Improve the linear k-sparse recovery of heavy hitters in Zipfian
distributions with O(k log n) space from a (1+eps) approximation to a (1 +
o(1)) approximation, giving the first such approximation in O(k log n) space
when k <= O(n^{1-eps}).
2. Recover block-sparse vectors with O(k) space and a (1+eps) approximation.
Previous algorithms required either omega(k) space or omega(1) approximation.
|
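A toy sketch in the spirit of the set-query primitive may help: a sparse, hash-based linear sketch of x, from which the entries on a known support S are estimated by a median over a few hash rows. This mimics the flavour of the problem only; it is not the paper's construction and carries none of its guarantees, and the sizes below are arbitrary.

```python
# Hash-based linear sketch with median recovery on a known support.
import random, statistics

random.seed(0)
n, rows, buckets = 1000, 7, 64

# random hash and sign functions defining the sketch matrix A
h = [[random.randrange(buckets) for _ in range(n)] for _ in range(rows)]
sgn = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(rows)]

def sketch(x):
    """Compute Ax: `rows` hash rows of `buckets` signed counters."""
    C = [[0.0] * buckets for _ in range(rows)]
    for r in range(rows):
        for i, xi in enumerate(x):
            if xi:
                C[r][h[r][i]] += sgn[r][i] * xi
    return C

def set_query(C, S):
    """Estimate x_i for each i in the known support S."""
    return {i: statistics.median(sgn[r][i] * C[r][h[r][i]]
                                 for r in range(rows)) for i in S}

x = [0.0] * n
S = {3, 250, 777}
for i in S:
    x[i] = 10.0 * (i % 7 + 1)
est = set_query(sketch(x), S)
# Any row where i does not collide with the rest of S yields x[i]
# exactly, so a majority of clean rows makes the median exact.
```

Knowing S in advance is what distinguishes the set-query problem from full sparse recovery, and it is what allows estimators this simple to work.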
1007.1255
|
Queue-Architecture and Stability Analysis in Cooperative Relay Networks
|
cs.NI cs.IT math.IT
|
An abstraction of the physical layer coding using bit pipes that are coupled
through data-rates is insufficient to capture notions such as node cooperation
in cooperative relay networks. Consequently, network-stability analyses based
on such abstractions are valid for non-cooperative schemes alone and
meaningless for cooperative schemes. Motivated by this, this paper develops a
framework that brings the information-theoretic coding scheme together with
network-stability analysis. This framework does not constrain the system to any
particular achievable scheme, i.e., the relays can use any cooperative coding
strategy of their choice, be it amplify-, compress-, or quantize-and-forward or
any other alter-and-forward scheme. The paper focuses on the scenario in which
the coherence duration is of the same order as the packet/codeword duration, the channel
distribution is unknown and the fading state is only known causally. The main
contributions of this paper are two-fold: first, it develops a low-complexity
queue-architecture to enable stable operation of cooperative relay networks,
and, second, it establishes the throughput optimality of a simple network
algorithm that utilizes this queue-architecture.
|
1007.1268
|
Application of Data Mining to Network Intrusion Detection: Classifier
Selection Model
|
cs.NI cs.AI
|
As network attacks have increased in number and severity over the past few
years, intrusion detection system (IDS) is increasingly becoming a critical
component to secure the network. Due to large volumes of security audit data as
well as complex and dynamic properties of intrusion behaviors, optimizing
performance of IDS becomes an important open problem that is receiving more and
more attention from the research community. The question of whether certain
algorithms perform better for certain attack classes constitutes the motivation
for the work reported herein. In this paper, we evaluate the performance of a
comprehensive set of classifier algorithms using the KDD99 dataset. Based on
the evaluation results, the best algorithm for each attack category is chosen
and two classifier algorithm selection models are proposed. The simulation result
comparison indicates that noticeable performance improvement and real-time
intrusion detection can be achieved as we apply the proposed models to detect
different kinds of network attacks.
|
1007.1270
|
How to Maximize User Satisfaction Degree in Multi-service IP Networks
|
cs.NI cs.AI
|
Bandwidth allocation is a fundamental problem in communication networks. As
current networks move towards the Future Internet model, the problem is further
intensified because network traffic demand far exceeds network bandwidth
capacity. Maintaining a certain user satisfaction degree therefore becomes a
challenging research topic. In this paper, we address the problem by proposing
BASMIN, a novel bandwidth allocation scheme that aims to maximize network
users' happiness. We also define a new metric for evaluating network user
satisfaction degree: network worth. A three-step evaluation process is then
conducted to compare BASMIN's efficiency with three other popular bandwidth
allocation schemes. Throughout the tests, we observed BASMIN's advantages over
the others; we even found that one of the most widely used bandwidth allocation
schemes is, in fact, not effective at all.
|
1007.1272
|
Binary is Good: A Binary Inference Framework for Primary User Separation
in Cognitive Radio Networks
|
cs.NI cs.IT math.IT
|
Primary user (PU) separation concerns the issues of distinguishing and
characterizing primary users in cognitive radio (CR) networks. We argue the
need for PU separation in the context of collaborative spectrum sensing and
monitor selection. In this paper, we model the observations of monitors as
boolean OR mixtures of underlying binary latent sources for PUs, and devise a
novel binary inference algorithm for PU separation. Simulation results show
that without prior knowledge regarding PUs' activities, the algorithm achieves
high inference accuracy. An interesting implication of the proposed algorithm
is the ability to effectively represent n independent binary sources via
(correlated) binary vectors of logarithmic length.
|
1007.1282
|
A note on sample complexity of learning binary output neural networks
under fixed input distributions
|
cs.LG
|
We show that the learning sample complexity of a sigmoidal neural network
constructed by Sontag (1992) required to achieve a given misclassification
error under a fixed purely atomic distribution can grow arbitrarily fast: for
any prescribed rate of growth there is an input distribution having this rate
as the sample complexity, and the bound is asymptotically tight. The rate can
be superexponential, a non-recursive function, etc. We further observe that
Sontag's ANN is not Glivenko-Cantelli under any input distribution having a
non-atomic part.
|
1007.1361
|
Top-K Color Queries for Document Retrieval
|
cs.DS cs.IR
|
In this paper we describe a new efficient (in fact optimal) data structure
for the {\em top-$K$ color problem}. Each element of an array $A$ is assigned a
color $c$ with priority $p(c)$. For a query range $[a,b]$ and a value $K$, we
have to report $K$ colors with the highest priorities among all colors that
occur in $A[a..b]$, sorted in reverse order by their priorities. We show that
such queries can be answered in $O(K)$ time using an $O(N\log \sigma)$ bits
data structure, where $N$ is the number of elements in the array and $\sigma$
is the number of colors. Thus our data structure is asymptotically optimal with
respect to the worst-case query time and space. As an immediate application of
our results, we obtain optimal time solutions for several document retrieval
problems. The method of the paper could be also of independent interest.
|
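A straightforward linear-scan baseline for the top-K colour query is useful for checking any optimal structure like the one described above against small inputs (the colours and priorities below are our toy data):

```python
# Naive O(b - a) baseline for the top-K colour query.
def top_k_colors(A, priority, a, b, K):
    """K distinct colours in A[a..b] with highest priorities,
    sorted in decreasing order of priority."""
    present = set(A[a:b + 1])
    return sorted(present, key=lambda c: -priority[c])[:K]

A = ["red", "blue", "red", "green", "blue", "red"]
priority = {"red": 1, "blue": 5, "green": 3}
assert top_k_colors(A, priority, 1, 4, 2) == ["blue", "green"]
assert top_k_colors(A, priority, 0, 5, 3) == ["blue", "green", "red"]
```

In the document-retrieval application, array positions are suffix-array entries, colours are document identifiers, and priorities are relevance scores, so this query returns the K most relevant documents containing a pattern.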
1007.1368
|
Low Complexity Linear Programming Decoding of Nonbinary Linear Codes
|
cs.IT math.IT
|
Linear Programming (LP) decoding of Low-Density Parity-Check (LDPC) codes has
attracted much attention in the research community in the past few years. The
aim of LP decoding is to develop an algorithm which has error-correcting
performance similar to that of the Sum-Product (SP) decoding algorithm, while
at the same time it should be amenable to mathematical analysis. The LP
decoding algorithm has also been extended to nonbinary linear codes by Flanagan
et al. However, the most important problem with LP decoding for both binary and
nonbinary linear codes is that the complexity of standard LP solvers such as
the simplex algorithm remains prohibitively large for codes of moderate to
large block length. To address this problem, Vontobel et al. proposed a low
complexity LP decoding algorithm for binary linear codes which has complexity
linear in the block length. In this paper, we extend the latter work and
propose a low-complexity LP decoding algorithm for nonbinary linear codes. We
use the LP formulation proposed by Flanagan et al. as a basis and derive a pair
of primal-dual LP formulations. The dual LP is then used to develop the
low-complexity LP decoding algorithm for nonbinary linear codes. In contrast to
the binary low-complexity LP decoding algorithm, our proposed algorithm is not
directly related to the nonbinary SP algorithm. Nevertheless, the complexity of
the proposed algorithm is linear in the block length and is limited mainly by
the maximum check node degree. As a proof of concept, we also present a
simulation result for an $[80,48]$ LDPC code defined over $\mathbb{Z}_4$ using
quaternary phase-shift keying over the AWGN channel, and we show that the
error-correcting performance of the proposed LP decoding algorithm is similar
to that of the standard LP decoding using the simplex solver.
|
1007.1398
|
Multi-environment model estimation for motility analysis of
Caenorhabditis Elegans
|
cs.CV
|
The nematode Caenorhabditis elegans is a well-known model organism used to
investigate fundamental questions in biology. Motility assays of this small
roundworm are designed to study the relationships between genes and behavior.
Commonly, motility analysis is used to classify nematode movements and
characterize them quantitatively. Over the past years, C. elegans' motility has
been studied across a wide range of environments, including crawling on
substrates, swimming in fluids, and locomoting through microfluidic substrates.
However, each environment often requires customized image processing tools
relying on heuristic parameter tuning. In the present study, we propose a novel
Multi-Environment Model Estimation (MEME) framework for automated image
segmentation that is versatile across various environments. The MEME platform
is constructed around the concept of Mixture of Gaussian (MOG) models, where
statistical models for both the background environment and the nematode
appearance are explicitly learned and used to accurately segment a target
nematode. Our method is designed to simplify the burden often imposed on users;
here, only a single image which includes a nematode in its environment must be
provided for model learning. In addition, our platform enables the extraction
of nematode `skeletons' for straightforward motility quantification. We test
our algorithm on various locomotive environments and compare performances with
an intensity-based thresholding method. Overall, MEME outperforms the
threshold-based approach for the overwhelming majority of cases examined.
Ultimately, MEME provides researchers with an attractive platform for C.
elegans' segmentation and `skeletonizing' across a wide range of motility
assays.
|
1007.1407
|
Parallel Bit Interleaved Coded Modulation
|
cs.IT math.IT
|
A new variant of bit interleaved coded modulation (BICM) is proposed. In the
new scheme, called Parallel BICM, L identical binary codes are used in parallel
using a mapper, a newly proposed finite-length interleaver and a binary dither
signal. As opposed to previous approaches, the scheme does not rely on any
assumptions of an ideal, infinite-length interleaver. Over a memoryless
channel, the new scheme is proven to be equivalent to a binary memoryless
channel. Therefore the scheme enables one to easily design coded modulation
schemes using a simple binary code that was designed for that binary channel.
The overall performance of the coded modulation scheme is analytically
evaluated based on the performance of the binary code over the binary channel.
The new scheme is analyzed from an information theoretic viewpoint, where the
capacity, error exponent and channel dispersion are considered. The capacity of
the scheme is identical to the BICM capacity. The error exponent of the scheme
is numerically compared to a recently proposed mismatched-decoding exponent
analysis of BICM.
|
1007.1432
|
Improved RANSAC performance using simple, iterative minimal-set solvers
|
cs.CV
|
RANSAC is a popular technique for estimating model parameters in the presence
of outliers. The best speed is achieved when the minimum possible number of
points is used to estimate hypotheses for the model. Many useful problems can
be represented using polynomial constraints (for instance, the determinant of a
fundamental matrix must be zero) and so have a number of solutions which are
consistent with a minimal set. A considerable amount of effort has been
expended on finding the constraints of such problems, and these often require
the solution of systems of polynomial equations. We show that better
performance can be achieved by using a simple optimization based approach on
minimal sets. For a given minimal set, the optimization approach is not
guaranteed to converge to the correct solution. However, when used within
RANSAC, the greater speed and numerical stability result in better performance
overall, and in much simpler algorithms. We also show that selecting more than
the minimal number of points and using robust optimization can yield better
results for very noisy data by reducing the number of trials required. The
increased speed of our method is demonstrated with experiments on essential
matrix estimation.
|
1007.1483
|
On Inequalities Relating the Characteristic Function and Fisher
Information
|
cs.IT math.IT
|
A relationship between the Fisher information and the characteristic function
is established with the help of two inequalities. A necessary and sufficient
condition for equality is found. These results are used to determine the
asymptotic efficiency of a distributed estimation algorithm that uses constant
modulus transmissions over Gaussian multiple access channels. The loss in
efficiency of the distributed estimation scheme relative to the centralized
approach is quantified for different sensing noise distributions. It is shown
that the distributed estimator does not incur an efficiency loss if and only if
the sensing noise distribution is Gaussian.
|
1007.1697
|
Quantum Cyclic Code
|
cs.IT math.IT
|
In this paper, we define and study \emph{quantum cyclic codes}, a
generalisation of cyclic codes to the quantum setting. Previously studied
examples of quantum cyclic codes were all quantum codes obtained from classical
cyclic codes via the CSS construction. However, the codes that we study are
much more general. In particular, we construct cyclic stabiliser codes with
parameters $[[5,1,3]]$, $[[17,1,7]]$ and $[[17,9,3]]$, all of which are
\emph{not} CSS. The $[[5,1,3]]$ code is the well known Laflamme code and to the
best of our knowledge the other two are new examples. Our definition of
cyclicity applies to non-stabiliser codes as well; in fact we show that the
$((5,6,2))$ nonstabiliser code first constructed by
Rains\etal~\cite{rains97nonadditive} and later by Arvind
\etal~\cite{arvind:2004:nonstabilizer} is cyclic. We also study stabiliser
codes of length $4^m + 1$ over $\mathbb{F}_2$ for which we define a notion of
BCH distance. Much like the Berlekamp decoding algorithm for classical BCH
codes, we give efficient quantum algorithms to correct up to
$\floor{\frac{d-1}{2}}$ errors when the BCH distance is $d$.
|
1007.1708
|
A Study on the Effectiveness of Different Patch Size and Shape for Eyes
and Mouth Detection
|
cs.CV
|
Template matching is one of the simplest methods used for eyes and mouth
detection. However, it can be modified and extended to become a powerful tool.
Since the patch itself plays a significant role in optimizing detection
performance, a study on the influence of patch size and shape is carried out.
The optimum patch size and shape are determined using the proposed method.
Usually, template matching is also combined with other methods in order to
improve detection accuracy. Thus, in this paper, the effectiveness of two image
processing methods, i.e. grayscale and Haar wavelet transform, when used with
template matching, is analyzed.
|
1007.1735
|
Diversity Embedded Streaming Erasure Codes (DE-SCo): Constructions and
Optimality
|
cs.IT cs.NI math.IT
|
Streaming erasure codes encode a source stream to guarantee that each source
packet is recovered within a fixed delay at the receiver over a burst-erasure
channel. This paper introduces diversity embedded streaming erasure codes
(DE-SCo), which provide a flexible tradeoff between the channel quality and
receiver delay. When the channel conditions are good, the source stream is
recovered with a low delay, whereas when the channel conditions are poor the
source stream is still recovered, albeit with a larger delay. Information
theoretic analysis of the underlying burst-erasure broadcast channel reveals
that DE-SCo achieve the minimum possible delay for the weaker user, without
sacrificing the performance of the stronger user. A larger class of multicast
streaming erasure codes (MU-SCo) that achieve optimal tradeoff between rate,
delay and erasure-burst length is also constructed.
|
1007.1756
|
Shannon Meets Nash on the Interference Channel
|
cs.IT cs.GT math.IT
|
The interference channel is the simplest communication scenario where
multiple autonomous users compete for shared resources. We combine game theory
and information theory to define a notion of a Nash equilibrium region of the
interference channel. The notion is game theoretic: it captures the selfish
behavior of each user as they compete. The notion is also information
theoretic: it allows each user to use arbitrary communication strategies as it
optimizes its own performance. We give an exact characterization of the Nash
equilibrium region of the two-user linear deterministic interference channel
and an approximate characterization of the Nash equilibrium region of the
two-user Gaussian interference channel to within 1 bit/s/Hz.
|
1007.1766
|
An SVM multiclassifier approach to land cover mapping
|
cs.AI
|
From the advent of the application of satellite imagery to land cover
mapping, one of the growing areas of research interest has been in the area of
image classification. Image classifiers are algorithms used to extract land
cover information from satellite imagery. Most of the initial research has
focused on developing and applying algorithms to improve existing and emerging
classifiers. In this paper, a paradigm shift is proposed whereby a
committee of classifiers is used to determine the final classification output.
Two of the key components of an ensemble system are that there should be
diversity among the classifiers and that there should be a mechanism through
which the results are combined. In this paper, the members of the ensemble
system include: Linear SVM, Gaussian SVM and Quadratic SVM. The final output
was determined through a simple majority vote of the individual classifiers.
From the results obtained it was observed that the final derived map generated
by an ensemble system can potentially improve on the results derived from the
individual classifiers making up the ensemble system. The ensemble system
classification accuracy was, in this case, better than the linear and quadratic
SVM result. It was however less than that of the RBF SVM. Areas for further
research could focus on improving the diversity of the ensemble system used in
this research.
|
1007.1768
|
StochKit-FF: Efficient Systems Biology on Multicore Architectures
|
cs.CE q-bio.QM
|
The stochastic modelling of biological systems is an informative and, in some
cases, very appropriate technique, which may however be more expensive than
other modelling approaches, such as differential equations. We present
StochKit-FF, a parallel version of StochKit, a reference toolkit for
stochastic simulations. StochKit-FF is based on the FastFlow programming
toolkit for multicores and exploits the novel concept of selective memory. We
evaluate StochKit-FF on a model of HIV infection dynamics, with the aim of
extracting information from efficiently run experiments, here in terms of
averages and variances and, in the longer term, of more structured data.
|
1007.1778
|
Quantum Error Correction beyond the Bounded Distance Decoding Limit
|
cs.IT math.IT quant-ph
|
In this paper, we consider quantum error correction over depolarizing
channels with non-binary low-density parity-check codes defined over Galois
field of size $2^p$. The proposed quantum error correcting codes are based on
the binary quasi-cyclic CSS (Calderbank, Shor and Steane) codes. The resulting
quantum codes outperform the best known quantum codes and surpass the
performance limit of the bounded distance decoder. By increasing the size of
the underlying Galois field, i.e., $2^p$, the error floors are considerably
improved.
|
1007.1799
|
Discrete denoising of heterogeneous two-dimensional data
|
cs.IT math.IT
|
We consider discrete denoising of two-dimensional data with characteristics
that may be varying abruptly between regions.
Using a quadtree decomposition technique and space-filling curves, we extend
the recently developed S-DUDE (Shifting Discrete Universal DEnoiser), which was
tailored to one-dimensional data, to the two-dimensional case. Our scheme
competes with a genie that has access, in addition to the noisy data, also to
the underlying noiseless data, and can employ $m$ different two-dimensional
sliding window denoisers along $m$ distinct regions obtained by a quadtree
decomposition with $m$ leaves, in a way that minimizes the overall loss. We
show that, regardless of what the underlying noiseless data may be, the
two-dimensional S-DUDE performs essentially as well as this genie, provided
that the number of distinct regions satisfies $m=o(n)$, where $n$ is the total
size of the data. The resulting algorithm complexity is still linear in both
$n$ and $m$, as in the one-dimensional case. Our experimental results show that
the two-dimensional S-DUDE can be effective when the characteristics of the
underlying clean image vary across different regions in the data.
|
1007.1800
|
Multimode Control Attacks on Elections
|
cs.GT cs.CC cs.DS cs.MA
|
In 1992, Bartholdi, Tovey, and Trick opened the study of control attacks on
elections---attempts to improve the election outcome by such actions as
adding/deleting candidates or voters. That work has led to many results on how
algorithms can be used to find attacks on elections and how
complexity-theoretic hardness results can be used as shields against attacks.
However, all the work in this line has assumed that the attacker employs just a
single type of attack. In this paper, we model and study the case in which the
attacker launches a multipronged (i.e., multimode) attack. We do so to more
realistically capture the richness of real-life settings. For example, an
attacker might simultaneously try to suppress some voters, attract new voters
into the election, and introduce a spoiler candidate. Our model provides a
unified framework for such varied attacks, and by constructing polynomial-time
multiprong attack algorithms we prove that for various election systems even
such concerted, flexible attacks can be perfectly planned in deterministic
polynomial time.
|
1007.1811
|
On the Capacity of a Class of Cognitive Z-interference Channels
|
cs.IT math.IT
|
We study a special class of the cognitive radio channel in which the receiver
of the cognitive pair does not suffer interference from the primary user.
Previously developed general encoding schemes for this channel are complex as
they attempt to cope with arbitrary channel conditions, which leads to rate
regions that are difficult to evaluate. The focus of our work is to derive
simple rate regions that are easily computable, thereby providing more insights
into achievable rates and good coding strategies under different channel
conditions. We first present several explicit achievable regions for the
general discrete memoryless case. We also present an improved outer bound on
the capacity region for the case of high interference. We then extend these
regions to Gaussian channels. With a simple outer bound we establish a new
capacity region in the high-interference regime. Lastly, we provide numerical
comparisons between the derived achievable rate regions and the outer bounds.
|
1007.1819
|
Rewritable Codes for Flash Memories Based Upon Lattices, and an Example
Using the E8 Lattice
|
cs.IT math.IT
|
A rewriting code construction for flash memories based upon lattices is
described. The values stored in flash cells correspond to lattice points. This
construction encodes information to lattice points in such a way that data can
be written to the memory multiple times without decreasing the cell values. The
construction partitions the flash memory's cubic signal space into blocks. The
minimum number of writes is shown to be linear in one of the code parameters.
An example using the E8 lattice is given, with numerical results.
|
1007.1852
|
A Generalized Sampling Theorem for Stable Reconstructions in Arbitrary
Bases
|
math.NA cs.IT math.IT
|
We introduce a generalized framework for sampling and reconstruction in
separable Hilbert spaces. Specifically, we establish that it is always possible
to stably reconstruct a vector in an arbitrary Riesz basis from sufficiently
many of its samples in any other Riesz basis. This framework can be viewed as
an extension of that of Eldar et al. However, whilst the latter imposes
stringent assumptions on the reconstruction basis, and may in practice be
unstable, our framework allows for recovery in any (Riesz) basis in a manner
that is completely stable.
Whilst the classical Shannon Sampling Theorem is a special case of our
theorem, this framework allows us to exploit additional information about the
approximated vector (or, in this case, function), for example sparsity or
regularity, to design a reconstruction basis that is better suited. Examples
are presented illustrating this procedure.
|
1007.1938
|
Affine equivalence of cubic homogeneous rotation symmetric Boolean
functions
|
cs.IT math.IT math.NT
|
Homogeneous rotation symmetric Boolean functions have been extensively
studied in recent years because of their applications in cryptography. Little
is known about the basic question of when two such functions are affine
equivalent. The simplest case of quadratic rotation symmetric functions which
are generated by cyclic permutations of the variables in a single monomial was
only settled in 2009. This paper studies the much more complicated cubic case
for such functions. A new concept of \emph{patterns} is introduced, by means of
which the structure of the smallest group G_n, whose action on the set of all
such cubic functions in $n$ variables gives the affine equivalence classes for
these functions under permutation of the variables, is determined. We
conjecture that the equivalence classes are the same if all nonsingular affine
transformations, not just permutations, are allowed. This conjecture is
verified if n < 22. Our method gives much more information about the
equivalence classes; for example, in this paper we give a complete description
of the equivalence classes when n is a prime or a power of 3.
|
1007.1944
|
LHC Databases on the Grid: Achievements and Open Issues
|
cs.DB cs.DC hep-ex physics.data-an
|
To extract physics results from the recorded data, the LHC experiments are
using Grid computing infrastructure. The event data processing on the Grid
requires scalable access to non-event data (detector conditions, calibrations,
etc.) stored in relational databases. The database-resident data are critical
for the event data reconstruction processing steps and often required for
physics analysis. This paper reviews the LHC experience with database
technologies for Grid computing. The list of topics includes: database
integration with Grid
computing models of the LHC experiments; choice of database technologies;
examples of database interfaces; distributed database applications (data
complexity, update frequency, data volumes and access patterns); scalability of
database access in the Grid computing environment of the LHC experiments. The
review describes areas in which substantial progress was made and remaining
open issues.
|
1007.1986
|
Achievable Error Exponents in the Gaussian Channel with Rate-Limited
Feedback
|
cs.IT math.IT
|
We investigate the achievable error probability in communication over an AWGN
discrete time memoryless channel with noiseless delay-less rate-limited
feedback. For the case where the feedback rate R_FB is lower than the data rate
R transmitted over the forward channel, we show that the decay of the
probability of error is at most exponential in blocklength, and obtain an upper
bound on the increase in the error exponent due to feedback. Furthermore, we
show that the use of feedback in this case results in an error exponent that is
at least R_FB higher than the error exponent in the absence of feedback. For the
case where the feedback rate exceeds the forward rate (R_FB \geq R), we propose
a simple iterative scheme that achieves a probability of error that decays
doubly exponentially with the codeword blocklength n. More generally, for some
positive integer L, we show that an L-th order exponential error decay is
achievable if R_FB \geq (L-1)R. We prove that the above results hold whether
the feedback constraint is expressed in terms of the average feedback rate or
per channel use feedback rate. Our results show that the error exponent as a
function of R_FB has a strong discontinuity at R, where it jumps from a finite
value to infinity.
|
1007.2049
|
Reinforcement Learning via AIXI Approximation
|
cs.LG
|
This paper introduces a principled approach for the design of a scalable
general reinforcement learning agent. This approach is based on a direct
approximation of AIXI, a Bayesian optimality notion for general reinforcement
learning agents. Previously, it has been unclear whether the theory of AIXI
could motivate the design of practical algorithms. We answer this hitherto open
question in the affirmative, by providing the first computationally feasible
approximation to the AIXI agent. To develop our approximation, we introduce a
Monte Carlo Tree Search algorithm along with an agent-specific extension of the
Context Tree Weighting algorithm. Empirically, we present a set of encouraging
results on a number of stochastic, unknown, and partially observable domains.
|
1007.2071
|
Independent Component Analysis Over Galois Fields
|
cs.IT math.IT physics.data-an
|
We consider the framework of Independent Component Analysis (ICA) for the
case where the independent sources and their linear mixtures all reside in a
Galois field of prime order P. Similarities and differences from the classical
ICA framework (over the Real field) are explored. We show that a necessary and
sufficient identifiability condition is that none of the sources should have a
Uniform distribution. We also show that pairwise independence of the mixtures
implies their full mutual independence (namely a non-mixing condition) in the
binary (P=2) and ternary (P=3) cases, but not necessarily in higher order (P>3)
cases. We propose two different iterative separation (or identification)
algorithms: One is based on sequential identification of the smallest-entropy
linear combinations of the mixtures, and is shown to be equivariant with
respect to the mixing matrix; The other is based on sequential minimization of
the pairwise mutual information measures. We provide some basic performance
analysis for the binary (P=2) case, supplemented by simulation results for
higher orders, demonstrating advantages and disadvantages of the proposed
separation approaches.
|
1007.2075
|
Consistency of Feature Markov Processes
|
cs.LG cs.IT math.IT
|
We are studying long term sequence prediction (forecasting). We approach this
by investigating criteria for choosing a compact useful state representation.
The state is supposed to summarize useful information from the history. We want
a method that is asymptotically consistent in the sense that it will provably
eventually only choose between alternatives that satisfy an optimality property
related to the used criterion. We extend our work to the case where there is
side information that one can take advantage of and, furthermore, we briefly
discuss the active setting where an agent takes actions to achieve desirable
outcomes.
|
1007.2088
|
A Multi-hop Multi-source Algebraic Watchdog
|
cs.CR cs.IT math.IT
|
In our previous work "An Algebraic Watchdog for Wireless Network Coding", we
proposed a new scheme in which nodes can detect malicious behaviors
probabilistically, police their downstream neighbors locally using overheard
messages, and thus provide a secure global "self-checking network". As the first
building block of such a system, we focused on a two-hop network, and presented
a graphical model to understand the inference process by which nodes police
their downstream neighbors and to compute the probabilities of misdetection and
false detection.
In this paper, we extend the Algebraic Watchdog to a more general network
setting, and propose a protocol in which we can establish "trust" in coded
systems in a distributed manner. We develop a graphical model to detect the
presence of an adversarial node downstream within a general two-hop network.
The structure of the graphical model (a trellis) lends itself to well-known
algorithms, such as the Viterbi algorithm, that can compute the probabilities of
misdetection and false detection. Using this as a building block, we generalize
our scheme to multi-hop networks. We show analytically that as long as the
min-cut is not dominated by the Byzantine adversaries, upstream nodes can
monitor downstream neighbors and allow reliable communication with certain
probability. Finally, we present preliminary simulation results that support
our analysis.
|
1007.2119
|
Free Probability based Capacity Calculation of Multiantenna Gaussian
Fading Channels with Cochannel Interference
|
cs.IT math.IT
|
During the last decade, it has been well understood that communication over
multiple antennas can linearly increase the multiplexing capacity gain and
provide large spectral efficiency improvements. However, the majority of
studies in this area were carried out ignoring cochannel interference. Only a
small number of investigations have considered cochannel interference, but even
therein simple channel models were employed, assuming identically distributed
fading coefficients. In this paper, a generic model for a multi-antenna channel
is presented incorporating four impairments, namely additive white Gaussian
noise, flat fading, path loss and cochannel interference. Both point-to-point
and multiple-access MIMO channels are considered, including the case of
cooperating Base Station clusters. The asymptotic capacity limit of this
channel is calculated based on an asymptotic free probability approach which
exploits the additive and multiplicative free convolution in the R- and
S-transform domain respectively, as well as properties of the eta and Stieltjes
transform. Numerical results are utilized to verify the accuracy of the derived
closed-form expressions and evaluate the effect of the cochannel interference.
|
1007.2212
|
Optimal Path Planning under Temporal Logic Constraints
|
cs.RO
|
In this paper we present a method for automatically generating optimal robot
trajectories satisfying high level mission specifications. The motion of the
robot in the environment is modeled as a general transition system, enhanced
with weighted transitions. The mission is specified by a general linear
temporal logic formula. In addition, we require that an optimizing proposition
must be repeatedly satisfied. The cost function that we seek to minimize is the
maximum time between satisfying instances of the optimizing proposition. For
every environment model, and for every formula, our method computes a robot
trajectory which minimizes the cost function. The problem is motivated by
applications in robotic monitoring and data gathering. In this setting, the
optimizing proposition is satisfied at all locations where data can be
uploaded, and the entire formula specifies a complex (and infinite horizon)
data collection mission. Our method utilizes B\"uchi automata to produce an
automaton (which can be thought of as a graph) whose runs satisfy the temporal
logic specification. We then present a graph algorithm which computes a path
corresponding to the optimal robot trajectory. We also present an
implementation for a robot performing a data gathering mission in a road
network.
|
1007.2238
|
Online Algorithms for the Multi-Armed Bandit Problem with Markovian
Rewards
|
math.OC cs.LG
|
We consider the classical multi-armed bandit problem with Markovian rewards.
When played, an arm changes its state in a Markovian fashion, while it remains
frozen when not played. The player receives a state-dependent reward each time
it plays an arm. The number of states and the state transition probabilities of
an arm are unknown to the player. The player's objective is to maximize its
long-term total reward by learning the best arm over time. We show that under
certain conditions on the state transition probabilities of the arms, a sample
mean based index policy achieves logarithmic regret uniformly over the total
number of trials. The result shows that sample mean based index policies can be
applied to learning problems under the rested Markovian bandit model without
loss of optimality in the order. Moreover, a comparison between Anantharam's
index policy and UCB shows that, by choosing a small exploration parameter, UCB
can have a smaller regret than Anantharam's index policy.
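As a rough illustration of the sample-mean index idea, the following sketch runs a UCB1-style policy on a toy bandit. It simplifies to i.i.d. Bernoulli arms rather than the rested Markovian arms studied above, and the arm means, horizon and exploration constant are illustrative assumptions:

```python
import math
import random

def ucb_index_policy(arm_means, horizon, c=2.0, seed=0):
    """Play a UCB1-style sample-mean index policy on i.i.d. Bernoulli arms.

    Returns the per-arm pull counts. `c` is an assumed exploration
    constant, not a value taken from the paper."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k          # number of times each arm was played
    sums = [0.0] * k          # total reward collected from each arm
    for t in range(1, horizon + 1):
        if t <= k:            # play each arm once to initialise the indices
            arm = t - 1
        else:
            # index = sample mean + confidence padding
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(c * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb_index_policy([0.2, 0.5, 0.8], horizon=5000)
```

With a 0.3 gap between the best and second-best arm, the best arm accumulates the large majority of the 5000 pulls, so suboptimal pulls (and hence regret) grow only logarithmically with the horizon.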
|
1007.2315
|
On extracting common random bits from correlated sources
|
cs.IT math.IT
|
Suppose Alice and Bob receive strings of unbiased independent but noisy bits
from some random source. They wish to use their respective strings to extract a
common sequence of random bits with high probability but without communicating.
How many such bits can they extract? The trivial strategy of outputting the
first $k$ bits yields an agreement probability of $(1 - \eps)^k <
2^{-1.44k\eps}$, where $\eps$ is the amount of noise. We show that no strategy
can achieve agreement probability better than $2^{-k\eps/(1 - \eps)}$.
On the other hand, we show that when $k \geq 10 + 2 (1 - \eps) / \eps$, there
exists a strategy which achieves an agreement probability of $0.1
(k\eps)^{-1/2} \cdot 2^{-k\eps/(1 - \eps)}$.
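The quantities above are easy to check numerically; in the following sketch the values of $k$ and $\eps$ are illustrative assumptions:

```python
# Agreement probability of the trivial first-k-bits strategy, compared
# against the two bounds quoted above (k and eps are assumed values).
k, eps = 10, 0.1

# Each of the k bit pairs disagrees independently with probability eps.
p_agree = (1.0 - eps) ** k

weak = 2.0 ** (-1.44 * k * eps)           # estimate: (1-eps)^k < 2^{-1.44 k eps}
upper = 2.0 ** (-k * eps / (1.0 - eps))   # no strategy beats 2^{-k eps/(1-eps)}
```

For these values, `p_agree` lies below the weak estimate, which in turn lies below the impossibility bound, consistent with the chain of inequalities stated in the abstract.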
|
1007.2354
|
Nonuniform Sparse Recovery with Subgaussian Matrices
|
cs.IT math.IT math.PR
|
Compressive sensing predicts that sufficiently sparse vectors can be
recovered from highly incomplete information. Efficient recovery methods such
as $\ell_1$-minimization find the sparsest solution to certain systems of
equations. Random matrices have become a popular choice for the measurement
matrix. Indeed, near-optimal uniform recovery results have been shown for such
matrices. In this note we focus on nonuniform recovery using Gaussian random
matrices and $\ell_1$-minimization. We provide a condition on the number of
samples in terms of the sparsity and the signal length which guarantees that a
fixed sparse signal can be recovered with a random draw of the matrix using
$\ell_1$-minimization. The constant 2 in the condition is optimal, and the
proof is rather short compared to a similar result due to Donoho and Tanner.
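As a rough illustration of nonuniform recovery, the following sketch solves the $\ell_1$-minimization problem as a linear program for one fixed sparse signal and one Gaussian draw of the measurement matrix; the dimensions and the use of SciPy's LP solver are illustrative assumptions, not the setup of the note:

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Solve min ||x||_1 subject to A x = y via the standard LP
    reformulation with variables z = [x; u] and |x_i| <= u_i."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])      # minimise sum(u)
    A_eq = np.hstack([A, np.zeros((m, n))])            # A x = y
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]),              #  x - u <= 0
                      np.hstack([-I, -I])])            # -x - u <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
n, m, s = 30, 20, 2                 # signal length, samples, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n))     # Gaussian measurement matrix
x_hat = l1_recover(A, A @ x_true)
```

Since the true signal is always feasible for the LP, the recovered $\ell_1$ norm never exceeds that of `x_true`; with `m` comfortably above the nonuniform threshold for this sparsity, the minimizer coincides with the fixed sparse signal.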
|
1007.2364
|
A Note on Semantic Web Services Specification and Composition in
Constructive Description Logics
|
cs.AI cs.LO
|
The idea of the Semantic Web is to annotate Web content and services with
computer interpretable descriptions with the aim to automatize many tasks
currently performed by human users. In the context of Web services, one of the
most interesting tasks is their composition. In this paper we formalize this
problem in the framework of a constructive description logic. In particular we
propose a declarative service specification language and a calculus for service
composition. We show by means of an example how this calculus can be used to
define composed Web services and we discuss the problem of automatic service
synthesis.
|
1007.2377
|
Performance bounds for expander-based compressed sensing in Poisson
noise
|
cs.IT math.IT
|
This paper provides performance bounds for compressed sensing in the presence
of Poisson noise using expander graphs. The Poisson noise model is appropriate
for a variety of applications, including low-light imaging and digital
streaming, where the signal-independent and/or bounded noise models used in the
compressed sensing literature are no longer applicable. In this paper, we
develop a novel sensing paradigm based on expander graphs and propose a MAP
algorithm for recovering sparse or compressible signals from Poisson
observations. The geometry of the expander graphs and the positivity of the
corresponding sensing matrices play a crucial role in establishing the bounds
on the signal reconstruction error of the proposed algorithm. We support our
results with experimental demonstrations of reconstructing average packet
arrival rates and instantaneous packet counts at a router in a communication
network, where the arrivals of packets in each flow follow a Poisson process.
|
1007.2401
|
Double Circulant Minimum Storage Regenerating Codes
|
cs.DC cs.IT cs.NI math.IT
|
A newer version will appear soon
|
1007.2442
|
Neural Network Based Reconstruction of a 3D Object from a 2D Wireframe
|
cs.CV
|
We propose a new approach for constructing a 3D representation from a 2D
wireframe drawing. A drawing is simply a parallel projection of a 3D object
onto a 2D surface; humans are able to recreate mental 3D models from 2D
representations very easily, yet the process is very difficult to emulate
computationally. We hypothesize that our ability to perform this construction
relies on the angles in the 2D scene, among other geometric properties. Being
able to reproduce this reconstruction process automatically would allow for
efficient and robust 3D sketch interfaces. Our research focuses on the
relationship between 2D geometry observable in the sketch and 3D geometry
derived from a potential 3D construction. We present a fully automated system
that constructs 3D representations from 2D wireframes using a neural network in
conjunction with a genetic search algorithm.
|
1007.2449
|
A Brief Introduction to Temporality and Causality
|
cs.LG cs.AI
|
Causality is a non-obvious concept that is often considered to be related to
temporality. In this paper we present a number of past and present approaches
to the definition of temporality and causality from philosophical, physical,
and computational points of view. We note that time is an important ingredient
in many relationships and phenomena. The topic is then divided into the two
main areas of temporal discovery, which is concerned with finding relations
that are stretched over time, and causal discovery, where a claim is made as to
the causal influence of certain events on others. We present a number of
computational tools used for attempting to automatically discover temporal and
causal relations in data.
|
1007.2534
|
A general method for deciding about logically constrained issues
|
cs.AI
|
A general method is given for revising degrees of belief and arriving at
consistent decisions about a system of logically constrained issues. In
contrast to other works about belief revision, here the constraints are assumed
to be fixed. The method has two variants, dual of each other, whose revised
degrees of belief are respectively above and below the original ones. The upper
[resp. lower] revised degrees of belief are uniquely characterized as the
lowest [resp. highest] ones that are invariant by a certain max-min [resp.
min-max] operation determined by the logical constraints. In both variants,
making balance between the revised degree of belief of a proposition and that
of its negation leads to decisions that are ensured to be consistent with the
logical constraints. These decisions are ensured to agree with the majority
criterion as applied to the original degrees of belief whenever this gives a
consistent result. They are also ensured to satisfy a property of respect
for unanimity about any particular issue, as well as a property of monotonicity
with respect to the original degrees of belief. The application of the method
to certain special domains comes down to well established or increasingly
accepted methods, such as the single-link method of cluster analysis and the
method of paths in preferential voting.
|
1007.2738
|
Consensus Computation in Unreliable Networks: A System Theoretic
Approach
|
math.OC cs.SY
|
This work addresses the problem of ensuring trustworthy computation in a
linear consensus network. A solution to this problem is relevant for several
tasks in multi-agent systems including motion coordination, clock
synchronization, and cooperative estimation. In a linear consensus network, we
allow for the presence of misbehaving agents, whose behavior deviates from the
nominal consensus evolution. We model misbehaviors as unknown and unmeasurable
inputs affecting the network, and we cast the misbehavior detection and
identification problem into an unknown-input system theoretic framework. We
consider two extreme cases of misbehaving agents, namely faulty (non-colluding)
and malicious (Byzantine) agents. First, we characterize the set of inputs that
allow misbehaving agents to affect the consensus network while remaining
undetected and/or unidentified from certain observing agents. Second, we
provide worst-case bounds for the number of concurrent faulty or malicious
agents that can be detected and identified. Precisely, the consensus network
needs to be 2k+1 (resp. k+1) connected for k malicious (resp. faulty) agents to
be generically detectable and identifiable by every well behaving agent. Third,
we quantify the effect of undetectable inputs on the final consensus value.
Fourth, we design three algorithms to detect and identify misbehaving agents.
The first and second algorithms apply fault detection techniques and afford
complete detection and identification if global knowledge of the network is
available to each agent, at a high computational cost. The third
algorithm is designed to exploit the presence in the network of weakly
interconnected subparts, and provides local detection and identification of
misbehaving agents whose behavior deviates more than a threshold, which is
quantified in terms of the interconnection structure.
|
1007.2814
|
A Unifying Framework for Local Throughput in Wireless Networks
|
cs.NI cs.IT math.IT
|
With the increased competition for the electromagnetic spectrum, it is
important to characterize the impact of interference in the performance of a
wireless network, which is traditionally measured by its throughput. This paper
presents a unifying framework for characterizing the local throughput in
wireless networks. We first analyze the throughput of a probe link from a
connectivity perspective, in which a packet is successfully received if it does
not collide with other packets from nodes within its reach (called the audible
interferers). We then characterize the throughput from a
signal-to-interference-plus-noise ratio (SINR) perspective, in which a packet
is successfully received if the SINR exceeds some threshold, considering the
interference from all emitting nodes in the network. Our main contribution is
to generalize and unify various results scattered throughout the literature. In
particular, the proposed framework encompasses arbitrary wireless propagation
effects (e.g., Nakagami-m fading, Rician fading, or log-normal shadowing), as
well as arbitrary traffic patterns (e.g., slotted-synchronous,
slotted-asynchronous, or exponential-interarrivals traffic), allowing us to
draw more general conclusions about network performance than previously
available in the literature.
|
1007.2827
|
Data processing theorems and the second law of thermodynamics
|
cs.IT cond-mat.stat-mech math.IT
|
We draw relationships between the generalized data processing theorems of
Zakai and Ziv (1973 and 1975) and the dynamical version of the second law of
thermodynamics, a.k.a. the Boltzmann H-Theorem, which asserts that the Shannon
entropy, $H(X_t)$, pertaining to a finite--state Markov process $\{X_t\}$, is
monotonically non-decreasing as a function of time $t$, provided that the
steady-state distribution of this process is uniform across the state space
(which is the case when the process designates an isolated system). It turns
out that both the generalized data processing theorems and the Boltzmann
H-Theorem can be viewed as special cases of a more general principle concerning
the monotonicity (in time) of a certain generalized information measure applied
to a Markov process. This gives rise to a new look at the generalized data
processing theorem, which suggests to exploit certain degrees of freedom that
may lead to better bounds, for a given choice of the convex function that
defines the generalized mutual information.
|
1007.2855
|
Quantum Channel Capacities
|
cs.IT math.IT quant-ph
|
A quantum communication channel can be put to many uses: it can transmit
classical information, private classical information, or quantum information.
It can be used alone, with shared entanglement, or together with other
channels. For each of these settings there is a capacity that quantifies a
channel's potential for communication. In this short review, I summarize what
is known about the various capacities of a quantum channel, including a
discussion of the relevant additivity questions. I also give some indication of
potentially interesting directions for future research.
|
1007.2876
|
The Spread of Evidence-Poor Medicine via Flawed Social-Network Analysis
|
stat.ME cs.SI physics.soc-ph
|
The chronic widespread misuse of statistics is usually inadvertent, not
intentional. We find cautionary examples in a series of recent papers by
Christakis and Fowler that advance statistical arguments for the transmission
via social networks of various personal characteristics, including obesity,
smoking cessation, happiness, and loneliness. Those papers also assert that
such influence extends to three degrees of separation in social networks. We
shall show that these conclusions do not follow from Christakis and Fowler's
statistical analyses. In fact, their studies even provide some evidence against
the existence of such transmission. The errors that we expose arose, in part,
because the assumptions behind the statistical procedures used were
insufficiently examined, not only by the authors, but also by the reviewers.
Our examples are instructive because the practitioners are highly reputed,
their results have received enormous popular attention, and the journals that
published their studies are among the most respected in the world. An
educational bonus emerges from the difficulty we report in getting our critique
published. We discuss the relevance of this episode to understanding
statistical literacy and the role of scientific review, as well as to reforming
statistics education.
|
1007.2928
|
Encoding Complexity of Network Coding with Two Simple Multicast Sessions
|
cs.IT math.IT
|
The encoding complexity of network coding for single multicast networks has
been intensively studied from several aspects: e.g., the time complexity, the
required number of encoding links, and the required field size for a linear
code solution. However, these issues as well as the solvability are less
understood for networks with multiple multicast sessions. Recently, Wang and
Shroff showed that the solvability of networks with two unit-rate multicast
sessions (2-URMS) can be decided in polynomial time. In this paper, we prove
that for the 2-URMS networks: $1)$ the solvability can be determined with time
$O(|E|)$; $2)$ a solution can be constructed with time $O(|E|)$; $3)$ an
optimal solution can be obtained in polynomial time; $4)$ the number of
encoding links required to achieve a solution is upper-bounded by
$\max\{3,2N-2\}$; and $5)$ the field size required to achieve a linear solution
is upper-bounded by $\max\{2,\lfloor\sqrt{2N-7/4}+1/2\rfloor\}$, where $|E|$ is
the number of links and $N$ is the number of sinks of the underlying network.
Both bounds are shown to be tight.
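As a quick numerical illustration, the two closed-form bounds quoted above can be tabulated for a few sink counts $N$; this sketch simply evaluates the abstract's formulas (the function names are ours).

```python
# Evaluate the stated upper bounds for 2-URMS networks with N sinks:
#   encoding links <= max{3, 2N - 2}
#   field size     <= max{2, floor(sqrt(2N - 7/4) + 1/2)}
import math

def max_encoding_links(N):
    """Upper bound on the number of encoding links."""
    return max(3, 2 * N - 2)

def max_field_size(N):
    """Upper bound on the field size for a linear solution."""
    return max(2, math.floor(math.sqrt(2 * N - 7.0 / 4.0) + 0.5))

for N in (2, 3, 5, 10):
    print(N, max_encoding_links(N), max_field_size(N))
```

For instance, ten sinks need at most 18 encoding links and a field of size at most 4 under these bounds.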
|
1007.2945
|
When is a Function Securely Computable?
|
cs.IT math.IT
|
A subset of a set of terminals that observe correlated signals seek to
compute a given function of the signals using public communication. It is
required that the value of the function be kept secret from an eavesdropper
with access to the communication. We show that the function is securely
computable if and only if its entropy is less than the "aided secret key"
capacity of an associated secrecy generation model, for which a single-letter
characterization is provided.
|
1007.2958
|
A Machine Learning Approach to Recovery of Scene Geometry from Images
|
cs.CV cs.LG
|
Recovering the 3D structure of the scene from images yields useful
information for tasks such as shape and scene recognition, object detection, or
motion planning and object grasping in robotics. In this thesis, we introduce a
general machine learning approach called unsupervised CRF learning based on
maximizing the conditional likelihood. We apply our approach to computer vision
systems that recover the 3-D scene geometry from images. We focus on recovering
3D geometry from single images, stereo pairs and video sequences. Building
these systems requires algorithms for doing inference as well as learning the
parameters of conditional Markov random fields (MRF). Our system is trained
in an unsupervised manner, without using ground-truth labeled data. We employ a
slanted-plane stereo vision model in which we use a fixed over-segmentation to
segment the left image into coherent regions called superpixels, then assign a
disparity plane for each superpixel. Plane parameters are estimated by solving
an MRF labelling problem, through minimizing an energy function. We demonstrate
the use of our unsupervised CRF learning algorithm for a parameterized
slanted-plane stereo vision model involving shape from texture cues. Our stereo
model with texture cues, only by unsupervised training, outperforms the results
in related work on the same stereo dataset. In this thesis, we also formulate
structure and motion estimation as an energy minimization problem, in which the
model is an extension of our slanted-plane stereo vision model that also
handles surface velocity. Velocity estimation is achieved by solving an MRF
labeling problem using Loopy BP. Performance analysis is done using our novel
evaluation metrics based on the notion of view prediction error. Experiments on
road-driving stereo sequences show encouraging results.
|
1007.2980
|
Publishing and Discovery of Mobile Web Services in Peer to Peer Networks
|
cs.IR cs.NI
|
It is now feasible to host Web Services on a mobile device due to the
advances in cellular devices and mobile communication technologies. However,
the reliability, usability and responsiveness of the Mobile Hosts depend on
various factors including the characteristics of available network,
computational resources, and better means of searching the services provided by
them. P2P enhances the adoption of Mobile Host in commercial environments.
Mobile Hosts in P2P can collaboratively share the resources of individual
peers. P2P also enhances the discovery of the huge number of Web Services made
possible by Mobile Hosts. Advanced features like post filtering with weight
of keywords and context-awareness can also be exploited to select the best
possible mobile Web Service. This paper proposes the concept of Mobile Hosts in
P2P networks and identifies the means of publishing and discovery of Web
Services in mobile P2P networks.
|
1007.3075
|
Resolving the Connectivity-Throughput Trade-Off in Random Networks
|
cs.IT math.IT
|
The discrepancy between the upper bound on throughput in wireless networks
and the throughput scaling in random networks, also known as the
connectivity-throughput trade-off, is analyzed. In a random network with
$\lambda$ nodes per unit area, throughput is found to scale by a factor of
$\sqrt{\log{\lambda}}$ worse than the upper bound, a loss due to the
uncertainty in the nodes' locations. In the present model, nodes are assumed to
know their geographical location and to employ power control, which we
understand as an additional degree of freedom to improve network performance.
The expected throughput-progress and the expected packet delay normalized to
the one-hop progress are chosen as performance metrics. These metrics are
investigated for a nearest neighbor forwarding strategy, which benefits from
power control by reducing transmission power and, hence spatial contention. It
is shown that the connectivity-throughput trade-off can be resolved if nodes
employ a nearest neighbor forwarding strategy, achieving the upper bound on
throughput on average also in a random network while ensuring asymptotic
connectivity. In this case, the optimal throughput-delay scaling trade-off is
also achieved.
|
1007.3105
|
A Selection Region Based Routing Protocol for Random Mobile ad hoc
Networks
|
cs.IT math.IT
|
We propose a selection region based multi-hop routing protocol for random
mobile ad hoc networks, where the selection region is defined by two
parameters: a reference distance and a selection angle. At each hop, a relay is
chosen as the nearest node to the transmitter that is located within the
selection region. By assuming that the relay nodes are randomly placed, we
derive an upper bound for the optimum reference distance to maximize the
expected density of progress and investigate the relationship between the
optimum selection angle and the optimum reference distance. We also note that
the optimized expected density of progress scales as $\Theta(\sqrt{\lambda})$,
which matches the prior results in the literature. Compared with the
spatial-reuse multi-hop protocol in \cite{Baccelli:Aloha} recently proposed by
Baccelli \emph{et al.}, in our new protocol the number of nodes involved and
the calculation complexity for each relay selection are reduced significantly,
which is attractive for energy-limited wireless ad hoc networks (e.g., wireless
sensor networks).
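The relay rule above ("nearest node to the transmitter inside the selection region") can be sketched geometrically. The abstract does not spell out the region's exact shape, so this toy assumes it contains nodes farther than the reference distance from the transmitter whose bearing deviates from the destination direction by at most the selection angle; that geometry is our assumption.

```python
# Hedged sketch of selection-region relay choice. ASSUMED region:
# distance from transmitter >= d_ref AND angular deviation from the
# transmitter-to-destination heading <= phi (the selection angle).
import math

def select_relay(tx, dest, nodes, d_ref, phi):
    """Return the node nearest to tx inside the assumed selection region."""
    heading = math.atan2(dest[1] - tx[1], dest[0] - tx[0])
    best, best_d = None, float("inf")
    for x, y in nodes:
        d = math.hypot(x - tx[0], y - tx[1])
        if d < d_ref:              # inside the reference circle: excluded
            continue
        bearing = math.atan2(y - tx[1], x - tx[0])
        # wrap the angular deviation into [0, pi]
        dev = abs((bearing - heading + math.pi) % (2 * math.pi) - math.pi)
        if dev > phi:              # outside the selection angle: excluded
            continue
        if d < best_d:
            best, best_d = (x, y), d
    return best

relay = select_relay((0, 0), (10, 0), [(1, 0.2), (2, -0.1), (0.3, 0)],
                     d_ref=0.5, phi=math.pi / 6)
print(relay)
```

Only candidates beyond the reference distance and within the forward cone are considered, which is what cuts the number of nodes examined per hop.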
|
1007.3108
|
Second-Order Weight Distributions
|
cs.IT math.IT
|
A fundamental property of codes, the second-order weight distribution, is
proposed to solve problems such as computing second moments of weight
distributions of linear code ensembles. A series of results, parallel to those
for weight distributions, is established for second-order weight distributions.
In particular, an analogue of MacWilliams identities is proved. The
second-order weight distributions of regular LDPC code ensembles are then
computed. As easy consequences, the second moments of weight distributions of
regular LDPC code ensembles are obtained. Furthermore, the application of
second-order weight distributions in random coding approach is discussed. The
second-order weight distributions of the ensembles generated by a so-called
2-good random generator or parity-check matrix are computed, where a 2-good
random matrix is a kind of generalization of the uniformly distributed random
matrix over a finite field and is very useful for solving problems that involve
pairwise or triple-wise properties of sequences. It is shown that the 2-good
property is reflected in the second-order weight distribution, which thus plays
a fundamental role in some well-known problems in coding theory and
combinatorics. An example of linear intersecting codes is finally provided to
illustrate this fact.
|
1007.3159
|
Logic-Based Decision Support for Strategic Environmental Assessment
|
cs.AI
|
Strategic Environmental Assessment is a procedure aimed at introducing
systematic assessment of the environmental effects of plans and programs. This
procedure is based on the so-called coaxial matrices that define dependencies
between plan activities (infrastructures, plants, resource extractions,
buildings, etc.) and positive and negative environmental impacts, and
dependencies between these impacts and environmental receptors. Up to now, this
procedure has been implemented manually by environmental experts to check the
environmental effects of a given plan or program, but it has never been applied
during plan/program construction. A decision support system, based on a
clear logic semantics, would be an invaluable tool not only in assessing a
single, already defined plan, but also during the planning process in order to
produce an optimized, environmentally assessed plan and to study possible
alternative scenarios. We propose two logic-based approaches to the problem,
one based on Constraint Logic Programming and one on Probabilistic Logic
Programming that could be, in the future, conveniently merged to exploit the
advantages of both. We test the proposed approaches on a real energy plan and
we discuss their limitations and advantages.
|
1007.3208
|
Link Graph Analysis for Adult Images Classification
|
cs.IR
|
In order to protect an image search engine's users from undesirable results,
a classifier for adult images should be built. The information about links from
websites to images is employed to create such a classifier. These links are
represented as a bipartite website-image graph. Each vertex is equipped with
scores of adultness and decentness. The scores for image vertices are
initialized to zero, while those for website vertices are initialized according
to a text-based website classifier. An iterative algorithm that propagates scores
within a website-image graph is described. The scores obtained are used to
classify images by choosing an appropriate threshold. The experiments on
Internet-scale data have shown that the algorithm under consideration increases
classification recall by 17% in comparison with a simple algorithm which
classifies an image as adult if it is connected with at least one adult site
(at the same precision level).
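The propagation step described above can be sketched on a tiny bipartite website-image graph. The exact update rule is not given in the abstract; here we ASSUME an image's score is the mean score of the websites linking to it, with website scores held at their text-classifier initialization.

```python
# Toy score propagation on a bipartite website-image graph.
# Update rule (our assumption): image score <- mean of linking sites' scores.
edges = {                       # website -> images it links to
    "siteA": ["img1", "img2"],
    "siteB": ["img2", "img3"],
}
site_adult = {"siteA": 1.0, "siteB": 0.0}   # from a text-based classifier
image_adult = {img: 0.0 for imgs in edges.values() for img in imgs}

# With fixed website scores this toy converges in a single pass;
# the loop mirrors the iterative scheme of the full algorithm.
for _ in range(3):
    for img in image_adult:
        linking = [s for s, imgs in edges.items() if img in imgs]
        image_adult[img] = sum(site_adult[s] for s in linking) / len(linking)

threshold = 0.5                 # classification threshold on the final score
adult_images = sorted(i for i, s in image_adult.items() if s >= threshold)
print(adult_images)
```

An image linked only by an adult site ends at score 1.0, a shared image at 0.5, so the threshold decides the borderline cases.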
|
1007.3223
|
Testing and Debugging Techniques for Answer Set Solver Development
|
cs.AI cs.SE
|
This paper develops automated testing and debugging techniques for answer set
solver development. We describe a flexible grammar-based black-box ASP fuzz
testing tool which is able to reveal various defects such as unsound and
incomplete behavior, i.e., invalid answer sets and the inability to find existing
solutions, in state-of-the-art answer set solver implementations. Moreover, we
develop delta debugging techniques for shrinking failure-inducing inputs on
which solvers exhibit defective behavior. In particular, we develop a delta
debugging algorithm in the context of answer set solving, and evaluate two
different elimination strategies for the algorithm.
|
1007.3254
|
Distinguishing Fact from Fiction: Pattern Recognition in Texts Using
Complex Networks
|
cs.CL cond-mat.stat-mech physics.soc-ph
|
We establish concrete mathematical criteria to distinguish between different
kinds of written storytelling, fictional and non-fictional. Specifically, we
constructed a semantic network from both novels and news stories, with $N$
independent words as vertices or nodes, and edges or links allotted to words
occurring within $m$ places of a given vertex; we call $m$ the word distance.
We then used measures from complex network theory to distinguish between news
and fiction, studying the minimal text length needed as well as the optimized
word distance $m$. The literature samples were found to be most effectively
represented by their corresponding power laws over degree distribution $P(k)$
and clustering coefficient $C(k)$; we also studied the mean geodesic distance,
and found all our texts were small-world networks. We observed a natural
break-point at $k=\sqrt{N}$ where the power law in the degree distribution
changed, leading to separate power law fit for the bulk and the tail of $P(k)$.
Our linear discriminant analysis yielded a $73.8 \pm 5.15\%$ accuracy for the
correct classification of novels and $69.1 \pm 1.22\%$ for news stories. We
found an optimal word distance of $m=4$ and a minimum text length of 100 to 200
words $N$.
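The network construction described above (vertices are distinct words, edges link words occurring within $m$ places of each other) can be sketched in a few lines; the toy text is ours, and $m=4$ matches the abstract's optimal word distance.

```python
# Build a word co-occurrence network: link each word to the distinct
# words appearing within m positions of it (m = word distance).
from collections import defaultdict

def build_network(words, m):
    adj = defaultdict(set)
    for i, w in enumerate(words):
        for v in words[i + 1:i + m + 1]:   # the m words following position i
            if v != w:                     # no self-loops
                adj[w].add(v)
                adj[v].add(w)
    return adj

text = "the cat sat on the mat and the cat saw the dog".split()
net = build_network(text, m=4)
degrees = {w: len(nbrs) for w, nbrs in net.items()}
print(degrees["the"], degrees["dog"])
```

Even on this toy text the frequent word "the" becomes a hub, which is the kind of degree heterogeneity the power-law analysis above exploits.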
|
1007.3275
|
An Algorithmic Structuration of a Type System for an Orthogonal
Object/Relational Model
|
cs.DB
|
Date and Darwen have proposed a theory of types; the latter forms the basis
of a detailed presentation of a panoply of simple and complex types. However,
this proposal has not been structured in a formal system. Specifically, Date
and Darwen haven't indicated the formalism of the type system that corresponds
to the type theory established. In this paper, we propose a pseudo-algorithmic
and grammatical description of a system of types for Date and Darwen's model.
Our type system is intended to take null values into account; to this end,
we introduce a particular type, noted #, which expresses one or more occurrences
of incomplete information in a database. Our algebraic grammar describes in
detail the complete specification of an inheritance model and the subtyping
relation it induces, along with the definitions of related concepts.
|
1007.3315
|
Multi-Source Transmission for Wireless Relay Networks with Linear
Complexity
|
cs.IT math.IT
|
This paper considers transmission schemes in multi-access relay networks
(MARNs) where $J$ single-antenna sources send independent information to one
$N$-antenna destination through one $M$-antenna relay. For complexity
considerations, we propose a linear framework, where the relay linearly
transforms its received signals to generate the forwarded signals without
decoding and the destination uses its multi-antennas to fully decouple signals
from different sources before decoding, by which the decoding complexity is
linear in the number of sources. To achieve a high symbol rate, we first
propose a scheme called DSTC-ICRec in which all sources' information streams
are concurrently transmitted in both the source-relay link and the
relay-destination link. In this scheme, distributed space-time coding (DSTC) is
applied at the relay, which satisfies the linear constraint. DSTC also allows
the destination to conduct the zero-forcing interference cancellation (IC)
scheme originally proposed for multi-antenna systems to fully decouple signals
from different sources. Our analysis shows that the symbol rate of DSTC-ICRec
is $1/2$ symbols/source/channel use and the diversity gain of the scheme is
upperbounded by $M-J+1$. To achieve a higher diversity gain, we propose another
scheme called TDMA-ICRec in which the sources time-share the source-relay link.
The relay coherently combines the signals on its antennas to maximize the
signal-to-noise ratio (SNR) of each source, then concurrently forwards all
sources' information. The destination performs zero-forcing IC. It is shown
through both analysis and simulation that when $N \ge 2J-1$, TDMA-ICRec
achieves the same maximum diversity gain as the full TDMA scheme in which the
information stream from each source is assigned to an orthogonal channel in
both links, but with a higher symbol rate.
|
1007.3384
|
Relative entropy via non-sequential recursive pair substitutions
|
cs.IT cond-mat.stat-mech math.IT
|
The entropy of an ergodic source is the limit of properly rescaled 1-block
entropies of sources obtained by applying successive non-sequential recursive
pair substitutions (see P. Grassberger 2002, arXiv:physics/0207023, and D.
Benedetto, E. Caglioti and D. Gabrielli 2006, J. Stat. Mech. Theory Exp. 09,
doi:10.1088/1742-5468/2006/09/P09011). In this paper we prove that the cross
entropy and the Kullback-Leibler divergence can be obtained in a similar way.
|
1007.3424
|
Bacterial Community Reconstruction Using A Single Sequencing Reaction
|
q-bio.GN cs.IT math.IT q-bio.QM stat.AP stat.CO
|
Bacteria are the unseen majority on our planet, with millions of species and
comprising most of the living protoplasm. While current methods enable in-depth
study of a small number of communities, a simple tool for breadth studies of
bacterial population composition in a large number of samples is lacking. We
propose a novel approach for reconstruction of the composition of an unknown
mixture of bacteria using a single Sanger-sequencing reaction of the mixture.
This method is based on compressive sensing theory, which deals with
reconstruction of a sparse signal using a small number of measurements.
Utilizing the fact that in many cases each bacterial community is comprised of
a small subset of the known bacterial species, we show the feasibility of this
approach for determining the composition of a bacterial mixture. Using
simulations, we show that sequencing a few hundred base-pairs of the 16S rRNA
gene sequence may provide enough information for reconstruction of mixtures
containing tens of species, out of tens of thousands, even in the presence of
realistic measurement noise. Finally, we show initial promising results when
applying our method for the reconstruction of a toy experimental mixture with
five species. Our approach may offer a practical and efficient way of
identifying bacterial species compositions in biological samples.
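The sparse-recovery idea above can be illustrated with a toy greedy pursuit: a mixture vector is a sparse combination of known species "signatures", and matching pursuit (a standard compressive-sensing stand-in, not the paper's actual reconstruction method) identifies which species are present. Signatures and mixture here are invented for illustration.

```python
# Toy sparse recovery by matching pursuit. Species signatures are chosen
# orthonormal so the greedy pursuit is exact; real 16S signatures are not.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(signatures, mixture, n_iter):
    """Greedily pick the signature most correlated with the residual."""
    residual = list(mixture)
    picked = []
    for _ in range(n_iter):
        scores = {name: dot(sig, residual)
                  for name, sig in signatures.items()}
        name = max(scores, key=lambda n: abs(scores[n]))
        picked.append(name)
        # subtract the explained component from the residual
        residual = [r - scores[name] * s
                    for r, s in zip(residual, signatures[name])]
    return picked

signatures = {                      # hypothetical species "signatures"
    "E.coli":   [1.0, 0.0, 0.0, 0.0],
    "B.subt":   [0.0, 1.0, 0.0, 0.0],
    "S.aureus": [0.0, 0.0, 1.0, 0.0],
}
mixture = [0.7, 0.0, 0.3, 0.0]      # 70% E.coli + 30% S.aureus
print(sorted(matching_pursuit(signatures, mixture, 2)))
```

With only two pursuit steps the two present species are recovered, mirroring the sparsity assumption that each community contains a small subset of known species.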
|
1007.3515
|
Query-driven Procedures for Hybrid MKNF Knowledge Bases
|
cs.AI
|
Hybrid MKNF knowledge bases are one of the most prominent tightly integrated
combinations of open-world ontology languages with closed-world (non-monotonic)
rule paradigms. The definition of Hybrid MKNF is parametric on the description
logic (DL) underlying the ontology language, in the sense that non-monotonic
rules can extend any decidable DL language. Two related semantics have been
defined for Hybrid MKNF: one that is based on the Stable Model Semantics for
logic programs and one on the Well-Founded Semantics (WFS). Under WFS, the
definition of Hybrid MKNF relies on a bottom-up computation that has polynomial
data complexity whenever the DL language is tractable. Here we define a general
query-driven procedure for Hybrid MKNF that is sound with respect to the stable
model-based semantics, and sound and complete with respect to its WFS variant.
This procedure is able to answer a slightly restricted form of conjunctive
queries, and is based on tabled rule evaluation extended with an external
oracle that captures reasoning within the ontology. Such an (abstract) oracle
receives as input a query along with knowledge already derived, and replies
with a (possibly empty) set of atoms, defined in the rules, whose truth would
suffice to prove the initial query. With appropriate assumptions on the
complexity of the abstract oracle, the general procedure maintains the data
complexity of the WFS for Hybrid MKNF knowledge bases.
To illustrate this approach, we provide a concrete oracle for EL+, a fragment
of the light-weight DL EL++. Such an oracle has practical use, as EL++ is the
language underlying OWL 2 EL, which is part of the W3C recommendations for the
Semantic Web, and is tractable for reasoning tasks such as subsumption. We show
that query-driven Hybrid MKNF preserves polynomial data complexity when using
the EL+ oracle and WFS.
|
1007.3518
|
Secret Key Generation for a Pairwise Independent Network Model
|
cs.IT math.IT
|
We consider secret key generation for a "pairwise independent network" model
in which every pair of terminals observes correlated sources that are
independent of sources observed by all other pairs of terminals. The terminals
are then allowed to communicate publicly with all such communication being
observed by all the terminals. The objective is to generate a secret key shared
by a given subset of terminals at the largest rate possible, with the
cooperation of any remaining terminals. Secrecy is required from an
eavesdropper that has access to the public interterminal communication. A
(single-letter) formula for secret key capacity brings out a natural connection
between the problem of secret key generation and a combinatorial problem of
maximal packing of Steiner trees in an associated multigraph. An explicit
algorithm is proposed for secret key generation based on a maximal packing of
Steiner trees in a multigraph; the corresponding maximum rate of Steiner tree
packing is thus a lower bound for the secret key capacity. When only two of the
terminals or when all the terminals seek to share a secret key, the mentioned
algorithm achieves secret key capacity in which case the bound is tight.
|
1007.3564
|
Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction
|
cs.LG stat.ML
|
It is difficult to find the optimal sparse solution of a manifold learning
based dimensionality reduction algorithm. The lasso or the elastic net
penalized manifold learning based dimensionality reduction is not directly a
lasso penalized least squares problem and thus the least angle regression (LARS)
(Efron et al. \cite{LARS}), one of the most popular algorithms in sparse
learning, cannot be applied. Therefore, most current approaches take indirect
ways or have strict settings, which can be inconvenient for applications. In
this paper, we propose the manifold elastic net, or MEN for short. MEN
incorporates the merits of both the manifold learning based dimensionality
reduction and the sparse learning based dimensionality reduction. By using a
series of equivalent transformations, we show MEN is equivalent to the lasso
penalized least squares problem and thus LARS is adopted to obtain the optimal
sparse solution of MEN. In particular, MEN has the following advantages for
subsequent classification: 1) the local geometry of samples is well preserved
for low dimensional data representation, 2) both the margin maximization and
the classification error minimization are considered for sparse projection
calculation, 3) the projection matrix of MEN improves the parsimony in
computation, 4) the elastic net penalty reduces the over-fitting problem, and
5) the projection matrix of MEN can be interpreted psychologically and
physiologically. Experimental evidence on face recognition over various popular
datasets suggests that MEN is superior to top level dimensionality reduction
algorithms.
|
1007.3568
|
Achieving the Secrecy Capacity of Wiretap Channels Using Polar Codes
|
cs.IT cs.CR math.IT
|
Suppose Alice wishes to send messages to Bob through a communication channel
C_1, but her transmissions also reach an eavesdropper Eve through another
channel C_2. The goal is to design a coding scheme that makes it possible for
Alice to communicate both reliably and securely. Reliability is measured in
terms of Bob's probability of error in recovering the message, while security
is measured in terms of the mutual information between the message and Eve's
observations. Wyner showed that the situation is characterized by a single
constant C_s, called the secrecy capacity, which has the following meaning: for
all $\epsilon > 0$, there exist coding schemes of rate $R \ge C_s - \epsilon$
that asymptotically achieve both the reliability and the security objectives.
However, his proof of this result is based upon a nonconstructive random-coding
argument. To date, despite a considerable research effort, the only case where
we know how to construct coding schemes that achieve secrecy capacity is when
Eve's channel C_2 is an erasure channel, or a combinatorial variation thereof.
Polar codes were recently invented by Arikan; they approach the capacity of
symmetric binary-input discrete memoryless channels with low encoding and
decoding complexity. Herein, we use polar codes to construct a coding scheme
that achieves the secrecy capacity of general wiretap channels. Our
construction works for any instantiation of the wiretap channel model, as
originally defined by Wyner, as long as both C_1 and C_2 are symmetric and
binary-input. Moreover, we show how to modify our construction in order to
achieve strong security, as defined by Maurer, while still operating at a rate
that approaches the secrecy capacity. In this case, we cannot guarantee that
the reliability condition will be satisfied unless the main channel C_1 is
noiseless, although we believe it can always be satisfied in practice.
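The polar-code construction itself is beyond a short sketch, but the erasure-channel case mentioned above admits a simple worked example of the secrecy capacity: for a degraded pair of binary erasure channels, C_s is the difference of the two channel capacities. The helper names below are ours:

```python
def bec_capacity(eps):
    """Capacity of a binary erasure channel with erasure probability eps."""
    return 1.0 - eps

def bec_secrecy_capacity(eps_main, eps_eve):
    """Secrecy capacity when Bob's channel C_1 is BEC(eps_main) and
    Eve's channel C_2 is the noisier BEC(eps_eve).  For this degraded
    pair, C_s = C(C_1) - C(C_2), clipped at zero."""
    return max(0.0, bec_capacity(eps_main) - bec_capacity(eps_eve))

# Bob sees BEC(0.1); Eve only sees the degraded BEC(0.4):
print(round(bec_secrecy_capacity(0.1, 0.4), 6))  # 0.3 bits per channel use
```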
|
1007.3588
|
Improved construction of irregular progressive edge-growth Tanner graphs
|
cs.IT math.IT
|
The progressive edge-growth algorithm is a well-known procedure to construct
regular and irregular low-density parity-check codes. In this paper, we propose
a modification of the original algorithm that improves the performance of these
codes in the waterfall region when constructing codes complying with both
check and symbol node degree distributions. The proposed algorithm is thus
interesting if a family of irregular codes with a complex check node degree
distribution is used.
|
1007.3622
|
A generalized risk approach to path inference based on hidden Markov
models
|
stat.ML cs.LG stat.CO
|
Motivated by the unceasing interest in hidden Markov models (HMMs), this
paper re-examines hidden path inference in these models, using primarily a
risk-based framework. While the most common maximum a posteriori (MAP), or
Viterbi, path estimator and the minimum error, or Posterior Decoder (PD), have
long been around, other path estimators, or decoders, have been either only
hinted at or applied more recently and in dedicated applications generally
unfamiliar to the statistical learning community. Over a decade ago, however, a
family of algorithmically defined decoders aiming to hybridize the two standard
ones was proposed (Brushe et al., 1998). The present paper gives a careful
analysis of this hybridization approach, identifies several problems and issues
with it and other previously proposed approaches, and proposes practical
resolutions of those. Furthermore, simple modifications of the classical
criteria for hidden path recognition are shown to lead to a new class of
decoders. Dynamic programming algorithms to compute these decoders in the usual
forward-backward manner are presented. A particularly interesting subclass of
such estimators can be also viewed as hybrids of the MAP and PD estimators.
Similar to previously proposed MAP-PD hybrids, the new class is parameterized
by a small number of tunable parameters. Unlike their algorithmic predecessors,
the new risk-based decoders are more clearly interpretable, and, most
importantly, work "out of the box" in practice, which is demonstrated on some
real bioinformatics tasks and data. Some further generalizations and
applications are discussed in conclusion.
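The new risk-based decoders are beyond a short sketch, but the classical MAP path estimator they hybridize, the Viterbi decoder, fits in a few lines of dynamic programming. The toy weather model below is our own illustration:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Classical MAP path (Viterbi) decoder for a discrete HMM."""
    # V[t][s] = (probability of the best path ending in state s, that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][r][0] * trans_p[r][s] * emit_p[s][o], V[-2][r][1])
                for r in states)
            V[-1][s] = (prob, path + [s])
    return max(V[-1].values())[1]

states = ('Rainy', 'Sunny')
start_p = {'Rainy': 0.6, 'Sunny': 0.4}
trans_p = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
           'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
emit_p = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
          'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}
print(viterbi(['walk', 'shop', 'clean'], states, start_p, trans_p, emit_p))
# -> ['Sunny', 'Rainy', 'Rainy']
```

The Posterior Decoder differs in that it maximizes each state marginal separately via the forward-backward recursions, which can yield a pointwise-better but globally inadmissible path.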
|
1007.3661
|
Non-Binary Polar Codes using Reed-Solomon Codes and Algebraic Geometry
Codes
|
cs.IT math.IT
|
Polar codes, introduced by Arikan, achieve the symmetric capacity of any
discrete memoryless channel with low encoding and decoding complexity.
Recently, non-binary polar codes have been investigated. In this paper, we
calculate the error probability of non-binary polar codes constructed on the
basis of Reed-Solomon matrices by numerical simulations. It is confirmed that
4-ary polar codes have significantly better performance than binary polar codes
on the binary-input AWGN channel. We also discuss an interpretation of polar codes in
terms of algebraic geometry codes, and further show that polar codes using
Hermitian codes have asymptotically good performance.
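For reference, the binary case that the non-binary constructions generalize is easy to write down: the generator matrix is the n-fold Kronecker power of Arikan's 2x2 kernel. The sketch below uses the unpermuted form (no bit-reversal); the function names are ours:

```python
def kron_gf2(A, B):
    """Kronecker product of 0/1 matrices over GF(2)."""
    return [[a & b for a in ra for b in rb] for ra in A for rb in B]

def polar_generator(n):
    """G_N = F^{(x)n} for Arikan's kernel F = [[1,0],[1,1]], N = 2^n."""
    F = [[1, 0], [1, 1]]
    G = F
    for _ in range(n - 1):
        G = kron_gf2(G, F)
    return G

def polar_encode(u, G):
    """Codeword x = u G over GF(2)."""
    N = len(G)
    return [sum(u[i] & G[i][j] for i in range(N)) % 2 for j in range(N)]

G4 = polar_generator(2)              # N = 4
print(polar_encode([1, 0, 1, 1], G4))  # [1, 1, 0, 1]
```

Non-binary variants replace the kernel entries by elements of a larger field, e.g. taken from Reed-Solomon matrices.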
|
1007.3663
|
A decidable subclass of finitary programs
|
cs.AI
|
Answer set programming - the most popular problem solving paradigm based on
logic programs - has been recently extended to support uninterpreted function
symbols. Each of these approaches has limitations. In this paper we propose
a class of programs called FP2 that enjoys a different trade-off between
expressiveness and complexity. FP2 programs enjoy the following unique
combination of properties: (i) the ability of expressing predicates with
infinite extensions; (ii) full support for predicates with arbitrary arity;
(iii) decidability of FP2 membership checking; (iv) decidability of skeptical
and credulous stable model reasoning for call-safe queries. Odd cycles are
supported by composing FP2 programs with argument restricted programs.
|
1007.3676
|
(n,K)-user Interference Channels: Degrees of Freedom
|
cs.IT math.IT
|
We analyze the gains of opportunistic communication in multiuser interference
channels. Consider a fully connected $n$-user Gaussian interference channel. At
each time instance only $K\leq n$ transmitters are allowed to be communicating
with their respective receivers and the remaining $(n-K)$ transmitter-receiver
pairs remain inactive. For finite $n$, if the transmitters can acquire channel
state information (CSI) and if all channel gains are bounded away from zero and
infinity, the seminal results on interference alignment establish that for any
$K$ {\em arbitrary} active pairs the total number of spatial degrees of freedom
per orthogonal time and frequency domain is $\frac{K}{2}$. Also it is
noteworthy that without transmit-side CSI the interference channel becomes
interference-limited and the number of degrees of freedom is 0. In {\em dense}
networks ($n\rightarrow\infty$), however, as the size of the network increases, it
becomes less likely to sustain the bounding conditions on the channel gains. By
exploiting this fact, we show that when $n$ obeys certain scaling laws, by {\em
opportunistically} and {\em dynamically} selecting the $K$ active pairs at each
time instance, the number of degrees of freedom can exceed $\frac{K}{2}$ and in
fact can be made arbitrarily close to $K$. More specifically when all
transmitters and receivers are equipped with one antenna, then the network size
scaling as $n\in\omega(\snr^{d(K-1)})$ is a {\em sufficient} condition for
achieving $d\in[0,K]$ degrees of freedom. Moreover, achieving these degrees of
freedom does not necessitate the transmitters to acquire channel state
information. Hence, invoking opportunistic communication in the context of
interference channels leads to achieving higher degrees of freedom that are not
achievable otherwise.
|
1007.3700
|
Logic Programming for Finding Models in the Logics of Knowledge and its
Applications: A Case Study
|
cs.AI cs.LO
|
The logics of knowledge are modal logics that have been shown to be effective
in representing and reasoning about knowledge in multi-agent domains.
Relatively few computational frameworks for dealing with computation of models
and useful transformations in logics of knowledge (e.g., to support multi-agent
planning with knowledge actions and degrees of visibility) have been proposed.
This paper explores the use of logic programming (LP) to encode interesting
forms of logics of knowledge and compute Kripke models. The LP modeling is
expanded with useful operators on Kripke structures, to support multi-agent
planning in the presence of both world-altering and knowledge actions. This
results in the first ever implementation of a planner for this type of complex
multi-agent domains.
|
1007.3706
|
Cooperative Convex Optimization in Networked Systems: Augmented
Lagrangian Algorithms with Directed Gossip Communication
|
cs.IT math.IT
|
We study distributed optimization in networked systems, where nodes cooperate
to find the optimal quantity of common interest, x=x^\star. The objective
function of the corresponding optimization problem is the sum of private (known
only by a node,) convex, nodes' objectives and each node imposes a private
convex constraint on the allowed values of x. We solve this problem for generic
connected network topologies with asymmetric random link failures with a novel
distributed, decentralized algorithm. We refer to this algorithm as AL-G
(augmented Lagrangian gossiping), and to its variants as AL-MG (augmented
Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast
gossiping). The AL-G algorithm is based on the augmented Lagrangian dual
function. Dual variables are updated by the standard method of multipliers, at
a slow time scale. To update the primal variables, we propose a novel,
Gauss-Seidel type, randomized algorithm, at a fast time scale. AL-G uses
unidirectional gossip communication, only between immediate neighbors in the
network and is resilient to random link failures. For networks with reliable
communication (i.e., no failures), the simplified AL-BG (augmented Lagrangian
broadcast gossiping) algorithm reduces communication, computation and data
storage cost. We prove convergence for all proposed algorithms and demonstrate
by simulations the effectiveness on two applications: l_1-regularized logistic
regression for classification and cooperative spectrum sensing for cognitive
radio networks.
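AL-G's primal-dual machinery is beyond a short sketch, but the gossip communication primitive it builds on is easy to illustrate. The toy below uses the simpler symmetric pairwise-averaging variant (AL-G itself uses unidirectional gossip); all names and data are ours:

```python
import random

def gossip_average(values, edges, iters=2000, seed=0):
    """Randomized pairwise gossip: at each step a random edge (i, j) is
    activated and both endpoints replace their values with the pair
    average.  On a connected graph the values converge to the
    network-wide mean, and the sum is conserved at every step."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(iters):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x

# Path graph 0-1-2-3 with initial local measurements:
x = gossip_average([4.0, 0.0, 8.0, 0.0], [(0, 1), (1, 2), (2, 3)])
print(x)  # every entry close to the true mean 3.0
```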
|
1007.3753
|
Fast L1-Minimization Algorithms For Robust Face Recognition
|
cs.CV cs.NA
|
L1-minimization refers to finding the minimum L1-norm solution to an
underdetermined linear system b=Ax. Under certain conditions as described in
compressive sensing theory, the minimum L1-norm solution is also the sparsest
solution. In this paper, our study addresses the speed and scalability of its
algorithms. In particular, we focus on the numerical implementation of a
sparsity-based classification framework in robust face recognition, where
sparse representation is sought to recover human identities from very
high-dimensional facial images that may be corrupted by illumination, facial
disguise, and pose variation. Although the underlying numerical problem is a
linear program, traditional algorithms are known to suffer from poor scalability for
large-scale applications. We investigate a new solution based on a classical
convex optimization framework, known as Augmented Lagrangian Methods (ALM). The
new convex solvers provide a viable solution to real-world, time-critical
applications such as face recognition. We conduct extensive experiments to
validate and compare the performance of the ALM algorithms against several
popular L1-minimization solvers, including interior-point method, Homotopy,
FISTA, SESOP-PCD, approximate message passing (AMP) and TFOCS. To aid peer
evaluation, the code for all the algorithms has been made publicly available.
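The ALM solvers studied in the paper are too involved for a short sketch, but the proximal machinery they share can be illustrated with plain ISTA, the un-accelerated ancestor of FISTA, on a toy lasso instance. All names and the toy data below are ours:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def soft(z, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in z]

def ista(A, b, lam, step, iters=1000):
    """Proximal-gradient (ISTA) iterations for the lasso problem
       min_x 0.5 * ||A x - b||^2 + lam * ||x||_1 .
    step must satisfy step <= 1 / ||A^T A||_2 for convergence."""
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [ai - bi for ai, bi in zip(matvec(A, x), b)]
        g = matvec(At, r)                       # gradient of the smooth part
        x = soft([xi - step * gi for xi, gi in zip(x, g)], step * lam)
    return x

# Underdetermined 2x3 system: column 0 alone can explain b.
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
b = [1.0, 0.0]
x = ista(A, b, lam=0.01, step=0.3)
print(x)  # a sparse solution concentrated on the first coordinate
```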
|
1007.3772
|
Video Event Recognition for Surveillance Applications (VERSA)
|
cs.CV
|
VERSA provides a general-purpose framework for defining and recognizing
events in live or recorded surveillance video streams. VERSA recognizes events
by using a declarative logic language to define the spatial and temporal
relationships that characterize a given event or activity.
Doing so requires the definition of certain fundamental spatial and temporal
relationships and a high-level syntax for specifying frame templates and query
parameters. Although the handling of uncertainty in the current VERSA
implementation is simplistic, the language and architecture are amenable to
extension using fuzzy logic or similar approaches. VERSA's high-level
architecture is designed to work in XML-based, service-oriented environments.
VERSA can be thought of as subscribing to the XML annotations streamed by a
lower-level video analytics service that provides basic entity detection,
labeling, and tracking. One or many VERSA Event Monitors could thus analyze
video streams and provide alerts when certain events are detected.
|
1007.3781
|
Multiresolution Cube Estimators for Sensor Network Aggregate Queries
|
cs.DB
|
In this work we present in-network techniques to improve the efficiency of
spatial aggregate queries. Such queries are very common in a sensornet setting,
demanding more targeted techniques for their handling. Our approach constructs
and maintains multi-resolution cube hierarchies inside the network, which can
be constructed in a distributed fashion. In case of failures, recovery can also
be performed with in-network decisions. In this paper we demonstrate how
in-network cube hierarchies can be used to summarize sensor data, and how they
can be exploited to improve the efficiency of spatial aggregate queries. We
show that query plans over our cube summaries can be computed in polynomial
time, and we present a PTIME algorithm that selects the minimum number of data
requests that can compute the answer to a spatial query. We further extend our
algorithm to handle optimization over multiple queries, which can also be done
in polynomial time. We discuss enriching cube hierarchies with extra summary
information, and present an algorithm for distributed cube construction.
Finally we investigate node and area failures, and algorithms to recover query
results.
|
1007.3799
|
Adapting to the Shifting Intent of Search Queries
|
cs.LG
|
Search engines today present results that are often oblivious to abrupt
shifts in intent. For example, the query `independence day' usually refers to a
US holiday, but the intent of this query abruptly changed during the release of
a major film by that name. While no studies exactly quantify the magnitude of
intent-shifting traffic, studies suggest that news events, seasonal topics, pop
culture, etc. account for 50% of all search queries. This paper shows that the
signals a search engine receives can be used to both determine that a shift in
intent has happened, as well as find a result that is now more relevant. We
present a meta-algorithm that marries a classifier with a bandit algorithm to
achieve regret that depends logarithmically on the number of query impressions,
under certain assumptions. We provide strong evidence that this regret is close
to the best achievable. Finally, via a series of experiments, we demonstrate
that our algorithm outperforms prior approaches, particularly as the amount of
intent-shifting traffic increases.
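The paper's meta-algorithm (a classifier married to a bandit) is beyond a short sketch, but the bandit ingredient it builds on can be illustrated with epsilon-greedy arm selection over two candidate results. The click-through rates and names below are invented for illustration:

```python
import random

def epsilon_greedy(click_probs, rounds=5000, eps=0.1, seed=1):
    """Minimal epsilon-greedy bandit over search results (arms).
    Each round: explore a random arm with probability eps, otherwise
    exploit the arm with the best empirical click-through rate."""
    rng = random.Random(seed)
    n = [0] * len(click_probs)  # pulls per arm
    s = [0] * len(click_probs)  # observed clicks per arm
    for _ in range(rounds):
        if rng.random() < eps or 0 in n:
            a = rng.randrange(len(click_probs))
        else:
            a = max(range(len(click_probs)), key=lambda i: s[i] / n[i])
        n[a] += 1
        s[a] += rng.random() < click_probs[a]  # simulated user click
    return n

# After an intent shift, result 1 has the higher click-through rate:
pulls = epsilon_greedy([0.2, 0.6])
print(pulls)  # most traffic flows to arm 1
```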
|
1007.3808
|
Characterization of Graph-cover Pseudocodewords of Codes over $F_3$
|
cs.IT math.IT
|
Linear-programming pseudocodewords play a pivotal role in our understanding
of the linear-programming decoding algorithms. These pseudocodewords are known
to be equivalent to the graph-cover pseudocodewords. The latter
pseudocodewords, when viewed as points in the multidimensional Euclidean space,
lie inside a fundamental cone. This fundamental cone depends on the choice of a
parity-check matrix of a code, rather than on the choice of the code itself.
The cone does not depend on the channel, over which the code is employed. The
knowledge of the boundaries of the fundamental cone could help in studying
various properties of the pseudocodewords, such as their minimum pseudoweight,
pseudoredundancy of the codes, etc. For the binary codes, the full
characterization of the fundamental cone was derived by Koetter et al. However,
if the underlying alphabet is larger, such a characterization becomes more
involved. In this work, a characterization of the fundamental cone for codes
over $F_3$ is discussed.
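For reference, the binary characterization due to Koetter et al. is simple enough to state as a membership test: a nonnegative vector lies in the fundamental cone iff, in every check, no coordinate exceeds the sum of the other coordinates in that check's support. The $F_3$ case treated here requires a finer condition. The code below is our illustration of the binary test only:

```python
def in_fundamental_cone(H, w, tol=1e-12):
    """Binary-case fundamental cone test (Koetter et al.): w >= 0 and,
    for every row j of the parity-check matrix H and every position i
    in its support, w_i <= sum of w_{i'} over the other support
    positions of row j."""
    if any(wi < -tol for wi in w):
        return False
    for row in H:
        supp = [i for i, h in enumerate(row) if h != 0]
        total = sum(w[i] for i in supp)
        if any(w[i] > total - w[i] + tol for i in supp):
            return False
    return True

# [7,4] Hamming-style parity-check matrix:
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
print(in_fundamental_cone(H, [1] * 7))                # True
print(in_fundamental_cone(H, [5, 0, 0, 0, 0, 0, 0]))  # False
```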
|
1007.3858
|
CHR(PRISM)-based Probabilistic Logic Learning
|
cs.PL cs.AI cs.LG cs.LO
|
PRISM is an extension of Prolog with probabilistic predicates and built-in
support for expectation-maximization learning. Constraint Handling Rules (CHR)
is a high-level programming language based on multi-headed multiset rewrite
rules.
In this paper, we introduce a new probabilistic logic formalism, called
CHRiSM, based on a combination of CHR and PRISM. It can be used for high-level
rapid prototyping of complex statistical models by means of "chance rules". The
underlying PRISM system can then be used for several probabilistic inference
tasks, including probability computation and parameter learning. We define the
CHRiSM language in terms of syntax and operational semantics, and illustrate it
with examples. We define the notion of ambiguous programs and define a
distribution semantics for unambiguous programs. Next, we describe an
implementation of CHRiSM, based on CHR(PRISM). We discuss the relation between
CHRiSM and other probabilistic logic programming languages, in particular PCHR.
Finally we identify potential application domains.
|
1007.3881
|
Orthogonal multifilters image processing of astronomical images from
scanned photographic plates
|
cs.CV cs.NA
|
In this paper orthogonal multifilters for astronomical image processing are
presented. We obtained new orthogonal multifilters based on the orthogonal
wavelets of Haar and Daubechies. Recently, multiwavelets have been introduced as
a more powerful multiscale analysis tool. It adds several degrees of freedom in
multifilter design and makes it possible to have several useful properties such
as symmetry, orthogonality, short support, and a higher number of vanishing
moments simultaneously. A multifilter decomposition of scanned photographic
plates containing astronomical images is performed.
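The multifilter banks themselves are beyond a short sketch, but the scalar Haar step they generalize, splitting a signal row into normalized pairwise averages and differences, is a few lines. The names and sample row below are ours:

```python
def haar_step(signal):
    """One level of the orthonormal Haar decomposition: normalized
    pairwise sums give the coarse approximation, normalized pairwise
    differences give the detail coefficients."""
    r = 2 ** 0.5
    approx = [(a + b) / r for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / r for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step (up to floating-point rounding)."""
    r = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / r, (a - d) / r]
    return out

row = [9.0, 7.0, 3.0, 5.0]   # one row of pixel values
a, d = haar_step(row)
print(a, d)
print(haar_inverse(a, d))    # reconstructs the original row
```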
|
1007.3884
|
New Results for the MAP Problem in Bayesian Networks
|
cs.AI cs.CC stat.ML
|
This paper presents new results for the (partial) maximum a posteriori (MAP)
problem in Bayesian networks, which is the problem of querying the most
probable state configuration of some of the network variables given evidence.
First, it is demonstrated that the problem remains hard even in networks with
very simple topology, such as binary polytrees and simple trees (including the
Naive Bayes structure). Such proofs extend previous complexity results for the
problem. Inapproximability results are also derived in the case of trees if the
number of states per variable is not bounded. Although the problem is shown to
be hard and inapproximable even in very simple scenarios, a new exact algorithm
is described that is empirically fast in networks of bounded treewidth and
bounded number of states per variable. The same algorithm is used as the basis of a
Fully Polynomial Time Approximation Scheme for MAP under such assumptions.
Approximation schemes were generally thought to be impossible for this problem,
but we show otherwise for classes of networks that are important in practice.
The algorithms are extensively tested using some well-known networks as well as
randomly generated cases to show their effectiveness.
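The paper's exact algorithm is beyond a short sketch, but the partial MAP problem itself, maximize over the query variables while summing out the other unobserved ones, can be stated precisely as brute-force enumeration (exponential, so only for tiny networks). The network, CPT encoding, and names below are our own illustration:

```python
from itertools import product

def brute_force_map(variables, cpts, evidence, query_vars):
    """Exhaustive partial MAP for binary variables: maximize the joint
    probability over query_vars, summing out the remaining unobserved
    variables.  `cpts` maps each variable to a function returning
    P(var = its value | full assignment)."""
    others = [v for v in variables
              if v not in evidence and v not in query_vars]
    best, best_assign = -1.0, None
    for qvals in product([0, 1], repeat=len(query_vars)):
        assign = dict(evidence)
        assign.update(zip(query_vars, qvals))
        total = 0.0
        for ovals in product([0, 1], repeat=len(others)):
            full = dict(assign)
            full.update(zip(others, ovals))
            p = 1.0
            for v in variables:
                p *= cpts[v](full)
            total += p
        if total > best:
            best, best_assign = total, dict(zip(query_vars, qvals))
    return best_assign, best

# Tiny chain A -> B -> C with binary variables:
cpts = {
    'A': lambda f: 0.3 if f['A'] else 0.7,
    'B': lambda f: (0.8 if f['B'] else 0.2) if f['A'] else (0.1 if f['B'] else 0.9),
    'C': lambda f: (0.9 if f['C'] else 0.1) if f['B'] else (0.2 if f['C'] else 0.8),
}
assign, p = brute_force_map(['A', 'B', 'C'], cpts, {'C': 1}, ['A'])
print(assign, p)  # MAP over A given C = 1, with B summed out
```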
|