| id | title | categories | abstract |
|---|---|---|---|
1007.3896
|
The State-Dependent Multiple-Access Channel with States Available at a
Cribbing Encoder
|
cs.IT math.IT
|
The two-user discrete memoryless state-dependent multiple-access channel
(MAC) models a scenario in which two encoders transmit independent messages to
a single receiver via a MAC whose channel law is governed by the pair of
encoders' inputs and by an i.i.d. state random variable. In the cooperative
state-dependent MAC model it is further assumed that Message 1 is shared by
both encoders whereas Message 2 is known only to Encoder 2 - the cognitive
transmitter. The capacity of the cooperative state-dependent MAC where the
realization of the state sequence is known non-causally to the cognitive
encoder has been derived by Somekh-Baruch et al.
In this work we dispense with the assumption that Message 1 is shared a
priori by both encoders. Instead, we study the case in which Encoder 2 cribs
causally from Encoder 1. We determine the capacity region for both the case
where Encoder 2 cribs strictly causally and the case where it cribs causally
from Encoder 1.
|
1007.3906
|
A colorful origin for the genetic code: Information theory, statistical
mechanics and the emergence of molecular codes
|
q-bio.GN cond-mat.stat-mech cs.IT math.IT physics.bio-ph q-bio.MN q-bio.PE
|
The genetic code maps the sixty-four nucleotide triplets (codons) to twenty
amino-acids. While the biochemical details of this code were unraveled long
ago, its origin is still obscure. We review information-theoretic approaches to
the problem of the code's origin and discuss the results of a recent work that
treats the code in terms of an evolving, error-prone information channel. Our
model - which utilizes the rate-distortion theory of noisy communication
channels - suggests that the genetic code originated as a result of the
interplay of the three conflicting evolutionary forces: the needs for diverse
amino-acids, for error-tolerance and for minimal cost of resources. The
description of the code as an information channel allows us to mathematically
identify the fitness of the code and locate its emergence at a second-order
phase transition when the mapping of codons to amino-acids becomes nonrandom.
The noise in the channel brings about an error-graph, in which edges connect
codons that are likely to be confused. The emergence of the code is governed by
the topology of the error-graph, which determines the lowest modes of the
graph-Laplacian and is related to the map coloring problem.
|
1007.3926
|
Ear Identification by Fusion of Segmented Slice Regions using Invariant
Features: An Experimental Manifold with Dual Fusion Approach
|
cs.CV
|
This paper proposes a robust ear identification system developed by fusing
SIFT features from color-segmented slice regions of an ear. The method uses a
Gaussian mixture model (GMM), built with a vector quantization algorithm, to
model the ear, and applies K-L divergence within the GMM framework to record
color similarity over the specified ranges by comparing a reference ear with a
probe ear. SIFT features are then detected and extracted from each color slice
region as part of invariant feature extraction. The extracted keypoints are
fused separately by two approaches, namely concatenation and Dempster-Shafer
theory. Finally, the fusion approaches generate two independent augmented
feature vectors, each used separately for identification of individuals. The
proposed technique is tested on the IIT Kanpur ear database of 400 individuals
and achieves 98.25% identification accuracy when a top-5 match criterion is
set for each subject.
|
1007.3934
|
Distributed Detection over Random Networks: Large Deviations Analysis
|
cs.IT math.IT
|
We show by large deviations theory that the performance of running consensus
is asymptotically equivalent to the performance of the (asymptotically) optimal
centralized detector. Running consensus is a recently proposed
stochastic-approximation-type algorithm for distributed detection in sensor
networks. At
each time step, the state at each sensor is updated by a local averaging of its
own state and the states of its neighbors (consensus) and by accounting for the
new observations (innovation). We assume Gaussian, spatially correlated
observations, and we allow for the underlying network to be randomly varying.
This paper shows through large deviations that the Bayes probability of
detection error, for the distributed detector, decays at the best achievable
rate, namely, the Chernoff information rate. Numerical examples illustrate the
behavior of the distributed detector for a finite number of observations.
|
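The consensus-plus-innovation update described in 1007.3934 above can be sketched for a toy Gaussian mean-shift detection problem. The fixed ring topology, the weights, and all parameters below are illustrative assumptions, not the paper's setup (which allows spatially correlated observations and randomly varying networks):

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, theta = 5, 500, 1.0   # sensors, time steps, mean shift under H1

# Doubly stochastic averaging weights for a fixed ring network
# (a toy stand-in for the paper's randomly varying network).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = np.zeros(N)             # running decision statistic at each sensor
for t in range(1, T + 1):
    y = theta + rng.normal(size=N)      # H1 observations, unit variance
    llr = theta * y - theta**2 / 2      # per-sensor log-likelihood ratio
    x = W @ (x + (llr - x) / t)         # consensus + innovation step

# Under H1 each statistic settles near E[llr] = theta^2 / 2 > 0,
# so every sensor decides H1 by checking x_i > 0.
decisions = x > 0
```

With the mean shift present, all sensors agree on H1 once the running averages have mixed through the network.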
1007.4002
|
Continuum Percolation in the Intrinsically Secure Communications Graph
|
cs.IT cs.NI math.IT
|
The intrinsically secure communications graph (iS-graph) is a random graph
which captures the connections that can be securely established over a
large-scale network, in the presence of eavesdroppers. It is based on
principles of information-theoretic security, widely accepted as the strictest
notion of security. In this paper, we are interested in characterizing the
global properties of the iS-graph in terms of percolation on the infinite
plane. We prove the existence of a phase transition in the Poisson iS-graph,
whereby an unbounded component of securely connected nodes suddenly arises as
we increase the density of legitimate nodes. Our work shows that long-range
communication in a wireless network is still possible when a secrecy constraint
is present.
|
1007.4040
|
Loop Formulas for Description Logic Programs
|
cs.AI cs.LO
|
Description Logic Programs (dl-programs) proposed by Eiter et al. constitute
an elegant yet powerful formalism for the integration of answer set programming
with description logics, for the Semantic Web. In this paper, we generalize the
notions of completion and loop formulas of logic programs to description logic
programs and show that the answer sets of a dl-program can be precisely
captured by the models of its completion and loop formulas. Furthermore, we
propose a new, alternative semantics for dl-programs, called the {\em canonical
answer set semantics}, which is defined by the models of completion that
satisfy what are called canonical loop formulas. A desirable property of
canonical answer sets is that they are free of circular justifications. Some
properties of canonical answer sets are also explored.
|
1007.4053
|
AstroGrid-D: Grid Technology for Astronomical Science
|
cs.DC astro-ph.IM cs.DB cs.NI
|
We present status and results of AstroGrid-D, a joint effort of
astrophysicists and computer scientists to employ grid technology for
scientific applications. AstroGrid-D provides access to a network of
distributed machines with a set of commands as well as software interfaces. It
allows simple use of compute and storage facilities, and supports scheduling
and monitoring of compute tasks and data management. It is based on the Globus
Toolkit middleware
(GT4). Chapter 1 describes the context which led to the demand for advanced
software solutions in Astrophysics, and we state the goals of the project. We
then present characteristic astrophysical applications that have been
implemented on AstroGrid-D in chapter 2. We describe simulations of different
complexity, compute-intensive calculations running on multiple sites, and
advanced applications for specific scientific purposes, such as a connection to
robotic telescopes. We can show from these examples how grid execution improves
e.g. the scientific workflow. Chapter 3 explains the software tools and
services that we adapted or newly developed. Section 3.1 is focused on the
administrative aspects of the infrastructure, to manage users and monitor
activity. Section 3.2 characterises the central components of our architecture:
The AstroGrid-D information service to collect and store metadata, a file
management system, the data management system, and a job manager for automatic
submission of compute tasks. We summarise the successfully established
infrastructure in chapter 4, concluding with our future plans to establish
AstroGrid-D as a platform of modern e-Astronomy.
|
1007.4112
|
Interference Alignment for Clustered Multicell Joint Decoding
|
cs.IT math.IT
|
Multicell joint processing has been proven to be very efficient in overcoming
the interference-limited nature of the cellular paradigm. However, for reasons
of practical implementation, global multicell joint decoding is not feasible,
and thus clusters of cooperating Base Stations have to be considered. In this
context, intercluster interference has to be mitigated in order to harvest the
full potential of multicell joint processing. In this paper, four scenarios of
intercluster interference are investigated, namely a) global multicell joint
processing, b) interference alignment, c) resource division multiple access and
d) cochannel interference allowance. Each scenario is modelled and analyzed
using the per-cell ergodic sum-rate capacity as a figure of merit. In this
process, a number of theorems are derived for analytically expressing the
asymptotic eigenvalue distributions of the channel covariance matrices. The
analysis is based on principles from Free Probability theory and especially
properties in the R and Stieltjes transform domain.
|
1007.4122
|
A model for the emergence of the genetic code as a transition in a noisy
information channel
|
q-bio.QM cond-mat.stat-mech cs.IT math.IT physics.bio-ph
|
The genetic code maps the sixty-four nucleotide triplets (codons) to twenty
amino-acids. Some argue that the specific form of the code with its twenty
amino-acids might be a 'frozen accident' because of the overwhelming effects of
any further change. Others see it as a consequence of primordial biochemical
pathways and their evolution. Here we examine a scenario in which evolution
drives the emergence of a genetic code by selecting for an amino-acid map that
minimizes the impact of errors. We treat the stochastic mapping of codons to
amino-acids as a noisy information channel with a natural fitness measure.
Organisms compete by the fitness of their codes and, as a result, a genetic
code emerges at a supercritical transition in the noisy channel, when the
mapping of codons to amino-acids becomes nonrandom. At the phase transition, a
small expansion is valid and the emergent code is governed by smooth modes of
the Laplacian of errors. These modes are in turn governed by the topology of
the error-graph, in which codons are connected if they are likely to be
confused. This topology sets an upper bound - which is related to the classical
map-coloring problem - on the number of possible amino-acids. The suggested
scenario is generic and may describe a mechanism for the formation of other
error-prone biological codes, such as the recognition of DNA sites by proteins
in the transcription regulatory network.
|
1007.4124
|
A simple model for the evolution of molecular codes driven by the
interplay of accuracy, diversity and cost
|
q-bio.QM cs.IT math.IT physics.bio-ph
|
Molecular codes translate information written in one type of molecules into
another molecular language. We introduce a simple model that treats molecular
codes as noisy information channels. An optimal code is a channel that conveys
information accurately and efficiently while keeping down the impact of errors.
The equipoise of the three conflicting needs, for minimal error-load, minimal
cost of resources and maximal diversity of vocabulary, defines the fitness of
the code. The model suggests a mechanism for the emergence of a code when
evolution varies the parameters that control this equipoise and the mapping
between the two molecular languages becomes non-random. This mechanism is
demonstrated by a simple toy model that is formally equivalent to a mean-field
Ising magnet.
|
1007.4149
|
A rate-distortion scenario for the emergence and evolution of noisy
molecular codes
|
q-bio.MN cond-mat.stat-mech cs.IT math.IT physics.bio-ph
|
We discuss, in terms of rate-distortion theory, the fitness of molecular
codes as the problem of designing an optimal information channel. The fitness
is governed by an interplay between the cost and quality of the channel, which
induces smoothness in the code. By incorporating this code fitness into
population dynamics models, we suggest that the emergence and evolution of
molecular codes may be explained by simple channel design considerations.
|
1007.4221
|
Building Blocks Propagation in Quantum-Inspired Genetic Algorithm
|
cs.NE
|
This paper presents an analysis of building blocks propagation in
Quantum-Inspired Genetic Algorithm, which belongs to a new class of
metaheuristics drawing their inspiration from both biological evolution and
unitary evolution of quantum systems. The expected number of quantum
chromosomes matching a schema has been analyzed and a random variable
corresponding to this issue has been introduced. The results have been compared
with Simple Genetic Algorithm. Also, it has been presented how selected binary
quantum chromosomes cover a domain of one-dimensional fitness function.
|
1007.4236
|
Sorting of Permutations by Cost-Constrained Transpositions
|
cs.IT math.IT
|
We address the problem of finding the minimum decomposition of a permutation
in terms of transpositions with non-uniform cost. For arbitrary non-negative
cost functions, we describe polynomial-time, constant-approximation
decomposition algorithms. For metric-path costs, we describe exact
polynomial-time decomposition algorithms. Our algorithms represent a
combination of Viterbi-type algorithms and graph-search techniques for
minimizing the cost of individual transpositions, and dynamic programming
algorithms for finding minimum-cost cycle decompositions. The presented
algorithms have applications in information theory, bioinformatics, and
algebra.
|
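For the uniform-cost special case of the problem in 1007.4236, the minimum decomposition has a classical closed form: n minus the number of cycles of the permutation. The sketch below shows that baseline only; the paper's cost-constrained and metric-path algorithms are more involved:

```python
def min_transpositions(perm):
    """Minimum number of transpositions sorting the 0-indexed permutation
    `perm` under uniform cost: n minus the number of cycles."""
    n = len(perm)
    seen, cycles = [False] * n, 0
    for i in range(n):
        if not seen[i]:
            cycles += 1          # found a new cycle; walk it to the end
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return n - cycles
```

Each k-cycle needs exactly k - 1 transpositions, and cycles are independent, which gives the n - (#cycles) count.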
1007.4286
|
Queue Length Asymptotics for Generalized Max-Weight Scheduling in the
presence of Heavy-Tailed Traffic
|
cs.NI cs.IT math.IT math.PR
|
We investigate the asymptotic behavior of the steady-state queue length
distribution under generalized max-weight scheduling in the presence of
heavy-tailed traffic. We consider a system consisting of two parallel queues,
served by a single server. One of the queues receives heavy-tailed traffic, and
the other receives light-tailed traffic. We study the class of throughput
optimal max-weight-alpha scheduling policies, and derive an exact asymptotic
characterization of the steady-state queue length distributions. In particular,
we show that the tail of the light queue distribution is heavier than a
power-law curve, whose tail coefficient we obtain explicitly. Our asymptotic
characterization also contains an intuitively surprising result - the
celebrated max-weight scheduling policy leads to the worst possible tail of the
light queue distribution, among all non-idling policies. Motivated by the above
negative result regarding the max-weight-alpha policy, we analyze a
log-max-weight (LMW) scheduling policy. We show that the LMW policy guarantees
an exponentially decaying light queue tail, while still being throughput
optimal.
|
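The two scheduling rules compared in 1007.4286 can be sketched for the two-queue, single-server setup. The arrival rates, service model, and Pareto tail index below are arbitrary illustrative choices, not the paper's model:

```python
import math
import random

def maxweight_choice(qH, qL):
    # Classic max-weight (alpha = 1): serve the longer queue; per the
    # abstract, this lets a heavy-queue backlog starve the light queue.
    return 'H' if qH >= qL else 'L'

def log_maxweight_choice(qH, qL):
    # Log-max-weight (LMW): the heavy queue's weight grows only
    # logarithmically, shielding the light queue's tail.
    return 'H' if math.log(1 + qH) >= qL else 'L'

def simulate(policy, T=20000, seed=1):
    """Peak light-queue length under `policy` for one arrival sample path."""
    rng = random.Random(seed)
    qH = qL = peakL = 0
    for _ in range(T):
        if rng.random() < 0.05:                 # heavy-tailed batch arrivals
            qH += int(rng.paretovariate(1.5))
        if rng.random() < 0.3:                  # light Bernoulli traffic
            qL += 1
        if policy(qH, qL) == 'H':               # serve one packet per slot
            qH = max(0, qH - 1)
        else:
            qL = max(0, qL - 1)
        peakL = max(peakL, qL)
    return peakL
```

Under LMW a huge heavy-queue backlog no longer forces the light queue to wait: log(1 + 10**4) is about 9.2, so a light queue of length 10 already wins the comparison.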
1007.4294
|
Properties of optimal prefix-free machines as instantaneous codes
|
cs.IT math.IT math.LO
|
The optimal prefix-free machine U is a universal decoding algorithm used to
define the notion of program-size complexity H(s) for a finite binary string s.
Since the set of all halting inputs for U is chosen to form a prefix-free set,
the optimal prefix-free machine U can be regarded as an instantaneous code for
a noiseless source coding scheme. In this paper, we investigate the properties of
optimal prefix-free machines as instantaneous codes. In particular, we
investigate the properties of the set U^{-1}(s) of codewords associated with a
symbol s. Namely, we investigate the number of codewords in U^{-1}(s) and the
distribution of codewords in U^{-1}(s) for each symbol s, using the toolkit of
algorithmic information theory.
|
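The instantaneous-code view in 1007.4294 rests on the prefix-free property and the Kraft inequality. A minimal check of both, for plain binary codes (not the optimal machine U itself, which is not computable), might look like:

```python
def kraft_sum(lengths):
    """Kraft sum of binary codeword lengths; a sum <= 1 is necessary for
    a prefix-free code with these lengths (and sufficient for some one)."""
    return sum(2.0 ** -l for l in lengths)

def is_prefix_free(codewords):
    """True iff no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)
```

Prefix-freeness is exactly what makes a code instantaneous: a decoder can emit each symbol as soon as its codeword ends, with no lookahead.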
1007.4324
|
Clustering Unstructured Data (Flat Files) - An Implementation in Text
Mining Tool
|
cs.IR
|
With the advancement of technology and reduced storage costs, individuals and
organizations are tending towards the usage of electronic media for storing
textual information and documents. It is time consuming for readers to retrieve
relevant information from unstructured document collection. It is easier and
less time consuming to find documents from a large collection when the
collection is ordered or classified by group or category. The problem of
finding the best such grouping, however, remains open. This paper discusses
our implementation of the k-means clustering algorithm for unstructured text
documents, from the representation of unstructured text through to the
resulting set of clusters. Based on the
analysis of resulting clusters for a sample set of documents, we have also
proposed a technique to represent documents that can further improve the
clustering result.
|
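The pipeline in 1007.4324 (represent unstructured text as vectors, then cluster with k-means) can be sketched as below. The toy corpus, raw term counts, and deterministic farthest-point initialization are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np
from collections import Counter

docs = ["stock market trading", "market stock prices",
        "soccer football match", "football match goal"]

# Bag-of-words term-count vectors, normalized to unit length so that the
# dot product equals cosine similarity.
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[Counter(d.split())[w] for w in vocab] for d in docs], float)
X /= np.linalg.norm(X, axis=1, keepdims=True)

def kmeans(X, k, iters=20):
    # Deterministic farthest-point init keeps the demo reproducible.
    centers = [X[0]]
    while len(centers) < k:
        sims = np.max(X @ np.array(centers).T, axis=1)
        centers.append(X[np.argmin(sims)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmax(X @ centers.T, axis=1)   # nearest by cosine
        for j in range(k):
            members = X[labels == j]
            if len(members):                        # recompute centroid
                c = members.mean(axis=0)
                centers[j] = c / np.linalg.norm(c)
    return labels

labels = kmeans(X, 2)
```

On this corpus the finance documents and the sports documents share no vocabulary, so the two clusters separate after the first assignment step.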
1007.4371
|
Improving the Sphere-Packing Bound for Binary Codes over Memoryless
Symmetric Channels
|
cs.IT math.IT
|
We obtain a lower bound on the minimum required code length of binary codes.
The bound rests on a close relation between Ulam's liar game and channel
coding. In fact, Spencer's optimal solution to the game is used to derive this
new bound, which improves the famous Sphere-Packing Bound.
|
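For reference, the classical Sphere-Packing (Hamming) bound that 1007.4371 improves upon can be computed directly. The sketch below finds the smallest length n admitting M codewords at error-correction radius e; it is the baseline bound only, not the paper's improved one:

```python
from math import comb

def hamming_volume(n, e):
    """Number of binary words within Hamming distance e of a codeword."""
    return sum(comb(n, i) for i in range(e + 1))

def sphere_packing_min_length(M, e):
    """Smallest n with 2**n >= M * V(n, e): the classical sphere-packing
    lower bound on the length of a binary e-error-correcting code."""
    n = 1
    while 2 ** n < M * hamming_volume(n, e):
        n += 1
    return n
```

The bound is met with equality by perfect codes: the Hamming(7,4) code (16 codewords, 1 error) and the binary Golay code (2**12 codewords, 3 errors) both achieve it.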
1007.4418
|
Distributed Source Coding of Correlated Gaussian Sources
|
cs.IT math.IT
|
We consider the distributed source coding system of $L$ correlated Gaussian
sources $Y_i,i=1,2,...,L$ which are noisy observations of correlated Gaussian
remote sources $X_k, k=1,2,...,K$. We assume that
$Y^L={}^{\rm t}(Y_1,Y_2,...,Y_L)$ is an observation of the source vector
$X^K={}^{\rm t}(X_1,X_2,...,X_K)$, having the form $Y^L=AX^K+N^L$, where $A$
is an $L\times K$ matrix and
$N^L={}^{\rm t}(N_1,N_2,...,N_L)$ is a vector of $L$ independent Gaussian
random variables also independent of $X^K$. In this system $L$ correlated
Gaussian observations are separately compressed by $L$ encoders and sent to the
information processing center. We study the remote source coding problem where
the decoder at the center attempts to reconstruct the remote source $X^K$. We
consider three distortion criteria based on the covariance matrix of the
estimation error on $X^K$. For each of those three criteria we derive explicit
inner and outer bounds of the rate distortion region. Next, in the case of
$K=L$ and $A=I_L$, we study the multiterminal source coding problem where the
decoder wishes to reconstruct the observation $Y^L=X^L+N^L$. To investigate
this problem we shall establish a result which provides a strong connection
between the remote source coding problem and the multiterminal source coding
problem. Using this result, we derive several new partial solutions to the
multiterminal source coding problem.
|
1007.4440
|
Modeling correlated human dynamics
|
physics.soc-ph cs.SI
|
We empirically study the activity patterns of individual blog-posting and
find significant memory effects. The memory coefficient first decays in a power
law and then turns to an exponential form. Moreover, the inter-event time
distribution displays a heavy-tailed nature with power-law exponent dependent
on the activity. Our findings challenge the priority-queue model, which can
reproduce neither the memory effects nor the activity-dependent distributions.
We argue that there is another kind of human activity pattern, driven by
personal interests and characterized by strong memory effects. Accordingly, we
propose a simple
model based on temporal preference, which can well reproduce both the
heavy-tailed nature and the strong memory effects. This work helps in
understanding both the temporal regularities and the predictability of human
behaviors.
|
1007.4467
|
Molecular Recognition as an Information Channel: The Role of
Conformational Changes
|
q-bio.BM cs.IT math.IT physics.bio-ph
|
Molecular recognition, which is essential in processing information in
biological systems, takes place in a crowded noisy biochemical environment and
requires the recognition of a specific target within a background of various
similar competing molecules. We consider molecular recognition as a
transmission of information via a noisy channel and use this analogy to gain
insights on the optimal, or fittest, molecular recognizer. We focus on the
optimal structural properties of the molecules such as flexibility and
conformation. We show that conformational changes upon binding, which often
occur during molecular recognition, may optimize the detection performance of
the recognizer. We thus suggest a generic design principle termed
'conformational proofreading' in which deformation enhances detection. We
evaluate the optimal flexibility of the molecular recognizer, which is
analogous to the stochasticity in a decision unit. In some scenarios, a
flexible recognizer, i.e., a stochastic decision unit, performs better than a
rigid, deterministic one. As a biological example, we discuss conformational
changes during homologous recombination, the process of genetic exchange
between two DNA strands.
|
1007.4471
|
The physical language of molecular codes: A rate-distortion approach to
the evolution and emergence of biological codes
|
q-bio.BM cs.IT math.IT physics.bio-ph
|
The function of the organism hinges on the performance of its
information-processing networks, which convey information via molecular
recognition. Many paths within these networks utilize molecular codebooks, such
as the genetic code, to translate information written in one class of molecules
into another molecular "language". The present paper examines the emergence
and evolution of molecular codes in terms of rate-distortion theory and reviews
recent results of this approach. We discuss how the biological problem of
maximizing the fitness of an organism by optimizing its molecular coding
machinery is equivalent to the communication engineering problem of designing
an optimal information channel. The fitness of a molecular code takes into
account the interplay between the quality of the channel and the cost of
resources which the organism needs to invest in its construction and
maintenance. We analyze the dynamics of a population of organisms that compete
according to the fitness of their codes. The model suggests a generic mechanism
for the emergence of molecular codes as a phase transition in an information
channel. This mechanism is put into biological context and demonstrated in a
simple example.
|
1007.4523
|
A Hybrid Model for Disease Spread and an Application to the SARS
Pandemic
|
cs.MA q-bio.OT
|
Pandemics can cause immense disruption and damage to communities and
societies. Thus far, modeling of pandemics has focused on either large-scale
difference equation models like the SIR and the SEIR models, or detailed
micro-level simulations, which are harder to apply at a global scale. This
paper introduces a hybrid model for pandemics considering both global and local
spread of infections. We hypothesize that the spread of an infectious disease
between regions is significantly influenced by global traffic patterns and the
spread within a region is influenced by local conditions. Thus we model the
spread of pandemics considering the connections between regions for the global
spread of infection and population density based on the SEIR model for the
local spread of infection. We validate our hybrid model by carrying out a
simulation study for the spread of SARS pandemic of 2002-2003 using available
data on population, population density, and traffic networks between different
regions. While it is well-known that international relationships and global
traffic patterns significantly influence the spread of pandemics, our results
show that integrating these factors into relatively simple models can greatly
improve the results of modeling disease spread.
|
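The local-spread component of the hybrid model in 1007.4523 builds on the SEIR compartments. A minimal forward-Euler sketch of a single region's SEIR dynamics follows; the rates are arbitrary illustrative values, and the paper additionally couples regions through traffic networks:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=0.1):
    """One forward-Euler step of the SEIR model (population fractions).
    beta: transmission rate, sigma: incubation rate, gamma: recovery rate.
    The total s + e + i + r is conserved exactly by construction."""
    new_exposed   = beta * s * i * dt
    new_infective = sigma * e * dt
    new_recovered = gamma * i * dt
    return (s - new_exposed,
            e + new_exposed - new_infective,
            i + new_infective - new_recovered,
            r + new_recovered)

# A single-region outbreak seeded with 1% infectives.
s, e, i, r = 0.99, 0.0, 0.01, 0.0
for _ in range(2000):                  # 200 time units at dt = 0.1
    s, e, i, r = seir_step(s, e, i, r, beta=0.6, sigma=0.2, gamma=0.1)
```

With beta/gamma = 6 the basic reproduction number is well above 1, so the outbreak takes off and most of the population ends up in the recovered compartment.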
1007.4527
|
Optimal Design of a Molecular Recognizer: Molecular Recognition as a
Bayesian Signal Detection Problem
|
q-bio.MN cs.IT math.IT physics.bio-ph
|
Numerous biological functions - such as enzymatic catalysis, the immune
response system, and the DNA-protein regulatory network - rely on the ability of
molecules to specifically recognize target molecules within a large pool of
similar competitors in a noisy biochemical environment. Using the basic
framework of signal detection theory, we treat the molecular recognition
process as a signal detection problem and examine its overall performance.
Thus, we evaluate the optimal properties of a molecular recognizer in the
presence of competition and noise. Our analysis reveals that the optimal design
undergoes a "phase transition" as the structural properties of the molecules
and interaction energies between them vary. In one phase, the recognizer should
be complementary in structure to its target (like a lock and a key), while in
the other, conformational changes upon binding, which often accompany molecular
recognition, enhance recognition quality. Using this framework, the abundance
of conformational changes may be explained as a result of increasing the
fitness of the recognizer. Furthermore, this analysis may be used in future
design of artificial signal processing devices based on biomolecules.
|
1007.4531
|
Competitive Analysis of Minimum-Cut Maximum Flow Algorithms in Vision
Problems
|
cs.CV cs.DM math.CO math.OC
|
Rapid advances in image acquisition and storage technology underline the need
for algorithms that are capable of solving large scale image processing and
computer-vision problems. The minimum cut problem plays an important role in
processing many of these imaging problems such as, image and video
segmentation, stereo vision, multi-view reconstruction and surface fitting.
While several min-cut/max-flow algorithms can be found in the literature, their
performance in practice has been studied primarily outside the scope of
computer vision. We present here the results of a comprehensive computational
study, in terms of execution times and memory utilization, of four recently
published algorithms, which optimally solve the {\em s-t} cut and maximum flow
problems: (i) Goldberg's and Tarjan's {\em Push-Relabel}; (ii) Hochbaum's {\em
pseudoflow}; (iii) Boykov's and Kolmogorov's {\em augmenting paths}; and (iv)
Goldberg's {\em partial augment-relabel}. Our results demonstrate that
Hochbaum's {\em pseudoflow} algorithm is faster and utilizes less memory than
the other algorithms on all problem instances investigated.
|
1007.4540
|
Broadcast Approach and Oblivious Cooperative Strategies for the Wireless
Relay Channel - Part I: Sequential Decode-and-Forward (SDF)
|
cs.IT math.IT
|
In this two-part paper we consider a wireless network in which a source
terminal communicates with a destination, and a relay terminal is occasionally
present in close proximity to the source without the source's knowledge,
suggesting oblivious protocols. The source-relay channel is assumed to be a
fixed-gain AWGN channel due to the proximity, while the source-destination and
relay-destination channels are subject to block flat Rayleigh fading. Perfect
CSI is assumed at the respective receivers only. With the average throughput
as a performance measure, we incorporate a two-layer broadcast approach into
two cooperative strategies based on the decode-and-forward scheme: Sequential
Decode-and-Forward (SDF) in Part I and Block-Markov (BM) in Part II. The
broadcast approach splits the transmitted rate into superimposed layers
corresponding to 'bad' and 'good' channel states, allowing better adaptation
to the actual channel conditions. In Part I, the achievable rate expressions
for the SDF strategy are derived, both numerically and analytically, under the
broadcast approach for multiple settings, including single user, MISO, and the
general relay setting using a successive decoding technique. Continuous
broadcasting lower bounds are derived for the MISO and oblivious cooperation
scenarios.
|
1007.4542
|
Broadcast Approach and Oblivious Cooperative Strategies for the Wireless
Relay Channel - Part II: Block-Markov Decode-and-Forward (BMDF)
|
cs.IT math.IT
|
This is the second in a two-part series of papers on the incorporation of the
broadcast approach into oblivious protocols for the relay channel where the
source and the relay are collocated. Part I described the broadcast approach
and its benefits in terms of achievable rates when used with the sequential
decode-and-forward (SDF) scheme. Part II investigates yet another oblivious
scheme, Block-Markov decode-and-forward (BMDF), under single- and two-layered
transmissions. For the single layer, previously reported results are enhanced
and a conjecture regarding the optimal correlation coefficient between the
source's and the relay's transmissions is established. For discrete
multi-layer transmission of two or more layers, it is shown that
perfect-cooperation (2x1 MISO) rates are attained even with low collocation
gains, at the expense of a longer delay, improving upon those achievable by
the SDF.
|
1007.4591
|
Biomolecular electrostatics using a fast multipole BEM on up to 512 GPUs
and a billion unknowns
|
cs.CE physics.chem-ph physics.comp-ph
|
We present teraflop-scale calculations of biomolecular electrostatics enabled
by the combination of algorithmic and hardware acceleration. The algorithmic
acceleration is achieved with the fast multipole method (FMM) in conjunction
with a boundary element method (BEM) formulation of the continuum electrostatic
model, as well as the BIBEE approximation to BEM. The hardware acceleration is
achieved through graphics processors, GPUs. We demonstrate the power of our
algorithms and software for the calculation of the electrostatic interactions
between biological molecules in solution. The applications demonstrated include
the electrostatics of protein--drug binding and several multi-million atom
systems consisting of hundreds to thousands of copies of lysozyme molecules.
The parallel scalability of the software was studied in a cluster at the
Nagasaki Advanced Computing Center, using 128 nodes, each with 4 GPUs. Delicate
tuning has resulted in strong scaling with parallel efficiency of 0.8 for 256
and 0.5 for 512 GPUs. The largest application run, with over 20 million atoms
and one billion unknowns, required only one minute on 512 GPUs. We are
currently adapting our BEM software to solve the linearized Poisson-Boltzmann
equation for dilute ionic solutions, and it is also designed to be flexible
enough to be extended for a variety of integral equation problems, ranging from
Poisson problems to Helmholtz problems in electromagnetics and acoustics to
high Reynolds number flow.
|
1007.4604
|
Concavity of Mutual Information Rate for Input-Restricted Finite-State
Memoryless Channels at High SNR
|
cs.IT math.IT
|
We consider a finite-state memoryless channel with i.i.d. channel state and
the input Markov process supported on a mixing finite-type constraint. We
discuss the asymptotic behavior of entropy rate of the output hidden Markov
chain and deduce that the mutual information rate of such a channel is concave
with respect to the parameters of the input Markov processes at high
signal-to-noise ratio. In principle, the concavity result enables good
numerical approximation of the maximum mutual information rate and capacity of
such a channel.
|
1007.4636
|
Computational Complexity Analysis of Simple Genetic Programming On Two
Problems Modeling Isolated Program Semantics
|
cs.NE cs.CC cs.DS
|
Analyzing the computational complexity of evolutionary algorithms for binary
search spaces has significantly increased their theoretical understanding. With
this paper, we start the computational complexity analysis of genetic
programming. We set up several simplified genetic programming algorithms and
analyze them on two separable model problems, ORDER and MAJORITY, each of which
captures an important facet of typical genetic programming problems. Both
analyses give first rigorous insights on aspects of genetic programming design,
highlighting in particular the impact of accepting or rejecting neutral moves
and the importance of a local mutation operator.
|
1007.4707
|
Simple Max-Min Ant Systems and the Optimization of Linear Pseudo-Boolean
Functions
|
cs.NE
|
With this paper, we contribute to the understanding of ant colony
optimization (ACO) algorithms by formally analyzing their runtime behavior. We
study simple MAX-MIN ant systems on the class of linear pseudo-Boolean
functions defined on binary strings of length 'n'. Our investigations point out
how the progress according to function values is stored in pheromone. We
provide a general upper bound of O((n^3 \log n)/ \rho) for two ACO variants on
all linear functions, where (\rho) determines the pheromone update strength.
Furthermore, we show improved bounds for two well-known linear pseudo-Boolean
functions called OneMax and BinVal and give additional insights using an
experimental study.
|
1007.4748
|
Detecting influenza outbreaks by analyzing Twitter messages
|
cs.IR cs.CL
|
We analyze over 500 million Twitter messages from an eight month period and
find that tracking a small number of flu-related keywords allows us to forecast
future influenza rates with high accuracy, obtaining a 95% correlation with
national health statistics. We then analyze the robustness of this approach to
spurious keyword matches, and we propose a document classification component to
filter these misleading messages. We find that this document classifier can
reduce error rates by over half in simulated false alarm experiments, though
more research is needed to develop methods that are robust in cases of
extremely high noise.
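The core of the approach, correlating weekly counts of flu-related keyword matches with official influenza rates, can be sketched as follows. All numbers below are hypothetical illustrations, not the paper's data:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly counts of flu-related keyword matches on Twitter
keyword_counts = [120, 150, 310, 480, 890, 760, 420, 200]
# Hypothetical national influenza-like-illness rates for the same weeks
ili_rates = [1.1, 1.4, 2.9, 4.6, 8.2, 7.1, 4.0, 1.9]

print(f"correlation: {pearson(keyword_counts, ili_rates):.3f}")
```

The document classifier the abstract mentions would sit upstream of this step, filtering spurious matches out of `keyword_counts` before correlating.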
|
1007.4767
|
Formalization of Psychological Knowledge in Answer Set Programming and
its Application
|
cs.AI cs.LO
|
In this paper we explore the use of Answer Set Programming (ASP) to
formalize, and reason about, psychological knowledge. In the field of
psychology, a considerable amount of knowledge is still expressed using only
natural language. This lack of a formalization complicates accurate studies,
comparisons, and verification of theories. We believe that ASP, a knowledge
representation formalism allowing for concise and simple representation of
defaults, uncertainty, and evolving domains, can be used successfully for the
formalization of psychological knowledge. To demonstrate the viability of ASP
for this task, in this paper we develop an ASP-based formalization of the
mechanics of Short-Term Memory. We also show that our approach can have rather
immediate practical uses by demonstrating an application of our formalization
to the task of predicting a user's interaction with a graphical interface.
|
1007.4801
|
MIMO Wiretap Channels with Arbitrarily Varying Eavesdropper Channel
States
|
cs.IT math.IT
|
In this work, a class of information theoretic secrecy problems is addressed
where the eavesdropper channel states are completely unknown to the legitimate
parties. In particular, MIMO wiretap channel models are considered where the
channel of the eavesdropper is arbitrarily varying over time. Assuming that the
number of antennas of the eavesdropper is limited, the secrecy rate of the MIMO
wiretap channel in the sense of strong secrecy is derived and shown to match
the converse in secure degrees of freedom. It is proved that there exists
a universal coding scheme that secures the confidential message against any
sequence of channel states experienced by the eavesdropper. This yields the
conclusion that secure communication is possible regardless of the location or
channel states of (potentially infinite number of) eavesdroppers. Additionally,
it is observed that the present setting renders the secrecy capacity problems
for multi-terminal wiretap-type channels more tractable compared to the case
with full or partial knowledge of eavesdropper channel states. To demonstrate
this observation, secure degrees of freedom regions are derived for the
Gaussian MIMO multiple access wiretap channel (MIMO MAC-WT) and the Gaussian
MIMO broadcast wiretap channel (MIMO BC-WT) where the transmitter(s) and the
intended receiver(s) have the same number of antennas.
|
1007.4840
|
Greedy Maximal Scheduling in Wireless Networks
|
cs.IT cs.NI math.IT
|
In this paper we consider greedy scheduling algorithms in wireless networks,
i.e., the schedules are computed by adding links greedily based on some
priority vector. Two special cases are considered: 1) Longest Queue First (LQF)
scheduling, where the priorities are computed using queue lengths, and 2)
Static Priority (SP) scheduling, where the priorities are pre-assigned. We
first propose a closed-form lower bound stability region for LQF scheduling,
and discuss its tightness in some scenarios. We then propose a lower
bound stability region for SP scheduling with multiple priority vectors, as
well as a heuristic priority assignment algorithm, which is related to the
well-known Expectation-Maximization (EM) algorithm. The performance gain of the
proposed heuristic algorithm is finally confirmed by simulations.
|
1007.4868
|
Predicting Suicide Attacks: A Fuzzy Soft Set Approach
|
cs.AI
|
This paper models a decision support system to predict the occurrence of a
suicide attack in a given collection of cities. The system comprises two
parts. The first part analyzes and identifies the factors which affect the
prediction. Since incomplete information and the use of linguistic terms by
experts are two characteristic features of this peculiar prediction problem,
we exploit the theory of fuzzy soft sets. The second part of the model is an
algorithm, viz. FSP, which takes the assessment of factors given in the first
part as its input and produces a possibility profile of the cities likely to
suffer an attack. The algorithm is of O(2^n) complexity and is illustrated by
an example solved in detail. Simulation results for the algorithm are
presented, giving insight into the strengths and weaknesses of FSP. Three
different decision-making measures are simulated and compared in our
discussion.
|
1007.4872
|
Asynchronous Capacity per Unit Cost
|
cs.IT math.IT
|
The capacity per unit cost, or equivalently minimum cost to transmit one bit,
is a well-studied quantity. It has been studied under the assumption of full
synchrony between the transmitter and the receiver. In many applications, such
as sensor networks, transmissions are very bursty, with small amounts of bits
arriving infrequently at random times. In such scenarios, the cost of acquiring
synchronization is significant and one is interested in the fundamental limits
on communication without assuming a priori synchronization. In this paper, we
show that the minimum cost to transmit B bits of information asynchronously is
(B + \bar{H})k_sync, where k_sync is the synchronous minimum cost per bit and
\bar{H} is a measure of timing uncertainty equal to the entropy for most
reasonable arrival time distributions. This result holds when the transmitter
can stay idle at no cost and is a particular case of a general result which
holds for arbitrary cost functions.
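The main formula lends itself to a direct numerical illustration. In this sketch the arrival-time distribution, bit count B, and synchronous cost per bit are invented values chosen so that the timing uncertainty H̄ is easy to verify by hand:

```python
import math

def entropy(p):
    # Shannon entropy (in bits) of a discrete distribution.
    return -sum(q * math.log2(q) for q in p if q > 0)

def async_min_cost(B, arrival_dist, k_sync):
    # Minimum cost to transmit B bits asynchronously: (B + H_bar) * k_sync,
    # where the timing uncertainty H_bar equals the entropy of the
    # arrival-time distribution for most reasonable distributions.
    return (B + entropy(arrival_dist)) * k_sync

# Arrival time uniform over 1024 slots, so H_bar = log2(1024) = 10 bits
uniform = [1 / 1024] * 1024
print(async_min_cost(100, uniform, k_sync=2.0))  # (100 + 10) * 2 = 220.0
```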
|
1007.4955
|
Decentralized Dynamic Hop Selection and Power Control in Cognitive
Multi-hop Relay Systems
|
cs.IT math.IT
|
In this paper, we consider a cognitive multi-hop relay secondary user (SU)
system sharing the spectrum with some primary users (PU). The transmit power as
well as the hop selection of the cognitive relays can be dynamically adapted
according to the local (and causal) knowledge of the instantaneous channel
state information (CSI) in the multi-hop SU system. We shall determine a low
complexity, decentralized algorithm to maximize the average end-to-end
throughput of the SU system with dynamic spatial reuse. The problem is
challenging due to the decentralized requirement as well as the causality
constraint on the knowledge of CSI. Furthermore, the problem belongs to the
class of stochastic Network Utility Maximization (NUM) problems which is quite
challenging. We exploit the time-scale difference between the PU activity and
the CSI fluctuations and decompose the problem into a master problem and
subproblems. We derive an asymptotically optimal low complexity solution using
divide-and-conquer and illustrate that significant performance gain can be
obtained through dynamic hop selection and power control. The worst case
complexity and memory requirement of the proposed algorithm are O(M^2) and
O(M^3), respectively, where $M$ is the number of SUs.
|
1007.5004
|
A Repeated Game Formulation of Energy-Efficient Decentralized Power
Control
|
math-ph cs.GT cs.IT math.IT math.MP
|
Decentralized multiple access channels where each transmitter wants to
selfishly maximize his transmission energy-efficiency are considered.
Transmitters are assumed to choose freely their power control policy and
interact (through multiuser interference) several times. It is shown that the
corresponding conflict of interest can have a predictable outcome, namely a
finitely or discounted repeated game equilibrium. Remarkably, it is shown that
this equilibrium is Pareto-efficient under reasonable sufficient conditions and
the corresponding decentralized power control policies can be implemented under
realistic information assumptions: only individual channel state information
and a public signal are required to implement the equilibrium strategies.
Explicit equilibrium conditions are derived in terms of minimum number of game
stages or maximum discount factor. Both analytical and simulation results are
provided to compare the performance of the proposed power control policies with
those already existing and exploiting the same information assumptions, namely
those derived for the one-shot and Stackelberg games.
|
1007.5024
|
A Program-Level Approach to Revising Logic Programs under the Answer Set
Semantics
|
cs.AI
|
An approach to the revision of logic programs under the answer set semantics
is presented. For programs P and Q, the goal is to determine the answer sets
that correspond to the revision of P by Q, denoted P * Q. A fundamental
principle of classical (AGM) revision, and the one that guides the approach
here, is the success postulate. In AGM revision, this stipulates that A is in K
* A. By analogy with the success postulate, for programs P and Q, this means
that the answer sets of Q will in some sense be contained in those of P * Q.
The essential idea is that for P * Q, a three-valued answer set for Q,
consisting of positive and negative literals, is first determined. The positive
literals constitute a regular answer set, while the negated literals make up a
minimal set of naf literals required to produce the answer set from Q. These
literals are propagated to the program P, along with those rules of Q that are
not decided by these literals. The approach differs from work in update logic
programs in two main respects. First, we ensure that the revising logic program
has higher priority, and so we satisfy the success postulate; second, for the
preference implicit in a revision P * Q, the program Q as a whole takes
precedence over P, unlike update logic programs, since answer sets of Q are
propagated to P. We show that a core group of the AGM postulates are satisfied,
as are the postulates that have been proposed for update logic programs.
|
1007.5030
|
Analysis of a Splitting Estimator for Rare Event Probabilities in
Jackson Networks
|
math.PR cs.CE stat.CO
|
We consider a standard splitting algorithm for the rare-event simulation of
overflow probabilities in any subset of stations in a Jackson network at level
n, starting at a fixed initial position. It was shown in DeanDup09 that a
subsolution to the Isaacs equation guarantees that a subexponential number of
function evaluations (in n) suffice to estimate such overflow probabilities
within a given relative accuracy. Our analysis here shows that in fact
O(n^{2{\beta}+1}) function evaluations suffice to achieve a given relative
precision, where {\beta} is the number of bottleneck stations in the network.
This is the first rigorous analysis that allows one to favorably compare splitting
against directly computing the overflow probability of interest, which can be
evaluated by solving a linear system of equations with O(n^{d}) variables.
|
1007.5044
|
Symmetric Allocations for Distributed Storage
|
cs.IT math.IT
|
We consider the problem of optimally allocating a given total storage budget
in a distributed storage system. A source has a data object which it can code
and store over a set of storage nodes; it is allowed to store any amount of
coded data in each node, as long as the total amount of storage used does not
exceed the given budget. A data collector subsequently attempts to recover the
original data object by accessing each of the nodes independently with some
constant probability. By using an appropriate code, successful recovery occurs
when the total amount of data in the accessed nodes is at least the size of the
original data object. The goal is to find an optimal storage allocation that
maximizes the probability of successful recovery. This optimization problem is
challenging because of its discrete nature and nonconvexity, despite its simple
formulation. Symmetric allocations (in which all nonempty nodes store the same
amount of data), though intuitive, may be suboptimal; the problem is nontrivial
even if we optimize over only symmetric allocations. Our main result shows that
the symmetric allocation that spreads the budget maximally over all nodes is
asymptotically optimal in a regime of interest. Specifically, we derive an
upper bound for the suboptimality of this allocation and show that the
performance gap vanishes asymptotically in the specified regime. Further, we
explicitly find the optimal symmetric allocation for a variety of cases. Our
results can be applied to distributed storage systems and other problems
dealing with reliability under uncertainty, including delay tolerant networks
(DTNs) and content delivery networks (CDNs).
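The trade-off behind the main result can be illustrated numerically: with budget T (object size normalized to 1) spread evenly over m nodes, recovery succeeds iff at least ceil(m/T) of the m independent accesses succeed. The parameter values below are hypothetical examples, not drawn from the paper:

```python
import math

def symmetric_success_prob(T, m, p):
    # Each of the m nonempty nodes stores T/m, so the accessed nodes hold
    # at least one object's worth of data iff at least ceil(m/T) of the m
    # nodes (each accessed independently with probability p) respond.
    k_min = math.ceil(m / T)
    return sum(math.comb(m, k) * p**k * (1 - p) ** (m - k)
               for k in range(k_min, m + 1))

# Hypothetical: budget twice the object size (T = 2), access probability 0.6
for m in (2, 4, 8):
    print(m, round(symmetric_success_prob(2, m, 0.6), 4))
```

Running this shows the success probability is not monotone in m, which is why symmetric allocations are nontrivial to optimize and why maximal spreading is only shown to be optimal asymptotically in a particular regime.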
|
1007.5080
|
Analysis Framework for Opportunistic Spectrum OFDMA and its Application
to the IEEE 802.22 Standard
|
cs.NI cs.IT cs.PF math.IT
|
We present an analytical model that enables throughput evaluation of
Opportunistic Spectrum Orthogonal Frequency Division Multiple Access (OS-OFDMA)
networks. The core feature of the model, based on a discrete time Markov chain,
is the consideration of different channel and subchannel allocation strategies
under different Primary and Secondary user types, traffic and priority levels.
The analytical model also assesses the impact of different spectrum sensing
strategies on the throughput of OS-OFDMA network. The analysis applies to the
IEEE 802.22 standard, to evaluate the impact of two-stage spectrum sensing
strategy and varying temporal activity of wireless microphones on the IEEE
802.22 throughput. Our study suggests that OS-OFDMA with subchannel notching
and channel bonding could provide almost ten times higher throughput compared
with the design without those options, when the activity and density of
wireless microphones is very high. Furthermore, we confirm that OS-OFDMA
implementation without subchannel notching, used in the IEEE 802.22, is able to
support real-time and non-real-time quality of service classes, provided that
the temporal activity of wireless microphones is moderate (with approximately one
wireless microphone per 3,000 inhabitants with light urban population density
and short duty cycles). Finally, the two-stage spectrum sensing option improves
OS-OFDMA throughput, provided that the length of spectrum sensing at every
stage is optimized using our model.
|
1007.5104
|
An Empirical Study of Borda Manipulation
|
cs.AI
|
We study the problem of coalitional manipulation in elections using the
unweighted Borda rule. We provide empirical evidence of the manipulability of
Borda elections in the form of two new greedy manipulation algorithms based on
intuitions from the bin-packing and multiprocessor scheduling domains. Although
we have not been able to show that these algorithms beat existing methods in
the worst-case, our empirical evaluation shows that they significantly
outperform the existing method and are able to find optimal manipulations in
the vast majority of the randomly generated elections that we tested. These
empirical results provide further evidence that the Borda rule provides little
defense against coalitional manipulation.
|
1007.5110
|
Fully Dynamic Data Structure for Top-k Queries on Uncertain Data
|
cs.DB cs.DS
|
Top-$k$ queries allow end-users to focus on the most important (top-$k$)
answers amongst those which satisfy the query. In traditional databases, a user
defined score function assigns a score value to each tuple and a top-$k$ query
returns $k$ tuples with the highest score. In uncertain database, top-$k$
answer depends not only on the scores but also on the membership probabilities
of tuples. Several top-$k$ definitions covering different aspects of
score-probability interplay have been proposed in the recent
past~\cite{R10,R4,R2,R8}. Most of the existing work in this research field is
focused on developing efficient algorithms for answering top-$k$ queries on
static uncertain data. Any change (insertion, deletion of a tuple or change in
membership probability, score of a tuple) in underlying data forces
re-computation of query answers. Such re-computations are not practical
considering the dynamic nature of data in many applications. In this paper, we
propose a fully dynamic data structure that uses ranking function
$PRF^e(\alpha)$ proposed by Li et al.~\cite{R8} under the generally adopted
model of $x$-relations~\cite{R11}. $PRF^e$ can effectively approximate various
other top-$k$ definitions on uncertain data based on the value of parameter
$\alpha$. An $x$-relation consists of a number of $x$-tuples, where $x$-tuple
is a set of mutually exclusive tuples (up to a constant number) called
alternatives. Each $x$-tuple in a relation randomly instantiates into one tuple
from its alternatives. For an uncertain relation with $N$ tuples, our structure
answers top-$k$ queries in $O(k\log N)$ time, handles an update in $O(\log
N)$ time and takes $O(N)$ space. Finally, we evaluate practical efficiency of
our structure on both synthetic and real data.
|
1007.5114
|
Where are the hard manipulation problems?
|
cs.AI cs.CC cs.GT cs.MA
|
One possible escape from the Gibbard-Satterthwaite theorem is computational
complexity. For example, it is NP-hard to compute if the STV rule can be
manipulated. However, there is increasing concern that such results may not
reflect the difficulty of manipulation in practice. In this tutorial, I survey
recent results in this area.
|
1007.5120
|
Stable marriage problems with quantitative preferences
|
cs.AI cs.GT cs.MA
|
The stable marriage problem is a well-known problem of matching men to women
so that no man and woman, who are not married to each other, both prefer each
other. Such a problem has a wide variety of practical applications, ranging
from matching resident doctors to hospitals, to matching students to schools or
more generally to any two-sided market. In the classical stable marriage
problem, both men and women express a strict preference order over the members
of the other sex, in a qualitative way. Here we consider stable marriage
problems with quantitative preferences: each man (resp., woman) provides a
score for each woman (resp., man). Such problems are more expressive than the
classical stable marriage problems. Moreover, in some real-life situations it
is more natural to express scores (to model, for example, profits or costs)
rather than a qualitative preference ordering. In this context, we define new
notions of stability and optimality, and we provide algorithms to find
marriages which are stable and/or optimal according to these notions. While
expressivity greatly increases by adopting quantitative preferences, we show
that in most cases the desired solutions can be found by adapting existing
algorithms for the classical stable marriage problem.
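For reference, the classical Gale-Shapley algorithm that such adaptations start from can be run directly on score matrices by deriving each side's preference order from the scores. This is an illustrative sketch only: the score matrices are invented, and the paper's quantitative stability and optimality notions are richer than the classical stability used here.

```python
def gale_shapley(men_scores, women_scores):
    # Classical Gale-Shapley with preference orders derived from scores:
    # each man proposes in decreasing order of his scores for the women;
    # each woman keeps the suitor she scores highest.
    n = len(men_scores)
    # proposal order for each man: women sorted by his score, best first
    prefs = [sorted(range(n), key=lambda w: -men_scores[m][w]) for m in range(n)]
    next_prop = [0] * n   # index of the next woman each man will propose to
    fiance = [None] * n   # fiance[w] = man currently engaged to woman w
    free = list(range(n))
    while free:
        m = free.pop()
        w = prefs[m][next_prop[m]]
        next_prop[m] += 1
        if fiance[w] is None:
            fiance[w] = m
        elif women_scores[w][m] > women_scores[w][fiance[w]]:
            free.append(fiance[w])  # current partner becomes free again
            fiance[w] = m
        else:
            free.append(m)          # proposal rejected, m stays free
    return {fiance[w]: w for w in range(n)}

# Hypothetical 3x3 score matrices: men[m][w] and women[w][m]
men = [[9, 2, 1], [8, 7, 3], [4, 6, 5]]
women = [[5, 9, 2], [7, 1, 8], [3, 6, 4]]
print(gale_shapley(men, women))  # man -> woman matching
```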
|
1007.5129
|
An Efficient Automatic Mass Classification Method In Digitized
Mammograms Using Artificial Neural Network
|
cs.CV
|
In this paper we present an efficient computer aided mass classification
method in digitized mammograms using Artificial Neural Network (ANN), which
performs benign-malignant classification on region of interest (ROI) that
contains mass. One of the major mammographic characteristics for mass
classification is texture. ANN exploits this important factor to classify the
mass into benign or malignant. The statistical textural features used in
characterizing the masses are mean, standard deviation, entropy, skewness,
kurtosis and uniformity. The main aim of the method is to increase the
effectiveness and efficiency of the classification process in an objective
manner and to reduce the number of false positives for malignancies. A
three-layer artificial neural network (ANN) with seven features is proposed
for classifying the marked regions into benign and malignant; 90.91%
sensitivity and 83.87% specificity are achieved, which is very promising
compared to the radiologist's sensitivity of 75%.
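The statistical texture features named above can all be computed from the gray-level histogram of the ROI. A minimal sketch (the ROI pixel values are invented, and the ANN classifier itself is omitted):

```python
import math

def texture_features(pixels, levels=256):
    # Statistical texture features computed from the gray-level histogram
    # of a region of interest: mean, standard deviation, entropy,
    # skewness, kurtosis, and uniformity (a.k.a. energy).
    n = len(pixels)
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    p = [h / n for h in hist]  # normalized histogram
    mean = sum(i * pi for i, pi in enumerate(p))
    var = sum((i - mean) ** 2 * pi for i, pi in enumerate(p))
    std = math.sqrt(var)
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    skew = sum((i - mean) ** 3 * pi for i, pi in enumerate(p)) / (std ** 3 or 1)
    kurt = sum((i - mean) ** 4 * pi for i, pi in enumerate(p)) / (std ** 4 or 1)
    uniformity = sum(pi ** 2 for pi in p)
    return mean, std, entropy, skew, kurt, uniformity

# Toy 4x4 "ROI" flattened to a list of gray levels
roi = [10, 10, 12, 200, 10, 11, 12, 198, 10, 10, 13, 201, 9, 10, 12, 199]
print(texture_features(roi))
```

The resulting six-element vector (plus whatever seventh feature the method uses) would form one input sample to the three-layer network.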
|
1007.5130
|
Resource-Optimal Planning For An Autonomous Planetary Vehicle
|
cs.AI
|
Autonomous planetary vehicles, also known as rovers, are small autonomous
vehicles equipped with a variety of sensors used to perform exploration and
experiments on a planet's surface. Rovers work in a partially unknown
environment, with narrow energy/time/movement constraints and, typically, small
computational resources that limit the complexity of on-line planning and
scheduling, thus they represent a great challenge in the field of autonomous
vehicles. Indeed, formal models for such vehicles usually involve hybrid
systems with nonlinear dynamics, which are difficult to handle by most of the
current planning algorithms and tools. Therefore, when offline planning of the
vehicle activities is required, for example for rovers that operate without a
continuous Earth supervision, such planning is often performed on simplified
models that are not completely realistic. In this paper we show how the
UPMurphi model checking based planning tool can be used to generate
resource-optimal plans to control the engine of an autonomous planetary
vehicle, working directly on its hybrid model and taking into account several
safety constraints, thus achieving very accurate results.
|
1007.5133
|
Comparison of Support Vector Machine and Back Propagation Neural Network
in Evaluating the Enterprise Financial Distress
|
cs.LG
|
Recently, applying novel data mining techniques to evaluate enterprise
financial distress has received much research attention. Support Vector
Machines (SVM) and back-propagation neural (BPN) networks have been applied
successfully in many areas with excellent generalization results, such as rule
extraction, classification and evaluation. In this paper, a model based on an
SVM with a Gaussian RBF kernel is proposed for enterprise financial distress
evaluation. The BPN network is considered one of the simplest and most general
methods used for supervised training of multilayered neural networks. The
comparative results show that, though the difference between the performance
measures is marginal, the SVM gives higher precision and lower error rates.
|
1007.5137
|
Comparison Of Modified Dual Ternary Indexing And Multi-Key Hashing
Algorithms For Music Information Retrieval
|
cs.IR
|
In this work we have compared two indexing algorithms that have been used to
index and retrieve Carnatic music songs. We have compared a modified algorithm
of the Dual ternary indexing algorithm for music indexing and retrieval with
the multi-key hashing indexing algorithm proposed by us. The modification in
the dual ternary algorithm was essential to handle variable length query phrase
and to accommodate features specific to Carnatic music. The dual ternary
indexing algorithm is adapted for Carnatic music by segmenting using the
segmentation technique for Carnatic music. The dual ternary algorithm is
compared with the multi-key hashing algorithm designed by us for indexing and
retrieval in which features like MFCC, spectral flux, melody string and
spectral centroid are used as features for indexing data into a hash table. The
way in which collision resolution is handled by this hash table differs from
the normal hash table approaches. It was observed that multi-key hashing based
retrieval had a lower time complexity than dual-ternary based indexing.
The algorithms were also compared for their precision and recall, in which
multi-key hashing had a better recall than modified dual ternary indexing for
the sample data considered.
|
1007.5180
|
CLP-based protein fragment assembly
|
cs.AI cs.CE cs.PL q-bio.QM
|
The paper investigates a novel approach, based on Constraint Logic
Programming (CLP), to predict the 3D conformation of a protein via fragments
assembly. The fragments are extracted by a preprocessor (also developed for
this work) from a database of known protein structures that clusters and classifies
the fragments according to similarity and frequency. The problem of assembling
fragments into a complete conformation is mapped to a constraint solving
problem and solved using CLP. The constraint-based model uses a medium
discretization degree Ca-side chain centroid protein model that offers
efficiency and a good approximation for space filling. The approach adapts
existing energy models to the protein representation used and applies a large
neighboring search strategy. The results show the feasibility and efficiency
of the method. The declarative nature of the solution allows future extensions
to be included, e.g., fragments of different sizes for better accuracy.
|
1007.5282
|
Noise-based deterministic logic and computing: a brief survey
|
physics.data-an cs.IT math.IT physics.gen-ph
|
A short survey is provided about our recent explorations of the young topic
of noise-based logic. After outlining the motivation behind noise-based
computation schemes, we present a short summary of our ongoing efforts in the
introduction, development and design of several noise-based deterministic
multivalued logic schemes and elements. In particular, we describe classical,
instantaneous, continuum, spike and random-telegraph-signal based schemes with
applications such as circuits that emulate the brain's functioning and string
verification via a slow communication channel.
|
1007.5336
|
The Value of Staying Current when Beamforming
|
cs.IT math.IT
|
Beamforming is a widely used method of provisioning high quality wireless
channels that leads to high data rates and simple decoding structures. It
requires feedback of Channel State Information (CSI) from receiver to
transmitter, and the accuracy of this information is limited by rate
constraints on the feedback channel and by delay. It is important to understand
how the performance gains associated with beamforming depend on the accuracy or
currency of the Channel State Information. This paper quantifies performance
degradation caused by aging of CSI. It uses outage probability to measure the
currency of CSI, and to discount the performance gains associated with ideal
beamforming. Outage probability is a function of the beamforming algorithm and
results are presented for Transmit Antenna Selection and other widely used
methods. These results are translated into effective diversity orders for
Multiple Input Single Output (MISO) and Multiuser Multiple Input Multiple
Output (MIMO) systems.
|
1007.5354
|
Synchronization and Control in Intrinsic and Designed Computation: An
Information-Theoretic Analysis of Competing Models of Stochastic Computation
|
cond-mat.stat-mech cs.IT math.IT nlin.CD stat.ML
|
We adapt tools from information theory to analyze how an observer comes to
synchronize with the hidden states of a finitary, stationary stochastic
process. We show that synchronization is determined by both the process's
internal organization and by an observer's model of it. We analyze these
components using the convergence of state-block and block-state entropies,
comparing them to the previously known convergence properties of the Shannon
block entropy. Along the way, we introduce a hierarchy of information
quantifiers as derivatives and integrals of these entropies, which parallels a
similar hierarchy introduced for block entropy. We also draw out the duality
between synchronization properties and a process's controllability. The tools
lead to a new classification of a process's alternative representations in
terms of minimality, synchronizability, and unifilarity.
|
1007.5408
|
A Lower Bound to the Receiver Operating Characteristic of a Cognitive
Radio Network
|
cs.IT math.IT
|
Cooperative cognitive radio networks are investigated by using an
information-theoretic approach. This approach consists of interpreting the
decision process carried out at the fusion center as a binary (asymmetric)
channel, whose input is the presence of a primary signal and output is the
fusion center decision itself. The error probabilities of this channel are the
false-alarm and missed-detection probabilities. After calculating the mutual
information between the binary random variable representing the primary signal
presence and the set of sensor (or secondary user) output samples, we apply the
data-processing inequality to derive a lower bound to the receiver operating
characteristic. This basic idea is developed through the paper in order to
consider the cases of full channel and signal knowledge and of knowledge in
probability distribution. The advantage of this approach is that the ROC lower
bound derived is independent of the particular type of spectrum detection
algorithm and fusion rule considered. Then, it can be used as a benchmark for
existing practical systems.
|
1007.5421
|
Inference with Constrained Hidden Markov Models in PRISM
|
cs.AI cs.LO cs.PL
|
A Hidden Markov Model (HMM) is a common statistical model which is widely
used for analysis of biological sequence data and other sequential phenomena.
In the present paper we show how HMMs can be extended with side-constraints and
present constraint solving techniques for efficient inference. Defining HMMs
with side-constraints in Constraint Logic Programming has advantages in terms
of more compact expression and pruning opportunities during inference.
We present a PRISM-based framework for extending HMMs with side-constraints
and show how well-known constraints such as cardinality and all-different are
integrated. We experimentally validate our approach on the biologically
motivated problem of global pairwise alignment.
|
1007.5476
|
Degree of Separation in Social Networks
|
cs.SI physics.soc-ph
|
According to the small-world concept, the entire world is connected through
short chains of acquaintances. In popular imagination this is captured in the
phrase six degrees of separation, implying that any two individuals are, at
most, six handshakes away. Social network analysis is the understanding of
concepts and information on relationships among interacting units in an
ecological system. In this analysis the properties of the actors are explained
in terms of the structures of links amongst them. In general, the relational
links between the actors are primary and the properties of the actors are
secondary. This paper presents two methods to calculate the average degree of
separation between the actors or nodes in a graph. We apply these methods to
random graphs depicting social networks and compare the characteristics of
these graphs with their average degree of separation.
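The paper's two specific methods are not reproduced here, but the quantity itself can be sketched: a minimal BFS-based computation of the average degree of separation over all ordered pairs of mutually reachable nodes. The graph and node labels below are illustrative, not taken from the paper.

```python
from collections import deque

def average_degree_of_separation(adj):
    """Average shortest-path length (in hops) over all ordered pairs of
    mutually reachable nodes, via BFS from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())         # dist[src] = 0 adds nothing
        pairs += len(dist) - 1
    return total / pairs

# 4-node ring: each node sees distances 1, 1, 2 -> average 4/3
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(average_degree_of_separation(ring))   # 1.3333333333333333
```

On an Erdos-Renyi or small-world random graph, the same function yields the average separation the abstract refers to.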
|
1007.5514
|
Distributed Beamforming in Wireless Multiuser Relay-Interference
Networks with Quantized Feedback
|
cs.IT math.IT
|
We study quantized beamforming in wireless amplify-and-forward
relay-interference networks with any number of transmitters, relays, and
receivers. We design the quantizer of the channel state information to minimize
the probability that at least one receiver incorrectly decodes its desired
symbol(s). Correspondingly, we introduce a generalized diversity measure that
encapsulates the conventional one as the first-order diversity. Additionally,
it incorporates the second-order diversity, which is concerned with the
transmitter power dependent logarithmic terms that appear in the error rate
expression. First, we show that, regardless of the quantizer and the amount of
feedback that is used, the relay-interference network suffers a second-order
diversity loss compared to interference-free networks. Next, two different
quantization schemes are studied. Using a global quantizer, we show that
a simple relay selection scheme can achieve maximal diversity. Then, using the
localization method, we construct both fixed-length and variable-length local
(distributed) quantizers (fLQs and vLQs). Our fLQs achieve maximal first-order
diversity, whereas our vLQs achieve maximal diversity. Moreover, we show that
all the promised diversity and array gains can be obtained with arbitrarily low
feedback rates when the transmitter powers are sufficiently large. Finally, we
confirm our analytical findings through simulations.
|
1008.0042
|
Weblog patterns and human dynamics with decreasing interest
|
cs.SI physics.soc-ph
|
The weblog is the fourth major form of online exchange, after email, BBS and
instant messaging (MSN). Most bloggers begin writing with great interest, which
then gradually settles to a balance over time. In order to describe the
phenomenon that people's interest in something gradually decreases until it
reaches a balance, we first propose the model that describes the attenuation of
interest and reflects the fact that people's interest becomes more stable after
a long time. We give a rigorous analysis on this model by non-homogeneous
Poisson processes. Our analysis indicates that the interval distribution of
arrival-time is a mixed distribution with exponential and power-law feature,
that is, it is a power law with an exponential cutoff. Second, we collect blogs
from ScienceNet.cn and carry out empirical studies of the interarrival time
distribution. The empirical results agree well with the analytical result,
obeying a special power law with the exponential cutoff, that is, a special
kind of Gamma distribution. These empirical results verify the model and
provide evidence for a new class of phenomena in human dynamics: distributions
other than pure power laws. These findings demonstrate the variety of human
behavioral dynamics.
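The analytical form described above, a power law with an exponential cutoff, can be sketched as a density p(τ) ∝ τ^(−γ) e^(−τ/τ₀); the parameter values γ, τ₀ and the normalizing constant below are illustrative, not values fitted in the paper.

```python
import math

def interarrival_pdf(tau, gamma=1.0, tau0=10.0, c=1.0):
    """Power law with an exponential cutoff:
    p(tau) = c * tau**(-gamma) * exp(-tau / tau0).
    gamma, tau0 and c are illustrative, not values fitted in the paper."""
    return c * tau ** (-gamma) * math.exp(-tau / tau0)

# For tau << tau0 the power law dominates; for tau >> tau0 the
# exponential cutoff steepens the decay of the tail.
ratio_small = interarrival_pdf(1.0) / interarrival_pdf(2.0)
ratio_large = interarrival_pdf(100.0) / interarrival_pdf(200.0)
assert ratio_large > ratio_small
```

The two decay ratios make the mixed character visible: near the origin successive halvings of the density follow the power law, while deep in the tail the exponential term dominates.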
|
1008.0044
|
Network Control: A Rate-Distortion Perspective
|
cs.IT math.IT
|
Today's networks are controlled assuming pre-compressed and packetized data.
For video, this assumption of data packets abstracts out one of the key aspects
- the lossy compression problem. Therefore, first, this paper develops a
framework for network control that incorporates both source-rate and
source-distortion. Next, it decomposes the network control problem into
application-layer compression control, transport-layer congestion control and
network-layer scheduling. It is shown that this decomposition is optimal for
concave utility functions. Finally, this paper derives further insights from
the developed rate-distortion framework by focusing on specific problems.
|
1008.0045
|
Universal and Robust Distributed Network Codes
|
cs.IT math.IT
|
Random linear network codes can be designed and implemented in a distributed
manner, with low computational complexity. However, these codes are classically
implemented over finite fields whose size depends on some global network
parameters (size of the network, the number of sinks) that may not be known
prior to code design. Also, if new nodes join, the entire network code may
have to be redesigned.
In this work, we present the first universal and robust distributed linear
network coding schemes. Our schemes are universal since they are independent of
all network parameters. They are robust since if nodes join or leave, the
remaining nodes do not need to change their coding operations and the receivers
can still decode. They are distributed since nodes need only have topological
information about the part of the network upstream of them, which can be
naturally streamed as part of the communication protocol.
We present both probabilistic and deterministic schemes that are all
asymptotically rate-optimal in the coding block-length, and have guarantees of
correctness. Our probabilistic designs are computationally efficient, with
order-optimal complexity. Our deterministic designs guarantee zero error
decoding, albeit via codes with high computational complexity in general. Our
coding schemes are based on network codes over ``scalable fields". Instead of
choosing coding coefficients from one field at every node, each node uses
linear coding operations over an ``effective field-size" that depends on the
node's distance from the source node. The analysis of our schemes requires
technical tools that may be of independent interest. In particular, we
generalize the Schwartz-Zippel lemma by proving a non-uniform version, wherein
variables are chosen from sets of possibly different sizes. We also provide a
novel robust distributed algorithm to assign unique IDs to network nodes.
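As a rough illustration of the classical lemma being generalized, the following Monte Carlo check draws each variable from a set of a different size and compares the empirical zero-probability against the uniform Schwartz-Zippel bound d/min|S_i|. The paper's non-uniform bound is presumably sharper; its exact form is not assumed here, and the polynomial and set sizes are illustrative.

```python
import random

def zero_fraction(poly, sets, trials=20000, seed=1):
    """Empirical probability that poly vanishes when each variable is
    drawn uniformly from its own (possibly different-sized) set."""
    rng = random.Random(seed)
    zeros = sum(1 for _ in range(trials)
                if poly([rng.choice(s) for s in sets]) == 0)
    return zeros / trials

# P(x, y) = x*y - 2 has total degree d = 2; draw x and y from sets of
# sizes 5 and 50 respectively.
poly = lambda point: point[0] * point[1] - 2
sets = [list(range(1, 6)), list(range(1, 51))]
frac = zero_fraction(poly, sets)
assert frac <= 2 / 5   # classical bound using the smallest set
```

Here the true zero-probability is 2/250 = 0.008, far below the uniform bound 2/5, which is the kind of slack a non-uniform version can exploit.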
|
1008.0047
|
A Scalable Limited Feedback Design for Network MIMO using Per-Cell
Product Codebook
|
cs.IT math.IT
|
In network MIMO systems, channel state information is required at the
transmitter side to multiplex users in the spatial domain. Since perfect
channel knowledge is difficult to obtain in practice, \emph{limited feedback}
is a widely accepted solution. The {\em dynamic number of cooperating BSs} and
{\em heterogeneous path loss effects} of network MIMO systems pose new
challenges on limited feedback design. In this paper, we propose a scalable
limited feedback design for network MIMO systems with multiple base stations,
multiple users and multiple data streams for each user. We propose a {\em
limited feedback framework using per-cell product codebooks}, along with a {\em
low-complexity feedback indices selection algorithm}. We show that the proposed
per-cell product codebook limited feedback design can asymptotically achieve
the same performance as the joint-cell codebook approach. We also derive an
asymptotic \emph{per-user throughput loss} due to limited feedback with
per-cell product codebooks. Based on that, we show that when the number of
per-user feedback-bits $B_{k}$ is $\mathcal{O}\big( Nn_{T}n_{R}\log_{2}(\rho
g_{k}^{sum})\big)$, the system operates in the \emph{noise-limited} regime in
which the per-user throughput is $\mathcal{O} \left( n_{R} \log_{2} \big(
\frac{n_{R}\rho g_{k}^{sum}}{Nn_{T}} \big) \right)$. On the other hand, when
the number of per-user feedback-bits $B_{k}$ does not scale with the
\emph{system SNR} $\rho$, the system operates in the
\emph{interference-limited} regime where the per-user throughput is
$\mathcal{O}\left( \frac{n_{R}B_{k}}{(Nn_{T})^{2}} \right)$. Numerical results
show that the proposed design is flexible enough to accommodate a dynamic
number of cooperating BSs and achieves much better performance than other
baselines (such as the Givens rotation approach).
|
1008.0063
|
Evolutionary Approach to Test Generation for Functional BIST
|
cs.NE
|
In the paper, an evolutionary approach to test generation for functional BIST
is considered. The aim of the proposed scheme is to minimize the test data
volume by allowing the device's microprogram to test its logic, providing an
observation structure to the system, and generating appropriate test data for
the given architecture. Two methods of deriving a deterministic test set at
functional level are suggested. The first method is based on the classical
genetic algorithm with binary and arithmetic crossover and mutation operators.
The second one uses genetic programming, where a test is represented as a
sequence of microoperations. In the latter case, we apply two-point crossover
based on exchanging test subsequences and mutation implemented as random
replacement of microoperations or operands. Experimental results from the
program implementation demonstrate the efficiency of the proposed methods.
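The genetic-programming operators described above might be sketched as follows; the microoperation alphabet, sequence contents and mutation rate are illustrative, not taken from the paper.

```python
import random

MICROOPS = ["LOAD", "ADD", "SUB", "MUL", "STORE", "NOP"]   # illustrative

def two_point_crossover(parent_a, parent_b, rng):
    """Exchange the test subsequence between two random cut points."""
    n = min(len(parent_a), len(parent_b))
    i, j = sorted(rng.sample(range(n + 1), 2))
    return (parent_a[:i] + parent_b[i:j] + parent_a[j:],
            parent_b[:i] + parent_a[i:j] + parent_b[j:])

def mutate(test_seq, rng, rate=0.1):
    """Mutation as random replacement of microoperations."""
    return [rng.choice(MICROOPS) if rng.random() < rate else op
            for op in test_seq]

rng = random.Random(7)
a = ["LOAD", "ADD", "STORE", "NOP", "MUL"]
b = ["SUB", "SUB", "LOAD", "ADD", "STORE"]
c1, c2 = two_point_crossover(a, b, rng)
c1 = mutate(c1, rng)
assert len(c1) == len(a) and len(c2) == len(b)
```

A fitness function scoring fault coverage of each candidate test would close the evolutionary loop; that part depends on the target architecture and is omitted.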
|
1008.0140
|
The Characteristics of the Factors That Govern the Preferred Force in
the Social Force Model of Pedestrian Movement
|
cs.IT math.IT
|
The social force model, which belongs to the class of microscopic pedestrian
models, has been regarded as preeminent by many researchers, mainly for its
ability to reproduce the self-organized phenomena that emerge from pedestrian
dynamics. The preferred force, a measure of a pedestrian's motivation to adapt
his actual velocity to his desired velocity, is an essential term on which the
model was built. This force has gone through several stages of development:
first, Helbing and Molnar (1995) modeled the original force for normal
situations. Second, Helbing and his co-workers (2000) extended the force to
panic situations by introducing a panic parameter. Third, Lakoba and Kaup
(2005) endowed pedestrians with a degree of intelligence by incorporating
aspects of decision-making capability. In this paper, the authors analyze the
most important extensions of the preferred force and compare the factors
involved in each. Furthermore, to enhance the decision-making ability of
pedestrians, they introduce additional features, such as a familiarity factor,
into the preferred force so that it better represents what actually happens in
reality.
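For reference, the original Helbing-Molnar (1995) preferred (driving) force relaxes the pedestrian's actual velocity toward the desired velocity within a relaxation time; a minimal sketch follows, with all parameter values illustrative.

```python
import numpy as np

def preferred_force(m, v0, e, v, tau):
    """Helbing-Molnar driving ("preferred") force: relax the actual
    velocity v toward the desired speed v0 along direction e within a
    relaxation time tau:  F = m * (v0 * e - v) / tau."""
    return m * (v0 * np.asarray(e) - np.asarray(v)) / tau

# A pedestrian walking at 1.0 m/s who desires 1.3 m/s in the same
# direction feels a forward force of m * 0.3 / tau = 48 N here.
F = preferred_force(m=80.0, v0=1.3, e=[1.0, 0.0], v=[1.0, 0.0], tau=0.5)
```

The panic-parameter and decision-making extensions discussed above modify v0, e and tau; their exact forms are not reproduced here.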
|
1008.0147
|
Intervention Mechanism Design for Networks With Selfish Users
|
cs.GT cs.MA
|
We consider a multi-user network where a network manager and selfish users
interact. The network manager monitors the behavior of users and intervenes in
the interaction among users if necessary, while users make decisions
independently to optimize their individual objectives. In this paper, we
develop a framework of intervention mechanism design, which aims to optimize
the objective of the manager, or the network performance, taking the
incentives of selfish users into account. Our framework is general enough to
cover a wide range of application scenarios, and it has advantages over
existing approaches such as Stackelberg strategies and pricing. To design an
intervention mechanism and to predict the resulting operating point, we
formulate a new class of games called intervention games and a new solution
concept called intervention equilibrium. We provide analytic results about
intervention equilibrium and optimal intervention mechanisms in the case of a
benevolent manager with perfect monitoring. We illustrate these results with a
random access model. Our illustrative example suggests that intervention
requires less knowledge about users than pricing.
|
1008.0170
|
Symmetric categorial grammar: residuation and Galois connections
|
cs.CL
|
The Lambek-Grishin calculus is a symmetric extension of the Lambek calculus:
in addition to the residuated family of product, left and right division
operations of Lambek's original calculus, one also considers a family of
coproduct, right and left difference operations, related to the former by an
arrow-reversing duality. Communication between the two families is implemented
in terms of linear distributivity principles. The aim of this paper is to
complement the symmetry between (dual) residuated type-forming operations with
an orthogonal opposition that contrasts residuated and Galois connected
operations. Whereas the (dual) residuated operations are monotone, the Galois
connected operations (and their duals) are antitone. We discuss the algebraic
properties of the (dual) Galois connected operations, and generalize the
(co)product distributivity principles to include the negative operations. We
give a continuation-passing-style translation for the new type-forming
operations, and discuss some linguistic applications.
|
1008.0178
|
Dictionary for Sparse Representation of Chirp Echo in Broadband Radar
|
cs.IT math.IT
|
A new dictionary for the sparse representation of chirp echoes in broadband
radar is put forward in this paper. Unlike chirplet decomposition, which
decomposes the echo in the time-frequency plane, the proposed dictionary
transforms the sparsity of the radar target in the range domain into sparsity
in the frequency domain via stretch processing, thereby realizing a sparse
representation of the echo. Through a rigorous mathematical derivation, the
sparsity of the echo in the dictionary is proved and the dictionary is shown
to be orthogonal. In practical terms, the dictionary is simple to construct,
the parameters it requires are easy to obtain, and it is convenient to use.
Furthermore, its application can be extended to echoes of multi-component
chirps with a single degree of freedom.
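Stretch processing, the step that maps range sparsity to frequency sparsity, can be sketched as follows: mixing the received chirp with a conjugate reference chirp turns a target at delay τ into a single tone at beat frequency kτ. All parameter values below are illustrative, not the paper's.

```python
import numpy as np

# Minimal stretch-processing (dechirp) sketch: mixing the received chirp
# with a conjugate reference chirp maps a target at round-trip delay tau
# to a single tone at beat frequency k*tau, so sparsity in range becomes
# sparsity in frequency.
fs, n, k = 1e6, 1000, 2e9        # sample rate (Hz), samples, chirp rate (Hz/s)
t = np.arange(n) / fs
tau = 2e-4                       # illustrative delay -> beat at k*tau = 400 kHz
ref = np.exp(1j * np.pi * k * t ** 2)
echo = np.exp(1j * np.pi * k * (t - tau) ** 2)
beat = echo * np.conj(ref)       # dechirped signal: pure tone at -k*tau
spec = np.abs(np.fft.fft(beat))
freqs = np.fft.fftfreq(n, 1 / fs)
peak = abs(freqs[np.argmax(spec)])
print(peak)                      # 400000.0, i.e. k * tau
```

A scene with a few point targets thus becomes a few spectral lines, which is exactly the sparsity the proposed dictionary exploits.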
|
1008.0212
|
An FPTAS for Bargaining Networks with Unequal Bargaining Powers
|
cs.GT cs.MA
|
Bargaining networks model social or economic situations in which agents seek
to form the most lucrative partnership with another agent from among several
alternatives. There has been a flurry of recent research studying Nash
bargaining solutions (also called 'balanced outcomes') in bargaining networks,
so that we now know when such solutions exist, and also that they can be
computed efficiently, even by market agents behaving in a natural manner. In
this work we study a generalization of Nash bargaining, that models the
possibility of unequal 'bargaining powers'. This generalization was introduced
in [KB+10], where it was shown that the corresponding 'unequal division' (UD)
solutions exist if and only if Nash bargaining solutions exist, and also that a
certain local dynamics converges to UD solutions when they exist. However, the
bound on convergence time obtained for that dynamics was exponential in network
size for the unequal division case. This bound is tight, in the sense that
there exist instances on which the dynamics of [KB+10] converges only after
exponential time. Other approaches, such as that of Kleinberg and Tardos, do
not generalize to the asymmetric case. Thus, the question of computational
tractability of UD solutions has remained open. In this paper, we provide an
FPTAS for the computation of UD solutions, when such solutions exist. On a
graph G=(V,E) with weights (i.e. pairwise profit opportunities) uniformly
bounded above by 1, our FPTAS finds an \eps-UD solution in time
poly(|V|,1/\eps). We also provide a fast local algorithm for finding an
\eps-UD solution, providing further justification that a market can find such
a solution.
|
1008.0223
|
Secure Joint Source-Channel Coding With Side Information
|
cs.IT math.IT
|
In this work, the problem of transmitting an i.i.d. Gaussian source over an
i.i.d. Gaussian wiretap channel with i.i.d. Gaussian side information is
considered. The intended receiver is assumed to have a certain minimum SNR and
the eavesdropper is assumed to have a strictly lower SNR compared to the
intended receiver. The objective is minimizing the distortion of source
reconstruction at the intended receiver. In this work, it is shown that unlike
the Gaussian wiretap channel without side information, Shannon's source-channel
separation coding scheme is not optimum in the sense of achieving the minimum
distortion. Three hybrid digital-analog secure joint source channel coding
schemes are then proposed which achieve the minimum distortion. The first
coding scheme is based on Costa's dirty paper coding scheme and wiretap channel
coding scheme when the analog source is not explicitly quantized. The second
coding scheme is based on the superposition of the secure digital signal and
the hybrid digital-analog signal. It is shown that for the problem of
communicating a Gaussian source over a Gaussian wiretap channel with side
information, there exists an infinite family of optimum secure joint
source-channel coding schemes. In the third coding scheme, the quantized
signal and the analog error signal are explicitly superimposed. It is shown
that this scheme provides an infinite family of optimum secure joint
source-channel coding schemes with a variable number of bins. Finally, the proposed
secure hybrid digital-analog schemes are analyzed under the main channel SNR
mismatch. It is proven that the proposed schemes can give a graceful
degradation of distortion with SNR under SNR mismatch, i.e., when the actual
SNR is larger than the designed SNR.
|
1008.0235
|
Network Coding for Multiple Unicasts: An Interference Alignment Approach
|
cs.IT math.IT
|
This paper considers the problem of network coding for multiple unicast
connections in networks represented by directed acyclic graphs. The concept of
interference alignment, traditionally used in interference networks, is
extended to analyze the performance of linear network coding in this setup and
to provide a systematic code design approach. It is shown that, for a broad
class of three-source three-destination unicast networks, a rate corresponding
to half the individual source-destination min-cut is achievable via alignment
strategies.
|
1008.0273
|
Threat assessment of a possible Vehicle-Born Improvised Explosive Device
using DSmT
|
cs.AI
|
This paper presents a solution to the threat assessment of a VBIED
(Vehicle-Borne Improvised Explosive Device) obtained with DSmT
(Dezert-Smarandache Theory). This problem was recently proposed to the authors
by Simon Maskell and John Lavery as a typical illustrative example for
comparing different approaches to handling uncertainty in decision-making
support. The purpose of this paper is to show in detail how a well-justified
solution can be obtained from the DSmT approach and its fusion rules, thanks
to a proper modeling of the belief functions involved in this problem.
|
1008.0285
|
On optimizing low SNR wireless networks using network coding
|
cs.IT cs.NI math.IT
|
Rate optimization for wireless networks with low SNR is investigated. While
the capacity in the limit of vanishing SNR is known to be linear for both
fading and non-fading channels, we study the problem of operating a low-SNR
wireless network with given node locations that uses network coding over
flows. The models we develop for the low-SNR Gaussian broadcast channel and
the multiple access channel operate in a non-trivial feasible rate region. We
show that the problem reduces to the optimization of total network power,
which can be cast as a standard linear multi-commodity min-cost flow program
with no inherently combinatorial structure when network coding is used with
non-integer constraints (a reasonable assumption). This is essentially due to
the linearity of the capacity with respect to vanishing SNR, which avoids the
effect of interference for the degraded broadcast channel and the multiple
access environment under consideration. We propose a fully
decentralized Primal-Dual Subgradient Algorithm for achieving optimal rates on
each subgraph (i.e. hyperarcs) of the network to support the set of traffic
demands (multicast/unicast connections).
|
1008.0322
|
Co-evolution is Incompatible with the Markov Assumption in Phylogenetics
|
q-bio.PE cs.AI cs.CE
|
Markov models are extensively used in the analysis of molecular evolution. A
recent line of research suggests that pairs of proteins with functional and
physical interactions co-evolve with each other. Here, by analyzing hundreds of
orthologous sets of three fungi and their co-evolutionary relations, we
demonstrate that the co-evolution assumption may violate the Markov assumption.
Our results encourage developing alternative probabilistic models for the cases
of extreme co-evolution.
|
1008.0327
|
Skew Constacyclic Codes over Finite Chain Rings
|
cs.IT math.IT math.RA
|
Skew polynomial rings over finite fields ([7] and [10]) and over Galois rings
([8]) have been used to study codes. In this paper, we extend this concept to
finite chain rings. Properties of skew constacyclic codes generated by monic
right divisors of $x^n-\lambda$, where $\lambda$ is a unit element, are
exhibited. When $\lambda^2=1$, the generators of Euclidean and Hermitian dual
codes of such codes are determined together with necessary and sufficient
conditions for them to be Euclidean and Hermitian self-dual. Of more interest
are codes over the ring $\mathbb{F}_{p^m}+u\mathbb{F}_{p^m}$. The structure of
all skew constacyclic codes is completely determined. This allows us to express
generators of Euclidean and Hermitian dual codes of skew cyclic and skew
negacyclic codes in terms of the generators of the original codes. An
illustration of all skew cyclic codes of length~2 over
$\mathbb{F}_{3}+u\mathbb{F}_{3}$ and their Euclidean and Hermitian dual codes
is also provided.
|
1008.0336
|
Close Clustering Based Automated Color Image Annotation
|
cs.LG
|
Most image-search approaches today are based on the text-based tags associated
with images, which are mostly human-generated and subject to various kinds of
errors. The results of a query to the image database can thus often be
misleading and may not satisfy the requirements of the user. In this work we
propose an approach to automate this tagging process, in which generated image
results can be finely filtered using a probabilistic tagging mechanism. We
implement a tool that automates the tagging process by maintaining a training
database: the system is trained to identify a certain set of input images, and
the results are used to create a probabilistic tagging mechanism. Given a set
of segments in an image, the tool calculates the probability that particular
keywords are present. This probability table is then used to generate
candidate tags for input images.
|
1008.0420
|
Modeling Network Coded TCP Throughput: A Simple Model and its Validation
|
cs.IT cs.NI math.IT
|
We analyze the performance of TCP and TCP with network coding (TCP/NC) in
lossy wireless networks. We build upon the simple framework introduced by
Padhye et al. and characterize the throughput behavior of classical TCP as well
as TCP/NC as a function of erasure rate, round-trip time, maximum window size,
and duration of the connection. Our analytical results show that network coding
masks erasures and losses from TCP, thus preventing TCP's performance
degradation in lossy networks, such as wireless networks. It is further seen
that TCP/NC has significant throughput gains over TCP. In addition, we simulate
TCP and TCP/NC to verify our analysis of the average throughput and the window
evolution. Our analysis and simulation results show very close concordance and
support that TCP/NC is robust against erasures. TCP/NC is not only able to
increase its window size faster but also to maintain a large window size
despite losses within the network, whereas TCP experiences window closing
essentially because losses are mistakenly attributed to congestion.
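The Padhye et al. framework the analysis builds on approximates steady-state TCP throughput as a function of loss rate and round-trip time; a simplified sketch of that classical formula follows (the paper's TCP/NC extension is not reproduced here, and the numeric values are illustrative).

```python
from math import sqrt

def tcp_throughput(p, rtt, wmax, t0, b=2):
    """Padhye et al. approximation of steady-state TCP throughput
    (in packets/s) for loss rate p, round-trip time rtt (s),
    maximum window wmax (packets) and retransmission timeout t0 (s);
    b is the number of packets acknowledged per ACK."""
    if p == 0:
        return wmax / rtt
    denom = (rtt * sqrt(2 * b * p / 3)
             + t0 * min(1.0, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return min(wmax / rtt, 1 / denom)

# Network coding masks erasures from TCP, which acts like lowering the
# loss rate p seen by the congestion controller (illustrative values):
hi_loss = tcp_throughput(p=0.05, rtt=0.1, wmax=50, t0=1.0)
lo_loss = tcp_throughput(p=0.005, rtt=0.1, wmax=50, t0=1.0)
assert lo_loss > hi_loss
```

The comparison makes the abstract's point concrete: reducing the erasure rate visible to TCP, which is what masking losses accomplishes, raises the predicted throughput.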
|
1008.0425
|
Quantum Steganography and Quantum Error-Correction
|
quant-ph cs.IT math.IT
|
In this thesis we first discuss the six-qubit quantum
error-correcting code and show its connections to entanglement-assisted
error-correcting coding theory and then to subsystem codes. This code bridges
the gap between the five-qubit (perfect) and Steane codes. We discuss two
methods to encode one qubit into six physical qubits. Each of the two examples
corrects an arbitrary single-qubit error. The first example is a degenerate
six-qubit quantum error-correcting code. We prove that a six-qubit code without
entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane
(CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of
this result is that the Steane seven-qubit code is the smallest single-error
correcting CSS code. Our second example is the construction of a non-degenerate
six-qubit CSS entanglement-assisted code. This code uses one bit of
entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob)
and corrects an arbitrary single-qubit error. In the second half of this thesis
we explore the yet uncharted and relatively undiscovered area of quantum
steganography. Steganography is the process of hiding secret information by
embedding it in an innocent message. We present protocols for hiding quantum
information in a codeword of a quantum error-correcting code passing through a
channel. Using either a shared classical secret key or shared entanglement
Alice disguises her information as errors in the channel. Bob can retrieve the
hidden information, but an eavesdropper (Eve) with the power to monitor the
channel, but without the secret key, cannot distinguish the message from
channel noise. We analyze how difficult it is for Eve to detect the presence of
secret messages, and estimate rates of steganographic communication and secret
key consumption for certain protocols.
|
1008.0441
|
An Optimal Trade-off between Content Freshness and Refresh Cost
|
cs.IR
|
Caching is an effective mechanism for reducing bandwidth usage and
alleviating server load. However, the use of caching entails a compromise
between content freshness and refresh cost. An excessive refresh allows a high
degree of content freshness at a greater cost of system resource. Conversely, a
deficient refresh inhibits content freshness but saves the cost of resource
usages. To address the freshness-cost problem, we formulate the refresh
scheduling problem with a generic cost model and use this cost model to
determine an optimal refresh frequency that gives the best tradeoff between
refresh cost and content freshness. We prove the existence and uniqueness of an
optimal refresh frequency under the assumptions that the arrival of content
update is Poisson and the age-related cost monotonically increases with
decreasing freshness. In addition, we provide an analytic comparison of system
performance under fixed refresh scheduling and random refresh scheduling,
showing that, with the same average refresh frequency, the two schedules are
mathematically equivalent in terms of the long-run average cost.
|
1008.0502
|
Fully automatic extraction of salient objects from videos in near
real-time
|
cs.CV cs.GR cs.MM
|
Automatic video segmentation plays an important role in a wide range of
computer vision and image processing applications. Recently, various methods
have been proposed for this purpose. The problem is that most of these methods
are far from real-time processing even for low-resolution videos due to the
complex procedures. To this end, we propose a new and quite fast method for
automatic video segmentation with the help of 1) efficient optimization of
Markov random fields with polynomial time of number of pixels by introducing
graph cuts, 2) automatic, computationally efficient but stable derivation of
segmentation priors using visual saliency and sequential update mechanism, and
3) an implementation strategy in the principle of stream processing with
graphics processor units (GPUs). Test results indicate that our method
extracts appropriate regions from videos as precisely as, and much faster
than, previous semi-automatic methods, even though no supervision is
incorporated.
|
1008.0528
|
Bounded Coordinate-Descent for Biological Sequence Classification in
High Dimensional Predictor Space
|
cs.LG
|
We present a framework for discriminative sequence classification where the
learner works directly in the high dimensional predictor space of all
subsequences in the training set. This is possible by employing a new
coordinate-descent algorithm coupled with bounding the magnitude of the
gradient for selecting discriminative subsequences fast. We characterize the
loss functions for which our generic learning algorithm can be applied and
present concrete implementations for logistic regression (binomial
log-likelihood loss) and support vector machines (squared hinge loss).
Application of our algorithm to protein remote homology detection and remote
fold recognition results in performance comparable to that of state-of-the-art
methods (e.g., kernel support vector machines). Unlike state-of-the-art
classifiers, the resulting classification models are simply lists of weighted
discriminative subsequences and can thus be interpreted and related to the
biological problem.
|
1008.0539
|
Assessing coupling dynamics from an ensemble of time series
|
cs.IT math.IT
|
Finding interdependency relations between (possibly multivariate) time series
provides valuable knowledge about the processes that generate the signals.
Information theory sets a natural framework for non-parametric measures of
several classes of statistical dependencies. However, a reliable estimation
from information-theoretic functionals is hampered when the dependency to be
assessed is brief or evolves in time. Here, we show that these limitations can
be overcome when we have access to an ensemble of independent repetitions of
the time series. In particular, we gear a data-efficient estimator of
probability densities to make use of the full structure of trial-based
measures. By doing so, we can obtain time-resolved estimates for a family of
entropy combinations (including mutual information, transfer entropy, and their
conditional counterparts) which are more accurate than the simple average of
individual estimates over trials. We show with simulated and real data that
the proposed approach allows recovery of the time-resolved dynamics of the
coupling between different subsystems.
|
1008.0548
|
Image sequence interpolation using optimal control
|
cs.CV math.AP math.OC
|
The problem of the generation of an intermediate image between two given
images in an image sequence is considered. The problem is formulated as an
optimal control problem governed by a transport equation. This approach bears
similarities with the Horn \& Schunck method for optical flow calculation but
in fact the model is quite different. The images are modelled in $BV$ and an
analysis of solutions of transport equations with values in $BV$ is included.
Moreover, the existence of optimal controls is proven and necessary conditions
are derived. Finally, two algorithms are given and numerical results are
compared with existing methods. The new method is competitive with
state-of-the-art methods and even outperforms several existing methods.
|
1008.0557
|
LiquidXML: Adaptive XML Content Redistribution
|
cs.DB
|
We propose to demonstrate LiquidXML, a platform for managing large corpora of
XML documents in large-scale P2P networks. All LiquidXML peers may publish XML
documents to be shared with all the network peers. The challenge then is to
efficiently (re-)distribute the published content in the network, possibly in
overlapping, redundant fragments, to support efficient processing of queries at
each peer. The novelty of LiquidXML lies in its adaptive method of choosing
which data fragments are stored where, to improve performance. The "liquid"
aspect of XML management is twofold: XML data flows from many sources towards
many consumers, and its distribution in the network continuously adapts to
improve query performance.
|
1008.0602
|
A Framework for Partial Secrecy
|
cs.IT math.IT
|
We consider theoretical limits of partial secrecy in a setting where an
eavesdropper attempts to causally reconstruct an information sequence with low
distortion based on an intercepted transmission and the past of the sequence.
The transmitter and receiver have limited secret key at their disposal but not
enough to establish perfect secrecy with a one-time pad. From another
viewpoint, the eavesdropper is acting as an adversary, competing in a zero-sum
repeated game against the sender and receiver of the secrecy system. In this
case, the information sequence represents a sequence of actions, and the
distortion function captures the payoff of the game.
We give an information theoretic region expressing the tradeoff between
secret key rate and max-min distortion for the eavesdropper. We also simplify
this characterization to a linear program. As an example, we discuss how to
optimally use secret key to hide Bernoulli-p bits from an eavesdropper so that
they incur maximal Hamming distortion.
|
1008.0659
|
Evaluating and Improving Modern Variable and Revision Ordering
Strategies in CSPs
|
cs.AI
|
A key factor that can dramatically reduce the search space during constraint
solving is the criterion under which the variable to be instantiated next is
selected. For this purpose numerous heuristics have been proposed. Some of the
best of such heuristics exploit information about failures gathered throughout
search and recorded in the form of constraint weights, while others measure the
importance of variable assignments in reducing the search space. In this work
we experimentally evaluate the most recent and powerful variable ordering
heuristics, and new variants of them, over a wide range of benchmarks. Results
demonstrate that heuristics based on failures are in general more efficient.
Based on this, we then derive new revision ordering heuristics that exploit
recorded failures to efficiently order the propagation list when arc
consistency is maintained during search. Interestingly, in addition to reducing
the number of constraint checks and list operations, these heuristics are also
able to cut down the size of the explored search tree.
|
1008.0660
|
Adaptive Branching for Constraint Satisfaction Problems
|
cs.AI
|
The two standard branching schemes for CSPs are d-way and 2-way branching.
Although it has been shown that in theory the latter can be exponentially more
effective than the former, there is a lack of empirical evidence showing such
differences. To investigate this, we initially make an experimental comparison
of the two branching schemes over a wide range of benchmarks. Experimental
results verify the theoretical gap between d-way and 2-way branching as we move
from a simple variable ordering heuristic like dom to more sophisticated ones
like dom/ddeg. However, perhaps surprisingly, experiments also show that when
state-of-the-art variable ordering heuristics like dom/wdeg are used then d-way
can be clearly more efficient than 2-way branching in many cases. Motivated by
this observation, we develop two generic heuristics that can be applied at
certain points during search to decide whether 2-way branching or a restricted
version of 2-way branching, which is close to d-way branching, will be
followed. The application of these heuristics results in an adaptive branching
scheme. Experiments with instantiations of the two generic heuristics confirm
that search with adaptive branching outperforms search with a fixed branching
scheme on a wide range of problems.
|
1008.0706
|
Algorithmic Detection of Computer Generated Text
|
stat.ML cs.CL
|
Computer generated academic papers have been used to expose a lack of
thorough human review at several computer science conferences. We assess the
problem of classifying such documents. After identifying and evaluating several
quantifiable features of academic papers, we apply methods from machine
learning to build a binary classifier. In tests with two hundred papers, the
resulting classifier correctly labeled papers either as human written or as
computer generated with no false classifications of computer generated papers
as human and a 2% false classification rate for human papers as computer
generated. We believe generalizations of these features are applicable to
similar classification problems. While most current text-based spam detection
techniques focus on the keyword-based classification of email messages, a new
generation of unsolicited computer-generated advertisements masquerades as
legitimate postings in online groups, message boards and social news sites. Our
results show that taking the formatting and contextual clues offered by these
environments into account may be of central importance when selecting features
with which to identify such unwanted postings.
|
1008.0716
|
Cross-Lingual Adaptation using Structural Correspondence Learning
|
cs.IR
|
Cross-lingual adaptation, a special case of domain adaptation, refers to the
transfer of classification knowledge between two languages. In this article we
describe an extension of Structural Correspondence Learning (SCL), a recently
proposed algorithm for domain adaptation, for cross-lingual adaptation. The
proposed method uses unlabeled documents from both languages, along with a word
translation oracle, to induce cross-lingual feature correspondences. From these
correspondences a cross-lingual representation is created that enables the
transfer of classification knowledge from the source to the target language.
The main advantages of this approach over other approaches are its resource
efficiency and task specificity.
We conduct experiments in the area of cross-language topic and sentiment
classification involving English as source language and German, French, and
Japanese as target languages. The results show a significant improvement of the
proposed method over a machine translation baseline, reducing the relative
error due to cross-lingual adaptation by an average of 30% (topic
classification) and 59% (sentiment classification). We further report on
empirical analyses that reveal insights into the use of unlabeled data, the
sensitivity with respect to important hyperparameters, and the nature of the
induced cross-lingual correspondences.
|
1008.0728
|
Blind Spectrum Sensing by Information Theoretic Criteria for Cognitive
Radios
|
cs.IT math.IT
|
Spectrum sensing is a fundamental and critical issue for opportunistic
spectrum access in cognitive radio networks. Among the many spectrum sensing
methods, the information theoretic criteria (ITC) based method is a promising
blind method which can reliably detect the primary users while requiring little
prior information. In this paper, we provide an intensive treatment on the ITC
sensing method. To this end, we first introduce a new over-determined channel
model constructed by applying multiple antennas or oversampling at the
secondary user in order to make the ITC applicable. Then, a simplified ITC
sensing algorithm is introduced, which needs to compute and compare only two
decision values. Compared with the original ITC sensing algorithm, the
simplified algorithm significantly reduces the computational complexity without
losing any performance. Applying recent advances in random matrix theory,
we then derive closed-form expressions to tightly approximate both the
probability of false alarm and probability of detection. Based on the insight
derived from the analytical study, we further present a generalized ITC sensing
algorithm which can provide flexible tradeoff between the probability of
detection and probability of false alarm. Finally, comprehensive simulations
are carried out to evaluate the performance of the proposed ITC sensing
algorithms. Results show that they considerably outperform other blind spectrum
sensing methods in certain cases.
|
1008.0730
|
A New SLNR-based Linear Precoding for Downlink Multi-User Multi-Stream
MIMO Systems
|
cs.IT math.IT
|
Signal-to-leakage-and-noise ratio (SLNR) is a promising criterion for linear
precoder design in multi-user (MU) multiple-input multiple-output (MIMO)
systems. It decouples the precoder design problem and makes closed-form
solution available. In this letter, we present a new linear precoding scheme by
slightly relaxing the SLNR maximization for MU-MIMO systems with multiple data
streams per user. The precoding matrices are obtained by a general form of
simultaneous diagonalization of two Hermitian matrices. The new scheme reduces
the gap between the per-stream effective channel gains, an inherent limitation
in the original SLNR precoding scheme. Simulation results demonstrate that the
proposed precoding achieves considerable gains in error performance over the
original one for multi-stream transmission while maintaining almost the same
achievable sum-rate.
|
1008.0735
|
Superimposed XOR: Approaching Capacity Bounds of the Two-Way Relay
Channels
|
cs.IT math.IT
|
In two-way relay channels, bitwise XOR and symbol-level superposition coding
are two popular network-coding based relaying schemes. However, neither of them
can approach the capacity bound when the channels in the broadcast phase are
asymmetric. In this paper, we present a new physical layer network coding
(PLNC) scheme, called \emph{superimposed XOR}. The new scheme advances the
existing schemes by specifically taking into account the channel asymmetry as
well as information asymmetry in the broadcast phase. We obtain its achievable
rate regions over Gaussian channels when integrated with two known time control
protocols in two-way relaying. We also demonstrate their average maximum
sum-rates and service delay performances over fading channels. Numerical
results show that the proposed superimposed XOR achieves a larger rate region
than both XOR and superposition and performs much better over fading channels.
We further deduce the boundary of its achievable rate region of the broadcast
phase in an explicit and analytical expression. Based on these results, we then
show that the gap to the capacity bound approaches zero at high signal-to-noise
ratio.
|
1008.0758
|
A Chaotic Approach to Market Dynamics
|
nlin.CD cs.CE q-fin.TR
|
Economics is demanding new models able to understand and predict the evolution
of markets. In this respect, Econophysics offers models of markets as
complex systems, such as the gas-like model, able to predict money
distributions observed in real economies. However, this model reveals some
technical hitches to explain the power law (Pareto) distribution, observed in
individuals with high incomes. Here, nonlinear dynamics is introduced in the
gas-like model. The results obtained demonstrate that a chaotic gas-like model
can reproduce the two money distributions observed in real economies
(Exponential and Pareto). Moreover, it is able to control the transition
between them. This may give some insight into the micro-level causes that
originate unfair distributions of money in a global society. Ultimately, the
chaotic model makes obvious the inherent instability of asymmetric scenarios,
where sinks of wealth appear in the market and doom it to complete inequality.
|
1008.0775
|
Systems Theoretic Techniques for Modeling, Control, and Decision Support
in Complex Dynamic Systems
|
cs.SY cs.AI cs.MA math.OC
|
We discuss the problems of modeling, control, and decision support in complex
dynamic systems from a general system theoretic point of view. The main
characteristics of complex systems and of the system approach to complex system
study are considered. We provide an overview and analysis of existing
paradigms and methods of mathematical modeling and simulation of complex
systems, which support the processes of control and decision making. Then we
continue with the general dynamic modeling and simulation technique for complex
hierarchical systems functioning in a control loop. Architectural and structural
models of a computer information system intended for simulation and decision
support in complex systems are presented.
|
1008.0821
|
Randomness extraction and asymptotic Hamming distance
|
math.LO cs.CC cs.IT cs.LO math.IT
|
We obtain a non-implication result in the Medvedev degrees by studying
sequences that are close to Martin-L\"of random in asymptotic Hamming distance.
Our result is that the class of stochastically bi-immune sets is not Medvedev
reducible to the class of sets having complex packing dimension 1.
|
1008.0823
|
A Homogeneous Reaction Rule Language for Complex Event Processing
|
cs.AI
|
Event-driven automation of reactive functionalities for complex event
processing is an urgent need in today's distributed service-oriented
architectures and Web-based event-driven environments. An important problem to
be addressed is how to correctly and efficiently capture and process the
event-based behavioral, reactive logic embodied in reaction rules, and how to
combine this with other conditional decision logic embodied, e.g., in
derivation rules. This paper elaborates a homogeneous integration approach that
combines derivation rules, reaction rules and other rule types such as
integrity constraints into the general framework of logic programming, the
industrial-strength version of declarative programming. We describe syntax and
semantics of the language, implement a distributed web-based middleware using
enterprise service technologies and illustrate its adequacy in terms of
expressiveness, efficiency and scalability through examples extracted from
industrial use cases. The developed reaction rule language provides expressive
features such as modular ID-based updates with support for external imports and
self-updates of the intensional and extensional knowledge bases, transactions
including integrity testing and roll-backs of update transition paths. It also
supports distributed complex event processing, event messaging and event
querying via efficient and scalable enterprise middleware technologies and
event/action reasoning based on an event/action algebra implemented by an
interval-based event calculus variant as a logic inference formalism.
|
1008.0826
|
The Emerging Scholarly Brain
|
physics.soc-ph astro-ph.IM cs.DL cs.IR
|
It is now a commonplace observation that human society is becoming a coherent
super-organism, and that the information infrastructure forms its emerging
brain. Perhaps, as the underlying technologies are likely to become billions of
times more powerful than those we have today, we could say that we are now
building the lizard brain for the future organism.
|
1008.0838
|
Associative control processor with a rigid structure
|
cs.AR cs.AI
|
An approach to applying an associative processor to decision-making problems
is proposed. It focuses on hardware implementations of fuzzy processing
systems, with associativity as an effective management basis for a fuzzy
processor. A structural approach is developed, resulting in a quite simple and
compact parallel associative memory unit (PAMU). A comparison of memory cost
and speed between processors with rigid and soft-variable structures is given,
and an example of PAMU flashing is also considered.
|
1008.0851
|
Identification of Parametric Underspread Linear Systems and
Super-Resolution Radar
|
cs.IT math.IT
|
Identification of time-varying linear systems, which introduce both
time-shifts (delays) and frequency-shifts (Doppler-shifts), is a central task
in many engineering applications. This paper studies the problem of
identification of underspread linear systems (ULSs), whose responses lie within
a unit-area region in the delay Doppler space, by probing them with a known
input signal. It is shown that sufficiently-underspread parametric linear
systems, described by a finite set of delays and Doppler-shifts, are
identifiable from a single observation as long as the time bandwidth product of
the input signal is proportional to the square of the total number of delay
Doppler pairs in the system. In addition, an algorithm is developed that
enables identification of parametric ULSs from an input train of pulses in
polynomial time by exploiting recent results on sub-Nyquist sampling for time
delay estimation and classical results on recovery of frequencies from a sum of
complex exponentials. Finally, application of these results to super-resolution
target detection using radar is discussed. Specifically, it is shown that the
proposed procedure makes it possible to distinguish between multiple targets with very
close proximity in the delay Doppler space, resulting in a resolution that
substantially exceeds that of standard matched-filtering based techniques
without introducing leakage effects inherent in recently proposed compressed
sensing-based radar methods.
|
1008.0885
|
Deterministic Construction of Partial Fourier Compressed Sensing
Matrices Via Cyclic Difference Sets
|
cs.IT math.IT
|
Compressed sensing is a novel technique whereby sparse signals can be recovered
from undersampled measurements. This paper studies a $K \times N$ partial
Fourier measurement matrix for compressed sensing which is deterministically
constructed via cyclic difference sets (CDS). Precisely, the matrix is
constructed by $K$ rows of the $N\times N$ inverse discrete Fourier transform
(IDFT) matrix, where each row index is from an $(N, K, \lambda)$ cyclic
difference set. The restricted isometry property (RIP) is statistically studied
for the deterministic matrix to guarantee the recovery of sparse signals. A
computationally efficient reconstruction algorithm is then proposed from the
structure of the matrix. Numerical results show that the reconstruction
algorithm presents competitive recovery performance with allowable
computational complexity.
|
1008.0919
|
Compressive Sensing over Graphs
|
cs.IT cs.NI math.IT
|
In this paper, motivated by network inference and tomography applications, we
study the problem of compressive sensing for sparse signal vectors over graphs.
In particular, we are interested in recovering sparse vectors representing the
properties of the edges from a graph. Unlike existing compressive sensing
results, the collective additive measurements we are allowed to take must
follow connected paths over the underlying graph. For a sufficiently connected
graph with $n$ nodes, it is shown that, using $O(k \log(n))$ path measurements,
we are able to recover any $k$-sparse link vector (with no more than $k$
nonzero elements), even though the measurements have to follow the graph path
constraints. We further show that the computationally efficient $\ell_1$
minimization can provide theoretical guarantees for inferring such $k$-sparse
vectors with $O(k \log(n))$ path measurements from the graph.
|