id
stringlengths 9
16
| title
stringlengths 4
278
| categories
stringlengths 5
104
| abstract
stringlengths 6
4.09k
|
|---|---|---|---|
1011.4829
|
Closed-Form Solutions to A Category of Nuclear Norm Minimization
Problems
|
cs.IT cs.CV math.IT
|
It is an efficient and effective strategy to utilize the nuclear norm
approximation to learn low-rank matrices, which arise frequently in machine
learning and computer vision. So the exploration of nuclear norm minimization
problems is gaining much attention recently. In this paper we shall prove that
the following Low-Rank Representation (LRR) \cite{icml_2010_lrr,lrr_extention}
problem: {eqnarray*} \min_{Z} \norm{Z}_*, & {s.t.,} & X=AZ, {eqnarray*} has a
unique and closed-form solution, where $X$ and $A$ are given matrices. The
proof is based on proving a lemma that allows us to get closed-form solutions
to a category of nuclear norm minimization problems.
|
1011.4833
|
A Logical Charaterisation of Ordered Disjunction
|
cs.LO cs.AI
|
In this paper we consider a logical treatment for the ordered disjunction
operator 'x' introduced by Brewka, Niemel\"a and Syrj\"anen in their Logic
Programs with Ordered Disjunctions (LPOD). LPODs are used to represent
preferences in logic programming under the answer set semantics. Their
semantics is defined by first translating the LPOD into a set of normal
programs (called split programs) and then imposing a preference relation among
the answer sets of these split programs. We concentrate on the first step and
show how a suitable translation of the ordered disjunction as a derived
operator into the logic of Here-and-There allows capturing the answer sets of
the split programs in a direct way. We use this characterisation not only for
providing an alternative implementation for LPODs, but also for checking
several properties (under strongly equivalent transformations) of the 'x'
operator, like for instance, its distributivity with respect to conjunction or
regular disjunction. We also make a comparison to an extension proposed by
K\"arger, Lopes, Olmedilla and Polleres, that combines 'x' with regular
disjunction.
|
1011.4859
|
Geographic constraints on social network groups
|
physics.soc-ph cond-mat.dis-nn cs.SI
|
Social groups are fundamental building blocks of human societies. While our
social interactions have always been constrained by geography, it has been
impossible, due to practical difficulties, to evaluate the nature of this
restriction on social group structure. We construct a social network of
individuals whose most frequent geographical locations are also known. We also
classify the individuals into groups according to a community detection
algorithm. We study the variation of geographical span for social groups of
varying sizes, and explore the relationship between topological positions and
geographic positions of their members. We find that small social groups are
geographically very tight, but become much more clumped when the group size
exceeds about 30 members. Also, we find no correlation between the topological
positions and geographic positions of individuals within network communities.
These results suggest that spreading processes face distinct structural and
spatial constraints.
|
1011.4910
|
Sensor Selection for Event Detection in Wireless Sensor Networks
|
cs.IT math.IT stat.AP
|
We consider the problem of sensor selection for event detection in wireless
sensor networks (WSNs). We want to choose a subset of p out of n sensors that
yields the best detection performance. As the sensor selection optimality
criteria, we propose the Kullback-Leibler and Chernoff distances between the
distributions of the selected measurements under the two hypothesis. We
formulate the maxmin robust sensor selection problem to cope with the
uncertainties in distribution means. We prove that the sensor selection problem
is NP hard, for both Kullback-Leibler and Chernoff criteria. To (sub)optimally
solve the sensor selection problem, we propose an algorithm of affordable
complexity. Extensive numerical simulations on moderate size problem instances
(when the optimum by exhaustive search is feasible to compute) demonstrate the
algorithm's near optimality in a very large portion of problem instances. For
larger problems, extensive simulations demonstrate that our algorithm
outperforms random searches, once an upper bound on computational time is set.
We corroborate numerically the validity of the Kullback-Leibler and Chernoff
sensor selection criteria, by showing that they lead to sensor selections
nearly optimal both in the Neyman-Pearson and Bayes sense.
|
1011.4969
|
Learning in A Changing World: Restless Multi-Armed Bandit with Unknown
Dynamics
|
math.OC cs.LG math.PR
|
We consider the restless multi-armed bandit (RMAB) problem with unknown
dynamics in which a player chooses M out of N arms to play at each time. The
reward state of each arm transits according to an unknown Markovian rule when
it is played and evolves according to an arbitrary unknown random process when
it is passive. The performance of an arm selection policy is measured by
regret, defined as the reward loss with respect to the case where the player
knows which M arms are the most rewarding and always plays the M best arms. We
construct a policy with an interleaving exploration and exploitation epoch
structure that achieves a regret with logarithmic order when arbitrary (but
nontrivial) bounds on certain system parameters are known. When no knowledge
about the system is available, we show that the proposed policy achieves a
regret arbitrarily close to the logarithmic order. We further extend the
problem to a decentralized setting where multiple distributed players share the
arms without information exchange. Under both an exogenous restless model and
an endogenous restless model, we show that a decentralized extension of the
proposed policy preserves the logarithmic regret order as in the centralized
setting. The results apply to adaptive learning in various dynamic systems and
communication networks, as well as financial investment.
|
1011.5039
|
Information and Interpretation of Quantum Mechanics
|
quant-ph cs.IT math.IT
|
This work is a discussion on the concept of information. We define here
information as an abstraction that is able to be copied. We consider the
connection between the process of copying information in quantum systems and
the emergence of the so-called classical realism. The problem of interpretation
of quantum mechanics in this context is discussed as well.
|
1011.5053
|
Tight Sample Complexity of Large-Margin Learning
|
cs.LG math.PR math.ST stat.ML stat.TH
|
We obtain a tight distribution-specific characterization of the sample
complexity of large-margin classification with L_2 regularization: We introduce
the \gamma-adapted-dimension, which is a simple function of the spectrum of a
distribution's covariance matrix, and show distribution-specific upper and
lower bounds on the sample complexity, both governed by the
\gamma-adapted-dimension of the source distribution. We conclude that this new
quantity tightly characterizes the true sample complexity of large-margin
classification. The bounds hold for a rich family of sub-Gaussian
distributions.
|
1011.5065
|
Gaussian Relay Channel Capacity to Within a Fixed Number of Bits
|
cs.IT math.IT
|
In this paper, we show that the capacity of the three-node Gaussian relay
channel can be achieved to within 1 and 2 bit/sec/Hz using compress-and-forward
and amplify-and-forward relaying, respectively.
|
1011.5076
|
Application of a Quantum Ensemble Model to Linguistic Analysis
|
physics.data-an cs.CL
|
A new set of parameters to describe the word frequency behavior of texts is
proposed. The analogy between the word frequency distribution and the
Bose-distribution is suggested and the notion of "temperature" is introduced
for this case. The calculations are made for English, Ukrainian, and the
Guinean Maninka languages. The correlation between in-deep language structure
(the level of analyticity) and the defined parameters is shown to exist.
|
1011.5105
|
Logical Foundations and Complexity of 4QL, a Query Language with
Unrestricted Negation
|
cs.LO cs.DB
|
The paper discusses properties of a DATALOG$^{\neg\neg}$-like query language
4QL, originally outlined in [MS10]. Negated literals in heads of rules
naturally lead to inconsistencies. On the other hand, rules do not have to
attach meaning to some literals. Therefore 4QL is founded on a four-valued
semantics, employing the logic introduced in [MSV08, VMS09] with truth values:
'true', 'false', 'inconsistent' and 'unknown'. 4QL allows one to use rules with
negation in heads and bodies of rules, it is based on a simple and intuitive
semantics and provides uniform tools for "lightweight" versions of known forms
of nonmonotonic reasoning. In addition, 4QL is tractable as regards its data
complexity and captures PTIME queries. Even if DATALOG$^{\neg\neg}$ is known as
a concept for the last 30 years, to our best knowledge no existing approach
enjoys these properties.
In the current paper we: - investigate properties of well-supported models of
4QL - prove the correctness of the algorithm for computing well-supported
models - show that 4QL has PTIME data complexity and captures PTIME.
|
1011.5113
|
State-Based Random Access: A Cross-Layer Approach
|
cs.IT cs.NI math.IT
|
In this paper, we propose novel state-based algorithms which dynamically
control the random access network based on its current state such as channel
states of wireless links and backlog states of the queues. After formulating
the problem, corresponding algorithms with diverse control functions are
proposed. Consequently, it will be shown that the proposed state-based schemes
for control of the random access networks, results in significant performance
gains in comparison with previously proposed control algorithms. In order to
select an appropriate control function, performances of the state-based control
algorithms are compared for a wide range of traffic scenarios. It is also shown
that even an approximate knowledge of network statistics helps in selecting the
proper state dependent control function.
|
1011.5115
|
Optimal Utility-Energy tradeoff in Delay Constrained Random Access
Networks
|
cs.IT cs.NI math.IT
|
Rate, energy and delay are three main parameters of interest in ad-hoc
networks. In this paper, we discuss the problem of maximizing network utility
and minimizing energy consumption while satisfying a given transmission delay
constraint for each packet. We formulate this problem in the standard convex
optimization form and subsequently discuss the tradeoff between utility, energy
and delay in such framework. Also, in order to adapt for the distributed nature
of the network, a distributed algorithm where nodes decide on choosing
transmission rates and probabilities based on their local information is
introduced.
|
1011.5117
|
Energy and Utility Optimization in Wireless Networks with Random Access
|
cs.IT math.IT
|
Energy consumption is a main issue of concern in wireless networks. Energy
minimization increases the time that networks' nodes work properly without
recharging or substituting batteries. Another criterion for network performance
is data transmission rate which is usually quantified by a network utility
function. There exists an inherent tradeoff between these criteria and
enhancing one of them can deteriorate the other one. In this paper, we consider
both Network Utility Maximization (NUM) and energy minimization in a
bi-criterion optimization problem. The problem is formulated for Random Access
(RA) Medium Access Control (MAC) for ad-hoc networks. First, we optimize
performance of the MAC and define utility as a monotonically increasing
function of link throughputs. We investigate the optimal tradeoff between
energy and utility in this part. In the second part, we define utility as a
function of end to end rates and optimize MAC and transport layers
simultaneously. We calculate optimal persistence probabilities and end-to-end
rates. Finally, by means of duality theorem, we decompose the problem into
smaller subproblems, which are solved at node and network layers separately.
This decomposition avoids need for a central unit while sustaining benefits of
layering.
|
1011.5122
|
Utility Constrained Energy Minimization In Aloha Networks
|
cs.IT math.IT
|
In this paper we consider the issue of energy efficiency in random access
networks and show that optimizing transmission probabilities of nodes can
enhance network performance in terms of energy consumption and fairness. First,
we propose a heuristic power control method that improves throughput, and then
we model the Utility Constrained Energy Minimization (UCEM) problem in which
the utility constraint takes into account single and multi node performance.
UCEM is modeled as a convex optimization problem and Sequential Quadratic
Programming (SQP) is used to find optimal transmission probabilities. Numerical
results show that our method can achieve fairness, reduce energy consumption
and enhance lifetime of such networks.
|
1011.5124
|
Delay Constrained Utility Maximization in Multihop Random Access
Networks
|
cs.IT cs.NI cs.SY math.IT math.OC
|
Multi-hop random access networks have received much attention due to their
distributed nature which facilitates deploying many new applications over the
sensor and computer networks. Recently, utility maximization framework is
applied in order to optimize performance of such networks, however proposed
algorithms result in large transmission delays. In this paper, we will analyze
delay in random access multi-hop networks and solve the delay-constrained
utility maximization problem. We define the network utility as a combination of
rate utility and energy cost functions and solve the following two problems:
'optimal medium access control with link delay constraint' and, 'optimal
congestion and contention control with end-to-end delay constraint'. The
optimal tradeoff between delay, rate, and energy is achieved for different
values of delay constraint and the scaling factors between rate and energy.
Eventually linear and super-linear distributed optimization solutions are
proposed for each problem and their performance are compared in terms of
convergence and complexity.
|
1011.5164
|
Living City, a Collaborative Browser-based Massively Multiplayer Online
Game
|
cs.CY cs.SI
|
This work presents the design and implementation of our Browser-based
Massively Multiplayer Online Game, Living City, a simulation game fully
developed at the University of Messina. Living City is a persistent and
real-time digital world, running in the Web browser environment and accessible
from users without any client-side installation. Today Massively Multiplayer
Online Games attract the attention of Computer Scientists both for their
architectural peculiarity and the close interconnection with the social network
phenomenon. We will cover these two aspects paying particular attention to some
aspects of the project: game balancing (e.g. algorithms behind time and money
balancing); business logic (e.g., handling concurrency, cheating avoidance and
availability) and, finally, social and psychological aspects involved in the
collaboration of players, analyzing their activities and interconnections.
|
1011.5167
|
A Coder-Decoder model for use in Lossless Data Compression
|
cs.IT math.IT
|
This article describes a technique of using a trigonometric function and
combinatorial calculations to code or transform any finite sequence of binary
numbers (0s and 1s) of any length to a unique set of three Real numbers. In
reverse, these three Real numbers can be used independently to reconstruct the
original Binary sequence precisely. The main principles of this technique are
then applied in a proposal for a highly efficient model for Lossless Data
Compression.
|
1011.5168
|
Analyzing the Facebook Friendship Graph
|
cs.SI physics.soc-ph
|
Online Social Networks (OSN) during last years acquired a huge and increasing
popularity as one of the most important emerging Web phenomena, deeply
modifying the behavior of users and contributing to build a solid substrate of
connections and relationships among people using the Web. In this preliminary
work paper, our purpose is to analyze Facebook, considering a significant
sample of data reflecting relationships among subscribed users. Our goal is to
extract, from this platform, relevant information about the distribution of
these relations and exploit tools and algorithms provided by the Social Network
Analysis (SNA) to discover and, possibly, understand underlying similarities
between the developing of OSN and real-life social networks.
|
1011.5188
|
La r\'eduction de termes complexes dans les langues de sp\'ecialit\'e
|
cs.CL
|
Our study applies statistical methods to French and Italian corpora to
examine the phenomenon of multi-word term reduction in specialty languages.
There are two kinds of reduction: anaphoric and lexical. We show that anaphoric
reduction depends on the discourse type (vulgarization, pedagogical,
specialized) but is independent of both domain and language; that lexical
reduction depends on domain and is more frequent in technical, rapidly evolving
domains; and that anaphoric reductions tend to follow full terms rather than
precede them. We define the notion of the anaphoric tree of the term and study
its properties. Concerning lexical reduction, we attempt to prove statistically
that there is a notion of term lifecycle, where the full form is progressively
replaced by a lexical reduction. ----- Nous \'etudions par des m\'ethodes
statistiques sur des corpus fran\c{c}ais et italiens, le ph\'enom\`ene de
r\'eduction des termes complexes dans les langues de sp\'ecialit\'e. Il existe
deux types de r\'eductions : anaphorique et lexicale. Nous montrons que la
r\'eduction anaphorique d\'epend du type de discours (de vulgarisation,
p\'edagogique, sp\'ecialis\'e) mais ne d\'epend ni du domaine, ni de la langue,
alors que la r\'eduction lexicale d\'epend du domaine et est plus fr\'equente
dans les domaines techniques \`a \'evolution rapide. D'autre part, nous
montrons que la r\'eduction anaphorique a tendance \`a suivre la forme pleine
du terme, nous d\'efinissons une notion d'arbre anaphorique de terme et nous
\'etudions ses propri\'et\'es. Concernant la r\'eduction lexicale, nous tentons
de d\'emontrer statistiquement qu'il existe une notion de cycle de vie de
terme, o\`u la forme pleine est progressivement remplac\'ee par une r\'eduction
lexicale.
|
1011.5202
|
Covered Clause Elimination
|
cs.LO cs.AI
|
Generalizing the novel clause elimination procedures developed in [M. Heule,
M. J\"arvisalo, and A. Biere. Clause elimination procedures for CNF formulas.
In Proc. LPAR-17, volume 6397 of LNCS, pages 357-371. Springer, 2010.], we
introduce explicit (CCE), hidden (HCCE), and asymmetric (ACCE) variants of a
procedure that eliminates covered clauses from CNF formulas. We show that these
procedures are more effective in reducing CNF formulas than the respective
variants of blocked clause elimination, and may hence be interesting as new
preprocessing/simplification techniques for SAT solving.
|
1011.5209
|
The semantic mapping of words and co-words in contexts
|
cs.CL stat.AP
|
Meaning can be generated when information is related at a systemic level.
Such a system can be an observer, but also a discourse, for example,
operationalized as a set of documents. The measurement of semantics as
similarity in patterns (correlations) and latent variables (factor analysis)
has been enhanced by computer techniques and the use of statistics; for
example, in "Latent Semantic Analysis". This communication provides an
introduction, an example, pointers to relevant software, and summarizes the
choices that can be made by the analyst. Visualization ("semantic mapping") is
thus made more accessible.
|
1011.5239
|
Preferential attachment in growing spatial networks
|
cond-mat.dis-nn cond-mat.stat-mech cs.SI physics.soc-ph
|
We obtain the degree distribution for a class of growing network models on
flat and curved spaces. These models evolve by preferential attachment weighted
by a function of the distance between nodes. The degree distribution of these
models is similar to the one of the fitness model of Bianconi and Barabasi,
with a fitness distribution dependent on the metric and the density of nodes.
We show that curvature singularities in these spaces can give rise to
asymptotic Bose-Einstein condensation, but transient condensation can be
observed also in smooth hyperbolic spaces with strong curvature. We provide
numerical results for spaces of constant curvature (sphere, flat and hyperbolic
space) and we discuss the conditions for the breakdown of this approach and the
critical points of the transition to distance-dominated attachment. Finally we
discuss the distribution of link lengths.
|
1011.5270
|
Classifying Clustering Schemes
|
stat.ML cs.LG
|
Many clustering schemes are defined by optimizing an objective function
defined on the partitions of the underlying set of a finite metric space. In
this paper, we construct a framework for studying what happens when we instead
impose various structural conditions on the clustering schemes, under the
general heading of functoriality. Functoriality refers to the idea that one
should be able to compare the results of clustering algorithms as one varies
the data set, for example by adding points or by applying functions to it. We
show that within this framework, one can prove a theorems analogous to one of
J. Kleinberg, in which for example one obtains an existence and uniqueness
theorem instead of a non-existence result.
We obtain a full classification of all clustering schemes satisfying a
condition we refer to as excisiveness. The classification can be changed by
varying the notion of maps of finite metric spaces. The conditions occur
naturally when one considers clustering as the statistical version of the
geometric notion of connected components. By varying the degree of
functoriality that one requires from the schemes it is possible to construct
richer families of clustering schemes that exhibit sensitivity to density.
|
1011.5274
|
Jamming Games in the MIMO Wiretap Channel With an Active Eavesdropper
|
cs.IT math.IT
|
This paper investigates reliable and covert transmission strategies in a
multiple-input multiple-output (MIMO) wiretap channel with a transmitter,
receiver and an adversarial wiretapper, each equipped with multiple antennas.
In a departure from existing work, the wiretapper possesses a novel capability
to act either as a passive eavesdropper or as an active jammer, under a
half-duplex constraint. The transmitter therefore faces a choice between
allocating all of its power for data, or broadcasting artificial interference
along with the information signal in an attempt to jam the eavesdropper
(assuming its instantaneous channel state is unknown). To examine the resulting
trade-offs for the legitimate transmitter and the adversary, we model their
interactions as a two-person zero-sum game with the ergodic MIMO secrecy rate
as the payoff function. We first examine conditions for the existence of
pure-strategy Nash equilibria (NE) and the structure of mixed-strategy NE for
the strategic form of the game.We then derive equilibrium strategies for the
extensive form of the game where players move sequentially under scenarios of
perfect and imperfect information. Finally, numerical simulations are presented
to examine the equilibrium outcomes of the various scenarios considered.
|
1011.5287
|
Distributed Storage Allocations
|
cs.IT math.IT
|
We examine the problem of allocating a given total storage budget in a
distributed storage system for maximum reliability. A source has a single data
object that is to be coded and stored over a set of storage nodes; it is
allowed to store any amount of coded data in each node, as long as the total
amount of storage used does not exceed the given budget. A data collector
subsequently attempts to recover the original data object by accessing only the
data stored in a random subset of the nodes. By using an appropriate code,
successful recovery can be achieved whenever the total amount of data accessed
is at least the size of the original data object. The goal is to find an
optimal storage allocation that maximizes the probability of successful
recovery. This optimization problem is challenging in general because of its
combinatorial nature, despite its simple formulation. We study several
variations of the problem, assuming different allocation models and access
models. The optimal allocation and the optimal symmetric allocation (in which
all nonempty nodes store the same amount of data) are determined for a variety
of cases. Our results indicate that the optimal allocations often have
nonintuitive structure and are difficult to specify. We also show that
depending on the circumstances, coding may or may not be beneficial for
reliable storage.
|
1011.5298
|
Bayesian Sequential Detection with Phase-Distributed Change Time and
Nonlinear Penalty -- A POMDP Approach
|
cs.IT math.IT stat.ME
|
We show that the optimal decision policy for several types of Bayesian
sequential detection problems has a threshold switching curve structure on the
space of posterior distributions. This is established by using lattice
programming and stochastic orders in a partially observed Markov decision
process (POMDP) framework. A stochastic gradient algorithm is presented to
estimate the optimal linear approximation to this threshold curve. We
illustrate these results by first considering quickest time detection with
phase-type distributed change time and a variance stopping penalty. Then it is
proved that the threshold switching curve also arises in several other Bayesian
decision problems such as quickest transient detection, exponential delay
(risk-sensitive) penalties, stopping time problems in social learning, and
multi-agent scheduling in a changing world. Using Blackwell dominance, it is
shown that for dynamic decision making problems, the optimal decision policy is
lower bounded by a myopic policy. Finally, it is shown how the achievable cost
of the optimal decision policy varies with change time distribution by imposing
a partial order on transition matrices.
|
1011.5314
|
ML(n)BiCGStab: Reformulation, Analysis and Implementation
|
math.NA cs.IT math.DS math.IT math.OC math.ST stat.TH
|
With the aid of index functions, we re-derive the ML(n)BiCGStab algorithm in
a paper by Yeung and Chan in 1999 in a more systematic way. It turns out that
there are n ways to define the ML(n)BiCGStab residual vector. Each definition
will lead to a different ML(n)BiCGStab algorithm. We demonstrate this by
presenting a second algorithm which requires less storage. In theory, this
second algorithm serves as a bridge that connects the Lanczos-based BiCGStab
and the Arnoldi-based FOM while ML(n)BiCG a bridge connecting BiCG and FOM. We
also analyze the breakdown situations from the probabilistic point of view and
summarize some useful properties of ML(n)BiCGStab. Implementation issues are
also addressed.
|
1011.5349
|
Distributed Graph Coloring: An Approach Based on the Calling Behavior of
Japanese Tree Frogs
|
cs.AI
|
Graph coloring, also known as vertex coloring, considers the problem of
assigning colors to the nodes of a graph such that adjacent nodes do not share
the same color. The optimization version of the problem concerns the
minimization of the number of used colors. In this paper we deal with the
problem of finding valid colorings of graphs in a distributed way, that is, by
means of an algorithm that only uses local information for deciding the color
of the nodes. Such algorithms prescind from any central control. Due to the
fact that quite a few practical applications require to find colorings in a
distributed way, the interest in distributed algorithms for graph coloring has
been growing during the last decade. As an example consider wireless ad-hoc and
sensor networks, where tasks such as the assignment of frequencies or the
assignment of TDMA slots are strongly related to graph coloring.
The algorithm proposed in this paper is inspired by the calling behavior of
Japanese tree frogs. Male frogs use their calls to attract females.
Interestingly, groups of males that are located nearby each other desynchronize
their calls. This is because female frogs are only able to correctly localize
the male frogs when their calls are not too close in time. We experimentally
show that our algorithm is very competitive with the current state of the art,
using different sets of problem instances and comparing to one of the most
competitive algorithms from the literature.
|
1011.5364
|
Optimizing On-Line Advertising
|
cs.IR
|
We want to find the optimal strategy for displaying advertisements e.g.
banners, videos, in given locations at given times under some realistic dynamic
constraints. Our primary goal is to maximize the expected revenue in a given
period of time, i.e. the total profit produced by the impressions, which
depends on profit-generating events such as the impressions themselves, the
ensuing clicks and registrations. Moreover we must take into consideration the
possibility that the constraints could change in time in a way that cannot
always be foreseen.
|
1011.5367
|
The dynamical strength of social ties in information spreading
|
physics.soc-ph cs.SI
|
We investigate the temporal patterns of human communication and its influence
on the spreading of information in social networks. The analysis of mobile
phone calls of 20 million people in one country shows that human communication
is bursty and happens in group conversations. These features have opposite
effects in information reach: while bursts hinder propagation at large scales,
conversations favor local rapid cascades. To explain these phenomena we define
the dynamical strength of social ties, a quantity that encompasses both the
topological and temporal patterns of human communication.
|
1011.5395
|
The Sample Complexity of Dictionary Learning
|
stat.ML cs.LG
|
A large set of signals can sometimes be described sparsely using a
dictionary, that is, every element can be represented as a linear combination
of few elements from the dictionary. Algorithms for various signal processing
applications, including classification, denoising and signal separation, learn
a dictionary from a set of signals to be represented. Can we expect that the
representation found by such a dictionary for a previously unseen example from
the same source will have L_2 error of the same magnitude as those for the
given examples? We assume signals are generated from a fixed distribution, and
study this questions from a statistical learning theory perspective.
We develop generalization bounds on the quality of the learned dictionary for
two types of constraints on the coefficient selection, as measured by the
expected L_2 error in representation when the dictionary is used. For the case
of l_1 regularized coefficient selection we provide a generalization bound of
the order of O(sqrt(np log(m lambda)/m)), where n is the dimension, p is the
number of elements in the dictionary, lambda is a bound on the l_1 norm of the
coefficient vector and m is the number of samples, which complements existing
results. For the case of representing a new signal as a combination of at most
k dictionary elements, we provide a bound of the order O(sqrt(np log(m k)/m))
under an assumption on the level of orthogonality of the dictionary (low Babel
function). We further show that this assumption holds for most dictionaries in
high dimensions in a strong probabilistic sense. Our results further yield fast
rates of order 1/m as opposed to 1/sqrt(m) using localized Rademacher
complexity. We provide similar results in a general setting using kernels with
weak smoothness requirements.
|
1011.5425
|
Layered Label Propagation: A MultiResolution Coordinate-Free Ordering
for Compressing Social Networks
|
cs.DS cs.SI physics.soc-ph
|
We continue the line of research on graph compression started with WebGraph,
but we move our focus to the compression of social networks in a proper sense
(e.g., LiveJournal): the approaches that have been used for a long time to
compress web graphs rely on a specific ordering of the nodes (lexicographical
URL ordering) whose extension to general social networks is not trivial. In
this paper, we propose a solution that mixes clusterings and orders, and devise
a new algorithm, called Layered Label Propagation, that builds on previous work
on scalable clustering and can be used to reorder very large graphs (billions
of nodes). Our implementation uses overdecomposition to perform aggressively on
multi-core architecture, making it possible to reorder graphs of more than 600
millions nodes in a few hours. Experiments performed on a wide array of web
graphs and social networks show that combining the order produced by the
proposed algorithm with the WebGraph compression framework provides a major
increase in compression with respect to all currently known techniques, both on
web graphs and on social networks. These improvements make it possible to
analyse in main memory significantly larger graphs.
|
1011.5452
|
Convergence Speed of the Consensus Algorithm with Interference and
Sparse Long-Range Connectivity
|
cs.IT math.IT
|
We analyze the effect of interference on the convergence rate of average
consensus algorithms, which iteratively compute the measurement average by
message passing among nodes. It is usually assumed that these algorithms
converge faster with a greater exchange of information (i.e., by increased
network connectivity) in every iteration. However, when interference is taken
into account, it is no longer clear if the rate of convergence increases with
network connectivity. We study this problem for randomly-placed
consensus-seeking nodes connected through an interference-limited network. We
investigate the following questions: (a) How does the rate of convergence vary
with increasing communication range of each node? and (b) How does this result
change when each node is allowed to communicate with a few selected far-off
nodes? When nodes schedule their transmissions to avoid interference, we show
that the convergence speed scales with $r^{2-d}$, where $r$ is the
communication range and $d$ is the number of dimensions. This scaling is the
result of two competing effects when increasing $r$: Increased schedule length
for interference-free transmission vs. the speed gain due to improved
connectivity. Hence, although one-dimensional networks can converge faster from
a greater communication range despite increased interference, the two effects
exactly offset one another in two-dimensions. In higher dimensions, increasing
the communication range can actually degrade the rate of convergence. Our
results thus underline the importance of factoring in the effect of
interference in the design of distributed estimation algorithms.
|
1011.5480
|
Bayesian Modeling of a Human MMORPG Player
|
cs.AI
|
This paper describes an application of Bayesian programming to the control of
an autonomous avatar in a multiplayer role-playing game (the example is based
on World of Warcraft). We model a particular task, which consists of choosing
what to do and to select which target in a situation where allies and foes are
present. We explain the model in Bayesian programming and show how we could
learn the conditional probabilities from data gathered during human-played
sessions.
|
1011.5481
|
Using Evolution Strategy with Meta-models for Well Placement
Optimization
|
cs.CE
|
Optimum implementation of non-conventional wells allows us to increase
considerably hydrocarbon recovery. By considering the high drilling cost and
the potential improvement in well productivity, well placement decision is an
important issue in field development. Considering complex reservoir geology and
high reservoir heterogeneities, stochastic optimization methods are the most
suitable approaches for optimum well placement. This paper proposes an
optimization methodology to determine optimal well location and trajectory
based upon the Covariance Matrix Adaptation - Evolution Strategy (CMA-ES) which
is a variant of Evolution Strategies recognized as one of the most powerful
derivative-free optimizers for continuous optimization. To improve the
optimization procedure, two new techniques are investigated: (1). Adaptive
penalization with rejection is developed to handle well placement constraints.
(2). A meta-model, based on locally weighted regression, is incorporated into
CMA-ES using an approximate ranking procedure. Therefore, we can reduce the
number of reservoir simulations, which are computationally expensive. Several
examples are presented. Our new approach is compared with a Genetic Algorithm
incorporating the Genocop III technique. It is shown that our approach
outperforms the genetic algorithm: it leads in general to both a higher NPV and
a significant reduction of the number of reservoir simulations.
|
1011.5496
|
On Network Functional Compression
|
cs.IT math.IT
|
In this paper, we consider different aspects of the network functional
compression problem where computation of a function (or, some functions) of
sources located at certain nodes in a network is desired at receiver(s). The
rate region of this problem has been considered in the literature under certain
restrictive assumptions, particularly in terms of the network topology, the
functions and the characteristics of the sources. In this paper, we present
results that significantly relax these assumptions. Firstly, we consider this
problem for an arbitrary tree network and asymptotically lossless computation.
We show that, for depth one trees with correlated sources, or for general trees
with independent sources, a modularized coding scheme based on graph colorings
and Slepian-Wolf compression performs arbitrarily closely to rate lower bounds.
For a general tree network with independent sources, optimal computation to be
performed at intermediate nodes is derived. We introduce a necessary and
sufficient condition on graph colorings of any achievable coding scheme, called
coloring connectivity condition (C.C.C.).
Secondly, we investigate the effect of having several functions at the
receiver. In this problem, we derive a rate region and propose a coding scheme
based on graph colorings. Thirdly, we consider the functional compression
problem with feedback. We show that, in this problem, unlike Slepian-Wolf
compression, by having feedback, one may outperform rate bounds of the case
without feedback. Fourthly, we investigate functional computation problem with
distortion. We compute a rate-distortion region for this problem. Then, we
propose a simple suboptimal coding scheme with a non-trivial performance
guarantee. Finally, we introduce cases where finding minimum entropy colorings
and therefore, optimal coding schemes can be performed in polynomial time.
|
1011.5535
|
Examples of minimal-memory, non-catastrophic quantum convolutional
encoders
|
quant-ph cs.IT math.IT
|
One of the most important open questions in the theory of quantum
convolutional coding is to determine a minimal-memory, non-catastrophic,
polynomial-depth convolutional encoder for an arbitrary quantum convolutional
code. Here, we present a technique that finds quantum convolutional encoders
with such desirable properties for several example quantum convolutional codes
(an exposition of our technique in full generality will appear elsewhere). We
first show how to encode the well-studied Forney-Grassl-Guha (FGG) code with an
encoder that exploits just one memory qubit (the former Grassl-Roetteler
encoder requires 15 memory qubits). We then show how our technique can find an
online decoder corresponding to this encoder, and we also detail the operation
of our technique on a different example of a quantum convolutional code.
Finally, the reduction in memory for the FGG encoder makes it feasible to
simulate the performance of a quantum turbo code employing it, and we present
the results of such simulations.
|
1011.5566
|
Secure Index Coding with Side Information
|
cs.IT cs.CR cs.NI math.IT
|
Security aspects of the Index Coding with Side Information (ICSI) problem are
investigated. Building on the results of Bar-Yossef et al. (2006), the
properties of linear coding schemes for the ICSI problem are further explored.
The notion of weak security, considered by Bhattad and Narayanan (2005) in the
context of network coding, is generalized to block security. It is shown that
the coding scheme for the ICSI problem based on a linear code C of length n,
minimum distance d and dual distance d^\perp, is (d-1-t)-block secure (and
hence also weakly secure) if the adversary knows in advance t \le d - 2
messages, and is completely insecure if the adversary knows in advance more
than n - d^\perp messages.
|
1011.5599
|
HyperANF: Approximating the Neighbourhood Function of Very Large Graphs
on a Budget
|
cs.DS cs.SI physics.soc-ph
|
The neighbourhood function N(t) of a graph G gives, for each t, the number of
pairs of nodes <x, y> such that y is reachable from x in less that t hops. The
neighbourhood function provides a wealth of information about the graph (e.g.,
it easily allows one to compute its diameter), but it is very expensive to
compute it exactly. Recently, the ANF algorithm (approximate neighbourhood
function) has been proposed with the purpose of approximating NG(t) on large
graphs. We describe a breakthrough improvement over ANF in terms of speed and
scalability. Our algorithm, called HyperANF, uses the new HyperLogLog counters
and combines them efficiently through broadword programming; our implementation
uses overdecomposition to exploit multi-core parallelism. With HyperANF, for
the first time we can compute in a few hours the neighbourhood function of
graphs with billions of nodes with a small error and good confidence using a
standard workstation. Then, we turn to the study of the distribution of the
shortest paths between reachable nodes (that can be efficiently approximated by
means of HyperANF), and discover the surprising fact that its index of
dispersion provides a clear-cut characterisation of proper social networks vs.
web graphs. We thus propose the spid (Shortest-Paths Index of Dispersion) of a
graph as a new, informative statistics that is able to discriminate between the
above two types of graphs. We believe this is the first proposal of a
significant new non-local structural index for complex networks whose
computation is highly scalable.
|
1011.5606
|
Stability of a Stochastic Model for Demand-Response
|
cs.SY math.OC
|
We study the stability of a Markovian model of electricity production and
consumption that incorporates production volatility due to renewables and
uncertainty about actual demand versus planned production. We assume that the
energy producer targets a fixed energy reserve, subject to ramp-up and
ramp-down constraints, and that appliances are subject to demand-response
signals and adjust their consumption to the available production by delaying
their demand. When a constant fraction of the delayed demand vanishes over
time, we show that the general state Markov chain characterizing the system is
positive Harris and ergodic (i.e., delayed demand is bounded with high
probability). However, when delayed demand increases by a constant fraction
over time, we show that the Markov chain is non-positive (i.e., there exists a
non-zero probability that delayed demand becomes unbounded). We exhibit
Lyapunov functions to prove our claims. In addition, we provide examples of
heating appliances that, when delayed, have energy requirements corresponding
to the two considered cases.
|
1011.5668
|
On Theorem 2.3 in "Prediction, Learning, and Games" by Cesa-Bianchi and
Lugosi
|
cs.LG
|
The note presents a modified proof of a loss bound for the exponentially
weighted average forecaster with time-varying potential. The regret term of the
algorithm is upper-bounded by sqrt{n ln(N)} (uniformly in n), where N is the
number of experts and n is the number of steps.
|
1011.5694
|
Formulation Of A N-Degree Polynomial For Depth Estimation using a Single
Image
|
cs.CV math-ph math.MP physics.comp-ph physics.ed-ph physics.pop-ph
|
The depth of a visible surface of a scene is the distance between the surface
and the sensor. Recovering depth information from two-dimensional images of a
scene is an important task in computer vision that can assist numerous
applications such as object recognition, scene interpretation, obstacle
avoidance, inspection and assembly. Various passive depth computation
techniques have been developed for computer vision applications. They can be
classified into two groups. The first group operates using just one image. The
second group requires more than one image which can be acquired using either
multiple cameras or a camera whose parameters and positioning can be changed.
This project is aimed to find the real depth of the object from the camera
which had been used to click the photograph. An n-degree polynomial was
formulated, which maps the pixel depth of an image to the real depth. In order
to find the coefficients of the polynomial, an experiment was carried out for a
particular lens and thus, these coefficients are a unique feature of a
particular camera. The procedure explained in this report is a monocular
approach for estimation of depth of a scene. The idea involves mapping the
Pixel Depth of the object photographed in the image with the Real Depth of the
object from the camera lens with an interpolation function. In order to find
the parameters of the interpolation function, a set of lines with predefined
distance from camera is used, and then the distance of each line from the
bottom edge of the picture (as the origin line) is calculated.
|
1011.5696
|
Quantifying and qualifying trust: Spectral decomposition of trust
networks
|
cs.CR cs.IR
|
In a previous FAST paper, I presented a quantitative model of the process of
trust building, and showed that trust is accumulated like wealth: the rich get
richer. This explained the pervasive phenomenon of adverse selection of trust
certificates, as well as the fragility of trust networks in general. But a
simple explanation does not always suggest a simple solution. It turns out that
it is impossible to alter the fragile distribution of trust without sacrificing
some of its fundamental functions. A solution for the vulnerability of trust
must thus be sought elsewhere, without tampering with its distribution. This
observation was the starting point of the present paper. It explores a
different method for securing trust: not by redistributing it, but by mining
for its sources. The method used to break privacy is thus also used to secure
trust. A high level view of the mining methods that connect the two is provided
in terms of *similarity networks*, and *spectral decomposition* of similarity
preserving maps. This view may be of independent interest, as it uncovers a
common conceptual and structural foundation of mathematical classification
theory on one hand, and of the spectral methods of graph clustering and data
mining on the other hand.
|
1011.5699
|
The Necessity of Relay Selection
|
cs.IT math.IT
|
We determine necessary conditions on the structure of symbol error rate (SER)
optimal quantizers for limited feedback beamforming in wireless networks with
one transmitter-receiver pair and R parallel amplify-and-forward relays. We
call a quantizer codebook "small" if its cardinality is less than R, and
"large" otherwise. A "d-codebook" depends on the power constraints and can be
optimized accordingly, while an "i-codebook" remains fixed. It was previously
shown that any i-codebook that contains the single-relay selection (SRS)
codebook achieves the full-diversity order, R. We prove the following:
Every full-diversity i-codebook contains the SRS codebook, and thus is
necessarily large. In general, as the power constraints grow to infinity, the
limit of an optimal large d-codebook contains an SRS codebook, provided that it
exists. For small codebooks, the maximal diversity is equal to the codebook
cardinality. Every diversity-optimal small i-codebook is an orthogonal
multiple-relay selection (OMRS) codebook. Moreover, the limit of an optimal
small d-codebook is an OMRS codebook.
We observe that SRS is nothing but a special case of OMRS for codebooks with
cardinality equal to R. As a result, we call OMRS as "the universal necessary
condition" for codebook optimality. Finally, we confirm our analytical findings
through simulations.
|
1011.5739
|
Protocol Coding through Reordering of User Resources: Applications and
Capacity Results
|
cs.IT cs.NI math.IT
|
While there are continuous efforts to introduce new communication systems and
standards, it is legitimate to ask the question: how can one send additional
bits by minimally changing the systems that are already operating? This is of a
significant practical interest, since it has a potential to generate additional
value of the systems through, for example, introduction of new devices and only
a software update of the access points or base stations, without incurring
additional cost for infrastructure hardware installation. The place to look for
such an opportunity is the communication protocol and we use the term *protocol
coding* to refer to strategies for sending information by using the degrees of
freedom available when one needs to decide the actions taken by a particular
communication protocol. In this paper we consider protocol coding that gives a
rise to *secondary communication channels*, defined by combinatorial ordering
of the user resources (packets, channels) in a primary (legacy) communication
system. We introduce communication models that enable us to compute the
capacity of such secondary channels under suitable restrictions imposed by the
primary systems. We first show the relation to the capacity of channels with
causal channel state information at the transmitter (CSIT), originally
considered by Shannon. By using the specific communication setup, we develop an
alternative framework for achieving the capacity and we discuss coding
strategies that need to be used over the secondary channels. We also discuss
some practical features of the secondary channels and their applications that
add value to the existing wireless systems.
|
1011.5814
|
Quantum Cyclic Code of length dividing $p^{t}+1$
|
cs.IT math.IT
|
In this paper, we study cyclic stabiliser codes over $\mathbb{F}_p$ of length
dividing $p^t+1$ for some positive integer $t$. We call these $t$-Frobenius
codes or just Frobenius codes for short. We give methods to construct them and
show that they have efficient decoding algorithms. An important subclass of
stabiliser codes are the linear stabiliser codes. For linear Frobenius codes we
have stronger results: We completely characterise all linear Frobenius codes.
As a consequence, we show that for every integer $n$ that divides $p^t+1$ for
an odd $t$, there are no linear cyclic codes of length $n$. On the other hand
for even $t$, we give an explicit method to construct all of them. This gives
us a many explicit example of Frobenius codes which include the well studied
Laflamme code. We show that the classical notion of BCH distance can be
generalised to all the Frobenius codes that we construct, including the
non-linear ones, and show that the algorithm of Berlekamp can be generalised to
correct quantum errors within the BCH limit. This gives, for the first time, a
family of codes that are neither CSS nor linear for which efficient decoding
algorithm exits. The explicit examples that we construct are summarised in
Table \ref{tab:explicit-examples-short} and explained in detail in Tables
\ref{tab:explicit-examples-2} (linear case) and \ref{tab:explicit-examples-3}
(non-linear case).
|
1011.5866
|
Evolving difficult SAT instances thanks to local search
|
cs.NE cs.LO
|
We propose to use local search algorithms to produce SAT instances which are
harder to solve than randomly generated k-CNF formulae. The first results,
obtained with rudimentary search algorithms, show that the approach deserves
further study. It could be used as a test of robustness for SAT solvers, and
could help to investigate how branching heuristics, learning strategies, and
other aspects of solvers impact there robustness.
|
1011.5914
|
Static and Expanding Grid Coverage with Ant Robots : Complexity Results
|
cs.MA
|
In this paper we study the strengths and limitations of collaborative teams
of simple agents. In particular, we discuss the efficient use of "ant robots"
for covering a connected region on the Z^{2} grid, whose area is unknown in
advance, and which expands at a given rate, where $n$ is the initial size of
the connected region.
We show that regardless of the algorithm used, and the robots' hardware and
software specifications, the minimal number of robots required in order for
such coverage to be possible is \Omega({\sqrt{n}}).
In addition, we show that when the region expands at a sufficiently slow
rate, a team of \Theta(\sqrt{n}) robots could cover it in at most O(n^{2} \ln
n) time.
This completion time can even be achieved by myopic robots, with no ability
to directly communicate with each other, and where each robot is equipped with
a memory of size O(1) bits w.r.t the size of the region (therefore, the robots
cannot maintain maps of the terrain, nor plan complete paths).
Regarding the coverage of non-expanding regions in the grid, we improve the
current best known result of O(n^{2}) by demonstrating an algorithm that
guarantees such a coverage with completion time of O(\frac{1}{k} n^{1.5} + n)
in the worst case, and faster for shapes of perimeter length which is shorter
than O(n).
|
1011.5936
|
On the Performance of Sparse Recovery via L_p-minimization (0<=p <=1)
|
cs.IT math.IT
|
It is known that a high-dimensional sparse vector x* in R^n can be recovered
from low-dimensional measurements y= A^{m*n} x* (m<n) . In this paper, we
investigate the recovering ability of l_p-minimization (0<=p<=1) as p varies,
where l_p-minimization returns a vector with the least l_p ``norm'' among all
the vectors x satisfying Ax=y. Besides analyzing the performance of strong
recovery where l_p-minimization needs to recover all the sparse vectors up to
certain sparsity, we also for the first time analyze the performance of
``weak'' recovery of l_p-minimization (0<=p<1) where the aim is to recover all
the sparse vectors on one support with fixed sign pattern. When m/n goes to 1,
we provide sharp thresholds of the sparsity ratio that differentiates the
success and failure via l_p-minimization. For strong recovery, the threshold
strictly decreases from 0.5 to 0.239 as p increases from 0 to 1. Surprisingly,
for weak recovery, the threshold is 2/3 for all p in [0,1), while the threshold
is 1 for l_1-minimization. We also explicitly demonstrate that l_p-minimization
(p<1) can return a denser solution than l_1-minimization. For any m/n<1, we
provide bounds of sparsity ratio for strong recovery and weak recovery
respectively below which l_p-minimization succeeds with overwhelming
probability. Our bound of strong recovery improves on the existing bounds when
m/n is large. Regarding the recovery threshold, l_p-minimization has a higher
threshold with smaller p for strong recovery; the threshold is the same for all
p for sectional recovery; and l_1-minimization can outperform l_p-minimization
for weak recovery. These are in contrast to traditional wisdom that
l_p-minimization has better sparse recovery ability than l_1-minimization since
it is closer to l_0-minimization. We provide an intuitive explanation to our
findings and use numerical examples to illustrate the theoretical predictions.
|
1011.5950
|
Networks and the Epidemiology of Infectious Disease
|
physics.soc-ph cs.SI q-bio.PE
|
The science of networks has revolutionised research into the dynamics of
interacting elements. It could be argued that epidemiology in particular has
embraced the potential of network theory more than any other discipline. Here
we review the growing body of research concerning the spread of infectious
diseases on networks, focusing on the interplay between network theory and
epidemiology. The review is split into four main sections, which examine: the
types of network relevant to epidemiology; the multitude of ways these networks
can be characterised; the statistical methods that can be applied to infer the
epidemiological parameters on a realised network; and finally simulation and
analytical methods to determine epidemic dynamics on a given network. Given the
breadth of areas covered and the ever-expanding number of publications, a
comprehensive review of all work is impossible. Instead, we provide a
personalised overview into the areas of network epidemiology that have seen the
greatest progress in recent years or have the greatest potential to provide
novel insights. As such, considerable importance is placed on analytical
approaches and statistical methods which are both rapidly expanding fields.
Throughout this review we restrict our attention to epidemiological issues.
|
1011.5951
|
Reinforcement Learning in Partially Observable Markov Decision Processes
using Hybrid Probabilistic Logic Programs
|
cs.AI
|
We present a probabilistic logic programming framework to reinforcement
learning, by integrating reinforce-ment learning, in POMDP environments, with
normal hybrid probabilistic logic programs with probabilistic answer set
seman-tics, that is capable of representing domain-specific knowledge. We
formally prove the correctness of our approach. We show that the complexity of
finding a policy for a reinforcement learning problem in our approach is
NP-complete. In addition, we show that any reinforcement learning problem can
be encoded as a classical logic program with answer set semantics. We also show
that a reinforcement learning problem can be encoded as a SAT problem. We
present a new high level action description language that allows the factored
representation of POMDP. Moreover, we modify the original model of POMDP so
that it be able to distinguish between knowledge producing actions and actions
that change the environment.
|
1011.5962
|
Edge Preserving Image Denoising in Reproducing Kernel Hilbert Spaces
|
cs.CV
|
The goal of this paper is the development of a novel approach for the problem
of Noise Removal, based on the theory of Reproducing Kernels Hilbert Spaces
(RKHS). The problem is cast as an optimization task in a RKHS, by taking
advantage of the celebrated semiparametric Representer Theorem. Examples verify
that in the presence of gaussian noise the proposed method performs relatively
well compared to wavelet based technics and outperforms them significantly in
the presence of impulse or mixed noise.
A more detailed version of this work has been published in the IEEE Trans.
Im. Proc. : P. Bouboulis, K. Slavakis and S. Theodoridis, Adaptive Kernel-based
Image Denoising employing Semi-Parametric Regularization, IEEE Transactions on
Image Processing, vol 19(6), 2010, 1465 - 1479.
|
1011.5987
|
Prediction-based Adaptation (PRADA) Algorithm for Modulation and Coding
|
cs.IT math.IT
|
In this paper, we propose a novel adaptive modulation and coding (AMC)
algorithm dedicated to reduce the feedback frequency of the channel state
information (CSI). There have been already plenty of works on AMC so as to
exploit the bandwidth more efficiently with the CSI feedback to the
transmitter. However, in some occasions, frequent CSI feedback is not favorable
in these systems. This work considers finite-state Markov chain (FSMC) based
channel prediction to alleviate the feedback while maximizing the overall
throughput. We derive the close-form of the frame error rate (FER) based on
channel prediction using limited CSI feedback. In addition, instead of
switching settings according to the CSI, we also provide means to combine both
CSI and FER as the switching parameter. Numerical results illustrate that the
average throughput of the proposed algorithm has significant performance
improvement over fixed modulation and coding while the CSI feedback being
largely reduced.
|
1011.6017
|
A Selection Region Based Routing Protocol for Random Mobile ad hoc
Networks with Directional Antennas
|
cs.IT math.IT
|
In this paper, we propose a selection region based multihop routing protocol
with directional antennas for wireless mobile ad hoc networks, where the
selection region is defined by two parameters: a reference distance and the
beamwidth of the directional antenna. At each hop, we choose the nearest node
to the transmitter within the selection region as the next hop relay. By
maximizing the expected density of progress, we present an upper bound for the
optimum reference distance and derive the relationship between the optimum
reference distance and the optimum transmission probability. Compared with the
results with routing strategy using omnidirectional antennas in
\cite{Di:Relay-Region}, we find interestingly that the optimum transmission
probability is a constant independent of the beamwidth, the expected density of
progress with the new routing strategy is increased significantly, and the
computational complexity involved in the relay selection is also greatly
reduced.
|
1011.6022
|
DXNN Platform: The Shedding of Biological Inefficiencies
|
cs.NE
|
This paper introduces a novel type of memetic algorithm based Topology and
Weight Evolving Artificial Neural Network (TWEANN) system called DX Neural
Network (DXNN). DXNN implements a number of interesting features, amongst which
is: a simple and database friendly tuple based encoding method, a 2 phase
neuroevolutionary approach aimed at removing the need for speciation due to its
intrinsic population diversification effects, a new "Targeted Tuning Phase"
aimed at dealing with "the curse of dimensionality", and a new Random Intensity
Mutation (RIM) method that removes the need for crossover algorithms. The paper
will discuss DXNN's architecture, mutation operators, and its built in feature
selection method that allows for the evolved systems to expand and incorporate
new sensors and actuators. I then compare DXNN to other state of the art
TWEANNs on the standard double pole balancing benchmark, and demonstrate its
superior ability to evolve highly compact solutions faster than its
competitors. Then a set of oblation experiments is performed to demonstrate how
each feature of DXNN effects its performance, followed by a set of experiments
which demonstrate the platform's ability to create NN populations with
exceptionally high diversity profiles. Finally, DXNN is used to evolve
artificial robots in a set of two dimensional open-ended food gathering and
predator-prey simulations, demonstrating the system's ability to produce ever
more complex Neural Networks, and the system's applicability to the domain of
robotics, artificial life, and coevolution.
|
1011.6075
|
Distributed High Accuracy Peer-to-Peer Localization in Mobile Multipath
Environments
|
cs.IT cs.DC cs.NI math.IT math.OC
|
In this paper we consider the problem of high accuracy localization of mobile
nodes in a multipath-rich environment where sub-meter accuracies are required.
We employ a peer to peer framework where the vehicles/nodes can get pairwise
multipath-degraded ranging estimates in local neighborhoods together with a
fixed number of anchor nodes. The challenge is to overcome the
multipath-barrier with redundancy in order to provide the desired accuracies
especially under severe multipath conditions when the fraction of received
signals corrupted by multipath is dominating. We invoke a message passing
analytical framework based on particle filtering and reveal its high accuracy
localization promise through simulations.
|
1011.6086
|
In All Likelihood, Deep Belief Is Not Enough
|
stat.ML cs.LG
|
Statistical models of natural stimuli provide an important tool for
researchers in the fields of machine learning and computational neuroscience. A
canonical way to quantitatively assess and compare the performance of
statistical models is given by the likelihood. One class of statistical models
which has recently gained increasing popularity and has been applied to a
variety of complex data are deep belief networks. Analyses of these models,
however, have been typically limited to qualitative analyses based on samples
due to the computationally intractable nature of the model likelihood.
Motivated by these circumstances, the present article provides a consistent
estimator for the likelihood that is both computationally tractable and simple
to apply in practice. Using this estimator, a deep belief network which has
been suggested for the modeling of natural image patches is quantitatively
investigated and compared to other models of natural image patches. Contrary to
earlier claims based on qualitative results, the results presented in this
article provide evidence that the model under investigation is not a
particularly good model for natural images
|
1011.6121
|
On Beamformer Design for Multiuser MIMO Interference Channels
|
cs.IT math.IT
|
This paper considers several linear beamformer design paradigms for multiuser
time-invariant multiple-input multiple-output interference channels. Notably,
interference alignment and sum-rate based algorithms such as the maximum
signal-to-interference-plus noise (max-SINR) algorithm are considered. Optimal
linear beamforming under interference alignment consists of two layers; an
inner precoder and decoder (or receive filter) accomplish interference
alignment to eliminate inter-user interference, and an outer precoder and
decoder diagonalize the effective single-user channel resulting from the
interference alignment by the inner precoder and decoder. The relationship
between this two-layer beamforming and the max-SINR algorithm is established at
high signal-to-noise ratio. Also, the optimality of the max-SINR algorithm
within the class of linear beamforming algorithms, and its local convergence
with exponential rate, are established at high signal-to-noise ratio.
|
1011.6127
|
Visibility maintenance via controlled invariance for leader-follower
Dubins-like vehicles
|
cs.MA
|
The paper studies the visibility maintenance problem (VMP) for a
leader-follower pair of Dubins-like vehicles with input constraints, and
proposes an original solution based on the notion of controlled invariance. The
nonlinear model describing the relative dynamics of the vehicles is interpreted
as linear uncertain system, with the leader robot acting as an external
disturbance. The VMP is then reformulated as a linear constrained regulation
problem with additive disturbances (DLCRP). Positive D-invariance conditions
for linear uncertain systems with parametric disturbance matrix are introduced
and used to solve the VMP when box bounds on the state, control input and
disturbance are considered. The proposed design procedure is shown to be easily
adaptable to more general working scenarios. Extensive simulation results are
provided to illustrate the theory and show the effectiveness of our approach
|
1011.6218
|
Coordinated Transmissions to Direct and Relayed Users in Wireless
Cellular Systems
|
cs.IT cs.NI math.IT
|
The ideas of wireless network coding at the physical layer promise high
throughput gains in wireless systems with relays and multi-way traffic flows.
This gain can be ascribed to two principles: (1) joint transmission of multiple
communication flows and (2) usage of \emph{a priori} information to cancel the
interference. In this paper we use these principles to devise new transmission
schemes in wireless cellular systems that feature both users served directly by
the base stations (direct users) and users served through relays (relayed
users). We present four different schemes for \emph{coordinated transmission}
of uplink and downlink traffic in which one direct and one relayed user are
served. These schemes are then used as building blocks in multi-user scenarios,
where we present several schemes for scheduling pairs of users for coordinated
transmissions. The optimal scheme involves exhaustive search of the best user
pair in terms of overall rate. We propose several suboptimal scheduling
schemes, which perform closely to the optimal scheme. The numerical results
show a substantial increase in the system--level rate with respect to the
systems with non--coordinated transmissions.
|
1011.6220
|
Multimodal Biometric Systems - Study to Improve Accuracy and Performance
|
cs.AI
|
Biometrics is the science and technology of measuring and analyzing
biological data of human body, extracting a feature set from the acquired data,
and comparing this set against to the template set in the database.
Experimental studies show that Unimodal biometric systems had many
disadvantages regarding performance and accuracy. Multimodal biometric systems
perform better than unimodal biometric systems and are popular even more
complex also. We examine the accuracy and performance of multimodal biometric
authentication systems using state of the art Commercial Off- The-Shelf (COTS)
products. Here we discuss fingerprint and face biometric systems, decision and
fusion techniques used in these systems. We also discuss their advantage over
unimodal biometric systems.
|
1011.6224
|
Classifying extremely imbalanced data sets
|
physics.data-an cs.LG hep-ex stat.ML
|
Imbalanced data sets containing much more background than signal instances
are very common in particle physics, and will also be characteristic for the
upcoming analyses of LHC data. Following up the work presented at ACAT 2008, we
use the multivariate technique presented there (a rule growing algorithm with
the meta-methods bagging and instance weighting) on much more imbalanced data
sets, especially a selection of D0 decays without the use of particle
identification. It turns out that the quality of the result strongly depends on
the number of background instances used for training. We discuss methods to
exploit this in order to improve the results significantly, and how to handle
and reduce the size of large training sets without loss of result quality in
general. We will also comment on how to take into account statistical
fluctuation in receiver operation characteristic curves (ROC) for comparing
classifier methods.
|
1011.6242
|
A Construction of Weakly and Non-Weakly Regular Bent Functions
|
math.CO cs.IT math.IT
|
In this article a technique for constructing $p$-ary bent functions from
near-bent functions is presented. Two classes of quadratic $p$-ary functions
are shown to be near-bent. Applying the construction of bent functions to these
classes of near-bent functions yields classes of non-quadratic bent functions.
We show that one construction in even dimension yields weakly regular bent
functions. For other constructions, we obtain both weakly regular and
non-weakly regular bent functions. In particular we present the first known
infinite class of non-weakly regular bent functions.
|
1011.6266
|
Characterizing the speed and paths of shared bicycles in Lyon
|
cs.SI
|
Thanks to numerical data gathered by Lyon's shared bicycling system V\'elo'v,
we are able to analyze 11.6 millions bicycle trips, leading to the first robust
characterization of urban bikers' behaviors. We show that bicycles outstrip
cars in downtown Lyon, by combining high speed and short paths.These data also
allows us to calculate V\'elo'v fluxes on all streets, pointing to interesting
locations for bike paths.
|
1011.6268
|
Quantitative Analysis of Bloggers Collective Behavior Powered by
Emotions
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Large-scale data resulting from users online interactions provide the
ultimate source of information to study emergent social phenomena on the Web.
From individual actions of users to observable collective behaviors, different
mechanisms involving emotions expressed in the posted text play a role. Here we
combine approaches of statistical physics with machine-learning methods of text
analysis to study emergence of the emotional behavior among Web users. Mapping
the high-resolution data from digg.com onto bipartite network of users and
their comments onto posted stories, we identify user communities centered
around certain popular posts and determine emotional contents of the related
comments by the emotion-classifier developed for this type of texts. Applied
over different time periods, this framework reveals strong correlations between
the excess of negative emotions and the evolution of communities. We observe
avalanches of emotional comments exhibiting significant self-organized critical
behavior and temporal correlations. To explore robustness of these critical
states, we design a network automaton model on realistic network connections
and several control parameters, which can be inferred from the dataset.
Dissemination of emotions by a small fraction of very active users appears to
critically tune the collective states.
|
1011.6293
|
Nonparametric Bayesian sparse factor models with application to gene
expression modeling
|
stat.AP cs.AI stat.ML
|
A nonparametric Bayesian extension of Factor Analysis (FA) is proposed where
observed data $\mathbf{Y}$ is modeled as a linear superposition, $\mathbf{G}$,
of a potentially infinite number of hidden factors, $\mathbf{X}$. The Indian
Buffet Process (IBP) is used as a prior on $\mathbf{G}$ to incorporate sparsity
and to allow the number of latent features to be inferred. The model's utility
for modeling gene expression data is investigated using randomly generated data
sets based on a known sparse connectivity matrix for E. Coli, and on three
biological data sets of increasing complexity.
|
1011.6326
|
New Null Space Results and Recovery Thresholds for Matrix Rank
Minimization
|
math.OC cs.IT math.IT stat.ML
|
Nuclear norm minimization (NNM) has recently gained significant attention for
its use in rank minimization problems. Similar to compressed sensing, using
null space characterizations, recovery thresholds for NNM have been studied in
\cite{arxiv,Recht_Xu_Hassibi}. However simulations show that the thresholds are
far from optimal, especially in the low rank region. In this paper we apply the
recent analysis of Stojnic for compressed sensing \cite{mihailo} to the null
space conditions of NNM. The resulting thresholds are significantly better and
in particular our weak threshold appears to match with simulation results.
Further our curves suggest for any rank growing linearly with matrix size $n$
we need only three times of oversampling (the model complexity) for weak
recovery. Similar to \cite{arxiv} we analyze the conditions for weak, sectional
and strong thresholds. Additionally a separate analysis is given for special
case of positive semidefinite matrices. We conclude by discussing simulation
results and future research directions.
|
1011.6441
|
LP Decodable Permutation Codes based on Linearly Constrained Permutation
Matrices
|
cs.IT math.CO math.IT math.RT
|
A set of linearly constrained permutation matrices are proposed for
constructing a class of permutation codes. Making use of linear constraints
imposed on the permutation matrices, we can formulate a minimum Euclidian
distance decoding problem for the proposed class of permutation codes as a
linear programming (LP) problem. The main feature of this class of permutation
codes, called LP decodable permutation codes, is this LP decodability. It is
demonstrated that the LP decoding performance of the proposed class of
permutation codes is characterized by the vertices of the code polytope of the
code. Two types of linear constraints are discussed; one is structured
constraints and another is random constraints. The structured constraints such
as pure involution lead to an efficient encoding algorithm. On the other hand,
the random constraints enable us to use probabilistic methods for analyzing
several code properties such as the average cardinality and the average weight
distribution.
|
1011.6495
|
The Minimum-Rank Gram Matrix Completion via Modified Fixed Point
Continuation Method
|
math.OC cs.NA cs.SY
|
The problem of computing a representation for a real polynomial as a sum of
minimum number of squares of polynomials can be casted as finding a symmetric
positive semidefinite real matrix (Gram matrix) of minimum rank subject to
linear equality constraints.
In this paper, we propose algorithms for solving the minimum-rank Gram matrix
completion problem, and show the convergence of these algorithms. Our methods
are based on the modified fixed point continuation (FPC) method. We also use
the Barzilai-Borwein (BB) technique and a specific linear combination of two
previous iterates to accelerate the convergence of modified FPC algorithms. We
demonstrate the effectiveness of our algorithms for computing approximate and
exact rational sum of squares (SOS) decompositions of polynomials with rational
coefficients.
|
1011.6639
|
Multiple Access Channels with States Causally Known at Transmitters
|
cs.IT math.IT
|
It has been recently shown by Lapidoth and Steinberg that strictly causal
state information can be beneficial in multiple access channels (MACs).
Specifically, it was proved that the capacity region of a two-user MAC with
independent states, each known strictly causally to one encoder, can be
enlarged by letting the encoders send compressed past state information to the
decoder. In this work, a generalization of the said strategy is proposed
whereby the encoders compress also the past transmitted codewords along with
the past state sequences. The proposed scheme uses a combination of
long-message encoding, compression of the past state sequences and codewords
without binning, and joint decoding over all transmission blocks. The proposed
strategy has been recently shown by Lapidoth and Steinberg to strictly improve
upon the original one. Capacity results are then derived for a class of
channels that include two-user modulo-additive state-dependent MACs. Moreover,
the proposed scheme is extended to state-dependent MACs with an arbitrary
number of users. Finally, output feedback is introduced and an example is
provided to illustrate the interplay between feedback and availability of
strictly causal state information in enlarging the capacity region.
|
1011.6644
|
Interference Alignment via Improved Subspace Conditioning
|
cs.IT math.IT
|
For the K user, single input single output (SISO), frequency selective
interference channel, a new low complexity transmit beamforming design that
improves the achievable sum rate is presented. Jointly employing the
interference alignment (IA) scheme presented by Cadambe and Jafar in [1] and
linear minimum mean square error (MMSE) decoding at the transmitters and
receivers, respectively, the new IA precoding design improves the average sum
rate while preserving the achievable degrees of freedom of the Cadambe and
Jafar scheme, K/2.
|
1011.6656
|
Learning sparse representations of depth
|
cs.CV
|
This paper introduces a new method for learning and inferring sparse
representations of depth (disparity) maps. The proposed algorithm relaxes the
usual assumption of the stationary noise model in sparse coding. This enables
learning from data corrupted with spatially varying noise or uncertainty,
typically obtained by laser range scanners or structured light depth cameras.
Sparse representations are learned from the Middlebury database disparity maps
and then exploited in a two-layer graphical model for inferring depth from
stereo, by including a sparsity prior on the learned features. Since they
capture higher-order dependencies in the depth structure, these priors can
complement smoothness priors commonly used in depth inference based on Markov
Random Field (MRF) models. Inference on the proposed graph is achieved using an
alternating iterative optimization technique, where the first layer is solved
using an existing MRF-based stereo matching algorithm, then held fixed as the
second layer is solved using the proposed non-stationary sparse coding
algorithm. This leads to a general method for improving solutions of state of
the art MRF-based depth estimation algorithms. Our experimental results first
show that depth inference using learned representations leads to state of the
art denoising of depth maps obtained from laser range scanners and a time of
flight camera. Furthermore, we show that adding sparse priors improves the
results of two depth estimation methods: the classical graph cut algorithm by
Boykov et al. and the more recent algorithm of Woodford et al.
|
1011.6664
|
Learning restricted Bayesian network structures
|
math.OC cs.DS cs.IT math.IT
|
Bayesian networks are basic graphical models, used widely both in statistics
and artificial intelligence. These statistical models of conditional
independence structure are described by acyclic directed graphs whose nodes
correspond to (random) variables in consideration. A quite important topic is
the learning of Bayesian network structures, which is determining the best
fitting statistical model on the basis of given data. Although there are
learning methods based on statistical conditional independence tests,
contemporary methods are mainly based on maximization of a suitable quality
criterion that evaluates how good the graph explains the occurrence of the
observed data. This leads to a nonlinear combinatorial optimization problem
that is in general NP-hard to solve. In this paper we deal with the complexity
of learning restricted Bayesian network structures, that is, we wish to find
network structures of highest score within a given subset of all possible
network structures. For this, we introduce a new unique algebraic
representative for these structures, called the characteristic imset. We show
that these imsets are always 0-1-vectors and that they have many nice
properties that allow us to simplify long proofs for some known results and to
easily establish new complexity results for learning restricted Bayes network
structures.
|
1012.0009
|
Time-Varying Graphs and Dynamic Networks
|
cs.DC cs.NI cs.SI physics.soc-ph
|
The past few years have seen intensive research efforts carried out in some
apparently unrelated areas of dynamic systems -- delay-tolerant networks,
opportunistic-mobility networks, social networks -- obtaining closely related
insights. Indeed, the concepts discovered in these investigations can be viewed
as parts of the same conceptual universe; and the formal models proposed so far
to express some specific concepts are components of a larger formal description
of this universe. The main contribution of this paper is to integrate the vast
collection of concepts, formalisms, and results found in the literature into a
unified framework, which we call TVG (for time-varying graphs). Using this
framework, it is possible to express directly in the same formalism not only
the concepts common to all those different areas, but also those specific to
each. Based on this definitional work, employing both existing results and
original observations, we present a hierarchical classification of TVGs; each
class corresponds to a significant property examined in the distributed
computing literature. We then examine how TVGs can be used to study the
evolution of network properties, and propose different techniques, depending on
whether the indicators for these properties are a-temporal (as in the majority
of existing studies) or temporal. Finally, we briefly discuss the introduction
of randomness in TVGs.
|
1012.0011
|
Secure Wireless Communication and Optimal Power Control under
Statistical Queueing Constraints
|
cs.IT math.IT
|
In this paper, secure transmission of information over fading broadcast
channels is studied in the presence of statistical queueing constraints.
Effective capacity is employed as a performance metric to identify the secure
throughput of the system, i.e., effective secure throughput. It is assumed that
perfect channel side information (CSI) is available at both the transmitter and
the receivers. Initially, the scenario in which the transmitter sends common
messages to two receivers and confidential messages to one receiver is
considered. For this case, effective secure throughput region, which is the
region of constant arrival rates of common and confidential messages that can
be supported by the buffer-constrained transmitter and fading broadcast
channel, is defined. It is proven that this effective throughput region is
convex. Then, the optimal power control policies that achieve the boundary
points of the effective secure throughput region are investigated and an
algorithm for the numerical computation of the optimal power adaptation schemes
is provided. Subsequently, the special case in which the transmitter sends only
confidential messages to one receiver, is addressed in more detail. For this
case, effective secure throughput is formulated and two different power
adaptation policies are studied. In particular, it is noted that opportunistic
transmission is no longer optimal under buffer constraints and the transmitter
should not wait to send the data at a high rate until the main channel is much
better than the eavesdropper channel.
|
1012.0018
|
n-Channel Asymmetric Entropy-Constrained Multiple-Description Lattice
Vector Quantization
|
cs.IT math.IT
|
This paper is about the design and analysis of an index-assignment (IA) based
multiple-description coding scheme for the n-channel asymmetric case. We use
entropy constrained lattice vector quantization and restrict attention to
simple reconstruction functions, which are given by the inverse IA function
when all descriptions are received or otherwise by a weighted average of the
received descriptions. We consider smooth sources with finite differential
entropy rate and MSE fidelity criterion. As in previous designs, our
construction is based on nested lattices which are combined through a single IA
function. The results are exact under high-resolution conditions and
asymptotically as the nesting ratios of the lattices approach infinity. For any
n, the design is asymptotically optimal within the class of IA-based schemes.
Moreover, in the case of two descriptions and finite lattice vector dimensions
greater than one, the performance is strictly better than that of existing
designs. In the case of three descriptions, we show that in the limit of large
lattice vector dimensions, points on the inner bound of Pradhan et al. can be
achieved. Furthermore, for three descriptions and finite lattice vector
dimensions, we show that the IA-based approach yields, in the symmetric case, a
smaller rate loss than the recently proposed source-splitting approach.
|
1012.0065
|
Counting in Graph Covers: A Combinatorial Characterization of the Bethe
Entropy Function
|
cs.IT cond-mat.stat-mech cs.AI math.CO math.IT
|
We present a combinatorial characterization of the Bethe entropy function of
a factor graph, such a characterization being in contrast to the original,
analytical, definition of this function. We achieve this combinatorial
characterization by counting valid configurations in finite graph covers of the
factor graph. Analogously, we give a combinatorial characterization of the
Bethe partition function, whose original definition was also of an analytical
nature. As we point out, our approach has similarities to the replica method,
but also stark differences. The above findings are a natural backdrop for
introducing a decoder for graph-based codes that we will call symbolwise
graph-cover decoding, a decoder that extends our earlier work on blockwise
graph-cover decoding. Both graph-cover decoders are theoretical tools that help
towards a better understanding of message-passing iterative decoding, namely
blockwise graph-cover decoding links max-product (min-sum) algorithm decoding
with linear programming decoding, and symbolwise graph-cover decoding links
sum-product algorithm decoding with Bethe free energy function minimization at
temperature one. In contrast to the Gibbs entropy function, which is a concave
function, the Bethe entropy function is in general not concave everywhere. In
particular, we show that every code picked from an ensemble of regular
low-density parity-check codes with minimum Hamming distance growing (with high
probability) linearly with the block length has a Bethe entropy function that
is convex in certain regions of its domain.
|
1012.0081
|
Molecular communication in fluid media: The additive inverse Gaussian
noise channel
|
cs.IT math.IT
|
We consider molecular communication, with information conveyed in the time of
release of molecules. The main contribution of this paper is the development of
a theoretical foundation for such a communication system. Specifically, we
develop the additive inverse Gaussian (IG) noise channel model: a channel in
which the information is corrupted by noise with an inverse Gaussian
distribution. We show that such a channel model is appropriate for molecular
communication in fluid media - when propagation between transmitter and
receiver is governed by Brownian motion and when there is positive drift from
transmitter to receiver. Taking advantage of the available literature on the IG
distribution, upper and lower bounds on channel capacity are developed, and a
maximum likelihood receiver is derived. Theory and simulation results are
presented which show that such a channel does not have a single quality measure
analogous to signal-to-noise ratio in the AWGN channel. It is also shown that
the use of multiple molecules leads to reduced error rate in a manner akin to
diversity order in wireless communications. Finally, we discuss some open
problems in molecular communications that arise from the IG system model.
|
1012.0084
|
Survey on Various Gesture Recognition Techniques for Interfacing
Machines Based on Ambient Intelligence
|
cs.AI cs.CV cs.HC cs.RO
|
Gesture recognition is mainly apprehensive on analyzing the functionality of
human wits. The main goal of gesture recognition is to create a system which
can recognize specific human gestures and use them to convey information or for
device control. Hand gestures provide a separate complementary modality to
speech for expressing ones ideas. Information associated with hand gestures in
a conversation is degree,discourse structure, spatial and temporal structure.
The approaches present can be mainly divided into Data-Glove Based and Vision
Based approaches. An important face feature point is the nose tip. Since nose
is the highest protruding point from the face. Besides that, it is not affected
by facial expressions.Another important function of the nose is that it is able
to indicate the head pose. Knowledge of the nose location will enable us to
align an unknown 3D face with those in a face database. Eye detection is
divided into eye position detection and eye contour detection. Existing works
in eye detection can be classified into two major categories: traditional
image-based passive approaches and the active IR based approaches. The former
uses intensity and shape of eyes for detection and the latter works on the
assumption that eyes have a reflection under near IR illumination and produce
bright/dark pupil effect. The traditional methods can be broadly classified
into three categories: template based methods,appearance based methods and
feature based methods. The purpose of this paper is to compare various human
Gesture recognition systems for interfacing machines directly to human wits
without any corporeal media in an ambient environment.
|
1012.0112
|
Multiple-access Network Information-flow and Correction Codes
|
cs.IT math.IT
|
This work considers the multiple-access multicast error-correction scenario
over a packetized network with $z$ malicious edge adversaries. The network has
min-cut $m$ and packets of length $\ell$, and each sink demands all information
from the set of sources $\sources$. The capacity region is characterized for
both a "side-channel" model (where sources and sinks share some random bits
that are secret from the adversary) and an "omniscient" adversarial model
(where no limitations on the adversary's knowledge are assumed). In the
"side-channel" adversarial model, the use of a secret channel allows higher
rates to be achieved compared to the "omniscient" adversarial model, and a
polynomial-complexity capacity-achieving code is provided. For the "omniscient"
adversarial model, two capacity-achieving constructions are given: the first is
based on random subspace code design and has complexity exponential in $\ell
m$, while the second uses a novel multiple-field-extension technique and has
$O(\ell m^{|\sources|})$ complexity, which is polynomial in the network size.
Our code constructions are "end-to-end" in that all nodes except the sources
and sinks are oblivious to the adversaries and may simply implement predesigned
linear network codes (random or otherwise). Also, the sources act independently
without knowledge of the data from other sources.
|
1012.0142
|
Universal patterns in sound amplitudes of songs and music genres
|
physics.data-an cs.IR cs.SD
|
We report a statistical analysis over more than eight thousand songs.
Specifically, we investigate the probability distribution of the normalized
sound amplitudes. Our findings seems to suggest a universal form of
distribution which presents a good agreement with a one-parameter stretched
Gaussian. We also argue that this parameter can give information on music
complexity, and consequently it goes towards classifying songs as well as music
genres. Additionally, we present statistical evidences that correlation aspects
of the songs are directly related with the non-Gaussian nature of their sound
amplitude distributions.
|
1012.0178
|
From Social Data Mining to Forecasting Socio-Economic Crisis
|
cs.CY cs.DB cs.DC
|
Socio-economic data mining has a great potential in terms of gaining a better
understanding of problems that our economy and society are facing, such as
financial instability, shortages of resources, or conflicts. Without
large-scale data mining, progress in these areas seems hard or impossible.
Therefore, a suitable, distributed data mining infrastructure and research
centers should be built in Europe. It also appears appropriate to build a
network of Crisis Observatories. They can be imagined as laboratories devoted
to the gathering and processing of enormous volumes of data on both natural
systems such as the Earth and its ecosystem, as well as on human
techno-socio-economic systems, so as to gain early warnings of impending
events. Reality mining provides the chance to adapt more quickly and more
accurately to changing situations. Further opportunities arise by individually
customized services, which however should be provided in a privacy-respecting
way. This requires the development of novel ICT (such as a self- organizing
Web), but most likely new legal regulations and suitable institutions as well.
As long as such regulations are lacking on a world-wide scale, it is in the
public interest that scientists explore what can be done with the huge data
available. Big data do have the potential to change or even threaten democratic
societies. The same applies to sudden and large-scale failures of ICT systems.
Therefore, dealing with data must be done with a large degree of responsibility
and care. Self-interests of individuals, companies or institutions have limits,
where the public interest is affected, and public interest is not a sufficient
justification to violate human rights of individuals. Privacy is a high good,
as confidentiality is, and damaging it would have serious side effects for
society.
|
1012.0196
|
Coarse Graining for Synchronization in Directed Networks
|
physics.soc-ph cond-mat.dis-nn cs.SI
|
Coarse graining model is a promising way to analyze and visualize large-scale
networks. The coarse-grained networks are required to preserve the same
statistical properties as well as the dynamic behaviors as the initial
networks. Some methods have been proposed and found effective in undirected
networks, while the study on coarse graining in directed networks lacks of
consideration. In this paper, we proposed a Topology-aware Coarse Graining
(TCG) method to coarse grain the directed networks. Performing the linear
stability analysis of synchronization and numerical simulation of the Kuramoto
model on four kinds of directed networks, including tree-like networks and
variants of Barab\'{a}si-Albert networks, Watts-Strogatz networks and
Erd\"{o}s-R\'{e}nyi networks, we find our method can effectively preserve the
network synchronizability.
|
1012.0197
|
Low-Rank Matrix Approximation with Weights or Missing Data is NP-hard
|
math.OC cs.SY math.NA
|
Weighted low-rank approximation (WLRA), a dimensionality reduction technique
for data analysis, has been successfully used in several applications, such as
in collaborative filtering to design recommender systems or in computer vision
to recover structure from motion. In this paper, we study the computational
complexity of WLRA and prove that it is NP-hard to find an approximate
solution, even when a rank-one approximation is sought. Our proofs are based on
a reduction from the maximum-edge biclique problem, and apply to strictly
positive weights as well as binary weights (the latter corresponding to
low-rank matrix approximation with missing data).
|
1012.0201
|
Generation of degree-correlated networks using copulas
|
physics.data-an cs.SI math-ph math.MP physics.soc-ph
|
Dynamical processes on complex networks such as information propagation,
innovation diffusion, cascading failures or epidemic spreading are highly
affected by their underlying topologies as characterized by, for instance,
degree-degree correlations. Here, we introduce the concept of copulas in order
to artificially generate random networks with an arbitrary degree distribution
and a rich a priori degree-degree correlation (or `association') structure. The
accuracy of the proposed formalism and corresponding algorithm is numerically
confirmed. The derived network ensembles can be systematically deployed as
proper null models, in order to unfold the complex interplay between the
topology of real networks and the dynamics on top of them.
|
1012.0203
|
Enhancing synchronization by directionality in complex networks
|
cond-mat.dis-nn cond-mat.stat-mech cs.SI physics.soc-ph
|
We proposed a method called residual edge-betweenness gradient (REBG) to
enhance synchronizability of networks by assignment of link direction while
keeping network topology and link weight unchanged. Direction assignment has
been shown to improve the synchronizability of undirected networks in general,
but we find that in some cases incommunicable components emerge and networks
fail to synchronize. We show that the REBG method can effectively avoid the
synchronization failure ($R=\lambda_{2}^{r}/\lambda_{N}^{r}=0$) which occurs in
the residual degree gradient (RDG) method proposed in Phys. Rev. Lett. 103,
228702 (2009). Further experiments show that REBG method enhance
synchronizability in networks with community structure as compared with the RDG
method.
|
1012.0206
|
Catastrophic Cascade of Failures in Interdependent Networks
|
physics.data-an cond-mat.stat-mech cs.SI physics.comp-ph physics.soc-ph
|
Modern network-like systems are usually coupled in such a way that failures
in one network can affect the entire system. In infrastructures, biology,
sociology, and economy, systems are interconnected and events taking place in
one system can propagate to any other coupled system. Recent studies on such
coupled systems show that the coupling increases their vulnerability to random
failure. Properties for interdependent networks differ significantly from those
of single-network systems. In this article, these results are reviewed and the
main properties discussed.
|
1012.0223
|
An Effective Method of Image Retrieval using Image Mining Techniques
|
cs.CV cs.MM
|
The present research scholars are having keen interest in doing their
research activities in the area of Data mining all over the world. Especially,
[13]Mining Image data is the one of the essential features in this present
scenario since image data plays vital role in every aspect of the system such
as business for marketing, hospital for surgery, engineering for construction,
Web for publication and so on. The other area in the Image mining system is the
Content-Based Image Retrieval (CBIR) which performs retrieval based on the
similarity defined in terms of extracted features with more objectiveness. The
drawback in CBIR is the features of the query image alone are considered.
Hence, a new technique called Image retrieval based on optimum clusters is
proposed for improving user interaction with image retrieval systems by fully
exploiting the similarity information. The index is created by describing the
images according to their color characteristics, with compact feature vectors,
that represent typical color distributions [12].
|
1012.0260
|
Modeling and Analysis of Time-Varying Graphs
|
cs.NI cs.DM cs.SI physics.soc-ph
|
We live in a world increasingly dominated by networks -- communications,
social, information, biological etc. A central attribute of many of these
networks is that they are dynamic, that is, they exhibit structural changes
over time. While the practice of dynamic networks has proliferated, we lag
behind in the fundamental, mathematical understanding of network dynamism.
Existing research on time-varying graphs ranges from preliminary algorithmic
studies (e.g., Ferreira's work on evolving graphs) to analysis of specific
properties such as flooding time in dynamic random graphs. A popular model for
studying dynamic graphs is a sequence of graphs arranged by increasing
snapshots of time. In this paper, we study the fundamental property of
reachability in a time-varying graph over time and characterize the latency
with respect to two metrics, namely store-or-advance latency and cut-through
latency. Instead of expected value analysis, we concentrate on characterizing
the exact probability distribution of routing latency along a randomly
intermittent path in two popular dynamic random graph models. Using this
analysis, we characterize the loss of accuracy (in a probabilistic setting)
between multiple temporal graph models, ranging from one that preserves all the
temporal ordering information for the purpose of computing temporal graph
properties to one that collapses various snapshots into one graph (an operation
called smashing), with multiple intermediate variants. We also show how some
other traditional graph theoretic properties can be extended to the temporal
domain. Finally, we propose algorithms for controlling the progress of a packet
in single-copy adaptive routing schemes in various dynamic random graphs.
|
1012.0322
|
A Bayesian Methodology for Estimating Uncertainty of Decisions in
Safety-Critical Systems
|
cs.AI
|
Uncertainty of decisions in safety-critical engineering applications can be
estimated on the basis of the Bayesian Markov Chain Monte Carlo (MCMC)
technique of averaging over decision models. The use of decision tree (DT)
models assists experts to interpret causal relations and find factors of the
uncertainty. Bayesian averaging also allows experts to estimate the uncertainty
accurately when a priori information on the favored structure of DTs is
available. Then an expert can select a single DT model, typically the Maximum a
Posteriori model, for interpretation purposes. Unfortunately, a priori
information on favored structure of DTs is not always available. For this
reason, we suggest a new prior on DTs for the Bayesian MCMC technique. We also
suggest a new procedure of selecting a single DT and describe an application
scenario. In our experiments on the Short-Term Conflict Alert data our
technique outperforms the existing Bayesian techniques in predictive accuracy
of the selected single DTs.
|
1012.0335
|
Faster Query Answering in Probabilistic Databases using Read-Once
Functions
|
cs.DB
|
A boolean expression is in read-once form if each of its variables appears
exactly once. When the variables denote independent events in a probability
space, the probability of the event denoted by the whole expression in
read-once form can be computed in polynomial time (whereas the general problem
for arbitrary expressions is #P-complete). Known approaches to checking
read-once property seem to require putting these expressions in disjunctive
normal form. In this paper, we tell a better story for a large subclass of
boolean event expressions: those that are generated by conjunctive queries
without self-joins and on tuple-independent probabilistic databases. We first
show that given a tuple-independent representation and the provenance graph of
an SPJ query plan without self-joins, we can, without using the DNF of a result
event expression, efficiently compute its co-occurrence graph. From this, the
read-once form can already, if it exists, be computed efficiently using
existing techniques. Our second and key contribution is a complete, efficient,
and simple to implement algorithm for computing the read-once forms (whenever
they exist) directly, using a new concept, that of co-table graph, which can be
significantly smaller than the co-occurrence graph.
|
1012.0356
|
The Past and the Future in the Present
|
nlin.CD cs.IT math.DS math.IT math.ST stat.TH
|
We show how the shared information between the past and future---the excess
entropy---derives from the components of directional information stored in the
present---the predictive and retrodictive causal states. A detailed proof
allows us to highlight a number of the subtle problems in estimation and
analysis that impede accurate calculation of the excess entropy.
|
1012.0365
|
A Block Lanczos with Warm Start Technique for Accelerating Nuclear Norm
Minimization Algorithms
|
cs.NA cs.AI math.OC
|
Recent years have witnessed the popularity of using rank minimization as a
regularizer for various signal processing and machine learning problems. As
rank minimization problems are often converted to nuclear norm minimization
(NNM) problems, they have to be solved iteratively and each iteration requires
computing a singular value decomposition (SVD). Therefore, their solution
suffers from the high computation cost of multiple SVDs. To relieve this issue,
we propose using the block Lanczos method to compute the partial SVDs, where
the principal singular subspaces obtained in the previous iteration are used to
start the block Lanczos procedure. To avoid the expensive reorthogonalization
in the Lanczos procedure, the block Lanczos procedure is performed for only a
few steps. Our block Lanczos with warm start (BLWS) technique can be adopted by
different algorithms that solve NNM problems. We present numerical results on
applying BLWS to Robust PCA and Matrix Completion problems. Experimental
results show that our BLWS technique usually accelerates its host algorithm by
at least two to three times.
|
1012.0366
|
Optimal measures and Markov transition kernels
|
math.OC cs.CC cs.IT math-ph math.FA math.IT math.MP stat.ML
|
We study optimal solutions to an abstract optimization problem for measures,
which is a generalization of classical variational problems in information
theory and statistical physics. In the classical problems, information and
relative entropy are defined using the Kullback-Leibler divergence, and for
this reason optimal measures belong to a one-parameter exponential family.
Measures within such a family have the property of mutual absolute continuity.
Here we show that this property characterizes other families of optimal
positive measures if a functional representing information has a strictly
convex dual. Mutual absolute continuity of optimal probability measures allows
us to strictly separate deterministic and non-deterministic Markov transition
kernels, which play an important role in theories of decisions, estimation,
control, communication and computation. We show that deterministic transitions
are strictly sub-optimal, unless information resource with a strictly convex
dual is unconstrained. For illustration, we construct an example where, unlike
non-deterministic, any deterministic kernel either has negatively infinite
expected utility (unbounded expected error) or communicates infinite
information.
|
1012.0367
|
Universal polar coding and sparse recovery
|
cs.IT math.IT
|
This paper investigates universal polar coding schemes. In particular, a
notion of ordering (called convolutional path) is introduced between
probability distributions to determine when a polar compression (or
communication) scheme designed for one distribution can also succeed for
another one. The original polar decoding algorithm is also generalized to an
algorithm allowing to learn information about the source distribution using the
idea of checkers. These tools are used to construct a universal compression
algorithm for binary sources, operating at the lowest achievable rate
(entropy), with low complexity and with guaranteed small error probability. In
a second part of the paper, the problem of sketching high dimensional discrete
signals which are sparse is approached via the polarization technique. It is
shown that the number of measurements required for perfect recovery is
competitive with the $O(k \log (n/k))$ bound (with optimal constant for binary
signals), meanwhile affording a deterministic low complexity measurement
matrix.
|
1012.0375
|
Dynamic Resource Coordination and Interference Management for Femtocell
Networks
|
cs.IT math.IT
|
Femtocell is emerging as a key technology to secure the coverage and capacity
in indoor environments. However the deployment of a new femtocell layer may
originate undesired interference to the whole system. This paper investigates
spectrum resource coordination and interference management for the femtocell
networks. A resource coordination scheme based on broadcasting resource
coordination request messages by the femto mobile is proposed to reduce the
system interference.
|
1012.0384
|
Adaptive Sensing and Transmission Durations for Cognitive Radios
|
math.OC cs.IT math-ph math.IT math.MP
|
In a cognitive radio setting, secondary users opportunistically access the
spectrum allocated to primary users. Finding the optimal sensing and
transmission durations for the secondary users becomes crucial in order to
maximize the secondary throughput while protecting the primary users from
interference and service disruption. In this paper an adaptive sensing and
transmission scheme for cognitive radios is proposed. We consider a channel
allocated to a primary user which operates in an unslotted manner switching
activity at random times. A secondary transmitter adapts its sensing and
transmission durations according to its belief regarding the primary user state
of activity. The objective is to maximize a secondary utility function. This
function has a penalty term for collisions with primary transmission. It
accounts for the reliability-throughput tradeoff by explicitly incorporating
the impact of sensing duration on secondary throughput and primary activity
detection reliability. It also accounts for throughput reduction that results
from data overhead. Numerical simulations of the system performance demonstrate
the effectiveness of adaptive sensing and transmission scheme over non-adaptive
approach in increasing the secondary user utility.
|
1012.0392
|
Supporting Information for the Paper: Optimal Ternary
Constant-Composition Codes of Weight Four and Distance Five, IEEE Trans.
Inform. Theory, To Appear
|
cs.IT math.CO math.IT
|
Supporting Information for the Paper: Optimal Ternary Constant-Composition
Codes of Weight Four and Distance Five, IEEE Trans. Inform. Theory, To Appear.
|
1012.0412
|
Entropy power inequality for a family of discrete random variables
|
cs.IT math.IT
|
It is known that the Entropy Power Inequality (EPI) always holds if the
random variables have density. Not much work has been done to identify discrete
distributions for which the inequality holds with the differential entropy
replaced by the discrete entropy. Harremo\"{e}s and Vignat showed that it holds
for the pair (B(m,p), B(n,p)), m,n \in \mathbb{N}, (where B(n,p) is a Binomial
distribution with n trials each with success probability p) for p = 0.5. In
this paper, we considerably expand the set of Binomial distributions for which
the inequality holds and, in particular, identify n_0(p) such that for all m,n
\geq n_0(p), the EPI holds for (B(m,p), B(n,p)). We further show that the EPI
holds for the discrete random variables that can be expressed as the sum of n
independent identical distributed (IID) discrete random variables for large n.
|
1012.0416
|
Compress-and-Forward Scheme for Relay Networks: Backword Decoding and
Connection to Bisubmodular Flows
|
cs.IT math.IT
|
In this paper, a compress-and-forward scheme with backward decoding is
presented for the unicast wireless relay network. The encoding at the source
and relay is a generalization of the noisy network coding scheme (NNC). While
it achieves the same reliable data rate as noisy network coding scheme, the
backward decoding allows for a better decoding complexity as compared to the
joint decoding of the NNC scheme. Characterizing the layered decoding scheme is
shown to be equivalent to characterizing an information flow for the wireless
network. A node-flow for a graph with bisubmodular capacity constraints is
presented and a max-flow min-cut theorem is proved for it. This generalizes
many well-known results of flows over capacity constrained graphs studied in
computer science literature. The results for the unicast relay network are
generalized to the network with multiple sources with independent messages
intended for a single destination.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.