| id | title | categories | abstract |
|---|---|---|---|
1105.3347
|
Generating Scale-free Networks with Adjustable Clustering Coefficient
Via Random Walks
|
physics.soc-ph cs.SI
|
This paper presents an algorithm for generating scale-free networks with
adjustable clustering coefficient. The algorithm is based on a random walk
procedure combined with a triangle generation scheme; in this way, preferential
attachment and clustering control are implemented using only local information.
Simulations are presented which
support the validity of the scheme, characterizing its tuning capabilities.
|
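The abstract above describes growth by random walks plus triangle closure; a toy sketch under our own assumptions (the seed graph, walk length, and triangle probability `p_triangle` are illustrative choices, not the paper's parameters):

```python
import random

def grow_network(n, walk_len=3, p_triangle=0.5, seed=0):
    """Grow a graph one node at a time using only local information.

    Each new node starts a short random walk from a random existing node;
    the walk's endpoint becomes its first neighbor (walks end at high-degree
    nodes more often, mimicking preferential attachment). With probability
    p_triangle the new node also links to a neighbor of that endpoint,
    closing a triangle and raising the clustering coefficient.
    """
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}          # seed graph: a single edge
    for new in range(2, n):
        cur = rng.choice(list(adj))
        for _ in range(walk_len):   # random walk over existing edges
            cur = rng.choice(list(adj[cur]))
        adj[new] = {cur}
        adj[cur].add(new)
        if rng.random() < p_triangle:
            friend = rng.choice(list(adj[cur] - {new}))
            adj[new].add(friend)    # close a triangle new-cur-friend
            adj[friend].add(new)
    return adj
```

Longer walks bias attachment more strongly toward high-degree nodes, and raising `p_triangle` raises the clustering coefficient, which is the tuning knob the abstract refers to.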
1105.3351
|
Splitting method for spatio-temporal search efforts planning
|
cs.NE cs.SY math.OC
|
This article deals with spatio-temporal sensor deployment in order to
maximize the detection probability of an intelligent and randomly moving target in
an area under surveillance. Our work is based on the rare events simulation
framework. More precisely, we derive a novel stochastic optimization algorithm
based on the generalized splitting method. This new approach offers promising
results without any state-space discretization and can handle various types of
constraints.
|
1105.3368
|
Random Walks, Electric Networks and The Transience Class problem of
Sandpiles
|
cs.DM cond-mat.other cs.SI math-ph math.MP
|
The Abelian Sandpile Model is a discrete diffusion process defined on graphs
(Dhar \cite{DD90}, Dhar et al. \cite{DD95}) which serves as the standard model
of \textit{self-organized criticality}. The transience class of a sandpile is
defined as the maximum number of particles that can be added without making the
system recurrent (\cite{BT05}). We develop the theory of discrete diffusions in
contrast to continuous harmonic functions on graphs and establish deep
connections between standard results in the study of random walks on graphs and
sandpiles on graphs. Using this connection and building other necessary
machinery we improve the main result of Babai and Gorodezky (SODA
2007,\cite{LB07}) of the bound on the transience class of an $n \times n$ grid,
from $O(n^{30})$ to $O(n^{7})$. Proving that the transience class is small
validates the general notion that for most natural phenomena, the time during
which the system is transient is small. In addition, we use the machinery
developed to prove a number of auxiliary results. We exhibit an equivalence
between two other tessellations of the plane, the honeycomb and triangular
lattices. We give general upper bounds on the transience class as a function of
the number of edges to the sink.
Further, for planar sandpiles we derive an explicit algebraic expression
which provably approximates the transience class of $G$ to within $O(|E(G)|)$.
This expression is based on the spectrum of the Laplacian of the dual of the
graph $G$. We also show a lower bound of $\Omega(n^{3})$ on the transience
class on the grid improving the obvious bound of $\Omega(n^{2})$.
|
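For reference, the diffusion process in question is easy to state in code; a minimal sketch of the toppling rule on an n x n grid with the sink at the boundary (this is the standard model itself, not the paper's new machinery):

```python
def stabilize(grid):
    """Topple an n x n abelian sandpile (sink at the boundary) until stable.

    A cell holding 4 or more grains topples, sending one grain to each of
    its 4 neighbours; grains pushed off the edge fall into the sink. By the
    abelian property, the final configuration is independent of the order
    in which unstable cells are toppled.
    """
    n = len(grid)
    unstable = [(i, j) for i in range(n) for j in range(n) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
        if grid[i][j] >= 4:         # may need to topple again
            unstable.append((i, j))
    return grid
```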
1105.3416
|
Implementation of Physical-layer Network Coding
|
cs.NI cs.IT math.IT
|
This paper presents the first implementation of a two-way relay network based
on the principle of physical-layer network coding. To date, only a simplified
version of the physical-layer network coding (PNC) method, called analog network
coding (ANC), has been successfully implemented. The advantage of ANC is that
it is simple to implement; the disadvantage, on the other hand, is that the
relay amplifies the noise along with the signal before forwarding the signal.
PNC systems in which the relay performs XOR or other denoising PNC mappings of
the received signal have the potential for significantly better performance.
However, the implementation of such PNC systems poses many challenges. For
example, the relay must be able to deal with symbol and carrier-phase
asynchronies of the simultaneous signals received from the two end nodes, and
the relay must perform channel estimation before detecting the signals. We
investigate a PNC implementation in the frequency domain, referred to as FPNC,
to tackle these challenges. FPNC is based on OFDM. In FPNC, XOR mapping is
performed on the OFDM samples in each subcarrier rather than on the samples in
the time domain. We implement FPNC on the universal software radio peripheral
(USRP) platform. Our implementation requires only moderate modifications of the
packet preamble design of 802.11a/g OFDM PHY. With the help of the cyclic
prefix (CP) in OFDM, symbol asynchrony and the multi-path fading effects can be
dealt with in a similar fashion. Our experimental results show that
symbol-synchronous and symbol-asynchronous FPNC have essentially the same BER
performance, for both channel-coded and unchannel-coded FPNC.
|
1105.3424
|
Competing epidemics on complex networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Human diseases spread over networks of contacts between individuals and a
substantial body of recent research has focused on the dynamics of the
spreading process. Here we examine a model of two competing diseases spreading
over the same network at the same time, where infection with either disease
gives an individual subsequent immunity to both. Using a combination of
analytic and numerical methods, we derive the phase diagram of the system and
estimates of the expected final numbers of individuals infected with each
disease. The system shows an unusual dynamical transition between dominance of
one disease and dominance of the other as a function of their relative rates of
growth. Close to this transition the final outcomes show strong dependence on
stochastic fluctuations in the early stages of growth, dependence that
decreases with increasing network size, but does so sufficiently slowly as
still to be easily visible in systems with millions or billions of individuals.
In most regions of the phase diagram we find that one disease eventually
dominates while the other reaches only a vanishing fraction of the network, but
the system also displays a significant coexistence regime in which both
diseases reach epidemic proportions and infect an extensive fraction of the
network.
|
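A toy discrete-time simulation of the competition described above, with our own simplifications (one-round recovery and per-edge transmission probabilities `beta1`/`beta2`; the paper's analysis is for SIR dynamics, which this only caricatures):

```python
import random

def compete(adj, beta1, beta2, seed=0):
    """Two SIR-like diseases race over one network; the first disease to
    reach a node immunizes it against both. Returns final counts (n1, n2)."""
    rng = random.Random(seed)
    state = {v: 'S' for v in adj}           # S, '1', '2', or R (immune)
    a, b = rng.sample(list(adj), 2)         # one random seed per disease
    state[a], state[b] = '1', '2'
    active = [a, b]
    counts = {'1': 1, '2': 1}
    while active:
        nxt = []
        for v in active:
            d = state[v]
            beta = beta1 if d == '1' else beta2
            for u in adj[v]:
                if state[u] == 'S' and rng.random() < beta:
                    state[u] = d            # infection grants immunity to both
                    counts[d] += 1
                    nxt.append(u)
            state[v] = 'R'                  # recover after one round
        active = nxt
    return counts['1'], counts['2']
```

Near the dominance transition the abstract describes, outcomes of such runs depend strongly on the random early growth, so repeated simulations with different seeds are needed to see the phase behavior.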
1105.3425
|
Delays and the Capacity of Continuous-time Channels
|
cs.IT math.IT
|
Any physical channel of communication offers two potential reasons why its
capacity (the number of bits it can transmit in a unit of time) might be
unbounded: (1) Infinitely many choices of signal strength at any given instant
of time, and (2) Infinitely many instances of time at which signals may be
sent. However, channel noise cancels out the potential unboundedness of the
first aspect, leaving typical channels with only a finite capacity per instant
of time. The latter source of infinity seems less studied. A potential source
of unreliability that might restrict the capacity also from the second aspect
is delay: Signals transmitted by the sender at a given point of time may not be
received with a predictable delay at the receiving end. Here we examine this
source of uncertainty by considering a simple discrete model of delay errors.
In our model the communicating parties get to subdivide time as microscopically
finely as they wish, but still have to cope with communication delays that are
macroscopic and variable. The continuous process becomes the limit of our
process as the time subdivision becomes infinitesimal. We taxonomize this class
of communication channels based on whether the delays and noise are stochastic
or adversarial; and based on how much information each aspect has about the
other when introducing its errors. We analyze the limits of such channels and
reach somewhat surprising conclusions: The capacity of a physical channel is
finitely bounded only if at least one of the two sources of error (signal noise
or delay noise) is adversarial. In particular the capacity is finitely bounded
only if the delay is adversarial, or the noise is adversarial and acts with
knowledge of the stochastic delay. If both error sources are stochastic, or if
the noise is adversarial and independent of the stochastic delay, then the
capacity of the associated physical channel is infinite.
|
1105.3427
|
Real-Time Sequential Convex Programming for Optimal Control Applications
|
math.OC cs.SY
|
This paper proposes real-time sequential convex programming (RTSCP), a method
for solving a sequence of nonlinear optimization problems depending on an
online parameter. We provide a contraction estimate for the proposed method
and, as a byproduct, a new proof of the local convergence of sequential convex
programming. The approach is illustrated by an example where RTSCP is applied
to nonlinear model predictive control.
|
1105.3435
|
Visibility-preserving convexifications using single-vertex moves
|
cs.CG cs.RO math.CO
|
Devadoss asked: (1) can every polygon be convexified so that no internal
visibility (between vertices) is lost in the process? Moreover, (2) does such a
convexification exist, in which exactly one vertex is moved at a time (that is,
using {\em single-vertex moves})? We prove the redundancy of the "single-vertex
moves" condition: an affirmative answer to (1) implies an affirmative answer to
(2). Since Aichholzer et al. recently proved (1), this settles (2).
|
1105.3486
|
Xapagy: a cognitive architecture for narrative reasoning
|
cs.AI
|
We introduce the Xapagy cognitive architecture: a software system designed to
perform narrative reasoning. The architecture has been designed from scratch to
model and mimic the activities performed by humans when witnessing, reading,
recalling, narrating and talking about stories.
|
1105.3531
|
On the Tradeoff Between Multiuser Diversity and Training Overhead in
Multiple Access Channels
|
cs.IT math.IT
|
We consider a single antenna narrowband multiple access channel in which
users send training sequences to the base station and scheduling is performed
based on minimum mean square error (MMSE) channel estimates. In such a system,
there is an inherent tradeoff between training overhead and the amount of
multiuser diversity achieved. We analyze a block fading channel with
independent Rayleigh distributed channel gains, where the parameters to be
optimized are the number of users considered for transmission in each block and
the corresponding time and power spent on training by each user. We derive
closed-form expressions for the optimal parameters in terms of K and L, where K is
the number of users considered for transmission in each block and L is the
block length in symbols. Considering the behavior of the system as L grows
large, we optimize K with respect to an approximate expression for the
achievable rate, and obtain second order expressions for the resulting
parameters in terms of L.
|
1105.3538
|
The Exact Schema Theorem
|
cs.NE
|
A schema is a naturally defined subset of the space of fixed-length binary
strings. The Holland Schema Theorem gives a lower bound on the expected
fraction of a population in a schema after one generation of a simple genetic
algorithm. This paper gives formulas for the exact expected fraction of a
population in a schema after one generation of the simple genetic algorithm.
Holland's schema theorem has three parts, one for selection, one for crossover,
and one for mutation. The selection part is exact, whereas the crossover and
mutation parts are approximations. This paper shows how the crossover and
mutation parts can be made exact. Holland's schema theorem follows naturally as
a corollary. There is a close relationship between schemata and the
representation of the population in the Walsh basis. This relationship is used
in the derivation of the results, and can also make computation of the schema
averages more efficient. This paper gives a version of the Vose infinite
population model where crossover and mutation are separated into two functions
rather than a single "mixing" function.
|
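The selection part, which the abstract notes is already exact in Holland's theorem, is one line of algebra; a sketch (the schema syntax `'1*'` with `'*'` as wildcard is our notational choice):

```python
def matches(schema, s):
    """True if the bit-string s belongs to the schema (e.g. '1**0')."""
    return all(c == '*' or c == b for c, b in zip(schema, s))

def schema_fraction_after_selection(pop, fitness, schema):
    """Exact expected fraction of the next population lying in `schema`
    under fitness-proportional selection (the exact part of Holland's
    theorem): each string is sampled with probability f(s) / sum_f."""
    total = sum(fitness(s) for s in pop)
    in_schema = sum(fitness(s) for s in pop if matches(schema, s))
    return in_schema / total
```

The crossover and mutation parts, which the paper makes exact via the Walsh basis, would multiply this by schema-disruption terms and are not sketched here.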
1105.3559
|
Invariant Representative Cocycles of Cohomology Generators using
Irregular Graph Pyramids
|
cs.CV
|
Structural pattern recognition describes and classifies data based on the
relationships of features and parts. Topological invariants, like the Euler
number, characterize the structure of objects of any dimension. Cohomology can
provide more refined algebraic invariants to a topological space than does
homology. It assigns `quantities' to the chains used in homology to
characterize holes of any dimension. Graph pyramids can be used to describe
subdivisions of the same object at multiple levels of detail. This paper
presents cohomology in the context of structural pattern recognition and
introduces an algorithm to efficiently compute representative cocycles (the
basic elements of cohomology) in 2D using a graph pyramid. An extension to
obtain scanning and rotation invariant cocycles is given.
|
1105.3569
|
Diversity-multiplexing Gain Tradeoff: a Tool in Algebra?
|
cs.IT math.IT
|
Since the invention of space-time coding numerous algebraic methods have been
applied in code design.
In particular, algebraic number theory and central simple algebras have been
at the forefront of this research.
In this paper we turn the tables and ask whether information theory
can be used as a tool in algebra. We will first derive some corollaries from
diversity-multiplexing gain (DMT) bounds by Zheng and Tse and later show how
these results can be used to analyze the unit group of orders of certain
division algebras. The authors do not claim that the algebraic results are new,
but we do find that this interesting relation between algebra and information
theory is quite surprising and worth pointing out.
|
1105.3574
|
Robustness and Assortativity for Diffusion-like Processes in Scale-free
Networks
|
physics.soc-ph cond-mat.dis-nn cond-mat.stat-mech cs.SI
|
By analysing the diffusive dynamics of epidemics and of distress in complex
networks, we study the effect of the assortativity on the robustness of the
networks. We first determine by spectral analysis the thresholds above which
epidemics/failures can spread; we then calculate the slowest diffusional times.
Our results show that disassortative networks exhibit a higher epidemiological
threshold and are therefore easier to immunize, while assortative networks
allow a longer time for intervention before an epidemic/failure spreads.
Moreover, we study by computer simulations the sandpile cascade model, a
diffusive model of distress propagation (financial contagion). We show that,
while assortative networks are more prone to the propagation of
epidemics/failures, degree-targeted immunization policies increase their
resilience to systemic risk.
|
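The spectral threshold mentioned above is, in the standard SIS analysis, the reciprocal of the adjacency matrix's largest eigenvalue; a minimal sketch (this is the textbook criterion, not the paper's full derivation):

```python
import numpy as np

def epidemic_threshold(A):
    """Spectral epidemic threshold of a network: in the standard SIS
    analysis, an infection with effective spreading rate below
    1 / lambda_max(A) dies out, where lambda_max is the largest
    eigenvalue of the (symmetric) adjacency matrix A."""
    lam = np.linalg.eigvalsh(np.asarray(A, dtype=float)).max()
    return 1.0 / lam
```

Assortativity shifts `lambda_max` and hence this threshold, which is the mechanism behind the comparison in the abstract.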
1105.3617
|
Face Shape and Reflectance Acquisition using a Multispectral Light Stage
|
cs.CV cs.GR
|
In this thesis, we discuss the design and calibration (geometric and
radiometric) of a novel shape and reflectance acquisition device called the
"Multispectral Light Stage". This device can capture highly detailed facial
geometry (down to the level of skin-pore detail) and a multispectral reflectance
map which can be used to estimate biophysical skin parameters such as the
distribution of pigmentation and blood beneath the surface of the skin. We
extend the analysis of the original spherical gradient photometric stereo
method to study the effects of deformed diffuse lobes on the quality of
recovered surface normals. Based on our modified radiance equations, we develop
a minimal image set method to recover high quality photometric normals using
only four, instead of six, spherical gradient images. Using the same radiance
equations, we explore a Quadratic Programming (QP) based algorithm for
correction of surface normals obtained using spherical gradient photometric
stereo. Based on the proposed minimal image sets method, we present a
performance capture sequence that significantly reduces the data capture
requirement and post-processing computational cost of existing photometric
stereo based performance geometry capture methods. Furthermore, we explore the
use of images captured in our Light Stage to generate stimuli images for a
psychology experiment exploring the neural representation of 3D shape and
texture of a human face.
|
1105.3635
|
Probabilistic Inference from Arbitrary Uncertainty using Mixtures of
Factorized Generalized Gaussians
|
cs.AI
|
This paper presents a general and efficient framework for probabilistic
inference and learning from arbitrary uncertain information. It exploits the
calculation properties of finite mixture models, conjugate families and
factorization. Both the joint probability density of the variables and the
likelihood function of the (objective or subjective) observation are
approximated by a special mixture model, in such a way that any desired
conditional distribution can be directly obtained without numerical
integration. We have developed an extended version of the expectation
maximization (EM) algorithm to estimate the parameters of mixture models from
uncertain training examples (indirect observations). As a consequence, any
piece of exact or uncertain information about both input and output values is
consistently handled in the inference and learning stages. This ability,
extremely useful in certain situations, is not found in most alternative
methods. The proposed framework is formally justified from standard
probabilistic principles and illustrative examples are provided in the fields
of nonparametric pattern classification, nonlinear regression and pattern
completion. Finally, experiments on a real application and comparative results
over standard databases provide empirical evidence of the utility of the method
in a wide range of applications.
|
1105.3682
|
Where are my followers? Understanding the Locality Effect in Twitter
|
cs.SI physics.soc-ph
|
Twitter is one of the most used applications in the current Internet with
more than 200M accounts created so far. As with other large-scale systems,
Twitter can benefit by exploiting the Locality effect existing among its users.
In this paper we perform the first comprehensive study of the Locality effect
of Twitter. For this purpose we have collected the geographical location of
around 1M Twitter users and 16M of their followers. Our results demonstrate
that language and cultural characteristics determine the level of Locality
expected for different countries. Countries with a language other than English,
such as Brazil, typically show a high intra-country Locality, whereas those
where English is an official or co-official language suffer from an external
Locality effect. That is, their users have a larger number of followers in the
US than within their own country. This is produced by two
reasons: first, the US is the dominant country on Twitter, accounting for
around half of the users, and second, these countries share a common language
and cultural characteristics with the US.
|
1105.3685
|
Benchmarks, Performance Evaluation and Contests for 3D Shape Retrieval
|
cs.CV cs.CG
|
Benchmarking of 3D Shape retrieval allows developers and researchers to
compare the strengths of different algorithms on a standard dataset. Here we
describe the procedures involved in developing a benchmark and issues involved.
We then discuss some of the current 3D shape retrieval benchmark efforts of
our group and others. We also review the different performance evaluation
measures that are developed and used by researchers in the community. After
that we give an overview of the 3D shape retrieval contest (SHREC) tracks run
under the EuroGraphics Workshop on 3D Object Retrieval and give details of
tracks that we organized for SHREC 2010. Finally we demonstrate some of the
results based on the different SHREC contest tracks and the NIST shape
benchmark.
|
1105.3686
|
Broadcast Channels with Delayed Finite-Rate Feedback: Predict or
Observe?
|
cs.IT math.IT
|
Most multiuser precoding techniques require accurate transmitter channel
state information (CSIT) to maintain orthogonality between the users. Such
techniques have proven quite fragile in time-varying channels because the CSIT
is inherently imperfect due to estimation and feedback delay, as well as
quantization noise. An alternative approach recently proposed by Maddah-Ali and
Tse (MAT) allows for significant multiplexing gain in the multi-input
single-output (MISO) broadcast channel (BC) even with transmit CSIT that is
completely stale, i.e. uncorrelated with the current channel state. With $K$
users, their scheme claims to lose only a $\log(K)$ factor relative to the full
$K$ degrees of freedom (DoF) attainable in the MISO BC with perfect CSIT for
large $K$. However, their result does not consider the cost of the feedback,
which is potentially very large in high mobility (short channel coherence
time). In this paper, we more closely examine the MAT scheme and compare its
DoF gain to single user transmission (which always achieves 1 DoF) and partial
CSIT linear precoding (which achieves up to $K$). In particular, assuming the
channel coherence time is $N$ symbol periods and the feedback delay is $N_{\rm
fd}$ we show that when $N < (1+o(1)) K \log K$ (short coherence time), single
user transmission performs best, whereas for $N> (1+o(1)) (N_{\rm fd}+ K / \log
K)(1-\log^{-1}K)^{-1}$ (long coherence time), zero-forcing precoding
outperforms the other two. The MAT scheme is optimal for intermediate coherence
times, which for practical parameter choices is indeed quite a large and
significant range, even accounting for the feedback cost.
|
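For context, the MAT scheme's sum degrees of freedom for K users is K/(1 + 1/2 + ... + 1/K), which is where the log(K) loss in the abstract comes from; a sketch:

```python
def mat_dof(K):
    """Sum degrees of freedom of the Maddah-Ali/Tse (MAT) scheme with
    K users: K divided by the K-th harmonic number, which grows like
    K / log K for large K."""
    return K / sum(1.0 / k for k in range(1, K + 1))
```

Comparing `mat_dof(K)` against 1 (single-user transmission) and the partial-CSIT precoding gain, as a function of coherence time, reproduces the three regimes the abstract describes.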
1105.3726
|
Controlling Complex Networks with Compensatory Perturbations
|
q-bio.MN cond-mat.dis-nn cs.SI nlin.CD physics.soc-ph
|
The response of complex networks to perturbations is of utmost importance in
areas as diverse as ecosystem management, emergency response, and cell
reprogramming. A fundamental property of networks is that the perturbation of
one node can affect other nodes, a process that may cause the entire system,
or a substantial part of it, to change behavior and possibly collapse. Recent
research in metabolic and food-web networks has demonstrated the concept that
network damage caused by external perturbations can often be mitigated or
reversed by the application of compensatory perturbations. Compensatory
perturbations are constrained to be physically admissible and amenable to
implementation on the network. However, the systematic identification of
compensatory perturbations that conform to these constraints remains an open
problem. Here, we present a method to construct compensatory perturbations that
can control the fate of general networks under such constraints. Our approach
accounts for the full nonlinear behavior of real complex networks and can bring
the system to a desirable target state even when this state is not directly
accessible. Applications to genetic networks show that compensatory
perturbations are effective even when limited to a small fraction of all nodes
in the network and that they are far more effective when limited to the
highest-degree nodes. The approach is conceptually simple and computationally
efficient, making it suitable for the rescue, control, and reprogramming of
large complex networks in various domains.
|
1105.3788
|
A Control-Oriented Notion of Finite State Approximation
|
math.OC cs.SY
|
We consider the problem of approximating discrete-time plants with
finite-valued sensors and actuators by deterministic finite memory systems
for the purpose of certified-by-design controller synthesis. Building on ideas
from robust control, we propose a control-oriented notion of finite state
approximation for these systems, demonstrate its relevance to the control
synthesis problem, and discuss its key features.
|
1105.3793
|
A lower bound on the average entropy of a function determined up to a
diagonal linear map on F_q^n
|
math.CO cs.IT math.IT
|
In this note, it is shown that if $f\colon\efq^n\to\efq^n$ is any function
and $\bA=(A_1,..., A_n)$ is uniformly distributed over $\efq^n$, then the
average over $(k_1,...,k_n)\in \efq^n$ of the Renyi (and hence, of the Shannon)
entropy of $f(\bA)+(k_1A_1,...,k_nA_n)$ is at least about $\log_2(q^n)-n$. In
fact, it is shown that the average collision probability of
$f(\bA)+(k_1A_1,...,k_nA_n)$ is at most about $2^n/q^n$.
|
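The two quantities related in the note, collision probability and order-2 (Renyi) entropy, can be illustrated on an empirical distribution (this computes the textbook definitions, not the note's bound):

```python
from collections import Counter
from math import log2

def renyi2_entropy(samples):
    """Order-2 Renyi entropy: -log2 of the collision probability
    sum_x p(x)^2, computed here from the empirical distribution of
    the given samples."""
    counts = Counter(samples)
    n = len(samples)
    collision = sum((c / n) ** 2 for c in counts.values())
    return -log2(collision)
```

A collision probability of at most about 2^n / q^n, as in the note's conclusion, corresponds to an order-2 entropy of at least about log2(q^n) - n.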
1105.3821
|
Ontological Crises in Artificial Agents' Value Systems
|
cs.AI
|
Decision-theoretic agents predict and evaluate the results of their actions
using a model, or ontology, of their environment. An agent's goal, or utility
function, may also be specified in terms of the states of, or entities within,
its ontology. If the agent may upgrade or replace its ontology, it faces a
crisis: the agent's original goal may not be well-defined with respect to its
new ontology. This crisis must be resolved before the agent can make plans
towards achieving its goals.
We discuss in this paper which sorts of agents will undergo ontological
crises and why we may want to create such agents. We present some concrete
examples, and argue that a well-defined procedure for resolving ontological
crises is needed. We point to some possible approaches to solving this problem,
and evaluate these methods on our examples.
|
1105.3828
|
An Algorithmic Solution to the Five-Point Pose Problem Based on the
Cayley Representation of Rotations
|
cs.CV
|
We give a new algorithmic solution to the well-known five-point relative pose
problem. Our approach does not deal with the famous cubic constraint on an
essential matrix. Instead, we use the Cayley representation of rotations in
order to obtain a polynomial system from epipolar constraints. Solving that
system, we directly get relative rotation and translation parameters of the
cameras in terms of roots of a 10th degree polynomial.
|
1105.3829
|
Hierarchical Recursive Running Median
|
cs.DS cs.CV
|
To date, the histogram-based running median filter of Perreault and H\'ebert
is considered the fastest for 8-bit images, being roughly O(1) in the average
case. We present here another approximately constant-time algorithm which
further improves upon the aforementioned one and exhibits a lower associated
constant, making it, at the time of writing, the lowest theoretical complexity
algorithm for the calculation of 2D and higher-dimensional median filters. The
algorithm scales
naturally to higher precision (e.g. 16-bit) integer data without any
modifications. Its adaptive version offers additional speed-up for images
showing compact modes in gray-value distribution. The experimental comparison
to the previous constant-time algorithm defines the application domain of this
new development, besides theoretical interest, as high bit depth data and/or
hardware without SIMD extensions. The C/C++ implementation of the algorithm is
available under GPL for research purposes.
|
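A 1D sketch of the histogram idea behind such filters (our own simplified version: the per-step cost is bounded by a walk over the 256-bin histogram, constant in the window radius though not in the bit depth; the paper's algorithm is considerably more refined):

```python
def running_median_8bit(data, radius):
    """Histogram-based running median for 8-bit sequences: the window's
    256-bin histogram is updated incrementally (one add, one remove per
    step) instead of re-sorting the window. Windows are clipped at the
    sequence boundaries."""
    hist = [0] * 256
    out = []
    n = len(data)
    for i in range(n):
        if i == 0:
            for j in range(max(0, i - radius), min(n, i + radius + 1)):
                hist[data[j]] += 1
        else:
            if i - radius - 1 >= 0:
                hist[data[i - radius - 1]] -= 1   # value leaving the window
            if i + radius < n:
                hist[data[i + radius]] += 1       # value entering the window
        # walk the histogram to the median of the current window
        window = sum(hist)
        target = (window + 1) // 2
        acc = 0
        for v in range(256):
            acc += hist[v]
            if acc >= target:
                out.append(v)
                break
    return out
```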
1105.3833
|
Typical models: minimizing false beliefs
|
cs.AI
|
A knowledge system S describing a part of the real world does not, in general,
contain complete information. Reasoning with incomplete information is prone to
errors since any belief derived from S may be false in the present state of the
world. A false belief may suggest wrong decisions and lead to harmful actions.
So an important goal is to make false beliefs as unlikely as possible. This
work introduces the notions of "typical atoms" and "typical models", and shows
that reasoning with typical models minimizes the expected number of false
beliefs over all ways of using incomplete information. Various properties of
typical models are studied, in particular, correctness and stability of beliefs
suggested by typical models, and their connection to oblivious reasoning.
|
1105.3834
|
A Multiple-Choice Test Recognition System based on the Gamera Framework
|
cs.CV
|
This article describes JECT-OMR, a system that analyzes digital images
representing scans of multiple-choice tests filled in by students. The system
performs a structural analysis of the document in order to extract the chosen
answer for each question, and it also contains a bar-code decoder, used for the
identification of additional information encoded in the document. JECT-OMR was
implemented using the Python programming language, and leverages the power of
the Gamera framework in order to accomplish its task. The system exhibits an
accuracy of over 99% in the recognition of marked and non-marked squares
representing answers, thus making it suitable for real-world applications.
1105.3835
|
Protocols for Relay-Assisted Free-Space Optical Systems
|
cs.IT math.IT
|
We investigate transmission protocols for relay-assisted free-space optical
(FSO) systems, when multiple parallel relays are employed and there is no
direct link between the source and the destination. As alternatives to
all-active FSO relaying, where all the available relays transmit concurrently,
we propose schemes that select only a single relay to participate in the
communication between the source and the destination in each transmission slot.
This selection is based on the channel state information (CSI) obtained either
from all or from some of the FSO links. Thus, the need for synchronizing the
relays' transmissions is avoided and the slowly varying nature of the
atmospheric channel is exploited. For both relay selection and all-active
relaying, novel closed-form expressions for their outage performance are
derived, assuming the versatile Gamma-Gamma channel model. Furthermore, based
on the derived analytical results, the problem of allocating the optical power
resources to the FSO links is addressed, and optimum and suboptimum solutions
are proposed. Numerical results are provided for equal and non-equal length FSO
links, which illustrate the outage behavior of the considered relaying
protocols and demonstrate the significant performance gains offered by the
proposed power allocation schemes.
|
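A sketch of one natural CSI-based selection rule consistent with the setup above: a two-hop path is limited by its weaker hop, so pick the relay whose worse link is best (max-min selection; the paper's actual selection criteria may differ):

```python
def select_relay(csi):
    """Single-relay selection for a two-hop relay network without a direct
    source-destination link. csi maps relay -> (gain_sr, gain_rd), the
    channel gains of its source->relay and relay->destination FSO links;
    the end-to-end quality of a path is taken as its weaker hop."""
    return max(csi, key=lambda r: min(csi[r]))
```

Because the atmospheric channel varies slowly, such a selection can be held for many symbols, which is the property the abstract exploits to avoid synchronizing concurrent relay transmissions.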
1105.3879
|
Non-Malleable Codes from the Wire-Tap Channel
|
cs.CR cs.IT math.IT
|
Recently, Dziembowski et al. introduced the notion of non-malleable codes
(NMC), inspired by the notion of non-malleability in cryptography and the
work of Gennaro et al. in 2004 on tamper-proof security. Informally, when using
NMC, if an attacker modifies a codeword, decoding this modified codeword will
return either the original message or a completely unrelated value.
The definition of NMC is related to a family of modifications authorized to
the attacker. In their paper, Dziembowski et al. propose a construction valid
for the family of all bit-wise independent functions.
In this article, we study the link between the second version of the Wire-Tap
(WT) Channel, introduced by Ozarow and Wyner in 1984, and NMC. Using
coset-coding, we describe a new construction for NMC w.r.t. a subset of the
family of bit-wise independent functions. Our scheme is easier to build and
more efficient than the one proposed by Dziembowski et al.
|
1105.3931
|
Behavior of Graph Laplacians on Manifolds with Boundary
|
cs.LG math.NA stat.ML
|
In manifold learning, algorithms based on graph Laplacians constructed from
data have received considerable attention both in practical applications and
theoretical analysis. In particular, the convergence of graph Laplacians
obtained from sampled data to certain continuous operators has become an active
research topic recently. Most of the existing work has been done under the
assumption that the data is sampled from a manifold without boundary or that
the functions of interest are evaluated at a point away from the boundary.
However, the question of boundary behavior is of considerable practical and
theoretical interest. In this paper we provide an analysis of the behavior of
graph Laplacians at a point near or on the boundary, discuss their convergence
rates and their implications and provide some numerical results. It turns out
that while points near the boundary occupy only a small part of the total
volume of a manifold, the behavior of the graph Laplacian there has different
scaling properties from its behavior elsewhere on the manifold, with global
effects on the whole manifold, an observation with potentially important
implications for the general problem of learning on manifolds.
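The point-cloud graph Laplacian this analysis concerns can be sketched generically: build a Gaussian-kernel weight matrix from sampled points and form L = D - W. This is a minimal illustration, not the paper's construction; the function name, the bandwidth `eps`, and the choice of the unit circle as a boundaryless manifold are all assumptions for the sketch.

```python
import numpy as np

def graph_laplacian(points, eps):
    """Unnormalized graph Laplacian L = D - W with a Gaussian kernel of bandwidth eps."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (4 * eps))
    np.fill_diagonal(W, 0.0)          # no self-loops
    return np.diag(W.sum(1)) - W

# sample n points from the unit circle (a manifold without boundary)
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.c_[np.cos(theta), np.sin(theta)]
L = graph_laplacian(pts, eps=0.05)
```

By construction L is symmetric, has zero row sums, and is positive semi-definite; the paper's question is how (suitably rescaled) L behaves when points are sampled near a boundary instead.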
|
1105.4004
|
Compressed k2-Triples for Full-In-Memory RDF Engines
|
cs.IR cs.DB
|
The current "data deluge" has flooded the Web of Data with very large RDF
datasets. They are hosted and queried through SPARQL endpoints which act as
nodes of a semantic net built on the principles of the Linked Data project.
Although this is a realistic philosophy for global data publishing, its query
performance is diminished when the RDF engines (behind the endpoints) manage
these huge datasets. Their indexes cannot be fully loaded in main memory, hence
these systems need to perform slow disk accesses to solve SPARQL queries. This
paper addresses this problem by a compact indexed RDF structure (called
k2-triples) applying compact k2-tree structures to the well-known
vertical-partitioning technique. It obtains an ultra-compressed representation
of large RDF graphs and allows SPARQL queries to be performed fully in memory
without decompression. We show that k2-triples clearly outperforms the state of
the art in compressibility and traditional vertical partitioning in query
resolution, while remaining very competitive with multi-index solutions.
|
1105.4005
|
Link prediction in complex networks: a local na\"{\i}ve Bayes model
|
physics.soc-ph cs.SI physics.data-an
|
Common-neighbor-based methods are simple yet effective for predicting missing
links; they assume that two nodes are more likely to be connected if they have
more common neighbors. In such methods, each common neighbor of two nodes
contributes equally to the connection likelihood. In this Letter, we argue that
different common neighbors may play different roles and thus make different
contributions, and we propose a local na\"{\i}ve Bayes model accordingly.
Extensive experiments were carried out on eight real networks. Compared with
the common-neighbor-based methods, the present method provides more accurate
predictions. Finally, we give a detailed case study on the US air
transportation network.
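The distinction between uniform and role-dependent common-neighbor weighting can be illustrated in a few lines. The per-neighbor weights of the local naive Bayes model are derived in the Letter itself; the 1/log-degree weight below is the Adamic-Adar heuristic, used here only as a stand-in, and the toy graph is invented for the example.

```python
import math
from collections import defaultdict

def neighbors(edges):
    nbr = defaultdict(set)
    for u, v in edges:
        nbr[u].add(v)
        nbr[v].add(u)
    return nbr

def cn_score(nbr, x, y):
    # plain common-neighbor count: every common neighbor weighs the same
    return len(nbr[x] & nbr[y])

def weighted_score(nbr, x, y):
    # role-dependent weighting: low-degree common neighbors count more
    # (1/log(degree), Adamic-Adar style, standing in for the per-neighbor
    # likelihood weights a naive Bayes model would derive)
    return sum(1.0 / math.log(len(nbr[w]))
               for w in nbr[x] & nbr[y] if len(nbr[w]) > 1)

edges = [(0, 1), (0, 2), (0, 4), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
nbr = neighbors(edges)
```

Here nodes 0 and 3 share common neighbors {1, 2, 4}; the uniform score is 3, while the weighted score discounts the higher-degree neighbor 2.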
|
1105.4026
|
A New Achievable DoF Region for the 3-user MxN Symmetric Interference
Channel
|
cs.IT math.IT
|
In this paper, the 3-user multiple-input multiple-output Gaussian
interference channel with M antennas at each transmitter and N antennas at each
receiver is considered. It is assumed that the channel coefficients are
constant and known to all transmitters and receivers. A novel scheme is
presented that spans a new achievable degrees-of-freedom (DoF) region. For some
values of M and N, the proposed scheme achieves a higher number of DoF than is
currently achievable, while for other values it meets the best known
upper bound. Simulation results are presented showing the superior performance
of the proposed scheme over earlier approaches.
|
1105.4037
|
Mass transportation with LQ cost functions
|
math.OC cs.SY
|
We study the optimal transport problem in the Euclidean space where the cost
function is given by the value function associated with a Linear Quadratic
minimization problem. Under appropriate assumptions, we generalize Brenier's
Theorem, proving existence and uniqueness of an optimal transport map. In the
controllable case, we show that the optimal transport map has to be the
gradient of a convex function up to a linear change of coordinates. We give
regularity results and also investigate the non-controllable case.
|
1105.4042
|
Adaptive and optimal online linear regression on $\ell^1$-balls
|
stat.ML cs.LG math.ST stat.TH
|
We consider the problem of online linear regression on individual sequences.
The goal in this paper is for the forecaster to output sequential predictions
which are, after $T$ time rounds, almost as good as the ones output by the best
linear predictor in a given $\ell^1$-ball in $\mathbb{R}^d$. We consider both the
cases where the dimension~$d$ is small and large relative to the time horizon
$T$. We first present regret bounds with optimal dependencies on $d$, $T$, and
on the sizes $U$, $X$ and $Y$ of the $\ell^1$-ball, the input data and the
observations. The minimax regret is shown to exhibit a regime transition around
the point $d = \sqrt{T} U X / (2 Y)$. Furthermore, we present efficient
algorithms that are adaptive, i.e., that do not require knowledge of $U$,
$X$, $Y$, and $T$, but still achieve nearly optimal regret bounds.
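A minimal sketch of a forecaster competing with the best predictor in an $\ell^1$-ball is the classical EG$\pm$ (exponentiated gradient) strategy over signed coordinates. This is an assumption for illustration: the paper's algorithms are adaptive and tune their parameters automatically, whereas the fixed learning rate and toy data below are not from the paper.

```python
import numpy as np

def eg_regression(X, y, U, eta):
    """EG± forecaster: maintains a distribution over the 2d signed coordinates
    of the l1-ball of radius U and updates it multiplicatively on the
    squared-loss gradient."""
    T, d = X.shape
    p = np.full(2 * d, 1.0 / (2 * d))          # uniform start over +/- coordinates
    losses = []
    for t in range(T):
        v = U * np.concatenate([X[t], -X[t]])  # per-coordinate signed predictions
        yhat = p @ v
        losses.append((yhat - y[t]) ** 2)
        grad = 2 * (yhat - y[t]) * v           # gradient of squared loss w.r.t. p
        p *= np.exp(-eta * grad)
        p /= p.sum()                           # renormalize onto the simplex
    return np.array(losses)

rng = np.random.default_rng(1)
T, d = 2000, 20
X = rng.uniform(-1, 1, (T, d))
w_star = np.zeros(d)
w_star[0], w_star[1] = 0.7, -0.3               # sparse target inside the l1-ball
y = X @ w_star
losses = eg_regression(X, y, U=1.0, eta=0.1)
```

With a realizable sparse target, the per-round squared loss drops well below its initial level as the weight vector concentrates on the correct signed coordinates.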
|
1105.4044
|
Turnover Rate of Popularity Charts in Neutral Models
|
physics.soc-ph cs.SI
|
It has been shown recently that in many different cultural phenomena the
turnover rate of the most popular artefacts in a population exhibits some
regularities. A very simple expression for this turnover rate has been proposed
by Bentley et al. and its validity in two simple models for copying and
innovation is investigated in this paper. It is found that Bentley's formula is
an approximation of the real behaviour of the turnover rate in the
Wright-Fisher model, while it is not valid in the Moran model.
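The kind of measurement involved can be sketched with a standard neutral Wright-Fisher copying model: each step every agent copies a random agent from the previous generation or innovates with probability mu, and one records how many entries of the top-y popularity chart are new each step. Parameter values are illustrative; comparing the measured turnover against Bentley's expression is the paper's contribution and is not reproduced here.

```python
import random

def wright_fisher_turnover(N, mu, y, gens, seed=0):
    """Per-step turnover of the top-y chart in a neutral copying model."""
    rng = random.Random(seed)
    pop = list(range(N))                 # initial variants, one per agent
    next_label = N                       # next fresh variant id for innovations
    turnovers, prev_top = [], None
    for _ in range(gens):
        new_pop = []
        for _ in range(N):
            if rng.random() < mu:        # innovation: brand-new variant
                new_pop.append(next_label)
                next_label += 1
            else:                        # copying: pick a random agent's variant
                new_pop.append(rng.choice(pop))
        pop = new_pop
        counts = {}
        for v in pop:
            counts[v] = counts.get(v, 0) + 1
        top = set(sorted(counts, key=counts.get, reverse=True)[:y])
        if prev_top is not None:
            turnovers.append(len(top - prev_top))   # new chart entries this step
        prev_top = top
    return turnovers

z = wright_fisher_turnover(N=200, mu=0.05, y=10, gens=300)
```

The Moran variant replaces one agent at a time instead of the whole generation, which is the distinction the paper's comparison turns on.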
|
1105.4058
|
Human Identity Verification based on Heart Sounds: Recent Advances and
Future Directions
|
cs.CV stat.AP
|
Identity verification is an increasingly important process in our daily
lives, and biometric recognition is a natural solution to the authentication
problem.
One of the most important research directions in the field of biometrics is
the characterization of novel biometric traits that can be used in conjunction
with other traits, to limit their shortcomings or to enhance their performance.
The aim of this work is to introduce the reader to the usage of heart sounds
for biometric recognition, describing the strengths and the weaknesses of this
novel trait and analyzing in detail the methods developed so far by different
research groups and their performance.
|
1105.4082
|
Emergent velocity agreement in robot networks
|
cs.NI cs.RO
|
In this paper we propose and prove correct a new self-stabilizing velocity
agreement (flocking) algorithm for oblivious and asynchronous robot networks.
Our algorithm allows a flock of uniform robots to follow a flock-head that
emerges during the computation, whatever its direction in the plane. Robots are
asynchronous and oblivious and do not share a common coordinate system. Our
solution comprises three modules, organized as follows: creation of a common
coordinate system that also allows the emergence of a flock-head, setting up
the flock pattern, and moving the flock. The novelty of our approach stems from
identifying the necessary conditions on the flock-pattern placement and the
motion of the flock-head (rotation, translation or speed) that allow the
flock both to follow the exact same head and to preserve the flock pattern.
Additionally, our system is self-healing and self-stabilizing. If the head
leaves (the leading robot disappears, or is damaged and cannot be
recognized by the other robots), the flock agrees on another head and follows
the trajectory of the new head. Also, robots are oblivious (they do not recall
the results of their previous computations) and we make no assumption on their
initial positions. The step complexity of our solution is O(n).
|
1105.4143
|
Opportunities for Network Coding: To Wait or Not to Wait
|
math.OC cs.IT cs.NI math.IT
|
It has been well established that wireless network coding can significantly
improve the efficiency of multi-hop wireless networks. However, in a stochastic
environment some of the packets might not have coding pairs, which limits the
number of available coding opportunities. In this context, an important
decision is whether to delay packet transmission in the hope that a coding pair
will be available in the future or transmit a packet without coding. The paper
addresses this problem by formulating a stochastic dynamic program whose
objective is to minimize the long-run average cost per unit time incurred due
to transmissions and delays. In particular, we identify optimal control actions
that would balance between costs of transmission against the costs incurred due
to the delays. Moreover, we seek to address a crucial question: what should be
observed as the state of the system? We analytically show that observing queue
lengths suffices if the system can be modeled as a Markov decision process. We
also show that a stationary threshold type policy based on queue lengths is
optimal. We further substantiate our results with simulation experiments for
more generalized settings.
|
1105.4183
|
Cubical Cohomology Ring of 3D Photographs
|
cs.CV
|
Cohomology and the cohomology ring of three-dimensional (3D) objects are
topological invariants that characterize holes and their relations. The
cohomology ring has traditionally been computed on simplicial complexes.
Nevertheless, cubical complexes deal directly with the voxels in 3D images, so
no additional triangulation is necessary, which facilitates efficient
algorithms for the computation of topological invariants in the image context.
In this paper, we
present formulas to directly compute the cohomology ring of 3D cubical
complexes without making use of any additional triangulation. Starting from a
cubical complex $Q$ that represents a 3D binary-valued digital picture whose
foreground has one connected component, we compute first the cohomological
information on the boundary of the object, $\partial Q$ by an incremental
technique; then, using a face reduction algorithm, we compute it on the whole
object; finally, applying the mentioned formulas, the cohomology ring is
computed from such information.
|
1105.4204
|
Fast O(1) bilateral filtering using trigonometric range kernels
|
cs.CV cs.CE cs.DC cs.DS
|
It is well-known that spatial averaging can be realized (in space or
frequency domain) using algorithms whose complexity does not depend on the size
or shape of the filter. These fast algorithms are generally referred to as
constant-time or O(1) algorithms in the image processing literature. Along with
the spatial filter, the edge-preserving bilateral filter [Tomasi1998] involves
an additional range kernel. This is used to restrict the averaging to those
neighborhood pixels whose intensities are similar or close to that of the pixel
of interest. The range kernel operates by acting on the pixel intensities. This
makes the averaging process non-linear and computationally intensive,
especially when the spatial filter is large. In this paper, we show how the
O(1) averaging algorithms can be leveraged for realizing the bilateral filter
in constant-time, by using trigonometric range kernels. This is done by
generalizing the idea in [Porikli2008] of using polynomial range kernels. The
class of trigonometric kernels turns out to be sufficiently rich, allowing for
the approximation of the standard Gaussian bilateral filter. The attractive
feature of our approach is that, for a fixed number of terms, the quality of
approximation achieved using trigonometric kernels is much superior to that
obtained in [Porikli2008] using polynomials.
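The shiftability trick behind the constant-time realization can be illustrated in 1D with a single cosine term: since cos(γ(I(p)−I(x))) = cos(γI(p))cos(γI(x)) + sin(γI(p))sin(γI(x)), the non-linear range weighting reduces to a few linear (spatial-only) convolutions of auxiliary images. The sketch below uses a box spatial filter and one cosine term; the paper's raised-cosine kernels use several such terms to approximate the Gaussian.

```python
import numpy as np

def cosine_bilateral_1d(I, box_radius, gamma):
    """Bilateral filter with range kernel cos(gamma*t), computed only with
    linear box convolutions of the auxiliary images cos(gamma*I), sin(gamma*I)."""
    w = np.ones(2 * box_radius + 1)            # box spatial filter
    c, s = np.cos(gamma * I), np.sin(gamma * I)
    conv = lambda f: np.convolve(f, w, mode="same")
    num = c * conv(c * I) + s * conv(s * I)    # shifted-cosine expansion
    den = c * conv(c) + s * conv(s)
    return num / den

rng = np.random.default_rng(0)
edge = np.r_[np.zeros(50), np.ones(50)]        # noisy step edge
I = edge + 0.05 * rng.standard_normal(100)
out = cosine_bilateral_1d(I, box_radius=5, gamma=1.0)
```

With intensities in roughly [0, 1] and gamma = 1, the range kernel stays positive over the data range; the identity makes the output exactly equal to the direct (per-pixel, windowed) bilateral sum, but at a cost independent of the filter size.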
|
1105.4206
|
Spatial Intercell Interference Cancellation with CSI Training and
Feedback
|
cs.IT math.IT
|
We investigate intercell interference cancellation (ICIC) with a practical
downlink training and uplink channel state information (CSI) feedback model.
The average downlink throughput for such a 2-cell network is derived. The user
location has a strong effect on the signal-to-interference ratio (SIR) and the
channel estimation error. This motivates adaptively switching between
traditional (single-cell) beamforming and ICIC at low signal-to-noise ratio
(SNR) where ICIC is preferred only with low SIR and accurate channel
estimation, and the use of ICIC with optimized training and feedback at high
SNR. For a given channel coherence time and fixed training and feedback
overheads, we develop optimal data vs. pilot power allocation for CSI training
as well as optimal feedback resource allocation to feed back CSI of different
channels. Both analog and finite-rate digital feedback are considered. With
analog feedback, the training power optimization provides a more significant
performance gain than feedback optimization; conversely, for digital feedback,
performance is more sensitive to the feedback bit allocation than to the
training power optimization. We show that even with low-rate feedback and
standard training, ICIC can transform an interference-limited cellular network
into a noise-limited one.
|
1105.4224
|
On A Semi-Automatic Method for Generating Composition Tables
|
cs.AI cs.LO
|
Originating from Allen's Interval Algebra, composition-based reasoning has
been widely acknowledged as the most popular reasoning technique in qualitative
spatial and temporal reasoning. Given a qualitative calculus (i.e. a relation
model), the first thing we should do is to establish its composition table
(CT). In the past three decades, such work has usually been done manually. This
is undesirable and error-prone, given that the calculus may contain tens or
hundreds of basic relations. Computing the correct CT was identified by Tony
Cohn in 1995 as a challenge for computer scientists. This paper addresses
this problem and introduces a semi-automatic method to compute the CT by
randomly generating triples of elements. For several important qualitative
calculi, our method can establish the correct CT in a reasonably short time.
This is illustrated by applications to the Interval Algebra, the Region
Connection Calculus RCC-8, the INDU calculus, and the Oriented Point Relation
Algebras. Our method can also be used to generate CTs for customised
qualitative calculi defined on restricted domains.
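The idea can be illustrated on the simplest qualitative calculus, the point algebra over {<, =, >}: sample random triples from a small domain and record, for each pair of base relations holding between (a, b) and (b, c), which relations between a and c co-occur. The domain size, sample count, and names are illustrative choices for this sketch.

```python
import random

def rel(a, b):
    return "<" if a < b else (">" if a > b else "=")

def composition_table(samples=20000, domain=5, seed=0):
    """Estimate the composition table of the point algebra by sampling
    random triples (a, b, c) from a small integer domain."""
    rng = random.Random(seed)
    ct = {}
    for _ in range(samples):
        a, b, c = (rng.randrange(domain) for _ in range(3))
        ct.setdefault((rel(a, b), rel(b, c)), set()).add(rel(a, c))
    return ct

ct = composition_table()
```

For instance, the entry for (<, <) is exactly {<}, while (<, >) yields the full set {<, =, >}. With too few samples an entry may still be incomplete, which is one reason the method is semi-automatic rather than fully automatic.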
|
1105.4251
|
Synthesizing Products for Online Catalogs
|
cs.DB
|
A high-quality, comprehensive product catalog is essential to the success of
Product Search engines and shopping sites such as Yahoo! Shopping, Google
Product Search or Bing Shopping. But keeping catalogs up-to-date is a
challenging task that calls for automated techniques. In this paper,
we introduce the problem of product synthesis, a key component of catalog
creation and maintenance. Given a set of offers advertised by merchants, the
goal is to identify new products and add them to the catalog together with
their (structured) attributes. A fundamental challenge is the scale of the
problem: a Product Search engine receives data from thousands of merchants and
millions of products; the product taxonomy contains thousands of categories,
where each category comes in a different schema; and merchants use
representations for products that are different from the ones used in the
catalog of the Product Search engine.
We propose a system that provides an end-to-end solution to the product
synthesis problem, addressing the issues involved in data extraction from
offers, schema reconciliation, and data
fusion. We developed a novel and scalable technique for schema matching which
leverages knowledge about previously-known instance-level associations between
offers and products; and it is trained using automatically created training
sets (no manually-labeled data is needed). We present an experimental
evaluation of our system using data from Bing Shopping for more than 800K
offers, a thousand merchants, and 400 categories. The evaluation confirms that
our approach is able to automatically generate a large number of accurate
product specifications, and that our schema reconciliation component
outperforms state-of-the-art schema matching techniques in terms of precision
and recall.
|
1105.4252
|
Column-Oriented Storage Techniques for MapReduce
|
cs.DB cs.DC
|
Users of MapReduce often run into performance problems when they scale up
their workloads. Many of the problems they encounter can be overcome by
applying techniques learned from over three decades of research on parallel
DBMSs. However, translating these techniques to a MapReduce implementation such
as Hadoop presents unique challenges that can lead to new design choices. This
paper describes how column-oriented storage techniques can be incorporated in
Hadoop in a way that preserves its popular programming APIs.
We show that simply using binary storage formats in Hadoop can provide a 3x
performance boost over the naive use of text files. We then introduce a
column-oriented storage format that is compatible with the replication and
scheduling constraints of Hadoop and show that it can speed up MapReduce jobs
on real workloads by an order of magnitude. We also show that dealing with
complex column types such as arrays, maps, and nested records, which are common
in MapReduce jobs, can incur significant CPU overhead. Finally, we introduce a
novel skip list column format and lazy record construction strategy that avoids
deserializing unwanted records to provide an additional 1.5x performance boost.
Experiments on a real intranet crawl are used to show that our column-oriented
storage techniques can improve the performance of the map phase in Hadoop by as
much as two orders of magnitude.
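The payoff of column orientation plus lazy record construction can be sketched with a toy in-memory model (plain Python, not Hadoop code; the field names and the per-column JSON serialization are illustrative assumptions): each column is serialized separately, and a record object deserializes a field only when it is actually accessed.

```python
import json

class ColumnStore:
    """Toy column-oriented store: each column is serialized independently,
    so a job that reads few columns never deserializes the rest."""
    def __init__(self, rows, schema):
        self.cols = {f: [json.dumps(r[f]) for r in rows] for f in schema}
        self.decodes = 0                     # count of field deserializations

    def record(self, i):
        store = self
        class LazyRecord:                    # deserialize a field on first access
            def __getitem__(_, field):
                store.decodes += 1
                return json.loads(store.cols[field][i])
        return LazyRecord()

rows = [{"url": f"http://x/{i}", "meta": {"rank": i}} for i in range(1000)]
cs = ColumnStore(rows, ["url", "meta"])
urls = [cs.record(i)["url"] for i in range(1000)]   # touches only the url column
```

Scanning only the `url` column performs 1000 deserializations instead of 2000; with complex types such as the nested `meta` records, skipping unwanted columns is where the CPU savings come from.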
|
1105.4253
|
Implementing Performance Competitive Logical Recovery
|
cs.DB
|
New hardware platforms, e.g. cloud, multi-core, etc., have led to a
reconsideration of database system architecture. Our Deuteronomy project
separates transactional functionality from data management functionality,
enabling a flexible response to exploiting new platforms. This separation
requires, however, that recovery is described logically. In this paper, we
extend current recovery methods to work in this logical setting. While this is
straightforward in principle, performance is an issue. We show how ARIES style
recovery optimizations can work for logical recovery where page information is
not captured on the log. In side-by-side performance experiments using a common
log, we compare logical recovery with a state-of-the-art ARIES style recovery
implementation and show that logical redo performance can be competitive.
|
1105.4254
|
Personalized Social Recommendations - Accurate or Private?
|
cs.DB cs.CR cs.SI
|
With the recent surge of social networks like Facebook, new forms of
recommendations have become possible - personalized recommendations of ads,
content, and even new friend and product connections based on one's social
interactions. Since recommendations may use sensitive social information, it is
speculated that these recommendations are associated with privacy risks. The
main contribution of this work is in formalizing these expected trade-offs
between the accuracy and privacy of personalized social recommendations.
In this paper, we study whether "social recommendations", or recommendations
that are solely based on a user's social network, can be made without
disclosing sensitive links in the social graph. More precisely, we quantify the
loss in utility when existing recommendation algorithms are modified to satisfy
a strong notion of privacy, called differential privacy. We prove lower bounds
on the minimum loss in utility for any recommendation algorithm that is
differentially private. We adapt two privacy preserving algorithms from the
differential privacy literature to the problem of social recommendations, and
analyze their performance in comparison to the lower bounds, both analytically
and experimentally. We show that good private social recommendations are
feasible only for a small subset of the users in the social network or for a
lenient setting of privacy parameters.
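The differential-privacy machinery involved can be sketched with the Laplace mechanism, a generic construction rather than the paper's specific algorithms; the utility scores, the sensitivity value, and the item names below are hypothetical.

```python
import random

def private_scores(scores, sensitivity, epsilon, seed=0):
    """Laplace mechanism: perturb each utility score with Laplace(0, b) noise,
    b = sensitivity / epsilon, assuming one edge changes any score by at most
    `sensitivity`. Smaller epsilon (stronger privacy) means more noise, i.e.
    the accuracy/privacy trade-off the paper quantifies."""
    rng = random.Random(seed)
    b = sensitivity / epsilon
    # the difference of two i.i.d. Exp(rate=1/b) draws is Laplace(0, b)
    return {item: s + rng.expovariate(1 / b) - rng.expovariate(1 / b)
            for item, s in scores.items()}

scores = {"item_a": 3.0, "item_b": 1.0, "item_c": 0.5}   # hypothetical utilities
noisy = private_scores(scores, sensitivity=1.0, epsilon=2.0)
recommended = max(noisy, key=noisy.get)
```

Recommending the arg-max of the noisy scores is private but lossy; the paper's lower bounds say how much utility any such private mechanism must give up.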
|
1105.4255
|
Efficient Diversification of Web Search Results
|
cs.IR
|
In this paper we analyze the efficiency of various search results
diversification methods. While efficacy of diversification approaches has been
deeply investigated in the past, response time and scalability issues have been
rarely addressed. A unified framework for studying performance and feasibility
of result diversification solutions is thus proposed. First we define a new
methodology for detecting when, and how, query results need to be diversified.
To this purpose, we rely on the concept of "query refinement" to estimate the
probability that a query is ambiguous. Then, relying on this novel ambiguity
detection method, we deploy and compare on a standard test set, three different
diversification methods: IASelect, xQuAD, and OptSelect. While the first two
are recent state-of-the-art proposals, the latter is an original algorithm
introduced in this paper. We evaluate both the efficiency and the effectiveness
of our approach against its competitors by using the standard TREC Web
diversification track testbed. Results show that OptSelect is able to run two
orders of magnitude faster than the two other state-of-the-art approaches and
to obtain comparable figures in diversification effectiveness.
|
1105.4256
|
Social content matching in MapReduce
|
cs.SI cs.DC
|
Matching problems are ubiquitous. They occur in economic markets, labor
markets, internet advertising, and elsewhere. In this paper we focus on an
application of matching for social media. Our goal is to distribute content
from information suppliers to information consumers. We seek to maximize the
overall relevance of the matched content from suppliers to consumers while
regulating the overall activity, e.g., ensuring that no consumer is overwhelmed
with data and that all suppliers have chances to deliver their content.
We propose two matching algorithms, GreedyMR and StackMR, geared for the
MapReduce paradigm. Both algorithms have provable approximation guarantees, and
in practice they produce high-quality solutions. While both algorithms scale
extremely well, we can show that StackMR requires only a poly-logarithmic
number of MapReduce steps, making it an attractive option for applications with
very large datasets. We experimentally show the trade-offs between quality and
efficiency of our solutions on two large datasets coming from real-world
social-media web sites.
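The underlying combinatorial problem is a capacity-constrained (b-)matching. A sequential greedy heuristic in its spirit is sketched below in plain Python; the names, capacities, and relevance scores are illustrative, and adapting such a greedy scheme to MapReduce rounds is precisely what the paper's algorithms do.

```python
def greedy_matching(edges, cap_s, cap_c):
    """Greedy b-matching: scan (supplier, consumer, relevance) edges in
    decreasing relevance and accept an edge while both endpoints still have
    residual capacity."""
    rem_s, rem_c = dict(cap_s), dict(cap_c)
    chosen = []
    for s, c, w in sorted(edges, key=lambda e: -e[2]):
        if rem_s[s] > 0 and rem_c[c] > 0:
            chosen.append((s, c, w))
            rem_s[s] -= 1       # supplier delivered one more item
            rem_c[c] -= 1       # consumer received one more item
    return chosen

edges = [("s1", "c1", 0.9), ("s1", "c2", 0.8),
         ("s2", "c1", 0.7), ("s2", "c2", 0.4)]
m = greedy_matching(edges, {"s1": 1, "s2": 1}, {"c1": 1, "c2": 1})
```

With unit capacities the greedy pass picks (s1, c1) and then (s2, c2); the capacity constraints are what keep any single consumer from being overwhelmed while every supplier still gets to deliver.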
|
1105.4272
|
Calibration with Changing Checking Rules and Its Application to
Short-Term Trading
|
cs.LG
|
We provide a natural learning process in which a financial trader receives a
risk-free gain when the stock market is inefficient. In this process, the
trader rationally chooses his gambles using a prediction made by a
randomized calibrated algorithm. Our strategy is based on Dawid's notion of
calibration with more general changing checking rules and on some modification
of Kakade and Foster's randomized algorithm for computing calibrated forecasts.
|
1105.4274
|
On Instability of the Ergodic Limit Theorems with Respect to Small
Violations of Algorithmic Randomness
|
cs.IT math.IT
|
An instability property of Birkhoff's ergodic theorem and related
asymptotic laws with respect to small violations of algorithmic randomness is
studied. The Shannon--McMillan--Breiman theorem and all universal compression
schemes are also among them.
|
1105.4276
|
Community structure of complex software systems: Analysis and
applications
|
cs.SI cs.SE physics.data-an physics.soc-ph
|
Due to notable discoveries in the fast evolving field of complex networks,
recent research in software engineering has also focused on representing
software systems with networks. Previous work has observed that these networks
follow scale-free degree distributions and reveal small-world phenomena, while
we here explore another property commonly found in different complex networks,
i.e. community structure. We adopt class dependency networks, where nodes
represent software classes and edges represent dependencies among them, and
show that these networks reveal a significant community structure,
characterized by similar properties as observed in other complex networks.
However, although intuitive and anticipated by different phenomena, identified
communities do not exactly correspond to software packages. We empirically
confirm our observations on several networks constructed from Java and various
third party libraries, and propose different applications of community
detection to software engineering.
|
1105.4278
|
Is the Multiverse Hypothesis capable of explaining the Fine Tuning of
Nature Laws and Constants? The Case of Cellular Automata
|
nlin.CG astro-ph.CO cs.NE
|
The objective of this paper is to analyze to what extent the multiverse
hypothesis provides a real explanation of the peculiarities of the laws and
constants in our universe. First we argue in favor of the thesis that all
multiverses except Tegmark's <<mathematical multiverse>> are too small to
explain the fine tuning, so that they merely shift the problem up one level.
But the <<mathematical multiverse>> is surely too large. To prove this
assessment, we have performed a number of experiments with cellular automata of
complex behavior, which can be considered as universes in the mathematical
multiverse. The analogy between what happens in some automata (in particular
Conway's <<Game of Life>>) and the real world is very strong. But if the
results of our experiments can be extrapolated to our universe, we should
expect to inhabit -- in the context of the multiverse -- a world in which at
least some of the laws and constants of nature should show a certain time
dependence. Actually, the probability of our existence in a world such as ours
would be mathematically equal to zero. In consequence, the results presented in
this paper can be considered as an inkling that the hypothesis of the
multiverse, whatever its type, does not offer an adequate explanation for the
peculiarities of the physical laws in our world. A slightly reduced version of
this paper has been published in the Journal for General Philosophy of Science,
Springer, March 2013, DOI: 10.1007/s10838-013-9215-7.
|
1105.4318
|
Correction of Noisy Sentences using a Monolingual Corpus
|
cs.DL cs.AI
|
Correction of Noisy Natural Language Text is an important and well studied
problem in Natural Language Processing. It has a number of applications in
domains like Statistical Machine Translation, Second Language Learning and
Natural Language Generation. In this work, we consider some statistical
techniques for Text Correction. We define the classes of errors commonly found
in text and describe algorithms to correct them. The data has been taken from a
poorly trained Machine Translation system. The algorithms use only a language
model in the target language in order to correct the sentences. We use phrase
based correction methods in both the algorithms. The phrases are replaced and
combined to give us the final corrected sentence. We also present the methods
to model different kinds of errors, in addition to results of the working of
the algorithms on the test set. We show that one of the approaches fails to
achieve the desired goal, whereas the other succeeds well. In the end, we
analyze the possible reasons for such a trend in performance.
|
1105.4340
|
Outage Probability of Diversity Combining Receivers in Arbitrarily
Fading Channels
|
cs.IT math.IT
|
We propose a simple and accurate method to evaluate the outage probability at
the output of an arbitrarily fading L-branch diversity combining receiver. The
method is based on the saddlepoint approximation, which only requires the
knowledge of the moment generating functions of the signal-to-noise ratio at
the output of each diversity branch. In addition, we show that the obtained
results reduce to closed-form expressions in many particular cases of practical
interest. Numerical results illustrate the very high accuracy of the proposed
method for practical outage values and for a large mixture of fading and system
parameters.
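The flavor of the method can be sketched on a case with a known answer: L-branch maximal-ratio combining over i.i.d. unit-mean Rayleigh branches, where the combined SNR is Gamma(L, 1) and the cumulant generating function is K(s) = −L log(1 − s). The Lugannani-Rice form of the saddlepoint approximation used below is a standard choice and an assumption of this sketch, not necessarily the exact variant of the paper.

```python
import math

def outage_saddlepoint(x, L):
    """Lugannani-Rice saddlepoint approximation of the outage P(SNR < x)
    for L-branch MRC over i.i.d. unit-mean Rayleigh branches (x != L, the mean).
    Only the CGF K(s) = -L*log(1-s) and its derivatives are needed."""
    s = 1.0 - L / x                              # saddlepoint solves K'(s) = x
    K = -L * math.log(1.0 - s)
    K2 = L / (1.0 - s) ** 2
    w = math.copysign(math.sqrt(2.0 * (s * x - K)), s)
    u = s * math.sqrt(K2)
    Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    phi = lambda t: math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)
    return Phi(w) + phi(w) * (1.0 / w - 1.0 / u)

def outage_exact(x, L):
    # Gamma(L, 1) CDF for integer L (Erlang), used here only for validation
    return 1.0 - math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(L))

approx, exact = outage_saddlepoint(1.0, 4), outage_exact(1.0, 4)
```

Even at practical outage levels of about 10^-2 to 10^-3, the approximation agrees with the exact Erlang CDF to within a fraction of a percent, while requiring only the MGF of each branch.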
|
1105.4354
|
Preprocessing for Automating Early Detection of Cervical Cancer
|
cs.CV
|
Uterine Cervical Cancer is one of the most common forms of cancer in women
worldwide. Most cases of cervical cancer can be prevented through screening
programs aimed at detecting precancerous lesions. During Digital Colposcopy,
colposcopic images or cervigrams are acquired in raw form. They contain
specular reflections (SR), which appear as bright spots heavily saturated with
white light and occur due to the presence of moisture on the uneven cervix
surface. The cervix region occupies about half of the raw cervigram image.
Other parts of the image contain irrelevant information, such as equipment,
frames, text and non-cervix tissues. This irrelevant information can confuse
automatic identification of the tissues within the cervix. Therefore we focus
on the
cervical borders, so that we have a geometric boundary on the relevant image
area. Our novel technique eliminates the SR, identifies the region of interest
and makes the cervigram ready for segmentation algorithms.
|
1105.4360
|
Random Matrix Model for Nakagami-Hoyt Fading
|
cs.IT math-ph math.IT math.MP
|
Random matrix model for the Nakagami-q (Hoyt) fading in multiple-input
multiple-output (MIMO) communication channels with an arbitrary number of
transmitting and receiving antennas is considered. The joint probability
density for the eigenvalues of H{\dag}H (or HH{\dag}), where H is the channel
matrix, is shown to correspond to the Laguerre crossover ensemble of random
matrices and is given in terms of a Pfaffian. An exact expression for the marginal
density of eigenvalues is obtained as a series consisting of associated
Laguerre polynomials. This is used to study the effect of fading on the Shannon
channel capacity. Exact expressions for higher order density correlation
functions are also given which can be used to study the distribution of channel
capacity.
|
1105.4378
|
Performance of Hybrid Concatenated Trellis Codes CPFSK with Iterative
Decoding over Fading Channels
|
cs.IT math.IT
|
Concatenation is a method of building long codes out of shorter ones; it
attempts to meet the problem of decoding complexity by breaking the required
computation into manageable segments. Concatenated Continuous Phase Frequency
Shift Keying (CPFSK) facilitates powerful error correction. CPFSK also has the
advantage of being bandwidth efficient and compatible with nonlinear
amplifiers. Bandwidth efficient concatenated coded modulation schemes were
designed for communication over Additive White Gaussian noise (AWGN), and
Rayleigh fading channels. Analytical bounds on the performance of serial
concatenated convolutional codes (SCCC) and parallel concatenated
convolutional codes (PCCC) were derived as a basis of comparison with the third
category, known as the hybrid concatenated trellis codes scheme (HCTC). An upper
bound to the average maximum-likelihood bit error probability of the three
schemes was obtained. Design rules for the parallel, outer, and inner codes
that maximize the interleaver's gain were discussed. Finally, a low complexity
iterative decoding algorithm that yields a better performance is proposed.
|
1105.4379
|
Performance of MC-MC CDMA Systems with Nonlinear Models of HPA
|
cs.IT math.IT
|
A new wireless communication system denoted as Multi-Code Multi-Carrier CDMA
(MC-MC CDMA), which is the combination of Multi-Code CDMA and Multi-Carrier
CDMA, is analyzed in this paper. This system can satisfy multi-rate services
using multi-code schemes and multi-carrier services used for high-rate
transmission. The system is evaluated using a Traveling Wave Tube Amplifier
(TWTA). This type of amplifier continues to offer the best microwave high power
amplifier (HPA) performance in terms of power efficiency, size and cost, but
lags behind Solid State Power Amplifiers (SSPA's) in linearity. This paper
presents a technique for improving TWTA linearity. The use of predistorter (PD)
linearization technique is described to provide TWTA performance comparable or
superior to conventional SSPA's. The characteristics of the PD scheme are
derived based on the extension of Saleh's model for HPA.
|
1105.4380
|
Performance of MF-MSK Systems with Pre-distortion Schemes
|
cs.IT math.IT
|
Efficient RF power amplifiers used in third-generation systems require
linearization in order to reduce adjacent channel inter-modulation distortion
without sacrificing efficiency. Digital baseband predistortion is a highly
cost-effective way to linearize power amplifiers (PAs). New communications
services have created a demand for highly linear high power amplifiers (HPAs).
Traveling Wave Tube Amplifiers (TWTAs) continue to offer the best microwave HPA
performance in terms of power efficiency, size, and cost, but lag behind Solid
State Power Amplifiers (SSPAs) in linearity. This paper presents a technique
for improving TWTA linearity. The use of a predistorter (PD) linearization
technique is described to provide TWTA performance comparable or superior to
that of conventional SSPAs. The characteristics of the PD scheme are derived
based on an extension of Saleh's model for HPAs. Analysis results for
Multi-frequency Minimum Shift Keying (MF-MSK) in non-linear channels are
presented in this paper.
|
1105.4385
|
b-Bit Minwise Hashing for Large-Scale Linear SVM
|
cs.LG stat.AP stat.CO stat.ML
|
In this paper, we propose to (seamlessly) integrate b-bit minwise hashing
with linear SVM to substantially improve the training (and testing) efficiency
using much smaller memory, with essentially no loss of accuracy. Theoretically,
we prove that the resemblance matrix, the minwise hashing matrix, and the b-bit
minwise hashing matrix are all positive definite matrices (kernels).
Interestingly, our proof for the positive definiteness of the b-bit minwise
hashing kernel naturally suggests a simple strategy to integrate b-bit hashing
with linear SVM. Our technique is particularly useful when the data cannot fit
in memory, which is an increasingly critical issue in large-scale machine
learning. Our preliminary experimental results on a publicly available webspam
dataset (350K samples and 16 million dimensions) verified the effectiveness of
our algorithm. For example, the training time was reduced to merely a few
seconds. In addition, our technique can be easily extended to many other linear
and nonlinear machine learning applications such as logistic regression.
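The feature construction the abstract describes can be sketched in a few lines: the fragment below builds a b-bit minwise-hashing feature vector that a standard linear SVM could then consume. The affine hash functions used to simulate permutations, the parameter values (64 permutations, b = 2), and the input set are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def bbit_minwise_features(item_set, num_perms=64, b=2, seed=0):
    """Map a set of integer item IDs to a binary feature vector.

    For each of num_perms random affine hash functions (standing in for
    random permutations -- an assumption for this sketch), keep only the
    lowest b bits of the minimum hash value and one-hot encode that value
    into a block of 2^b positions. A linear SVM is then trained directly
    on the concatenation of these blocks.
    """
    rng = np.random.default_rng(seed)
    prime = (1 << 61) - 1  # large Mersenne prime for the affine hashes
    a = rng.integers(1, prime, size=num_perms)
    c = rng.integers(0, prime, size=num_perms)
    block = 1 << b
    features = np.zeros(num_perms * block)
    for j in range(num_perms):
        minhash = min((int(a[j]) * int(v) + int(c[j])) % prime
                      for v in item_set)
        low_bits = minhash & (block - 1)          # keep lowest b bits
        features[j * block + low_bits] = 1.0      # one-hot per permutation
    return features

x = bbit_minwise_features({3, 17, 42, 99})
```

Each sample thus becomes a sparse vector with exactly `num_perms` nonzeros out of `num_perms * 2^b` dimensions, which is what makes training on it cheap.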
|
1105.4394
|
Integrating Testing and Interactive Theorem Proving
|
cs.SE cs.AI cs.LO
|
Using an interactive theorem prover to reason about programs involves a
sequence of interactions where the user challenges the theorem prover with
conjectures. Invariably, many of the conjectures posed are in fact false, and
users often spend considerable effort examining the theorem prover's output
before realizing this. We present a synergistic integration of testing with
theorem proving, implemented in the ACL2 Sedan (ACL2s), for automatically
generating concrete counterexamples. Our method uses the full power of the
theorem prover and associated libraries to simplify conjectures; this
simplification can transform conjectures for which finding counterexamples is
hard into conjectures where finding counterexamples is trivial. In fact, our
approach even leads to better theorem proving, e.g. if testing shows that a
generalization step leads to a false conjecture, we force the theorem prover to
backtrack, allowing it to pursue more fruitful options that may yield a proof.
The focus of the paper is on the engineering of a synergistic integration of
testing with interactive theorem proving; this includes extending ACL2 with new
functionality that we expect to be of general interest. We also discuss our
experience in using ACL2s to teach freshman students how to reason about their
programs.
|
1105.4395
|
Default-all is dangerous!
|
cs.DB
|
We show that the default-all propagation scheme for database annotations is
dangerous. Dangerous here means that it can propagate annotations to the query
output which are semantically irrelevant to the query the user asked. This is
the result of considering all relationally equivalent queries and returning the
union of their where-provenance in an attempt to define a propagation scheme
that is insensitive to query rewriting. We propose an alternative
query-rewrite-insensitive (QRI) where-provenance called minimum propagation. It
is analogous to the minimum witness basis for why-provenance, straightforward
to evaluate, and returns all relevant and only relevant annotations.
|
1105.4408
|
A Simple Proof of the Mutual Incoherence Condition for Orthogonal
Matching Pursuit
|
cs.IT math.IT
|
This paper provides a simple proof of the mutual incoherence condition $\mu <
\frac{1}{2K-1}$ under which a K-sparse signal can be accurately reconstructed
from a small number of linear measurements using the orthogonal matching
pursuit (OMP) algorithm. Our proof, based on mathematical induction, is built
on the observation that the general step of the OMP process is in essence the
same as the initial step, since the residual can be treated as a new
measurement preserving the sparsity level of the input vector.
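For readers unfamiliar with the algorithm the proof concerns, a minimal OMP sketch may help: pick the dictionary column most correlated with the residual, re-fit by least squares on the selected support, repeat. The toy orthonormal dictionary below is an illustrative assumption, not an example from the paper.

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal Matching Pursuit: greedily select the column of A most
    correlated with the current residual, then re-estimate all selected
    coefficients by least squares."""
    residual = y.copy()
    support = []
    for _ in range(K):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x, sorted(support)

# toy example: identity dictionary, 2-sparse signal
A = np.eye(5)
y = 3.0 * A[:, 1] - 2.0 * A[:, 4]
x_hat, supp = omp(A, y, K=2)   # recovers support {1, 4}
```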
|
1105.4452
|
GutenTag: A Multi-Term Caching Optimized Tag Query Processor for
Key-Value Based NoSQL Storage Systems
|
cs.IR cs.DB
|
NoSQL systems are increasingly deployed as back-end infrastructure for
large-scale distributed online platforms like Google, Amazon, or Facebook.
Their applicability results from the fact that most services of online
platforms access the stored data objects via their primary key. However, NoSQL
systems do not efficiently support services referring to more than one data
object, e.g. term-based search for data objects. To address this issue we
propose an architecture based on an inverted index on top of a NoSQL system.
For queries comprising more than one term, distributed indices yield limited
performance in large distributed systems. We propose two extensions to cope
with this challenge. First, we store index entries not only for single terms
but also for a selected set of term combinations, chosen according to their
popularity in a query history. Second, we additionally cache popular keys on
gateway nodes, a common concept in real-world systems, which act as the
interface for services accessing data objects in the back end. Our results show
that we can significantly reduce the bandwidth consumed in processing queries,
with an acceptable, marginal increase in the load of the gateway nodes.
|
1105.4477
|
On the Cohomology of 3D Digital Images
|
cs.CV
|
We propose a method for computing the cohomology ring of three-dimensional
(3D) digital binary-valued pictures. We obtain the cohomology ring of a 3D
digital binary-valued picture $I$ via a simplicial complex K(I) topologically
representing (up to isomorphisms of pictures) the picture I. The usefulness of
a simplicial description of the "digital" cohomology ring of 3D digital
binary-valued pictures is tested by means of a small program visualizing the
different steps of the method. Some examples concerning topological thinning,
the visualization of representative (co)cycles of (co)homology generators, and
the computation of the cup product on the cohomology of simple pictures are
shown.
|
1105.4479
|
Return probability and k-step measures
|
physics.soc-ph cs.SI
|
The notion of return probability -- explored most famously by George
P\'{o}lya on d-dimensional lattices -- has potential as a measure for the
analysis of networks. We present an efficient method for finding return
probability distributions for connected undirected graphs. We argue that return
probability has the same discriminatory power as existing k-step measures -- in
particular, beta centrality (with negative beta), the graph-theoretical power
index (GPI), and subgraph centrality. We compare the running time of our
algorithm to beta centrality and subgraph centrality and find that it is
significantly faster. When return probability is used to measure the same
phenomena as beta centrality, it runs in linear time -- O(n+m), where n and m
are the number of nodes and edges, respectively -- which takes much less time
than either the matrix inversion or the sequence of matrix multiplications
required for calculating the exact or approximate forms of beta centrality,
respectively. We call this form of return probability the P\'{o}lya power index
(PPI). Computing subgraph centrality requires an expensive eigendecomposition
of the adjacency matrix; return probability runs in half the time of the
eigendecomposition on a 2000-node network. These performance improvements are
important because computationally efficient measures are necessary in order to
analyze large networks.
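As a rough illustration of the quantity being computed (not the paper's efficient algorithm), return probabilities for a simple random walk can be read off the diagonals of powers of the walk's transition matrix:

```python
import numpy as np

def return_probabilities(adj, t_max):
    """Return-probability distribution for a simple random walk on a
    connected undirected graph: entry [v, t] is the probability that a
    walk started at v is at v again after t steps. This dense
    matrix-power sketch is illustrative; it is not the paper's method."""
    adj = np.asarray(adj, dtype=float)
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transitions
    n = adj.shape[0]
    out = np.zeros((n, t_max + 1))
    Pt = np.eye(n)
    for t in range(t_max + 1):
        out[:, t] = np.diag(Pt)                # probability of being "home"
        Pt = Pt @ P
    return out

# two nodes joined by one edge: the walk is back at its start at every even step
rp = return_probabilities([[0, 1], [1, 0]], t_max=4)
```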
|
1105.4480
|
A Tool for Integer Homology Computation: Lambda-At Model
|
cs.CV
|
In this paper, we formalize the notion of a lambda-AT-model (where $\lambda$ is
a non-null integer) for a given chain complex, which allows the computation of
homological information over the integers while avoiding the use of the Smith
Normal Form of the boundary matrices. We present an algorithm for computing
such a model, obtaining the Betti numbers, the prime numbers p involved in the
invariant factors of the torsion subgroup of homology, the number of invariant
factors that are a power of p, and a set of representative cycles of generators
of homology mod p, for each p. Moreover, we establish the minimum valid lambda
for such a construction, which cuts down the computational costs related to the
torsion subgroup. The tools described here are useful for determining
topological information of nD structured objects such as simplicial, cubical,
or simploidal complexes and are applicable to extracting such information from
digital pictures.
|
1105.4502
|
Assessing Vaccination Sentiments with Online Social Media: Implications
for Infectious Disease Dynamics and Control
|
cs.SI physics.soc-ph q-bio.PE
|
There is great interest in the dynamics of health behaviors in social
networks and how they affect collective public health outcomes, but measuring
population health behaviors over time and space requires substantial resources.
Here, we use publicly available data from 101,853 users of online social media
collected over a time period of almost six months to measure the
spatio-temporal sentiment towards a new vaccine. We validated our approach by
identifying a strong correlation between sentiments expressed online and CDC-
estimated vaccination rates by region. Analysis of the network of opinionated
users showed that information flows more often between users who share the same
sentiments - and less often between users who do not share the same sentiments
- than expected by chance alone. We also found that most communities are
dominated by either positive or negative sentiments towards the novel vaccine.
Simulations of infectious disease transmission show that if clusters of
negative vaccine sentiments lead to clusters of unprotected individuals, the
likelihood of disease outbreaks is greatly increased. Online social media
provide unprecedented access to data allowing for inexpensive and efficient
tools to identify target areas for intervention efforts and to evaluate their
effectiveness.
|
1105.4514
|
Synthesis of Parallel Binary Machines
|
cs.CR cs.IT math.IT
|
Binary machines are a generalization of Feedback Shift Registers (FSRs) in
which both feedback and feedforward connections are allowed and no chain
connection between the register stages is required. In this paper, we present
an algorithm for synthesis of binary machines with the minimum number of stages
for a given degree of parallelization. Our experimental results show that for
sequences with high linear complexity such as complementary, Legendre, or truly
random, parallel binary machines are an order of magnitude smaller than
parallel FSRs generating the same sequence. The presented approach can
potentially be of advantage for any application which requires sequences with
high spectrum efficiency or high security, such as data transmission, wireless
communications, and cryptography.
|
1105.4540
|
On the Limits of Sequential Testing in High Dimensions
|
cs.IT math.IT math.ST stat.TH
|
This paper presents results pertaining to sequential methods for support
recovery of sparse signals in noise. Specifically, we show that any sequential
measurement procedure fails provided the average number of measurements per
dimension grows slower than log s / D(f0||f1), where s is the level of sparsity
and D(f0||f1) is the Kullback-Leibler divergence between the underlying
distributions. For comparison, we show that any non-sequential procedure fails
provided the number of measurements grows at a rate less than log n /
D(f1||f0), where n is the total dimension of the problem. Lastly, we show that
a simple procedure termed sequential thresholding guarantees exact support
recovery provided the average number of measurements per dimension grows faster
than (log s + log log n) / D(f0||f1), a mere additive factor more than the
lower bound.
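A minimal sketch of a sequential-thresholding-style procedure may make the idea concrete: in each round, take fresh noisy measurements and permanently discard every coordinate that falls below a threshold. The Gaussian noise model, the zero threshold, and all parameter choices below are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def sequential_thresholding(means, num_rounds, seed=0):
    """Each round draws one fresh noisy observation per coordinate and
    permanently discards coordinates observed below zero. Null
    coordinates (mean 0) survive a round with probability 1/2, so most
    are eliminated after ~log n rounds, while strong signal coordinates
    almost always survive."""
    rng = np.random.default_rng(seed)
    n = len(means)
    alive = np.ones(n, dtype=bool)
    for _ in range(num_rounds):
        obs = means + rng.standard_normal(n)
        alive &= obs > 0.0            # discard anything below the threshold
    return np.flatnonzero(alive)

n, s = 256, 4
means = np.zeros(n)
means[:s] = 8.0                        # strong signal on the true support
support_hat = sequential_thresholding(means, num_rounds=10)
```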
|
1105.4549
|
On Stochastic Gradient and Subgradient Methods with Adaptive Steplength
Sequences
|
math.OC cs.SY
|
The performance of standard stochastic approximation implementations can vary
significantly based on the choice of the steplength sequence, and in general,
little guidance is provided about good choices. Motivated by this gap, in the
first part of the paper, we present two adaptive steplength schemes for
strongly convex differentiable stochastic optimization problems, equipped with
convergence theory. The first scheme, referred to as a recursive steplength
stochastic approximation scheme, optimizes the error bounds to derive a rule
that expresses the steplength at a given iteration as a simple function of the
steplength at the previous iteration and certain problem parameters. This rule
is seen to lead to the optimal steplength sequence over a prescribed set of
choices. The second scheme, termed as a cascading steplength stochastic
approximation scheme, maintains the steplength sequence as a piecewise-constant
decreasing function with the reduction in the steplength occurring when a
suitable error threshold is met. In the second part of the paper, we allow for
nondifferentiable objective and we propose a local smoothing technique that
leads to a differentiable approximation of the function. Assuming a uniform
distribution on the local randomness, we establish a Lipschitzian property for
the gradient of the approximation and prove that the obtained Lipschitz bound
grows at a modest rate with problem size. This facilitates the development of
an adaptive steplength stochastic approximation framework, which now requires
sampling in the product space of the original measure and the artificially
introduced distribution. The resulting adaptive steplength schemes are applied
to three stochastic optimization problems. We observe that both schemes perform
well in practice and display markedly less reliance on user-defined parameters.
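To make the first scheme concrete, here is a hedged sketch of a recursive steplength rule applied to a strongly convex toy problem; the specific update gamma_{k+1} = gamma_k (1 - mu * gamma_k) and all constants are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def sgd_recursive_steplength(grad_oracle, x0, mu, gamma0, num_iters, seed=0):
    """Stochastic gradient descent where the steplength at iteration k+1
    is a simple function of the steplength at iteration k and a problem
    parameter (here the strong-convexity constant mu) -- a hedged reading
    of a recursive steplength stochastic approximation scheme."""
    rng = np.random.default_rng(seed)
    x, gamma = np.array(x0, dtype=float), gamma0
    for _ in range(num_iters):
        x = x - gamma * grad_oracle(x, rng)
        gamma = gamma * (1.0 - mu * gamma)   # steplength shrinks recursively
    return x

# strongly convex quadratic f(x) = 0.5 * ||x - x_star||^2 with noisy gradients
x_star = np.array([1.0, -2.0])

def grad(x, rng):
    return (x - x_star) + 0.1 * rng.standard_normal(2)

x_hat = sgd_recursive_steplength(grad, [0.0, 0.0], mu=1.0,
                                 gamma0=0.5, num_iters=2000)
```

With this recursion the steplength decays roughly like 1/k, so the iterate contracts toward the optimum without any hand-tuned schedule.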
|
1105.4555
|
Secure Lossy Source-Channel Wiretapping with Side Information at the
Receiving Terminals
|
cs.IT math.IT
|
The problem of secure lossy source-channel wiretapping with arbitrarily
correlated side informations at both receivers is investigated. This scenario
consists of an encoder (referred to as Alice) that wishes to compress a source
and send it through a noisy channel to a legitimate receiver (referred to as
Bob). In this context, Alice must simultaneously satisfy the desired
requirements on the distortion level at Bob, and the equivocation rate at the
eavesdropper (referred to as Eve). This setting can be seen as a generalization
of the conventional problems of secure source coding with side information at
the decoders, and the wiretap channel. Inner and outer bounds on the
rate-distortion-equivocation region for the case of arbitrary channels and side
informations are derived. In some special cases of interest, it is shown that
separation holds. By means of an appropriate coding, the presence of any
statistical difference among the side informations, the channel noises, and the
distortion at Bob can be fully exploited in terms of secrecy.
|
1105.4582
|
Perception of Personality and Naturalness through Dialogues by Native
Speakers of American English and Arabic
|
cs.CL cs.RO
|
Linguistic markers of personality traits have been studied extensively, but
few cross-cultural studies exist. In this paper, we evaluate how native
speakers of American English and Arabic perceive personality traits and
naturalness of English utterances that vary along the dimensions of verbosity,
hedging, lexical and syntactic alignment, and formality. The utterances are the
turns within dialogue fragments that are presented as text transcripts to the
workers of Amazon's Mechanical Turk. The results of the study suggest that all
four dimensions can be used as linguistic markers of all personality traits by
both language communities. A further comparative analysis shows cross-cultural
differences for some combinations of measures of personality traits and
naturalness, the dimensions of linguistic variability and dialogue acts.
|
1105.4585
|
PAC-Bayesian Analysis of the Exploration-Exploitation Trade-off
|
cs.LG stat.ML
|
We develop a coherent framework for integrative simultaneous analysis of the
exploration-exploitation and model order selection trade-offs. We improve over
our preceding results on the same subject (Seldin et al., 2011) by combining
PAC-Bayesian analysis with a Bernstein-type inequality for martingales. Such a
combination is also of independent interest for studies of multiple
simultaneously evolving martingales.
|
1105.4618
|
Bounding the Fat Shattering Dimension of a Composition Function Class
Built Using a Continuous Logic Connective
|
cs.LG
|
We begin this report by describing the Probably Approximately Correct (PAC)
model for learning a concept class, consisting of subsets of a domain, and a
function class, consisting of functions from the domain to the unit interval.
Two combinatorial parameters, the Vapnik-Chervonenkis (VC) dimension and its
generalization, the Fat Shattering dimension of scale e, are explained and a
few examples of their calculations are given with proofs. We then explain
Sauer's Lemma, which involves the VC dimension and is used to prove the
equivalence of a concept class being distribution-free PAC learnable and its
having finite VC dimension.
As the main new result of our research, we explore the construction of a new
function class, obtained by forming compositions with a continuous logic
connective, a uniformly continuous function from the unit hypercube to the unit
interval, from a collection of function classes. Vidyasagar had proved that
such a composition function class has finite Fat Shattering dimension of all
scales if the classes in the original collection do; however, no estimates of
the dimension were known. Using results by Mendelson-Vershynin and Talagrand,
we bound the Fat Shattering dimension of scale e of this new function class in
terms of the Fat Shattering dimensions of the collection's classes.
We conclude this report by providing a few open questions and future research
topics involving the PAC learning model.
|
1105.4665
|
Improved Linear Programming Decoding using Frustrated Cycles
|
cs.IT math.IT
|
We consider transmission over a binary-input additive white Gaussian noise
channel using low-density parity-check codes. One of the most popular
techniques for decoding low-density parity-check codes is the linear
programming decoder. In general, the linear programming decoder is suboptimal.
That is, its word error rate is higher than that of the optimal, maximum a
posteriori decoder.
In this paper we present a systematic approach to enhancing the linear
programming decoder. More precisely, in the cases where the linear program
outputs a fractional solution, we give a simple algorithm to identify the
frustrated cycles which cause the output of the linear program to be
fractional. Then, by adaptively adding these cycles to the basic linear
program, we show improved word error rate performance.
|
1105.4683
|
On the BCJR Algorithm for Asynchronous Physical-layer Network Coding
|
cs.IT math.IT
|
In practical asynchronous bi-directional relaying, symbols transmitted by two
source nodes cannot arrive at the relay with perfect symbol alignment and the
symbol-asynchronous multiple-access channel (MAC) should be seriously
considered. Recently, Lu et al. proposed a Tanner-graph representation of
symbol-asynchronous MAC with rectangular-pulse shaping and further developed
the message-passing algorithm for optimal decoding of the asynchronous
physical-layer network coding. In this paper, we present a general channel
model for the asynchronous multiple-access channel with arbitrary
pulse-shaping. Then, the Bahl, Cocke, Jelinek, and Raviv (BCJR) algorithm is
developed for optimal decoding over the asynchronous MAC. This formulation
can readily be employed to develop various low-complexity algorithms, such as
the Log-MAP and Max-Log-MAP algorithms, which are favorable in practice.
|
1105.4701
|
Online Learning, Stability, and Stochastic Gradient Descent
|
cs.LG
|
In batch learning, stability together with existence and uniqueness of the
solution corresponds to well-posedness of Empirical Risk Minimization (ERM)
methods; recently, it was proved that CV_loo stability is necessary and
sufficient for generalization and consistency of ERM. In this note, we
introduce CV_on stability, which plays a similar role in online learning. We
show that stochastic gradient descent (SGD) with the usual hypotheses is CV_on
stable, and we then discuss the implications of CV_on stability for the
convergence of SGD.
|
1105.4702
|
Exploiting Conceptual Knowledge for Querying Information Systems
|
cs.IR cs.DB
|
Whereas today's information systems are well-equipped for efficient query
handling, their strict mathematical foundations hamper their use for everyday
tasks. In daily life, people expect information to be offered in a personalized
and focused way. But currently, personalization in digital systems still only
takes explicit knowledge into account and does not yet process conceptual
information often naturally implied by users. We discuss how to bridge the gap
between users and today's systems, building on results from cognitive
psychology.
|
1105.4705
|
A Tutorial in Connectome Analysis: Topological and Spatial Features of
Brain Networks
|
q-bio.NC cs.SI physics.soc-ph
|
High-throughput methods for yielding the set of connections in a neural
system, the connectome, are now being developed. This tutorial describes ways
to analyze the topological and spatial organization of the connectome at the
macroscopic level of connectivity between brain regions as well as the
microscopic level of connectivity between neurons. We will describe topological
features at three different levels: the local scale of individual nodes, the
regional scale of sets of nodes, and the global scale of the complete set of
nodes in a network. Such features can be used to characterize components of a
network and to compare different networks, e.g. the connectome of patients and
control subjects for clinical studies. At the global scale, different types of
networks can be distinguished and we will describe Erd\"os-R\'enyi random,
scale-free, small-world, modular, and hierarchical archetypes of networks.
Finally, the connectome also has a spatial organization and we describe methods
for analyzing wiring lengths of neural systems. As an introduction for new
researchers in the field of connectome analysis, we discuss the benefits and
limitations of each analysis approach.
|
1105.4712
|
Image Splicing Detection Using Inherent Lens Radial Distortion
|
cs.CV
|
Image splicing is a common form of image forgery. Such alterations may leave
no visual clues of tampering. In recent work, the consistency of camera
characteristics across an image has been used to establish the authenticity
and integrity of digital images. Such constant camera characteristics are
inherent to camera manufacturing processes and are unique. The majority of
digital cameras are equipped with spherical lenses, which introduce radial
distortion in images. When an image is spliced, this aberration is often
disturbed and fails to be consistent across the image. This paper describes
the detection of splicing operations on images by estimating radial distortion
from different portions of the image using line-based calibration. For the
first time, the detection of image splicing through the verification of the
consistency of lens radial distortion is explored. The conducted experiments
demonstrate the efficacy of our proposed approach for the detection of image
splicing on both synthetic and real images.
|
1105.4737
|
Sufficient Stochastic Maximum Principle for Discounted Control Problem
|
math.OC cs.SY math.PR
|
In this article, the sufficient Pontryagin's maximum principle for infinite
horizon discounted stochastic control problem is established. The sufficiency
is ensured by an additional assumption of concavity of the Hamiltonian
function. Throughout the paper, it is assumed that the control domain U is a
convex set and the control may enter the diffusion term of the state equation.
The general results are applied to the controlled stochastic logistic equation
of population dynamics.
|
1105.4764
|
Theoretical and Numerical Analysis of the Rapid Pointwise Stabilization of
Coupled String-Beam Systems
|
math.OC cs.SY math.AP
|
We consider a pointwise stabilization problem for coupled wave and plate
equations. We prove, under rather general assumptions, that such systems can
be stabilized so as to have arbitrarily high decay rates and are exactly
controllable. We propose a numerical approximation of the model and we study
numerically the construction of the feedback law leading to exponential decay
with arbitrarily large rate.
|
1105.4868
|
Search for Hidden Knowledge in Collective Intelligence dealing
Indeterminacy Ontology of Folksonomy with Linguistic Pragmatics and Quantum
Logic
|
cs.IR
|
Information retrieval is not only the most frequent application executed on
the Web but also the basis of different types of applications. Considering the
collective intelligence of groups of individuals as a framework for evaluating
and incorporating new experiences and information, we often cannot retrieve
such knowledge because it is tacit. Tacit knowledge underlies many competitive
capabilities and is hard to articulate in a discrete ontology structure. It is
unstructured or unorganized, and therefore remains hidden. Developing generic
solutions that can find this hidden knowledge is extremely complex, and it
poses a great challenge for the developers of semantic technologies. This work
aims to explore ways to make explicit and available the tacit knowledge hidden
in the collective intelligence of a collaborative environment within
organizations. The environment is defined by folksonomies supported by a
faceted semantic search. A vector space model which incorporates an analogy
with the mathematical apparatus of quantum theory is adopted for the
representation and manipulation of the meaning of a folksonomy. Vector space
retrieval has proven efficient when no behavioral data are available, because
it relies on ranking algorithms involving a small number of element types and
few operations. A solution for finding what the user has in mind when posing a
query could be based on the "joint meaning", understood as a joint construal of
the creator of the contents and the reader of the contents. The joint meaning
is proposed to deal with vagueness, indeterminacy, incompleteness, and
inconsistencies in the ontology of a folksonomy over collective intelligence.
A proof-of-concept prototype was built for a collaborative environment as an
evolution of current social networks (like Facebook, LinkedIn, ...) using
information visualization in an RIA application with Semantic Web techniques
and technologies.
|
1105.4880
|
Pareto Characterization of the Multicell MIMO Performance Region With
Simple Receivers
|
cs.IT math.IT
|
We study the performance region of a general multicell downlink scenario with
multiantenna transmitters, hardware impairments, and low-complexity receivers
that treat interference as noise. The Pareto boundary of this region describes
all efficient resource allocations, but is generally hard to compute. We
propose a novel explicit characterization that gives Pareto optimal transmit
strategies using a set of positive parameters---fewer than in prior work. We
also propose an implicit characterization that requires even fewer parameters
and guarantees to find the Pareto boundary for every choice of parameters, but
at the expense of solving quasi-convex optimization problems. The merits of the
two characterizations are illustrated for interference channels and ideal
network multiple-input multiple-output (MIMO).
|
1105.4910
|
Robust Coding for Lossy Computing with Observation Costs
|
cs.IT math.IT
|
An encoder wishes to minimize the bit rate necessary to guarantee that a
decoder is able to calculate a symbol-wise function of a sequence available
only at the encoder and a sequence that can be measured only at the decoder.
This classical problem, first studied by Yamamoto, is addressed here by
including two new aspects: (i) The decoder obtains noisy measurements of its
sequence, where the quality of such measurements can be controlled via a
cost-constrained "action" sequence, which is taken at the decoder or at the
encoder; (ii) Measurement at the decoder may fail in a way that is
unpredictable to the encoder, thus requiring robust encoding. The considered
scenario generalizes known settings such as the Heegard-Berger-Kaspi and the
"source coding with a vending machine" problems. The rate-distortion-cost
function is derived in relevant special cases, along with general upper and
lower bounds. Numerical examples are also worked out to obtain further insight
into the optimal system design.
|
1105.4965
|
Evolution of scaling emergence in large-scale spatial epidemic spreading
|
physics.soc-ph cs.SI physics.data-an q-bio.PE
|
Background: Zipf's law and Heaps' law are two representatives of the scaling
concepts, which play a significant role in the study of complexity science. The
coexistence of Zipf's law and Heaps' law motivates different understandings of
the dependence between these two scalings, which has hardly been clarified.
Methodology/Principal Findings: In this article, we observe an evolution
process of the scalings: Zipf's law and Heaps' law are naturally shaped to
coexist at the initial time, while a crossover comes with the emergence of
their inconsistency at larger times, before reaching a stable state in which
Heaps' law still holds as strict Zipf's law disappears. Such
findings are illustrated with a scenario of large-scale spatial epidemic
spreading, and the empirical results of pandemic disease support a universal
analysis of the relation between the two laws regardless of the biological
details of the disease. Employing the United States (U.S.) domestic air
transportation and demographic data to construct a metapopulation model for
simulating the pandemic spread at the U.S. country level, we uncover that the
broad heterogeneity of the infrastructure plays a key role in the evolution of
scaling emergence.
Conclusions/Significance: The analysis of large-scale spatial epidemic
spreading helps understand the temporal evolution of scalings, indicating that
the coexistence of Zipf's law and Heaps' law depends on the collective
dynamics of epidemic processes, and that the heterogeneity of epidemic spread
underlines the significance of performing targeted containment strategies
early in a pandemic.
|
1105.4971
|
Distributed Evolutionary Computation using REST
|
cs.NE
|
This paper analyses distributed evolutionary computation based on the
Representational State Transfer (REST) protocol, which overlays a farming
model on evolutionary computation. An approach to the evolutionary distributed
optimisation of multilayer perceptrons (MLPs) using REST and the Perl language
is presented. In these experiments, a master-slave evolutionary algorithm (EA)
has been implemented, where slave processes evaluate the costly fitness
function (training an MLP to solve a classification problem). The results
obtained show that the parallel version of the developed programs obtains
similar or better results using much less time than the sequential version,
achieving a good speedup.
|
1105.4978
|
SOAP vs REST: Comparing a master-slave GA implementation
|
cs.NE
|
In this paper, a high-level comparison of both SOAP (Simple Object Access
Protocol) and REST (Representational State Transfer) is made. These are the two
main approaches for interfacing to the web with web services. The two
approaches differ and each presents advantages and disadvantages for
interfacing to web services: SOAP is conceptually more difficult (it has a
steeper learning curve) and more "heavy-weight" than REST, although REST lacks
standards support for security. In order to test their efficiency (in time), two
experiments have been performed using both technologies: a client-server model
implementation and a master-slave based genetic algorithm (GA). The results
obtained show clear differences in time between SOAP and REST implementations.
Although both techniques are suitable for developing parallel systems, SOAP is
heavier than REST, mainly due to the verbosity of SOAP communications (XML
increases the time taken to parse the messages).
|
1105.4989
|
Incremental Refinement using a Gaussian Test Channel
|
cs.IT math.IT
|
The additive rate-distortion function (ARDF) was developed in order to
universally bound the rate loss in the Wyner-Ziv problem, and has since then
been instrumental in e.g., bounding the rate loss in successive refinements,
universal quantization, and other multi-terminal source coding settings. The
ARDF is defined as the minimum mutual information over an additive test channel
followed by estimation. In the limit of high resolution, the ARDF coincides
with the true RDF for many sources and fidelity criteria. In the other
extreme, i.e., the limit of low resolution, the behavior of the ARDF has not
previously been rigorously addressed. In this work, we consider the special
case of quadratic distortion where the noise in the test channel is
Gaussian distributed. We first establish a link to the I-MMSE relation of Guo
et al. and use this to show that for any source the slope of the ARDF near zero
rate, converges to the slope of the Gaussian RDF near zero rate. We then
consider the multiplicative rate loss of the ARDF, and show that for bursty
sources it may be unbounded, contrary to the additive rate loss, which is upper
bounded by 1/2 bit for all sources. We finally show that unconditional
incremental refinement, i.e., where each refinement is encoded independently of
the other refinements, is ARDF optimal in the limit of low resolution,
independently of the source distribution. Our results also reveal under which
conditions linear estimation is ARDF optimal in the low rate regime.
|
1105.4991
|
Exchanging Secrets without Using Cryptography
|
cs.CR cs.IT math.IT
|
We consider the problem where a group of n nodes, connected to the same
broadcast channel (e.g., a wireless network), want to generate a common secret
bitstream, in the presence of an adversary Eve, who tries to obtain information
on the bitstream. We assume that the nodes initially share a (small) piece of
information, but do not have access to any out-of-band channel. We ask the
question: can this problem be solved without relying on Eve's computational
limitations, i.e., without using any form of public-key cryptography?
We propose a secret-agreement protocol, where the n nodes of the group keep
exchanging bits until they have all agreed on a bit sequence that Eve cannot
reconstruct with very high probability. In this task, the nodes are assisted by
a small number of interferers, whose role is to create channel noise in a way
that bounds the amount of information Eve can overhear. Our protocol has
polynomial-time complexity and requires no changes to the physical or MAC layer
of network devices.
First, we formally show that, under standard theoretical assumptions, our
protocol is information-theoretically secure, achieves optimal
secret-generation rate for n = 2 nodes, and scales well to an arbitrary number
of nodes. Second, we adapt our protocol to a small wireless 14-square-meter
testbed; we experimentally show that, if Eve uses a standard wireless physical
layer and is not too close to any of the nodes, 8 nodes can achieve a
secret-generation rate of 38 Kbps. To the best of our knowledge, ours is the
first experimental demonstration of information-theoretic secret exchange on a
wireless network at a rate beyond a few tens of bits per second.
|
1105.4995
|
Robust approachability and regret minimization in games with partial
monitoring
|
math.ST cs.LG stat.TH
|
Approachability has become a standard tool in analyzing learning algorithms in
the adversarial online learning setup. We develop a variant of approachability
for games where there is ambiguity in the obtained reward that belongs to a
set, rather than being a single vector. Using this variant we tackle the
problem of approachability in games with partial monitoring and develop simple
and efficient algorithms (i.e., with constant per-step complexity) for this
setup. We finally consider external regret and internal regret in repeated
games with partial monitoring and derive regret-minimizing strategies based on
approachability theory.
|
1105.4999
|
MIMO Broadcasting for Simultaneous Wireless Information and Power
Transfer
|
cs.IT math.IT
|
Wireless power transfer (WPT) is a promising new solution to provide
convenient and perpetual energy supplies to wireless networks. In practice, WPT
is implementable by various technologies such as inductive coupling, magnetic
resonance coupling, and electromagnetic (EM) radiation, for
short-/mid-/long-range applications, respectively. In this paper, we consider
the EM or radio signal enabled WPT in particular. Since radio signals can carry
energy as well as information at the same time, a unified study on simultaneous
wireless information and power transfer (SWIPT) is pursued. Specifically, this
paper studies a multiple-input multiple-output (MIMO) wireless broadcast system
consisting of three nodes, where one receiver harvests energy and another
receiver decodes information separately from the signals sent by a common
transmitter, and where the transmitter and both receivers may be equipped with
multiple antennas. Two scenarios are examined, in which the information
receiver and energy receiver are separated and see different MIMO channels from
the transmitter, or co-located and see the identical MIMO channel from the
transmitter. For the case of separated receivers, we derive the optimal
transmission strategy to achieve different tradeoffs for maximal information
rate versus energy transfer, which are characterized by the boundary of a
so-called rate-energy (R-E) region. For the case of co-located receivers, we
show an outer bound for the achievable R-E region due to the potential
limitation that practical energy harvesting receivers are not yet able to
decode information directly. Under this constraint, we investigate two
practical designs for the co-located receiver case, namely time switching and
power splitting, and characterize their achievable R-E regions in comparison to
the outer bound.
|
1105.5032
|
The Complexity of Manipulative Attacks in Nearly Single-Peaked
Electorates
|
cs.GT cs.CC cs.MA
|
Many electoral bribery, control, and manipulation problems (which we will
refer to in general as "manipulative actions" problems) are NP-hard in the
general case. It has recently been noted that many of these problems fall into
polynomial time if the electorate is single-peaked (i.e., is polarized along
some axis/issue). However, real-world electorates are not truly single-peaked.
There are usually some mavericks, and so real-world electorates tend to merely
be nearly single-peaked. This paper studies the complexity of
manipulative-action algorithms for elections over nearly single-peaked
electorates, for various notions of nearness and various election systems. We
provide instances where even one maverick jumps the manipulative-action
complexity up to NP-hardness, but we also provide many instances where a
reasonable number of mavericks can be tolerated without increasing the
manipulative-action complexity.
|
1105.5053
|
Eigenvector localization as a tool to study small communities in online
social networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We present and discuss a mathematical procedure for identification of small
"communities" or segments within large bipartite networks. The procedure is
based on spectral analysis of the matrix encoding network structure. The
principal tool here is localization of eigenvectors of the matrix, by means of
which the relevant network segments become visible. We exemplify our approach
by analyzing data related to product reviewing on Amazon.com. We find
several segments, a kind of hybrid community of densely interlinked reviewers
and products, which we are able to meaningfully interpret in terms of the type
and thematic categorization of reviewed items. The method provides a
complementary approach to other ways of community detection, typically aiming
at identification of large network modules.
|
1105.5072
|
Sub-optimality of Treating Interference as Noise in the Cellular Uplink
|
cs.IT math.IT
|
Despite the simplicity of the scheme of treating interference as noise (TIN),
it was shown to be sum-capacity optimal in the Gaussian 2-user interference
channel in \cite{ShangKramerChen,MotahariKhandani,AnnapureddyVeeravalli}. In
this paper, an interference network consisting of a point-to-point channel
interfering with a multiple access channel (MAC) is considered, with focus on
the weak interference scenario. Naive TIN in this network is performed by using
Gaussian codes at the transmitters, joint decoding at the MAC receiver while
treating interference as noise, and single user decoding at the point-to-point
receiver while treating both interferers as noise. It is shown that this naive
TIN scheme is never optimal in this scenario. In fact, a scheme that combines
both time division multiple access and TIN outperforms the naive TIN scheme. An
upper bound on the sum-capacity of the given network is also derived.
|
1105.5129
|
A Quantitative Version of the Gibbard-Satterthwaite Theorem for Three
Alternatives
|
math.CO cs.AI math.PR
|
The Gibbard-Satterthwaite theorem states that every non-dictatorial election
rule among at least three alternatives can be strategically manipulated. We
prove a quantitative version of the Gibbard-Satterthwaite theorem: a random
manipulation by a single random voter will succeed with a non-negligible
probability for any election rule among three alternatives that is far from
being a dictatorship and from having only two alternatives in its range.
|
1105.5170
|
Validation of Dunbar's number in Twitter conversations
|
physics.soc-ph cond-mat.other cs.HC cs.SI
|
Modern society's increasing dependency on online tools for both work and
recreation opens up unique opportunities for the study of social interactions.
A large survey of online exchanges or conversations on Twitter, collected
over six months and involving 1.7 million individuals, is presented here. We
test the theoretical cognitive limit on the number of stable social
relationships known as Dunbar's number. We find that users can maintain a
maximum of 100-200 stable relationships, in support of Dunbar's prediction. The "economy of
attention" is limited in the online world by cognitive and biological
constraints as predicted by Dunbar's theory. Inspired by this empirical
evidence we propose a simple dynamical mechanism, based on finite priority
queuing and time resources, that reproduces the observed social behavior.
|
1105.5174
|
Symmetry Reduction of Optimal Control Systems and Principal Connections
|
math.OC cs.SY math.SG
|
This paper explores the role of symmetries and reduction in nonlinear control
and optimal control systems. The focus of the paper is to give a geometric
framework of symmetry reduction of optimal control systems as well as to show
how to obtain explicit expressions of the reduced system by exploiting the
geometry. In particular, we show how to obtain a principal connection to be
used in the reduction for various choices of symmetry groups, as opposed to
assuming such a principal connection is given or choosing a particular symmetry
group to simplify the setting. Our result synthesizes some previous works on
symmetry reduction of nonlinear control and optimal control systems. Affine and
kinematic optimal control systems are of particular interest: We explicitly
work out the details for such systems and also show a few examples of symmetry
reduction of kinematic optimal control problems.
|