| id | title | categories | abstract |
|---|---|---|---|
1103.5621
|
Application of Threshold Techniques for Readability Improvement of Jawi
Historical Manuscript Images
|
cs.CV
|
Historical documents such as old books and manuscripts have high aesthetic
value and are highly appreciated. Unfortunately, some documents cannot be read
due to quality problems such as faded paper, spreading ink, uneven colour tone,
torn paper, and other disruptive elements such as small spots. This study aims
to produce a copy of each manuscript with clear wording, so that it can easily
be read and also be displayed to visitors. Sixteen samples of Jawi historical
manuscripts with different quality problems were obtained from The Royal Museum
of Pahang, Malaysia. We applied three binarization techniques: Otsu's method,
representing the global threshold technique, and the Sauvola and Niblack
methods, which are categorized as local threshold techniques. We compared the
binarized images with the original manuscripts through visual inspection by the
museum's curator; unclear features were marked and analyzed. Most of the
examined images show that, with optimal parameters and an effective
pre-processing technique, the local thresholding methods work better than the
global one. Niblack's and Sauvola's techniques appear to be suitable approaches
for these types of images: most images binarized with these two methods show
improved readability and character recognition. Although the differences in the
resulting images were hard to distinguish by eye, after comparing the time cost
and the overall rate of recognized symbols, Niblack's method performs better
than Sauvola's. The post-processing step could be improved by adding edge
detection techniques, and further enhanced by an innovative image refinement
technique and the formulation of a proper method for this class of images.
|
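The Niblack and Sauvola local thresholds named in the abstract have standard closed forms: Niblack uses T = m + k·s and Sauvola uses T = m·(1 + k·(s/R − 1)), where m and s are the mean and standard deviation over a sliding window. A minimal sketch, assuming a grayscale NumPy image; the window size, k, and R values below are illustrative defaults, not the parameters tuned in the study:

```python
import numpy as np

def local_threshold(image, window=15, k=-0.2, method="niblack", R=128.0):
    """Binarize a grayscale image with a sliding-window threshold.

    Niblack:  T = m + k * s          (k typically around -0.2)
    Sauvola:  T = m * (1 + k * (s / R - 1))   (k typically 0.2-0.5)
    where m, s are the local mean and standard deviation.
    Pixels above T become white (255), the rest black (0).
    """
    h, w = image.shape
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            m, s = patch.mean(), patch.std()
            if method == "niblack":
                T = m + k * s
            else:  # sauvola
                T = m * (1 + k * (s / R - 1))
            out[i, j] = 255 if image[i, j] > T else 0
    return out
```

The loop form keeps the formulas visible; a practical implementation would compute the local mean and variance with integral images or a library such as scikit-image.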
1103.5625
|
Information Theory and Population Genetics
|
q-bio.PE cs.IT math.IT
|
The key findings of classical population genetics are derived within an
information-theoretic framework, using the entropies of the allele frequency
distribution as a basis. The common results for drift, mutation, selection,
and gene flow are rewritten in terms of information-theoretic measures and
used to draw the classic conclusions for balance conditions and the common
features of one-locus dynamics. Linkage disequilibrium is also discussed,
including the relationship between mutual information and r^2, and a simple
model of hitchhiking.
|
1103.5633
|
A micromechanics-enhanced finite element formulation for modelling
heterogeneous materials
|
cond-mat.mtrl-sci cs.CE
|
In the analysis of composite materials with heterogeneous microstructures,
full resolution of the heterogeneities using classical numerical approaches can
be computationally prohibitive. This paper presents a micromechanics-enhanced
finite element formulation that accurately captures the mechanical behaviour of
heterogeneous materials in a computationally efficient manner. The strategy
exploits analytical solutions derived by Eshelby for ellipsoidal inclusions in
order to determine the mechanical perturbation fields as a result of the
underlying heterogeneities. Approximation functions for these perturbation
fields are then incorporated into a finite element formulation to augment those
of the macroscopic fields. A significant feature of this approach is that the
finite element mesh does not explicitly resolve the heterogeneities and that no
additional degrees of freedom are introduced. In this paper, hybrid-Trefftz
stress finite elements are utilised and performance of the proposed formulation
is demonstrated with numerical examples. The method is restricted here to
elastic particulate composites with ellipsoidal inclusions but it has been
designed to be extensible to a wider class of materials comprising arbitrary
shaped inclusions.
|
1103.5639
|
Partially Linear Estimation with Application to Sparse Signal Recovery
From Measurement Pairs
|
cs.IT math.IT
|
We address the problem of estimating a random vector X from two sets of
measurements Y and Z, such that the estimator is linear in Y. We show that the
partially linear minimum mean squared error (PLMMSE) estimator does not require
knowing the joint distribution of X and Y in full, but rather only its
second-order moments. This renders it of potential interest in various
applications. We further show that the PLMMSE method is minimax-optimal among
all estimators that solely depend on the second-order statistics of X and Y. We
demonstrate our approach in the context of recovering a signal, which is sparse
in a unitary dictionary, from noisy observations of it and of a filtered
version of it. We show that in this setting PLMMSE estimation has a clear
computational advantage, while its performance is comparable to
state-of-the-art algorithms. We apply our approach both in static and dynamic
estimation applications. In the former category, we treat the problem of image
enhancement from blurred/noisy image pairs, where we show that PLMMSE
estimation performs only slightly worse than state-of-the art algorithms, while
running an order of magnitude faster. In the dynamic setting, we provide a
recursive implementation of the estimator and demonstrate its utility in the
context of tracking maneuvering targets from position and acceleration
measurements.
|
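The property the abstract highlights, that an estimator linear in Y needs only second-order moments of (X, Y), is already visible in the classical linear MMSE formula. The sketch below is that generic LMMSE estimator, shown only as background; it is not the paper's PLMMSE, which additionally allows arbitrary dependence on the second measurement set Z:

```python
import numpy as np

def lmmse(y, mu_x, mu_y, C_xy, C_yy):
    """Linear MMSE estimate of X given an observation y of Y:

        x_hat = mu_x + C_xy C_yy^{-1} (y - mu_y)

    Only the means and the (cross-)covariances are needed; the full joint
    distribution of X and Y never enters.
    """
    return mu_x + C_xy @ np.linalg.solve(C_yy, y - mu_y)
```

For scalar X with unit variance and Y = X + unit-variance noise, C_xy = 1 and C_yy = 2, so the estimator simply halves the observation.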
1103.5676
|
Codeco: A Grammar Notation for Controlled Natural Language in Predictive
Editors
|
cs.CL
|
Existing grammar frameworks do not work out particularly well for controlled
natural languages (CNL), especially if they are to be used in predictive
editors. I introduce in this paper a new grammar notation, called Codeco, which
is designed specifically for CNLs and predictive editors. Two different parsers
have been implemented and a large subset of Attempto Controlled English (ACE)
has been represented in Codeco. The results show that Codeco is practical,
adequate and efficient.
|
1103.5678
|
Converging an Overlay Network to a Gradient Topology
|
cs.SY math.OC
|
In this paper, we investigate the topology convergence problem for the
gossip-based Gradient overlay network. In an overlay network where each node
has a local utility value, a Gradient overlay network is characterized by the
properties that each node has a set of neighbors with the same utility value (a
similar view) and a set of neighbors containing higher utility values (gradient
neighbor set), such that paths of increasing utilities emerge in the network
topology. The Gradient overlay network is built using gossiping and a
preference function that samples from nodes using a uniform random peer
sampling service. We analyze it using tools from matrix analysis, proving
necessary and sufficient conditions for convergence to a complete gradient
structure, estimating the convergence time, and providing bounds on the
worst-case convergence time. Finally, we show in simulations the
potential of the Gradient overlay, by building a more efficient live-streaming
peer-to-peer (P2P) system than one built using uniform random peer sampling.
|
1103.5703
|
Exponential wealth distribution in a random market. A rigorous
explanation
|
q-fin.GN cs.MA nlin.AO
|
In simulations of some economic gas-like models, the asymptotic regime shows
an exponential wealth distribution, independently of the initial wealth
distribution given to the system. The appearance of this statistical
equilibrium for this type of gas-like model is explained in a rigorous
analytical way.
|
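A typical gas-like market of the kind the abstract refers to can be simulated in a few lines. The exchange rule below (two agents pool their wealth and split it at a uniformly random fraction) is an assumption for illustration; the abstract does not fix a specific rule:

```python
import random

def wealth_exchange(n_agents=200, n_steps=50000, seed=1):
    """Random pairwise wealth exchange, conserving total wealth.

    At each step two distinct agents i, j pool w[i] + w[j] and split it at a
    uniformly random fraction. Starting from a uniform distribution, the
    asymptotic wealth distribution is exponential. NOTE: this exchange rule
    is one common choice, assumed here for illustration.
    """
    rng = random.Random(seed)
    w = [1.0] * n_agents                 # uniform initial wealth, mean 1
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        total = w[i] + w[j]
        eps = rng.random()
        w[i], w[j] = eps * total, (1 - eps) * total
    return w
```

For an exponential distribution, about 63% of agents end up below the mean, regardless of the uniform initial condition.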
1103.5708
|
Planning to Be Surprised: Optimal Bayesian Exploration in Dynamic
Environments
|
cs.AI stat.ML
|
To maximize its success, an AGI typically needs to explore its initially
unknown world. Is there an optimal way of doing so? Here we derive an
affirmative answer for a broad class of environments.
|
1103.5738
|
Interference Alignment: A one-sided approach
|
cs.IT math.IT
|
Interference Alignment (IA) is the process of designing signals in such a way
that they cast overlapping shadows at their unintended receivers, while
remaining distinguishable at the intended ones. Our goal in this paper is to
come up with an algorithm for IA that runs at the transmitters only (and is
transparent to the receivers), that does not require channel reciprocity, and
that alleviates the need to alternate between the forward and reverse networks,
as in Distributed IA (Gomadam, Cadambe, and Jafar '08), an alternation that
induces significant overhead in environments where the channel changes
frequently. Most importantly, our effort is focused on ensuring that this
one-sided approach does not degrade the performance of the system w.r.t.
Distributed IA (since it cannot improve it). As a first step, we model the
interference in each receiver's desired signal as a function of the
transmitters' beamforming vectors. We then propose a simple steepest descent
(SD) algorithm and use it to minimize the interference in each receiver's
desired signal space. We mathematically establish equivalences between our
approach and the Distributed IA algorithm (Gomadam, Cadambe, and Jafar '08) and
show that our algorithm also converges to an alignment solution (when the
solution is feasible).
|
1103.5740
|
Generating and Searching Families of FFT Algorithms
|
cs.IT cs.LO cs.SC math.IT
|
A fundamental question of longstanding theoretical interest is to prove the
lowest exact count of real additions and multiplications required to compute a
power-of-two discrete Fourier transform (DFT). For 35 years the split-radix
algorithm held the record by requiring just 4n log n - 6n + 8 arithmetic
operations on real numbers for a size-n DFT, and was widely believed to be the
best possible. Recent work by Van Buskirk et al. demonstrated improvements to
the split-radix operation count by using multiplier coefficients or "twiddle
factors" that are not n-th roots of unity for a size-n DFT. This paper presents
a Boolean Satisfiability-based proof of the lowest operation count for certain
classes of DFT algorithms. First, we present a novel way to choose new yet
valid twiddle factors for the nodes in flowgraphs generated by common
power-of-two fast Fourier transform (FFT) algorithms. With this new technique,
we can generate a large family of FFTs realizable by a fixed flowgraph. This
solution space of FFTs is cast as a Boolean Satisfiability problem, and a
modern Satisfiability Modulo Theory solver is applied to search for FFTs
requiring the fewest arithmetic operations. Surprisingly, we find that there
are FFTs requiring fewer operations than the split-radix even when all twiddle
factors are n-th roots of unity.
|
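The operation count at the heart of the abstract, 4n log₂ n − 6n + 8 real additions and multiplications for a size-n power-of-two DFT, is easy to tabulate. A small helper, for comparison purposes only:

```python
import math

def split_radix_ops(n):
    """Real arithmetic operations of the classic split-radix FFT for a
    size-n power-of-two DFT: 4*n*log2(n) - 6*n + 8, the long-standing
    record mentioned in the abstract (valid for n >= 2)."""
    assert n >= 2 and n & (n - 1) == 0, "n must be a power of two, n >= 2"
    return 4 * n * int(math.log2(n)) - 6 * n + 8
```

For example, the formula gives 16 operations for n = 4 and 56 for n = 8; Van Buskirk-style twiddle-factor choices improve on these counts only for larger n.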
1103.5776
|
A Parametric Level Set Approach to Simultaneous Object Identification
and Background Reconstruction for Dual Energy Computed Tomography
|
cs.CV physics.med-ph
|
Dual energy computerized tomography has gained great interest because of its
ability to characterize the chemical composition of a material rather than
simply providing relative attenuation images as in conventional tomography. The
purpose of this paper is to introduce a novel polychromatic dual energy
processing algorithm with an emphasis on detection and characterization of
piecewise constant objects embedded in an unknown, cluttered background.
Physical properties of the objects, specifically the Compton scattering and
photoelectric absorption coefficients, are assumed to be known with some level
of uncertainty. Our approach is based on a level-set representation of the
characteristic function of the object and encompasses a number of
regularization techniques for addressing both the prior information we have
concerning the physical properties of the object as well as fundamental,
physics-based limitations associated with our ability to jointly recover the
Compton scattering and photoelectric absorption properties of the scene. In the
absence of an object with appropriate physical properties, our approach returns
a null characteristic function and thus can be viewed as simultaneously solving
the detection and characterization problems. Unlike the vast majority of
methods, which define the level set function non-parametrically (i.e., as a
dense set of pixel values), we define our level set parametrically via radial
basis functions (RBFs) and employ a Gauss-Newton type algorithm for cost
minimization. Numerical results show that the algorithm successfully detects
objects of interest, finds their shape and location, and gives an adequate
reconstruction of the background.
|
1103.5789
|
On the Capacity of the K-User Cyclic Gaussian Interference Channel
|
cs.IT math.IT
|
This paper studies the capacity region of a $K$-user cyclic Gaussian
interference channel, where the $k$th user interferes with only the $(k-1)$th
user (mod $K$) in the network. Inspired by the work of Etkin, Tse and Wang,
which derived a capacity region outer bound for the two-user Gaussian
interference channel and proved that a simple Han-Kobayashi power splitting
scheme can achieve to within one bit of the capacity region for all values of
channel parameters, this paper shows that a similar strategy also achieves the
capacity region for the $K$-user cyclic interference channel to within a
constant gap in the weak interference regime. Specifically, a compact
representation of the Han-Kobayashi achievable rate region using
Fourier-Motzkin elimination is first derived; a capacity region outer bound is
then established. It is shown that the Etkin-Tse-Wang power splitting strategy
gives a constant gap of at most two bits (or one bit per dimension) in the weak
interference regime. Finally, the capacity result of the $K$-user cyclic
Gaussian interference channel in the strong interference regime is also given.
|
1103.5795
|
Heuristic Algorithm for Interpretation of Non-Atomic Categorical
Attributes in Similarity-based Fuzzy Databases - Scalability Evaluation
|
cs.DB
|
In this work we are analyzing scalability of the heuristic algorithm we used
in the past to discover knowledge from multi-valued symbolic attributes in
fuzzy databases. The non-atomic descriptors, characterizing a single attribute
of a database record, are commonly used in fuzzy databases to reflect
uncertainty about the recorded observation. In this paper, we present
implementation details and scalability tests of the algorithm, which we
developed to precisely interpret such non-atomic values and to transform (i.e.,
defuzzify) the fuzzy tuples into forms acceptable to many regular (i.e.,
atomic-value-based) data mining algorithms. Important advantages of our
approach are: (1) its linear scalability, and (2) its unique capability of
incorporating background knowledge, implicitly stored in the fuzzy database
models in the form of fuzzy similarity hierarchy, into the
interpretation/defuzzification process.
|
1103.5797
|
Computational Complexity Results for Genetic Programming and the Sorting
Problem
|
cs.NE
|
Genetic Programming (GP) has found various applications. Understanding this
type of algorithm from a theoretical point of view is a challenging task. The
first results on the computational complexity of GP have been obtained for
problems with isolated program semantics. With this paper, we push forward the
computational complexity analysis of GP on a problem with dependent program
semantics. We study the well-known sorting problem in this context and analyze
rigorously how GP can deal with different measures of sortedness.
|
1103.5808
|
Improved Edge Awareness in Discontinuity Preserving Smoothing
|
cs.CV
|
Discontinuity preserving smoothing is a fundamentally important procedure
that is useful in a wide variety of image processing contexts. It is directly
useful for noise reduction, and frequently used as an intermediate step in
higher level algorithms. For example, it can be particularly useful in edge
detection and segmentation. Three well known algorithms for discontinuity
preserving smoothing are nonlinear anisotropic diffusion, bilateral filtering,
and mean shift filtering. Although slight differences make them each better
suited to different tasks, all are designed to preserve discontinuities while
smoothing. However, none of them satisfy this goal perfectly: they each have
exception cases in which smoothing may occur across hard edges. The principal
contribution of this paper is the identification of a property we call edge
awareness that should be satisfied by any discontinuity preserving smoothing
algorithm. This constraint can be incorporated into existing algorithms to
improve quality, usually with negligible impact on runtime performance and/or
complexity. We present the modifications necessary to augment diffusion and
mean shift, as well as a new formulation of the bilateral filter that unifies
the spatial and range spaces to achieve edge awareness.
|
1103.5855
|
The FEM approach to the 3D electrodiffusion on 'meshes' optimized with
the Metropolis algorithm
|
cs.CG cs.CE math-ph math.MP
|
The presented article contains a 3D mesh generation routine optimized with
the Metropolis algorithm. The procedure makes it possible to produce meshes
whose elements have a prescribed volume V_0. The finite volume meshes are used
with the Finite Element approach. The FEM analysis makes it possible to handle
a set of coupled nonlinear differential equations that describes the
electrodiffusion problem. Mesh quality and the accuracy of the FEM solutions
are also examined. The high quality of the FEM-type space-dependent
approximation and the correctness of the discrete approximation in time are
ensured by finding solutions to the 3D Laplace problem and to the 3D diffusion
equation, respectively. Comparison with analytical solutions confirms the
accuracy of the obtained approximations.
|
1103.5946
|
Detecting the optimal number of communities in complex networks
|
physics.soc-ph cs.SI stat.AP
|
Determining the optimal number of communities is an important problem in
detecting community structure. In this paper, we extend the measures used by
community detection algorithms to find the optimal community number. Based on
the normalized mutual information index, which has been used as a measure of
the similarity of communities, a statistic $\Omega(c)$ is proposed to detect
the optimal number of communities. In general, when $\Omega(c)$ reaches a local
maximum, especially the first one, the corresponding number of communities
\emph{c} is likely to be optimal in community detection. Moreover, the
statistic $\Omega(c)$ can also measure the significance of community structures
in complex networks, which has received increasing attention recently.
Numerical and empirical results show that the index $\Omega(c)$ is effective in
both artificial and real-world networks.
|
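The statistic Ω(c) is built on the normalized mutual information (NMI) between community assignments, and NMI itself has a standard closed form. A sketch using the square-root normalization, one of several common variants (the abstract does not specify which one the authors use):

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information I(A;B) / sqrt(H(A) H(B)) between two
    community assignments given as equal-length label sequences."""
    n = len(labels_a)
    pa, pb = Counter(labels_a), Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))

    def entropy(counts):
        return -sum(c / n * math.log(c / n) for c in counts.values())

    mi = sum(c / n * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in pab.items())
    denom = math.sqrt(entropy(pa) * entropy(pb))
    return mi / denom if denom > 0 else 1.0  # identical one-block partitions
```

NMI is 1 for identical partitions (up to relabeling) and 0 for independent ones, which is what makes it usable as a similarity measure between detected community structures.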
1103.5985
|
On Empirical Entropy
|
cs.IT cs.LG math.IT
|
We propose a compression-based version of the empirical entropy of a finite
string over a finite alphabet. Whereas previous approaches consider the naked
entropy of (possibly higher-order) Markov processes, we consider the sum of the
description of the random variable involved plus the entropy it induces. We
assume only that the distribution involved is computable. To test the new
notion we compare the Normalized Information Distance (the similarity metric)
with a related measure based on Mutual Information in Shannon's framework. This
way the similarities and differences of the last two concepts are exposed.
|
1103.5991
|
Sequential Analysis in High Dimensional Multiple Testing and Sparse
Recovery
|
math.ST cs.IT math.IT stat.TH
|
This paper studies the problem of high-dimensional multiple testing and
sparse recovery from the perspective of sequential analysis. In this setting,
the probability of error is a function of the dimension of the problem. A
simple sequential testing procedure is proposed. We derive necessary conditions
for reliable recovery in the non-sequential setting and contrast them with
sufficient conditions for reliable recovery using the proposed sequential
testing procedure. Applications of the main results to several commonly
encountered models show that sequential testing can be exponentially more
sensitive to the difference between the null and alternative distributions (in
terms of the dependence on dimension), implying that subtle cases can be much
more reliably determined using sequential methods.
|
1103.6052
|
Internal Constraints of the Trifocal Tensor
|
cs.CV
|
The fundamental matrix and trifocal tensor are convenient algebraic
representations of the epipolar geometry of two and three view configurations,
respectively. The estimation of these entities is central to most
reconstruction algorithms, and a solid understanding of their properties and
constraints is therefore very important. The fundamental matrix has 1 internal
constraint which is well understood, whereas the trifocal tensor has 8
independent algebraic constraints. The internal tensor constraints can be
represented in many ways, although there is only one minimal and sufficient set
of 8 constraints known. In this paper, we derive a second set of minimal and
sufficient constraints that is simpler. We also show how this can be used in a
new parameterization of the trifocal tensor. We hope that this increased
understanding of the internal constraints may lead to improved algorithms for
estimating the trifocal tensor, although the primary contribution is an
improved theoretical understanding.
|
1103.6060
|
Interference, Cooperation and Connectivity - A Degrees of Freedom
Perspective
|
cs.IT math.IT
|
We explore the interplay between interference, cooperation and connectivity
in heterogeneous wireless interference networks. Specifically, we consider a
4-user locally-connected interference network with pairwise clustered decoding
and show that its degrees of freedom (DoF) are bounded above by 12/5.
Interestingly, when compared to the corresponding fully connected setting which
is known to have 8/3 DoF, the locally connected network is only missing
interference-carrying links, but still has lower DoF, i.e., eliminating these
interference-carrying links reduces the DoF. The 12/5 DoF outer bound is
obtained through a novel approach that translates insights from interference
alignment over linear vector spaces into corresponding sub-modularity
relationships between entropy functions.
|
1103.6067
|
Short proofs of the Quantum Substate Theorem
|
quant-ph cs.CC cs.IT math.IT
|
The Quantum Substate Theorem due to Jain, Radhakrishnan, and Sen (2002) gives
us a powerful operational interpretation of relative entropy, in fact, of the
observational divergence of two quantum states, a quantity that is related to
their relative entropy. Informally, the theorem states that if the
observational divergence between two quantum states rho, sigma is small, then
there is a quantum state rho' close to rho in trace distance, such that rho'
when scaled down by a small factor becomes a substate of sigma. We present new
proofs of this theorem. The resulting statement is optimal up to a constant
factor in its dependence on observational divergence. In addition, the proofs
are both conceptually simpler and significantly shorter than the earlier proof.
|
1103.6073
|
Colorful Triangle Counting and a MapReduce Implementation
|
cs.DS cs.DM cs.SI
|
In this note we introduce a new randomized algorithm for counting triangles
in graphs. We show that under mild conditions, the estimate of our algorithm is
strongly concentrated around the true number of triangles. Specifically, if $p
\geq \max\left(\frac{\Delta \log{n}}{t}, \frac{\log{n}}{\sqrt{t}}\right)$, where $n$,
$t$, $\Delta$ denote the number of vertices in $G$, the number of triangles in
$G$, and the maximum number of triangles an edge of $G$ is contained in, then
for any constant $\epsilon>0$ our unbiased estimate $T$ is concentrated around
its expectation, i.e., $\Pr\left[|T - \mathbb{E}[T]| \geq \epsilon\,\mathbb{E}[T]\right] = o(1)$.
Finally, we present a \textsc{MapReduce} implementation of our algorithm.
|
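The colorful sampling idea can be sketched as follows: color vertices uniformly at random, keep only monochromatic edges, count triangles in the sparsified graph, and rescale by the survival probability p² (p = 1/number of colors), since a triangle survives iff its three vertices share a color. This is a hedged reconstruction of the scheme the abstract describes, not the authors' exact implementation:

```python
import random
from itertools import combinations

def colorful_triangle_estimate(edges, n_colors, seed=0):
    """Unbiased triangle-count estimate via vertex coloring.

    Each vertex gets a uniform color from n_colors choices; only edges whose
    endpoints share a color are kept. A triangle survives with probability
    p^2 (p = 1/n_colors), so the sparsified count is rescaled by n_colors^2.
    """
    rng = random.Random(seed)
    edges = list(edges)
    color = {}
    for u, v in edges:
        if u not in color:
            color[u] = rng.randrange(n_colors)
        if v not in color:
            color[v] = rng.randrange(n_colors)
    kept = [(u, v) for u, v in edges if color[u] == color[v]]
    adj = {}
    for u, v in kept:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # each triangle is counted once per edge, i.e. three times in total
    closing = sum(len(adj[u] & adj[v]) for u, v in kept)
    return closing * n_colors ** 2 / 3
```

With n_colors = 1 nothing is discarded and the estimate is exact, which gives a quick sanity check on small graphs such as K4 (four triangles).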
1103.6149
|
Untainted Puncturing for Irregular Low-Density Parity-Check Codes
|
cs.IT math.IT
|
Puncturing is a well-known coding technique widely used for constructing
rate-compatible codes. In this paper, we consider the problem of puncturing
low-density parity-check codes and propose a new algorithm for intentional
puncturing. The algorithm is based on the puncturing of untainted symbols, i.e.
nodes with no punctured symbols within their neighboring set. It is shown that
the algorithm proposed here performs better than previous proposals for a range
of coding rates and small proportions of punctured symbols.
|
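The abstract's definition of untainted symbols (nodes with no punctured symbols within their neighboring set) suggests a greedy selection loop. The sketch below is a hedged reconstruction in which a variable node's neighboring set is taken to be all variable nodes sharing a check node with it; that reading, and the simple first-fit ordering, are assumptions, and the paper's grouping details are omitted:

```python
def untainted_puncturing(var_to_checks, check_to_vars, n_punct):
    """Greedy sketch: repeatedly puncture a variable node whose neighboring
    set (all variable nodes sharing a check node with it -- an assumed
    reading) contains no already-punctured node. Stops early when no
    untainted candidate remains."""
    punctured = set()
    for _ in range(n_punct):
        for v in var_to_checks:
            if v in punctured:
                continue
            neigh = {w for c in var_to_checks[v]
                       for w in check_to_vars[c]} - {v}
            if not (neigh & punctured):     # v is untainted
                punctured.add(v)
                break
        else:
            break                           # no untainted node left
    return punctured
```

On a Tanner graph with two disjoint checks {0,1} and {2,3}, the loop punctures one variable node per check and then stops, since any further candidate would taint a neighbor.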
1103.6241
|
Ergodic Transmission Capacity of Wireless Ad Hoc Networks with
Interference Management
|
cs.IT math.IT
|
Most work on wireless network throughput ignores the temporal correlation
inherent to wireless channels because it degrades tractability. To better model
and quantify the temporal variations of wireless network throughput, this paper
introduces a metric termed ergodic transmission capacity (ETC), which includes
spatial and temporal ergodicity. All transmitters in the network form a
homogeneous Poisson point process and all channels are modeled by a finite
state Markov chain. The bounds on outage probability and ETC are characterized,
and their scaling behaviors for sparse and dense networks are discussed. From
these results, we show that the ETC can be characterized by the inner product
of the channel-state related vector and the invariant probability vector of the
Markov chain. This indicates that channel-aware opportunistic transmission does
not always increase ETC. Finally, we look at outage probability with
interference management from a stochastic geometry point of view. The improved
bounds on outage probability and ETC due to interference management are
characterized and they provide some useful insights on how to effectively
manage interference in sparse and dense networks.
|
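The abstract's characterization of the ETC as the inner product of a channel-state-related vector with the invariant probability vector of the Markov chain requires that stationary vector. A generic helper for computing it (the specific vectors derived in the paper are not reproduced here):

```python
import numpy as np

def stationary(P):
    """Invariant probability vector pi of a finite-state Markov chain with
    row-stochastic transition matrix P, i.e. the solution of pi P = pi with
    sum(pi) = 1, obtained from the left eigenvector for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()
```

For a two-state channel with P = [[0.9, 0.1], [0.5, 0.5]], the invariant vector is (5/6, 1/6), and a temporally ergodic average of any per-state quantity r is then simply the inner product pi @ r.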
1103.6258
|
Localized Dimension Growth in Random Network Coding: A Convolutional
Approach
|
cs.IT math.IT
|
We propose an efficient Adaptive Random Convolutional Network Coding (ARCNC)
algorithm to address the issue of field size in random network coding. ARCNC
operates as a convolutional code, with the coefficients of local encoding
kernels chosen randomly over a small finite field. The lengths of local
encoding kernels increase with time until the global encoding kernel matrices
at related sink nodes all have full rank. Instead of estimating the necessary
field size a priori, ARCNC operates in a small finite field. It adapts to
unknown network topologies without prior knowledge, by locally incrementing the
dimensionality of the convolutional code. Because convolutional codes of
different constraint lengths can coexist in different portions of the network,
reductions in decoding delay and memory overheads can be achieved with ARCNC.
We show through analysis that this method performs no worse than random linear
network codes in general networks, and can provide significant gains in terms
of average decoding delay in combination networks.
|
1104.0005
|
On the binary codes with parameters of triply-shortened 1-perfect codes
|
cs.IT math.CO math.IT
|
We study properties of binary codes with parameters close to the parameters
of 1-perfect codes. An arbitrary binary $(n=2^m-3, 2^{n-m-1}, 4)$ code $C$,
i.e., a code with parameters of a triply-shortened extended Hamming code, is a
cell of an equitable partition of the $n$-cube into six cells. An arbitrary
binary $(n=2^m-4, 2^{n-m}, 3)$ code $D$, i.e., a code with parameters of a
triply-shortened Hamming code, is a cell of an equitable family (but not a
partition) of six cells. As a corollary, the codes $C$ and $D$ are completely
semiregular; i.e., the weight distribution of such a code depends only on the
minimal and maximal codeword weights and the code parameters. Moreover, if $D$
is self-complementary, then it is completely regular. As an intermediate
result, we prove, in terms of distance distributions, a general criterion for a
partition of the vertices of a graph (from a rather general class of graphs,
including the distance-regular graphs) to be equitable. Keywords: 1-perfect
code; triply-shortened 1-perfect code; equitable partition; perfect coloring;
weight distribution; distance distribution
|
1104.0025
|
Information content of colored motifs in complex networks
|
q-bio.QM cs.IT math.IT nlin.AO q-bio.MN q-bio.NC q-bio.PE
|
We study complex networks in which the nodes of the network are tagged with
different colors depending on the functionality of the nodes (colored graphs),
using information theory applied to the distribution of motifs in such
networks. We find that colored motifs can be viewed as the building blocks of
the networks (much more so than the uncolored structural motifs can be) and
that the relative frequency with which these motifs appear in the network can
be used to define the information content of the network. This information is
defined in such a way that a network with random coloration (but keeping the
relative number of nodes with different colors the same) has zero color
information content. Thus, colored motif information captures the
exceptionality of coloring in the motifs that is maintained via selection. We
study the motif information content of the C. elegans brain as well as the
evolution of colored motif information in networks that reflect the interaction
between instructions in genomes of digital life organisms. While we find that
colored motif information appears to capture essential functionality in the C.
elegans brain (where the color assignment of nodes is straightforward), it is
not obvious whether the colored motif information content always increases
during evolution, as would be expected from a measure that captures network
complexity. For a single choice of color assignment of instructions in the
digital life form Avida, we find instead that colored motif information content
increases or decreases during evolution, depending on how the genomes are
organized, and therefore could be an interesting tool to dissect genomic
rearrangements.
|
1104.0052
|
Peer Effects and Stability in Matching Markets
|
cs.SI cs.GT physics.soc-ph
|
Many-to-one matching markets exist in numerous different forms, such as
college admissions, matching medical interns to hospitals for residencies,
assigning housing to college students, and the classic firms and workers
market. In all these markets, externalities such as complementarities and peer
effects severely complicate the preference ordering of each agent. Further,
research has shown that externalities lead to serious problems for market
stability and for developing efficient algorithms to find stable matchings. In
this paper we make the observation that peer effects are often the result of
underlying social connections, and we explore a formulation of the many-to-one
matching market where peer effects are derived from an underlying social
network. The key feature of our model is that it captures peer effects and
complementarities using utility functions, rather than traditional preference
orderings. With this model, and considering a weaker notion of stability, namely
two-sided exchange stability, we prove that stable matchings always exist and
characterize the set of stable matchings in terms of social welfare. We also
give distributed algorithms that are guaranteed to converge to a two-sided
exchange stable matching. To assess the competitive ratio of these algorithms
and to more generally characterize the efficiency of matching markets with
externalities, we provide general bounds on how far the welfare of the
worst-case stable matching can be from the welfare of the optimal matching, and
find that the structure of the social network (e.g. how well clustered the
network is) plays a large role.
|
1104.0111
|
Decentralized Online Learning Algorithms for Opportunistic Spectrum
Access
|
cs.LG cs.NI math.PR
|
The fundamental problem of multiple secondary users contending for
opportunistic spectrum access over multiple channels in cognitive radio
networks has been formulated recently as a decentralized multi-armed bandit
(D-MAB) problem. In a D-MAB problem there are $M$ users and $N$ arms (channels)
that each offer i.i.d. stochastic rewards with unknown means so long as they
are accessed without collision. The goal is to design a decentralized online
learning policy that incurs minimal regret, defined as the difference between
the total expected rewards accumulated by a model-aware genie, and that
obtained by all users applying the policy. We make two contributions in this
paper. First, we consider the setting where the users have a prioritized
ranking, such that it is desired for the $K$-th-ranked user to learn to access
the arm offering the $K$-th highest mean reward. For this problem, we present
the first distributed policy that yields regret that is uniformly logarithmic
over time without requiring any prior assumption about the mean rewards.
Second, we consider the case when a fair access policy is required, i.e., it is
desired for all users to experience the same mean reward. For this problem, we
present a distributed policy that yields order-optimal regret scaling with
respect to the number of users and arms, better than previously proposed
policies in the literature. Both of our distributed policies make use of an
innovative modification of the well known UCB1 policy for the classic
multi-armed bandit problem that allows a single user to learn how to play the
arm that yields the $K$-th largest mean reward.
|
1104.0118
|
A Comparative Study of Relaying Schemes with Decode-and-Forward over
Nakagami-m Fading Channels
|
cs.IT cs.PF math.IT
|
Utilizing relaying techniques to improve performance of wireless systems is a
promising avenue. However, it is crucial to understand what type of relaying
schemes should be used for achieving different performance objectives under
realistic fading conditions. In this paper, we present a general framework for
modelling and evaluating the performance of relaying schemes based on the
decode-and-forward (DF) protocol over independent and not necessarily
identically distributed (INID) Nakagami-m fading channels. In particular, we
present closed-form expressions for the statistics of the instantaneous output
signal-to-noise ratio of four significant relaying schemes with DF; two based
on repetitive transmission and the other two based on relay selection (RS).
These expressions are then used to obtain closed-form expressions for the
outage probability and the average symbol error probability for several
modulations of all considered relaying schemes over INID Nakagami-m fading.
Importantly, it is shown that when the channel state information for RS is
perfect, RS-based transmission schemes always outperform repetitive ones.
Furthermore, when the direct link between the source and the destination nodes
is sufficiently strong, relaying may not result in any gains, and in this case
it should be switched off.
|
1104.0121
|
Complex network analysis of water distribution systems
|
physics.soc-ph cond-mat.stat-mech cs.SI math-ph math.MP stat.AP
|
This paper explores a variety of strategies for understanding the formation,
structure, efficiency and vulnerability of water distribution networks. Water
supply systems are studied as spatially organized networks for which the
practical applications of abstract evaluation methods are critically evaluated.
Empirical data from benchmark networks are used to study the interplay between
network structure and operational efficiency, reliability and robustness.
Structural measurements are undertaken to quantify properties such as
redundancy and optimal-connectivity, herein proposed as constraints in network
design optimization problems. The role of the supply-demand structure towards
system efficiency is studied and an assessment of the vulnerability to failures
based on the disconnection of nodes from the source(s) is undertaken. The
absence of conventional degree-based hubs (observed through uncorrelated
non-heterogeneous sparse topologies) prompts an alternative approach to
studying structural vulnerability based on the identification of network
cut-sets and optimal connectivity invariants. A discussion on the scope,
limitations and possible future directions of this research is provided.
|
1104.0126
|
U-Sem: Semantic Enrichment, User Modeling and Mining of Usage Data on
the Social Web
|
cs.IR cs.AI cs.HC
|
With the growing popularity of Social Web applications, more and more user
data is published on the Web every day. Our research focuses on investigating
ways of mining data from such platforms that can be used for modeling users and
for semantically augmenting user profiles. This process can enhance adaptation
and personalization in various adaptive Web-based systems. In this paper, we
present the U-Sem people modeling service, a framework for the semantic
enrichment and mining of people's profiles from usage data on the Social Web.
We explain the architecture of our people modeling service and describe its
application in an adult e-learning context as an example.
|
1104.0128
|
Towards an automated query modification assistant
|
cs.IR cs.AI cs.HC
|
Users who need several queries before finding what they need can benefit from
an automatic search assistant that provides feedback on their query
modification strategies. We present a method to learn from a search log which
types of query modifications have and have not been effective in the past. The
method analyses query modifications along two dimensions: a traditional
term-based dimension and a semantic dimension, for which queries are enriched
with linked data entities. Applying the method to the search logs of two search
engines, we identify six opportunities for a query modification assistant to
improve search: modification strategies that are commonly used, but that often
do not lead to satisfactory results.
|
1104.0136
|
On Interference Alignment and the Deterministic Capacity for Cellular
Channels with Weak Symmetric Cross Links
|
cs.IT math.IT
|
In this paper, we study the uplink of a cellular system using the linear
deterministic approximation model, where there are two users transmitting to a
receiver, mutually interfering with a third transmitter communicating with a
second receiver. We give an achievable coding scheme and prove its optimality,
i.e. characterize the capacity region. This scheme is a form of interference
alignment which exploits the channel gain difference of the two-user cell.
|
1104.0148
|
A dynamic network in a dynamic population: asymptotic properties
|
math.PR cs.SI physics.soc-ph
|
We derive asymptotic properties for a stochastic dynamic network model in a
stochastic dynamic population. In the model, nodes give birth to new nodes
until they die, each node being equipped with a social index given at birth.
During the life of a node it creates edges to other nodes, nodes with high
social index at higher rate, and edges disappear randomly in time. For this
model we derive a criterion for when a giant connected component exists after the
process has evolved for a long period of time, assuming the node population
grows to infinity. We also obtain an explicit expression for the degree
correlation $\rho$ (of neighbouring nodes) which shows that $\rho$ is always
positive irrespective of parameter values in one of the two treated submodels,
and may be either positive or negative in the other model, depending on the
parameters.
|
1104.0172
|
Weight enumeration of codes from finite spaces
|
math.CO cs.IT math.IT
|
We study the generalized and extended weight enumerator of the q-ary Simplex
code and the q-ary first order Reed-Muller code. For our calculations we use
that these codes correspond to a projective system containing all the points in
a finite projective or affine space. As a result of the geometric method we
use for the weight enumeration, we also completely determine the set of
supports of subcodes and words in an extension code.
|
1104.0183
|
Exact and Efficient Algorithm to Discover Extreme Stochastic Events in
Wind Generation over Transmission Power Grids
|
cs.SY math.OC physics.soc-ph
|
In this manuscript we continue the thread of [M. Chertkov, F. Pan, M.
Stepanov, Predicting Failures in Power Grids: The Case of Static Overloads,
IEEE Smart Grid 2011] and suggest a new algorithm for discovering the most
probable extreme stochastic events in static power grids associated with intermittent
generation of wind turbines. The algorithm becomes EXACT and EFFICIENT
(polynomial) in the case of the proportional (or other low parametric) control
of standard generation, and log-concave probability distribution of the
renewable generation, assumed known from the wind forecast. We illustrate the
algorithm's ability to discover problematic extreme events using the example of
the IEEE RTS-96 model of transmission with additions of 10%, 20% and 30% of
renewable generation. We observe that the probability of failure may grow but
it may also decrease with increase in renewable penetration, if the latter is
sufficiently diversified and distributed.
|
1104.0186
|
Reconciling long-term cultural diversity and short-term collective
social behavior
|
physics.soc-ph cs.SI physics.comp-ph
|
An outstanding open problem is whether collective social phenomena occurring
over short timescales can systematically reduce cultural heterogeneity in the
long run, and whether offline and online human interactions contribute
differently to the process. Theoretical models suggest that short-term
collective behavior and long-term cultural diversity are mutually excluding,
since they require very different levels of social influence. The latter
jointly depends on two factors: the topology of the underlying social network
and the overlap between individuals in multidimensional cultural space.
However, while the empirical properties of social networks are well understood,
little is known about the large-scale organization of real societies in
cultural space, so that random input specifications are necessarily used in
models. Here we use a large dataset to perform a high-dimensional analysis of
the scientific beliefs of thousands of Europeans. We find that inter-opinion
correlations determine a nontrivial ultrametric hierarchy of individuals in
cultural space, a result inaccessible to one-dimensional analyses and in
striking contrast with random assumptions. When empirical data are used as
inputs in models, we find that ultrametricity has strong and counterintuitive
effects, especially in the extreme case of long-range online-like interactions
bypassing social ties. On short time-scales, it strongly facilitates a
symmetry-breaking phase transition triggering coordinated social behavior. On
long time-scales, it severely suppresses cultural convergence by restricting it
within disjoint groups. We therefore find that, remarkably, the empirical
distribution of individuals in cultural space appears to optimize the
coexistence of short-term collective behavior and long-term cultural diversity,
which can be realized simultaneously for the same moderate level of mutual
influence.
|
1104.0215
|
Model-free control of microgrids
|
cs.SY math.OC
|
A new "model-free" control methodology is applied for the first time to power
systems included in microgrid networks. We evaluate its performance with
respect to output load and supply variations in different working
configurations of the microgrid. Our approach, which utilizes "intelligent" PI
controllers, does not
require any converter or microgrid model identification while ensuring the
stability and the robustness of the controlled system. Simulation results show
that with a simple control structure, the proposed control method is almost
insensitive to fluctuations and large load variations.
|
1104.0224
|
Density Evolution Analysis of Node-Based Verification-Based Algorithms
in Compressive Sensing
|
cs.IT math.IT
|
In this paper, we present a new approach for the analysis of iterative
node-based verification-based (NB-VB) recovery algorithms in the context of
compressive sensing. These algorithms are particularly interesting due to their
low complexity (linear in the signal dimension $n$). The asymptotic analysis
predicts the fraction of unverified signal elements at each iteration $\ell$ in
the asymptotic regime where $n \rightarrow \infty$. The analysis is similar in
nature to the well-known density evolution technique commonly used to analyze
iterative decoding algorithms. To perform the analysis, a message-passing
interpretation of NB-VB algorithms is provided. This interpretation lacks the
extrinsic nature of standard message-passing algorithms to which density
evolution is usually applied. This requires a number of non-trivial
modifications in the analysis. The analysis tracks the average performance of
the recovery algorithms over the ensembles of input signals and sensing
matrices as a function of $\ell$. Concentration results are devised to
demonstrate that the performance of the recovery algorithms applied to any
choice of the input signal over any realization of the sensing matrix follows
the deterministic results of the analysis closely. Simulation results are also
provided which demonstrate that the proposed asymptotic analysis matches the
performance of recovery algorithms for large but finite values of $n$. Compared
to the existing technique for the analysis of NB-VB algorithms, which is based
on numerically solving a large system of coupled differential equations, the
proposed method is much simpler and more accurate.
|
1104.0230
|
Separate Source-Channel Coding for Broadcasting Correlated Gaussians
|
cs.IT math.IT
|
The problem of broadcasting a pair of correlated Gaussian sources using
optimal separate source and channel codes is studied. Considerable performance
gains over previously known separate source-channel schemes are observed.
Although source-channel separation yields suboptimal performance in general, it
is shown that the proposed scheme is very competitive for any bandwidth
compression/expansion scenario. In particular, for a high channel SNR
scenario, it can be shown to achieve optimal power-distortion tradeoff.
|
1104.0235
|
Gaussian Robust Classification
|
cs.LG
|
Supervised learning is all about the ability to generalize knowledge.
Specifically, the goal of the learning is to train a classifier using training
data, in such a way that it will be capable of classifying new unseen data
correctly. In order to achieve this goal, it is important to carefully design
the learner so it will not overfit the training data. The latter is usually
done by adding a regularization term. The statistical learning theory
explains the success of this method by claiming that it restricts the
complexity of the learned model. This explanation, however, is rather abstract
and does not have a geometric intuition. The generalization error of a
classifier may be thought of as correlated with its robustness to perturbations
of the data: a classifier that copes with disturbance is expected to generalize
well. Indeed, Xu et al. [2009] have shown that the SVM formulation is
equivalent to a robust optimization (RO) formulation, in which an adversary
displaces the training and testing points within a ball of pre-determined
radius. In this work we explore a different kind of robustness, namely changing
each data point with a Gaussian cloud centered at the sample. Loss is evaluated
as the expectation of an underlying loss function on the cloud. This setup fits
the fact that in many applications, the data is sampled along with noise. We
develop an RO framework, in which the adversary chooses the covariance of the
noise. In our algorithm named GURU, the tuning parameter is a spectral bound on
the noise, thus it can be estimated using physical or applicative
considerations. Our experiments show that this framework performs as well as
SVM and even slightly better in some cases. Generalizations for Mercer kernels
and for the multiclass case are presented as well. We also show that our
framework may be further generalized, using the technique of convex perspective
functions.
|
1104.0262
|
Fast Linearized Bregman Iteration for Compressive Sensing and Sparse
Denoising
|
math.OC cs.IT math.IT
|
We propose and analyze an extremely fast, efficient, and simple method for
solving the problem $\min\{\|u\|_1 : Au = f, u \in \mathbb{R}^n\}$. This
method was first described in [J. Darbon and S. Osher, preprint,
2007], with more details in [W. Yin, S. Osher, D. Goldfarb and J. Darbon, SIAM
J. Imaging Sciences, 1(1), 143-168, 2008] and rigorous theory given in [J. Cai,
S. Osher and Z. Shen, Math. Comp., to appear, 2008, see also UCLA CAM Report
08-06] and [J. Cai, S. Osher and Z. Shen, UCLA CAM Report, 08-52, 2008]. The
motivation was compressive sensing, which now has a vast and exciting history,
which seems to have started with Candes et al. [E. Candes, J. Romberg and T.
Tao, 52(2), 489-509, 2006] and Donoho, [D. L. Donoho, IEEE Trans. Inform.
Theory, 52, 1289-1306, 2006]. See [W. Yin, S. Osher, D. Goldfarb and J. Darbon,
SIAM J. Imaging Sciences 1(1), 143-168, 2008] and [J. Cai, S. Osher and Z.
Shen, Math. Comp., to appear, 2008, see also UCLA CAM Report, 08-06] and [J.
Cai, S. Osher and Z. Shen, UCLA CAM Report, 08-52, 2008] for a large set of
references. Our method introduces an improvement called "kicking" of the very
efficient method of [J. Darbon and S. Osher, preprint, 2007] and [W. Yin, S.
Osher, D. Goldfarb and J. Darbon, SIAM J. Imaging Sciences, 1(1), 143-168,
2008] and also applies it to the problem of denoising of undersampled signals.
The use of Bregman iteration for denoising of images began in [S. Osher, M.
Burger, D. Goldfarb, J. Xu and W. Yin, Multiscale Model. Simul, 4(2), 460-489,
2005] and led to improved results for total variation based methods. Here we
apply it to denoise signals, especially essentially sparse signals, which might
even be undersampled.
|
1104.0283
|
Evolving a New Feature for a Working Program
|
cs.NE
|
A genetic programming system is created. A first fitness function f1 is used
to evolve a program that implements a first feature. Then the fitness function
is switched to a second function f2, which is used to evolve a program that
implements a second feature while still maintaining the first feature. The
median number of generations G1 and G2 needed to evolve programs that work as
defined by f1 and f2 are measured. The behavior of G1 and G2 is observed as
the difficulty of the problem is increased.
In these systems, the density D1 of programs that work (for fitness function
f1) is measured in the general population of programs. The relationship
G1~1/sqrt(D1) is observed to approximately hold. Also, the density D2 of
programs that work (for fitness function f2) is measured in the general
population of programs. The relationship G2~1/sqrt(D2) is observed to
approximately hold.
|
1104.0319
|
Methods to Determine Node Centrality and Clustering in Graphs with
Uncertain Structure
|
cs.SI physics.soc-ph
|
Much of the past work in network analysis has focused on analyzing discrete
graphs, where binary edges represent the "presence" or "absence" of a
relationship. Since traditional network measures (e.g., betweenness centrality)
utilize a discrete link structure, complex systems must be transformed to this
representation in order to investigate network properties. However, in many
domains there may be uncertainty about the relationship structure and any
uncertainty information would be lost in translation to a discrete
representation. Uncertainty may arise in domains where there is moderating link
information that cannot be easily observed, i.e., links become inactive over
time but may not be dropped, or observed links may not always correspond to a
valid relationship. In order to represent and reason with these types of
uncertainty, we move beyond the discrete graph framework and develop social
network measures based on a probabilistic graph representation. More
specifically, we develop measures of path length, betweenness centrality, and
clustering coefficient---one set based on sampling and one based on
probabilistic paths. We evaluate our methods on three real-world networks from
Enron, Facebook, and DBLP, showing that our proposed methods more accurately
capture salient effects without being susceptible to local noise, and that the
resulting analysis produces a better understanding of the graph structure and
the uncertainty resulting from its change over time.
|
1104.0354
|
Low-rank Matrix Recovery from Errors and Erasures
|
cs.IT math.IT stat.ML
|
This paper considers the recovery of a low-rank matrix from an observed
version that simultaneously contains both (a) erasures: most entries are not
observed, and (b) errors: values at a constant fraction of (unknown) locations
are arbitrarily corrupted. We provide a new unified performance guarantee on
when the natural convex relaxation of minimizing rank plus support succeeds in
exact recovery. Our result allows for the simultaneous presence of random and
deterministic components in both the error and erasure patterns. On the one
hand, corollaries obtained by specializing this one single result in different
ways recover (up to poly-log factors) all the existing works in matrix
completion, and sparse and low-rank matrix recovery. On the other hand, our
results also provide the first guarantees for (a) recovery when we observe a
vanishing fraction of entries of a corrupted matrix, and (b) deterministic
matrix completion.
|
1104.0360
|
Some inequalities on generalized entropies
|
math.CA cond-mat.stat-mech cs.IT math.IT
|
We give several inequalities on generalized entropies involving Tsallis
entropies, using some inequalities obtained by improvements of Young's
inequality. We also give a generalized Han's inequality.
|
1104.0384
|
Relations between redundancy patterns of the Shannon code and wave
diffraction patterns of partially disordered media
|
cs.IT cond-mat.other cond-mat.stat-mech math.IT
|
The average redundancy of the Shannon code, $R_n$, as a function of the block
length $n$, is known to exhibit two very different types of behavior, depending
on the rationality or irrationality of certain parameters of the source: It
either converges to 1/2 as $n$ grows without bound, or it may have a
non-vanishing, oscillatory, (quasi-) periodic pattern around the value 1/2 for
all large $n$. In this paper, we attempt to shed some light on this erratic
behavior of $R_n$ by drawing an analogy with the physics of
wave propagation, in particular, the elementary theory of scattering and
diffraction. It turns out that there are two types of behavior of wave
diffraction patterns formed by crystals, which are correspondingly analogous to
the two types of patterns of $R_n$. When the crystal is perfect, the
diffraction intensity spectrum exhibits very sharp peaks, a.k.a. Bragg peaks,
at wavelengths of full constructive interference. These wavelengths correspond
to the frequencies of the harmonic waves of the oscillatory mode of $R_n$. On
the other hand, when the crystal is imperfect and there is a considerable
degree of disorder in its structure, the Bragg peaks disappear, and the
behavior of this mode is analogous to the one where $R_n$ is convergent.
|
1104.0395
|
Uncovering missing links with cold ends
|
physics.data-an cs.IR cs.SI physics.soc-ph
|
To evaluate the performance of prediction of missing links, the known data
are randomly divided into two parts, the training set and the probe set. We
argue that this straightforward and standard method may lead to terrible bias,
since in real biological and information networks, missing links are more
likely to be links connecting low-degree nodes. We therefore study how to
uncover missing links with low-degree nodes, namely links in the probe set are
of lower degree products than a random sampling. Experimental analysis on ten
local similarity indices and four disparate real networks reveals a surprising
result that the Leicht-Holme-Newman index [E. A. Leicht, P. Holme, and M. E. J.
Newman, Phys. Rev. E 73, 026120 (2006)] performs the best, although it was
known to be one of the worst indices if the probe set is a random sampling of
all links. We further propose a parameter-dependent index, which considerably
improves the prediction accuracy. Finally, we show the relevance of the
proposed index on three real sampling methods.
|
1104.0419
|
Soft-Decision-Driven Channel Estimation for Pipelined Turbo Receivers
|
cs.IT cs.SY math.IT math.OC
|
We consider channel estimation specific to turbo equalization for
multiple-input multiple-output (MIMO) wireless communication. We develop a
soft-decision-driven sequential algorithm geared to the pipelined turbo
equalizer architecture operating on orthogonal frequency division multiplexing
(OFDM) symbols. One interesting feature of the pipelined turbo equalizer is
that multiple soft-decisions become available at various processing stages. A
tricky issue is that these multiple decisions from different pipeline stages
have varying levels of reliability. This paper establishes an effective
strategy for the channel estimator to track the target channel, while dealing
with observation sets with different qualities. The resulting algorithm is
basically a linear sequential estimation algorithm and, as such, is
Kalman-based in nature. The main difference here, however, is that the proposed
algorithm employs puncturing on observation samples to effectively deal with
the inherent correlation among the multiple demapper/decoder module outputs
that cannot easily be removed by the traditional innovations approach. The
proposed algorithm continuously monitors the quality of the feedback decisions
and incorporates it in the channel estimation process. The proposed channel
estimation scheme shows clear performance advantages relative to existing
channel estimation techniques.
|
1104.0430
|
Two Birds and One Stone: Gaussian Interference Channel with a Shared
Out-of-Band Relay of Limited Rate
|
cs.IT math.IT
|
The two-user Gaussian interference channel with a shared out-of-band relay is
considered. The relay observes a linear combination of the source signals and
broadcasts a common message to the two destinations, through a perfect link of
fixed limited rate $R_0$ bits per channel use. The out-of-band nature of the
relay is reflected by the fact that the common relay message does not interfere
with the received signal at the two destinations. A general achievable rate is
established, along with upper bounds on the capacity region for the Gaussian
case. For $R_0$ values below a certain threshold, which depends on channel
parameters, the capacity region of this channel is determined in this paper to
within a constant gap of $\Delta=1.95$ bits. We identify interference regimes
where a two-for-one gain in achievable rates is possible for every bit relayed,
up to a constant approximation error. Instrumental to these results is a
carefully-designed quantize-and-forward type of relay strategy along with a
joint decoding scheme employed at destination ends. Further, we also study
successive decoding strategies with optimal decoding order (corresponding to
the order at which common, private, and relay messages are decoded), and show
that successive decoding also achieves two-for-one gains asymptotically in
regimes where a two-for-one gain is achievable by joint decoding; yet,
successive decoding produces unbounded loss asymptotically when compared to
joint decoding, in general.
|
1104.0446
|
Reconstruction of Binary Functions and Shapes from Incomplete Frequency
Information
|
cs.IT math.IT math.OC
|
The characterization of a binary function by partial frequency information is
considered. We show that it is possible to reconstruct binary signals from
incomplete frequency measurements via the solution of a simple linear
optimization problem. We further prove that if a binary function is spatially
structured (e.g. a general black-white image or an indicator function of a
shape), then it can be recovered from very few low frequency measurements in
general. These results would lead to efficient methods of sensing,
characterizing and recovering a binary signal or a shape as well as other
applications like deconvolution of binary functions blurred by a low-pass
filter. Numerical results are provided to demonstrate the theoretical
arguments.
|
1104.0454
|
Degree Fluctuations and the Convergence Time of Consensus Algorithms
|
math.OC cs.SY
|
We consider a consensus algorithm in which every node in a sequence of
undirected, B-connected graphs assigns equal weight to each of its neighbors.
Under the assumption that the degree of each node is fixed (except for times
when the node has no connections to other nodes), we show that consensus is
achieved within a given accuracy $\epsilon$ on n nodes in time $B+4n^3 B
\ln(2n/\epsilon)$. Because there is a direct relation between consensus
algorithms in time-varying environments and inhomogeneous random walks, our
result also translates into a general statement on such random walks. Moreover,
we give a simple proof of a result of Cao, Spielman, and Morse that the worst
case convergence time becomes exponentially large in the number of nodes $n$
under slight relaxation of the degree constancy assumption.
|
1104.0457
|
Nonuniform Coverage Control on the Line
|
math.OC cs.SY
|
This paper investigates control laws allowing mobile, autonomous agents to
optimally position themselves on the line for distributed sensing in a
nonuniform field. We show that a simple static control law, based only on local
measurements of the field by each agent, drives the agents close to the optimal
positions after the agents execute in parallel a number of
sensing/movement/computation rounds that is essentially quadratic in the number
of agents. Further, we exhibit a dynamic control law which, under slightly
stronger assumptions on the capabilities and knowledge of each agent, drives
the agents close to the optimal positions after the agents execute in parallel
a number of sensing/communication/computation/movement rounds that is
essentially linear in the number of agents. Crucially, both algorithms are
fully distributed and robust to unpredictable loss and addition of agents.
|
1104.0459
|
Enabling Multi-level Trust in Privacy Preserving Data Mining
|
cs.DB stat.AP
|
Privacy Preserving Data Mining (PPDM) addresses the problem of developing
accurate models about aggregated data without access to precise information in
individual data records. A widely studied \emph{perturbation-based PPDM}
approach introduces random perturbation to individual values to preserve
privacy before data is published. Previous solutions of this approach are
limited in their tacit assumption of single-level trust on data miners.
In this work, we relax this assumption and expand the scope of
perturbation-based PPDM to Multi-Level Trust (MLT-PPDM). In our setting, the
more trusted a data miner is, the less perturbed copy of the data it can
access. Under this setting, a malicious data miner may have access to
differently perturbed copies of the same data through various means, and may
combine these diverse copies to jointly infer additional information about the
original data that the data owner does not intend to release. Preventing such
\emph{diversity attacks} is the key challenge of providing MLT-PPDM services.
We address this challenge by properly correlating perturbation across copies at
different trust levels. We prove that our solution is robust against diversity
attacks with respect to our privacy goal. That is, for data miners who have
access to an arbitrary collection of the perturbed copies, our solution prevents
them from jointly reconstructing the original data more accurately than the
best effort using any individual copy in the collection. Our solution allows a
data owner to generate perturbed copies of its data for arbitrary trust levels
on-demand. This feature offers data owners maximum flexibility.
|
1104.0529
|
Random copying in space
|
physics.soc-ph cs.SI q-bio.PE
|
Random copying is a simple model for population dynamics in the absence of
selection, and has been applied to both biological and cultural evolution. In
this work, we investigate the effect that spatial structure has on the
dynamics. We focus in particular on how a measure of the diversity in the
population changes over time. We show that even when the vast majority of a
population's history may be well-described by a spatially-unstructured model,
spatial structure may nevertheless affect the expected level of diversity seen
at a local scale. We demonstrate this phenomenon explicitly by examining the
random copying process on small-world networks, and use our results to comment
on the use of simple random-copying models in an empirical context.
|
1104.0547
|
Joint Transmission and State Estimation: A Constrained Channel Coding
Approach
|
cs.IT math.IT
|
A scenario involving a source, a channel, and a destination, where the
destination is interested in {\em both} reliably reconstructing the message
transmitted by the source and estimating with a fidelity criterion the state of
the channel, is considered. The source knows the channel statistics, but is
oblivious to the actual channel state realization. Herein it is established
that a distortion constraint for channel state estimation can be reduced to an
additional cost constraint on the source input distribution, in the limit of
large coding block length. A newly defined capacity-distortion function thus
characterizes the fundamental tradeoff between transmission rate and state
estimation distortion. It is also shown that non-coherent communication coupled
with channel state estimation conditioned on treating the decoded message as
training symbols achieves the capacity-distortion function. Among the various
examples considered, the capacity-distortion function for a memoryless Rayleigh
fading channel is characterized to within 1.443 bits at high signal-to-noise
ratio. The constrained channel coding approach is also extended to multiple
access channels, leading to a coupled cost constraint on the input
distributions for the transmitting sources.
|
1104.0553
|
Determining Relevance of Accesses at Runtime (Extended Version)
|
cs.DB
|
Consider the situation where a query is to be answered using Web sources that
restrict the accesses that can be made on backend relational data by requiring
some attributes to be given as input of the service. The accesses provide
lookups on the collection of attribute values that match the binding. They can
differ in whether or not they require arguments to be generated from prior
accesses. Prior work has focused on the question of whether a query can be
answered using a set of data sources, and in developing static access plans
(e.g., Datalog programs) that implement query answering. We are interested in
dynamic aspects of the query answering problem: given partial information about
the data, which accesses could provide relevant data for answering a given
query? We consider immediate and long-term notions of "relevant accesses", and
ascertain the complexity of query relevance, for both conjunctive queries and
arbitrary positive queries. In the process, we relate dynamic relevance of an
access to query containment under access limitations and characterize the
complexity of this problem; we produce several complexity results about
containment that are of interest by themselves.
|
1104.0576
|
Adaptive Single-Trial Error/Erasure Decoding of Reed-Solomon Codes
|
cs.IT math.IT
|
Algebraic decoding algorithms are commonly applied for the decoding of
Reed-Solomon codes. Their main advantages are low computational complexity and
predictable decoding capabilities. Many algorithms can be extended for
correction of both errors and erasures. This enables the decoder to exploit
binary quantized reliability information obtained from the transmission
channel: Received symbols with high reliability are forwarded to the decoding
algorithm while symbols with low reliability are erased. In this paper we
investigate adaptive single-trial error/erasure decoding of Reed-Solomon codes,
i.e. we derive an adaptive erasing strategy which minimizes the residual
codeword error probability after decoding. Our result is applicable to any
error/erasure decoding algorithm as long as its decoding capabilities can be
expressed by a decoder capability function. Examples are Bounded Minimum
Distance decoding with the Berlekamp-Massey or the Sugiyama algorithm and the
Guruswami-Sudan list decoder.
|
1104.0579
|
Image Retrieval Method Using Top-surf Descriptor
|
cs.CV
|
This report presents the results and details of a content-based image
retrieval project using the Top-surf descriptor. The experimental results are
preliminary; however, they show the capability of deducing objects from parts of
the objects or from the objects that are similar. This paper uses a dataset
consisting of 1200 images of which 800 images are equally divided into 8
categories, namely airplane, beach, motorbike, forest, elephants, horses, bus
and building, while the other 400 images are randomly picked from the Internet.
The best results achieved are from the building category.
|
1104.0582
|
Visual Concept Detection and Real Time Object Detection
|
cs.CV
|
A bag-of-words model is implemented and tried on a 10-class visual concept
detection problem. The experimental results show that "DURF+ERT+SVM"
outperforms "SIFT+ERT+SVM" both in detection performance and computation
efficiency. Besides, combining DURF and SIFT results in even better detection
performance. Real-time object detection using SIFT and RANSAC is also tried on
simple objects, e.g. a drink can, and good results are achieved.
|
1104.0599
|
Near concavity of the growth rate for coupled LDPC chains
|
cs.IT math.IT
|
Convolutional Low-Density-Parity-Check (LDPC) ensembles have excellent
performance. Their iterative threshold increases with their average degree, or
with the size of the coupling window in randomized constructions. In the latter
case, as the window size grows, the Belief Propagation (BP) threshold attains
the maximum-a-posteriori (MAP) threshold of the underlying ensemble. In this
contribution we show that a similar phenomenon happens for the growth rate of
coupled ensembles. Loosely speaking, we observe that as the coupling strength
grows, the growth rate of the coupled ensemble comes close to the concave hull
of the underlying ensemble's growth rate. For ensembles randomly coupled across
a window the growth rate actually tends to the concave hull of the underlying
one as the window size increases. Our observations are supported by the
calculations of the combinatorial growth rate, and that of the growth rate
derived from the replica method. The observed concavity is a general feature of
coupled mean field graphical models and is already present at the level of
coupled Curie-Weiss models. There, the canonical free energy of the coupled
system tends to the concave hull of the underlying one. As we explain, the
behavior of the growth rate of coupled ensembles is exactly analogous.
|
1104.0640
|
On the Sphere Decoding Complexity of STBCs for Asymmetric MIMO Systems
|
cs.IT math.IT
|
In the landmark paper by Hassibi and Hochwald, it is claimed without proof
that the upper triangular matrix R encountered during the sphere decoding of
any linear dispersion code is full-ranked whenever the rate of the code is less
than the minimum of the number of transmit and receive antennas. In this paper,
we show that this claim is true only when the number of receive antennas is at
least as much as the number of transmit antennas. We also show that all known
families of high rate (rate greater than 1 complex symbol per channel use)
multigroup ML decodable codes have rank-deficient R matrix even when the
criterion on rate is satisfied, and that this rank-deficiency problem arises
only in asymmetric MIMO with number of receive antennas less than the number of
transmit antennas. Unlike the codes with full-rank R matrix, the average sphere
decoding complexity of the STBCs whose R matrix is rank-deficient is polynomial
in the constellation size. We derive the sphere decoding complexity of most of
the known high rate multigroup ML decodable codes, and show that for each code,
the complexity is a decreasing function of the number of receive antennas.
|
1104.0651
|
Meaningful Clustered Forest: an Automatic and Robust Clustering
Algorithm
|
cs.LG
|
We propose a new clustering technique that can be regarded as a numerical
method to compute the proximity gestalt. The method analyzes edge length
statistics in the MST of the dataset and provides an a contrario cluster
detection criterion. The approach is fully parametric on the chosen distance
and can detect arbitrarily shaped clusters. The method is also automatic, in
the sense that only a single parameter is left to the user. This parameter has
an intuitive interpretation as it controls the expected number of false
detections. We show that the iterative application of our method can (1)
provide robustness to noise and (2) solve a masking phenomenon in which a
highly populated and salient cluster dominates the scene and inhibits the
detection of less-populated, but still salient, clusters.
|
1104.0654
|
Block-Sparse Recovery via Convex Optimization
|
math.OC cs.CV cs.IT math.IT
|
Given a dictionary that consists of multiple blocks and a signal that lives
in the range space of only a few blocks, we study the problem of finding a
block-sparse representation of the signal, i.e., a representation that uses the
minimum number of blocks. Motivated by signal/image processing and computer
vision applications, such as face recognition, we consider the block-sparse
recovery problem in the case where the number of atoms in each block is
arbitrary, possibly much larger than the dimension of the underlying subspace.
To find a block-sparse representation of a signal, we propose two classes of
non-convex optimization programs, which aim to minimize the number of nonzero
coefficient blocks and the number of nonzero reconstructed vectors from the
blocks, respectively. Since both classes of problems are NP-hard, we propose
convex relaxations and derive conditions under which each class of the convex
programs is equivalent to the original non-convex formulation. Our conditions
depend on the notions of mutual and cumulative subspace coherence of a
dictionary, which are natural generalizations of existing notions of mutual and
cumulative coherence. We evaluate the performance of the proposed convex
programs through simulations as well as real experiments on face recognition.
We show that treating the face recognition problem as a block-sparse recovery
problem improves the state-of-the-art results by 10% with only 25% of the
training data.
|
1104.0729
|
Online and Batch Learning Algorithms for Data with Missing Features
|
cs.LG stat.ML
|
We introduce new online and batch algorithms that are robust to data with
missing features, a situation that arises in many practical applications. In
the online setup, we allow for the comparison hypothesis to change as a
function of the subset of features that is observed on any given round,
extending the standard setting where the comparison hypothesis is fixed
throughout. In the batch setup, we present a convex relaxation of a non-convex
problem to jointly estimate an imputation function, used to fill in the values
of missing features, along with the classification hypothesis. We prove regret
bounds in the online setting and Rademacher complexity bounds for the batch
i.i.d. setting. The algorithms are tested on several UCI datasets, showing
superior performance over baselines.
|
1104.0735
|
A Non-Orthogonal DF Scheme for the Single Relay Channel and the Effect
of Labelling
|
cs.IT math.IT
|
We consider the uncoded transmission over the half-duplex single relay
channel, with a single antenna at the source, relay and destination nodes, in a
Rayleigh fading environment. The phase during which the relay is in reception
mode is referred to as Phase 1 and the phase during which the relay is in
transmission mode is referred to as Phase 2. The following two cases are
considered: the Non-Orthogonal Decode and Forward (NODF) scheme, in which both
the source and the relay transmit during Phase 2 and the Orthogonal Decode and
Forward (ODF) scheme, in which the relay alone transmits during Phase 2. A near
ML decoder which gives full diversity (diversity order 2) for the NODF scheme
is proposed. Due to the proximity of the relay to the destination, the
Source-Destination link, in general, is expected to be much weaker than the
Relay-Destination link. Hence it is not clear whether the transmission made by
the source during Phase 2 in the NODF scheme provides any performance
improvement over the ODF scheme or not. In this regard, it is shown that the
NODF scheme provides significant performance improvement over the ODF scheme.
In fact, at high SNR, the performance of the NODF scheme with the non-ideal
Source-Relay link is the same as that of the NODF scheme with an ideal
Source-Relay link. In other words, to study the high SNR performance of the
NODF scheme, one can assume that the Source-Relay link is ideal, whereas the
same is not true for the ODF scheme. Further, it is shown that proper choice of
the mapping of the bits on to the signal points at the source and the relay,
provides a significant improvement in performance, for both the NODF and the
ODF schemes.
|
1104.0742
|
Accelerating Growth and Size-dependent Distribution of Human Activities
Online
|
physics.soc-ph cs.SI
|
Research on human online activities usually assumes that total activity $T$
increases linearly with active population $P$, that is, $T\propto
P^{\gamma}(\gamma=1)$. However, we find examples of systems where total
activity grows faster than active population. Our study shows that the power
law relationship $T\propto P^{\gamma}(\gamma>1)$ is in fact ubiquitous in
online activities such as micro-blogging, news voting and photo tagging. We
call the pattern "accelerating growth" and find it relates to a type of
distribution that changes with system size. We show both analytically and
empirically how the growth rate $\gamma$ associates with a scaling parameter
$b$ in the size-dependent distribution. As most previous studies explain
accelerating growth by power law distribution, the model of size-dependent
distribution is novel and worth further exploration.
|
1104.0769
|
Enhanced stiffness modeling of manipulators with passive joints
|
cs.RO
|
The paper presents a methodology to enhance the stiffness analysis of serial
and parallel manipulators with passive joints. It directly takes into account
the loading influence on the manipulator configuration and, consequently, on
its Jacobians and Hessians. The main contributions of this paper are the
introduction of a non-linear stiffness model for the manipulators with passive
joints, a relevant numerical technique for its linearization and computing of
the Cartesian stiffness matrix which allows rank-deficiency. Within the
developed technique, the manipulator elements are presented as pseudo-rigid
bodies separated by multidimensional virtual springs and perfect passive
joints. Simulation examples are presented that deal with parallel manipulators
of the Ortholide family and demonstrate the ability of the developed
methodology to describe non-linear behavior of the manipulator structure such
as a sudden change of the elastic instability properties (buckling).
|
1104.0775
|
Evolving Pacing Strategies for Team Pursuit Track Cycling
|
cs.NE
|
Team pursuit track cycling is a bicycle racing sport held on velodromes and
is part of the Summer Olympics. It involves the use of strategies to minimize
the overall time that a team of cyclists needs to complete a race. We present
an optimisation framework for team pursuit track cycling and show how to evolve
strategies using metaheuristics for this interesting real-world problem. Our
experimental results show that these heuristics lead to significantly better
strategies than the state-of-the-art strategies currently used by teams of
cyclists.
|
1104.0780
|
A distributed Approach for Access and Visibility Task with a Manikin and
a Robot in a Virtual Reality Environment
|
cs.RO
|
This paper presents a new method, based on a multi-agent system and on a
digital mock-up technology, to assess an efficient path planner for a manikin
or a robot for access and visibility task taking into account ergonomic
constraints or joint and mechanical limits. In order to solve this problem, the
human operator is integrated in the process optimization to contribute to a
global perception of the environment. This operator cooperates, in real-time,
with several automatic local elementary agents. The result of this work
validates solutions through the digital mock-up; it can be applied to simulate
maintainability and mountability tasks.
|
1104.0834
|
Haptic devices and objects, robots and mannequin simulation in a CAD-CAM
software: eM-Virtual Desktop
|
cs.RO
|
This paper presents the development of a new software in order to manage
objects, robots and mannequins in using the possibilities given by the haptic
feedback of the Phantom desktop devices. The haptic device senses six
positional degrees of freedom but provides force feedback in only three. This
software called eM-Virtual Desktop is integrated in the Tecnomatix's solution
called eM-Workplace. The eM-Workplace provides powerful solutions for planning
and designing of complex assembly facilities, lines and workplaces. In the
digital mockup context, the haptic interfaces can be used to reduce the
development cycle of products. Three different loops are used to manage the
graphics, the collision detection and the haptic feedback, each running at its
own frequency. The developed software is currently being tested in an
industrial context by a European automotive manufacturer.
|
1104.0839
|
A framework of motion capture system based human behaviours simulation
for ergonomic analysis
|
cs.RO
|
With the increase in computer capabilities, Computer-Aided Ergonomics (CAE)
offers new possibilities to integrate conventional ergonomic knowledge and to
develop new methods into the work design process. As mentioned in [1],
different approaches have been developed to enhance the efficiency of the
ergonomic evaluation. Ergonomic expert systems, ergonomic oriented information
systems, numerical models of human, etc. have been implemented in numerical
ergonomic software. To date, several ergonomic software tools are available,
such as Jack, Ergoman, Delmia Human, 3DSSPP and Santos [2-4]. The main
functions of these tools are posture analysis and posture prediction. In the
visualization part, Jack and 3DSSPP produce results to visualize virtual human
tasks in three dimensions, but without realistic physical properties.
Nowadays, with the development of computer technology, more attention is paid
to simulating the physical world. Physics engines [5] are used more and more in
the computer game (CG) field. Their advantage is a natural simulation of the
physical-world environment. The purpose of our research is to use CG technology
to create a virtual environment with physical properties for ergonomic analysis
of virtual human.
|
1104.0840
|
Uniqueness domains and non singular assembly mode changing trajectories
|
cs.RO
|
Parallel robots generally admit several solutions to the direct kinematics
problem. The aspects are associated with the maximal singularity-free domains,
which contain no singular configurations. Inside these regions, some trajectories
are possible between two solutions of the direct kinematic problem without
meeting any type of singularity: non-singular assembly mode trajectories. An
established condition for such trajectories is the existence of cusp points
inside the joint space that must be encircled. This paper presents an approach
based on
the notion of uniqueness domains to explain this behaviour.
|
1104.0843
|
Phase Transitions in Knowledge Compilation: an Experimental Study
|
cs.AI
|
Phase transitions in many complex combinatorial problems have been widely
studied in the past decade. In this paper, we investigate phase transitions in
the knowledge compilation empirically, where DFA, OBDD and d-DNNF are chosen as
the target languages to compile random k-SAT instances. We perform intensive
experiments to analyze the sizes of compilation results and draw the following
conclusions: there exists an easy-hard-easy pattern in compilations; the peak
point of sizes in the pattern is only related to the ratio of the number of
clauses to that of variables when k is fixed, regardless of target languages;
most sizes of compilation results increase exponentially as the number of
variables grows, but there also exists a phase transition that separates a
polynomial-increment region from the exponential-increment region; moreover, we
explain why the phase transition in compilations occurs by analyzing
microstructures of DFAs, and conclude that a kind of solution
interchangeability with more than 2 variables has a sharp transition near the
peak point of the easy-hard-easy pattern, and thus it has a great impact on
sizes of DFAs.
|
1104.0862
|
Causal Rate Distortion Function and Relations to Filtering Theory
|
cs.IT math.IT
|
A causal rate distortion function is defined, its solution is described, and
its relation to filtering theory is discussed. The relation to filtering is
obtained via a causal constraint imposed on the reconstruction kernel to be
realizable.
|
1104.0867
|
Factorised Representations of Query Results
|
cs.DB cs.DS
|
Query tractability has been traditionally defined as a function of input
database and query sizes, or of both input and output sizes, where the query
result is represented as a bag of tuples. In this report, we introduce a
framework that allows us to investigate tractability beyond this setting. The key
insight is that, although the cardinality of a query result can be exponential,
its structure can be very regular and thus factorisable into a nested
representation whose size is only polynomial in the size of both the input
database and query.
For a given query result, there may be several equivalent representations,
and we quantify the regularity of the result by its readability, which is the
minimum over all its representations of the maximum number of occurrences of
any tuple in that representation. We give a characterisation of
select-project-join queries based on the bounds on readability of their results
for any input database. We complement it with an algorithm that can find
asymptotically optimal upper bounds and corresponding factorised
representations.
|
1104.0871
|
Information Storage and Retrieval for Probe Storage using Optical
Diffraction Patterns
|
cs.IT cs.IR math.IT physics.optics
|
A novel method for fast information retrieval from a probe storage device is
considered. It is shown that information can be stored and retrieved using the
optical diffraction patterns obtained by the illumination of a large array of
cantilevers by a monochromatic light source. In thermo-mechanical probe
storage, the information is stored as a sequence of indentations on the polymer
medium. To retrieve the information, the array of probes is actuated by
applying a bending force to the cantilevers. Probes positioned over
indentations experience deflection by the depth of the indentation, while probes
over flat media remain undeflected. Thus the array of actuated probes can be
viewed as an irregular optical grating, which creates a data-dependent
diffraction pattern when illuminated by laser light. We develop a low
complexity modulation scheme, which allows the extraction of information stored
in the pattern of indentations on the media from Fourier coefficients of the
intensity of the diffraction pattern. We then derive a low-complexity maximum
likelihood sequence detection algorithm for retrieving the user information
from the Fourier coefficients. The derivation of both the modulation and the
detection schemes is based on the Fraunhofer formula for data-dependent
diffraction patterns. We show that as long as the Fresnel number F < 0.1, the
optimal channel detector derived from Fraunhofer diffraction theory does not
suffer any significant performance degradation.
|
1104.0888
|
Settling the feasibility of interference alignment for the MIMO
interference channel: the symmetric square case
|
cs.IT math.IT
|
Determining the feasibility conditions for vector space interference
alignment in the K-user MIMO interference channel with constant channel
coefficients has attracted much recent attention yet remains unsolved. The main
result of this paper is restricted to the symmetric square case where all
transmitters and receivers have N antennas, and each user desires d transmit
dimensions. We prove that alignment is possible if and only if the number of
antennas satisfies N >= d(K+1)/2. We also show a necessary condition for
feasibility of alignment with arbitrary system parameters. An algebraic
geometry approach is central to the results.
|
1104.0906
|
Applications of Tauberian Theorem for High-SNR Analysis of Performance
over Fading Channels
|
cs.IT math.IT
|
This paper derives high-SNR asymptotic average error rates over fading
channels by relating them to the outage probability, under mild assumptions.
The analysis is based on the Tauberian theorem for Laplace-Stieltjes transforms
which is grounded on the notion of regular variation, and applies to a wider
range of channel distributions than existing approaches. The theory of regular
variation is argued to be the proper mathematical framework for finding
sufficient and necessary conditions for outage events to dominate high-SNR
error rate performance. It is proved that the diversity order being $d$ and the
cumulative distribution function (CDF) of the channel power gain having
variation exponent $d$ at 0 imply each other, provided that the instantaneous
error rate is upper-bounded by an exponential function of the instantaneous
SNR. High-SNR asymptotic average error rates are derived for specific
instantaneous error rates. Compared to existing approaches in the literature,
the asymptotic expressions are related to the channel distribution in a much
simpler manner herein, and related with outage more intuitively. The high-SNR
asymptotic error rate is also characterized under diversity combining schemes
with the channel power gain of each branch having a regularly varying CDF.
Numerical results are shown to corroborate our theoretical analysis.
|
1104.0923
|
Ordered community structure in networks
|
physics.soc-ph cs.SI
|
Community structure in networks is often a consequence of homophily, or
assortative mixing, based on some attribute of the vertices. For example,
researchers may be grouped into communities corresponding to their research
topic. This is possible if vertex attributes have discrete values, but many
networks exhibit assortative mixing by some continuous-valued attribute, such
as age or geographical location. In such cases, no discrete communities can be
identified. We consider how the notion of community structure can be
generalized to networks that are based on continuous-valued attributes: in
general, a network may contain discrete communities which are ordered according
to their attribute values. We propose a method of generating synthetic ordered
networks and investigate the effect of ordered community structure on the
spread of infectious diseases. We also show that community detection algorithms
fail to recover community structure in ordered networks, and evaluate an
alternative method using a layout algorithm to recover the ordering.
|
1104.0942
|
The Role of Social Networks in Online Shopping: Information Passing,
Price of Trust, and Consumer Choice
|
cs.SI cs.CY physics.soc-ph
|
While social interactions are critical to understanding consumer behavior,
the relationship between social and commerce networks has not been explored on
a large scale. We analyze Taobao, a Chinese consumer marketplace that is the
world's largest e-commerce website. What sets Taobao apart from its competitors
is its integrated instant messaging tool, which buyers can use to ask sellers
about products or ask other buyers for advice. In our study, we focus on how an
individual's commercial transactions are embedded in their social graphs. By
studying triads and the directed closure process, we quantify the presence of
information passing and gain insights into when different types of links form
in the network.
Using seller ratings and review information, we then quantify a price of
trust. How much will a consumer pay for a transaction with a trusted seller? We
conclude by modeling this consumer choice problem: if a buyer wishes to
purchase a particular product, how does (s)he decide which store to purchase it
from? By analyzing the performance of various feature sets in an information
retrieval setting, we demonstrate how the social graph factors into
understanding consumer behavior.
|
1104.0954
|
Multiple Unicast Capacity of 2-Source 2-Sink Networks
|
cs.IT math.IT
|
We study the sum capacity of multiple unicasts in wired and wireless multihop
networks. With 2 source nodes and 2 sink nodes, there are a total of 4
independent unicast sessions (messages), one from each source to each sink node
(this setting is also known as an X network). For wired networks with arbitrary
connectivity, the sum capacity is achieved simply by routing. For wireless
networks, we explore the degrees of freedom (DoF) of multihop X networks with a
layered structure, allowing arbitrary number of hops, and arbitrary
connectivity within each hop. For the case when there are no more than two
relay nodes in each layer, the DoF can only take values 1, 4/3, 3/2 or 2, based
on the connectivity of the network, for almost all values of channel
coefficients. When there are arbitrary number of relays in each layer, the DoF
can also take the value 5/3. Achievability schemes incorporate linear
forwarding, interference alignment and aligned interference neutralization
principles. Information theoretic converse arguments specialized for the
connectivity of the network are constructed based on the intuition from linear
dimension counting arguments.
|
1104.0988
|
Simple proofs for duality of generalized minimum poset weights and
weight distributions of (Near-)MDS poset codes
|
cs.IT math.IT
|
In 1991, Wei introduced generalized minimum Hamming weights for linear codes
and showed their monotonicity and duality. Recently, several authors extended
these results to the case of generalized minimum poset weights by using
different methods. Here, we would like to prove the duality by using matroid
theory. This gives yet another and very simple proof of it. In particular, our
argument will make it clear that the duality follows from the well-known
relation between the rank function and the corank function of a matroid. In
addition, we derive the weight distributions of linear MDS and Near-MDS poset
codes in the same spirit.
|
1104.0992
|
On the Degrees of Freedom Achievable Through Interference Alignment in a
MIMO Interference Channel
|
cs.IT math.AG math.IT
|
Consider a K-user flat fading MIMO interference channel where the k-th
transmitter (or receiver) is equipped with M_k (respectively N_k) antennas. If
a large number of statistically independent channel extensions are allowed
either across time or frequency, the recent work [1] suggests that the total
achievable degrees of freedom (DoF) can be maximized via interference
alignment, resulting in a total DoF that grows linearly with K even if M_k and
N_k are bounded. In this work we consider the case where no channel extension
is allowed, and establish a general condition that must be satisfied by any
degrees of freedom tuple (d_1, d2, ..., d_K) achievable through linear
interference alignment. For a symmetric system with M_k = M, N_k = N, d_k = d
for all k, this condition implies that the total achievable DoF cannot grow
linearly with K, and is in fact no more than K(M + N)/(K + 1). We also show
that this bound is tight when the number of antennas at each transceiver is
divisible by the number of data streams.
|
1104.1014
|
On Secrecy Rate Analysis of MIMO Wiretap Channels Driven by
Finite-Alphabet Input
|
cs.IT math.IT
|
This work investigates the effect of finite-alphabet source input on the
secrecy rate of a multi-antenna wiretap system. Existing works have
characterized maximum achievable secrecy rate or secrecy capacity for single
and multiple-antenna systems based on Gaussian source signals and secrecy codes.
Despite the impracticality of Gaussian sources, the compact closed-form
expression for the mutual information between the Gaussian input and the
corresponding output of a linear channel has led to broad application of the
Gaussian input assumption
in physical secrecy analysis. For practical considerations, we study the effect
of finite discrete constellations on the achievable secrecy rate of
multiple-antenna wire-tap channels. Our proposed precoding scheme converts the
multi-antenna system into a bank of parallel channels. Based on this precoding
strategy, we propose a decentralized power allocation algorithm based on dual
decomposition for maximizing the achievable secrecy rate. In addition, we
analyze the achievable secrecy rate for finite-alphabet inputs in low and high
SNR cases. Our results demonstrate a substantial difference in secrecy rate
between systems with finite-alphabet inputs and systems with Gaussian inputs.
|
1104.1041
|
Compressed Sensing and Matrix Completion with Constant Proportion of
Corruptions
|
cs.IT math.IT stat.ML
|
We improve existing results in the field of compressed sensing and matrix
completion when sampled data may be grossly corrupted. We introduce three new
theorems. 1) In compressed sensing, we show that if the m \times n sensing
matrix has independent Gaussian entries, then one can recover a sparse signal x
exactly by tractable \ell1 minimization even if a positive fraction of the
measurements are arbitrarily corrupted, provided the number of nonzero entries
in x is O(m/(log(n/m) + 1)). 2) In the very general sensing model introduced in
"A probabilistic and RIPless theory of compressed sensing" by Candes and Plan,
and assuming a positive fraction of corrupted measurements, exact recovery
still holds if the signal now has O(m/(log^2 n)) nonzero entries. 3) Finally,
we prove that one can recover an n \times n low-rank matrix from m corrupted
sampled entries by tractable optimization provided the rank is on the order of
O(m/(n log^2 n)); again, this holds when there is a positive fraction of
corrupted samples.
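As a rough illustration of result 1), the joint recovery of a sparse signal and a sparse corruption vector can be posed as a single \ell1 problem, min ||x||_1 + ||e||_1 subject to Ax + e = y, and solved as a linear program. The sketch below is not taken from the paper: the dimensions, the equal weighting of the two \ell1 terms, and the use of SciPy's `linprog` are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def recover_with_corruptions(A, y):
    """Sketch: jointly recover a sparse signal x and a sparse corruption
    vector e from y = A @ x + e by solving
        min ||x||_1 + ||e||_1   s.t.   A x + e = y
    as an LP with the variable splits x = xp - xm, e = ep - em (all >= 0)."""
    m, n = A.shape
    # LP variables: [xp (n), xm (n), ep (m), em (m)]
    c = np.ones(2 * n + 2 * m)                       # sum of all splits = l1 norms
    A_eq = np.hstack([A, -A, np.eye(m), -np.eye(m)]) # A(xp-xm) + (ep-em) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    x_hat = z[:n] - z[n:2 * n]
    e_hat = z[2 * n:2 * n + m] - z[2 * n + m:]
    return x_hat, e_hat
```

With a Gaussian sensing matrix and a combined sparsity well below the number of measurements, the LP typically returns both vectors exactly, mirroring the "positive fraction of corruptions" regime of the theorem at small scale.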
|
1104.1045
|
Tractable Set Constraints
|
cs.AI cs.CC cs.LO
|
Many fundamental problems in artificial intelligence, knowledge
representation, and verification involve reasoning about sets and relations
between sets and can be modeled as set constraint satisfaction problems (set
CSPs). Such problems are frequently intractable, but there are several
important set CSPs that are known to be polynomial-time tractable. We introduce
a large class of set CSPs that can be solved in quadratic time. Our class,
which we call EI, contains all previously known tractable set CSPs, but also
some new ones that are of crucial importance, for example in description logics.
The class of EI set constraints has an elegant universal-algebraic
characterization, which we use to show that every set constraint language that
properly contains all EI set constraints already has a finite sublanguage with
an NP-hard constraint satisfaction problem.
|
1104.1057
|
Bounds on the Capacity of the Relay Channel with Noncausal State at
Source
|
cs.IT math.IT
|
We consider a three-terminal state-dependent relay channel with the channel
state available non-causally at only the source. Such a model may be of
interest for node cooperation in the framework of cognition, i.e.,
collaborative signal transmission involving cognitive and non-cognitive radios.
We study the capacity of this communication model. One principal difficulty is
that the relay does not know the channel state. For the discrete
memoryless (DM) model, we establish two lower bounds and an upper bound on
channel capacity. The first lower bound is obtained by a coding scheme in which
the source describes the state of the channel to the relay and destination,
which then exploit the gained description for a better communication of the
source's information message. The coding scheme for the second lower bound
compensates for the relay's lack of state knowledge by first computing, at the
source, the input that the relay would send had it known the channel states,
and then transmitting this input to the relay. The relay simply guesses the
sent input and sends it in the next
block. The upper bound is nontrivial and accounts for the state being unknown
at the relay and destination. For the general Gaussian model, we derive lower
bounds on the channel capacity by exploiting ideas in the spirit of those we
use for the DM model, and we show that these bounds are optimal for small and
large noise at the relay irrespective of the strength of the interference.
Furthermore, we also consider a special-case model in which the source input
has two components, one of which is independent of the state. We establish a
better upper bound for both DM and Gaussian cases and we also characterize the
capacity in a number of special cases.
|
1104.1071
|
Analysis of Block OMP using Block RIP
|
cs.IT math.IT
|
Orthogonal matching pursuit (OMP) is a canonical greedy algorithm for sparse
signal reconstruction. When the signal of interest is block sparse, i.e., it
has nonzero coefficients occurring in clusters, the block version of OMP
algorithm (i.e., Block OMP) outperforms the conventional OMP. In this paper, we
demonstrate that a new notion of block restricted isometry property (Block
RIP), which is less stringent than standard restricted isometry property (RIP),
can be used for a very straightforward analysis of Block OMP. It is
demonstrated that Block OMP can exactly recover any block K-sparse signal in no
more than K steps if the Block RIP of order K+1 with a sufficiently small
isometry constant is satisfied. Using this result it can be proved that Block
OMP can yield better reconstruction properties than the conventional OMP when
the signal is block sparse.
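A minimal sketch of the Block OMP iteration described above: at each step, select the block of columns whose correlation with the residual has the largest energy, then refit by least squares over the union of selected blocks. The block layout, dimensions, and the fixed iteration count below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_select):
    """Greedy Block OMP sketch for y = A @ x with block-sparse x.
    Columns of A are grouped into contiguous blocks of `block_size`."""
    m, n = A.shape
    assert n % block_size == 0
    num_blocks = n // block_size
    residual = y.copy()
    selected = []
    x_hat = np.zeros(n)
    for _ in range(n_blocks_to_select):
        # Residual correlation energy of each block
        corr = A.T @ residual
        energies = [np.linalg.norm(corr[b * block_size:(b + 1) * block_size])
                    for b in range(num_blocks)]
        best = int(np.argmax(energies))
        if best not in selected:
            selected.append(best)
        # Least-squares refit over the union of selected blocks
        idx = np.concatenate([np.arange(b * block_size, (b + 1) * block_size)
                              for b in selected])
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[idx] = coef
        residual = y - A @ x_hat
    return x_hat
```

In the exact-recovery regime the abstract describes, the true block support is identified within K steps and the final least-squares fit reproduces the signal.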
|
1104.1074
|
SAR Imaging of Moving Targets via Compressive Sensing
|
cs.IT math.IT
|
An algorithm based on compressive sensing (CS) is proposed for synthetic
aperture radar (SAR) imaging of moving targets. The received SAR echo is
decomposed into the sum of basis sub-signals, which are generated by
discretizing the target spatial domain and velocity domain and synthesizing the
SAR received data for every discretized spatial position and velocity
candidate. In this way, the SAR imaging problem is converted into a sub-signal
selection problem. In the case that moving targets are sparsely distributed in
the observed scene, their reflectivities, positions and velocities can be
obtained by using the CS technique. It is shown that, compared with traditional
algorithms, the target image obtained by the proposed algorithm has higher
resolution and lower side-lobes, while the required number of measurements can
be an order of magnitude smaller than that required by sampling at the Nyquist
rate.
Moreover, multiple targets with different speeds can be imaged simultaneously,
so the proposed algorithm has higher efficiency.
|
1104.1075
|
Super Critical and Sub Critical Regimes of Percolation with Secure
Communication
|
cs.IT math.IT
|
Percolation in an information-theoretically secure graph is considered where
both the legitimate and the eavesdropper nodes are distributed as Poisson point
processes. For both the path-loss and the path-loss plus fading model, upper
and lower bounds on the minimum density of the legitimate nodes (as a function
of the density of the eavesdropper nodes) required for non-zero probability of
having an unbounded cluster are derived. The lower bound is universal in
nature, i.e. the constant does not depend on the density of the eavesdropper
nodes.
|
1104.1155
|
Modulation Diversity in Fading Channels with Quantized Receiver
|
cs.IT math.IT
|
In this paper, we address the design of codes which achieve modulation
diversity in block fading single-input single-output (SISO) channels with
signal quantization at receiver and low-complexity decoding. With an
unquantized receiver, coding based on algebraic rotations is known to achieve
modulation diversity. On the other hand, with a quantized receiver,
algebraic rotations may not guarantee diversity. Through analysis, we propose
specific rotations which result in the codewords having equidistant
component-wise projections. We show that the proposed coding scheme achieves
maximum modulation diversity with a low-complexity minimum distance decoder and
perfect channel knowledge. Relaxing the perfect channel knowledge assumption we
propose a novel training/estimation and receiver control technique to estimate
the channel. We show that our coding/training/estimation scheme and minimum
distance decoding achieve an error probability performance similar to that
achieved with perfect channel knowledge.
|
1104.1157
|
Accelerated Dual Descent for Network Optimization
|
math.OC cs.SY
|
Dual descent methods are commonly used to solve network optimization problems
because their implementation can be distributed through the network. However,
their convergence rates are typically very slow. This paper introduces a family
of dual descent algorithms that use approximate Newton directions to accelerate
the convergence rate of conventional dual descent. These approximate directions
can be computed using local information exchanges thereby retaining the
benefits of distributed implementations. The approximate Newton directions are
obtained through matrix splitting techniques and sparse Taylor approximations
of the inverse Hessian. We show that, similarly to conventional Newton methods,
the proposed algorithm exhibits superlinear convergence within a neighborhood
of the optimal value. Numerical analysis corroborates that convergence times
are between one and two orders of magnitude faster than existing distributed
optimization methods. A connection with recent developments that use consensus
iterations to compute approximate Newton directions is also presented.
|
1104.1159
|
LTL Control in Uncertain Environments with Probabilistic Satisfaction
Guarantees
|
math.OC cs.RO cs.SY
|
We present a method to generate a robot control strategy that maximizes the
probability to accomplish a task. The task is given as a Linear Temporal Logic
(LTL) formula over a set of properties that can be satisfied at the regions of
a partitioned environment. We assume that the probabilities with which the
properties are satisfied at the regions are known, and the robot can determine
the truth value of a proposition only at the current region. Motivated by
several results on partition-based abstractions, we assume that the motion is
performed on a graph. To account for noisy sensors and actuators, we assume
that a control action enables several transitions with known probabilities. We
show that this problem can be reduced to the problem of generating a control
policy for a Markov Decision Process (MDP) such that the probability of
satisfying an LTL formula over its states is maximized. We provide a complete
solution for the latter problem that builds on existing results from
probabilistic model checking. We include an illustrative case study.
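The core of the reduction is the maximal reachability probability problem on an MDP, which probabilistic model checking solves by value iteration. Below is a minimal sketch under the assumption that the LTL formula has already been reduced to reaching a target set in a product MDP; the state numbering, transition encoding, and fixed iteration count are illustrative choices.

```python
def max_reach_prob(n_states, actions, target, n_iter=200):
    """Value iteration for the maximal probability of eventually reaching
    `target` in an MDP. `actions[s]` is a list of available transition
    distributions at state s, each a list of (next_state, prob) pairs.
    Returns one reachability probability per state."""
    V = [1.0 if s in target else 0.0 for s in range(n_states)]
    for _ in range(n_iter):
        # Bellman update: best action maximizes expected next-step value
        V = [1.0 if s in target else
             max(sum(p * V[t] for t, p in dist) for dist in actions[s])
             for s in range(n_states)]
    return V
```

The maximizing action at each state, read off from the final values, is the control policy the abstract refers to.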
|
1104.1190
|
A novel approach for determining fatigue resistances of different muscle
groups in static cases
|
cs.RO
|
In ergonomics and biomechanics, muscle fatigue models based on maximum
endurance time (MET) models are often used to integrate the fatigue effect into
ergonomic and biomechanical applications. However, due to the empirical
principle of those MET models, the disadvantages of this method are: 1) the MET
models cannot reveal the underlying muscle physiology very well; 2) there is no
general formulation of those MET models for predicting MET. In this paper, a
theoretical MET model is extended from a simple muscle fatigue model with
consideration of the external load and maximum voluntary contraction in passive
static exertion cases. The universal applicability of the extended MET model is
analyzed in comparison to 24 existing empirical MET models. Using a
mathematical regression method, 21 of the 24 MET models have intraclass
correlations over
0.9, which means the extended MET model could replace the existing MET models
in a general and computationally efficient way. In addition, an important
parameter, fatigability (or fatigue resistance) of different muscle groups,
could be calculated via the mathematical regression approach. Its mean value
and its standard deviation are useful for predicting MET values of a given
population during static operations. The possible factors influencing fatigue
resistance were classified and discussed, and it remains a very challenging
task to determine the quantitative relationship between fatigue resistance and
these influencing factors.
|
1104.1191
|
Can virtual reality predict body part discomfort and performance of
people in realistic world for assembling tasks?
|
cs.RO
|
This paper presents our work on the relationship between evaluation results in
a virtual environment (VE) and a realistic environment (RE) for assembling
tasks.
Evaluation results consist of subjective results (BPD and RPE) and objective
results (posture and physical performance). Same tasks were performed with same
experimental configurations and evaluation results were measured in RE and VE
respectively. Then these evaluation results were compared. A slight difference
in posture between VE and RE was found, but no great difference in the effect
on people according to a conventional ergonomic posture assessment method.
Correlations of the BPD and performance results between VE and RE were found by
the linear regression method. Moreover, the results of BPD, physical
performance, and RPE in VE are significantly higher than those in RE.
Furthermore, these results indicate that subjects feel more discomfort and
fatigue in VE than in RE because of the additional effort required in VE.
|
1104.1200
|
Modularity maximization and tree clustering: Novel ways to determine
effective geographic borders
|
physics.soc-ph cs.SI
|
Territorial subdivisions and geographic borders are essential for
understanding phenomena in sociology, political science, history, and
economics. They influence the interregional flow of information and
cross-border trade and affect the diffusion of innovation and technology.
However, most existing administrative borders were determined by a variety of
historic and political circumstances along with some degree of arbitrariness.
Societies have changed drastically, and it is doubtful that currently existing
borders reflect the most logical divisions. Fortunately, at this point in
history we are in a position to actually measure some aspects of the geographic
structure of society through human mobility. Large-scale transportation systems
such as trains and airlines provide data about the number of people traveling
between geographic locations, and many promising human mobility proxies are
being discovered, such as cell phones, bank notes, and various online social
networks. In this chapter we apply two optimization techniques to a human
mobility proxy (bank note circulation) to investigate the effective geographic
borders that emerge from a direct analysis of human mobility.
|
1104.1217
|
On Conditions for Linearity of Optimal Estimation
|
cs.IT math.IT
|
When is optimal estimation linear? It is well known that, when a Gaussian
source is contaminated with Gaussian noise, a linear estimator minimizes the
mean square estimation error. This paper analyzes, more generally, the
conditions for linearity of optimal estimators. Given a noise (or source)
distribution, and a specified signal to noise ratio (SNR), we derive conditions
for existence and uniqueness of a source (or noise) distribution for which the
$L_p$ optimal estimator is linear. We then show that, if the noise and source
variances are equal, then the matching source must be distributed identically
to the noise. Moreover, we prove that the Gaussian source-channel pair is
unique in the sense that it is the only source-channel pair for which the mean
square error (MSE) optimal estimator is linear at more than one SNR value.
Further, we show the asymptotic linearity of MSE optimal estimators for low SNR
if the channel is Gaussian regardless of the source and, vice versa, for high
SNR if the source is Gaussian regardless of the channel. The extension to the
vector case is also considered where besides the conditions inherited from the
scalar case, additional constraints must be satisfied to ensure linearity of
the optimal estimator.
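The classical Gaussian case mentioned above can be checked numerically: for y = x + n with independent zero-mean Gaussians, the MMSE estimator is the linear map E[x|y] = (var_x / (var_x + var_n)) y, with MMSE var_x * var_n / (var_x + var_n). A small Monte Carlo sketch follows; the variances and sample size are arbitrary choices, not from the paper.

```python
import numpy as np

def lmmse_gain(var_x, var_n):
    # For y = x + n with independent zero-mean Gaussians, E[x|y] = g * y
    return var_x / (var_x + var_n)

rng = np.random.default_rng(1)
var_x, var_n = 2.0, 1.0
x = rng.normal(0.0, np.sqrt(var_x), 200_000)
y = x + rng.normal(0.0, np.sqrt(var_n), 200_000)

g = lmmse_gain(var_x, var_n)
empirical_mse = np.mean((x - g * y) ** 2)        # MSE of the linear estimator
theoretical_mmse = var_x * var_n / (var_x + var_n)  # = 2/3 for these variances
```

For a non-Gaussian source or noise at the same SNR, the same linear gain is generally no longer optimal, which is exactly the distinction the abstract's matching conditions characterize.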
|