| id | title | categories | abstract |
|---|---|---|---|
1010.1044
|
On the Capacity of the $K$-User Cyclic Gaussian Interference Channel
|
cs.IT math.IT
|
This paper studies the capacity region of a $K$-user cyclic Gaussian
interference channel, where the $k$th user interferes with only the $(k-1)$th
user (mod $K$) in the network. Inspired by the work of Etkin, Tse and Wang, who
derived a capacity region outer bound for the two-user Gaussian interference
channel and proved that a simple Han-Kobayashi power splitting scheme can
achieve to within one bit of the capacity region for all values of channel
parameters, this paper shows that a similar strategy also achieves the capacity
region of the $K$-user cyclic interference channel to within a constant gap in
the weak interference regime. Specifically, for the $K$-user cyclic Gaussian
interference channel, a compact representation of the Han-Kobayashi achievable
rate region using Fourier-Motzkin elimination is first derived, a capacity
region outer bound is then established. It is shown that the Etkin-Tse-Wang
power splitting strategy gives a constant gap of at most 2 bits in the weak
interference regime. For the special 3-user case, this gap can be sharpened to
1 1/2 bits by time-sharing of several different strategies. The capacity result
of the $K$-user cyclic Gaussian interference channel in the strong interference
regime is also given. Further, based on the capacity results, this paper
studies the generalized degrees of freedom (GDoF) of the symmetric cyclic
interference channel. It is shown that the GDoF of the symmetric capacity is
the same as that of the classic two-user interference channel, no matter how
many users are in the network.
|
1010.1052
|
Mixed integer programming for the resolution of GPS carrier phase
ambiguities
|
cs.IT cs.DM math.IT
|
This arXiv upload is to clarify that the now well-known sorted QR MIMO
decoder was first presented in the 1995 IUGG General Assembly. We clearly go
much further in the sense that we directly incorporated reduction into this
one-step, non-exact, suboptimal integer solution. Except for these first few
lines, this paper is an unaltered version of the paper presented at the
IUGG 1995 Assembly in Boulder.
Ambiguity resolution of GPS carrier phase observables is crucial in high
precision geodetic positioning and navigation applications. It consists of two
aspects: estimating the integer ambiguities in the mixed integer observation
model and examining whether they are sufficiently accurate to be fixed as known
nonrandom integers. We shall discuss the first point in this paper from the
point of view of integer programming. A one-step nonexact approach is proposed
by employing minimum diagonal pivoting Gaussian decompositions, which may be
thought of as an improvement of the simple rounding-off method, since the
weights and correlations of the floating-estimated ambiguities are fully taken
into account. The second approach is to reformulate the mixed integer least
squares problem into the standard 0-1 linear integer programming model, which
can then be solved by using, for instance, the practically robust and efficient
simplex algorithm for linear integer programming. It is exact, if proper bounds
for the ambiguities are given. Theoretical results on decorrelation by
unimodular transformation are given in the form of a theorem.
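The contrast between the simple rounding-off baseline and an exact bounded search can be sketched as follows; the float estimates, the covariance, and the search box are hypothetical illustration values, not from the paper:

```python
import itertools

import numpy as np

# A hedged sketch of two ideas from the abstract, on hypothetical data:
# (1) the simple rounding-off baseline, which ignores correlations, and
# (2) an exact search over a bounded integer box that minimizes the
# weighted least-squares objective (a - z)^T Q^{-1} (a - z).

a = np.array([3.4, -1.6])                  # float ambiguity estimates
Q = np.array([[0.5, 0.4], [0.4, 0.5]])     # their covariance (correlated)
Qinv = np.linalg.inv(Q)

naive = np.round(a).astype(int)            # simple rounding-off

# Exact search within +/-2 of the naive solution (bounds assumed known).
best, best_cost = None, np.inf
for z in itertools.product(*[range(v - 2, v + 3) for v in naive]):
    r = a - np.array(z)
    cost = float(r @ Qinv @ r)
    if cost < best_cost:
        best, best_cost = np.array(z), cost

print(naive, best)
```

The bounded enumeration is what makes the 0-1 reformulation exact "if proper bounds for the ambiguities are given"; the baseline ignores the off-diagonal weights entirely.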
|
1010.1060
|
On the Capacity of Multi-Hop Wireless Networks with Partial Network
Knowledge
|
cs.IT math.IT
|
In large wireless networks, acquiring full network state information is
typically infeasible. Hence, nodes need to route information and manage
interference based on partial information about the network. In this paper, we
consider multi-hop wireless networks and assume that each source only knows the
channel gains that are on the routes from itself to other destinations in the
network. We develop several distributed strategies to manage the interference
among the users and prove their optimality in maximizing the achievable
normalized sum-rate for some classes of networks.
|
1010.1069
|
Cooperative Distributed Sequential Spectrum Sensing
|
cs.IT math.IT stat.AP
|
We consider cooperative spectrum sensing for cognitive radios. We develop an
energy efficient detector with low detection delay using sequential hypothesis
testing. Sequential Probability Ratio Test (SPRT) is used at both the local
nodes and the fusion center. We also analyse the performance of this algorithm
and compare the analysis with simulations. Uncertainties in the distribution
parameters are also modelled. Slow fading with and without perfect channel state
information at the cognitive radios is taken into account.
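The local sequential test described above can be sketched as follows; the Gaussian hypotheses, the mean `mu`, and the thresholds `a` and `b` are illustrative assumptions, not the paper's setup:

```python
# A minimal sketch of a sequential probability ratio test at a single
# node: decide between H0 (noise only, samples ~ N(0, 1)) and H1
# (primary user present, samples ~ N(mu, 1)). The Gaussian model, mu,
# and the thresholds a, b are illustrative assumptions.

def sprt(samples, mu=1.0, a=-4.0, b=4.0):
    """Return (decision, number of samples consumed)."""
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += mu * x - mu**2 / 2   # log-likelihood ratio increment
        if llr >= b:
            return "H1", n          # declare primary user present
        if llr <= a:
            return "H0", n          # declare channel free
    return "undecided", len(samples)

print(sprt([2.0] * 10))   # → ('H1', 3)
```

The test stops as soon as the accumulated log-likelihood ratio crosses either threshold, which is what gives the low expected detection delay.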
|
1010.1071
|
A Novel Algorithm for Cooperative Distributed Sequential Spectrum
Sensing in Cognitive Radio
|
cs.IT math.IT stat.AP
|
This paper considers cooperative spectrum sensing in Cognitive Radios. In our
previous work we have developed DualSPRT, a distributed algorithm for
cooperative spectrum sensing using Sequential Probability Ratio Test (SPRT) at
the Cognitive Radios as well as at the fusion center. This algorithm works
well, but is not optimal. In this paper we propose an improved algorithm,
SPRT-CSPRT, which is motivated by cumulative sum (CUSUM) procedures. We
analyse it theoretically. We also modify this algorithm to handle uncertainties
in SNR's and fading.
|
1010.1147
|
XML Query Processing and Query Languages: A Survey
|
cs.DB cs.DS
|
Today's databases must interoperate across different domains and
applications, which makes data portability increasingly important. The XML
format fits these requirements and has been increasingly adopted by
applications across different domains and purposes. However, querying XML
documents effectively and efficiently is still a
challenging issue. This paper discusses query processing issues on XML and
reviews proposed solutions for querying XML databases by various authors.
|
1010.1149
|
Bang--bang trajectories with a double switching time: sufficient strong
local optimality conditions
|
math.OC cs.SY
|
This paper gives sufficient conditions for a class of bang-bang extremals
with multiple switches to be locally optimal in the strong topology. The
conditions are the natural generalizations of the ones considered in previous
papers for more specific cases. We require both the strict bang-bang Legendre
condition, and the second order conditions for the finite dimensional problem
obtained by moving the switching times of the reference trajectory.
|
1010.1163
|
Secrecy Capacity of the Gaussian Wire-Tap Channel with Finite Complex
Constellation Input
|
cs.IT math.IT
|
The secrecy capacity of a discrete memoryless Gaussian Wire-Tap Channel when
the input is from a finite complex constellation is studied. It is shown that
the secrecy capacity curve of a finite constellation, plotted against the SNR
for a fixed noise variance of the eavesdropper's channel, has a global maximum
at an interior point. This is in contrast to what is known in the case of
Gaussian codebook input where the secrecy capacity curve is a bounded,
monotonically increasing function of SNR. Secrecy capacity curves for some well
known constellations like BPSK, 4-QAM, 16-QAM and 8-PSK are plotted and the SNR
at which the maximum occurs is found through simulation. It is conjectured that
the secrecy capacity curves for finite constellations have a single maximum.
|
1010.1256
|
Entanglement-assisted quantum turbo codes
|
quant-ph cs.IT math.IT
|
An unexpected breakdown in the existing theory of quantum serial turbo coding
is that a quantum convolutional encoder cannot simultaneously be recursive and
non-catastrophic. These properties are essential for quantum turbo code
families to have a minimum distance growing with blocklength and for their
iterative decoding algorithm to converge, respectively. Here, we show that the
entanglement-assisted paradigm simplifies the theory of quantum turbo codes, in
the sense that an entanglement-assisted quantum (EAQ) convolutional encoder can
possess both of the aforementioned desirable properties. We give several
examples of EAQ convolutional encoders that are both recursive and
non-catastrophic and detail their relevant parameters. We then modify the
quantum turbo decoding algorithm of Poulin et al., in order to have the
constituent decoders pass along only "extrinsic information" to each other
rather than a posteriori probabilities as in the decoder of Poulin et al., and
this leads to a significant improvement in the performance of unassisted
quantum turbo codes. Other simulation results indicate that
entanglement-assisted turbo codes can operate reliably in a noise regime 4.73
dB beyond that of standard quantum turbo codes, when used on a memoryless
depolarizing channel. Furthermore, several of our quantum turbo codes are
within 1 dB or less of their hashing limits, so that the performance of quantum
turbo codes is now on par with that of classical turbo codes. Finally, we prove
that entanglement is the resource that enables a convolutional encoder to be
both non-catastrophic and recursive because an encoder acting on only
information qubits, classical bits, gauge qubits, and ancilla qubits cannot
simultaneously satisfy them.
|
1010.1286
|
Exact Hamming Distortion Analysis of Viterbi Encoded Trellis Coded
Quantizers
|
cs.IT math.IT
|
Let G be a finite strongly connected aperiodic directed graph in which each
edge carries a label from a finite alphabet A. Then G induces a trellis coded
quantizer for encoding an alphabet A memoryless source. A source sequence of
long finite length is encoded by finding a path in G of that length whose
sequence of labels is closest in Hamming distance to the source sequence;
finding the minimum distance path is a dynamic programming problem that is
solved using the Viterbi algorithm. We show how a Markov chain can be used to
obtain a closed form expression for the asymptotic expected Hamming distortion
per sample that results as the number of encoded source samples increases
without bound.
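The dynamic-programming encoding step can be sketched on a toy two-node labeled graph (illustrative, not from the paper); only the minimum distortion is tracked, not the path itself:

```python
# A toy instance of the encoding step: a strongly connected, aperiodic
# labeled graph (illustrative, not from the paper) and a dynamic program
# that finds the minimum Hamming distance between the source sequence
# and the label sequence of any path of the same length.

# edges[node] = list of (next_node, label); node 1 can only emit a 0.
edges = {0: [(0, 0), (1, 1)], 1: [(0, 0)]}

def viterbi_encode(source):
    # cost[v] = least distortion of any length-t path ending at node v
    cost = {v: 0 for v in edges}
    for symbol in source:
        new_cost = {v: float("inf") for v in edges}
        for u in edges:
            for v, label in edges[u]:
                c = cost[u] + (label != symbol)
                if c < new_cost[v]:
                    new_cost[v] = c
        cost = new_cost
    return min(cost.values())

# This graph cannot emit two consecutive 1s, so one substitution occurs.
print(viterbi_encode([1, 1, 0]))   # → 1
```

The Markov-chain analysis in the paper characterizes the long-run average of exactly this per-sequence minimum distortion.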
|
1010.1295
|
Optimal Packet Scheduling in an Energy Harvesting Communication System
|
cs.IT cs.NI math.IT
|
We consider the optimal packet scheduling problem in a single-user energy
harvesting wireless communication system. In this system, both the data packets
and the harvested energy are modeled to arrive at the source node randomly. Our
goal is to adaptively change the transmission rate according to the traffic
load and available energy, such that the time by which all packets are
delivered is minimized. Under a deterministic system setting, we assume that
the energy harvesting times and harvested energy amounts are known before the
transmission starts. For the data traffic arrivals, we consider two different
scenarios. In the first scenario, we assume that all bits have arrived and are
ready at the transmitter before the transmission starts. In the second
scenario, we consider the case where packets arrive during the transmissions,
with known arrival times and sizes. We develop optimal off-line scheduling
policies which minimize the time by which all packets are delivered to the
destination, under causality constraints on both data and energy arrivals.
|
1010.1303
|
Error Exponent for Multiple-Access Channels: Lower Bounds
|
cs.IT math.IT
|
A unified framework to obtain all known lower bounds (random coding, typical
random coding and expurgated bound) on the reliability function of a
point-to-point discrete memoryless channel (DMC) is presented. By using a
similar idea for a two-user discrete memoryless (DM) multiple-access channel
(MAC), three lower bounds on the reliability function are derived. The first
one (random coding) is identical to the best known lower bound on the
reliability function of DM-MAC. It is shown that the random coding bound is the
performance of the average code in the constant composition code ensemble. The
second bound (typical random coding) is the typical performance of the constant
composition code ensemble. To derive the third bound (expurgated), we eliminate
some of the codewords from a codebook of larger rate. This is the first
bound of this type that explicitly uses the method of expurgation for MACs. It
is shown that the exponent of the typical random coding and the expurgated
bounds are greater than or equal to the exponent of the known random coding
bounds for all rate pairs. Moreover, an example is given where the exponent of
the expurgated bound is strictly larger. All these bounds can be universally
obtained for all discrete memoryless MACs with given input and output
alphabets.
|
1010.1309
|
Probing Capacity
|
cs.IT math.IT
|
We consider the problem of optimal probing of the states of a channel by the
transmitter and receiver so as to maximize the rate of reliable communication.
channel is discrete memoryless (DMC) with i.i.d. states. The encoder takes
probing actions dependent on the message. It then uses the state information
obtained from probing causally or non-causally to generate channel input
symbols. The decoder may also take channel probing actions as a function of the
observed channel output and use the channel state information thus acquired,
along with the channel output, to estimate the message. We refer to the maximum
achievable rate for reliable communication for such systems as the 'Probing
Capacity'. We characterize this capacity when the encoder and decoder actions
are cost constrained. To motivate the problem, we begin by characterizing the
trade-off between the capacity and fraction of channel states the encoder is
allowed to observe, while the decoder is aware of channel states. In this
setting of 'to observe or not to observe' state at the encoder, we compute
certain numerical examples and note a pleasing phenomenon: the encoder can
observe a relatively small fraction of the states and yet communicate at the
maximum rate, i.e., the rate achieved when state observation at the encoder is
not cost constrained.
|
1010.1317
|
Typicality Graphs: Large Deviation Analysis
|
cs.IT math.IT
|
Let $\mathcal{X}$ and $\mathcal{Y}$ be finite alphabets and $P_{XY}$ a joint
distribution over them, with $P_X$ and $P_Y$ representing the marginals. For
any $\epsilon > 0$, the set of $n$-length sequences $x^n$ and $y^n$ that are
jointly typical \cite{ckbook} according to $P_{XY}$ can be represented on a
bipartite graph. We present a formal definition of such a graph, known as a
\emph{typicality} graph, and study some of its properties.
|
1010.1322
|
A New Upper Bound on the Average Error Exponent for Multiple-Access
Channels
|
cs.IT math.IT
|
A new lower bound on the average probability of error for a two-user
discrete memoryless (DM) multiple-access channel (MAC) is derived. This bound
has a structure very similar to the well-known sphere-packing bound
derived by Haroutunian. However, since it explicitly imposes independence of the
users' input distributions (conditioned on the time-sharing auxiliary variable),
it results in a tighter sphere-packing exponent than Haroutunian's.
Also, the relationship between average and maximal error probabilities is
studied. Finally, by using a known sphere packing bound on the maximal
probability of error, a lower bound on the average error probability is
derived.
|
1010.1328
|
Descriptive and Computational Complexity in Small Turing Machines
|
cs.CC cs.IT math.IT
|
We start with an introduction to the basic concepts of computability theory,
the concept of a Turing machine, and computation universality. We then turn to
the exploration of trade-offs between different measures of
complexity, particularly algorithmic (program-size) and computational (time)
complexity, as a means to explain these measures in a novel manner. The
investigation proceeds by an exhaustive exploration and systematic study of the
functions computed by a large set of small Turing machines with 2 and 3 states
with particular attention to runtimes, space-usages and patterns corresponding
to the computed functions when the machines have access to larger resources
(more states).
We report that the average runtime of Turing machines computing a function
increases as a function of the number of states, indicating that non-trivial
machines tend to occupy all the resources at hand. General slow-down was
witnessed and some incidental cases of (linear) speed-up were found. Throughout
our study various interesting structures were encountered. We unveil a study of
structures in the micro-cosmos of small Turing machines.
|
1010.1331
|
Improved Combinatorial Algorithms for Wireless Information Flow
|
cs.IT math.IT
|
Avestimehr et al. (2007) proposed a deterministic model
for wireless networks and characterized the unicast capacity C of such networks
as the minimum rank of the adjacency matrices describing all possible
source-destination cuts. Amaudruz & Fragouli first proposed a polynomial-time
algorithm for finding the unicast capacity of a linear deterministic wireless
network in their 2009 paper. In this work, we improve upon Amaudruz &
Fragouli's work and further reduce the computational complexity of the
algorithm by fully exploring the useful combinatorial features intrinsic in the
problem. Our improvement applies generally with any size of finite fields
associated with the channel model. Compared with other algorithms for solving
the same problem, our improved algorithm is very competitive in terms of
complexity.
|
1010.1358
|
Tight exponential analysis of universally composable privacy
amplification and its applications
|
cs.IT cs.CR math.IT
|
Motivated by the desirability of universal composability, we analyze in terms
of L_1 distinguishability the task of secret key generation from a joint random
variable. Under this secrecy criterion, using the Renyi entropy of order 1+s
for s in [0,1], we derive a new upper bound on Eve's distinguishability under
the application of the universal2 hash functions. It is also shown that this
bound gives the tight exponential rate of decrease in the case of independent
and identical distributions. The result is applied to the wire-tap channel
model and to secret key generation (distillation) by public discussion.
|
1010.1391
|
On the Full Column-Rank Condition of the Channel Estimation Matrix in
Doubly-Selective MIMO-OFDM Systems
|
cs.IT math.IT
|
Recently, this journal has published a paper which dealt with basis expansion
model (BEM) based least-squares (LS) channel estimation in doubly-selective
orthogonal frequency-division multiplexing (DS-OFDM) systems. The least-squares
channel estimator computes the pseudo-inverse of a channel estimation matrix.
For the existence of the pseudo-inverse, it is necessary that the channel
estimation matrix be of full column rank. In this paper, we investigate the
conditions that must be satisfied to ensure the full column-rank
property of the channel estimation matrix. In particular, we derive conditions
that the BEM and pilot pattern designs should satisfy to ensure that the
channel estimation matrix is of full column rank. We explore the polynomial BEM
(P-BEM), complex exponential BEM (CE-BEM), Slepian BEM (S-BEM) and generalized
complex exponential BEM (GCE-BEM). We present one possible way to design the
pilot patterns which satisfy the full column-rank conditions. Furthermore, the
proposed method is extended to the case of multiple-input multiple-output
(MIMO) DS-OFDM systems as well. Examples of pilot pattern designs are
presented, that ensure the channel estimation matrix is of full column rank for
a large DS-MIMO-OFDM system with as many as six transmitters, six receivers and
1024 subcarriers.
|
1010.1429
|
Optimizing Monotone Functions Can Be Difficult
|
cs.NE
|
Extending previous analyses on function classes like linear functions, we
analyze how the simple (1+1) evolutionary algorithm optimizes pseudo-Boolean
functions that are strictly monotone. Contrary to what one would expect, not
all of these functions are easy to optimize. The choice of the constant $c$ in
the mutation probability $p(n) = c/n$ can make a decisive difference.
We show that if $c < 1$, then the (1+1) evolutionary algorithm finds the
optimum of every such function in $\Theta(n \log n)$ iterations. For $c=1$, we
can still prove an upper bound of $O(n^{3/2})$. However, for $c > 33$, we
present a strictly monotone function such that the (1+1) evolutionary algorithm
with overwhelming probability does not find the optimum within $2^{\Omega(n)}$
iterations. This is the first time that we observe that a constant factor
change of the mutation probability changes the run-time by more than constant
factors.
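The (1+1) evolutionary algorithm with mutation probability $c/n$ can be sketched as follows; OneMax (the number of ones) stands in as an easy strictly monotone function, and is NOT the hard monotone function constructed in the paper:

```python
import random

# A minimal sketch of the (1+1) evolutionary algorithm with mutation
# probability p(n) = c/n, run on OneMax as an illustrative (easy)
# strictly monotone pseudo-Boolean function.

def one_plus_one_ea(n, c, fitness, max_iters=100_000, seed=1):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(1, max_iters + 1):
        # Flip each bit independently with probability c/n.
        y = [b ^ (rng.random() < c / n) for b in x]
        if fitness(y) >= fitness(x):   # accept offspring if no worse
            x = y
        if fitness(x) == n:            # optimum of OneMax reached
            return t
    return None                        # budget exhausted

print(one_plus_one_ea(n=30, c=0.9, fitness=sum))   # iterations needed
```

With $c < 1$, as in this run, the paper's $\Theta(n \log n)$ bound applies; the hard cases arise only for sufficiently large constants $c$.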
|
1010.1437
|
Mixed-Membership Stochastic Block-Models for Transactional Networks
|
stat.ML cs.AI cs.SI stat.AP stat.ME
|
Transactional network data can be thought of as a list of one-to-many
communications (e.g., email) between nodes in a social network. Most social
network models convert this type of data into binary relations between pairs of
nodes. We develop a latent mixed membership model capable of modeling richer
forms of transactional network data, including relations between more than two
nodes. The model can cluster nodes and predict transactions. The block-model
nature of the model implies that groups can be characterized in very general
ways. This flexible notion of group structure enables discovery of rich
structure in transactional networks. Estimation and inference are accomplished
via a variational EM algorithm. Simulations indicate that the learning
algorithm can recover the correct generative model. Interesting structure is
discovered in the Enron email dataset and another dataset extracted from the
Reddit website. Analysis of the Reddit data is facilitated by a novel
performance measure for comparing two soft clusterings. The new model is
superior at discovering mixed membership in groups and in predicting
transactions.
|
1010.1438
|
Coalition Formation Games for Distributed Cooperation Among Roadside
Units in Vehicular Networks
|
cs.IT cs.GT cs.SY math.IT
|
Vehicle-to-roadside (V2R) communications enable vehicular networks to support
a wide range of applications for enhancing the efficiency of road
transportation. While existing work focused on non-cooperative techniques for
V2R communications between vehicles and roadside units (RSUs), this paper
investigates novel cooperative strategies among the RSUs in a vehicular
network. We propose a scheme whereby, through cooperation, the RSUs in a
vehicular network can coordinate the classes of data being transmitted through
V2R communications links to the vehicles. This scheme improves the diversity of
the information circulating in the network while exploiting the underlying
content-sharing vehicle-to-vehicle communication network. We model the problem
as a coalition formation game with transferable utility and we propose an
algorithm for forming coalitions among the RSUs. For coalition formation, each
RSU can take an individual decision to join or leave a coalition, depending on
its utility which accounts for the generated revenues and the costs for
coalition coordination. We show that the RSUs can self-organize into a
Nash-stable partition and adapt this partition to environmental changes.
Simulation results show that, depending on different scenarios, coalition
formation presents a performance improvement, in terms of the average payoff
per RSU, ranging between 20.5% and 33.2%, relative to the non-cooperative case.
|
1010.1456
|
A Hybrid Parallelization of AIM for Multi-Core Clusters: Implementation
Details and Benchmark Results on Ranger
|
cs.CE cs.MS
|
This paper presents implementation details and empirical results for a hybrid
message passing and shared memory parallelization of the adaptive integral
method (AIM). AIM is implemented on a (near) petaflop supercomputing cluster of
quad-core processors and its accuracy, complexity, and scalability are
investigated by solving benchmark scattering problems. The timing and speedup
results on up to 1024 processors show that the hybrid MPI/OpenMP
parallelization of AIM exhibits better strong scalability (fixed problem size
speedup) than pure MPI parallelization of it when multiple cores are used on
each processor.
|
1010.1496
|
Profile Based Sub-Image Search in Image Databases
|
cs.CV cs.IR cs.MM
|
Sub-image search with high accuracy in natural images still remains a
challenging problem. This paper proposes a new feature vector called profile
for a keypoint in a bag of visual words model of an image. The profile of a
keypoint captures the spatial geometry of all the other keypoints in an image
with respect to itself, and is very effective in discriminating true matches
from false matches. Sub-image search using profiles is a single-phase process
requiring no geometric validation, yields high precision on natural images, and
works well with a small visual codebook. The proposed search technique differs from
traditional methods that first generate a set of candidates disregarding
spatial information and then verify them geometrically. Conventional methods
also use large codebooks. We achieve a precision of 81% on a combined data set
of synthetic and real natural images using a codebook size of 500 for top-10
queries; that is 31% higher than the conventional candidate generation
approach.
|
1010.1499
|
Completely Stale Transmitter Channel State Information is Still Very
Useful
|
cs.IT math.IT
|
Transmitter channel state information (CSIT) is crucial for the multiplexing
gains offered by advanced interference management techniques such as multiuser
MIMO and interference alignment. Such CSIT is usually obtained by feedback from
the receivers, but the feedback is subject to delays. The usual approach is to
use the fed back information to predict the current channel state and then
apply a scheme designed assuming perfect CSIT. When the feedback delay is large
compared to the channel coherence time, such a prediction approach completely
fails to achieve any multiplexing gain. In this paper, we show that even in
this case, the completely stale CSI is still very useful. More concretely, we
show that in a MIMO broadcast channel with $K$ transmit antennas and $K$
receivers each with 1 receive antenna, $\frac{K}{1+\frac{1}{2}+\cdots+\frac{1}{K}}\,(>1)$
degrees of freedom are achievable even when the fed back channel state is
completely independent of the current channel state. Moreover, we establish
that if all receivers have independent and identically distributed channels,
then this is the optimal number of degrees of freedom achievable. In the
optimal scheme, the transmitter uses the fed back CSI to learn the side
information that the receivers receive from previous transmissions rather than
to predict the current channel state. Our result can be viewed as the first
example of feedback providing a degree-of-freedom gain in memoryless channels.
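The degrees-of-freedom expression quoted above, $K/(1 + 1/2 + \cdots + 1/K)$, is easy to evaluate exactly:

```python
from fractions import Fraction

# Exact evaluation of K / (1 + 1/2 + ... + 1/K), the achievable degrees
# of freedom quoted in the abstract, which exceeds 1 for every K >= 2.

def dof(K):
    return K / sum(Fraction(1, k) for k in range(1, K + 1))

for K in (2, 3, 4):
    print(K, dof(K))   # → 2 4/3, 3 18/11, 4 48/25
```

Since the harmonic number $H_K$ grows only logarithmically, the expression grows like $K/\ln K$, so stale feedback still yields an unbounded multiplexing gain as $K$ increases.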
|
1010.1508
|
Certain Relations between Mutual Information and Fidelity of Statistical
Estimation
|
stat.AP cs.IT math.IT
|
I present several new relations between mutual information (MI) and
statistical estimation error for a system that can be regarded simultaneously
as a communication channel and as an estimator of an input parameter. I first
derive a second-order result between MI and Fisher information (FI) that is
valid for sufficiently narrow priors, but arbitrary channels. A second relation
furnishes a lower bound on the MI in terms of the minimum mean-squared error
(MMSE) on the Bayesian estimation of the input parameter from the channel
output, one that is valid for arbitrary channels and priors. The existence of
such a lower bound, while extending previous work relating the MI to the FI
that is valid only in the asymptotic and high-SNR limits, elucidates further
the fundamental connection between information and estimation theoretic
measures of fidelity. The remaining relations I present are inequalities and
correspondences among MI, FI, and MMSE in the presence of nuisance parameters.
|
1010.1514
|
Quarantine generated phase transition in epidemic spreading
|
physics.soc-ph cond-mat.stat-mech cs.SI physics.bio-ph
|
We study the critical effect of quarantine on the propagation of epidemics on
an adaptive network of social contacts. For this purpose, we analyze the
susceptible-infected-recovered (SIR) model in the presence of quarantine, where
susceptible individuals protect themselves by disconnecting their links to
infected neighbors with probability w, and reconnecting them to other
susceptible individuals chosen at random. Starting from a single infected
individual, we show by an analytical approach and simulations that there is a
phase transition at a critical rewiring (quarantine) threshold w_c separating a
phase (w<w_c) where the disease reaches a large fraction of the population,
from a phase (w >= w_c) where the disease does not spread out. We find that in
our model the topology of the network strongly affects the size of the
propagation, and that w_c increases with the mean degree and heterogeneity of
the network. We also find that w_c is reduced if we perform a preferential
rewiring, in which the rewiring probability is proportional to the degree of
infected nodes.
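The adaptive SIR dynamics can be sketched as follows; the Erdos-Renyi contact network, the parameter values, and the one-step recovery rule are illustrative simplifications, not the paper's exact model:

```python
import random

# A minimal sketch of adaptive SIR with quarantine: each susceptible
# neighbor of an infected node rewires away with probability w
# (reconnecting to a random susceptible node), otherwise it is infected
# with probability beta; infected nodes recover after one step.
# All parameter values are illustrative.

def sir_with_quarantine(n=200, p=0.04, w=0.5, beta=0.3, seed=3):
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}       # Erdos-Renyi contact graph
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    state = {v: "S" for v in adj}
    state[0] = "I"                           # single initially infected node
    recovered = 0
    while any(s == "I" for s in state.values()):
        infected = [v for v, s in state.items() if s == "I"]
        for v in infected:
            for u in list(adj[v]):
                if state[u] != "S":
                    continue
                if rng.random() < w:         # quarantine: rewire the link
                    adj[v].discard(u)
                    adj[u].discard(v)
                    cands = [x for x in adj if state[x] == "S" and x != u]
                    if cands:
                        x = rng.choice(cands)
                        adj[u].add(x)
                        adj[x].add(u)
                elif rng.random() < beta:
                    state[u] = "I"           # transmission
        for v in infected:
            state[v] = "R"                   # recovery after one step
            recovered += 1
    return recovered / n                     # final epidemic size

print(sir_with_quarantine(w=0.0), sir_with_quarantine(w=0.9))
```

Sweeping `w` in such a simulation is how one would locate the rewiring threshold `w_c` separating large outbreaks from contained ones.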
|
1010.1523
|
Fuzzy overlapping communities in networks
|
physics.soc-ph cs.SI
|
Networks commonly exhibit a community structure, whereby groups of vertices
are more densely connected to each other than to other vertices. Often these
communities overlap, such that each vertex may occur in more than one
community. However, two distinct types of overlapping are possible: crisp
(where each vertex belongs fully to each community of which it is a member) and
fuzzy (where each vertex belongs to each community to a different extent). We
investigate the effects of the fuzziness of community overlap. We find that it
has a strong effect on the performance of community detection methods: some
algorithms perform better with fuzzy overlapping while others favour crisp
overlapping. We also evaluate the performance of some algorithms that recover
the belonging coefficients when the overlap is fuzzy. Finally, we investigate
whether real networks contain fuzzy or crisp overlapping.
|
1010.1526
|
Time Series Classification by Class-Specific Mahalanobis Distance
Measures
|
cs.LG
|
To classify time series by nearest neighbors, we need to specify or learn one
or several distance measures. We consider variations of the Mahalanobis
distance measures which rely on the inverse covariance matrix of the data.
Unfortunately, for time series data, the covariance matrix often has low
rank. To alleviate this problem we can either use a pseudoinverse, covariance
shrinking, or limit the matrix to its diagonal. We review these alternatives and
benchmark them against competitive methods such as the related Large Margin
Nearest Neighbor Classification (LMNN) and the Dynamic Time Warping (DTW)
distance. As we expected, we find that the DTW is superior, but the Mahalanobis
distance measures are one to two orders of magnitude faster. To get best
results with Mahalanobis distance measures, we recommend learning one distance
measure per class using either covariance shrinking or the diagonal approach.
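The diagonal variant of the class-specific distance can be sketched as follows; the toy length-4 series and the shrinkage constant are illustrative, not the paper's data or tuning:

```python
import numpy as np

# A sketch of the diagonal alternative: one Mahalanobis-like distance
# per class, using only the diagonal of the class covariance plus a
# small regularizing constant (0.1, illustrative).

train = {  # class label -> array of training series
    0: np.array([[0.0, 0.1, 0.0, 0.1], [0.1, 0.0, 0.1, 0.0]]),
    1: np.array([[1.0, 1.1, 1.0, 0.9], [0.9, 1.0, 1.1, 1.0]]),
}

# Per-class inverse variances (diagonal of the inverse covariance).
inv_var = {c: 1.0 / (np.var(X, axis=0) + 0.1) for c, X in train.items()}

def dist(x, y, iv):
    return np.sqrt(np.sum((x - y) ** 2 * iv))

def classify(x):
    # 1-nearest neighbor under the class-specific distances.
    cands = [(dist(x, t, inv_var[c]), c)
             for c, X in train.items() for t in X]
    return min(cands)[1]

print(classify(np.array([1.0, 1.05, 0.95, 1.0])))   # → 1
```

Because the diagonal distance is a single weighted sum per comparison, it avoids the matrix inversion and alignment costs that make full-covariance or DTW approaches slower.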
|
1010.1561
|
A Clustering Coefficient Network Formation Game
|
cs.SI cs.DS cs.GT physics.soc-ph
|
For the most up-to-date version please visit
http://www.cis.upenn.edu/~brautbar/ccgame.pdf
|
1010.1584
|
High-SIR Transmission Capacity of Wireless Networks with General Fading
and Node Distribution
|
cs.IT math.IT
|
In many wireless systems, interference is the main performance-limiting
factor, and is primarily dictated by the locations of concurrent transmitters.
In many earlier works, the locations of the transmitters are modeled as a
Poisson point process (PPP) for analytical tractability. While analytically
convenient, the PPP only accurately models networks whose nodes are placed
independently and use ALOHA as the channel access protocol, which preserves the
independence. Correlations between transmitter locations in non-Poisson
networks, which model intelligent access protocols, make the outage analysis
extremely difficult. In this paper, we take an alternative approach and focus
on an asymptotic regime where the density of interferers $\eta$ goes to 0. We
prove for general node distributions and fading statistics that the success
probability $p_s \sim 1-\gamma \eta^{\kappa}$ for $\eta \rightarrow 0$, and
provide values of $\gamma$ and $\kappa$ for a number of important special
cases. We show that $\kappa$ is lower bounded by 1 and upper bounded by a value
that depends on the path loss exponent and the fading. This new analytical
framework is then used to characterize the transmission capacity of a very
general class of networks, defined as the maximum spatial density of active
links given an outage constraint.
|
1010.1605
|
Communicating Under Channel Uncertainty
|
cs.IT math.IT
|
For a single transmit and receive antenna system, a new constellation design
is proposed to combat errors in the phase estimate of the channel coefficient.
The proposed constellation is a combination of PSK and PAM constellations,
where PSK is used to provide protection against phase errors, while PAM is used
to increase the transmission rate using the knowledge of the magnitude of the
channel coefficient. The performance of the proposed constellation is shown to
be significantly better than the widely used QAM in terms of probability of
error. The proposed strategy can also be extended to systems using multiple
transmit and receive antennas.
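A sketch of the kind of hybrid constellation described (PAM amplitude rings times equally spaced PSK phases); the ring spacing `d` and the layout are assumptions for illustration, not the paper's optimized design:

```python
import numpy as np

def psk_pam_constellation(n_phase, n_amp, d=1.0):
    """Combined PSK/PAM constellation: n_amp amplitude rings (PAM levels
    spaced by d) crossed with n_phase equally spaced phases (PSK).
    PSK protects against phase errors; PAM exploits knowledge of the
    channel magnitude to raise the rate."""
    amps = d * np.arange(1, n_amp + 1)
    phases = 2 * np.pi * np.arange(n_phase) / n_phase
    return np.array([a * np.exp(1j * p) for a in amps for p in phases])
```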
|
1010.1622
|
Manipulating quantum information on the controllable systems or
subspaces
|
quant-ph cs.SY math.OC
|
In this paper, we explore how to constructively manipulate qubits by rotating
Bloch spheres. It is revealed that three-rotation and one-rotation Hamiltonian
controls can be constructed to steer qubits when two tunable Hamiltonian
controls are available. It is demonstrated in this research that local-wave
function controls such as Bang-Bang, triangle-function and quadratic function
controls can be utilized to manipulate quantum states on the Bloch sphere. A
new kind of time-energy performance index is proposed to trade-off time and
energy resource cost, in which control magnitudes are optimized in terms of
this kind of performance. It is further exemplified that this idea can be
generalized to manipulate encoded qubits on the controllable subspace.
|
1010.1646
|
Thresholds for epidemic spreading in networks
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
We study the threshold of epidemic models in quenched networks with degree
distribution given by a power-law. For the susceptible-infected-susceptible
(SIS) model the activity threshold lambda_c vanishes in the large size limit on
any network whose maximum degree k_max diverges with the system size, at odds
with heterogeneous mean-field (HMF) theory. The vanishing of the threshold has
nothing to do with the scale-free nature of the connectivity pattern; it
instead originates from the largest hub in the system being active for any spreading rate
lambda>1/sqrt{k_max} and playing the role of a self-sustained source that
spreads the infection to the rest of the system. The
susceptible-infected-removed (SIR) model displays instead agreement with HMF
theory and a finite threshold for scale-rich networks. We conjecture that on
quenched scale-rich networks the threshold of generic epidemic models is
vanishing or finite depending on the presence or absence of a steady state.
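The two threshold estimates mentioned above can be compared numerically for a given degree sequence; this is an illustrative sketch, not the paper's derivation:

```python
import numpy as np

def thresholds(degrees):
    """Compare the HMF prediction <k>/<k^2> with the quenched-network
    estimate 1/sqrt(k_max) discussed in the text (illustrative only).
    For degree sequences with a diverging maximum degree, the quenched
    estimate vanishes even when <k^2> stays finite."""
    k = np.asarray(degrees, dtype=float)
    hmf = k.mean() / (k ** 2).mean()
    quenched = 1.0 / np.sqrt(k.max())
    return hmf, quenched
```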
|
1010.1648
|
Large-System Analysis of Multiuser Detection with an Unknown Number of
Users: A High-SNR Approach
|
cs.IT math.IT
|
We analyze multiuser detection under the assumption that the number of users
accessing the channel is unknown by the receiver. In this environment, users'
activity must be estimated along with any other parameters such as data, power,
and location. Our main goal is to determine the performance loss caused by the
need for estimating the identities of active users, which are not known a
priori. To prevent a loss of optimality, we assume that identities and data are
estimated jointly, rather than in two separate steps. We examine the
performance of multiuser detectors when the number of potential users is large.
Statistical-physics methodologies are used to determine the macroscopic
performance of the detector in terms of its multiuser efficiency. Special
attention is paid to the fixed-point equation whose solution yields the
multiuser efficiency of the optimal (maximum a posteriori) detector in the
large signal-to-noise ratio regime. Our analysis yields closed-form approximate
bounds to the minimum mean-squared error in this regime. These illustrate the
set of solutions of the fixed-point equation, and their relationship with the
maximum system load. Next, we study the maximum load that the detector can
support for a given quality of service (specified by error probability).
|
1010.1669
|
Rate-Equivocation Optimal Spatially Coupled LDPC Codes for the BEC
Wiretap Channel
|
cs.IT math.IT
|
We consider transmission over a wiretap channel where both the main channel
and the wiretapper's channel are Binary Erasure Channels (BEC). We use
convolutional LDPC ensembles based on the coset encoding scheme. More
precisely, we consider regular two edge type convolutional LDPC ensembles. We
show that such a construction achieves the whole rate-equivocation region of
the BEC wiretap channel.
Convolutional LDPC ensembles were introduced by Felstr\"om and Zigangirov and
are known to have excellent thresholds. Recently, Kudekar, Richardson, and
Urbanke proved that the phenomenon of "Spatial Coupling" converts the MAP
threshold into the BP threshold for transmission over the BEC.
The phenomenon of spatial coupling has been observed to hold for general
binary memoryless symmetric channels. Hence, we conjecture that our
construction is a universal rate-equivocation achieving construction when the
main channel and wiretapper's channel are binary memoryless symmetric channels,
and the wiretapper's channel is degraded with respect to the main channel.
|
1010.1705
|
Rich-club connectivity dominates assortativity and transitivity of
complex networks
|
physics.soc-ph cs.SI
|
Rich-club, assortativity and clustering coefficients are frequently-used
measures to estimate topological properties of complex networks. Here we find
that the connectivity among a very small portion of the richest nodes can
dominate the assortativity and clustering coefficients of a large network,
which reveals that the rich-club connectivity is leveraged throughout the
network. Our study suggests that more attention should be paid to the
organization pattern of rich nodes, for the structure of a complex system as a
whole is determined by the associations between the most influential
individuals. Moreover, by manipulating the connectivity pattern in a very small
rich-club, it is sufficient to produce a network with desired assortativity or
transitivity. Conversely, our findings offer a simple explanation for the
observed assortativity and transitivity in many real world networks --- such
biases can be explained by the connectivities among the richest nodes.
|
1010.1746
|
Mapping XML Data to Relational Data: A DOM-Based Approach
|
cs.DB cs.DS
|
XML has emerged as the standard for representing and exchanging data on the
World Wide Web. It is critical to have efficient mechanisms to store and query
XML data to exploit the full power of this new technology. Several researchers
have proposed to use relational databases to store and query XML data. While
several algorithms of schema mapping and query mapping have been proposed, the
problem of mapping XML data to relational data, i.e., mapping an XML INSERT
statement to a sequence of SQL INSERT statements, has not been addressed
thoroughly in the literature. In this paper, we propose an efficient linear
algorithm for mapping XML data to relational data. This algorithm is based on
our previously proposed inlining algorithm for mapping DTDs to relational schemas
and can be easily adapted to other inlining algorithms.
|
1010.1763
|
Algorithms for nonnegative matrix factorization with the beta-divergence
|
cs.LG
|
This paper describes algorithms for nonnegative matrix factorization (NMF)
with the beta-divergence (beta-NMF). The beta-divergence is a family of cost
functions parametrized by a single shape parameter beta that takes the
Euclidean distance, the Kullback-Leibler divergence and the Itakura-Saito
divergence as special cases (beta = 2,1,0, respectively). The proposed
algorithms are based on a surrogate auxiliary function (a local majorization of
the criterion function). We first describe a majorization-minimization (MM)
algorithm that leads to multiplicative updates, which differ from standard
heuristic multiplicative updates by a beta-dependent power exponent. The
monotonicity of the heuristic algorithm can however be proven for beta in (0,1)
using the proposed auxiliary function. Then we introduce the concept of
majorization-equalization (ME) algorithm which produces updates that move along
constant level sets of the auxiliary function and lead to larger steps than MM.
Simulations on synthetic and real data illustrate the faster convergence of the
ME approach. The paper also describes how the proposed algorithms can be
adapted to two common variants of NMF: penalized NMF (i.e., when a penalty
function of the factors is added to the criterion function) and convex-NMF
(when the dictionary is assumed to belong to a known subspace).
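The standard heuristic multiplicative updates for beta-NMF can be sketched as below; this is the well-known update family the abstract refers to, not the paper's MM/ME variants, and the iteration count and small constants are assumptions:

```python
import numpy as np

def beta_nmf(V, K, beta=2.0, n_iter=300, seed=0):
    """Heuristic multiplicative-update NMF for the beta-divergence.

    beta = 2, 1, 0 recover the Euclidean, Kullback-Leibler and
    Itakura-Saito costs. The paper's MM algorithm differs from these
    updates by a beta-dependent power exponent."""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, K)) + 1e-3
    H = rng.random((K, N)) + 1e-3
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H
```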
|
1010.1800
|
Proactive Resource Allocation: Turning Predictable Behavior into
Spectral Gain
|
cs.IT cs.NI cs.SI math.IT
|
This paper introduces the novel concept of proactive resource allocation in
which the predictability of user behavior is exploited to balance the wireless
traffic over time, and hence, significantly reduce the bandwidth required to
achieve a given blocking/outage probability. We start with a simple model in
which the smart wireless devices are assumed to predict the arrival of new
requests and submit them to the network T time slots in advance. Using tools
from large deviation theory, we quantify the resulting prediction diversity
gain to establish that the decay rate of the outage event probabilities
increases linearly with the prediction duration T. This model is then
generalized to incorporate the effect of prediction errors and the randomness
in the prediction lookahead time T. Remarkably, we also show that, in the
cognitive networking scenario, the appropriate use of proactive resource
allocation by the primary users results in more spectral opportunities for the
secondary users at a marginal, or no, cost in the primary network outage.
Finally, we conclude by a discussion of the new research questions posed under
the umbrella of the proposed proactive (non-causal) wireless networking
framework.
|
1010.1824
|
Implications of Inter-Rater Agreement on a Student Information Retrieval
Evaluation
|
cs.IR
|
This paper is about an information retrieval evaluation on three different
retrieval-supporting services. All three services were designed to compensate
typical problems that arise in metadata-driven Digital Libraries, which are not
adequately handled by a simple tf-idf based retrieval. The services are: (1) a
co-word analysis based query expansion mechanism and re-ranking via (2)
Bradfordizing and (3) author centrality. The services are evaluated with
relevance assessments conducted by 73 information science students. Since the
students are neither information professionals nor domain experts, the question
of inter-rater agreement is taken into consideration. Two important
implications emerge: (1) the inter-rater agreement rates were mainly fair to
moderate and (2) after a data-cleaning step which erased the assessments with
poor agreement rates the evaluation data shows that the three retrieval
services returned disjoint but still relevant result sets.
|
1010.1826
|
A probabilistic top-down parser for minimalist grammars
|
cs.CL
|
This paper describes a probabilistic top-down parser for minimalist grammars.
Top-down parsers have the great advantage of having a certain predictive power
during the parsing, which takes place in a left-to-right reading of the
sentence. Such parsers have already been well-implemented and studied in the
case of Context-Free Grammars, which are already top-down, but these are
difficult to adapt to Minimalist Grammars, which generate sentences bottom-up.
I propose here a way of rewriting Minimalist Grammars as Linear Context-Free
Rewriting Systems, which makes it easy to create a top-down parser. This
rewriting also allows a probabilistic field to be placed on these grammars,
which can be used to accelerate the parser. Finally, I propose a method of
refining the probabilistic field using algorithms from data compression.
|
1010.1845
|
Navigation in non-uniform density social networks
|
physics.soc-ph cs.SI
|
Recent empirical investigations suggest a universal scaling law for the
spatial structure of social networks. It is found that the probability density
for an individual to have a friend at distance $d$ scales as
$P(d)\propto d^{-1}$. Since population density is non-uniform in real social
networks, a scale-invariant friendship network (SIFN) based on the above
empirical law is introduced to capture this phenomenon. We prove the time
complexity of navigation in 2-dimensional SIFN is at most $O(\log^4 n)$. In the
real searching experiment, individuals often resort to extra information
beyond geographic location. Thus, the real-world searching process may be seen
as a projection of navigation in a $k$-dimensional SIFN ($k>2$). Therefore, we also
discuss the relationship between high and low dimensional SIFN. Particularly,
we prove a 2-dimensional SIFN is the projection of a 3-dimensional SIFN. As a
matter of fact, this result can also be generalized to any $k$-dimensional SIFN.
|
1010.1847
|
Restricted Isometries for Partial Random Circulant Matrices
|
cs.IT math.IT math.PR
|
In the theory of compressed sensing, restricted isometry analysis has become
a standard tool for studying how efficiently a measurement matrix acquires
information about sparse and compressible signals. Many recovery algorithms are
known to succeed when the restricted isometry constants of the sampling matrix
are small. Many potential applications of compressed sensing involve a
data-acquisition process that proceeds by convolution with a random pulse
followed by (nonrandom) subsampling. At present, the theoretical analysis of
this measurement technique is lacking. This paper demonstrates that the $s$th
order restricted isometry constant is small when the number $m$ of samples
satisfies $m \gtrsim (s \log n)^{3/2}$, where $n$ is the length of the pulse.
This bound improves on previous estimates, which exhibit quadratic scaling.
|
1010.1862
|
Utility Optimal Scheduling in Processing Networks
|
math.OC cs.SY
|
We consider the problem of utility optimal scheduling in general
\emph{processing networks} with random arrivals and network conditions. These
are generalizations of traditional data networks where commodities in one or
more queues can be combined to produce new commodities that are delivered to
other parts of the network. This can be used to model problems such as
in-network data fusion, stream processing, and grid computing. Scheduling
actions are complicated by the \emph{underflow problem} that arises when some
queues with required components go empty. In this paper, we develop the
Perturbed Max-Weight algorithm (PMW) to achieve optimal utility. The idea of
PMW is to perturb the weights used by the usual Max-Weight algorithm to
``push'' queue levels towards non-zero values (avoiding underflows). We show
that when the perturbations are carefully chosen, PMW is able to achieve a
utility that is within $O(1/V)$ of the optimal value for any $V\geq1$, while
ensuring an average network backlog of $O(V)$.
|
1010.1864
|
Transforming complex network to the acyclic one
|
physics.soc-ph cond-mat.dis-nn cs.SI
|
Acyclic networks are a class of complex networks in which links are directed
and form no closed loops. Here we present an algorithm for transforming an
ordinary undirected complex network into an acyclic one. Further analysis of an
acyclic network allows finding structural properties of the network. With our
approach one can find the communities and key nodes in complex networks. Also
we propose a new parameter of complex networks which can mark most vulnerable
nodes of the system. The proposed algorithm can be applied to finding
communities and bottlenecks in general complex networks.
|
1010.1865
|
Corrections to "Unified Laguerre polynomial-series-based distribution of
small-scale fading envelopes''
|
math.PR cs.IT math.IT
|
In this correspondence, we point out two typographical errors in Chai and
Tjhung's paper and we offer the correct formula of the unified Laguerre
polynomial-series-based cumulative distribution function (cdf) for small-scale
fading distributions. A Laguerre polynomial-series-based cdf formula for
non-central chi-square distribution is also provided as a special case of our
unified cdf result.
|
1010.1866
|
Fast error-tolerant quartet phylogeny algorithms
|
q-bio.PE cs.CE cs.DS
|
We present an algorithm for phylogenetic reconstruction using quartets that
returns the correct topology for $n$ taxa in $O(n \log n)$ time with high
probability, in a probabilistic model where a quartet is not consistent with
the true topology of the tree with constant probability, independent of other
quartets. Our incremental algorithm relies upon a search tree structure for the
phylogeny that is balanced, with high probability, no matter what the true
topology is. Our experimental results show that our method is comparable in
runtime to the fastest heuristics, while still offering consistency guarantees.
|
1010.1886
|
Inner Product Spaces for MinSum Coordination Mechanisms
|
cs.GT cs.DS cs.MA
|
We study policies aiming to minimize the weighted sum of completion times of
jobs in the context of coordination mechanisms for selfish scheduling problems.
Our goal is to design local policies that achieve a good price of anarchy in
the resulting equilibria for unrelated machine scheduling. To obtain the
approximation bounds, we introduce a new technique that while conceptually
simple, seems to be quite powerful. With this method we are able to prove the
following results.
First, we consider Smith's Rule, which orders the jobs on a machine in
ascending order of processing-time-to-weight ratio, and show that it achieves an
approximation ratio of 4. We also demonstrate that this is the best possible
for deterministic non-preemptive strongly local policies. Since Smith's Rule is
always optimal for a given assignment, this may seem unsurprising, but we then
show that better approximation ratios can be obtained if either preemption or
randomization is allowed.
We prove that ProportionalSharing, a preemptive strongly local policy,
achieves an approximation ratio of 2.618 for the weighted sum of completion
times, and an approximation ratio of 2.5 in the unweighted case. Again, we
observe that these bounds are tight. Next, we consider Rand, a natural
non-preemptive but randomized policy. We show that it achieves an approximation
ratio of at most 2.13; moreover, if the sum of the weighted completion times is
negligible compared to the cost of the optimal solution, this improves to \pi/2.
Finally, we show that both ProportionalSharing and Rand induce potential
games, and thus always have a pure Nash equilibrium (unlike Smith's Rule). This
also allows us to design the first \emph{combinatorial} constant-factor
approximation algorithm minimizing weighted completion time for unrelated
machine scheduling that achieves a factor of 2+ \epsilon for any \epsilon > 0.
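Smith's Rule as described above admits a short sketch; the `(p, w)` job representation is an assumption for illustration:

```python
def smiths_rule(jobs):
    """Order jobs on a single machine by ascending processing-time to
    weight ratio (Smith's Rule) and return the resulting schedule and
    its weighted sum of completion times. This ordering is optimal for
    a single machine with a fixed job assignment."""
    order = sorted(jobs, key=lambda pw: pw[0] / pw[1])  # (p, w) pairs
    t, total = 0, 0
    for p, w in order:
        t += p            # completion time of this job
        total += w * t    # accumulate weighted completion time
    return order, total
```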
|
1010.1888
|
Multi-Objective Genetic Programming Projection Pursuit for Exploratory
Data Modeling
|
cs.LG cs.NE
|
For classification problems, feature extraction is a crucial process which
aims to find a suitable data representation that increases the performance of
the machine learning algorithm. According to the curse of dimensionality
theorem, the number of samples needed for a classification task increases
exponentially as the number of dimensions (variables, features) increases. On
the other hand, it is costly to collect, store and process data. Moreover,
irrelevant and redundant features might hinder classifier performance. In
exploratory analysis settings, high dimensionality prevents the users from
exploring the data visually. Feature extraction is a two-step process: feature
construction and feature selection. Feature construction creates new features
based on the original features and feature selection is the process of
selecting the best features as in filter, wrapper and embedded methods.
In this work, we focus on feature construction methods that aim to decrease
data dimensionality for visualization tasks. Various linear (such as principal
components analysis (PCA), multiple discriminants analysis (MDA), exploratory
projection pursuit) and non-linear (such as multidimensional scaling (MDS),
manifold learning, kernel PCA/LDA, evolutionary constructive induction)
techniques have been proposed for dimensionality reduction. Our algorithm is an
adaptive feature extraction method which consists of evolutionary constructive
induction for feature construction and a hybrid filter/wrapper method for
feature selection.
|
1010.1899
|
The Failure Probability at Sink Node of Random Linear Network Coding
|
cs.IT math.IT
|
In practice, many communication networks are huge in scale, complicated in
structure, or even dynamic, so predesigning network codes based on the network
topology is impractical even when the topological structure is known.
Therefore, random linear network coding was proposed as an acceptable coding
technique. In this paper, we further study the performance of random linear
network coding by analyzing the failure probabilities at sink node for
different knowledge of network topology and get some tight and asymptotically
tight upper bounds of the failure probabilities. In particular, the worst cases
are indicated for these bounds. Furthermore, the more information about the
network topology is utilized, the better the upper bounds obtained. These
bounds improve on the known ones. Finally, we also discuss the lower bound of
this failure probability and show that it is also asymptotically tight.
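The failure event at a sink can be illustrated with a small Monte-Carlo sketch (a full-rank test on random coding vectors over GF(q)); this illustrates the quantity being bounded, not the paper's topology-aware bounds:

```python
import random

def rank_gf(M, q):
    """Rank of a matrix over GF(q), q prime, by Gaussian elimination."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][c] % q), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], q - 2, q)  # Fermat inverse (q prime)
        M[rank] = [(x * inv) % q for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c] % q:
                f = M[r][c]
                M[r] = [(a - f * b) % q for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def sink_failure_prob(k, q, trials=2000, seed=0):
    """Monte-Carlo estimate of the probability that k uniformly random
    length-k global encoding vectors over GF(q) are not full rank,
    i.e. the sink cannot decode."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        M = [[rng.randrange(q) for _ in range(k)] for _ in range(k)]
        if rank_gf(M, q) < k:
            fails += 1
    return fails / trials
```

For a k x k uniform random matrix over GF(q) the exact success probability is the classical product prod_{i=1..k} (1 - q^{-i}), which the estimate approaches as the trial count grows.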
|
1010.1904
|
Weighted Indices for Evaluating the Quality of Research with Multiple
Authorship
|
cs.IR cs.CY cs.DL
|
Devising an index to measure the quality of research is a challenging task.
In this paper, we propose a set of indices to evaluate the quality of research
produced by an author. Our indices utilize a policy that assigns the weights to
multiple authors of a paper. We have considered two weight assignment policies:
positionally weighted and equally weighted. We propose two classes of weighted
indices: weighted h-indices and weighted citation h-cuts. Further, we compare
our weighted h-indices with the original h-index for a selected set of authors.
As opposed to h-index, our weighted h-indices take into account the weighted
contributions of individual authors in multi-authored papers, and may serve as
an improvement over h-index. The other class of weighted indices that we call
weighted citation h-cuts take into account the number of citations that are in
excess of those required to compute the index, and may serve as a supplement to
h-index or its variants.
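A minimal sketch of a weighted h-index; the harmonic positional weighting used here is an assumed instance of the 'positionally weighted' policy, not the paper's exact formula:

```python
def positional_weights(n_authors):
    """Harmonic positional weights, normalized to sum to 1 (an assumed
    instance of a positionally weighted author-credit policy)."""
    raw = [1.0 / (i + 1) for i in range(n_authors)]
    s = sum(raw)
    return [r / s for r in raw]

def weighted_h_index(papers, author):
    """papers: list of (author_list, citations). The author's weighted
    citation credit for each paper is weight * citations; the weighted
    h-index is the largest h such that h papers have credit >= h."""
    credits = []
    for authors, cites in papers:
        if author in authors:
            w = positional_weights(len(authors))[authors.index(author)]
            credits.append(w * cites)
    credits.sort(reverse=True)
    h = 0
    for i, c in enumerate(credits, 1):
        if c >= i:
            h = i
    return h
```

For single-author papers the weights are all 1 and this reduces to the ordinary h-index.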
|
1010.1911
|
On a Low-Rate TLDPC Code Ensemble and the Necessary Condition on the
Linear Minimum Distance for Sparse-Graph Codes
|
cs.IT math.IT
|
This paper addresses the issue of design of low-rate sparse-graph codes with
linear minimum distance in the blocklength. First, we define a necessary
condition which needs to be satisfied when the linear minimum distance is to be
ensured. The condition is formulated in terms of degree-1 and degree-2 variable
nodes and of low-weight codewords of the underlying code, and it generalizes
results known for turbo codes [8] and LDPC codes. Then, we present a new
ensemble of low-rate codes, which itself is a subclass of TLDPC codes [4], [5],
and which is designed under this necessary condition. The asymptotic analysis
of the ensemble shows that its iterative threshold is situated close to the
Shannon limit. In addition to the linear minimum distance property, it has a
simple structure and enjoys a low decoding complexity and a fast convergence.
|
1010.1953
|
Passive Supporters of Terrorism and Phase Transitions
|
physics.soc-ph cs.SI
|
We discuss some social contagion processes to describe the formation and
spread of radical opinions. The dynamics of opinion spread involves local
threshold processes as well as mean field effects. We calculate and observe
phase transitions in the dynamical variables resulting in a rapidly increasing
number of passive supporters. This strongly indicates that military solutions
are inappropriate.
|
1010.1973
|
For the Grid and Through the Grid: The Role of Power Line Communications
in the Smart Grid
|
cs.NI cs.IT cs.SY math.IT
|
Is Power Line Communications (PLC) a good candidate for Smart Grid
applications? The objective of this paper is to address this important
question. To do so we provide an overview of what PLC can deliver today by
surveying its history and describing the most recent technological advances in
the area. We then address Smart Grid applications as instances of sensor
networking and network control problems and discuss the main conclusion one can
draw from the literature on these subjects. The application scenario of PLC
within the Smart Grid is then analyzed in detail. Since a necessary ingredient
of network planning is modeling, we also discuss two aspects of engineering
modeling that relate to our question. The first aspect is modeling the PLC
channel through fading models. The second aspect we review is the Smart Grid
control and traffic modeling problem which allows us to achieve a better
understanding of the communications requirements. Finally, this paper reports
recent studies on the electrical and topological properties of a sample power
distribution network. Power grid topological studies are very important for PLC
networking as the power grid is not only the information source \textit{but
also} the information delivery system, a unique feature when PLC is used for
the Smart Grid.
|
1010.1985
|
Relay Strategies Based on Cross-Determinism for the Broadcast Relay
Channel
|
cs.IT math.IT
|
We consider a two-user Gaussian multiple-input multiple-output (MIMO)
broadcast channel with a common multiple-antenna relay, and a shared digital
(noiseless) link between the relay and the two destinations. For this channel,
this paper introduces an asymptotically sum-capacity-achieving
quantize-and-forward (QF) relay strategy. Our technique to design an
asymptotically optimal relay quantizer is based on identifying a
cross-deterministic relation between the relay observation, the source signal,
and the destination observation. In a relay channel, an approximate cross
deterministic relation corresponds to an approximately deterministic relation,
where the relay observation is to some extent a deterministic function of the
source and destination signals. We show that cross determinism can serve as a
measure for quantization penalty. By identifying an analogy between a
deterministic broadcast relay channel and a Gaussian MIMO relay channel, we
propose a three-stage dirty paper coding strategy, along with receiver
beamforming and quantization at the relay, to asymptotically achieve an
extended achievable rate region for the MIMO broadcast channel with a common
multiple-antenna relay.
|
1010.2030
|
Weight Distributions of Regular Low-Density Parity-Check Codes over
Finite Fields
|
cs.IT math.IT
|
The average weight distribution of a regular low-density parity-check (LDPC)
code ensemble over a finite field is thoroughly analyzed. In particular, a
precise asymptotic approximation of the average weight distribution is derived
for the small-weight case, and a series of fundamental qualitative properties
of the asymptotic growth rate of the average weight distribution are proved.
Based on this analysis, a general result, including all previous results as
special cases, is established for the minimum distance of individual codes in a
regular LDPC code ensemble.
|
1010.2067
|
Algorithmic Thermodynamics
|
math-ph cs.IT math.IT math.MP quant-ph
|
Algorithmic entropy can be seen as a special case of entropy as studied in
statistical mechanics. This viewpoint allows us to apply many techniques
developed for use in thermodynamics to the subject of algorithmic information
theory. In particular, suppose we fix a universal prefix-free Turing machine
and let X be the set of programs that halt for this machine. Then we can regard
X as a set of 'microstates', and treat any function on X as an 'observable'.
For any collection of observables, we can study the Gibbs ensemble that
maximizes entropy subject to constraints on expected values of these
observables. We illustrate this by taking the log runtime, length, and output
of a program as observables analogous to the energy E, volume V and number of
molecules N in a container of gas. The conjugate variables of these observables
allow us to define quantities which we call the 'algorithmic temperature' T,
'algorithmic pressure' P and 'algorithmic potential' mu, since they are
analogous to the temperature, pressure and chemical potential. We derive an
analogue of the fundamental thermodynamic relation dE = T dS - P dV + mu dN,
and use it to study thermodynamic cycles analogous to those for heat engines.
We also investigate the values of T, P and mu for which the partition function
converges. At some points on the boundary of this domain of convergence, the
partition function becomes uncomputable. Indeed, at these points the partition
function itself has nontrivial algorithmic entropy.
|
1010.2102
|
Hierarchical Multiclass Decompositions with Application to Authorship
Determination
|
cs.AI
|
This paper is mainly concerned with the question of how to decompose
multiclass classification problems into binary subproblems. We extend known
Jensen-Shannon bounds on the Bayes risk of binary problems to hierarchical
multiclass problems and use these bounds to develop a heuristic procedure for
constructing hierarchical multiclass decomposition for multinomials. We test
our method and compare it to the well known "all-pairs" decomposition. Our
tests are performed using a new authorship determination benchmark test of
machine learning authors. The new method consistently outperforms the all-pairs
decomposition when the number of classes is small and breaks even on larger
multiclass problems. Using both methods, the classification accuracy we
achieve, using an SVM over a feature set consisting of both high frequency
single tokens and high frequency token-pairs, appears to be exceptionally high
compared to known results in authorship determination.
|
1010.2128
|
Parameter Selection in Periodic Nonuniform Sampling of Multiband Signals
|
cs.SY
|
Periodic nonuniform sampling has been considered in literature as an
effective approach to reduce the sampling rate far below the Nyquist rate for
sparse spectrum multiband signals. In the presence of non-ideality the sampling
parameters play an important role in the quality of the reconstructed signal.
Also, the average sampling ratio depends directly on the sampling parameters,
which should be chosen for minimum rate and complexity. In this paper we
consider the effect of sampling parameters on the reconstruction error and the
sampling ratio and suggest feasible approaches for achieving an optimal
sampling and reconstruction.
|
1010.2141
|
Modeling the evolution of continuously-observed networks: Communication
in a Facebook-like community
|
physics.soc-ph cs.SI
|
Building on existing stochastic actor-oriented models for panel data, we
employ a conditional logistic framework to explore growth mechanisms for tie
creation in continuously-observed networks. This framework models the
likelihood of tie formation, distinguishing it from hazard models that consider
time to tie formation. It enables multiple growth mechanisms for network
evolution (homophily, focus constraints, reinforcement, reciprocity, triadic
closure, and popularity) to be modeled simultaneously. We apply this framework
to communication within a Facebook-like community. The findings exemplify the
inadequacy of descriptive measures that test single mechanisms independently.
They also indicate how system design shapes behavior and network evolution.
|
1010.2148
|
Ontological Matchmaking in Recommender Systems
|
cs.DB
|
The electronic marketplace offers great potential for the recommendation of
supplies. In the so called recommender systems, it is crucial to apply
matchmaking strategies that faithfully satisfy the predicates specified in the
demand, and take into account as much as possible the user preferences. We
focus on real-life ontology-driven matchmaking scenarios and identify a number
of challenges, being inspired by such scenarios. A key challenge is that of
presenting the results to the users in an understandable and clear-cut fashion
in order to facilitate the analysis of the results. Indeed, such scenarios
suggest the opportunity to rank and group the results according to specific
criteria. A further challenge consists of presenting the results to the user in
an asynchronous fashion, i.e. a 'push' mode, alongside the 'pull' mode, in
which the user explicitly issues a query and the results are displayed. Moreover,
an important issue to consider in real-life cases is the possibility of
submitting a query to multiple providers, and collecting the various results.
We have designed and implemented an ontology-based matchmaking system that
suitably addresses the above challenges. We have conducted a comprehensive
experimental study, in order to investigate the usability of the system, the
performance and the effectiveness of the matchmaking strategies with real
ontological datasets.
|
1010.2157
|
A Wideband Spectrum Sensing Method for Cognitive Radio using Sub-Nyquist
Sampling
|
cs.IT math.IT
|
Spectrum sensing is a fundamental component in cognitive radio. A major
challenge in this area is the requirement of a high sampling rate in the
sensing of a wideband signal. In this paper a wideband spectrum sensing model
is presented that utilizes a sub-Nyquist sampling scheme to bring substantial
savings in terms of the sampling rate. The correlation matrix of a finite
number of noisy samples is computed and used by a subspace estimator to detect
the occupied and vacant channels of the spectrum. In contrast with common
methods, the proposed method does not need knowledge of signal properties,
which mitigates the uncertainty problem. We evaluate the performance of this
method by computing the probability of detecting signal occupancy in terms of
the number of samples and the SNR of randomly generated signals. The results
show reliable detection even at low SNR and with a small number of samples.
|
1010.2158
|
Non-uniform sampling and reconstruction of multi-band signals and its
application in wideband spectrum sensing of cognitive radio
|
cs.IT math.IT
|
Sampling theories lie at the heart of signal processing devices and
communication systems. To accommodate high operating rates while retaining low
computational cost, efficient analog-to-digital converters (ADCs) must be
developed. Many of the limitations encountered in current converters are due to
the traditional assumption that the sampling stage needs to acquire the data at
the Nyquist rate, corresponding to twice the signal bandwidth. In this thesis a
method of sampling far below the Nyquist rate for sparse spectrum multiband
signals is investigated. The method is called periodic non-uniform sampling,
and it is useful in a variety of applications such as data converters, sensor
array imaging and image compression. Firstly, a model for the sampling system
in the frequency domain is prepared. It relates the Fourier transform of
observed compressed samples with the unknown spectrum of the signal. Next, the
reconstruction process based on the topic of compressed sensing is provided. We
show that the sampling parameters play an important role in the average
sampling ratio and the quality of the reconstructed signal. The concept of the condition
number and its effect on the reconstructed signal in the presence of noise is
introduced, and a feasible approach for choosing a sample pattern with a low
condition number is given. We distinguish between the cases of known-spectrum
and unknown-spectrum signals. One of the model parameters is determined by the
signal band locations, which in the case of unknown-spectrum signals must be
estimated from the sampled data. We therefore apply both subspace methods and
nonlinear least-squares methods to estimate this parameter. We also use the
information-theoretic criteria (Akaike and MDL) and the exponential fitting
test for model order selection in this case.
|
1010.2160
|
Robustness of interdependent networks under targeted attack
|
physics.soc-ph cs.SI physics.data-an
|
When an initial failure of nodes occurs in interdependent networks, a cascade
of failure between the networks occurs. Earlier studies focused on random
initial failures. Here we study the robustness of interdependent networks under
targeted attack on high or low degree nodes. We introduce a general technique
and show that the {\it targeted-attack} problem in interdependent networks can
be mapped to the {\it random-attack} problem in a transformed pair of
interdependent networks. We find that when the highly connected nodes are
protected and have a lower probability of failing, coupled SF networks remain
significantly vulnerable, with $p_c$ significantly larger than zero, in
contrast to single scale-free (SF) networks, where the percolation threshold is
$p_c=0$. The result implies that interdependent networks are difficult to
defend by strategies such as protecting the high-degree nodes, which have been
found useful for significantly improving the robustness of single networks.
|
1010.2173
|
Complex network model of the phase transition on the wealth
distributions - from Pareto to the society without middle class
|
physics.soc-ph cond-mat.dis-nn cs.SI
|
A model of the distribution of wealth in a society, based on the properties of
complex networks, is proposed. The wealth is interpreted as a consequence of
communication possibilities and is proportional to the number of connections
possessed by a person (as a vertex of the social network). Numerical simulation
of the wealth distribution shows a transition from the Pareto law to a
distribution with a gap, demonstrating the absence of the middle class. This
transition is described as a second-order phase transition; an order parameter
is introduced and the value of the critical exponent is found.
|
1010.2186
|
On Opinion Dynamics in Heterogeneous Networks
|
math-ph cs.SI math.MP physics.soc-ph
|
This paper studies the opinion dynamics model recently introduced by
Hegselmann and Krause: each agent in a group maintains a real number describing
its opinion; and each agent updates its opinion by averaging all other opinions
that are within some given confidence range. The confidence ranges are distinct
for each agent. This heterogeneity and state-dependent topology leads to
poorly-understood complex dynamic behavior. We classify the agents via their
interconnection topology and, accordingly, compute the equilibria of the
system. We conjecture that any trajectory of this model eventually converges to
a steady state under fixed topology. To establish this conjecture, we derive
two novel sufficient conditions: both conditions guarantee convergence and
constant topology for infinite time, while one condition also guarantees
monotonicity of the convergence. In the evolution under fixed topology for
infinite time, we define leader groups that determine the followers' rate and
direction of convergence.
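The averaging rule described in this abstract can be sketched in a few lines; the opinions and confidence ranges below are hypothetical, chosen so that two open-minded agents merge while a stubborn agent stays put.

```python
def hk_step(opinions, ranges):
    """One synchronous Hegselmann-Krause update: each agent averages all
    opinions (including its own) within its own confidence range."""
    new = []
    for i, xi in enumerate(opinions):
        neighbors = [xj for xj in opinions if abs(xj - xi) <= ranges[i]]
        new.append(sum(neighbors) / len(neighbors))
    return new

# Heterogeneous confidence ranges: agent 0 is open-minded, agent 2 is stubborn.
opinions = [0.0, 0.5, 1.0]
ranges = [0.6, 0.3, 0.1]
for _ in range(50):
    opinions = hk_step(opinions, ranges)
```

Under these (hypothetical) parameters, agents 0 and 1 converge to a common opinion while agent 2, whose confidence range contains no one else, never moves, illustrating the state-dependent topology the abstract refers to.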
|
1010.2198
|
Nearness to Local Subspace Algorithm for Subspace and Motion
Segmentation
|
cs.IT math.IT
|
There is a growing interest in computer science, engineering, and mathematics
for modeling signals in terms of union of subspaces and manifolds. Subspace
segmentation and clustering of high dimensional data drawn from a union of
subspaces are especially important with many practical applications in computer
vision, image and signal processing, communications, and information theory.
This paper presents a clustering algorithm for high dimensional data that comes
from a union of lower dimensional subspaces of equal and known dimensions. Such
cases occur in many data clustering problems, such as motion segmentation and
face recognition. The algorithm is reliable in the presence of noise, and
applied to the Hopkins 155 Dataset, it generates the best results to date for
motion segmentation. The two-motion, three-motion, and overall segmentation
rates for the video sequences are 99.43%, 98.69%, and 99.24%, respectively.
|
1010.2236
|
On the Scaling Law for Compressive Sensing and its Applications
|
cs.IT math.IT
|
$\ell_1$ minimization can be used to recover sufficiently sparse unknown
signals from compressed linear measurements. In fact, exact thresholds on the
sparsity (the size of the support set), under which with high probability a
sparse signal can be recovered from i.i.d. Gaussian measurements, have been
computed and are referred to as "weak thresholds" \cite{D}. It was also known
that there is a tradeoff between the sparsity and the $\ell_1$ minimization
recovery stability. In this paper, we give a \emph{closed-form}
characterization for this tradeoff which we call the scaling law for
compressive sensing recovery stability. In a nutshell, we are able to show that
as the sparsity backs off $\varpi$ ($0<\varpi<1$) from the weak threshold of
$\ell_1$ recovery, the parameter for the recovery stability will scale as
$\frac{1}{\sqrt{1-\varpi}}$. Our result is based on a careful analysis through
the Grassmann angle framework for the Gaussian measurement matrix. We will
further discuss how this scaling law helps in analyzing the iterative
reweighted $\ell_1$ minimization algorithms. If the nonzero elements over the
signal support follow an amplitude probability density function (pdf)
$f(\cdot)$ whose $t$-th derivative $f^{(t)}(0) \neq 0$ for some integer $t \geq
0$, then a certain iterative reweighted $\ell_1$ minimization algorithm can be
analytically shown to lift the phase transition thresholds (weak thresholds) of
the plain $\ell_1$ minimization algorithm.
|
1010.2247
|
Regions of Attraction for Hybrid Limit Cycles of Walking Robots
|
math.OC cs.RO cs.SY
|
This paper illustrates the application of recent research in
region-of-attraction analysis for nonlinear hybrid limit cycles. Three example
systems are analyzed in detail: the van der Pol oscillator, the "rimless
wheel", and the "compass gait", the latter two being simplified models of
underactuated walking robots. The method used involves decomposition of the
dynamics about the target cycle into tangential and transverse components, and
a search for a Lyapunov function in the transverse dynamics using
sum-of-squares analysis (semidefinite programming). Each example illuminates
different aspects of the procedure, including optimization of transversal
surfaces, the handling of impact maps, optimization of the Lyapunov function,
and orbitally-stabilizing control design.
|
1010.2285
|
Information-based complexity, feedback and dynamics in convex
programming
|
cs.IT cs.SY math.IT math.OC
|
We study the intrinsic limitations of sequential convex optimization through
the lens of feedback information theory. In the oracle model of optimization,
an algorithm queries an {\em oracle} for noisy information about the unknown
objective function, and the goal is to (approximately) minimize every function
in a given class using as few queries as possible. We show that, in order for a
function to be optimized, the algorithm must be able to accumulate enough
information about the objective. This, in turn, puts limits on the speed of
optimization under specific assumptions on the oracle and the type of feedback.
Our techniques are akin to the ones used in statistical literature to obtain
minimax lower bounds on the risks of estimation procedures; the notable
difference is that, unlike in the case of i.i.d. data, a sequential
optimization algorithm can gather observations in a {\em controlled} manner, so
that the amount of information at each step is allowed to change in time. In
particular, we show that optimization algorithms often obey the law of
diminishing returns: the signal-to-noise ratio drops as the optimization
algorithm approaches the optimum. To underscore the generality of the tools, we
use our approach to derive fundamental lower bounds for a certain active
learning problem. Overall, the present work connects the intuitive notions of
information in optimization, experimental design, estimation, and active
learning to the quantitative notion of Shannon information.
|
1010.2286
|
Divergence-based characterization of fundamental limitations of adaptive
dynamical systems
|
cs.IT math.IT math.OC
|
Adaptive dynamical systems arise in a multitude of contexts, e.g.,
optimization, control, communications, signal processing, and machine learning.
A precise characterization of their fundamental limitations is therefore of
paramount importance. In this paper, we consider the general problem of
adaptively controlling and/or identifying a stochastic dynamical system, where
our {\em a priori} knowledge allows us to place the system in a subset of a
metric space (the uncertainty set). We present an information-theoretic
meta-theorem that captures the trade-off between the metric complexity (or
richness) of the uncertainty set, the amount of information acquired online in
the process of controlling and observing the system, and the residual
uncertainty remaining after the observations have been collected. Following the
approach of Zames, we quantify {\em a priori} information by the Kolmogorov
(metric) entropy of the uncertainty set, while the information acquired online
is expressed as a sum of information divergences. The general theory is used to
derive new minimax lower bounds on the metric identification error, as well as
to give a simple derivation of the minimum time needed to stabilize an
uncertain stochastic linear system.
|
1010.2345
|
Using Context Dependent Semantic Similarity to Browse Information
Resources: an Application for the Industrial Design
|
cs.DL cs.IR
|
This paper deals with the semantic interpretation of information resources
(e.g., images, videos, 3D models). We present a case study of an approach based
on semantic and context dependent similarity applied to the industrial design.
Different application contexts are considered and modelled to browse a
repository of 3D digital objects according to different perspectives. The paper
briefly summarises the basic concepts behind the semantic similarity approach
and illustrates its application and results.
|
1010.2358
|
On Finding Frequent Patterns in Event Sequences
|
cs.DS cs.DB
|
Given a directed acyclic graph with labeled vertices, we consider the problem
of finding the most common label sequences ("traces") among all paths in the
graph (of some maximum length m). Since the number of paths can be huge, we
propose novel algorithms whose time complexity depends only on the size of the
graph, and on the frequency epsilon of the most frequent traces. In addition,
we apply techniques from streaming algorithms to achieve space usage that
depends only on epsilon, and not on the number of distinct traces. The abstract
problem considered models a variety of tasks concerning finding frequent
patterns in event sequences. Our motivation comes from working with a data set
of 2 million RFID readings from baggage trolleys at Copenhagen Airport. The
question of finding frequent passenger movement patterns is mapped to the above
problem. We report on experimental findings for this data set.
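As a concrete (hypothetical) illustration of the abstract problem, the brute-force baseline below enumerates all paths of up to m vertices in a small vertex-labeled DAG and counts their traces; the paper's streaming algorithms avoid exactly this enumeration, whose cost grows with the number of paths.

```python
from collections import Counter

def trace_counts(adj, labels, m):
    """Count label sequences ('traces') over all paths of up to m vertices
    in a vertex-labeled DAG, by brute-force path enumeration."""
    counts = Counter()

    def walk(v, trace):
        trace = trace + (labels[v],)
        counts[trace] += 1
        if len(trace) < m:
            for w in adj.get(v, []):
                walk(w, trace)

    for v in labels:          # every vertex may start a path
        walk(v, ())
    return counts

# Hypothetical airport-style event DAG: two check-in desks feed security,
# which feeds the gate.
labels = {1: "checkin", 2: "security", 3: "gate", 4: "checkin"}
adj = {1: [2], 4: [2], 2: [3]}
counts = trace_counts(adj, labels, 3)
most_common, freq = counts.most_common(1)[0]
```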
|
1010.2371
|
On Finding Similar Items in a Stream of Transactions
|
cs.DS cs.DB
|
While there has been a lot of work on finding frequent itemsets in
transaction data streams, none of these solve the problem of finding similar
pairs according to standard similarity measures. This paper is a first attempt
at dealing with this, arguably more important, problem. We start out with a
negative result that also explains the lack of theoretical upper bounds on the
space usage of data mining algorithms for finding frequent itemsets: Any
algorithm that (even only approximately and with a chance of error) finds the
most frequent k-itemset must use space Omega(min{mb,n^k,(mb/phi)^k}) bits,
where mb is the number of items in the stream so far, n is the number of
distinct items and phi is a support threshold. To achieve any non-trivial space
upper bound we must thus abandon a worst-case assumption on the data stream. We
work under the model that the transactions come in random order, and show that
surprisingly, not only is small-space similarity mining possible for the most
common similarity measures, but the mining accuracy improves with the length of
the stream for any fixed support threshold.
|
1010.2382
|
Capacity Achieving Modulation for Fixed Constellations with Average
Power Constraint
|
cs.IT math.IT
|
The capacity achieving probability mass function (PMF) of a finite signal
constellation with an average power constraint is in most cases non-uniform. A
common approach to generating non-uniform input PMFs is Huffman shaping, which
consists of first approximating the capacity-achieving PMF by a sampled
Gaussian density and then calculating the Huffman code of that density. The
Huffman code is then used as a prefix-free modulation code. This approach has
shown good results in practice but can lead to a significant gap to capacity.
In this work, a method is proposed that efficiently constructs
optimal prefix-free modulation codes for any finite signal constellation with
average power constraint in additive noise. The proposed codes operate as close
to capacity as desired. The major part of this work elaborates an analytical
proof of this property. The proposed method is applied to 64-QAM in AWGN and
numerical results are given, which show that, in contrast to Huffman shaping,
the proposed method makes it possible to operate very close to capacity over
the whole range of parameters.
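As a minimal sketch of the Huffman-shaping baseline described above (not of the paper's proposed method): sample a Gaussian at the constellation points, normalize it to a PMF, and build a Huffman code over the points; the dyadic PMF induced by the codeword lengths then approximates the Gaussian shape. The 4-ASK constellation and variance below are hypothetical.

```python
import heapq
import itertools
import math

def huffman_code(probs):
    """Build a binary Huffman code for a dict of symbol probabilities."""
    heap = [(p, i, [s]) for i, (s, p) in enumerate(probs.items())]
    counter = itertools.count(len(heap))   # tie-breaker for equal weights
    heapq.heapify(heap)
    codes = {s: "" for s in probs}
    while len(heap) > 1:
        p0, _, s0 = heapq.heappop(heap)
        p1, _, s1 = heapq.heappop(heap)
        for s in s0:
            codes[s] = "0" + codes[s]
        for s in s1:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (p0 + p1, next(counter), s0 + s1))
    return codes

# Hypothetical 4-ASK constellation, shaped towards a sampled Gaussian.
points = [-3, -1, 1, 3]
sigma = 2.0
g = {x: math.exp(-x * x / (2 * sigma * sigma)) for x in points}
Z = sum(g.values())
pmf = {x: p / Z for x, p in g.items()}
codes = huffman_code(pmf)
# The induced (dyadic) PMF assigns 2^-len(codeword) to each point,
# favoring the low-energy inner points.
induced = {x: 2.0 ** -len(c) for x, c in codes.items()}
```

The induced PMF is restricted to dyadic probabilities, which is precisely the mismatch that can leave a gap to capacity as the abstract notes.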
|
1010.2384
|
Learning Taxonomy for Text Segmentation by Formal Concept Analysis
|
cs.CL
|
In this paper, the problems of deriving a taxonomy from a text and of
concept-oriented text segmentation are addressed. The Formal Concept Analysis
(FCA) method is applied to solve both of these linguistic problems. The
proposed segmentation method offers a conceptual view for text segmentation,
using a context-driven clustering of sentences. The Concept-oriented Clustering
Segmentation algorithm (COCS) is based on k-means linear clustering of the
sentences. Experimental results obtained using COCS algorithm are presented.
|
1010.2433
|
Capacity of 1-to-K Broadcast Packet Erasure Channels with Channel Output
Feedback
|
cs.IT math.IT
|
This paper focuses on the 1-to-K broadcast packet erasure channel (PEC),
which is a generalization of the broadcast binary erasure channel from the
binary symbol to that of arbitrary finite fields GF(q) with sufficiently large
q. We consider the setting in which the source node has instant feedback of the
channel outputs of the K receivers after each transmission. Such a setting
directly models network coded packet transmission in the downlink direction
with integrated feedback mechanisms (such as Automatic Repeat reQuest (ARQ)).
The main results of this paper are: (i) The capacity region for general
1-to-3 broadcast PECs, and (ii) The capacity region for two classes of 1-to-K
broadcast PECs: the symmetric PECs, and the spatially independent PECs with
one-sided fairness constraints. This paper also develops (iii) A pair of outer
and inner bounds of the capacity region for arbitrary 1-to-K broadcast PECs,
which can be evaluated by any linear programming solver. For most practical
scenarios, the outer and inner bounds meet and thus jointly characterize the
capacity.
|
1010.2436
|
Capacity of 1-to-K Broadcast Packet Erasure Channels with Channel Output
Feedback (Full Version)
|
cs.IT math.IT
|
This paper focuses on the 1-to-K broadcast packet erasure channel (PEC),
which is a generalization of the broadcast binary erasure channel from the
binary symbol to that of arbitrary finite fields GF(q) with sufficiently large
q. We consider the setting in which the source node has instant feedback of the
channel outputs of the K receivers after each transmission. The capacity region
of the 1-to-K PEC with COF was previously known only for the case K=2. Such a
setting directly models network coded packet transmission in the downlink
direction with integrated feedback mechanisms (such as Automatic Repeat reQuest
(ARQ)).
The main results of this paper are: (i) The capacity region for general
1-to-3 broadcast PECs, and (ii) The capacity region for two types of 1-to-K
broadcast PECs: the symmetric PECs, and the spatially independent PECs with
one-sided fairness constraints. This paper also develops (iii) A pair of outer
and inner bounds of the capacity region for arbitrary 1-to-K broadcast PECs,
which can be easily evaluated by any linear programming solver. The proposed
inner bound is proven by a new class of intersession network coding schemes,
termed the packet evolution schemes, which is based on the concept of code
alignment in GF(q) that is in parallel with the interference alignment
techniques for the Euclidean space. Extensive numerical experiments show that
the outer and inner bounds meet for almost all broadcast PECs encountered in
practical scenarios and thus effectively bracket the capacity of general 1-to-K
broadcast PECs with COF.
|
1010.2437
|
On Achievable Rates of the Two-user Symmetric Gaussian Interference
Channel
|
cs.IT math.IT
|
We study the Han-Kobayashi (HK) achievable sum rate for the two-user
symmetric Gaussian interference channel. We find the optimal power split ratio
between the common and private messages (assuming no time-sharing), and derive
a closed form expression for the corresponding sum rate. This provides a finer
understanding of the achievable HK sum rate, and allows for precise comparisons
between this sum rate and that of orthogonal signaling. One surprising finding
is that despite the fact that the channel is symmetric, allowing for asymmetric
power split ratio at both users (i.e., asymmetric rates) can improve the sum
rate significantly. Considering the high SNR regime, we specify the
interference channel value above which the sum rate achieved using asymmetric
power splitting outperforms the symmetric case.
|
1010.2438
|
On Designing Multicore-aware Simulators for Biological Systems
|
cs.DC cs.CE q-bio.QM
|
The stochastic simulation of biological systems is an increasingly popular
technique in bioinformatics. It is often an enlightening technique, which may,
however, be computationally expensive. We discuss the main opportunities to
speed it up on multi-core platforms, which pose new challenges for
parallelisation techniques. These opportunities are developed in two general
families of solutions, involving both a single simulation and a bulk of
independent simulations (either replicas or runs derived from a parameter
sweep). The proposed solutions are tested on the parallelisation of the CWC
simulator (Calculus of Wrapped Compartments), carried out using the FastFlow
programming framework, which enables fast development and efficient execution
on multi-cores.
|
1010.2439
|
Conservation Law of Utility and Equilibria in Non-Zero Sum Games
|
cs.GT cs.AI math.OC
|
This short note demonstrates how one can define a transformation of a
non-zero sum game into a zero sum, so that the optimal mixed strategy achieving
equilibrium always exists. The transformation is equivalent to introduction of
a passive player into a game (a player with a singleton set of pure
strategies), whose payoff depends on the actions of the active players, and it
is justified by the law of conservation of utility in a game. In a transformed
game, each participant plays against all other players, including the passive
player. The advantage of this approach is that the transformed game is zero-sum
and has an equilibrium solution. The optimal strategy and the value of the new
game, however, can be different from strategies that are rational in the
original game. We demonstrate the principle using the Prisoner's Dilemma
example.
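The transformation is simple enough to state in code. Below is a sketch on the Prisoner's Dilemma with the usual textbook payoffs (not values taken from the note): a passive third player receives minus the total utility of the active players, so every outcome of the extended game sums to zero.

```python
# Prisoner's Dilemma payoffs (hypothetical standard values): (row, col).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def add_passive_player(game):
    """Extend each outcome with a passive player's payoff equal to minus the
    total utility, so total utility is conserved (the game becomes zero-sum)."""
    return {acts: pay + (-sum(pay),) for acts, pay in game.items()}

extended = add_passive_player(payoffs)
# Every outcome of the extended game sums to zero.
assert all(sum(p) == 0 for p in extended.values())
```

The passive player has a singleton strategy set, so the transformation changes no strategic choice of the active players; only the bookkeeping of utility changes.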
|
1010.2440
|
Enabling Data Discovery through Virtual Internet Repositories
|
cs.DL cs.IR
|
Mercury is a federated metadata harvesting, search and retrieval tool based
on both open source software and software developed at Oak Ridge National
Laboratory. It was originally developed for NASA, and the Mercury development
consortium now includes funding from NASA, USGS, and DOE. A major new version
of Mercury was developed during 2007. This new version provides
orders-of-magnitude improvements in search speed, support for additional
metadata formats, integration with Google Maps for spatial queries, and support
for RSS delivery of search results, among other features. Mercury provides a single portal to
information contained in disparate data management systems. It collects
metadata and key data from contributing project servers distributed around the
world and builds a centralized index. The Mercury search interfaces then allow
the users to perform simple, fielded, spatial and temporal searches across
these metadata sources. This centralized repository of metadata with
distributed data sources provides extremely fast search results to the user,
while allowing data providers to advertise the availability of their data and
maintain complete control and ownership of that data.
|
1010.2441
|
Note on Noisy Group Testing: Asymptotic Bounds and Belief Propagation
Reconstruction
|
cs.IT math.IT
|
An information theoretic perspective on group testing problems has recently
been proposed by Atia and Saligrama, in order to characterise the optimal
number of tests. Their results hold in the noiseless case, in the case where
only false positives occur, and in the case where only false negatives occur.
We extend their results
to a model containing both false positives and false negatives, developing
simple information theoretic bounds on the number of tests required. Based on
these bounds, we obtain an improved order of convergence in the case of false
negatives only. Since these results are based on (computationally infeasible)
joint typicality decoding, we propose a belief propagation algorithm for the
detection of defective items and compare its actual performance to the
theoretical bounds.
|
1010.2457
|
Optimal designs for Lasso and Dantzig selector using Expander Codes
|
math.ST cs.IT math.IT math.PR stat.ME stat.ML stat.TH
|
We investigate the high-dimensional regression problem using adjacency
matrices of unbalanced expander graphs. In this frame, we prove that the
$\ell_{2}$-prediction error and the $\ell_{1}$-risk of the lasso and the
Dantzig selector are optimal up to an explicit multiplicative constant. Thus we
can estimate a high-dimensional target vector with an error term similar to the
one obtained in a situation where one knows the support of the largest
coordinates in advance.
Moreover, we show that these design matrices have an explicit restricted
eigenvalue. Precisely, they satisfy the restricted eigenvalue assumption and
the compatibility condition with an explicit constant.
Eventually, we capitalize on the recent construction of unbalanced expander
graphs due to Guruswami, Umans, and Vadhan, to provide a deterministic
polynomial time construction of these design matrices.
|
1010.2460
|
Line graphs as social networks
|
physics.soc-ph cs.SI physics.data-an
|
Line graphs are clustered and assortative. They share these topological
features with some social networks. We argue that this similarity reveals the
cliquey character of the social networks. In the model proposed here, a social
network is the line graph of an initial network of families, communities,
interest groups, school classes and small companies. These groups play the role
of nodes, and individuals are represented by links between these nodes. The
picture is supported by the data on the LiveJournal network of about 8 x 10^6
people. In particular, sharp maxima of the observed data of the degree
dependence of the clustering coefficient C(k) are associated with cliques in
the social network.
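A sketch of the model's construction, with hypothetical groups: groups are nodes, individuals are edges, and the social network is the line graph of the group network. Note how all individuals attached to the same group form a clique, reflecting the cliquey character the abstract describes.

```python
from itertools import combinations

def line_graph(edges):
    """Line graph: its nodes are the edges of the original graph, and two
    nodes are adjacent iff the corresponding edges share an endpoint."""
    lg = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if set(e) & set(f):   # shared group => the individuals know each other
            lg[e].add(f)
            lg[f].add(e)
    return lg

# Hypothetical group network: a family F, workplaces W and W2, a club C.
# Each edge is an individual belonging to both endpoint groups.
edges = [("F", "W"), ("F", "C"), ("W", "C"), ("F", "W2")]
social = line_graph(edges)
```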
|
1010.2521
|
Cooperation, Norms, and Revolutions: A Unified Game-Theoretical Approach
|
physics.soc-ph cs.SI
|
Cooperation is of utmost importance to society as a whole, but is often
challenged by individual self-interests. While game theory has studied this
problem extensively, there is little work on interactions within and across
groups with different preferences or beliefs. Yet, people from different social
or cultural backgrounds often meet and interact. This can yield conflict, since
behavior that is considered cooperative by one population might be perceived as
non-cooperative from the viewpoint of another.
To understand the dynamics and outcome of the competitive interactions within
and between groups, we study game-dynamical replicator equations for multiple
populations with incompatible interests and different power (be this due to
different population sizes, material resources, social capital, or other
factors). These equations allow us to address various important questions: For
example, can cooperation in the prisoner's dilemma be promoted, when two
interacting groups have different preferences? Under what conditions can costly
punishment, or other mechanisms, foster the evolution of norms? When does
cooperation fail, leading to antagonistic behavior, conflict, or even
revolutions? And what incentives are needed to reach peaceful agreements
between groups with conflicting interests?
Our detailed quantitative analysis reveals a large variety of interesting
results, which are relevant for society, law and economics, and have
implications for the evolution of language and culture as well.
|
1010.2551
|
Fractional Repetition Codes for Repair in Distributed Storage Systems
|
cs.IT math.IT
|
We introduce a new class of exact Minimum-Bandwidth Regenerating (MBR) codes
for distributed storage systems, characterized by a low-complexity uncoded
repair process that can tolerate multiple node failures. These codes consist of
the concatenation of two components: an outer MDS code followed by an inner
repetition code. We refer to the inner code as a Fractional Repetition code
since it consists of splitting the data of each node into several packets and
storing multiple replicas of each on different nodes in the system.
Our model for repair is table-based, and thus, differs from the random access
model adopted in the literature. We present constructions of Fractional
Repetition codes based on regular graphs and Steiner systems for a large set of
system parameters. The resulting codes are guaranteed to achieve the storage
capacity for random access repair. The considered model motivates a new
definition of capacity for distributed storage systems, which we call Fractional
Repetition capacity. We provide upper bounds on this capacity, while a precise
expression remains an open problem.
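A minimal sketch of the regular-graph construction idea: take the complete graph K_n, treat each edge as a packet, and store each packet on its two endpoint nodes, giving repetition degree 2. The function name and parameters are illustrative; the Steiner-system constructions in the paper are richer:

```python
from itertools import combinations

def fr_code_complete_graph(n):
    """Fractional Repetition code from the complete graph K_n: each of the
    n*(n-1)/2 packets is an edge, stored on its two endpoint nodes, so every
    node holds n-1 distinct packets and each packet has repetition degree 2."""
    packets = list(combinations(range(n), 2))
    storage = {v: [p for p in packets if v in p] for v in range(n)}
    return packets, storage

packets, storage = fr_code_complete_graph(5)
```

When a node fails, each of its packets survives uncoded on exactly one other node, which is the low-complexity table-based repair the abstract describes.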
|
1010.2571
|
Cooperative Precoding with Limited Feedback for MIMO Interference
Channels
|
cs.IT math.IT
|
Multi-antenna precoding effectively mitigates the interference in wireless
networks. However, the resultant performance gains can be significantly
compromised in practice if the precoder design fails to account for the
inaccuracy in the channel state information (CSI) feedback. This paper
addresses this issue by considering finite-rate CSI feedback from receivers to
their interfering transmitters in the two-user multiple-input-multiple-output
(MIMO) interference channel, called cooperative feedback, and proposing a
systematic method for designing transceivers comprising linear precoders and
equalizers. Specifically, each precoder/equalizer is decomposed into inner and
outer components for nulling the cross-link interference and achieving array
gain, respectively. The inner precoders/equalizers are further optimized to
suppress the residual interference resulting from finite-rate cooperative
feedback. Furthermore, the residual interference is regulated by additional
scalar cooperative feedback signals that are designed to control transmission
power using different criteria including fixed interference margin and maximum
sum throughput. Finally, the required number of cooperative precoder feedback
bits is derived for limiting the throughput loss due to precoder quantization.
|
1010.2572
|
Empirical study on some interconnecting bilayer networks
|
physics.soc-ph cs.SI
|
This manuscript serves as an online supplement to a preprint, which presents
a study of a class of bilayer networks in which some nodes (called
interconnecting nodes) in the two layers merge. A model showing an important
general property of such bilayer networks is proposed. Then the analytic
discussion of the model is
compared with empirical conclusions. We present all the empirical observations
in this online supplement.
|
1010.2595
|
Kolmogorov Complexity in perspective. Part II: Classification,
Information Processing and Duality
|
cs.LO cs.CC cs.IT math.IT physics.data-an
|
We survey diverse approaches to the notion of information: from Shannon
entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov
complexity are presented: randomness and classification. The survey is divided
into two parts published in the same volume. Part II is dedicated to the
relation between logic and information systems, within the scope of Kolmogorov
algorithmic information theory. We present a recent application of Kolmogorov
complexity: classification using compression, an idea with provocative
implementation by authors such as Bennett, Vitanyi and Cilibrasi. This stresses
how Kolmogorov complexity, besides being a foundation to randomness, is also
related to classification. Another approach to classification is also
considered: the so-called "Google classification". It uses another original and
attractive idea which is connected to the classification using compression and
to Kolmogorov complexity from a conceptual point of view. We present and unify
these different approaches to classification in terms of Bottom-Up versus
Top-Down operational modes, of which we point the fundamental principles and
the underlying duality. We look at the way these two dual modes are used in
different approaches to information systems, particularly the relational model
for databases introduced by Codd in the 1970s. This allows us to point out diverse
forms of a fundamental duality. These operational modes are also reinterpreted
in the context of the comprehension schema of axiomatic set theory ZF. This
leads us to show how Kolmogorov complexity is linked to intensionality,
abstraction, classification and information systems.
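The compression-based classification idea (Bennett, Vitanyi, Cilibrasi) can be illustrated with the normalized compression distance, which approximates the uncomputable Kolmogorov complexity by a real compressor's output length. A toy sketch using zlib; the strings are illustrative:

```python
import zlib

def C(data):
    """Compressed length as a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x, y):
    """Normalized Compression Distance between two byte strings."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b_ = b"colourless green ideas sleep furiously tonight " * 20
# ncd(a, a) is close to 0; ncd(a, b_) is much larger
```

Objects that share structure compress well together, so clustering by NCD groups them without any domain-specific features.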
|
1010.2619
|
Graph-theoretical Constructions for Graph Entropy and Network Coding
Based Communications
|
cs.IT cs.NI math.CO math.IT
|
The guessing number of a directed graph (digraph), equivalent to the entropy
of that digraph, was introduced as a direct criterion on the solvability of a
network coding instance. This paper makes two contributions on the guessing
number. First, we introduce an undirected graph on all possible configurations
of the digraph, referred to as the guessing graph, which encapsulates the
essence of dependence amongst configurations. We prove that the guessing number
of a digraph is equal to the logarithm of the independence number of its
guessing graph. Therefore, network coding solvability is no longer a problem
about the operations performed by each node, but reduces to a problem about the
messages that can transit through the network. By studying the guessing graph
of a given digraph, and how to combine digraphs or alphabets, we are thus able
to derive bounds on the guessing number of digraphs. Second, we construct
specific digraphs with high guessing numbers, yielding network coding instances
where a large amount of information can transit. We first propose a
construction of digraphs with finite parameters based on cyclic codes, with
guessing number equal to the degree of the generator polynomial. We then
construct an infinite class of digraphs with arbitrary girth for which the
ratio between the linear guessing number and the number of vertices tends to
one, despite these digraphs being arbitrarily sparse. These constructions yield
solvable network coding instances with a relatively small number of
intermediate nodes for which the node operations are known and linear, although
these instances are sparse and the sources are arbitrarily far from their
corresponding sinks.
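A brute-force toy illustrating the guessing number, not taken from the paper: on the directed 3-cycle with a binary alphabet, each node sees only its predecessor and guesses its own value; the guessing number is the log of the largest number of configurations that some strategy tuple guesses entirely correctly:

```python
from itertools import product
from math import log2

def max_fixed_configs(n=3, q=2):
    """Guessing game on the directed n-cycle: node i sees only node i-1
    (mod n) and guesses its own value. Returns the largest number of
    configurations guessed entirely correctly over all strategy tuples."""
    funcs = list(product(range(q), repeat=q))  # all maps {0..q-1} -> {0..q-1}
    best = 0
    for strategy in product(funcs, repeat=n):
        wins = sum(
            all(strategy[i][x[(i - 1) % n]] == x[i] for i in range(n))
            for x in product(range(q), repeat=n)
        )
        best = max(best, wins)
    return best

best = max_fixed_configs()
guessing_number = log2(best)  # 1.0 for the binary directed 3-cycle
```

Only the two constant configurations can all be fixed at once here, so the guessing number is 1; the paper's constructions are about digraphs whose guessing number is close to the number of vertices.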
|
1010.2656
|
Indexing Finite Language Representation of Population Genotypes
|
cs.DS cs.CE q-bio.QM
|
With the recent advances in DNA sequencing, it is now possible to have
complete genomes of individuals sequenced and assembled. This rich and focused
genotype information can be used to conduct a variety of population-wide
studies, for the first time directly at the whole-genome level. We propose a
way to index population
genotype information together with the complete genome sequence, so that one
can use the index to efficiently align a given sequence to the genome with all
plausible genotype recombinations taken into account. This is achieved through
converting a multiple alignment of individual genomes into a finite automaton
recognizing all strings that can be read from the alignment by switching the
sequence at any time. The finite automaton is indexed with an extension of
Burrows-Wheeler transform to allow pattern search inside the plausible
recombinant sequences. The size of the index stays limited, because of the high
similarity of individual genomes. The index finds applications in variation
calling and in primer design. On a variation calling experiment, we found about
1.0% of matches to novel recombinants just with exact matching, and up to 2.4%
with approximate matching.
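The index extends the Burrows-Wheeler transform to a finite automaton; the classic string BWT underlying it can be sketched with the naive rotation-sorting construction (illustrative only, not the paper's automaton extension):

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted cyclic rotations; '$' is a
    unique end-of-string sentinel assumed smaller than every other character."""
    assert s.endswith("$")
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

out = bwt("banana$")  # -> "annb$aa"
```

The transform groups characters with similar right-contexts together, which is what makes backward search and compression of highly similar genomes effective.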
|
1010.2667
|
Virtual Full-Duplex Wireless Communication via Rapid On-Off-Division
Duplex
|
cs.IT cs.NI math.IT
|
This paper introduces a novel paradigm for designing the physical and
medium access control (MAC) layers of mobile ad hoc or peer-to-peer networks
formed by half-duplex radios. A node equipped with such a radio cannot
simultaneously transmit and receive useful signals at the same frequency.
Unlike in conventional designs, where a node's transmission frames are
scheduled away from its reception, each node transmits its signal through a
randomly generated on-off duplex mask (or signature) over every frame interval,
and receive a signal through each of its own off-slots. This is called rapid
on-off- division duplex (RODD). Over the period of a single frame, every node
can transmit a message to some or all of its peers, and may simultaneously
receive a message from each peer. Thus RODD achieves virtual full-duplex
communication using half-duplex radios and can simplify the design of higher
layers of a network protocol stack significantly. The throughput of RODD is
evaluated under some general settings and shown to be significantly larger
than that of ALOHA. RODD is especially efficient when the dominant traffic is
simultaneous broadcast from nodes to their one-hop peers, as in spontaneous
wireless social networks, emergency situations, or on the battlefield.
Important design issues of peer discovery, distribution of on-off signatures,
synchronization and error-control coding are also addressed.
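The random on-off signature idea can be sketched as follows: each node draws a random duplex mask per frame, and the slots where node 0 is off while node 1 is on are exactly the slots where 0 can hear 1. Names and parameters are illustrative:

```python
import random

def make_signatures(num_nodes, frame_len, p_on=0.5, seed=1):
    """One random on-off duplex mask per node: a node transmits in its
    'on' slots and can listen only during its own 'off' slots."""
    rng = random.Random(seed)
    return [[rng.random() < p_on for _ in range(frame_len)]
            for _ in range(num_nodes)]

sigs = make_signatures(num_nodes=4, frame_len=32)
# slots where node 0 is off while node 1 is on: node 0 can hear node 1 there
audible = [t for t in range(32) if not sigs[0][t] and sigs[1][t]]
```

With independent masks, roughly a quarter of the slots are audible from any one peer, so every node both broadcasts and listens within a single frame, which is the "virtual full-duplex" effect.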
|
1010.2686
|
How to Achieve the Optimal DMT of Selective Fading MIMO Channels?
|
cs.IT math.IT
|
In this paper, we consider a particular class of selective fading channels,
namely channels that are selective either in time or in frequency. For this
class of channels, we propose a systematic way to achieve the optimal
DMT derived in Coronel and B\"olcskei, IEEE ISIT, 2007 by extending the
non-vanishing determinant (NVD) criterion to the selective channel case. A new
code construction based on split NVD parallel codes is then proposed to satisfy
the NVD parallel criterion. This result is of significant interest not only in
its own right, but also because it settles a long-standing debate in the
literature related to the optimal DMT of selective fading channels.
|
1010.2692
|
Resource Allocation via Sum-Rate Maximization in the Uplink of
Multi-Cell OFDMA Networks
|
math.OC cs.IT math.IT
|
In this paper, we consider maximizing the sum-rate in the uplink of a
multi-cell OFDMA network. The problem has a non-convex combinatorial structure
and is known to be NP-hard. Due to the inherent complexity of implementing the
optimal solution, firstly, we derive an upper and lower bound to the optimal
average network throughput. Moreover, we investigate the performance of a
near-optimal single-cell resource allocation scheme in the presence of
inter-cell interference (ICI), which leads to another easily computable lower
bound. We then develop a centralized
sub-optimal scheme that is composed of a geometric programming based power
control phase in conjunction with an iterative subcarrier allocation phase.
Although the scheme is computationally complex, it provides an effective
benchmark for low complexity schemes even without the power control phase.
Finally, we propose less complex centralized and distributed schemes that are
well-suited for practical scenarios. The computational complexity of all
schemes is analyzed and performance is compared through simulations. Simulation
results demonstrate that the proposed low complexity schemes can achieve
comparable performance to the centralized sub-optimal scheme in various
scenarios. Moreover, comparisons with the upper and lower bounds provide
insight on the performance gap between the proposed schemes and the optimal
solution.
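A toy single-cell baseline in the spirit of the simpler schemes, not the paper's geometric-programming method: assign each subcarrier to the user with the best gain on it and sum the Shannon rates. All names and numbers are illustrative:

```python
import math

def greedy_allocation(gains, snr=10.0):
    """Single-cell toy: give each subcarrier to the user with the largest
    channel gain on it, then add up Shannon rates log2(1 + snr * gain)."""
    num_sub = len(next(iter(gains.values())))
    assign, sum_rate = {}, 0.0
    for s in range(num_sub):
        user = max(gains, key=lambda u: gains[u][s])
        assign[s] = user
        sum_rate += math.log2(1.0 + snr * gains[user][s])
    return assign, sum_rate

gains = {"u1": [0.9, 0.2, 0.5], "u2": [0.3, 0.8, 0.6]}
assign, sum_rate = greedy_allocation(gains)
```

In a multi-cell setting this greedy rule ignores the ICI it creates for neighbouring cells, which is precisely the coupling that makes the joint problem NP-hard.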
|
1010.2731
|
A Unified Framework for High-Dimensional Analysis of M-Estimators with
Decomposable Regularizers
|
math.ST cs.IT math.IT stat.ME stat.TH
|
High-dimensional statistical inference deals with models in which the number
of parameters p is comparable to or larger than the sample size n. Since
it is usually impossible to obtain consistent procedures unless
$p/n\rightarrow0$, a line of recent work has studied models with various types
of low-dimensional structure, including sparse vectors, sparse and structured
matrices, low-rank matrices and combinations thereof. In such settings, a
general approach to estimation is to solve a regularized optimization problem,
which combines a loss function measuring how well the model fits the data with
some regularization function that encourages the assumed structure. This paper
provides a unified framework for establishing consistency and convergence rates
for such regularized M-estimators under high-dimensional scaling. We state one
main theorem and show how it can be used to re-derive some existing results,
and also to obtain a number of new results on consistency and convergence
rates, in both $\ell_2$-error and related norms. Our analysis also identifies
two key properties of loss and regularization functions, referred to as
restricted strong convexity and decomposability, that ensure corresponding
regularized M-estimators have fast convergence rates and which are optimal in
many well-studied cases.
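The lasso is the canonical instance of a regularized M-estimator with a decomposable (l1) regularizer: squared loss plus a coordinatewise penalty. A minimal iterative soft-thresholding (ISTA) sketch with illustrative problem sizes, not from the paper:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the l1 norm, applied coordinatewise -- which is
    exactly what decomposability of the regularizer buys."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(X, y, lam, step, iters=500):
    """Iterative soft-thresholding for the lasso: gradient step on the
    squared loss followed by the l1 proximal step."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ beta - y) / len(y)
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(0)
n, p, k = 100, 20, 3
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = ista(X, y, lam=0.1, step=0.05)
```

The estimate recovers the sparse support with a small shrinkage bias; restricted strong convexity of the squared loss over this sparse set is what drives the fast convergence rates in the framework.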
|
1010.2733
|
Combinatorial Continuous Maximal Flows
|
cs.CV math.OC
|
Maximum flow (and minimum cut) algorithms have had a strong impact on
computer vision. In particular, graph cuts algorithms provide a mechanism for
the discrete optimization of an energy functional which has been used in a
variety of applications such as image segmentation, stereo, image stitching and
texture synthesis. Algorithms based on the classical formulation of max-flow
defined on a graph are known to exhibit metrication artefacts in the solution.
Therefore, a recent trend has been to instead employ a spatially continuous
maximum flow (or the dual min-cut problem) in these same applications to
produce solutions with no metrication errors. However, known fast continuous
max-flow algorithms have no stopping criteria or have not been proved to
converge. In this work, we revisit the continuous max-flow problem and show
that the analogous discrete formulation is different from the classical
max-flow problem. We then apply an appropriate combinatorial optimization
technique to this combinatorial continuous max-flow (CCMF) problem to find a
null-divergence solution that exhibits no metrication artefacts and may be
solved exactly by a fast, efficient algorithm with provable convergence.
Finally, by exhibiting the dual problem of our CCMF formulation, we clarify the
fact, already proved by Nozawa in the continuous setting, that the max-flow and
the total variation problems are not always equivalent.
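For contrast with the continuous formulation, the classical discrete max-flow on a graph can be sketched with Edmonds-Karp (a standard textbook algorithm, not the paper's CCMF method; the example graph is illustrative):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths
    found by BFS. cap: dict of dicts giving arc capacities."""
    res = {u: dict(vs) for u, vs in cap.items()}     # residual capacities
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)   # add reverse arcs
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:                 # BFS for shortest path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)        # bottleneck capacity
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
value = max_flow(cap, "s", "t")
```

It is this graph-based formulation whose min-cuts exhibit the metrication artefacts that motivate the continuous and CCMF formulations.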
|
1010.2741
|
MIMO Interference Alignment Over Correlated Channels with Imperfect CSI
|
cs.IT math.IT
|
Interference alignment (IA), given uncorrelated channel components and
perfect channel state information, obtains the maximum degrees of freedom in an
interference channel. Little is known, however, about how the sum rate of IA
behaves at finite transmit power, with imperfect channel state information, or
antenna correlation. This paper provides an approximate closed-form
signal-to-interference-plus-noise-ratio (SINR) expression for IA over
multiple-input-multiple-output (MIMO) channels with imperfect channel state
information and transmit antenna correlation. Assuming linear processing at the
transmitters and zero-forcing receivers, random matrix theory tools are
utilized to derive an approximation for the post-processing SINR distribution
of each stream for each user. Perfect channel knowledge and i.i.d. channel
coefficients constitute special cases. This SINR distribution not only allows
easy calculation of useful performance metrics like sum rate and symbol error
rate, but also permits a realistic comparison of IA with other transmission
techniques. More specifically, IA is compared with spatial multiplexing and
beamforming and it is shown that IA may not be optimal for some performance
criteria.
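A sketch of a zero-forcing receiver's post-processing SINR under imperfect CSI, in the spirit of the setup above: the equalizer is computed from a noisy channel estimate, so residual inter-stream interference appears. The function, error model, and parameters are illustrative assumptions, not the paper's derivation:

```python
import numpy as np

def zf_post_sinr(H, V, snr, err_var=0.0, rng=None):
    """Per-stream post-processing SINR of a zero-forcing receiver whose
    equalizer is built from a noisy channel estimate Hhat = H + error,
    so imperfect CSI leaves residual inter-stream interference."""
    rng = rng or np.random.default_rng(0)
    err = np.sqrt(err_var / 2) * (rng.standard_normal(H.shape)
                                  + 1j * rng.standard_normal(H.shape))
    G = np.linalg.pinv((H + err) @ V)   # ZF rows from the *estimated* channel
    M = G @ (H @ V)                     # mixing matrix after equalization
    sinr = []
    for k in range(V.shape[1]):
        sig = snr * abs(M[k, k]) ** 2
        interf = snr * (np.abs(M[k]) ** 2).sum() - sig   # residual streams
        noise = (np.abs(G[k]) ** 2).sum()                # noise enhancement
        sinr.append(sig / (interf + noise))
    return np.array(sinr)

H = np.eye(2, dtype=complex)
V = np.eye(2)
sinr = zf_post_sinr(H, V, snr=10.0)  # perfect CSI: SINR equals the SNR
```

Increasing `err_var` injects estimation error and lowers the per-stream SINR, which is the effect whose distribution the paper characterizes analytically.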
|
1010.2787
|
Interference Alignment with Analog Channel State Feedback
|
cs.IT math.IT
|
Interference alignment (IA) is a multiplexing gain optimal transmission
strategy for the interference channel. While the achieved sum rate with IA is
much higher than previously thought possible, the improvement often comes at
the cost of requiring network channel state information at the transmitters.
This can be achieved by explicit feedback, a flexible yet potentially costly
approach that incurs large overhead. In this paper we propose analog feedback
as an alternative to limited feedback or reciprocity based alignment. We show
that the full multiplexing gain observed with perfect channel knowledge is
preserved by analog feedback and that the mean loss in sum rate is bounded by a
constant when signal-to-noise ratio is comparable in both forward and feedback
channels. When signal-to-noise ratios are not quite symmetric, a fraction of
the multiplexing gain is achieved. We consider the overhead of training and
feedback and use this framework to optimize the system's effective throughput.
We present simulation results to demonstrate the performance of IA with analog
feedback, verify our theoretical analysis, and extend our conclusions on
optimal training and feedback length.
|