| id | title | categories | abstract |
|---|---|---|---|
1103.1249
|
Randomizing world trade. II. A weighted network analysis
|
physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an q-fin.GN
|
Based on the misleading expectation that weighted network properties always
offer a more complete description than purely topological ones, current
economic models of the International Trade Network (ITN) generally aim at
explaining local weighted properties, not local binary ones. Here we complement
our analysis of the binary projections of the ITN by considering its weighted
representations. We show that, unlike the binary case, all possible weighted
representations of the ITN (directed/undirected, aggregated/disaggregated)
cannot be traced back to local country-specific properties, which are therefore
of limited informativeness. Our two papers show that traditional macroeconomic
approaches systematically fail to capture the key properties of the ITN. In the
binary case, they do not focus on the degree sequence and hence cannot
characterize or replicate higher-order properties. In the weighted case, they
generally focus on the strength sequence, but knowledge of the latter is not
sufficient to understand or reproduce indirect effects.
|
1103.1252
|
Automatic Wrapper Adaptation by Tree Edit Distance Matching
|
cs.AI cs.IR
|
The amount of information distributed through the Web keeps growing day by
day, and for this reason several techniques for extracting Web data have been
proposed in recent years. Extraction tasks are often performed by so-called
wrappers: procedures that extract information from Web pages, e.g. by
implementing logic-based techniques. Many fields of application today require a
strong degree of robustness from wrappers, so as not to compromise assets of
information or the reliability of the extracted data. Unfortunately, a wrapper
may fail to extract data from a Web page if the page's structure changes, even
slightly, which calls for techniques that automatically adapt the wrapper to
the new structure of the page in case of failure. In this work we present a
novel approach to automatic wrapper adaptation based on measuring the
similarity of trees through improved tree edit distance matching techniques.
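The tree-matching idea behind such approaches can be illustrated with a classic Selkow-style top-down matcher, a simpler relative of the improved tree edit distance techniques the abstract mentions; the `(label, children)` tuple encoding of DOM trees is our own assumption, not the paper's representation:

```python
def simple_tree_match(a, b):
    """Count the size of a maximum top-down matching between trees a and b.

    Trees are (label, [children]) tuples. Two nodes match only if their
    labels are equal and their ancestors matched; child sequences are
    aligned with an LCS-style dynamic program.
    """
    if a[0] != b[0]:
        return 0
    m, n = len(a[1]), len(b[1])
    M = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            M[i][j] = max(M[i - 1][j], M[i][j - 1],
                          M[i - 1][j - 1] + simple_tree_match(a[1][i - 1], b[1][j - 1]))
    return 1 + M[m][n]  # 1 for the matched roots plus the best child alignment
```

Matching a page's old and new DOM trees with such a score is one way a wrapper can locate where a previously extracted element has moved to.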
|
1103.1254
|
Design of Automatically Adaptable Web Wrappers
|
cs.AI cs.IR
|
Nowadays, the huge amount of information distributed through the Web
motivates the study of techniques for extracting relevant data in an efficient
and reliable way. Both academia and industry have developed several approaches
to Web data extraction, for example using techniques from artificial
intelligence or machine learning. Some commonly adopted procedures, namely
wrappers, ensure a high degree of precision of the information extracted from
Web pages and, at the same time, must prove robust in order not to compromise
the quality and reliability of the data themselves. In this paper we focus on
some experimental aspects related to the robustness of the data extraction
process and the possibility of automatically adapting wrappers. We discuss the
implementation of algorithms for finding similarities between two different
versions of a Web page, in order to handle modifications, avoid the failure of
data extraction tasks, and ensure the reliability of the extracted information.
Our purpose is to evaluate the performance, advantages, and drawbacks of our
novel system for automatic wrapper adaptation.
|
1103.1255
|
A General Framework for Representing, Reasoning and Querying with
Annotated Semantic Web Data
|
cs.DB
|
We describe a generic framework for representing and reasoning with annotated
Semantic Web data, a task becoming more important with the recent increased
amount of inconsistent and non-reliable meta-data on the web. We formalise the
annotated language, the corresponding deductive system and address the query
answering problem. Previous contributions on specific RDF annotation domains
are encompassed by our unified reasoning formalism as we show by instantiating
it on (i) temporal, (ii) fuzzy, and (iii) provenance annotations. Moreover, we
provide a generic method for combining multiple annotation domains, allowing
us to represent, e.g., temporally annotated fuzzy RDF. Furthermore, we address the
development of a query language -- AnQL -- that is inspired by SPARQL,
including several features of SPARQL 1.1 (subqueries, aggregates, assignment,
solution modifiers) along with the formal definitions of their semantics.
|
1103.1264
|
Polynomial cases of the Discretizable Molecular Distance Geometry
Problem
|
cs.CG cs.CE cs.DS q-bio.QM
|
An important application of distance geometry to biochemistry studies the
embeddings of the vertices of a weighted graph in the three-dimensional
Euclidean space such that the edge weights are equal to the Euclidean distances
between corresponding point pairs. When the graph represents the backbone of a
protein, one can exploit the natural vertex order to show that the search space
for feasible embeddings is discrete. The corresponding decision problem can be
solved using a binary tree based search procedure which is exponential in the
worst case. We discuss assumptions that bound the search tree width to a
polynomial size.
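The discrete search the abstract describes can be illustrated with a toy one-dimensional branch-and-prune sketch (the actual problem is three-dimensional; this simplification and the tolerance-based pruning are ours): each new point has two candidate positions relative to its predecessor, and additional known distances prune branches of the binary tree.

```python
def branch_and_prune(step_dists, prune_dists, tol=1e-9):
    """Enumerate 1D embeddings consistent with the given distances.

    step_dists[i]: required distance between point i+1 and point i.
    prune_dists: {(j, i): required |x_i - x_j|} for non-consecutive pairs,
    used to prune the binary search tree.
    """
    solutions = []

    def extend(xs):
        i = len(xs)
        if i == len(step_dists) + 1:
            solutions.append(tuple(xs))
            return
        # Two candidate positions: predecessor minus/plus the step distance.
        for x in (xs[-1] - step_dists[i - 1], xs[-1] + step_dists[i - 1]):
            ok = all(abs(abs(x - xs[j]) - d) < tol
                     for (j, k), d in prune_dists.items() if k == i)
            if ok:
                extend(xs + [x])

    extend([0.0])  # fix the first point to remove translation symmetry
    return solutions
```

Without pruning distances the tree has 2^(n-1) leaves; each extra distance constraint cuts branches, which is the mechanism behind the polynomial-width assumptions discussed in the paper.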
|
1103.1286
|
A note on Tempelmeier's {\beta}-service measure under non-stationary
stochastic demand
|
math.OC cs.SY
|
Tempelmeier (2007) considers the problem of computing replenishment cycle
policy parameters under non-stationary stochastic demand and service level
constraints. He analyses two possible service level measures: the minimum no
stock-out probability per period ({\alpha}-service level) and the so-called
"fill rate", that is, the fraction of demand satisfied immediately from stock on
hand ({\beta}-service level). For each of these possible measures, he presents
a mixed integer programming (MIP) model to determine the optimal replenishment
cycles and corresponding order-up-to levels minimizing the expected total setup
and holding costs. His approach is essentially based on imposing service level
dependent lower bounds on cycle order-up-to levels. In this note, we argue that
Tempelmeier's strategy, in the {\beta}-service level case, while being an
interesting option for practitioners, does not comply with the standard
definition of "fill rate". By means of a simple numerical example we
demonstrate that, as a consequence, his formulation might yield sub-optimal
policies.
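The standard "fill rate" the note refers to can be shown with a toy single-cycle computation; the order-up-to level and the demand sequence are made-up illustrative values, not data from the note:

```python
def fill_rate(demands, order_up_to):
    """Standard beta-service measure: the fraction of total demand that is
    satisfied immediately from stock on hand, here for a single
    replenishment cycle under an order-up-to policy."""
    stock = order_up_to
    filled = 0
    for d in demands:
        served = min(d, stock)  # demand beyond available stock is not filled
        filled += served
        stock -= served
    return filled / sum(demands)
```

For period demands (3, 4) and an order-up-to level of 5, only 5 of the 7 units are served from stock, giving a fill rate of 5/7.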
|
1103.1305
|
Generic Approach for Hierarchical Modulation Performance Analysis:
Application to DVB-SH
|
cs.IT cs.PF math.IT
|
Broadcasting systems have to deal with channel diversity in order to offer
the best rate to their users. Hierarchical modulation is a practical solution
for providing several rates as a function of the channel quality.
Unfortunately, evaluating the performance of such modulations requires
time-consuming simulations. In this paper we propose a novel approach, based on
the channel capacity, that avoids these simulations. The method allows us to
study the performance, in terms of spectral efficiency, of hierarchical as well
as classical modulations combined with error-correcting codes. We apply our
method to the DVB-SH standard, which considers hierarchical modulation as an
optional feature.
|
1103.1306
|
A Secure Communication Game with a Relay Helping the Eavesdropper
|
cs.IT math.IT
|
In this work a four-terminal complex Gaussian network composed of a source, a
destination, an eavesdropper and a jammer relay is studied under two different
sets of assumptions: (i) the jammer relay does not hear the source transmission,
and (ii) the jammer relay is causally given the source message. In both cases
the jammer relay assists the eavesdropper and aims to decrease the achievable
secrecy rates. The source, on the other hand, aims to increase them. To help the
eavesdropper, the jammer relay can use pure relaying and/or send interference.
Each of the problems is formulated as a two-player, non-cooperative, zero-sum
continuous game. Assuming Gaussian strategies at the source and the jammer
relay in the first problem, the Nash equilibrium is found and shown to be
achieved with mixed strategies in general. The optimal cumulative distribution
functions (cdf) for the source and the jammer relay that achieve the value of
the game, which is the Nash equilibrium secrecy rate, are found. For the second
problem, the Nash equilibrium solution is found and the results are compared to
the case when the jammer relay is not informed about the source message.
|
1103.1343
|
Realization theory of discrete-time linear switched systems
|
math.OC cs.SY
|
The paper presents realization theory of discrete-time linear switched
systems. A discrete-time linear switched system is a hybrid system, such that
the continuous sub-system associated with each discrete state is linear. In
this paper we present necessary and sufficient conditions for an input-output
map to admit a discrete-time linear switched state-space realization. The
conditions are formulated as finite rank conditions of a generalized
Hankel-matrix. In addition, we present a characterization of minimality of
discrete-time linear switched systems in terms of reachability and
observability. Further, we prove that minimal realizations are unique up to
isomorphism. We also discuss procedures for converting a linear switched system
to a minimal one, and we present an algorithm for constructing a state-space
representation from input-output data. The paper uses the theory of rational
formal power series in non-commutative variables, which was successfully
applied to bilinear and state-affine systems in the past.
|
1103.1349
|
On the notion of persistence of excitation for linear switched systems
|
math.OC cs.SY
|
The paper formulates the concept of persistence of excitation for
discrete-time linear switched systems, and provides sufficient conditions for
an input signal to be persistently exciting. Persistence of excitation is
formulated as a property of the input signal, and it is not tied to any
specific identification algorithm. The results of the paper rely on realization
theory and on the notion of Markov-parameters for linear switched systems.
|
1103.1359
|
An Analysis of Optimal Link Bombs
|
cs.DM cs.SI
|
We analyze the phenomenon of collusion for the purpose of boosting the
pagerank of a node in an interlinked environment. We investigate the optimal
attack pattern for a group of nodes (attackers) attempting to improve the
ranking of a specific node (the victim). We consider attacks where the
attackers can only manipulate their own outgoing links. We show that the
optimal attacks in this scenario are uncoordinated: the attackers link
directly to the victim and to no one else; the attacking nodes do not link to
each other. We
also discuss optimal attack patterns for a group that wants to hide itself by
not pointing directly to the victim. In these disguised attacks, the attackers
link to nodes $l$ hops away from the victim. We show that an optimal disguised
attack exists and how it can be computed. The optimal disguised attack also
allows us to find optimal link farm configurations. A link farm can be
considered a special case of our approach: the target page of the link farm is
the victim and the other nodes in the link farm are the attackers for the
purpose of improving the rank of the victim. The target page can however
control its own outgoing links for the purpose of improving its own rank, which
can be modeled as an optimal disguised attack of 1-hop on itself. Our results
are unique in the literature as we show optimality not only in the pagerank
score, but also in the rank based on the pagerank score. We further validate
our results with experiments on a variety of random graph models.
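The uncoordinated-attack claim can be checked numerically with a small power-iteration PageRank; the 4-node graph, damping factor 0.85, and dangling-node handling below are illustrative choices of ours, not the paper's experimental setup:

```python
def pagerank(out_links, n, d=0.85, iters=200):
    """Plain power-iteration PageRank on n nodes.

    out_links: {node: [targets]}; nodes absent from the dict are dangling
    and spread their mass uniformly over all nodes.
    """
    pr = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - d) / n] * n
        for u in range(n):
            targets = out_links.get(u, [])
            if targets:
                share = d * pr[u] / len(targets)
                for v in targets:
                    nxt[v] += share
            else:  # dangling node
                for v in range(n):
                    nxt[v] += d * pr[u] / n
        pr = nxt
    return pr

# Victim = node 0 (no out-links); attackers are nodes 1-3.
uncoordinated = {1: [0], 2: [0], 3: [0]}           # attackers link only to the victim
coordinated = {1: [0, 2], 2: [0, 3], 3: [0, 1]}    # attackers also link to each other
```

Running both configurations shows the victim's PageRank is strictly higher under the uncoordinated attack, matching the optimality result described above.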
|
1103.1365
|
Design of Strict Control-Lyapunov Functions for Quantum Systems with QND
Measurements
|
math.OC cs.SY quant-ph
|
We consider discrete-time quantum systems subject to Quantum Non-Demolition
(QND) measurements and controlled by an adjustable unitary evolution between
two successive QND measures. In open-loop, such QND measurements provide a
non-deterministic preparation tool exploiting the back-action of the
measurement on the quantum state. We propose here a systematic method based on
elementary graph theory and inversion of Laplacian matrices to construct strict
control-Lyapunov functions. This yields an appropriate feedback law that
globally stabilizes the system towards a chosen target state among the
open-loop stable ones, and that makes this preparation deterministic in closed
loop. We illustrate such feedback laws through simulations
corresponding to an experimental setup with QND photon counting.
|
1103.1367
|
Efficient Batch Query Answering Under Differential Privacy
|
cs.DB
|
Differential privacy is a rigorous privacy condition achieved by randomizing
query answers. This paper develops efficient algorithms for answering multiple
queries under differential privacy with low error. We pursue this goal by
advancing a recent approach called the matrix mechanism, which generalizes
standard differentially private mechanisms. This new mechanism works by first
answering a different set of queries (a strategy) and then inferring the
answers to the desired workload of queries. Although a few strategies are known
to work well on specific workloads, finding the strategy which minimizes error
on an arbitrary workload is intractable. We prove a new lower bound on the
optimal error of this mechanism, and we propose an efficient algorithm that
approaches this bound for a wide range of workloads.
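The answer-a-strategy-then-infer idea can be sketched for a prefix-sum workload, using the individual counts as the strategy; this is a minimal fixed-strategy sketch of ours, whereas the matrix mechanism the abstract describes optimizes the strategy itself:

```python
import math
import random


def laplace_noise(scale):
    """Inverse-CDF sample from the Laplace(0, scale) distribution."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def private_prefix_sums(counts, eps):
    """Answer the prefix-sum workload under eps-differential privacy.

    Strategy: perturb each individual count (L1 sensitivity 1 per count),
    then infer the workload answers by summing the noisy strategy answers.
    """
    noisy = [c + laplace_noise(1.0 / eps) for c in counts]
    out, running = [], 0.0
    for v in noisy:
        running += v
        out.append(running)
    return out
```

Answering the prefix sums directly would add independent noise to each of them; inferring them from one noisy copy of the counts keeps the answers mutually consistent, which is the consistency benefit the mechanism exploits.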
|
1103.1396
|
Scale free networks by preferential depletion
|
physics.soc-ph cond-mat.stat-mech cs.SI physics.bio-ph physics.comp-ph
|
We show that not only preferential attachment but also preferential depletion
leads to scale-free networks. The resulting degree distribution exponent is
typically less than two (5/3), as opposed to the growth models studied
previously, where the exponents are larger. Our approach applies in
particular to biological networks, where in fact we find interesting agreement
with experimental measurements. We investigate the most important properties
characterizing these networks, such as the cluster size distribution, the
average shortest path and the clustering coefficient.
|
1103.1401
|
Opportunistic Cooperation in Cognitive Femtocell Networks
|
math.OC cs.SY
|
We investigate opportunistic cooperation between unlicensed secondary users
and legacy primary users in a cognitive radio network. Specifically, we
consider a model of a cognitive network where a secondary user can
cooperatively transmit with the primary user in order to improve the latter's
effective transmission rate. In return, the secondary user gets more
opportunities for transmitting its own data when the primary user is idle. This
kind of interaction between the primary and secondary users is different from
the traditional dynamic spectrum access model in which the secondary users try
to avoid interfering with the primary users while seeking transmission
opportunities on vacant primary channels. In our model, the secondary users
need to balance the desire to cooperate more (to create more transmission
opportunities) with the need for maintaining sufficient energy levels for their
own transmissions. Such a model is applicable in the emerging area of cognitive
femtocell networks. We formulate the problem of maximizing the secondary user
throughput subject to a time average power constraint under these settings.
This is a constrained Markov Decision Problem and conventional solution
techniques based on dynamic programming require either extensive knowledge of
the system dynamics or learning based approaches that suffer from large
convergence times. However, using the technique of Lyapunov optimization, we
design a novel greedy and online control algorithm that overcomes these
challenges and is provably optimal.
|
1103.1403
|
Study of Throughput and Delay in Finite-Buffer Line Networks
|
cs.IT math.IT
|
In this work, we study the effects of finite buffers on the throughput and
delay of line networks with erasure links. We identify the calculation of
performance parameters such as throughput and delay to be equivalent to
determining the stationary distribution of an irreducible Markov chain. We note
that the number of states in the Markov chain grows exponentially in the size
of the buffers with the exponent scaling linearly with the number of hops in a
line network. We then propose a simplified iterative scheme to approximately
identify the steady-state distribution of the chain by decoupling the chain to
smaller chains. The approximate solution is then used to understand the effect
of buffer sizes on the throughput and the distribution of packet delay.
Further, we classify nodes based on congestion, which yields an intelligent
scheme for memory allocation within the proposed framework. Finally, by
simulation we confirm
that our framework yields an accurate prediction of the variation of the
throughput and delay distribution.
|
1103.1417
|
Localization from Incomplete Noisy Distance Measurements
|
math.ST cs.LG cs.SY math.OC math.PR stat.TH
|
We consider the problem of positioning a cloud of points in the Euclidean
space $\mathbb{R}^d$, using noisy measurements of a subset of pairwise
distances. This task has applications in various areas, such as sensor network
localization and reconstruction of protein conformations from NMR measurements.
Also, it is closely related to dimensionality reduction problems and manifold
learning, where the goal is to learn the underlying global geometry of a data
set using local (or partial) metric information. Here we propose a
reconstruction algorithm based on semidefinite programming. For a random
geometric graph model and uniformly bounded noise, we provide a precise
characterization of the algorithm's performance: In the noiseless case, we find
a radius $r_0$ beyond which the algorithm reconstructs the exact positions (up
to rigid transformations). In the presence of noise, we obtain upper and lower
bounds on the reconstruction error that match up to a factor that depends only
on the dimension $d$, and the average degree of the nodes in the graph.
|
1103.1424
|
On the Average Complexity of Sphere Decoding in Lattice Space-Time Coded
MIMO Channel
|
cs.IT math.IT
|
The exact average complexity analysis of the basic sphere decoder for general
space-time codes applied to multiple-input multiple-output (MIMO) wireless
channel is known to be difficult. In this work, we shed light on the
computational complexity of sphere decoding for the quasi-static, LAttice
Space-Time (LAST) coded MIMO channel. Specifically, we derive an upper bound on
the tail distribution of the decoder's computational complexity. We show that,
when the computational complexity exceeds a certain limit, this upper bound
becomes dominated by the outage probability achieved by LAST coding and sphere
decoding schemes. We then calculate the minimum average computational
complexity that is required by the decoder to achieve near optimal performance
in terms of the system parameters. Our results indicate that there exists a
cut-off rate (multiplexing gain) for which the average complexity remains
bounded.
|
1103.1432
|
Vectorial Feedback with Carry Registers and Memory requirements
|
cs.IT cs.CR math.IT
|
In \cite{marjane2010}, we have introduced vectorial conception of FCSR's in
Fibonacci mode. This conception allows us to easily analyze FCSR's over binary
finite fields $\mathbb{F}_{2^{n}}$ for $n\geq 2$. In \cite{allailou2010}, we
describe and study the corresponding Galois mode and use it to design a new
stream cipher. In this paper, we introduce the Ring mode for vectorial FCSR,
explain the analysis of such Feedback registers and illustrate with a simple
example.
|
1103.1439
|
Generating Functional Analysis for Iterative CDMA Multiuser Detectors
|
cs.IT cond-mat.dis-nn math.IT
|
We investigate the detection dynamics of a soft parallel interference
canceller (soft-PIC), which includes a hard-PIC as a special case, for
code-division multiple-access (CDMA) multiuser detection, applied to a randomly
spread, fully synchronous base-band uncoded CDMA channel model with additive
white Gaussian noise under perfect power control in the large-system limit. We
analyze the detection dynamics of some iterative detectors, namely soft-PIC,
the Onsager-reaction-cancelling parallel interference canceller (ORC-PIC) and
the belief-propagation-based detector (BP-based detector), by the generating
functional analysis (GFA). The GFA allows us to study the asymptotic behavior
of the dynamics in the infinitely large system without assuming the
independence of messages. We study the detection dynamics and the stationary
estimates of an iterative algorithm.
We also show the decoupling principle in iterative multiuser detection
algorithms in the large-system limit. For a generic iterative multiuser
detection algorithm with binary input, it is shown that the multiuser channel
is equivalent to a bank of independent single-user additive non-Gaussian
channels, whose signal-to-noise ratio degrades due to both the multiple-access
interference and the Onsager reaction, at each stage of the algorithm. If an
algorithm cancels the Onsager reaction, the equivalent single-user channels
coincide with an additive white Gaussian noise channel. We also discuss ORC-PIC
and the BP-based detector.
|
1103.1448
|
Optimal Multi-Server Allocation to Parallel Queues With Independent
Random Queue-Server Connectivity
|
cs.IT cs.NI cs.SY math.IT math.OC
|
We investigate an optimal scheduling problem in a discrete-time system of L
parallel queues that are served by K identical, randomly connected servers.
Each queue may be connected to a subset of the K servers during any given time
slot. This model has been widely used in studies of emerging 3G/4G wireless
systems. We introduce the class of Most Balancing (MB) policies and provide
their mathematical characterization. We prove that MB policies are optimal; we
define optimality as minimization, in the stochastic ordering sense, of a range of
cost functions of the queue lengths, including the process of total number of
packets in the system. We use stochastic coupling arguments for our proof. We
introduce the Least Connected Server First/Longest Connected Queue (LCSF/LCQ)
policy as an easy-to-implement approximation of MB policies. We conduct a
simulation study to compare the performance of several policies. The simulation
results show that: (a) in all cases, LCSF/LCQ approximations to the MB policies
outperform the other policies, (b) randomized policies perform fairly close to
the optimal one, and, (c) the performance advantage of the optimal policy over
the other simulated policies increases as the channel connectivity probability
decreases and as the number of servers in the system increases.
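The LCSF/LCQ heuristic described above can be sketched directly from its name: least-connected servers choose first, and each serves its longest currently connected queue. The one-packet-per-server-per-slot service model below is our simplifying assumption:

```python
def lcsf_lcq(queues, conn):
    """One scheduling slot of Least Connected Server First / Longest
    Connected Queue.

    queues: list of queue lengths.
    conn[k]: set of queue indices server k is connected to this slot.
    Returns the (server -> queue) assignment and the updated queue lengths,
    assuming each server serves one packet per slot.
    """
    q = list(queues)
    assignment = {}
    # Servers with the fewest connected queues pick first (LCSF)...
    for k in sorted(range(len(conn)), key=lambda k: len(conn[k])):
        available = [i for i in conn[k] if q[i] > 0]
        if available:
            # ...and each serves its longest connected queue (LCQ).
            i = max(available, key=lambda i: q[i])
            assignment[k] = i
            q[i] -= 1
    return assignment, q
```

With queues (3, 1, 0), a server connected only to queue 0 chooses first and drains it by one; the fully connected server then still prefers queue 0, the longest remaining.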
|
1103.1453
|
Cooperative Retransmissions Through Collisions
|
cs.IT cs.NI math.IT
|
Interference in wireless networks is one of the key capacity-limiting
factors. Recently developed interference-embracing techniques show promising
performance in turning collisions into useful transmissions. However, these
techniques are hard to apply in practical applications
due to their strict requirements. In this paper, we consider utilising the
interference-embracing techniques in a common scenario of two interfering
sender-receiver pairs. By employing opportunistic listening and analog network
coding (ANC), we show that compared to traditional ARQ retransmission, a higher
retransmission throughput can be achieved by allowing two interfering senders
to cooperatively retransmit selected lost packets at the same time. This
simultaneous retransmission is facilitated by a simple handshaking procedure
without introducing additional overhead. Simulation results demonstrate the
superior performance of the proposed cooperative retransmission.
|
1103.1474
|
Evaluation of a Novel Approach for Automatic Volume Determination of
Glioblastomas Based on Several Manual Expert Segmentations
|
cs.CV physics.med-ph q-bio.TO
|
The glioblastoma multiforme is the most common malignant primary brain tumor
and one of the most malignant human neoplasms. During the course of
disease, the evaluation of tumor volume is an essential part of the clinical
follow-up. However, manual segmentation for acquisition of tumor volume is a
time-consuming process. In this paper, a new approach for the automatic
segmentation and volume determination of glioblastomas (glioblastoma
multiforme) is presented and evaluated. The approach uses a user-defined seed
point inside the glioma to set up a directed 3D graph. The nodes of the graph
are obtained by sampling along rays that are sent through the surface points of
a polyhedron. After the graph has been constructed, the minimal s-t cut is
calculated to separate the glioblastoma from the background. For evaluation, 12
Magnetic Resonance Imaging (MRI) data sets were manually segmented slice by
slice, by neurosurgeons with several years of experience in the resection of
gliomas. Afterwards, the manual segmentations were compared with the results of
the presented approach via the Dice Similarity Coefficient (DSC). For a better
assessment of the DSC results, the manual segmentations of the experts were
also compared with each other and evaluated via the DSC. In addition, the 12
data sets were segmented once again by one of the neurosurgeons after a period
of two weeks, to also measure the intra-physician deviation of the DSC.
|
1103.1475
|
A Semi-Automatic Graph-Based Approach for Determining the Boundary of
Eloquent Fiber Bundles in the Human Brain
|
cs.CV
|
Diffusion Tensor Imaging (DTI) allows estimating the position, orientation
and dimension of bundles of nerve pathways. This non-invasive imaging technique
takes advantage of the diffusion of water molecules and determines the
diffusion coefficients for every voxel of the data set. The identification of
the diffusion coefficients and the derivation of information about fiber
bundles is of major interest for planning and performing neurosurgical
interventions. To minimize the risk of neural deficits during brain surgery as
tumor resection (e.g. glioma), the segmentation and integration of the results
in the operating room is of prime importance. In this contribution, a robust
and efficient graph-based approach for segmentating tubular fiber bundles in
the human brain is presented. To define a cost function, the fractional
anisotropy (FA) is used, derived from the DTI data, but this value may differ
from patient to patient. Besides manually definining seed regions describing
the structure of interest, additionally a manual definition of the cost
function by the user is necessary. To improve the approach the contribution
introduces a solution for automatically determining the cost function by using
different 3D masks for each individual data set.
|
1103.1516
|
Climbing depth-bounded adjacent discrepancy search for solving hybrid
flow shop scheduling problems with multiprocessor tasks
|
cs.RO cs.AI
|
This paper considers multiprocessor task scheduling in a multistage hybrid
flow-shop environment. The problem even in its simplest form is NP-hard in the
strong sense. The great interest in this problem, beyond its theoretical
complexity, is motivated by the needs of various manufacturing and computing
systems. We propose a new approach based on limited discrepancy search to
solve the problem. Our method is tested against a proposed lower bound as
well as the best-known solutions in the literature. Computational
results show that the developed approach is efficient in particular for
large-size problems.
|
1103.1529
|
Algorithmic tests and randomness with respect to a class of measures
|
math.LO cs.IT math.IT math.PR
|
The paper considers quantitative versions of different randomness notions: an
algorithmic test measures the amount of non-randomness (and is infinite for
non-random sequences). We start with computable measures on Cantor space (and
Martin-Lof randomness), then consider uniform randomness (test is a function of
a sequence and a measure, not necessarily computable) and arbitrary
constructive metric spaces. We also consider tests for classes of measures, in
particular Bernoulli measures on Cantor space, and show how they are related to
uniform tests and the original Martin-Lof definition. We show that Hippocratic
(blind, oracle-free) randomness is equivalent to uniform randomness for
measures in an effectively orthogonal effectively compact class. We also
consider the notions of sparse set and on-line randomness and show how they can
be expressed in our framework.
|
1103.1530
|
A Discrete Evolutionary Model for Chess Players' Ratings
|
physics.soc-ph cs.AI
|
The Elo system for rating chess players, also used in other games and sports,
was adopted by the World Chess Federation over four decades ago. Although not
without controversy, it is accepted as generally reliable and provides a method
for assessing players' strengths and ranking them in official tournaments.
It is generally accepted that the distribution of players' rating data is
approximately normal but, to date, no stochastic model of how the distribution
might have arisen has been proposed. We propose such an evolutionary stochastic
model, which models the arrival of players into the rating pool, the games they
play against each other, and how the results of these games affect their
ratings. Using a continuous approximation to the discrete model, we derive the
distribution for players' ratings at time $t$ as a normal distribution, where
the variance increases in time as a logarithmic function of $t$. We validate
the model using published rating data from 2007 to 2010, showing that the
parameters obtained from the data can be recovered through simulations of the
stochastic model.
The distribution of players' ratings is only approximately normal and has
been shown to have a small negative skew. We show how to modify our
evolutionary stochastic model to take this skewness into account, and we
validate the modified model using the published official rating data.
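The Elo update that drives the games in such a model can be sketched as follows; the K-factor of 10 and the logistic scale of 400 are the conventional choices, used here for illustration:

```python
def elo_update(ra, rb, score_a, k=10.0):
    """One Elo rating update after a game between players A and B.

    score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.
    The expected score comes from the standard logistic curve (scale 400).
    """
    ea = 1.0 / (1.0 + 10.0 ** ((rb - ra) / 400.0))
    ra_new = ra + k * (score_a - ea)
    rb_new = rb + k * ((1.0 - score_a) - (1.0 - ea))
    return ra_new, rb_new
```

Two equally rated players have expected score 0.5 each, so a win moves the winner up by k/2 and the loser down by the same amount; iterating such updates over an arriving population is what the evolutionary model formalizes.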
|
1103.1542
|
The tractability of CSP classes defined by forbidden patterns
|
cs.AI cs.CC cs.DS
|
The constraint satisfaction problem (CSP) is a general problem central to
computer science and artificial intelligence. Although the CSP is NP-hard in
general, considerable effort has been spent on identifying tractable
subclasses. The main two approaches consider structural properties
(restrictions on the hypergraph of constraint scopes) and relational properties
(restrictions on the language of constraint relations). Recently, some authors
have considered hybrid properties that restrict the constraint hypergraph and
the relations simultaneously.
Our key contribution is the novel concept of a CSP pattern and classes of
problems defined by forbidden patterns (which can be viewed as forbidding
generic subproblems). We describe the theoretical framework which can be used
to reason about classes of problems defined by forbidden patterns. We show that
this framework generalises relational properties and allows us to capture known
hybrid tractable classes.
Although we are not close to obtaining a dichotomy concerning the
tractability of general forbidden patterns, we are able to make some progress
in a special case: classes of problems that arise when we can only forbid
binary negative patterns (generic subproblems in which only inconsistent tuples
are specified). In this case we are able to characterise very large classes of
tractable and NP-hard forbidden patterns. This leaves the complexity of just
one case unresolved and we conjecture that this last case is tractable.
|
1103.1544
|
Cost Sharing in Social Community Networks
|
cs.NI cs.SI
|
Wireless social community networks (WSCNs) are an emerging technology that
operates in the unlicensed spectrum and has been created as an alternative to
cellular wireless networks for providing low-cost, high-speed wireless data
access in urban areas. WSCNs are starting to gain attention amongst civilian
Internet users. By using \emph{special} WiFi
routers that are provided by a social community network provider (SCNP), users
can effectively share their connection with the neighborhood in return for some
monthly monetary benefits. However, deployment maps of existing WSCNs reflect
their slow progress in capturing the WiFi router market. In this paper, we look
at a router design and cost sharing problem in WSCNs to improve deployment. We
devise a simple-to-implement, \emph{successful} (a mechanism is successful if
it achieves its intended purpose; in this work, a successful mechanism would
help install routers in a locality), \emph{budget-balanced}, \emph{ex-post
efficient}, and \emph{individually rational} (a mechanism is individually
rational if the benefit each agent obtains exceeds its cost) auction-based
mechanism that generates the \emph{optimal} number of features a router should
have and allocates costs to residential users in \emph{proportion} to the
feature benefits they receive. Our problem is
important to a new-entrant SCNP when it wants to design its multi-feature
routers with the goal to popularize them and increase their deployment in a
residential locality. Our proposed mechanism accounts for heterogeneous user
preferences towards different router features and comes up with the optimal
\emph{(feature-set, user costs)} router blueprint that satisfies each user in a
locality, in turn motivating them to buy routers and thereby improve
deployment.
|
1103.1559
|
Minimum Pseudoweight Analysis of 3-Dimensional Turbo Codes
|
cs.IT math.IT
|
In this work, we consider pseudocodewords of (relaxed) linear programming
(LP) decoding of 3-dimensional turbo codes (3D-TCs). We present a relaxed LP
decoder for 3D-TCs, adapting the relaxed LP decoder for conventional turbo
codes proposed by Feldman in his thesis. We show that the 3D-TC polytope is
proper and $C$-symmetric, and make a connection to finite graph covers of the
3D-TC factor graph. This connection is used to show that the support set of any
pseudocodeword is a stopping set of iterative decoding of 3D-TCs using maximum
a posteriori constituent decoders on the binary erasure channel. Furthermore,
we compute ensemble-average pseudoweight enumerators of 3D-TCs and perform a
finite-length minimum pseudoweight analysis for small cover degrees. Also, an
explicit description of the fundamental cone of the 3D-TC polytope is given.
Finally, we present an extensive numerical study of small-to-medium block
length 3D-TCs, which shows that 1) typically (i.e., in most cases) when the
minimum distance $d_{\rm min}$ and/or the stopping distance $h_{\rm min}$ is
high, the minimum pseudoweight (on the additive white Gaussian noise channel)
is strictly smaller than both $d_{\rm min}$ and $h_{\rm min}$, and 2)
the minimum pseudoweight grows with the block length, at least for
small-to-medium block lengths.
|
1103.1587
|
All Roads Lead To Rome
|
cs.CV
|
This short article presents a class of projection-based solution algorithms
to the problem considered in the pioneering work on compressed sensing -
perfect reconstruction of a phantom image from 22 radial lines in the frequency
domain. Under the framework of projection-based image reconstruction, we will
show experimentally that several old and new tools of nonlinear filtering
(including Perona-Malik diffusion, nonlinear diffusion, Translation-Invariant
thresholding and SA-DCT thresholding) all lead to perfect reconstruction of the
phantom image.
|
1103.1598
|
Mean Interference in Hard-Core Wireless Networks
|
cs.IT cs.NI math.IT math.PR math.ST stat.TH
|
Mat\'ern hard core processes of types I and II are the point processes of
choice to model concurrent transmitters in CSMA networks. We determine the mean
interference observed at a node of the process and compare it with the mean
interference in a Poisson point process of the same density. It turns out that
despite the similarity of the two models, they behave rather differently. For
type I, the excess interference (relative to the Poisson case) increases
exponentially in the hard-core distance, while for type II, the gap never
exceeds 1 dB.
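As an illustrative sketch (our own, not code from the paper; the intensity, hard-core distance, and window size below are arbitrary example parameters), Matérn type II thinning of a Poisson point process can be simulated as follows:

```python
import math
import random

def sample_poisson(mu, rng):
    # Knuth's multiplicative method; adequate for the small means used here.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def matern_type_ii(lam, h, size=1.0, rng=None):
    """Matern type II hard-core thinning on a size x size square.

    Each Poisson point receives an independent uniform mark; a point is
    retained iff no other point within the hard-core distance h carries a
    smaller mark. Edge effects are ignored in this sketch.
    """
    rng = rng or random.Random()
    n = sample_poisson(lam * size * size, rng)
    pts = [(rng.random() * size, rng.random() * size, rng.random())
           for _ in range(n)]
    kept = []
    for i, (x, y, m) in enumerate(pts):
        survives = all(
            (x - u) ** 2 + (y - v) ** 2 >= h ** 2 or w >= m
            for j, (u, v, w) in enumerate(pts) if j != i
        )
        if survives:
            kept.append((x, y))
    return kept
```

By construction, any two retained points are at least the hard-core distance apart, which is the property that distinguishes these processes from a plain Poisson process of the same density.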
|
1103.1604
|
On Minimal Constraint Networks
|
cs.AI cs.CC cs.DB
|
In a minimal binary constraint network, every tuple of a constraint relation
can be extended to a solution. The tractability or intractability of computing
a solution to such a minimal network was a long standing open question. Dechter
conjectured this computation problem to be NP-hard. We prove this conjecture.
We also prove a conjecture by Dechter and Pearl stating that for $k\geq 2$ it is
NP-hard to decide whether a single constraint can be decomposed into an
equivalent $k$-ary constraint network. We show that this holds even in the case
of bi-valued constraints where $k\geq 3$, which proves another conjecture of Dechter
and Pearl. Finally, we establish the tractability frontier for this problem
with respect to the domain cardinality and the parameter k.
|
1103.1625
|
A Gentle Introduction to the Kernel Distance
|
cs.CG cs.LG
|
This document reviews the definition of the kernel distance, providing a
gentle introduction tailored to a reader with background in theoretical
computer science, but limited exposure to technology more common to machine
learning, functional analysis and geometric measure theory. The key aspect of
the kernel distance developed here is its interpretation as an L_2 distance
between probability measures or various shapes (e.g. point sets, curves,
surfaces) embedded in a vector space (specifically an RKHS). This structure
enables several elegant and efficient solutions to data analysis problems. We
conclude with a glimpse into the mathematical underpinnings of this measure,
highlighting its recent independent evolution in two separate fields.
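As a minimal illustration (our own sketch, using an assumed Gaussian kernel with a hand-picked bandwidth), the kernel distance between two point sets reduces to the RKHS identity $D^2(P,Q) = \kappa(P,P) + \kappa(Q,Q) - 2\kappa(P,Q)$, where $\kappa$ sums pairwise kernel similarities:

```python
import math

def gaussian_kernel(p, q, sigma=1.0):
    # Positive-definite similarity between two points in R^d.
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def kernel_distance(P, Q, sigma=1.0):
    """L_2 (RKHS) distance between the point sets P and Q."""
    def kappa(A, B):
        return sum(gaussian_kernel(a, b, sigma) for a in A for b in B)
    d2 = kappa(P, P) + kappa(Q, Q) - 2.0 * kappa(P, Q)
    return math.sqrt(max(d2, 0.0))  # clamp tiny negative rounding error
```

The same identity extends to weighted point sets, curves, and surfaces by replacing the sums with integrals against the corresponding measures.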
|
1103.1665
|
The Role of Singular Control in Frictionless Atom Cooling in a Harmonic
Trapping Potential
|
math.OC cond-mat.quant-gas cs.SY quant-ph
|
In this article we study the frictionless cooling of atoms trapped in a
harmonic potential, while minimizing the transient energy of the system. We
show that in the case of unbounded control, this goal is achieved by a singular
control, which is also the time-minimal solution for a "dual" problem, where
the energy is held fixed. In addition, we examine briefly how the solution is
modified when there are bounds on the control. The results presented here have
a broad range of applications, from the cooling of a Bose-Einstein condensate
confined in a harmonic trap to adiabatic quantum computing and finite time
thermodynamic processes.
|
1103.1672
|
The Generalized Degrees of Freedom of the MIMO Interference Channel
|
cs.IT math.IT
|
The generalized degrees of freedom (GDoF) region of the MIMO Gaussian
interference channel is obtained for the general case with an arbitrary number
of antennas at each node and where the SNR and interference-to-noise ratios
(INRs) vary with arbitrary exponents to a nominal SNR. The GDoF region reveals
various insights through the joint dependence of optimal interference
management techniques at high SNR on the SNR exponents that determine the
relative strengths of direct-link SNRs and cross-link INRs and the numbers of
antennas at the four terminals. For instance, it permits an in-depth look at
the issue of rate-splitting and partial decoding at high SNR and it reveals
that, unlike in the SISO case, treating interference as noise is not always
GDoF optimal, even in the very weak interference regime. Moreover, while the
DoF-optimal strategy that relies just on transmit/receive zero-forcing
beamforming and time-sharing is not GDoF optimal (and thus has an unbounded gap
to capacity) the precise characterization of the very strong interference
regime, where single-user DoF performance can be achieved simultaneously for
both users, depends on the relative numbers of antennas at the four terminals
and thus deviates from what it is in the SISO case. For asymmetric numbers of
antennas at the four nodes the shape of the symmetric GDoF curve can be a
"distorted W" curve to the extent that for certain MIMO ICs it is a "V" curve.
|
1103.1680
|
Epidemic thresholds in directed complex networks
|
physics.soc-ph cs.SI
|
The spread of a disease, a computer virus or information is discussed in a
directed complex network. We are concerned with a steady state of the spread
for the SIR and SIS dynamic models. In a scale-free directed network it is
shown that the threshold of its outbreak in both models approaches zero under a
high correlation between nodal indegrees and outdegrees.
|
1103.1689
|
Information Theoretic Limits on Learning Stochastic Differential
Equations
|
cs.IT cs.LG math.IT math.ST q-fin.ST stat.ML stat.TH
|
Consider the problem of learning the drift coefficient of a stochastic
differential equation from a sample path. In this paper, we assume that the
drift is parametrized by a high dimensional vector. We address the question of
how long the system needs to be observed in order to learn this vector of
parameters. We prove a general lower bound on this time complexity by using a
characterization of mutual information as time integral of conditional
variance, due to Kadota, Zakai, and Ziv. This general lower bound is applied to
specific classes of linear and non-linear stochastic differential equations. In
the linear case, the problem under consideration is the one of learning a
matrix of interaction coefficients. We evaluate our lower bound for ensembles
of sparse and dense random matrices. The resulting estimates match the
qualitative behavior of upper bounds achieved by computationally efficient
procedures.
|
1103.1711
|
Planning Graph Heuristics for Belief Space Search
|
cs.AI
|
Some recent works in conditional planning have proposed reachability
heuristics to improve planner scalability, but many lack a formal description
of the properties of their distance estimates. To place previous work in
context and extend work on heuristics for conditional planning, we provide a
formal basis for distance estimates between belief states. We give a definition
for the distance between belief states that relies on aggregating underlying
state distance measures. We give several techniques to aggregate state
distances and their associated properties. Many existing heuristics exhibit a
subset of the properties, but in order to provide a standardized comparison we
present several generalizations of planning graph heuristics that are used in a
single planner. We complement our belief state distance estimate framework by
also investigating efficient planning graph data structures that incorporate
BDDs to compute the most effective heuristics.
We developed two planners to serve as test-beds for our investigation. The
first, CAltAlt, is a conformant regression planner that uses A* search. The
second, POND, is a conditional progression planner that uses AO* search. We
show the relative effectiveness of our heuristic techniques within these
planners. We also compare the performance of these planners with several state
of the art approaches in conditional planning.
|
1103.1724
|
Approximate stabilization of an infinite dimensional quantum stochastic
system
|
math.OC cs.SY
|
We propose a feedback scheme for preparation of photon number states in a
microwave cavity. Quantum Non-Demolition (QND) measurements of the cavity field
and a control signal consisting of a microwave pulse injected into the cavity
are used to drive the system towards a desired target photon number state.
Unlike previous work, we do not use the Galerkin approximation of truncating
the infinite-dimensional system Hilbert space into a finite-dimensional
subspace. We use an (unbounded) strict Lyapunov function and prove that a
feedback scheme that minimizes the expectation value of the Lyapunov function
at each time step stabilizes the system at the desired photon number state with
(a pre-specified) arbitrarily high probability. Simulations of this scheme
demonstrate that the controller's performance is improved by reducing "leakage"
to high photon numbers.
|
1103.1732
|
Semi-Global Approximate stabilization of an infinite dimensional quantum
stochastic system
|
math.OC cs.SY math-ph math.FA math.MP
|
In this paper we study the semi-global (approximate) state feedback
stabilization of an infinite dimensional quantum stochastic system towards a
target state. A discrete-time Markov chain on an infinite-dimensional Hilbert
space is used to model the dynamics of a quantum optical cavity. We can choose
an (unbounded) strict Lyapunov function that is minimized at each time-step in
order to prove (weak-$\ast$) convergence of probability measures to a final
state that is concentrated on the target state with (a pre-specified)
probability that may be made arbitrarily close to 1. The feedback parameters
and the Lyapunov function are chosen so that the stochastic flow that describes
the Markov process may be shown to be tight (concentrated on a compact set with
probability arbitrarily close to 1). We then use Prohorov's theorem and
properties of the Lyapunov function to prove the desired convergence result.
|
1103.1741
|
Mitigation of Malicious Attacks on Networks
|
physics.soc-ph cs.SI physics.comp-ph
|
Terrorist attacks on transportation networks have traumatized modern
societies. With a single blast, it has become possible to paralyze airline
traffic, electric power supply, ground transportation or Internet
communication. How and at which cost can one restructure the network such that
it will become more robust against a malicious attack? We introduce a unique
measure for robustness and use it to devise a method to mitigate this risk
economically and efficiently. We demonstrate its efficiency on the European
electricity system and on the Internet as well as on complex networks models.
We show that with small changes in the network structure (low cost) the
robustness of diverse networks can be improved dramatically while their
functionality remains unchanged. Our results are useful not only for
significantly improving, at low cost, the robustness of existing
infrastructures but also for designing economically robust network systems.
|
1103.1742
|
Generic Approach for Hierarchical Modulation Performance Analysis:
Application to DVB-SH and DVB-S2
|
cs.IT cs.PF math.IT
|
Broadcasting systems have to deal with channel variability in order to offer
the best rate to the users. Hierarchical modulation is a practical solution for
providing different rates to the receivers as a function of the channel
quality. Unfortunately, the performance evaluation of such modulations requires
time-consuming simulations. We propose in this paper a novel approach based on
the channel capacity to avoid these simulations. The method allows one to study
the performance of hierarchical as well as classical modulations combined with
error-correcting codes. We also compare hierarchical modulation with a
time-sharing strategy in terms of achievable rates and unavailability. Our work
is applied to the DVB-SH and DVB-S2 standards, which both consider
hierarchical modulation as an optional feature.
|
1103.1756
|
Limitation of network inhomogeneity in improving cooperation in
coevolutionary dynamics
|
physics.soc-ph cs.SI
|
Cooperative behavior is common in nature even if selfishness is sometimes
better for an individual. Empirical and theoretical studies have shown that the
invasion and expansion of cooperators are related to an inhomogeneous
connectivity distribution. Here we study the evolution of cooperation on an
adaptive network, in which an individual is able to avoid being exploited by
rewiring its link(s). Our results indicate that the broadening of connectivity
distribution is not always beneficial for cooperation. Compared with the
Poisson-like degree distribution, the exponential-like degree distribution is
detrimental to the occurrence of a higher level of cooperation in the
continuous snowdrift game (CSG).
|
1103.1773
|
Aorta Segmentation for Stent Simulation
|
cs.CV physics.med-ph
|
Simulation of arterial stenting procedures prior to intervention allows for
appropriate device selection as well as highlights potential complications. To
this end, we present a framework for facilitating virtual aortic stenting from
a contrast computer tomography (CT) scan. More specifically, we present a
method for both lumen and outer wall segmentation that may be employed in
determining both the appropriateness of intervention as well as the selection
and localization of the device. The more challenging recovery of the outer wall
is based on a novel minimal closure tracking algorithm. Our aortic segmentation
method has been validated on over 3000 multiplanar reformatting (MPR) planes
from 50 CT angiography data sets yielding a Dice Similarity Coefficient (DSC)
of 90.67%.
|
1103.1777
|
A Flexible Semi-Automatic Approach for Glioblastoma multiforme
Segmentation
|
cs.CE physics.med-ph q-bio.TO
|
Gliomas are the most common primary brain tumors, evolving from the cerebral
supportive cells. For clinical follow-up, the evaluation of the preoperative
tumor volume is essential. Volumetric assessment of tumor volume with manual
segmentation of its outlines is a time-consuming process that can be overcome
with the help of segmentation methods. In this paper, a flexible semi-automatic
approach for grade IV glioma segmentation is presented. The approach uses a
novel segmentation scheme for spherical objects that creates a directed 3D
graph. Thereafter, the minimal cost closed set on the graph is computed via a
polynomial time s-t cut, creating an optimal segmentation of the tumor. The
user can improve the results by specifying an arbitrary number of additional
seed points to support the algorithm with grey value information and
geometrical constraints. The presented method is tested on 12 magnetic
resonance imaging datasets. The ground truth of the tumor boundaries is
manually extracted by neurosurgeons. The segmented gliomas are compared with a
one-click method; the one-click method and the semi-automatic approach yield
average Dice Similarity Coefficients (DSC) of 77.72% and 83.91%, respectively.
|
1103.1778
|
Pituitary Adenoma Segmentation
|
cs.CE physics.med-ph q-bio.TO
|
Sellar tumors account for approximately 10-15% of all intracranial neoplasms.
The most common sellar lesion is the pituitary adenoma. Manual segmentation is
a time-consuming process that can be shortened by using adequate algorithms. In
this contribution, we present a segmentation method for pituitary adenoma. The
method is based on an algorithm we developed recently in previous work where
the novel segmentation scheme was successfully used for segmentation of
glioblastoma multiforme and provided an average Dice Similarity Coefficient
(DSC) of 77%. This scheme is used for automatic adenoma segmentation. In our
experimental evaluation, neurosurgeons with strong experiences in the treatment
of pituitary adenoma performed manual slice-by-slice segmentation of 10
magnetic resonance imaging (MRI) cases. Afterwards, the segmentations were
compared with the segmentation results of the proposed method via the DSC. The
average DSC for all data sets was 77.49% +/- 4.52%. Compared with a manual
segmentation that took, on the average, 3.91 +/- 0.54 minutes, the overall
segmentation in our implementation required less than 4 seconds.
|
1103.1784
|
On the Optimality of Myopic Sensing in Multi-channel Opportunistic
Access: the Case of Sensing Multiple Channels
|
cs.IT math.IT
|
Recent works have developed a simple and robust myopic sensing policy for
multi-channel opportunistic communication systems where a secondary user (SU)
can access one of N i.i.d. Markovian channels. The optimality of the myopic
sensing policy in maximizing the SU's cumulative reward is established under
certain conditions on the channel parameters. This paper studies the generic
case where the SU can sense more than one channel each time. By characterizing
the myopic sensing policy in this context, we establish analytically its
optimality for certain system settings when the SU is allowed to sense two
channels. In the
more generic case, we construct counterexamples to show that the myopic sensing
policy, despite its simple structure, is non-optimal.
|
1103.1791
|
Integrated information increases with fitness in the evolution of
animats
|
q-bio.PE cs.AI nlin.AO q-bio.NC
|
One of the hallmarks of biological organisms is their ability to integrate
disparate information sources to optimize their behavior in complex
environments. How this capability can be quantified and related to the
functional complexity of an organism remains a challenging problem, in
particular since organismal functional complexity is not well-defined. We
present here several candidate measures that quantify information and
integration, and study their dependence on fitness as an artificial agent
("animat") evolves over thousands of generations to solve a navigation task in
a simple, simulated environment. We compare the ability of these measures to
predict high fitness with more conventional information-theoretic processing
measures. As the animat adapts by increasing its "fit" to the world,
information integration and processing increase commensurately along the
evolutionary line of descent. We suggest that the correlation of fitness with
information integration and with processing measures implies that high fitness
requires both information processing as well as integration, but that
information integration may be a better measure when the task requires memory.
A correlation of measures of information integration (but also information
processing) and fitness strongly suggests that these measures reflect the
functional complexity of the animat, and that such measures can be used to
quantify functional complexity even in the absence of fitness data.
|
1103.1834
|
On the elusiveness of clusters
|
q-bio.PE cs.SI physics.soc-ph
|
Rooted phylogenetic networks are often used to represent conflicting
phylogenetic signals. Given a set of clusters, a network is said to represent
these clusters in the "softwired" sense if, for each cluster in the input set,
at least one tree embedded in the network contains that cluster. Motivated by
parsimony we might wish to construct such a network using as few reticulations
as possible, or minimizing the "level" of the network, i.e. the maximum number
of reticulations used in any "tangled" region of the network. Although these
are NP-hard problems, here we prove that, for every fixed k >= 0, it is
polynomial-time solvable to construct a phylogenetic network with level equal
to k representing a cluster set, or to determine that no such network exists.
However, this algorithm does not lend itself to a practical implementation. We
also prove that the comparatively efficient Cass algorithm correctly solves
this problem (and also minimizes the reticulation number) when input clusters
are obtained from two not necessarily binary gene trees on the same set of taxa
but does not always minimize level for general cluster sets. Finally, we
describe a new algorithm which generates in polynomial-time all binary
phylogenetic networks with exactly r reticulations representing a set of input
clusters (for every fixed r >= 0).
|
1103.1898
|
Recognizing Uncertainty in Speech
|
cs.CL
|
We address the problem of inferring a speaker's level of certainty based on
prosodic information in the speech signal, which has application in
speech-based dialogue systems. We show that using phrase-level prosodic
features centered around the phrases causing uncertainty, in addition to
utterance-level prosodic features, improves our model's level of certainty
classification. In addition, our models can be used to predict which phrase a
person is uncertain about. These results rely on a novel method for eliciting
utterances of varying levels of certainty that allows us to compare the utility
of contextually-based feature sets. We elicit level of certainty ratings from
both the speakers themselves and a panel of listeners, finding that there is
often a mismatch between speakers' internal states and their perceived states,
and highlighting the importance of this distinction.
|
1103.1918
|
Stochastic Optimal Control for Online Seller under Reputational
Mechanisms
|
math.OC cs.SY math.PR
|
In this work we propose and analyze a model which addresses the pulsing
behavior of sellers in an online auction (store). This pulsing behavior is
observed when sellers switch between advertising and processing states. We
assert that a seller switches her state in order to maximize her profit, and
further that this switch can be identified through the seller's reputation. We
show that for each seller there is an optimal reputation, i.e., the reputation
at which the seller should switch her state in order to maximize her total
profit. We design a stochastic behavioral model for an online seller, which
incorporates the dynamics of resource allocation and reputation. The design of
the model draws on a stochastic advertising model from (16), which was used
effectively in the stochastic optimal control of advertising (12). This model
of reputation is combined with the effect of online reputation on sales
price empirically verified in (9). We derive the Hamilton-Jacobi-Bellman (HJB)
differential equation, whose solution relates optimal wealth level to a
seller's reputation. We formulate both a full model, as well as a reduced model
with fewer parameters, both of which have the same qualitative description of
the optimal seller behavior. Coincidentally, the reduced model has a closed
form analytical solution that we construct.
|
1103.1923
|
Dengue disease, basic reproduction number and control
|
math.OC cs.SY physics.med-ph q-bio.OT
|
Dengue is one of the major international public health concerns. Although
progress is underway, developing a vaccine against the disease is challenging.
Thus, the main approach to fight the disease is vector control. A model for the
transmission of Dengue disease is presented. It consists of eight mutually
exclusive compartments representing the human and vector dynamics. It also
includes a control parameter (insecticide) in order to fight the mosquito. The
model presents three possible equilibria: two disease-free equilibria (DFE) and
another endemic equilibrium. It has been proved that a DFE is locally
asymptotically stable, whenever a certain epidemiological threshold, known as
the basic reproduction number, is less than one. We show that if we apply a
minimum level of insecticide, it is possible to maintain the basic reproduction
number below unity. A case study, using data of the outbreak that occurred in
2009 in Cape Verde, is presented.
|
1103.1943
|
Compressed Sensing over $\ell_p$-balls: Minimax Mean Square Error
|
cs.IT math.IT math.ST stat.TH
|
We consider the compressed sensing problem, where the object $x_0 \in \mathbb{R}^N$
is to be recovered from incomplete measurements $y = Ax_0 + z$; here the
sensing matrix $A$ is an $n \times N$ random matrix with iid Gaussian entries
and $n < N$. A popular method of sparsity-promoting reconstruction is
$\ell^1$-penalized least-squares reconstruction (aka LASSO, Basis Pursuit).
It is currently popular to consider the strict sparsity model, where the
object $x_0$ is nonzero in only a small fraction of entries. In this paper, we
instead consider the much more broadly applicable $\ell_p$-sparsity model,
where $x_0$ is sparse in the sense of having $\ell_p$ norm bounded by $\xi
\cdot N^{1/p}$ for some fixed $0 < p \leq 1$ and $\xi > 0$.
We study an asymptotic regime in which $n$ and $N$ both tend to infinity with
limiting ratio $n/N = \delta \in (0,1)$, both in the noisy ($z \neq 0$) and
noiseless ($z=0$) cases. Under weak assumptions on $x_0$, we are able to
precisely evaluate the worst-case asymptotic minimax mean-squared
reconstruction error (AMSE) for $\ell^1$ penalized least-squares: min over
penalization parameters, max over $\ell_p$-sparse objects $x_0$. We exhibit the
asymptotically least-favorable object (hardest sparse signal to recover) and
the maximin penalization.
Our explicit formulas unexpectedly involve quantities appearing classically
in statistical decision theory. Occurring in the present setting, they reflect
a deeper connection between penalized $\ell^1$ minimization and scalar soft
thresholding. This connection, which follows from earlier work of the authors
and collaborators on the AMP iterative thresholding algorithm, is carefully
explained.
Our approach also gives precise results under weak-$\ell_p$ ball coefficient
constraints, as we show here.
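The scalar soft-thresholding rule invoked above has the standard closed form (this sketch states the textbook definition, not code from the paper):

```python
def soft_threshold(x, lam):
    # eta(x; lam) = sign(x) * max(|x| - lam, 0): the proximal map of
    # lam * |.|, which shrinks x toward zero and zeroes out small values.
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0
```

Applied coordinate-wise, this map is the elementary nonlinearity inside AMP-style iterations for $\ell^1$-penalized least squares.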
|
1103.1952
|
Ray-Based and Graph-Based Methods for Fiber Bundle Boundary Estimation
|
cs.CV
|
Diffusion Tensor Imaging (DTI) provides the possibility of estimating the
location and course of eloquent structures in the human brain. Knowledge about
this is of high importance for preoperative planning of neurosurgical
interventions and for intraoperative guidance by neuronavigation in order to
minimize postoperative neurological deficits. Therefore, the segmentation of
these structures as closed, three-dimensional objects is necessary. In this
contribution, two methods for fiber bundle segmentation between two defined
regions are compared using software phantoms (abstract model and anatomical
phantom modeling the right corticospinal tract). One method uses evaluation
points from sampled rays as candidates for boundary points, the other method
sets up a directed and weighted (depending on a scalar measure) graph and
performs a min-cut for optimal segmentation results. Comparison is done by
using the Dice Similarity Coefficient (DSC), a measure for spatial overlap of
different segmentation results.
|
1103.1958
|
On the Root Finding Step in List Decoding of Folded Reed-Solomon Codes
|
cs.IT math.IT
|
The root finding step of the Guruswami-Rudra list decoding algorithm for
folded Reed-Solomon codes is considered. It is shown that a multivariate
generalization of the Roth-Ruckenstein algorithm can be used to implement it.
This leads to an improved bound on the size of the list produced by the
decoder, as well as enables one to relax the constraints on the parameters of
folded codes. Furthermore, the class of time-domain folded Reed-Solomon codes
is introduced, which can be efficiently list decoded with the Guruswami-Rudra
algorithm, and provides greater flexibility in parameter selection than the
classical (frequency-domain) folded codes.
|
1103.1991
|
Connectivity of Large Scale Networks: Emergence of Unique Unbounded
Component
|
cs.IT cs.NI math.IT
|
This paper studies networks where all nodes are distributed on a unit square
$A\triangleq[-1/2,1/2)^{2}$ following a Poisson distribution with known
density $\rho$, and a pair of nodes separated by a Euclidean distance $x$ are
directly connected with probability $g(\frac{x}{r_{\rho}})$, independently of
the event that any other pair of nodes are directly connected. Here
$g:[0,\infty)\rightarrow[0,1]$ satisfies the conditions of rotational
invariance, non-increasing monotonicity, integral boundedness and
$g(x)=o(\frac{1}{x^{2}\log^{2}x})$; further,
$r_{\rho}=\sqrt{\frac{\log\rho+b}{C\rho}}$, where $C=\int_{\mathbb{R}^{2}}g(\Vert
\boldsymbol{x}\Vert)d\boldsymbol{x}$ and $b$ is a constant. Denote the above
network by $\mathcal{G}(\mathcal{X}_{\rho},g_{r_{\rho}},A)$. We show
that as $\rho\rightarrow\infty$, asymptotically almost surely a) there is no
component in $\mathcal{G}(\mathcal{X}_{\rho},g_{r_{\rho}},A)$ of fixed and
finite order $k>1$; b) the number of components with an unbounded order is one.
Therefore as $\rho\rightarrow\infty$, the network asymptotically almost surely
contains a unique unbounded component and isolated nodes only; a sufficient
condition for $\mathcal{G}(\mathcal{X}_{\rho},g_{r_{\rho}},A)$ to be
asymptotically almost surely connected is that there is no isolated node in the
network.{\normalsize{}}The contribution of these results, together with results
in a companion paper on the asymptotic distribution of isolated nodes in
\textmd{\normalsize $\mathcal{G}(\mathcal{X}_{\rho},g_{r_{\rho}},A)$}, is to
expand recent results obtained for connectivity of random geometric graphs from
the unit disk model to the more generic and more practical random connection
model.
|
1103.1994
|
Connectivity of Large Scale Networks: Distribution of Isolated Nodes
|
cs.IT cs.NI math.IT
|
Connectivity is one of the most fundamental properties of wireless multi-hop
networks. A network is said to be connected if there is a path between any pair
of nodes. A convenient way to study the connectivity of a random network is to
investigate the condition under which it has no isolated node, which provides
a necessary condition for the network to be connected. Further, for a suitably
defined random network and connection model, the condition for the network to
have no isolated node and the condition for the network to be connected can
often be shown to asymptotically converge as the number of nodes approaches
infinity. Currently, analytical results on the distribution of the number of
isolated nodes exist only for the unit disk model. This study advances research
in the area by providing the asymptotic distribution of the number of isolated
nodes in random networks with nodes Poisson distributed on a unit square
under a generic random connection model. On that basis, we derive a necessary
condition for the above network to be asymptotically almost surely connected.
These results, together with results in a companion paper on the sufficient
condition for a network to be connected, expand recent results obtained for
connectivity of random geometric graphs assuming a unit disk model to results
assuming a more generic and more practical random connection model.
|
1103.2046
|
Wireless Network Simplification: the Gaussian N-Relay Diamond Network
|
cs.IT math.IT
|
We consider the Gaussian N-relay diamond network, where a source wants to
communicate to a destination node through a layer of N-relay nodes. We
investigate the following question: what fraction of the capacity can we
maintain by using only k out of the N available relays? We show that
independent of the channel configurations and the operating SNR, we can always
find a subset of k relays which alone provide a rate (kC/(k+1))-G, where C is
the information theoretic cutset upper bound on the capacity of the whole
network and G is a constant that depends only on N and k (logarithmic in N and
linear in k). In particular, for k = 1, this means that half of the capacity of
any N-relay diamond network can be approximately achieved by routing
information over a single relay. We also show that this fraction is tight:
there are configurations of the N-relay diamond network where every subset of k
relays alone can at most provide approximately a fraction k/(k+1) of the total
capacity. These high-capacity k-relay subnetworks can be also discovered
efficiently. We propose an algorithm that computes a constant gap approximation
to the capacity of the Gaussian N-relay diamond network in O(N log N) running
time and discovers a high-capacity k-relay subnetwork in O(kN) running time.
This result also provides a new approximation to the capacity of the Gaussian
N-relay diamond network which is hybrid in nature: it has both multiplicative
and additive gaps. In the intermediate SNR regime, this hybrid approximation is
tighter than existing purely additive or purely multiplicative approximations
to the capacity of this network.
|
1103.2059
|
The Walk Distances in Graphs
|
math.CO cs.DM cs.SI math.MG
|
The walk distances in graphs are defined as the result of appropriate
transformations of the $\sum_{k=0}^\infty(tA)^k$ proximity measures, where $A$
is the weighted adjacency matrix of a graph and $t$ is a sufficiently small
positive parameter. The walk distances are graph-geodetic; moreover, they
converge to the shortest path distance and to the so-called long walk distance
as the parameter $t$ approaches its limiting values. We also show that the
logarithmic forest distances which are known to generalize the resistance
distance and the shortest path distance are a subclass of walk distances. On
the other hand, the long walk distance is equal to the resistance distance in a
transformed graph.
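Since $\sum_{k=0}^\infty(tA)^k=(I-tA)^{-1}$ whenever $t$ is below the inverse spectral radius of $A$, these proximities are cheap to compute. Below is a sketch using one natural logarithmic transformation (the paper's exact transformation may differ), together with a check of the graph-geodetic property on a path graph:

```python
import numpy as np

def walk_distances(A, t):
    """Walk distances from R = sum_{k>=0} (tA)^k = (I - tA)^{-1}.

    Applies the logarithmic transformation
        d_ij = (ln r_ii + ln r_jj)/2 - ln r_ij,
    one natural choice (the paper's exact transformation may differ).
    Requires t < 1/rho(A) so the series converges and R is positive.
    """
    n = A.shape[0]
    R = np.linalg.inv(np.eye(n) - t * A)
    logdiag = np.log(np.diag(R))
    return (logdiag[:, None] + logdiag[None, :]) / 2 - np.log(R)

# path graph 1-2-3: graph-geodetic means d(1,3) = d(1,2) + d(2,3) exactly,
# since every path from node 1 to node 3 passes through node 2
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
D = walk_distances(A, 0.1)
print(np.isclose(D[0, 2], D[0, 1] + D[1, 2]))  # True
```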
|
1103.2068
|
COMET: A Recipe for Learning and Using Large Ensembles on Massive Data
|
cs.LG cs.DC stat.ML
|
COMET is a single-pass MapReduce algorithm for learning on large-scale data.
It builds multiple random forest ensembles on distributed blocks of data and
merges them into a mega-ensemble. This approach is appropriate when learning
from massive-scale data that is too large to fit on a single machine. To get
the best accuracy, IVoting should be used instead of bagging to generate the
training subset for each decision tree in the random forest. Experiments with
two large datasets (5GB and 50GB compressed) show that COMET compares favorably
(in both accuracy and training time) to learning on a subsample of data using a
serial algorithm. Finally, we propose a new Gaussian approach for lazy ensemble
evaluation which dynamically decides how many ensemble members to evaluate per
data point; this can reduce evaluation cost by 100X or more.
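The lazy-evaluation idea can be sketched as follows (a hypothetical illustration, not the authors' implementation): evaluate binary ensemble members one at a time and stop as soon as a normal-approximation confidence interval around the running vote fraction excludes the 0.5 decision boundary.

```python
import math

def lazy_vote(members, x, z=2.576, min_evals=10):
    """Evaluate binary (0/1) ensemble members on x one at a time; stop early
    once the z-sigma normal confidence interval around the running mean vote
    excludes the 0.5 decision threshold. Returns (prediction, members used).
    A sketch of the lazy-evaluation idea, not the paper's exact procedure."""
    votes = 0
    for i, member in enumerate(members, start=1):
        votes += member(x)
        if i >= min_evals:
            p = votes / i
            half_width = z * math.sqrt(max(p * (1 - p), 1e-12) / i)
            if p - half_width > 0.5 or p + half_width < 0.5:
                break  # the outcome is already statistically settled
    return int(votes / i > 0.5), i

# 1000 'trees' that agree 90% of the time: far fewer than 1000 get evaluated
members = [(lambda x, j=j: 1 if j % 10 else 0) for j in range(1000)]
pred, used = lazy_vote(members, x=None)
print(pred, used)  # 1 10 -- only 10 of the 1000 members were evaluated
```

The threshold z controls the trade-off between evaluation cost and agreement with the full-ensemble vote.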
|
1103.2071
|
Secure Satellite Communication Systems Design with Individual Secrecy
Rate Constraints
|
cs.IT cs.CR cs.NI math.IT
|
In this paper, we study multibeam satellite secure communication through
physical (PHY) layer security techniques, i.e., joint power control and
beamforming. By first assuming that the Channel State Information (CSI) is
available and the beamforming weights are fixed, a novel secure satellite
system design is investigated to minimize the transmit power with individual
secrecy rate constraints. An iterative algorithm is proposed to obtain an
optimized power allocation strategy. Moreover, sub-optimal beamforming weights
are obtained by completely eliminating the co-channel interference and nulling
the eavesdroppers' signal simultaneously. In order to obtain jointly optimized
power allocation and beamforming strategy in some practical cases, e.g., with
certain estimation errors of the CSI, we further evaluate the impact of the
eavesdropper's CSI on the secure multibeam satellite system design. The
convergence of the iterative algorithm is proven under justifiable assumptions.
The performance is evaluated by taking into account the impact of the number of
antenna elements, number of beams, individual secrecy rate requirement, and
CSI. The proposed novel secure multibeam satellite system design can achieve
optimized power allocation to ensure the minimum individual secrecy rate
requirement. The results show that the joint beamforming scheme is more
favorable than the fixed beamforming scheme, especially in the cases of a
larger number of satellite antenna elements and a higher secrecy rate
requirement.
Finally, we compare the results under the current satellite air-interface in
DVB-S2 and the results under Gaussian inputs.
|
1103.2091
|
An Artificial Immune System Model for Multi-Agents Resource Sharing in
Distributed Environments
|
cs.AI
|
The natural immune system plays a vital role in the survival of all living
beings. It provides a mechanism for an organism to defend itself against
external pathogens, making it a consistent system capable of adapting for
survival when conditions change. The human immune system has motivated
scientists and engineers to develop powerful information-processing algorithms
that have solved complex engineering tasks. This paper explores one possibility
for solving problems in a multi-agent scenario in which multiple robots are
deployed to achieve a goal collectively. The final goal depends on the
performance of each individual robot and on its survival without losing energy
beyond a predetermined threshold, achieved by deploying an evolutionary
computational technique, the artificial immune system, that imitates the
biological immune system.
|
1103.2110
|
A hybrid model for bankruptcy prediction using genetic algorithm, fuzzy
c-means and mars
|
cs.NE cs.AI
|
Bankruptcy prediction is very important for every organization, since
bankruptcy affects the economy and raises many social problems with high
costs. A large number of techniques have been developed to predict bankruptcy,
which help decision makers such as investors and financial analysts. One of
the bankruptcy prediction models is the hybrid model using Fuzzy C-means
clustering and MARS, which uses static ratios taken from bank financial
statements for prediction and has its own theoretical advantages. The
performance of the existing bankruptcy model can be improved by selecting the
best features dynamically, depending on the nature of the firm. This dynamic
selection can be accomplished by a genetic algorithm, and it improves the
performance of the prediction model.
|
1103.2172
|
Cooperative Strategies for Interference-Limited Wireless Networks
|
cs.IT math.IT
|
Consider the communication of a single user aided by a nearby relay in a large
wireless network where the nodes form a homogeneous Poisson point process.
Since this network is interference-limited, the asymptotic error probability
is bounded from above by the outage probability experienced by the user. We
investigate the outage behavior of the well-known cooperative schemes, namely
decode-and-forward (DF) and compress-and-forward (CF). In this setting, the
outage events are induced both by fading and by the spatial proximity of the
neighboring nodes that generate the strongest interference and hence the worst
communication case. Upper and lower bounds on the asymptotic error
probability, which are tight in some cases, are derived. It is shown that
there exists a clear trade-off between the network density and the benefits of
user cooperation. These results are useful for evaluating performance and
optimizing relaying schemes in the context of large wireless networks.
|
1103.2177
|
Modeling and Analysis of K-Tier Downlink Heterogeneous Cellular Networks
|
cs.IT math.IT
|
Cellular networks are in a major transition from a carefully planned set of
large tower-mounted base-stations (BSs) to an irregular deployment of
heterogeneous infrastructure elements that often additionally includes micro,
pico, and femtocells, as well as distributed antennas. In this paper, we
develop a tractable, flexible, and accurate model for a downlink heterogeneous
cellular network (HCN) consisting of K tiers of randomly located BSs, where
each tier may differ in terms of average transmit power, supported data rate
and BS density. Assuming that a mobile user connects to the strongest
candidate BS, that the resulting Signal-to-Interference-plus-Noise Ratio
(SINR) is greater than 1 when in coverage, and Rayleigh fading, we derive an
expression for the probability of coverage (equivalently, outage) over the
entire network under both open and closed access, which assumes a strikingly
simple closed form in the high-SINR regime and is accurate down to -4 dB even
under weaker assumptions. For
external validation, we compare against an actual LTE network (for tier 1) with
the other K-1 tiers being modeled as independent Poisson Point Processes. In
this case as well, our model is accurate to within 1-2 dB. We also derive the
average rate achieved by a randomly located mobile and the average load on each
tier of BSs. One interesting observation for interference-limited open access
networks is that at a given SINR, adding more tiers and/or BSs neither
increases nor decreases the probability of coverage or outage when all the
tiers have the same target-SINR.
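The model lends itself to quick Monte Carlo checks. Below is a simplified simulation sketch: all parameter values are illustrative, noise is ignored (interference-limited regime), and connecting to the instantaneously strongest BS is a simplification of the association rule in the paper.

```python
import numpy as np

def coverage_probability(densities, powers, targets, alpha=4.0,
                         radius=20.0, trials=2000, seed=0):
    """Monte Carlo sketch of downlink coverage in a K-tier HCN: each tier is
    an independent PPP with its own density, transmit power and target SINR;
    the user at the origin connects to the strongest received BS and is
    covered if its SIR exceeds the serving tier's target. Rayleigh fading;
    noise is ignored (interference-limited regime)."""
    rng = np.random.default_rng(seed)
    covered = 0
    for _ in range(trials):
        rx, tier_of = [], []
        for tier, (lam, p) in enumerate(zip(densities, powers)):
            n = rng.poisson(lam * np.pi * radius ** 2)
            r = radius * np.sqrt(rng.random(n))  # BS distances, uniform in a disk
            fading = rng.exponential(size=n)     # Rayleigh power fading
            rx.append(p * fading * r ** -alpha)
            tier_of.append(np.full(n, tier))
        rx = np.concatenate(rx)
        tier_of = np.concatenate(tier_of)
        if rx.size == 0:
            continue
        k = np.argmax(rx)                        # serve from the strongest BS
        sir = rx[k] / (rx.sum() - rx[k])
        covered += bool(sir > targets[tier_of[k]])
    return covered / trials

# illustrative two-tier example: sparse macro tier, dense low-power tier
print(coverage_probability(densities=[0.01, 0.05], powers=[100.0, 1.0],
                           targets=[1.0, 1.0]))
```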
|
1103.2184
|
On Stability of V-Like Formations
|
physics.soc-ph cs.SI nlin.AO
|
Group behavior has received much attention as a test case of
self-organization. There has been much written in recent years to investigate
interactions within groups of agents. These agents can be animals moving in an
interactive way, such as birds, but can also refer to situations such as people
driving in traffic. The models that describe these interactions are able to
reproduce different structures and patterns relating to the movement and
interaction of the agents involved. The advantages and necessities of this type
of analysis in any complex biological, technological, economic, or social
system are important and far-reaching. Each model that we will discuss
describes interaction between agents.
|
1103.2230
|
Control Complexity in Bucklin and Fallback Voting
|
cs.CC cs.MA
|
Electoral control models ways of changing the outcome of an election via such
actions as adding/deleting/partitioning either candidates or voters. To protect
elections from such control attempts, computational complexity has been
investigated and the corresponding NP-hardness results are termed "resistance."
It has been a long-running project of research in this area to classify the
major voting systems in terms of their resistance properties. We show that
fallback voting, an election system proposed by Brams and Sanver (2009) to
combine Bucklin with approval voting, is resistant to each of the common types
of control except to destructive control by either adding or deleting voters.
Thus fallback voting displays the broadest control resistance currently known
to hold among natural election systems with a polynomial-time winner problem.
We also study the control complexity of Bucklin voting and show that it
performs at least almost as well as fallback voting in terms of control
resistance. As Bucklin voting is a special case of fallback voting, each
resistance shown for Bucklin voting strengthens the corresponding resistance
for fallback voting. Such worst-case complexity analysis is at best an
indication of security against control attempts, rather than a proof. In
practice, the difficulty of control will depend on the structure of typical
instances. We investigate the parameterized control complexity of Bucklin and
fallback voting, according to several parameters that are often likely to be
small for typical instances. Our results, though still in the worst-case
complexity model, can be interpreted as significant strengthenings of the
resistance demonstrations based on NP-hardness.
|
1103.2240
|
Price-Based Resource Allocation for Spectrum-Sharing Femtocell Networks:
A Stackelberg Game Approach
|
cs.IT math.IT
|
This paper investigates the price-based resource allocation strategies for
the uplink transmission of a spectrum-sharing femtocell network, in which a
central macrocell is underlaid with distributed femtocells, all operating over
the same frequency band as the macrocell. Assuming that the macrocell base
station (MBS) protects itself by pricing the interference from the femtocell
users, a Stackelberg game is formulated to study the joint utility maximization
of the macrocell and the femtocells subject to a maximum tolerable interference
power constraint at the MBS. In particular, two practical femtocell channel
models are investigated: a sparsely deployed scenario for rural areas and a
densely deployed scenario for urban areas. For each scenario, two pricing
schemes are proposed: uniform pricing and non-uniform pricing. Then, the
Stackelberg equilibria of these proposed games are studied, and an effective
distributed interference price bargaining algorithm with guaranteed convergence
is proposed for the uniform-pricing case. Finally, numerical examples are
presented to verify the proposed studies. It is shown that the proposed
algorithms are effective in resource allocation and macrocell protection
requiring minimal network overhead for spectrum-sharing-based two-tier
femtocell networks.
|
1103.2252
|
Analytically solvable processes on networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We introduce a broad class of analytically solvable processes on networks. In
special cases, they reduce to the random walk and the consensus process, the
two most basic processes on networks. Our class differs from previous models
of interactions (such as the stochastic Ising model, cellular automata,
infinite particle systems, and the voter model) in several ways, the two most
important being: (i) the model is analytically solvable even when the
dynamical equation for each node may be different and the network may have an
arbitrary finite graph and influence structure; and (ii) when the local
dynamics are described by the same evolution equation, the model is moreover
decomposable: the equilibrium behavior of the system can be expressed as an
explicit function of network topology and node dynamics.
|
1103.2264
|
Rich-club and page-club coefficients for directed graphs
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Rich-club and page-club coefficients and their null models are introduced for
directed graphs. Null models allow for a quantitative discussion of the
rich-club and page-club phenomena. These coefficients are computed for four
directed real-world networks: the Arxiv High Energy Physics paper citation
network, a Web network (released by Google), the citation network among US
patents, and an email network from an EU research institution. The results
show a high correlation between rich-club and page-club ordering. For the
journal paper citation network, we identify both rich-club and page-club
ordering, showing that "elite" papers are cited by other "elite" papers. The
Google web network shows partial rich-club and page-club ordering up to some
point and then a narrow decline of the corresponding normalized coefficients,
indicating the lack of rich-club ordering and the lack of page-club ordering,
i.e., high in-degree (PageRank) pages purposely avoid sharing links with other
high in-degree (PageRank) pages. For the US patents citation network, we
identify page-club and rich-club ordering, leading to the conclusion that
"elite" patents are cited by other "elite" patents. Finally, for the e-mail
communication network we show the lack of both rich-club and page-club
ordering. We also construct an example of a synthetic network showing
page-club ordering and the lack of rich-club ordering.
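As an illustration of the underlying quantity (a sketch that assumes "richness" is measured by total degree; the paper's convention may differ), the directed rich-club coefficient can be computed as the link density among nodes whose degree exceeds a threshold k:

```python
def rich_club_coefficient(edges, k):
    """Directed rich-club coefficient: among nodes whose total degree
    (in + out) exceeds k, the fraction of the N*(N-1) possible directed
    links that are actually present."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    rich = {n for n, d in degree.items() if d > k}
    if len(rich) < 2:
        return float('nan')  # coefficient undefined for fewer than 2 rich nodes
    links = sum(1 for u, v in edges if u in rich and v in rich)
    return links / (len(rich) * (len(rich) - 1))

edges = [(1, 2), (2, 1), (1, 3), (3, 1), (2, 3), (4, 1), (5, 2)]
print(rich_club_coefficient(edges, k=2))  # 5/6: nodes 1, 2, 3 are 'rich'
```

A normalized coefficient would divide this value by the same quantity computed on a degree-preserving null model, as the abstract describes.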
|
1103.2289
|
A Token Based Algorithm to Distributed Computation in Sensor Networks
|
cs.NI cs.DC cs.IT cs.SY math.IT math.OC
|
We consider distributed algorithms for data aggregation and function
computation in sensor networks. The algorithms perform pairwise computations
along edges of an underlying communication graph. A token is associated with
each sensor node, which acts as a transmission permit. Nodes with active tokens
have transmission permits; they generate messages at a constant rate and send
each message to a randomly selected neighbor. By using different strategies to
control the transmission permits we can obtain tradeoffs between message and
time complexity. Gossip corresponds to the case when all nodes have permits all
the time. We study algorithms where permits are revoked after transmission and
restored upon reception. Examples of such algorithms include Simple Random
Walk (SRW), Coalescent Random Walk (CRW), and Controlled Flooding (CFLD), and
their hybrid variants. SRW has a single node permit, which is passed on in the
network. CRW initially has a permit for each node, but these permits are
revoked gradually. The final result for SRW and CRW resides at a single (or a
few) random node(s), making a direct comparison with GOSSIP difficult. A
hybrid two-phase algorithm switching from CRW to CFLD at a suitable
pre-determined time can be employed to achieve consensus. We show that such
hybrid variants
achieve significant gains in both message and time complexity. The per-node
message complexity for n-node graphs, such as 2D mesh, torii, and Random
geometric graphs, scales as $O(polylog(n))$ and the corresponding time
complexity scales as O(n). The reduced per-node message complexity leads to
reduced energy utilization in sensor networks.
|
1103.2325
|
Self reference in word definitions
|
cs.CL cs.AI physics.soc-ph
|
Dictionaries are inherently circular in nature. A given word is linked to a
set of alternative words (the definition) which in turn point to further
descendants. Iterating through definitions in this way, one typically finds
that definitions loop back upon themselves. The graph formed by such
definitional relations is our object of study. By eliminating those links which
are not in loops, we arrive at a core subgraph of highly connected nodes.
We observe that definitional loops are conveniently classified by length,
with longer loops usually emerging from semantic misinterpretation. By breaking
the long loops in the graph of the dictionary, we arrive at a set of
disconnected clusters. We find that the words in these clusters constitute
semantic units, and moreover tend to have been introduced into the English
language at similar times, suggesting a possible mechanism for language
evolution.
|
1103.2342
|
SPPAM - Statistical PreProcessing AlgorithM
|
cs.AI
|
Most machine learning tools work with a single table where each row is an
instance and each column is an attribute. Each cell of the table contains an
attribute value for an instance. This representation prevents one important
form of learning, namely classification based on groups of correlated
records, such as multiple exams of a single patient, internet customer
preferences, weather forecast or prediction of sea conditions for a given day.
To some extent, relational learning methods, such as inductive logic
programming, can capture this correlation through the use of intensional
predicates added to the background knowledge. In this work, we propose SPPAM,
an algorithm that aggregates past observations in one single record. We show
that applying SPPAM to the original correlated data, before the learning task,
can produce classifiers that are better than the ones trained using all
records.
|
1103.2351
|
Engineering Relative Compression of Genomes
|
cs.CE cs.IT math.IT q-bio.QM
|
Technological progress in DNA sequencing boosts genomic database growth at an
ever faster rate. Compression, accompanied by random access capabilities, is
the key to maintaining those huge amounts of data. In this paper we present an
LZ77-style compression scheme for relative compression of multiple genomes of
the same species. While the solution bears similarity to known algorithms, it
offers significantly higher compression ratios at a compression speed over an
order of magnitude greater. One of the new successful ideas is augmenting the
reference sequence with phrases from the other sequences, making more
LZ-matches available.
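A toy sketch of relative LZ compression (illustrative only; it omits the paper's reference augmentation and its efficient matching structures): the target genome is parsed into (position, length) phrases pointing into the reference, with literal symbols as a fallback.

```python
def relative_lz_encode(reference, target, min_match=4):
    """Greedy LZ77-style relative encoder: parse `target` as (pos, length)
    phrases pointing into `reference`, falling back to literal symbols.
    A toy sketch; real tools use suffix-array or hash-based indexes."""
    # index every min_match-gram of the reference
    index = {}
    for i in range(len(reference) - min_match + 1):
        index.setdefault(reference[i:i + min_match], []).append(i)
    out, j = [], 0
    while j < len(target):
        best = None
        for start in index.get(target[j:j + min_match], []):
            length = 0
            while (start + length < len(reference) and j + length < len(target)
                   and reference[start + length] == target[j + length]):
                length += 1
            if best is None or length > best[1]:
                best = (start, length)
        if best:
            out.append(best)       # copy phrase from the reference
            j += best[1]
        else:
            out.append(target[j])  # literal symbol
            j += 1
    return out

ref = "ACGTACGTTTGACCA"
tgt = "ACGTTTGACGTACGA"
print(relative_lz_encode(ref, tgt))  # [(4, 9), (2, 5), 'A']
```

Decoding simply replays the phrases against the shared reference, which is what makes random access to individual genomes cheap.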
|
1103.2356
|
Adaptive mosaic image representation for image processing
|
physics.data-an cs.CV
|
A mosaic image representation (MIR) method is proposed for the selective
treatment of image fragments of different transition frequency. The MIR method
is based on a piecewise-constant image approximation on a non-uniform
orthogonal grid constructed by the following recurrent multigrid algorithm. A
sequence of nested uniform grids is built, such that each cell of the current
grid is subdivided into four smaller cells to form the next grid. In each
grid, the cells are selected where the color intensity function can be
approximated by its average value with a given precision (hereafter, 'good'
cells). After replacing the colors of good cells by their approximating
constants, the reconstructed image looks like a mosaic composed of one-colored
cells. The multigrid algorithm results in the stratification of the image
space into regions of different transition frequency. The sizes of these
regions depend on a few tuning precision parameters that characterize the
adaptability of the method to image fragments of different degrees of
non-homogeneity. The method is found efficient for prominent contour
(skeleton) extraction and edge detection, as well as for the lossy compression
of single images and video sequences.
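The recurrent subdivision can be sketched as a recursive routine on a grayscale image (a simplified illustration, assuming a 'good' cell is one whose pixels all lie within a tolerance of the cell average):

```python
def mosaic(image, x0, y0, w, h, tol):
    """Recursively approximate a grayscale image (list of rows) on the block
    (x0, y0, w, h): if all pixels lie within `tol` of the block average
    ('good' cell), paint the block with that average; otherwise subdivide
    into four sub-cells. A sketch of the multigrid idea in the abstract."""
    pixels = [image[y][x] for y in range(y0, y0 + h) for x in range(x0, x0 + w)]
    avg = sum(pixels) / len(pixels)
    if all(abs(p - avg) <= tol for p in pixels) or (w == 1 and h == 1):
        for y in range(y0, y0 + h):
            for x in range(x0, x0 + w):
                image[y][x] = avg
        return
    for dx, dy, sw, sh in ((0, 0, w // 2, h // 2),
                           (w // 2, 0, w - w // 2, h // 2),
                           (0, h // 2, w // 2, h - h // 2),
                           (w // 2, h // 2, w - w // 2, h - h // 2)):
        if sw and sh:
            mosaic(image, x0 + dx, y0 + dy, sw, sh, tol)

img = [[10, 10, 10, 10],
       [10, 10, 10, 10],
       [10, 10, 90, 90],
       [10, 10, 90, 90]]
mosaic(img, 0, 0, 4, 4, tol=5)  # uniform quadrants become one-colored cells
```

Tightening `tol` produces finer cells near edges and coarser cells in homogeneous regions, which is the stratification the abstract describes.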
|
1103.2376
|
Language, Emotions, and Cultures: Emotional Sapir-Whorf Hypothesis
|
cs.AI
|
An emotional version of Sapir-Whorf hypothesis suggests that differences in
language emotionalities influence differences among cultures no less than
conceptual differences. The conceptual contents of languages and cultures are
to a significant extent determined by words and their semantic differences;
these could be borrowed among languages and exchanged among cultures. Emotional
differences, as suggested in the paper, are related to grammar and mostly
cannot be borrowed. Conceptual and emotional mechanisms of languages are
considered here along with their functions in the mind and cultural evolution.
A fundamental contradiction in human mind is considered: language evolution
requires reduced emotionality, but "too low" emotionality makes language
"irrelevant to life," disconnected from sensory-motor experience. Neural
mechanisms of these processes are suggested as well as their mathematical
models: the knowledge instinct, the language instinct, the dual model
connecting language and cognition, dynamic logic, neural modeling fields.
Mathematical results are related to cognitive science, linguistics, and
psychology. Experimental evidence and theoretical arguments are discussed.
Approximate equations for evolution of human minds and cultures are obtained.
Their solutions identify three types of cultures: "conceptual"-pragmatic
cultures, in which emotionality of language is reduced and differentiation
overtakes synthesis resulting in fast evolution at the price of uncertainty of
values, self-doubts, and internal crises; "traditional-emotional" cultures
where differentiation lags behind synthesis, resulting in cultural stability at
the price of stagnation; and "multi-cultural" societies combining fast cultural
evolution and stability. Unsolved problems and future theoretical and
experimental directions are discussed.
|
1103.2406
|
Automatic Wrappers for Large Scale Web Extraction
|
cs.DB
|
We present a generic framework to make wrapper induction algorithms tolerant
to noise in the training data. This enables us to learn wrappers in a
completely unsupervised manner from automatically and cheaply obtained noisy
training data, e.g., using dictionaries and regular expressions. By removing
the site-level supervision that wrapper-based techniques require, we are able
to perform information extraction at web-scale, with accuracy unattained with
existing unsupervised extraction techniques. Our system is used in production
at Yahoo! and powers live applications.
|
1103.2408
|
Using Paxos to Build a Scalable, Consistent, and Highly Available
Datastore
|
cs.DB cs.DC
|
Spinnaker is an experimental datastore that is designed to run on a large
cluster of commodity servers in a single datacenter. It features key-based
range partitioning, 3-way replication, and a transactional get-put API with the
option to choose either strong or timeline consistency on reads. This paper
describes Spinnaker's Paxos-based replication protocol. The use of Paxos
ensures that a data partition in Spinnaker will be available for reads and
writes as long as a majority of its replicas are alive. Unlike traditional
master-slave replication, this is true regardless of the failure sequence that
occurs. We show that Paxos replication can be competitive with alternatives
that provide weaker consistency guarantees. Compared to an eventually
consistent datastore, we show that Spinnaker can be as fast or even faster on
reads and only 5% to 10% slower on writes.
|
1103.2409
|
Fast Set Intersection in Memory
|
cs.DB cs.DS
|
Set intersection is a fundamental operation in information retrieval and
database systems. This paper introduces linear space data structures to
represent sets such that their intersection can be computed in a worst-case
efficient way. In general, given k (preprocessed) sets, with totally n
elements, we will show how to compute their intersection in expected time
O(n/sqrt(w)+kr), where r is the intersection size and w is the number of bits
in a machine word. In addition, we introduce a very simple version of this
algorithm that has weaker asymptotic guarantees but performs even better in
practice; both algorithms outperform the state of the art techniques in terms
of execution time for both synthetic and real data sets and workloads.
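For contrast with the paper's worst-case-efficient structures, the standard expected-linear baseline probes every element of the smallest set against hash tables of the others; a sketch:

```python
def intersect(sets):
    """Expected-linear baseline for k-set intersection: probe each element of
    the smallest set against the other sets (hash tables), so expected cost
    is dominated by the smallest set. The paper's preprocessed structures
    improve on the worst case beyond this."""
    sets = sorted((set(s) for s in sets), key=len)
    smallest, rest = sets[0], sets[1:]
    return {x for x in smallest if all(x in s for s in rest)}

print(sorted(intersect([[1, 3, 5, 7, 9], [3, 4, 5, 9], [5, 9, 11]])))  # [5, 9]
```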
|
1103.2410
|
Large-Scale Collective Entity Matching
|
cs.DB
|
There have been several recent advancements in Machine Learning community on
the Entity Matching (EM) problem. However, their lack of scalability has
prevented them from being applied in practical settings on large real-life
datasets. Towards this end, we propose a principled framework to scale any
generic EM algorithm. Our technique consists of running multiple instances of
the EM algorithm on small neighborhoods of the data and passing messages across
neighborhoods to construct a global solution. We prove formal properties of our
framework and experimentally demonstrate the effectiveness of our approach in
scaling EM algorithms.
|
1103.2411
|
Unification of Maximum Entropy and Bayesian Inference via Plausible
Reasoning
|
cs.IT math.IT math.ST physics.data-an stat.TH
|
This paper modifies Jaynes's axioms of plausible reasoning and derives the
minimum relative entropy principle, Bayes's rule, as well as maximum likelihood
from first principles. The new axioms, which I call the Optimum Information
Principle, are applicable whenever the decision maker is given the data and the
relevant background information. These axioms provide an answer to the question
"why maximize entropy when faced with incomplete information?"
|
1103.2431
|
The Embedding Capacity of Information Flows Under Renewal Traffic
|
cs.IT math.IT
|
Given two independent point processes and a certain rule for matching points
between them, what is the fraction of matched points over infinitely long
streams? In many application contexts, e.g., secure networking, a meaningful
matching rule is that of a maximum causal delay, and the problem is related to
embedding a flow of packets in cover traffic such that no traffic analysis can
detect it. We study the best undetectable embedding policy and the
corresponding maximum flow rate, which we call the embedding capacity, under
the assumption that the cover traffic can be modeled as arbitrary renewal
processes. We find that computing the embedding capacity requires the
inversion of very structured linear systems that, for a broad range of renewal
models encountered in practice, admit a fully analytical expression in terms
of the renewal function of the processes. Our main theoretical contribution is
a simple closed form of this relationship.
properties of the embedding capacity, obtaining closed-form solutions for
selected distribution families and a suite of sufficient conditions on the
capacity ordering. We evaluate our solution on real network traces, which shows
a noticeable match for tight delay constraints. A gap between the predicted and
the actual embedding capacities appears for looser constraints, and further
investigation reveals that it is caused by inaccuracy of the renewal traffic
model rather than of the solution itself.
|
1103.2447
|
Mini-step Strategy for Transient Analysis
|
cs.CE
|
Domain decomposition methods are widely used to solve sparse linear systems
arising from scientific problems, but they are not suited to sparse linear
systems extracted from integrated circuits. The reason is that the sparse
linear systems of integrated circuits may not be diagonally dominant, and
domain decomposition methods may fail to converge for such
non-diagonally-dominant matrices. In this paper, we propose a mini-step
strategy for circuit transient analysis. Unlike the traditional large-step
approach, this strategy generates diagonally dominant sparse linear systems.
As a result, preconditioned domain decomposition methods can be used to
simulate large integrated circuits on supercomputers and clouds.
|
1103.2467
|
Commuter networks and community detection: a method for planning sub
regional areas
|
physics.soc-ph cs.SI
|
A major issue for policy makers and planners is the definition of the "ideal"
regional partition, i.e. the delimitation of sub-regional domains showing a
sufficient level of homogeneity with respect to some specific territorial
features. In Sardinia, the second largest island in the Mediterranean Sea,
politicians and analysts have been involved in a 50-year process of
identification of the correct pattern for the province, an intermediate
administrative body in between the Regional and the municipal administration.
In this paper, we compare some intermediate body partitions of Sardinia with
the patterns of the communities of workers and students, by applying grouping
methodologies based on the characterization of Sardinian commuters' system as a
complex weighted network. We adopt an algorithm based on the maximization of
the weighted modularity of this network to detect productive basins composed of
municipalities showing a certain degree of cohesiveness in terms of commuter
flows. The results obtained lead us to conclude that the new provinces in
Sardinia seem to have been designed, even if unconsciously, as labour basins of
municipalities with similar commuting behaviour.
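The weighted modularity that the grouping algorithm maximizes can be computed directly from its definition. The sketch below is our own minimal illustration (node names and weights are invented): it scores two candidate partitions of a toy commuter network and confirms that the basin-like partition has higher weighted modularity.

```python
def weighted_modularity(edges, partition):
    """Newman's weighted modularity Q for an undirected graph.
    edges: {(u, v): weight}; partition: {node: community label}."""
    two_m = 2.0 * sum(edges.values())
    strength, adj = {}, {}
    for (u, v), w in edges.items():
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
        adj[(u, v)] = adj.get((u, v), 0.0) + w
        adj[(v, u)] = adj.get((v, u), 0.0) + w
    q = 0.0
    for i in strength:
        for j in strength:
            if partition[i] == partition[j]:
                q += adj.get((i, j), 0.0) - strength[i] * strength[j] / two_m
    return q / two_m

# Two tightly knit "basins" joined by one weak commuter flow (toy data)
edges = {("A", "B"): 10, ("B", "C"): 10, ("A", "C"): 10,
         ("D", "E"): 10, ("E", "F"): 10, ("D", "F"): 10,
         ("C", "D"): 1}
basins = {"A": 0, "B": 0, "C": 0, "D": 1, "E": 1, "F": 1}
mixed  = {"A": 0, "B": 0, "C": 1, "D": 1, "E": 0, "F": 1}
print(weighted_modularity(edges, basins) > weighted_modularity(edges, mixed))  # True
```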
|
1103.2469
|
Blind Compressed Sensing Over a Structured Union of Subspaces
|
cs.IT math.IT
|
This paper addresses the problem of simultaneous signal recovery and
dictionary learning based on compressive measurements. Multiple signals are
analyzed jointly, with multiple sensing matrices, under the assumption that the
unknown signals come from a union of a small number of disjoint subspaces. This
problem is important, for instance, in image inpainting applications, in which
the multiple signals are constituted by (incomplete) image patches taken from
the overall image. This work extends standard dictionary learning and
block-sparse dictionary optimization, by considering compressive measurements,
e.g., incomplete data). Previous work on blind compressed sensing is also
generalized by using multiple sensing matrices and relaxing some of the
restrictions on the learned dictionary. Drawing on results developed in the
context of matrix completion, it is proven that both the dictionary and signals
can be recovered with high probability from compressed measurements. The
solution is unique up to block permutations and invertible linear
transformations of the dictionary atoms. The recovery is contingent on the
number of measurements per signal and the number of signals being sufficiently
large; bounds are derived for these quantities. In addition, this paper
presents a computationally practical algorithm that performs dictionary
learning and signal recovery, and establishes conditions for its convergence to
a local optimum. Experimental results for image inpainting demonstrate the
capabilities of the method.
|
1103.2491
|
Heterogeneous Learning in Zero-Sum Stochastic Games with Incomplete
Information
|
cs.LG cs.GT cs.SY math.OC
|
Learning algorithms are essential for the applications of game theory in a
networking environment. In dynamic and decentralized settings where the
traffic, topology and channel states may vary over time and the communication
between agents is impractical, it is important to formulate and study games of
incomplete information and fully distributed learning algorithms which require,
for each agent, a minimal amount of information regarding the remaining agents.
In this paper, we address this major challenge and introduce heterogeneous
learning schemes in which each agent adopts a distinct learning pattern in the
context of games with incomplete information. We use stochastic approximation
techniques to show that the heterogeneous learning schemes can be studied in
terms of their deterministic ordinary differential equation (ODE) counterparts.
Depending on the learning rates of the players, these ODEs could be different
from the standard replicator dynamics, (myopic) best response (BR) dynamics,
logit dynamics, and fictitious play dynamics. We apply the results to a class
of security games in which the attacker and the defender adopt different
learning schemes due to differences in their rationality levels and the
information they acquire.
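The stochastic-approximation viewpoint can be illustrated with a one-dimensional Robbins-Monro recursion (a generic textbook sketch of ours, not one of the paper's learning schemes): with step sizes a_n = 1/(n+1), the noisy iterates track the deterministic ODE dx/dt = h(x) and settle at its rest point.

```python
import random

def stochastic_approximation(h, x0, steps, rng):
    """Robbins-Monro: x_{n+1} = x_n + a_n * (h(x_n) + noise), a_n = 1/(n+1).
    The iterates track the ODE dx/dt = h(x)."""
    x = x0
    for n in range(steps):
        a = 1.0 / (n + 1)
        x += a * (h(x) + rng.gauss(0.0, 1.0))
    return x

rng = random.Random(1)
# ODE dx/dt = 2 - x has the unique rest point x* = 2
x_final = stochastic_approximation(lambda x: 2.0 - x, 10.0, 200_000, rng)
print(abs(x_final - 2.0) < 0.1)  # True: iterates settle near the rest point
```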
|
1103.2493
|
A Constrained Evolutionary Gaussian Multiple Access Channel Game
|
cs.GT cs.SY math.DS math.OC
|
In this paper, we formulate an evolutionary multiple access channel game with
continuous-variable actions and coupled rate constraints. We characterize Nash
equilibria of the game and show that the pure Nash equilibria are Pareto
optimal and also resilient to deviations by coalitions of any size, i.e., they
are strong equilibria. We use the concepts of price of anarchy and strong price
of anarchy to study the performance of the system. The paper also addresses how
to select one specific equilibrium solution using the concepts of normalized
equilibrium and evolutionarily stable strategies. We examine the long-run
behavior of these strategies under several classes of evolutionary game
dynamics, such as Brown-von Neumann-Nash dynamics and replicator dynamics.
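Replicator dynamics of the kind studied here can be integrated numerically in a few lines. The sketch below uses a generic Hawk-Dove game of our own choosing, not one of the paper's multiple access games, and Euler-integrates dx_i/dt = x_i((Ax)_i - x.A.x) toward the mixed evolutionarily stable state.

```python
def replicator_step(x, payoff, dt):
    """One Euler step of replicator dynamics:
    dx_i/dt = x_i * ((A x)_i - x.A.x)."""
    n = len(x)
    fitness = [sum(payoff[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * fitness[i] for i in range(n))
    return [x[i] + dt * x[i] * (fitness[i] - avg) for i in range(n)]

# Hawk-Dove payoffs with value V = 2, cost C = 4; mixed ESS at V/C = 1/2
A = [[-1.0, 2.0],
     [0.0, 1.0]]
x = [0.1, 0.9]  # initial shares of Hawks and Doves
for _ in range(20_000):
    x = replicator_step(x, A, 0.01)
print(round(x[0], 3))  # → 0.5: half Hawks at the evolutionarily stable state
```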
|
1103.2496
|
Evolutionary Games for Multiple Access Control
|
cs.GT cs.SY math.DS math.OC
|
In this paper, we formulate an evolutionary multiple access control game with
continuous-variable actions and coupled constraints. We characterize equilibria
of the game and show that the pure equilibria are Pareto optimal and also
resilient to deviations by coalitions of any size, i.e., they are strong
equilibria. We use the concepts of price of anarchy and strong price of anarchy
to study the performance of the system. The paper also addresses how to select
one specific equilibrium solution using the concepts of normalized equilibrium
and evolutionarily stable strategies. We examine the long-run behavior of these
strategies under several classes of evolutionary game dynamics, such as
Brown-von Neumann-Nash dynamics, Smith dynamics and replicator dynamics. In
addition, we examine correlated equilibrium for the single-receiver model.
Correlated strategies are based on signaling structures before making decisions
on rates. We then focus on evolutionary games for hybrid additive white
Gaussian noise multiple access channel with multiple users and multiple
receivers, where each user chooses a rate and splits it over the receivers.
Users have coupled constraints determined by the capacity regions. Building
upon the static game, we formulate a system of hybrid evolutionary game
dynamics using G-function dynamics and Smith dynamics on rate control and
channel selection, respectively. We show that the evolving game has an
equilibrium and illustrate these dynamics with numerical examples.
|
1103.2501
|
On Gaussian Multiple Access Channels with Interference: Achievable Rates
and Upper Bounds
|
cs.IT math.IT
|
We study the interaction between two interfering Gaussian 2-user multiple
access channels. The capacity region is characterized under mixed
strong--extremely strong interference and individually very strong
interference. Furthermore, the sum capacity is derived under a less restricting
definition of very strong interference. Finally, a general upper bound on the
sum capacity is provided, which is nearly tight for weak cross links.
|
1103.2503
|
Coded Single-Tone Signaling for Resource Coordination and Interference
Management in Femtocell Networks
|
cs.IT math.IT
|
Resource coordination and interference management is the key to achieving the
benefits of femtocell networks. Over-the-air signaling is one of the most
effective means for distributed dynamic resource coordination and interference
management. However, the design of this type of signal is challenging. In this
letter, we address the challenges and propose an effective solution, referred
to as coded single-tone signaling (STS). The proposed coded STS scheme
possesses certain highly desirable properties, such as no dedicated resource
requirement (no overhead), no near-and-far effect, no inter-signal interference
(no multi-user interference), and low peak-to-average power ratio (deep coverage).
In addition, the proposed coded STS can fully exploit frequency diversity and
provides a means for high quality wideband channel estimation. The coded STS
design is demonstrated through a concrete numerical example. Performance of the
proposed coded STS is evaluated through simulations.
|
1103.2539
|
SO(3)-invariant asymptotic observers for dense depth field estimation
based on visual data and known camera motion
|
math.OC cs.CV
|
In this paper, we use known camera motion associated to a video sequence of a
static scene in order to estimate and incrementally refine the surrounding
depth field. We exploit the SO(3)-invariance of brightness and depth fields
dynamics to customize standard image processing techniques. Inspired by the
Horn-Schunck method, we propose a SO(3)-invariant cost to estimate the depth
field. At each time step, this provides a diffusion equation on the unit
Riemannian sphere that is numerically solved to obtain a real time depth field
estimation of the entire field of view. Two asymptotic observers are derived
from the governing equations of dynamics, respectively based on optical flow
and depth estimations: implemented on noisy sequences of synthetic images as
well as on real data, they perform a more robust and accurate depth estimation.
This approach is complementary to most methods employing state observers for
range estimation, which concern only single or isolated feature points.
|
1103.2544
|
Almost-perfect secret sharing
|
cs.IT cs.CR math.IT
|
Splitting a secret s between several participants, we generate (for each
value of s) shares for all participants. The goal: authorized groups of
participants should be able to reconstruct the secret but forbidden ones get no
information about it. In this paper we introduce several notions of
non-perfect secret sharing, where some small information leak is permitted. We
study its relation to the Kolmogorov complexity version of secret sharing
(establishing some connection in both directions) and the effects of changing
the secret size (showing that we can decrease the size of the secret and the
information leak at the same time).
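For contrast with the non-perfect notions introduced here, a perfect n-out-of-n scheme takes only a few lines: any n-1 shares are uniformly random and leak nothing, while all n shares XOR back to the secret. This is a minimal illustration of perfect sharing in general, not the paper's construction.

```python
import secrets

def split(secret: bytes, n: int):
    """n-of-n XOR sharing: any n-1 shares reveal nothing (perfect)."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = bytearray(secret)
    for s in shares:
        for i, b in enumerate(s):
            last[i] ^= b
    return shares + [bytes(last)]

def combine(shares):
    """XOR all shares together to recover the secret."""
    out = bytearray(len(shares[0]))
    for s in shares:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

parts = split(b"attack at dawn", 3)
print(combine(parts))  # b'attack at dawn'
```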
|
1103.2545
|
On essentially conditional information inequalities
|
cs.IT cs.DM math.IT math.PR
|
In 1997, Z. Zhang and R. W. Yeung found the first example of a conditional
information inequality in four variables that is not "Shannon-type". This
linear inequality for entropies is called conditional (or constraint) since it
holds only under condition that some linear equations are satisfied for the
involved entropies. Later, the same authors and other researchers discovered
several unconditional information inequalities that do not follow from
Shannon's inequalities for entropy.
In this paper we show that some non-Shannon-type conditional inequalities are
"essentially" conditional, i.e., they cannot be extended to any unconditional
inequality. We prove one new essentially conditional information inequality for
Shannon's entropy and discuss conditional information inequalities for
Kolmogorov complexity.
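Shannon-type inequalities, the baseline that the conditional inequalities here go beyond, can be checked mechanically on any finite joint distribution. The sketch below is our own toy check (the distribution and all names are invented): it verifies the submodularity instance I(X;Y|Z) >= 0.

```python
import math
from itertools import product

def entropy_of(joint, keep):
    """Shannon entropy H of the marginal on coordinates `keep`,
    given a joint pmf {(x1, x2, x3): prob}."""
    marg = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in keep)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

# Toy joint distribution: X, Y independent uniform bits, Z = X xor Y
joint = {(x, y, (x + y) % 2): 0.25 for x, y in product((0, 1), repeat=2)}
h_xz = entropy_of(joint, (0, 2))
h_yz = entropy_of(joint, (1, 2))
h_z = entropy_of(joint, (2,))
h_xyz = entropy_of(joint, (0, 1, 2))
# Shannon-type (submodularity): I(X;Y|Z) = H(XZ) + H(YZ) - H(Z) - H(XYZ) >= 0
print(h_xz + h_yz - h_z - h_xyz >= -1e-12)  # True
```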
|
1103.2560
|
The Generalized Degrees of Freedom Region of the MIMO Interference
Channel
|
cs.IT math.IT
|
The generalized degrees of freedom (GDoF) region of the MIMO Gaussian
interference channel (IC) is obtained for the general case of an arbitrary
number of antennas at each node and where the signal-to-noise ratios (SNR) and
interference-to-noise ratios (INR) vary with arbitrary exponents to a nominal
SNR. The GDoF region reveals various insights through the joint dependence of
optimal interference management techniques (at high SNR) on the SNR exponents
that determine the relative strengths of direct-link SNRs and cross-link INRs
and the numbers of antennas at the four terminals. For instance, it permits an
in-depth look at the issue of rate-splitting and partial decoding and it
reveals that, unlike in the scalar IC, treating interference as noise is not
always GDoF-optimal even in the very weak interference regime. Moreover, while
the DoF-optimal strategy that relies just on transmit/receive zero-forcing
beamforming and time-sharing is not GDoF optimal (and thus has an unbounded gap
to capacity), the precise characterization of the very strong interference
regime -- where single-user DoF performance can be achieved simultaneously for
both users -- depends on the relative numbers of antennas at the four terminals
and thus deviates from what it is in the SISO case. For asymmetric numbers of
antennas at the four nodes the shape of the symmetric GDoF curve can be a
"distorted W" curve to the extent that for certain MIMO ICs it is a "V" curve.
|
1103.2566
|
Optimal query/update tradeoffs in versioned dictionaries
|
cs.DS cs.DB
|
External-memory dictionaries are a fundamental data structure in file systems
and databases. Versioned (or fully-persistent) dictionaries have an associated
version tree where queries can be performed at any version, updates can be
performed on leaf versions, and any version can be `cloned' by adding a child.
Various query/update tradeoffs are known for unversioned dictionaries, many of
them with matching upper and lower bounds. No fully-versioned external-memory
dictionaries are known with optimal space/query/update tradeoffs. In
particular, no versioned constructions are known that offer updates in $o(1)$
I/Os using O(N) space. We present the first cache-oblivious and cache-aware
constructions that achieve a wide range of optimal points on this tradeoff.
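The versioned-dictionary semantics (queries at any version, updates via cloning into a child version) can be pinned down by a naive in-memory model. The toy class below, entirely our own sketch, copies the whole map on every update, which is exactly the O(N)-per-update cost the external-memory constructions avoid; it only makes the version tree explicit.

```python
class VersionedDict:
    """Toy fully-persistent dictionary: every update clones a version
    into a new child, making the version tree explicit."""

    def __init__(self):
        self.versions = [{}]      # version 0 is the empty root
        self.parent = [None]

    def clone(self, v):
        self.versions.append(dict(self.versions[v]))
        self.parent.append(v)
        return len(self.versions) - 1

    def update(self, v, key, value):
        child = self.clone(v)     # updates always create a child version
        self.versions[child][key] = value
        return child

    def query(self, v, key):      # queries are allowed at any version
        return self.versions[v].get(key)

d = VersionedDict()
v1 = d.update(0, "x", 1)
v2 = d.update(v1, "x", 2)  # one child of v1
v3 = d.update(v1, "y", 9)  # a sibling: the version tree branches at v1
print(d.query(v2, "x"), d.query(v3, "x"), d.query(0, "x"))  # 2 1 None
```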
|
1103.2573
|
Optimization of Fast-Decodable Full-Rate STBC with Non-Vanishing
Determinants
|
cs.IT math.IT
|
Full-rate STBC (space-time block codes) with non-vanishing determinants
achieve the optimal diversity-multiplexing tradeoff but incur high decoding
complexity. To permit fast decoding, Sezginer, Sari and Biglieri proposed an
STBC structure with special QR decomposition characteristics. In this paper, we
adopt a simplified form of this fast-decodable code structure and present a new
way to optimize the code analytically. We show that the signal constellation
topology (such as QAM, APSK, or PSK) has a critical impact on the existence of
non-vanishing determinants of the full-rate STBC. In particular, we show for
the first time that, in order for APSK-STBC to achieve non-vanishing
determinant, an APSK constellation topology with constellation points lying on
a square grid and ring radius $\sqrt{m^2+n^2}$ ($m,n$ integers) needs
to be used. For signal constellations with vanishing determinants, we present a
methodology to analytically optimize the full-rate STBC at specific
constellation dimension.
|
1103.2574
|
A multiplicative characterization of the power means
|
math.FA cs.IT math.CA math.IT
|
A startlingly simple characterization of the p-norms has recently been found
by Aubrun and Nechita (arXiv:1102.2618) and by Fernandez-Gonzalez, Palazuelos
and Perez-Garcia. We deduce a simple characterization of the power means of
order greater than or equal to 1.
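The power means in question, and a basic multiplicative property of the kind such characterizations turn on, are easy to state in code. The sketch below (uniform weights, toy numbers of our own) checks that M_p is multiplicative on "tensor products" of tuples, i.e. M_p((x_i y_j)_{ij}) = M_p(x) M_p(y).

```python
def power_mean(xs, p):
    """M_p(x) = ((1/n) * sum_i x_i^p)^(1/p), uniform weights, p >= 1."""
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

xs, ys, p = [1.0, 2.0, 4.0], [3.0, 5.0], 2.0
tensor = [x * y for x in xs for y in ys]   # "tensor product" of the tuples
lhs = power_mean(tensor, p)
rhs = power_mean(xs, p) * power_mean(ys, p)
print(abs(lhs - rhs) < 1e-12)  # True: M_p is multiplicative on tensor products
```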
|
1103.2579
|
Prices of Anarchy, Information, and Cooperation in Differential Games
|
cs.SY cs.GT math.OC
|
The price of anarchy (PoA) has been widely used in static games to quantify
the loss of efficiency due to noncooperation. Here, we extend this concept to a
general differential games framework. In addition, we introduce the price of
information (PoI) to characterize comparative game performances under different
information structures, as well as the price of cooperation to capture the
extent of benefit or loss a player accrues as a result of altruistic behavior.
We further characterize PoA and PoI for a class of scalar linear quadratic
differential games under open-loop and closed-loop feedback information
structures. We also obtain some explicit bounds on these indices in a large
population regime.
|
1103.2580
|
Inequalities Among Logarithmic-Mean Measures
|
cs.IT math.IT
|
In this paper we consider some well-known means, such as the arithmetic,
harmonic, geometric, and logarithmic means. Inequalities involving the
logarithmic mean with differences among other means are presented.
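The means named above satisfy the classical chain H <= G <= L <= A for distinct positive arguments, which a few lines verify numerically (our own sanity check, not the paper's new inequalities):

```python
import math

def arithmetic(a, b):  return (a + b) / 2
def geometric(a, b):   return math.sqrt(a * b)
def harmonic(a, b):    return 2 * a * b / (a + b)
def logarithmic(a, b):
    # L(a, b) = (b - a) / (ln b - ln a) for distinct positive a, b
    return (b - a) / (math.log(b) - math.log(a)) if a != b else float(a)

a, b = 2.0, 8.0
chain = [harmonic(a, b), geometric(a, b), logarithmic(a, b), arithmetic(a, b)]
print(chain == sorted(chain))  # True: H <= G <= L <= A
```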
|