| id | title | categories | abstract |
|---|---|---|---|
1202.6033
|
The Power of Local Information in Social Networks
|
cs.SI cs.DM cs.DS physics.soc-ph
|
We study the power of \textit{local information algorithms} for optimization
problems on social networks. We focus on sequential algorithms for which the
network topology is initially unknown and is revealed only within a local
neighborhood of vertices that have been irrevocably added to the output set.
The distinguishing feature of this setting is that locality is necessitated by
constraints on the network information visible to the algorithm, rather than
being desirable for reasons of efficiency or parallelizability. In this sense,
changes to the level of network visibility can have a significant impact on
algorithm design.
We study a range of problems under this model of algorithms with local
information. We first consider the case in which the underlying graph is a
preferential attachment network. We show that one can find the node of maximum
degree in the network in a polylogarithmic number of steps, using an
opportunistic algorithm that repeatedly queries the visible node of maximum
degree. This addresses an open question of Bollob{\'a}s and Riordan. In
contrast, local information algorithms require a linear number of queries to
solve the problem on arbitrary networks.
Motivated by problems faced by recruiters in online networks, we also
consider network coverage problems such as finding a minimum dominating set.
For this optimization problem we show that, if each node added to the output
set reveals sufficient information about the set's neighborhood, then it is
possible to design randomized algorithms for general networks that nearly match
the best approximations possible even with full access to the graph structure.
We show that this level of visibility is necessary.
We conclude that a network provider's decision of how much structure to make
visible to its users can have a significant effect on a user's ability to
interact strategically with the network.
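The opportunistic strategy described above (repeatedly query the visible node of maximum degree) can be sketched as follows. This is a minimal illustration under assumed conventions: a toy adjacency-dict graph and a simple query budget as the stopping rule, not the paper's exact procedure or analysis.

```python
def local_max_degree_search(adj, start, budget):
    """Greedily query the visible node of maximum degree.

    adj: dict mapping node -> set of neighbors (the full graph, hidden
    from the algorithm except around queried nodes).
    Returns the highest-degree node found within `budget` queries.
    """
    queried = {start}
    visible = set(adj[start]) | {start}
    best = start
    for _ in range(budget - 1):
        # Candidates: visible but not-yet-queried nodes.
        candidates = visible - queried
        if not candidates:
            break
        # Querying a node reveals its neighborhood (its true degree).
        nxt = max(candidates, key=lambda v: len(adj[v]))
        queried.add(nxt)
        visible |= adj[nxt]
        if len(adj[nxt]) > len(adj[best]):
            best = nxt
    return best

# Toy star-plus-path graph: node 0 is the hub of maximum degree.
adj = {0: {1, 2, 3, 4}, 1: {0, 5}, 2: {0}, 3: {0}, 4: {0}, 5: {1}}
found = local_max_degree_search(adj, start=5, budget=4)
```

On preferential attachment graphs, the paper shows this kind of degree-chasing reaches the maximum-degree node in polylogarithmically many queries; on arbitrary graphs it can need linearly many.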
|
1202.6037
|
Compressed Beamforming in Ultrasound Imaging
|
cs.IT cs.CV math.IT
|
Emerging sonography techniques often require increasing the number of
transducer elements involved in the imaging process. Consequently, larger
amounts of data must be acquired and processed. The significant growth in the
amounts of data affects both machinery size and power consumption. Within the
classical sampling framework, state-of-the-art systems reduce processing rates
by exploiting the bandpass bandwidth of the detected signals. It has recently
been shown that a much more significant sample-rate reduction may be obtained
by treating ultrasound signals within the Finite Rate of Innovation
framework. These ideas follow the spirit of Xampling, which combines classic
methods from sampling theory with recent developments in Compressed Sensing.
Applying such low-rate sampling schemes to individual transducer elements,
which detect energy reflected from biological tissues, is limited by the noisy
nature of the signals. This often results in erroneous parameter extraction,
bringing forward the need to enhance the SNR of the low-rate samples. In our
work, we achieve SNR enhancement by beamforming the sub-Nyquist samples
obtained from multiple elements. We refer to this process as "compressed
beamforming". Applying it to cardiac ultrasound data, we successfully image
macroscopic perturbations, while achieving a nearly eight-fold reduction in
sample-rate, compared to standard techniques.
|
1202.6042
|
A Regularized Graph Layout Framework for Dynamic Network Visualization
|
cs.SI cs.DM stat.CO
|
Many real-world networks, including social and information networks, are
dynamic structures that evolve over time. Such dynamic networks are typically
visualized using a sequence of static graph layouts. In addition to providing a
visual representation of the network structure at each time step, the sequence
should preserve the mental map between layouts of consecutive time steps to
allow a human to interpret the temporal evolution of the network. In this
paper, we propose a framework for dynamic network visualization in the on-line
setting where only present and past graph snapshots are available to create the
present layout. The proposed framework creates regularized graph layouts by
augmenting the cost function of a static graph layout algorithm with a grouping
penalty, which discourages nodes from deviating too far from other nodes
belonging to the same group, and a temporal penalty, which discourages large
node movements between consecutive time steps. The penalties increase the
stability of the layout sequence, thus preserving the mental map. We introduce
two dynamic layout algorithms within the proposed framework, namely dynamic
multidimensional scaling (DMDS) and dynamic graph Laplacian layout (DGLL). We
apply these algorithms on several data sets to illustrate the importance of
both grouping and temporal regularization for producing interpretable
visualizations of dynamic networks.
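The cost structure described above can be sketched as a static layout cost plus the two penalties. This is a minimal illustration with assumed forms (MDS-style stress, squared distance to group centroids, squared movement) and assumed weight names `alpha`/`beta`; it is not the paper's DMDS or DGLL formulation.

```python
import numpy as np

def regularized_cost(X, X_prev, D, groups, alpha=1.0, beta=1.0):
    """Cost of layout X given target distances D, group labels, and the
    previous layout X_prev."""
    n = X.shape[0]
    # Static stress term: squared mismatch between layout distances and D.
    diff = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1) - D
    stress = np.sum(np.triu(diff, 1) ** 2)
    # Grouping penalty: squared distance of each node from its group centroid.
    grouping = 0.0
    for g in set(groups):
        members = X[[i for i in range(n) if groups[i] == g]]
        grouping += np.sum((members - members.mean(axis=0)) ** 2)
    # Temporal penalty: squared movement relative to the previous layout.
    temporal = np.sum((X - X_prev) ** 2)
    return stress + alpha * grouping + beta * temporal

# A layout that reproduces D exactly and has not moved incurs zero cost.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
cost = regularized_cost(X, X.copy(), D, groups=[0, 1, 2])
```

Minimizing such a cost over X at each time step trades layout fidelity against stability of the sequence, which is what preserves the mental map.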
|
1202.6049
|
Attack Detection and Identification in Cyber-Physical Systems -- Part
II: Centralized and Distributed Monitor Design
|
math.OC cs.SY
|
Cyber-physical systems integrate computation, communication, and physical
capabilities to interact with the physical world and humans. Besides failures
of components, cyber-physical systems are prone to malicious attacks so that
specific analysis tools and monitoring mechanisms need to be developed to
enforce system security and reliability. This paper builds upon the results
presented in our companion paper [1] and proposes centralized and distributed
monitors for attack detection and identification. First, we design optimal
centralized attack detection and identification monitors. Optimality refers to
the ability of detecting (respectively identifying) every detectable
(respectively identifiable) attack. Second, we design an optimal distributed
attack detection filter based upon a waveform relaxation technique. Third, we
show that the attack identification problem is computationally hard, and we
design a sub-optimal distributed attack identification procedure with
performance guarantees. Finally, we illustrate the robustness of our monitors
to system noise and unmodeled dynamics through a simulation study.
|
1202.6078
|
Protocols for Learning Classifiers on Distributed Data
|
stat.ML cs.LG
|
We consider the problem of learning classifiers for labeled data that has
been distributed across several nodes. Our goal is to find a single classifier,
with small approximation error, across all datasets while minimizing the
communication between nodes. This setting models real-world communication
bottlenecks in the processing of massive distributed datasets. We present
several very general sampling-based solutions as well as some two-way protocols
which have a provable exponential speed-up over any one-way protocol. We focus
on core problems for noiseless data distributed across two or more nodes. The
techniques we introduce are reminiscent of active learning, but rather than
actively probing labels, nodes actively communicate with each other, each node
simultaneously learning the important data from another node.
|
1202.6079
|
Synthesising Graphical Theories
|
cs.AI math.CT quant-ph
|
In recent years, diagrammatic languages have been shown to be a powerful and
expressive tool for reasoning about physical, logical, and semantic processes
represented as morphisms in a monoidal category. In particular, categorical
quantum mechanics, or "Quantum Picturalism", aims to turn concrete features of
quantum theory into abstract structural properties, expressed in the form of
diagrammatic identities. One way we search for these properties is to start
with a concrete model (e.g. a set of linear maps or finite relations), compose
generators into diagrams, and look for graphical identities.
Naively, we could automate this procedure by enumerating all diagrams up to a
given size and checking for equalities, but this is intractable in practice
because it produces far too many equations. Luckily, many of these identities
are not primitive, but rather derivable from simpler ones. In 2010, Johansson,
Dixon, and Bundy developed a technique called conjecture synthesis for
automatically generating conjectured term equations to feed into an inductive
theorem prover. In this extended abstract, we adapt this technique to
diagrammatic theories, expressed as graph rewrite systems, and demonstrate its
application by synthesising a graphical theory for studying entangled quantum
states.
|
1202.6086
|
Combinatorial limitations of average-radius list-decoding
|
cs.IT cs.CC math.CO math.IT
|
We study certain combinatorial aspects of list-decoding, motivated by the
exponential gap between the known upper bound (of $O(1/\gamma)$) and lower
bound (of $\Omega_p(\log (1/\gamma))$) for the list-size needed to decode up to
radius $p$ with rate $\gamma$ away from capacity, i.e., $1-h(p)-\gamma$ (here
$p\in (0,1/2)$ and $\gamma > 0$). Our main result is the following:
We prove that in any binary code $C \subseteq \{0,1\}^n$ of rate
$1-h(p)-\gamma$, there must exist a set $\mathcal{L} \subset C$ of
$\Omega_p(1/\sqrt{\gamma})$ codewords such that the average distance of the
points in $\mathcal{L}$ from their centroid is at most $pn$. In other words,
there must exist $\Omega_p(1/\sqrt{\gamma})$ codewords with low "average
radius." The standard notion of list-decoding corresponds to working with the
maximum distance of a collection of codewords from a center instead of average
distance. The average-radius form is in itself quite natural and is implied by
the classical Johnson bound.
The remaining results concern the standard notion of list-decoding, and help
clarify the combinatorial landscape of list-decoding:
1. We give a short simple proof, over all fixed alphabets, of the
above-mentioned $\Omega_p(\log (1/\gamma))$ lower bound. Earlier, this bound
followed from a complicated, more general result of Blinovsky.
2. We show that one {\em cannot} improve the $\Omega_p(\log (1/\gamma))$
lower bound via techniques based on identifying the zero-rate regime for list
decoding of constant-weight codes.
3. We show a "reverse connection" showing that constant-weight codes for list
decoding imply general codes for list decoding with higher rate.
4. We give simple second moment based proofs of tight (up to constant
factors) lower bounds on the list-size needed for list decoding random codes
and random linear codes from errors as well as erasures.
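The "average radius" notion defined above can be illustrated numerically: for a set L of codewords, compare the average distance of its points from their centroid with the maximum (list-decoding style) distance from that center. The tiny code below and the use of L1 distance to a fractional centroid are illustrative assumptions.

```python
import numpy as np

# Four arbitrary binary codewords of length 4, standing in for a set L.
codewords = np.array([[0, 0, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 1, 1],
                      [1, 1, 1, 1]])
centroid = codewords.mean(axis=0)                      # real-valued center
dists = np.abs(codewords - centroid).sum(axis=1)       # L1 distance per codeword
avg_radius = dists.mean()                              # average-radius quantity
max_radius = dists.max()                               # standard list-decoding radius
```

The average radius never exceeds the maximum radius, which is why a bound on the average-radius form (as in the main result above) is a strengthening of the standard notion.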
|
1202.6091
|
Interference Alignment for Partially Connected MIMO Cellular Networks
|
cs.IT math.IT
|
In this paper, we propose an iterative interference alignment (IA) algorithm
for MIMO cellular networks with partial connectivity, which is induced by
heterogeneous path losses and spatial correlation. Such systems pose several
key technical challenges for IA algorithm design, namely the overlap between
the direct and interfering links due to the MIMO cellular topology as
well as how to exploit the partial connectivity. We address these
challenges and propose a three-stage IA algorithm. As an illustration, we analyze
the achievable degrees of freedom (DoF) of the proposed algorithm for a
symmetric partially connected MIMO cellular network. We show that there is
significant DoF gain compared with conventional IA algorithms due to partial
connectivity. The derived DoF bound is also backward compatible with that
achieved on fully connected K-pair MIMO interference channels.
|
1202.6095
|
Approaching Capacity at High-Rates with Iterative Hard-Decision Decoding
|
cs.IT math.IT
|
A variety of low-density parity-check (LDPC) ensembles have now been observed
to approach capacity with message-passing decoding. However, all of them use
soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of
their component codes. In this paper, we show that one can approach capacity at
high rates using iterative hard-decision decoding (HDD) of generalized product
codes. Specifically, a class of spatially-coupled GLDPC codes with BCH
component codes is considered, and it is observed that, in the high-rate
regime, they can approach capacity under the proposed iterative HDD. These
codes can be seen as generalized product codes and are closely related to
braided block codes. An iterative HDD algorithm is proposed that enables one to
analyze the performance of these codes via density evolution (DE).
|
1202.6101
|
Maximum Inner-Product Search using Tree Data-structures
|
cs.CG cs.DS cs.IR
|
The problem of {\em efficiently} finding the best match for a query in a
given set with respect to the Euclidean distance or the cosine similarity has
been extensively studied in the literature. However, a closely related problem of
efficiently finding the best match with respect to the inner product has never
been explored in the general setting to the best of our knowledge. In this
paper we consider this general problem and contrast it with the existing
best-match algorithms. First, we propose a general branch-and-bound algorithm
using a tree data structure. Subsequently, we present a dual-tree algorithm for
the case where there are multiple queries. Finally, we present a new data
structure for increasing the efficiency of the dual-tree algorithm. These
branch-and-bound algorithms involve novel bounds suited for the purpose of
best-matching with inner products. We evaluate our proposed algorithms on a
variety of data sets from various applications, and exhibit up to five orders
of magnitude improvement in query time over the naive search technique.
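A bound of the kind that drives such branch-and-bound search can be sketched as follows: for a query q and a ball of points with center mu and radius r, Cauchy-Schwarz gives max inner product at most q·mu + r·||q||, so any subtree whose bound falls below the best match found so far can be pruned. The function names and the brute-force comparison below are illustrative assumptions, not the paper's exact bounds or tree construction.

```python
import numpy as np

def ball_bound(q, mu, r):
    """Upper bound on max_{p in ball(mu, r)} <q, p> via Cauchy-Schwarz."""
    return q @ mu + r * np.linalg.norm(q)

def brute_force_mips(q, points):
    """Naive best match with respect to the inner product."""
    return points[np.argmax(points @ q)]

points = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0], [-1.0, 0.0]])
mu = points.mean(axis=0)                                  # ball center
r = np.max(np.linalg.norm(points - mu, axis=1))           # ball radius
q = np.array([2.0, 1.0])

best = brute_force_mips(q, points)
bound = ball_bound(q, mu, r)                              # >= every <q, p> in the ball
```

In a tree of nested balls, comparing such bounds against the running best inner product is what lets the search skip most of the data set.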
|
1202.6103
|
Nonlinear Laplacian spectral analysis: Capturing intermittent and
low-frequency spatiotemporal patterns in high-dimensional data
|
physics.data-an cs.LG
|
We present a technique for spatiotemporal data analysis called nonlinear
Laplacian spectral analysis (NLSA), which generalizes singular spectrum
analysis (SSA) to take into account the nonlinear manifold structure of complex
data sets. The key principle underlying NLSA is that the functions used to
represent temporal patterns should exhibit a degree of smoothness on the
nonlinear data manifold M, a constraint absent from classical SSA. NLSA
enforces such a notion of smoothness by requiring that temporal patterns belong
in low-dimensional Hilbert spaces V_l spanned by the leading l Laplace-Beltrami
eigenfunctions on M. These eigenfunctions can be evaluated efficiently in high
ambient-space dimensions using sparse graph-theoretic algorithms. Moreover,
they provide orthonormal bases to expand a family of linear maps, whose
singular value decomposition leads to sets of spatiotemporal patterns at
progressively finer resolution on the data manifold. The Riemannian measure of
M and an adaptive graph kernel width enhance the capability of NLSA to detect
important nonlinear processes, including intermittency and rare events. The
minimum dimension of V_l required to capture these features while avoiding
overfitting is estimated here using spectral entropy criteria.
|
1202.6110
|
An Optimal Control Approach to the Persistent Monitoring Problem
|
cs.SY math.OC
|
We propose an optimal control framework for persistent monitoring problems
where the objective is to control the movement of mobile nodes to minimize an
uncertainty metric in a given mission space. For multiple agents in a
one-dimensional mission space, we show that the optimal solution is obtained in
terms of a sequence of switching locations and waiting times at these switching
points, thus reducing it to a parametric optimization problem. Using
Infinitesimal Perturbation Analysis (IPA) we obtain a complete solution through
a gradient-based algorithm. We also discuss a receding horizon controller which
is capable of obtaining a near-optimal solution on-the-fly.
|
1202.6141
|
Monobit Digital Receivers for QPSK: Design, Analysis and Performance
|
cs.IT math.IT
|
Future communication systems require large bandwidth to achieve data rates up
to multiple gigabits per second, which makes the analog-to-digital converter
(ADC) a key bottleneck in the implementation of digital receivers due to its
high complexity and large power consumption. Monobit receivers for BPSK have
therefore been proposed to address this problem. In this work, QPSK modulation is
considered for higher data rate. First, the optimal receiver based on monobit
ADC with Nyquist sampling is derived, and its corresponding performance in the
form of deflection ratio is calculated. Then a suboptimal but more practical
monobit receiver is obtained, along with iterative demodulation and small
sample removal. The effect of the imbalances between the In-phase (I) and
Quadrature-phase (Q) branches, including the amplitude and phase imbalances, is
carefully investigated too. To combat the performance loss caused by IQ
imbalances, monobit receivers based on double training sequences are proposed.
Numerical simulations show that the low-complexity suboptimal receiver suffers
only 3dB signal to noise ratio (SNR) loss in AWGN channels and 1dB SNR loss in
multipath static channels compared with the matched filter based monobit
receiver with full channel state information (CSI). The impact of the phase
difference between the transmitter and receiver is presented. It is observed
that the performance degradation caused by the amplitude imbalance is
negligible. Receivers based on double training sequences can efficiently
compensate the performance loss in AWGN channel. Thanks to the diversity
offered by the multipath, the effect of imbalances on monobit receivers in
fading channels is slight.
|
1202.6144
|
Attack Detection and Identification in Cyber-Physical Systems -- Part I:
Models and Fundamental Limitations
|
math.OC cs.SY
|
Cyber-physical systems integrate computation, communication, and physical
capabilities to interact with the physical world and humans. Besides failures
of components, cyber-physical systems are prone to malicious attacks, and
specific analysis tools as well as monitoring mechanisms need to be developed
to enforce system security and reliability. This paper proposes a unified
framework to analyze the resilience of cyber-physical systems against attacks
cast by an omniscient adversary. We model cyber-physical systems as linear
descriptor systems, and attacks as exogenous unknown inputs. Despite its
simplicity, our model captures various real-world cyber-physical systems, and
it includes and generalizes many prototypical attacks, including stealth,
(dynamic) false-data injection and replay attacks. First, we characterize
fundamental limitations of static, dynamic, and active monitors for attack
detection and identification. Second, we provide constructive algebraic
conditions to cast undetectable and unidentifiable attacks. Third, by using the
system interconnection structure, we describe graph-theoretic conditions for
the existence of undetectable and unidentifiable attacks. Finally, we validate
our findings through some illustrative examples with different cyber-physical
systems, such as a municipal water supply network and two electrical power
grids.
|
1202.6153
|
One Decade of Universal Artificial Intelligence
|
cs.AI
|
The first decade of this century has seen the nascency of the first
mathematical theory of general artificial intelligence. This theory of
Universal Artificial Intelligence (UAI) has made significant contributions to
many theoretical, philosophical, and practical AI questions. In a series of
papers culminating in the book (Hutter, 2005), an exciting, sound, and complete
mathematical model for a super-intelligent agent (AIXI) has been developed and
rigorously analyzed. While nowadays most AI researchers avoid discussing
intelligence, the award-winning PhD thesis (Legg, 2008) provided the
philosophical embedding and investigated the UAI-based universal measure of
rational intelligence, which is formal, objective and non-anthropocentric.
Recently, effective approximations of AIXI have been derived and experimentally
investigated in a JAIR paper (Veness et al. 2011). This practical breakthrough
has resulted in some impressive applications, finally muting earlier critique
that UAI is only a theory. For the first time, without providing any domain
knowledge, the same agent is able to self-adapt to a diverse range of
interactive environments. For instance, AIXI is able to learn from scratch to
play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without
even being given the rules of the games.
These achievements give new hope that the grand goal of Artificial General
Intelligence is not elusive.
This article provides an informal overview of UAI in context. It attempts to
gently introduce a very theoretical, formal, and mathematical subject, and
discusses philosophical and technical ingredients, traits of intelligence, some
social questions, and the past and future of UAI.
|
1202.6157
|
Distributed Power Allocation with SINR Constraints Using Trial and Error
Learning
|
cs.GT cs.AI cs.LG
|
In this paper, we address the problem of global transmit power minimization
in a self-configuring network where radio devices are required to operate at a
minimum signal to interference plus noise ratio (SINR) level. We model the
network as a parallel Gaussian interference channel and we introduce a fully
decentralized algorithm (based on trial and error) able to statistically
achieve a configuration where the performance demands are met. Contrary to
existing solutions, our algorithm requires only local information and can learn
stable and efficient working points by using only one bit feedback. We model
the network under two different game theoretical frameworks: normal form and
satisfaction form. We show that the converging points correspond to equilibrium
points, namely Nash and satisfaction equilibrium. Similarly, we provide
sufficient conditions for the algorithm to converge in both formulations.
Moreover, we provide analytical results to estimate the algorithm's
performance, as a function of the network parameters. Finally, numerical
results are provided to validate our theoretical conclusions. Keywords:
Learning, power control, trial and error, Nash equilibrium, spectrum sharing.
|
1202.6158
|
Optimized on-line computation of PageRank algorithm
|
cs.DM cs.IR math.NA
|
In this paper we present new ideas to accelerate the computation of the
eigenvector of the transition matrix associated with the PageRank algorithm. New
ideas are based on the decomposition of the matrix-vector product that can be
seen as a fluid diffusion model, associated to new algebraic equations. We show
through experiments on synthetic and real data sets how much this approach can
improve computational efficiency.
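The eigenvector computation the abstract refers to is usually carried out by power iteration, x ← α·Pᵀx + (1-α)·v. The sketch below shows this standard baseline that such work accelerates, not the fluid-diffusion decomposition itself; the small graph and parameter values are illustrative assumptions.

```python
import numpy as np

def pagerank(P, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for the PageRank vector.

    P[i, j] = probability of moving from page i to page j (row-stochastic).
    """
    n = P.shape[0]
    v = np.full(n, 1.0 / n)          # uniform teleportation vector
    x = v.copy()
    for _ in range(max_iter):
        x_new = alpha * P.T @ x + (1 - alpha) * v
        if np.abs(x_new - x).sum() < tol:   # L1 convergence check
            return x_new
        x = x_new
    return x

# 3-page example: page 0 links to 1 and 2; pages 1 and 2 link back to 0.
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
scores = pagerank(P)
```

Each iteration is one matrix-vector product; decomposing that product, as the paper proposes, is where the speedup comes from.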
|
1202.6165
|
Precoder Design for Multi-antenna Partial Decode-and-Forward (PDF)
Cooperative Systems with Statistical CSIT and MMSE-SIC Receivers
|
cs.IT math.IT
|
Cooperative communication is an important technology in next generation
wireless networks. Aside from conventional amplify-and-forward (AF) and
decode-and-forward (DF) protocols, the partial decode-and-forward (PDF)
protocol is an alternative relaying scheme that is especially promising for
scenarios in which the relay node cannot reliably decode the complete source
message. However, there are several important issues to be addressed regarding
the application of PDF protocols. In this paper, we propose a PDF protocol and
MIMO precoder designs at the source and relay nodes. The precoder designs are
adapted based on statistical channel state information for correlated MIMO
channels, and matched to practical minimum mean-square-error successive
interference cancelation (MMSE-SIC) receivers at the relay and destination
nodes. We show that under similar system settings, the proposed MIMO precoder
design with PDF protocol and MMSE-SIC receivers achieves substantial
performance enhancement compared with conventional baselines.
|
1202.6174
|
k-Color Multi-Robot Motion Planning
|
cs.RO
|
We present a simple and natural extension of the multi-robot motion planning
problem where the robots are partitioned into groups (colors), such that in
each group the robots are interchangeable. Every robot is no longer required to
move to a specific target, but rather to some target placement that is assigned
to its group. We call this problem k-color multi-robot motion planning and
provide a sampling-based algorithm specifically designed for solving it. At the
heart of the algorithm is a novel technique where the k-color problem is
reduced to several discrete multi-robot motion planning problems. These
reductions amplify basic samples into massive collections of free placements
and paths for the robots. We demonstrate the performance of the algorithm by an
implementation for the case of disc robots and polygonal robots translating in
the plane. We show that the algorithm successfully and efficiently copes with a
variety of challenging scenarios, involving many robots, while a simplified
version of this algorithm, that can be viewed as an extension of a prevalent
sampling-based algorithm for the k-color case, fails even on simple scenarios.
Interestingly, our algorithm outperforms a well-established implementation of
PRM for the standard multi-robot problem, in which each robot has a distinct
color.
|
1202.6175
|
Delay-limited Source and Channel Coding of Quasi-Stationary Sources over
Block Fading Channels: Design and Scaling Laws
|
cs.IT math.IT
|
In this paper, delay-limited transmission of quasi-stationary sources over
block fading channels is considered. With distortion outage probability as the
performance measure, two source and channel coding schemes with power
adaptive transmission are presented. The first one is optimized for fixed rate
transmission, and hence enjoys simplicity of implementation. The second one is
a high performance scheme, which also benefits from optimized rate adaptation
with respect to source and channel states. In the high-SNR regime, the performance
scaling laws in terms of outage distortion exponent and asymptotic outage
distortion gain are derived, where two schemes with fixed transmission power
and adaptive or optimized fixed rates are considered as benchmarks for
comparisons. Various analytical and numerical results are provided which
demonstrate a superior performance for source and channel optimized rate and
power adaptive scheme. It is also observed that from a distortion outage
perspective, the fixed rate adaptive power scheme substantially outperforms an
adaptive rate fixed power scheme for delay-limited transmission of
quasi-stationary sources over wireless block fading channels. The effect of the
characteristics of the quasi-stationary source on performance, and the
implication of the results for transmission of stationary sources are also
investigated.
|
1202.6177
|
Can Intelligence Explode?
|
cs.AI physics.soc-ph
|
The technological singularity refers to a hypothetical scenario in which
technological advances virtually explode. The most popular scenario is the
creation of super-intelligent algorithms that recursively create ever higher
intelligences. It took many decades for these ideas to spread from science
fiction to popular science magazines and finally to attract the attention of
serious philosophers. David Chalmers' (JCS 2010) article is the first
comprehensive philosophical analysis of the singularity in a respected
philosophy journal. The motivation of my article is to augment Chalmers' and to
discuss some issues not addressed by him, in particular what it could mean for
intelligence to explode. In this course, I will (have to) provide a more
careful treatment of what intelligence actually is, separate speed from
intelligence explosion, compare what super-intelligent participants and
classical human observers might experience and do, discuss immediate
implications for the diversity and value of life, consider possible bounds on
intelligence, and contemplate intelligences right at the singularity.
|
1202.6221
|
Confusion Matrix Stability Bounds for Multiclass Classification
|
cs.LG
|
In this paper, we provide new theoretical results on the generalization
properties of learning algorithms for multiclass classification problems. The
originality of our work is that we propose to use the confusion matrix of a
classifier as a measure of its quality; our contribution is in the line of work
which attempts to set up and study the statistical properties of new evaluation
measures such as, e.g., ROC curves. In the confusion-based learning framework we
propose, we claim that a targeted objective is to minimize the size of the
confusion matrix C, measured through its operator norm ||C||. We derive
generalization bounds on the (size of the) confusion matrix in an extended
framework of uniform stability, adapted to the case of matrix valued loss.
Pivotal to our study is a very recent matrix concentration inequality that
generalizes McDiarmid's inequality. As an illustration of the relevance of our
theoretical results, we show how two SVM learning procedures can be proved to
be confusion-friendly. To the best of our knowledge, the present paper is the
first that focuses on the confusion matrix from a theoretical point of view.
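The quality measure discussed above, the operator norm ||C|| of a confusion matrix, is easy to illustrate numerically. In the sketch below the diagonal is zeroed so that only misclassifications contribute; whether the paper adopts that convention, and the example entries themselves, are assumptions on our part.

```python
import numpy as np

# Rows = true class, columns = predicted class, entries = error rates,
# with correct classifications (the diagonal) zeroed out.
C = np.array([[0.00, 0.10, 0.05],
              [0.02, 0.00, 0.08],
              [0.01, 0.03, 0.00]])

# Operator (spectral) norm: the largest singular value of C.
op_norm = np.linalg.norm(C, 2)
# For comparison, the Frobenius norm, which always dominates it.
frob = np.linalg.norm(C, 'fro')
```

A small operator norm means no combination of classes suffers large aggregate confusion, which is a stronger statement than a small scalar error rate.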
|
1202.6228
|
PAC-Bayesian Generalization Bound on Confusion Matrix for Multi-Class
Classification
|
stat.ML cs.LG
|
In this work, we propose a PAC-Bayes bound for the generalization risk of the
Gibbs classifier in the multi-class classification framework. The novelty of
our work is the critical use of the confusion matrix of a classifier as an
error measure; this puts our contribution in the line of work aiming to deal
with performance measures that are richer than a mere scalar criterion such as the
misclassification rate. Thanks to very recent and beautiful results on matrix
concentration inequalities, we derive two bounds showing that the true
confusion risk of the Gibbs classifier is upper-bounded by its empirical risk
plus a term depending on the number of training examples in each class. To the
best of our knowledge, these are the first PAC-Bayes bounds based on confusion
matrices.
|
1202.6258
|
A Stochastic Gradient Method with an Exponential Convergence Rate for
Finite Training Sets
|
math.OC cs.LG
|
We propose a new stochastic gradient method for optimizing the sum of a
finite set of smooth functions, where the sum is strongly convex. While
standard stochastic gradient methods converge at sublinear rates for this
problem, the proposed method incorporates a memory of previous gradient values
in order to achieve a linear convergence rate. In a machine learning context,
numerical experiments indicate that the new algorithm can dramatically
outperform standard algorithms, both in terms of optimizing the training error
and reducing the test error quickly.
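The "memory of previous gradient values" idea described above can be sketched in the spirit of stochastic average gradient methods: keep the last gradient seen for each component function and step along the average of the stored gradients. The quadratic objective, step size, and iteration count below are illustrative assumptions, not the paper's setting or its recommended parameters.

```python
import numpy as np

def sag(grads, x0, n, step=0.1, iters=5000, seed=0):
    """Minimize (1/n) * sum_i f_i using a memory of past gradients."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    memory = np.zeros((n,) + x.shape)    # last gradient seen for each f_i
    for _ in range(iters):
        i = rng.integers(n)
        memory[i] = grads[i](x)          # refresh one stored gradient
        x = x - step * memory.mean(axis=0)   # step along the average memory
    return x

# Sum of strongly convex quadratics f_i(x) = 0.5 * (x - a_i)^2;
# the minimizer of the average is the mean of the a_i.
a = np.array([1.0, 2.0, 3.0, 6.0])
grads = [lambda x, ai=ai: x - ai for ai in a]
x_star = sag(grads, x0=0.0, n=len(a))
```

Each iteration costs one gradient evaluation, like plain stochastic gradient, yet the averaged memory makes the effective search direction converge to the full gradient, which is the mechanism behind the linear rate.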
|
1202.6266
|
Realization of an Automatic Arabic Speech Recognition System Based on CMU
Sphinx
|
cs.CL
|
This paper presents the continuation of the work completed by Satori et al.
[SCH07] with the realization of an automatic speech recognition (ASR) system
for the Arabic language based on the Sphinx 4 system. The previous work was
limited to the recognition of the first ten digits, whereas the present work
extends it to continuous Arabic speech recognition with a recognition rate of
around 96%.
|
1202.6278
|
On Optimal Message Assignments for Interference Channels with CoMP
Transmission
|
cs.IT math.IT
|
The degrees of freedom (DoF) number of the fully connected K-user Gaussian
interference channel is known to be K/2. In [1], the DoF for the same channel
model was studied while allowing each message to be available at its own
transmitter as well as M-1 successive transmitters. In particular, it was shown
that the DoF gain through cooperation does not scale with the number of users K
for a fixed value of M, i.e., the per user DoF number is 1/2. In this work, we
relax the cooperation constraint such that each message can be assigned to M
transmitters without imposing further constraints on their location. Under the
new constraint, we study properties for different message assignments in terms
of the gain in the per user DoF number over that achieved without cooperation.
In particular, we show that a local cooperation constraint that confines the
transmit set of each message within a o(K) radius cannot achieve a per user DoF
number that is greater than 1/2. Moreover, we show that the same conclusion
about the per user DoF number holds for any assignment of messages such that
each message cannot be available at more than two transmitters. Finally, for
the case where M > 2, we do not know whether a per user DoF number that is
greater than 1/2 is achievable. However, we identify a candidate class of
message assignments that could potentially lead to a positive answer. [1] V. S.
Annapureddy, A. El Gamal, and V. V. Veeravalli, "Degrees of Freedom of
Interference Channels with CoMP Transmission and Reception," Submitted to IEEE
Trans. Inf. Theory, Sep. 2011
|
1202.6299
|
Reduced-Dimension Linear Transform Coding of Correlated Signals in
Networks
|
cs.IT math.IT
|
A model, called the linear transform network (LTN), is proposed to analyze
the compression and estimation of correlated signals transmitted over directed
acyclic graphs (DAGs). An LTN is a DAG network with multiple source and
receiver nodes. Source nodes transmit subspace projections of random correlated
signals by applying reduced-dimension linear transforms. The subspace
projections are linearly processed by multiple relays and routed to intended
receivers. Each receiver applies a linear estimator to approximate a subset of
the sources with minimum mean squared error (MSE) distortion. The model is
extended to include noisy networks with power constraints on transmitters. A
key task is to compute all local compression matrices and linear estimators in
the network to minimize end-to-end distortion. The non-convex problem is solved
iteratively within an optimization framework using constrained quadratic
programs (QPs). The proposed algorithm recovers as special cases the regular
and distributed Karhunen-Loeve transforms (KLTs). Cut-set lower bounds on the
distortion region of multi-source, multi-receiver networks are given for linear
coding based on convex relaxations. Cut-set lower bounds are also given for any
coding strategy based on information theory. The distortion region and
compression-estimation tradeoffs are illustrated for different communication
demands (e.g., multiple unicast) and graph structures.
|
1202.6345
|
Collective behavior in the spatial spreading of obesity
|
physics.soc-ph cs.SI
|
Non-communicable diseases like diabetes, obesity and certain forms of cancer
have been increasing in many countries at alarming levels. A difficulty in the
conception of policies to reverse these trends is the identification of the
drivers behind the global epidemics. Here, we implement a spatial spreading
analysis to investigate whether diabetes, obesity and cancer show spatial
correlations revealing the effect of collective and global factors acting above
individual choices. We adapt a theoretical framework for critical physical
systems displaying collective behavior to decipher the laws of spatial
spreading of diseases. We find a regularity in the spatial fluctuations of
their prevalence revealed by a pattern of scale-free long-range correlations.
The fluctuations are anomalous, deviating in a fundamental way from the weaker
correlations found in the underlying population distribution. This collective
behavior indicates that the spreading dynamics of obesity, diabetes and some
forms of cancer like lung cancer are analogous to a critical point of
fluctuations, just as a physical system in a second-order phase transition.
According to this notion, individual interactions and habits may have
negligible influence in shaping the global patterns of spreading. Thus, obesity
turns out to be a global problem where local details are of little importance.
Interestingly, we find the same critical fluctuations in obesity and diabetes,
and in the activities of economic sectors associated with food production such
as supermarkets and food and beverage stores, which cluster in a different
universality class than other generic sectors of the economy. These results
motivate future interventions to investigate the causality of this relation
providing guidance for the implementation of preventive health policies.
|
1202.6348
|
Power Optimization in Random Wireless Networks
|
cs.IT cond-mat.stat-mech cs.SI math.IT
|
Consider a wireless network of transmitter-receiver pairs where the
transmitters adjust their powers to maintain a target SINR level in the
presence of interference. In this paper, we analyze the optimal power vector
that achieves this target in large, random networks obtained by "erasing" a
finite fraction of nodes from a regular lattice of transmitter-receiver pairs.
We show that this problem is equivalent to the so-called Anderson model of
electron motion in dirty metals which has been used extensively in the analysis
of diffusion in random environments. A standard approximation to this model is
the so-called coherent potential approximation (CPA) method which we apply to
evaluate the first and second order intra-sample statistics of the optimal
power vector in one- and two-dimensional systems. This approach is equivalent
to traditional techniques from random matrix theory and free probability, but
while generally accurate (and in agreement with numerical simulations), it
fails to fully describe the system: in particular, results obtained in this way
fail to predict when power control becomes infeasible. In this regard, we find
that the infinite system is always unstable beyond a certain value of the
target SINR, but any finite system only has a small probability of becoming
unstable. This instability probability is proportional to the tails of the
eigenvalue distribution of the system which are calculated to exponential
accuracy using methodologies developed within the Anderson model and its ties
with random walks in random media. Finally, using these techniques, we also
calculate the tails of the system's power distribution under power control and
the rate of convergence of the Foschini-Miljanic power control algorithm in the
presence of random erasures. Overall, in the paper we try to strike a balance
between intuitive arguments and formal proofs.
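The Foschini-Miljanic iteration mentioned above admits a very short sketch (toy channel gains and target SINR of our own choosing, not the paper's random-erasure model): each transmitter simply scales its power by the ratio of the target SINR to its currently achieved SINR.

```python
import numpy as np

def foschini_miljanic(G, noise, gamma, n_iters=200):
    """Distributed power control: p_i <- gamma * p_i / SINR_i(p).

    G[i, j] is the gain from transmitter j to receiver i; gamma is the
    common target SINR (illustrative values, feasible by construction).
    """
    p = np.ones(len(noise))
    for _ in range(n_iters):
        signal = np.diag(G) * p
        interference = G @ p - signal + noise
        sinr = signal / interference
        p = gamma / sinr * p            # each link updates locally
    return p

# Toy 3-link network with weak cross gains, so the target is feasible.
G = np.array([[1.0, 0.1, 0.1],
              [0.1, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
noise = np.full(3, 0.1)
p_star = foschini_miljanic(G, noise, gamma=2.0)
```

For this symmetric example the fixed point is p_i = 1/3 for every link; infeasibility of the kind discussed in the abstract would show up as the iteration diverging instead of settling.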
|
1202.6350
|
Prime tight frames
|
math.FA cs.IT math.IT
|
We introduce a class of finite tight frames called prime tight frames and
prove some of their elementary properties. In particular, we show that any
finite tight frame can be written as a union of prime tight frames. We then
characterize all prime harmonic tight frames and use this characterization to
suggest effective analysis and synthesis computation strategies for such
frames. Finally, we describe all prime frames constructed from the spectral
tetris method, and, as a byproduct, we obtain a characterization of when the
spectral tetris construction works for redundancies below two.
|
1202.6384
|
Fast approximations to structured sparse coding and applications to
object classification
|
cs.CV
|
We describe a method for fast approximation of sparse coding. The input space
is subdivided by a binary decision tree, and we simultaneously learn a
dictionary and assignment of allowed dictionary elements for each leaf of the
tree. We store a lookup table with the assignments and the pseudoinverses for
each node, allowing for very fast inference. We give an algorithm for learning
the tree, the dictionary and the dictionary element assignment, and in the
process of describing this algorithm, we discuss the more general problem of
learning the groups in group structured sparse modelling. We show that our
method creates good sparse representations by using it in the object
recognition framework of \cite{lazebnik06,yang-cvpr-09}. Implementing our own
fast version of the SIFT descriptor, the whole system runs at 20 frames per
second on $321 \times 481$ sized images on a laptop with a quad-core CPU, while
sacrificing very little accuracy on the Caltech 101 and 15 scenes benchmarks.
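The lookup-table inference step can be sketched as follows (toy dictionary sizes and a one-split stand-in for the learned tree, both hypothetical; the real system learns the tree, the dictionary, and the assignments jointly): descend to a leaf, then code with the leaf's precomputed pseudoinverse over its allowed atoms.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 6))          # dictionary: 6 atoms in R^8 (toy)
groups = {0: [0, 1, 2], 1: [3, 4, 5]}    # allowed atoms per tree leaf

# Lookup table: per-leaf column subset and its precomputed pseudoinverse.
lookup = {leaf: (cols, np.linalg.pinv(D[:, cols]))
          for leaf, cols in groups.items()}

def leaf_of(x):
    """Stand-in for the learned binary decision tree (a single split)."""
    return 0 if x[0] >= 0 else 1

def fast_code(x):
    """Inference is one tree descent plus one small matrix-vector product."""
    cols, pinv = lookup[leaf_of(x)]
    code = np.zeros(D.shape[1])
    code[cols] = pinv @ x                # least squares on the allowed atoms
    return code

x = rng.standard_normal(8)
code = fast_code(x)
```

The resulting code is sparse by construction (only the leaf's atoms can be active), which is the structured-sparsity pattern the abstract describes.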
|
1202.6386
|
Relational Reinforcement Learning in Infinite Mario
|
cs.AI
|
Relational representations in reinforcement learning allow for the use of
structural information like the presence of objects and relationships between
them in the description of value functions. Through this paper, we show that
such representations allow for the inclusion of background knowledge that
qualitatively describes a state and can be used to design agents that
demonstrate learning behavior in domains with large state and action spaces
such as computer games.
|
1202.6389
|
Consensus and Products of Random Stochastic Matrices: Exact Rate for
Convergence in Probability
|
math.PR cs.IT cs.SI math.IT
|
Distributed consensus and other linear systems with system stochastic
matrices $W_k$ emerge in various settings, like opinion formation in social
networks, rendezvous of robots, and distributed inference in sensor networks.
The matrices $W_k$ are often random, due to, e.g., random packet dropouts in
wireless sensor networks. Key in analyzing the performance of such systems is
studying convergence of the matrix products $W_k W_{k-1} \cdots W_1$. In this paper, we
find the exact exponential rate $I$ for the convergence in probability of the
product of such matrices when time $k$ grows large, under the assumption that
the $W_k$'s are symmetric and independent identically distributed in time.
Further, for commonly used random models such as gossip and link failure, we
show that the rate $I$ is found by solving a min-cut problem and, hence, easily
computable. Finally, we apply our results to optimally allocate the sensors'
transmission power in consensus+innovations distributed detection.
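A concrete instance of such random products (a minimal gossip model of our own, not the paper's general setting): each $W_k$ is a symmetric stochastic matrix that averages one randomly chosen pair of nodes, and the growing product drives any initial vector to consensus on its average.

```python
import numpy as np

def random_gossip_matrix(n, rng):
    """Symmetric doubly stochastic matrix averaging one random node pair."""
    i, j = rng.choice(n, size=2, replace=False)
    W = np.eye(n)
    W[i, i] = W[j, j] = 0.5
    W[i, j] = W[j, i] = 0.5
    return W

# Applying W_k ... W_1 to initial values drives them to the average.
rng = np.random.default_rng(0)
n = 5
x = np.arange(n, dtype=float)        # initial node values 0, 1, ..., 4
for _ in range(500):
    x = random_gossip_matrix(n, rng) @ x
```

The exponential rate at which the deviation from consensus shrinks in probability is exactly the quantity $I$ the paper characterizes.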
|
1202.6404
|
Signal Shaping for BICM at Low SNR
|
cs.IT math.IT
|
The mutual information of bit-interleaved coded modulation (BICM) systems,
sometimes called the BICM capacity, is investigated at low signal-to-noise
ratio (SNR), i.e., in the wideband regime. A new linear transform that depends
on bits' probabilities is introduced. This transform is used to prove the
asymptotic equivalence between certain BICM systems with uniform and
nonuniform input distributions. Using known results for BICM systems with a
uniform input distribution, we completely characterize the combinations of
input alphabet, input distribution, and binary labeling that achieve the
Shannon limit -1.59 dB. The main conclusion is that a BICM system achieves the
Shannon limit at low SNR if and only if it can be represented as a zero-mean
linear projection of a hypercube, which is the same condition as for uniform
input distributions. Hence, probabilistic shaping offers no extra degrees of
freedom to optimize the low-SNR mutual information of BICM systems, in addition
to what is provided by geometrical shaping. These analytical conclusions are
confirmed by numerical results, which also show that for a fixed input
alphabet, probabilistic shaping of BICM can improve the mutual information in
the low and medium SNR range over any coded modulation system with a uniform
input distribution.
|
1202.6409
|
Classification of poset-block spaces admitting MacWilliams-type identity
|
cs.IT math.IT
|
In this work we prove that a poset-block space admits a MacWilliams-type
identity if and only if the poset is hierarchical and at any level of the
poset, all the blocks have the same dimension. When the poset-block admits the
MacWilliams-type identity, we make explicit the relation between the weight
enumerators of a code and its dual.
|
1202.6423
|
Limits of Reliable Communication with Low Probability of Detection on
AWGN Channels
|
cs.IT cs.NI math.IT
|
We present a square root limit on the amount of information transmitted
reliably and with low probability of detection (LPD) over additive white
Gaussian noise (AWGN) channels. Specifically, if the transmitter has AWGN
channels to an intended receiver and a warden, both with non-zero noise power,
we prove that $o(\sqrt{n})$ bits can be sent from the transmitter to the
receiver in $n$ channel uses while lower-bounding $\alpha+\beta\geq1-\epsilon$
for any $\epsilon>0$, where $\alpha$ and $\beta$ respectively denote the
warden's probabilities of a false alarm when the sender is not transmitting and
a missed detection when the sender is transmitting. Moreover, in most practical
scenarios, a lower bound on the noise power on the channel between the
transmitter and the warden is known and $O(\sqrt{n})$ bits can be sent in $n$
LPD channel uses. Conversely, attempting to transmit more than $O(\sqrt{n})$
bits either results in detection by the warden with probability one or a
non-zero probability of decoding error at the receiver as $n\rightarrow\infty$.
|
1202.6429
|
Stable image reconstruction using total variation minimization
|
cs.CV cs.IT math.IT math.NA
|
This article presents near-optimal guarantees for accurate and robust image
recovery from under-sampled noisy measurements using total variation
minimization. In particular, we show that from O(slog(N)) nonadaptive linear
measurements, an image can be reconstructed to within the best s-term
approximation of its gradient up to a logarithmic factor, and this factor can
be removed by taking slightly more measurements. Along the way, we prove a
strengthened Sobolev inequality for functions lying in the null space of
suitably incoherent matrices.
|
1202.6436
|
A Mean Value Theorem Approach to Robust Control Design for Uncertain
Nonlinear Systems
|
cs.SY
|
This paper presents a scheme to design a tracking controller for a class of
uncertain nonlinear systems using a robust feedback linearization approach. The
scheme is composed of two steps. In the first step, a linearized uncertainty
model for the corresponding uncertain nonlinear system is developed using a
robust feedback linearization approach. In this step, the standard feedback
linearization approach is used to linearize the nominal nonlinear dynamics of
the uncertain nonlinear system. The remaining nonlinear uncertainties are then
linearized at an arbitrary point using the mean value theorem. This approach
gives a multi-input multi-output (MIMO) linear uncertain system model with a
structured uncertainty representation. In the second step, a minimax linear
quadratic regulation (LQR) controller is designed for MIMO linearized uncertain
system model. In order to demonstrate the effectiveness of the proposed method,
it is applied to a velocity and altitude tracking control problem for an
air-breathing hypersonic flight vehicle.
|
1202.6445
|
Principal Component Pursuit with Reduced Linear Measurements
|
cs.IT math.IT
|
In this paper, we study the problem of decomposing a superposition of a
low-rank matrix and a sparse matrix when relatively few linear measurements
are available. This problem arises in many data processing tasks such as
aligning multiple images or rectifying regular texture, where the goal is to
recover a low-rank matrix with a large fraction of corrupted entries in the
presence of nonlinear domain transformation. We consider a natural convex
heuristic for this problem which is a variant of the recently proposed Principal
Component Pursuit. We prove that under suitable conditions, this convex program
recovers the correct low-rank and sparse components despite
reduced measurements. Our analysis covers both random and deterministic
measurement models.
|
1202.6447
|
Quaternary Constant-Composition Codes with Weight Four and Distances
Five or Six
|
cs.IT math.CO math.IT
|
The sizes of optimal constant-composition codes of weight three have been
determined by Chee, Ge and Ling with four cases in doubt. Group divisible codes
played an important role in their constructions. In this paper, we study the
problem of constructing optimal quaternary constant-composition codes with
Hamming weight four and minimum distances five or six through group divisible
codes and Room square approaches. The problem is solved, leaving only five
lengths undetermined. Previously, the results on the sizes of such quaternary
constant-composition codes were scarce.
|
1202.6481
|
Coding Scheme for Optimizing Random I/O Performance
|
cs.IT math.IT
|
Flash memories intended for SSD and mobile applications need to provide high
random I/O performance. This requires using efficient schemes for reading small
chunks of data (e.g. 0.5KB - 4KB) from random addresses. Furthermore, in order
to be cost efficient, it is desirable to use high density Multi-Level Cell
(MLC) memories, such as the ones based on 3 or 4 bit per cell technologies.
Unfortunately, these two requirements are contradicting, as reading an MLC
memory, whose data is coded conventionally, requires multiple sensing
operations, resulting in slow reading and degraded random I/O performance. This
paper describes a novel coding scheme that optimizes random read throughput, by
allowing reading small data chunks from an MLC memory using a single sensing
operation.
|
1202.6504
|
Learning from Distributions via Support Measure Machines
|
stat.ML cs.LG
|
This paper presents a kernel-based discriminative learning framework on
probability measures. Rather than relying on large collections of vectorial
training examples, our framework learns using a collection of probability
distributions that have been constructed to meaningfully represent training
data. By representing these probability distributions as mean embeddings in the
reproducing kernel Hilbert space (RKHS), we are able to apply many standard
kernel-based learning techniques in straightforward fashion. To accomplish
this, we construct a generalization of the support vector machine (SVM) called
a support measure machine (SMM). Our analysis of SMMs provides several insights
into their relationship to traditional SVMs. Based on such insights, we propose
a flexible SVM (Flex-SVM) that places different kernel functions on each
training example. Experimental results on both synthetic and real-world data
demonstrate the effectiveness of our proposed framework.
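The mean-embedding construction can be illustrated directly (an RBF-kernel sketch of ours, not the paper's SMM implementation): the RKHS inner product between the mean embeddings of two sample sets is estimated by the average of all pairwise kernel evaluations, so nearby distributions get a larger kernel value than distant ones.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Pairwise RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mean_embedding_kernel(X, Y, gamma=1.0):
    """Estimated inner product <mu_P, mu_Q> of two RKHS mean embeddings:
    the mean of all pairwise kernel values between the two sample sets."""
    return rbf(X, Y, gamma).mean()

# Two nearby distributions have a larger embedding inner product
# than two distant ones.
rng = np.random.default_rng(0)
P = rng.standard_normal((100, 2))
Q = rng.standard_normal((100, 2)) + 0.1        # close to P
R = rng.standard_normal((100, 2)) + 5.0        # far from P
k_pq = mean_embedding_kernel(P, Q)
k_pr = mean_embedding_kernel(P, R)
```

Any kernel machine that only touches the data through such kernel values can then be trained on distributions instead of points, which is the paper's route to the SMM.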
|
1202.6517
|
Eye Pupil Location Using Webcam
|
cs.HC cs.CV
|
Three different algorithms used for eye pupil location were described and
tested. The efficiency comparison was based on human face images taken
from the BioID database. Moreover, all the eye localisation methods were
implemented in a dedicated application supporting eye-movement-based computer
control. In this case, human face images were acquired by a webcam and
processed in real time.
|
1202.6548
|
mlpy: Machine Learning Python
|
cs.MS cs.LG stat.ML
|
mlpy is a Python Open Source Machine Learning library built on top of
NumPy/SciPy and the GNU Scientific Library. mlpy provides a wide range of
state-of-the-art machine learning methods for supervised and unsupervised
problems and is aimed at finding a reasonable compromise among modularity,
maintainability, reproducibility, usability and efficiency. mlpy is
multiplatform, works with Python 2 and 3, and is distributed under GPL3 at
the website http://mlpy.fbk.eu.
|
1202.6555
|
Adaptive sensing using deterministic partial Hadamard matrices
|
cs.IT math.IT
|
This paper investigates the construction of deterministic matrices preserving
the entropy of random vectors with a given probability distribution. In
particular, it is shown that for random vectors having i.i.d. discrete
components, this is achieved by selecting a subset of rows of a Hadamard matrix
such that (i) the selection is deterministic (ii) the fraction of selected rows
is vanishing. In contrast, it is shown that for random vectors with i.i.d.
continuous components, no partial Hadamard matrix of reduced dimension can
preserve the entropy. These results are in agreement with the results of
Wu-Verdu on almost lossless analog compression. This paper is however motivated
by the complexity attribute of Hadamard matrices, which allows the use of
efficient and stable reconstruction algorithms. The proof technique is based on
a polar code martingale argument and on a new entropy power inequality for
integer-valued random variables.
|
1202.6583
|
A Lexical Analysis Tool with Ambiguity Support
|
cs.CL cs.FL
|
Lexical ambiguities naturally arise in languages. We present Lamb, a lexical
analyzer that produces a lexical analysis graph describing all the possible
sequences of tokens that can be found within the input string. Parsers can
process such lexical analysis graphs and discard any sequence of tokens that
does not produce a valid syntactic sentence, therefore performing, together
with Lamb, a context-sensitive lexical analysis in lexically-ambiguous language
specifications.
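A minimal model of such a lexical analysis graph (our own illustration, not Lamb's actual data structure): nodes are input positions, an edge labeled with a token spans the substring it matches, and every path from position 0 to the end of the input is one possible token sequence for the parser to keep or discard.

```python
def lexical_analysis_graph(text, tokens):
    """All-tokenizations graph: edges[i] lists (token, j) pairs meaning
    some token matches text[i:j] (illustrative sketch)."""
    edges = {i: [] for i in range(len(text) + 1)}
    for i in range(len(text)):
        for tok in tokens:
            if text.startswith(tok, i):
                edges[i].append((tok, i + len(tok)))
    return edges

def all_token_sequences(edges, i, n):
    """Enumerate every token sequence that reads the whole input."""
    if i == n:
        return [[]]
    return [[tok] + rest
            for tok, j in edges[i]
            for rest in all_token_sequences(edges, j, n)]

# "ab" is lexically ambiguous if "a", "b" and "ab" are all tokens.
graph = lexical_analysis_graph("ab", {"a", "b", "ab"})
seqs = all_token_sequences(graph, 0, 2)
```

In the combined pipeline the parser would prune the paths of this graph rather than enumerating them, keeping only those yielding a valid syntactic sentence.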
|
1202.6586
|
Filling-Based Techniques Applied to Object Projection Feature Estimation
|
cs.CV
|
3D motion tracking is a critical task in many computer vision applications.
Unsupervised markerless 3D motion tracking systems determine the most relevant
object in the screen and then track it by continuously estimating its
projection features (center and area) from the edge image and a point inside
the relevant object projection (namely, inner point), until the tracking fails.
Existing object projection feature estimation techniques are based on
ray-casting from the inner point. These techniques present three main
drawbacks: when the inner point is surrounded by edges, rays may not reach
other relevant areas; as a consequence of that issue, the estimated features
may greatly vary depending on the position of the inner point relative to the
object projection; and finally, increasing the number of rays being cast and
the ray-casting iterations (which would make the results more accurate and
stable) increases the processing time to the point the tracking cannot be
performed on the fly. In this paper, we analyze an intuitive filling-based
object projection feature estimation technique that solves the aforementioned
problems but is too sensitive to edge miscalculations. Then, we propose a less
computing-intensive modification to that technique that would not be affected
by the existing techniques' issues and would be no more sensitive to edge
miscalculations than ray-casting-based techniques.
|
1202.6596
|
Physical Layer Security with Uncoordinated Helpers Implementing
Cooperative Jamming
|
cs.IT cs.CR math.IT
|
A wireless communication network is considered, consisting of a source
(Alice), a destination (Bob) and an eavesdropper (Eve), each equipped with a
single antenna. The communication is assisted by multiple helpers, each
equipped with two antennas, which implement cooperative jamming, i.e.,
transmitting noise to confound Eve. The optimal structure of the jamming noise
that maximizes the secrecy rate is derived. A nulling noise scenario is also
considered, in which each helper transmits noise that nulls out at Bob. Each
helper only requires knowledge of its own link to Bob to determine the noise
locally. For the optimally structured noise, global information of all the
links is required. Although analysis shows that under the two-antenna per
helper scenario the nulling solution is sub-optimal in terms of the achievable
secrecy rate, simulations show that the performance difference is rather small,
with the inexpensive and easy-to-implement nulling scheme performing
near-optimally.
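For the two-antenna helpers, the nulling direction is just the unit-norm vector orthogonal to the helper's 2x1 channel to Bob, computable from local knowledge alone (a toy numpy sketch with random channels of our own, not the paper's optimization):

```python
import numpy as np

def nulling_weights(h_bob):
    """Unit-norm 2-antenna jamming direction satisfying h_bob @ w = 0,
    so the helper's noise nulls out at Bob (needs only the local link)."""
    w = np.array([-h_bob[1], h_bob[0]])
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
h_bob = rng.standard_normal(2) + 1j * rng.standard_normal(2)  # helper -> Bob
h_eve = rng.standard_normal(2) + 1j * rng.standard_normal(2)  # helper -> Eve
w = nulling_weights(h_bob)

noise = rng.standard_normal() + 1j * rng.standard_normal()
at_bob = h_bob @ (w * noise)   # jamming heard by Bob: nulled by construction
at_eve = h_eve @ (w * noise)   # jamming heard by Eve: generically nonzero
```

The optimally structured (non-nulling) noise of the abstract would instead require the helper to know all the links, which is the trade-off the simulations quantify.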
|
1202.6597
|
Outage Constrained Secrecy Rate Maximization Using Cooperative Jamming
|
cs.IT cs.CR math.IT
|
We consider a Gaussian MISO wiretap channel, where a multi-antenna source
communicates with a single-antenna destination in the presence of a
single-antenna eavesdropper. The communication is assisted by multi-antenna
helpers that act as jammers to the eavesdropper. Each helper independently
transmits noise which lies in the null space of the channel to the destination,
thus creating no interference at the destination. Under the assumption that
there is eavesdropper channel uncertainty, we derive the optimal covariance
matrix for the source signal so that the secrecy rate is maximized subject to
probability of outage and power constraints. Assuming that the eavesdropper
channels follow a zero-mean Gaussian model with known covariances, we derive the
outage probability in a closed form. Simulation results in support of the
analysis are provided.
|
1202.6601
|
Multiple spreaders affect the indirect influence on Twitter
|
cs.SI physics.soc-ph
|
Most studies on social influence have focused on direct influence, while an
interesting open question is whether indirect influence exists between two
users who are not directly connected in the network and what affects such
influence. In addition, the theory of \emph{complex contagion} tells us
that more spreaders will enhance the indirect influence between two users. Our
observation of intensity of indirect influence, propagated by $n$ parallel
spreaders and quantified by retweeting probability on Twitter, shows that
complex contagion is validated globally but is violated locally. In other
words, the retweeting probability increases non-monotonically with some local
drops.
|
1202.6609
|
Towards an Integrated Visualization Of Semantically Enriched 3D City
Models: An Ontology of 3D Visualization Techniques
|
cs.AI cs.GR cs.HC
|
3D city models, which represent the geometric elements of a city in three
dimensions, are increasingly used for a wide range of applications. Such
uses are made possible by using semantically enriched 3D city models and by
presenting such enriched 3D city models in a way that allows decision-making
processes to be carried out from the best choices among sets of objectives, and
across issues and scales. In order to help in such a decision-making process we
have defined a framework to find the best visualization technique(s) for a set
of potentially heterogeneous data that have to be visualized within the same 3D
city model, in order to perform a given task in a specific context. We have
chosen an ontology-based approach. This approach and the specification and use
of the resulting ontology of 3D visualization techniques are described in this
paper.
|
1202.6641
|
Search versus Decision for Election Manipulation Problems
|
cs.GT cs.CC cs.MA
|
Most theoretical definitions about the complexity of manipulating elections
focus on the decision problem of recognizing which instances can be
successfully manipulated, rather than the search problem of finding the
successful manipulative actions. Since the latter is a far more natural goal
for manipulators, that definitional focus may be misguided if these two
complexities can differ. Our main result is that they probably do differ: If
integer factoring is hard, then for election manipulation, election bribery,
and some types of election control, there are election systems for which
recognizing which instances can be successfully manipulated is in polynomial
time but producing the successful manipulations cannot be done in polynomial
time.
|
1202.6649
|
The Complexity of Controlling Candidate-Sequential Elections
|
cs.GT cs.CC cs.MA
|
Candidate control of elections is the study of how adding or removing
candidates can affect the outcome. However, the traditional study of the
complexity of candidate control is in the model in which all candidates and
votes are known up front. This paper develops a model for studying online
control for elections where the structure is sequential with respect to the
candidates, and in which the decision regarding adding and deleting must be
irrevocably made at the moment the candidate is presented. We show that great
complexity---PSPACE-completeness---can occur in this setting, but we also
provide within this setting polynomial-time algorithms for the most important
of election systems, plurality.
|
1202.6654
|
Optimal Transmission Policies for Energy Harvesting Two-hop Networks
|
cs.IT math.IT
|
In this paper, a two-hop communication system with energy harvesting nodes is
considered. Unlike battery powered wireless nodes, both the source and the
relay are able to harvest energy from the environment during communication;
therefore, both data and energy causality over the two hops need to be
considered. Assuming both nodes know the harvested energies in advance,
properties of optimal transmission policies to maximize the delivered data by a
given deadline are identified. Using these properties, the optimal power
allocation and transmission schedule are developed for the case in which both
nodes harvest two energy packets.
|
1202.6655
|
The Complexity of Online Manipulation of Sequential Elections
|
cs.GT cs.CC cs.MA
|
Most work on manipulation assumes that all preferences are known to the
manipulators. However, in many settings elections are open and sequential, and
manipulators may know the already cast votes but may not know the future votes.
We introduce a framework, in which manipulators can see the past votes but not
the future ones, to model online coalitional manipulation of sequential
elections, and we show that in this setting manipulation can be extremely
complex even for election systems with simple winner problems. Yet we also show
that for some of the most important election systems such manipulation is
simple in certain settings. This suggests that when using sequential voting,
one should pay great attention to the details of the setting in choosing one's
voting rule. Among the highlights of our classifications are: We show that,
depending on the size of the manipulative coalition, the online manipulation
problem can be complete for each level of the polynomial hierarchy or even for
PSPACE. We obtain the most dramatic contrast to date between the
nonunique-winner and unique-winner models: Online weighted manipulation for
plurality is in P in the nonunique-winner model, yet is coNP-hard (constructive
case) and NP-hard (destructive case) in the unique-winner model. And we obtain
what to the best of our knowledge are the first P^NP[1]-completeness and
P^NP-completeness results in the field of computational social choice, in
particular proving such completeness for, respectively, the complexity of
3-candidate and 4-candidate (and unlimited-candidate) online weighted coalition
manipulation of veto elections.
|
1202.6658
|
Independent signaling achieves the capacity region of the Gaussian
interference channel with common information to within one bit
|
cs.IT math.IT
|
The interference channel with common information (IC-CI) consists of two
transmit-receive pairs that communicate over a common noisy medium. Each
transmitter has an individual message for its paired receiver, and
additionally, both transmitters have a common message to deliver to both
receivers. In this paper, through explicit inner and outer bounds on the
capacity region, we establish the capacity region of the Gaussian IC-CI to
within a bounded gap of one bit, independently of the values of all channel
parameters. Using this constant-gap characterization, the generalized degrees
of freedom (GDoF) region is determined. It is shown that the introduction of
the common message leads to an increase in the GDoF relative to that achievable
over the Gaussian interference channel without a common message, and hence to an
unbounded improvement in the achievable rate. A surprising feature of the
capacity-within-one-bit result is that most of the available benefit (i.e., to
within one bit of capacity) due to the common message is achieved through a
simple and explicit coding scheme that involves independent signaling at the
two transmitters so that, in effect, this scheme forgoes the opportunity for
transmitter cooperation that is inherently available due to shared knowledge of
the common message at both transmitters.
|
1202.6666
|
Perturbation of the Eigenvectors of the Graph Laplacian: Application to
Image Denoising
|
physics.data-an cs.CV stat.ML
|
The original contributions of this paper are twofold: a new understanding of
the influence of noise on the eigenvectors of the graph Laplacian of a set of
image patches, and an algorithm to estimate a denoised set of patches from a
noisy image. The algorithm relies on the following two observations: (1) the
low-index eigenvectors of the diffusion, or graph Laplacian, operators are very
robust to random perturbations of the weights and random changes in the
connections of the patch-graph; and (2) patches extracted from smooth regions
of the image are organized along smooth low-dimensional structures in the
patch-set, and therefore can be reconstructed with few eigenvectors.
Experiments demonstrate that our denoising algorithm outperforms the denoising
gold-standards.
|
1202.6669
|
On the Capacity of Rate-Adaptive Packetized Wireless Communication Links
under Jamming
|
cs.IT cs.GT math.IT
|
We formulate the interaction between the communicating nodes and an adversary
within a game-theoretic context. We show that earlier information-theoretic
capacity results for a jammed channel correspond to a pure Nash Equilibrium
(NE). However, when both players are allowed to randomize their actions (i.e.,
coding rate and jamming power) new mixed Nash equilibria appear with surprising
properties. We show the existence of a threshold ($J_{TH}$) such that if the
jammer average power exceeds $J_{TH}$, the channel capacity at the NE is the
same as if the jammer was using its maximum allowable power, $J_{Max}$, all the
time. This indicates that randomization gives a significant advantage to
powerful jammers. We also show how the NE strategies can be derived, and we provide very
simple (e.g., semi-uniform) approximations to the optimal communication and
jamming strategies. Such strategies are very simple to implement in current
hardware and software.
|
1202.6677
|
Trajectory and Policy Aware Sender Anonymity in Location Based Services
|
cs.DB
|
We consider Location-based Service (LBS) settings, where an LBS provider logs
the requests sent by mobile device users over a period of time and later wants
to publish/share these logs. Log sharing can be extremely valuable for
advertising, data mining research and network management, but it poses a
serious threat to the privacy of LBS users. Sender anonymity solutions prevent
a malicious attacker from inferring the interests of LBS users by associating
them with their service requests after gaining access to the anonymized logs.
With the fast-increasing adoption of smartphones and the concern that historic
user trajectories are becoming more accessible, it becomes necessary for any
sender anonymity solution to protect against attackers that are
trajectory-aware (i.e., have access to historic user trajectories) as well as
policy-aware (i.e., they know the log anonymization policy). We call such
attackers TP-aware.
This paper introduces a first privacy guarantee against TP-aware attackers,
called TP-aware sender k-anonymity. It turns out that there are many possible
TP-aware anonymizations for the same LBS log, each with a different utility to
the consumer of the anonymized log. The problem of finding the optimal TP-aware
anonymization is investigated. We show that trajectory-awareness renders the
problem computationally harder than the trajectory-unaware variants found in
the literature (NP-complete in the size of the log, versus PTIME). We describe
a PTIME l-approximation algorithm for trajectories of length l and empirically
show that it scales to large LBS logs (up to 2 million users).
|
1202.6685
|
Faceted Semantic Search for Personalized Social Search
|
cs.IR
|
Current social networks (such as Facebook, Twitter, LinkedIn, ...) need to deal
with vagueness and ontological indeterminacy. In this paper we analyze the
prototyping of a faceted semantic search for personalized social search using
the "joint meaning" in a community environment. User searches in a
"collaborative" environment defined by folksonomies can be supported by the
most common features of faceted semantic search. A solution for
context-aware personalized search is based on "joint meaning", understood as a
joint construal of the creators of the contents and the user of the contents,
using the faceted taxonomy with the Semantic Web. A proof-of-concept prototype
shows how the proposed methodological approach can also be applied to existing
presentation components, built with different languages and/or component
technologies.
|
1203.0024
|
Verification of Relational Data-Centric Dynamic Systems with External
Services
|
cs.DB
|
Data-centric dynamic systems are systems where both the process controlling
the dynamics and the manipulation of data are equally central. In this paper we
study verification of (first-order) mu-calculus variants over relational
data-centric dynamic systems, where data are represented by a full-fledged
relational database, and the process is described in terms of atomic actions
that evolve the database. The execution of such actions may involve calls to
external services, providing fresh data inserted into the system. As a result
such systems are typically infinite-state. We show that verification is
undecidable in general, and we isolate notable cases, where decidability is
achieved. Specifically we start by considering service calls that return values
deterministically (depending only on passed parameters). We show that in a
mu-calculus variant that preserves knowledge of objects that have appeared along a
run, we get decidability under the assumption that the fresh data introduced along a
run are bounded, though they might not be bounded in the overall system. In
fact we tie such a result to a notion related to weak acyclicity studied in
data exchange. Then, we move to nondeterministic services, where the assumption
of a data-bounded run would result in a bound on the service calls that can be
invoked during the execution and hence would be too restrictive. So we
investigate decidability under the assumption that knowledge of objects is
preserved only if they are continuously present. We show that if infinitely
many values occur in a run but do not accumulate in the same state, then we get
again decidability. We give syntactic conditions to avoid this accumulation
through the novel notion of "generate-recall acyclicity", which takes into
consideration that every service call activation generates new values that
cannot be accumulated indefinitely.
|
1203.0029
|
Assortativity Decreases the Robustness of Interdependent Networks
|
physics.soc-ph cs.SI physics.data-an
|
It was recently recognized that interdependencies among different networks
can play a crucial role in triggering cascading failures and hence system-wide
disasters. A recent model shows how pairs of interdependent networks can
exhibit an abrupt percolation transition as failures accumulate. We report on
the effects of topology on failure propagation for a model system consisting of
two interdependent networks. We find that the internal node correlations in
each of the two interdependent networks significantly change the critical
density of failures that triggers the total disruption of the two-network
system. Specifically, we find that assortativity (i.e., the likelihood of
nodes with similar degree being connected) within a single network decreases
the robustness of the entire system. The results of this study on the influence
of assortativity may provide insights into ways of improving the robustness of
network architectures, and thus into enhancing the level of protection of critical
infrastructures.
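As a concrete illustration of the failure-propagation mechanism described above, the following toy simulation (the survival rule and all names are simplified assumptions, not the paper's percolation model) iterates failures between two interdependent networks until a stable set of surviving nodes remains:

```python
def cascade(net_a, net_b, dep_ab, initially_failed):
    """Toy cascade between two interdependent networks: a node survives only
    while it keeps at least one intra-network neighbor AND the node it
    depends on in the other network is still alive (simplified rule).
    net_a/net_b map each node to its set of neighbors (excluding itself);
    dep_ab is assumed to be a one-to-one dependency mapping."""
    dep_ba = {b: a for a, b in dep_ab.items()}
    alive_a = set(net_a) - set(initially_failed)
    alive_b = set(net_b)
    changed = True
    while changed:  # propagate failures back and forth until stable
        changed = False
        for a in list(alive_a):
            if not (net_a[a] & alive_a) or dep_ab[a] not in alive_b:
                alive_a.remove(a)
                changed = True
        for b in list(alive_b):
            if not (net_b[b] & alive_b) or dep_ba[b] not in alive_a:
                alive_b.remove(b)
                changed = True
    return alive_a, alive_b
```

Seeding a single failure in one network is then enough to see the dependency link knock out the counterpart node in the other network.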
|
1203.0030
|
Design of State-based Schedulers for a Network of Control Loops
|
cs.SY
|
For a closed-loop system, which has a contention-based multiple access
network on its sensor link, the Medium Access Controller (MAC) may discard some
packets when the traffic on the link is high. We use a local state-based
scheduler to select a few critical data packets to send to the MAC. In this
paper, we analyze the impact of such a scheduler on the closed-loop system in
the presence of traffic, and show that there is a dual effect with state-based
scheduling. In general, this makes the optimal scheduler and controller hard to
find. However, by removing past controls from the scheduling criterion, we find
that certainty equivalence holds. This condition is related to the classical
result of Bar-Shalom and Tse, and it leads to the design of a scheduler with a
certainty equivalent controller. This design, however, does not result in an
equivalent system to the original problem, in the sense of Witsenhausen.
Computing the estimate is difficult, but can be simplified by introducing a
symmetry constraint on the scheduler. Based on these findings, we propose a
dual predictor architecture for the closed-loop system, which ensures
separation between scheduler, observer and controller. We present an example of
this architecture, which illustrates a network-aware event-triggering
mechanism.
|
1203.0038
|
Inference in Hidden Markov Models with Explicit State Duration
Distributions
|
stat.ML cs.LG
|
In this letter we borrow from the inference techniques developed for
unbounded state-cardinality (nonparametric) variants of the HMM and use them to
develop a tuning-parameter-free, black-box inference procedure for
explicit-state-duration hidden Markov models (EDHMMs). EDHMMs are HMMs that have
latent states consisting of both discrete state-indicator and discrete
state-duration random variables. In contrast to the implicit geometric state
duration distribution possessed by the standard HMM, EDHMMs allow the direct
parameterisation and estimation of per-state duration distributions. As most
duration distributions are defined over the positive integers, truncation or
other approximations are usually required to perform EDHMM inference.
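To make the model concrete, here is a minimal generative sketch of an EDHMM (the function names and sampler interface are illustrative assumptions, not the letter's inference procedure): on entering a state the chain draws an explicit duration, emits that many observations, and only then transitions.

```python
import random

def sample_edhmm(T, trans, sample_duration, sample_emission, init=0, rng=None):
    """Generate T observations from an explicit-duration HMM.

    trans[s]        -- transition probabilities out of state s (no self-loops)
    sample_duration -- draws a per-state duration (need not be geometric)
    sample_emission -- draws an observation given the current state
    """
    rng = rng or random.Random(0)
    states, obs = [], []
    s = init
    while len(obs) < T:
        d = sample_duration(s, rng)           # explicit duration for this visit
        for _ in range(min(d, T - len(obs))):
            states.append(s)
            obs.append(sample_emission(s, rng))
        r, acc = rng.random(), 0.0            # sample the next state
        for nxt, p in enumerate(trans[s]):
            acc += p
            if r < acc:
                s = nxt
                break
    return states, obs
```

With a deterministic duration of 3 and two alternating states, the sampler produces runs of exactly three identical state labels, which a geometric-duration HMM could only produce by chance.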
|
1203.0055
|
Stochastic Database Cracking: Towards Robust Adaptive Indexing in
Main-Memory Column-Stores
|
cs.DB
|
Modern business applications and scientific databases call for inherently
dynamic data storage environments. Such environments are characterized by two
challenging features: (a) they have little idle system time to devote to
physical design; and (b) there is little, if any, a priori workload knowledge,
while the query and data workload keeps changing dynamically. In such
environments, traditional approaches to index building and maintenance cannot
apply. Database cracking has been proposed as a solution that allows on-the-fly
physical data reorganization, as a collateral effect of query processing.
Cracking aims to continuously and automatically adapt indexes to the workload
at hand, without human intervention. Indexes are built incrementally,
adaptively, and on demand. Nevertheless, as we show, existing adaptive indexing
methods fail to deliver workload-robustness; they perform much better with
random workloads than with others. This frailty derives from the inelasticity
with which these approaches interpret each query as a hint on how data should
be stored. Current cracking schemes blindly reorganize the data within each
query's range, even if that results in successive expensive operations with
minimal indexing benefit. In this paper, we introduce stochastic cracking, a
significantly more resilient approach to adaptive indexing. Stochastic cracking
also uses each query as a hint on how to reorganize data, but not blindly so;
it gains resilience and avoids performance bottlenecks by deliberately applying
certain arbitrary choices in its decision-making. Thereby, we bring adaptive
indexing forward to a mature formulation that confers the workload-robustness
previous approaches lacked. Our extensive experimental study verifies that
stochastic cracking maintains the desired properties of original database
cracking while at the same time it performs well with diverse realistic
workloads.
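The core idea can be sketched in a few lines (a simplified illustration with hypothetical names, not the paper's actual algorithms): answer the range query by cracking on its bounds, and additionally crack the cold pieces at random pivots so that skewed, e.g. strictly sequential, workloads cannot leave one huge unindexed piece behind.

```python
import random

def stochastic_crack(column, lo, hi, rng=None):
    """Answer the range query [lo, hi) while reorganizing the column."""
    rng = rng or random.Random(0)
    low  = [x for x in column if x < lo]        # standard cracking on the
    mid  = [x for x in column if lo <= x < hi]  # query bounds: the answer
    high = [x for x in column if x >= hi]       # becomes a contiguous piece
    pieces = []
    for piece in (low, mid, high):
        if piece is mid or len(piece) < 2:
            pieces.append(piece)
            continue
        p = rng.choice(piece)                        # stochastic extra crack:
        pieces.append([x for x in piece if x < p])   # split cold pieces at a
        pieces.append([x for x in piece if x >= p])  # random pivot as well
    return mid, pieces
```

Each query thus refines the physical layout a little beyond what its own range strictly requires, which is where the robustness comes from.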
|
1203.0056
|
SharedDB: Killing One Thousand Queries With One Stone
|
cs.DB
|
Traditional database systems are built around the query-at-a-time model. This
approach tries to optimize performance in a best-effort way. Unfortunately,
best effort is not good enough for many modern applications. These applications
require response time guarantees in high load situations. This paper describes
the design of a new database architecture that is based on batching queries and
shared computation across possibly hundreds of concurrent queries and updates.
Performance experiments with the TPC-W benchmark show that the performance of
our implementation, SharedDB, is indeed robust across a wide range of dynamic
workloads.
|
1203.0057
|
Pushing the Boundaries of Crowd-enabled Databases with Query-driven
Schema Expansion
|
cs.DB
|
By incorporating human workers into the query execution process, crowd-enabled
databases facilitate intelligent, social capabilities like completing missing
data at query time or performing cognitive operators. But despite all their
flexibility, crowd-enabled databases still maintain rigid schemas. In this
paper, we extend crowd-enabled databases by flexible query-driven schema
expansion, allowing the addition of new attributes to the database at query
time. However, the number of crowd-sourced mini-tasks to fill in missing values
may often be prohibitively large and the resulting data quality is doubtful.
Instead of simple crowd-sourcing to obtain all values individually, we leverage
the user-generated data found in the Social Web: By exploiting user ratings we
build perceptual spaces, i.e., highly-compressed representations of opinions,
impressions, and perceptions of large numbers of users. Using few training
samples obtained by expert crowd sourcing, we then can extract all missing data
automatically from the perceptual space with high quality and at low costs.
Extensive experiments show that our approach can boost both performance and
quality of crowd-enabled databases, while also providing the flexibility to
expand schemas in a query-driven fashion.
|
1203.0058
|
A Bayesian Approach to Discovering Truth from Conflicting Sources for
Data Integration
|
cs.DB cs.LG
|
In practical data integration systems, it is common for the data sources
being integrated to provide conflicting information about the same entity.
Consequently, a major challenge for data integration is to derive the most
complete and accurate integrated records from diverse and sometimes conflicting
sources. We term this challenge the truth finding problem. We observe that some
sources are generally more reliable than others, and therefore a good model of
source quality is the key to solving the truth finding problem. In this work,
we propose a probabilistic graphical model that can automatically infer true
records and source quality without any supervision. In contrast to previous
methods, our principled approach leverages a generative process of two types of
errors (false positive and false negative) by modeling two different aspects of
source quality. In so doing, ours is also the first approach designed to merge
multi-valued attribute types. Our method is scalable, due to an efficient
sampling-based inference algorithm that needs very few iterations in practice
and enjoys linear time complexity, with an even faster incremental variant.
Experiments on two real world datasets show that our new method outperforms
existing state-of-the-art approaches to the truth finding problem.
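The flavor of the approach, iterating between per-source quality and per-entity truth, can be sketched as a simple fixed point (a stripped-down illustration, not the paper's Bayesian graphical model with its two error types):

```python
def truth_find(claims, iters=10):
    """claims: {source: {entity: claimed_value}}.
    Alternate between weighted voting for each entity's value and
    re-estimating each source's reliability from the current consensus."""
    weight = {s: 1.0 for s in claims}
    best = {}
    for _ in range(iters):
        votes = {}
        for s, kv in claims.items():
            for e, v in kv.items():
                votes.setdefault(e, {})
                votes[e][v] = votes[e].get(v, 0.0) + weight[s]
        best = {e: max(vs, key=vs.get) for e, vs in votes.items()}
        for s, kv in claims.items():   # smoothed accuracy against consensus
            hits = sum(best[e] == v for e, v in kv.items())
            weight[s] = (hits + 1) / (len(kv) + 2)
    return best, weight
```

Sources that keep disagreeing with the weighted consensus lose influence, so a minority of reliable sources can outvote a sloppy majority after a few iterations.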
|
1203.0059
|
How to Price Shared Optimizations in the Cloud
|
cs.DB
|
Data-management-as-a-service systems are increasingly being used in
collaborative settings, where multiple users access common datasets. Cloud
providers have the choice to implement various optimizations, such as indexing
or materialized views, to accelerate queries over these datasets. Each
optimization carries a cost and may benefit multiple users. This creates a
major challenge: how to select which optimizations to perform and how to share
their cost among users. The problem is especially challenging when users are
selfish and will only report their true values for different optimizations if
doing so maximizes their utility. In this paper, we present a new approach for
selecting and pricing shared optimizations by using Mechanism Design. We first
show how to apply the Shapley Value Mechanism to the simple case of selecting
and pricing additive optimizations, assuming an offline game where all users
access the service for the same time-period. Second, we extend the approach to
online scenarios where users come and go. Finally, we consider the case of
substitutive optimizations. We show analytically that our mechanisms induce
truthfulness and recover the optimization costs. We also show experimentally
that our mechanisms yield higher utility than the state-of-the-art approach
based on regret accumulation.
|
1203.0060
|
Dense Subgraph Maintenance under Streaming Edge Weight Updates for
Real-time Story Identification
|
cs.DB
|
Recent years have witnessed an unprecedented proliferation of social media.
People around the globe author, every day, millions of blog posts, social
network status updates, etc. This rich stream of information can be used to
identify, on an ongoing basis, emerging stories and events that capture
popular attention. Stories can be identified via groups of tightly-coupled
real-world entities, namely the people, locations, products, etc., that are
involved in the story. The sheer scale, and rapid evolution of the data
involved necessitate highly efficient techniques for identifying important
stories at every point of time. The main challenge in real-time story
identification is the maintenance of dense subgraphs (corresponding to groups
of tightly-coupled entities) under streaming edge weight updates (resulting
from a stream of user-generated content). This is the first work to study the
efficient maintenance of dense subgraphs under such streaming edge weight
updates. For a wide range of definitions of density, we derive theoretical
results regarding the magnitude of change that a single edge weight update can
cause. Based on these, we propose a novel algorithm, DYNDENS, which outperforms
adaptations of existing techniques to this setting, and yields meaningful
results. Our approach is validated by a thorough experimental evaluation on
large-scale real and synthetic datasets.
|
1203.0061
|
ReStore: Reusing Results of MapReduce Jobs
|
cs.DB
|
Analyzing large scale data has emerged as an important activity for many
organizations in the past few years. This large scale data analysis is
facilitated by the MapReduce programming and execution model and its
implementations, most notably Hadoop. Users of MapReduce often have analysis
tasks that are too complex to express as individual MapReduce jobs. Instead,
they use high-level query languages such as Pig, Hive, or Jaql to express their
complex tasks. The compilers of these languages translate queries into
workflows of MapReduce jobs. Each job in these workflows reads its input from
the distributed file system used by the MapReduce system and produces output
that is stored in this distributed file system and read as input by the next
job in the workflow. The current practice is to delete these intermediate
results from the distributed file system at the end of executing the workflow.
One way to improve the performance of workflows of MapReduce jobs is to keep
these intermediate results and reuse them for future workflows submitted to the
system. In this paper, we present ReStore, a system that manages the storage
and reuse of such intermediate results. ReStore can reuse the output of whole
MapReduce jobs that are part of a workflow, and it can also create additional
reuse opportunities by materializing and storing the output of query execution
operators that are executed within a MapReduce job. We have implemented ReStore
as an extension to the Pig dataflow system on top of Hadoop, and we
experimentally demonstrate significant speedups on queries from the PigMix
benchmark.
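The job-level reuse idea can be illustrated with a tiny cache keyed by job identity and input (a toy sketch with hypothetical names; ReStore itself matches physical MapReduce job plans and additionally materializes sub-job operator outputs):

```python
class JobOutputCache:
    """Serve a repeated (job, input) pair from stored output instead of
    re-running the job; a toy stand-in for ReStore's job-level reuse."""
    def __init__(self):
        self.store = {}
        self.hits = 0

    def run(self, job_fn, job_id, inputs):
        key = (job_id, tuple(inputs))   # fingerprint of job logic + input
        if key in self.store:
            self.hits += 1
            return self.store[key]      # reuse: skip re-execution entirely
        out = job_fn(inputs)
        self.store[key] = out           # keep the intermediate result
        return out
```

A workflow that resubmits the same upstream job then pays its cost only once, which is exactly the intermediate-result reuse the system targets.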
|
1203.0076
|
Using Barriers to Reduce the Sensitivity to Edge Miscalculations of
Casting-Based Object Projection Feature Estimation
|
cs.CV
|
3D motion tracking is a critical task in many computer vision applications.
Unsupervised markerless 3D motion tracking systems determine the most relevant
object in the screen and then track it by continuously estimating its
projection features (center and area) from the edge image and a point inside
the relevant object projection (namely, inner point), until the tracking fails.
Existing reliable object projection feature estimation techniques are based on
ray-casting or grid-filling from the inner point. These techniques assume the
edge image to be accurate. However, in real-world scenarios, edge
miscalculations may arise from low contrast between the target object and its
surroundings, or from motion blur caused by low frame rates or fast-moving target
objects. In this paper, we propose a barrier extension to casting-based
techniques that mitigates the effect of edge miscalculations.
|
1203.0077
|
Queries with Guarded Negation (full version)
|
cs.DB
|
A well-established and fundamental insight in database theory is that
negation (also known as complementation) tends to make queries difficult to
process and difficult to reason about. Many basic problems are decidable and
admit practical algorithms in the case of unions of conjunctive queries, but
become difficult or even undecidable when queries are allowed to contain
negation. Inspired by recent results in finite model theory, we consider a
restricted form of negation, guarded negation. We introduce a fragment of SQL,
called GN-SQL, as well as a fragment of Datalog with stratified negation,
called GN-Datalog, that allow only guarded negation, and we show that these
query languages are computationally well behaved, in terms of testing query
containment, query evaluation, open-world query answering, and boundedness.
GN-SQL and GN-Datalog subsume a number of well known query languages and
constraint languages, such as unions of conjunctive queries, monadic Datalog,
and frontier-guarded tgds. In addition, an analysis of standard benchmark
workloads shows that most usage of negation in SQL in practice is guarded
negation.
|
1203.0088
|
The Mind Grows Circuits
|
cs.AI cs.FL
|
There is a vast supply of prior art that studies models for mental processes.
Some studies in psychology and philosophy approach it from an inner perspective,
in terms of experiences and percepts. Others, such as neurobiology or
connectionist machines, approach it externally by viewing the mind as a complex
circuit of neurons, where each neuron is a primitive binary circuit. In this
paper, we also model the mind as a place where a circuit grows, starting as a
collection of primitive components at birth and then builds up incrementally in
a bottom up fashion. A new node is formed by a simple composition of prior
nodes when we undergo a repeated experience that can be described by that
composition. Unlike neural networks, however, these circuits take "concepts" or
"percepts" as inputs and outputs. Thus the growing circuits can be likened to a
growing collection of lambda expressions that are built on top of one another
in an attempt to compress the sensory input as a heuristic to bound its
Kolmogorov Complexity.
|
1203.0096
|
Joint Estimation of Angle and Delay of Radio Wave Arrival under
Multiplicative Noise Environment
|
cs.CE math.ST stat.TH
|
We propose a novel technique for joint estimation of angle and delay of radio
wave arrival in a multipath mobile communication channel using knowledge of the
transmitted pulse shape function. Employing an array of sensors to sample the
received radio signal, together with subsequent array signal processing, can provide the
characterization of a high-rank channel in terms of the multipath angles of
arrival and time delays. Although several works have been reported in the
literature for estimation of the high-rank channel parameters, we are not aware
of any work that deals with the problem of estimation in a fading channel,
which essentially leads to a multiplicative noise environment.
|
1203.0135
|
Optimal Mix of Incentive Strategies for Product Marketing on Social
Networks
|
cs.SI physics.soc-ph
|
We consider the problem of devising incentive strategies for viral marketing
of a product. In particular, we assume that the seller can influence
penetration of the product by offering two incentive programs: a) direct
incentives to potential buyers (influence) and b) referral rewards for
customers who influence potential buyers to make the purchase (exploit
connections). The problem is to determine the optimal timing of these programs
over a finite time horizon. In contrast to the algorithmic perspective popular in
the literature, we take a mean-field approach and formulate the problem as a
continuous-time deterministic optimal control problem. We show that the optimal
strategy for the seller has a simple structure and can take both forms, namely,
influence-and-exploit and exploit-and-influence. We also show that in some
cases it may be optimal for the seller to deploy incentive programs mostly for
low-degree nodes. We support our theoretical results through numerical studies and
provide practical insights by analyzing various scenarios.
|
1203.0145
|
The Horse Raced Past: Gardenpath Processing in Dynamical Systems
|
cs.CL
|
I pinpoint an interesting similarity between a recent account of rational
parsing and the treatment of sequential decision problems in a dynamical
systems approach. I argue that expectation-driven search heuristics aiming at
fast computation resemble a high-risk decision strategy in favor of large
transition velocities. Hale's rational parser, combining generalized
left-corner parsing with informed $\mathrm{A}^*$ search to resolve processing
conflicts, explains gardenpath effects in natural sentence processing by
misleading estimates of future processing costs that are to be minimized. On
the other hand, minimizing the duration of cognitive computations in
time-continuous dynamical systems can be described by combining vector space
representations of cognitive states by means of filler/role decompositions and
subsequent tensor product representations with the paradigm of stable
heteroclinic sequences. Maximizing transition velocities according to a
high-risk decision strategy could account for a fast race even between states
that are apparently remote in representation space.
|
1203.0146
|
Relevant Sampling of Band-limited Functions
|
math.PR cs.IT math.IT
|
We study the random sampling of band-limited functions of several variables.
If a band-limited function with bandwidth one has its essential support on a
cube of volume $R^d$, then $\mathcal{O}(R^d \log R^d)$ random samples suffice to
approximate the function up to a given error with high probability.
|
1203.0160
|
Scaling Datalog for Machine Learning on Big Data
|
cs.DB cs.LG cs.PF
|
In this paper, we present the case for a declarative foundation for
data-intensive machine learning systems. Instead of creating a new system for
each specific flavor of machine learning task, or hardcoding new optimizations,
we argue for the use of recursive queries to program a variety of machine
learning systems. By taking this approach, database query optimization
techniques can be utilized to identify effective execution plans, and the
resulting runtime plans can be executed on a single unified data-parallel query
processing engine. As a proof of concept, we consider two programming
models, Pregel and Iterative Map-Reduce-Update, from the machine learning
domain, and show how they can be captured in Datalog, tuned for a specific
task, and then compiled into an optimized physical plan. Experiments performed
on a large computing cluster with real data demonstrate that this declarative
approach can provide very good performance while offering both increased
generality and programming ease.
|
1203.0197
|
Statistical Approach for Selecting Elite Ants
|
cs.NE
|
Applications of ACO algorithms to obtain better solutions for combinatorial
optimization problems have become very popular in recent years. In ACO
algorithms, a group of agents repeatedly performs well-defined actions and
collaborates with other ants in order to accomplish the defined task. In this
paper, we introduce new mechanisms for dynamically selecting the elite ants
based on simple statistical tools. We also investigate the performance of the newly
proposed mechanisms.
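One plausible statistical rule of the kind the abstract hints at (the exact criterion used in the paper may well differ) is to mark as elite those ants whose tour cost beats the colony mean by more than z standard deviations:

```python
import statistics

def select_elite(tour_costs, z=1.0):
    """Return indices of ants whose tour cost is strictly better than
    (mean - z * std); with zero spread no ant is singled out."""
    mu = statistics.mean(tour_costs)
    sigma = statistics.pstdev(tour_costs)
    return [i for i, c in enumerate(tour_costs) if c < mu - z * sigma]
```

Because the threshold moves with the colony's current cost distribution, the elite set adapts from iteration to iteration instead of being a fixed top-k.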
|
1203.0202
|
Pictures of Processes: Automated Graph Rewriting for Monoidal Categories
and Applications to Quantum Computing
|
math.CT cs.AI quant-ph
|
This work is about diagrammatic languages, how they can be represented, and
what they in turn can be used to represent. More specifically, it focuses on
representations and applications of string diagrams. String diagrams are used
to represent a collection of processes, depicted as "boxes" with multiple
(typed) inputs and outputs, depicted as "wires". If we allow plugging input and
output wires together, we can intuitively represent complex compositions of
processes, formalised as morphisms in a monoidal category.
[...] The first major contribution of this dissertation is the introduction
of a discretised version of a string diagram called a string graph. String
graphs form a partial adhesive category, so they can be manipulated using
double-pushout graph rewriting. Furthermore, we show how string graphs modulo a
rewrite system can be used to construct free symmetric traced and compact
closed categories on a monoidal signature.
The second contribution is in the application of graphical languages to
quantum information theory. We use a mixture of diagrammatic and algebraic
techniques to prove a new classification result for strongly complementary
observables. [...] We also introduce a graphical language for multipartite
entanglement and illustrate a simple graphical axiom that distinguishes the two
maximally-entangled tripartite qubit states: GHZ and W. [...]
The third contribution is a description of two software tools developed in
part by the author to implement much of the theoretical content described here.
The first tool is Quantomatic, a desktop application for building string graphs
and graphical theories, as well as performing automated graph rewriting
visually. The second is QuantoCoSy, which performs fully automated,
model-driven theory creation using a procedure called conjecture synthesis.
|
1203.0203
|
Fast Reinforcement Learning with Large Action Sets using
Error-Correcting Output Codes for MDP Factorization
|
cs.LG stat.ML
|
The use of Reinforcement Learning in real-world scenarios is strongly limited
by issues of scale. Most RL algorithms are unable to deal with
problems composed of hundreds or sometimes even dozens of possible actions, and
therefore cannot be applied to many real-world problems. We consider the RL
problem in the supervised classification framework where the optimal policy is
obtained through a multiclass classifier, the set of classes being the set of
actions of the problem. We introduce error-correcting output codes (ECOCs) in
this setting and propose two new methods for reducing complexity when using
rollouts-based approaches. The first method consists in using an ECOC-based
classifier as the multiclass classifier, reducing the learning complexity from
O(A^2) to O(A log(A)). We then propose a novel method that profits from the
ECOC's coding dictionary to split the initial MDP into O(log(A)) separate
two-action MDPs. This second method reduces learning complexity even further,
from O(A^2) to O(log(A)), thus rendering problems with large action sets
tractable. We finish by experimentally demonstrating the advantages of our
approach on a set of benchmark problems, both in speed and performance.
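The factorization can be sketched as follows (a minimal illustration using the shortest possible binary code; practical ECOCs use longer codewords to gain error-correcting margin):

```python
import math

def ecoc_codes(num_actions):
    """Assign each action a ceil(log2 A)-bit binary codeword, so A-way
    action selection factorizes into log(A) two-action (bit) problems."""
    bits = max(1, math.ceil(math.log2(num_actions)))
    return [[(a >> b) & 1 for b in range(bits)]
            for a in range(num_actions)], bits

def decode(bit_predictions, codes):
    """Map the per-bit predictions back to the action whose codeword is
    nearest in Hamming distance."""
    return min(range(len(codes)),
               key=lambda a: sum(p != c
                                 for p, c in zip(bit_predictions, codes[a])))
```

Each bit position defines one two-action sub-MDP; training log(A) binary policies and Hamming-decoding their outputs replaces learning over the full A-action space.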
|
1203.0220
|
The Equational Approach to CF2 Semantics
|
cs.AI cs.LO
|
We introduce a family of new equational semantics for argumentation networks
which can handle odd and even loops in a uniform manner. We offer one version
of equational semantics which is equivalent to CF2 semantics, and a better
version which gives the same results as traditional Dung semantics for even
loops but can still handle odd loops.
|
1203.0222
|
On the sensitivity of the simulated European Neolithic transition to
climate extremes
|
q-bio.PE cs.MA math.DS physics.geo-ph
|
Was the spread of agropastoralism from the Fertile Crescent throughout Europe
influenced by extreme climate events, or was it independent of climate? We here
generate idealized climate events using palaeoclimate records. In a
mathematical model of regional sociocultural development, these events disturb
the subsistence base of simulated forager and farmer societies. We evaluate the
regional simulated transition timings and durations against a published large
set of radiocarbon dates for western Eurasia; the model is able to
realistically hindcast much of the inhomogeneous space-time evolution of
regional Neolithic transitions. Our study shows that the consideration of
climate events improves the simulation of typical lags between cultural
complexes, but that the overall difference from a model without climate events is
not significant. Climate events may not have been as important for early
sociocultural dynamics as endogenous factors.
|
1203.0251
|
Bayesian Posteriors Without Bayes' Theorem
|
math.ST cs.IT math.IT math.PR stat.TH
|
The classical Bayesian posterior arises naturally as the unique solution of
several different optimization problems, without the necessity of interpreting
data as conditional probabilities and then using Bayes' Theorem. For example,
the classical Bayesian posterior is the unique posterior that minimizes the
loss of Shannon information in combining the prior and the likelihood
distributions. These results, direct corollaries of recent results about
conflations of probability distributions, reinforce the use of Bayesian
posteriors, and may help partially reconcile some of the differences between
classical and Bayesian statistics.
|
1203.0265
|
Image Fusion and Re-Modified SPIHT for Fused Image
|
cs.CV
|
This paper presents Discrete Wavelet-based fusion techniques for combining
perceptually important image features. The SPIHT (Set Partitioning in
Hierarchical Trees) algorithm is an efficient method for lossy and lossless
coding of the fused image. This paper presents some modifications to the SPIHT
algorithm, based on the idea of insignificant correlation of wavelet
coefficients among the medium- and high-frequency sub-bands. In the RE-MSPIHT
algorithm, wavelet coefficients are scaled prior to SPIHT coding based on
sub-band importance, with the goal of minimizing the MSE.
|
1203.0290
|
Weight spectrum of codes associated with the Grassmannian G(3,7)
|
cs.IT math.IT
|
In this paper we consider the problem of determining the weight spectrum of
q-ary codes C(3,m) associated with Grassmann varieties G(3,m). For m=6 this was
done by Nogin. We derive a formula for the weight of a codeword of C(3,m), in
terms of certain varieties associated with alternating trilinear forms on
(F_q)^m. The classification of such forms under the action of the general
linear group GL(m,F_q) is the other component that is required to calculate the
spectrum of C(3,m). For m=7, we explicitly determine the varieties mentioned
above. The classification problem for alternating 3-forms on (F_q)^7 was solved
by Cohen and Helminck, which we then use to determine the spectrum of C(3,7).
|
1203.0298
|
Application of Gist SVM in Cancer Detection
|
cs.LG
|
In this paper, we study the application of GIST SVM in disease prediction
(detection of cancer). Pattern classification problems can be effectively
solved by support vector machines. Here we propose a classifier which can
differentiate between patients having benign and malignant cancer cells. To
improve the accuracy of classification, we propose to determine the optimal
size of the training set and to perform feature selection. To find the optimal
size of the training set, training sets of different sizes are evaluated
experimentally and the one with the highest classification rate is selected.
The optimal features are selected through their F-scores.
|
1203.0332
|
A Personalized Tag-Based Recommendation in Social Web Systems
|
cs.SI cs.IR
|
Tagging activity has been recently identified as a potential source of
knowledge about personal interests, preferences, goals, and other attributes
known from user models. Tags themselves can be therefore used for finding
personalized recommendations of items. In this paper, we present a tag-based
recommender system which suggests similar Web pages based on the similarity of
their tags from a Web 2.0 tagging application. The proposed approach extends
the basic similarity calculus with external factors such as tag popularity, tag
representativeness and the affinity between user and tag. In order to study and
evaluate the recommender system, we have conducted an experiment involving 38
people from 12 countries using data from Del.icio.us, a social bookmarking web
system on which users can share their personal bookmarks.
|
1203.0411
|
The Complexity of Online Voter Control in Sequential Elections
|
cs.GT cs.CC cs.MA
|
Previous work on voter control, which refers to situations where a chair
seeks to change the outcome of an election by deleting, adding, or partitioning
voters, takes for granted that the chair knows all the voters' preferences and
that all votes are cast simultaneously. However, elections are often held
sequentially and the chair thus knows only the previously cast votes and not
the future ones, yet needs to decide instantaneously which control action to
take. We introduce a framework that models online voter control in sequential
elections. We show that the related problems can be much harder than in the
standard (non-online) case: For certain election systems, even with efficient
winner problems, online control by deleting, adding, or partitioning voters is
PSPACE-complete, even if there are only two candidates. In addition, we obtain
(by a new characterization of coNP in terms of weight-bounded alternating
Turing machines) completeness for coNP in the deleting/adding cases with a
bounded deletion/addition limit, and we obtain completeness for NP in the
partition cases with an additional restriction. We also show that for
plurality, online control by deleting or adding voters is in P, and for
partitioning voters is coNP-hard.
|
1203.0436
|
(Dual) Hoops Have Unique Halving
|
cs.AI math.LO
|
Continuous logic extends the multi-valued Lukasiewicz logic by adding a
halving operator on propositions. This extension is designed to give a more
satisfactory model theory for continuous structures. The semantics of these
logics can be given using specialisations of algebraic structures known as
hoops. As part of an investigation into the metatheory of propositional
continuous logic, we were indebted to Prover9 for finding a proof of an
important algebraic law.
|
1203.0453
|
Change-Point Detection in Time-Series Data by Relative Density-Ratio
Estimation
|
stat.ML cs.LG stat.ME
|
The objective of change-point detection is to discover abrupt property
changes lying behind time-series data. In this paper, we present a novel
statistical change-point detection algorithm based on non-parametric divergence
estimation between time-series samples from two retrospective segments. Our
method uses the relative Pearson divergence as a divergence measure, and it is
accurately and efficiently estimated by a method of direct density-ratio
estimation. Through experiments on artificial and real-world datasets including
human-activity sensing, speech, and Twitter messages, we demonstrate the
usefulness of the proposed method.
|
1203.0474
|
Orthogonal Designs and a Cubic Binary Function
|
cs.IT math.CO math.IT
|
Orthogonal designs are fundamental mathematical notions used in the
construction of space time block codes for wireless transmissions. Designs have
two important parameters, the rate and the decoding delay; the main problem of
the theory is to construct designs maximizing the rate and minimizing the
decoding delay. All known constructions of complex orthogonal designs (CODs)
are inductive or algorithmic. In this paper, we present an explicit
construction of optimal CODs: we do not apply recurrent procedures but
calculate the matrix elements directly. Our
formula is based on a cubic function in two binary n-vectors. In our previous
work (Comm. Math. Phys., 2010, and J. Pure and Appl. Algebra, 2011), we used
this function to define a series of non-associative algebras generalizing the
classical algebra of octonions and to obtain sum of squares identities of
Hurwitz-Radon type.
|
1203.0488
|
Multi-Level Feature Descriptor for Robust Texture Classification via
Locality-Constrained Collaborative Strategy
|
cs.CV cs.IR
|
This paper introduces a simple but highly efficient ensemble for robust
texture classification, which can effectively deal with translation, scale,
and significant viewpoint changes. The proposed method first inherits
the spirit of spatial pyramid matching model (SPM), which is popular for
encoding spatial distribution of local features, but in a flexible way,
partitioning the original image into different levels and incorporating
different overlapping patterns of each level. This flexible setup helps capture
the informative features and produces sufficient local feature codes by some
well-chosen aggregation statistics or pooling operations within each
partitioned region, even when only a few sample images are available for
training. Then each texture image is represented by several orderless feature
codes and thereby all the training data form a reliable feature pond. Finally,
to take full advantage of this feature pond, we develop a collaborative
representation-based strategy with locality constraint (LC-CRC) for the final
classification, and experimental results on three well-known public texture
datasets demonstrate the proposed approach is very competitive and even
outperforms several state-of-the-art methods. Particularly, when only a few
samples of each category are available for training, our approach still
achieves very high classification performance.
|
1203.0502
|
Identifying influential spreaders and efficiently estimating infection
numbers in epidemic models: a walk counting approach
|
physics.bio-ph cs.SI physics.soc-ph
|
We introduce a new method to efficiently approximate the number of infections
resulting from a given initially-infected node in a network of susceptible
individuals. Our approach is based on counting the number of possible infection
walks of various lengths to each other node in the network. We analytically
study the properties of our method, in particular demonstrating different forms
for SIS and SIR disease spreading (e.g. under the SIR model our method counts
self-avoiding walks). In comparison to existing methods to infer the spreading
efficiency of different nodes in the network (based on degree, k-shell
decomposition analysis and different centrality measures), our method directly
considers the spreading process and, as such, is unique in providing estimation
of actual numbers of infections. Crucially, in simulating infections on various
real-world networks with the SIR model, we show that our walks-based method
improves the inference of effectiveness of nodes over a wide range of infection
rates compared to existing methods. We also analyse the trade-off between
estimate accuracy and computational cost, showing that the improved accuracy
can still be obtained at a computational cost comparable to other methods.
|
1203.0504
|
Modelling Social Structures and Hierarchies in Language Evolution
|
cs.CL cs.AI cs.MA
|
Language evolution might have preferred certain prior social configurations
over others. Experiments conducted with models of different social structures
(varying subgroup interactions and the role of a dominant interlocutor) suggest
that having isolated agent groups, rather than one interconnected agent
community, is more advantageous for the emergence of a social communication
system. Distinctive
groups that are closely connected by communication yield systems less like
natural language than fully isolated groups inhabiting the same world.
Furthermore, the addition of a dominant male who is asymmetrically favoured as
a hearer but equally likely to be a speaker has no positive influence on the
disjoint groups.
|
1203.0512
|
Establishing linguistic conventions in task-oriented primeval dialogue
|
cs.CL cs.AI cs.MA
|
In this paper, we claim that language is likely to have emerged as a
mechanism for coordinating the solution of complex tasks. To confirm this
thesis, computer simulations are performed based on the coordination task
presented by Garrod & Anderson (1987). The role of success in task-oriented
dialogue is analytically evaluated with the help of performance measurements
and a thorough lexical analysis of the emergent communication system.
Simulation results confirm a strong effect of success mattering on both
reliability and dispersion of linguistic conventions.
|
1203.0518
|
Overview of EIREX 2011: Crowdsourcing
|
cs.IR
|
The second Information Retrieval Education through EXperimentation track
(EIREX 2011) was run at the University Carlos III of Madrid, during the 2011
spring semester. EIREX 2011 is the second in a series of experiments designed
to foster new Information Retrieval (IR) education methodologies and resources,
with the specific goal of teaching undergraduate IR courses from an
experimental perspective. For an introduction to the motivation behind the
EIREX experiments, see the first sections of [Urbano et al., 2011a]. For
information on other editions of EIREX and related data, see the website at
http://ir.kr.inf.uc3m.es/eirex/. The EIREX series has the following goals: a)
to help students get a view of the Information Retrieval process as they would
find it in a real-world scenario, either industrial or academic; b) to make
students realize the importance of laboratory experiments in Computer Science
and have them initiated in their execution and analysis; c) to create a public
repository of resources to teach Information Retrieval courses; d) to seek the
collaboration and active participation of other Universities in this endeavor.
This overview paper summarizes the results of the EIREX 2011 track, focusing on
the creation of the test collection and the analysis to assess its reliability.
|
1203.0535
|
On Facebook, most ties are weak
|
cs.SI cs.CY physics.soc-ph
|
Pervasive socio-technical networks bring new conceptual and technological
challenges to developers and users alike. A central research theme is
evaluation of the intensity of relations linking users and how they facilitate
communication and the spread of information. These aspects of human
relationships have been studied extensively in the social sciences under the
framework of the "strength of weak ties" theory proposed by Mark Granovetter.
Some research has considered whether that theory can be extended to online
social networks like Facebook, suggesting interaction data can be used to
predict the strength of ties. The approaches being used require handling
user-generated data that is often not publicly available due to privacy
concerns. Here, we propose an alternative definition of weak and strong ties
that requires knowledge of only the topology of the social network (such as who
is a friend of whom on Facebook), relying on the fact that online social
networks, or OSNs, tend to fragment into communities. We thus suggest
classifying as weak ties those edges linking individuals belonging to different
communities and strong ties as those connecting users in the same community. We
tested this definition on a large network representing part of the Facebook
social graph and studied how weak and strong ties affect the
information-diffusion process. Our findings suggest individuals in OSNs
self-organize to create well-connected communities, while weak ties yield
cohesion and optimize the coverage of information spread.
|
1203.0550
|
Algorithms for Learning Kernels Based on Centered Alignment
|
cs.LG cs.AI
|
This paper presents new and effective algorithms for learning kernels. In
particular, as shown by our empirical results, these algorithms consistently
outperform the so-called uniform combination solution that has proven to be
difficult to improve upon in the past, as well as other algorithms for learning
kernels based on convex combinations of base kernels in both classification and
regression. Our algorithms are based on the notion of centered alignment which
is used as a similarity measure between kernels or kernel matrices. We present
a number of novel algorithmic, theoretical, and empirical results for learning
kernels based on our notion of centered alignment. In particular, we describe
efficient algorithms for learning a maximum alignment kernel by showing that
the problem can be reduced to a simple QP and discuss a one-stage algorithm for
learning both a kernel and a hypothesis based on that kernel using an
alignment-based regularization. Our theoretical results include a novel
concentration bound for centered alignment between kernel matrices, the proof
of the existence of effective predictors for kernels with high alignment, both
for classification and for regression, and the proof of stability-based
generalization bounds for a broad family of algorithms for learning kernels
based on centered alignment. We also report the results of experiments with our
centered alignment-based algorithms in both classification and regression.
|