id | title | categories | abstract
|---|---|---|---|
1009.1166
|
Functorial Data Migration
|
cs.DB math.CT
|
In this paper we present a simple database definition language: that of
categories and functors. A database schema is a small category and an instance
is a set-valued functor on it. We show that morphisms of schemas induce three
"data migration functors", which translate instances from one schema to the
other in canonical ways. These functors parameterize projections, unions, and
joins over all tables simultaneously and can be used in place of conjunctive
and disjunctive queries. We also show how to connect a database and a
functional programming language by introducing a functorial connection between
the schema and the category of types for that language. We begin the paper with
a multitude of examples to motivate the definitions, and near the end we
provide a dictionary whereby one can translate database concepts into
category-theoretic concepts and vice-versa.
|
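The abstract's central definitions lend themselves to a tiny concrete sketch. Below, a schema and an instance are encoded as plain dictionaries, and the pullback data migration (precomposition of an instance with a schema morphism) is a one-liner. The names `Emp`, `Dept`, `works_in` and the encoding itself are illustrative assumptions, not the paper's formalism.

```python
# Toy encoding (hypothetical, for illustration): a schema is a set of objects
# plus named arrows; an instance assigns a set to each object and a function
# (here a dict) to each arrow.
schema_S = {"objects": ["Emp", "Dept"], "arrows": {"works_in": ("Emp", "Dept")}}

inst = {
    "sets": {"Emp": {"alice", "bob"}, "Dept": {"cs", "math"}},
    "funs": {"works_in": {"alice": "cs", "bob": "math"}},
}

# A schema morphism F: T -> S maps objects and arrows of T to those of S.
schema_T = {"objects": ["Person"], "arrows": {}}
F = {"objects": {"Person": "Emp"}, "arrows": {}}

def delta(F, instance):
    """Pullback data migration: the migrated instance is I composed with F."""
    return {
        "sets": {o: instance["sets"][F["objects"][o]] for o in F["objects"]},
        "funs": {a: instance["funs"][F["arrows"][a]] for a in F["arrows"]},
    }

migrated = delta(F, inst)
print(sorted(migrated["sets"]["Person"]))  # ['alice', 'bob']
```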
1009.1174
|
Parameterized Complexity Results in Symmetry Breaking
|
cs.AI cs.CC
|
Symmetry is a common feature of many combinatorial problems. Unfortunately
eliminating all symmetry from a problem is often computationally intractable.
This paper argues that recent parameterized complexity results provide insight
into that intractability and help identify special cases in which symmetry can
be dealt with more tractably
|
1009.1193
|
Coarse Network Coding: A Simple Relay Strategy to Resolve Interference
|
cs.IT math.IT
|
Reminiscent of the parity function in network coding for the butterfly
network, it is shown that forwarding an even/odd indicator bit for a scalar
quantization of a relay observation recovers 1 bit of information at the two
destinations in a noiseless interference channel where interference is treated
as noise. Based on this observation, a coding strategy is proposed to improve
the rate of both users at the same time using a relay node in an interference
channel. In this strategy, the relay observes a linear combination of the two
source signals, and broadcasts a common message to the two destinations over a
shared out-of-band link of rate R0 bits per channel use. The relay message
consists of the bin index of a structured binning scheme obtained from a
2^R0-way partition of the square lattice in the complex plane. We show that
such a scalar quantization-binning relay strategy asymptotically achieves the
cut-set bound in an interference channel with a common out-of-band relay link
of limited rate, improving the sum rate by two bits for every bit relayed,
asymptotically at high signal to noise ratios (SNR) and when interference is
treated as noise. We then use low-density parity-check (LDPC) codes along with
bit-interleaved coded-modulation (BICM) as a practical coding scheme for the
proposed strategy. We consider matched and mismatched scenarios, depending on
whether the input alphabet of the interference signal is known or unknown to
the decoder, respectively. For the matched scenario, we show the proposed
strategy results in significant gains in SNR. For the mismatched scenario, we
show that the proposed strategy results in rate improvements that, without the
relay, cannot be achieved by merely increasing transmit powers. Finally, we use
generalized mutual information analysis to characterize the theoretical
performance of the mismatched scenario and validate our simulation results.
|
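As a rough illustration of the "bin index of a structured binning" idea in the abstract above, one plausible concretization quantizes the complex relay observation to a scaled integer lattice and colors lattice points modulo 2^R0. This particular partition is an assumption for illustration, not necessarily the paper's exact scheme.

```python
def bin_index(y: complex, step: float, R0: int) -> int:
    """Quantize a complex relay observation to the scaled integer lattice and
    return a coarse bin index: a mod-2^R0 coloring of lattice points.
    (An illustrative assumption, not necessarily the paper's partition.)"""
    a = round(y.real / step)
    b = round(y.imag / step)
    return (a + b) % (2 ** R0)

# With R0 = 1 this reduces to a single even/odd indicator bit.
print(bin_index(3.1 + 0.9j, 1.0, 1))  # 0  (lattice point (3, 1), 3 + 1 even)
```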
1009.1225
|
A family of sequences with large size and good correlation property
arising from $M$-ary Sidelnikov sequences of period $q^d-1$
|
cs.IT math.IT
|
Let $q$ be any prime power and let $d$ be a positive integer greater than 1.
In this paper, we construct a family of $M$-ary sequences of period $q-1$ from
a given $M$-ary Sidelnikov sequence of period $q^d-1$, where $M \mid q-1$.
Under mild restrictions on $d$, we show that the maximum correlation magnitude
of the family is upper bounded by $(2d-1)\sqrt{q}+1$ and that its asymptotic
size, as $q\rightarrow\infty$, is $\frac{(M-1)q^{d-1}}{d}$. This extends the
pioneering work of Yu and Gong for the $d=2$ case.
|
1009.1254
|
Multiuser broadcast erasure channel with feedback - capacity and
algorithms
|
cs.IT cs.DM math.IT
|
We consider the $N$-user broadcast erasure channel with $N$ unicast sessions
(one for each user) where receiver feedback is regularly sent to the
transmitter in the form of ACK/NACK messages. We first provide a generic outer
bound to the capacity of this system; we then propose a virtual-queue-based
inter-session mixing coding algorithm, determine its rate region and show that
it achieves capacity under certain conditions on channel statistics, assuming
that instantaneous feedback is known to all users. Removing this assumption
results in a rate region that asymptotically differs from the outer bound by 1
bit as $L\to \infty$, where $L$ is the number of bits per packet (packet
length). For the case of arbitrary channel statistics, we present a
modification of the previous algorithm whose rate region is identical to the
outer bound for $N=3$, when instantaneous feedback is known to all users, and differs
from the bound by 1 bit as $L\to \infty$, when the 3 users know only their own
ACK. The proposed algorithms do not require any prior knowledge of channel
statistics.
|
1009.1305
|
Wideband Spectrum Sensing at Sub-Nyquist Rates
|
cs.AR cs.IT math.IT
|
We present a mixed analog-digital spectrum sensing method that is especially
suited to the typical wideband setting of cognitive radio (CR). The advantages
of our system with respect to current architectures are threefold. First, our
analog front-end is fixed and does not involve scanning hardware. Second, both
the analog-to-digital conversion (ADC) and the digital signal processing (DSP)
rates are substantially below Nyquist. Finally, the sensing resources are
shared with the reception path of the CR, so that the low-rate streaming samples
can be used for communication purposes of the device, besides the sensing
functionality they provide. Combining these advantages leads to a real time map
of the spectrum with minimal use of mobile resources. Our approach is based on
the modulated wideband converter (MWC) system, which samples sparse wideband
inputs at sub-Nyquist rates. We report on results of hardware experiments,
conducted on an MWC prototype circuit, which affirm fast and accurate spectrum
sensing in parallel to CR communication.
|
1009.1362
|
Approximate Lesion Localization in Dermoscopy Images
|
cs.CV
|
Background: Dermoscopy is one of the major imaging modalities used in the
diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty
and subjectivity of human interpretation, automated analysis of dermoscopy
images has become an important research area. Border detection is often the
first step in this analysis. Methods: In this article, we present an
approximate lesion localization method that serves as a preprocessing step for
detecting borders in dermoscopy images. In this method, first the black frame
around the image is removed using an iterative algorithm. The approximate
location of the lesion is then determined using an ensemble of thresholding
algorithms. Results: The method is tested on a set of 428 dermoscopy images.
The localization error is quantified by a metric that uses dermatologist
determined borders as the ground truth. Conclusion: The results demonstrate
that the method presented here achieves both fast and accurate localization of
lesions in dermoscopy images.
|
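The "ensemble of thresholding algorithms" step described above can be sketched as a per-pixel majority vote over several global thresholds. The particular ensemble members below (mean, median, midrange) are assumptions for illustration, not the paper's actual choice.

```python
from statistics import mean, median

def ensemble_threshold(img):
    """Fuse several global thresholding rules by per-pixel majority vote.
    A minimal sketch of an ensemble-of-thresholds localizer; the members
    (mean, median, midrange) are illustrative assumptions.
    img: 2D list of grayscale values; returns a binary mask."""
    px = [v for row in img for v in row]
    thresholds = [mean(px), median(px), (min(px) + max(px)) / 2]
    def vote(v):
        # A pixel is foreground if it exceeds a majority of the thresholds.
        return sum(v > t for t in thresholds) >= 2
    return [[int(vote(v)) for v in row] for row in img]

img = [[10, 10, 200], [10, 220, 210], [10, 10, 10]]
mask = ensemble_threshold(img)
print(mask[1][1], mask[0][0])  # 1 0
```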
1009.1446
|
Comparing Prediction Market Structures, With an Application to Market
Making
|
q-fin.TR cs.AI cs.CE
|
Ensuring sufficient liquidity is one of the key challenges for designers of
prediction markets. Various market making algorithms have been proposed in the
literature and deployed in practice, but there has been little effort to
evaluate their benefits and disadvantages in a systematic manner. We introduce
a novel experimental design for comparing market structures in live trading
that ensures fair comparison between two different microstructures with the
same trading population. Participants trade on outcomes related to a
two-dimensional random walk that they observe on their computer screens. They
can simultaneously trade in two markets, corresponding to the independent
horizontal and vertical random walks. We use this experimental design to
compare the popular inventory-based logarithmic market scoring rule (LMSR)
market maker and a new information-based Bayesian market maker (BMM). Our
experiments reveal that BMM can offer significant benefits in terms of price
stability and expected loss when controlling for liquidity; the caveat is that,
unlike LMSR, BMM does not guarantee bounded loss. Our investigation also
elucidates some general properties of market makers in prediction markets. In
particular, there is an inherent tradeoff between adaptability to market shocks
and convergence during market equilibrium.
|
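For context, the inventory-based LMSR market maker compared above has a standard closed form (Hanson's rule): cost C(q) = b log Σ_i exp(q_i/b), with instantaneous prices given by its gradient and worst-case loss bounded by b log n. A minimal sketch:

```python
import math

class LMSR:
    """Logarithmic market scoring rule market maker (the inventory-based
    maker referenced in the abstract). Worst-case loss <= b * log(n)."""
    def __init__(self, n_outcomes: int, b: float = 100.0):
        self.b = b
        self.q = [0.0] * n_outcomes  # outstanding shares per outcome

    def cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, i):
        z = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[i] / self.b) / z

    def buy(self, i, shares):
        """Charge for buying `shares` of outcome i: C(q + delta) - C(q)."""
        new_q = list(self.q)
        new_q[i] += shares
        fee = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return fee

mm = LMSR(2)
print(round(mm.price(0), 2))  # 0.5 at the start
mm.buy(0, 50)
print(mm.price(0) > 0.5)      # buying pushes the price up
```

Note the bounded-loss property of this cost function is exactly the guarantee the abstract says BMM lacks.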
1009.1460
|
Two-Way Transmission Capacity of Wireless Ad-hoc Networks
|
cs.IT math.IT
|
The transmission capacity of an ad-hoc network is the maximum density of
active transmitters per unit area, given an outage constraint at each receiver
for a fixed rate of transmission. Most prior work on finding the transmission
capacity of ad-hoc networks has focused only on one-way communication where a
source communicates with a destination and no data is sent from the destination
to the source. In practice, however, two-way or bidirectional data transmission
is required to support control functions like packet acknowledgements and
channel feedback. This paper extends the concept of transmission capacity to
two-way wireless ad-hoc networks by incorporating the concept of a two-way
outage with different rate requirements in both directions. Tight upper and
lower bounds on the two-way transmission capacity are derived for frequency
division duplexing. These bounds are then used to find the optimal
bidirectional bandwidth allocation that maximizes the two-way transmission
capacity, which is shown to perform better than allocating bandwidth
proportional to the desired rate in both directions. Using the proposed two-way
transmission capacity framework, a lower bound on the two-way transmission
capacity with transmit beamforming using limited feedback is derived as a
function of the bandwidth and the number of bits allocated for feedback.
|
1009.1498
|
A Statistical Measure of Complexity
|
nlin.AO cs.IT math.IT physics.data-an
|
In this chapter, a statistical measure of complexity is introduced and some
of its properties are discussed. Also, some straightforward applications are
shown.
|
1009.1512
|
Applications of semidefinite programming to coding theory
|
cs.IT math.IT
|
We survey recent generalizations and improvements of the linear programming
method that involve semidefinite programming. A general framework using group
representations and tools from graph theory is provided.
|
1009.1513
|
Artificial Neural Networks, Symmetries and Differential Evolution
|
cs.NE
|
Neuroevolution is an active and growing research field, especially in the era
of increasingly parallel computing architectures. Learning methods for
Artificial Neural Networks (ANN) can be divided into two groups. Neuroevolution
is mainly based on Monte-Carlo techniques and belongs to the group of global
search methods, whereas other methods such as backpropagation belong to the
group of local search methods. ANNs possess important symmetry properties,
which can influence Monte-Carlo methods. On the other hand, local search
methods are generally unaffected by these symmetries. In the literature,
dealing with these symmetries is generally reported to be ineffective or even
to yield inferior results. In this paper, we introduce the so-called
Minimum Global Optimum Proximity principle derived from theoretical
considerations for effective symmetry breaking, applied to offline supervised
learning. Using Differential Evolution (DE), which is a popular and robust
evolutionary global optimization method, we experimentally show significant
global search efficiency improvements by symmetry breaking.
|
1009.1533
|
Sensing Matrix Optimization for Block-Sparse Decoding
|
cs.IT math.IT
|
Recent work has demonstrated that using a carefully designed sensing matrix,
rather than a random one, can improve the performance of compressed sensing. In
particular, a well-designed sensing matrix can reduce the coherence between the
atoms of the equivalent dictionary, and as a consequence, reduce the
reconstruction error. In some applications, the signals of interest can be well
approximated by a union of a small number of subspaces (e.g., face recognition
and motion segmentation). This implies the existence of a dictionary which
leads to block-sparse representations. In this work, we propose a framework for
sensing matrix design that improves the ability of block-sparse approximation
techniques to reconstruct and classify signals. This method is based on
minimizing a weighted sum of the inter-block coherence and the sub-block
coherence of the equivalent dictionary. Our experiments show that the proposed
algorithm significantly improves signal recovery and classification ability of
the Block-OMP algorithm compared to sensing matrix optimization methods that do
not employ block structure.
|
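The design objective above weighs two quantities that can be illustrated on a toy dictionary. The sketch below uses maximum absolute inner products between unit-norm atoms as a simplified stand-in for the inter-block and sub-block coherence; the formal definitions involve spectral norms of cross-Gram blocks, so this is an illustrative approximation only.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def coherences(atoms, blocks):
    """Simplified proxy for the inter-block and sub-block coherence of a
    dictionary: max |<a_i, a_j>| over pairs of unit-norm atoms in different
    blocks vs. in the same block. (The paper's definitions use spectral
    norms of cross-Gram blocks; this is an illustrative approximation.)
    atoms: list of unit-norm columns; blocks: list of block labels."""
    inter, sub = 0.0, 0.0
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            c = abs(dot(atoms[i], atoms[j]))
            if blocks[i] == blocks[j]:
                sub = max(sub, c)
            else:
                inter = max(inter, c)
    return inter, sub

atoms = [[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]]
inter, sub = coherences(atoms, [0, 0, 1])
print(round(sub, 2), round(inter, 2))  # 0.8 within block 0; 0.6 across blocks
```

A sensing-matrix design along the abstract's lines would seek a matrix whose equivalent dictionary drives a weighted sum of these two quantities down.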
1009.1686
|
Statistical Behavior of Embeddedness and Communities of Overlapping
Cliques in Online Social Networks
|
cs.SI physics.soc-ph
|
Degree distribution of nodes, especially a power law degree distribution, has
been regarded as one of the most significant structural characteristics of
social and information networks. Node degree, however, only discloses the
first-order structure of a network. Higher-order structures such as the edge
embeddedness and the size of communities may play more important roles in many
online social networks. In this paper, we provide empirical evidence on the
existence of rich higher-order structural characteristics in online social
networks, develop mathematical models to interpret and model these
characteristics, and discuss their various applications in practice. In
particular, 1) We show that the embeddedness distribution of social links in
many social networks has interesting and rich behavior that cannot be captured
by well-known network models. We also provide empirical results showing a clear
correlation between the embeddedness distribution and the average number of
messages communicated between pairs of social network nodes. 2) We formally
prove that random k-tree, a recent model for complex networks, has a power law
embeddedness distribution, and show empirically that the random k-tree model
can be used to capture the rich behavior of higher-order structures we observed
in real-world social networks. 3) Going beyond the embeddedness, we show that a
variant of the random k-tree model can be used to capture the power law
distribution of the size of communities of overlapping cliques discovered
recently.
|
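The edge embeddedness studied above has a standard definition: the number of neighbors shared by an edge's two endpoints. A minimal sketch of computing its distribution over a graph:

```python
from collections import defaultdict

def embeddedness_distribution(edges):
    """Embeddedness of an edge (u, v) = number of neighbors shared by u and v
    (the standard definition; the paper studies this quantity's distribution).
    Returns {embeddedness value: count of edges with that value}."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    dist = defaultdict(int)
    for u, v in edges:
        dist[len((adj[u] & adj[v]) - {u, v})] += 1
    return dict(dist)

# A triangle plus a pendant edge: each triangle edge has embeddedness 1.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(embeddedness_distribution(edges))  # {1: 3, 0: 1}
```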
1009.1690
|
Probabilistic Models over Ordered Partitions with Application in
Learning to Rank
|
cs.IR stat.ML
|
This paper addresses the general problem of modelling and learning rank data
with ties. We propose a probabilistic generative model that models the process
as permutations over partitions. This results in a super-exponential
combinatorial state space with an unknown number of partitions and unknown
ordering among them. We approach the problem from discrete choice theory,
where subsets are chosen in a stagewise manner, significantly reducing the
state space at each stage. Further, we show that with suitable parameterisation,
we can still learn the models in linear time. We evaluate the proposed models
on the problem of learning to rank with the data from the recently held Yahoo!
challenge, and demonstrate that the models are competitive against well-known
rivals.
|
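The stagewise choice construction above generalizes the classical Plackett-Luce model, which is recovered when every partition block is a singleton (no ties). A sketch of that special case:

```python
import math

def plackett_luce_logprob(scores, ranking):
    """Log-probability of a full ranking under the classical Plackett-Luce
    model: the special case of an ordered-partition model in which every
    block is a singleton (no ties). `ranking` lists item indices from best
    to worst; at each stage the top remaining item is chosen with
    probability proportional to exp(score)."""
    remaining = list(ranking)
    logp = 0.0
    for i in ranking:
        z = sum(math.exp(scores[j]) for j in remaining)
        logp += scores[i] - math.log(z)
        remaining.remove(i)
    return logp

# Two items with equal scores: each of the two orders has probability 1/2.
print(round(math.exp(plackett_luce_logprob([0.0, 0.0], [0, 1])), 2))  # 0.5
```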
1009.1691
|
Multi-scale turbulence modeling and maximum information principle. Part
1
|
physics.flu-dyn cs.IT math.IT
|
We discuss averaged turbulence modeling of multi-scales of length for an
incompressible Newtonian fluid, with the help of the maximum information
principle. We suppose that there exists a function basis to decompose the
turbulent fluctuations in a flow of our concern into the components associated
with various spatial scales and that there is a probability density function
(PDF) of these fluctuation components. The unbiased form of the PDF is
determined and the turbulence model is closed, with the multi-scale
correlations up to the fourth order, through maximizing the information under
the constraints of equality and inequality for that flow. Due to the
computational difficulty to maximize the information, a closely related but
simple alternative objective is sought, like the determinant or the trace of
the second order correlations of the turbulent flow. Some preliminary results
and implications from the application to homogeneous turbulence are presented.
Some issues yet to be resolved are indicated.
|
1009.1720
|
Is there a physically universal cellular automaton or Hamiltonian?
|
quant-ph cs.AI
|
It is known that both quantum and classical cellular automata (CA) exist that
are computationally universal in the sense that they can simulate, after
appropriate initialization, any quantum or classical computation, respectively.
Here we introduce a different notion of universality: a CA is called physically
universal if every transformation on any finite region can be (approximately)
implemented by the autonomous time evolution of the system after the complement
of the region has been initialized in an appropriate way. We pose the question
of whether physically universal CAs exist. Such CAs would provide a model of
the world where the boundary between a physical system and its controller can
be consistently shifted, in analogy to the Heisenberg cut for the quantum
measurement problem. We propose to study the thermodynamic cost of computation
and control within such a model because implementing a cyclic process on a
microsystem may require a non-cyclic process for its controller, whereas
implementing a cyclic process on system and controller may require the
implementation of a non-cyclic process on a "meta"-controller, and so on.
Physically universal CAs avoid this infinite hierarchy of controllers and the
cost of implementing cycles on a subsystem can be described by mixing
properties of the CA dynamics. We define a physical prior on the CA
configurations by applying the dynamics to an initial state where half of the
CA is in the maximum entropy state and half of it is in the all-zero state
(thus reflecting the fact that life requires non-equilibrium states like the
boundary between a hot and a cold reservoir). As opposed to Solomonoff's
prior, our prior does not only account for the Kolmogorov complexity but also
for the cost of isolating the system during the state preparation if the
preparation process is not robust.
|
1009.1731
|
Identifying the Community Structure of the International-Trade Multi
Network
|
physics.soc-ph cs.SI
|
We study the community structure of the multi-network of commodity-specific
trade relations among world countries over the 1992-2003 period. We compare
structures across commodities and time by means of the normalized mutual
information index (NMI). We also compare them with exogenous community
structures induced by geographical distances and regional trade agreements
(RTAs). We find that commodity-specific community structures are very
heterogeneous and much more fragmented than the one characterizing the
aggregate international trade network (ITN). This shows
that the aggregate properties of the ITN may result (and be very different)
from the aggregation of very diverse commodity-specific layers of the
multi-network. We also show that commodity-specific community structures, especially
those related to the chemical sector, are becoming more and more similar to the
aggregate one. Finally, our findings suggest that geographical distance is much
more correlated with the observed community structure than RTAs. This result
strengthens previous findings from the empirical literature on trade.
|
1009.1759
|
On Compression of Data Encrypted with Block Ciphers
|
cs.IT cs.CR math.IT
|
This paper investigates compression of data encrypted with block ciphers,
such as the Advanced Encryption Standard (AES). It is shown that such data can
be feasibly compressed without knowledge of the secret key. Block ciphers
operating in various chaining modes are considered and it is shown how
compression can be achieved without compromising security of the encryption
scheme. Further, it is shown that there exists a fundamental limitation to the
practical compressibility of block ciphers when no chaining is used between
blocks. Some performance results for practical code constructions used to
compress binary sources are presented.
|
1009.1889
|
Spatially regularized compressed sensing of diffusion MRI data
|
cs.IT math.IT physics.med-ph
|
The present paper introduces a method for substantial reduction of the number
of diffusion encoding gradients required for reliable reconstruction of HARDI
signals. The method exploits the theory of compressed sensing (CS), which
establishes conditions on which a signal of interest can be recovered from its
under-sampled measurements, provided that the signal admits a sparse
representation in the domain of a linear transform. In the case at hand, the
latter is defined to be the spherical ridgelet transform, which excels in
sparsifying HARDI signals. What makes the resulting reconstruction procedure
even more accurate is a combination of the sparsity constraints in the
diffusion domain with additional constraints imposed on the estimated diffusion
field in the spatial domain. Accordingly, the present paper describes a novel
way to combine the diffusion- and spatial-domain constraints to achieve a
maximal reduction in the number of diffusion measurements, while sacrificing
little in terms of reconstruction accuracy. Finally, details are provided on a
particularly efficient numerical scheme which can be used to solve the
aforementioned reconstruction problem by means of standard and readily
available estimation tools. The paper is concluded with experimental results
which support the practical value of the proposed reconstruction methodology.
|
1009.1983
|
Evolutionary Computational Method of Facial Expression Analysis for
Content-based Video Retrieval using 2-Dimensional Cellular Automata
|
cs.CV
|
In this paper, Deterministic Cellular Automata (DCA) based video shot
classification and retrieval is proposed. The deterministic 2D Cellular
automata model captures the human facial expressions, both spontaneous and
posed. The determinism stems from the fact that the facial muscle actions are
standardized by the encodings of Facial Action Coding System (FACS) and Action
Units (AUs). Based on these encodings, we generate the set of evolutionary
update rules of the DCA for each facial expression. We consider a
Person-Independent Facial Expression Space (PIFES) to analyze the facial
expressions based on Partitioned 2D-Cellular Automata which capture the
dynamics of facial expressions and classify the shots based on it. The target
video shot is retrieved by comparing the expression obtained for the query
frame's face with the key-face expressions in the database video. When
consecutive key-face expressions in the database are highly similar to the
query frame's face, those key faces are used to generate the set of retrieved
video shots from the database. A concrete example of its application
which realizes an affective interaction between the computer and the user is
proposed. In the affective interaction, the computer can recognize the facial
expression of any given video shot. This interaction endows the computer with
certain ability to adapt to the user's feedback.
|
1009.1990
|
Complexity of Non-Monotonic Logics
|
cs.CC cs.AI cs.LO
|
Over the past few decades, non-monotonic reasoning has developed to be one of
the most important topics in computational logic and artificial intelligence.
Different ways to introduce non-monotonic aspects to classical logic have been
considered, e.g., extension with default rules, extension with modal belief
operators, or modification of the semantics. In this survey we consider a
logical formalism from each of the above possibilities, namely Reiter's default
logic, Moore's autoepistemic logic and McCarthy's circumscription.
Additionally, we consider abduction, where one is not interested in inferences
from a given knowledge base but in computing possible explanations for an
observation with respect to a given knowledge base.
Complexity results for different reasoning tasks for propositional variants
of these logics have been studied already in the nineties. In recent years,
however, a renewed interest in complexity issues can be observed. One current
focal approach is to consider parameterized problems and identify reasonable
parameters that allow for FPT algorithms. In another approach, the emphasis
lies on identifying fragments, i.e., restriction of the logical language, that
allow more efficient algorithms for the most important reasoning tasks. In this
survey we focus on this second aspect. We describe complexity results for
fragments of logical languages obtained by either restricting the allowed set
of operators (e.g., forbidding negation, one might consider only monotone
formulae) or by considering only formulae in conjunctive normal form but with
generalized clause types.
The algorithmic problems we consider are suitable variants of satisfiability
and implication in each of the logics, but also counting problems, where one is
not only interested in the existence of certain objects (e.g., models of a
formula) but asks for their number.
|
1009.2003
|
AI 3D Cybug Gaming
|
cs.AI
|
In this short paper I briefly discuss a 3D war game based on artificial
intelligence concepts, called AI WAR. Going into the details, I present the
importance of the CAICL language and how this language is used in AI WAR.
Moreover, I also present a designed and implemented 3D War Cybug for AI WAR
using CAICL and discuss the implemented strategy to defeat its enemies during
the game life.
|
1009.2009
|
Hierarchical Semi-Markov Conditional Random Fields for Recursive
Sequential Data
|
stat.ML cs.AI
|
Inspired by the hierarchical hidden Markov models (HHMM), we present the
hierarchical semi-Markov conditional random field (HSCRF), a generalisation of
embedded undirected Markov chains to model complex hierarchical, nested Markov
processes. It is parameterised in a discriminative framework and has polynomial
time algorithms for learning and inference. Importantly, we consider
partially-supervised learning and propose algorithms for generalised
partially-supervised learning and constrained inference. We demonstrate the
HSCRF in two applications: (i) recognising human activities of daily living
(ADLs) from indoor surveillance cameras, and (ii) noun-phrase chunking. We show
that the HSCRF is capable of learning rich hierarchical models with reasonable
accuracy in both fully and partially observed data cases.
|
1009.2021
|
The Complexity of Causality and Responsibility for Query Answers and
non-Answers
|
cs.DB cs.AI
|
An answer to a query has a well-defined lineage expression (alternatively
called how-provenance) that explains how the answer was derived. Recent work
has also shown how to compute the lineage of a non-answer to a query. However,
the cause of an answer or non-answer is a more subtle notion and consists, in
general, of only a fragment of the lineage. In this paper, we adapt Halpern,
Pearl, and Chockler's recent definitions of causality and responsibility to
define the causes of answers and non-answers to queries, and their degree of
responsibility. Responsibility captures the notion of degree of causality and
serves to rank potentially many causes by their relative contributions to the
effect. Then, we study the complexity of computing causes and responsibilities
for conjunctive queries. It is known that computing causes is NP-complete in
general. Our first main result shows that all causes to conjunctive queries can
be computed by a relational query which may involve negation. Thus, causality
can be computed in PTIME, and very efficiently so. Next, we study computing
responsibility. Here, we prove that the complexity depends on the conjunctive
query and demonstrate a dichotomy between PTIME and NP-complete cases. For the
PTIME cases, we give a non-trivial algorithm, consisting of a reduction to the
max-flow computation problem. Finally, we prove that, even when it is in PTIME,
responsibility is complete for LOGSPACE, implying that, unlike causality, it
cannot be computed by a relational query.
|
1009.2032
|
Feedback stabilisation of switched systems via iterative approximate
eigenvector assignment
|
cs.SY math.OC
|
This paper presents and implements an iterative feedback design algorithm for
stabilisation of discrete-time switched systems under arbitrary switching
regimes. The algorithm seeks state feedback gains so that the closed-loop
switching system admits a common quadratic Lyapunov function (CQLF) and hence
is uniformly globally exponentially stable. Although the feedback design
problem considered can be solved directly via linear matrix inequalities
(LMIs), direct application of LMIs for feedback design does not provide
information on closed-loop system structure. In contrast, the feedback matrices
computed by the proposed algorithm assign closed-loop structure approximating
that required to satisfy Lie-algebraic conditions that guarantee existence of a
CQLF. The main contribution of the paper is to provide, for single-input
systems, a numerical implementation of the algorithm based on iterative
approximate common eigenvector assignment, and to establish cases where the
algorithm is guaranteed to succeed. We include pseudocode and a few numerical
examples to illustrate advantages and limitations of the proposed technique.
|
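The CQLF condition referenced above is, for discrete-time subsystems x+ = A_i x, the existence of a matrix P > 0 with A_i^T P A_i - P < 0 for all i. A toy feasibility check for the 2x2 case (verifying a given P via leading principal minors, not the paper's design algorithm):

```python
def cqlf_holds(A_list, P):
    """Check whether symmetric P > 0 is a common quadratic Lyapunov matrix
    for the discrete-time subsystems x+ = A_i x, i.e. A_i^T P A_i - P < 0
    for every i. 2x2 case only, using leading-principal-minor tests.
    A toy verification, not the paper's iterative design algorithm."""
    def mat_t(M):
        return [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    def pos_def(M):   # symmetric 2x2: m11 > 0 and det > 0
        return M[0][0] > 0 and M[0][0] * M[1][1] - M[0][1] * M[1][0] > 0
    def neg_def(M):   # symmetric 2x2: m11 < 0 and det > 0
        return M[0][0] < 0 and M[0][0] * M[1][1] - M[0][1] * M[1][0] > 0
    if not pos_def(P):
        return False
    for A in A_list:
        Q = mul(mat_t(A), mul(P, A))           # A^T P A
        D = [[Q[i][j] - P[i][j] for j in range(2)] for i in range(2)]
        if not neg_def(D):
            return False
    return True

A1 = [[0.5, 0.0], [0.0, 0.3]]
A2 = [[0.2, 0.1], [0.0, 0.4]]
print(cqlf_holds([A1, A2], [[1.0, 0.0], [0.0, 1.0]]))  # True
```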
1009.2041
|
Multi-Agent Only-Knowing Revisited
|
cs.AI
|
Levesque introduced the notion of only-knowing to precisely capture the
beliefs of a knowledge base. He also showed how only-knowing can be used to
formalize non-monotonic behavior within a monotonic logic. Despite its appeal,
all attempts to extend only-knowing to the many agent case have undesirable
properties. A belief model by Halpern and Lakemeyer, for instance, appeals to
proof-theoretic constructs in the semantics and needs to axiomatize validity as
part of the logic. It is also not clear how to generalize their ideas to a
first-order case. In this paper, we propose a new account of multi-agent
only-knowing which, for the first time, has a natural possible-world semantics
for a quantified language with equality. We then provide, for the propositional
fragment, a sound and complete axiomatization that faithfully lifts Levesque's
proof theory to the many agent case. We also discuss comparisons to the earlier
approach by Halpern and Lakemeyer.
|
1009.2054
|
Multiplex Structures: Patterns of Complexity in Real-World Networks
|
cs.SI cs.AI physics.soc-ph
|
Complex network theory aims to model and analyze complex systems that consist
of multiple and interdependent components. Among all studies on complex
networks, topological structure analysis is of the most fundamental importance,
as it represents a natural route to understand the dynamics, as well as to
synthesize or optimize the functions, of networks. A broad spectrum of network
structural patterns has been reported over the past decade, such as
communities, multipartites, hubs, authorities, outliers, bow ties, and others.
Here, we show that most individual real-world networks demonstrate multiplex
structures. That is, a multitude of known or even unknown (hidden) patterns can
be simultaneously present in the same network; moreover, they may overlap and
nest within each other to collaboratively form a heterogeneous, nested, or
hierarchical organization, in which different connective phenomena can be
observed at different levels of granularity. In addition, we show that the
multiplex structures hidden in exploratory networks can be well defined, as
well as effectively recognized, within a unified framework consisting of a set
of proposed concepts, models, and algorithms. Our findings provide strong
evidence that most real-world complex systems are driven by a combination of
heterogeneous mechanisms that may collaboratively shape the ubiquitous
multiplex structures we currently observe. This work also contributes a
mathematical tool for analyzing networks from different sources from the new
perspective of unveiling multiplex structures, which will be beneficial to
multiple disciplines including sociology, economics, and computer science.
|
1009.2077
|
A new sufficient condition for sum-rate tightness in quadratic Gaussian
multiterminal source coding
|
cs.IT math.IT
|
This work considers the quadratic Gaussian multiterminal (MT) source coding
problem and provides a new sufficient condition for the Berger-Tung sum-rate
bound to be tight. The converse proof utilizes a set of virtual remote sources
given which the MT sources are block independent with a maximum block size of
two. The given MT source coding problem is then related to a set of
two-terminal problems with matrix-distortion constraints, for which a new lower
bound on the sum-rate is given. Finally, a convex optimization problem is
formulated and a sufficient condition derived for the optimal BT scheme to
satisfy the subgradient-based Karush-Kuhn-Tucker condition. The set of sum-rate
tightness problems defined by our new sufficient condition subsumes all
previously known tight cases and opens a new direction toward a more general
partial solution.
|
1009.2084
|
Ontology Temporal Evolution for Multi-Entity Bayesian Networks under
Exogenous and Endogenous Semantic Updating
|
cs.AI cs.LO
|
It is a challenge for any knowledge-base reasoning system to manage ubiquitous
uncertain ontologies, as well as uncertain updating times, while achieving
acceptable service levels at minimum computational cost. This paper proposes an
application-independent approach to merging ontologies for any open interaction
system. A solution that uses Multi-Entity Bayesian Networks with SWRL rules,
together with a Java program, is presented to dynamically monitor exogenous and
endogenous temporal evolution when updating merged ontologies within a
probabilistic framework for the Semantic Web.
|
1009.2118
|
Restricted strong convexity and weighted matrix completion: Optimal
bounds with noise
|
cs.IT math.IT math.ST stat.TH
|
We consider the matrix completion problem under a form of row/column weighted
entrywise sampling, including the case of uniform entrywise sampling as a
special case. We analyze the associated random observation operator, and prove
that with high probability, it satisfies a form of restricted strong convexity
with respect to a weighted Frobenius norm. Using this property, we obtain as
corollaries a number of error bounds on matrix completion in the weighted
Frobenius norm under noisy sampling and for both exact and near low-rank
matrices. Our results are based on measures of the "spikiness" and
"low-rankness" of matrices that are less restrictive than the incoherence
conditions imposed in previous work. Our technique involves an $M$-estimator
that includes controls on both the rank and spikiness of the solution, and we
establish non-asymptotic error bounds in weighted Frobenius norm for recovering
matrices lying within $\ell_q$-"balls" of bounded spikiness. Using
information-theoretic methods, we show that no algorithm can achieve better
estimates (up to a logarithmic factor) over these same sets, showing that our
conditions on matrices and associated rates are essentially optimal.
|
1009.2221
|
Performance Bounds and Design Criteria for Estimating Finite Rate of
Innovation Signals
|
cs.IT math.IT
|
In this paper, we consider the problem of estimating finite rate of
innovation (FRI) signals from noisy measurements, and specifically analyze the
interaction between FRI techniques and the underlying sampling methods. We
first obtain a fundamental limit on the estimation accuracy attainable
regardless of the sampling method. Next, we provide a bound on the performance
achievable using any specific sampling approach. Essential differences between
the noisy and noise-free cases arise from this analysis. In particular, we
identify settings in which noise-free recovery techniques deteriorate
substantially under slight noise levels, thus quantifying the numerical
instability inherent in such methods. This instability, which is only present
in some families of FRI signals, is shown to be related to a specific type of
structure, which can be characterized by viewing the signal model as a union of
subspaces. Finally, we develop a methodology for choosing the optimal sampling
kernels based on a generalization of the Karhunen--Lo\`eve transform. The
results are illustrated for several types of time-delay estimation problems.
|
1009.2270
|
Active Integrity Constraints and Revision Programming
|
cs.DB
|
We study active integrity constraints and revision programming, two
formalisms designed to describe integrity constraints on databases and to
specify policies on preferred ways to enforce them. Unlike other more commonly
accepted approaches, these two formalisms attempt to provide a declarative
solution to the problem. However, the original semantics of founded repairs for
active integrity constraints and justified revisions for revision programs
differ. Our main goal is to establish a comprehensive framework of semantics
for active integrity constraints, to find a parallel framework for revision
programs, and to relate the two. By doing so, we demonstrate that the two
formalisms proposed independently of each other and based on different
intuitions when viewed within a broader semantic framework turn out to be
notational variants of each other. That lends support to the adequacy of the
semantics we develop for each of the formalisms as the foundation for a
declarative approach to the problem of database update and repair. In the paper
we also study computational properties of the semantics we consider and
establish results concerned with the concept of the minimality of change and
the invariance under the shifting transformation.
|
1009.2274
|
Robust Beamforming for Security in MIMO Wiretap Channels with Imperfect
CSI
|
cs.IT math.IT
|
In this paper, we investigate methods for reducing the likelihood that a
message transmitted between two multiantenna nodes is intercepted by an
undetected eavesdropper. In particular, we focus on the judicious transmission
of artificial interference to mask the desired signal at the time it is
broadcast. Unlike previous work that assumes some prior knowledge of the
eavesdropper's channel and focuses on maximizing secrecy capacity, we consider
the case where no information regarding the eavesdropper is available, and we
use signal-to-interference-plus-noise-ratio (SINR) as our performance metric.
Specifically, we focus on the problem of maximizing the amount of power
available to broadcast a jamming signal intended to hide the desired signal
from a potential eavesdropper, while maintaining a prespecified SINR at the
desired receiver. The jamming signal is designed to be orthogonal to the
information signal when it reaches the desired receiver, assuming both the
receiver and the eavesdropper employ optimal beamformers and possess exact
channel state information (CSI). In practice, the assumption of perfect CSI at
the transmitter is often difficult to justify. Therefore, we also study the
resulting performance degradation due to the presence of imperfect CSI, and we
present robust beamforming schemes that recover a large fraction of the
performance in the perfect CSI case. Numerical simulations verify our
analytical performance predictions, and illustrate the benefit of the robust
beamforming schemes.
|
1009.2275
|
PhishDef: URL Names Say It All
|
cs.CR cs.LG cs.NI
|
Phishing is an increasingly sophisticated method to steal personal user
information using sites that pretend to be legitimate. In this paper, we take
the following steps to identify phishing URLs. First, we carefully select
lexical features of the URLs that are resistant to obfuscation techniques used
by attackers. Second, we evaluate the classification accuracy when using only
lexical features, both automatically and hand-selected, vs. when using
additional features. We show that lexical features are sufficient for all
practical purposes. Third, we thoroughly compare several classification
algorithms, and we propose to use an online method (AROW) that is able to
overcome noisy training data. Based on the insights gained from our analysis,
we propose PhishDef, a phishing detection system that uses only URL names and
combines the above three elements. PhishDef is a highly accurate method (when
compared to state-of-the-art approaches over real datasets), lightweight (thus
appropriate for online and client-side deployment), proactive (based on online
classification rather than blacklists), and resilient to training data
inaccuracies (thus enabling the use of large noisy training data).
|
1009.2305
|
Message Error Analysis of Loopy Belief Propagation for the Sum-Product
Algorithm
|
cs.IT math.IT
|
Belief propagation is known to perform extremely well in many practical
statistical inference and learning problems using graphical models, even in the
presence of multiple loops. The iterative use of belief propagation algorithm
on loopy graphs is referred to as Loopy Belief Propagation (LBP). Various
sufficient conditions for convergence of LBP have been presented; however,
general necessary conditions for its convergence to a unique fixed point remain
unknown. Because the approximation of beliefs to true marginal probabilities
has been shown to relate to the convergence of LBP, several methods have been
explored whose aim is to obtain distance bounds on beliefs when LBP fails to
converge. In this paper, we derive uniform and non-uniform error bounds on
messages, which are tighter than existing ones in the literature, and use these
bounds to derive sufficient conditions for the convergence of LBP in terms of
the sum-product algorithm. We subsequently use these bounds to study the
dynamic behavior of the sum-product algorithm, and analyze the relation between
convergence of LBP and sparsity and walk-summability of graphical models. We
finally use the bounds derived to investigate the accuracy of LBP, as well as
the scheduling priority in asynchronous LBP.
|
1009.2370
|
Optimized IR-HARQ Schemes Based on Punctured LDPC Codes over the BEC
|
cs.IT math.IT
|
We study incremental redundancy hybrid ARQ (IR-HARQ) schemes based on
punctured, finite-length, LDPC codes. The transmission is assumed to take place
over time varying binary erasure channels, such as mobile wireless channels at
the applications layer. We analyze and optimize the throughput and delay
performance of these IR-HARQ protocols under iterative, message-passing
decoding. We derive bounds on the performance that are achievable by such
schemes, and show that, with a simple extension, the iteratively decoded,
punctured-LDPC-code-based IR-HARQ protocol can be made rateless and can operate
close to the general theoretical optimum for a wide range of channel erasure
rates.
|
1009.2443
|
Delay-Optimal User Scheduling and Inter-Cell Interference Management in
Cellular Network via Distributive Stochastic Learning
|
cs.IT math.IT
|
In this paper, we propose a distributive queue-aware intra-cell user
scheduling and inter-cell interference (ICI) management control design for a
delay-optimal cellular downlink system with M base stations (BSs) and K users
in each cell. Each BS has K downlink queues for K users respectively with
heterogeneous arrivals and delay requirements. The ICI management control is
adaptive to joint queue state information (QSI) over a slow time scale, while
the user scheduling control is adaptive to both the joint QSI and the joint
channel state information (CSI) over a faster time scale. We show that the
problem can be modeled as an infinite horizon average cost Partially Observed
Markov Decision Problem (POMDP), which is NP-hard in general. By exploiting the
special structure of the problem, we shall derive an equivalent Bellman
equation to solve the POMDP problem. To address the distributive requirement
and the issues of dimensionality and computational complexity, we derive a
distributive online stochastic learning algorithm, which only requires local
QSI and local CSI at each of the M BSs. We show that the proposed learning
algorithm converges almost surely (with probability 1) and has significant gain
compared with various baselines. The proposed solution only has linear
complexity order O(MK).
|
1009.2464
|
Various virtual structures on a single file system
|
cs.IT math.IT
|
The article provides a new approach to creating the hierarchical structure of
a file system. First, it gives an overview of the existing ways of storing
files in current operating systems. Second, it describes a new way of building
the structures of a file system. This approach allows various structures to be
created by different attributes on the same set of files, using multiple
tree-like structures.
|
1009.2528
|
Is Witsenhausen's counterexample a relevant toy?
|
cs.IT cs.SY math.IT
|
This paper answers a question raised by Doyle on the relevance of the
Witsenhausen counterexample as a toy decentralized control problem. The
question has two sides, the first of which focuses on the lack of an external
channel in the counterexample. Using existing results, we argue that the core
difficulty in the counterexample is retained even in the presence of such a
channel. The second side questions the LQG formulation of the counterexample.
We consider alternative formulations and show that the understanding developed
for the LQG case guides the investigation for these other cases as well.
Specifically, we consider 1) a variation on the original counterexample with
general, but bounded, noise distributions, and 2) an adversarial extension with
bounded disturbance and quadratic costs. For each of these formulations, we
show that quantization-based nonlinear strategies outperform linear strategies
by an arbitrarily large factor. Further, these nonlinear strategies also
perform within a constant factor of the optimal, uniformly over all possible
parameter choices (for fixed noise distributions in the Bayesian case).
Fortuitously, the assumption of bounded noise results in a significant
simplification of proofs as compared to those for the LQG formulation.
Therefore, the results in this paper are also of pedagogical interest.
|
1009.2556
|
Securing Dynamic Distributed Storage Systems against Eavesdropping and
Adversarial Attacks
|
cs.IT cs.CR math.IT
|
We address the problem of securing distributed storage systems against
eavesdropping and adversarial attacks. An important aspect of these systems is
node failures over time, necessitating, thus, a repair mechanism in order to
maintain a desired high system reliability. In such dynamic settings, an
important security problem is to safeguard the system from an intruder who may
come at different time instances during the lifetime of the storage system to
observe and possibly alter the data stored on some nodes. In this scenario, we
give upper bounds on the maximum amount of information that can be stored
safely on the system. For an important operating regime of the distributed
storage system, which we call the 'bandwidth-limited regime', we show that our
upper bounds are tight and provide explicit code constructions. Moreover, we
provide a way to shortlist the malicious nodes and expurgate the system.
|
1009.2566
|
Reinforcement Learning by Comparing Immediate Reward
|
cs.LG
|
This paper introduces an approach to reinforcement learning that compares
immediate rewards using a variation of the Q-learning algorithm. Unlike
conventional Q-learning, the proposed algorithm compares the current reward
with the immediate reward of the past move and acts accordingly.
Relative-reward-based Q-learning is an approach towards interactive learning.
Q-learning is a model-free reinforcement learning method used to train agents.
It is observed that, under normal circumstances, the algorithm takes more
episodes to reach the optimal Q-value due to its normal or sometimes negative
reward. In this new form of the algorithm, agents select only those actions
that have a higher immediate reward signal in comparison to the previous one.
The contribution of this article is a new Q-learning algorithm that maximizes
performance and reduces the number of episodes required to reach the optimal
Q-value. The effectiveness of the proposed algorithm is simulated in a 20 x 20
grid-world deterministic environment, and results for the two forms of the
Q-learning algorithm are given.
|
1009.2602
|
Joint Channel Probing and Proportional Fair Scheduling in Wireless
Networks
|
cs.IT math.IT
|
The design of a scheduling scheme is crucial for the efficiency and
user-fairness of wireless networks. Assuming that the quality of all user
channels is available to a central controller, a simple scheme which maximizes
the utility function defined as the sum logarithm throughput of all users has
been shown to guarantee proportional fairness. However, to acquire the channel
quality information may consume substantial amount of resources. In this work,
it is assumed that probing the quality of each user's channel takes a fraction
of the coherence time, so that the amount of time for data transmission is
reduced. As a result, the multiuser diversity gain does not always increase as
the number of users increases. When the statistics of the channel quality are
available to the controller, the problem of sequential channel probing for user
scheduling
is formulated as an optimal stopping time problem. A joint channel probing and
proportional fair scheduling scheme is developed. This scheme is extended to
the case where the channel statistics are not available to the controller, in
which case a joint learning, probing and scheduling scheme is designed by
studying a generalized bandit problem. Numerical results demonstrate that the
proposed scheduling schemes can provide significant gain over existing schemes.
|
1009.2631
|
Google matrix of business process management
|
cs.CY cs.IR physics.soc-ph q-fin.GN
|
Development of efficient business process models and determination of their
characteristic properties are subject of intense interdisciplinary research.
Here, we consider a business process model as a directed graph. Its nodes
correspond to the units identified by the modeler and the link direction
indicates the causal dependencies between units. It is of primary interest to
obtain the stationary flow on such a directed graph, which corresponds to the
steady-state of a firm during the business process. Following the ideas
developed recently for the World Wide Web, we construct the Google matrix for
our business process model and analyze its spectral properties. The importance
of nodes is characterized by PageRank and by the recently proposed CheiRank and
2DRank. The results show that this two-dimensional ranking gives significant
information about the influence and communication properties of
business model units. We argue that the Google matrix method, described here,
provides a new efficient tool helping companies to make their decisions on how
to evolve in the exceedingly dynamic global market.
|
1009.2651
|
Left-Inverses of Fractional Laplacian and Sparse Stochastic Processes
|
cs.IT math.IT math.ST stat.TH
|
The fractional Laplacian $(-\triangle)^{\gamma/2}$ commutes with the primary
coordination transformations in the Euclidean space $\RR^d$: dilation,
translation, and rotation, and has tight links to splines, fractals, and stable
Lévy processes. For $0<\gamma<d$, its inverse is the classical Riesz potential
$I_\gamma$ which is dilation-invariant and translation-invariant. In this work,
we investigate the functional properties (continuity, decay and invertibility)
of an extended class of differential operators that share those invariance
properties. In particular, we extend the definition of the classical Riesz
potential $I_\gamma$ to any non-integer number $\gamma$ larger than $d$ and
show that it is the unique left-inverse of the fractional Laplacian
$(-\triangle)^{\gamma/2}$ which is dilation-invariant and
translation-invariant. We observe that, for any $1\le p\le \infty$ and
$\gamma\ge d(1-1/p)$, there exists a Schwartz function $f$ such that $I_\gamma
f$ is not $p$-integrable. We then introduce the new unique left-inverse
$I_{\gamma, p}$ of the fractional Laplacian $(-\triangle)^{\gamma/2}$ with the
property that $I_{\gamma, p}$ is dilation-invariant (but not
translation-invariant) and that $I_{\gamma, p}f$ is $p$-integrable for any
Schwartz function $f$. We finally apply that linear operator $I_{\gamma, p}$
with $p=1$ to solve the stochastic partial differential equation
$(-\triangle)^{\gamma/2} \Phi=w$ with white Poisson noise as its driving term
$w$.
|
1009.2653
|
Opinion fluctuations and disagreement in social networks
|
cs.SI cs.SY math.OC math.PR
|
We study a tractable opinion dynamics model that generates long-run
disagreements and persistent opinion fluctuations. Our model involves an
inhomogeneous stochastic gossip process of continuous opinion dynamics in a
society consisting of two types of agents: regular agents, who update their
beliefs according to information that they receive from their social neighbors;
and stubborn agents, who never update their opinions. When the society contains
stubborn agents with different opinions, the belief dynamics never lead to a
consensus (among the regular agents). Instead, beliefs in the society fail to
converge almost surely, the belief profile keeps on fluctuating in an ergodic
fashion, and it converges in law to a non-degenerate random vector. The
structure of the network and the location of the stubborn agents within it
shape the opinion dynamics. The expected belief vector evolves according to an
ordinary differential equation coinciding with the Kolmogorov backward equation
of a continuous-time Markov chain with absorbing states corresponding to the
stubborn agents and converges to a harmonic vector, with every regular agent's
value being the weighted average of its neighbors' values, and boundary
conditions corresponding to the stubborn agents'. Expected cross-products of
the agents' beliefs allow for a similar characterization in terms of coupled
Markov chains on the network. We prove that, in large-scale societies which are
highly fluid, meaning that the product of the mixing time of the Markov chain
on the graph describing the social network and the relative size of the
linkages to stubborn agents vanishes as the population size grows large, a
condition of \emph{homogeneous influence} emerges, whereby the stationary
beliefs' marginal distributions of most of the regular agents have
approximately equal first and second moments.
|
1009.2706
|
Minimization Strategies for Maximally Parallel Multiset Rewriting
Systems
|
cs.FL cs.CC cs.CL cs.DM
|
Maximally parallel multiset rewriting systems (MPMRS) give a convenient way
to express relations between unstructured objects. The functioning of various
computational devices may be expressed in terms of MPMRS (e.g., register
machines and many variants of P systems). In particular, this means that MPMRS
are computationally complete; however, a direct translation leads to a rather
large number of rules. As for other classes of computationally complete
devices, there is a challenge to find a universal system having the smallest
number of rules. In this article we present different rule minimization
strategies for MPMRS based on encodings and structural transformations. We
apply these strategies to the translation of a small universal register machine
(Korec, 1996) and we show that there exists a universal MPMRS with 23 rules.
Since MPMRS are identical to a restricted variant of P systems with antiport
rules, the results we obtained improve previously known results on the number
of rules for those systems.
|
1009.2722
|
Learning Latent Tree Graphical Models
|
stat.ML cs.IT math.IT
|
We study the problem of learning a latent tree graphical model where samples
are available only from a subset of variables. We propose two consistent and
computationally efficient algorithms for learning minimal latent trees, that
is, trees without any redundant hidden nodes. Unlike many existing methods, the
observed nodes (or variables) are not constrained to be leaf nodes. Our first
algorithm, recursive grouping, builds the latent tree recursively by
identifying sibling groups using so-called information distances. One of the
main contributions of this work is our second algorithm, which we refer to as
CLGrouping. CLGrouping starts with a pre-processing procedure in which a tree
over the observed variables is constructed. This global step groups the
observed nodes that are likely to be close to each other in the true latent
tree, thereby guiding subsequent recursive grouping (or equivalent procedures)
on much smaller subsets of variables. This results in more accurate and
efficient learning of latent trees. We also present regularized versions of our
algorithms that learn latent tree approximations of arbitrary distributions. We
compare the proposed algorithms to other methods by performing extensive
numerical experiments on various latent tree graphical models such as hidden
Markov models and star graphs. In addition, we demonstrate the applicability of
our methods on real-world datasets by modeling the dependency structure of
monthly stock returns in the S&P index and of the words in the 20 newsgroups
dataset.
|
1009.2764
|
A Blink Tree latch method and protocol to support synchronous node
deletion
|
cs.DB
|
A Blink Tree latch method and protocol supports synchronous node deletion in
a high concurrency environment. Full source code is available.
|
1009.2913
|
Information filtering in complex weighted networks
|
physics.soc-ph cond-mat.dis-nn cond-mat.stat-mech cs.IR
|
Many systems in nature, society and technology can be described as networks,
where the vertices are the system's elements and edges between vertices
indicate the interactions between the corresponding elements. Edges may be
weighted if the interaction strength is measurable. However, the full network
information is often redundant: tools and techniques from network analysis
either do not work or become very inefficient if the network is too dense, and
some weights may merely reflect measurement errors and should be discarded.
Moreover, since weight distributions in many complex weighted networks are
broad, most of the weight is concentrated among a small fraction of all edges.
It is then crucial to properly detect relevant edges. Simple thresholding would
leave only the largest weights, disrupting the multiscale structure of the
system, which is at the basis of the structure of complex networks, and ought
to be kept. In this paper we propose a weight filtering technique based on a
global null model (GloSS filter), keeping both the weight distribution and the
full topological structure of the network. The method correctly quantifies the
statistical significance of weights assigned independently to the edges from a
given distribution. Applications to real networks reveal that the GloSS filter
is indeed able to identify relevant connections between vertices.
|
1009.2927
|
Underlay Cognitive Radio with Full or Partial Channel Quality
Information
|
cs.IT math.IT
|
Underlay cognitive radios (UCRs) allow a secondary user to enter a primary
user's spectrum through intelligent utilization of multiuser channel quality
information (CQI) and sharing of codebook. The aim of this work is to study
two-user Gaussian UCR systems by assuming the full or partial knowledge of
multiuser CQI. The key contribution of this work is motivated by the fact that
full knowledge of multiuser CQI is not always available. We first establish a
location-aided UCR model where the secondary user is assumed to have partial
CQI about the secondary-transmitter to primary-receiver link as well as full
CQI about the other links. Then, new UCR approaches are proposed and carefully
analyzed in terms of the secondary user's achievable rate, denoted by $C_2$,
the capacity penalty to primary user, denoted by $\Delta C_1$, and capacity
outage probability. Numerical examples are provided to visually compare the
performance of UCRs with full knowledge of multiuser CQI and the proposed
approaches with partial knowledge of multiuser CQI.
|
1009.2955
|
Throughput Analysis of Buffer-Constrained Wireless Systems in the Finite
Blocklength Regime
|
cs.IT math.IT
|
In this paper, wireless systems operating under queueing constraints in the
form of limitations on the buffer violation probabilities are considered. The
throughput under such constraints is captured by the effective capacity
formulation. It is assumed that finite blocklength codes are employed for
transmission. Under this assumption, a recent result on the channel coding rate
in the finite blocklength regime is incorporated into the analysis and the
throughput achieved with such codes in the presence of queueing constraints and
decoding errors is identified. Performance of different transmission strategies
(e.g., variable-rate, variable-power, and fixed-rate transmissions) is studied.
Interactions between the throughput, queueing constraints, coding blocklength,
decoding error probabilities, and signal-to-noise ratio are investigated and
several conclusions with important practical implications are drawn.
|
1009.2997
|
Sensor Scheduling for Energy-Efficient Target Tracking in Sensor
Networks
|
cs.MA
|
In this paper we study the problem of tracking an object moving randomly
through a network of wireless sensors. Our objective is to devise strategies
for scheduling the sensors to optimize the tradeoff between tracking
performance and energy consumption. We cast the scheduling problem as a
Partially Observable Markov Decision Process (POMDP), where the control actions
correspond to the set of sensors to activate at each time step. Using a
bottom-up approach, we consider different sensing, motion and cost models with
increasing levels of difficulty. At the first level, the sensing regions of the
different sensors do not overlap and the target is only observed within the
sensing range of an active sensor. Then, we consider sensors with overlapping
sensing range such that the tracking error, and hence the actions of the
different sensors, are tightly coupled. Finally, we consider scenarios wherein
the target locations and sensors' observations assume values on continuous
spaces. Exact solutions are generally intractable even for the simplest models
due to the dimensionality of the information and action spaces. Hence, we
devise approximate solution techniques, and in some cases derive lower bounds
on the optimal tradeoff curves. The generated scheduling policies, albeit
suboptimal, often provide close-to-optimal energy-tracking tradeoffs.
|
1009.3029
|
Invariant Spectral Hashing of Image Saliency Graph
|
cs.CV
|
Image hashing is the process of associating a short vector of bits to an
image. The resulting summaries are useful in many applications including image
indexing, image authentication and pattern recognition. These hashes need to be
invariant under transformations of the image that result in similar visual
content, but should drastically differ for conceptually distinct contents. This
paper proposes an image hashing method that is invariant under rotation,
scaling and translation of the image. The gist of our approach relies on the
geometric characterization of salient point distribution in the image. This is
achieved by the definition of a "saliency graph" connecting these points
jointly with an image intensity function on the graph nodes. An invariant hash
is then obtained by considering the spectrum of this function in the
eigenvector basis of the graph Laplacian, that is, its graph Fourier transform.
Interestingly, this spectrum is invariant under any relabeling of the graph
nodes. The graph reveals geometric information of the image, making the hash
robust to image transformation, yet distinct for different visual content. The
efficiency of the proposed method is assessed on a set of MRI 2-D slices and on
a database of faces.
|
1009.3041
|
Secret Sharing LDPC Codes for the BPSK-constrained Gaussian Wiretap
Channel
|
cs.IT cs.CR math.IT
|
The problem of secret sharing over the Gaussian wiretap channel is
considered. A source and a destination intend to share secret information over
a Gaussian channel in the presence of a wiretapper who observes the
transmission through another Gaussian channel. Two constraints are imposed on
the source-to-destination channel; namely, the source can transmit only binary
phase shift keyed (BPSK) symbols, and symbol-by-symbol hard-decision
quantization is applied to the received symbols of the destination. An
error-free public channel is also available for the source and destination to
exchange messages in order to help the secret sharing process. The wiretapper
can perfectly observe all messages in the public channel. It is shown that a
secret sharing scheme that employs a random ensemble of regular low density
parity check (LDPC) codes can achieve the key capacity of the BPSK-constrained
Gaussian wiretap channel asymptotically with increasing block length. To
accommodate practical constraints of finite block length and limited decoding
complexity, fixed irregular LDPC codes are also designed to replace the regular
LDPC code ensemble in the proposed secret sharing scheme.
|
1009.3052
|
Secret-key Agreement with Channel State Information at the Transmitter
|
cs.IT math.IT
|
We study the capacity of secret-key agreement over a wiretap channel with
state parameters. The transmitter communicates to the legitimate receiver and
the eavesdropper over a discrete memoryless wiretap channel with a memoryless
state sequence. The transmitter and the legitimate receiver generate a shared
secret key, that remains secret from the eavesdropper. No public discussion
channel is available. The state sequence is known noncausally to the
transmitter. We derive lower and upper bounds on the secret-key capacity. The
lower bound involves constructing a common state reconstruction sequence at the
legitimate terminals and binning the set of reconstruction sequences to obtain
the secret-key. For the special case of Gaussian channels with additive
interference (secret-keys from dirty paper channel) our bounds differ by 0.5
bit/symbol and coincide in the high signal-to-noise-ratio and high
interference-to-noise-ratio regimes. For the case when the legitimate receiver
is also revealed the state sequence, we establish that our lower bound achieves
the secret-key capacity. In addition, for this special case, we also
propose another scheme that attains the capacity and requires only causal side
information at the transmitter and the receiver.
|
1009.3078
|
Asymmetric Totally-corrective Boosting for Real-time Object Detection
|
cs.CV
|
Real-time object detection is one of the core problems in computer vision.
The cascade boosting framework proposed by Viola and Jones has become the
standard for this problem. In this framework, the learning goal for each node
is asymmetric, which is required to achieve a high detection rate and a
moderate false positive rate. We develop new boosting algorithms to address
this asymmetric learning problem. We show that our methods explicitly optimize
asymmetric loss objectives in a totally corrective fashion. The methods are
totally corrective in the sense that the coefficients of all selected weak
classifiers are updated at each iteration. In contrast, conventional boosting
like AdaBoost is stage-wise in that only the current weak classifier's
coefficient is updated. At the heart of the totally corrective boosting is the
column generation technique. Experiments on face detection show that our
methods outperform the state-of-the-art asymmetric boosting methods.
|
1009.3083
|
The Capacity of the Semi-Deterministic Cognitive Interference Channel
and its Application to Constant Gap Results for the Gaussian Channel
|
cs.IT math.IT
|
The cognitive interference channel (C-IFC) consists of a classical two-user
interference channel in which the message of one user (the "primary" user) is
non-causally available at the transmitter of the other user (the "cognitive"
user). We obtain the capacity of the semi-deterministic C-IFC: a discrete
memoryless C-IFC in which the cognitive receiver output is a noiseless
deterministic function of the channel inputs. We then use the insights obtained
from the capacity-achieving scheme for the semi-deterministic model to derive
new, unified and tighter constant gap results for the complex-valued Gaussian
C-IFC. We prove: (1) a constant additive gap (difference between inner and
outer bounds) of half a bit/sec/Hz per real dimension, of relevance at high
SNRs, and (2) a constant multiplicative gap (ratio between outer and inner
bounds) of a factor two, of relevance at low SNRs.
|
1009.3090
|
Open-Loop Spatial Multiplexing and Diversity Communications in Ad Hoc
Networks
|
cs.IT math.IT
|
This paper investigates the performance of open-loop multi-antenna
point-to-point links in ad hoc networks with slotted ALOHA medium access
control (MAC). We consider spatial multiplexing transmission with linear
maximum ratio combining and zero forcing receivers, as well as orthogonal space
time block coded transmission. New closed-form expressions are derived for the
outage probability, throughput and transmission capacity. Our results
demonstrate that both the best performing scheme and the optimum number of
transmit antennas depend on different network parameters, such as the node
intensity and the signal-to-interference-and-noise ratio operating value. We
then compare the performance to a network consisting of single-antenna devices
and an idealized fully centrally coordinated MAC. These results show that
multi-antenna schemes with a simple decentralized slotted ALOHA MAC can
outperform even idealized single-antenna networks in various practical
scenarios.
|
1009.3130
|
Strong Secrecy on the Binary Erasure Wiretap Channel Using Large-Girth
LDPC Codes
|
cs.IT math.IT
|
For an arbitrary degree distribution pair (DDP), we construct a sequence of
low-density parity-check (LDPC) code ensembles with girth growing
logarithmically in block-length using Ramanujan graphs. When the DDP has
minimum left degree at least three, we show using density evolution analysis
that the expected bit-error probability of these ensembles, when passed through
a binary erasure channel with erasure probability $\epsilon$, decays as
$\mathcal{O}(\exp(-c_1 n^{c_2}))$ with the block-length $n$ for positive
constants $c_1$ and $c_2$, as long as $\epsilon$ is less than the erasure
threshold $\epsilon_\mathrm{th}$ of the DDP. This guarantees that the coset
coding scheme using the dual sequence provides strong secrecy over the binary
erasure wiretap channel for erasure probabilities greater than $1 -
\epsilon_\mathrm{th}$.
|
1009.3145
|
Universal Rate-Efficient Scalar Quantization
|
cs.IT math.IT
|
Scalar quantization is the most practical and straightforward approach to
signal quantization. However, it has been shown that scalar quantization of
oversampled or Compressively Sensed signals can be inefficient in terms of the
rate-distortion trade-off, especially as the oversampling rate or the sparsity
of the signal increases. In this paper, we modify the scalar quantizer to have
discontinuous quantization regions. We demonstrate that with this modification
it is possible to achieve exponential decay of the quantization error as a
function of the oversampling rate instead of the quadratic decay exhibited by
current approaches. Our approach is universal in the sense that prior knowledge
of the signal model is not necessary in the quantizer design, only in the
reconstruction. Thus, we demonstrate that it is possible to reduce the
quantization error by incorporating side information on the acquired signal,
such as sparse signal models or signal similarity with known signals. In doing
so, we establish a relationship between quantization performance and the
Kolmogorov entropy of the signal model.
|
1009.3186
|
Group Testing with Probabilistic Tests: Theory, Design and Application
|
cs.IT math.IT
|
Identification of defective members of large populations has been widely
studied in the statistics community under the name of group testing. It
involves grouping subsets of items into different pools and detecting defective
members based on the set of test results obtained for each pool.
In a classical noiseless group testing setup, it is assumed that the sampling
procedure is fully known to the reconstruction algorithm, in the sense that the
existence of a defective member in a pool results in the test outcome of that
pool being positive. However, this may not always be a valid assumption in some
cases of interest. In particular, we consider the case where the defective
items in a pool can become independently inactive with a certain probability.
Hence, one may obtain a negative test result in a pool despite containing some
defective items. As a result, any sampling and reconstruction method should be
able to cope with two different types of uncertainty, i.e., the unknown set of
defective items and the partially unknown, probabilistic testing procedure.
In this work, motivated by the application of detecting infected people in
viral epidemics, we design non-adaptive sampling procedures that allow
successful identification of the defective items through a set of probabilistic
tests. Our design requires only a small number of tests to single out the
defective items. In particular, for a population of size $N$ and at most $K$
defective items with activation probability $p$, our results show that $M =
O(K^2\log{(N/K)}/p^3)$ tests is sufficient if the sampling procedure should
work for all possible sets of defective items, while $M = O(K\log{(N)}/p^3)$
tests is enough to be successful for any single set of defective items.
Moreover, we show that the defective members can be recovered using a simple
reconstruction algorithm with complexity of $O(MN)$.
|
1009.3238
|
Tableaux for the Lambek-Grishin calculus
|
cs.CL
|
Categorial type logics, pioneered by Lambek, seek a proof-theoretic
understanding of natural language syntax by identifying categories with
formulas and derivations with proofs. We typically observe an intuitionistic
bias: a structural configuration of hypotheses (a constituent) derives a single
conclusion (the category assigned to it). Acting upon suggestions of Grishin to
dualize the logical vocabulary, Moortgat proposed the Lambek-Grishin calculus
(LG) with the aim of restoring symmetry between hypotheses and conclusions. We
develop a theory of labeled modal tableaux for LG, inspired by the
interpretation of its connectives as binary modal operators in the relational
semantics of Kurtonina and Moortgat. As a linguistic application of our method,
we show that grammars based on LG are context-free through use of an
interpolation lemma. This result complements that of Melissen, who proved that
LG augmented by mixed associativity and commutativity exceeds LTAG in
expressive power.
|
1009.3240
|
A Unified View of Regularized Dual Averaging and Mirror Descent with
Implicit Updates
|
cs.LG
|
We study three families of online convex optimization algorithms:
follow-the-proximally-regularized-leader (FTRL-Proximal), regularized dual
averaging (RDA), and composite-objective mirror descent. We first prove
equivalence theorems that show all of these algorithms are instantiations of a
general FTRL update. This provides theoretical insight on previous experimental
observations. In particular, even though the FOBOS composite mirror descent
algorithm handles L1 regularization explicitly, it has been observed that RDA
is even more effective at producing sparsity. Our results demonstrate that
FOBOS uses subgradient approximations to the L1 penalty from previous rounds,
leading to less sparsity than RDA, which handles the cumulative penalty in
closed form. The FTRL-Proximal algorithm can be seen as a hybrid of these two,
and outperforms both on a large, real-world dataset.
Our second contribution is a unified analysis which produces regret bounds
that match (up to logarithmic terms) or improve the best previously known
bounds. This analysis also extends these algorithms in two important ways: we
support a more general type of composite objective and we analyze implicit
updates, which replace the subgradient approximation of the current loss
function with an exact optimization.
|
1009.3243
|
The "Unfriending" Problem: The Consequences of Homophily in Friendship
Retention for Causal Estimates of Social Influence
|
stat.AP cs.SI physics.data-an physics.soc-ph
|
An increasing number of scholars are using longitudinal social network data
to try to obtain estimates of peer or social influence effects. These data may
provide additional statistical leverage, but they can introduce new inferential
problems. In particular, while the confounding effects of homophily in
friendship formation are widely appreciated, homophily in friendship retention
may also confound causal estimates of social influence in longitudinal network
data. We provide evidence for this claim in a Monte Carlo analysis of the
statistical model used by Christakis, Fowler, and their colleagues in numerous
articles estimating "contagion" effects in social networks. Our results
indicate that homophily in friendship retention induces significant upward bias
and decreased coverage levels in the Christakis and Fowler model if there is
non-negligible friendship attrition over time.
|
1009.3253
|
Transmission Strategies in Multiple Access Fading Channels with
Statistical QoS Constraints
|
cs.IT math.IT
|
Effective capacity, which provides the maximum constant arrival rate that a
given service process can support while satisfying statistical delay
constraints, is analyzed in a multiuser scenario. In particular, the effective
capacity region of fading multiple access channels (MAC) in the presence of
quality of service (QoS) constraints is studied. Perfect channel side
information (CSI) is assumed to be available at both the transmitters and the
receiver. It is initially assumed that the transmitters send the information at
fixed power level and hence do not employ power control policies. Under this
assumption, the performance achieved by superposition coding with successive
decoding techniques is investigated. It is shown that varying the decoding
order with respect to the channel states can significantly increase the
achievable throughput region. In the two-user case, the optimal decoding
strategy is determined for the scenario in which the users have the same QoS
constraints. The performance of orthogonal transmission strategies is also
analyzed. It is shown that for certain QoS constraints, time-division
multiple-access (TDMA) can achieve better performance than superposition coding
if fixed successive decoding order is used at the receiver side.
In the subsequent analysis, power control policies are incorporated into the
transmission strategies. The optimal power allocation policies for any fixed
decoding order over all channel states are identified. For a given variable
decoding order strategy, the conditions that the optimal power control policies
must satisfy are determined, and an algorithm that can be used to compute these
optimal policies is provided.
|
1009.3291
|
Rebuilding for Array Codes in Distributed Storage Systems
|
cs.IT cs.DC cs.NI math.IT
|
In distributed storage systems that use coding, the issue of minimizing the
communication required to rebuild a storage node after a failure arises. We
consider the problem of repairing an erased node in a distributed storage
system that uses an EVENODD code. EVENODD codes are maximum distance separable
(MDS) array codes that are used to protect against erasures, and only require
XOR operations for encoding and decoding. We show that when there are two
redundancy nodes, to rebuild one erased systematic node, only 3/4 of the
information needs to be transmitted. Interestingly, in many cases, the required
disk I/O is also minimized.
|
1009.3321
|
Niche as a determinant of word fate in online groups
|
cs.CL cond-mat.dis-nn nlin.AO physics.soc-ph q-bio.PE
|
Patterns of word use both reflect and influence a myriad of human activities
and interactions. Like other entities that are reproduced and evolve, words
rise or decline depending upon a complex interplay between their intrinsic
properties and the environments in which they function. Using Internet
discussion communities as model systems, we define the concept of a word niche
as the relationship between the word and the characteristic features of the
environments in which it is used. We develop a method to quantify two important
aspects of the size of the word niche: the range of individuals using the word
and the range of topics it is used to discuss. Controlling for word frequency,
we show that these aspects of the word niche are strong determinants of changes
in word frequency. Previous studies have already indicated that word frequency
itself is a correlate of word success at historical time scales. Our analysis
of changes in word frequencies over time reveals that the relative sizes of
word niches are far more important than word frequencies in the dynamics of the
entire vocabulary at shorter time scales, as the language adapts to new
concepts and social groupings. We also distinguish endogenous versus exogenous
factors as additional contributors to the fates of words, and demonstrate the
force of this distinction in the rise of novel words. Our results indicate that
short-term nonstationarity in word statistics is strongly driven by individual
proclivities, including inclinations to provide novel information and to
project a distinctive social identity.
|
1009.3345
|
Cooperative Feedback for MIMO Interference Channels
|
cs.IT math.IT
|
Multi-antenna precoding effectively mitigates the interference in wireless
networks. However, the precoding efficiency can be significantly degraded by
the overhead due to the required feedback of channel state information (CSI).
This paper addresses such an issue by proposing a systematic method of
designing precoders for the two-user multiple-input-multiple-output (MIMO)
interference channels based on finite-rate CSI feedback from receivers to their
interferers, called cooperative feedback. Specifically, each precoder is
decomposed into inner and outer precoders for nulling interference and
improving the data link array gain, respectively. The inner precoders are
further designed to suppress residual interference resulting from finite-rate
cooperative feedback. To regulate residual interference due to precoder
quantization, additional scalar cooperative feedback signals are designed to
control transmitters' power using different criteria including applying
interference margins, maximizing sum throughput, and minimizing outage
probability. Simulation shows that such additional feedback effectively
alleviates performance degradation due to quantized precoder feedback.
|
1009.3346
|
Conditional Random Fields and Support Vector Machines: A Hybrid Approach
|
cs.LG
|
We propose a novel hybrid loss for multiclass and structured prediction
problems that is a convex combination of log loss for Conditional Random Fields
(CRFs) and a multiclass hinge loss for Support Vector Machines (SVMs). We
provide a sufficient condition for when the hybrid loss is Fisher consistent
for classification. This condition depends on a measure of dominance between
labels - specifically, the gap in per observation probabilities between the
most likely labels. We also prove Fisher consistency is necessary for
parametric consistency when learning models such as CRFs.
We demonstrate empirically that the hybrid loss typically performs at least
as well as, and often better than, both of its constituent losses on a variety
of tasks. In doing so we also provide an empirical comparison of the efficacy
of probabilistic and margin based approaches to multiclass and structured
prediction and the effects of label dominance on these results.
|
1009.3353
|
A Lower Bound on the Estimator Variance for the Sparse Linear Model
|
math.ST cs.IT math.IT stat.TH
|
We study the performance of estimators of a sparse nonrandom vector based on
an observation which is linearly transformed and corrupted by additive white
Gaussian noise. Using the reproducing kernel Hilbert space framework, we derive
a new lower bound on the estimator variance for a given differentiable bias
function (including the unbiased case) and an almost arbitrary transformation
matrix (including the underdetermined case considered in compressed sensing
theory). For the special case of a sparse vector corrupted by white Gaussian
noise, i.e., without a linear transformation, and unbiased estimation, our lower
bound improves on previously proposed bounds.
|
1009.3354
|
Direct vs. Two-Step Approach for Unique Word Generation in UW-OFDM
|
cs.IT math.IT
|
Unique word OFDM is a novel technique for constructing OFDM symbols that has
many advantages over cyclic prefix OFDM. In this paper we investigate two
different approaches for the generation of an OFDM symbol containing a unique
word in its time domain representation. The two-step and the direct approach
seem very similar at first sight, but actually produce completely different
OFDM symbols. The overall system's bit error ratio also differs significantly
for the two approaches. We will prove these propositions analytically, and we
will give simulation results for further illustration.
|
1009.3387
|
Distributed STBCs with Full-diversity Partial Interference Cancellation
Decoding
|
cs.IT math.IT
|
Recently, Guo and Xia introduced low complexity decoders called Partial
Interference Cancellation (PIC) and PIC with Successive Interference
Cancellation (PIC-SIC), which include the Zero Forcing (ZF) and ZF-SIC
receivers as special cases, for point-to-point MIMO channels. In this paper, we
show that PIC and PIC-SIC decoders are capable of achieving the full
cooperative diversity available in wireless relay networks. We give sufficient
conditions for a Distributed Space-Time Block Code (DSTBC) to achieve full
diversity with PIC and PIC-SIC decoders and construct a new class of DSTBCs
with low complexity full-diversity PIC-SIC decoding using complex orthogonal
designs. The new class of codes includes a number of known full-diversity
PIC/PIC-SIC decodable Space-Time Block Codes (STBCs) constructed for
point-to-point channels as special cases. The proposed DSTBCs achieve higher
rates (in complex symbols per channel use) than the multigroup ML decodable
DSTBCs available in the literature. Simulation results show that the proposed
codes have better bit error rate performance than the best known low
complexity, full-diversity DSTBCs.
|
1009.3396
|
Collaborative Decoding of Interleaved Reed-Solomon Codes using Gaussian
Elimination
|
cs.IT math.IT
|
We propose an alternative method for collaborative decoding of interleaved
Reed-Solomon codes. Simulation results for a concatenated coding scheme using
polar codes as inner codes are included.
|
1009.3455
|
A control-theoretical methodology for the scheduling problem
|
cs.SY
|
This paper presents a novel methodology to develop scheduling algorithms. The
scheduling problem is phrased as a control problem, and control-theoretical
techniques are used to design a scheduling algorithm that meets specific
requirements. Unlike most approaches to feedback scheduling, where a controller
integrates a "basic" scheduling algorithm and dynamically tunes its parameters
and hence its performances, our methodology essentially reduces the design of a
scheduling algorithm to the synthesis of a controller that closes the feedback
loop. This approach allows the re-use of control-theoretical techniques to
design efficient scheduling algorithms; it frames and solves the scheduling
problem in a general setting; and it can naturally tackle certain peculiar
requirements such as robustness and dynamic performance tuning. A few
experiments demonstrate the feasibility of the approach on a real-time
benchmark.
|
1009.3481
|
Linear Transceiver Design for Interference Alignment: Complexity and
Computation
|
cs.IT math.IT
|
Consider a MIMO interference channel whereby each transmitter and receiver
are equipped with multiple antennas. The basic problem is to design optimal
linear transceivers (or beamformers) that can maximize system throughput. The
recent work [1] suggests that optimal beamformers should maximize the total
degrees of freedom and achieve interference alignment in high SNR. In this
paper we first consider the interference alignment problem in spatial domain
and prove that the problem of maximizing the total degrees of freedom for a
given MIMO interference channel is NP-hard. Furthermore, we show that even
checking the achievability of a given tuple of degrees of freedom for all
receivers is NP-hard when each receiver is equipped with at least three
antennas. Interestingly, the same problem becomes polynomial time solvable when
each transmit/receive node is equipped with no more than two antennas. Finally,
we propose a distributed algorithm for transmit covariance matrix design, while
assuming each receiver uses a linear MMSE beamformer. The simulation results
show that the proposed algorithm outperforms the existing interference
alignment algorithms in terms of system throughput.
|
1009.3499
|
Multiplicative Attribute Graph Model of Real-World Networks
|
cs.SI physics.soc-ph
|
Large scale real-world network data such as social and information networks
are ubiquitous. The study of such social and information networks seeks to find
patterns and explain their emergence through tractable models. In most
networks, and especially in social networks, nodes have a rich set of
attributes (e.g., age, gender) associated with them.
Here we present a model that we refer to as the Multiplicative Attribute
Graphs (MAG), which naturally captures the interactions between the network
structure and the node attributes. We consider a model where each node has a
vector of categorical latent attributes associated with it. The probability of
an edge between a pair of nodes then depends on the product of individual
attribute-attribute affinities. The model lends itself to mathematical
analysis and we derive thresholds for the connectivity and the emergence of the
giant connected component, and show that the model gives rise to networks with
a constant diameter. We analyze the degree distribution to show that the MAG
model
can produce networks with either log-normal or power-law degree distributions
depending on certain conditions.
|
1009.3514
|
Reduced Complexity Decoding for Bit-Interleaved Coded Multiple
Beamforming with Constellation Precoding
|
cs.IT math.IT
|
Multiple beamforming is realized by singular value decomposition of the
channel matrix which is assumed to be known to both the transmitter and the
receiver. Bit-Interleaved Coded Multiple Beamforming (BICMB) can achieve full
diversity as long as the code rate Rc and the number of employed subchannels S
satisfy the condition RcS<=1. Bit-Interleaved Coded Multiple Beamforming with
Constellation Precoding (BICMB-CP), on the other hand, can achieve full
diversity without the condition RcS<=1. However, the decoding complexity of
BICMB-CP is much higher than BICMB. In this paper, a reduced complexity
decoding technique, which is based on Sphere Decoding (SD), is proposed to
reduce the complexity of Maximum Likelihood (ML) decoding for BICMB-CP. The
decreased complexity decoding achieves several orders of magnitude reduction,
in terms of the average number of real multiplications needed to acquire one
precoded bit metric, not only with respect to conventional ML decoding, but
also with respect to conventional SD.
|
1009.3515
|
Safe Feature Elimination in Sparse Supervised Learning
|
cs.LG math.OC
|
We investigate fast methods that allow one to quickly eliminate variables
(features) in supervised learning problems involving a convex loss function and
a $l_1$-norm penalty, leading to a potentially substantial reduction in the
number of variables prior to running the supervised learning algorithm. The
methods are not heuristic: they only eliminate features that are {\em
guaranteed} to be absent after solving the learning problem. Our framework
applies to a large class of problems, including support vector machine
classification, logistic regression and least-squares.
The complexity of the feature elimination step is negligible compared to the
typical computational effort involved in the sparse supervised learning
problem: it grows linearly with the number of features times the number of
examples, with a much better count if the data is sparse. We apply our method
to data
sets arising in text classification and observe a dramatic reduction of the
dimensionality, hence in computational effort required to solve the learning
problem, especially when very sparse classifiers are sought. Our method
immediately extends the scope of existing algorithms, allowing us to run them
on data sets of sizes that were out of their reach before.
|
1009.3520
|
Bit-Interleaved Coded Multiple Beamforming with Perfect Coding
|
cs.IT math.IT
|
When the Channel State Information (CSI) is known by both the transmitter and
the receiver, beamforming techniques employing Singular Value Decomposition
(SVD) are commonly used in Multiple-Input Multiple-Output (MIMO) systems.
Without channel coding, there is a trade-off between full diversity and full
multiplexing. When channel coding is added, both of them can be achieved as
long as the code rate Rc and the number of employed subchannels S satisfy the
condition RcS<=1. By adding a properly designed constellation precoder, both
full diversity and full multiplexing can be achieved for both uncoded and coded
systems with the trade-off of a higher decoding complexity, e.g., Fully
Precoded Multiple Beamforming (FPMB) and Bit-Interleaved Coded Multiple
Beamforming with Full Precoding (BICMB-FP), without the condition RcS<=1.
Recently discovered Perfect Space-Time Block Code (PSTBC) is a full-rate
full-diversity space-time code, which achieves efficient shaping and high
coding gain for MIMO systems. In this paper, a new technique, Bit-Interleaved
Coded Multiple Beamforming with Perfect Coding (BICMB-PC), is introduced.
BICMB-PC transmits PSTBCs through convolutional coded SVD systems. Similar to
BICMB-FP, BICMB-PC achieves both full diversity and full multiplexing, and its
performance is almost the same as BICMB-FP. The advantage of BICMB-PC is that
it can provide a much lower decoding complexity than BICMB-FP, since the real
and imaginary parts of the received signal can be separated for BICMB-PC of
dimensions 2 and 4, and only the part corresponding to the coded bit is
required to acquire one bit metric for the Viterbi decoder.
|
1009.3525
|
Analyzing Weighted $\ell_1$ Minimization for Sparse Recovery with
Nonuniform Sparse Models\footnote{The results of this paper were presented in
part at the International Symposium on Information Theory, ISIT 2009}
|
cs.IT math.IT
|
In this paper we introduce a nonuniform sparsity model and analyze the
performance of an optimized weighted $\ell_1$ minimization over that sparsity
model. In particular, we focus on a model where the entries of the unknown
vector fall into two sets, with entries of each set having a specific
probability of being nonzero. We propose a weighted $\ell_1$ minimization
recovery algorithm and analyze its performance using a Grassmann angle
approach. We compute explicitly the relationship between the system
parameters (the weights, the number of measurements, the size of the two sets,
and the probabilities of being nonzero) so that when i.i.d. random Gaussian
measurement matrices are used, the weighted $\ell_1$ minimization recovers a
randomly selected signal drawn from the considered sparsity model with
overwhelming probability as the problem dimension increases. This allows us to
compute the optimal weights. We demonstrate through rigorous analysis and
simulations that for the case when the support of the signal can be divided
into two different subclasses with unequal sparsity fractions, the optimal
weighted $\ell_1$ minimization outperforms the regular $\ell_1$ minimization
substantially. We also generalize the results to an arbitrary number of
classes.
|
1009.3567
|
Mobile Testbeds with an Attitude
|
cs.NI cs.RO cs.SI
|
There have been significant recent advances in mobile networks, specifically
in multi-hop wireless networks including DTNs and sensor networks. It is
critical to have a testing environment to realistically evaluate such networks
and their protocols and services. Towards this goal, we propose a novel, mobile
testbed of two main components. The first consists of a network of robots with
personality-mimicking, human-encounter behaviors, which will be the focus of
this demo. The personality is built upon behavioral profiling of mobile users
based on extensive wireless-network measurements and analysis. The second
component combines the testbed with human society using a new concept that
we refer to as participatory testing, which utilizes crowdsourcing.
|
1009.3589
|
Deep Self-Taught Learning for Handwritten Character Recognition
|
cs.LG cs.CV cs.NE
|
Recent theoretical and empirical work in statistical machine learning has
demonstrated the importance of learning algorithms for deep architectures,
i.e., function classes obtained by composing multiple non-linear
transformations. Self-taught learning (exploiting unlabeled examples or
examples from other distributions) has already been applied to deep learners,
but mostly to show the advantage of unlabeled examples. Here we explore the
advantage brought by {\em out-of-distribution examples}. For this purpose we
developed a powerful generator of stochastic variations and noise processes for
character images, including not only affine transformations but also slant,
local elastic deformations, changes in thickness, background images, grey level
changes, contrast, occlusion, and various types of noise. The
out-of-distribution examples are obtained from these highly distorted images or
by including examples of object classes different from those in the target test
set. We show that {\em deep learners benefit more from out-of-distribution
examples than a corresponding shallow learner}, at least in the area of
handwritten character recognition. In fact, we show that they beat previously
published results and reach human-level performance on both handwritten digit
classification and 62-class handwritten character recognition.
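The paper's generator is far richer than can be shown here, but one of the stochastic variations it describes (a grey-level change with additive noise) can be sketched as follows; all names are hypothetical and this is only a toy illustration, not the authors' generator:

```python
import random

def distort(img, contrast=1.2, noise=0.05, rng=random):
    # Toy stochastic variation: rescale contrast around mid-grey and add
    # uniform noise, clipping pixels back to [0, 1]. The paper's generator
    # additionally applies affine transforms, slant, local elastic
    # deformations, thickness changes, background images, and occlusion.
    return [[min(1.0, max(0.0,
                          0.5 + contrast * (p - 0.5)
                          + rng.uniform(-noise, noise)))
             for p in row]
            for row in img]
```

Applying such transformations to labeled character images yields the highly distorted out-of-distribution examples the study trains on.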
|
1009.3593
|
Retrospective Interference Alignment
|
cs.IT math.IT
|
We explore similarities and differences in recent works on blind interference
alignment under different models such as staggered block fading model and the
delayed CSIT model. In particular we explore the possibility of achieving
interference alignment with delayed CSIT when the transmitters are distributed.
Our main contribution is an interference alignment scheme, called retrospective
interference alignment in this work, that is specialized to settings with
distributed transmitters. With this scheme we show that the 2 user X channel
with only delayed channel state information at the transmitters can achieve 8/7
DoF, while the interference channel with 3 users is able to achieve 9/8 DoF. We
also consider another setting where delayed channel output feedback is
available to transmitters. In this setting the X channel and the 3 user
interference channel are shown to achieve 4/3 and 6/5 DoF, respectively.
|
1009.3602
|
Construction of Frequency Hopping Sequence Set Based upon Generalized
Cyclotomy
|
cs.IT math.IT
|
Frequency hopping (FH) sequences play a key role in frequency hopping spread
spectrum communication systems. It is important to find FH sequences which have
simultaneously good Hamming correlation, large family size and large period. In
this paper, a new set of FH sequences with large period is proposed, and the
Hamming correlation distribution of the new set is investigated. The
construction of new FH sequences is based upon Whiteman's generalized
cyclotomy. It is shown that the proposed FH sequence set is optimal with
respect to the average Hamming correlation bound.
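The figure of merit involved can be illustrated with a minimal sketch (this is the standard periodic Hamming correlation, not the paper's cyclotomic construction):

```python
def hamming_correlation(x, y, t):
    # Periodic Hamming cross-correlation H_{x,y}(t): the number of
    # positions where x collides with the cyclic shift of y by t.
    n = len(x)
    return sum(1 for i in range(n) if x[i] == y[(i + t) % n])
```

For x == y this is the Hamming autocorrelation; optimal FH sequence sets keep the average of these correlations over all shifts and sequence pairs as small as the bound permits.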
|
1009.3604
|
Geometric Decision Tree
|
cs.LG
|
In this paper we present a new algorithm for learning oblique decision trees.
Most of the current decision tree algorithms rely on impurity measures to
assess the goodness of hyperplanes at each node while learning a decision tree
in a top-down fashion. These impurity measures do not properly capture the
geometric structures in the data. Motivated by this, our algorithm uses a
strategy to assess the hyperplanes in such a way that the geometric structure
in the data is taken into account. At each node of the decision tree, we find
the clustering hyperplanes for both classes and use their angle bisectors
as the split rule at that node. We show through empirical studies that this
idea leads to small decision trees and better performance. We also present some
analysis to show that the angle bisectors of the clustering hyperplanes we use
as the split rules at each node are solutions of an interesting optimization
problem, and hence argue that this is a principled method for learning a decision
tree.
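The split rule described above can be sketched as follows: given unit normals of the two per-class clustering hyperplanes (how those hyperplanes are fitted is the paper's contribution and is not reproduced here), the angle bisectors are the normalized sum and difference of the normals:

```python
import math

def unit(v):
    # Normalize a vector to unit length.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def angle_bisectors(w1, w2):
    # Bisectors of the angle between hyperplanes with normals w1, w2:
    # normalize both normals, then take their sum and difference.
    u1, u2 = unit(w1), unit(w2)
    return (unit([a + b for a, b in zip(u1, u2)]),
            unit([a - b for a, b in zip(u1, u2)]))
```

Whichever of the two bisectors better separates the classes would serve as the oblique split at that node.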
|
1009.3613
|
On the Doubt about Margin Explanation of Boosting
|
cs.LG
|
Margin theory provides one of the most popular explanations to the success of
\texttt{AdaBoost}, where the central point lies in the recognition that
\textit{margin} is the key for characterizing the performance of
\texttt{AdaBoost}. This theory has been very influential, e.g., it has been
used to argue that \texttt{AdaBoost} usually does not overfit since it tends to
enlarge the margin even after the training error reaches zero. Previously the
\textit{minimum margin bound} was established for \texttt{AdaBoost}, however,
\cite{Breiman1999} pointed out that maximizing the minimum margin does not
necessarily lead to a better generalization. Later, \cite{Reyzin:Schapire2006}
emphasized that the margin distribution rather than minimum margin is crucial
to the performance of \texttt{AdaBoost}. In this paper, we first present the
\textit{$k$th margin bound} and further study its relationship to previous
work such as the minimum margin bound and Emargin bound. Then, we improve the
previous empirical Bernstein bounds
\citep{Maurer:Pontil2009,Audibert:Munos:Szepesvari2009}, and based on such
findings, we defend the margin-based explanation against Breiman's doubts by
proving a new generalization error bound that considers exactly the same
factors as \cite{Schapire:Freund:Bartlett:Lee1998} but is sharper than
\cite{Breiman1999}'s minimum margin bound. By incorporating factors such as
average margin and variance, we present a generalization error bound that is
closely related to the whole margin distribution. We also provide margin
distribution bounds for generalization error of voting classifiers in finite
VC-dimension space.
|
1009.3642
|
MIMO Identical Eigenmode Transmission System (IETS) - A Channel
Decomposition Perspective
|
cs.IT cs.NI math.IT
|
In the past few years considerable attention has been given to the design of
Multiple-Input Multiple-Output (MIMO) Eigenmode Transmission Systems (EMTS).
This paper presents an in-depth analysis of a new MIMO eigenmode transmission
strategy. The non-linear decomposition technique called Geometric Mean
Decomposition (GMD) is employed for the formation of eigenmodes over a MIMO
flat-fading channel. Exploiting the GMD technique, identical, parallel, and
independent transmission pipes are created for data transmission at a higher
rate. The system based on this decomposition technique is referred to as MIMO
Identical Eigenmode Transmission System (IETS). The comparative analysis of the
MIMO transceiver design exploiting nonlinear and linear decomposition
techniques for variable constellations is presented in this paper. The new
transmission strategy is tested in combination with the Vertical Bell Labs
Layered Space-Time (V-BLAST) decoding scheme using different numbers of antennas
on both sides of the communication link. The analysis is supported by various
simulation results.
|
1009.3657
|
On Bounded Weight Codes
|
cs.IT math.IT
|
The maximum size of a binary code is studied as a function of its length N,
minimum distance D, and minimum codeword weight W. This function B(N,D,W) is
first characterized in terms of its exponential growth rate in the limit as N
tends to infinity for fixed d=D/N and w=W/N. The exponential growth rate of
B(N,D,W) is shown to be equal to the exponential growth rate of A(N,D) for w <=
1/2, and equal to the exponential growth rate of A(N,D,W) for 1/2< w <= 1.
Second, analytic and numerical upper bounds on B(N,D,W) are derived using the
semidefinite programming (SDP) method. These bounds yield a non-asymptotic
improvement of the second Johnson bound and are tight for certain values of the
parameters.
|
1009.3663
|
Optimally Sparse Frames
|
math.NA cs.IT math.FA math.IT
|
Frames have established themselves as a means to derive redundant, yet stable
decompositions of a signal for analysis or transmission, while also promoting
sparse expansions. However, when the signal dimension is large, the computation
of the frame measurements of a signal typically requires a large number of
additions and multiplications, and this makes a frame decomposition intractable
in applications with limited computing budget. To address this problem, in this
paper, we focus on frames in finite-dimensional Hilbert spaces and introduce
sparsity for such frames as a new paradigm. In our terminology, a sparse frame
is a frame whose elements have a sparse representation in an orthonormal basis,
thereby enabling low-complexity frame decompositions. To introduce a precise
meaning of optimality, we take as sparsity measure the total number of vectors
of this orthonormal basis needed to expand all frame vectors. We then
analyze the recently introduced algorithm Spectral Tetris for construction of
unit norm tight frames and prove that the tight frames generated by this
algorithm are in fact optimally sparse with respect to the standard unit vector
basis. Finally, we show that even the generalization of Spectral Tetris for the
construction of unit norm frames associated with a given frame operator
produces optimally sparse frames.
|
1009.3665
|
A Dynamic Data Middleware Cache for Rapidly-growing Scientific
Repositories
|
cs.DC cs.DB
|
Modern scientific repositories are growing rapidly in size. Scientists are
increasingly interested in viewing the latest data as part of query results.
Current scientific middleware cache systems, however, assume repositories are
static. Thus, they cannot answer scientific queries with the latest data. The
queries, instead, are routed to the repository until data at the cache is
refreshed. In data-intensive scientific disciplines, such as astronomy,
indiscriminate query routing or data refreshing often results in runaway
network costs. This severely affects the performance and scalability of the
repositories and makes poor use of the cache system. We present Delta, a
dynamic data middleware cache system for rapidly-growing scientific
repositories. Delta's key component is a decision framework that adaptively
decouples data objects---choosing to keep some data objects at the cache when
they are heavily queried, and some at the repository when
they are heavily updated. Our algorithm profiles the incoming workload to search
for optimal data decoupling that reduces network costs. It leverages formal
concepts from the network flow problem, and is robust to evolving scientific
workloads. We evaluate the efficacy of Delta, through a prototype
implementation, by running query traces collected from a real astronomy survey.
|
1009.3702
|
Totally Corrective Multiclass Boosting with Binary Weak Learners
|
cs.LG
|
In this work, we propose a new optimization framework for multiclass boosting
learning. In the literature, AdaBoost.MO and AdaBoost.ECC are the two
successful multiclass boosting algorithms, which can use binary weak learners.
We explicitly derive these two algorithms' Lagrange dual problems based on
their regularized loss functions. We show that the Lagrange dual formulations
enable us to design totally-corrective multiclass algorithms by using the
primal-dual optimization technique. Experiments on benchmark data sets suggest
that our multiclass boosting achieves generalization capability comparable
to the state-of-the-art, but converges much faster than stage-wise
gradient-descent boosting. In other words, the new totally corrective
algorithms can maximize the margin more aggressively.
|
1009.3711
|
Structural Learning of Attack Vectors for Generating Mutated XSS Attacks
|
cs.SE cs.CR cs.LG
|
Web applications suffer from cross-site scripting (XSS) attacks that
result from incomplete or incorrect input sanitization. Learning the
structure of attack vectors could enrich the variety of manifestations in
generated XSS attacks. In this study, we focus on generating more threatening
XSS attacks for the state-of-the-art detection approaches that can find
potential XSS vulnerabilities in Web applications, and propose a mechanism for
structural learning of attack vectors with the aim of generating mutated XSS
attacks in a fully automatic way. Mutated XSS attack generation depends on the
analysis of attack vectors and the structural learning mechanism. For the
kernel of the learning mechanism, we use a Hidden Markov model (HMM) as the
structure of the attack vector model to capture the implicit manner of the
attack vector; this manner benefits from the syntax meanings labeled by the
proposed tokenizing mechanism. Bayes' theorem is used to determine the number
of hidden states in the model for generalizing the structure model. The
contributions of this paper are as follows: (1) automatically learning the
structure of attack vectors from practical data analysis to build a structural
model of attack vectors, (2) mimicking the manners and elements of attack
vectors to extend the ability of testing tools to identify XSS
vulnerabilities, and (3) helping to verify the flaws of blacklist sanitization
procedures of Web applications. We evaluated the
proposed mechanism by Burp Intruder with a dataset collected from public XSS
archives. The results show that mutated XSS attack generation can identify
potential vulnerabilities.
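A minimal sketch of the kind of tokenizing step that could feed such an HMM follows; the token classes here are hypothetical illustrations, not the paper's actual labels:

```python
import re

# Hypothetical syntax-token classes for XSS payload fragments; the
# paper's tokenizing mechanism assigns its own, richer labels.
TOKEN_SPEC = [
    ("TAG",   r"</?\s*\w+"),    # opening/closing tag fragment
    ("EVENT", r"on\w+\s*="),    # event-handler attribute
    ("PROTO", r"javascript:"),  # script pseudo-protocol
    ("QUOTE", r'[\'"]'),        # quote characters
    ("OTHER", r"."),            # anything else, one char at a time
]
PATTERN = re.compile("|".join("(?P<%s>%s)" % (n, p) for n, p in TOKEN_SPEC),
                     re.IGNORECASE)

def tokenize(payload):
    # Emit the class label of each matched fragment, left to right.
    return [m.lastgroup for m in PATTERN.finditer(payload)]
```

The resulting label sequences, rather than raw characters, would then serve as the observation symbols of the attack vector model.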
|
1009.3712
|
Preventing SQL Injection through Automatic Query Sanitization with
ASSIST
|
cs.SE cs.DB
|
Web applications are becoming an essential part of our everyday lives. Many
of our activities are dependent on the functionality and security of these
applications. As the scale of these applications grows, injection
vulnerabilities such as SQL injection are major security challenges for
developers today. This paper presents the technique of automatic query
sanitization to automatically remove SQL injection vulnerabilities in code. In
our technique, a combination of static analysis and program transformation is
used to automatically instrument web applications with sanitization code. We
have implemented this technique in a tool named ASSIST (Automatic and Static
SQL Injection Sanitization Tool) for protecting Java-based web applications.
Our experimental evaluation showed that our technique is effective against SQL
injection vulnerabilities and has a low overhead.
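The effect of such a transformation can be illustrated (in Python with sqlite3 rather than the Java setting of ASSIST, and with hypothetical names) by contrasting a concatenated query with its parameterized form:

```python
import sqlite3

def vulnerable_lookup(conn, name):
    # String concatenation: attacker input is parsed as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'").fetchall()

def sanitized_lookup(conn, name):
    # What sanitization amounts to: the value is bound as a parameter
    # and can never change the query structure.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])
payload = "' OR '1'='1"
leaked = vulnerable_lookup(conn, payload)  # injection returns every row
safe = sanitized_lookup(conn, payload)     # bound parameter matches nothing
```

Automatically rewriting the first form into the second, guided by static analysis, is the essence of the instrumentation.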
|
1009.3713
|
Relational Constraint Driven Test Case Synthesis for Web Applications
|
cs.SE cs.DB
|
This paper proposes a relational constraint driven technique that synthesizes
test cases automatically for web applications. Using a static analysis,
servlets can be modeled as relational transducers, which manipulate backend
databases. We present a synthesis algorithm that generates a sequence of HTTP
requests for simulating a user session. The algorithm relies on backward
symbolic image computation for reaching a certain database state, given a code
coverage objective. With a slight adaptation, the technique can be used for
discovering workflow attacks on web applications.
|
1009.3728
|
Network-Error Correcting Codes using Small Fields
|
cs.IT math.IT
|
Existing construction algorithms of block network-error correcting codes
require a rather large field size, which grows with the size of the network and
the number of sinks, and can thereby be prohibitive in large networks. In this
work, we give an algorithm which, starting from a given network-error
correcting code, can obtain another network code using a small field, with the
same error correcting capability as the original code. An algorithm for
designing network codes using small field sizes proposed recently by Ebrahimi
and Fragouli can be seen as a special case of our algorithm. The major step in
our algorithm is to find a least degree irreducible polynomial which is coprime
to another large degree polynomial. We utilize the algebraic properties of
finite fields to implement this step so that it becomes much faster than the
brute-force method. As a result, the algorithm of Ebrahimi and Fragouli is
also sped up.
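The step described above can be sketched for the binary case: encoding GF(2)[x] polynomials as integer bitmasks, one can search by brute force for the least-degree irreducible polynomial coprime to a given polynomial (the paper's actual algorithm exploits finite-field properties to perform this step much faster):

```python
def gf2_mod(a, b):
    # Remainder of polynomial a modulo b over GF(2); bit i of the
    # integer is the coefficient of x^i.
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    # Euclidean algorithm on GF(2) polynomials.
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_irreducible(p):
    deg = p.bit_length() - 1
    if deg < 1:
        return False
    # Brute force: p has no divisor of degree 1..deg//2.
    return all(gf2_mod(p, d) != 0 for d in range(2, 1 << (deg // 2 + 1)))

def least_coprime_irreducible(q):
    # Smallest irreducible polynomial (increasing integer order visits
    # degrees in increasing order) that is coprime to q.
    p = 2  # start from x
    while not (is_irreducible(p) and gf2_gcd(p, q) == 1):
        p += 1
    return p
```

For example, for q = x^2 + x = x(x + 1), both degree-1 irreducibles divide q, so the search returns the degree-2 polynomial x^2 + x + 1.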
|
1009.3751
|
Matching Dyadic Distributions to Channels
|
cs.IT math.IT math.PR
|
Many communication channels with discrete input have non-uniform capacity
achieving probability mass functions (PMF). By parsing a stream of independent
and equiprobable bits according to a full prefix-free code, a modulator can
generate dyadic PMFs at the channel input. In this work, we show that for
discrete memoryless channels and for memoryless discrete noiseless channels,
searching for good dyadic input PMFs is equivalent to minimizing the
Kullback-Leibler distance between a dyadic PMF and a weighted version of the
capacity achieving PMF. We define a new algorithm called Geometric Huffman
Coding (GHC) and prove that GHC finds the optimal dyadic PMF in O(m \log m)
steps where m is the number of input symbols of the considered channel.
Furthermore, we prove that by generating dyadic PMFs of blocks of consecutive
input symbols, GHC achieves capacity when the block length goes to infinity.
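The objective described above can be illustrated with a sketch (this shows the quantity GHC minimizes, not the GHC algorithm itself): a full prefix-free code with codeword lengths l_i induces the dyadic PMF 2^{-l_i} at the modulator output, and candidate codes are compared by Kullback-Leibler distance to the (possibly weighted) capacity-achieving PMF:

```python
import math

def dyadic_pmf(code_lengths):
    # A full prefix-free code (Kraft sum exactly 1) with codeword
    # lengths l_i induces the dyadic PMF d_i = 2^{-l_i}.
    d = [2.0 ** -l for l in code_lengths]
    assert abs(sum(d) - 1.0) < 1e-12, "code is not full"
    return d

def kl(d, p):
    # Kullback-Leibler distance D(d || p) in bits.
    return sum(di * math.log2(di / pi) for di, pi in zip(d, p) if di > 0)

# Toy target PMF and two candidate dyadic PMFs; GHC finds the KL
# minimizer in O(m log m) rather than by enumeration as done here.
target = [0.5, 0.3, 0.2]
best = min([dyadic_pmf([1, 2, 2]), dyadic_pmf([2, 2, 1])],
           key=lambda d: kl(d, target))
```

Blocking consecutive input symbols enlarges the family of dyadic PMFs available, which is how capacity is approached as the block length grows.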
|
1009.3771
|
An extensible web interface for databases and its application to storing
biochemical data
|
cs.PL cs.CE
|
This paper presents a generic web-based database interface implemented in
Prolog. We discuss the advantages of the implementation platform and
demonstrate the system's applicability in providing access to integrated
biochemical data. Our system exploits two libraries of SWI-Prolog to create a
schema-transparent interface within a relational setting. As is expected in
declarative programming, the interface was written with minimal programming
effort due to the high level of the language and its suitability to the task.
We highlight two of Prolog's features that are well suited to the task at hand:
term representation of structured documents and the relational nature of Prolog,
which facilitates transparent integration of relational databases. Although we
developed the system for accessing in-house biochemical and genomic data, the
interface is generic and provides a number of extensible features. We describe
some of these features with references to our research databases. Finally we
outline an in-house library that facilitates interaction between Prolog and the
R statistical package. We describe how it has been employed in the present
context to store output from statistical analysis onto the database.
|
1009.3802
|
Robust Low-Rank Subspace Segmentation with Semidefinite Guarantees
|
cs.CV cs.IT cs.LG math.IT
|
Recently there is a line of research proposing to employ Spectral
Clustering (SC) to segment (group; throughout the paper, we use segmentation,
clustering, and grouping, and their verb forms, interchangeably)
high-dimensional structural data such as those (approximately) lying on
subspaces (following {liu2010robust}, we use the term "subspace" to denote both
linear and affine subspaces; there is a trivial conversion between
the two as mentioned therein) or low-dimensional
manifolds. By learning the affinity matrix in the form of sparse
reconstruction, techniques proposed in this vein often considerably boost the
performance in subspace settings where traditional SC can fail. Despite the
success, there are fundamental problems that have been left unsolved: the
spectrum property of the learned affinity matrix cannot be gauged in advance,
and there is often one ugly symmetrization step that post-processes the
affinity for SC input. Hence we advocate to enforce the symmetric positive
semidefinite constraint explicitly during learning (Low-Rank Representation
with Positive SemiDefinite constraint, or LRR-PSD), and show that it can in
fact be solved efficiently by an elegant scheme rather than by general-purpose
SDP solvers, which usually scale poorly. We provide rigorous mathematical
derivations to show that, in its canonical form, LRR-PSD is equivalent to the
recently proposed Low-Rank Representation (LRR) scheme {liu2010robust}, and
hence offer theoretic and practical insights to both LRR-PSD and LRR, inviting
future research. As for the computational cost, our proposal is at most
comparable to that of LRR, if not less. We validate our theoretic analysis and
optimization scheme by experiments on both synthetic and real data sets.
|
1009.3824
|
Optimization and Convergence of Observation Channels in Stochastic
Control
|
math.OC cs.IT math.IT
|
This paper studies the optimization of observation channels (stochastic
kernels) in partially observed stochastic control problems. In particular,
existence and continuity properties are investigated mostly (but not
exclusively) concentrating on the single-stage case. Continuity properties of
the optimal cost in channels are explored under total variation, setwise
convergence, and weak convergence. Sufficient conditions for compactness of a
class of channels under total variation and setwise convergence are presented
and applications to quantization are explored.
|