| id | title | categories | abstract |
|---|---|---|---|
1106.1925
|
Ranking via Sinkhorn Propagation
|
stat.ML cs.IR cs.LG
|
It is of increasing importance to develop learning methods for ranking. In
contrast to many learning objectives, however, the ranking problem presents
difficulties due to the fact that the space of permutations is not smooth. In
this paper, we examine the class of rank-linear objective functions, which
includes popular metrics such as precision and discounted cumulative gain. In
particular, we observe that expectations of these gains are completely
characterized by the marginals of the corresponding distribution over
permutation matrices. Thus, the expectations of rank-linear objectives can
always be described through locations in the Birkhoff polytope, i.e.,
doubly-stochastic matrices (DSMs). We propose a technique for learning
DSM-based ranking functions using an iterative projection operator known as
Sinkhorn normalization. Gradients of this operator can be computed via
backpropagation, resulting in an algorithm we call Sinkhorn propagation, or
SinkProp. This approach can be combined with a wide range of gradient-based
approaches to rank learning. We demonstrate the utility of SinkProp on several
information retrieval data sets.
|
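The Sinkhorn normalization at the core of SinkProp can be sketched in a few lines: alternately normalizing the rows and columns of a positive matrix drives it toward the Birkhoff polytope. This is a minimal illustration of the forward projection only (the matrix and iteration count are arbitrary choices, not from the paper):

```python
import numpy as np

def sinkhorn(M, n_iters=100):
    """Alternately normalize the rows and columns of a positive matrix.

    Repeated alternating normalization converges to a doubly stochastic
    matrix (a point in the Birkhoff polytope): all rows and columns sum to 1.
    """
    M = np.asarray(M, dtype=float).copy()
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)  # rows sum to 1
        M /= M.sum(axis=0, keepdims=True)  # columns sum to 1
    return M

# Project an arbitrary positive matrix toward the Birkhoff polytope.
P = sinkhorn(np.array([[1.0, 2.0], [3.0, 4.0]]))
```

In SinkProp, gradients are backpropagated through these normalization iterations; the sketch shows only the forward pass.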
1106.1933
|
Lyapunov stochastic stability and control of robust dynamic coalitional
games with transferable utilities
|
cs.GT cs.LG cs.SY math.OC
|
This paper considers a dynamic game with transferable utilities (TU), where
the characteristic function is a continuous-time bounded mean ergodic process.
A central planner interacts continuously over time with the players by choosing
the instantaneous allocations subject to budget constraints. Before the game
starts, the central planner knows the nature of the process (bounded mean
ergodic), the bounded set from which the coalitions' values are sampled, and
the long run average coalitions' values. On the other hand, he has no knowledge
of the underlying probability function generating the coalitions' values. Our
goal is to find allocation rules that use a measure of the extra reward that a
coalition has received up to the current time by re-distributing the budget
among the players. The objective is two-fold: i) guaranteeing convergence of
the average allocations to the core (or a specific point in the core) of the
average game, ii) driving the coalitions' excesses to an a priori given cone.
The resulting allocation rules are robust as they guarantee the aforementioned
convergence properties despite the uncertain and time-varying nature of the
coalitions' values. We highlight three main contributions. First, we design an
allocation rule based on full observation of the extra reward so that the
average allocation approaches a specific point in the core of the average game,
while the coalitions' excesses converge to an a priori given direction. Second,
we design a new allocation rule based on partial observation of the extra
reward so that the average allocation converges to the core of the average
game, while the coalitions' excesses converge to an a priori given cone. And
third, we establish connections to approachability theory and attainability
theory.
|
1106.1940
|
The Degree Sequence of Random Apollonian Networks
|
cs.SI math.PR physics.soc-ph
|
We analyze the asymptotic behavior of the degree sequence of Random
Apollonian Networks \cite{maximal}. For previous weaker results see
\cite{comment,maximal}.
|
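The standard face-subdivision construction of a Random Apollonian Network is easy to simulate, which makes its degree sequence directly observable. A minimal sketch (the network size and seed are illustrative):

```python
import random

def random_apollonian_network(n_insertions, seed=0):
    """Grow a Random Apollonian Network and return its degree sequence.

    Start from a triangle; repeatedly pick a uniformly random face, place
    a new vertex inside it, and join it to the face's three corners,
    splitting that face into three new faces.
    """
    rng = random.Random(seed)
    degree = {0: 2, 1: 2, 2: 2}   # the initial triangle
    faces = [(0, 1, 2)]
    for new in range(3, 3 + n_insertions):
        a, b, c = faces.pop(rng.randrange(len(faces)))
        degree[new] = 3
        for v in (a, b, c):
            degree[v] += 1
        faces += [(a, b, new), (a, c, new), (b, c, new)]
    return degree

deg = random_apollonian_network(1000)
```

Each insertion adds one vertex and three edges, so the degree sum grows by exactly six per step, which is a quick sanity check on any implementation.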
1106.1944
|
Operating LDPC Codes with Zero Shaping Gap
|
cs.IT math.IT
|
Unequal transition probabilities between input and output symbols, input
power constraints, or input symbols of unequal durations can lead to
non-uniform capacity achieving input distributions for communication channels.
Using uniform input distributions reduces the achievable rate; this rate loss
is called the shaping gap. Gallager's idea for reliable communication with zero
shaping gap is to do encoding, matching, and joint decoding and dematching. In
this work, a scheme is proposed that consists of matching, encoding, decoding,
and dematching. Only the matching is channel specific, whereas the coding is
not; thus off-the-shelf LDPC codes can be applied. Analytical formulas for the
shaping and coding gap of the proposed scheme are derived, and it is shown that
the shaping gap can be made zero. Numerical results show that the proposed
scheme makes it possible to operate off-the-shelf LDPC codes with zero shaping
gap and a coding gap that is unchanged compared to uniform transmission.
|
1106.1953
|
Analysis of cubic permutation polynomials for turbo codes
|
cs.IT math.IT
|
Quadratic permutation polynomials (QPPs) have been widely studied and used as
interleavers in turbo codes. However, less attention has been given to cubic
permutation polynomials (CPPs). This paper proves a theorem giving necessary
and sufficient conditions for a cubic permutation polynomial to be a
null permutation polynomial. The result is used to reduce the search complexity
of CPP interleavers for short lengths (multiples of 8, between 40 and 352), by
improving the distance spectrum over the set of polynomials with the largest
spreading factor. The comparison with QPP interleavers is made in terms of
search complexity and upper bounds of the bit error rate (BER) and frame error
rate (FER) for AWGN and for independent fading Rayleigh channels. Cubic
permutation polynomials leading to better performance than quadratic
permutation polynomials are found for some lengths.
|
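The basic building block of any such interleaver search is testing whether a candidate polynomial actually permutes Z_N, which can be done by brute force for the short lengths considered here. A minimal sketch (the example coefficients are illustrative, not taken from the paper):

```python
def is_permutation_polynomial(f1, f2, f3, N):
    """Brute-force check that f(x) = f1*x + f2*x^2 + f3*x^3 (mod N)
    is a permutation polynomial, i.e. hits every residue exactly once."""
    values = {(f1 * x + f2 * x * x + f3 * x ** 3) % N for x in range(N)}
    return len(values) == N

# f(x) = x + 20x^2 permutes Z_40 (evens map to themselves, odds shift
# by 20); f(x) = 2x does not, since 2x mod 40 is always even.
qpp = is_permutation_polynomial(1, 20, 0, 40)
not_pp = is_permutation_polynomial(2, 0, 0, 40)
```

An interleaver search would wrap this test in a loop over coefficient tuples, keeping only permutation polynomials and then ranking them by spreading factor and distance spectrum, as the abstract describes.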
1106.1957
|
Interdefinability of defeasible logic and logic programming under the
well-founded semantics
|
cs.AI cs.LO
|
We provide a method of translating theories of Nute's defeasible logic into
logic programs, and a corresponding translation in the opposite direction.
Under certain natural restrictions, the conclusions of defeasible theories
under the ambiguity propagating defeasible logic ADL correspond to those of the
well-founded semantics for normal logic programs, and so it turns out that the
two formalisms are closely related. Using the same translation of logic
programs into defeasible theories, the semantics for the ambiguity blocking
defeasible logic NDL can be seen as indirectly providing an ambiguity blocking
semantics for logic programs. We also provide antimonotone operators for both
ADL and NDL, each based on the Gelfond-Lifschitz (GL) operator for logic
programs. For defeasible theories without defeaters or priorities on rules, the
operator for ADL corresponds to the GL operator and so can be seen as partially
capturing the consequences according to ADL. Similarly, the operator for NDL
captures the consequences according to NDL, though in this case no restrictions
on theories apply. Both operators can be used to define stable model semantics
for defeasible theories.
|
1106.1969
|
The Capacity Region of Multiway Relay Channels Over Finite Fields with
Full Data Exchange
|
cs.IT math.IT
|
The multi-way relay channel is a multicast network where L users exchange
data through a relay. In this paper, the capacity region of a class of
multi-way relay channels is derived, where the channel inputs and outputs take
values over finite fields. The cut-set upper bound to the capacity region is
derived and is shown to be achievable by our proposed functional-decode-forward
coding strategy. More specifically, for the general case where the users can
transmit at possibly different rates, functional-decode-forward, combined with
rate splitting and joint source-channel decoding, is proved to achieve the
capacity region; while for the case where all users transmit at a common rate,
rate splitting and joint source-channel decoding are not required to achieve
the capacity. That the capacity-achieving coding strategies do not utilize the
users' received signals in the users' encoding functions implies that feedback
does not increase the capacity region of this class of multi-way relay
channels.
|
1106.1975
|
Exact Reconstruction of the Rank Order Coding using Frames Theory
|
cs.CV cs.NE
|
Our goal is to revisit rank order coding by proposing an original exact
decoding procedure for it. Rank order coding was proposed by Simon Thorpe et
al. who stated that the retina represents the visual stimulus by the order in
which its cells are activated. A classical rank order coder/decoder was then
designed on this basis [1]. However, the decoding procedure employed yields
reconstruction errors that limit the model's rate/quality performance when used
as an image codec. The attempts made in the literature to overcome this issue
are time consuming and alter the coding procedure, or lack mathematical support
and feasibility for standard-size images. Here we solve this problem in an
original fashion by using frame theory, where a frame of a vector space
generalizes the notion of a basis.
First, we prove that the analyzing filter bank considered is a frame, and then
we define the corresponding dual frame that is necessary for the exact image
reconstruction. Second, to deal with the problem of memory overhead, we design
a recursive out-of-core blockwise algorithm for the computation of this dual
frame. Our work provides a mathematical formalism for the retinal model under
study and defines a simple and exact reverse transform for it with up to 270 dB
of PSNR gain compared to [1]. Furthermore, the framework presented here can be
extended to several models of the visual cortical areas using redundant
representations.
|
1106.1998
|
A Linear Time Natural Evolution Strategy for Non-Separable Functions
|
cs.AI
|
We present a novel Natural Evolution Strategy (NES) variant, the Rank-One NES
(R1-NES), which uses a low rank approximation of the search distribution
covariance matrix. The algorithm allows computation of the natural gradient
with cost linear in the dimensionality of the parameter space, and excels in
solving high-dimensional non-separable problems, including the best result to
date on the Rosenbrock function (512 dimensions).
|
1106.2007
|
Modular networks of word correlations on Twitter
|
physics.soc-ph cs.HC cs.SI
|
Complex networks are important tools for analyzing the information flow in
many aspects of nature and human society. Using data from the microblogging
service Twitter, we study networks of correlations in the appearance of words
from three different categories: international brands, nouns, and major US
cities. We create networks where the strength of links is determined by a
similarity measure based on the rate of coappearance of words. In comparison
with the null model, where words are assumed to be uncorrelated, the
heavy-tailed distribution of pair correlations is shown to be a consequence of
modules of words representing similar entities.
|
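The abstract does not spell out the coappearance-based similarity measure, so as a purely illustrative stand-in, a Jaccard-style coappearance rate over a set of messages might look like this (the measure and the toy data are assumptions, not the paper's definition):

```python
def coappearance_similarity(messages, w1, w2):
    """Jaccard-style coappearance rate: fraction of messages containing
    both words among messages containing at least one of them.
    (Illustrative; the paper's exact similarity measure may differ.)"""
    has1 = sum(1 for m in messages if w1 in m)
    has2 = sum(1 for m in messages if w2 in m)
    both = sum(1 for m in messages if w1 in m and w2 in m)
    union = has1 + has2 - both
    return both / union if union else 0.0

# Toy "tweets" as word sets; "new" and "york" coappear often.
tweets = [{"new", "york"}, {"new", "york", "pizza"}, {"pizza"}, {"york"}]
s = coappearance_similarity(tweets, "new", "york")
```

Link weights of this kind, compared against an uncorrelated null model, are what reveal the modular structure the abstract describes.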
1106.2013
|
Secrecy Results for Compound Wiretap Channels
|
cs.IT math.IT
|
We derive a lower bound on the secrecy capacity of the compound wiretap
channel with channel state information at the transmitter which matches the
general upper bound on the secrecy capacity of general compound wiretap
channels given by Liang et al., thus establishing a full coding theorem in
this case. We achieve this with a stronger secrecy criterion and the maximum
error probability criterion, and with a decoder that is robust against the
effect of randomisation in the encoding. This relieves us from the need of
decoding the randomisation parameter which is in general not possible within
this model. Moreover we prove a lower bound on the secrecy capacity of the
compound wiretap channel without channel state information and derive a
multi-letter expression for the capacity in this communication scenario.
|
1106.2025
|
Censored Truncated Sequential Spectrum Sensing for Cognitive Radio
Networks
|
cs.SY cs.IT math.IT stat.AP
|
Reliable spectrum sensing is a key functionality of a cognitive radio
network. Cooperative spectrum sensing improves the detection reliability of a
cognitive radio system but also increases the system energy consumption which
is a critical factor particularly for low-power wireless technologies. A
censored truncated sequential spectrum sensing technique is considered as an
energy-saving approach. To design the underlying sensing parameters, the
maximum energy consumption per sensor is minimized subject to a lower bounded
global probability of detection and an upper bounded false alarm rate. In this
way, both the interference to the primary user due to missed detection and the
network throughput resulting from a low false alarm rate are controlled. We
compare the
performance of the proposed scheme with a fixed sample size censoring scheme
under different scenarios. It is shown that as the sensing cost of the
cognitive radios increases, the energy efficiency of the censored truncated
sequential approach grows significantly.
|
1106.2050
|
Multi-User Privacy: The Gray-Wyner System and Generalized Common
Information
|
cs.IT math.IT
|
The problem of preserving privacy when a multivariate source is required to
be revealed partially to multiple users is modeled as a Gray-Wyner source
coding problem with K correlated sources at the encoder and K decoders in which
the kth decoder, k = 1, 2, ...,K, losslessly reconstructs the kth source via a
common link and a private link. The privacy requirement of keeping each decoder
oblivious of all sources other than the one intended for it is introduced via
an equivocation constraint at each decoder such that the total equivocation
summed over all decoders is E. The set of achievable rates-equivocation tuples
is completely characterized. Using this characterization, two different
definitions of common information are presented and are shown to be equivalent.
|
1106.2055
|
Channels That Die
|
cs.IT math.IT
|
Given the possibility of communication systems failing catastrophically, we
investigate limits to communicating over channels that fail at random times.
These channels are finite-state semi-Markov channels. We show that
communication with arbitrarily small probability of error is not possible.
Making use of results in finite blocklength channel coding, we determine
sequences of blocklengths that optimize transmission volume communicated at
fixed maximum message error probabilities. We provide a partial ordering of
communication channels. A dynamic programming formulation is used to show the
structural result that channel state feedback does not improve performance.
|
1106.2057
|
Discriminatory Lossy Source Coding: Side Information Privacy
|
cs.IT math.IT
|
A lossy source coding problem is studied in which a source encoder
communicates with two decoders, one with and one without correlated side
information with an additional constraint on the privacy of the side
information at the uninformed decoder. Two cases of this problem arise
depending on the availability of the side information at the encoder. The set
of all feasible rate-distortion-equivocation tuples is characterized for both
cases. The difference between the informed and uninformed cases and the
advantages of encoder side information for enhancing privacy are highlighted
for a binary symmetric source with erasure side information and Hamming
distortion.
|
1106.2109
|
Analysis of Error Floors of Non-Binary LDPC Codes over MBIOS Channel
|
cs.IT math.IT
|
In this paper, we investigate the error floors of non-binary low-density
parity-check (LDPC) codes transmitted over the memoryless binary-input
output-symmetric (MBIOS) channels. We provide a necessary and sufficient
condition for successful decoding of zigzag cycle codes over the MBIOS channel
by the belief propagation decoder. Using this necessary and sufficient
condition, we construct an expurgated ensemble of non-binary LDPC codes that
exhibits lower error floors. Finally, we show lower bounds on the
error floors for the expurgated LDPC code ensembles over the MBIOS channel.
|
1106.2113
|
Using Hopfield to Solve Resource-Leveling Problem
|
cs.NE
|
Although the traditional permutation matrix used with Hopfield networks can
describe many common problems, it is limited when solving more complicated
problems with additional constraints, such as resource leveling, which is an
NP-hard problem. This paper seeks a better solution using neural networks. To
give a neural network description of the resource-leveling problem, a new
description method called the Augmented permute matrix is proposed, extending
the capability of the traditional one. An Embedded Hybrid Model combining the
Hopfield model and simulated annealing (SA) is put forward to fundamentally
improve the optimization, with the Hopfield network serving as a state
generator for the SA. The experimental results show that the Augmented permute
matrix can completely and appropriately describe the application. The energy
function and hybrid model given in this study are also highly efficient in
solving the resource-leveling problem.
|
1106.2124
|
Omni-tomography/Multi-tomography -- Integrating Multiple Modalities for
Simultaneous Imaging
|
physics.med-ph cs.CV math.NA stat.AP
|
Current tomographic imaging systems need major improvements, especially when
multi-dimensional, multi-scale, multi-temporal and multi-parametric phenomena
are under investigation. Both preclinical and clinical imaging now depend on in
vivo tomography, often requiring separate evaluations by different imaging
modalities to define morphologic details, delineate interval changes due to
disease or interventions, and study physiological functions that have
interconnected aspects. Over the past decade, fusion of multimodality images
has emerged with two different approaches: post-hoc image registration and
combined acquisition on PET-CT, PET-MRI and other hybrid scanners. There are
intrinsic limitations for both the post-hoc image analysis and dual/triple
modality approaches defined by registration errors and physical constraints in
the acquisition chain. We envision that tomography will evolve beyond current
modality fusion and towards grand fusion, a large scale fusion of all or many
imaging modalities, which may be referred to as omni-tomography or
multi-tomography. Unlike modality fusion, grand fusion is here proposed for
truly simultaneous but often localized reconstruction in terms of all or many
relevant imaging mechanisms such as CT, MRI, PET, SPECT, US, optical, and
possibly more. In this paper, the technical basis for omni-tomography is
introduced and illustrated with a top-level design of a next generation
scanner, interior tomographic reconstructions of representative modalities, and
anticipated applications of omni-tomography.
|
1106.2134
|
Components in time-varying graphs
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Real complex systems are inherently time-varying. Thanks to new communication
systems and novel technologies, it is today possible to produce and analyze
social and biological networks with detailed information on the time of
occurrence and duration of each link. However, standard graph metrics
introduced so far in complex network theory are mainly suited for static
graphs, i.e., graphs in which the links do not change over time, or graphs
built from time-varying systems by aggregating all the links as if they were
concurrent in time. In this paper, we extend the notion of connectedness, and
the definitions of node and graph components, to the case of time-varying
graphs, which are represented as time-ordered sequences of graphs defined over
a fixed set of nodes. We show that the problem of finding strongly connected
components in a time-varying graph can be mapped into the problem of
discovering the maximal cliques in a suitably constructed static graph,
which we name the affine graph. It is therefore an NP-complete problem. As a
practical example, we have performed a temporal component analysis of
time-varying graphs constructed from three data sets of human interactions. The
results show that taking time into account in the definition of graph
components makes it possible to capture important features of real systems. In
particular,
we observe a large variability in the size of node temporal in- and
out-components. This is due to intrinsic fluctuations in the activity patterns
of individuals, which cannot be detected by static graph analysis.
|
1106.2156
|
A Computational Framework for Nonlinear Dimensionality Reduction of
Large Data Sets: The Exploratory Inspection Machine (XIM)
|
cs.NE
|
In this paper, we present a novel computational framework for nonlinear
dimensionality reduction which is specifically suited to process large data
sets: the Exploratory Inspection Machine (XIM). XIM introduces a conceptual
cross-link between hitherto separate domains of machine learning, namely
topographic vector quantization and divergence-based neighbor embedding
approaches. There are three ways to conceptualize XIM, namely (i) as the
inversion of the Exploratory Observation Machine (XOM) and its variants, such
as Neighbor Embedding XOM (NE-XOM), (ii) as a powerful optimization scheme for
divergence-based neighbor embedding cost functions inspired by Stochastic
Neighbor Embedding (SNE) and its variants, such as t-distributed SNE (t-SNE),
and (iii) as an extension of topographic vector quantization methods, such as
the Self-Organizing Map (SOM). By preserving both global and local data
structure, XIM combines the virtues of classical and advanced recent embedding
methods. It permits direct visualization of large data collections without the
need for prior data reduction. Finally, XIM can contribute to many application
domains of data analysis and visualization important throughout the sciences
and engineering, such as pattern matching, constrained incremental learning,
data clustering, and the analysis of non-metric dissimilarity data.
|
1106.2229
|
Fast, Linear Time Hierarchical Clustering using the Baire Metric
|
stat.ML cs.IR stat.AP
|
The Baire metric induces an ultrametric on a dataset and is of linear
computational complexity, contrasted with the standard quadratic time
agglomerative hierarchical clustering algorithm. In this work we evaluate
empirically this new approach to hierarchical clustering. We compare
hierarchical clustering based on the Baire metric with (i) agglomerative
hierarchical clustering, in terms of algorithm properties; (ii) generalized
ultrametrics, in terms of definition; and (iii) fast clustering through k-means
partitioning, in terms of quality of results. For the latter, we carry out an
in-depth astronomical study. We apply the Baire distance to spectrometric and
photometric redshifts from the Sloan Digital Sky Survey using, in this work,
about half a million astronomical objects. We want to know how well the (more
costly to determine) spectrometric redshifts can predict the (more easily
obtained) photometric redshifts, i.e. we seek to regress the spectrometric on
the photometric redshifts, and we use clusterwise regression for this.
|
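The Baire metric itself is simple to state in code: the distance between two digit strings is determined entirely by their longest common prefix, which is what makes linear-time clustering by prefix bucketing possible. A minimal sketch (the base and example strings are illustrative):

```python
def baire_distance(x, y, base=10):
    """Baire (longest-common-prefix) ultrametric on digit strings:
    d(x, y) = base**(-k), where k is the length of the longest common
    prefix, and d(x, y) = 0 when the strings are identical."""
    k = 0
    for a, b in zip(x, y):
        if a != b:
            break
        k += 1
    return 0.0 if x == y else base ** (-k)

# The strong (ultrametric) triangle inequality holds:
# d(x, z) <= max(d(x, y), d(y, z)).
d1 = baire_distance("3141", "3145")
d2 = baire_distance("3145", "3191")
d3 = baire_distance("3141", "3191")
```

Grouping strings by successive prefixes yields the hierarchy directly in one pass over the data, versus the quadratic cost of agglomerative clustering.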
1106.2233
|
Clustering with Multi-Layer Graphs: A Spectral Perspective
|
cs.LG cs.CV cs.SI stat.ML
|
Observational data is often multimodal in nature, which means that
it can be naturally represented by a multi-layer graph whose layers share the
same set of vertices (users) with different edges (pairwise relationships). In
this paper, we address the problem of combining different layers of the
multi-layer graph for improved clustering of the vertices compared to using
layers independently. We propose two novel methods, which are based on joint
matrix factorization and graph regularization framework respectively, to
efficiently combine the spectrum of the multiple graph layers, namely the
eigenvectors of the graph Laplacian matrices. In each case, the resulting
combination, which we call a "joint spectrum" of multiple graphs, is used for
clustering the vertices. We evaluate our approaches by simulations with several
real world social network datasets. Results demonstrate the superior or
competitive performance of the proposed methods over state-of-the-art techniques
and common baseline methods, such as co-regularization and summation of
information from individual graphs.
|
1106.2289
|
PRESY: A Context Based Query Reformulation Tool for Information
Retrieval on the Web
|
cs.IR
|
Problem Statement: The huge amount of information on the web, as well as the
growth of new inexperienced users, creates new challenges for information
retrieval. It has become increasingly difficult for these users to find
relevant documents that satisfy their individual needs. Certainly the current
search engines (such as Google, Bing and Yahoo) offer an efficient way to
browse the web content. However, the result quality depends heavily on the
users' queries, which need to be precise in order to find relevant documents.
This task remains complicated for the majority of inexperienced users who
cannot express their needs with significant words in the query. For that
reason, we believe that a reformulation of the user's initial query can be a
good alternative to improve the information selectivity. This study proposes a
novel approach and presents a prototype system called PRESY (Profile-based
REformulation SYstem) for information retrieval on the web. Approach: It uses
an incremental approach to categorize users by constructing a contextual base.
The latter is composed of two types of context (static and dynamic) obtained
using the users' profiles. The proposed architecture was implemented using the
.Net environment to perform query reformulation tests. Results: The
experiments given at the end of this article show that the precision of the
returned content is effectively improved. The tests were performed with the
most popular search engines (i.e. Google, Bing and Yahoo), selected in
particular for their high selectivity. Among the given results, we found that
query reformulation improves the first three results by 10.7% and the next
seven returned elements by 11.7%. So, as we can see, the reformulation of
users' initial queries improves the relevance of the returned content.
|
1106.2312
|
Evolutionary Biclustering of Clickstream Data
|
cs.NE
|
Biclustering is a two way clustering approach involving simultaneous
clustering along two dimensions of the data matrix. Finding biclusters of web
objects (i.e. web users and web pages) is an emerging topic in the context of
web usage mining. It overcomes the problem associated with traditional
clustering methods by allowing automatic discovery of browsing pattern based on
a subset of attributes. A coherent bicluster of clickstream data is a local
browsing pattern such that users in the bicluster exhibit correlated browsing
patterns through a subset of pages of a web site. This paper proposes a new
application of biclustering to web data using a combination of heuristics and
meta-heuristics such as K-means, Greedy Search Procedure and Genetic Algorithms
to identify coherent browsing patterns. Experiments are conducted on the
benchmark msnbc clickstream dataset from the UCI repository. Results
demonstrate the efficiency and beneficial outcome of the proposed method,
correlating the users and pages of a web site to a high degree. This approach
shows excellent performance at finding highly overlapping coherent biclusters
from web data.
|
1106.2327
|
A framework for coupled deformation-diffusion analysis with application
to degradation/healing
|
cs.NA cs.CE math.NA physics.comp-ph
|
This paper deals with the formulation and numerical implementation of a fully
coupled continuum model for deformation-diffusion in linearized elastic solids.
The mathematical model takes into account the effect of the deformation on the
diffusion process, and the effect of the transport of an inert chemical species
on the deformation of the solid. We then present a robust computational
framework for solving the proposed mathematical model, which consists of
coupled non-linear partial differential equations. It should be noted that many
popular numerical formulations may produce unphysical negative values for the
concentration, particularly, when the diffusion process is anisotropic. The
violation of the non-negative constraint by these numerical formulations is not
mere numerical noise. In the proposed computational framework we employ a novel
numerical formulation that will ensure that the concentration of the diffusant
be always non-negative, which is one of the main contributions of this paper.
Representative numerical examples are presented to show the robustness,
convergence, and performance of the proposed computational framework. Another
contribution of this paper is to systematically study the effect of the transport
of the diffusant on the deformation of the solid and vice-versa, and their
implication in modeling degradation/healing of materials. We show that the
coupled response is both qualitatively and quantitatively different from the
uncoupled response.
|
1106.2357
|
Comparing Haar-Hilbert and Log-Gabor Based Iris Encoders on Bath Iris
Image Database
|
cs.CV
|
This paper introduces a new family of iris encoders which use the 2-dimensional
Haar Wavelet Transform for noise attenuation and the Hilbert Transform to encode
the iris texture. In order to prove the usefulness of the newly proposed iris
encoding approach, the recognition results obtained by using these new encoders
are compared to those obtained using the classical Log-Gabor iris encoder.
Twelve tests involving single/multienrollment and conducted on Bath Iris Image
Database are presented here. One of these tests achieves an Equal Error Rate
comparable to the lowest value reported so far for this database. New Matlab
tools for iris image processing are also released together with this paper: a
second version of the Circular Fuzzy Iris Segmentator (CFIS2), a fast Log-Gabor
encoder and two Haar-Hilbert based encoders.
|
1106.2363
|
Random design analysis of ridge regression
|
math.ST cs.AI cs.LG stat.ML stat.TH
|
This work gives a simultaneous analysis of both the ordinary least squares
estimator and the ridge regression estimator in the random design setting under
mild assumptions on the covariate/response distributions. In particular, the
analysis provides sharp results on the ``out-of-sample'' prediction error, as
opposed to the ``in-sample'' (fixed design) error. The analysis also reveals
the effect of errors in the estimated covariance structure, as well as the
effect of modeling errors, neither of which effects are present in the fixed
design setting. The proofs of the main results are based on a simple
decomposition lemma combined with concentration inequalities for random vectors
and matrices.
|
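The ridge estimator analyzed above has the familiar closed form, and setting the regularization to zero recovers ordinary least squares. A minimal sketch of both estimators on a synthetic random design (the data here are noiseless and purely illustrative):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator: argmin_w ||Xw - y||^2 + lam * ||w||^2,
    with closed form w = (X'X + lam I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))   # random design
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                      # noiseless responses
w_ols = ridge(X, y, 0.0)            # lam = 0 reduces to OLS
w_reg = ridge(X, y, 10.0)           # lam > 0 shrinks the estimate
```

With noiseless data and a full-rank design, OLS recovers the true coefficients exactly, while any positive regularization strictly shrinks the estimate's norm, illustrating the bias that the out-of-sample analysis must account for.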
1106.2369
|
Efficient Optimal Learning for Contextual Bandits
|
cs.LG cs.AI stat.ML
|
We address the problem of learning in an online setting where the learner
repeatedly observes features, selects among a set of actions, and receives
reward for the action taken. We provide the first efficient algorithm with an
optimal regret. Our algorithm uses a cost sensitive classification learner as
an oracle and has a running time $\mathrm{polylog}(N)$, where $N$ is the number
of classification rules among which the oracle might choose. This is
exponentially faster than all previous algorithms that achieve optimal regret
in this setting. Our formulation also enables us to create an algorithm with
regret that is additive rather than multiplicative in feedback delay as in all
previous work.
|
1106.2404
|
Some Results on the Information Loss in Dynamical Systems
|
cs.IT math.IT nlin.SI
|
In this work we investigate the information loss in (nonlinear) dynamical
input-output systems and provide some general results. In particular, we
present an upper bound on the information loss rate, defined as the
(non-negative) difference between the entropy rates of the jointly stationary
stochastic processes at the input and output of the system.
We further introduce a family of systems with vanishing information loss
rate. It is shown that not only linear filters belong to that family, but -
under certain circumstances - also finite-precision implementations of the
latter, which typically consist of nonlinear elements.
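As a toy illustration of the quantity discussed above (not an example from the paper), the information loss of a memoryless 2-to-1 map can be computed directly as H(X) - H(Y):

```python
import math

def entropy(pmf):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# X uniform over 4 symbols; the 2-to-1 map Y = X mod 2 loses information.
p_x = [0.25, 0.25, 0.25, 0.25]
p_y = [0.5, 0.5]              # distribution induced at the output
loss = entropy(p_x) - entropy(p_y)
print(loss)                   # 1.0 bit lost per symbol
```

A linear, invertible map would give zero loss, which is the family of systems the abstract refers to.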
|
1106.2414
|
Some remarks on cops and drunk robbers
|
cs.DM cs.GT cs.RO math.CO math.PR
|
The cops and robbers game has been extensively studied under the assumption
of optimal play by both the cops and the robbers. In this paper we study the
problem in which cops are chasing a drunk robber (that is, a robber who
performs a random walk) on a graph. Our main goal is to characterize the "cost
of drunkenness." Specifically, we study the ratio of expected capture times for
the optimal version and the drunk robber one. We also examine the algorithmic
side of the problem; that is, how to compute near-optimal search schedules for
the cops. Finally, we present a preliminary investigation of the invisible
robber game and point out differences between this game and graph search.
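The "cost of drunkenness" can be estimated empirically. Below is a minimal Monte-Carlo sketch (not the paper's analysis): a cop moving along shortest paths chases a random-walk robber on a cycle graph.

```python
import random
from collections import deque

def next_step(adj, src, dst):
    """First move on a shortest path from src to dst (plain BFS)."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                if v == dst:
                    while parent[v] != src:   # rewind to src's neighbor
                        v = parent[v]
                    return v
                queue.append(v)
    return src

def capture_time(adj, cop, robber, rng):
    """Rounds until a shortest-path cop catches a random-walk robber."""
    t = 0
    while cop != robber:
        t += 1
        cop = next_step(adj, cop, robber)
        if cop == robber:
            break
        robber = rng.choice(adj[robber])      # the drunk robber's move
    return t

n = 12                                        # a 12-cycle
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(0)
times = [capture_time(adj, 0, n // 2, rng) for _ in range(200)]
print(sum(times) / len(times))                # empirical expected capture time
```

Dividing this estimate by the capture time against an optimal robber (n/2 steps on a cycle) gives an empirical cost of drunkenness for this graph.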
|
1106.2428
|
On the classification of Hermitian self-dual additive codes over GF(9)
|
math.CO cs.IT math.IT quant-ph
|
Additive codes over GF(9) that are self-dual with respect to the Hermitian
trace inner product have a natural application in quantum information theory,
where they correspond to ternary quantum error-correcting codes. However, these
codes have so far received far less interest from coding theorists than
self-dual additive codes over GF(4), which correspond to binary quantum codes.
Self-dual additive codes over GF(9) have been classified up to length 8, and in
this paper we extend the complete classification to codes of length 9 and 10.
The classification is obtained by using a new algorithm that combines two graph
representations of self-dual additive codes. The search space is first reduced
by the fact that every code can be mapped to a weighted graph, and a different
graph is then introduced that transforms the problem of code equivalence into a
problem of graph isomorphism. By an extension technique, we are able to
classify all optimal codes of length 11 and 12. There are 56,005,876
(11,3^11,5) codes and 6493 (12,3^12,6) codes. We also find the smallest codes
with trivial automorphism group.
|
1106.2429
|
Efficient Transductive Online Learning via Randomized Rounding
|
cs.LG stat.ML
|
Most traditional online learning algorithms are based on variants of mirror
descent or follow-the-leader. In this paper, we present an online algorithm
based on a completely different approach, tailored for transductive settings,
which combines "random playout" and randomized rounding of loss subgradients.
As an application of our approach, we present the first computationally
efficient online algorithm for collaborative filtering with trace-norm
constrained matrices. As a second application, we solve an open question
linking batch learning and transductive online learning.
|
1106.2436
|
From Bandits to Experts: On the Value of Side-Observations
|
cs.LG stat.ML
|
We consider an adversarial online learning setting where a decision maker can
choose an action in every stage of the game. In addition to observing the
reward of the chosen action, the decision maker gets side observations on the
reward he would have obtained had he chosen some of the other actions. The
observation structure is encoded as a graph, where node i is linked to node j
if sampling i provides information on the reward of j. This setting naturally
interpolates between the well-known "experts" setting, where the decision maker
can view all rewards, and the multi-armed bandits setting, where the decision
maker can only view the reward of the chosen action. We develop practical
algorithms with provable regret guarantees, which depend on non-trivial
graph-theoretic properties of the information feedback structure. We also
provide partially-matching lower bounds.
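The paper's algorithms are graph-dependent and not reproduced here; the following is a generic EXP3-style sketch of how side observations can be exploited through importance weighting (the toy instance, exploration rate, and learning rate are all illustrative assumptions):

```python
import math
import random

def exp3_side_obs(graph, rewards, T, eta, gamma, rng):
    """EXP3-style learner for graph feedback: playing arm i also reveals
    the rewards of every arm in graph[i] (importance-weighted updates).
    Log-space weights and explicit exploration keep the updates stable."""
    n = len(graph)
    logw = [0.0] * n
    total = 0.0
    for _ in range(T):
        m = max(logw)
        w = [math.exp(l - m) for l in logw]
        s = sum(w)
        p = [(1 - gamma) * wi / s + gamma / n for wi in w]
        i = rng.choices(range(n), weights=p)[0]
        total += rewards[i]
        for j in graph[i]:                     # all arms observed this round
            # probability that arm j is observed under the current play dist.
            q_j = sum(p[k] for k in range(n) if j in graph[k])
            logw[j] += eta * rewards[j] / q_j  # importance-weighted gain
    return total / T

# Toy instance: playing an arm also reveals its neighbors on a 3-path.
graph = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
rewards = [0.2, 0.5, 0.9]                      # fixed per-arm rewards
avg = exp3_side_obs(graph, rewards, T=2000, eta=0.1, gamma=0.1,
                    rng=random.Random(1))
print(avg)                                     # approaches the best arm, 0.9
```

With a complete graph this reduces to the experts setting; with self-loops only, it is the standard bandit setting, matching the interpolation described in the abstract.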
|
1106.2464
|
On the Sum Capacity of K-user Cascade Gaussian Z-Interference Channel
|
cs.IT math.IT
|
A $K$-user cascade Gaussian Z-interference channel is a subclass of the
general $K$-user Gaussian interference channel, where each user, except the
first one, experiences interference only from the previous user. Under simple
Han-Kobayashi schemes assuming Gaussian inputs and no time sharing, it is shown
that the maximum sum rate is achieved by each user transmitting either common
or private signals. For K=3, channel conditions under which the achieved sum
rate is either equal to or within 0.5 bits of the sum capacity are identified.
|
1106.2473
|
Resolving Author Name Homonymy to Improve Resolution of Structures in
Co-author Networks
|
cs.DL cs.SI physics.soc-ph
|
We investigate how author name homonymy distorts clustered large-scale
co-author networks, and present a simple, effective, scalable and generalizable
algorithm to ameliorate such distortions. We evaluate the performance of the
algorithm to improve the resolution of mesoscopic network structures. To this
end, we establish the ground truth for a sample of author names that is
statistically representative of different types of nodes in the co-author
network, distinguished by their role for the connectivity of the network. We
finally observe that this distinction of node roles based on the mesoscopic
structure of the network, in combination with a quantification of author name
commonality, suggests a new approach to assess network distortion by homonymy
and to analyze the reduction of distortion in the network after disambiguation,
without requiring ground truth sampling.
|
1106.2489
|
Eliciting Forecasts from Self-interested Experts: Scoring Rules for
Decision Makers
|
cs.GT cs.AI cs.MA cs.SI
|
Scoring rules for eliciting expert predictions of random variables are
usually developed assuming that experts derive utility only from the quality of
their predictions (e.g., score awarded by the rule, or payoff in a prediction
market). We study a more realistic setting in which (a) the principal is a
decision maker and will take a decision based on the expert's prediction; and
(b) the expert has an inherent interest in the decision. For example, in a
corporate decision market, the expert may derive different levels of utility
from the actions taken by her manager. As a consequence the expert will usually
have an incentive to misreport her forecast to influence the choice of the
decision maker if typical scoring rules are used. We develop a general model
for this setting and introduce the concept of a compensation rule. When
combined with the expert's inherent utility for decisions, a compensation rule
induces a net scoring rule that behaves like a normal scoring rule. Assuming
full knowledge of expert utility, we provide a complete characterization of all
(strictly) proper compensation rules. We then analyze the situation where the
expert's utility function is not fully known to the decision maker. We show
bounds on: (a) expert incentive to misreport; (b) the degree to which an expert
will misreport; and (c) decision maker loss in utility due to such uncertainty.
These bounds depend in natural ways on the degree of uncertainty, the local
degree of convexity of net scoring function, and natural properties of the
decision maker's utility function. They also suggest optimization procedures
for the design of compensation rules. Finally, we briefly discuss the use of
compensation rules as market scoring rules for self-interested experts in a
prediction market.
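The paper's compensation rules are not reproduced here; a standard quadratic (Brier) scoring rule suffices to illustrate both properness and how an inherent stake in the decision distorts incentives (the 0.3 stake and the 0.5 action threshold are illustrative assumptions):

```python
def brier(report, outcome):
    """Quadratic (Brier) scoring rule for a binary event."""
    return 1.0 - (outcome - report) ** 2

def expected_score(report, belief):
    """Expert's expected score when the event occurs w.p. `belief`."""
    return belief * brier(report, 1) + (1 - belief) * brier(report, 0)

belief = 0.4
grid = [i / 100 for i in range(101)]

# A proper scoring rule alone: truthful reporting is optimal.
best = max(grid, key=lambda r: expected_score(r, belief))
print(best)   # 0.4

# Add an inherent stake: the expert gains 0.3 if the decision maker acts,
# which (illustratively) happens whenever the report exceeds 0.5.
best_biased = max(grid, key=lambda r: expected_score(r, belief)
                  + (0.3 if r > 0.5 else 0.0))
print(best_biased)   # above 0.5: the stake induces misreporting
```

A compensation rule in the paper's sense would offset that 0.3 stake so that the induced net score is again maximized by truthful reporting.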
|
1106.2503
|
A Large-Scale Community Structure Analysis In Facebook
|
cs.SI cs.CY physics.soc-ph
|
Understanding social dynamics that govern human phenomena, such as
communications and social relationships is a major problem in current
computational social sciences. In particular, given the unprecedented success
of online social networks (OSNs), in this paper we are concerned with the
analysis of aggregation patterns and social dynamics occurring among users of
the largest OSN to date: Facebook. In detail, we discuss the mesoscopic
features of the community structure of this network, considering the
perspective of the communities, which has not yet been studied on such a large
scale. To this purpose, we acquired a sample of this network containing
millions of users and their social relationships; then, we unveiled the
communities representing the aggregation units among which users gather and
interact; finally, we analyzed the statistical features of such a network of
communities, discovering and characterizing specific organization patterns
followed by individuals interacting in online social networks, patterns that
emerge under different sampling techniques and clustering methodologies. This
study provides some clues about the tendency of individuals to establish social
interactions in online social networks that eventually contribute to building a
well-connected social structure, and opens space for further social studies.
|
1106.2522
|
Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with
Common and Private Messages
|
cs.IT math.IT
|
We consider the Gaussian multiple-input multiple-output (MIMO) broadcast
channel with common and private messages. We obtain the degrees of freedom
(DoF) region of this channel. We first show that a parallel Gaussian broadcast
channel with unmatched sub-channels can be constructed from any given Gaussian
MIMO broadcast channel by using the generalized singular value decomposition
(GSVD) and a relaxation on the power constraint for the channel input, in a way
that the capacity region of the constructed parallel channel provides an outer
bound for the capacity region of the original channel. The capacity region of
the parallel Gaussian broadcast channel with unmatched sub-channels is known,
using which we obtain an explicit outer bound for the DoF region of the
Gaussian MIMO broadcast channel. We finally show that this outer bound for the
DoF region can be attained both by the achievable scheme that uses a classical
Gaussian coding for the common message and dirty-paper coding (DPC) for the
private messages, as well as by a variation of the zero-forcing (ZF) scheme.
|
1106.2573
|
Nodal dynamics, not degree distributions, determine the structural
controllability of complex networks
|
physics.soc-ph cs.SI nlin.AO
|
Structural controllability has been proposed as an analytical framework for
making predictions regarding the control of complex networks across myriad
disciplines in the physical and life sciences (Liu et al.,
Nature:473(7346):167-173, 2011). Although the integration of control theory and
network analysis is important, we argue that the application of the structural
controllability framework to most if not all real-world networks leads to the
conclusion that a single control input, applied to the power dominating set
(PDS), is all that is needed for structural controllability. This result is
consistent with the well-known fact that controllability and its dual
observability are generic properties of systems. We argue that more important
than issues of structural controllability are the questions of whether a system
is almost uncontrollable, whether it is almost unobservable, and whether it
possesses almost pole-zero cancellations.
|
1106.2581
|
Distributed Storage Allocations for Optimal Delay
|
cs.IT math.IT
|
We examine the problem of creating an encoded distributed storage
representation of a data object for a network of mobile storage nodes so as to
achieve the optimal recovery delay. A source node creates a single data object
and disseminates an encoded representation of it to other nodes for storage,
subject to a given total storage budget. A data collector node subsequently
attempts to recover the original data object by contacting other nodes and
accessing the data stored in them. By using an appropriate code, successful
recovery is achieved when the total amount of data accessed is at least the
size of the original data object. The goal is to find an allocation of the
given budget over the nodes that optimizes the recovery delay incurred by the
data collector; two objectives are considered: (i) maximization of the
probability of successful recovery by a given deadline, and (ii) minimization
of the expected recovery delay. We solve the problem completely for the second
objective in the case of symmetric allocations (in which all nonempty nodes
store the same amount of data), and show that the optimal symmetric allocation
for the two objectives can be quite different. A simple data dissemination and
storage protocol for a mobile delay-tolerant network is evaluated under various
scenarios via simulations. Our results show that the choice of storage
allocation can have a significant impact on the recovery delay performance, and
that coding may or may not be beneficial depending on the circumstances.
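For the symmetric allocations discussed above, the success probability can be written in closed form under an illustrative access model (each node reached independently with probability p; this model is an assumption for the sketch, not taken from the abstract):

```python
import math

def recovery_prob(m, budget, p):
    """Success probability of the symmetric allocation: a total storage
    budget spread evenly over m nodes, each node accessed independently
    with probability p.  Recovery succeeds when the accessed amount
    reaches the object size, normalized to 1."""
    need = math.ceil(m / budget)   # nodes needed: need * (budget/m) >= 1
    return sum(math.comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(need, m + 1))

# The same budget of 2 units, spread over 3 nodes vs. over 4 nodes (p = 0.5):
print(recovery_prob(3, 2, 0.5))    # 0.5
print(recovery_prob(4, 2, 0.5))    # 0.6875
```

Even this toy case shows the abstract's point: for a fixed budget, the choice of how many nodes to spread over changes the recovery probability substantially.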
|
1106.2587
|
Relative Lempel-Ziv Factorization for Efficient Storage and Retrieval of
Web Collections
|
cs.DS cs.DB cs.IR
|
Compression techniques that support fast random access are a core component
of any information system. Current state-of-the-art methods group documents
into fixed-sized blocks and compress each block with a general-purpose adaptive
algorithm such as GZIP. Random access to a specific document then requires
decompression of a block. The choice of block size is critical: it trades off
compression effectiveness against document retrieval times. In this paper
we present a scalable compression method for large document collections that
allows fast random access. We build a representative sample of the collection
and use it as a dictionary in a LZ77-like encoding of the rest of the
collection, relative to the dictionary. We demonstrate on large collections
that, using a dictionary as small as 0.1% of the collection size, our algorithm
is dramatically faster than previous methods, and in general gives much better
compression.
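A toy greedy relative-LZ parser conveys the idea (illustrative only, not the paper's implementation): each phrase of the text is encoded as a (position, length) reference into the fixed dictionary, with literals as a fallback.

```python
def rlz_parse(dictionary, text):
    """Greedy relative-LZ parse: factor `text` into (pos, length) phrases
    that occur in `dictionary`, falling back to literal characters."""
    factors, i = [], 0
    while i < len(text):
        best_pos, best_len = -1, 0
        # longest prefix of text[i:] occurring anywhere in the dictionary
        for l in range(len(text) - i, 0, -1):
            pos = dictionary.find(text[i:i + l])
            if pos != -1:
                best_pos, best_len = pos, l
                break
        if best_len == 0:
            factors.append(text[i])            # literal: char absent from dict
            i += 1
        else:
            factors.append((best_pos, best_len))
            i += best_len
    return factors

def rlz_decode(dictionary, factors):
    out = []
    for f in factors:
        out.append(f if isinstance(f, str) else dictionary[f[0]:f[0] + f[1]])
    return "".join(out)

d = "the quick brown fox"
t = "the brown fox is quick!"
fs = rlz_parse(d, t)
assert rlz_decode(d, fs) == t
print(len(fs), "factors")
```

Random access is cheap because decoding any document needs only its own factors plus the shared dictionary, with no block-level decompression.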
|
1106.2601
|
Knowledge Dispersion Index for Measuring Intellectual Capital
|
cs.SI q-fin.GN
|
In this paper we propose a novel index to quantify and measure the flow of
information on macro and micro scales. We discuss the implications of this
index for knowledge management fields and also as intellectual capital that can
thus be utilized by entrepreneurs. We explore different function- and
human-oriented metrics that can be used at micro scales to process the flow of
information. We present a table of about 23 metrics, such as change in IT
inventory and percentage of employees with advanced degrees, that can be used
at micro scales to wholly quantify knowledge dispersion as intellectual
capital. At macro scales we split the economy in an industrial and consumer
sector where the flow of information in each determines how fast an economy is
going to grow and how overall an economy will perform given the aggregate
demand. Lastly, we propose a model for knowledge dispersion based on graph
theory and show how corrections in the flow become self-evident. Through the
principles of flow conservation and capacity constraints, we also speculate how
this flow might seek some equilibrium and exhibit self-correction codes. This
proposed model allows us to account for perturbations in form of local noise,
evolution of networks, provide robustness against local damage from lower
nodes, and help determine the underlying classification into network
super-families.
|
1106.2610
|
Pathlength scaling in graphs with incomplete navigational information
|
physics.soc-ph cs.SI
|
The graph-navigability problem concerns how one can find paths as short as
possible between a pair of vertices, given an incomplete picture of a graph. We
study the navigability of graphs where the vertices are tagged by a number
(between 1 and the total number of vertices) in a way to aid navigation. This
information is too little to ensure error-free navigation but enough, as we will
show, for the agents to do significantly better than a random walk. In our
setup, given a graph, we first assign information to the vertices that agents
can utilize for their navigation. To evaluate the navigation, we calculate the
average distance traveled over random pairs of source and target and different
graph realizations. We show that this type of embedding can be made quite
efficiently; the more information is embedded, the more efficient it gets. We
also investigate the embedded navigational information in a standard graph
layout algorithm and find that although this information does not make
algorithms as efficient as the above-mentioned schemes, it is significantly
helpful.
|
1106.2647
|
From Causal Models To Counterfactual Structures
|
cs.AI
|
Galles and Pearl claimed that "for recursive models, the causal model
framework does not add any restrictions to counterfactuals, beyond those
imposed by Lewis's [possible-worlds] framework." This claim is examined
carefully, with the goal of clarifying the exact relationship between causal
models and Lewis's framework. Recursive models are shown to correspond
precisely to a subclass of (possible-world) counterfactual structures. On the
other hand, a slight generalization of recursive models, models where all
equations have unique solutions, is shown to be incomparable in expressive
power to counterfactual structures, despite the fact that the Galles and Pearl
arguments should apply to them as well. The problem with the Galles and Pearl
argument is identified: an axiom that they viewed as irrelevant, because it
involved disjunction (which was not in their language), is not irrelevant at
all.
|
1106.2652
|
Actual causation and the art of modeling
|
cs.AI
|
We look more carefully at the modeling of causality using structural
equations. It is clear that the structural equations can have a major impact on
the conclusions we draw about causality. In particular, the choice of variables
and their values can also have a significant impact on causality. These choices
are, to some extent, subjective. We consider what counts as an appropriate
choice. More generally, we consider what makes a model an appropriate model,
especially if we want to take defaults into account, as was argued is necessary
in recent work.
|
1106.2662
|
Learning Equilibria with Partial Information in Decentralized Wireless
Networks
|
cs.LG cs.AI cs.GT cs.MA
|
In this article, a survey of several important equilibrium concepts for
decentralized networks is presented. The term decentralized is used here to
refer to scenarios where decisions (e.g., choosing a power allocation policy)
are taken autonomously by devices interacting with each other (e.g., through
mutual interference). The iterative long-term interaction is characterized by
stable points of the wireless network called equilibria. The interest in these
equilibria stems from the relevance of network stability and the fact that they
can be achieved by letting radio devices repeatedly interact over time. To
achieve these equilibria, several learning techniques, namely, the best
response dynamics, fictitious play, smoothed fictitious play, reinforcement
learning algorithms, and regret matching, are discussed in terms of information
requirements and convergence properties. Most of the notions introduced here,
for both equilibria and learning schemes, are illustrated by a simple case
study, namely, an interference channel with two transmitter-receiver pairs.
|
1106.2692
|
Generating Schemata of Resolution Proofs
|
cs.AI
|
Two distinct algorithms are presented to extract (schemata of) resolution
proofs from closed tableaux for propositional schemata. The first one handles
the most efficient version of the tableau calculus but generates very complex
derivations (denoted by rather elaborate rewrite systems). The second one has
the advantage that much simpler systems can be obtained, however the considered
proof procedure is less efficient.
|
1106.2695
|
Robust Mobile Object Tracking Based on Multiple Feature Similarity and
Trajectory Filtering
|
cs.CV
|
This paper presents a new algorithm to track mobile objects in different
scene conditions. The main idea of the proposed tracker includes estimation,
multi-features similarity measures and trajectory filtering. A feature set
(distance, area, shape ratio, color histogram) is defined for each tracked
object to search for the best matching object. Its best matching object and its
state estimated by the Kalman filter are combined to update position and size
of the tracked object. However, the mobile object trajectories are usually
fragmented because of occlusions and misdetections. Therefore, we also propose
a trajectory filter, named the global tracker, which aims at removing the noisy
trajectories and fusing the fragmented trajectories belonging to the same mobile
object. The method has been tested with five videos of different scene
conditions. Three of them are provided by the ETISEO benchmarking project
(http://www-sop.inria.fr/orion/ETISEO) in which the proposed tracker
performance has been compared with seven other tracking algorithms. The
advantages of our approach over the existing state of the art ones are: (i) no
prior knowledge information is required (e.g. no calibration and no contextual
models are needed), (ii) the tracker is more reliable by combining multiple
feature similarities, (iii) the tracker can perform in different scene
conditions: single/several mobile objects, weak/strong illumination,
indoor/outdoor scenes, (iv) a trajectory filtering is defined and applied to
improve the tracker performance, (v) the tracker performance outperforms many
algorithms of the state of the art.
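The paper's Kalman filter tracks position and size of each object; as a minimal sketch of the estimation step it relies on, here is a 1-D random-walk Kalman update (the noise parameters Q and R and the measurement sequence are illustrative, not the paper's values):

```python
def kalman_update(x, P, z, Q=1.0, R=4.0):
    """One predict+update cycle of a 1-D Kalman filter (random-walk model).
    x: state estimate, P: its variance, z: new measurement."""
    # Predict: state unchanged, uncertainty grows by process noise Q.
    P = P + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

x, P = 0.0, 10.0                 # vague prior
for z in [1.0, 1.2, 0.9, 1.1]:   # noisy measurements of a ~1.0 position
    x, P = kalman_update(x, P, z)
print(round(x, 3), round(P, 3))  # estimate converges toward 1, variance shrinks
```

In the tracker, this filtered estimate is combined with the best feature-matching candidate to update the tracked object's position and size.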
|
1106.2696
|
Who clicks there!: Anonymizing the photographer in a camera saturated
society
|
cs.CR cs.CV
|
In recent years, social media has played an increasingly important role in
reporting world events. The publication of crowd-sourced photographs and videos
in near real-time is one of the reasons behind the high impact. However, the
use of a camera can draw the photographer into a situation of conflict.
Examples include the use of cameras by regulators collecting evidence of Mafia
operations; citizens collecting evidence of corruption at a public service
outlet; and political dissidents protesting at public rallies. In all these
cases, the published images contain fairly unambiguous clues about the location
of the photographer (scene viewpoint information). In the presence of adversary
operated cameras, it can be easy to identify the photographer by also combining
leaked information from the photographs themselves. We call this the camera
location detection attack. We propose and review defense techniques against
such attacks. Defenses such as image obfuscation techniques do not protect
camera-location information; current anonymous publication technologies do not
help either. However, the use of view synthesis algorithms could be a promising
step in the direction of providing probabilistic privacy guarantees.
|
1106.2729
|
Nested Graph Words for Object Recognition
|
cs.MM cs.CV
|
In this paper, we propose a new, scalable approach for the task of object
based image search or object recognition. Despite the very large literature
existing on the scalability issues in CBIR in the sense of retrieval
approaches, the scalability of media and scalability of features remain an
issue. In our work we tackle the problem of scalability and structural
organization of features. The proposed features are nested local graphs built
upon sets of SURF feature points with Delaunay triangulation. A
Bag-of-Visual-Words (BoVW) framework is applied on these graphs, giving birth
to a Bag-of-Graph-Words representation. The nested nature of the descriptors
consists in scaling from trivial Delaunay graphs - isolated feature points - by
increasing the number of nodes layer by layer up to graphs with maximal number
of nodes. For each layer of graphs its proper visual dictionary is built. The
experiments conducted on the SIVAL data set reveal that the graph features at
different layers exhibit complementary performances on the same content. The
nested approach, the combination of all existing layers, yields significant
improvement of the object recognition performance compared to single level
approaches.
|
1106.2773
|
On Optimal Harvesting in Stochastic Environments: Optimal Policies in a
Relaxed Model
|
math.OC cs.SY q-bio.PE
|
This paper examines the objective of optimally harvesting a single species in
a stochastic environment. This problem has previously been analyzed in Alvarez
(2000) using dynamic programming techniques and, due to the natural payoff
structure of the price rate function (the price decreases as the population
increases), no optimal harvesting policy exists. This paper establishes a
relaxed formulation of the harvesting model in such a manner that existence of
an optimal relaxed harvesting policy can not only be proven but also
identified. The analysis embeds the harvesting problem in an
infinite-dimensional linear program over a space of occupation measures in
which the initial position enters as a parameter and then analyzes an auxiliary
problem having fewer constraints. In this manner upper bounds are determined
for the optimal value (with the given initial position); these bounds depend on
the relation of the initial population size to a specific target size. The more
interesting case occurs when the initial population exceeds this target size; a
new argument is required to obtain a sharp upper bound. Though the initial
population size only enters as a parameter, the value is determined in a
closed-form functional expression of this parameter.
|
1106.2774
|
Orthogonal Matching Pursuit with Replacement
|
cs.IT math.IT stat.ML
|
In this paper, we consider the problem of compressed sensing where the goal
is to recover almost all the sparse vectors using a small number of fixed
linear measurements. For this problem, we propose a novel partial
hard-thresholding operator that leads to a general family of iterative
algorithms. While one extreme of the family yields well known hard thresholding
algorithms like ITI (Iterative Thresholding with Inversion) and HTP (Hard
Thresholding Pursuit), the other end of the spectrum leads to a novel algorithm
that we call Orthogonal Matching Pursuit with Replacement (OMPR). OMPR, like
the classic greedy algorithm OMP, adds exactly one coordinate to the support at
each iteration, based on the correlation with the current residual. However,
unlike OMP, OMPR also removes one coordinate from the support. This simple
change allows us to prove that OMPR has the best known guarantees for sparse
recovery in terms of the Restricted Isometry Property (a condition on the
measurement matrix). In contrast, OMP is known to have very weak performance
guarantees under RIP. Given its simple structure, we are able to extend OMPR
using locality sensitive hashing to get OMPR-Hash, the first provably
sub-linear (in dimensionality) algorithm for sparse recovery. Our proof
techniques are novel and flexible enough to also permit the tightest known
analysis of popular iterative algorithms such as CoSaMP and Subspace Pursuit.
We provide experimental results on large problems providing recovery for
vectors of size up to million dimensions. We demonstrate that for large-scale
problems our proposed methods are more robust and faster than existing methods.
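The add-then-remove iteration described above can be sketched with numpy; details such as the initialization and stopping rule below are guesses for illustration, not the paper's exact algorithm:

```python
import numpy as np

def ompr(A, y, k, iters=20):
    """Sketch of Orthogonal Matching Pursuit with Replacement: keep a
    support of size k; each iteration adds the coordinate most correlated
    with the residual, then drops the one with the weakest coefficient."""
    n = A.shape[1]
    support = list(np.argsort(-np.abs(A.T @ y))[:k])  # illustrative init
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        r = y - A @ x
        if np.linalg.norm(r) < 1e-10:                 # exact fit found
            break
        j = int(np.argmax(np.abs(A.T @ r)))           # coordinate to add
        if j not in support:
            support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            support.remove(support[int(np.argmin(np.abs(coef)))])  # drop
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)       # random sensing matrix
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ompr(A, y, k=3)
print(np.linalg.norm(x_hat - x_true))                 # small if recovery succeeds
```

Unlike plain OMP, the support size stays fixed at k, so an early wrong pick can be replaced in a later iteration.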
|
1106.2781
|
Optimal Dividend Payments for the Piecewise-Deterministic Poisson Risk
Model
|
math.OC cs.SY math.PR q-fin.RM
|
This paper considers the optimal dividend payment problem in
piecewise-deterministic compound Poisson risk models. The objective is to
maximize the expected discounted dividend payout up to the time of ruin. We
provide a comparative study in this general framework of both restricted and
unrestricted payment schemes, which were only previously treated separately in
certain special cases of risk models in the literature. In the case of
restricted payment scheme, the value function is shown to be a classical
solution of the corresponding HJB equation, which in turn leads to an optimal
restricted payment policy known as the threshold strategy. In the case of
unrestricted payment scheme, by solving the associated integro-differential
quasi-variational inequality, we obtain the value function as well as an
optimal unrestricted dividend payment scheme known as the barrier strategy.
When claim sizes are exponentially distributed, we provide easily verifiable
conditions under which the threshold and barrier strategies are optimal
restricted and unrestricted dividend payment policies, respectively. The main
results are illustrated with several examples, including a new example
concerning regressive growth rates.
|
1106.2788
|
Co-evolution of Selection and Influence in Social Networks
|
cs.SI physics.soc-ph stat.ML
|
Many networks are complex dynamical systems, where both attributes of nodes
and topology of the network (link structure) can change with time. We propose a
model of co-evolving networks where both node attributes and network
structure evolve under mutual influence. Specifically, we consider a mixed
membership stochastic blockmodel, where the probability of observing a link
between two nodes depends on their current membership vectors, while those
membership vectors themselves evolve in the presence of a link between the
nodes. Thus, the network is shaped by the interaction of stochastic processes
describing the nodes, while the processes themselves are influenced by the
changing network structure. We derive an efficient variational inference
procedure for our model, and validate the model on both synthetic and
real-world data.
|
1106.2792
|
Algebraic codes for Slepian-Wolf code design
|
cs.IT math.IT
|
Practical constructions of lossless distributed source codes (for the
Slepian-Wolf problem) have been the subject of much investigation in the past
decade. In particular, near-capacity achieving code designs based on LDPC codes
have been presented for the case of two binary sources, with a binary-symmetric
correlation. However, constructing practical codes for the case of non-binary
sources with arbitrary correlation remains by and large open. From a practical
perspective it is also interesting to consider coding schemes whose performance
remains robust to uncertainties in the joint distribution of the sources.
In this work we propose the usage of Reed-Solomon (RS) codes for the
asymmetric version of this problem. We show that algebraic soft-decision
decoding of RS codes can be used effectively under certain correlation
structures. In addition, RS codes offer natural rate adaptivity and performance
that remains constant across a family of correlation structures with the same
conditional entropy. The performance of RS codes is compared with dedicated and
rate-adaptive multistage LDPC codes (Varodayan et al. '06), where each LDPC
code is used to compress the individual bit planes. Our simulations show that
in the classical Slepian-Wolf scenario, RS codes outperform both dedicated and
rate-adaptive LDPC codes under $q$-ary symmetric correlation, and are better
than rate-adaptive LDPC codes in the case of sparse correlation models, where
the conditional distribution of the sources has only a few dominant entries. In
a feedback scenario, the performance of RS codes is comparable with both
designs of LDPC codes. Our simulations also demonstrate that the performance of
RS codes in the presence of inaccuracies in the joint distribution of the
sources is much better as compared to multistage LDPC codes.
|
1106.2794
|
Power Management during Scan Based Sequential Circuit Testing
|
cs.CE
|
This paper shows that not every scan cell contributes equally to the power
consumption during scan based test. The transitions at some scan cells cause
more toggles at the internal signal lines of a circuit than the transitions at
other scan cells. Hence the transitions at these scan cells have a larger
impact on the power consumption during test application. These scan cells are
called power-sensitive scan cells. A Verilog-based approach is proposed to
identify a set of power-sensitive scan cells. Additional hardware is added to
freeze the outputs of power-sensitive scan cells during scan shifting in order
to reduce the shift power consumption. When multiple scan chains are
incorporated along with freezing the power-sensitive scan cells, the overall
power during testing can be reduced to a larger extent.
|
1106.2819
|
Optimizing Constellations for Single-Subcarrier Intensity-Modulated
Optical Systems
|
cs.IT math.IT
|
We optimize modulation formats for the additive white Gaussian noise channel
with nonnegative input, also known as the intensity-modulated direct-detection
channel, with and without confining them to a lattice structure. Our
optimization criteria are the average electrical, average optical, and peak
power. The nonnegative constraint on the input to the channel is translated
into a conical constraint in signal space, and modulation formats are designed
by sphere packing inside this cone. Some dense packings are found, which yield
more power-efficient modulation formats than previously known. For example, at
a spectral efficiency of 1.5 bit/s/Hz, the modulation format optimized for
average electrical power has a 2.55 dB average electrical power gain over the
best known format to achieve a symbol error rate of 10^-6. The corresponding
gains for formats optimized for average and peak optical power are 1.35 and
1.72 dB, respectively. Using modulation formats optimized for peak power in
average-power limited systems results in a smaller power penalty than when
using formats optimized for average power in peak-power limited systems. We
also evaluate the modulation formats in terms of their mutual information to
predict their performance in the presence of capacity-achieving
error-correcting codes, and finally show numerically and analytically that the
optimal modulation formats for reliable transmission in the wideband regime
have only one nonzero point.
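The three optimization criteria can be made concrete with a small sketch. The snippet below (our illustration, not the paper's optimizer) computes the average electrical power E[x^2], the average optical power E[x], and the peak power of two simple one-dimensional intensity constellations with equiprobable symbols; the OOK and 4-PAM examples are our own and not taken from the paper.

```python
import numpy as np

def powers(points):
    """Power measures of a nonnegative (intensity) constellation with
    equiprobable symbols: average electrical E[x^2], average optical E[x],
    and peak max(x)."""
    x = np.asarray(points, dtype=float)
    assert (x >= 0).all()  # IM/DD signals must be nonnegative
    return {"electrical": float((x ** 2).mean()),
            "optical": float(x.mean()),
            "peak": float(x.max())}

ook = [0.0, 2.0]              # on-off keying, 1 bit/symbol
pam4 = [0.0, 1.0, 2.0, 3.0]   # 4-PAM, 2 bits/symbol
print(powers(ook))    # {'electrical': 2.0, 'optical': 1.0, 'peak': 2.0}
print(powers(pam4))   # {'electrical': 3.5, 'optical': 1.5, 'peak': 3.0}
```

The nonnegativity check mirrors the conical constraint discussed in the abstract; in the paper the constellations live in a multi-dimensional signal space, whereas this sketch is one-dimensional for brevity.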
|
1106.2844
|
Unleashing the power of Schrijver's permanental inequality with the help
of the Bethe Approximation
|
math.CO cs.CC cs.IT math-ph math.IT math.MP
|
Let $A \in \Omega_n$ be a doubly stochastic $n \times n$ matrix. Alexander
Schrijver proved in 1998 the following remarkable inequality:
$per(\widetilde{A}) \geq \prod_{1 \leq i,j \leq n} (1 - A(i,j))$, where
$\widetilde{A}(i,j) := A(i,j)(1 - A(i,j))$ for $1 \leq i,j \leq n$.
We use Schrijver's inequality above to prove the following lower bound:
$\frac{per(A)}{F(A)} \geq 1$, where
$F(A) := \prod_{1 \leq i,j \leq n} (1 - A(i,j))^{1 - A(i,j)}$.
We use this new lower bound to prove S. Friedland's Asymptotic Lower Matching
Conjecture (LAMC) on the monomer-dimer problem.
We use some ideas of our proof of the LAMC to disprove the positive
correlation conjecture of [Lu, Mohr, Szekely].
We present explicit doubly stochastic $n \times n$ matrices $A$ with the
ratio $\frac{per(A)}{F(A)} = (\sqrt{2})^{n}$; we conjecture that
$\max_{A \in \Omega_n} \frac{per(A)}{F(A)} \approx (\sqrt{2})^{n}$ and give
some examples supporting the conjecture.
If true, the conjecture (and the others stated in the paper) would imply a
deterministic polynomial-time algorithm to approximate the permanent of $n
\times n$ nonnegative matrices within the relative factor $(\sqrt{2})^{n}$.
The best currently known such factor is $e^n$.
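The lower bound is easy to check numerically for small matrices. The sketch below (our illustration) computes the permanent by brute force and verifies $per(A) \geq F(A)$ for the uniform doubly stochastic matrix with every entry $1/n$.

```python
import itertools
import math

def permanent(A):
    """Permanent of a square matrix, directly from the definition
    (exponential time, fine for small n)."""
    n = len(A)
    return sum(
        math.prod(A[i][p[i]] for i in range(n))
        for p in itertools.permutations(range(n))
    )

def F(A):
    """F(A) = product over i,j of (1 - A(i,j))^(1 - A(i,j))."""
    return math.prod((1 - a) ** (1 - a) for row in A for a in row)

# Uniform doubly stochastic matrix: every entry 1/n.
n = 3
A = [[1.0 / n] * n for _ in range(n)]
assert permanent(A) >= F(A)   # the lower bound per(A)/F(A) >= 1
print(permanent(A) / F(A))
```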
|
1106.2882
|
Learning, investments and derivatives
|
q-fin.GN cs.LG
|
The recent crisis and the following flight to simplicity put most derivative
businesses around the world under considerable pressure. We argue that the
traditional modeling techniques must be extended to include product design. We
propose a quantitative framework for creating products which meet the challenge
of being optimal from the investors point of view while remaining relatively
simple and transparent.
|
1106.2886
|
The Finite Field Multi-Way Relay Channel with Correlated Sources: The
Three-User Case
|
cs.IT math.IT
|
The three-user finite field multi-way relay channel with correlated sources
is considered. The three users generate possibly correlated messages, and each
user is to transmit its message to the two other users reliably in the Shannon
sense. As there is no direct link among the users, communication is carried out
via a relay, and the links from the users to the relay and those from the relay
to the users are finite field adder channels with additive noise of arbitrary
distribution. The problem is to determine the set of all possible achievable
rates, defined as channel uses per source symbol for reliable communication.
For two classes of source/channel combinations, the solution is obtained using
Slepian-Wolf source coding combined with functional-decode-forward channel
coding.
|
1106.2888
|
On Achievable Rate Regions of the Asymmetric AWGN Two-Way Relay Channel
|
cs.IT math.IT
|
This paper investigates the additive white Gaussian noise two-way relay
channel, where two users exchange messages through a relay. Asymmetrical
channels are considered where the users can transmit data at different rates
and at different power levels. We modify and improve existing coding schemes to
obtain three new achievable rate regions. Comparing four downlink-optimal
coding schemes, we show that the scheme that gives the best sum-rate
performance is (i) complete-decode-forward, when both users transmit at low
signal-to-noise ratio (SNR); (ii) functional-decode-forward with nested lattice
codes, when both users transmit at high SNR; (iii) functional-decode-forward
with rate splitting and time-division multiplexing, when one user transmits at
low SNR and another user at medium--high SNR.
|
1106.2946
|
A Unified Relevance Retrieval Model by Eliteness Hypothesis
|
cs.IR
|
In this paper, an Eliteness Hypothesis for information retrieval is proposed,
where we define two generative processes to create information items and
queries. By assuming the deterministic relationships between the eliteness of
terms and relevance, we obtain a new theoretical retrieval framework. The
resulting ranking function is a unified one as it is capable of using available
relevance information on both the document and the query, which is otherwise
unachievable by existing retrieval models. Our preliminary experiment on a
simple ranking function has demonstrated the potential of the approach.
|
1106.2994
|
Widely Linear vs. Conventional Subspace-Based Estimation of SIMO
Flat-Fading Channels: Mean-Squared Error Analysis
|
cs.IT math.IT stat.OT
|
We analyze the mean-squared error (MSE) performance of widely linear (WL) and
conventional subspace-based channel estimation for single-input multiple-output
(SIMO) flat-fading channels employing binary phase-shift-keying (BPSK)
modulation when the covariance matrix is estimated using a finite number of
samples. The conventional estimator suffers from a phase ambiguity that reduces
to a sign ambiguity for the WL estimator. We derive closed-form expressions for
the MSE of the two estimators under four different ambiguity resolution
scenarios. The first scenario is optimal resolution, which minimizes the
Euclidean distance between the channel estimate and the actual channel. The
second scenario assumes that a randomly chosen coefficient of the actual
channel is known and the third assumes that the one with the largest magnitude
is known. The fourth scenario is the more realistic case where pilot symbols
are used to resolve the ambiguities. Our work demonstrates that there is a
strong relationship between the accuracy of ambiguity resolution and the
relative performance of WL and conventional subspace-based estimators, and
shows that the less information available about the actual channel for
ambiguity resolution, or the lower the accuracy of this information, the higher
the performance gap in favor of the WL estimator.
|
1106.3077
|
Chameleons in imagined conversations: A new approach to understanding
coordination of linguistic style in dialogs
|
cs.CL physics.soc-ph
|
Conversational participants tend to immediately and unconsciously adapt to
each other's language styles: a speaker will even adjust the number of articles
and other function words in their next utterance in response to the number in
their partner's immediately preceding utterance. This striking level of
coordination is thought to have arisen as a way to achieve social goals, such
as gaining approval or emphasizing difference in status. But has the adaptation
mechanism become so deeply embedded in the language-generation process as to
become a reflex? We argue that fictional dialogs offer a way to study this
question, since authors create the conversations but don't receive the social
benefits (rather, the imagined characters do). Indeed, we find significant
coordination across many families of function words in our large movie-script
corpus. We also report suggestive preliminary findings on the effects of gender
and other features; e.g., surprisingly, for articles, on average, characters
adapt more to females than to males.
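As a hedged illustration (not the paper's exact corpus pipeline), coordination on a word family can be quantified as the increase in the probability that a reply contains the family given that the preceding utterance did. The toy exchanges below are invented for the example.

```python
ARTICLES = {"a", "an", "the"}

def uses(markers, utterance):
    """True if the utterance contains any word from the marker family."""
    return any(w in markers for w in utterance.lower().split())

# Invented (prompt, reply) pairs standing in for adjacent movie lines.
exchanges = [
    ("the cat sat", "the dog ran"),
    ("a storm is coming", "the shutters are closed"),
    ("run now", "ok"),
    ("an idea", "a good one"),
    ("hello there", "hi"),
    ("the plan works", "indeed the plan works"),
]

base = sum(uses(ARTICLES, r) for _, r in exchanges) / len(exchanges)
triggered = [r for p, r in exchanges if uses(ARTICLES, p)]
cond = sum(uses(ARTICLES, r) for r in triggered) / len(triggered)

coordination = cond - base   # > 0 means replies mirror the prompt's style
print(f"P(m in reply | m in prompt) = {cond:.2f}, baseline = {base:.2f}")
```

On real data this would be aggregated per speaker pair and per marker family rather than over a pooled list of exchanges.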
|
1106.3094
|
Simple rules govern finite-size effects in scale-free networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We give an intuitive though general explanation of the finite-size effect in
scale-free networks in terms of the degree distribution of the starting
network. This result clarifies the relevance of the starting network in the
final degree distribution. We use two different approaches: the deterministic
mean-field approximation used by Barab\'asi and Albert (but taking into account
the nodes of the starting network), and the probability distribution of the
degree of each node, which considers the stochastic process. Numerical
simulations show that the accuracy of the predictions of the mean-field
approximation depend on the contribution of the dispersion in the final
distribution. The results in terms of the probability distribution of the
degree of each node are very accurate when compared to numerical simulations.
The analysis of the standard deviation of the degree distribution allows us to
assess the influence of the starting core when fitting the model to real data.
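To make the role of the starting network tangible, here is a minimal preferential-attachment simulation (our sketch, not the paper's code): the network grows from a given core, and the core nodes' final degrees can be inspected for different choices of core.

```python
import random

def grow_ba(core_edges, n_final, m=2, seed=0):
    """Grow a Barabasi-Albert-style network from a given starting core.

    Preferential attachment uses the standard trick of sampling uniformly
    from a list that holds each node once per edge endpoint."""
    rnd = random.Random(seed)
    targets = [v for e in core_edges for v in e]
    nodes = set(targets)
    next_id = max(nodes) + 1
    while len(nodes) < n_final:
        new = next_id
        next_id += 1
        chosen = set()
        while len(chosen) < m:        # m distinct, degree-biased targets
            chosen.add(rnd.choice(targets))
        for t in chosen:
            targets.extend([new, t])  # record both endpoints of the new edge
        nodes.add(new)
    return targets  # degree of v is targets.count(v)

# A triangle as the starting core; grow to 300 nodes.
tl = grow_ba([(0, 1), (1, 2), (0, 2)], 300)
core_degrees = [tl.count(v) for v in range(3)]
print("final degrees of the starting core:", core_degrees)
```

Swapping the triangle for, say, a star on the same nodes changes the early attachment probabilities and hence the core nodes' final degrees, which is the finite-size effect the abstract explains.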
|
1106.3134
|
Communicate only when necessary: Cooperative tasking for multi-agent
systems
|
cs.MA cs.SY math.OC
|
New advances in large-scale distributed systems have offered remarkably
complex functionalities through the parallelism of simple and rudimentary
components. The key issue in cooperative control of multi-agent systems is the
synthesis of local control and interaction rules among the agents such that the
entire controlled system achieves a desired global behavior. For this purpose,
three fundamental problems have to be addressed: (1) task decomposition for
top-down design, such that the fulfillment of local tasks guarantees the
satisfaction of the global task, by the team; (2) fault-tolerant top-down
design, such that the global task remains decomposable and achievable, in spite
of some failures, and (3) design of interactions among agents to make an
undecomposable task decomposable and achievable in a top-down framework. The
first two problems have been addressed in our previous works, by identifying
necessary and sufficient conditions for task automaton decomposition, and
fault-tolerant task decomposability. This paper deals with the third problem
and proposes a procedure to redistribute the events among agents in order to
enforce decomposability of an undecomposable task automaton. The
decomposability conditions are used to identify the root causes of
undecomposability which are found to be due to over-communications that have to
be deleted, while respecting the fault-tolerant decomposability conditions; or
because of the lack of communications that require new sharing of events, while
considering new violations of decomposability conditions. This result provides
a sufficient condition to make any undecomposable deterministic task automaton
decomposable in order to facilitate cooperative tasking. Illustrative examples
are presented to show the concept of task automaton decomposabilization.
|
1106.3153
|
Algorithmic analogies to the Kamae-Weiss theorem on normal numbers
|
cs.IT math.IT
|
In this paper we study subsequences of random numbers. In Kamae (1973),
selection functions that depend only on coordinates are studied, and their
necessary and sufficient condition for the selected sequences to be normal
numbers is given. In van Lambalgen (1987), an algorithmic analogy to the
theorem is conjectured in terms of algorithmic randomness and Kolmogorov
complexity. In this paper, we show different algorithmic analogies to the
theorem.
|
1106.3184
|
The restricted isometry property for time-frequency structured random
matrices
|
cs.IT math.CA math.IT math.PR
|
We establish the restricted isometry property for finite dimensional Gabor
systems, that is, for families of time--frequency shifts of a randomly chosen
window function. We show that the $s$-th order restricted isometry constant of
the associated $n \times n^2$ Gabor synthesis matrix is small provided $s \leq
c \, n^{2/3} / \log^2 n$. This improves on previous estimates that exhibit
quadratic scaling of $n$ in $s$. Our proof develops bounds for a corresponding
chaos process.
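For concreteness, the Gabor synthesis matrix can be assembled as follows (a sketch under our own conventions; the random window is mimicked here by normalizing a complex Gaussian vector to the unit sphere):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

# Random window, normalized to the complex unit sphere.
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g /= np.linalg.norm(g)

# Columns are all n^2 time-frequency shifts of g:
# (M_l T_k g)[j] = exp(2*pi*i*l*j/n) * g[(j - k) mod n].
j = np.arange(n)
cols = []
for k in range(n):
    shifted = np.roll(g, k)
    for l in range(n):
        cols.append(np.exp(2j * np.pi * l * j / n) * shifted)
Phi = np.stack(cols, axis=1)   # the n x n^2 Gabor synthesis matrix

assert Phi.shape == (n, n * n)
# Time and frequency shifts are unitary, so every column has unit norm.
assert np.allclose(np.linalg.norm(Phi, axis=0), 1.0)
```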
|
1106.3273
|
A Quasi-Sure Approach to the Control of Non-Markovian Stochastic
Differential Equations
|
math.PR cs.SY math.OC q-fin.RM
|
We study stochastic differential equations (SDEs) whose drift and diffusion
coefficients are path-dependent and controlled. We construct a value process on
the canonical path space, considered simultaneously under a family of singular
measures, rather than the usual family of processes indexed by the controls.
This value process is characterized by a second order backward SDE, which can
be seen as a non-Markovian analogue of the Hamilton-Jacobi-Bellman partial
differential equation. Moreover, our value process yields a generalization of
the G-expectation to the context of SDEs.
|
1106.3276
|
Sufficient Conditions for Low-rank Matrix Recovery, Translated from
Sparse Signal Recovery
|
cs.IT math.IT math.OC
|
The low-rank matrix recovery (LMR) is a rank minimization problem subject to
linear equality constraints, and it arises in many fields such as signal and
image processing, statistics, computer vision, system identification and
control. This class of optimization problems is NP-hard and a popular
approach replaces the rank function with the nuclear norm of the matrix
variable. In this paper, we extend the concept of $s$-goodness for a sensing
matrix in sparse signal recovery (proposed by Juditsky and Nemirovski [Math
Program, 2011]) to linear transformations in LMR. Then, we give
characterizations of $s$-goodness in the context of LMR. Using the two
characteristic $s$-goodness constants, ${\gamma}_s$ and $\hat{\gamma}_s$, of a
linear transformation, not only do we derive necessary and sufficient
conditions for a linear transformation to be $s$-good, but also provide
sufficient conditions for exact and stable $s$-rank matrix recovery via the
nuclear norm minimization under mild assumptions. Moreover, we give computable
upper bounds for one of the $s$-goodness characteristics which leads to
verifiable sufficient conditions for exact low-rank matrix recovery.
|
1106.3279
|
Optimal Portfolio Liquidation with Limit Orders
|
q-fin.TR cs.SY math.OC
|
This paper addresses the optimal scheduling of the liquidation of a portfolio
using a new angle. Instead of focusing only on the scheduling aspect like
Almgren and Chriss, or only on the liquidity-consuming orders like Obizhaeva
and Wang, we link the optimal trade-schedule to the price of the limit orders
that have to be sent to the limit order book to optimally liquidate a
portfolio. Most practitioners address these two issues separately: they compute
an optimal trading curve and they then send orders to the markets to try to
follow it. The results obtained here simultaneously solve the two problems. As
in a previous paper that solved the "intra-day market making problem", the
interactions of limit orders with the market are modeled via a Poisson process
pegged to a diffusive "fair price" and a Hamilton-Jacobi-Bellman equation is
used to solve the problem involving both non-execution risk and price risk.
Backtests are carried out to exemplify the use of our results, both on long
periods of time (for the entire liquidation process) and on slices of 5 minutes
(to follow a given trading curve).
|
1106.3286
|
ReProCS: A Missing Link between Recursive Robust PCA and Recursive
Sparse Recovery in Large but Correlated Noise
|
cs.IT math.IT
|
This work studies the recursive robust principal components' analysis (PCA)
problem. Here, "robust" refers to robustness to both independent and correlated
sparse outliers, although we focus on the latter. A key application where this
problem occurs is in video surveillance where the goal is to separate a slowly
changing background from moving foreground objects on-the-fly. The background
sequence is well modeled as lying in a low dimensional subspace, that can
gradually change over time, while the moving foreground objects constitute the
correlated sparse outliers. In this and many other applications, the foreground
is an outlier for PCA but is actually the "signal of interest" for the
application, whereas the background is the corruption or noise. Thus our
problem can also be interpreted as one of recursively recovering a time
sequence of sparse signals in the presence of large but spatially correlated
noise.
This work has two key contributions. First, we provide a new way of looking
at this problem and show how a key part of our solution strategy involves
solving a noisy compressive sensing (CS) problem. Second, we show how we can
utilize the correlation of the outliers to our advantage in order to even deal
with very large support sized outliers. The main idea is as follows. The
correlation model applied to the previous support estimate helps predict the
current support. This prediction serves as "partial support knowledge" for
solving the modified-CS problem instead of CS. The support estimate of the
modified-CS reconstruction is, in turn, used to update the correlation model
parameters using a Kalman filter (or any adaptive filter). We call the
resulting approach "support-predicted modified-CS".
|
1106.3325
|
Distributed Transactions for Google App Engine: Optimistic Distributed
Transactions built upon Local Multi-Version Concurrency Control
|
cs.DC cs.DB cs.DS cs.SE
|
Massively scalable web applications encounter a fundamental tension in
computing between "performance" and "correctness": performance is often
addressed by using a large and therefore distributed machine where programs are
multi-threaded and interruptible, whereas correctness requires data invariants
to be maintained with certainty. A solution to this problem is "transactions"
[Gray-Reuter].
Some distributed systems such as Google App Engine
[http://code.google.com/appengine/docs/] provide transaction semantics but only
for functions that access one of a set of predefined local regions of the
database: a "Local Transaction" (LT)
[http://code.google.com/appengine/docs/python/datastore/transactions.html]. To
address this problem we give a "Distributed Transaction" (DT) algorithm which
provides transaction semantics for functions that operate on any set of objects
distributed across the machine. Our algorithm is in an "optimistic"
[http://en.wikipedia.org/wiki/Optimistic_concurrency_control] style. We assume
Sequential [Time-]Consistency
[http://en.wikipedia.org/wiki/Sequential_consistency] for Local Transactions.
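The optimistic style can be illustrated with a toy single-machine sketch (ours, not the App Engine algorithm): each transaction records the versions of the objects it reads and, at commit, validates that none has changed before applying its writes.

```python
class Store:
    """Toy versioned object store."""
    def __init__(self):
        self.data = {}       # key -> value
        self.version = {}    # key -> number of commits touching key

class Transaction:
    """Optimistic transaction: read freely, validate versions at commit."""
    def __init__(self, store):
        self.store = store
        self.reads = {}      # key -> version observed
        self.writes = {}     # key -> new value

    def read(self, key, default=None):
        if key in self.writes:
            return self.writes[key]
        self.reads[key] = self.store.version.get(key, 0)
        return self.store.data.get(key, default)

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Abort if any object we read was committed since we read it.
        for key, seen in self.reads.items():
            if self.store.version.get(key, 0) != seen:
                return False
        for key, value in self.writes.items():
            self.store.data[key] = value
            self.store.version[key] = self.store.version.get(key, 0) + 1
        return True

store = Store()
t1, t2 = Transaction(store), Transaction(store)
t1.write("x", t1.read("x", 0) + 1)
t2.write("x", t2.read("x", 0) + 10)
assert t1.commit() is True    # first committer wins
assert t2.commit() is False   # t2's read is stale; it must retry
```

A real distributed version additionally has to make validation and write-back atomic across machines, which is where the local-transaction primitive of the abstract comes in.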
|
1106.3355
|
On epsilon-optimality of the pursuit learning algorithm
|
cs.LG
|
Estimator algorithms in learning automata are useful tools for adaptive,
real-time optimization in computer science and engineering applications. This
paper investigates theoretical convergence properties for a special case of
estimator algorithms: the pursuit learning algorithm. In this note, we identify
and fill a gap in existing proofs of probabilistic convergence for pursuit
learning. It is traditional to take the pursuit learning tuning parameter to be
fixed in practical applications, but our proof sheds light on the importance of
a vanishing sequence of tuning parameters in a theoretical convergence
analysis.
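For readers unfamiliar with the scheme, here is a minimal sketch of the pursuit algorithm with a fixed tuning parameter (the practical setting the note discusses); the environment and the constants are illustrative.

```python
import random

def pursuit(reward_probs, steps=20000, lam=0.01, seed=0):
    """Pursuit learning automaton with a fixed tuning parameter lam.

    Maintains action probabilities p and running reward estimates, and
    moves p toward the unit vector of the currently best-estimated action."""
    rnd = random.Random(seed)
    r = len(reward_probs)
    p = [1.0 / r] * r
    est = [0.0] * r
    count = [0] * r
    for _ in range(steps):
        a = rnd.choices(range(r), weights=p)[0]
        reward = 1.0 if rnd.random() < reward_probs[a] else 0.0
        count[a] += 1
        est[a] += (reward - est[a]) / count[a]      # running-mean estimate
        best = max(range(r), key=est.__getitem__)
        p = [(1 - lam) * pi + (lam if i == best else 0.0)
             for i, pi in enumerate(p)]             # pursue the best action
    return p

p = pursuit([0.2, 0.5, 0.8])
assert abs(sum(p) - 1.0) < 1e-9   # updates preserve the probability simplex
print([round(pi, 3) for pi in p])
```

With a fixed lam the automaton can, with small probability, lock onto a suboptimal action, which is exactly why the note's convergence analysis uses a vanishing sequence of tuning parameters.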
|
1106.3361
|
Random forest models of the retention constants in the thin layer
chromatography
|
cs.AI
|
In the current study we examine an application of the machine learning
methods to model the retention constants in the thin layer chromatography
(TLC). This problem can be described with hundreds or even thousands of
descriptors relevant to various molecular properties, most of them redundant
and not relevant for the retention constant prediction. Hence we employed
feature selection to significantly reduce the number of attributes.
Additionally we have tested application of the bagging procedure to the feature
selection. The random forest regression models were built using selected
variables. The resulting models have better correlation with the experimental
data than the reference models obtained with linear regression. The
cross-validation confirms robustness of the models.
|
1106.3373
|
Perturbation Analysis of Orthogonal Matching Pursuit
|
cs.IT math.IT
|
Orthogonal Matching Pursuit (OMP) is a canonical greedy pursuit algorithm for
sparse approximation. Previous studies of OMP have mainly considered the exact
recovery of a sparse signal $\bm x$ through $\bm \Phi$ and $\bm y=\bm \Phi \bm
x$, where $\bm \Phi$ is a matrix with more columns than rows. In this paper,
based on Restricted Isometry Property (RIP), the performance of OMP is analyzed
under general perturbations, which means both $\bm y$ and $\bm \Phi$ are
perturbed. Though exact recovery of an almost sparse signal $\bm x$ is no
longer feasible, the main contribution reveals that the exact recovery of the
locations of $k$ largest magnitude entries of $\bm x$ can be guaranteed under
reasonable conditions. The error between $\bm x$ and solution of OMP is also
estimated. It is also demonstrated that the sufficient condition is rather
tight by constructing an example. When $\bm x$ is strong-decaying, it is proved
that the sufficient conditions can be relaxed, and the locations can even be
recovered in the order of the entries' magnitude.
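For reference, the unperturbed OMP iteration that the analysis builds on can be sketched as follows (our illustration; the dimensions and the sparse signal are arbitrary test data):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of Phi,
    re-fitting least squares on the chosen support each iteration."""
    residual = y.astype(float)
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))  # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat, sorted(support)

rng = np.random.default_rng(0)
m, n, k = 50, 100, 3
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[[7, 40, 83]] = [1.0, -1.5, 2.0]          # a 3-sparse signal
x_hat, support = omp(Phi, Phi @ x, k)
assert support == [7, 40, 83]              # exact recovery, noiseless case
assert np.allclose(x_hat, x, atol=1e-8)
```

The paper's setting perturbs both y and Phi, in which case only the support of the largest entries is guaranteed under the stated RIP conditions; this sketch shows the clean baseline.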
|
1106.3381
|
The rates of convergence for generalized entropy of the normalized sums
of IID random variables
|
cs.IT math.IT math.PR
|
We consider the generalized differential entropy of normalized sums of
independent and identically distributed (IID) continuous random variables. We
prove that the R\'{e}nyi entropy and Tsallis entropy of order $\alpha\
(\alpha>0)$ of the normalized sum of IID continuous random variables with
bounded moments are convergent to the corresponding R\'{e}nyi entropy and
Tsallis entropy of the Gaussian limit, and obtain sharp rates of convergence.
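For a Gaussian limit the R\'{e}nyi entropy has a closed form, which makes the convergence target easy to check numerically. The sketch below (ours) compares a trapezoidal-rule estimate of $h_\alpha$ with the closed form $\ln(\sigma\sqrt{2\pi}) + \frac{\ln\alpha}{2(\alpha-1)}$ for a Gaussian density.

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule on a uniform grid."""
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

def renyi_entropy(pdf, x, alpha):
    """Differential Renyi entropy h_alpha = log(int f^alpha dx)/(1-alpha),
    estimated on a uniform grid x."""
    return np.log(trapezoid(pdf(x) ** alpha, x)) / (1 - alpha)

sigma, alpha = 1.3, 2.0
gauss = lambda t: np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
x = np.linspace(-12.0, 12.0, 200001)

numeric = renyi_entropy(gauss, x, alpha)
closed = np.log(sigma * np.sqrt(2 * np.pi)) + np.log(alpha) / (2 * (alpha - 1))
assert abs(numeric - closed) < 1e-6
```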
|
1106.3395
|
Decoding finger movements from ECoG signals using switching linear
models
|
cs.LG
|
One of the major challenges of ECoG-based Brain-Machine Interfaces is the
movement prediction of a human subject. Several methods exist to predict an arm
2-D trajectory. The fourth BCI Competition gives a dataset in which the aim is
to predict individual finger movements (5-D trajectory). The difficulty lies in
the fact that there is no simple relation between ECoG signals and finger
movement. We propose in this paper to decode finger flexions using switching
models. This method simplifies the system, which is now described as an
ensemble of linear models depending on an internal state. We show that good
prediction accuracy can be obtained by such a model.
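The core idea, one linear decoder per internal state, can be sketched minimally (a toy example of ours, not the competition pipeline):

```python
import numpy as np

# Two linear models selected by a discrete internal state, e.g.
# "resting" (0) vs. "moving" (1); the matrices here are illustrative.
A = {
    0: np.array([[0.0, 0.0]]),    # resting: flexion stays at zero
    1: np.array([[1.0, -0.5]]),   # moving: linear read-out of ECoG features
}

def decode(features, states):
    """Apply the linear model chosen by each time step's state."""
    return np.array([float(A[s] @ f) for f, s in zip(features, states)])

feats = np.array([[0.2, 0.1], [1.0, 0.4], [0.5, 1.0], [0.3, 0.2]])
states = [0, 1, 1, 0]
y = decode(feats, states)
print(y)
```

In the full method the state sequence itself must be inferred from the signal; here it is given, to isolate the switching-model structure.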
|
1106.3396
|
Large margin filtering for signal sequence labeling
|
cs.LG
|
Signal Sequence Labeling consists in predicting a sequence of labels given an
observed sequence of samples. A naive way is to filter the signal in order to
reduce the noise and to apply a classification algorithm on the filtered
samples. We propose in this paper to jointly learn the filter with the
classifier, leading to large margin filtering for classification. This method
makes it possible to learn the optimal cutoff frequency and phase of the
filter, which may differ from zero. Two methods are proposed and tested on a toy dataset
and on a real life BCI dataset from BCI Competition III.
|
1106.3397
|
Handling uncertainties in SVM classification
|
cs.LG
|
This paper addresses the pattern classification problem arising when
available target data include some uncertainty information. Target data
considered here is either qualitative (a class label) or quantitative (an
estimation of the posterior probability). Our main contribution is an
SVM-inspired formulation of this problem that takes class labels into account
through a hinge loss, and probability estimates through an epsilon-insensitive
cost function, together with a minimum-norm (maximum-margin) objective. This
formulation admits a dual form leading to a quadratic problem and allows the use
of a representer theorem and associated kernel. The solution provided can be
used for both decision and posterior probability estimation. Based on empirical
evidence our method outperforms regular SVM in terms of probability predictions
and classification performances.
|
1106.3402
|
The Capacity Region of the Linear Shift Deterministic Y-Channel
|
cs.IT math.IT
|
The linear shift deterministic Y-channel is studied. That is, we have three
users and one relay, where each user wishes to broadcast one message to each
other user via the relay, resulting in a multi-way relaying setup. The cut-set
bounds for this setup are shown to be not sufficient to characterize its
capacity region. New upper bounds are derived, which when combined with the
cut-set bounds provide an outer bound on the capacity region. It is shown that
this outer bound is achievable, and as a result, the capacity region of the
linear shift deterministic Y-channel is characterized.
|
1106.3409
|
System Identification in Wireless Relay Networks via Gaussian Process
|
cs.IT math.IT stat.AP
|
We present a flexible stochastic model for a class of cooperative wireless
relay networks, in which the relay processing functionality is not known at the
destination. In addressing this problem we develop efficient algorithms to
perform relay identification in a wireless relay network. We first construct a
statistical model based on a representation of the system using Gaussian
Processes in a non-standard manner due to the way we treat the imperfect
channel state information. We then formulate the estimation problem to perform
system identification, taking into account complexity and computational
efficiency. Next we develop a set of three algorithms to solve the
identification problem each of decreasing complexity, trading-off the
estimation bias for computational efficiency. The joint optimisation problem is
tackled via a Bayesian framework using the Iterated Conditioning on the Modes
methodology. We develop a lower bound and several sub-optimal computationally
efficient solutions to the identification problem, for comparison. We
illustrate the estimation performance of our methodology for a range of widely
used relay functionalities. The relative total error attained by our algorithm
when compared to the lower bound is found to be at worst 9% for low SNR values
under all functions considered. The effect of the relay functional estimation
error is also studied via BER simulations and is shown to be less than 2dB
worse than the lower bound.
|
1106.3457
|
Extensional Higher-Order Logic Programming
|
cs.PL cs.AI cs.LO
|
We propose a purely extensional semantics for higher-order logic programming.
In this semantics program predicates denote sets of ordered tuples, and two
predicates are equal iff they are equal as sets. Moreover, every program has a
unique minimum Herbrand model which is the greatest lower bound of all Herbrand
models of the program and the least fixed-point of an immediate consequence
operator. We also propose an SLD-resolution proof procedure which is proven
sound and complete with respect to the minimum model semantics. In other words,
we provide a purely extensional theoretical framework for higher-order logic
programming which generalizes the familiar theory of classical (first-order)
logic programming.
|
1106.3464
|
Polar Fusion Technique Analysis for Evaluating the Performances of Image
Fusion of Thermal and Visual Images for Human Face Recognition
|
cs.CV
|
This paper presents a comparative study of two different methods, which are
based on fusion and polar transformation of visual and thermal images. Here,
investigation is done to handle the challenges of face recognition, which
include pose variations, changes in facial expression, partial occlusions,
variations in illumination, rotation through different angles, change in scale
etc. To overcome these obstacles we have implemented and thoroughly examined
two different fusion techniques through rigorous experimentation. In the first
method, the log-polar transformation is applied to the fused images obtained
after fusion of the visual and thermal images, whereas in the second method,
fusion is applied to the log-polar transforms of the individual visual and
thermal images. After this step, Principal Component Analysis (PCA) is applied
to reduce the dimension of the fused images. Log-polar transformed images are
capable of handling the complications introduced by scaling and rotation.
The main objective of employing fusion is to produce a fused image that
provides more detailed and reliable information, and is capable of overcoming
the drawbacks present in the individual visual and thermal face images.
Finally, those reduced fused images are classified using a multilayer
perceptron neural network. The database used for the experiments conducted
here is the Object Tracking and Classification Beyond Visible Spectrum
(OTCBVS) benchmark database of thermal and visual face images. The second
method has shown better performance, with a maximum correct recognition rate
of 95.71% and an average of 93.81%.
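A minimal nearest-neighbour log-polar resampling (our sketch, not the paper's implementation) looks like this; rotations of the input become shifts along the angular axis, and scalings become shifts along the radial axis:

```python
import numpy as np

def log_polar(img, n_rho=32, n_theta=32):
    """Log-polar resampling of a square image about its centre
    (nearest-neighbour interpolation, for illustration only)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    out = np.zeros((n_rho, n_theta), dtype=img.dtype)
    for i in range(n_rho):
        # Radii spaced logarithmically between 1 pixel and r_max.
        r = np.exp(np.log(r_max) * (i + 1) / n_rho)
        for j in range(n_theta):
            theta = 2 * np.pi * j / n_theta
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = img[y, x]
    return out

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
lp = log_polar(img)
assert lp.shape == (32, 32)
```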
|
1106.3466
|
Next Level of Data Fusion for Human Face Recognition
|
cs.CV
|
This paper demonstrates two different fusion techniques at two different
levels of a human face recognition process. The first one is called data fusion
at lower level and the second one is the decision fusion towards the end of the
recognition process. At first a data fusion is applied on visual and
corresponding thermal images to generate fused image. Data fusion is
implemented in the wavelet domain after decomposing the images through
Daubechies wavelet coefficients (db2). During the data fusion, the maxima of
the approximation and the three detail coefficients are merged together. After
that, Principal Component Analysis (PCA) is applied to the fused coefficients
and finally two different artificial neural networks, namely Multilayer
Perceptron (MLP) and Radial Basis Function (RBF) networks, have been used
separately to classify the images. For decision fusion, the decisions from
both classifiers are then combined using a Bayesian formulation. For the
experiments, the IRIS thermal/visible Face Database has been used.
Experimental results show that the performance of multiple classifier system
along with decision fusion works well over the single classifier system.
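The fusion rule can be sketched in one dimension with a Haar transform standing in for the paper's db2 decomposition (a library such as pywt would provide db2; we substitute Haar here to stay dependency-free, and this is our own sketch). Coefficients with larger magnitude win:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT (a stand-in for the db2 transform in the paper)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def fuse(a, b):
    """Maximum-magnitude fusion of the wavelet coefficients of two signals."""
    (aa, ad), (ba, bd) = haar_dwt(a), haar_dwt(b)
    fa = np.where(np.abs(aa) >= np.abs(ba), aa, ba)
    fd = np.where(np.abs(ad) >= np.abs(bd), ad, bd)
    return haar_idwt(fa, fd)

visual = np.array([4.0, 4.0, 1.0, 1.0])
thermal = np.array([0.0, 2.0, 5.0, 5.0])
print(fuse(visual, thermal))
```

For images the same rule is applied separably along rows and columns; the one-dimensional case keeps the example short.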
|
1106.3467
|
High Performance Human Face Recognition using Independent High Intensity
Gabor Wavelet Responses: A Statistical Approach
|
cs.CV
|
In this paper, we present a technique in which high-intensity feature vectors
extracted from the Gabor wavelet transformation of frontal face images are
combined with Independent Component Analysis (ICA) for enhanced face
recognition. Firstly, the high-intensity feature vectors are automatically
extracted using the local characteristics of each individual face from the
Gabor transformed images. Then ICA is applied on these locally extracted
high-intensity feature vectors of the facial images to obtain the independent
high intensity feature (IHIF) vectors. These IHIF forms the basis of the work.
Finally, the image classification is done using these IHIF vectors, which are
considered as representatives of the images. The importance behind implementing
ICA along with the high-intensity features of Gabor wavelet transformation is
twofold. On the one hand, the selected peaks of the Gabor transformed face
images exhibit strong characteristics of spatial locality, scale, and
orientation selectivity. Thus these peaks produce salient local features most
suitable for face recognition. On the other hand, as the ICA employs locally
salient features from the high informative facial parts, it reduces redundancy
and represents independent features explicitly. These independent features are
most useful for subsequent facial discrimination and associative recall. The
efficiency of the IHIF method is demonstrated by experiments on frontal facial
image datasets selected from the FERET, FRAV2D, and ORL databases.
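The peak-selection step can be sketched as follows. This toy version builds a single Gabor kernel and returns the k strongest-response locations of an image; the ICA stage, which would be applied to the feature vectors gathered at these points, is omitted, and all names are illustrative:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    # real part of a Gabor filter: Gaussian envelope times a cosine
    # carrier at orientation theta and wavelength lam
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def high_intensity_points(img, kernel, k):
    # circular convolution via FFT; return the k strongest-response
    # pixel coordinates as rows of (row, col)
    resp = np.abs(np.fft.ifft2(np.fft.fft2(img)
                               * np.fft.fft2(kernel, img.shape)).real)
    flat = np.argsort(resp, axis=None)[::-1][:k]
    return np.column_stack(np.unravel_index(flat, img.shape))
```

In a full pipeline one would run a bank of such kernels over several scales and orientations and feed the responses at the selected points into ICA.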
|
1106.3498
|
On the expressive power of unit resolution
|
cs.AI cs.CC
|
This preliminary report addresses the expressive power of unit resolution
regarding input data encoded with partial truth assignments of propositional
variables. A characterization of the functions that are computable in this way,
which we propose to call propagatable functions, is given. By establishing that
propagatable functions can also be computed using monotone circuits, we show
that there exist polynomial time complexity propagatable functions requiring an
exponential amount of clauses to be computed using unit resolution. These
results shed new light on studying CNF encodings of NP-complete problems in
order to solve them using propositional satisfiability algorithms.
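For concreteness, unit propagation over a CNF formula given a partial truth assignment can be sketched as follows (a generic routine, not the authors' formalism):

```python
def unit_propagate(clauses, assignment):
    # clauses: list of lists of nonzero ints (negative int = negated variable)
    # assignment: dict var -> bool, a partial truth assignment
    # returns the closure of the assignment under unit resolution,
    # or None if a clause is falsified (conflict)
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                v = abs(lit)
                if v in assignment:
                    if assignment[v] == (lit > 0):
                        satisfied = True
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return None          # every literal false: conflict
            if len(unassigned) == 1:  # unit clause forces its last literal
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment
```

For example, with clauses (¬x1 ∨ x2) and (¬x2 ∨ x3) and the partial assignment {x1 = true}, propagation forces x2 and then x3 to true.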
|
1106.3508
|
Surrogate Parenthood: Protected and Informative Graphs
|
cs.SI physics.soc-ph
|
Many applications, including provenance and some analyses of social networks,
require path-based queries over graph-structured data. When these graphs
contain sensitive information, paths may be broken, resulting in uninformative
query results. This paper presents innovative techniques that give users more
informative graph query results; the techniques leverage a common industry
practice of providing what we call surrogates: alternate, less sensitive
versions of nodes and edges releasable to a broader community. We describe
techniques for interposing surrogate nodes and edges to protect sensitive graph
components, while maximizing graph connectivity and giving users as much
information as possible. In this work, we formalize the problem of creating a
protected account G' of a graph G. We provide a utility measure to compare the
informativeness of alternate protected accounts and an opacity measure for
protected accounts, which indicates the likelihood that an attacker can
recreate the topology of the original graph from the protected account. We
provide an algorithm to create a maximally useful protected account of a
sensitive graph, and show through evaluation with the PLUS prototype that using
surrogates and protected accounts adds value for the user, with no significant
impact on the time required to generate results for graph queries.
|
1106.3517
|
DWT Based Fingerprint Recognition using Non Minutiae Features
|
cs.CV
|
Forensic applications like criminal investigations, terrorist identification
and National security issues require a strong fingerprint data base and
efficient identification system. In this paper we propose DWT based Fingerprint
Recognition using Non Minutiae (DWTFR) algorithm. A fingerprint image is
decomposed into multi-resolution sub-bands LL, LH, HL and HH by applying a
3-level DWT. The Dominant local orientation angle {\theta} and Coherence are
computed on the LL band only. The Centre Area Features and Edge Parameters are
determined at each DWT level by considering all four sub-bands. A test
fingerprint is compared with database fingerprints based on the Euclidean
Distance of all the features. It is observed that the values of FAR, FRR and
TSR are improved compared to the existing algorithm.
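The multi-level DWT feature-and-distance pipeline can be sketched as follows. This illustration substitutes a Haar transform for the paper's wavelet and simple per-sub-band energies for the paper's orientation, coherence, centre-area and edge features; all names are illustrative:

```python
import numpy as np

def haar_level(x):
    # one level of a 2D Haar transform: (LL, LH, HL, HH)
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def dwt_features(img, levels=3):
    # mean energy of each of the four sub-bands at every level,
    # recursing on the LL band as in a standard multi-level DWT
    feats = []
    ll = img.astype(float)
    for _ in range(levels):
        ll, lh, hl, hh = haar_level(ll)
        feats += [np.mean(band**2) for band in (ll, lh, hl, hh)]
    return np.array(feats)

def match(test_img, database):
    # nearest database entry by Euclidean distance in feature space
    f = dwt_features(test_img)
    dists = {name: np.linalg.norm(f - dwt_features(img))
             for name, img in database.items()}
    return min(dists, key=dists.get)
```

A stored fingerprint matches itself at distance zero, so it is returned ahead of any other entry.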
|
1106.3552
|
Decompositions of two player games: potential, zero-sum, and stable
games
|
cs.GT cs.SY math-ph math.MP math.OC q-bio.PE
|
We introduce several methods of decomposition for two player normal form
games. Viewing the set of all games as a vector space, we exhibit explicit
orthonormal bases for the subspaces of potential games, zero-sum games, and
their orthogonal complements which we call anti-potential games and
anti-zero-sum games, respectively. Perhaps surprisingly, every anti-potential
game comes either from the Rock-Paper-Scissors type games (in the case of
symmetric games) or from the Matching Pennies type games (in the case of
asymmetric games). Using these decompositions, we prove old (and some new)
cycle criteria for potential and zero-sum games (as orthogonality relations
between subspaces). We illustrate the usefulness of our decomposition by (a)
analyzing the generalized Rock-Paper-Scissors game, (b) completely
characterizing the set of all null-stable games, (c) providing a large class of
strict stable games, (d) relating the game decomposition to the decomposition
of vector fields for the replicator equations, (e) constructing Lyapunov
functions for some replicator dynamics, and (f) constructing Zeeman games:
games with an interior asymptotically stable Nash equilibrium and a pure
strategy ESS.
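The symmetric-game case can be illustrated numerically. As a rough sketch (the paper's orthogonal decomposition is finer, also covering asymmetric games and explicit orthonormal bases), a symmetric two-player payoff matrix splits into a symmetric part, which behaves like a partnership (potential) game, and an antisymmetric part, which is a symmetric zero-sum game; Rock-Paper-Scissors lies entirely in the antisymmetric component:

```python
import numpy as np

def decompose(A):
    # Split a symmetric-game payoff matrix A into A = S + K, where S is the
    # symmetric (partnership / potential-like) part and K the antisymmetric
    # (zero-sum) part.
    S = (A + A.T) / 2
    K = (A - A.T) / 2
    return S, K

# Rock-Paper-Scissors payoffs for the row player
rps = np.array([[ 0., -1.,  1.],
                [ 1.,  0., -1.],
                [-1.,  1.,  0.]])
S, K = decompose(rps)
assert np.allclose(S, 0)    # no partnership component at all
assert np.allclose(K, rps)  # RPS is purely zero-sum
```

The split is orthogonal under the Frobenius inner product, which is the viewpoint the decomposition results above formalize for the full space of games.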
|
1106.3554
|
Impact of Heterogeneous Human Activities on Epidemic Spreading
|
physics.data-an cs.SI physics.soc-ph
|
Recent empirical observations suggest a heterogeneous nature of human
activities. The heavy-tailed inter-event time distribution at population level
is well accepted, while whether the individual acts in a heterogeneous way is
still under debate. Motivated by the impact of temporal heterogeneity of human
activities on epidemic spreading, this paper studies the susceptible-infected
model on a fully mixed population, where each individual acts in a completely
homogeneous way but different individuals have different mean activities.
Extensive simulations show that the heterogeneity of activities at population
level remarkably affects the speed of spreading, even though each individual
behaves regularly. Furthermore, the spreading speed of this model is more
sensitive to changes in system heterogeneity than that of a model consisting
of individuals acting with heavy-tailed inter-event time distributions. This
work refines our understanding of the impact of
heterogeneous human activities on epidemic spreading.
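A toy discrete-time version of such a model (not the authors' exact simulation setup) can be sketched as follows; each individual initiates contacts homogeneously in time with its own mean activity, and infection spreads on contact in a fully mixed population:

```python
import random

def si_spread(activities, steps, seed=0):
    # Susceptible-Infected dynamics on a fully mixed population.
    # Per step, individual i initiates a contact with probability
    # activities[i]; contact between a susceptible and an infected
    # individual transmits the infection. Returns infected counts over time.
    rng = random.Random(seed)
    n = len(activities)
    infected = [False] * n
    infected[0] = True  # one initial seed
    history = [1]
    for _ in range(steps):
        for i in range(n):
            if rng.random() < activities[i]:
                j = rng.randrange(n)
                if infected[i] != infected[j]:
                    infected[i] = infected[j] = True
    # (SI model: no recovery, so the count never decreases)
        history.append(sum(infected))
    return history
```

Comparing runs with equal-mean homogeneous activities against heterogeneous ones (e.g. a few very active individuals) reproduces the qualitative effect on spreading speed discussed above.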
|
1106.3582
|
Link Biased Strategies in Network Formation Games
|
math.OC cs.SI physics.soc-ph
|
We show a simple method for constructing an infinite family of graph
formation games with link bias so that the resulting games admit, as a
\textit{pairwise stable} solution, a graph with an arbitrarily specified degree
distribution. Pairwise stability is used as the equilibrium condition over the
more commonly used Nash equilibrium to prevent the occurrence of ill-behaved
equilibrium strategies that do not occur in ordinary play. We construct this
family of games by solving an integer programming problem whose constraints
enforce the terminal pairwise stability property we desire.
|
1106.3595
|
Information Equals Amortized Communication
|
cs.IT cs.CC math.IT
|
We show how to efficiently simulate the sending of a message M to a receiver
who has partial information about the message, so that the expected number of
bits communicated in the simulation is close to the amount of additional
information that the message reveals to the receiver. This is a generalization
and strengthening of the Slepian-Wolf theorem, which shows how to carry out
such a simulation with low amortized communication in the case that M is a
deterministic function of X. A caveat is that our simulation is interactive.
As a consequence, we prove that the internal information cost (namely the
information revealed to the parties) involved in computing any relation or
function using a two party interactive protocol is exactly equal to the
amortized communication complexity of computing independent copies of the same
relation or function. We also show that the only way to prove a strong direct
sum theorem for randomized communication complexity is by solving a particular
variant of the pointer jumping problem that we define. Our work implies that a
strong direct sum theorem for communication complexity holds if and only if
efficient compression of communication protocols is possible.
|
1106.3600
|
How Insight Emerges in a Distributed, Content-addressable Memory
|
q-bio.NC cs.AI
|
We begin this chapter with the bold claim that it provides a neuroscientific
explanation of the magic of creativity. Creativity presents a formidable
challenge for neuroscience. Neuroscience generally involves studying what
happens in the brain when someone engages in a task that involves responding to
a stimulus, or retrieving information from memory and using it the right way,
or at the right time. If the relevant information is not already encoded in
memory, the task generally requires that the individual make systematic use of
information that is encoded in memory. But creativity is different. It
paradoxically involves studying how someone pulls out of their brain something
that was never put into it! Moreover, it must be something both new and useful,
or appropriate to the task at hand. The ability to pull out of memory something
new and appropriate that was never stored there in the first place is what we
refer to as the magic of creativity. Even if we are so fortunate as to
determine which areas of the brain are active and how these areas interact
during creative thought, we will not have an answer to the question of how the
brain comes up with solutions and artworks that are new and appropriate. On the
other hand, since the representational capacity of neurons emerges at a level
that is higher than that of the individual neurons themselves, the inner
workings of neurons is too low a level to explain the magic of creativity. Thus
we look to a level that is midway between gross brain regions and neurons.
Since creativity generally involves combining concepts from different domains,
or seeing old ideas from new perspectives, we focus our efforts on the neural
mechanisms underlying the representation of concepts and ideas. Thus we ask
questions about the brain at the level that accounts for its representational
capacity, i.e. at the level of distributed aggregates of neurons.
|
1106.3625
|
On the Locality of Codeword Symbols
|
cs.IT cs.CC cs.DM math.IT
|
Consider a linear [n,k,d]_q code C. We say that the i-th coordinate of C has
locality r, if the value at this coordinate can be recovered from accessing
some other r coordinates of C. Data storage applications require codes with
small redundancy, low locality for information coordinates, large distance, and
low locality for parity coordinates. In this paper we carry out an in-depth
study of the relations between these parameters.
We establish a tight bound for the redundancy n-k in terms of the message
length, the distance, and the locality of information coordinates. We refer to
codes attaining the bound as optimal. We prove some structure theorems about
optimal codes, which are particularly strong for small distances. This gives a
fairly complete picture of the tradeoffs between codeword length, worst-case
distance and locality of information symbols.
We then consider the locality of parity check symbols and erasure correction
beyond worst case distance for optimal codes. Using our structure theorem, we
obtain a tight bound for the locality of parity symbols possible in such codes
for a broad class of parameter settings. We prove that there is a tradeoff
between having good locality for parity checks and the ability to correct
erasures beyond the minimum distance.
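As an illustration of the locality notion itself (not of the paper's constructions or bounds), here is a brute-force locality checker for small binary linear codes, where the generator matrix `G` is given as a list of rows:

```python
from itertools import combinations, product

def codewords(G):
    # all codewords of the binary linear code generated by the rows of G
    words = []
    for msg in product([0, 1], repeat=len(G)):
        words.append(tuple(sum(m * g for m, g in zip(msg, col)) % 2
                           for col in zip(*G)))
    return words

def locality(G, i):
    # smallest r such that coordinate i is a function of some r other
    # coordinates, checked by exhaustive search over coordinate subsets
    words = codewords(G)
    n = len(words[0])
    others = [j for j in range(n) if j != i]
    for r in range(1, n):
        for R in combinations(others, r):
            seen = {}
            if all(seen.setdefault(tuple(w[j] for j in R), w[i]) == w[i]
                   for w in words):
                return r
    return None
```

For the [3,2] single-parity-check code, each coordinate is the sum of the other two, so the information coordinates have locality 2; for the repetition code, every coordinate has locality 1.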
|
1106.3627
|
Analog Network Coding in the Generalized High-SNR Regime
|
cs.IT math.IT
|
In a recent paper [4], Mari\'c et al. analyzed the performance of the analog
network coding (ANC) in a layered relay network for the high-SNR regime. They
have proved that under the ANC scheme, if each relay transmits the received
signals at the upper bound of the power constraint, the transmission rate will
approach the network capacity. In this paper, we consider a more general
scenario defined as the generalized high-SNR regime, where the relays at layer
$l$ in a layered relay network with $L$ layers do not satisfy the high-SNR
conditions, and then determine an ANC relay scheme in such a network. By relating
the received SNR at the nodes with the propagated noise, we derive the rate
achievable by the ANC scheme proposed in this paper. The result shows that the
achievable ANC rate approaches the upper bound of the ANC capacity as the
received powers at relays in high SNR increase. A comparison of the two ANC
schemes implies that the scheme proposed in [4] may not always be the optimal
one in the generalized high-SNR regime. The result also demonstrates that the
upper and lower bounds of the ANC rate coincide in the limit as the number of
relays at layer L-1 dissatisfying the high-SNR conditions tends to infinity,
yielding an asymptotic capacity result.
|
1106.3629
|
Total Variation Minimization Based Compressive Wideband Spectrum Sensing
for Cognitive Radios
|
cs.IT math.IT
|
Wideband spectrum sensing is a critical component of a functioning cognitive
radio system. Its major challenge is the prohibitively high sampling rate
requirement, which compressive sensing (CS) promises to address. Nearly all
current CS based compressive wideband spectrum sensing methods exploit only
frequency sparsity. Motivated by the goal of fast and robust detection of
wideband spectrum changes, total variation minimization is incorporated to
exploit temporal and frequency structure information and thereby enhance the
sparsity level. As a sparser vector is obtained, the spectrum sensing period
is shortened and sensing accuracy is enhanced. Both theoretical evaluation and
numerical experiments demonstrate the performance improvement.
|
1106.3651
|
Robust Bayesian reinforcement learning through tight lower bounds
|
cs.LG stat.ML
|
In the Bayesian approach to sequential decision making, exact calculation of
the (subjective) utility is intractable. This extends to most special cases of
interest, such as reinforcement learning problems. While utility bounds are
known to exist for this problem, so far none of them were particularly tight.
In this paper, we show how to efficiently calculate a lower bound, which
corresponds to the utility of a near-optimal memoryless policy for the decision
problem, which is generally different from both the Bayes-optimal policy and
the policy which is optimal for the expected MDP under the current belief. We
then show how these can be applied to obtain robust exploration policies in a
Bayesian reinforcement learning setting.
|
1106.3655
|
Bayesian multitask inverse reinforcement learning
|
stat.ML cs.AI
|
We generalise the problem of inverse reinforcement learning to multiple
tasks, from multiple demonstrations. Each demonstration may represent one
expert trying to solve a different task, or different experts trying to solve
the same task. Our main contribution is to formalise the problem as statistical
preference elicitation, via a number of structured priors, whose form captures
our biases about the relatedness of different tasks or expert policies. In
doing so, we introduce a prior on policy optimality, which is more natural to
specify. We show that our framework allows us not only to learn efficiently
from multiple experts but also to effectively differentiate between the goals
of each. Possible applications include analysing the intrinsic motivations of
subjects in behavioural experiments and learning from multiple teachers.
|
1106.3680
|
Efficient Two-Stage Group Testing Algorithms for DNA Screening
|
cs.DM cs.CE cs.DS cs.IT math.CO math.IT q-bio.QM
|
Group testing algorithms are very useful tools for DNA library screening.
Building on recent work by Levenshtein (2003) and Tonchev (2008), we construct
in this paper new infinite classes of combinatorial structures, the existence
of which are essential for attaining the minimum number of individual tests at
the second stage of a two-stage disjunctive testing algorithm.
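The two-stage idea can be sketched in the classic Dorfman style (the combinatorial structures constructed in the paper optimize the second stage; here the second stage simply tests every member of each positive pool individually, and all names are illustrative):

```python
def two_stage_tests(n, defectives, group_size):
    # Stage 1: test disjoint pools of consecutive items (a pool is
    # positive iff it contains a defective).
    # Stage 2: individually test every member of each positive pool.
    # Returns the total number of tests used.
    pools = [range(s, min(s + group_size, n)) for s in range(0, n, group_size)]
    tests = len(pools)
    for pool in pools:
        if any(i in defectives for i in pool):
            tests += len(pool)  # resolve the positive pool one by one
    return tests
```

With 100 items, one defective, and pools of size 10, this uses 10 + 10 = 20 tests instead of 100 individual ones; the paper's structures reduce the second-stage cost further.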
|