| id | title | categories | abstract |
|---|---|---|---|
1205.3137
|
Unsupervised Discovery of Mid-Level Discriminative Patches
|
cs.CV cs.AI cs.LG
|
The goal of this paper is to discover a set of discriminative patches which
can serve as a fully unsupervised mid-level visual representation. The desired
patches need to satisfy two requirements: 1) to be representative, they need to
occur frequently enough in the visual world; 2) to be discriminative, they need
to be different enough from the rest of the visual world. The patches could
correspond to parts, objects, "visual phrases", etc. but are not restricted to
be any one of them. We pose this as an unsupervised discriminative clustering
problem on a huge dataset of image patches. We use an iterative procedure which
alternates between clustering and training discriminative classifiers, while
applying careful cross-validation at each step to prevent overfitting. The
paper experimentally demonstrates the effectiveness of discriminative patches
as an unsupervised mid-level visual representation, suggesting that it could be
used in place of visual words for many tasks. Furthermore, discriminative
patches can also be used in a supervised regime, such as scene classification,
where they demonstrate state-of-the-art performance on the MIT Indoor-67
dataset.
|
1205.3180
|
Community-Quality-Based Player Ranking in Collaborative Games with no
Explicit Objectives
|
cs.GT cs.SI
|
Player ranking can be used to determine the quality of the contributions of a
player to a collaborative community. However, collaborative games with no
explicit objectives do not support player ranking, as there is no metric to
measure the quality of player contributions. An implicit objective of such
communities is not being disruptive towards other players. In this paper, we
propose a parameterizable approach for real-time player ranking in
collaborative games with no explicit objectives. Our method computes a ranking
by applying a simple heuristic community quality function. We also demonstrate
the capabilities of our approach by applying several parameterizations of it to
a case study and comparing the obtained results.
|
1205.3181
|
Multiple Identifications in Multi-Armed Bandits
|
cs.LG stat.ML
|
We study the problem of identifying the top $m$ arms in a multi-armed bandit
game. Our proposed solution relies on a new algorithm based on successive
rejects of the seemingly bad arms, and successive accepts of the good ones.
This algorithmic contribution makes it possible to tackle other multiple-identification
settings that were previously out of reach. In particular we show that this
idea of successive accepts and rejects applies to the multi-bandit best arm
identification problem.
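The successive accepts-and-rejects idea can be sketched in a few lines of Python on simulated Bernoulli arms. This is a toy illustration only: the phase lengths and the accept/reject rule below are simplified placeholders, not the paper's calibrated schedule.

```python
import random

def successive_accepts_rejects(means, m, pulls_per_phase=2000, seed=0):
    """Toy sketch: in each phase, pull every surviving arm, then either
    accept the empirical best arm or reject the empirical worst one,
    depending on which empirical gap is larger, until m arms are accepted.
    The schedule here is a simplified placeholder, not the paper's."""
    rng = random.Random(seed)
    active = list(range(len(means)))
    accepted = []
    counts = {i: 0 for i in active}
    sums = {i: 0.0 for i in active}
    while len(accepted) < m and len(active) > m - len(accepted):
        for i in active:
            for _ in range(pulls_per_phase):
                sums[i] += 1.0 if rng.random() < means[i] else 0.0
                counts[i] += 1
        order = sorted(active, key=lambda i: sums[i] / counts[i])
        best, worst = order[-1], order[0]
        gap_best = sums[best] / counts[best] - sums[order[-2]] / counts[order[-2]]
        gap_worst = sums[order[1]] / counts[order[1]] - sums[worst] / counts[worst]
        if gap_best >= gap_worst:
            accepted.append(best)   # best arm clearly separated: accept it
            active.remove(best)
        else:
            active.remove(worst)    # worst arm clearly separated: reject it
    accepted.extend(active[: m - len(accepted)])
    return sorted(accepted)

print(successive_accepts_rejects([0.1, 0.2, 0.7, 0.8, 0.9], m=2))  # [3, 4]
```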
|
1205.3183
|
A Model-Driven Probabilistic Parser Generator
|
cs.CL
|
Existing probabilistic scanners and parsers impose hard constraints on the
way lexical and syntactic ambiguities can be resolved. Furthermore, traditional
grammar-based parsing tools are limited in the mechanisms they allow for taking
context into account. In this paper, we propose a model-driven tool that allows
for statistical language models with arbitrary probability estimators. Our work
on model-driven probabilistic parsing is built on top of ModelCC, a model-based
parser generator, and enables the probabilistic interpretation and resolution
of anaphoric, cataphoric, and recursive references in the disambiguation of
abstract syntax graphs. In order to demonstrate the expressive power of ModelCC, we
describe the design of a general-purpose natural language parser.
|
1205.3188
|
The robustness of interdependent clustered networks
|
physics.soc-ph cs.SI
|
It was recently found that cascading failures can cause the abrupt breakdown
of a system of interdependent networks. Using the percolation method developed
for single clustered networks by Newman [Phys. Rev. Lett. 103, 058701
(2009)], we develop an analytical method for studying how clustering within the
networks of a system of interdependent networks affects the system's
robustness. We find that clustering significantly increases the vulnerability
of the system, which is represented by the increased value of the percolation
threshold $p_c$ in interdependent networks.
|
1205.3193
|
A Comparative Study of Collaborative Filtering Algorithms
|
cs.IR stat.ML
|
Collaborative filtering is a rapidly advancing research area. Every year
several new techniques are proposed and yet it is not clear which of the
techniques work best and under what conditions. In this paper we conduct a
study comparing several collaborative filtering techniques -- both classic and
recent state-of-the-art -- in a variety of experimental contexts. Specifically,
we report conclusions controlling for number of items, number of users,
sparsity level, performance criteria, and computational complexity. Our
conclusions identify what algorithms work well and in what conditions, and
contribute both to the industrial deployment of collaborative filtering algorithms and
to the research community.
|
1205.3212
|
SportSense: Real-Time Detection of NFL Game Events from Twitter
|
cs.SI physics.soc-ph
|
We report our experience in building a working system, SportSense
(http://www.sportsense.us), which exploits Twitter users as human sensors of
the physical world to detect events in real-time. Using the US National
Football League (NFL) games as a case study, we report in-depth measurement
studies of the delay and post rate of tweets, and their dependence on other
properties. We subsequently develop a novel event detection method based on
these findings, and demonstrate that it can effectively and accurately extract
game events using open access Twitter data. SportSense has been evolving during
the 2010-11 and 2011-12 NFL seasons and is able to recognize NFL game big plays
in 30 to 90 seconds with a 98% true positive rate and a 9% false positive rate. Using
a smart electronic TV program guide, we show that SportSense can utilize human
sensors to empower novel services.
|
1205.3225
|
Using Superposition Codebooks and Partial Decode and Forward in Low SNR
Parallel Relay Networks
|
cs.IT math.IT
|
A new communication scheme for Gaussian parallel relay networks based on
superposition coding and partial decoding at the relays is presented. Some
specific examples are proposed in which two codebook layers are superimposed.
The first level codebook is constructed with symbols from a binary or ternary
alphabet while the second level codebook is composed of codewords chosen with
Gaussian symbols. The new communication scheme is a generalization of
decode-and-forward, amplify-and-forward, and bursty-amplify-and-forward. The
asymptotic low SNR regime is studied using achievable rates and minimum
energy-per-bit as performance metrics. It is shown that the new scheme
outperforms all previously known schemes for some channels and parameter
ranges.
|
1205.3231
|
Survey on Distributed Data Mining in P2P Networks
|
cs.DB
|
The exponential increase in the availability of digital data and the necessity to
process it in business and scientific fields has forced upon us the
need to analyze and mine useful knowledge from it. Traditionally, data mining
has used a data warehousing model of gathering all data into a central site,
and then running an algorithm upon that data. Such a centralized approach is
fundamentally inappropriate for many reasons, such as the huge amount of data,
the infeasibility of centralizing data stored at multiple sites, bandwidth limitations,
and privacy concerns. To solve these problems, Distributed Data Mining (DDM)
has emerged as an active research area. DDM pays careful attention to using the
distributed resources of data, computing, communication, and human factors in a
near-optimal fashion. DDM is gaining
attention in peer-to-peer (P2P) systems which are emerging as a choice of
solution for applications such as file sharing, collaborative movie and song
scoring, electronic commerce, and surveillance using sensor networks. The main
intention of this paper is to provide an overview of DDM and P2P Data
Mining. The paper discusses the need for DDM, taxonomy of DDM architectures,
various DDM approaches, DDM related works in P2P systems and issues and
challenges in P2P data mining.
|
1205.3245
|
Critical paths in a metapopulation model of H1N1: Efficiently delaying
influenza spreading through flight cancellation
|
physics.soc-ph cs.SI q-bio.PE
|
Disease spreading through human travel networks has been a topic of great
interest in recent years, as witnessed during outbreaks of influenza A (H1N1)
or SARS pandemics. One way to stop spreading over the airline network is to impose
travel restrictions on major airports or network hubs, based on the total
number of passengers of an airport. Here, we test alternative strategies using
edge removal, cancelling targeted flight connections rather than restricting
traffic for network hubs, for controlling spreading over the airline network.
We employ a SEIR metapopulation model that takes into account the population of
cities, simulates infection within cities and across the network of the top 500
airports, and tests different flight cancellation methods for limiting the
course of infection. The time required to spread an infection globally, as
simulated by a stochastic global spreading model was used to rank the candidate
control strategies. The model includes both local spreading dynamics at the
level of populations and long-range connectivity obtained from real global
airline travel data. Simulated spreading in this network showed that spreading
infected 37% fewer individuals after cancelling a quarter of the flight connections
between cities, as selected by betweenness centrality. The alternative strategy
of closing down whole airports causing the same number of cancelled connections
only reduced infections by 18%. In conclusion, selecting highly ranked single
connections between cities for cancellation was more effective, resulting in
fewer individuals infected with influenza, compared to shutting down whole
airports. It is also a more efficient strategy, affecting fewer passengers
while producing the same reduction in infections.
The network of connections between the top 500 airports is available under
the resources link on our website http://www.biological-networks.org.
|
1205.3252
|
Two-way Wireless Video Communication using Randomized Cooperation,
Network Coding and Packet Level FEC
|
cs.IT cs.NI math.IT
|
Two-way real-time video communication in wireless networks requires high
bandwidth, low delay and error resiliency. This paper addresses these demands
by proposing a system with the integration of Network Coding (NC), user
cooperation using Randomized Distributed Space-time Coding (R-DSTC) and packet
level Forward Error Correction (FEC) under a one-way delay constraint.
Simulation results show that the proposed scheme significantly outperforms both
conventional direct transmission as well as R-DSTC based two-way cooperative
transmission, and is most effective when the distance between the users is
large.
|
1205.3269
|
Characterization and Moment Stability Analysis of Quasilinear Quantum
Stochastic Systems with Quadratic Coupling to External Fields
|
quant-ph cs.SY math.DS math.OC math.PR
|
The paper is concerned with open quantum systems whose Heisenberg dynamics
are described by quantum stochastic differential equations driven by external
boson fields. The system-field coupling operators are assumed to be quadratic
polynomials of the system observables, with the latter satisfying canonical
commutation relations. In combination with a cubic system Hamiltonian, this
leads to a class of quasilinear quantum stochastic systems which retain
algebraic closedness in the evolution of mixed moments of the observables.
Although such a system is nonlinear and its quantum state is no longer
Gaussian, the dynamics of the moments of any order are amenable to exact
analysis, including the computation of their steady-state values. In
particular, a generalized criterion is developed for quadratic stability of the
quasilinear systems. The results of the paper are applicable to the generation
of non-Gaussian quantum states with manageable moments and an optimal design of
linear quantum controllers for quasilinear quantum plants.
|
1205.3272
|
Capacity and Spectral Efficiency of Interference Avoiding Cognitive
Radio with Imperfect Detection
|
cs.IT math.IT
|
In this paper, we consider a model in which the unlicensed or the Secondary
User (SU) equipped with a Cognitive Radio (CR) (together referred to as CR)
interweaves its transmission with that of the licensed or the Primary User
(PU). In this model, when the CR detects the PU to be (i) busy, it does not
transmit, and (ii) idle, it transmits. Two situations based on the CR's
detection of the PU are considered, where the CR detects the PU (i) perfectly,
referred to as the "ideal case", and (ii) imperfectly, referred to as the "non-ideal
case". For both cases we derive the rate region, the sum capacity of
PU and CR and spectral efficiency factor - the ratio of sum capacity of PU and
CR to the capacity of PU without CR. We consider the Rayleigh fading channel to
provide insight to our results. For the ideal case we study the effect of PU
occupancy on the spectral efficiency factor. For the non-ideal case, in addition to
the effect of occupancy, we study the effect of false alarm and missed
detection on the rate region and spectral efficiency factor. We characterize
the set of values of false alarm and missed detection probabilities for which
the system benefits, in the form of admissible regions. We show that false
alarm has a more profound effect on the spectral efficiency factor than missed
detection. We also show that when PU occupancy is small, the effects of both
false alarm and missed detection decrease. Finally, for the standard detection
techniques viz. energy detection, matched filter and magnitude squared
coherence, we show that the matched filter performs best, followed by
magnitude squared coherence followed by energy detection with respect to
spectral efficiency factor.
|
1205.3277
|
Cross-Layer Optimization of Two-Way Relaying for Statistical QoS
Guarantees
|
cs.IT math.IT
|
Two-way relaying promises considerable improvements on spectral efficiency in
wireless relay networks. While most existing works focus on physical layer
approaches to exploit its capacity gain, the benefits of two-way relaying on
upper layers are much less investigated. In this paper, we study the
cross-layer design and optimization for delay quality-of-service (QoS)
provisioning in two-way relay systems. Our goal is to find the optimal
transmission policy to maximize the weighted sum throughput of the two users in
the physical layer while guaranteeing the individual statistical delay-QoS
requirement for each user in the datalink layer. This statistical delay-QoS
requirement is characterized by the QoS exponent. By integrating the concept of
effective capacity, the cross-layer optimization problem is equivalent to a
weighted sum effective capacity maximization problem. We derive the jointly
optimal power and rate adaptation policies for both three-phase and two-phase
two-way relay protocols. Numerical results show that the proposed adaptive
transmission policies can efficiently provide QoS guarantees and improve the
performance. In addition, the throughput gain obtained by the considered
three-phase and two-phase protocols over direct transmission is significant
when the delay-QoS requirements are loose, but the gain diminishes at tight
delay requirements. It is also found that, in the two-phase protocol, the relay
node should be placed closer to the source with more stringent delay
requirement.
|
1205.3286
|
On Linear Coherent Estimation with Spatial Collaboration
|
cs.IT math.IT
|
We consider a power-constrained sensor network, consisting of multiple sensor
nodes and a fusion center (FC), that is deployed for the purpose of estimating
a common random parameter of interest. In contrast to the distributed
framework, the sensor nodes are allowed to update their individual observations
by (linearly) combining observations from neighboring nodes. The updated
observations are communicated to the FC using an analog amplify-and-forward
modulation scheme and through a coherent multiple access channel. The optimal
collaborative strategy is obtained by minimizing the cumulative transmission
power subject to a maximum distortion constraint. For the distributed scenario
(i.e., with no observation sharing), the solution reduces to the
power-allocation problem considered by [Xiao, TSP08]. Collaboration among
neighbors significantly improves power efficiency of the network in the low
local-SNR regime, as demonstrated through an insightful example and numerical
simulations.
|
1205.3310
|
Planar Difference Functions
|
math.CO cs.IT math.IT quant-ph
|
In 1980 Alltop produced a family of cubic phase sequences that nearly meet
the Welch bound for maximum non-peak correlation magnitude. This family of
sequences was shown by Wootters and Fields to be useful for quantum state
tomography. Alltop's construction used a function that is not planar, but whose
difference function is planar. In this paper we show that Alltop type functions
cannot exist in fields of characteristic 3 and that for a known class of planar
functions, $x^3$ is the only Alltop type function.
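The planar / Alltop-type distinction can be checked numerically over a small prime field. The sketch below uses GF(7) for illustration (any prime p >= 5 behaves the same for the cubic): a function g is planar when x -> g(x+b) - g(x) is a bijection for every nonzero b.

```python
# Check the planar / Alltop-type distinction over a small prime field.
p = 7  # GF(7); any prime p >= 5 works for the cubic example

def is_planar(g, p):
    """g is planar over GF(p) iff x -> g(x + b) - g(x) is a bijection
    for every nonzero shift b."""
    return all(
        len({(g((x + b) % p) - g(x)) % p for x in range(p)}) == p
        for b in range(1, p)
    )

f = lambda x: pow(x, 3, p)   # the Alltop cubic x^3
print(is_planar(f, p))       # False: x^3 itself is not planar
for a in range(1, p):
    # difference function D_a(x) = f(x + a) - f(x)
    D = lambda x, a=a: (f((x + a) % p) - f(x)) % p
    assert is_planar(D, p)   # every difference function of f is planar
print("all difference functions of x^3 are planar over GF(7)")
```

Here D_a(x) = 3a x^2 + 3a^2 x + a^3, whose own differences are linear with nonzero slope 6ab, which is why every D_a passes the planarity check.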
|
1205.3316
|
Arabic Language Learning Assisted by Computer, based on Automatic Speech
Recognition
|
cs.CL
|
This work consists of creating a Computer Assisted Language Learning (CALL)
system based on an Automatic Speech Recognition (ASR) system for the Arabic
language, using the CMU Sphinx3 tool [1] and the HMM approach. For this work,
we constructed a corpus of six hours of speech recordings from nine speakers.
We find in the robustness to noise grounds for the choice of the HMM approach
[2]. The results achieved are encouraging given that our corpus was made with
only nine speakers, and they open the door for further improvements.
|
1205.3317
|
Spatially-Coupled Random Access on Graphs
|
cs.IT math.IT
|
In this paper we investigate the effect of spatial coupling applied to the
recently-proposed coded slotted ALOHA (CSA) random access protocol. Thanks to
the bridge between the graphical model describing the iterative interference
cancelation process of CSA over the random access frame and the erasure
recovery process of low-density parity-check (LDPC) codes over the binary
erasure channel (BEC), we propose an access protocol which is inspired by the
convolutional LDPC code construction. The proposed protocol exploits the
terminations of its graphical model to achieve the spatial coupling effect,
attaining performance close to the theoretical limits of CSA. As for the
convolutional LDPC code case, large iterative decoding thresholds are obtained
by simply increasing the density of the graph. We show that the threshold
saturation effect takes place by defining a suitable counterpart of the
maximum-a-posteriori decoding threshold of spatially-coupled LDPC code
ensembles. In the asymptotic setting, the proposed scheme can sustain a
traffic close to 1 packet/slot.
|
1205.3321
|
Tree Projections and Structural Decomposition Methods: The Power of
Local Consistency and Larger Islands of Tractability
|
cs.DB
|
Evaluating conjunctive queries and solving constraint satisfaction problems
are fundamental problems in database theory and artificial intelligence,
respectively. These problems are NP-hard, so that several research efforts have
been made in the literature for identifying tractable classes, known as islands
of tractability, as well as for devising clever heuristics for solving
efficiently real-world instances. Many heuristic approaches are based on
enforcing on the given instance a property called local consistency, where (in
database terms) each tuple in every query atom matches at least one tuple in
every other query atom. Interestingly, it turns out that, for many well-known
classes of queries, such as for the acyclic queries, enforcing local
consistency is even sufficient to solve the given instance correctly. However,
the precise power of such a procedure was unclear, except for some very restricted
cases. The paper provides full answers to the long-standing questions about the
precise power of algorithms based on enforcing local consistency. The classes
of instances where enforcing local consistency turns out to be a correct
query-answering procedure are however not efficiently recognizable. In fact,
the paper finally focuses on certain subclasses defined in terms of the novel
notion of greedy tree projections. These latter classes are shown to be
efficiently recognizable and strictly larger than most islands of tractability
known so far, both in the general case of tree projections and for specific
structural decomposition methods.
|
1205.3336
|
Distribution of the search of evolutionary product unit neural networks
for classification
|
cs.NE cs.AI cs.CV
|
This paper deals with the distributed processing in the search for an optimum
classification model using evolutionary product unit neural networks. For this
distributed search we used a cluster of computers. Our objective is to obtain a
more efficient design than those net architectures which do not use a
distributed process and which thus result in simpler designs. In order to get
the best classification models we use evolutionary algorithms to train and
design neural networks, which require very time-consuming computation. The
reasons behind the need for this distribution are various. Training this type
of network is complicated because of the difficulty of determining its
architecture, due to the complex error surface. Moreover, the use of
evolutionary algorithms involves running a great number of tests with different
seeds and parameters, thus resulting in a high computational cost.
|
1205.3352
|
Revisiting the effect of external fields in Axelrod's model of social
dynamics
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
The study of the effects of spatially uniform fields on the steady-state
properties of Axelrod's model has yielded plenty of controversial results. Here
we re-examine the impact of this type of field for a selection of parameters
such that the field-free steady state of the model is heterogeneous or
multicultural. Analyses of both one and two-dimensional versions of Axelrod's
model indicate that, contrary to previous claims in the literature, the steady
state remains heterogeneous regardless of the value of the field strength.
Turning on the field leads to a discontinuous decrease in the number of
cultural domains, which we argue is due to the instability of zero-field
heterogeneous absorbing configurations. We find, however, that spatially
nonuniform fields that implement a consensus rule among the neighborhood of the
agents enforce homogenization. Although the overall effects of the fields are
essentially the same irrespective of the dimensionality of the model, we argue
that the dimensionality has a significant impact on the stability of the
field-free homogeneous steady state.
|
1205.3378
|
On Real-Time and Causal Secure Source Coding
|
cs.IT math.IT
|
We investigate two source coding problems with secrecy constraints. In the
first problem we consider real-time fully secure transmission of a memoryless
source. We show that although classical variable-rate coding is not an option
since the lengths of the codewords leak information on the source, the key rate
can be as low as the average Huffman codeword length of the source. In the
second problem we consider causal source coding with a fidelity criterion and
side information at the decoder and the eavesdropper. We show that when the
eavesdropper has degraded side information, it is optimal to first use a causal
rate distortion code and then encrypt its output with a key.
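Since the claimed key rate matches the average Huffman codeword length, that quantity is the operative one. A minimal sketch of computing it (the standard greedy merge, not the paper's coding scheme) uses the fact that the average length equals the sum of merged probabilities over all internal nodes:

```python
import heapq

def huffman_avg_length(probs):
    """Average codeword length of a binary Huffman code for the given
    source distribution, via the standard greedy min-heap merge. The
    average length equals the sum of merged-node probabilities."""
    heap = [(p, i) for i, p in enumerate(probs)]  # (probability, tiebreak id)
    heapq.heapify(heap)
    counter = len(probs)
    total = 0.0
    while len(heap) > 1:
        p1, _ = heapq.heappop(heap)
        p2, _ = heapq.heappop(heap)
        total += p1 + p2  # each merge adds one bit to every leaf beneath it
        heapq.heappush(heap, (p1 + p2, counter))
        counter += 1
    return total

print(huffman_avg_length([0.5, 0.25, 0.125, 0.125]))  # dyadic source -> 1.75
```

For a dyadic source like the one above, this average length equals the source entropy, so the key rate in question would coincide with the entropy in that special case.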
|
1205.3380
|
Unfair items detection in educational measurement
|
cs.AI physics.ed-ph
|
Measurement professionals cannot come to an agreement on the definition of
the term 'item fairness'. In this paper a continuous measure of item unfairness
is proposed. The more the unfairness measure deviates from zero, the less fair
the item is. If the measure exceeds the cutoff value, the item is identified as
definitely unfair. The new approach can identify unfair items that would not be
identified with conventional procedures. The results are in accord with
experts' judgments on the item qualities. Since no assumptions about score
distributions and/or correlations are made, the method is applicable to any
educational test. Its performance is illustrated through application to scores
of a real test.
|
1205.3407
|
Limits on classical communication from quantum entropy power
inequalities
|
quant-ph cs.IT math-ph math.IT math.MP
|
Almost all modern communication systems rely on electromagnetic fields as a
means of information transmission, and finding the capacities of these systems
is a problem of significant practical importance. The Additive White Gaussian
Noise (AWGN) channel is often a good approximate description of such systems,
and its capacity is given by a simple formula. However, when quantum effects
are important, estimating the capacity becomes difficult: a lower bound is
known, but a similar upper bound is missing. We present strong new upper bounds
for the classical capacity of quantum additive noise channels, including
quantum analogues of the AWGN channel. Our main technical tool is a quantum
entropy power inequality that controls the entropy production as two quantum
signals combine at a beam splitter. Its proof involves a new connection between
entropy production rates and a quantum Fisher information, and uses a quantum
diffusion that smooths arbitrary states towards Gaussians.
|
1205.3409
|
The entropy power inequality for quantum systems
|
quant-ph cs.IT math-ph math.IT math.MP
|
When two independent analog signals, X and Y, are added together giving Z=X+Y,
the entropy of Z, H(Z), is not a simple function of the entropies H(X) and
H(Y), but rather depends on the details of X and Y's distributions.
Nevertheless, the entropy power inequality (EPI), which states that exp [2H(Z)]
\geq exp[2H(X)] + exp[2H(Y)], gives a very tight restriction on the entropy of
Z. This inequality has found many applications in information theory and
statistics. The quantum analogue of adding two random variables is the
combination of two independent bosonic modes at a beam splitter. The purpose of
this work is to give a detailed outline of the proof of two separate
generalizations of the entropy power inequality to the quantum regime. Our
proofs are similar in spirit to standard classical proofs of the EPI, but some
new quantities and ideas are needed in the quantum setting. Specifically, we
find a new quantum de Bruijn identity relating entropy production under
diffusion to a divergence-based quantum Fisher information. Furthermore, this
Fisher information exhibits certain convexity properties in the context of beam
splitters.
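As a quick sanity check on the classical statement, the EPI holds with equality for independent Gaussians, because exp[2H(X)] is proportional to the variance of a Gaussian and variances add under convolution:

```python
import math

def gaussian_entropy(var):
    """Differential entropy (nats) of a 1-D Gaussian with given variance."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

# For independent Gaussians X, Y with variances vx, vy, Z = X + Y is
# Gaussian with variance vx + vy, and the EPI holds with equality.
vx, vy = 2.0, 3.0
lhs = math.exp(2 * gaussian_entropy(vx + vy))                      # exp[2H(Z)]
rhs = math.exp(2 * gaussian_entropy(vx)) + math.exp(2 * gaussian_entropy(vy))
print(abs(lhs - rhs) < 1e-9)  # True: Gaussians are the extremal case
```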
|
1205.3423
|
f-Divergence for convex bodies
|
math.FA cs.IT math.IT
|
We introduce f-divergence, a concept from information theory and statistics,
for convex bodies in R^n. We prove that f-divergences are SL(n) invariant
valuations and we establish an affine isoperimetric inequality for these
quantities. We show that generalized affine surface area and in particular the
L_p affine surface area from the L_p Brunn-Minkowski theory are special cases
of f-divergences.
|
1205.3426
|
Bounded epsilon-Reach Set Computation of a Class of Deterministic and
Transversal Linear Hybrid Automata
|
cs.SY
|
We define a special class of hybrid automata, called Deterministic and
Transversal Linear Hybrid Automata (DTLHA), whose continuous dynamics in each
location are linear time-invariant (LTI) with a constant input, and for which
every discrete transition up to a given bounded time is deterministic and,
importantly, transversal. For such a DTLHA starting from an initial state, we
show that it is possible to compute an approximation of the reach set of a
DTLHA over a finite time interval that is arbitrarily close to the exact reach
set, called a bounded epsilon-reach set, through sampling and polyhedral
over-approximation of sampled states. We propose an algorithm and an attendant
architecture for the overall bounded epsilon-reach set computation process.
|
1205.3441
|
Genetic Programming for Multibiometrics
|
cs.NE cs.CR cs.LG
|
Biometric systems suffer from some drawbacks: a biometric system can provide
in general good performances except with some individuals as its performance
depends highly on the quality of the capture. One solution to solve some of
these problems is to use multibiometrics where different biometric systems are
combined together (multiple captures of the same biometric modality, multiple
feature extraction algorithms, multiple biometric modalities...). In this
paper, we are interested in the application of score-level fusion functions (i.e., we
use a multibiometric authentication scheme which accepts or denies the claimant
access to an application). In the state of the art, the weighted sum of the scores
provided by different biometric systems (a linear classifier) and the use of an SVM
(a non-linear classifier) give some of the best
performances. We present a new method based on the use of genetic programming
giving similar or better performances (depending on the complexity of the
database). We derive a score fusion function by assembling some classical
primitives functions (+, *, -, ...). We have validated the proposed method on
three significant biometric benchmark datasets from the state of the art.
|
1205.3474
|
Degrees-of-Freedom Region of the MISO Broadcast Channel with General
Mixed-CSIT
|
cs.IT math.IT
|
In the setting of the two-user broadcast channel, recent work by Maddah-Ali
and Tse has shown that knowledge of prior channel state information at the
transmitter (CSIT) can be useful, even in the absence of any knowledge of
current CSIT. Very recent work by Kobayashi et al., Yang et al., and Gou and
Jafar, extended this to the case where, instead of no current CSIT knowledge,
the transmitter has partial knowledge, and where under a symmetry assumption,
the quality of this knowledge is identical for the different users' channels.
Motivated by the fact that in multiuser settings, the quality of CSIT
feedback may vary across different links, we here generalize the above results
to the natural setting where the current CSIT quality varies for different
users' channels. For this setting we derive the optimal degrees-of-freedom
(DoF) region, and provide novel multi-phase broadcast schemes that achieve this
optimal region. Finally this generalization incorporates and generalizes the
corresponding result in Maleki et al. which considered the broadcast channel
with one user having perfect CSIT and the other only having prior CSIT.
|
1205.3504
|
A Note on Extending Taylor's Power Law for Characterizing Human
Microbial Communities: Inspiration from Comparative Studies on the
Distribution Patterns of Insects and Galaxies, and as a Case Study for
Medical Ecology
|
cs.CE q-bio.QM
|
Many natural patterns, such as the distributions of blood particles in a
blood sample, proteins on cell surfaces, biological populations in their
habitat, galaxies in the universe, the sequence of human genes, and the fitness
in evolutionary computing, have been found to follow a power law. Taylor's power
law (Taylor 1961: Nature, 189:732-) is well recognized as one of the
fundamental models in population ecology. A fundamental property of biological
populations, which Taylor's power law reveals, is the near universal
heterogeneity of population abundance distribution in habitat. Obviously, the
heterogeneity also exists at the community level, where not only the
distributions of population abundances but also the proportions of the species
composition in the community are often heterogeneous. Nevertheless, existing
community diversity indices such as the Shannon index and the Simpson index can
only measure "local" or "static" diversity, in the sense that they are computed
for each habitat at a specific time point, and the indices alone do not reflect
diversity changes. In this note, I propose to extend the application scope of
Taylor's power law to the studies of human microbial communities, specifically,
the community heterogeneity at both population and community levels. I further
suggest that population dispersion models such as Taylor (1980: Nature, 286,
53-), which are known to generate population distribution patterns consistent
with the power law, should also be very useful for analyzing the distribution
patterns of human microbes within the human body. Overall, I hope that this
power-law approach to human microbial communities offers an example of how
ecological theories can play an important role in the emerging field of medical
ecology, which aims to study the ecology of the human microbiome and its
implications for human diseases and health, as well as for personalized medicine.
|
1205.3506
|
Efficient Expression Templates for Operator Overloading-based Automatic
Differentiation
|
cs.MS cs.CE
|
Expression templates are a well-known set of techniques for improving the
efficiency of operator overloading-based forward mode automatic differentiation
schemes in the C++ programming language by translating the differentiation from
individual operators to whole expressions. However, standard expression template
approaches result in a large amount of duplicate computation, particularly for
large expression trees, degrading their performance. In this paper we describe
several techniques for improving the efficiency of expression templates and
their implementation in the automatic differentiation package Sacado. We
demonstrate their improved efficiency through test functions as well as their
application to differentiation of a large-scale fluid dynamics simulation code.
|
1205.3549
|
Normalized Maximum Likelihood Coding for Exponential Family with Its
Applications to Optimal Clustering
|
cs.LG
|
We are concerned with the issue of how to calculate the normalized maximum
likelihood (NML) code-length. The normalization term of the NML code-length may
diverge when the data domain is continuous and unbounded, and a straightforward
computation of it is highly expensive when the data domain is finite. Previous
works have investigated how to calculate the NML code-length for specific types
of distributions. We first propose a general method for computing the NML
code-length for the exponential family. We then focus specifically on the
Gaussian mixture model (GMM) and propose a new efficient method for computing
its NML code-length, developed by generalizing Rissanen's re-normalizing
technique. We apply this method to the clustering problem, in which the
clustering structure is modeled by a GMM and the main task is to estimate the
optimal number of clusters on the basis of the NML code-length. Using
artificial data sets, we demonstrate the superiority of NML-based clustering
over other criteria such as AIC and BIC in terms of the data size required to
achieve a high accuracy rate.
|
1205.3566
|
Risk-sensitive Dissipativity of Linear Quantum Stochastic Systems under
Lur'e Type Perturbations of Hamiltonians
|
quant-ph cs.SY math.DS math.OC math.PR
|
This paper is concerned with a stochastic dissipativity theory using
quadratic-exponential storage functions for open quantum systems with
canonically commuting dynamic variables governed by quantum stochastic
differential equations. The system is linearly coupled to external boson fields
and has a quadratic Hamiltonian which is perturbed by nonquadratic functions of
linear combinations of system variables. Such perturbations are similar to
those in the classical Lur'e systems and make the quantum dynamics nonlinear.
We study their effect on the quantum expectation of the exponential of a
positive definite quadratic form of the system variables. This allows
conditions to be established for the risk-sensitive stochastic storage function
of the quantum system to remain bounded, thus securing boundedness for the
moments of system variables of arbitrary order. These results employ a
noncommutative analogue of the Doleans-Dade exponential and a multivariate
partial differential version of the Gronwall-Bellman lemma.
|
1205.3569
|
The Simulation and Mapping of Building Performance Indicators based on
European Weather Stations
|
physics.comp-ph cs.CE
|
Due to the climate change debate, much research on and many maps of external
climate parameters are available. However, maps of indoor climate performance
parameters are still lacking. This paper presents a methodology for obtaining
maps of performances of similar buildings that are virtually spread over whole
Europe. The produced maps are useful for analyzing regional climate influence
on building performance indicators such as energy use and indoor climate. This
is shown using the Bestest building as a reference benchmark. An important
application of the mapping tool is the visualization of potential building
measures over the EU. Also the performances of single building components can
be simulated and mapped. It is concluded that the presented method is efficient
as it takes less than 15 minutes to simulate and produce the maps on a
2.6GHz/4GB computer. Moreover, the approach is applicable for any type of
building.
|
1205.3594
|
A Mean-field Approach for an Intercarrier Interference Canceller for
OFDM
|
cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT
|
The similarity of the mathematical description of random-field spin systems
to orthogonal frequency-division multiplexing (OFDM) scheme for wireless
communication is exploited in an intercarrier-interference (ICI) canceller used
in the demodulation of OFDM. The translational symmetry in the Fourier domain
generically concentrates the major contribution of ICI from each subcarrier in
the subcarrier's neighborhood. This observation, in conjunction with a
mean-field approach, leads to the development of an ICI canceller whose
computational cost scales linearly with the number of subcarriers. It is
also shown that the dynamics of the mean-field canceller are well captured by a
discrete map of a single macroscopic variable, without taking the spatial and
time correlations of estimated variables into account.
|
1205.3630
|
Alignment and integration of complex networks by hypergraph-based
spectral clustering
|
physics.soc-ph cs.SI q-bio.MN q-bio.QM
|
Complex networks possess a rich, multi-scale structure reflecting the
dynamical and functional organization of the systems they model. Often there is
a need to analyze multiple networks simultaneously, to model a system by more
than one type of interaction or to go beyond simple pairwise interactions, but
currently there is a lack of theoretical and computational methods to address
these problems. Here we introduce a framework for clustering and community
detection in such systems using hypergraph representations. Our main result is
a generalization of the Perron-Frobenius theorem from which we derive spectral
clustering algorithms for directed and undirected hypergraphs. We illustrate
our approach with applications for local and global alignment of
protein-protein interaction networks between multiple species, for tripartite
community detection in folksonomies, and for detecting clusters of overlapping
regulatory pathways in directed networks.
|
1205.3643
|
Capacitated Team Formation Problem on Social Networks
|
cs.SI physics.soc-ph
|
In a team formation problem, one is required to find a group of users that
can match the requirements of a collaborative task. Examples of such
collaborative tasks abound, ranging from software product development to
various participatory sensing tasks in knowledge creation. Due to the nature of
the task, team members are often required to work on a co-operative basis.
Previous studies have indicated that co-operation becomes effective in presence
of social connections. Therefore, effective team selection requires the team
members to be socially close as well as a division of the task among team
members so that no user is overloaded by the assignment. In this work, we
investigate how such teams can be formed on a social network.
Since our team formation problems are proven to be NP-hard, we design
efficient approximate algorithms for finding near optimum teams with provable
guarantees. As traditional data sets from online social networks (e.g.,
Twitter, Facebook) typically do not contain instances of large-scale
collaboration, we have crawled millions of software repositories spanning a
period of four years and hundreds of thousands of developers from GitHub, a
popular open-source social coding network. We perform large-scale experiments
on this data set to evaluate the accuracy and efficiency of our algorithms.
Experimental results suggest that our algorithms achieve significant
improvement in finding effective teams, as compared to naive strategies and
scale well with the size of the data. Finally, we provide a validation of our
techniques by comparing with existing software teams in GitHub.
|
1205.3647
|
Error-correcting pairs for a public-key cryptosystem
|
cs.IT math.IT
|
Code-based cryptography is an interesting alternative to classic
number-theoretic PKC, since it is conjectured to be secure against quantum
computer attacks. Many families of codes have been proposed for these
cryptosystems; one of the main requirements is having a high-performance
t-bounded decoding algorithm, which is achieved whenever the code has a
t-error-correcting pair. In this article the class of codes with a t-ECP is
proposed for the McEliece cryptosystem. The hardness of retrieving the t-ECP
for a given code is considered. As a first step, distinguishers for several
subclasses are given.
|
1205.3663
|
The Good, the Bad, and the Odd: Cycles in Answer-Set Programs
|
cs.AI cs.LO
|
Backdoors of answer-set programs are sets of atoms that represent clever
reasoning shortcuts through the search space. Assignments to backdoor atoms
reduce the given program to several programs that belong to a tractable target
class. Previous research has considered target classes based on notions of
acyclicity where various types of cycles (good and bad cycles) are excluded
from graph representations of programs. We generalize the target classes by
taking the parity of the number of negative edges on bad cycles into account
and consider backdoors for such classes. We establish new hardness results and
non-uniform polynomial-time tractability relative to directed or undirected
cycles.
|
1205.3668
|
Synthesis and Adaptation of Effective Motor Synergies for the Solution
of Reaching Tasks
|
cs.RO cs.SY nlin.AO physics.comp-ph
|
Taking inspiration from the hypothesis of muscle synergies, we propose a
method to generate open loop controllers for an agent solving point-to-point
reaching tasks. The controller output is defined as a linear combination of a
small set of predefined actuations, termed synergies. The method can be
interpreted from a developmental perspective, since it allows the agent to
autonomously synthesize and adapt an effective set of synergies to new
behavioral needs. This scheme greatly reduces the dimensionality of the control
problem, while keeping a good performance level. The framework is evaluated in
a planar kinematic chain, and the quality of the solutions is quantified in
several scenarios.
|
1205.3676
|
Consensus of Multi-Agent Networks in the Presence of Adversaries Using
Only Local Information
|
cs.DC cs.SY
|
This paper addresses the problem of resilient consensus in the presence of
misbehaving nodes. Although it is typical to assume knowledge of at least some
nonlocal information when studying secure and fault-tolerant consensus
algorithms, this assumption is not suitable for large-scale dynamic networks.
To remedy this, we emphasize the use of local strategies to deal with
resilience to security breaches. We study a consensus protocol that uses only
local information and we consider worst-case security breaches, where the
compromised nodes have full knowledge of the network and the intentions of the
other nodes. We provide necessary and sufficient conditions for the normal
nodes to reach consensus despite the influence of the malicious nodes under
different threat assumptions. These conditions are stated in terms of a novel
graph-theoretic property referred to as network robustness.
|
1205.3720
|
A k-shell decomposition method for weighted networks
|
physics.soc-ph cs.SI
|
We present a generalized method for calculating the k-shell structure of
weighted networks. The method takes into account both the weight and the degree
of a network, in such a way that in the absence of weights we recover the shell
structure obtained by the classic k-shell decomposition. In the presence of
weights, we show that the method is able to partition the network in a more
refined way, without the need of any arbitrary threshold on the weight values.
Furthermore, by simulating spreading processes using the
susceptible-infectious-recovered model in four different weighted real-world
networks, we show that the weighted k-shell decomposition method ranks the
nodes more accurately, by placing nodes with higher spreading potential into
shells closer to the core. In addition, we demonstrate our new method on a real
economic network and show that the core calculated using the weighted k-shell
method is more meaningful from an economic perspective when compared with the
unweighted one.
|
1205.3727
|
Accurate 3D maps from depth images and motion sensors via nonlinear
Kalman filtering
|
cs.RO
|
This paper investigates the use of depth images as localisation sensors for
3D map building. The localisation information is derived from the 3D data
thanks to the ICP (Iterative Closest Point) algorithm. The covariance of the
ICP, and thus of the localization error, is analysed, and described by a Fisher
Information Matrix. It is advocated that this error can be much reduced if the data
is fused with measurements from other motion sensors, or even with prior
knowledge on the motion. The data fusion is performed by a recently introduced
specific extended Kalman filter, the so-called Invariant EKF, and is directly
based on the estimated covariance of the ICP. The resulting filter is very
natural, and is proved to possess strong properties. Experiments with a Kinect
sensor and a three-axis gyroscope prove clear improvement in the accuracy of
the localization, and thus in the accuracy of the built 3D map.
|
1205.3752
|
Revisiting Homomorphic Wavelet Estimation and Phase Unwrapping
|
cs.IT math.IT physics.geo-ph
|
Surface-consistent deconvolution is a standard processing technique in land
data to uniformize the wavelet across all sources and receivers. The required
wavelet estimation step is generally done in the homomorphic domain since this
is a convenient way to separate the phase and the amplitude spectrum in a
linear fashion. Unfortunately, all surface-consistent deconvolutions make a
minimum-phase assumption, which is likely to be sub-optimal. Recent developments
in statistical wavelet estimation demonstrate that nonminimum-phase wavelets can
be estimated directly from seismic data, thus offering promise for creating a
nonminimum-phase surface-consistent deconvolution approach. Unfortunately, the
major impediment turns out to be phase unwrapping. In this paper we review
several existing phase unwrapping techniques and discuss their advantages and
inconveniences.
|
1205.3753
|
Blind Deconvolution of Ultrasonic Signals Using High-Order Spectral
Analysis and Wavelets
|
cs.IT math.IT
|
Defect detection by the ultrasonic method is limited by the pulse width.
Resolution can be improved through a deconvolution process with a priori
information of the pulse or by its estimation. In this paper a regularization
of the Wiener filter using wavelet shrinkage is presented for the estimation of
the reflectivity function. The final result shows an improved signal to noise
ratio with better axial resolution.
|
1205.3756
|
Achieving the Capacity of any DMC using only Polar Codes
|
cs.IT math.IT
|
We construct a channel coding scheme to achieve the capacity of any discrete
memoryless channel based solely on the techniques of polar coding. In
particular, we show how source polarization and randomness extraction via
polarization can be employed to "shape" uniformly-distributed i.i.d. random
variables into approximate i.i.d. random variables distributed according to
the capacity-achieving distribution. We then combine this shaper with a variant
of polar channel coding, constructed by the duality with source coding, to
achieve the channel capacity. Our scheme inherits the low complexity encoder
and decoder of polar coding. It differs conceptually from Gallager's method for
achieving capacity, and we discuss the advantages and disadvantages of the two
schemes. An application to the AWGN channel is discussed.
|
1205.3766
|
Efficient Topology-Controlled Sampling of Implicit Shapes
|
cs.CV
|
Sampling from distributions of implicitly defined shapes enables analysis of
various energy functionals used for image segmentation. Recent work describes a
computationally efficient Metropolis-Hastings method for accomplishing this
task. Here, we extend that framework so that samples are accepted at every
iteration of the sampler, achieving an order of magnitude speed up in
convergence. Additionally, we show how to incorporate topological constraints.
|
1205.3767
|
Universal Algorithm for Online Trading Based on the Method of
Calibration
|
cs.LG q-fin.PM
|
We present a universal algorithm for online trading in a stock market which
performs asymptotically at least as well as any stationary trading strategy
that computes the investment at each step using a fixed function of the side
information that belongs to a given RKHS (Reproducing Kernel Hilbert Space).
Using a universal kernel, we extend this result for any continuous stationary
strategy. In this learning process, a trader rationally chooses his gambles
using predictions made by a randomized well-calibrated algorithm. Our strategy
is based on Dawid's notion of calibration with more general checking rules and
on some modification of Kakade and Foster's randomized rounding algorithm for
computing the well-calibrated forecasts. We combine the method of randomized
calibration with Vovk's method of defensive forecasting in RKHS. Unlike the
statistical theory, no stochastic assumptions are made about the stock prices.
Our empirical results on historical markets provide strong evidence that this
type of technical trading can "beat the market" if transaction costs are
ignored.
|
1205.3776
|
The ideal of the trifocal variety
|
math.AG cs.CV
|
Techniques from representation theory, symbolic computational algebra, and
numerical algebraic geometry are used to find the minimal generators of the
ideal of the trifocal variety. An effective test for determining whether a
given tensor is a trifocal tensor is also given.
|
1205.3832
|
Social Climber attachment in forming networks produces phase transition
in a measure of connectivity
|
physics.soc-ph cond-mat.dis-nn cs.SI
|
Formation and fragmentation of networks is typically studied using
percolation theory, but most previous research has been restricted to studying
a phase transition in cluster size, examining the emergence of a giant
component. This approach does not study the effects of evolving network
structure on dynamics that occur at the nodes, such as the synchronization of
oscillators and the spread of information, epidemics, and neuronal excitations.
We introduce and analyze new link-formation rules, called Social Climber (SC)
attachment, that may be combined with arbitrary percolation models to produce a
previously unstudied phase transition using the largest eigenvalue of the
network adjacency matrix as the order parameter. This eigenvalue is significant
in the analyses of many network-coupled dynamical systems in which it measures
the quality of global coupling and is hence a natural measure of connectivity.
We highlight the important self-organized properties of SC attachment and
discuss implications for controlling dynamics on networks.
|
1205.3853
|
Secrecy Is Cheap if the Adversary Must Reconstruct
|
cs.IT cs.CR math.IT
|
A secret key can be used to conceal information from an eavesdropper during
communication, as in Shannon's cipher system. Most theoretical guarantees of
secrecy require the secret key space to grow exponentially with the length of
communication. Here we show that when an eavesdropper attempts to reconstruct
an information sequence, as posed in the literature by Yamamoto, very little
secret key is required to effect unconditionally maximal distortion;
specifically, we only need the secret key space to increase unboundedly,
growing arbitrarily slowly with the blocklength. As a corollary, even with a
secret key of constant size we can still inflict distortion on the adversary
arbitrarily close to the maximum, regardless of the length of the information sequence.
|
1205.3856
|
Social Turing Tests: Crowdsourcing Sybil Detection
|
cs.SI physics.soc-ph
|
As popular tools for spreading spam and malware, Sybils (or fake accounts)
pose a serious threat to online communities such as Online Social Networks
(OSNs). Today, sophisticated attackers are creating realistic Sybils that
effectively befriend legitimate users, rendering most automated Sybil detection
techniques ineffective. In this paper, we explore the feasibility of a
crowdsourced Sybil detection system for OSNs. We conduct a large user study on
the ability of humans to detect today's Sybil accounts, using a large corpus of
ground-truth Sybil accounts from the Facebook and Renren networks. We analyze
detection accuracy by both "experts" and "turkers" under a variety of
conditions, and find that while turkers vary significantly in their
effectiveness, experts consistently produce near-optimal results. We use these
results to drive the design of a multi-tier crowdsourcing Sybil detection
system. Using our user study data, we show that this system is scalable, and
can be highly effective either as a standalone system or as a complementary
technique to current tools.
|
1205.3863
|
Three-Receiver Broadcast Channels with Side Information
|
cs.IT math.IT
|
The three-receiver broadcast channel (BC) is of interest due to its
information-theoretic differences from the two-receiver one. In this paper, we derive
achievable rate regions for two classes of 3-receiver BC with side information
available at the transmitter, Multilevel BC and 3-receiver less noisy BC, by
using superposition coding, Gel'fand-Pinsker binning scheme and Nair-El Gamal
indirect decoding. Our rate region for multilevel BC subsumes the Steinberg
rate region for 2-receiver degraded BC with side information as its special
case. We also find the capacity region of 3-receiver less noisy BC when side
information is available both at the transmitter and at the receivers.
|
1205.3952
|
Automating embedded analysis capabilities and managing software
complexity in multiphysics simulation part II: application to partial
differential equations
|
cs.MS cs.CE
|
A template-based generic programming approach was presented in a previous
paper that separates the development effort of programming a physical model
from that of computing additional quantities, such as derivatives, needed for
embedded analysis algorithms. In this paper, we describe the implementation
details for using the template-based generic programming approach for
simulation and analysis of partial differential equations (PDEs). We detail
several of the hurdles that we have encountered, and some of the software
infrastructure developed to overcome them. We end with a demonstration where we
present shape optimization and uncertainty quantification results for a 3D PDE
application.
|
1205.3964
|
Machine Recognition of Hand Written Characters using Neural Networks
|
cs.AI
|
Even today, in the twenty-first century, handwritten communication holds its
own place: in daily life it is used globally as a means of communicating and of
recording information to be shared with others. The challenges in handwritten
character recognition lie wholly in the variation and distortion of handwritten
characters, since different people may use different styles of handwriting and
different stroke directions to draw the same shape of the characters of the
script they know. This paper examines the nature of handwritten characters,
the conversion of handwritten data into electronic data, and a neural network
approach to making machines capable of recognizing handwritten characters.
|
1205.3966
|
Neural Networks for Handwritten English Alphabet Recognition
|
cs.AI cs.CV
|
This paper demonstrates the use of neural networks for developing a system
that can recognize hand-written English alphabets. In this system, each English
alphabet is represented by binary values that are used as input to a simple
feature extraction system, whose output is fed to our neural network system.
|
1205.3981
|
kLog: A Language for Logical and Relational Learning with Kernels
|
cs.AI cs.LG cs.PL
|
We introduce kLog, a novel approach to statistical relational learning.
Unlike standard approaches, kLog does not represent a probability distribution
directly. It is rather a language to perform kernel-based learning on
expressive logical and relational representations. kLog allows users to specify
learning problems declaratively. It builds on simple but powerful concepts:
learning from interpretations, entity/relationship data modeling, logic
programming, and deductive databases. Access by the kernel to the rich
representation is mediated by a technique we call graphicalization: the
relational representation is first transformed into a graph --- in particular,
a grounded entity/relationship diagram. Subsequently, a choice of graph kernel
defines the feature space. kLog supports mixed numerical and symbolic data, as
well as background knowledge in the form of Prolog or Datalog programs as in
inductive logic programming systems. The kLog framework can be applied to
tackle the same range of tasks that has made statistical relational learning so
popular, including classification, regression, multitask learning, and
collective classification. We also report on empirical comparisons, showing
that kLog can be either more accurate, or much faster at the same level of
accuracy, than Tilde and Alchemy. kLog is GPLv3 licensed and is available at
http://klog.dinfo.unifi.it along with tutorials.
|
1205.3993
|
Diffusion Strategies Outperform Consensus Strategies for Distributed
Estimation over Adaptive Networks
|
cs.IT cs.SI math.IT
|
Adaptive networks consist of a collection of nodes with adaptation and
learning abilities. The nodes interact with each other on a local level and
diffuse information across the network to solve estimation and inference tasks
in a distributed manner. In this work, we compare the mean-square performance
of two main strategies for distributed estimation over networks: consensus
strategies and diffusion strategies. The analysis in the paper confirms that
under constant step-sizes, diffusion strategies allow information to diffuse
more thoroughly through the network and this property has a favorable effect on
the evolution of the network: diffusion networks are shown to converge faster
and reach lower mean-square deviation than consensus networks, and their
mean-square stability is insensitive to the choice of the combination weights.
In contrast, and surprisingly, it is shown that consensus networks can become
unstable even if all the individual nodes are stable and able to solve the
estimation task on their own. When this occurs, cooperation over the network
leads to a catastrophic failure of the estimation task. This phenomenon does
not occur for diffusion networks: we show that stability of the individual
nodes always ensures stability of the diffusion network irrespective of the
combination topology. Simulation results support the theoretical findings.
|
1205.3997
|
Free Energy and the Generalized Optimality Equations for Sequential
Decision Making
|
stat.ML cs.AI cs.GT cs.SY
|
The free energy functional has recently been proposed as a variational
principle for bounded rational decision-making, since it instantiates a natural
trade-off between utility gains and information processing costs that can be
axiomatically derived. Here we apply the free energy principle to general
decision trees that include both adversarial and stochastic environments. We
derive generalized sequential optimality equations that not only include the
Bellman optimality equations as a limit case, but also lead to well-known
decision-rules such as Expectimax, Minimax and Expectiminimax. We show how
these decision-rules can be derived from a single free energy principle that
assigns a resource parameter to each node in the decision tree. These resource
parameters express a concrete computational cost that can be measured as the
amount of samples that are needed from the distribution that belongs to each
node. The free energy principle therefore provides the normative basis for
generalized optimality equations that account for both adversarial and
stochastic environments.
|
1205.3999
|
Optimal Weights Mixed Filter for Removing Mixture of Gaussian and
Impulse Noises
|
cs.CV
|
Based on the characteristics of Gaussian noise, we modify the Rank-Ordered
Absolute Differences (ROAD) statistic into the Rank-Ordered Absolute
Differences for mixtures of Gaussian and impulse noise (ROADG), which is more
effective at detecting impulse noise when it is mixed with Gaussian noise.
Suitably combining the ROADG with the Optimal Weights Filter (OWF), we obtain a
new method for dealing with the mixed noise, called the Optimal Weights Mixed
Filter (OWMF). Simulation results show that the method effectively removes the
mixed noise.
|
1205.4013
|
Multi-scale Dynamics in a Massive Online Social Network
|
cs.SI physics.soc-ph
|
Data confidentiality policies at major social network providers have severely
limited researchers' access to large-scale datasets. The biggest impact has
been on the study of network dynamics, where researchers have studied citation
graphs and content-sharing networks, but few have analyzed detailed dynamics in
the massive social networks that dominate the web today. In this paper, we
present results of analyzing detailed dynamics in the Renren social network,
covering a period of 2 years when the network grew from 1 user to 19 million
users and 199 million edges. Rather than validate a single model of network
dynamics, we analyze dynamics at different granularities (user-, community-,
and network-wide) to determine how much, if at all, users are influenced by
dynamic processes at different scales. We observe independent predictable processes
at each level, and find that while the growth of communities has moderate and
sustained impact on users, significant events such as network merge events have
a strong but short-lived impact that is quickly dominated by the continuous
arrival of new users.
|
1205.4067
|
Optimum Commutative Group Codes
|
cs.IT math.GR math.IT
|
A method for finding an optimum $n$-dimensional commutative group code of a
given order $M$ is presented. The approach explores the structure of lattices
related to these codes and provides a significant reduction in the number of
non-isometric cases to be analyzed. The classical factorization of matrices
into Hermite and Smith normal forms and also basis reduction of lattices are
used to characterize isometric commutative group codes. Several examples of
optimum commutative group codes are also presented.
|
1205.4070
|
A New Ensemble of Rate-Compatible LDPC Codes
|
cs.IT math.IT
|
In this paper, we present three approaches to improving the design of Kite
codes (recently proposed rateless codes), resulting in an ensemble of
rate-compatible LDPC codes with code rates varying "continuously" from 0.1 to
0.9 for additive white Gaussian noise (AWGN) channels. The new ensemble of
rate-compatible LDPC codes can be constructed conveniently with an empirical
formula.
Simulation results show that, when applied to incremental redundancy hybrid
automatic repeat request (IR-HARQ) system, the constructed codes (with higher
order modulation) perform well in a wide range of signal-to-noise-ratios
(SNRs).
|
1205.4080
|
Dynamic Compressive Sensing of Time-Varying Signals via Approximate
Message Passing
|
cs.IT math.IT
|
In this work the dynamic compressive sensing (CS) problem of recovering
sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear
measurements is explored from a Bayesian perspective. While there has been a
handful of previously proposed Bayesian dynamic CS algorithms in the
literature, the ability to perform inference on high-dimensional problems in a
computationally efficient manner remains elusive. In response, we propose a
probabilistic dynamic CS signal model that captures both amplitude and support
correlation structure, and describe an approximate message passing algorithm
that performs soft signal estimation and support detection with a computational
complexity that is linear in all problem dimensions. The algorithm, DCS-AMP,
can perform either causal filtering or non-causal smoothing, and is capable of
learning model parameters adaptively from the data through an
expectation-maximization learning procedure. We provide numerical evidence that
DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety
of operating conditions. We further describe the result of applying DCS-AMP to
two real dynamic CS datasets, as well as a frequency estimation task, to
bolster our claim that DCS-AMP is capable of offering state-of-the-art
performance and speed on real-world high-dimensional problems.
|
1205.4126
|
Some Properties of Large Excursions of a Stationary Gaussian Process
|
cs.IT math.IT math.PR
|
The present work investigates two properties of level crossings of a
stationary Gaussian process $X(t)$ with autocorrelation function $R_X(\tau)$.
We show firstly that if $R_X(\tau)$ admits finite second and fourth derivatives
at the origin, the length of up-excursions above a large negative level
$-\gamma$ is asymptotically exponential as $-\gamma \to -\infty$. Secondly,
assuming that $R_X(\tau)$ admits a finite second derivative at the origin and
some defined properties, we derive the mean number of crossings as well as the
length of successive excursions above two subsequent large levels. The
asymptotic results are shown to be effective even for moderate values of
crossing level. An application of the developed results is proposed to derive
the probability of successive excursions above adjacent levels during a time
window.
|
1205.4133
|
Constrained Overcomplete Analysis Operator Learning for Cosparse Signal
Modelling
|
math.NA cs.LG
|
We consider the problem of learning a low-dimensional signal model from a
collection of training samples. The mainstream approach would be to learn an
overcomplete dictionary to provide good approximations of the training samples
using sparse synthesis coefficients. This famous sparse model has a less
well-known counterpart, in analysis form, called the cosparse analysis model.
this new model, signals are characterised by their parsimony in a transformed
domain using an overcomplete (linear) analysis operator. We propose to learn an
analysis operator from a training corpus using a constrained optimisation
framework based on L1 optimisation. The reason for introducing a constraint in
the optimisation framework is to exclude trivial solutions. Although there is
no definitive answer as to which constraint is the most relevant, we
investigate some conventional constraints in the model adaptation field and use
the uniformly normalised tight frame (UNTF) for this purpose. We then derive a
practical learning algorithm, based on projected subgradients and
Douglas-Rachford splitting technique, and demonstrate its ability to robustly
recover a ground truth analysis operator, when provided with a clean training
set, of sufficient size. We also find an analysis operator for images, using
some noisy cosparse signals, which is indeed a more realistic experiment. As
the derived optimisation problem is not a convex program, we often find a local
minimum using such variational methods. Some local optimality conditions are
derived for two different settings, providing preliminary theoretical support
for the well-posedness of the learning problem under appropriate conditions.
|
1205.4135
|
Guesswork, large deviations and Shannon entropy
|
cs.IT math.IT
|
How hard is it to guess a password? Massey showed that the Shannon entropy
of the distribution from which the password is selected is a lower bound on the
expected number of guesses, but one which is not tight in general. In a series
of subsequent papers under ever less restrictive stochastic assumptions, an
asymptotic relationship as password length grows between scaled moments of the
guesswork and specific R\'{e}nyi entropy was identified.
Here we show that, when appropriately scaled, as the password length grows
the logarithm of the guesswork satisfies a Large Deviation Principle (LDP),
providing direct estimates of the guesswork distribution when passwords are
long. The rate function governing the LDP possesses a specific, restrictive
form that encapsulates underlying structure in the nature of guesswork.
Returning to Massey's original observation, a corollary to the LDP shows that
the expectation of the logarithm of the guesswork is the specific Shannon
entropy of the password selection process.
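Massey's original observation can be checked on a small example (an illustrative computation, not taken from the paper): under the optimal strategy of guessing passwords in decreasing order of probability, the expected guesswork for a uniform source of size n is (n+1)/2, which exceeds the Massey-style lower bound 2^H/4 + 1 (valid when H is at least 2 bits).

```python
import math

def expected_guesswork(probs):
    """Expected number of guesses when passwords are tried in decreasing
    probability order (the optimal guessing strategy)."""
    probs = sorted(probs, reverse=True)
    return sum((i + 1) * p for i, p in enumerate(probs))

def shannon_entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [1 / 8] * 8
eg = expected_guesswork(uniform)   # (n + 1) / 2 = 4.5 for a uniform source
h = shannon_entropy(uniform)       # 3 bits
```

The gap between 4.5 expected guesses and the entropy-derived bound of 3 illustrates why Shannon entropy alone is not a tight guesswork measure.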
|
1205.4138
|
Extraction of Historical Events from Wikipedia
|
cs.IR cs.DL
|
The DBpedia project extracts structured information from Wikipedia and makes
it available on the web. Information is gathered mainly with the help of
infoboxes that contain structured information of the Wikipedia article. A lot
of information is only contained in the article body and is not yet included in
DBpedia. In this paper we focus on the extraction of historical events from
Wikipedia articles that are available for about 2,500 years for different
languages. We have extracted about 121,000 events with more than 325,000 links
to DBpedia entities and provide access to this data via a Web API, SPARQL
endpoint, Linked Data Interface and in a timeline application.
|
1205.4139
|
Fast Correlation Computation Method for Matching Pursuit Algorithms in
Compressed Sensing
|
cs.IT math.IT
|
There have been many matching pursuit algorithms (MPAs) which handle the
sparse signal recovery problem a.k.a. compressed sensing (CS). In the MPAs, the
correlation computation step has a dominant computational complexity. In this
letter, we propose a new fast correlation computation method when we use some
classes of partial unitary matrices as the sensing matrix. Those partial
unitary matrices include partial Fourier matrices and partial Hadamard matrices
which are popular sensing matrices. The proposed correlation computation method
can be applied to almost all MPAs without causing any degradation of their
recovery performance. Moreover, for most practical parameters, the proposed method
can reduce the computational complexity of the MPAs substantially.
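For a partial Fourier sensing matrix, the speed-up can be sketched in a few lines (an illustrative reconstruction of the idea, not the authors' exact formulation): the correlation step A^H r of a matching pursuit iteration collapses to a single inverse FFT of the residual zero-embedded at the sampled rows, replacing an O(MN) matrix product with an O(N log N) transform.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 16
idx = rng.choice(N, size=M, replace=False)    # rows kept from the DFT
F = np.fft.fft(np.eye(N))                     # full N x N DFT matrix
A = F[idx, :] / np.sqrt(N)                    # partial Fourier sensing matrix

r = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # residual

# Direct correlation step: O(M N)
direct = A.conj().T @ r

# Fast version: embed the residual at the sampled rows, one inverse FFT
z = np.zeros(N, dtype=complex)
z[idx] = r
fast = np.sqrt(N) * np.fft.ifft(z)            # O(N log N)
```

The same trick applies to partial Hadamard matrices with a fast Walsh-Hadamard transform in place of the FFT.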
|
1205.4144
|
Information Theoretic cutting of a cake
|
cs.IT cs.GT math.IT
|
Cutting a cake is a metaphor for the problem of dividing a resource (cake)
among several agents. The problem becomes non-trivial when the agents have
different valuations for different parts of the cake (i.e. one agent may like
chocolate while the other may like cream). A fair division of the cake is one
that takes into account the individual valuations of agents and partitions the
cake based on some fairness criterion. Fair division may be accomplished in a
distributed or centralized way. Due to its natural and practical appeal, it has
been a subject of study in economics. To the best of our knowledge, the role of
partial information in fair division has not previously been studied from an
information-theoretic perspective. In this paper we study two important
algorithms in fair division, namely "divide and choose" and "adjusted winner"
for the case of two agents. We quantify the benefit of negotiation in the
divide and choose algorithm, and its use in tricking the adjusted winner
algorithm. Also we analyze the role of implicit information transmission
through actions for the repeated divide and choose problem by finding a
trembling-hand perfect equilibrium for a specific setup. Lastly, we consider a
centralized algorithm for maximizing the overall welfare of the agents under
the Nash collective utility function (CUF). This corresponds to a clustering
problem of the type traditionally studied in data mining and machine learning.
Drawing a conceptual link between this problem and the portfolio selection
problem in stock markets, we prove an upper bound on the increase of the Nash
CUF for a clustering refinement.
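The classical divide-and-choose protocol studied above can be sketched as follows (a minimal discrete version with the cake split into atomic pieces; the paper's negotiation analysis is not reproduced here): the divider cuts so the two halves are as equal as possible under her own valuation, and the chooser takes his preferred half, which guarantees each agent at least half of her own total valuation.

```python
def divide_and_choose(v1, v2):
    """v1, v2: each agent's valuations of n atomic cake pieces.
    Agent 1 cuts so the two halves are as equal as possible under her own
    valuation; agent 2 picks his preferred half.
    Returns (agent 1's value of her half, agent 2's value of his half)."""
    n = len(v1)
    total1 = sum(v1)
    # Agent 1 picks the cut point minimising her own imbalance.
    cut = min(range(n + 1), key=lambda c: abs(sum(v1[:c]) - total1 / 2))
    left2, right2 = sum(v2[:cut]), sum(v2[cut:])
    if left2 >= right2:                    # agent 2 chooses the left half
        return sum(v1[cut:]), left2
    return sum(v1[:cut]), right2
```

With opposed tastes (one agent likes chocolate, the other cream), both agents can end up with more than half of their own valuation, which is what makes room for negotiation.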
|
1205.4148
|
On cyclic codes over the ring $Z_p + uZ_p + ... + u^{k-1}Z_p$
|
cs.IT math.IT math.NT
|
In this paper, we study cyclic codes over the ring $ \Z_p + u\Z_p +...+
u^{k-1}\Z_p $, where $u^k = 0$. We find a set of generators for these codes. We
also study the rank, the dual and the Hamming distance of these codes.
|
1205.4159
|
Theory of Dependent Hierarchical Normalized Random Measures
|
cs.LG math.ST stat.ML stat.TH
|
This paper presents theory for Normalized Random Measures (NRMs), Normalized
Generalized Gammas (NGGs), a particular kind of NRM, and Dependent Hierarchical
NRMs which allow networks of dependent NRMs to be analysed. These have been
used, for instance, for time-dependent topic modelling. In this paper, we first
introduce some mathematical background of completely random measures (CRMs) and
their construction from Poisson processes, and then introduce NRMs and NGGs.
Slice sampling is also introduced for posterior inference. The dependency
operators on Poisson processes and the corresponding CRMs and NRMs are then
introduced, and posterior inference for the NGG is presented. Finally, we give
dependency and composition results when applying these operators to NRMs so
they can be used in a network with hierarchical and dependent relations.
|
1205.4168
|
Approximate Feedback Capacity of the Gaussian Multicast Channel
|
cs.IT math.IT
|
We characterize the capacity region to within log{2(M-1)} bits/s/Hz for the
M-transmitter K-receiver Gaussian multicast channel with feedback where each
receiver wishes to decode every message from the M transmitters. Extending
Cover-Leung's achievable scheme intended for (M,K)=(2,1), we show that this
generalized scheme achieves the cutset-based outer bound within log{2(M-1)}
bits per transmitter for all channel parameters. In contrast to the capacity in
the non-feedback case, the feedback capacity improves upon the naive
intersection of the feedback capacities of K individual multiple access
channels. We find that feedback provides unbounded multiplicative gain at high
signal-to-noise ratios as was shown in the Gaussian interference channel. To
complement the results, we establish the exact feedback capacity of the
Avestimehr-Diggavi-Tse (ADT) deterministic model, from which we make the
observation that feedback can also be beneficial for function computation.
|
1205.4208
|
Frameless ALOHA Protocol for Wireless Networks
|
cs.IT math.IT
|
We propose a novel distributed random access scheme for wireless networks
based on slotted ALOHA, motivated by the analogies between successive
interference cancellation and iterative belief-propagation decoding on erasure
channels. The proposed scheme assumes that each user independently accesses the
wireless link in each slot with a predefined probability, resulting in a
distribution of user transmissions over slots. The operation bears analogy with
rateless codes, both in terms of probability distributions as well as to the
fact that the ALOHA frame becomes fluid and adapted to the current contention
process. Our aim is to optimize the slot access probability in order to achieve
rateless-like distributions, focusing both on the maximization of the
resolution probability of user transmissions and the throughput of the scheme.
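The successive interference cancellation this analogy rests on can be sketched in a few lines (a hypothetical toy instance, not the paper's optimized access distributions): slots containing exactly one undecoded user are decoded, that user's replicas are cancelled from every other slot, and the process repeats, exactly like peeling decoding of an erasure code.

```python
def sic_decode(slots):
    """slots: list of sets of user ids transmitting in each slot.
    Decode singleton slots and cancel the decoded user's replicas from all
    slots, repeating until no singleton remains (SIC / peeling decoder)."""
    slots = [set(s) for s in slots]
    decoded = set()
    progress = True
    while progress:
        progress = False
        for s in slots:
            if len(s) == 1:
                (u,) = s
                decoded.add(u)
                for t in slots:
                    t.discard(u)       # cancel this user's replicas
                progress = True
    return decoded

# User 1 is alone in slot 0; cancelling it unlocks users 0 and 2 in turn.
resolved = sic_decode([{1}, {0, 1}, {0, 2}])
```

When every slot holds a collision of two or more users, the peeling process stalls and no one is resolved, which is why the slot-access probability must be tuned to seed enough singleton slots.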
|
1205.4213
|
Online Structured Prediction via Coactive Learning
|
cs.LG cs.AI cs.IR
|
We propose Coactive Learning as a model of interaction between a learning
system and a human user, where both have the common goal of providing results
of maximum utility to the user. At each step, the system (e.g. search engine)
receives a context (e.g. query) and predicts an object (e.g. ranking). The user
responds by correcting the system if necessary, providing a slightly improved
-- but not necessarily optimal -- object as feedback. We argue that such
feedback can often be inferred from observable user behavior, for example, from
clicks in web-search. Evaluating predictions by their cardinal utility to the
user, we propose efficient learning algorithms that have ${\cal
O}(\frac{1}{\sqrt{T}})$ average regret, even though the learning algorithm
never observes cardinal utility values as in conventional online learning. We
demonstrate the applicability of our model and learning algorithms on a movie
recommendation task, as well as ranking for web-search.
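A minimal sketch of the preference-perceptron flavour of coactive learning (with a synthetic linear utility and idealized feedback; in the model, feedback need only be slightly better than the prediction, not optimal): the system predicts the candidate maximising w . phi, the user returns an improved candidate, and the learner updates w by the feature difference.

```python
import numpy as np

# Hidden user utility; the learner never observes utility values directly.
u_true = np.array([1.0, 2.0, -1.0])

rng = np.random.default_rng(0)
w = np.zeros(3)                            # learner's weight vector
regrets = []
for t in range(300):
    cands = rng.standard_normal((10, 3))   # candidate objects for this context
    pred = cands[np.argmax(cands @ w)]     # system's prediction
    best = cands[np.argmax(cands @ u_true)]
    regrets.append(u_true @ best - u_true @ pred)
    # User feedback: an object strictly better than the prediction
    # (here idealized as the best available candidate).
    if u_true @ best > u_true @ pred:
        w = w + (best - pred)              # preference-perceptron update
```

The update never uses cardinal utility values, only the identity of an improved object, yet the average regret shrinks over rounds.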
|
1205.4217
|
Thompson Sampling: An Asymptotically Optimal Finite Time Analysis
|
stat.ML cs.LG
|
The question of the optimality of Thompson Sampling for solving the
stochastic multi-armed bandit problem had been open since 1933. In this paper
we answer it positively for the case of Bernoulli rewards by providing the
first finite-time analysis that matches the asymptotic rate given in the Lai
and Robbins lower bound for the cumulative regret. The proof is accompanied by
a numerical comparison with other optimal policies, experiments that have been
lacking in the literature until now for the Bernoulli case.
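The Bernoulli-reward policy analysed above can be stated in a few lines (an illustrative sketch, not the paper's analysis): maintain a Beta(a, b) posterior per arm, draw one sample from each posterior, and pull the arm whose sample is largest.

```python
import random

def thompson_bernoulli(true_means, horizon, seed=0):
    """Thompson Sampling for Bernoulli bandits with Beta(1, 1) priors.
    Returns the number of pulls of each arm over the horizon."""
    rng = random.Random(seed)
    K = len(true_means)
    a, b = [1] * K, [1] * K                # Beta posterior parameters
    pulls = [0] * K
    for _ in range(horizon):
        samples = [rng.betavariate(a[i], b[i]) for i in range(K)]
        arm = max(range(K), key=samples.__getitem__)
        reward = 1 if rng.random() < true_means[arm] else 0
        a[arm] += reward                   # conjugate Bernoulli update
        b[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_bernoulli([0.2, 0.8], horizon=2000)   # arm 1 is best
```

Over a long horizon the posterior of the suboptimal arm concentrates and it is pulled only rarely, which is the behaviour the regret analysis quantifies.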
|
1205.4220
|
Diffusion Adaptation over Networks
|
cs.MA cs.LG
|
Adaptive networks are well-suited to perform decentralized information
processing and optimization tasks and to model various types of self-organized
and complex behavior encountered in nature. Adaptive networks consist of a
collection of agents with processing and learning abilities. The agents are
linked together through a connection topology, and they cooperate with each
other through local interactions to solve distributed optimization, estimation,
and inference problems in real-time. The continuous diffusion of information
across the network enables agents to adapt their performance in relation to
streaming data and network conditions; it also results in improved adaptation
and learning performance relative to non-cooperative agents. This article
provides an overview of diffusion strategies for adaptation and learning over
networks. The article is divided into several sections: 1. Motivation; 2.
Mean-Square-Error Estimation; 3. Distributed Optimization via Diffusion
Strategies; 4. Adaptive Diffusion Strategies; 5. Performance of
Steepest-Descent Diffusion Strategies; 6. Performance of Adaptive Diffusion
Strategies; 7. Comparing the Performance of Cooperative Strategies; 8.
Selecting the Combination Weights; 9. Diffusion with Noisy Information
Exchanges; 10. Extensions and Further Considerations; Appendix A: Properties of
Kronecker Products; Appendix B: Graph Laplacian and Network Connectivity;
Appendix C: Stochastic Matrices; Appendix D: Block Maximum Norm; Appendix E:
Comparison with Consensus Strategies; References.
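The adapt-then-combine (ATC) diffusion strategy surveyed above can be sketched as follows (a minimal simulation with an assumed fully connected network, uniform combination weights, and an arbitrary step size): each agent first runs a local LMS step on its own streaming data, then forms a convex combination of its neighbours' intermediate estimates.

```python
import numpy as np

def atc_diffusion_lms(w_true, A, mu=0.05, iters=3000, seed=0):
    """Adapt-then-combine diffusion LMS. A is the N x N combination matrix
    (rows sum to 1); each agent observes d_k = u_k . w_true + noise."""
    rng = np.random.default_rng(seed)
    N, M = A.shape[0], len(w_true)
    W = np.zeros((N, M))                   # one estimate per agent
    for _ in range(iters):
        # Adapt: local LMS step at every agent on its own data.
        U = rng.standard_normal((N, M))
        d = U @ w_true + 0.1 * rng.standard_normal(N)
        psi = W + mu * (d - np.sum(U * W, axis=1))[:, None] * U
        # Combine: convex combination over each agent's neighbourhood.
        W = A @ psi
    return W

w_true = np.array([1.0, -0.5])
A = np.full((3, 3), 1 / 3)                 # fully connected, uniform weights
W = atc_diffusion_lms(w_true, A)
```

The combination step averages out gradient noise, which is the mechanism behind the improved steady-state performance relative to non-cooperative agents.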
|
1205.4233
|
Three Schemes for Wireless Coded Broadcast to Heterogeneous Users
|
cs.NI cs.IT math.IT
|
We study and compare three coded schemes for single-server wireless broadcast
of multiple description coded content to heterogeneous users. The users (sink
nodes) demand different number of descriptions over links with different packet
loss rates. The three coded schemes are based on the LT codes, growth codes,
and randomized chunked codes. The schemes are compared on the basis of the
total number of transmissions required to deliver the demands of all users,
which we refer to as the server (source) delivery time. We design the degree
distributions of LT codes by solving suitably defined linear optimization
problems, and numerically characterize the achievable delivery time for
different coding schemes. We find that including a systematic phase (uncoded
transmission) is significantly beneficial for scenarios with low demands, and
that coding is necessary for efficiently delivering high demands. Different
demand and error rate scenarios may require very different coding schemes.
Growth codes and chunked codes do not perform as well as optimized LT codes in
the heterogeneous communication scenario.
|
1205.4234
|
Visualization of features of a series of measurements with
one-dimensional cellular structure
|
cs.LG
|
This paper describes a method for visualizing periodic constituents and
instability areas in series of measurements, based on a smoothing algorithm
and the concept of one-dimensional cellular automata. The method can be
applied to the analysis of time series related to the volumes of thematic
publications in web space.
|
1205.4265
|
Quantifying synergistic mutual information
|
cs.IT math.IT q-bio.QM
|
Quantifying cooperation or synergy among random variables in predicting a
single target random variable is an important problem in many complex systems.
We review three prior information-theoretic measures of synergy and introduce a
novel synergy measure defined as the difference between the whole and the union
of its parts. We apply all four measures against a suite of binary circuits to
demonstrate that our measure alone quantifies the intuitive concept of synergy
across all examples. We show that, under our measure of synergy, independent
predictors can have positive redundant information.
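One of the reviewed baselines, the whole-minus-sum measure (not the paper's novel whole-minus-union measure), can be computed directly on the canonical synergistic circuit, a XOR gate with uniform inputs: the joint predictors carry one full bit about the output while each predictor alone carries none.

```python
import math
from collections import Counter

def mutual_info(pairs):
    """I(X; Y) in bits from a list of (x, y) samples, using empirical
    counts (each sample equally weighted)."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# XOR circuit: Y = X1 ^ X2, inputs uniform over {0, 1}^2.
samples = [((x1, x2), x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
whole = mutual_info(samples)                               # I(X1,X2; Y)
parts = mutual_info([(x1, y) for (x1, _), y in samples]) \
      + mutual_info([(x2, y) for (_, x2), y in samples])   # each is 0 bits
synergy = whole - parts                                    # WholeMinusSum
```

XOR yields exactly one bit of synergy under this measure, which is why it serves as the reference point when comparing synergy definitions.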
|
1205.4266
|
Chernoff Bounds for Analysis of Rate-Compatible Sphere-Packing with
Numerous Transmissions
|
cs.IT math.IT
|
Recent results by Chen et al. and Polyanskiy et al. explore using feedback to
approach capacity with short blocklengths. This paper explores Chernoff
bounding techniques to extend the rate-compatible sphere-packing (RCSP)
analysis proposed by Chen et al. to scenarios involving numerous
retransmissions and different step sizes in each incremental retransmission.
Williamson et al. employ exact RCSP computations for up to six transmissions.
However, exact RCSP computation with more than six retransmissions becomes
unwieldy because of joint error probabilities involving numerous chi-squared
distributions. This paper explores Chernoff approaches for upper and lower
bounds to provide support for computations involving more than six
transmissions.
We present two versions of upper and lower bounds for the two-transmission
case. One of the versions is extended to the general case of $m$ transmissions
where $m \geq 1$. Computing the general bounds requires minimization of
exponential functions with the auxiliary parameters, but is less complex and
more stable than multiple rounds of numerical integration. These bounds also
provide a good estimate of the expected throughput and expected latency, which
are useful for optimization purposes.
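The bounding technique can be illustrated on a single chi-squared tail (a minimal sketch, not the paper's joint error probabilities over multiple transmissions): the Chernoff bound minimises exp(-s a) times the moment generating function over the auxiliary parameter s.

```python
import math

def chernoff_chi2_tail(a, k):
    """Chernoff upper bound on P(X > a) for X ~ chi-squared with k degrees
    of freedom: min over 0 < s < 1/2 of exp(-s a) * (1 - 2 s)**(-k / 2).
    The minimiser is s* = (1 - k / a) / 2, valid for a > k."""
    s = (1 - k / a) / 2
    return math.exp(-s * a) * (1 - 2 * s) ** (-k / 2)

bound = chernoff_chi2_tail(10.0, 2)
exact = math.exp(-10.0 / 2)       # for k = 2 the tail is exactly exp(-a/2)
```

The bound is loose but depends only on an exponential minimisation, which is what makes it stable where repeated numerical integration of joint chi-squared densities is not.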
|
1205.4295
|
Efficient Methods for Unsupervised Learning of Probabilistic Models
|
cs.LG cs.AI cs.IT cs.NE math.IT physics.data-an
|
In this thesis I develop a variety of techniques to train, evaluate, and
sample from intractable and high dimensional probabilistic models. Abstract
exceeds arXiv space limitations -- see PDF.
|
1205.4298
|
Task-specific Word-Clustering for Part-of-Speech Tagging
|
cs.CL
|
While the use of cluster features became ubiquitous in core NLP tasks, most
cluster features in NLP are based on distributional similarity. We propose a
new type of clustering criteria, specific to the task of part-of-speech
tagging. Instead of distributional similarity, these clusters are based on the
behavior of a baseline tagger when applied to a large corpus. These cluster
features provide similar gains in accuracy to those achieved by
distributional-similarity derived clusters. Using both types of cluster
features together further improves tagging accuracy. We show that the method
is effective for both the in-domain and out-of-domain scenarios for English,
and for French, German and Italian. The effect is larger for out-of-domain
text.
|
1205.4324
|
Universal Properties of Mythological Networks
|
physics.soc-ph cs.CL cs.SI
|
As in statistical physics, the concept of universality plays an important,
albeit qualitative, role in the field of comparative mythology. Here we apply
statistical mechanical tools to analyse the networks underlying three iconic
mythological narratives with a view to identifying common and distinguishing
quantitative features. Of the three narratives, an Anglo-Saxon and a Greek text
are mostly believed by antiquarians to be partly historically based while the
third, an Irish epic, is often considered to be fictional. Here we show that
network analysis is able to discriminate real from imaginary social networks
and place mythological narratives on the spectrum between them. Moreover, the
perceived artificiality of the Irish narrative can be traced back to anomalous
features associated with six characters. Treating these as amalgams of
several entities or proxies renders the plausibility of the Irish text
comparable to the others from a network-theoretic point of view.
|
1205.4332
|
Graph-based Code Design for Quadratic-Gaussian Wyner-Ziv Problem with
Arbitrary Side Information
|
cs.IT math.IT
|
Wyner-Ziv coding (WZC) is a compression technique using decoder side
information, which is unknown at the encoder, to help the reconstruction. In
this paper, we propose and implement a new WZC structure, called residual WZC,
for the quadratic-Gaussian Wyner-Ziv problem where side information can be
arbitrarily distributed. In our two-stage residual WZC, the source is quantized
twice and the input of the second stage is the quantization error (residue) of
the first stage. The codebook of the first stage quantizer must be
simultaneously good for source and channel coding, since it also acts as a
channel code at the decoder. Stemming from the non-ideal quantization at the
encoder, a problem of channel decoding beyond capacity is identified and solved
when we design the practical decoder. Moreover, by using the modified reinforced
belief-propagation quantization algorithm, the low-density parity check code
(LDPC), whose edge degree is optimized for channel coding, also performs well
as a source code. We then implement the residual WZC by an LDPC and a low
density generator matrix code (LDGM). The simulation results show that our
practical construction approaches the Wyner-Ziv bound. Compared with previous
works, our construction can offer more design flexibility in terms of
distribution of side information and practical code rate selection.
|
1205.4336
|
Fuzzy - Rough Feature Selection With {\Pi}- Membership Function For
Mammogram Classification
|
cs.CV
|
Breast cancer is the second leading cause of death among women, and it is
diagnosed with the help of mammograms. Oncologists often fail to identify
microcalcifications at an early stage through visual inspection of mammograms.
In order to improve the performance of breast cancer
screening, most of the researchers have proposed Computer Aided Diagnosis using
image processing. In this study mammograms are preprocessed and features are
extracted, and then abnormalities are identified through classification. If
all the extracted features are used, many cases are misclassified; hence a
feature selection procedure is needed. In this paper, Fuzzy-Rough feature
selection with {\pi} membership function is proposed. The selected features are
used to classify the abnormalities with the help of the Ant-Miner and Weka
tools. The
experimental analysis shows that the proposed method improves the mammograms
classification accuracy.
|
1205.4338
|
Results on the Fundamental Gain of Memory-Assisted Universal Source
Coding
|
cs.IT math.IT
|
Many applications require data processing to be performed on individual
pieces of data which are of finite sizes, e.g., files in cloud storage units
and packets in data networks. However, traditional universal compression
solutions do not perform well on finite-length sequences. Recently, we
proposed a framework called memory-assisted universal compression that holds a
significant promise for reducing the amount of redundant data from the
finite-length sequences. The proposed compression scheme is based on the
observation that it is possible to learn source statistics (by memorizing
previous sequences from the source) at some intermediate entities and then
leverage the memorized context to reduce redundancy of the universal
compression of finite-length sequences. We first present the fundamental gain
of the proposed memory-assisted universal source coding over conventional
universal compression (without memorization) for a single parametric source.
Then, we extend and investigate the benefits of the memory-assisted universal
source coding when the data sequences are generated by a compound source which
is a mixture of parametric sources. We further develop a clustering technique
within the memory-assisted compression framework to better utilize the memory
by classifying the observed data sequences from a mixture of parametric
sources. Finally, we demonstrate through computer simulations that the proposed
joint memorization and clustering technique can achieve up to 6-fold
improvement over the traditional universal compression technique when a mixture
of non-binary Markov sources is considered.
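The memorization idea can be illustrated loosely with an off-the-shelf deflate preset dictionary standing in for the paper's memory-assisted scheme (the function and corpus below are illustrative, not the authors' construction): priming the compressor with previously observed sequences from the same source sharply reduces the compressed size of a short new sequence.

```python
import zlib

def compressed_size(seq, memory=b""):
    """Deflate-compressed size of seq, optionally priming the compressor
    with previously memorized sequences from the same source."""
    c = zlib.compressobj(level=9, zdict=memory)
    return len(c.compress(seq) + c.flush())

memory = b"the quick brown fox jumps over the lazy dog " * 4
seq = b"the quick brown fox jumps over the lazy dog"
plain = compressed_size(seq)               # universal compression, no memory
assisted = compressed_size(seq, memory)    # memory-assisted compression
```

A short sequence carries too little internal redundancy for a universal compressor to exploit, so nearly all of the gain here comes from the memorized context, mirroring the finite-length redundancy argument above.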
|
1205.4343
|
New Analysis and Algorithm for Learning with Drifting Distributions
|
cs.LG stat.ML
|
We present a new analysis of the problem of learning with drifting
distributions in the batch setting using the notion of discrepancy. We prove
learning bounds based on the Rademacher complexity of the hypothesis set and
the discrepancy of distributions both for a drifting PAC scenario and a
tracking scenario. Our bounds are always tighter and in some cases
substantially improve upon previous ones based on the $L_1$ distance. We also
present a generalization of the standard on-line to batch conversion to the
drifting scenario in terms of the discrepancy and arbitrary convex combinations
of hypotheses. We introduce a new algorithm exploiting these learning
guarantees, which we show can be formulated as a simple QP. Finally, we report
the results of preliminary experiments demonstrating the benefits of this
algorithm.
|
1205.4349
|
From Exact Learning to Computing Boolean Functions and Back Again
|
cs.LG cs.DM
|
The goal of the paper is to relate complexity measures associated with the
evaluation of Boolean functions (certificate complexity, decision tree
complexity) and learning dimensions used to characterize exact learning
(teaching dimension, extended teaching dimension). The high level motivation is
to discover non-trivial relations between exact learning of an unknown concept
and testing whether an unknown concept is part of a concept class or not.
Concretely, the goal is to provide lower and upper bounds of complexity
measures for one problem type in terms of the other.
|
1205.4377
|
Multi-Stage Classifier Design
|
cs.CV stat.ML
|
In many classification systems, sensing modalities have different acquisition
costs. It is often {\it unnecessary} to use every modality to classify a
majority of examples. We study a multi-stage system in a prediction time cost
reduction setting, where the full data is available for training, but for a
test example, measurements in a new modality can be acquired at each stage for
an additional cost. We seek decision rules to reduce the average measurement
acquisition cost. We formulate an empirical risk minimization problem (ERM) for
a multi-stage reject classifier, wherein the stage $k$ classifier either
classifies a sample using only the measurements acquired so far or rejects it
to the next stage where more attributes can be acquired for a cost. To solve
the ERM problem, we show that the optimal reject classifier at each stage is a
combination of two binary classifiers, one biased towards positive examples and
the other biased towards negative examples. We use this parameterization to
construct stage-by-stage global surrogate risk, develop an iterative algorithm
in the boosting framework and present convergence and generalization results.
We test our work on synthetic, medical and explosives detection datasets. Our
results demonstrate that substantial cost reduction without a significant
sacrifice in accuracy is achievable.
|
1205.4378
|
Inferring Taxi Status Using GPS Trajectories
|
cs.AI cs.DB
|
In this paper, we infer the statuses of a taxi, consisting of occupied,
non-occupied and parked, in terms of its GPS trajectory. The status information
can enable urban computing for improving a city's transportation systems and
land use planning. In our solution, we first identify and extract a set of
effective features incorporating the knowledge of a single trajectory,
historical trajectories and geographic data like road network. Second, a
parking status detection algorithm is devised to find parking places (from a
given trajectory), dividing a trajectory into segments (i.e.,
sub-trajectories). Third, we propose a two-phase inference model to learn the
status (occupied or non-occupied) of each point from a taxi segment. This model
first uses the identified features to train a local probabilistic classifier
and then carries out a Hidden Semi-Markov Model (HSMM) for globally considering
long term travel patterns. We evaluated our method with a large-scale
real-world trajectory dataset generated by 600 taxis, showing the advantages of
our method over baselines.
|
1205.4384
|
Network Mapping by Replaying Hyperbolic Growth
|
cs.SI cond-mat.stat-mech cs.NI physics.soc-ph
|
Recent years have shown a promising progress in understanding geometric
underpinnings behind the structure, function, and dynamics of many complex
networks in nature and society. However these promises cannot be readily
fulfilled and lead to important practical applications, without a simple,
reliable, and fast network mapping method to infer the latent geometric
coordinates of nodes in a real network. Here we present HyperMap, a simple
method to map a given real network to its hyperbolic space. The method utilizes
a recent geometric theory of complex networks modeled as random geometric
graphs in hyperbolic spaces. The method replays the network's geometric growth,
estimating at each time step the hyperbolic coordinates of new nodes in a
growing network by maximizing the likelihood of the network snapshot in the
model. We apply HyperMap to the AS Internet, and find that: 1) the method
produces meaningful results, identifying soft communities of ASs belonging to
the same geographic region; 2) the method has a remarkable predictive power:
using the resulting map, we can predict missing links in the Internet with high
precision, outperforming popular existing methods; and 3) the resulting map is
highly navigable, meaning that a vast majority of greedy geometric routing
paths are successful and low-stretch. Even though the method is not without
limitations, and is open for improvement, it occupies a unique attractive
position in the space of trade-offs between simplicity, accuracy, and
computational complexity.
|
1205.4387
|
Precision-biased Parsing and High-Quality Parse Selection
|
cs.CL
|
We introduce precision-biased parsing: a parsing task which favors precision
over recall by allowing the parser to abstain from decisions deemed uncertain.
We focus on dependency-parsing and present an ensemble method which is capable
of assigning parents to 84% of the text tokens while being over 96% accurate on
these tokens. We use the precision-biased parsing task to solve the related
high-quality parse-selection task: finding a subset of high-quality (accurate)
trees in a large collection of parsed text. We present a method for choosing
over a third of the input trees while keeping unlabeled dependency parsing
accuracy of 97% on these trees. We also present a method which is not based on
an ensemble but rather on directly predicting the risk associated with
individual parser decisions. In addition to its efficiency, this method
demonstrates that a parsing system can provide reasonable estimates of
confidence in its predictions without relying on ensembles or aggregate corpus
counts.
|
1205.4389
|
Minimum Mean-Squared Error Iterative Successive Parallel Arbitrated
Decision Feedback Detectors for DS-CDMA Systems
|
cs.IT math.IT
|
In this paper we propose minimum mean squared error (MMSE) iterative
successive parallel arbitrated decision feedback (DF) receivers for direct
sequence code division multiple access (DS-CDMA) systems. We describe the MMSE
design criterion for DF multiuser detectors along with successive, parallel and
iterative interference cancellation structures. A novel efficient DF structure
that employs successive cancellation with parallel arbitrated branches and a
near-optimal low complexity user ordering algorithm are presented. The proposed
DF receiver structure and the ordering algorithm are then combined with
iterative cascaded DF stages for mitigating the deleterious effects of error
propagation for convolutionally encoded systems with both Viterbi and turbo
decoding as well as for uncoded schemes. We mathematically study the MMSE
achieved by the analyzed DF structures, including the novel scheme, under both
imperfect and perfect feedback. Simulation results for an uplink
scenario assess the new iterative DF detectors against linear receivers and
evaluate the effects of error propagation of the new cancellation methods
against existing ones.
|
1205.4390
|
Reduced-Rank Adaptive Filtering Based on Joint Iterative Optimization of
Adaptive Filters
|
cs.IT math.IT
|
This letter proposes a novel adaptive reduced-rank filtering scheme based on
joint iterative optimization of adaptive filters. The novel scheme consists of
a joint iterative optimization of a bank of full-rank adaptive filters that
forms the projection matrix and an adaptive reduced-rank filter that operates
at the output of the bank of filters. We describe minimum mean-squared error
(MMSE) expressions for the design of the projection matrix and the reduced-rank
filter and low-complexity normalized least-mean squares (NLMS) adaptive
algorithms for its efficient implementation. Simulations for an interference
suppression application show that the proposed scheme outperforms the
state-of-the-art reduced-rank schemes in convergence and tracking at
significantly lower complexity.
|
1205.4431
|
Large Social Networks can be Targeted for Viral Marketing with Small
Seed Sets
|
cs.SI physics.soc-ph
|
In a "tipping" model, each node in a social network, representing an
individual, adopts a behavior if a certain number of his incoming neighbors
previously held that property. A key problem for viral marketers is to
determine an initial "seed" set in a network such that if given a property then
the entire network adopts the behavior. Here we introduce a method for quickly
finding seed sets that scales to very large networks. Our approach finds a set
of nodes that guarantees spreading to the entire network under the tipping
model. After experimentally evaluating 31 real-world networks, we found that
our approach often finds such sets that are several orders of magnitude smaller
than the population size. Our approach also scales well - on a Friendster
social network consisting of 5.6 million nodes and 28 million edges we found a
seed set in under 3.6 hours. We also find that highly clustered local
neighborhoods and dense network-wide community structure together suppress the
ability of a trend to spread under the tipping model.
|
1205.4450
|
Spectral Graph Cut from a Filtering Point of View
|
cs.CV
|
Spectral graph theory is well known and widely used in computer vision. In
this paper, we analyze image segmentation algorithms that are based on spectral
graph theory, e.g., normalized cut, and show that there is a natural connection
between spectral graph theory-based image segmentation and edge-preserving
filtering. Based on this connection we show that the normalized cut algorithm
is equivalent to repeated iterations of bilateral filtering. Then, using this
equivalence we present and implement a fast normalized cut algorithm for image
segmentation. Experiments show that our implementation can solve the original
optimization problem in the normalized cut algorithm 10 to 100 times faster.
Furthermore, we present a new algorithm called conditioned normalized cut for
image segmentation that can easily incorporate color image patches and
demonstrate how this segmentation problem can be solved with edge preserving
filtering.
|
1205.4454
|
Combined Decode-Forward and Layered Noisy Network Coding Schemes for
Relay Channels
|
cs.IT math.IT
|
We propose two coding schemes combining decode-forward (DF) and noisy network
coding (NNC) with different flavors. The first is a combined DF-NNC scheme for
the one-way relay channel which includes both DF and NNC as special cases by
performing rate splitting, partial block Markov encoding and NNC. The second
combines two different DF strategies and layered NNC for the two-way relay
channel. One DF strategy performs coherent block Markov encoding at the source
at the cost of power splitting at the relay, the other performs independent
source and relay encoding but with full relay power, and layered NNC allows a
different compression rate for each destination. Analysis and simulation show
that both proposed schemes outperform each individual scheme and take full
advantage of both DF and NNC.
|
1205.4463
|
Pilgrims Face Recognition Dataset -- HUFRD
|
cs.CV cs.CY
|
In this work, we introduce a new pilgrims' face recognition dataset, called
the HUFRD dataset. The newly developed dataset presents various pilgrims'
images taken from outside the Holy Masjid El-Harram in Makkah during the
2011-2012 Hajj and Umrah seasons. This dataset will be used to test our
facial recognition and detection algorithms, as well as to assist in the
missing-and-found recognition system \cite{crowdsensing}.
|
1205.4467
|
Beating the news using Social Media: the case study of American Idol
|
physics.soc-ph cs.HC cs.SI
|
We present a contribution to the debate on the predictability of social
events using big data analytics. We focus on the elimination of contestants in
the American Idol TV show as an example of a well-defined electoral phenomenon
that each week draws millions of votes in the USA. We provide evidence that
Twitter activity during the time span defined by the TV show airing and the
voting period following it correlates with the contestants' ranking and allows
the anticipation of the voting outcome. Furthermore, the fraction of Tweets
that contain geolocation information allows us to map the fanbase of each
contestant, both within the US and abroad, showing that strong regional
polarizations occur. Although American Idol voting is just a minimal and
simplified version of complex societal phenomena such as political elections,
this work shows that the volume of information available in online systems
permits the real time gathering of quantitative indicators anticipating the
future unfolding of opinion formation events.
|