| id | title | categories | abstract |
|---|---|---|---|
1210.3741 | Online computation of sparse representations of time varying stimuli
using a biologically motivated neural network | q-bio.NC cs.NE | Natural stimuli are highly redundant, possessing significant spatial and
temporal correlations. While sparse coding has been proposed as an efficient
strategy employed by neural systems to encode sensory stimuli, the underlying
mechanisms are still not well understood. Most previous approaches model the
neural dynamics by the sparse representation dictionary itself and compute the
representation coefficients offline. In reality, faced with the challenge of
constantly changing stimuli, neurons must compute the sparse representations
dynamically in an online fashion. Here, we describe a leaky linearized Bregman
iteration (LLBI) algorithm which computes the time varying sparse
representations using a biologically motivated network of leaky rectifying
neurons. Compared to previous attempts at dynamic sparse coding, LLBI exploits
the temporal correlation of stimuli and demonstrates better performance in both
representation error and the smoothness of the temporal evolution of the sparse
coefficients.
|
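The abstract above builds on the linearized Bregman iteration. A minimal sketch of the standard (non-leaky) iteration for sparse recovery is given below; this is not the paper's LLBI network, and the leaky decay term and the rectifying neural implementation are omitted.

```python
import numpy as np

def linearized_bregman(A, b, lam=0.5, delta=1.0, iters=200):
    """Standard linearized Bregman iteration for sparse recovery
    (approximately solving min ||u||_1 subject to A u = b).
    The paper's leaky variant adds a decay on v, omitted here."""
    n = A.shape[1]
    v = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        v += A.T @ (b - A @ u)  # gradient step on the residual
        # soft-thresholding (shrinkage) produces the sparse estimate
        u = delta * np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    return u

# Sanity check: with A = I the iteration recovers b exactly
A = np.eye(3)
b = np.array([2.0, 0.0, -1.5])
u = linearized_bregman(A, b)
```

The online/dynamic aspect in the paper corresponds to running such an iteration continuously as `b` changes over time.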
1210.3769 | Analysis of Blocking Probability in a Relay-based Cellular OFDMA Network | cs.IT cs.NI math.IT | Relay deployment in Orthogonal Frequency Division Multiple Access (OFDMA)
based cellular networks helps in coverage extension and/or capacity
improvement. In an OFDMA system, each user requires a different number of
subcarriers to meet its rate requirement. This resource requirement depends on
the Signal to Interference Ratio (SIR) experienced by a user. Traditional
methods to compute blocking probability cannot be used in relay based cellular
OFDMA networks. In this paper, we present an approach to compute the blocking
probability of such networks. We derive an expression for the probability
distribution of a user's resource requirement based on its experienced SIR and
then classify the users into various classes depending upon their subcarrier
requirement. We model the system as a multi-dimensional system with different
classes and evaluate the blocking probability of the system using the
multi-dimensional Erlang loss formulas.
|
1210.3812 | A Unified Analytical Design Method of Standard Controllers using
Inversion Formulae | cs.SY math.OC | The aim of this paper is to present a comprehensive range of design
techniques for the synthesis of the standard compensators (Lead and Lag
networks as well as PID controllers) that in the last twenty years have proved
to be of great educational value in a vast number of undergraduate and
postgraduate courses in Control throughout Italy, but that to date remain
mostly confined within this country. These techniques hinge upon a set of
simple closed-form formulae for the computation of the parameters of the
controller as functions of the typical specifications introduced in Control
courses, i.e., the steady-state performance, the stability margins and the
crossover frequencies.
|
1210.3819 | On Precoding for Constant K-User MIMO Gaussian Interference Channel with
Finite Constellation Inputs | cs.IT math.IT | This paper considers linear precoding for constant channel-coefficient
$K$-User MIMO Gaussian Interference Channel (MIMO GIC) where each
transmitter-$i$ (Tx-$i$) needs to send $d_i$ independent complex symbols
per channel use that take values from fixed finite constellations with uniform
distribution, to receiver-$i$ (Rx-$i$) for $i=1,2,\cdots,K$. We define the
Constellation Constrained Saturation Capacity (CCSC) for Tx-$i$ as the maximum
rate achieved by Tx-$i$ using any linear precoder when the interference
channel coefficients are zero and the signal to noise ratio (SNR) tends to
infinity. We derive a high SNR approximation for the rate achieved by
Tx-$i$ when interference is treated as noise and this rate is given by the
mutual information between Tx-$i$ and Rx-$i$, denoted as $I[X_i;Y_i]$. A set of
necessary and sufficient conditions on the precoders under which $I[X_i;Y_i]$
tends to CCSC for Tx-$i$ is derived. Interestingly, the precoders designed for
interference alignment (IA) satisfy these necessary and sufficient conditions.
Further, we propose gradient-ascent based algorithms to optimize the sum-rate
achieved by precoding with finite constellation inputs and treating
interference as noise. A simulation study using the proposed algorithms for a
3-user MIMO GIC with two antennas at each node, $d_i=1$ for all $i$, and
BPSK and QPSK inputs shows more than 0.1 bits/sec/Hz gain in the ergodic
sum-rate over that yielded by precoders obtained from some known IA algorithms,
at moderate SNRs.
|
1210.3832 | Image Processing using Smooth Ordering of its Patches | cs.CV | We propose an image processing scheme based on the reordering of image patches. For
a given corrupted image, we extract all overlapping patches, regard these
as points in a high-dimensional space, and order them such that they are
chained in the "shortest possible path", essentially solving the traveling
salesman problem. The obtained ordering, applied to the corrupted image,
implies a permutation of the image pixels into what should be a regular signal. This
enables us to obtain good recovery of the clean image by applying relatively
simple 1D smoothing operations (such as filtering or interpolation) to the
reordered set of pixels. We explore the use of the proposed approach to image
denoising and inpainting, and show promising results in both cases.
|
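The "shortest possible path" ordering described above is a traveling salesman instance, which in practice is approximated. A hedged sketch using the greedy nearest-neighbour heuristic follows; the paper's actual solver and patch extraction details may differ.

```python
import numpy as np

def greedy_patch_order(patches, start=0):
    """Order patch vectors so consecutive patches are similar,
    using the greedy nearest-neighbour TSP heuristic."""
    n = len(patches)
    visited = np.zeros(n, dtype=bool)
    order = [start]
    visited[start] = True
    for _ in range(n - 1):
        # Euclidean distance from the last visited patch to all others
        d = np.linalg.norm(patches - patches[order[-1]], axis=1)
        d[visited] = np.inf  # never revisit a patch
        nxt = int(np.argmin(d))
        order.append(nxt)
        visited[nxt] = True
    return order

# 1-D "patches" at 0, 10, 1, 11 chain up as 0 -> 1 -> 10 -> 11
order = greedy_patch_order(np.array([[0.0], [10.0], [1.0], [11.0]]))
```

The permutation `order`, applied to the corresponding pixels, is what makes simple 1D smoothing effective on the reordered signal.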
1210.3835 | Exploiting Network Cooperation in Green Wireless Communication | cs.IT cs.PF math.IT | There is a growing interest in energy efficient or so-called "green" wireless
communication to reduce the energy consumption in cellular networks. Since
today's wireless terminals are typically equipped with multiple network access
interfaces such as Bluetooth, Wi-Fi, and cellular networks, this paper
investigates user terminals cooperating with each other in transmitting their
data packets to the base station (BS) by exploiting these multiple network
access interfaces, a scheme we call inter-network cooperation. We also examine the
conventional schemes without user cooperation and with intra-network
cooperation for comparison. Given target outage probability and data rate
requirements, we analyze the energy consumption of conventional schemes as
compared to the proposed inter-network cooperation by taking into account both
physical-layer channel impairments (including path loss, fading, and thermal
noise) and upper-layer protocol overheads. It is shown that distances between
different network entities (i.e., user terminals and BS) have a significant
influence on the energy efficiency of the proposed inter-network cooperation
scheme. Specifically, when the cooperating users are close to the BS or the users
are far away from each other, the inter-network cooperation may consume more
energy than conventional schemes without user cooperation or with intra-network
cooperation. However, as the cooperating users move away from the BS and the
inter-user distance is not too large, the inter-network cooperation
significantly reduces the energy consumption over conventional schemes.
|
1210.3853 | Transceiver Design For SC-FDE Based MIMO Relay Systems | cs.IT math.IT | In this paper, we propose a joint transceiver design for single-carrier
frequency-domain equalization (SC-FDE) based multiple-input multiple-output
(MIMO) relay systems. To this end, we first derive the optimal minimum
mean-squared error linear and decision-feedback frequency-domain equalization
filters at the destination along with the corresponding error covariance
matrices at the output of the equalizer. Subsequently, we formulate the source
and relay precoding matrix design problem as the minimization of a family of
Schur-convex and Schur-concave functions of the mean-squared errors at the
output of the equalizer under separate power constraints for the source and the
relay. By exploiting properties of the error covariance matrix and results from
majorization theory, we derive the optimal structures of the source and relay
precoding matrices, which allows us to transform the matrix optimization
problem into a scalar power optimization problem. Adopting a high
signal-to-noise ratio approximation for the objective function, we obtain the
global optimal solution for the power allocation variables. Simulation results
illustrate the excellent performance of the proposed system and its superiority
compared to conventional orthogonal frequency-division multiplexing based MIMO
relay systems.
|
1210.3865 | Opinion Mining for Relating Subjective Expressions and Annual Earnings
in US Financial Statements | cs.CL cs.AI cs.IR q-fin.GN | Financial statements contain quantitative information and managers'
subjective evaluations of a firm's financial status, released in U.S. 10-K
filings. Both qualitative and quantitative appraisals are crucial for
quality financial decisions. To extract such opinionated statements from the
reports, we built tagging models based on the conditional random field (CRF)
techniques, considering a variety of combinations of linguistic factors
including morphology, orthography, predicate-argument structure, syntax, and
simple semantics. Our results show that the CRF models are reasonably effective
at finding opinion holders in experiments when we adopted the popular MPQA
corpus for training and testing. The contribution of our paper is to identify
opinion patterns in multiword expression (MWE) form rather than in single-word
form.
We find that the managers of corporations attempt to use more optimistic
words to obfuscate negative financial performance and to accentuate
positive financial performance. Our results also show that decreasing earnings
were often accompanied by ambiguous and mild statements in the reporting year
and that increasing earnings were stated in an assertive and positive way.
|
1210.3906 | Design of Multiple-Edge Protographs for QC LDPC Codes Avoiding Short
Inevitable Cycles | cs.IT math.IT | There have been many efforts on the construction of quasi-cyclic (QC)
low-density parity-check (LDPC) codes with large girth. However, most of them
are focused on protographs with single edges and little research has been done
for the construction of QC LDPC codes lifted from protographs with multiple
edges. Compared to single-edge protographs, multiple-edge protographs have
the benefit that QC LDPC codes lifted from them can potentially have a larger
minimum Hamming distance. In this paper, all subgraph patterns of multiple-edge
protographs, which prevent QC LDPC codes from having large girth by inducing
inevitable cycles, are fully investigated based on a graph-theoretic approach. By
using combinatorial designs, a systematic construction method of multiple-edge
protographs is proposed for regular QC LDPC codes with girth at least 12, and
another method is proposed for regular QC LDPC codes with girth at least 14.
A construction algorithm of QC LDPC codes by lifting multiple-edge protographs
is proposed and it is shown that the resulting QC LDPC codes have larger upper
bounds on the minimum Hamming distance than those lifted from single-edge
protographs. Simulation results are provided to compare the performance of the
proposed QC LDPC codes, the progressive edge-growth (PEG) LDPC codes, and the
PEG QC LDPC codes.
|
1210.3921 | Stein's density approach and information inequalities | math.PR cs.IT math.IT | We provide a new perspective on Stein's so-called density approach by
introducing a new operator and a characterizing class which are valid for a much
wider family of probability distributions on the real line. We prove an
elementary factorization property of this operator and propose a new Stein
identity which we use to derive information inequalities in terms of what we
call the \emph{generalized Fisher information distance}. We provide explicit
bounds on the constants appearing in these inequalities for several important
cases. We conclude with a comparison between our results and known results in
the Gaussian case, thereby improving on several known inequalities from the
literature.
|
1210.3926 | Learning Attitudes and Attributes from Multi-Aspect Reviews | cs.CL cs.IR cs.LG | The majority of online reviews consist of plain-text feedback together with a
single numeric score. However, there are multiple dimensions to products and
opinions, and understanding the `aspects' that contribute to users' ratings may
help us to better understand their individual preferences. For example, a
user's impression of an audiobook presumably depends on aspects such as the
story and the narrator, and knowing their opinions on these aspects may help us
to recommend better products. In this paper, we build models for rating systems
in which such dimensions are explicit, in the sense that users leave separate
ratings for each aspect of a product. By introducing new corpora consisting of
five million reviews, rated with between three and six aspects, we evaluate our
models on three prediction tasks: First, we use our model to uncover which
parts of a review discuss which of the rated aspects. Second, we use our model
to summarize reviews, which for us means finding the sentences that best
explain a user's rating. Finally, since aspect ratings are optional in many of
the datasets we consider, we use our model to recover those ratings that are
missing from a user's evaluation. Our model matches state-of-the-art approaches
on existing small-scale datasets, while scaling to the real-world datasets we
introduce. Moreover, our model is able to `disentangle' content and sentiment
words: we automatically learn content words that are indicative of a particular
aspect as well as the aspect-specific sentiment words that are indicative of a
particular rating.
|
1210.3937 | Introduction to the 28th International Conference on Logic Programming
Special Issue | cs.PL cs.AI | We are proud to introduce this special issue of the Journal of Theory and
Practice of Logic Programming (TPLP), dedicated to the full papers accepted for
the 28th International Conference on Logic Programming (ICLP). The ICLP
meetings started in Marseille in 1982 and have since constituted the main venue
for presenting and discussing work in the area of logic programming.
|
1210.3946 | Local optima networks and the performance of iterated local search | cs.AI | Local Optima Networks (LONs) have been recently proposed as an alternative
model of combinatorial fitness landscapes. The model compresses the information
given by the whole search space into a smaller mathematical object: a graph
whose vertices are the local optima and whose edges are the possible weighted
transitions between them. A new set of metrics can be derived from this model
that capture the distribution and connectivity of the local optima in the
underlying configuration space. This paper departs from the descriptive
analysis of local optima networks, and actively studies the correlation between
network features and the performance of a local search heuristic. The NK family
of landscapes and the Iterated Local Search metaheuristic are considered. With
a statistically-sound approach based on multiple linear regression, it is shown
that some LONs' features strongly influence and can even partly predict the
performance of a heuristic search algorithm. This study validates the
expressive power of LONs as a model of combinatorial fitness landscapes.
|
1210.3953 | Wireless Network-Coded Four-Way Relaying Using Latin Hyper-Cubes | cs.IT math.IT | This paper deals with physical layer network-coding for the four-way wireless
relaying scenario where four nodes A, B, C and D wish to communicate their
messages to all the other nodes with the help of the relay node R. The scheme
given in the paper is based on the denoise-and-forward scheme proposed first by
Popovski et al. Intending to minimize the number of channel uses, the protocol
employs two phases: Multiple Access (MA) phase and Broadcast (BC) phase with
each phase utilizing one channel use. This paper does the equivalent for the
four-way relaying scenario as was done for the two-way relaying scenario by
Koike-Akino et al., and for the three-way relaying scenario in [3]. It is observed
that adaptively changing the network coding map used at the relay according to
the channel conditions greatly reduces the impact of multiple access
interference which occurs at the relay during the MA phase. These network
coding maps are chosen so that they satisfy a requirement called the exclusive
law. We show that when the four users transmit points from the same M-PSK
constellation, every such network coding map that satisfies the exclusive law
can be represented by a 4-fold Latin Hyper-Cube of side M. The network code map
used by the relay for the BC phase is explicitly obtained and is aimed at
reducing the effect of interference at the MA stage.
|
1210.4006 | The Perturbed Variation | cs.LG stat.ML | We introduce a new discrepancy score between two distributions that gives an
indication on their similarity. While much research has been done to determine
if two samples come from exactly the same distribution, much less research has
considered the problem of determining if two finite samples come from similar
distributions. The new score gives an intuitive interpretation of similarity;
it optimally perturbs the distributions so that they best fit each other. The
score is defined between distributions, and can be efficiently estimated from
samples. We provide convergence bounds of the estimated score, and develop
hypothesis testing procedures that test if two data sets come from similar
distributions. The statistical power of these procedures is presented in
simulations. We also compare the score's capacity to detect similarity with
that of other known measures on real data.
|
1210.4007 | Extending modularity by capturing the similarity attraction feature in
the null model | cs.SI physics.data-an physics.soc-ph | Modularity is a widely used measure for evaluating community structure in
networks. The definition of modularity involves comparing the number of
within-community edges in the observed network with that number in an equivalent
randomized network. This equivalent randomized network is called the null
model, which serves as a reference. To make the comparison significant, the
null model should characterize some features of the observed network. However,
the null model in the original definition of modularity is unrealistically
mixed, in the sense that any node can be linked to any other node without
preference and only connectivity matters. Thus, it fails to be a good
representation of real-world networks. A common feature of many real-world
networks is "similarity attraction", i.e., edges tend to link to nodes that are
similar to each other. We propose a null model that captures the similarity
attraction feature. This null model enables us to create a framework for
defining a family of Dist-Modularity measures adapted to various networks, including
networks with additional information on nodes. We demonstrate that
Dist-Modularity is useful in identifying communities at different scales.
|
1210.4008 | Location-Based Events Detection on Micro-Blogs | cs.SI cs.IR physics.soc-ph | The increasing use of social networks generates enormous amounts of data that
can be used for many types of analysis. Some of these data have temporal and
geographical information, which can be used for comprehensive examination. In
this paper, we propose a new method to analyze the massive volume of messages
available in Twitter to identify places in the world where topics such as TV
shows, climate change, disasters, and sports are emerging. The proposed method
is based on a neural network that is used to detect outliers from a time
series, which is built upon statistical data from tweets located on different
political divisions (e.g., countries, cities). The outliers are used to
identify topics exhibiting abnormal behavior on Twitter. The effectiveness of
our method is evaluated in an online environment, yielding new findings on
modeling the behavior of local people in different places.
|
1210.4021 | Local Optima Networks, Landscape Autocorrelation and Heuristic Search
Performance | cs.AI cs.NE | Recent developments in fitness landscape analysis include the study of Local
Optima Networks (LON) and applications of the Elementary Landscapes theory.
This paper represents a first step at combining these two tools to explore
their ability to forecast the performance of search algorithms. We base our
analysis on the Quadratic Assignment Problem (QAP) and conduct a large
statistical study over 600 generated instances of different types. Our results
reveal interesting links between the network measures, the autocorrelation
measures and the performance of heuristic search algorithms.
|
1210.4081 | Getting Feasible Variable Estimates From Infeasible Ones: MRF Local
Polytope Study | cs.NA cs.CV cs.DS cs.LG math.OC | This paper proposes a method for the construction of approximate feasible primal
solutions from dual ones for large-scale optimization problems possessing
certain separability properties. Whereas infeasible primal estimates can
typically be produced from (sub-)gradients of the dual function, it is often
not easy to project them to the primal feasible set, since the projection
itself has a complexity comparable to the complexity of the initial problem. We
propose an alternative efficient method to obtain feasibility and show that its
properties influencing the convergence to the optimum are similar to the
properties of the Euclidean projection. We apply our method to the local
polytope relaxation of inference problems for Markov Random Fields and
demonstrate its superiority over existing methods.
|
1210.4130 | Relational Theories with Null Values and Non-Herbrand Stable Models | cs.LO cs.AI cs.DB | Generalized relational theories with null values in the sense of Reiter are
first-order theories that provide a semantics for relational databases with
incomplete information. In this paper we show that any such theory can be
turned into an equivalent logic program, so that models of the theory can be
generated using computational methods of answer set programming. As a step
towards this goal, we develop a general method for calculating stable models
under the domain closure assumption but without the unique name assumption.
|
1210.4145 | A Biologically Realistic Model of Saccadic Eye Control with
Probabilistic Population Codes | cs.NE q-bio.NC | The posterior parietal cortex is believed to direct eye movements, especially
with regard to target-tracking tasks, and a number of debates exist over the
precise nature of the computations performed by the parietal cortex, with each
side supported by different sets of biological evidence. In this paper I will
present my model, which navigates a course between some of these debates, with
the aim of explaining some of the competing interpretations among the data
sets. In particular, rather than assuming that
proprioception or efference copies form the key source of information for
computing eye position information, I use a biologically plausible implementation
of a Kalman filter to optimally combine the two signals, and a simple gain
control mechanism in order to accommodate the latency of the proprioceptive
signal. Fitting within the Bayesian brain hypothesis, the result is a Bayes
optimal solution to the eye control problem, with a range of data supporting
claims of biological plausibility.
|
1210.4184 | The Kernel Pitman-Yor Process | cs.LG cs.AI stat.ML | In this work, we propose the kernel Pitman-Yor process (KPYP) for
nonparametric clustering of data with general spatial or temporal
interdependencies. The KPYP is constructed by first introducing an infinite
sequence of random locations. Then, based on the stick-breaking construction of
the Pitman-Yor process, we define a predictor-dependent random probability
measure by considering that the discount hyperparameters of the
Beta-distributed random weights (stick variables) of the process are not
uniform among the weights, but controlled by a kernel function expressing the
proximity between the location assigned to each weight and the given
predictors.
|
1210.4211 | Profit Maximization over Social Networks | cs.SI cs.GT physics.soc-ph | Influence maximization is the problem of finding a set of influential users
in a social network such that the expected spread of influence under a certain
propagation model is maximized. Much of the previous work has neglected the
important distinction between social influence and actual product adoption.
However, as recognized in the management science literature, an individual who
gets influenced by social acquaintances may not necessarily adopt a product (or
technology), due, e.g., to monetary concerns. In this work, we distinguish
between influence and adoption by explicitly modeling the states of being
influenced and of adopting a product. We extend the classical Linear Threshold
(LT) model to incorporate prices and valuations, and factor them into users'
decision-making process of adopting a product. We show that the expected profit
function under our proposed model maintains submodularity under certain
conditions, but no longer exhibits monotonicity, unlike the expected influence
spread function. To maximize the expected profit under our extended LT model,
we employ an unbudgeted greedy framework to propose three profit maximization
algorithms. The results of our detailed experimental study on three real-world
datasets demonstrate that of the three algorithms, \textsf{PAGE}, which assigns
prices dynamically based on the profit potential of each candidate seed, has
the best performance both in the expected profit achieved and in running time.
|
1210.4231 | An example illustrating the imprecision of the efficient approach for
diagnosis of Petri nets via integer linear programming | cs.SY cs.AI | This document demonstrates that the efficient approach for diagnosis of Petri
nets via integer linear programming may be unable to detect a fault even if the
system is diagnosable.
|
1210.4235 | Node Classification in Networks of Stochastic Evidence Accumulators | cs.SY math.OC | This paper considers a network of stochastic evidence accumulators, each
represented by a drift-diffusion model accruing evidence towards a decision in
continuous time by observing a noisy signal and by exchanging information with
other units according to a fixed communication graph. We bring into focus the
relationship between the location of each unit in the communication graph and
its certainty as measured by the inverse of the variance of its state. We show
that node classification according to degree distributions or geodesic
distances cannot faithfully capture node ranking in terms of certainty.
Instead, all possible paths connecting each unit with the rest in the network
must be incorporated. We make this precise by proving that node classification
according to information centrality provides a rank ordering with respect to
node certainty, thereby affording a direct interpretation of the certainty
level of each unit in terms of the structural properties of the underlying
communication graph.
|
1210.4243 | Outage Probability Analysis of Dual Hop Relay Networks in Presence of
Interference | cs.IT math.IT | Cooperative relaying improves the performance of wireless networks by forming
a network of multiple independent virtual sources transmitting the same
information as the source node. However, interference induced in the network
reduces the performance of cooperative communications. In this work the
statistical properties, the cumulative distribution function (CDF) and the
probability density function (PDF) for a basic dual hop cooperative relay
network with an arbitrary number of interferers over Rayleigh fading channels
are derived. Two system models are considered: in the first system model, the
interferers are only at the relay node; and in the second system model,
interferers are both at the relay and the destination. This work is further
extended to Nakagami-m faded interfering channels. Simulation results are
presented on outage probability performance to verify the theoretical analysis.
|
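As a much simpler sanity check of the kind of analysis in the abstract, the single-link Rayleigh-fading outage probability (no relays or interference, unlike the paper's dual-hop model) can be verified by Monte Carlo against its closed form $P_{out} = 1 - e^{-\gamma_{th}/\bar{\gamma}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma_bar = 2.0  # average SNR (linear scale)
gamma_th = 1.0   # outage threshold

# Under Rayleigh fading the instantaneous SNR is exponentially
# distributed with mean gamma_bar.
snr = rng.exponential(scale=gamma_bar, size=200_000)
p_sim = np.mean(snr < gamma_th)
p_theory = 1.0 - np.exp(-gamma_th / gamma_bar)
```

The same simulation pattern, with relay and interferer channels added, is how outage expressions like those derived in the paper are typically verified.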
1210.4246 | A Latent Parameter Node-Centric Model for Spatial Networks | cs.SI physics.soc-ph | Spatial networks, in which nodes and edges are embedded in space, play a
vital role in the study of complex systems. For example, many social networks
attach geo-location information to each user, allowing the study of not only
topological interactions between users, but spatial interactions as well. The
defining property of spatial networks is that edge distances are associated
with a cost, which may subtly influence the topology of the network. However,
the cost function over distance is rarely known, thus developing a model of
connections in spatial networks is a difficult task.
In this paper, we introduce a novel model for capturing the interaction
between spatial effects and network structure. Our approach represents a unique
combination of ideas from latent variable statistical models and spatial
network modeling. In contrast to previous work, we view the ability to form
long/short-distance connections to be dependent on the individual nodes
involved. For example, a node's specific surroundings (e.g. network structure
and node density) may make it more likely to form a long distance link than
other nodes with the same degree. To capture this information, we attach a
latent variable to each node which represents a node's spatial reach. These
variables are inferred from the network structure using a Markov Chain Monte
Carlo algorithm.
We experimentally evaluate our proposed model on 4 different types of
real-world spatial networks (i.e., transportation, biological, infrastructure,
and social). We apply our model to the task of link prediction and achieve up
to a 35% improvement over previous approaches in terms of the area under the
ROC curve. Additionally, we show that our model is particularly helpful for
predicting links between nodes with low degrees. In these cases, we see much
larger improvements over previous models.
|
1210.4247 | Deterministic Selection of Phase Sequences in Low Complexity SLM Scheme | cs.IT math.IT | Selected mapping (SLM) is a promising scheme for mitigating the
peak-to-average power ratio (PAPR) problem. Recently, many researchers have
concentrated on reducing the computational complexity of the SLM schemes. One
of the low complexity SLM schemes is the Class III SLM scheme which uses only
one inverse fast Fourier transform (IFFT) operation for generating one
orthogonal frequency division multiplexing (OFDM) signal sequence. By selecting
rotations and cyclic shifts randomly, it can generate $N^3$ alternative OFDM
signal sequences, where $N$ is the FFT size. However, this selection cannot
guarantee optimal PAPR reduction performance. Therefore, in this paper, we
propose a simple deterministic cyclic-shift selection method which is optimal
in the sense of having a low variance of the correlation coefficient between two
alternative OFDM signal sequences. We also show that the PAPR reduction
performance depends more strongly on the cyclic shifts than on the rotations.
For a small FFT size, when the number of alternative signal sequences is close
to $N/8$, simulation results show that the proposed scheme achieves better PAPR
reduction performance than the Class III SLM scheme.
|
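The PAPR that SLM schemes try to reduce is computed from the (usually oversampled) IFFT of the frequency-domain symbols; a minimal sketch follows. The Class III alternative-sequence generation itself is not reproduced, and the simple tail zero-padding used here is an approximation (strict oversampling pads around DC).

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """PAPR in dB of the OFDM time-domain signal obtained from
    `freq_symbols` via a (possibly oversampled) IFFT."""
    n = oversample * len(freq_symbols)
    x = np.fft.ifft(freq_symbols, n=n)  # zero-padded IFFT
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

# All-ones input with no oversampling is an impulse in time, so the
# PAPR equals 10*log10(N), about 6.02 dB for N = 4.
papr = papr_db(np.ones(4), oversample=1)
```

In an SLM scheme, `papr_db` would be evaluated for each alternative sequence and the minimum-PAPR candidate transmitted.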
1210.4251 | Performance Analysis Cluster and GPU Computing Environment on Molecular
Dynamic Simulation of BRV-1 and REM2 with GROMACS | cs.DC cs.CE q-bio.BM | One application that needs high-performance computing resources is
molecular dynamics. Several software packages perform molecular dynamics; one
of these is the well-known GROMACS. Our previous experiments simulating the
molecular dynamics of Indonesian-grown herbal compounds showed sufficient
speed-up in a 32-node cluster computing environment. To obtain a reliable
simulation, one usually needs to run the experiment on the scale of a hundred
nodes, but such a system is expensive to develop and maintain. Since the advent
of Graphical Processing Units suitable for general-purpose programming, many
applications have been developed to run on them. This paper reports our
experiments evaluating the performance of GROMACS in two different
environments: cluster computing resources and GPU-based PCs. We run the
experiment on the BRV-1 and REM2 compounds. Four different GPUs are installed
on the same type of quad-core PCs: a GeForce GTS 250, GTX 465, GTX 470, and
Quadro 4000. We also build a 16-node cluster based on these four quad-core PCs.
The preliminary experiment shows that the runs on the GTX 470 are the best
among the GPUs as well as the cluster computing resource. A speed-up of around
11 to 12 is gained, while the cost of a computer with a GPU is only about 25
percent of that of the cluster we built.
|
1210.4276 | Semi-Supervised Classification Through the Bag-of-Paths Group
Betweenness | stat.ML cs.LG | This paper introduces a novel, well-founded, betweenness measure, called the
Bag-of-Paths (BoP) betweenness, as well as its extension, the BoP group
betweenness, to tackle semi-supervised classification problems on weighted
directed graphs. The objective of semi-supervised classification is to assign a
label to unlabeled nodes using the whole topology of the graph and the labeled
nodes at our disposal. The BoP betweenness relies on a bag-of-paths framework
assigning a Boltzmann distribution on the set of all possible paths through the
network such that long (high-cost) paths have a low probability of being picked
from the bag, while short (low-cost) paths have a high probability of being
picked. Within that context, the BoP betweenness of node j is defined as the
sum of the a posteriori probabilities that node j lies in-between two arbitrary
nodes i, k, when picking a path starting in i and ending in k. Intuitively, a
node typically receives a high betweenness if it has a large probability of
appearing on paths connecting two arbitrary nodes of the network. This quantity
can be computed in closed form by inverting a n x n matrix where n is the
number of nodes. For the group betweenness, the paths are constrained to start
and end in nodes within the same class, therefore defining a group betweenness
for each class. Unlabeled nodes are then classified according to the class
showing the highest group betweenness. Experiments on various real-world data
sets show that the BoP group betweenness outperforms all the tested
state-of-the-art methods. The benefit of the BoP betweenness is particularly
noticeable when only a few labeled nodes are available.
|
1210.4277 | Improving Smoothed l0 Norm in Compressive Sensing Using Adaptive
Parameter Selection | cs.IT math.IT | Signal reconstruction in compressive sensing involves finding a sparse
solution that satisfies a set of linear constraints. Several approaches to this
problem have been considered in existing reconstruction algorithms. They each
provide a trade-off between reconstruction capabilities and required
computation time. In an attempt to push the limits for this trade-off, we
consider a smoothed l0 norm (SL0) algorithm in a noiseless setup. We argue that
using a set of carefully chosen parameters in our proposed adaptive SL0
algorithm may result in significantly better reconstruction capabilities in
terms of phase transition while retaining the same required computation time as
existing SL0 algorithms. A large set of simulations further supports this claim.
Simulations even reveal that the theoretical l1 curve may be surpassed in major
parts of the phase space.
|
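The abstract above builds on the standard smoothed-l0 (SL0) iteration: gradient steps on a Gaussian approximation of the l0 "norm", projected back onto the constraint set, with an annealed width parameter. A minimal sketch of that baseline follows; it is not the paper's adaptive parameter selection, and the step size `mu`, decay factor, and inner iteration count are illustrative defaults only.

```python
import numpy as np

def sl0(A, b, sigma_min=1e-4, sigma_decay=0.7, mu=2.0, inner_iters=3):
    """Basic smoothed-l0 (SL0) reconstruction: maximize a Gaussian
    approximation of the l0 'norm' while staying on {x : Ax = b}."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                       # minimum-l2 feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = x * np.exp(-x**2 / (2 * sigma**2))  # gradient direction
            x = x - mu * delta
            x = x - A_pinv @ (A @ x - b)                # project onto Ax = b
        sigma *= sigma_decay             # anneal toward the true l0 objective
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = sl0(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the noiseless setting of the abstract, this recovers a 5-sparse vector from 40 measurements essentially exactly; the adaptive scheme the paper proposes replaces the fixed `mu` and decay schedule.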
1210.4290 | A Fast Iterative Algorithm for Recovery of Sparse Signals from One-Bit
Quantized Measurements | cs.IT math.IT | This paper considers the problem of reconstructing sparse or compressible
signals from one-bit quantized measurements. We study a new method that uses a
log-sum penalty function, also referred to as the Gaussian entropy, for sparse
signal recovery. Also, in the proposed method, sigmoid functions are introduced
to quantify the consistency between the acquired one-bit quantized data and the
reconstructed measurements. A fast iterative algorithm is developed by
iteratively minimizing a convex surrogate function that bounds the original
objective function, which leads to an iterative reweighted process that
alternates between estimating the sparse signal and refining the weights of the
surrogate function. Connections between the proposed algorithm and other
existing methods are discussed. Numerical results are provided to illustrate
the effectiveness of the proposed algorithm.
|
1210.4293 | Communications with decode-and-forward relays in mesh networks | cs.IT math.IT | We consider mesh networks composed of groups of relaying nodes which operate
in decode-and-forward mode, where each node from a group relays information to
all the nodes in the next group. We study these networks in two setups, one
where the nodes have complete channel state information from the nodes that
transmit to them, and another when they only have the statistics of the
channel. We derive recursive expressions for the probabilities of errors of the
nodes and present several implementations of detectors used in these networks.
We compare the mesh networks with multihop networks, the latter being formed by
a set of parallel sections of multiple relaying nodes. We demonstrate with
numerous simulations that there are significant improvements in performance of
mesh over multihop networks in various scenarios.
|
1210.4301 | Reputation Aggregation in Peer-to-Peer Network Using Differential Gossip
Algorithm | cs.NI cs.SI | Reputation aggregation in peer-to-peer networks is generally a very time- and
resource-consuming process. Moreover, most methods assume that a node has the
same reputation with every node in the network, which is not true. This paper
proposes a reputation aggregation algorithm that uses a variant of the gossip
algorithm called differential gossip. Here, the reputation estimate is
considered to have two parts: a common component, which is the same at every
node, and the information received from immediate neighbours based on the
neighbours' direct interactions with the node. Differential gossip is fast and
requires a small amount of resources, and it allows each node to compute an
independent reputation value for every other node in the network. Differential
gossip trust has been investigated for a power-law network formed using the
preferential attachment \emph{(PA)} model. The reputation computed using
differential gossip trust shows good immunity to collusion. We have verified
the performance of the algorithm on power-law networks of different sizes,
ranging from 100 nodes to 50,000 nodes.
|
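The differential gossip variant is specific to the paper above; as a generic illustration of the gossip-aggregation primitive it builds on, the sketch below runs plain pairwise gossip averaging, in which repeatedly activating a random edge and averaging its endpoints drives every node's estimate to the network-wide mean. The graph, initial reports, and round count are arbitrary choices.

```python
import random

def pairwise_gossip(values, edges, rounds=2000, seed=1):
    """Classic pairwise gossip averaging: at each step a random edge (i, j)
    is activated and both endpoints replace their values by the average.
    The total sum is conserved, so all values converge to the global mean."""
    rng = random.Random(seed)
    vals = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        avg = (vals[i] + vals[j]) / 2
        vals[i] = vals[j] = avg
    return vals

# Ring of 6 nodes holding initial "reputation reports" (mean = 0.5)
edges = [(i, (i + 1) % 6) for i in range(6)]
reports = [0.9, 0.1, 0.5, 0.7, 0.3, 0.5]
final = pairwise_gossip(reports, edges)
print(final)
```

The paper's contribution is precisely that each node ends up with an individual estimate per peer rather than this single shared average.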
1210.4329 | Impact of Scheduling in the Return-Link of Multi-Beam Satellite MIMO
Systems | cs.IT math.IT | The utilization of universal frequency reuse in multi-beam satellite systems
introduces a non-negligible level of co-channel interference (CCI), which in
turn penalizes the quality of service experienced by users. Taking this as a
starting point, the paper focuses on resource management performed by the
gateway (hub) on the return-link, with particular emphasis on a scheduling
algorithm based on a bipartite graph approach. The study gives important insights
into the achievable per-user rate and the role played by the number of users
and spot beams considered for scheduling. More interestingly, it is shown that
a free-slot assignment strategy helps to exploit the available satellite
resources, thus guaranteeing a max-min rate requirement to users. Remarks about
the trade-off between efficiency-loss and performance increase are finally
drawn at the end of the paper.
|
1210.4347 | Hilbert Space Embedding for Dirichlet Process Mixtures | stat.ML cs.LG | This paper proposes a Hilbert space embedding for Dirichlet Process mixture
models via the stick-breaking construction of Sethuraman. Although Bayesian
nonparametrics offers a powerful approach to construct a prior that avoids the
need to specify the model size/complexity explicitly, exact inference is
often intractable. On the other hand, frequentist approaches such as kernel
machines, which suffer from the model selection/comparison problems, often
benefit from efficient learning algorithms. This paper discusses the
possibility to combine the best of both worlds by using the Dirichlet Process
mixture model as a case study.
|
1210.4377 | Order statistics of observed network degrees | stat.ME cs.SI physics.soc-ph | This article discusses the properties of extremes of degree sequences
calculated from network data. We introduce the notion of a normalized degree,
in order to permit a comparison of degree sequences between networks with
differing numbers of nodes. We model each normalized degree as a bounded
continuous random variable, and determine the properties of the ordered
k-maxima and minima of the normalized network degrees when they comprise a
random sample from a Beta distribution. In this setting, their means and
variances take a simplified form given by their ordering, and we discuss the
relation of these quantities to other prescribed decays such as power laws. We
verify the derived properties from simulated sets of normalized degrees, and
discuss possible extensions to more flexible classes of distributions.
|
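For the Beta order-statistic setting described in the abstract above, the simplified means can be checked by simulation in the Beta(1,1) (i.e. uniform) special case, where the expected k-th maximum of n samples has the closed form (n-k+1)/(n+1). The sample size and trial count below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 20, 200_000
# Model the normalized degrees as an i.i.d. sample from Beta(1, 1),
# which coincides with Uniform(0, 1).
samples = rng.beta(1.0, 1.0, size=(trials, n))
emp_max_mean = samples.max(axis=1).mean()
theory = n / (n + 1)   # closed-form E[max] for n i.i.d. Uniform(0,1) draws
print(emp_max_mean, theory)
```

The same Monte Carlo check extends directly to general Beta(a, b) parameters, where the ordered means take the simplified form discussed in the article.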
1210.4383 | Distributed Formation of Balanced and Bistochastic Weighted Digraphs in
Multi-Agent Systems | cs.MA | Consensus strategies find a variety of applications in distributed
coordination and decision making in multi-agent systems. In particular, average
consensus plays a key role in a number of applications and is closely
associated with two classes of digraphs, weight-balanced (for continuous-time
systems) and bistochastic (for discrete-time systems). A weighted digraph is
called balanced if, for each node, the sum of the weights of the edges outgoing
from that node is equal to the sum of the weights of the edges incoming to that
node. In addition, a weight-balanced digraph is bistochastic if all weights are
nonnegative and, for each node, the sum of weights of edges incoming to that
node and the sum of the weights of edges out-going from that node is unity;
this implies that the corresponding weight matrix is column and row stochastic
(i.e., doubly stochastic). We propose two distributed algorithms: one solves
the weight-balance problem and the other solves the bistochastic matrix
formation problem for a distributed system whose components (nodes) can
exchange information via interconnection links (edges) that form an arbitrary,
possibly directed, strongly connected communication topology (digraph). Both
distributed algorithms achieve their goals asymptotically and operate
iteratively by having each node adapt the (nonnegative) weights on its outgoing
edges based on the weights of its incoming links (i.e., based on purely local
information). We also provide examples to illustrate the operation,
performance, and potential advantages of the proposed algorithms.
|
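The balance condition above can be illustrated with a toy iteration (this is a sketch, not the distributed algorithms proposed in the paper): each node repeatedly sets every outgoing weight to its current incoming weight sum divided by its out-degree. On a strongly connected, aperiodic digraph the incoming sums converge to a fixed point at which in-sum equals out-sum at every node, i.e. the weighted digraph is balanced.

```python
# Strongly connected, aperiodic digraph: cycles 0->1->2->0 and 0->2->0.
out_edges = {0: [1, 2], 1: [2], 2: [0]}
n = 3
v = [1.0, 2.0, 3.0]                        # initial per-node "mass"
for _ in range(200):
    # each node i pushes v[i] / outdeg(i) along every outgoing edge
    new_v = [0.0] * n
    for i, nbrs in out_edges.items():
        for j in nbrs:
            new_v[j] += v[i] / len(nbrs)
    v = new_v

# resulting edge weights and the worst per-node imbalance |out-sum - in-sum|
weights = {(i, j): v[i] / len(nbrs)
           for i, nbrs in out_edges.items() for j in nbrs}
out_sum = {i: sum(w for (a, _), w in weights.items() if a == i) for i in range(n)}
in_sum = {i: sum(w for (_, b), w in weights.items() if b == i) for i in range(n)}
imbalance = max(abs(out_sum[i] - in_sum[i]) for i in range(n))
print(imbalance)
```

The paper's algorithms achieve the same asymptotic goal using only purely local information at each node, and additionally handle the bistochastic (doubly stochastic) case.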
1210.4405 | Semantic integration and analysis of clinical data | cs.DB | There is a growing need to semantically process and integrate clinical data
from different sources for Clinical Data Management and Clinical Decision
Support in the healthcare IT industry. In the clinical practice domain, the
semantic gap between clinical information systems and domain ontologies is
quite often difficult to bridge in one step. In this paper, we report our
experience in using a two-step formalization approach to formalize clinical
data, i.e. from database schemas to local formalisms and from local formalisms
to domain (unifying) formalisms. We use N3 rules to explicitly and formally
state the mapping from local ontologies to domain ontologies. The resulting
data expressed in domain formalisms can be integrated and analyzed, though
originating from very distinct sources. Applications of the two-step approach
in the infectious disorder and cancer domains are presented.
|
1210.4416 | A Direct Proof of a Theorem Concerning Singular Hamiltonian Systems | cs.SY | This technical report presents a direct proof of Theorem~1 in [1] and some
consequences that also account for (20) in [1]. This direct proof exploits a
state space change of basis which replaces the coupled difference equations
(10) in [1] with two equivalent difference equations which, instead, are
decoupled.
|
1210.4459 | Efficient Computation of Pareto Optimal Beamforming Vectors for the MISO
Interference Channel with Successive Interference Cancellation | cs.IT math.IT | We study the two-user multiple-input single-output (MISO) Gaussian
interference channel where the transmitters have perfect channel state
information and employ single-stream beamforming. The receivers are capable of
performing successive interference cancellation, so when the interfering signal
is strong enough, it can be decoded, treating the desired signal as noise, and
subtracted from the received signal, before the desired signal is decoded. We
propose efficient methods to compute the Pareto-optimal rate points and
corresponding beamforming vector pairs, by maximizing the rate of one link
given the rate of the other link. We do so by splitting the original problem
into four subproblems corresponding to the combinations of the receivers'
decoding strategies - either decode the interference or treat it as additive
noise. We utilize recently proposed parameterizations of the optimal
beamforming vectors to equivalently reformulate each subproblem as a
quasi-concave problem, which we solve very efficiently either analytically or
via scalar numerical optimization. The computational complexity of the proposed
methods is several orders-of-magnitude less than the complexity of the
state-of-the-art methods. We use the proposed methods to illustrate the effect
of the strength and spatial correlation of the channels on the shape of the
rate region.
|
1210.4460 | Fast SVM-based Feature Elimination Utilizing Data Radius, Hard-Margin,
Soft-Margin | stat.ML cs.LG | Margin maximization in the hard-margin sense, proposed as a feature
elimination criterion by the MFE-LO method, is combined here with data radius
utilization in a further attempt to lower generalization error: several
published bounds and bound-related formulations for misclassification risk (or
error) involve the radius, e.g. through the product of the squared radius and
the squared norm of the weight vector. Additionally, we propose novel feature
elimination criteria that operate in the soft-margin sense yet can also utilize
the data radius, building on previously published bound-related formulations
that approximate the radius in the soft-margin case, where, for example, one
stated principle is that "finding a bound whose minima are in a region with
small leave-one-out values may be more important than its tightness". These
additional criteria combine radius utilization with a novel, computationally
low-cost soft-margin light classifier retraining approach we devise, named QP1;
QP1 is the soft-margin alternative to the hard-margin LO. We correct an error
in the MFE-LO description, find that MFE-LO achieves the highest generalization
accuracy among the previously published margin-based feature elimination (MFE)
methods, discuss some limitations of MFE-LO, and find that our novel methods
outperform MFE-LO, attaining a lower test-set classification error rate. Our
novel methods give promising results both on several datasets that have a large
number of features and fall into the `large features, few samples' category,
and on datasets with a low-to-intermediate number of features. In particular,
the tunable ones among our methods, which do not employ the (non-tunable) LO
approach, can in the future be tuned more aggressively than here, to aim at
even higher performance.
|
1210.4469 | A Rule-based Model of a Hypothetical Zombie Outbreak: Insights on the
role of emotional factors during behavioral adaptation of an artificial
population | q-bio.PE cs.MA cs.SI physics.soc-ph | Models of infectious diseases have been developed since the first half of the
twentieth century, but most have not considered the role that individuals'
emotional factors may play in a population's behavioral adaptation during the
spread of a pandemic disease. Considering that local interactions among
individuals generate patterns that, at a large scale, govern the action of
masses, we have studied the behavioral adaptation of a population induced by
the spread of an infectious disease. To this end, we developed a rule-based
model of a hypothetical zombie outbreak, written in the Kappa language and
simulated using Gillespie's stochastic approach. Our study addresses the
specificity and heterogeneity of the system at the individual level, a highly
desirable characteristic that is mostly overlooked in classic epidemic models.
Together with the basic elements of a typical epidemiological model, our model
includes an individual representation of disease progression and of the travel
of agents among the affected cities. It also introduces an approximation to
measure the effect of panic in the population as a function of individual
situational awareness. In addition, the effect of two possible countermeasures
to overcome the zombie threat is considered: the availability of medical
treatment and the deployment of special armed forces. However, due to the
special characteristics of this hypothetical infectious disease, even with
exaggerated numbers of countermeasures, only a small percentage of the
population can be saved by the end of the simulations. As expected from a
rule-based modeling approach, the global dynamics of our model turned out to be
primarily governed by the mechanistic description of local interactions
occurring at the individual level. As a whole, people's situational awareness
proved essential in modulating the inner dynamics of the system.
|
1210.4481 | Epitome for Automatic Image Colorization | cs.CV cs.LG cs.MM | Image colorization adds color to grayscale images. It not only increases the
visual appeal of grayscale images, but also enriches the information contained
in scientific images that lack color information. Most existing methods of
colorization require laborious user interaction for scribbles or image
segmentation. To eliminate the need for human labor, we develop an automatic
image colorization method using epitome. Built upon a generative graphical
model, epitome is a condensed image appearance and shape model which also
proves to be an effective summary of color information for the colorization
task. We train the epitome from the reference images and perform inference in
the epitome to colorize grayscale images, rendering better colorization results
than previous methods in our experiments.
|
1210.4482 | Separation of Reliability and Secrecy in Rate-Limited Secret-Key
Generation | cs.IT math.IT | For a discrete or a continuous source model, we study the problem of
secret-key generation with one round of rate-limited public communication
between two legitimate users. Although we do not provide new bounds on the
wiretap secret-key (WSK) capacity for the discrete source model, we use an
alternative achievability scheme that may be useful for practical applications.
As a side result, we conveniently extend known bounds to the case of a
continuous source model. Specifically, we consider a sequential key-generation
strategy, that implements a rate-limited reconciliation step to handle
reliability, followed by a privacy amplification step performed with extractors
to handle secrecy. We prove that such a sequential strategy achieves the best
known bounds for the rate-limited WSK capacity (under the assumption of
degraded sources in the case of two-way communication). However, we show that,
unlike the case of rate-unlimited public communication, achieving the
reconciliation capacity in a sequential strategy does not necessarily lead to
achieving the best known bounds for the WSK capacity. Consequently, reliability
and secrecy can be treated successively but not independently, thereby
exhibiting a limitation of sequential strategies for rate-limited public
communication. Nevertheless, we provide scenarios for which reliability and
secrecy can be treated successively and independently, such as the two-way
rate-limited SK capacity, the one-way rate-limited WSK capacity for degraded
binary symmetric sources, and the one-way rate-limited WSK capacity for
Gaussian degraded sources.
|
1210.4502 | Comparing several heuristics for a packing problem | cs.NE | Packing problems are in general NP-hard, even in simple cases, and to date no
highly efficient algorithms are available for solving them. The
two-dimensional bin packing problem consists of packing all given rectangular
items into a rectangular bin of minimum size, without overlapping; the
restriction is that the items cannot be rotated. This paper compares a greedy
algorithm with a hybrid genetic algorithm to see which technique is better
suited to the given problem. The algorithms are tested on data of different
sizes.
|
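A minimal greedy baseline of the kind compared in the paper above can be sketched with a shelf heuristic for rotation-free 2D packing (this is an illustrative heuristic, not the paper's exact greedy algorithm, and the hybrid genetic algorithm is not reproduced): items are placed left to right on horizontal "shelves", and a new shelf is opened whenever an item fits on none of the existing ones.

```python
def shelf_pack(items, bin_width):
    """Greedy shelf packing without rotation. Items are (width, height);
    returns (placements as (x, y, w, h), total used bin height)."""
    items = sorted(items, key=lambda wh: wh[1], reverse=True)  # tallest first
    placements, shelves = [], []       # each shelf: [x_cursor, y_base, height]
    y = 0
    for w, h in items:
        if w > bin_width:
            raise ValueError("item wider than bin")
        for shelf in shelves:
            # place on the first shelf with enough remaining width and height
            if shelf[0] + w <= bin_width and h <= shelf[2]:
                placements.append((shelf[0], shelf[1], w, h))
                shelf[0] += w
                break
        else:                          # no shelf fits: open a new one
            shelves.append([w, y, h])
            placements.append((0, y, w, h))
            y += h
    return placements, y               # y is the used bin height

items = [(4, 3), (3, 3), (2, 2), (2, 2), (5, 1)]
placed, height = shelf_pack(items, bin_width=6)
print(height)  # -> 7
```

Heuristics like this run in near-linear time but can leave considerable wasted space, which is what metaheuristics such as the hybrid genetic algorithm attempt to reduce.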
1210.4505 | Coherent Fading Channels Driven by Arbitrary Inputs: Asymptotic
Characterization of the Constrained Capacity and Related Information- and
Estimation-Theoretic Quantities | cs.IT math.IT | We consider the characterization of the asymptotic behavior of the average
minimum mean-squared error (MMSE) and the average mutual information in scalar
and vector fading coherent channels, where the receiver knows the exact fading
channel state but the transmitter knows only the fading channel distribution,
driven by a range of inputs. We construct low-snr and -- at the heart of the
novelty of the contribution -- high-snr asymptotic expansions for the average
MMSE and the average mutual information for coherent channels subject to
Rayleigh fading, Ricean fading or Nakagami fading and driven by discrete inputs
(with finite support) or various continuous inputs. We reveal the role that the
so-called canonical MMSE in a standard additive white Gaussian noise (AWGN)
channel plays in the characterization of the asymptotic behavior of the average
MMSE and the average mutual information in a fading coherent channel. We also
reveal connections to and generalizations of the MMSE dimension. The most
relevant element that enables the construction of these non-trivial expansions
is the realization that the integral representation of the estimation- and
information- theoretic quantities can be seen as an h-transform of a kernel
with a monotonic argument: this enables the use of a novel asymptotic expansion
of integrals technique -- the Mellin transform method -- that leads immediately
to not only the high-snr but also the low-snr expansions of the average MMSE
and -- via the I-MMSE relationship -- to expansions of the average mutual
information. We conclude with applications of the results to the
characterization and optimization of the constrained capacity of a bank of
parallel independent coherent fading channels driven by arbitrary discrete
inputs.
|
1210.4507 | Submodularity and Optimality of Fusion Rules in Balanced Binary Relay
Trees | cs.IT cs.MA math.IT | We study the distributed detection problem in a balanced binary relay tree,
where the leaves of the tree are sensors generating binary messages. The root
of the tree is a fusion center that makes the overall decision. Every other
node in the tree is a fusion node that fuses two binary messages from its child
nodes into a new binary message and sends it to the parent node at the next
level. We assume that the fusion nodes at the same level use the same fusion
rule. We call a string of fusion rules used at different levels a fusion
strategy. We consider the problem of finding a fusion strategy that maximizes
the reduction in the total error probability between the sensors and the fusion
center. We formulate this problem as a deterministic dynamic program and
express the solution in terms of Bellman's equations. We introduce the notion
of string-submodularity and show that the reduction in the total error
probability is a string-submodular function. Consequently, we show that the
greedy strategy, which only maximizes the level-wise reduction in the total
error probability, is within a factor of the optimal strategy in terms of
reduction in the total error probability.
|
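The greedy level-by-level fusion strategy described above can be sketched for the two classical fusion rules for a pair of binary messages: AND (declare 1 only if both children say 1) and OR. The per-pair error recursions below are the standard ones for independent child messages; the paper's string-submodularity analysis and its suboptimality factor are not reproduced, and the initial error probabilities are arbitrary.

```python
def fuse(pf, pm, rule):
    """One fusion level: pf = false-alarm prob., pm = missed-detection prob.
    of each child; returns (pf, pm) of the fused parent message."""
    if rule == "AND":
        return pf * pf, 1 - (1 - pm) ** 2
    return 1 - (1 - pf) ** 2, pm * pm      # OR rule

def greedy_strategy(pf, pm, levels):
    """Greedily pick, at each level, the rule giving the smaller pf + pm."""
    strategy = []
    for _ in range(levels):
        cand = {r: fuse(pf, pm, r) for r in ("AND", "OR")}
        rule = min(cand, key=lambda r: sum(cand[r]))
        pf, pm = cand[rule]
        strategy.append(rule)
    return strategy, pf, pm

strategy, pf, pm = greedy_strategy(0.2, 0.2, 10)
print(strategy, pf + pm)
```

Running this shows the characteristic alternation between AND-like and OR-like levels and a total error probability that shrinks rapidly toward zero as the tree height grows.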
1210.4517 | Gaming the Game: Honeypot Venues Against Cheaters in Location-based
Social Networks | cs.SI cs.CR | The proliferation of location-based social networks (LBSNs) has provided the
community with an abundant source of information that can be exploited and used
in many different ways. LBSNs offer a number of conveniences to their
participants, such as - but not limited to - a list of places in the vicinity
of a user, recommendations for an area never explored before provided by other
peers, tracking of friends, monetary rewards in the form of special deals from
the venues visited as well as a cheap way of advertisement for the latter.
However, service convenience and security have followed disjoint paths in LBSNs
and users can misuse the offered features. The major threat for the service
providers is that of fake check-ins. Users can easily manipulate the
localization module of the underlying application and declare their presence in
a counterfeit location. The incentives for this behavior can be earning both
monetary and virtual rewards. Therefore, while fake check-ins driven by the
former motive can cause monetary losses, those aiming at virtual rewards are
also harmful. In particular, they can significantly degrade the services
offered by the LBSN providers (such as recommendations) or by third
parties using these data (e.g., urban planners). In this paper, we propose and
analyze a honeypot venue-based solution, enhanced with a challenge-response
scheme, that flags users who are generating fake spatial information. We
believe that our work will stimulate further research on this important topic
and will provide new directions with regards to possible solutions.
|
1210.4567 | Gender identity and lexical variation in social media | cs.CL | We present a study of the relationship between gender, linguistic style, and
social networks, using a novel corpus of 14,000 Twitter users. Prior
quantitative work on gender often treats this social variable as a female/male
binary; we argue for a more nuanced approach. By clustering Twitter users, we
find a natural decomposition of the dataset into various styles and topical
interests. Many clusters have strong gender orientations, but their use of
linguistic resources sometimes directly conflicts with the population-level
language statistics. We view these clusters as a more accurate reflection of
the multifaceted nature of gendered language styles. Previous corpus-based work
has also had little to say about individuals whose linguistic styles defy
population-level gender patterns. To identify such individuals, we train a
statistical classifier, and measure the classifier confidence for each
individual in the dataset. Examining individuals whose language does not match
the classifier's model for their gender, we find that they have social networks
that include significantly fewer same-gender social connections and that, in
general, social network homophily is correlated with the use of same-gender
language markers. Pairing computational methods and social theory thus offers a
new perspective on how gender emerges as individuals position themselves
relative to audiences, topics, and mainstream gender norms.
|
1210.4596 | Optimal Achievable Rates for Interference Networks with Random Codes | cs.IT math.IT | The optimal rate region for interference networks is characterized when
encoding is restricted to random code ensembles with superposition coding and
time sharing. A simple simultaneous nonunique decoding rule, under which each
receiver decodes for the intended message as well as the interfering messages,
is shown to achieve this optimal rate region regardless of the relative
strengths of signal, interference, and noise. This result implies that the
Han-Kobayashi bound, the best known inner bound on the capacity region of the
two-user-pair interference channel, cannot be improved merely by using the
optimal maximum likelihood decoder.
|
1210.4601 | A Direct Approach to Multi-class Boosting and Extensions | cs.LG | Boosting methods combine a set of moderately accurate weak learners to form a
highly accurate predictor. Despite the practical importance of multi-class
boosting, it has received far less attention than its binary counterpart. In
this work, we propose a fully-corrective multi-class boosting formulation which
directly solves the multi-class problem without dividing it into multiple
binary classification problems. In contrast, most previous multi-class boosting
algorithms decompose the multi-class boosting problem into multiple binary boosting
problems. By explicitly deriving the Lagrange dual of the primal optimization
problem, we are able to construct a column generation-based fully-corrective
approach to boosting which directly optimizes multi-class classification
performance. The new approach not only updates all weak learners' coefficients
at every iteration, but does so in a manner flexible enough to accommodate
various loss functions and regularizations. For example, it enables us to
introduce structural sparsity through mixed-norm regularization to promote
group sparsity and feature sharing. Boosting with shared features is
particularly beneficial in complex prediction problems where features can be
expensive to compute. Our experiments on various data sets demonstrate that our
direct multi-class boosting generalizes as well as, or better than, a range of
competing multi-class boosting methods. The end result is a highly effective
and compact ensemble classifier which can be trained in a distributed fashion.
|
1210.4614 | Random Sequences Based on the Divisor Pairs Function | cs.CR cs.IT math.IT | This paper investigates the randomness properties of a function of the
divisor pairs of a natural number. This function, the antecedents of which go
back to very ancient times, has randomness properties that can find applications in
cryptography, key distribution, and other problems of computer science. It is
shown that the function is aperiodic and it has excellent autocorrelation
properties.
|
1210.4643 | Econoinformatics meets Data-Centric Social Sciences | q-fin.GN cs.SI physics.soc-ph | Our society has been computerised and globalised due to the emergence and
spread of information and communication technology (ICT). This enables us to
investigate our own socio-economic systems based on large amounts of data on
human activities. In this article, methods of treating the complexity arising
from vast amounts of data, and of linking data from different sources, are
discussed. Furthermore, several examples are given of studies applying
econoinformatics to the Japanese stock exchange, foreign exchange markets,
domestic hotel booking data, and international flight booking data. The main
message is that spatio-temporal information is a key element in synthesising
data from different data sources.
|
1210.4657 | Mean-Field Learning: a Survey | cs.LG cs.GT cs.MA math.DS stat.ML | In this paper we study iterative procedures for stationary equilibria in
games with a large number of players. Most learning algorithms for games with
continuous action spaces are limited to strict contraction best reply maps in
which the Banach-Picard iteration converges with geometrical convergence rate.
When the best reply map is not a contraction, Ishikawa-based learning is
proposed. The algorithm is shown to behave well for Lipschitz continuous and
pseudo-contractive maps. However, the convergence rate is still unsatisfactory.
Several acceleration techniques are presented. We explain how cognitive users
can improve the convergence rate based on only a few measurements. The
methodology provides nice properties in mean field games where the payoff
function depends only on own-action and the mean of the mean-field (first
moment mean-field games). A learning framework that exploits the structure of
such games, called mean-field learning, is proposed. The proposed mean-field
learning framework is suitable not only for games but also for non-convex
global optimization problems. Then, we introduce mean-field learning without
feedback and examine the convergence to equilibria in beauty contest games,
which have interesting applications in financial markets. Finally, we provide a
fully distributed mean-field learning scheme and its speedup versions for
satisfactory solutions in wireless networks. We illustrate the convergence
rate improvement
with numerical examples.
|
1210.4663 | A PRQ Search Method for Probabilistic Objects | cs.DB cs.CG cs.DS | This article proposes a PRQ search method for probabilistic objects. The
main idea of our method is to use a strategy called \textit{pre-approximation}
that can reduce the initial problem to a highly simplified version, making the
rest of the steps easy to tackle. In particular, this strategy
itself is pretty simple and easy to implement. Furthermore, motivated by the
cost analysis, we further optimize our solution. The optimizations are mainly
based on two insights: (\romannumeral 1) the number of \textit{effective
subdivision}s is no more than 1; and (\romannumeral 2) an entity with the
larger \textit{span} is more likely to subdivide a single region. We
demonstrate the effectiveness and efficiency of our proposed approaches through
extensive experiments under various experimental settings.
|
1210.4695 | Regulating the information in spikes: a useful bias | q-bio.NC cs.IT cs.LG math.IT | The bias/variance tradeoff is fundamental to learning: increasing a model's
complexity can improve its fit on training data, but potentially worsens
performance on future samples. Remarkably, however, the human brain
effortlessly handles a wide range of complex pattern recognition tasks. On the
basis of these conflicting observations, it has been argued that useful biases
in the form of "generic mechanisms for representation" must be hardwired into
cortex (Geman et al.).
This note describes a useful bias that encourages cooperative learning which
is both biologically plausible and rigorously justified.
|
1210.4700 | Optimal Lempel-Ziv based lossy compression for memoryless data: how to
make the right mistakes | cs.IT math.IT | Compression refers to encoding data using bits, so that the representation
uses as few bits as possible. Compression can be lossless (i.e., the encoded
data can be recovered exactly from its representation) or lossy, where the
data is compressed more than in the lossless case but can still be recovered
to within a prespecified distortion metric. In this paper, we prove the optimality of
Codelet Parsing, a quasi-linear time algorithm for lossy compression of
sequences of bits that are independently and identically distributed (\iid),
under Hamming distortion. Codelet Parsing extends the lossless Lempel Ziv algorithm
to the lossy case---a task that has been a focus of the source coding
literature for the better part of two decades now. Given \iid sequences $\x$, the
expected length of the shortest lossy representation such that $\x$ can be
reconstructed to within distortion $\dist$ is given by the rate distortion
function, $\rd$. We prove the optimality of the Codelet Parsing algorithm for
lossy compression of memoryless bit sequences. It splits the input sequence
naturally into phrases, representing each phrase by a codelet, a potentially
distorted phrase of the same length. The codelets in the lossy representation
of a length-$n$ string ${\x}$ have length roughly $(\log n)/\rd$, and like the
lossless Lempel Ziv algorithm, Codelet Parsing constructs codebooks logarithmic
in the sequence length.
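(Illustrative aside, not part of the abstract.) For a memoryless Bernoulli(p) source under Hamming distortion, the rate distortion function mentioned above has the well-known closed form R(D) = h(p) - h(D) for 0 <= D <= min(p, 1-p), with h the binary entropy; a minimal Python sketch:

```python
import math

def binary_entropy(p):
    """Binary entropy h(p) in bits, with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_distortion(d, p=0.5):
    """R(D) = h(p) - h(D) for a Bernoulli(p) source under Hamming
    distortion; zero once D reaches min(p, 1 - p)."""
    if d >= min(p, 1 - p):
        return 0.0
    return binary_entropy(p) - binary_entropy(d)

# A fair-bit source compressed to distortion D = 0.11 needs about
# half a bit per symbol.
print(round(rate_distortion(0.11), 3))
```

At D = 0 this recovers the lossless entropy rate h(p), consistent with the lossless Lempel-Ziv limit.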
|
1210.4749 | An Auction Approach to Distributed Power Allocation for Multiuser
Cooperative Networks | cs.NI cs.GT cs.IT math.IT | This paper studies a wireless network where multiple users cooperate with
each other to improve the overall network performance. Our goal is to design an
optimal distributed power allocation algorithm that enables user cooperation,
in particular, to guide each user on the decision of transmission mode
selection and relay selection. Our algorithm has the nice interpretation of an
auction mechanism with multiple auctioneers and multiple bidders. Specifically,
in our proposed framework, each user acts as both an auctioneer (seller) and a
bidder (buyer). Each auctioneer determines its trading price and allocates
power to bidders, and each bidder chooses the demand from each auctioneer. By
following the proposed distributed algorithm, each user determines how much
power to reserve for its own transmission, how much power to purchase from
other users, and how much power to contribute for relaying the signals of
others. We derive the optimal bidding and pricing strategies that maximize the
weighted sum rates of the users. Extensive simulations are carried out to
verify our proposed approach.
|
1210.4752 | Discrete Signal Processing on Graphs | cs.SI physics.soc-ph | In social settings, individuals interact through webs of relationships. Each
individual is a node in a complex network (or graph) of interdependencies and
generates data, lots of data. We label the data by its source, or formally
stated, we index the data by the nodes of the graph. The resulting signals
(data indexed by the nodes) are far removed from time or image signals indexed
by well ordered time samples or pixels. DSP, discrete signal processing,
provides a comprehensive, elegant, and efficient methodology to describe,
represent, transform, analyze, process, or synthesize these well ordered time
or image signals. This paper extends DSP and its basic tenets to signals on
graphs, including filters, convolution, z-transform, impulse response, spectral
representation, Fourier transform, frequency response, and illustrates DSP on
graphs by classifying blogs, linearly predicting and compressing data from
irregularly located weather stations, or predicting behavior of customers of a
mobile service provider.
|
1210.4759 | From Unbalanced Initial Occupant Distribution to Balanced Exit Usage in
a Simulation Model of Pedestrian Dynamics | physics.soc-ph cs.MA | This contribution tests whether, and to what extent, a method of a
pedestrian simulation tool that attempts to make pedestrians walk in the
direction of estimated earliest arrival can help to automatically distribute
pedestrians - who are initially distributed arbitrarily in the scenario -
evenly across the various exits of the scenario.
|
1210.4778 | Average Consensus in the Presence of Delays and Dynamically Changing
Directed Graph Topologies | cs.MA cs.DC | Classical approaches for asymptotic convergence to the global average in a
distributed fashion typically assume timely and reliable exchange of
information between neighboring components of a given multi-component system.
These assumptions are not necessarily valid in practical settings due to
varying delays that might affect transmissions at different times, as well as
possible changes in the underlying interconnection topology (e.g., due to
component mobility). In this work, we propose protocols to overcome these
limitations. We first consider a fixed interconnection topology (captured by a
- possibly directed - graph) and propose a discrete-time protocol that can
reach asymptotic average consensus in a distributed fashion, despite the
presence of arbitrary (but bounded) delays in the communication links. The
protocol requires that each component has knowledge of the number of its
outgoing links (i.e., the number of components to which it sends information).
We subsequently extend the protocol to also handle changes in the underlying
interconnection topology and describe a variety of rather loose conditions
under which the modified protocol allows the components to reach asymptotic
average consensus. The proposed algorithms are illustrated via examples.
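As a minimal illustration (a hedged sketch of the classical push-sum/ratio-consensus idea, not the paper's delay-tolerant protocol; the graph and values below are made up), knowledge of out-degrees alone suffices for average consensus on a fixed, strongly connected directed graph:

```python
# Ratio-consensus (push-sum) sketch: each node splits its running sum x
# and running weight y evenly over its out-links plus a self-loop; the
# ratio x/y at every node converges to the global average.
def push_sum(values, out_neighbors, rounds=100):
    n = len(values)
    x = list(values)          # running sums
    y = [1.0] * n             # running weights
    for _ in range(rounds):
        new_x, new_y = [0.0] * n, [0.0] * n
        for i in range(n):
            targets = out_neighbors[i] + [i]   # include self-loop
            share_x = x[i] / len(targets)      # split by out-degree
            share_y = y[i] / len(targets)
            for j in targets:
                new_x[j] += share_x
                new_y[j] += share_y
        x, y = new_x, new_y
    return [xi / yi for xi, yi in zip(x, y)]

# Strongly connected directed ring on 3 nodes; average of [3, 6, 9] is 6.
est = push_sum([3.0, 6.0, 9.0], {0: [1], 1: [2], 2: [0]})
print([round(v, 3) for v in est])
```

The mixing matrix here is column-stochastic rather than row-stochastic, which is why the weight sequence y is needed: the ratio corrects for the uneven mass accumulation on directed graphs.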
|
1210.4787 | Approximating Acceptance Probabilities of CTMC-Paths on Multi-Clock
Deterministic Timed Automata | cs.SY cs.FL | We consider the problem of approximating the probability mass of the set of
timed paths under a continuous-time Markov chain (CTMC) that are accepted by a
deterministic timed automaton (DTA). As opposed to several existing works on
this topic, we consider DTA with multiple clocks. Our key contribution is an
algorithm to approximate these probabilities using finite difference methods.
An error bound is provided which indicates the approximation error. The
stepping stones towards this result include rigorous proofs for the
measurability of the set of accepted paths and the integral-equation system
characterizing the acceptance probability, and a differential characterization
for the acceptance probability.
|
1210.4791 | A computational formulation for constrained solid and liquid membranes
considering isogeometric finite elements | cs.CE physics.comp-ph | A geometrically exact membrane formulation is presented that is based on
curvilinear coordinates and isogeometric finite elements, and is suitable for
both solid and liquid membranes. The curvilinear coordinate system is used to
describe both the theory and the finite element equations of the membrane. In
the latter case this avoids the use of local Cartesian coordinates at the
element level. Consequently, no transformation of derivatives is required. The
formulation considers a split of the in-plane and out-of-plane membrane
contributions, which allows the construction of a stable formulation for liquid
membranes with constant surface tension. The proposed membrane formulation is
general, and accounts for dead and live loading, as well as enclosed volume,
area, and contact constraints. The new formulation is illustrated by several
challenging examples, considering linear and quadratic Lagrange elements, as
well as isogeometric elements based on quadratic NURBS and cubic T-splines. It
is seen that the isogeometric elements are much more accurate than standard
Lagrange elements. The gain is especially large for the liquid membrane
formulation since it depends explicitly on the surface curvature.
|
1210.4792 | Scalable Matrix-valued Kernel Learning for High-dimensional Nonlinear
Multivariate Regression and Granger Causality | stat.ML cs.LG | We propose a general matrix-valued multiple kernel learning framework for
high-dimensional nonlinear multivariate regression problems. This framework
allows a broad class of mixed norm regularizers, including those that induce
sparsity, to be imposed on a dictionary of vector-valued Reproducing Kernel
Hilbert Spaces. We develop a highly scalable and eigendecomposition-free
algorithm that orchestrates two inexact solvers for simultaneously learning
both the input and output components of separable matrix-valued kernels. As a
key application enabled by our framework, we show how high-dimensional causal
inference tasks can be naturally cast as sparse function estimation problems,
leading to novel nonlinear extensions of a class of Graphical Granger Causality
techniques. Our algorithmic developments and extensive empirical studies are
complemented by theoretical analyses in terms of Rademacher generalization
bounds.
|
1210.4795 | Full Rank Solutions for the MIMO Gaussian Wiretap Channel with an
Average Power Constraint | cs.IT math.IT | This paper considers a multiple-input multiple-output (MIMO) Gaussian wiretap
channel model, where there exists a transmitter, a legitimate receiver and an
eavesdropper, each equipped with multiple antennas. In this paper, we first
revisit the rank property of the optimal input covariance matrix that achieves
the secrecy capacity of the multiple antenna MIMO Gaussian wiretap channel
under the average power constraint. Next, we obtain necessary and sufficient
conditions on the MIMO wiretap channel parameters such that the optimal input
covariance matrix is full-rank, and we fully characterize the resulting
covariance matrix as well. Numerical results are presented to illustrate the
proposed theoretical findings.
|
1210.4808 | Basic Experiment Planning via Information Metrics: the RoboMendel
Problem | cs.IT math.IT | In this paper we outline some mathematical questions that emerge from trying
to "turn the scientific method into math". Specifically, we consider the
problem of experiment planning (choosing the best experiment to do next) in
explicit probabilistic and information theoretic terms. We formulate this as an
information measurement problem; that is, we seek a rigorous definition of an
information metric to measure the likely information yield of an experiment,
such that maximizing the information metric will indeed reliably choose the
best experiment to perform. We present the surprising result that defining the
metric purely in terms of prediction power on observable variables yields a
metric that can converge to the classical mutual information measuring how
informative the experimental observation is about an underlying hidden
variable. We show how the expectation potential information metric can compute
the "information rate" of an experiment as well as its total possible yield, and
the information value of experimental controls. To illustrate the utility of
these concepts for guiding fundamental scientific inquiry, we present an
extensive case study (RoboMendel) applying these metrics to propose sequences
of experiments for discovering the basic principles of genetics.
|
1210.4831 | Closing the Gap to the Capacity of APSK: Constellation Shaping and
Degree Distributions | cs.IT math.IT | Constellation shaping is an energy-efficient strategy involving the
transmission of lower-energy signals more frequently than higher-energy
signals. Previous work has shown that shaping is particularly effective when
used with coded amplitude phase-shift keying (APSK), a modulation that has been
popularized recently due to its inclusion in the DVB-S2 standard. While shaped
APSK can provide significant gains when used with standard off-the-shelf LDPC
codes, such as the codes in the DVB-S2 standard, additional non-negligible
gains can be achieved by optimizing the LDPC code with respect to the shaped
APSK modulation. In this paper, we optimize the degree distributions of the
LDPC code used in conjunction with shaped APSK. The optimization process is an
extension of the EXIT-chart technique of ten Brink et al., which has been
adapted to account for the shaped APSK modulation. We begin by constraining the
code to have the same number of distinct variable-node degrees as the codes in
the DVB-S2 standard, and show that the optimization provides 32-APSK systems
with an additional coding gain of 0.34 dB at a system rate of R=3 bits per
symbol, compared to shaped systems that use the long LDPC code from the DVB-S2
standard. We then increase the number of allowed variable node degrees by one,
and find that an additional 0.1 dB gain is achievable.
|
1210.4839 | Leveraging Side Observations in Stochastic Bandits | cs.LG stat.ML | This paper considers stochastic bandits with side observations, a model that
accounts for both the exploration/exploitation dilemma and relationships
between arms. In this setting, after pulling an arm i, the decision maker also
observes the rewards for some other actions related to i. We will see that this
model is suited to content recommendation in social networks, where users'
reactions may be endorsed or not by their friends. We provide efficient
algorithms based on upper confidence bounds (UCBs) to leverage this additional
information and derive new bounds improving on standard regret guarantees. We
also evaluate these policies in the context of movie recommendation in social
networks: experiments on real datasets show substantial learning rate speedups
ranging from 2.2x to 14x on dense networks.
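A hedged sketch of the general mechanism (a plain UCB1-style index policy with side observations folded into the arm statistics; the arm means, graph, and horizon below are illustrative assumptions, not the paper's exact algorithm or bounds):

```python
import math, random

def ucb_side_obs(means, neighbors, horizon=5000, seed=0):
    """UCB1-style policy: pulling arm i also reveals the rewards of its
    neighbors in a side-observation graph, updating them 'for free'."""
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n          # observations per arm (pulls + side obs.)
    sums = [0.0] * n
    for t in range(1, horizon + 1):
        def ucb(a):           # upper confidence bound index
            if counts[a] == 0:
                return float("inf")
            return sums[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
        arm = max(range(n), key=ucb)
        # observe the pulled arm and all of its neighbors
        for a in [arm] + neighbors[arm]:
            r = 1.0 if rng.random() < means[a] else 0.0
            counts[a] += 1
            sums[a] += r
    return max(range(n), key=lambda a: sums[a] / counts[a])

# Dense side observations let the learner identify the best arm quickly.
best = ucb_side_obs([0.2, 0.5, 0.8], {0: [1], 1: [0, 2], 2: [1]})
print(best)
```

Because each pull also refreshes the neighbors' empirical means, confidence intervals shrink faster on dense graphs, which is the mechanism behind the reported learning rate speedups.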
|
1210.4840 | Lifted Relax, Compensate and then Recover: From Approximate to Exact
Lifted Probabilistic Inference | cs.AI | We propose an approach to lifted approximate inference for first-order
probabilistic models, such as Markov logic networks. It is based on performing
exact lifted inference in a simplified first-order model, which is found by
relaxing first-order constraints, and then compensating for the relaxation.
These simplified models can be incrementally improved by carefully recovering
constraints that have been relaxed, also at the first-order level. This leads
to a spectrum of approximations, with lifted belief propagation on one end, and
exact lifted inference on the other. We discuss how relaxation, compensation,
and recovery can be performed, all at the first-order level, and show
empirically that our approach substantially improves on the approximations of
both propositional solvers and lifted belief propagation.
|
1210.4841 | An Efficient Message-Passing Algorithm for the M-Best MAP Problem | cs.AI cs.LG stat.ML | Much effort has been directed at algorithms for obtaining the highest
probability configuration in a probabilistic random field model known as the
maximum a posteriori (MAP) inference problem. In many situations, one could
benefit from having not just a single solution, but the top M most probable
solutions known as the M-Best MAP problem. In this paper, we propose an
efficient message-passing based algorithm for solving the M-Best MAP problem.
Specifically, our algorithm solves the recently proposed Linear Programming
(LP) formulation of M-Best MAP [7], while being orders of magnitude faster than
a generic LP-solver. Our approach relies on studying a particular partial
Lagrangian relaxation of the M-Best MAP LP which exposes a natural
combinatorial structure of the problem that we exploit.
|
1210.4842 | Causal Inference by Surrogate Experiments: z-Identifiability | cs.AI stat.ME | We address the problem of estimating the effect of intervening on a set of
variables X from experiments on a different set, Z, that is more accessible to
manipulation. This problem, which we call z-identifiability, reduces to
ordinary identifiability when Z is the empty set and, like the latter, can be
given a syntactic characterization using the do-calculus [Pearl, 1995; 2000]. We
provide a graphical necessary and sufficient condition for z-identifiability
for arbitrary sets X,Z, and Y (the outcomes). We further develop a complete
algorithm for computing the causal effect of X on Y using information provided
by experiments on Z. Finally, we use our results to prove completeness of
do-calculus relative to z-identifiability, a result that does not follow from
completeness relative to ordinary identifiability.
|
1210.4843 | Deterministic MDPs with Adversarial Rewards and Bandit Feedback | cs.GT cs.LG | We consider a Markov decision process with deterministic state transition
dynamics, adversarially generated rewards that change arbitrarily from round to
round, and a bandit feedback model in which the decision maker only observes
the rewards it receives. In this setting, we present a novel and efficient
online decision making algorithm named MarcoPolo. Under mild assumptions on the
structure of the transition dynamics, we prove that MarcoPolo enjoys a regret
of O(T^(3/4)sqrt(log(T))) against the best deterministic policy in hindsight.
Specifically, our analysis does not rely on the stringent unichain assumption,
which dominates much of the previous work on this topic.
|
1210.4845 | Exploiting Uniform Assignments in First-Order MPE | cs.AI | The MPE (Most Probable Explanation) query plays an important role in
probabilistic inference. MPE solution algorithms for probabilistic relational
models essentially adapt existing belief assessment methods, replacing summation
with maximization. But the rich structure and symmetries captured by relational
models together with the properties of the maximization operator offer an
opportunity for additional simplification with potentially significant
computational ramifications. Specifically, these models often have groups of
variables that define symmetric distributions over some population of formulas.
The maximizing choice for different elements of this group is the same. If we
can realize this ahead of time, we can significantly reduce the size of the
model by eliminating a potentially significant portion of random variables.
This paper defines the notion of uniformly assigned and partially uniformly
assigned sets of variables, shows how one can recognize these sets efficiently,
and how the model can be greatly simplified once we recognize them, with little
computational effort. We demonstrate the effectiveness of these ideas
empirically on a number of models.
|
1210.4846 | Variational Dual-Tree Framework for Large-Scale Transition Matrix
Approximation | cs.LG stat.ML | In recent years, non-parametric methods utilizing random walks on graphs have
been used to solve a wide range of machine learning problems, but in their
simplest form they do not scale well due to their quadratic complexity. In this
paper, a new dual-tree based variational approach for approximating the
transition matrix and efficiently performing the random walk is proposed. The
approach exploits a connection between kernel density estimation, mixture
modeling, and random walk on graphs in an optimization of the transition matrix
for the data graph that ties together edge transitions probabilities that are
similar. Compared to the de facto standard approximation method based on
k-nearest neighbors, we demonstrate orders of magnitude speedups without
sacrificing accuracy for Label Propagation tasks on benchmark data sets in
semi-supervised learning.
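For concreteness, the Label Propagation task mentioned above iterates soft labels through the row-normalized transition matrix of a similarity graph; a minimal dense-matrix sketch of that quadratic-cost baseline (the method the dual-tree approach approximates; the toy graph and seeds below are illustrative):

```python
def label_propagation(W, seeds, iters=200):
    """W: symmetric similarity matrix; seeds: {node: label in {0, 1}}.
    Iterates f <- P f, clamping the labeled nodes each round."""
    n = len(W)
    P = [[w / sum(row) for w in row] for row in W]   # transition matrix
    f = [float(seeds.get(i, 0.5)) for i in range(n)] # soft labels
    for _ in range(iters):
        f = [sum(P[i][j] * f[j] for j in range(n)) for i in range(n)]
        for i, lab in seeds.items():                 # clamp labeled nodes
            f[i] = float(lab)
    return [1 if fi > 0.5 else 0 for fi in f]

# Two clusters joined by a weak edge; one labeled seed per cluster.
W = [[0, 1, 1, 0.01, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 0, 0, 0],
     [0.01, 0, 0, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
print(label_propagation(W, {0: 0, 3: 1}))
```

Every iteration touches all n^2 entries of P, which is exactly the cost that a variational dual-tree approximation of the transition matrix is designed to avoid.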
|
1210.4848 | Uncertain Congestion Games with Assorted Human Agent Populations | cs.GT cs.MA | Congestion games model a wide variety of real-world resource congestion
problems, such as selfish network routing, traffic route guidance in congested
areas, taxi fleet optimization and crowd movement in busy areas. However,
existing research in congestion games assumes: (a) deterministic movement of
agents between resources; and (b) perfect rationality (i.e. maximizing their
own expected value) of all agents. Such assumptions are not reasonable in
dynamic domains where decision support has to be provided to humans. For
instance, in optimizing the performance of a taxi fleet serving a city,
movement of taxis can be involuntary or nondeterministic (decided by the
specific customer who hires the taxi) and more importantly, taxi drivers may
not follow advice provided by the decision support system (due to bounded
rationality of humans). To that end, we contribute: (a) a general framework for
representing congestion games under uncertainty for populations with assorted
notions of rationality. (b) a scalable approach for solving the decision
problem for perfectly rational agents which are in the mix with boundedly
rational agents; and (c) a detailed evaluation on a synthetic and real-world
data set to illustrate the usefulness of our new approach with respect to key
social welfare metrics in the context of an assorted human-agent population. An
interesting result from our experiments on a real-world taxi fleet optimization
problem is that it is better (in terms of revenue and operational efficiency)
for taxi drivers to follow perfectly rational strategies irrespective of the
percentage of drivers not following the advice.
|
1210.4849 | Toward Large-Scale Agent Guidance in an Urban Taxi Service | cs.MA cs.AI cs.GT | Empty taxi cruising represents a wastage of resources in the context of urban
taxi services. In this work, we seek to minimize such wastage. An analysis of a
large trace of taxi operations reveals that the services' inefficiency is
caused by drivers' greedy cruising behavior. We model the existing system as a
continuous time Markov chain. To address the problem, we propose that each taxi
be equipped with an intelligent agent that will guide the driver when cruising
for passengers. Then, drawing from AI literature on multiagent planning, we
explore two possible ways to compute such guidance. The first formulation
assumes fully cooperative drivers. This allows us, in principle, to compute
systemwide optimal cruising policy. This is modeled as a Markov decision
process. The second formulation assumes rational drivers, seeking to maximize
their own profit. This is modeled as a stochastic congestion game, a
specialization of stochastic games. A Nash equilibrium policy is proposed as the
solution to the game, where no driver has the incentive to singly deviate from
it. Empirical results show that both formulations improve the efficiency of the
service significantly.
|
1210.4850 | Markov Determinantal Point Processes | cs.LG cs.IR stat.ML | A determinantal point process (DPP) is a random process useful for modeling
the combinatorial problem of subset selection. In particular, DPPs encourage a
random subset Y to contain a diverse set of items selected from a base set Y.
For example, we might use a DPP to display a set of news headlines that are
relevant to a user's interests while covering a variety of topics. Suppose,
however, that we are asked to sequentially select multiple diverse sets of
items, for example, displaying new headlines day-by-day. We might want these
sets to be diverse not just individually but also through time, offering
headlines today that are unlike the ones shown yesterday. In this paper, we
construct a Markov DPP (M-DPP) that models a sequence of random sets {Yt}. The
proposed M-DPP defines a stationary process that maintains DPP margins.
Crucially, the induced union process Zt = Yt u Yt-1 is also marginally
DPP-distributed. Jointly, these properties imply that the sequence of random
sets are encouraged to be diverse both at a given time step as well as across
time steps. We describe an exact, efficient sampling procedure, and a method
for incrementally learning a quality measure over items in the base set Y based
on external preferences. We apply the M-DPP to the task of sequentially
displaying diverse and relevant news articles to a user with topic preferences.
|
1210.4851 | Learning to Rank With Bregman Divergences and Monotone Retargeting | cs.LG stat.ML | This paper introduces a novel approach for learning to rank (LETOR) based on
the notion of monotone retargeting. It involves minimizing a divergence between
all monotonic increasing transformations of the training scores and a
parameterized prediction function. The minimization is both over the
transformations as well as over the parameters. It is applied to Bregman
divergences, a large class of "distance like" functions that were recently
shown to be the unique class that is statistically consistent with the
normalized discounted gain (NDCG) criterion [19]. The algorithm uses
alternating projection style updates, in which one set of simultaneous
projections can be computed independent of the Bregman divergence and the other
reduces to parameter estimation of a generalized linear model. This results in
an easily implemented, efficiently parallelizable algorithm for the LETOR task
that enjoys global optimum guarantees under mild conditions. We present
empirical results on benchmark datasets showing that this approach can
outperform the state-of-the-art NDCG-consistent techniques.
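To make the alternating scheme concrete: under the squared-error Bregman divergence, the inner "retargeting" step, finding the best monotone transform of the training scores given the current predictions, reduces to isotonic regression, solvable by pool-adjacent-violators (PAV). A hedged sketch of that inner step only (illustrative, not the paper's full algorithm):

```python
def pav(y):
    """Isotonic (non-decreasing) least-squares fit to sequence y via
    pool-adjacent-violators: merge adjacent blocks that violate
    monotonicity into their weighted mean."""
    blocks = [[v, 1] for v in y]                 # [block mean, block size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:      # violator: pool blocks
            m1, n1 = blocks[i]
            m2, n2 = blocks[i + 1]
            blocks[i] = [(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2]
            del blocks[i + 1]
            i = max(i - 1, 0)                    # re-check previous block
        else:
            i += 1
    out = []
    for mean, size in blocks:
        out.extend([mean] * size)
    return out

print(pav([1.0, 3.0, 2.0, 4.0]))   # -> [1.0, 2.5, 2.5, 4.0]
```

The outer loop of monotone retargeting would alternate this projection with refitting the parameterized prediction function to the retargeted scores.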
|
1210.4852 | The Do-Calculus Revisited | cs.AI stat.ME | The do-calculus was developed in 1995 to facilitate the identification of
causal effects in non-parametric models. The completeness proofs of [Huang and
Valtorta, 2006] and [Shpitser and Pearl, 2006] and the graphical criteria of
[Tian and Shpitser, 2010] have laid this identification problem to rest. Recent
explorations unveil the usefulness of the do-calculus in three additional
areas: mediation analysis [Pearl, 2012], transportability [Pearl and
Bareinboim, 2011] and metasynthesis. Meta-synthesis (freshly coined) is the
task of fusing empirical results from several diverse studies, conducted on
heterogeneous populations and under different conditions, so as to synthesize
an estimate of a causal relation in some target environment, potentially
different from those under study. The talk surveys these results with emphasis
on the challenges posed by meta-synthesis. For background material, see
http://bayes.cs.ucla.edu/csl_papers.html
|
1210.4853 | Weighted Sets of Probabilities and MinimaxWeighted Expected Regret: New
Approaches for Representing Uncertainty and Making Decisions | cs.GT cs.AI q-fin.TR | We consider a setting where an agent's uncertainty is represented by a set of
probability measures, rather than a single measure. Measure-by-measure updating
of such a set of measures upon acquiring new information is well-known to
suffer from problems; agents are not always able to learn appropriately. To
deal with these problems, we propose using weighted sets of probabilities: a
representation where each measure is associated with a weight, which denotes
its significance. We describe a natural approach to updating in such a
situation and a natural approach to determining the weights. We then show how
this representation can be used in decision-making, by modifying a standard
approach to decision making-minimizing expected regret-to obtain minimax
weighted expected regret (MWER).We provide an axiomatization that characterizes
preferences induced by MWER both in the static and dynamic case.
|
1210.4854 | Semantic Understanding of Professional Soccer Commentaries | cs.CL cs.AI | This paper presents a novel approach to the problem of semantic parsing via
learning the correspondences between complex sentences and rich sets of events.
Our main intuition is that correct correspondences tend to occur more
frequently. Our model benefits from a discriminative notion of similarity to
learn the correspondence between a sentence and an event, and a ranking machinery
that scores the popularity of each correspondence. Our method can discover a
group of events (called macro-events) that best describes a sentence. We
evaluate our method on our novel dataset of professional soccer commentaries.
The empirical results show that our method significantly outperforms the
state-of-the-art.
|
1210.4855 | A Slice Sampler for Restricted Hierarchical Beta Process with
Applications to Shared Subspace Learning | cs.LG cs.CV stat.ML | Hierarchical beta process has found interesting applications in recent years.
In this paper we present a modified hierarchical beta process prior with
applications to hierarchical modeling of multiple data sources. The novel use
of the prior over a hierarchical factor model allows factors to be shared
across different sources. We derive a slice sampler for this model, enabling
tractable inference even when the likelihood and the prior over parameters are
non-conjugate. This allows the application of the model in much wider contexts
without restrictions. We present two different data generative models: a linear
Gaussian-Gaussian model for real-valued data and a linear Poisson-gamma model
for count data. Encouraging transfer learning results are shown for two
real-world applications: text modeling and content-based image retrieval.
|
1210.4856 | Exploiting compositionality to explore a large space of model structures | cs.LG stat.ML | The recent proliferation of richly structured probabilistic models raises the
question of how to automatically determine an appropriate model for a dataset.
We investigate this question for a space of matrix decomposition models which
can express a variety of widely used models from unsupervised learning. To
enable model selection, we organize these models into a context-free grammar
which generates a wide variety of structures through the compositional
application of a few simple rules. We use our grammar to generically and
efficiently infer latent components and estimate predictive likelihood for
nearly 2500 structures using a small toolbox of reusable algorithms. Using a
greedy search over our grammar, we automatically choose the decomposition
structure from raw data by evaluating only a small fraction of all models. The
proposed method typically finds the correct structure for synthetic data and
backs off gracefully to simpler models under heavy noise. It learns sensible
structures for datasets as diverse as image patches, motion capture, 20
Questions, and U.S. Senate votes, all using exactly the same code.
|
1210.4857 | Generalized Belief Propagation on Tree Robust Structured Region Graphs | cs.AI | This paper provides some new guidance in the construction of region graphs
for Generalized Belief Propagation (GBP). We connect the problem of choosing
the outer regions of a Loop-Structured Region Graph (SRG) to that of finding a
fundamental cycle basis of the corresponding Markov network. We also define a
new class of tree-robust Loop-SRG for which GBP on any induced (spanning) tree
of the Markov network, obtained by setting to zero the off-tree interactions,
is exact. This class of SRG is then mapped to an equivalent class of
tree-robust cycle bases on the Markov network. We show that a tree-robust cycle
basis can be identified by proving that, for every subset of cycles, the graph
obtained from the edges that participate in only a single cycle is multiply
connected. Using this we identify two classes of tree-robust cycle bases:
planar cycle bases and "star" cycle bases. In experiments we show that
tree-robustness can be successfully exploited as a design principle to improve
the accuracy and convergence of GBP.
|
1210.4859 | Mechanism Design for Cost Optimal PAC Learning in the Presence of
Strategic Noisy Annotators | cs.LG cs.GT stat.ML | We consider the problem of Probably Approximately Correct (PAC) learning of a
binary classifier from noisy labeled examples acquired from multiple annotators
(each characterized by a respective classification noise rate). First, we
consider the complete information scenario, where the learner knows the noise
rates of all the annotators. For this scenario, we derive a sample complexity
bound for the Minimum Disagreement Algorithm (MDA) on the number of labeled
examples to be obtained from each annotator. Next, we consider the incomplete
information scenario, where each annotator is strategic and holds the
respective noise rate as a private information. For this scenario, we design a
cost optimal procurement auction mechanism along the lines of Myerson's optimal
auction design framework in a non-trivial manner. This mechanism satisfies the
incentive compatibility property, thereby enabling the learner to elicit the
true noise rates of all the annotators.
|
1210.4860 | Spectral Estimation of Conditional Random Graph Models for Large-Scale
Network Data | cs.SI cs.LG physics.soc-ph stat.ML | Generative models for graphs have been typically committed to strong prior
assumptions concerning the form of the modeled distributions. Moreover, the
vast majority of currently available models are either only suitable for
characterizing some particular network properties (such as degree distribution
or clustering coefficient), or they are aimed at estimating joint probability
distributions, which is often intractable in large-scale networks. In this
paper, we first propose a novel network statistic, based on the Laplacian
spectrum of graphs, which allows us to dispense with any parametric assumption
concerning the modeled network properties. Second, we use the defined statistic
to develop the Fiedler random graph model, switching the focus from the
estimation of joint probability distributions to a more tractable conditional
estimation setting. After analyzing the dependence structure characterizing
Fiedler random graphs, we evaluate them experimentally in edge prediction over
several real-world networks, showing that they allow us to reach a much higher
prediction accuracy than various alternative statistical models.
|
1210.4861 | Uniform Solution Sampling Using a Constraint Solver As an Oracle | cs.AI | We consider the problem of sampling from solutions defined by a set of hard
constraints on a combinatorial space. We propose a new sampling technique that,
while enforcing a uniform exploration of the search space, leverages the
reasoning power of a systematic constraint solver in a black-box scheme. We
present a series of challenging domains, such as energy barriers and highly
asymmetric spaces, that reveal the difficulties introduced by hard constraints.
We demonstrate that standard approaches such as Simulated Annealing and Gibbs
Sampling are greatly affected, while our new technique can overcome many of
these difficulties. Finally, we show that our sampling scheme naturally defines
a new approximate model counting technique, which we empirically show to be
very accurate on a range of benchmark problems.
|
1210.4862 | Sample-efficient Nonstationary Policy Evaluation for Contextual Bandits | cs.LG stat.ML | We present and prove properties of a new offline policy evaluator for an
exploration learning setting which is superior to previous evaluators. In
particular, it simultaneously and correctly incorporates techniques from
importance weighting, doubly robust evaluation, and nonstationary policy
evaluation approaches. In addition, our approach allows generating longer
histories by careful control of a bias-variance tradeoff, and further decreases
variance by incorporating information about randomness of the target policy.
Empirical evidence from synthetic and real-world exploration learning problems
shows the new evaluator successfully unifies previous approaches and uses
information an order of magnitude more efficiently.
|
1210.4863 | DBN-Based Combinatorial Resampling for Articulated Object Tracking | cs.CV | The particle filter is an effective solution for tracking objects in video
sequences in complex situations. Its key idea is to estimate the density over
the possible states of the object using a weighted sample whose elements are
called particles. One of its crucial steps is resampling, in which particles are
resampled to avoid the degeneracy problem. In this paper, we introduce a new
resampling method called Combinatorial Resampling that exploits some features
of articulated objects to resample over an implicitly created sample of an
exponential size better representing the density to estimate. We prove that it
is sound and, through experiments on both challenging synthetic and real
video sequences, we show that it outperforms all classical resampling methods
both in terms of the quality of its results and in terms of response times.
|
1210.4864 | Graph-Coupled HMMs for Modeling the Spread of Infection | cs.SI physics.soc-ph stat.AP | We develop Graph-Coupled Hidden Markov Models (GCHMMs) for modeling the
spread of infectious disease locally within a social network. Unlike most
previous research in epidemiology, which typically models the spread of
infection at the level of entire populations, we successfully leverage mobile
phone data collected from 84 people over an extended period of time to model
the spread of infection on an individual level. Our model, the GCHMM, is an
extension of widely-used Coupled Hidden Markov Models (CHMMs), which allow
dependencies between state transitions across multiple Hidden Markov Models
(HMMs), to situations in which those dependencies are captured through the
structure of a graph, or to social networks that may change over time. The
benefit of making infection predictions on an individual level is enormous, as
it allows people to receive more personalized and relevant health advice.
|
1210.4865 | Scaling Up Decentralized MDPs Through Heuristic Search | cs.AI cs.MA | Decentralized partially observable Markov decision processes (Dec-POMDPs) are
rich models for cooperative decision-making under uncertainty, but are often
intractable to solve optimally (NEXP-complete). The transition and observation
independent Dec-MDP is a general subclass that has been shown to have
complexity in NP, but optimal algorithms for this subclass are still
inefficient in practice. In this paper, we first provide an updated proof that
an optimal policy does not depend on the histories of the agents, but only the
local observations. We then present a new algorithm based on heuristic search
that is able to expand search nodes by using constraint optimization. We show
experimental results comparing our approach with the state-of-the-art Dec-MDP
and Dec-POMDP solvers. These results show a reduction in computation time and
an increase in scalability by multiple orders of magnitude in a number of
benchmarks.
|
1210.4866 | A Bayesian Approach to Constraint Based Causal Inference | cs.AI stat.ME | We target the problem of accuracy and robustness in causal inference from
finite data sets. Some state-of-the-art algorithms produce clear output
complete with solid theoretical guarantees but are susceptible to propagating
erroneous decisions, while others are very adept at handling and representing
uncertainty, but need to rely on undesirable assumptions. Our aim is to combine
the inherent robustness of the Bayesian approach with the theoretical strength
and clarity of constraint-based methods. We use a Bayesian score to obtain
probability estimates on the input statements used in a constraint-based
procedure. These are subsequently processed in decreasing order of reliability,
letting more reliable decisions take precedence in case of conflicts, until a
single output model is obtained. Tests show that a basic implementation of the
resulting Bayesian Constraint-based Causal Discovery (BCCD) algorithm already
outperforms established procedures such as FCI and Conservative PC. It can also
indicate which causal decisions in the output have high reliability and which
do not.
|
1210.4867 | Lifted Relational Variational Inference | cs.LG stat.ML | Hybrid continuous-discrete models naturally represent many real-world
applications in robotics, finance, and environmental engineering. Inference
with large-scale models is challenging because relational structures
deteriorate rapidly during inference with observations. The main contribution
of this paper is an efficient relational variational inference algorithm that
factors large-scale probability models into simpler variational models, composed
of mixtures of iid (Bernoulli) random variables. The algorithm takes
probabilistic relational models of large-scale hybrid systems and converts them
to close-to-optimal variational models. Then, it efficiently calculates marginal
probabilities on the variational models by using latent (or lifted) variable
elimination or lifted stochastic sampling. This inference is unique because
it maintains the relational structure upon individual observations and during
inference steps.
|
1210.4868 | Graphical-model Based Multiple Testing under Dependence, with
Applications to Genome-wide Association Studies | stat.ME cs.CE stat.AP | Large-scale multiple testing tasks often exhibit dependence, and leveraging
the dependence between individual tests is still a challenging and important
problem in statistics. With recent advances in graphical models, it is feasible
to use them to perform multiple testing under dependence. We propose a multiple
testing procedure which is based on a Markov-random-field-coupled mixture
model. The ground truth of hypotheses is represented by a latent binary Markov
random field, and the observed test statistics appear as the coupled mixture
variables. The parameters in our model can be automatically learned by a novel
EM algorithm. We use an MCMC algorithm to infer the posterior probability that
each hypothesis is null (termed local index of significance), and the false
discovery rate can be controlled accordingly. Simulations show that the
numerical performance of multiple testing can be improved substantially by
using our procedure. We apply the procedure to a real-world genome-wide
association study on breast cancer, and we identify several SNPs with strong
association evidence.
|
1210.4869 | Response Aware Model-Based Collaborative Filtering | cs.LG cs.IR stat.ML | Previous work on recommender systems mainly focuses on fitting the ratings
provided by users. However, the response patterns, i.e., that some items are
rated while others are not, are generally ignored. We argue that failing to observe such
response patterns can lead to biased parameter estimation and sub-optimal model
performance. Although several pieces of work have tried to model users'
response patterns, they miss the effectiveness and interpretability of the
successful matrix factorization collaborative filtering approaches. To bridge
the gap, in this paper, we unify explicit response models and PMF to establish
the Response Aware Probabilistic Matrix Factorization (RAPMF) framework. We
show that RAPMF subsumes PMF as a special case. Empirically we demonstrate the
merits of RAPMF from various aspects.
|
1210.4870 | Crowdsourcing Control: Moving Beyond Multiple Choice | cs.AI cs.LG | To ensure quality results from crowdsourced tasks, requesters often aggregate
worker responses and use one of a plethora of strategies to infer the correct
answer from the set of noisy responses. However, all current models assume
prior knowledge of all possible outcomes of the task. While not an unreasonable
assumption for tasks that can be posited as multiple-choice questions (e.g.
n-ary classification), we observe that many tasks do not naturally fit this
paradigm, but instead demand a free-response formulation where the outcome
space is of infinite size (e.g. audio transcription). We model such tasks with
a novel probabilistic graphical model, and design and implement LazySusan, a
decision-theoretic controller that dynamically requests responses as necessary
in order to infer answers to these tasks. We also design an EM algorithm to
jointly learn the parameters of our model while inferring the correct answers
to multiple tasks at a time. Live experiments on Amazon Mechanical Turk
demonstrate the superiority of LazySusan at solving SAT Math questions,
eliminating 83.2% of the error and achieving greater net utility compared to
the state-of-the-art strategy, majority-voting. We also show in live experiments
that our EM algorithm outperforms majority-voting on a visualization task that
we design.
|
1210.4871 | Learning Mixtures of Submodular Shells with Application to Document
Summarization | cs.LG cs.CL cs.IR stat.ML | We introduce a method to learn a mixture of submodular "shells" in a
large-margin setting. A submodular shell is an abstract submodular function
that can be instantiated with a ground set and a set of parameters to produce a
submodular function. A mixture of such shells can then also be so instantiated
to produce a more complex submodular function. What our algorithm learns are
the mixture weights over such shells. We provide a risk bound guarantee when
learning in a large-margin structured-prediction setting using a projected
subgradient method when only approximate submodular optimization is possible
(such as with submodular function maximization). We apply this method to the
problem of multi-document summarization and produce the best results reported
so far on the widely used NIST DUC-05 through DUC-07 document summarization
corpora.
|
1210.4872 | Nested Dictionary Learning for Hierarchical Organization of Imagery and
Text | cs.LG cs.CV stat.ML | A tree-based dictionary learning model is developed for joint analysis of
imagery and associated text. The dictionary learning may be applied directly to
the imagery from patches, or to general feature vectors extracted from patches
or superpixels (using any existing method for image feature extraction). Each
image is associated with a path through the tree (from root to a leaf), and
each of the multiple patches in a given image is associated with one node in
that path. Nodes near the tree root are shared between multiple paths,
representing image characteristics that are common among different types of
images. Moving toward the leaves, nodes become specialized, representing
details in image classes. If available, words (text) are also jointly modeled,
with a path-dependent probability over words. The tree structure is inferred
via a nested Dirichlet process, and a retrospective stick-breaking sampler is
used to infer the tree depth and width.
|
1210.4874 | Dynamic Stochastic Orienteering Problems for Risk-Aware Applications | cs.AI cs.DS | Orienteering problems (OPs) are a variant of the well-known prize-collecting
traveling salesman problem, where the salesman needs to choose a subset of
cities to visit within a given deadline. OPs and their extensions with
stochastic travel times (SOPs) have been used to model vehicle routing problems
and tourist trip design problems. However, they suffer from two limitations:
travel times between cities are assumed to be time-independent, and the route
provided is independent of the risk preference (with respect to violating the
deadline) of the user. To address these issues, we make the following
contributions: We introduce (1) a dynamic SOP (DSOP) model, which is an
extension of SOPs with dynamic (time-dependent) travel times; (2) a
risk-sensitive criterion to allow for different risk preferences; and (3) a
local search algorithm to solve DSOPs with this risk-sensitive criterion. We
evaluated our algorithms on a real-world dataset for a theme park navigation
problem as well as synthetic datasets employed in the literature.
|