| id stringlengths 9-16 | title stringlengths 4-278 | categories stringlengths 5-104 | abstract stringlengths 6-4.09k |
|---|---|---|---|
1206.6876
|
Identification of Conditional Interventional Distributions
|
cs.AI stat.ME
|
The subject of this paper is the elucidation of effects of actions from
causal assumptions represented as a directed graph, and statistical knowledge
given as a probability distribution. In particular, we are interested in
predicting conditional distributions resulting from performing an action on a
set of variables and, subsequently, taking measurements of another set. We
provide a necessary and sufficient graphical condition for the cases where such
distributions can be uniquely computed from the available information, as well
as an algorithm which performs this computation whenever the condition holds.
Furthermore, we use our results to prove completeness of do-calculus [Pearl,
1995] for the same identification problem.
|
1206.6877
|
Inference in Hybrid Bayesian Networks Using Mixtures of Gaussians
|
cs.AI stat.ME
|
The main goal of this paper is to describe a method for exact inference in
general hybrid Bayesian networks (BNs) (with a mixture of discrete and
continuous chance variables). Our method consists of approximating general
hybrid Bayesian networks by a mixture of Gaussians (MoG) BNs. There exists a
fast algorithm by Lauritzen-Jensen (LJ) for making exact inferences in MoG
Bayesian networks, and there exists a commercial implementation of this
algorithm. However, this algorithm can only be used for MoG BNs. Some
limitations of such networks are as follows. All continuous chance variables
must have conditional linear Gaussian distributions, and discrete chance nodes
cannot have continuous parents. The methods described in this paper will enable
us to use the LJ algorithm for a bigger class of hybrid Bayesian networks. This
includes networks with continuous chance nodes with non-Gaussian distributions,
networks with no restrictions on the topology of discrete and continuous
variables, networks with conditionally deterministic variables that are a
nonlinear function of their continuous parents, and networks with continuous
chance variables whose variances are functions of their parents.
|
1206.6878
|
Efficient Selection of Disambiguating Actions for Stereo Vision
|
cs.CV
|
In many domains that involve the use of sensors, such as robotics or sensor
networks, there are opportunities to use some form of active sensing to
disambiguate data from noisy or unreliable sensors. These disambiguating
actions typically take time and expend energy. One way to choose the next
disambiguating action is to select the action with the greatest expected
entropy reduction, or information gain. In this work, we consider active
sensing in aid of stereo vision for robotics. Stereo vision is a powerful
sensing technique for mobile robots, but it can fail in scenes that lack strong
texture. In such cases, a structured light source, such as a vertical laser line,
can be used for disambiguation. By treating the stereo matching problem as a
specially structured HMM-like graphical model, we demonstrate that for a scan
line with n columns and maximum stereo disparity d, the entropy minimizing aim
point for the laser can be selected in O(nd) time, a cost no greater than that of the
stereo algorithm itself. In contrast, a typical HMM formulation would suggest
at least O(nd^2) time for the entropy calculation alone.
|
1206.6879
|
Practical Linear Value-approximation Techniques for First-order MDPs
|
cs.AI
|
Recent work on approximate linear programming (ALP) techniques for
first-order Markov Decision Processes (FOMDPs) represents the value function
linearly w.r.t. a set of first-order basis functions and uses linear
programming techniques to determine suitable weights. This approach offers the
advantage that it does not require simplification of the first-order value
function, and allows one to solve FOMDPs independent of a specific domain
instantiation. In this paper, we address several questions to enhance the
applicability of this work: (1) Can we extend the first-order ALP framework to
approximate policy iteration to address performance deficiencies of previous
approaches? (2) Can we automatically generate basis functions and evaluate
their impact on value function quality? (3) How can we decompose intractable
problems with universally quantified rewards into tractable subproblems? We
propose answers to these questions along with a number of novel optimizations
and provide a comparative empirical evaluation on logistics problems from the
ICAPS 2004 Probabilistic Planning Competition.
|
1206.6883
|
Learning Neighborhoods for Metric Learning
|
cs.LG
|
Metric learning methods have been shown to perform well on different learning
tasks. Many of them rely on target neighborhood relationships that are computed
in the original feature space and remain fixed throughout learning. As a
result, the learned metric reflects the original neighborhood relations. We
propose a novel formulation of the metric learning problem in which, in
addition to the metric, the target neighborhood relations are also learned in a
two-step iterative approach. The new formulation can be seen as a
generalization of many existing metric learning methods. The formulation
includes a target neighbor assignment rule that assigns different numbers of
neighbors to instances according to their quality; 'high quality' instances get
more neighbors. We experiment with two of its instantiations that correspond to
the metric learning algorithms LMNN and MCML and compare it to other metric
learning methods on a number of datasets. The experimental results show
state-of-the-art performance and provide evidence that learning the
neighborhood relations does improve predictive performance.
|
1206.6918
|
Source-Channel Coding for the Multiple-Access Relay Channel
|
cs.IT math.IT
|
This work considers reliable transmission of general correlated sources over
the multiple-access relay channel (MARC) and the multiple-access broadcast
relay channel (MABRC). In MARCs only the destination is interested in a
reconstruction of the sources, while in MABRCs both the relay and the
destination want to reconstruct the sources. We assume that both the relay and
the destination have correlated side information. We find sufficient conditions
for reliable communication based on operational separation, as well as
necessary conditions on the achievable source-channel rate. For correlated
sources transmitted over fading Gaussian MARCs and MABRCs we find conditions
under which informational separation is optimal.
|
1206.6921
|
Dworkin's Paradox
|
physics.soc-ph cs.SI
|
How to distribute welfare in a society is a key issue in the subject of
distributional justice, which is deeply involved with notions of fairness.
Following a thought experiment by Dworkin, this work considers a society of
individuals with different preferences on the welfare distribution and an
official to mediate the coordination among them. Based on a simple assumption
that an individual's welfare is proportional to how her preference is fulfilled
by the actual distribution, we show that an egalitarian preference is a strict
Nash equilibrium and can be favorable even in certain inhomogeneous situations.
These results suggest how communication can encourage and secure a notion of fairness.
|
1206.6938
|
MIMO Physical Layer Network Coding Based on VBLAST Detection
|
cs.IT math.IT
|
For the MIMO two-way relay channel, this paper proposes a novel scheme,
VBLAST-PNC, which transforms the two superimposed packets received by the relay
into their network-coded form. Unlike traditional schemes, which try to detect
each packet before network-coding them, VBLAST-PNC detects the sum of the two
packets before network coding. In particular, after first detecting the
second-layer signal in a 2-by-2 MIMO system with VBLAST, we cancel only part of
the detected signal from the first layer, rather than canceling all of its
components. We then directly map the obtained signal, the sum of the first and
second layers, to its network-coded form. With such partial interference
cancellation, the error-propagation effect is mitigated and performance is
improved, as shown in simulations.
|
1206.7038
|
Comments on "Comments on "Prediction of Subharmonic Oscillation in
Switching Converters Under Different Control Strategies""
|
cs.SY math.DS nlin.CD
|
arXiv admin note: This submission has been removed by arXiv administrators
due to an unprofessional personal attack.
|
1206.7050
|
An Analysis of Interactions Within and Between Extreme Right Communities
in Social Media
|
cs.SI cs.CY physics.soc-ph
|
Many extreme right groups have had an online presence for some time through
the use of dedicated websites. This has been accompanied by increased activity
in social media platforms in recent years, enabling the dissemination of
extreme right content to a wider audience. In this paper, we present an
analysis of the activity of a selection of such groups on Twitter, using
network representations based on reciprocal follower and interaction
relationships, while also analyzing topics found in their corresponding tweets.
International relationships between certain extreme right groups across
geopolitical boundaries are initially identified. Furthermore, we also discover
stable communities of accounts within local interaction networks, in addition
to associated topics, where the underlying extreme right ideology of these
communities is often identifiable.
|
1206.7051
|
Stochastic Variational Inference
|
stat.ML cs.AI stat.CO stat.ME
|
We develop stochastic variational inference, a scalable algorithm for
approximating posterior distributions. We develop this technique for a large
class of probabilistic models and we demonstrate it with two probabilistic
topic models, latent Dirichlet allocation and the hierarchical Dirichlet
process topic model. Using stochastic variational inference, we analyze several
large collections of documents: 300K articles from Nature, 1.8M articles from
The New York Times, and 3.8M articles from Wikipedia. Stochastic inference can
easily handle data sets of this size and outperforms traditional variational
inference, which can only handle a smaller subset. (We also show that the
Bayesian nonparametric topic model outperforms its parametric counterpart.)
Stochastic variational inference lets us apply complex Bayesian models to
massive data sets.
|
1206.7064
|
Software Verification and Graph Similarity for Automated Evaluation of
Students' Assignments
|
cs.AI
|
In this paper we promote introducing software verification and control flow
graph similarity measurement in automated evaluation of students' programs. We
present a new grading framework that merges results obtained by combination of
these two approaches with results obtained by automated testing, leading to
improved quality and precision of automated grading. These two approaches are
also useful in providing comprehensible feedback that can help students
improve the quality of their programs. We also present our corresponding tools
that are publicly available and open source. The tools are based on LLVM
low-level intermediate code representation, so they could be applied to a
number of programming languages. Experimental evaluation of the proposed
grading framework is performed on a corpus of university students' programs
written in programming language C. Results of the experiments show that
automatically generated grades are highly correlated with manually determined
grades suggesting that the presented tools can find real-world applications in
studying and grading.
|
1206.7112
|
A Hybrid Method for Distance Metric Learning
|
cs.LG cs.IR stat.ML
|
We consider the problem of learning a measure of distance among vectors in a
feature space and propose a hybrid method that simultaneously learns from
similarity ratings assigned to pairs of vectors and class labels assigned to
individual vectors. Our method is based on a generative model in which class
labels can provide information that is not encoded in feature vectors but yet
relates to perceived similarity between objects. Experiments with synthetic
data as well as a real medical image retrieval problem demonstrate that
leveraging class labels through use of our method improves retrieval
performance significantly.
|
1207.0016
|
Bounds and Capacity Theorems for Cognitive Interference Channels with
State
|
cs.IT math.IT
|
A class of cognitive interference channels with state is investigated, in
which two transmitters (transmitters 1 and 2) communicate with two receivers
(receivers 1 and 2) over an interference channel. The two transmitters jointly
transmit a common message to the two receivers, and transmitter 2 also sends a
separate message to receiver 2. The channel is corrupted by an independent and
identically distributed (i.i.d.) state sequence. The scenario in which the
state sequence is noncausally known only at transmitter 2 is first studied. For
the discrete memoryless channel and its degraded version, inner and outer
bounds on the capacity region are obtained. The capacity region is
characterized for the degraded semideterministic channel and channels that
satisfy a less noisy condition. The Gaussian channels are further studied,
which are partitioned into two cases based on how the interference compares
with the signal at receiver 1. For each case, inner and outer bounds on the
capacity region are derived, and a partial boundary of the capacity region is
characterized. The full capacity region is characterized for channels that
satisfy certain conditions. The second scenario in which the state sequence is
noncausally known at both transmitter 2 and receiver 2 is further studied. The
capacity region is obtained for both the discrete memoryless and Gaussian
channels. It is also shown that this capacity is achieved by certain Gaussian
channels with state noncausally known only at transmitter 2.
|
1207.0017
|
Identifying Topical Twitter Communities via User List Aggregation
|
cs.SI physics.soc-ph
|
A particular challenge in the area of social media analysis is how to find
communities within a larger network of social interactions. Here a community
may be a group of microblogging users who post content on a coherent topic, or
who are associated with a specific event or news story. Twitter provides the
ability to curate users into lists, corresponding to meaningful topics or
themes. Here we describe an approach for crowdsourcing the list building
efforts of many different Twitter users, in order to identify topical
communities. This approach involves the use of ensemble community finding to
produce stable groupings of user lists, and by extension, individual Twitter
users. We examine this approach in the context of a case study surrounding the
detection of communities on Twitter relating to the London 2012 Olympics.
|
1207.0018
|
Quasi-Orthogonal Space-Time-Frequency Trellis Codes for MIMO-OFDM
Systems
|
cs.IT math.IT
|
The main objective of this project is to design the full-rate
Space-Time-Frequency Trellis code (STFTC), which is based on Quasi-Orthogonal
designs for Multiple-Input Multiple-Output (MIMO) Orthogonal Frequency Division
Multiplexing (OFDM) systems. The proposed Quasi-Orthogonal Space-Time-Frequency
Trellis code combines set partitioning and the structure of quasi-orthogonal
space-frequency designs in a systematic way. In addition to multipath diversity
and transmit diversity, the proposed code provides receive diversity, array
gain, and achieves a high coding gain over a frequency-selective fading channel.
As simulation results demonstrate, the code outperforms the existing
Quasi-Orthogonal Space-Time-Frequency Trellis codes in terms of frame error
rate performance.
|
1207.0023
|
Subspace System Identification via Weighted Nuclear Norm Optimization
|
cs.SY
|
We present a subspace system identification method based on weighted nuclear
norm approximation. The weight matrices used in the nuclear norm minimization
are the same weights as used in standard subspace identification methods. We
show that the inclusion of the weights improves the performance in terms of fit
on validation data. As a second benefit, the weights reduce the size of the
optimization problems that need to be solved. Experimental results from
randomly generated examples as well as from the Daisy benchmark collection are
reported. The key to an efficient implementation is the use of the alternating
direction method of multipliers to solve the optimization problem.
|
1207.0032
|
Linear Coherent Estimation with Spatial Collaboration
|
cs.IT math.IT
|
A power constrained sensor network that consists of multiple sensor nodes and
a fusion center (FC) is considered, where the goal is to estimate a random
parameter of interest. In contrast to the distributed framework, the sensor
nodes may be partially connected, where individual nodes can update their
observations by (linearly) combining observations from other adjacent nodes.
The updated observations are communicated to the FC by transmitting through a
coherent multiple access channel. The optimal collaborative strategy is
obtained by minimizing the expected mean-square-error subject to power
constraints at the sensor nodes. Each sensor can utilize its available power
for both collaboration with other nodes and transmission to the FC. Two kinds
of constraints, namely cumulative and individual power constraints, are
considered. The effects due to imperfect information about observation and
channel gains are also investigated. The resulting performance improvement is
illustrated analytically through the example of a homogeneous network with
equicorrelated parameters. Assuming random geometric graph topology for
collaboration, numerical results demonstrate a significant reduction in
distortion even for a moderately connected network, particularly in the low
local-SNR regime.
|
1207.0036
|
The Kullback-Leibler Divergence as a Lyapunov Function for Incentive
Based Game Dynamics
|
math.DS cs.GT cs.IT math.IT
|
It has been shown that the Kullback-Leibler divergence is a Lyapunov function
for the replicator equations at evolutionary stable states, or ESS. In this
paper we extend the result to a more general class of game dynamics. As a
result, sufficient conditions can be given for the asymptotic stability of rest
points for the entire class of incentive dynamics. The previously known results
can be shown as corollaries of the main theorem.
|
1207.0037
|
The Uniform Distribution in Incentive Dynamics
|
cs.GT cs.IT math.DS math.IT
|
The uniform distribution is an important counterexample in game theory as
many of the canonical game dynamics have been shown not to converge to the
equilibrium in certain cases. In particular, none of the canonical game dynamics
converge to the uniform distribution in a form of rock-paper-scissors where the
amount an agent can lose exceeds the amount it can win, even though the uniform
distribution is the unique Nash equilibrium. I will show that certain incentive dynamics are
asymptotically stable at the uniform distribution when it is an incentive
equilibrium.
|
1207.0052
|
The Complexity of Learning Principles and Parameters Grammars
|
cs.FL cs.CL
|
We investigate models for learning the class of context-free and
context-sensitive languages (CFLs and CSLs). We begin with a brief discussion
of some early hardness results which show that unrestricted language learning
is impossible, and unrestricted CFL learning is computationally infeasible; we
then briefly survey the literature on algorithms for learning restricted
subclasses of the CFLs. Finally, we introduce a new family of subclasses, the
principled parametric context-free grammars (and a corresponding family of
principled parametric context-sensitive grammars), which roughly model the
"Principles and Parameters" framework in psycholinguistics. We present three
hardness results: first, that the PPCFGs are not efficiently learnable given
equivalence and membership oracles, second, that the PPCFGs are not efficiently
learnable from positive presentations unless P = NP, and third, that the PPCSGs
are not efficiently learnable from positive presentations unless integer
factorization is in P.
|
1207.0057
|
Implicit Density Estimation by Local Moment Matching to Sample from
Auto-Encoders
|
cs.LG stat.ML
|
Recent work suggests that some auto-encoder variants do a good job of
capturing the local manifold structure of the unknown data generating density.
This paper contributes to the mathematical understanding of this phenomenon and
helps define better justified sampling algorithms for deep learning based on
auto-encoder variants. We consider an MCMC process in which each step samples
from a Gaussian whose mean and covariance matrix depend on the previous state;
this process defines, through its asymptotic distribution, a target density. First, we show that good
choices (in the sense of consistency) for these mean and covariance functions
are the local expected value and local covariance under that target density.
Then we show that an auto-encoder with a contractive penalty captures
estimators of these local moments in its reconstruction function and its
Jacobian. A contribution of this work is thus a novel alternative to
maximum-likelihood density estimation, which we call local moment matching. It
also justifies a recently proposed sampling algorithm for the Contractive
Auto-Encoder and extends it to the Denoising Auto-Encoder.
|
1207.0097
|
Cooperative Target Realization in Multi-Agent Systems Allowing
Choice-Based Actions
|
cs.SY
|
In this paper, we study cooperative multi-agent systems in which the target
objective and the controls exercised by the agents are dependent on the choices
they made at initial system time. Such systems have been investigated in
several recently published papers, mainly from the perspective of system
analysis on issues such as control communication complexity, control energy
cost and the feasibility of realization of target functions. This paper
continues this line of research by developing optimal control design
methodology for linear systems that are collaboratively manipulated by multiple
agents based on their distributed choices. For target matrices that satisfy
particular structural constraints, we derive control algorithms that can
achieve the specified targets with minimum control cost. We compare
state-feedback as well as open-loop control strategies for target realization
and extend the optimality result to an arbitrary target matrix. The optimal
control solutions are obtained by minimizing the average control cost subject
to the set of specified target-state constraints by means of modern variation
theory and the Lagrange multiplier method.
|
1207.0099
|
Density-Difference Estimation
|
cs.LG stat.ML
|
We address the problem of estimating the difference between two probability
densities. A naive approach is a two-step procedure of first estimating two
densities separately and then computing their difference. However, such a
two-step procedure does not necessarily work well because the first step is
performed without regard to the second step and thus a small error incurred in
the first stage can cause a big error in the second stage. In this paper, we
propose a single-shot procedure for directly estimating the density difference
without separately estimating two densities. We derive a non-parametric
finite-sample error bound for the proposed single-shot density-difference
estimator and show that it achieves the optimal convergence rate. The
usefulness of the proposed method is also demonstrated experimentally.
|
1207.0117
|
Rule Based Expert System for Cerebral Palsy Diagnosis
|
cs.AI
|
The use of Artificial Intelligence is finding prominence not only in core
computer areas, but also in cross disciplinary areas including medical
diagnosis. In this paper, we present a rule based Expert System used in
diagnosis of Cerebral Palsy. The expert system takes user input and depending
on the symptoms of the patient, diagnoses if the patient is suffering from
Cerebral Palsy. The Expert System also classifies the Cerebral Palsy as mild,
moderate or severe based on the presented symptoms.
|
1207.0120
|
Distributed Secret Dissemination Across a Network
|
cs.CR cs.IT math.IT
|
Shamir's (n, k) threshold secret sharing is an important component of several
cryptographic protocols, such as those for secure multiparty-computation and
key management. These protocols typically assume the presence of direct
communication links from the dealer to all participants, in which case the
dealer can directly pass the shares of the secret to each participant. In this
paper, we consider the problem of secret sharing when the dealer does not have
direct communication links to all the participants, and instead, the dealer and
the participants form a general network. Existing methods are based on secure
message transmissions from the dealer to each participant requiring
considerable coordination in the network. In this paper, we present a
distributed algorithm for disseminating shares over a network, which we call
the SNEAK algorithm, requiring each node to know only the identities of its
one-hop neighbours. While SNEAK imposes a stronger condition on the network by
requiring the dealer to be what we call k-propagating rather than k-connected
as required by the existing solutions, we show that in addition to being
distributed, SNEAK achieves significant reduction in the communication cost and
the amount of randomness required.
|
1207.0132
|
Answering Table Queries on the Web using Column Keywords
|
cs.DB
|
We present the design of a structured search engine which returns a
multi-column table in response to a query consisting of keywords describing
each of its columns. We answer such queries by exploiting the millions of
tables on the Web because these are much richer sources of structured knowledge
than free-format text. However, a corpus of tables harvested from arbitrary
HTML web pages presents huge challenges of diversity and redundancy not seen in
centrally edited knowledge bases. We concentrate on one concrete task in this
paper. Given a set of Web tables T1, . . ., Tn, and a query Q with q sets of
keywords Q1, . . ., Qq, decide for each Ti if it is relevant to Q and if so,
identify the mapping between the columns of Ti and query columns. We represent
this task as a graphical model that jointly maps all tables by incorporating
diverse sources of clues spanning matches in different parts of the table,
corpus-wide co-occurrence statistics, and content overlap across table columns.
We define a novel query segmentation model for matching keywords to table
columns, and a robust mechanism of exploiting content overlap across table
columns. We design efficient inference algorithms based on bipartite matching
and constrained graph cuts to solve the joint labeling task. Experiments on a
workload of 59 queries over a 25-million web table corpus show a significant
boost in accuracy over baseline IR methods.
|
1207.0133
|
Fast Response to Infection Spread and Cyber Attacks on Large-Scale
Networks
|
cs.SI cs.CR physics.soc-ph q-bio.QM
|
We present a strategy for designing fast methods of response to cyber attacks
and infection spread on complex weighted networks. In these networks, nodes can
be interpreted as primitive elements of the system, and weighted edges reflect
the strength of interaction among these elements. The proposed strategy belongs
to the family of multiscale methods whose goal is to approximate the system at
multiple scales of coarseness and to obtain a solution of microscopic scale by
combining the information from coarse scales. In recent years these methods
have demonstrated their potential for solving optimization and analysis
problems on large-scale networks. We consider an optimization problem that is
based on the SIS epidemiological model. The objective is to detect the network
nodes that have to be immunized in order to keep a low level of infection in
the system.
|
1207.0134
|
SODA: Generating SQL for Business Users
|
cs.DB
|
The purpose of data warehouses is to enable business analysts to make better
decisions. Over the years the technology has matured and data warehouses have
become extremely successful. As a consequence, more and more data has been
added to the data warehouses and their schemas have become increasingly
complex. These systems still work well for generating pre-canned
reports. However, with their current complexity, they tend to be a poor match
for non tech-savvy business analysts who need answers to ad-hoc queries that
were not anticipated. This paper describes the design, implementation, and
experience of the SODA system (Search over DAta Warehouse). SODA bridges the
gap between the business needs of analysts and the technical complexity of
current data warehouses. SODA enables a Google-like search experience for data
warehouses by taking keyword queries of business users and automatically
generating executable SQL. The key idea is to use a graph pattern matching
algorithm that uses the metadata model of the data warehouse. Our results with
real data from a global player in the financial services industry show that
SODA produces queries with high precision and recall, and makes it much easier
for business users to interactively explore highly-complex data warehouses.
|
1207.0135
|
Privacy Preservation by Disassociation
|
cs.DB
|
In this work, we focus on protection against identity disclosure in the
publication of sparse multidimensional data. Existing multidimensional
anonymization techniques (a) protect the privacy of users either by altering the
set of quasi-identifiers of the original data (e.g., by generalization or
suppression) or by adding noise (e.g., using differential privacy) and/or (b)
assume a clear distinction between sensitive and non-sensitive information and
sever the possible linkage. In many real world applications the above
techniques are not applicable. For instance, consider web search query logs.
Suppressing or generalizing anonymization methods would remove the most
valuable information in the dataset: the original query terms. Additionally,
web search query logs contain millions of query terms which cannot be
categorized as sensitive or non-sensitive since a term may be sensitive for a
user and non-sensitive for another. Motivated by this observation, we propose
an anonymization technique termed disassociation that preserves the original
terms but hides the fact that two or more different terms appear in the same
record. We protect the users' privacy by disassociating record terms that
participate in identifying combinations. This way the adversary cannot
associate with high probability a record with a rare combination of terms. To
the best of our knowledge, our proposal is the first to employ such a technique
to provide protection against identity disclosure. We propose an anonymization
algorithm based on our approach and evaluate its performance on real and
synthetic datasets, comparing it against other state-of-the-art methods based
on generalization and differential privacy.
|
1207.0136
|
Supercharging Recommender Systems using Taxonomies for Learning User
Purchase Behavior
|
cs.DB
|
Recommender systems based on latent factor models have been effectively used
for understanding user interests and predicting future actions. Such models
work by projecting the users and items into a smaller dimensional space,
thereby clustering similar users and items together and subsequently computing
similarity between unknown user-item pairs. When user-item interactions are
sparse (sparsity problem) or when new items continuously appear (cold start
problem), these models perform poorly. In this paper, we exploit the
combination of taxonomies and latent factor models to mitigate these issues and
improve recommendation accuracy. We observe that taxonomies provide structure
similar to that of a latent factor model: namely, they impose human-labeled
categories (clusters) over items. This leads to our proposed taxonomy-aware
latent factor model (TF) which combines taxonomies and latent factors using
additive models. We develop efficient algorithms to train the TF models, which
scale to large numbers of users/items, and develop scalable
inference/recommendation algorithms by exploiting the structure of the
taxonomy. In addition, we extend the TF model to account for the temporal
dynamics of user interests using high-order Markov chains. To deal with
large-scale data, we develop a parallel multi-core implementation of our TF
model. We empirically evaluate the TF model for the task of predicting user
purchases using a real-world shopping dataset spanning more than a million
users and products. Our experiments demonstrate the benefits of using our TF
models over existing approaches, in terms of both prediction accuracy and
running time.
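The additive combination can be sketched in miniature (an assumed minimal form, not the paper's full model: the item representation is its own latent vector plus its category's vector, so sparse or cold-start items borrow strength from their category; all vectors below are hypothetical):

```python
def predict(user_vec, item_vec, category_vec):
    """Additive taxonomy-aware score: user . (item + category)."""
    combined = [i + c for i, c in zip(item_vec, category_vec)]
    return sum(u * f for u, f in zip(user_vec, combined))

user = [0.5, 1.0]
item = [0.2, -0.1]       # sparse item: weak individual signal
category = [0.8, 0.4]    # hypothetical "electronics" category factor
score = predict(user, item, category)
# = 0.5*(0.2+0.8) + 1.0*(-0.1+0.4) = 0.8
assert abs(score - 0.8) < 1e-9
```

A brand-new item with an all-zero latent vector still receives a meaningful score from its category factor, which is the cold-start benefit the abstract describes.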
|
1207.0137
|
DBToaster: Higher-order Delta Processing for Dynamic, Frequently Fresh
Views
|
cs.DB
|
Applications ranging from algorithmic trading to scientific data analysis
require realtime analytics based on views over databases that change at very
high rates. Such views have to be kept fresh at low maintenance cost and
latencies. At the same time, these views have to support classical SQL, rather
than window semantics, to enable applications that combine current with aged or
historical data. In this paper, we present viewlet transforms, a recursive
finite differencing technique applied to queries. The viewlet transform
materializes a query and a set of its higher-order deltas as views. These views
support each other's incremental maintenance, leading to a reduced overall view
maintenance cost. The viewlet transform of a query admits efficient evaluation,
the elimination of certain expensive query operations, and aggressive
parallelization. We develop viewlet transforms into a workable query execution
technique, present a heuristic and cost-based optimization framework, and
report on experiments with a prototype dynamic data management system that
combines viewlet transforms with an optimizing compilation technique. The
system supports tens of thousands of complete view refreshes a second for a
wide range of queries.
|
1207.0138
|
Real Time Discovery of Dense Clusters in Highly Dynamic Graphs:
Identifying Real World Events in Highly Dynamic Environments
|
cs.DB cs.SI physics.soc-ph
|
Due to their real time nature, microblog streams are a rich source of dynamic
information, for example, about emerging events. Existing techniques for
discovering such events from a microblog stream in real time (such as Twitter
trending topics) have several lacunae when used for discovering emerging
events; extant graph-based event detection techniques are not practical in
microblog settings due to their complexity; and conventional techniques, which
have been developed for blogs, web-pages, etc., involving the use of keyword
search, are only useful for finding information about known events. Hence, in
this paper, we present techniques to discover events that are unraveling in
microblog message streams in real time so that such events can be reported as
soon as they occur. We model the problem as discovering dense clusters in
highly dynamic graphs. Despite many recent advances in graph analysis, ours is
the first technique to identify dense clusters in massive and highly dynamic
graphs in real time. Given the characteristics of microblog streams, in order
to find clusters without missing any events, we propose and exploit a novel
graph property which we call short-cycle property. Our algorithms find these
clusters efficiently in spite of rapid changes to the microblog streams.
Further we present a novel ranking function to identify the important events.
Besides proving the correctness of our algorithms we show their practical
utility by evaluating them using real world microblog data. These demonstrate
our technique's ability to discover, with high precision and recall, emerging
events in high intensity data streams in real time. Many recent web
applications create data which can be represented as massive dynamic graphs.
Our technique can be easily extended to discover, in real time, interesting
patterns in such graphs.
|
1207.0139
|
Sketch-based Querying of Distributed Sliding-Window Data Streams
|
cs.DB
|
While traditional data-management systems focus on evaluating single, ad-hoc
queries over static data sets in a centralized setting, several emerging
applications require (possibly, continuous) answers to queries on dynamic data
that is widely distributed and constantly updated. Furthermore, such query
answers often need to discount data that is "stale", and operate solely on a
sliding window of recent data arrivals (e.g., data updates occurring over the
last 24 hours). Such distributed data streaming applications mandate novel
algorithmic solutions that are both time- and space-efficient (to manage
high-speed data streams), and also communication-efficient (to deal with
physical data distribution). In this paper, we consider the problem of complex
query answering over distributed, high-dimensional data streams in the
sliding-window model. We introduce a novel sketching technique (termed
ECM-sketch) that allows effective summarization of streaming data over both
time-based and count-based sliding windows with probabilistic accuracy
guarantees. Our sketch structure enables point as well as inner-product
queries, and can be employed to address a broad range of problems, such as
maintaining frequency statistics, finding heavy hitters, and computing
quantiles in the sliding-window model. Focusing on distributed environments, we
demonstrate how ECM-sketches of individual, local streams can be composed to
generate a (low-error) ECM-sketch summary of the order-preserving aggregation
of all streams; furthermore, we show how ECM-sketches can be exploited for
continuous monitoring of sliding-window queries over distributed streams. Our
extensive experimental study with two real-life data sets validates our
theoretical claims and verifies the effectiveness of our techniques. To the
best of our knowledge, ours is the first work to address efficient,
guaranteed-error complex query answ...[truncated].
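The point-query primitive of such sketches can be illustrated with a plain count-min sketch (without the ECM-sketch's sliding-window machinery; the hash construction below is a simplistic stand-in): estimates have one-sided error and never undercount the true frequency.

```python
import random

class CountMin:
    def __init__(self, depth=4, width=64, seed=0):
        rng = random.Random(seed)
        self.salts = [rng.random() for _ in range(depth)]  # one "hash" per row
        self.width = width
        self.table = [[0] * width for _ in range(depth)]

    def add(self, item, count=1):
        for row, salt in zip(self.table, self.salts):
            row[hash((salt, item)) % self.width] += count

    def estimate(self, item):
        # Minimum over rows: collisions only inflate counters.
        return min(row[hash((salt, item)) % self.width]
                   for row, salt in zip(self.table, self.salts))

cm = CountMin()
for it in ["x"] * 5 + ["y"] * 3 + ["z"]:
    cm.add(it)
assert cm.estimate("x") >= 5 and cm.estimate("y") >= 3  # one-sided error
```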
|
1207.0140
|
LogBase: A Scalable Log-structured Database System in the Cloud
|
cs.DB
|
Numerous applications such as financial transactions (e.g., stock trading)
are write-heavy in nature. The shift from reads to writes in web applications
has also been accelerating in recent years. Write-ahead-logging is a common
approach for providing recovery capability while improving performance in most
storage systems. However, the separation of log and application data incurs
write overheads observed in write-heavy environments and hence adversely
affects the write throughput and recovery time in the system. In this paper, we
introduce LogBase - a scalable log-structured database system that adopts
log-only storage for removing the write bottleneck and supporting fast system
recovery. LogBase is designed to be dynamically deployed on commodity clusters
to take advantage of the elastic scaling property of cloud environments. LogBase
provides in-memory multiversion indexes for supporting efficient access to data
maintained in the log. LogBase also supports transactions that bundle read and
write operations spanning multiple records. We implemented the proposed
system and compared it with HBase and a disk-based log-structured
record-oriented system modeled after RAMCloud. The experimental results show
that LogBase is able to provide sustained write throughput, efficient data
access out of the cache, and effective system recovery.
|
1207.0141
|
Efficient Processing of k Nearest Neighbor Joins using MapReduce
|
cs.DB
|
k nearest neighbor join (kNN join), designed to find k nearest neighbors from
a dataset S for every object in another dataset R, is a primitive operation
widely adopted by many data mining applications. As a combination of the k
nearest neighbor query and the join operation, kNN join is an expensive
operation. Given the increasing volume of data, it is difficult to perform a
kNN join on a centralized machine efficiently. In this paper, we investigate
how to perform kNN join using MapReduce which is a well-accepted framework for
data-intensive applications over clusters of computers. In brief, the mappers
cluster objects into groups; the reducers perform the kNN join on each group of
objects separately. We design an effective mapping mechanism that exploits
pruning rules for distance filtering, and hence reduces both the shuffling and
computational costs. To reduce the shuffling cost, we propose two approximate
algorithms to minimize the number of replicas. Extensive experiments on our
in-house cluster demonstrate that our proposed methods are efficient, robust
and scalable.
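A single-machine sketch of the scheme, with assumed details: a map phase assigns each (here one-dimensional) point to its nearest pivot's group, and each "reducer" joins its group locally. Without the paper's replication and pruning rules this is approximate near partition boundaries.

```python
def map_phase(points, pivots):
    """Map step: group each point under its nearest pivot."""
    groups = {p: [] for p in pivots}
    for x in points:
        groups[min(pivots, key=lambda p: abs(p - x))].append(x)
    return groups

def reduce_phase(r_group, s_group, k):
    """Reduce step: local brute-force kNN join within one group."""
    return {r: sorted(s_group, key=lambda s: abs(s - r))[:k] for r in r_group}

pivots = [0.0, 100.0]
R, S = [1.0, 99.0], [2.0, 3.0, 98.0, 97.0]
r_groups, s_groups = map_phase(R, pivots), map_phase(S, pivots)
result = {}
for p in pivots:
    result.update(reduce_phase(r_groups[p], s_groups[p], k=1))
assert result[1.0] == [2.0] and result[99.0] == [98.0]
```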
|
1207.0142
|
Early Accurate Results for Advanced Analytics on MapReduce
|
cs.DB
|
Approximate results based on samples often provide the only way in which
advanced analytical applications on very massive data sets can satisfy their
time and resource constraints. Unfortunately, methods and tools for the
computation of accurate early results are currently not supported in
MapReduce-oriented systems although these are intended for `big data'.
Therefore, we proposed and implemented a non-parametric extension of Hadoop
which allows the incremental computation of early results for arbitrary
work-flows, along with reliable on-line estimates of the degree of accuracy
achieved so far in the computation. These estimates are based on a technique
called bootstrapping that has been widely employed in statistics and can be
applied to arbitrary functions and data distributions. In this paper, we
describe our Early Accurate Result Library (EARL) for Hadoop that was designed
to minimize the changes required to the MapReduce framework. Various tests of
EARL of Hadoop are presented to characterize the frequent situations where EARL
can provide major speed-ups over the current version of Hadoop.
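The bootstrapping idea EARL relies on can be shown in miniature (a generic sketch, not EARL's implementation): resample the observed sample with replacement to estimate the variability of an early result, here a mean, without parametric assumptions about the data distribution.

```python
import random
import statistics

def bootstrap_stderr(sample, n_resamples=200, seed=42):
    """Return (estimate, bootstrap standard error) for the sample mean."""
    rng = random.Random(seed)
    means = [statistics.mean(rng.choices(sample, k=len(sample)))
             for _ in range(n_resamples)]
    return statistics.mean(means), statistics.stdev(means)

sample = [4, 8, 5, 7, 6, 5, 9, 4]   # an "early" sample of a large data set
est, err = bootstrap_stderr(sample)
assert abs(est - statistics.mean(sample)) < 1.0
assert err > 0   # reliable on-line measure of the accuracy achieved so far
```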
|
1207.0143
|
CDAS: A Crowdsourcing Data Analytics System
|
cs.DB
|
Some complex problems, such as image tagging and natural language processing,
are very challenging for computers, where even state-of-the-art technology is
not yet able to provide satisfactory accuracy. Therefore, rather than relying
solely on developing new and better algorithms to handle such tasks, we look to
the crowdsourcing solution -- employing human participation -- to make good the
shortfall in current technology. Crowdsourcing is a good supplement to many
computer tasks. A complex job may be divided into computer-oriented tasks and
human-oriented tasks, which are then assigned to machines and humans
respectively. To leverage the power of crowdsourcing, we design and implement a
Crowdsourcing Data Analytics System, CDAS. CDAS is a framework designed to
support the deployment of various crowdsourcing applications. The core part of
CDAS is a quality-sensitive answering model, which guides the crowdsourcing
engine to process and monitor the human tasks. In this paper, we introduce the
principles of our quality-sensitive model. To satisfy user required accuracy,
the model guides the crowdsourcing query engine for the design and processing
of the corresponding crowdsourcing jobs. It provides an estimated accuracy for
each generated result based on the human workers' historical performances. When
verifying the quality of the result, the model employs an online strategy to
reduce waiting time. To show the effectiveness of the model, we implement and
deploy two analytics jobs on CDAS, a twitter sentiment analytics job and an
image tagging job. We use real Twitter and Flickr data as our queries
respectively. We compare our approaches with state-of-the-art classification
and image annotation techniques. The results show that the human-assisted
methods can indeed achieve a much higher accuracy. By embedding the
quality-sensitive model into crowdsourcing query engine, we
effectiv...[truncated].
|
1207.0144
|
Mining Statistically Significant Substrings using the Chi-Square
Statistic
|
cs.DB
|
The problem of identification of statistically significant patterns in a
sequence of data has been applied to many domains such as intrusion detection
systems, financial models, web-click records, automated monitoring systems,
computational biology, cryptology, and text analysis. An observed pattern of
events is deemed to be statistically significant if it is unlikely to have
occurred due to randomness or chance alone. We use the chi-square statistic as
a quantitative measure of statistical significance. Given a string of
characters generated from a memoryless Bernoulli model, the problem is to
identify the substring for which the empirical distribution of single letters
deviates the most from the distribution expected from the generative Bernoulli
model. This deviation is captured using the chi-square measure. The most
significant substring (MSS) of a string is thus defined as the substring having
the highest chi-square value. To date, to the best of our knowledge, there
does not exist any algorithm to find the MSS in better than O(n^2) time, where
n denotes the length of the string. In this paper, we propose an algorithm to
find the most significant substring, whose running time is O(n^{3/2}) with high
probability. We also study some variants of this problem such as finding the
top-t set, finding all substrings having chi-square greater than a fixed
threshold, and finding the MSS among substrings longer than a given length. We
experimentally demonstrate the asymptotic behavior of the MSS on varying the
string size and alphabet size. We also describe some applications of our
algorithm on cryptology and real world data from finance and sports. Finally,
we compare our technique with the existing heuristics for finding the MSS.
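A brute-force O(n^2) baseline makes the statistic concrete (the paper's contribution is beating this bound, but the chi-square score itself is computed the same way; the alphabet and model below are toy choices):

```python
from collections import Counter

def chi_square(substring, model):
    """Chi-square deviation of letter counts from the Bernoulli model."""
    n = len(substring)
    counts = Counter(substring)
    return sum((counts.get(a, 0) - n * p) ** 2 / (n * p)
               for a, p in model.items())

def most_significant_substring(s, model, min_len=2):
    """O(n^2) scan over all substrings, returning (substring, score)."""
    return max(((s[i:j], chi_square(s[i:j], model))
                for i in range(len(s))
                for j in range(i + min_len, len(s) + 1)),
               key=lambda t: t[1])

model = {"a": 0.5, "b": 0.5}   # memoryless Bernoulli model
sub, score = most_significant_substring("abababaaaaab", model)
assert set(sub) == {"a"}        # the pure run of 'a's deviates the most
```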
|
1207.0145
|
Massively Parallel Sort-Merge Joins in Main Memory Multi-Core Database
Systems
|
cs.DB
|
Two emerging hardware trends will dominate the database system technology in
the near future: increasing main memory capacities of several TB per server and
massively parallel multi-core processing. Many algorithmic and control
techniques in current database technology were devised for disk-based systems
where I/O dominated the performance. In this work we take a new look at the
well-known sort-merge join which, so far, has not been in the focus of research
in scalable massively parallel multi-core data processing as it was deemed
inferior to hash joins. We devise a suite of new massively parallel sort-merge
(MPSM) join algorithms that are based on partial partition-based sorting.
Contrary to classical sort-merge joins, our MPSM algorithms do not rely on a
hard to parallelize final merge step to create one complete sort order. Rather
they work on the independently created runs in parallel. This way our MPSM
algorithms are NUMA-affine as all the sorting is carried out on local memory
partitions. An extensive experimental evaluation on a modern 32-core machine
with one TB of main memory proves the competitive performance of MPSM on large
main memory databases with billions of objects. It scales (almost) linearly in
the number of employed cores and clearly outperforms competing hash join
proposals - in particular it outperforms the "cutting-edge" Vectorwise parallel
query engine by a factor of four.
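A toy sketch of the MPSM idea (heavily simplified, with a single assumed range boundary and threads standing in for cores): range-partition both inputs, sort each partition independently into a run, and merge-join matching partitions in parallel, with no global final merge step.

```python
from concurrent.futures import ThreadPoolExecutor

def merge_join(r_sorted, s_sorted):
    """Classical merge step over two sorted runs (distinct keys assumed)."""
    out, i, j = [], 0, 0
    while i < len(r_sorted) and j < len(s_sorted):
        if r_sorted[i] < s_sorted[j]:
            i += 1
        elif r_sorted[i] > s_sorted[j]:
            j += 1
        else:
            out.append(r_sorted[i]); i += 1; j += 1
    return out

def mpsm_join(R, S, boundary):
    # Range-partition: each worker sorts and joins its partition locally.
    parts = [([x for x in R if x < boundary], [x for x in S if x < boundary]),
             ([x for x in R if x >= boundary], [x for x in S if x >= boundary])]
    with ThreadPoolExecutor(max_workers=2) as pool:
        runs = list(pool.map(
            lambda p: merge_join(sorted(p[0]), sorted(p[1])), parts))
    return [x for run in runs for x in run]

R, S = [7, 2, 9, 4], [4, 9, 1, 7]
assert sorted(mpsm_join(R, S, boundary=5)) == [4, 7, 9]
```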
|
1207.0147
|
hStorage-DB: Heterogeneity-aware Data Management to Exploit the Full
Capability of Hybrid Storage Systems
|
cs.DB
|
As storage systems become increasingly heterogeneous and complex, they add a
burden on DBAs, causing suboptimal performance even after a lot of human
effort has been made. In addition, existing monitoring-based storage
management by access pattern detection has difficulty handling workloads
that are highly dynamic and concurrent. To achieve high performance by best
utilizing heterogeneous storage devices, we have designed and implemented a
heterogeneity-aware software framework for DBMS storage management called
hStorage-DB, where semantic information that is critical for storage I/O is
identified and passed to the storage manager. According to the collected
semantic information, requests are classified into different types. Each type
is assigned a proper QoS policy supported by the underlying storage system, so
that every request will be served with a suitable storage device. With
hStorage-DB, we can well utilize semantic information that cannot be detected
through data access monitoring but is particularly important for a hybrid
storage system. To show the effectiveness of hStorage-DB, we have implemented a
system prototype that consists of an I/O request classification enabled DBMS,
and a hybrid storage system that is organized into a two-level caching
hierarchy. Our performance evaluation shows that hStorage-DB can automatically
make proper decisions for data allocation in different storage devices and make
substantial performance improvements in a cost-efficient way.
|
1207.0151
|
Differentiable Pooling for Hierarchical Feature Learning
|
cs.CV cs.LG
|
We introduce a parametric form of pooling, based on a Gaussian, which can be
optimized alongside the features in a single global objective function. By
contrast, existing pooling schemes are based on heuristics (e.g. local maximum)
and have no clear link to the cost function of the model. Furthermore, the
variables of the Gaussian explicitly store location information, distinct from
the appearance captured by the features, thus providing a what/where
decomposition of the input signal. Although the differentiable pooling scheme
can be incorporated in a wide range of hierarchical models, we demonstrate it
in the context of a Deconvolutional Network model (Zeiler et al. ICCV 2011). We
also explore a number of secondary issues within this model and present
detailed experiments on MNIST digits.
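The parametric pooling can be sketched in one dimension (an assumed form consistent with the description above: pooling weights are a normalized Gaussian, so the location mu and spread sigma are continuous parameters a gradient can move, unlike the argmax of max pooling):

```python
import math

def gaussian_pool(window, mu, sigma):
    """Pool a 1-D window with normalized Gaussian weights centred at mu."""
    weights = [math.exp(-((i - mu) ** 2) / (2 * sigma ** 2))
               for i in range(len(window))]
    z = sum(weights)
    return sum(w / z * x for w, x in zip(weights, window))

window = [0.1, 0.9, 0.2]
# A sharp Gaussian centred on index 1 approximates max pooling ...
assert abs(gaussian_pool(window, mu=1.0, sigma=0.1) - 0.9) < 1e-6
# ... while a very wide one approaches average pooling.
assert abs(gaussian_pool(window, mu=1.0, sigma=100.0) - 0.4) < 1e-3
```

The mean mu carries the "where" information explicitly, separate from the feature values themselves, which is the what/where decomposition mentioned above.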
|
1207.0166
|
On Multilabel Classification and Ranking with Partial Feedback
|
cs.LG
|
We present a novel multilabel/ranking algorithm working in partial
information settings. The algorithm is based on 2nd-order descent methods, and
relies on upper-confidence bounds to trade-off exploration and exploitation. We
analyze this algorithm in a partial adversarial setting, where covariates can
be adversarial, but multilabel probabilities are ruled by (generalized) linear
models. We show O(T^{1/2} log T) regret bounds, which improve in several ways
on the existing results. We test the effectiveness of our upper-confidence
scheme by contrasting against full-information baselines on real-world
multilabel datasets, often obtaining comparable performance.
|
1207.0170
|
Single parameter galaxy classification: The Principal Curve through the
multi-dimensional space of galaxy properties
|
astro-ph.CO cs.CV stat.ML
|
We propose to describe the variety of galaxies from SDSS by using only one
affine parameter. To this aim, we build the Principal Curve (P-curve) passing
through the spine of the data point cloud, considering the eigenspace derived
from Principal Component Analysis of morphological, physical and photometric
galaxy properties. Thus, galaxies can be labeled, ranked and classified by a
single arc length value of the curve, measured at the unique closest projection
of the data points on the P-curve. We find that the P-curve has a "W" letter
shape with 3 turning points, defining 4 branches that represent distinct galaxy
populations. This behavior is controlled mainly by 2 properties, namely u-r and
SFR. We further present the variations of several galaxy properties as a
function of arc length. Luminosity functions vary from steep Schechter fits
at low arc length, to double power law and ending in Log-normal fits at high
arc length. Galaxy clustering shows increasing autocorrelation power at large
scales as arc length increases. The PCA analysis allowed us to find peculiar galaxy
populations located apart from the main cloud of data points, such as small red
galaxies dominated by a disk, of relatively high stellar mass-to-light ratio
and surface mass density. The P-curve allows not only dimensionality reduction,
but also provides supporting evidence for relevant physical models and
scenarios in extragalactic astronomy: 1) Evidence for the hierarchical merging
scenario in the formation of a selected group of red massive galaxies. These
galaxies present a log-normal r-band luminosity function, which might arise
from multiplicative processes involved in this scenario. 2) Connection between
the onset of AGN activity and star formation quenching, which appears in green
galaxies when transitioning from blue to red populations. (Full abstract in
downloadable version)
|
1207.0188
|
Model-based clustering of large networks
|
stat.CO cs.SI physics.soc-ph stat.AP
|
We describe a network clustering framework, based on finite mixture models,
that can be applied to discrete-valued networks with hundreds of thousands of
nodes and billions of edge variables. Relative to other recent model-based
clustering work for networks, we introduce a more flexible modeling framework,
improve the variational-approximation estimation algorithm, discuss and
implement standard error estimation via a parametric bootstrap approach, and
apply these methods to much larger data sets than those seen elsewhere in the
literature. The more flexible framework is achieved through introducing novel
parameterizations of the model, giving varying degrees of parsimony, using
exponential family models whose structure may be exploited in various
theoretical and algorithmic ways. The algorithms are based on variational
generalized EM algorithms, where the E-steps are augmented by a
minorization-maximization (MM) idea. The bootstrapped standard error estimates
are based on an efficient Monte Carlo network simulation idea. Last, we
demonstrate the usefulness of the model-based clustering framework by applying
it to a discrete-valued network with more than 131,000 nodes and 17 billion
edge variables.
|
1207.0206
|
Alternative Restart Strategies for CMA-ES
|
cs.AI
|
This paper focuses on the restart strategy of CMA-ES on multi-modal
functions. A first alternative strategy proceeds by decreasing the initial
step-size of the mutation while doubling the population size at each restart. A
second strategy adaptively allocates the computational budget among the restart
settings in the BIPOP scheme. Both restart strategies are validated on the BBOB
benchmark; their generality is also demonstrated on an independent real-world
problem suite related to spacecraft trajectory optimization.
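The first strategy reduces to a simple schedule (the halving factor for the step-size below is a hypothetical choice; the abstract only states that the initial step-size decreases while the population size doubles at each restart):

```python
def restart_schedule(pop0, sigma0, n_restarts):
    """Per-restart (population size, initial step-size) settings."""
    settings = []
    pop, sigma = pop0, sigma0
    for _ in range(n_restarts):
        settings.append((pop, sigma))
        pop, sigma = pop * 2, sigma / 2   # double popsize, shrink step-size
    return settings

schedule = restart_schedule(pop0=10, sigma0=2.0, n_restarts=4)
assert schedule == [(10, 2.0), (20, 1.0), (40, 0.5), (80, 0.25)]
```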
|
1207.0226
|
Wait-Free Gathering of Mobile Robots
|
cs.DC cs.RO
|
The problem of gathering multiple mobile robots at a single location is one
of the fundamental problems in distributed coordination between autonomous
robots. The problem has been studied and solved even for robots that are
anonymous, disoriented, memoryless and operate in the semi-synchronous (ATOM)
model. However, all known solutions require the robots to be fault-free, except
for the results of [Agmon and Peleg 2006] who solve the gathering problem in
presence of one crash fault. This leaves open the question of whether gathering
of correct robots can be achieved in the presence of multiple crash failures.
We resolve the question in this paper and show how to solve gathering, when any
number of robots may crash at any time during the algorithm, assuming strong
multiplicity detection and chirality. In contrast, it is known that for the
stronger Byzantine faults, it is impossible to gather even in a 3-robot system
if one robot is faulty. Our algorithm solves the gathering of correct robots in
the semi-synchronous model where an adversary may stop any robot before
reaching its desired destination. Further the algorithm is self-stabilizing as
it achieves gathering starting from any configuration (except the bivalent
configuration where deterministic gathering is impossible).
|
1207.0229
|
Variable-rate Retransmissions for Incremental Redundancy Hybrid ARQ
|
cs.IT math.IT
|
The throughput achievable in a truncated hybrid ARQ (HARQ) protocol using
incremental redundancy (IR) is analyzed when transmitting over a block-fading
channel whose state is unknown at the transmitter. We allow the transmission
lengths to vary, optimize them efficiently via dynamic programming, and show
that such a variable-rate HARQ-IR provides gains with respect to a fixed-rate
transmission in terms of increased throughput and decreased average number of
transmissions, reducing at the same time the outage probability.
|
1207.0235
|
Suprema of Chaos Processes and the Restricted Isometry Property
|
math.PR cs.IT math.IT
|
We present a new bound for suprema of a special type of chaos processes
indexed by a set of matrices, which is based on a chaining method. As
applications we show significantly improved estimates for the restricted
isometry constants of partial random circulant matrices and time-frequency
structured random matrices. In both cases the required condition on the number
$m$ of rows in terms of the sparsity $s$ and the vector length $n$ is $m
\gtrsim s \log^2 s \log^2 n$.
|
1207.0240
|
Online Exploration of Polygons with Holes
|
cs.CG cs.DS cs.RO
|
We study online strategies for autonomous mobile robots with vision to
explore unknown polygons with at most h holes. Our main contribution is an
(h+c_0)!-competitive strategy for such polygons under the assumption that each
hole is marked with a special color, where c_0 is a universal constant. The
strategy is based on a new hybrid approach. Furthermore, we give a new lower
bound construction for small h.
|
1207.0245
|
Adversarial Evaluation for Models of Natural Language
|
cs.CL
|
We now have a rich and growing set of modeling tools and algorithms for
inducing linguistic structure from text that is less than fully annotated. In
this paper, we discuss some of the weaknesses of our current methodology. We
present a new abstract framework for evaluating natural language processing
(NLP) models in general and unsupervised NLP models in particular. The central
idea is to make explicit certain adversarial roles among researchers, so that
the different roles in an evaluation are more clearly defined and performers of
all roles are offered ways to make measurable contributions to the larger goal.
Adopting this approach may help to characterize model successes and failures by
encouraging earlier consideration of error analysis. The framework can be
instantiated in a variety of ways, simulating some familiar intrinsic and
extrinsic evaluations as well as some new evaluations.
|
1207.0246
|
Web Data Extraction, Applications and Techniques: A Survey
|
cs.IR
|
Web Data Extraction is an important problem that has been studied by means of
different scientific tools and in a broad range of applications. Many
approaches to extracting data from the Web have been designed to solve specific
problems and operate in ad-hoc domains. Other approaches, instead, heavily
reuse techniques and algorithms developed in the field of Information
Extraction.
This survey aims at providing a structured and comprehensive overview of the
literature in the field of Web Data Extraction. We provide a simple
classification framework in which existing Web Data Extraction applications are
grouped into two main classes, namely applications at the Enterprise level and
at the Social Web level. At the Enterprise level, Web Data Extraction
techniques emerge as a key tool to perform data analysis in Business and
Competitive Intelligence systems as well as for business process
re-engineering. At the Social Web level, Web Data Extraction techniques make it
possible to gather large amounts of structured data continuously generated and
disseminated by Web 2.0, Social Media and Online Social Network users, and this
offers unprecedented opportunities to analyze human behavior at a very large
scale. We also discuss the potential for cross-fertilization, i.e., the
possibility of re-using Web Data Extraction techniques originally designed to
work in a given domain in other domains.
|
1207.0261
|
Biochemical Oscillations in Delayed Negative Cyclic Feedback: Harmonic
Balance Analysis with Applications
|
cs.SY math.OC q-bio.QM
|
Oscillatory chemical reactions often serve as a timing clock of cellular
processes in living cells. The temporal dynamics of protein concentration
levels is thus of great interest in biology. Here we propose a theoretical
framework to analyze the frequency, phase and amplitude of oscillatory protein
concentrations in gene regulatory networks with negative cyclic feedback. We
first formulate the analysis framework of oscillation profiles based on
multivariable harmonic balance. With this framework, the frequency, phase and
amplitude are obtained analytically in terms of kinetic constants of the
reactions despite the nonlinearity of the dynamics. These results are
demonstrated with the Pentilator and Hes7 self-repression network, and it is
shown that the developed analysis method indeed predicts the profiles of the
oscillations. A distinctive feature of the presented result is that the
waveform of oscillations is analytically obtained for a broad class of
biochemical systems. Thus, it is easy to see how the waveform is determined
from the system's parameters and structures. We present general biological
insights that are applicable to any gene regulatory network with negative
cyclic feedback.
|
1207.0262
|
Characteristic matrix of covering and its application to boolean matrix
decomposition and axiomatization
|
cs.AI
|
Covering is an important type of data structure while covering-based rough
sets provide an efficient and systematic theory to deal with covering data. In
this paper, we use boolean matrices to represent and axiomatize three types of
covering approximation operators. First, we define two types of characteristic
matrices of a covering which are essentially square boolean ones, and their
properties are studied. Through the characteristic matrices, three important
types of covering approximation operators are concisely equivalently
represented. Second, matrix representations of covering approximation operators
are used in boolean matrix decomposition. We provide a sufficient and necessary
condition for a square boolean matrix to decompose into the boolean product of
another one and its transpose. And we develop an algorithm for this boolean
matrix decomposition. Finally, based on the above results, these three types of
covering approximation operators are axiomatized using boolean matrices. In a
word, this work borrows extensively from boolean matrices and presents a new
view for studying covering-based rough sets.
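The decomposition in question can be stated concretely (a minimal helper with a toy matrix; the paper's actual contribution is the existence condition and the decomposition algorithm): M decomposes if M[i][j] = OR_k (A[i][k] AND A[j][k]) for some boolean A.

```python
def bool_product_with_transpose(A):
    """Boolean product of A with its transpose: OR over AND, not + over *."""
    n = len(A)
    return [[int(any(A[i][k] and A[j][k] for k in range(len(A[0]))))
             for j in range(n)] for i in range(n)]

A = [[1, 0],
     [1, 1],
     [0, 1]]
M = bool_product_with_transpose(A)
assert M == [[1, 1, 0],
             [1, 1, 1],
             [0, 1, 1]]   # necessarily symmetric with an all-ones diagonal
```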
|
1207.0268
|
Surrogate Regret Bounds for Bipartite Ranking via Strongly Proper Losses
|
cs.LG stat.ML
|
The problem of bipartite ranking, where instances are labeled positive or
negative and the goal is to learn a scoring function that minimizes the
probability of mis-ranking a pair of positive and negative instances (or
equivalently, that maximizes the area under the ROC curve), has been widely
studied in recent years. A dominant theoretical and algorithmic framework for
the problem has been to reduce bipartite ranking to pairwise classification; in
particular, it is well known that the bipartite ranking regret can be
formulated as a pairwise classification regret, which in turn can be upper
bounded using usual regret bounds for classification problems. Recently,
Kotlowski et al. (2011) showed regret bounds for bipartite ranking in terms of
the regret associated with balanced versions of the standard (non-pairwise)
logistic and exponential losses. In this paper, we show that such
(non-pairwise) surrogate regret bounds for bipartite ranking can be obtained in
terms of a broad class of proper (composite) losses that we term strongly
proper. Our proof technique is much simpler than that of Kotlowski et al.
(2011), and relies on properties of proper (composite) losses as elucidated
recently by Reid and Williamson (2010, 2011) and others. Our result yields
explicit surrogate bounds (with no hidden balancing terms) in terms of a
variety of strongly proper losses, including for example logistic, exponential,
squared and squared hinge losses as special cases. We also obtain tighter
surrogate bounds under certain low-noise conditions via a recent result of
Clemencon and Robbiano (2011).
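As a minimal illustration of the pairwise view (the scores and function name here are hypothetical), the empirical AUC is exactly the fraction of positive-negative pairs that a scoring function orders correctly, so one minus it is the empirical pairwise mis-ranking rate:

```python
def empirical_auc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs ranked correctly, counting
    ties as one half: the empirical area under the ROC curve."""
    pairs = len(pos_scores) * len(neg_scores)
    correct = sum(1.0 if sp > sn else 0.5 if sp == sn else 0.0
                  for sp in pos_scores for sn in neg_scores)
    return correct / pairs

pos = [0.9, 0.7, 0.4]   # scores assigned to positive instances
neg = [0.8, 0.3]        # scores assigned to negative instances
auc = empirical_auc(pos, neg)
```

Here one pair (the positive scored 0.7 against the negative scored 0.8) is mis-ranked out of six, so the empirical mis-ranking rate is 1 - AUC.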
|
1207.0273
|
Performance Analysis for Heterogeneous Cellular Systems with Range
Expansion
|
cs.IT math.IT
|
Recently, heterogeneous base station structures have been adopted in cellular
systems to enhance system throughput and coverage. In this paper, the uplink
coverage probability for heterogeneous cellular systems is analyzed and
derived in closed form. The randomness in the locations and number of mobile
users is taken into account in the analysis. Based on the analytical results,
the impacts of various system parameters on the uplink performance are
investigated in detail. The correctness of the analytical results is also
verified by simulation. These analytical results can thus serve as guidance
for system design without the need for time-consuming simulations.
|
1207.0290
|
A Deterministic Polynomial-Time Protocol for Synchronizing from
Deletions
|
cs.IT math.IT
|
In this paper, we consider a synchronization problem between nodes $A$ and
$B$ that are connected through a two-way communication channel. Node $A$
contains a binary file $X$ of length $n$ and node $B$ contains a binary file
$Y$ that is generated by randomly deleting bits from $X$ at a small deletion
rate $\beta$. The locations of the deleted bits are not known to either node
$A$ or node $B$. We offer a deterministic synchronization scheme between nodes $A$ and
$B$ that needs a total of $O(n\beta\log \frac{1}{\beta})$ transmitted bits and
reconstructs $X$ at node $B$ with probability of error that is exponentially
low in the size of $X$. Orderwise, the rate of our scheme matches the optimal
rate for this channel.
|
1207.0297
|
On the Achievable Communication Rates of Generalized Soliton
Transmission Systems
|
cs.IT math.IT
|
We analyze the achievable communication rates of a generalized soliton-based
transmission system for the optical fiber channel. This method is based on
modulation of parameters of the scattering domain, via the inverse scattering
transform, by the information bits. The decoder uses the direct spectral
transform to estimate these parameters and decode the information message.
Unlike ordinary On-Off Keying (OOK) soliton systems, the solitons' amplitude
may take values in a continuous interval. A considerable rate gain is shown in
the case where the waveforms are 2-bound soliton states. Using traditional
information theory and inverse scattering perturbation theory, we analyze the
influence of the amplitude fluctuations as well as soliton arrival time jitter,
on the achievable rates. Using this approach we show that the time of arrival
jitter (Gordon-Haus) limits the information rate in a continuous manner, as
opposed to a strict threshold in OOK systems.
|
1207.0313
|
Intellectual Management of Enterprise
|
cs.CE
|
A new technology (in addition to ERP) is proposed to provide an increase in
profit and normal cash flow. This technology involves the following functions:
forming an intellectual natural-language interface to communicate with the
control system; joint planning of production and sales to obtain the maximal
profit; and adaptation of the control system to internal and external events.
The use of natural language makes it possible to overcome the barrier between
the control system and upper managers. To solve the posed management problems,
the selection of information from a database and calls to mathematical methods
are executed automatically. Optimal planning provides the maximal use of
available resources and market opportunities. Adaptive control implements an
efficient reaction to critical events that lead to a decrease in profit and an
increase in accounts receivable.
|
1207.0315
|
Multi-slot Coded ALOHA with Irregular Degree Distribution
|
cs.IT math.IT
|
This paper proposes an improvement of the random multiple access scheme for
satellite communication named Multislot coded ALOHA (MuSCA). MuSCA is a
generalization of Contention Resolution Diversity Slotted ALOHA (CRDSA). In
this scheme, each user transmits several parts of a single codeword of an error
correcting code instead of sending replicas. At the receiver, the decoder
collects all these parts and includes them in the decoding process even if
they are subject to interference. In this paper, we show that a high throughput can be obtained
by selecting variable code rates and user degrees according to a probability
distribution. With an optimal irregular degree distribution, our system
achieves a normalized throughput up to 1.43, resulting in a significant gain
compared to CRDSA and MuSCA. The spectral efficiency and the implementation
issues of the scheme are also analyzed.
|
1207.0334
|
Signal Space Alignment for the Gaussian Y-Channel
|
cs.IT math.IT
|
A multi-way communication network with three nodes and a relay is considered.
The three nodes in this so-called Y-channel communicate with each other in a
bi-directional manner via the relay. Studying this setup is important because
it is a milestone toward characterizing the capacity of larger networks. A
transmit strategy for the Gaussian Y-channel is proposed, which mimics a
previously considered scheme for the deterministic approximation of the
Y-channel: namely, a scheme that uses nested-lattice codes and lattice
alignment to perform network coding. A new mode of operation is
introduced, named `cyclic communication', which interestingly turns out to be
an important component for achieving the capacity region of the Gaussian
Y-channel within a constant gap.
|
1207.0335
|
Lattice Coding and the Generalized Degrees of Freedom of the
Interference Channel with Relay
|
cs.IT math.IT
|
The generalized degrees of freedom (GDoF) of the symmetric two-user Gaussian
interference relay channel (IRC) is studied. While it is known that the relay
does not increase the DoF of the IC, this is not known for the more general
GDoF. For the characterization of the GDoF, new sum-capacity upper bounds and
lower bounds are derived. The lower bounds are obtained by a new scheme, which
is based on functional decode-and-forward (FDF). The GDoF is characterized for
the regime in which the source-relay link is weaker than the interference link,
which constitutes half the overall space of channel parameters. It is shown
that the relay can indeed increase the GDoF of the IRC and that it is achieved
by FDF.
|
1207.0337
|
The DoF of the K-user Interference Channel with a Cognitive Relay
|
cs.IT math.IT
|
It was shown recently that the 2-user interference channel with a cognitive
relay (IC-CR) has full degrees of freedom (DoF) almost surely, that is, 2 DoF.
The purpose of this work is to check whether the DoF of the $K$-user IC-CR,
consisting of $K$ user pairs and a cognitive relay, follows as a
straightforward extension of the 2-user case. As it turns out, this is not the
case. The $K$-user IC-CR is shown to have $2K/3$ DoF if $K>2$ when the
channel is time varying, achievable using interference alignment. Thus, while
the basic $K$-user IC with time varying channel coefficients has 1/2 DoF per
user for all $K$, the $K$-user IC-CR with varying channels has 1 DoF per user
if K=2 and 2/3 DoF per user if $K>2$. Furthermore, the DoF region of the 3-user
IC-CR with constant channels is characterized using interference
neutralization, and a new upper bound on the sum-capacity of the 2-user IC-CR
is given.
|
1207.0350
|
Dynamic Power Distribution and Energy Management in a Reconfigurable
Multi-Robotic Organism
|
cs.SY
|
Several design parameters in collective robotic systems have been
investigated and developed in order to explore the cooperation among
autonomous robotic individuals in a variety of robotic swarms in the presence
of different internal and external system constraints. In particular, dynamic
power management and distribution in a multi-robotic organism is of high
importance; it depends not only on the electronic design but also on the
mechanical structure of the robots, and it further defines the true nature of
the collaboration among the modules of a self-reconfigurable multi-robotic
organism. This article describes the essential features and design of a
dynamic power distribution and management system for a dynamically
reconfigurable multi-robotic system. It further presents empirical results for
the proposed dynamic power management system collected on a real robotic
platform. The latter half of the article presents a simulation framework that
was developed specifically to explore the collective system behavior and the
complexities involved in the operation of a multi-robotic organism. A summary
and conclusion follow a detailed discussion of the obtained simulation results.
|
1207.0361
|
INSTRUCT: Space-Efficient Structure for Indexing and Complete Query
Management of String Databases
|
cs.DB cs.DS
|
The tremendous expanse of search engines, dictionary and thesaurus storage,
and other text mining applications, combined with the popularity of readily
available scanning devices and optical character recognition tools, has
necessitated efficient storage, retrieval and management of massive text
databases for various modern applications. For such applications, we propose a
novel data structure, INSTRUCT, for efficient storage and management of
sequence databases. Our structure uses bit vectors for reusing the storage
space for common triplets, and hence, has a very low memory requirement.
INSTRUCT efficiently handles prefix and suffix search queries in addition to
the exact string search operation by iteratively checking the presence of
triplets. We also propose an extension of the structure to handle substring
search efficiently, albeit with an increase in the space requirements. This
extension is important in the context of trie-based solutions which are unable
to handle such queries efficiently. We perform several experiments showing
that INSTRUCT outperforms existing structures by nearly a factor of two in
terms of space requirements, while also achieving better query times. The ability to
handle insertion and deletion of strings in addition to supporting all kinds of
queries including exact search, prefix/suffix search and substring search makes
INSTRUCT a complete data structure.
|
1207.0362
|
Code-Expanded Random Access for Machine-Type Communications
|
cs.IT math.IT
|
The random access methods used for support of machine-type communications
(MTC) in current cellular standards are derivatives of traditional framed
slotted ALOHA and therefore do not support high user loads efficiently.
Motivated by the random access method employed in LTE, we propose a novel
approach that is able to sustain a wide random access load range, while
preserving the physical layer unchanged and incurring minor changes in the
medium access control layer. The proposed scheme increases the amount of
available contention resources, without resorting to the increase of system
resources, such as contention sub-frames and preambles. This increase is
accomplished by expanding the contention space to the code domain, through the
creation of random access codewords. Specifically, in the proposed scheme,
users perform random access by transmitting one or none of the available LTE
orthogonal preambles in multiple random access sub-frames, thus creating access
codewords that are used for contention. In this way, for the same number of
random access sub-frames and orthogonal preambles, the amount of available
contention resources is drastically increased, enabling the support of an
increased number of MTC users. We present the framework and analysis of the
proposed code-expanded random access method and show that our approach supports
load regions that are beyond the reach of current systems.
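The expansion can be quantified directly from the description above: with M orthogonal preambles and T random access sub-frames, a user transmitting one or none of the preambles in each sub-frame selects one of (M+1)^T - 1 codewords (the all-idle word carries no access attempt), versus the M*T single-preamble opportunities of the conventional scheme. A sketch with hypothetical LTE-like numbers, not taken from the paper:

```python
def conventional_resources(preambles, subframes):
    # conventional access: one preamble transmitted in one of the sub-frames
    return preambles * subframes

def code_expanded_resources(preambles, subframes):
    # code-expanded access: one of the preambles, or none, in every
    # sub-frame; the all-idle codeword is excluded
    return (preambles + 1) ** subframes - 1

M, T = 64, 2  # hypothetical numbers of preambles and sub-frames
```

With these numbers the contention space grows from 128 to 4224 codewords without adding any physical-layer resources.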
|
1207.0369
|
More Effective Crossover Operators for the All-Pairs Shortest Path
Problem
|
cs.NE
|
The all-pairs shortest path problem is the first non-artificial problem for
which it was shown that adding crossover can significantly speed up a
mutation-only evolutionary algorithm. Recently, the analysis of this algorithm
was refined and it was shown to have an expected optimization time (w.r.t. the
number of fitness evaluations) of $\Theta(n^{3.25}(\log n)^{0.25})$.
In contrast to this simple algorithm, evolutionary algorithms used in
practice usually employ refined recombination strategies in order to avoid the
creation of infeasible offspring. We study extensions of the basic algorithm by
two such concepts which are central in recombination, namely \emph{repair
mechanisms} and \emph{parent selection}. We show that repairing infeasible
offspring leads to an improved expected optimization time of
$\mathord{O}(n^{3.2}(\log n)^{0.2})$. As a second part of our study we prove
that choosing parents that guarantee feasible offspring results in an even
better optimization time of $\mathord{O}(n^{3}\log n)$.
Both results show that already simple adjustments of the recombination
operator can asymptotically improve the runtime of evolutionary algorithms.
|
1207.0396
|
Applying Deep Belief Networks to Word Sense Disambiguation
|
cs.CL cs.LG
|
In this paper, we apply a novel learning algorithm, namely Deep Belief
Networks (DBN), to word sense disambiguation (WSD). A DBN is a probabilistic
generative model composed of multiple layers of hidden units. It uses
Restricted Boltzmann Machines (RBMs) for greedy layer-by-layer pretraining;
a separate fine-tuning step is then employed to improve the
discriminative power. We compared DBN with various state-of-the-art supervised
learning algorithms in WSD such as Support Vector Machine (SVM), Maximum
Entropy model (MaxEnt), Naive Bayes classifier (NB) and Kernel Principal
Component Analysis (KPCA). We used all words in the given paragraph,
surrounding context words and part-of-speech of surrounding words as our
knowledge sources. We conducted our experiment on the SENSEVAL-2 data set. We
observed that DBN outperformed all other learning algorithms.
|
1207.0403
|
Robust Principal Component Analysis Using Statistical Estimators
|
cs.AI
|
Principal Component Analysis (PCA) finds a linear mapping that maximizes the
variance of the data, which makes PCA sensitive to outliers and may yield
wrong eigendirections. In this paper, we propose techniques to solve this
problem: we use a data-centering method and reestimate the covariance matrix
using robust statistical techniques such as the median, robust scaling (which
boosts the data-centering), and the Huber M-estimator, which measures the
presence of outliers and reweights them with small values. The results on
several real-world data sets show that our proposed method handles outliers
and achieves better results than the original PCA, and provides the same
accuracy at lower computational cost than Kernel PCA with a polynomial kernel
in classification tasks.
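A minimal pure-Python sketch of the median-centering idea (function names are ours; the paper's full method also includes robust scaling and Huber reweighting, omitted here): center each coordinate at its median rather than its mean, then extract the leading eigendirection of the covariance by power iteration.

```python
def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def median_center(data):
    """Center each coordinate at its median; unlike the mean, the median
    is barely moved by a few outliers."""
    dims = len(data[0])
    med = [median([row[j] for row in data]) for j in range(dims)]
    return [[row[j] - med[j] for j in range(dims)] for row in data]

def covariance(data):
    n, dims = len(data), len(data[0])
    return [[sum(r[i] * r[j] for r in data) / n for j in range(dims)]
            for i in range(dims)]

def leading_eigvec(C, iters=200):
    """Power iteration for the dominant eigendirection of C."""
    d = len(C)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# hypothetical 2-D cloud elongated along the x-axis
data = [[-2.0, -0.1], [-1.0, 0.0], [0.0, 0.1], [1.0, 0.0], [2.0, 0.1]]
direction = leading_eigvec(covariance(median_center(data)))
```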
|
1207.0405
|
MITRA: A Meta-Model for Information Flow in Trust and Reputation
Architectures
|
cs.MA
|
We propose MITRA, a meta-model for the information flow in (computational)
trust and reputation architectures. On an abstract level, MITRA describes the
information flow as it is inherent in prominent trust and reputation models
from the literature. We use MITRA to provide a structured comparison of these
models. This makes it possible to get a clear overview of the complex research
area. Furthermore, by doing so, we identify interesting new approaches for
trust and reputation modeling that so far have not been investigated.
|
1207.0436
|
On the Entropy of Sums of Bernoulli Random Variables via the Chen-Stein
Method
|
cs.IT math.IT math.PR
|
This paper considers the entropy of the sum of (possibly dependent and
non-identically distributed) Bernoulli random variables. Upper bounds on the
error that follows from an approximation of this entropy by the entropy of a
Poisson random variable with the same mean are derived. The derivation of these
bounds combines elements of information theory with the Chen-Stein method for
Poisson approximation. The resulting bounds are easy to compute, and their
applicability is exemplified. This conference paper presents in part the first
half of the paper entitled "An information-theoretic perspective of the Poisson
approximation via the Chen-Stein method" (see arXiv:1206.6811). A
generalization of the bounds that considers the accuracy of the Poisson
approximation for the entropy of a sum of non-negative, integer-valued and
bounded random variables is introduced in the full paper. It also derives lower
bounds on the total variation distance, relative entropy and other measures
that are not considered in this conference paper.
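The quality of the Poisson approximation for the entropy is easy to probe numerically in the independent case (a sketch; function names and the probabilities are ours, and the paper's bounds also cover dependent variables):

```python
from math import exp, log

def bernoulli_sum_pmf(ps):
    """Exact pmf of a sum of independent Bernoulli(p_i) variables,
    built by repeated convolution."""
    pmf = [1.0]
    for p in ps:
        pmf = [(pmf[k] if k < len(pmf) else 0.0) * (1 - p)
               + (pmf[k - 1] * p if k > 0 else 0.0)
               for k in range(len(pmf) + 1)]
    return pmf

def entropy(pmf):
    """Shannon entropy in nats."""
    return -sum(p * log(p) for p in pmf if p > 0)

def poisson_entropy(lam, kmax=100):
    """Entropy of Poisson(lam), truncated at kmax terms."""
    p, h = exp(-lam), 0.0
    for k in range(1, kmax + 1):
        if p > 0:
            h -= p * log(p)
        p *= lam / k
    return h

ps = [0.10, 0.05, 0.08, 0.02]   # small, non-identical success probabilities
gap = abs(entropy(bernoulli_sum_pmf(ps)) - poisson_entropy(sum(ps)))
```

For small success probabilities the two entropies are close, in line with the spirit of the bounds derived in the paper.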
|
1207.0437
|
Ordinal and Cardinal Dendrograms Depicting Migration-Based
Regionalization of 3,000 + U. S. Counties
|
physics.soc-ph cs.SI stat.AP
|
We have obtained a "hierarchical regionalization" of 3,107 county-level units
of the United States based upon census-recorded 1995-2000 intercounty migration
flows. The methodology employed was the two-stage (double-standardization and
strong component [directed graph] hierarchical clustering) algorithm described
in the 2009 PNAS (106 [26], E66) letter (arXiv:0904.4863). Various features
(e.g., cosmopolitan vs. provincial aspects, and indices of isolation) of the
regionalization have been previously discussed in arXiv:0907.2393,
arXiv:0903.3623 and arXiv:0809.2768. However, due to the lengthy (38-page)
nature of the associated dendrogram, the detailed tree structure itself was not
readily available for inspection. Here, we do present this (county-searchable)
dendrogram--and invite readers to explore it, based on their particular
interests/locations. An ordinal scale--rather than the originally-derived
cardinal scale of the doubly-standardized values--in which groupings/features
were more immediately apparent, was originally presented. Now, we append the
cardinal-scale dendrogram.
|
1207.0446
|
Medical Documents Classification Based on the Domain Ontology MeSH
|
cs.IR
|
This paper addresses the problem of classifying web documents using domain
ontology. Our goal is to provide a method for improving the classification of
medical documents by exploiting the MeSH thesaurus (Medical Subject Headings)
which will allow us to generate a new representation based on concepts. This
approach was tested with two well-known data mining algorithms C4.5 and KNN,
and a comparison was made with the usual representation using stems. The
enrichment of vectors using the concepts and the hyperonyms drawn from the
domain ontology has significantly boosted their representation, something
essential for good classification. The results of our experiments on the
benchmark biomedical collection Ohsumed confirm the importance of the
approach, with the ontology-based classification improving performance over
the classical representation (stems) by a very significant 30%.
|
1207.0484
|
Random Subcarrier Allocation in OFDM-Based Cognitive Radio Networks
|
cs.IT math.IT math.PR math.ST stat.TH
|
This paper investigates the performance of an orthogonal frequency-division
multiplexing (OFDM)-based cognitive radio (CR) spectrum sharing communication
system that assumes random allocation and absence of the primary user's (PU)
channel occupation information, i.e., no spectrum sensing is employed to
acquire information about the availability of unused subcarriers. In case of a
single secondary user (SU) in the secondary network, due to the lack of
information about the PUs' activities, the SU randomly allocates subcarriers
of the primary network and collides with the PUs' subcarriers with a certain
probability. To maintain the quality-of-service (QoS) requirement of the PUs,
the interference that the SU causes to the PUs is controlled by keeping the
SU's transmit power below a predefined threshold, referred to as the
interference temperature. In this work, the average capacity of the SU with
subcarrier collisions is employed as a performance measure to investigate the
proposed random allocation scheme for
both general and Rayleigh channel fading models. Bounds and scaling laws of
average capacity with respect to the number of SU's, PUs' and available
subcarriers are derived. In addition, in the presence of multiple SUs, the
multiuser diversity gain of SUs assuming an opportunistic scheduling is also
investigated. To avoid the interference at the SUs that might be caused by the
random allocation scheme and obtain the maximum sum rate for SUs based on the
available subcarriers, an efficient centralized sequential algorithm based on
the opportunistic scheduling and random allocation (utilization) methods is
proposed to ensure the orthogonality of assigned subcarriers.
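The collision mechanism in the single-SU case can be made concrete with a small combinatorial sketch (variable names and numbers are ours): if the SU picks k of N subcarriers uniformly at random and the PUs occupy m of them, the number of collisions is hypergeometric with mean km/N.

```python
from math import comb

def collision_pmf(N, m, k):
    """P(exactly c of the SU's k random subcarriers hit the m
    PU-occupied ones): a hypergeometric distribution."""
    return [comb(m, c) * comb(N - m, k - c) / comb(N, k)
            for c in range(min(m, k) + 1)]

def expected_collisions(N, m, k):
    # mean of the hypergeometric distribution above
    return k * m / N

N, m, k = 16, 4, 6                      # hypothetical sizes
pmf = collision_pmf(N, m, k)
```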
|
1207.0543
|
Rate-splitting in the presence of multiple receivers
|
cs.IT math.IT
|
In the presence of multiple senders, one of the simplest decoding strategies
that can be employed by a receiver is successive decoding. In a successive
decoding strategy, the receiver decodes the messages one at a time using the
knowledge of the previously decoded messages as side information. Recently,
there have been two separate attempts to construct codes for the interference
channel using successive decoding based on the idea of rate-splitting.
In this note, we highlight a difficulty that arises when a rate-splitting
codebook is to be decoded by multiple receivers. The main issue is that the
rates of the split codebook are tightly coupled to the properties of the
channel to the receiver, thus, rates chosen for one of the receivers may not be
decodable for the other. We illustrate this issue by scrutinizing two recent
arguments claiming to achieve the Han-Kobayashi rate region for the
interference channel using rate-splitting and successive decoding.
|
1207.0554
|
Proceedings First Workshop on Synthesis
|
cs.LO cs.FL cs.SE cs.SY
|
This volume contains the proceedings of the First Workshop on Synthesis (SYNT
2012). The workshop is held in Berkeley, California, on June 6th and
7th, as a satellite event to the 24th International Conference on Computer
Aided Verification (CAV 2012). SYNT aims at bringing together and providing an
open platform for researchers interested in synthesis.
|
1207.0557
|
Distributed Dynamic Inter-Cell Interference Management for Femtocell
Networks Using Over-the-Air Single-Tone Signaling
|
cs.IT math.IT
|
Femtocell networks are promising for not only improving the coverage but also
increasing the capacity of current cellular networks. The interference-limited
reality in femtocell networks makes interference management (IM) the key to
maintaining the quality of service and fairness in femtocell networks.
Over-the-air signaling is one of the most effective means for fast distributed
dynamic IM. However, the design of this type of signal is challenging. In this
paper, we address the challenges and propose an effective solution, referred to
as single-tone signaling (STS). The proposed STS scheme possesses many highly
desirable properties, such as no dedicated resource requirement (no system
overhead), no near-far effect, no inter-signal interference, and immunity to
synchronization error. In addition, the proposed STS signal provides a means
for high quality wideband channel estimation for the use of coordinated
techniques, such as coordinated beamforming. Based on the proposed STS, two
distributed dynamic IM schemes, ON/OFF power control and SLNR
(signal-to-leakage-plus-noise-ratio)-based transmitter beam coordination, are
proposed. Simulation results show significant performance improvement as a
result of the use of STS-based IM schemes.
|
1207.0560
|
Algorithms for Approximate Minimization of the Difference Between
Submodular Functions, with Applications
|
cs.DS cs.LG
|
We extend the work of Narasimhan and Bilmes [30] for minimizing set functions
representable as a difference between submodular functions. Similar to [30],
our new algorithms are guaranteed to monotonically reduce the objective
function at every step. We empirically and theoretically show that the
per-iteration cost of our algorithms is much less than [30], and our algorithms
can be used to efficiently minimize a difference between submodular functions
under various combinatorial constraints, a problem not previously addressed. We
provide computational bounds and a hardness result on the multiplicative
inapproximability of minimizing the difference between submodular functions. We
show, however, that it is possible to give worst-case additive bounds by
providing a polynomial time computable lower-bound on the minima. Finally we
show how a number of machine learning problems can be modeled as minimizing the
difference between submodular functions. We experimentally show the validity of
our algorithms by testing them on the problem of feature selection with
submodular cost features.
|
1207.0561
|
Suicide ideation of individuals in online social networks
|
cs.SI physics.soc-ph
|
Suicide accounts for the largest number of deaths among Japanese adolescents
in their twenties and thirties. Suicide is also a major cause of death for
adolescents in many other countries. Although social isolation has been
implicated to influence the tendency to suicidal behavior, the impact of social
isolation on suicide in the context of explicit social networks of individuals
is scarcely explored. To address this question, we examined a large data set
obtained from a social networking service dominant in Japan. The social network
is composed of a set of friendship ties between pairs of users created by
mutual endorsement. We carried out the logistic regression to identify users'
characteristics, both related and unrelated to social networks, which
contribute to suicide ideation. We defined suicide ideation of a user as
membership in at least one active user-defined community related to suicide.
We found that the number of communities to which a user belongs, the
intransitivity (i.e., the paucity of triangles including the user), and the
fraction of suicidal neighbors in the social network contributed the most to
suicide ideation, in this order. Other characteristics, including age and
gender contributed little to suicide ideation. We also found qualitatively the
same results for depressive symptoms.
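The intransitivity measure used above is one minus the usual local clustering coefficient; a minimal sketch (the toy adjacency structure and names are ours):

```python
def local_clustering(adj, u):
    """Fraction of pairs of u's neighbors that are themselves friends;
    intransitivity is high when this value is low."""
    nbrs = list(adj[u])
    k = len(nbrs)
    if k < 2:
        return 0.0
    closed = sum(1 for i in range(k) for j in range(i + 1, k)
                 if nbrs[j] in adj[nbrs[i]])
    return 2.0 * closed / (k * (k - 1))

# a triangle (0-1-2) plus a pendant node 3 attached to node 0
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
```

Node 0 sits in one closed triangle out of its three neighbor pairs, so its clustering is 1/3 and its intransitivity 2/3.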
|
1207.0563
|
Kron Reduction of Generalized Electrical Networks
|
cs.SY math.OC
|
Kron reduction is used to simplify the analysis of multi-machine power
systems under certain steady-state assumptions that underlie the usage of
phasors. In this paper we show how to perform Kron reduction for a class of
electrical networks without steady state assumptions. The reduced models can
thus be used to analyze the transient as well as the steady state behavior of
these electrical networks.
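In the phasor setting, Kron reduction amounts to a Schur complement of the network Laplacian with respect to the eliminated interior nodes; a sketch via node-by-node Gaussian elimination (the example network is ours):

```python
def kron_reduce(L, keep):
    """Schur complement of the Laplacian L onto the nodes in `keep`,
    eliminating the remaining (interior) nodes one at a time."""
    n = len(L)
    L = [row[:] for row in L]
    for e in [i for i in range(n) if i not in keep]:
        for i in range(n):
            for j in range(n):
                if i != e and j != e:
                    L[i][j] -= L[i][e] * L[e][j] / L[e][e]
        for i in range(n):
            L[i][e] = L[e][i] = 0.0
    return [[L[i][j] for j in keep] for i in keep]

# star network: nodes 0, 1, 2 are terminals, node 3 is an interior node
# connected to each terminal with unit conductance
L = [[1.0, 0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0, -1.0],
     [0.0, 0.0, 1.0, -1.0],
     [-1.0, -1.0, -1.0, 3.0]]
Lred = kron_reduce(L, [0, 1, 2])
```

The reduced matrix is again a Laplacian (rows sum to zero), here of a triangle with conductance 1/3 on each edge: the classical star-to-delta transformation.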
|
1207.0577
|
Robust Dequantized Compressive Sensing
|
stat.ML cs.LG
|
We consider the reconstruction problem in compressed sensing in which the
observations are recorded in a finite number of bits. They may thus contain
quantization errors (from being rounded to the nearest representable value) and
saturation errors (from being outside the range of representable values). Our
formulation has an objective of weighted $\ell_2$-$\ell_1$ type, along with
constraints that account explicitly for quantization and saturation errors, and
is solved with an augmented Lagrangian method. We prove a consistency result
for the recovered solution, stronger than those that have appeared to date in
the literature, showing in particular that asymptotic consistency can be
obtained without oversampling. We present extensive computational comparisons
with formulations proposed previously, and variants thereof.
|
1207.0578
|
Parameterized Runtime Analyses of Evolutionary Algorithms for the
Euclidean Traveling Salesperson Problem
|
cs.NE cs.DS
|
Parameterized runtime analysis seeks to understand the influence of problem
structure on algorithmic runtime. In this paper, we contribute to the
theoretical understanding of evolutionary algorithms and carry out a
parameterized analysis of evolutionary algorithms for the Euclidean traveling
salesperson problem (Euclidean TSP).
We investigate the structural properties in TSP instances that influence the
optimization process of evolutionary algorithms and use this information to
bound the runtime of simple evolutionary algorithms. Our analysis studies the
runtime in dependence of the number of inner points $k$ and shows that $(\mu +
\lambda)$ evolutionary algorithms solve the Euclidean TSP in expected time
$O((\mu/\lambda) \cdot n^3\gamma(\epsilon) + n\gamma(\epsilon) + (\mu/\lambda)
\cdot n^{4k}(2k-1)!)$ where $\gamma$ is a function of the minimum angle
$\epsilon$ between any three points.
Finally, our analysis provides insights into designing a mutation operator
that improves the upper bound on expected runtime. We show that a mixed
mutation strategy that incorporates both 2-opt moves and permutation jumps
results in an upper bound of $O((\mu/\lambda) \cdot n^3\gamma(\epsilon) +
n\gamma(\epsilon) + (\mu/\lambda) \cdot n^{2k}(k-1)!)$ for the $(\mu+\lambda)$
EA.
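A minimal sketch of the mixed mutation operator (the tour representation and names are ours): a 2-opt move reverses a tour segment, a permutation jump moves one city to a new position, and the operator picks between them at random.

```python
import random

def tour_length(points, tour):
    """Euclidean length of a closed tour over indices into `points`."""
    return sum(((points[tour[i]][0] - points[tour[i - 1]][0]) ** 2 +
                (points[tour[i]][1] - points[tour[i - 1]][1]) ** 2) ** 0.5
               for i in range(len(tour)))

def two_opt(tour, i, j):
    """Reverse the segment tour[i..j] (a 2-opt move)."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def jump(tour, i, j):
    """Remove the city at position i and reinsert it at position j."""
    t = tour[:i] + tour[i + 1:]
    t.insert(j, tour[i])
    return t

def mixed_mutation(tour, rng):
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return two_opt(tour, i, j) if rng.random() < 0.5 else jump(tour, i, j)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
crossing = [0, 2, 1, 3]                 # self-intersecting tour
uncrossed = two_opt(crossing, 1, 2)     # removes the crossing
```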
|
1207.0580
|
Improving neural networks by preventing co-adaptation of feature
detectors
|
cs.NE cs.CV cs.LG
|
When a large feedforward neural network is trained on a small training set,
it typically performs poorly on held-out test data. This "overfitting" is
greatly reduced by randomly omitting half of the feature detectors on each
training case. This prevents complex co-adaptations in which a feature detector
is only helpful in the context of several other specific feature detectors.
Instead, each neuron learns to detect a feature that is generally helpful for
producing the correct answer given the combinatorially large variety of
internal contexts in which it must operate. Random "dropout" gives big
improvements on many benchmark tasks and sets new records for speech and object
recognition.
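The mechanism can be sketched in a few lines (names are ours; scaling by the keep-probability at test time is one standard way to match expected activations):

```python
import random

def dropout(activations, p_drop, rng, train=True):
    """During training, zero each unit independently with probability
    p_drop; at test time keep every unit and scale by (1 - p_drop) so
    that expected activations match the training regime."""
    if not train:
        return [a * (1.0 - p_drop) for a in activations]
    return [0.0 if rng.random() < p_drop else a for a in activations]

rng = random.Random(0)
out = dropout([1.0] * 10000, 0.5, rng)            # roughly half zeroed
test_out = dropout([2.0, 4.0], 0.5, rng, train=False)
```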
|
1207.0639
|
Joint Source-Channel Coding for the Multiple-Access Relay Channel
|
cs.IT math.IT
|
Reliable transmission of arbitrarily correlated sources over multiple-access
relay channels (MARCs) and multiple-access broadcast relay channels (MABRCs) is
considered. In MARCs, only the destination is interested in a reconstruction of
the sources, while in MABRCs, both the relay and the destination want to
reconstruct the sources. We allow an arbitrary correlation among the sources at
the transmitters, and let both the relay and the destination have side
information that is correlated with the sources.
Two joint source-channel coding schemes are presented and the corresponding
sets of sufficient conditions for reliable communication are derived. The
proposed schemes use a combination of the correlation preserving mapping (CPM)
technique with Slepian-Wolf (SW) source coding: the first scheme uses CPM for
encoding information to the relay and SW source coding for encoding information
to the destination; while the second scheme uses SW source coding for encoding
information to the relay and CPM for encoding information to the destination.
|
1207.0658
|
On the origin of long-range correlations in texts
|
physics.data-an cs.CL physics.soc-ph
|
The complexity of human interactions with social and natural phenomena is
mirrored in the way we describe our experiences through natural language. In
order to retain and convey such high-dimensional information, the statistical
properties of our linguistic output have to be highly correlated in time. An
example is the robust observation, still largely not understood, of
correlations on arbitrarily long scales in literary texts. In this paper we
explain how long-range correlations flow from highly structured linguistic
levels down to the building blocks of a text (words, letters, etc.). By
combining calculations and data analysis we show that correlations take the
form of a bursty sequence of events once we approach the semantically relevant
topics of the text. The mechanisms we identify are fairly general and can be
equally applied to other hierarchical settings.
|
1207.0677
|
Local Water Diffusion Phenomenon Clustering From High Angular Resolution
Diffusion Imaging (HARDI)
|
cs.LG cs.CV
|
The understanding of neurodegenerative diseases undoubtedly passes through
the study of human brain white matter fiber tracts. To date, diffusion magnetic
resonance imaging (dMRI) is the only technique able to obtain information about
the neural architecture of the human brain, thus permitting the study of white
matter connections and their integrity. However, a remaining challenge of the
dMRI community is to better characterize complex fiber crossing configurations,
where diffusion tensor imaging (DTI) is limited but high angular resolution
diffusion imaging (HARDI) now brings solutions. This paper investigates the
development of both an identification and a classification process for the
local water diffusion phenomenon, based on HARDI data, to automatically detect
imaging voxels containing single and crossing fiber bundle populations. The
technique is based on knowledge extraction processes and is validated on a dMRI
phantom dataset with ground truth.
|
1207.0689
|
The challenges of statistical patterns of language: the case of
Menzerath's law in genomes
|
q-bio.GN cs.CE physics.data-an
|
The importance of statistical patterns of language has been debated for
decades. Although Zipf's law is perhaps the most popular case, Menzerath's law
has recently begun to attract attention as well. Menzerath's law manifests in
language, music and genomes as a tendency of the mean size of the parts to
decrease as the number of parts increases in many situations. This statistical
regularity emerges also in the context of genomes, for instance, as a tendency
of species with more chromosomes to have a smaller mean chromosome size. It has
been argued that the instantiation of this law in genomes is not indicative of
any parallel between language and genomes because (a) the law is inevitable and
(b) non-coding DNA dominates genomes. Here mathematical, statistical and
conceptual challenges of these criticisms are discussed. Two major conclusions
are drawn: the law is not inevitable and languages also have a correlate of
non-coding DNA. However, the wide range of manifestations of the law in and
outside genomes suggests that the striking similarities between non-coding DNA
and certain linguistics units could be anecdotal for understanding the
recurrence of that statistical law.
|
1207.0702
|
Meme as Building Block for Evolutionary Optimization of Problem
Instances
|
cs.NE
|
A significantly under-explored area of evolutionary optimization in the
literature is the study of optimization methodologies that can evolve along
with the problems solved. Particularly, present evolutionary optimization
approaches generally start their search from scratch or the ground-zero state
of knowledge, independent of how similar the given new problem of interest is
to those optimized previously. There has thus been an apparent lack of
automated knowledge transfer and reuse across problems. Taking the cue, this
paper introduces a novel Memetic Computational Paradigm for search, one
modeled after how humans solve problems, and embarks on a study towards
intelligent evolutionary optimization of problems through the transfers of
structured knowledge in the form of memes learned from previous problem-solving
experiences, to enhance future evolutionary searches. In particular, the
proposed memetic search paradigm is composed of four culture-inspired
operators, namely, Meme Learning, Meme Selection, Meme Variation and Meme
Imitation. The learning operator mines for memes in the form of latent
structures derived from past experiences of problem-solving. The selection
operator identifies the fit memes that replicate and transmit across problems,
while the variation operator introduces innovations into the memes. The
imitation operator, on the other hand, defines how fit memes assimilate into
the search process of newly encountered problems, thus gearing towards
efficient and effective evolutionary optimization. Finally, comprehensive
studies on two well-established, challenging NP-hard routing problem domains,
namely capacitated vehicle routing (CVR) and
capacitated arc routing (CAR), confirm the high efficacy of the proposed
memetic computational search paradigm for intelligent evolutionary optimization
of problems.
|
1207.0704
|
Speckle Reduction using Stochastic Distances
|
cs.IT cs.CV cs.GR math.IT stat.AP stat.ML
|
This paper presents a new approach for filter design based on stochastic
distances and tests between distributions. A window is defined around each
pixel, samples are compared and only those which pass a goodness-of-fit test
are used to compute the filtered value. The technique is applied to intensity
Synthetic Aperture Radar (SAR) data, using the Gamma model with a varying
number of looks, thus allowing for changes in heterogeneity. Modified
Nagao-Matsuyama windows are used to define the samples. The proposal is
compared with Lee's filter, which is considered a standard, using a protocol
based on simulation.
Among the criteria used to quantify the quality of filters, we employ the
equivalent number of looks (related to the signal-to-noise ratio), line
contrast, and edge preservation. Moreover, we also assessed the filters by the
Universal Image Quality Index and the Pearson's correlation between edges.
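The select-then-average idea in this abstract can be sketched as follows. This is a simplified single-pixel version under our own assumptions: square corner sub-windows stand in for the Nagao-Matsuyama shapes, and a two-sample Kolmogorov-Smirnov test replaces the paper's stochastic distances; sub-windows also overlap the centre here. All names and parameters are illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

def filtered_value(img, i, j, half=3, sub=2, alpha=0.10):
    """Filter one pixel: compare each corner sub-window of the
    (2*half+1)^2 neighbourhood with the central sub-window via a
    two-sample KS goodness-of-fit test; average only samples that pass."""
    center = img[i - sub:i + sub + 1, j - sub:j + sub + 1].ravel()
    pooled = [center]
    # four corner sub-windows of the larger neighbourhood
    corners = [(i - half, j - half), (i - half, j + half - 2 * sub),
               (i + half - 2 * sub, j - half),
               (i + half - 2 * sub, j + half - 2 * sub)]
    for (a, b) in corners:
        s = img[a:a + 2 * sub + 1, b:b + 2 * sub + 1].ravel()
        if ks_2samp(center, s).pvalue > alpha:  # plausibly same distribution
            pooled.append(s)
    return float(np.concatenate(pooled).mean())

rng = np.random.default_rng(2)
img = rng.gamma(4.0, 0.25, size=(21, 21))  # homogeneous speckled patch, mean 1
v = filtered_value(img, 10, 10)
```

On a homogeneous Gamma-speckled patch most sub-windows pass the test and the estimate approaches the local mean; near an edge, sub-windows from the other side fail the test and are excluded, which is how edge preservation arises.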
|
1207.0739
|
A Universal Model of Global Civil Unrest
|
physics.soc-ph cs.SI nlin.AO
|
Civil unrest is a powerful form of collective human dynamics, which has led
to major transitions of societies in modern history. The study of collective
human dynamics, including collective aggression, has been the focus of much
discussion in the context of modeling and identification of universal patterns
of behavior. In contrast, the possibility that civil unrest activities, across
countries and over long time periods, are governed by universal mechanisms has
not been explored. Here, we analyze records of civil unrest of 170 countries
during the period 1919-2008. We demonstrate that the distributions of the
number of unrest events per year are robustly reproduced by a nonlinear,
spatially extended dynamical model, which reflects the spread of civil disorder
between geographic regions connected through social and communication networks.
The results also expose the similarity between global social instability and
the dynamics of natural hazards and epidemics.
|
1207.0742
|
The OS* Algorithm: a Joint Approach to Exact Optimization and Sampling
|
cs.AI cs.CL cs.LG
|
Most current sampling algorithms for high-dimensional distributions are based
on MCMC techniques and are approximate in the sense that they are valid only
asymptotically. Rejection sampling, on the other hand, produces valid samples,
but is unrealistically slow in high-dimensional spaces. The OS* algorithm that we
propose is a unified approach to exact optimization and sampling, based on
incremental refinements of a functional upper bound, which combines ideas of
adaptive rejection sampling and of A* optimization search. We show that the
choice of the refinement can be done in a way that ensures tractability in
high-dimensional spaces, and we present first experiments in two different
settings: inference in high-order HMMs and in large discrete graphical models.
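The core mechanism, rejection sampling from an upper bound that is tightened wherever a rejection occurs, can be shown on a toy finite domain (the paper works with structured high-dimensional spaces and A*-style functional bounds; this miniature is our own simplification).

```python
import numpy as np

def os_star_sample(p, n_samples, rng=None):
    """Exact sampling from an unnormalized finite distribution p by
    rejection from an upper bound q >= p; the bound is refined
    (tightened to p(x)) at every rejected point, OS*-style."""
    rng = rng or np.random.default_rng()
    p = np.asarray(p, dtype=float)
    q = np.full_like(p, p.max())  # initial crude upper bound
    out = []
    while len(out) < n_samples:
        x = rng.choice(len(p), p=q / q.sum())  # propose from the bound
        if rng.random() < p[x] / q[x]:         # exact accept test
            out.append(int(x))
        else:
            q[x] = p[x]  # refine the bound where it was loose
    return out

samples = os_star_sample([1.0, 2.0, 3.0, 4.0], 4000,
                         rng=np.random.default_rng(0))
```

Accepted samples are exactly distributed as p regardless of q, as long as q dominates p pointwise; refinement only raises the acceptance rate over time, which is what makes the approach both exact and increasingly efficient.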
|
1207.0757
|
Generalized Statistical Complexity of SAR Imagery
|
cs.IT cs.GR math.IT stat.AP stat.ML
|
A new generalized Statistical Complexity Measure (SCM) was proposed by Rosso
et al. in 2010. It is a functional that captures the notions of order/disorder
and of distance to an equilibrium distribution. The former is computed by a
measure of entropy, while the latter depends on the definition of a stochastic
divergence. When the scene is illuminated by coherent radiation, image data is
corrupted by speckle noise, as is the case of ultrasound-B, sonar, laser and
Synthetic Aperture Radar (SAR) sensors. In the amplitude and intensity formats,
this noise is multiplicative and non-Gaussian, thus requiring specialized
techniques for image processing and understanding. One of the most successful
families of models for describing these images is the Multiplicative Model,
which leads, among other probability distributions, to the G0 law. This
distribution
has been validated in the literature as an expressive and tractable model,
deserving the "universal" denomination for its ability to describe most types
of targets. In order to compute the statistical complexity of a site in an
image corrupted by speckle noise, we assume that the equilibrium distribution
is that of fully developed speckle, namely the Gamma law in intensity format,
which appears in areas with little or no texture. We use the Shannon entropy
along with the Hellinger distance to measure the statistical complexity of
intensity SAR images, and we show that it is an expressive feature capable of
identifying many types of targets.
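A histogram-based sketch of the complexity measure described above: normalized Shannon entropy times the Hellinger distance to an equilibrium (fully developed speckle) sample. The binning, normalization, and toy Gamma parameters are our assumptions, not the paper's estimator.

```python
import numpy as np

def statistical_complexity(sample, equilibrium_sample, bins=32):
    """Generalized SCM as (normalized Shannon entropy) x (Hellinger
    distance to the equilibrium distribution), both estimated from
    histograms on a common support."""
    lo = min(sample.min(), equilibrium_sample.min())
    hi = max(sample.max(), equilibrium_sample.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(sample, bins=edges)
    q, _ = np.histogram(equilibrium_sample, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    h = -np.sum(p[p > 0] * np.log(p[p > 0])) / np.log(bins)  # in [0, 1]
    hellinger = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
    return h * hellinger

rng = np.random.default_rng(1)
speckle = rng.gamma(4.0, 0.25, 10_000)   # equilibrium: fully developed speckle
textured = rng.gamma(1.5, 1.0, 10_000)   # heterogeneous, textured return
c_textured = statistical_complexity(textured, speckle)
c_homogeneous = statistical_complexity(rng.gamma(4.0, 0.25, 10_000), speckle)
```

Textured areas sit far from the speckle equilibrium and score high, while homogeneous areas score near zero, which is what makes the feature discriminative for target identification.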
|
1207.0771
|
Polarimetric SAR Image Smoothing with Stochastic Distances
|
cs.IT cs.CV cs.GR math.IT stat.AP stat.ML
|
Polarimetric Synthetic Aperture Radar (PolSAR) images are becoming established
as an important source of information in remote sensing applications. The most
complete format this type of imaging produces consists of complex-valued
Hermitian matrices in every image coordinate and, as such, their visualization
is challenging. They also suffer from speckle noise which reduces the
signal-to-noise ratio. Smoothing techniques have been proposed in the
literature aiming at preserving different features and, analogously,
projections from the cone of Hermitian positive matrices to different color
representation spaces are used for enhancing certain characteristics. In this
work we propose the use of stochastic distances between models that describe
this type of data in a Nagao-Matsuyama-type smoothing technique. The
resulting images are shown to present good visualization properties (noise
reduction with preservation of fine details) in all the considered
visualization spaces.
|
1207.0782
|
Polar write once memory codes
|
cs.IT math.IT
|
A coding scheme for write once memory (WOM) using polar codes is presented.
It is shown that the scheme achieves the capacity region of noiseless WOMs when
an arbitrary number of multiple writes is permitted. The encoding and decoding
complexities scale as O(N log N) where N is the blocklength. For N sufficiently
large, the error probability decreases sub-exponentially in N. The results can
be generalized from binary to generalized WOMs, described by an arbitrary
directed acyclic graph, using nonbinary polar codes. In the derivation we also
obtain results on the typical distortion of polar codes for lossy source
coding. Some simulation results with finite length codes are presented.
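The polar construction itself is involved, but the write-once memory model it achieves capacity for is easy to illustrate. Below is not the paper's scheme but the classic Rivest-Shamir code storing 2 bits in 3 binary write-once cells, twice, with cells only ever flipping 0 to 1.

```python
# First-generation codewords: message -> a cell pattern of weight <= 1.
FIRST = {0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 1, 0), 3: (1, 0, 0)}

def decode(state):
    """Weight <= 1 states are first-generation codewords; heavier states
    are second-generation: complement, then read off the first table."""
    inv = {v: k for k, v in FIRST.items()}
    if sum(state) <= 1:
        return inv[state]
    return inv[tuple(1 - b for b in state)]

def write(state, msg):
    """Write msg in {0,1,2,3}; cells may only flip 0 -> 1 (two writes)."""
    if decode(state) == msg:
        return state                        # already stored, change nothing
    if state == (0, 0, 0):
        return FIRST[msg]                   # first write
    new = tuple(1 - b for b in FIRST[msg])  # second write: complement codeword
    assert all(n >= s for n, s in zip(new, state)), "write-once violated"
    return new

s1 = write((0, 0, 0), 2)  # first write stores message 2
s2 = write(s1, 1)         # second write stores message 1 on top of it
```

Two writes of 2 bits each through 3 cells give rate 4/3 > 1, already beating one-shot storage; the polar scheme in the paper generalizes this to achieve the full capacity region with O(N log N) encoding and decoding.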
|
1207.0783
|
Hybrid Template Update System for Unimodal Biometric Systems
|
cs.LG
|
Semi-supervised template update systems make it possible to automatically take
into account the intra-class variability of biometric data over time. Such
systems can be inefficient by including too many impostor samples or skipping
too many genuine samples. In the first case, the biometric reference drifts
from the real biometric data and attracts impostors more often. In the second
case, the biometric reference does not evolve quickly enough and also
progressively drifts from the real biometric data. We propose a hybrid system
using several biometric sub-references in order to increase the performance of
self-update systems by reducing the previously cited errors. The proposition is
validated for a keystroke-dynamics authentication system (this modality
suffers from high variability over time) on two consecutive datasets from the
state of the art.
|
1207.0784
|
Web-Based Benchmark for Keystroke Dynamics Biometric Systems: A
Statistical Analysis
|
cs.LG
|
Most keystroke dynamics studies have been evaluated using a specific kind of
dataset in which users type an imposed login and password. Moreover, these
studies are optimistic since most of them use different acquisition protocols,
private datasets, controlled environments, etc. In order to enhance the
accuracy of keystroke dynamics' performance, the main contribution of this
paper is twofold. First, we provide a new kind of dataset in which users have
typed both imposed and chosen pairs of logins and passwords. In addition, the
keystroke dynamics samples are collected in a web-based uncontrolled
environment (OS, keyboards, browser, etc.). Such a dataset is important since
it provides more realistic results on keystroke dynamics' performance in
comparison to the literature (controlled environment, etc.). Second, we
present a statistical analysis of well-known assertions such as the
relationship between performance and password size, the impact of fusion
schemes on overall system performance, and others such as the relationship
between performance and entropy. We highlight in this paper some new results
on keystroke dynamics in realistic conditions.
|
1207.0788
|
On generalized terminal state constraints for model predictive control
|
cs.SY math.OC
|
This manuscript contains technical results related to a particular approach
for the design of Model Predictive Control (MPC) laws. The approach, named
"generalized" terminal state constraint, induces the recursive feasibility of
the underlying optimization problem and recursive satisfaction of state and
input constraints, and it can be used for both tracking MPC (i.e. when the
objective is to track a given steady state) and economic MPC (i.e. when the
objective is to minimize a cost function which does not necessarily attain its
minimum at a steady state). It is shown that the proposed technique provides,
in general, a larger feasibility set with respect to existing approaches, given
the same computational complexity. Moreover, a new receding horizon strategy is
introduced, exploiting the generalized terminal state constraint. Under mild
assumptions, the new strategy is guaranteed to converge in finite time, with
arbitrarily good accuracy, to an MPC law with an optimally-chosen terminal
state constraint, while still enjoying a larger feasibility set. The features
of the new technique are illustrated by three examples.
|
1207.0805
|
Anatomical Structure Segmentation in Liver MRI Images
|
cs.CV
|
Segmentation of medical images is a challenging task owing to their
complexity. A standard segmentation problem within Magnetic Resonance Imaging
(MRI) is the task of labeling voxels according to their tissue type. Image
segmentation provides volumetric quantification of liver area and thus helps in
the diagnosis of disorders such as hepatitis, cirrhosis, jaundice,
hemochromatosis, etc. This work compares segmentation results obtained by
applying the Level Set Method, the Fuzzy Level Information C-Means Clustering
Algorithm, and the Gradient Vector Flow Snake Algorithm. The results are
compared using parameters such as the number of pixels correctly classified
and the percentage of area segmented.
|
1207.0833
|
Relational Data Mining Through Extraction of Representative Exemplars
|
cs.AI cs.IR stat.ML
|
With the growing interest in Network Analysis, Relational Data Mining is
becoming an emphasized domain of Data Mining. This paper addresses the problem
of extracting representative elements from a relational dataset. After defining
the notion of degree of representativeness, computed using the Borda
aggregation procedure, we present the extraction of exemplars which are the
representative elements of the dataset. We use these concepts to build a
network on the dataset. We expose the main properties of these notions and we
propose two typical applications of our framework. The first application
consists in summarizing and structuring a set of binary images, and the second
in mining the co-authorship relation in a research team.
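The Borda-based degree of representativeness can be sketched as follows: each element acts as a "voter" ranking all elements by similarity to itself, and the ranks are aggregated by Borda count. The similarity function and data here are our own toy choices.

```python
import numpy as np

def borda_representativeness(sim):
    """Degree of representativeness via Borda aggregation: each element
    ranks every element by similarity to itself; an element's Borda score
    sums, over all voters, the points (n-1 ... 0) earned by its rank."""
    n = sim.shape[0]
    scores = np.zeros(n)
    for voter in range(n):
        order = np.argsort(-sim[voter])        # most similar first
        for points, candidate in enumerate(order[::-1]):
            scores[candidate] += points        # last place earns 0 points
    return scores

# a cluster of four nearby points plus one outlier
X = np.array([[0.0], [1.0], [2.0], [3.0], [50.0]])
sim = -np.abs(X - X.T)                         # similarity = negative distance
scores = borda_representativeness(sim)
exemplar = int(np.argmax(scores))              # a central cluster member
```

Elements near the "centre" of the dataset earn high ranks from many voters and emerge as exemplars, while outliers rank last for almost every voter.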
|
1207.0852
|
Counter-Factual Reinforcement Learning: How to Model Decision-Makers
That Anticipate The Future
|
cs.MA cs.GT
|
This paper introduces a novel framework for modeling interacting humans in a
multi-stage game. This "iterated semi network-form game" framework has the
following desirable characteristics: (1) Bounded rational players, (2)
strategic players (i.e., players account for one another's reward functions
when predicting one another's behavior), and (3) computational tractability
even on real-world systems. We achieve these benefits by combining concepts
from game theory and reinforcement learning. To be precise, we extend the
bounded rational "level-K reasoning" model to apply to games over multiple
stages. Our extension allows the decomposition of the overall modeling problem
into a series of smaller ones, each of which can be solved by standard
reinforcement learning algorithms. We call this hybrid approach "level-K
reinforcement learning". We investigate these ideas in a cyber battle scenario
over a smart power grid and discuss the relationship between the behavior
predicted by our model and what one might expect of real human defenders and
attackers.
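The level-K idea at the core of this framework can be illustrated in a one-shot matrix game, far simpler than the paper's multi-stage reinforcement-learning setting; the game and the default level-0 behavior are our own assumptions.

```python
import numpy as np

def level_k_action(payoff_self, payoff_opp, k, level0_action=0):
    """Level-K reasoning: a level-0 player plays a fixed default action;
    a level-k player best-responds to a level-(k-1) model of its
    opponent. Payoff matrices are indexed [own_action, opponent_action]."""
    if k == 0:
        return level0_action
    opp = level_k_action(payoff_opp, payoff_self, k - 1, level0_action)
    return int(np.argmax(payoff_self[:, opp]))

# matching pennies: the "matcher" wins on equal actions (payoff P),
# the "mismatcher" on unequal actions (payoff -P)
P = np.array([[1, -1], [-1, 1]])
```

Successive levels cycle through the actions of this game, showing how bounded-rational players at different depths of reasoning behave differently; the paper replaces the exact best response with a policy learned by standard reinforcement learning at each level.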
|