| id | title | categories | abstract |
|---|---|---|---|
1207.6600
|
Diversity in Ranking using Negative Reinforcement
|
cs.IR cs.AI cs.SI
|
In this paper, we consider the problem of diversity in ranking the nodes of
a graph: the task is to pick the top-k nodes which are both 'central' and
'diverse'. Many graph-based NLP models, such as text summarization and
opinion summarization, rely on the concept of diversity when generating
summaries. We develop a novel iterative method based on random walks to
achieve diversity. Specifically, we use negative reinforcement as the main
tool to introduce diversity into the Personalized PageRank framework.
Experiments on two benchmark datasets show that our algorithm is
competitive with existing methods.
|
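The negative-reinforcement idea in the abstract of 1207.6600 can be illustrated with a toy re-ranking loop. This is a hedged sketch, not the paper's algorithm: `pagerank`, `diverse_top_k`, the teleport handling, and the example graph are all illustrative choices. Here "negative reinforcement" is approximated by removing already-picked nodes from the teleport distribution and suppressing their scores before the next pick; every node is assumed to have at least one neighbour.

```python
def pagerank(adj, teleport, alpha=0.85, iters=50):
    """Power iteration for Personalized PageRank on an adjacency dict."""
    nodes = list(adj)
    score = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        new = {v: (1 - alpha) * teleport.get(v, 0.0) for v in nodes}
        for u in nodes:
            share = alpha * score[u] / len(adj[u])
            for v in adj[u]:
                new[v] += share
        score = new
    return score

def diverse_top_k(adj, k, alpha=0.85):
    picked = []
    for _ in range(k):
        rest = [v for v in adj if v not in picked]
        # negative reinforcement (illustrative): picked nodes lose all
        # teleport mass, so the walk restarts elsewhere in the graph
        teleport = {v: 1.0 / len(rest) for v in rest}
        score = pagerank(adj, teleport, alpha)
        for p in picked:
            score[p] = float("-inf")   # exclude already-picked nodes
        picked.append(max(score, key=score.get))
    return picked

# two loosely connected communities joined by the edge 2-3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(diverse_top_k(adj, 2))
```

The re-ranking loop re-runs the random walk after every pick, which is the "iterative fashion" the abstract describes at a high level.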
1207.6650
|
Information-Theoretic Study on Routing Path Selection in Two-Way Relay
Networks
|
cs.IT math.IT
|
Two-way relaying is a promising technique to improve network throughput.
However, how to apply it to a wireless network remains an unresolved issue.
Particularly, challenges lie in the joint design between the physical layer and
the routing protocol. Applying an existing routing protocol to a two-way relay
network can easily compromise the advantages of two-way relaying. The joint
consideration of routing path selection and two-way relaying can be
formulated as a network optimization problem, but such a problem is usually
NP-hard. In this paper, we take
a different approach to study routing path selection for two-way relay
networks. Instead of solving the joint optimization problem, we study the
fundamental characteristics of a routing path consisting of multihop two-way
relaying nodes. An information-theoretic analysis is carried out to derive
the bandwidth efficiency and energy efficiency of a routing path in a
two-way relay network. Such an analysis provides a framework for routing
path selection by
considering bandwidth efficiency, energy efficiency and latency subject to
physical layer constraints such as the transmission rate, transmission power,
path loss exponent, path length, and the number of relays. This framework
provides insightful guidelines on routing protocol design of a two-way relay
network. Our analytical framework and insights are illustrated by extensive
numerical results.
|
1207.6656
|
Measuring the Complexity of Ultra-Large-Scale Adaptive Systems
|
cs.NE cs.NI nlin.AO
|
Ultra-large scale (ULS) systems are becoming pervasive. They are inherently
complex, which makes their design and control a challenge for traditional
methods. Here we propose the design and analysis of ULS systems using measures
of complexity, emergence, self-organization, and homeostasis based on
information theory. These measures allow the evaluation of ULS systems and thus
can be used to guide their design. We evaluate the proposal with a ULS
computing system provided with adaptation mechanisms. We show the evolution of
the system with stable and also changing workload, using different fitness
functions. When the adaptive plan forces the system to converge to a predefined
performance level, the nodes may end up in highly unstable configurations,
which correspond to a high variance over time of the measured complexity.
Conversely,
if the adaptive plan is less "aggressive", the system may be more stable, but
the optimal performance may not be achieved.
|
1207.6667
|
Relay Selection for OFDM Wireless Systems under Asymmetric Information:
A Contract-Theory Based Approach
|
cs.NI cs.MA
|
Although user cooperation improves the performance of wireless systems, it
requires incentives for the potential cooperating nodes to spend their
energy acting as relays. Moreover, these potential relays are better
informed than the source about their transmission costs, which depend on
the exact channel conditions on their relay-destination links. This results
in an asymmetry of available information between the source and the relays.
In this paper, we use contract theory to tackle the problem of relay
selection under asymmetric information in an OFDM-based cooperative
wireless system that employs decode-and-forward (DF) relaying. We first
design incentive-compatible offers/contracts, consisting of a menu of
payments and desired signal-to-noise ratios (SNRs) at the destination; the
source then broadcasts this menu to nearby mobile nodes. The nearby mobile
nodes that are willing to relay notify the source of the contracts they are
willing to accept in each subcarrier. We show that when the source is under
a budget constraint, the problem of selecting relays in each subcarrier so
as to maximize capacity is a nonlinear non-separable knapsack problem. We
propose a heuristic relay selection scheme to solve this problem. We
compare the performance of our overall mechanism and the heuristic solution
with a simple relay selection scheme; selected numerical results show that
our solution performs better and is close to optimal. The overall mechanism
introduced in this paper is simple to implement, requires limited
interaction with potential relays, and hence incurs minimal signalling
overhead.
|
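The budget-constrained relay selection in 1207.6667 is described as a nonlinear non-separable knapsack problem. The snippet below is a generic greedy knapsack sketch, not the authors' heuristic: each candidate (subcarrier, relay) offer carries a payment (cost) and a capacity gain (value), and offers are accepted in order of value per cost until the budget runs out. All names and numbers are illustrative.

```python
def greedy_relay_selection(offers, budget):
    """offers: list of (subcarrier, relay, payment, capacity)."""
    chosen = {}   # subcarrier -> (relay, payment, capacity)
    spent = 0.0
    # consider the most capacity-per-payment offers first
    for sc, relay, pay, cap in sorted(offers, key=lambda o: o[3] / o[2],
                                      reverse=True):
        if sc not in chosen and spent + pay <= budget:
            chosen[sc] = (relay, pay, cap)
            spent += pay
    total_capacity = sum(c for _, _, c in chosen.values())
    return chosen, spent, total_capacity

offers = [
    (0, "r1", 2.0, 5.0), (0, "r2", 1.0, 3.0),
    (1, "r1", 3.0, 4.0), (1, "r3", 1.5, 3.5),
]
chosen, spent, cap = greedy_relay_selection(offers, budget=4.0)
print(chosen, spent, cap)
```

Greedy value-per-cost selection is a standard knapsack heuristic; it respects the per-subcarrier "at most one relay" structure but does not guarantee optimality for the non-separable problem the abstract mentions.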
1207.6677
|
Ergodic Sum Capacity of Macrodiversity MIMO Systems in Flat Rayleigh
Fading
|
cs.IT math.IT
|
The prospect of base station (BS) cooperation leading to joint combining at
widely separated antennas has led to increased interest in macrodiversity
systems, where both sources and receive antennas are geographically
distributed. In this scenario, little is known analytically about channel
capacity since the channel matrices have a very general form where each path
may have a different power. Hence, in this paper we consider the ergodic sum
capacity of a macrodiversity MIMO system with arbitrary numbers of sources and
receive antennas operating over Rayleigh fading channels. For this system, we
compute the exact ergodic capacity for a two-source system and a compact
approximation for the general system, which is shown to be very accurate over a
wide range of cases. Finally, we develop a highly simplified upper-bound which
leads to insights into the relationship between capacity and the channel
powers. Results are verified by Monte Carlo simulations, and the impact of
various channel power profiles on capacity is investigated.
|
1207.6678
|
Performance Analysis of Macrodiversity MIMO Systems with MMSE and ZF
Receivers in Flat Rayleigh Fading
|
cs.IT math.IT
|
Consider a multiuser system where an arbitrary number of users communicate
with a distributed receive array over independent Rayleigh fading paths. The
receive array performs minimum mean squared error (MMSE) or zero forcing (ZF)
combining and perfect channel state information is assumed at the receiver.
This scenario is well-known and exact analysis is possible when the receive
antennas are located in a single array. However, when the antennas are
distributed, the individual links all have different average
signal-to-noise ratios (SNRs), and this is a much more challenging problem.
In this paper, we provide approximate distributions for the output SNR of a
ZF receiver and the output signal-to-interference-plus-noise ratio (SINR)
of an MMSE receiver. In
addition, simple high SNR approximations are provided for the symbol error rate
(SER) of both receivers assuming M-PSK or M-QAM modulations. These high SNR
results provide array gain and diversity gain information as well as a
remarkably simple functional link between performance and the link powers.
|
1207.6682
|
Exploring Promising Stepping Stones by Combining Novelty Search with
Interactive Evolution
|
cs.NE
|
The field of evolutionary computation is inspired by the achievements of
natural evolution, in which there is no final objective. Yet the pursuit of
objectives is ubiquitous in simulated evolution. A significant problem is that
objective approaches assume that intermediate stepping stones will increasingly
resemble the final objective when in fact they often do not. The consequence is
that while solutions may exist, searching for such objectives may not discover
them. This paper highlights the importance of leveraging human insight during
search as an alternative to articulating explicit objectives. In particular, a
new approach called novelty-assisted interactive evolutionary computation
(NA-IEC) combines human intuition with novelty search for the first time to
facilitate the serendipitous discovery of agent behaviors. In this approach,
the human user directs evolution by selecting what is interesting from the
on-screen population of behaviors. However, unlike in typical IEC, the user can
now request that the next generation be filled with novel descendants. The
experimental results demonstrate that combining human insight with novelty
search finds solutions significantly faster and at lower genomic complexities
than fully-automated processes, including pure novelty search, suggesting an
important role for human users in the search for solutions.
|
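The novelty-search component described in 1207.6682 rests on a standard novelty metric: an individual's novelty is its mean distance to its k nearest neighbours among the current population plus an archive of past behaviours. A minimal sketch with illustrative behaviour vectors and names:

```python
def novelty(behavior, others, k=3):
    """Mean distance to the k nearest neighbours in behaviour space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(dist(behavior, o) for o in others)[:k]
    return sum(nearest) / len(nearest)

population = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
archive = [(0.05, 0.05)]
# the outlier behaviour (5, 5) scores far higher novelty than the cluster
scores = [novelty(b, [o for o in population + archive if o != b])
          for b in population]
print(scores)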
1207.6685
|
FMLtoHOL (version 1.0): Automating First-order Modal Logics with LEO-II
and Friends
|
cs.LO cs.AI
|
A converter from first-order modal logics to classical higher-order logic is
presented. This tool enables the application of off-the-shelf higher-order
theorem provers and model finders for reasoning within first-order modal
logics. The tool supports logics K, K4, D, D4, T, S4, and S5 with respect to
constant, varying and cumulative domain semantics.
|
1207.6706
|
Wireless MIMO Switching: Weighted Sum Mean Square Error and Sum Rate
Optimization
|
cs.IT math.IT
|
This paper addresses joint transceiver and relay design for a wireless
multiple-input-multiple-output (MIMO) switching scheme that enables data
exchange among multiple users. Here, a multi-antenna relay linearly precodes
the received (uplink) signals from multiple users before forwarding the signal
in the downlink, where the purpose of precoding is to let each user receive its
desired signal with interference from other users suppressed. The problem of
optimizing the precoder based on various design criteria is typically
non-convex and difficult to solve. The main contribution of this paper is a
unified approach to solve the weighted sum mean square error (MSE) minimization
and weighted sum rate maximization problems in MIMO switching. Specifically, an
iterative algorithm is proposed for jointly optimizing the relay's precoder and
the users' receive filters to minimize the weighted sum MSE. It is also shown
that the weighted sum rate maximization problem can be reformulated as an
iterated weighted sum MSE minimization problem and can therefore be solved
similarly to the case of weighted sum MSE minimization. With properly chosen
initial values, the proposed iterative algorithms are asymptotically optimal in
both high and low signal-to-noise ratio (SNR) regimes for MIMO switching,
either with or without self-interference cancellation (a.k.a., physical-layer
network coding). Numerical results show that the optimized MIMO switching
scheme based on the proposed algorithms significantly outperforms existing
approaches in the literature.
|
1207.6713
|
Model-Lite Case-Based Planning
|
cs.AI
|
There is increasing awareness in the planning community that depending on
complete models impedes the applicability of planning technology in many real
world domains where the burden of specifying complete domain models is too
high. In this paper, we consider a novel solution for this challenge that
combines generative planning on incomplete domain models with a library of plan
cases that are known to be correct. While this was arguably the original
motivation for case-based planning, most existing case-based planners assume
(and depend on) from-scratch planners that work on complete domain models. In
contrast, our approach views the plan generated with respect to the incomplete
model as a "skeletal plan" and augments it with directed mining of plan
fragments from library cases. We present the details of our approach and an
empirical evaluation of our method in comparison with a
state-of-the-art case-based planner that depends on complete domain models.
|
1207.6742
|
Low-Speed ADC Sampling Based High-Resolution Compressive Channel
Estimation
|
cs.IT math.IT
|
A broadband channel is often characterized as a sparse multipath channel
whose dominant multipath taps are widely separated in time, resulting in a
large delay spread. Traditionally, accurate channel estimation is done by
sampling the received signal with an analog-to-digital converter (ADC) at
the Nyquist rate (high-speed ADC sampling) and then estimating all channel
taps at high resolution. However, traditional linear estimation methods
have two main disadvantages: 1) they demand a high-speed ADC sampling rate
that exceeds the capability of current ADCs, and high-speed ADCs are very
expensive for regular wireless communications; 2) they neglect the inherent
channel sparsity, making low spectral efficiency unavoidable. To address
these challenges, in this paper we propose a high-resolution compressive
channel estimation method that uses low-speed ADC sampling. Our proposed
method achieves performance close to that of traditional sparse channel
estimation methods. At the same time, it has the following advantages:
1) it reduces cost by using cheap low-speed ADCs; 2) it improves spectral
efficiency by freeing up potential training signal resources. Numerical
simulations confirm the effectiveness of our proposed method with low-speed
ADC sampling.
|
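At the core of sparsity-based channel estimation schemes such as 1207.6742 is a shrinkage step; under an orthonormal-measurement assumption the LASSO solution reduces to coordinate-wise soft-thresholding, which keeps the strong (sparse) channel taps and zeroes the noise floor. A toy sketch; the threshold value and tap data are illustrative, not from the paper:

```python
def soft_threshold(x, lam):
    """Coordinate-wise LASSO shrinkage operator."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

noisy_taps = [0.9, 0.02, -0.03, -1.1, 0.01]
estimate = [soft_threshold(t, 0.1) for t in noisy_taps]
print(estimate)   # only the two dominant taps survive
```

In a real compressive estimator the measurement matrix is not orthonormal and an iterative solver (e.g. ISTA-style iterations around this operator) would be needed; the operator itself is the sparsity-enforcing ingredient.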
1207.6762
|
Cooperative Regenerating Codes
|
cs.IT math.IT
|
One of the design objectives in distributed storage systems is the
minimization of data traffic during the repair of failed storage nodes. By
repairing multiple failures simultaneously and cooperatively, further reduction
of repair traffic is made possible. A closed-form expression of the optimal
tradeoff between the repair traffic and the amount of storage in each node for
cooperative repair is given. We show that the points on the tradeoff curve can
be achieved by linear cooperative regenerating codes, with an explicit bound on
the required finite field size. The proof relies on a max-flow-min-cut-type
theorem for submodular flow from combinatorial optimization. Two families of
explicit constructions are given.
|
1207.6774
|
A Survey Of Activity Recognition And Understanding The Behavior In Video
Surveillance
|
cs.CV
|
This paper presents a review of human activity recognition and behaviour
understanding in video sequences. The key objective of this paper is to
provide a general review of the overall process of a surveillance system as
used in current practice. Visual surveillance systems are directed at the
automatic identification of events of interest, especially the tracking
and classification of moving objects. The processing pipeline of a video
surveillance system includes the following stages: surrounding modelling,
object representation, object tracking, activity recognition, and behaviour
understanding. The paper describes techniques used to define a general set
of activities applicable to a wide range of scenes and environments in
video sequences.
|
1207.6788
|
Submartingale Property of E_0 Under The Polarization Transformations
|
cs.IT math.IT
|
We prove that the relation $E_0(\rho, W^{-}) + E_0(\rho, W^{+}) \geq 2
E_0(\rho, W)$ holds for any binary input discrete memoryless channel $W$, and
$\rho \geq 0$.
|
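For context on the statement in 1207.6788: $E_0$ is Gallager's exponent function evaluated at the uniform binary input distribution, as is standard in the polarization setting (stated here for reference; the base of the logarithm is immaterial to the inequality):

```latex
E_0(\rho, W) \;=\; -\log \sum_{y \in \mathcal{Y}}
  \left( \sum_{x \in \{0,1\}} \tfrac{1}{2}\, W(y \mid x)^{\frac{1}{1+\rho}} \right)^{1+\rho}
```

The displayed inequality $E_0(\rho, W^{-}) + E_0(\rho, W^{+}) \geq 2 E_0(\rho, W)$ then says that $E_0$ behaves as a submartingale under the polar transform $W \mapsto (W^{-}, W^{+})$.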
1207.6805
|
Statistical Agent Based Modelization of the Phenomenon of Drug Abuse
|
physics.soc-ph cs.CY cs.SI
|
We introduce a statistical agent-based model to describe the phenomenon of
drug abuse and its dynamical evolution at the individual and global levels.
The agents are heterogeneous with respect to their intrinsic inclination
towards drugs, their budget attitude, and their social environment. The
various levels of drug use were inspired by the professional description of
the phenomenon, which permits a direct comparison with all available data.
We show that certain elements are very important in starting drug use, for
example rare events in personal experience that occasionally allow the
barrier to drug use to be overcome. Analyzing how the system reacts to
perturbations is very important for understanding its key elements, and it
provides strategies for effective policy making. The present model
represents a first step towards a realistic description of this phenomenon
and can easily be generalized in various directions.
|
1207.6808
|
Wireless Scheduling with Dominant Interferers and Applications to
Femtocellular Interference Cancellation
|
cs.IT cs.NI math.IT
|
We consider a general class of wireless scheduling and resource allocation
problems where the received rate in each link is determined by the actions of
the transmitter in that link along with a single dominant interferer. Such
scenarios arise in a range of scenarios, particularly in emerging femto- and
picocellular networks with strong, localized interference. For these networks,
a utility maximizing scheduler based on loopy belief propagation is presented
that enables computationally-efficient local processing and low communication
overhead. Our main theoretical result shows that the fixed points of the method
are provably globally optimal for arbitrary (potentially non-convex) rate and
utility functions. The methodology thus provides globally optimal solutions to
a large class of inter-cellular interference coordination problems including
subband scheduling, dynamic orthogonalization and beamforming whenever the
dominant interferer assumption is valid. The paper focuses on applications for
systems with interference cancellation (IC) and suggests a new scheme on
optimal rate control, as opposed to traditional power control. Simulations
on industry-standard femtocellular network models demonstrate significant
improvements in rates over simple reuse-1 without IC, and near-optimal
performance of loopy belief propagation for rate selection in only one or
two iterations.
|
1207.6814
|
Adaptive Fractal-like Network Structure for Efficient Search of
Inhomogeneously Distributed Targets at Unknown Positions
|
physics.soc-ph cs.SI math-ph math.MP
|
Since the spatial distribution of communication requests is inhomogeneous
and tracks the population, a crucial question in constructing a network is
how to position the nodes acting as base stations in a realistic wireless
environment, so that packets are delivered on short paths through links
between nearby nodes and the load is distributed across nodes. In this
paper, drawing on complex network science and biological foraging, we
propose a scalably self-organized geographical network, in which the proper
positions of nodes and the network topology are determined simultaneously,
according to the population, by iterative divisions of rectangles that
balance the load of nodes as their territories adapt. In particular, we
consider decentralized routing using only local information, and show that,
for searching targets around highly populated areas, routing on the
fractal-like structure naturally embedded by the population is more
efficient than the conventionally optimal strategy on a square lattice.
|
1207.6821
|
Proceedings 7th International Workshop on Developments of Computational
Models
|
cs.CE cs.ET
|
This volume contains the proceedings of the 7th International Workshop on
Developments in Computational Models (DCM 2011), which was held on Sunday,
July 3, 2011, in Zurich, Switzerland, as a satellite workshop of ICALP 2011.
Recently several new models of computation have emerged, for instance for
bio-computing and quantum-computing, and in addition traditional models of
computation have been adapted to accommodate new demands or capabilities of
computer systems. The aim of DCM is to bring together researchers who are
currently developing new computational models or new features for traditional
computational models, in order to foster their interaction, to provide a forum
for presenting new ideas and work in progress, and to enable newcomers to learn
about current activities in this area.
|
1207.6839
|
Three Degrees of Distance on Twitter
|
cs.SI physics.soc-ph
|
Recent work has found that the propagation of behaviors and sentiments
through networks extends in ranges up to 2 to 4 degrees of distance. The
regularity with which the same observation is found in dissimilar phenomena has
been associated with friction in the propagation process and the instability of
link structure that emerges in the dynamics of social networks. We study a
contagious behavior, the practice of retweeting, in a setting where neither
of those restrictions is present, and still find the same result.
|
1207.6862
|
Improved Channel Estimation with Partial Sparse Constraint for AF
Cooperative Communication Systems
|
cs.IT math.IT
|
Accurate channel state information (CSI) is necessary for coherent detection
in amplify-and-forward (AF) broadband cooperative communication systems. Based
on the assumption of ordinary sparse channel, efficient sparse channel
estimation methods have been investigated in our previous works. However, when
the cooperative channel exhibits partial sparse structure rather than ordinary
sparsity, our previous method cannot take advantage of the prior information.
In this paper, we propose an improved channel estimation method with partial
sparse constraint on cooperative channel. At first, we formulate channel
estimation as a compressive sensing problem and utilize sparse decomposition
theory. Secondly, the cooperative channel is reconstructed by LASSO with
partial sparse constraint. Finally, numerical simulations are carried out to
confirm the superiority of the proposed method over ordinary sparse channel
estimation methods.
|
1207.6889
|
A robust l_1 penalized DOA estimator
|
cs.IT math.IT
|
The SPS-LASSO has recently been introduced as a solution to the problem of
regularization parameter selection in the complex-valued LASSO problem.
Still, the dependence on the grid size and the polynomial time required to
perform convex optimization in each iteration, in addition to deficiencies
in the low-noise regime, limit its performance for Direction of Arrival
(DOA) estimation. This work presents methods to apply the LASSO without the
grid-size limitation and with less complexity. As we show by simulations,
the proposed methods lose negligible performance compared to the Maximum
Likelihood (ML) estimator, which requires a combinatorial search. We also
show by simulations that, compared to practical implementations of ML, the
proposed techniques are less sensitive to the source power difference.
|
1207.6902
|
Interference Alignment with Quantized Grassmannian Feedback in the
K-user Constant MIMO Interference Channel
|
cs.IT math.IT
|
A simple channel state information (CSI) feedback scheme is proposed for
interference alignment (IA) over the K-user constant
Multiple-Input-Multiple-Output Interference Channel (MIMO IC). The proposed
technique relies on the identification of invariants in the IA equations, which
enables the reformulation of the CSI quantization problem as a single
quantization on the Grassmann manifold at each receiver. The scaling of the
number of feedback bits with the transmit power sufficient to preserve the
multiplexing gain that can be achieved under perfect CSI is established. We
show that the CSI feedback requirements of the proposed technique are better
(lower) than what is required when using previously published methods, for
system dimensions (number of users and antennas) of practical interest.
Furthermore, we show through simulations that this advantage persists at low
SNR, in the sense that the proposed technique yields a higher sum-rate
performance for a given number of feedback bits. Finally, to complement our
analysis, we introduce a statistical model that faithfully captures the
properties of the quantization error obtained for random vector quantization
(RVQ) on the Grassmann manifold for large codebooks; this enables the numerical
(Monte-Carlo) analysis of general Grassmannian RVQ schemes for codebook sizes
that would be impractically large to simulate.
|
1207.6910
|
Gaussian process regression as a predictive model for Quality-of-Service
in Web service systems
|
cs.NI cs.LG
|
In this paper, we present Gaussian process regression as a predictive model
for Quality-of-Service (QoS) attributes in Web service systems. The goal is
to predict the performance of the execution system, expressed as QoS
attributes, given an existing execution system, a service repository, and
inputs, e.g., streams of requests. In order to evaluate the performance of
Gaussian process regression, a simulation environment was developed. Two
quality indexes were used, namely Mean Absolute Error and Mean Squared
Error. The results obtained in the experiment show that the Gaussian
process performed best with a linear kernel, and statistically
significantly better than the Classification and Regression Trees (CART)
method.
|
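A minimal pure-Python sketch of GP regression with the linear kernel mentioned in 1207.6910, on two training points. The data, noise level, and function names are illustrative (a real QoS predictor would use a GP library); the predictive mean is the standard $k_*^\top (K + \sigma^2 I)^{-1} y$.

```python
def gp_predict(xs, ys, x_star, sigma2=0.1):
    """GP predictive mean with linear kernel k(a, b) = a*b, 2 points."""
    k = lambda a, b: a * b
    # 2x2 kernel matrix K + sigma^2 I, inverted explicitly
    a, b = k(xs[0], xs[0]) + sigma2, k(xs[0], xs[1])
    c, d = k(xs[1], xs[0]), k(xs[1], xs[1]) + sigma2
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    # alpha = (K + sigma^2 I)^{-1} y
    alpha = (inv[0][0] * ys[0] + inv[0][1] * ys[1],
             inv[1][0] * ys[0] + inv[1][1] * ys[1])
    return k(x_star, xs[0]) * alpha[0] + k(x_star, xs[1]) * alpha[1]

# data on the line y = 2x: the prediction at x = 3 should be close to 6
print(gp_predict((1.0, 2.0), (2.0, 4.0), 3.0))
```

With a linear kernel, GP regression is equivalent to Bayesian linear regression, which is one reason a simple kernel can perform well when the QoS response is close to linear in the inputs.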
1207.6986
|
Two Embedding Theorems for Data with Equivalences under Finite Group
Action
|
cs.DS cs.IT math.IT
|
There is recent interest in compressing data sets in non-sequential
settings, where the lack of an obvious ordering on the data space requires
notions of data equivalence to be considered. For example, Varshney & Goyal
(DCC, 2006) considered multiset equivalences, while Choi & Szpankowski
(IEEE Trans. IT, 2012) considered isomorphic equivalences in graphs. Here,
equivalences are considered under a relatively broad framework:
finite-dimensional, non-sequential data spaces with equivalences under
group action, for which analogues of two well-studied embedding theorems
are derived: the Whitney embedding theorem and the Johnson-Lindenstrauss
lemma. Only the canonical data points need to be carefully embedded, each
such point representing a set of data points equivalent under the group
action. Two-step embeddings are considered: first, a group invariant is
applied to account for equivalences, and second, a linear embedding takes
the result down to low dimensions. Our results require hypotheses on the
discriminability of the applied invariant, notions related to separating
invariants (Dufresne, 2008) and to completeness in pattern recognition
(Kakarala, 1992). In the latter theorem, the embedding complexity depends
on the size of the canonical part, which may be significantly smaller than
the whole data set, by up to a factor equal to the size of the group.
|
1207.6991
|
The probability of finding a fixed pattern in random data depends
monotonically on the bifix indicator
|
math.PR cs.IT math.IT
|
We consider the problem of finding a fixed L-ary sequence in a stream of
random L-ary data. It is known that the expected search time is a strictly
increasing function of the lengths of the bifices of the pattern. In this paper
we prove the related statement that the probability of finding the pattern in a
finite random word is a strictly decreasing function of the lengths of the
bifices of the pattern.
|
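A bifix of a pattern, the quantity 1207.6991 is about, is a proper prefix that is also a suffix (also known as a border). A minimal helper to enumerate bifix lengths, which is the "bifix indicator" information the expected search time and occurrence probability depend on:

```python
def bifixes(pattern):
    """Return the lengths of all proper prefixes that are also suffixes."""
    return [k for k in range(1, len(pattern))
            if pattern[:k] == pattern[-k:]]

print(bifixes("abab"))   # "ab" is both a prefix and a suffix -> [2]
print(bifixes("abc"))    # no bifix -> []
print(bifixes("aaa"))    # "a" and "aa" -> [1, 2]
```

Patterns with long bifixes (like "abab") overlap themselves, which is what makes them slower to find in a random stream than bifix-free patterns of the same length.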
1207.7019
|
Finite Automata with Time-Delay Blocks (Extended Version)
|
cs.FL cs.SY
|
The notion of delays arises naturally in many computational models, such as,
in the design of circuits, control systems, and dataflow languages. In this
work, we introduce *automata with delay blocks* (ADBs), extending finite
state automata with variable time delay blocks, for deferring individual
transition output symbols, in a discrete-time setting. We show that the ADB
languages strictly subsume the regular languages, and are incomparable in
expressive power to the context-free languages. We show that ADBs are closed
under union, concatenation and Kleene star, and under intersection with regular
languages, but not closed under complementation and intersection with other ADB
languages. We show that the emptiness and the membership problems are decidable
in polynomial time for ADBs, whereas the universality problem is undecidable.
Finally we consider the linear-time model checking problem, i.e., whether the
language of an ADB is contained in a regular language, and show that the model
checking problem is PSPACE-complete.
|
1207.7035
|
Supervised Laplacian Eigenmaps with Applications in Clinical Diagnostics
for Pediatric Cardiology
|
cs.LG
|
Electronic health records contain rich textual data which possess critical
predictive information for machine-learning-based diagnostic aids. However,
many traditional machine learning methods fail to simultaneously integrate
both vector-space data and text. We present a supervised method using
Laplacian eigenmaps to augment existing machine-learning methods with
low-dimensional representations of textual predictors which preserve local
similarities. The proposed implementation performs alternating optimization
using gradient descent. For the evaluation, we applied our method to over
2,000 patient records from a large single-center pediatric cardiology
practice to predict whether patients were diagnosed with cardiac disease.
Our method was compared with latent semantic indexing, latent Dirichlet
allocation, and local Fisher discriminant analysis. The results were
assessed using AUC, MCC, specificity, and sensitivity. The results indicate
that supervised Laplacian eigenmaps (SLE) was the highest-performing method
in our study, achieving 0.782 and 0.374 for AUC and MCC, respectively. SLE
showed an increase of 8.16% in AUC and 20.6% in MCC over a baseline which
excluded textual data, and increases of 2.69% in AUC and 5.35% in MCC over
unsupervised Laplacian eigenmaps. This method allows many existing machine
learning predictors to effectively and efficiently utilize the potential of
textual predictors.
|
1207.7079
|
Improving multivariate Horner schemes with Monte Carlo tree search
|
cs.SC cs.AI math-ph math.MP
|
Optimizing the cost of evaluating a polynomial is a classic problem in
computer science. For polynomials in one variable, Horner's method provides a
scheme for producing a computationally efficient form. For multivariate
polynomials it is possible to generalize Horner's method, but this leaves
freedom in the order of the variables. Traditionally, greedy schemes like
most-occurring variable first are used. This simple textbook algorithm has
given remarkably efficient results. Finding better algorithms has proved
difficult. In trying to improve upon the greedy scheme we have implemented
Monte Carlo tree search, a recent search method from the field of artificial
intelligence. This results in better Horner schemes and reduces the cost of
evaluating polynomials, sometimes by factors up to two.
|
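The univariate Horner scheme that 1207.7079 generalizes rewrites $a_n x^n + \dots + a_0$ as $(\dots(a_n x + a_{n-1}) x + \dots) x + a_0$, evaluating the polynomial with $n$ multiplications instead of $O(n^2)$. A minimal sketch (the multivariate generalization studied in the paper additionally chooses the order in which variables are factored out, which is where the Monte Carlo tree search comes in):

```python
def horner(coeffs, x):
    """Evaluate a polynomial; coeffs are from highest to lowest degree."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3
print(horner([2, -6, 2, -1], 3))   # -> 5
```

For multivariate polynomials each variable ordering yields a different operation count, and the search space of orderings grows factorially, which is why heuristics (greedy, or the paper's MCTS) are used rather than exhaustive search.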
1207.7125
|
Degree Relations of Triangles in Real-world Networks and Models
|
cs.SI physics.soc-ph
|
Triangles are an important building block and distinguishing feature of
real-world networks, but their structure is still poorly understood. Despite
numerous reports on the abundance of triangles, there is very little
information on what these triangles look like. We initiate the study of
degree-labeled triangles -- specifically, degree homogeneity versus
heterogeneity in triangles. This yields new insight into the structure of
real-world graphs. We observe that networks coming from social and
collaborative situations are dominated by homogeneous triangles, i.e., degrees
of vertices in a triangle are quite similar to each other. On the other hand,
information networks (e.g., web graphs) are dominated by heterogeneous
triangles, i.e., the degrees in triangles are quite disparate. Surprisingly,
nodes within the top 1% of degrees participate in the vast majority of
triangles in heterogeneous graphs. We also ask whether current graph models
reproduce the types of triangles that are observed in real data, and show
that most models fail to accurately capture these salient features.
|
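The degree-labeled triangle analysis in 1207.7125 can be illustrated on a toy graph: enumerate triangles and compare the degrees at their corners. The heterogeneity measure below (max corner degree over min corner degree) is an illustrative choice, not necessarily the authors' exact statistic.

```python
from itertools import combinations

def triangles(adj):
    """Yield all triangles in an undirected graph given as dict of sets."""
    for u, v, w in combinations(adj, 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            yield (u, v, w)

adj = {
    0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1},   # hub triangle 0-1-2
    3: {0, 4}, 4: {0, 3},                    # hub triangle 0-3-4
}
deg = {v: len(adj[v]) for v in adj}
for t in triangles(adj):
    degs = sorted(deg[v] for v in t)
    print(t, "heterogeneity =", degs[-1] / degs[0])
```

In this toy graph every triangle touches the degree-4 hub, mirroring the abstract's observation that in heterogeneous (information) networks the highest-degree nodes participate in the vast majority of triangles.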
1207.7144
|
Information and Estimation over Binomial and Negative Binomial Models
|
cs.IT math.IT
|
In recent years, a number of results have been developed which connect
information measures and estimation measures under various models, including,
predominantly, Gaussian and Poisson models. More recent results due to Taborda
and Perez-Cruz relate the relative entropy to certain mismatched estimation
errors in the context of binomial and negative binomial models, where, unlike
in the case of Gaussian and Poisson models, the conditional mean estimates
concern models of different parameters than those of the original model. In
this note, a different set of results in simple forms are developed for
binomial and negative binomial models, where the conditional mean estimates are
produced through the original models. The new results are more consistent with
existing results for Gaussian and Poisson models.
|
1207.7147
|
A Calculus of Looping Sequences with Local Rules
|
cs.CE cs.FL
|
In this paper we present a variant of the Calculus of Looping Sequences (CLS
for short) with global and local rewrite rules. While global rules, as in CLS,
are applied anywhere in a given term, local rules can only be applied in the
compartment on which they are defined. Local rules are dynamic: they can be
added, moved and erased. We enrich the new calculus with a parallel semantics
where a reduction step is led by any number of global and local rules that
could be performed in parallel. A type system is developed to enforce the
property that a compartment must contain only local rules with specific
features. As a running example we model some interactions happening in a cell
starting from its nucleus and moving towards its mitochondria.
|
1207.7150
|
Probabilistic Monads, Domains and Classical Information
|
cs.PL cs.DM cs.IT math.IT
|
Shannon's classical information theory uses probability theory to analyze
channels as mechanisms for information flow. In this paper, we generalize
results of Martin, Allwein and Moskowitz for binary channels to show how some
more modern tools - probabilistic monads and domain theory in particular - can
be used to model classical channels. As initiated by Martin et al., the point of
departure is to consider the family of channels with fixed inputs and outputs,
rather than trying to analyze channels one at a time. The results show that
domain theory has a role to play in the capacity of channels; in particular,
the (n x n)-stochastic matrices, which are the classical channels having the
same sized input as output, admit a quotient compact ordered space which is a
domain, and the capacity map factors through this quotient via a
Scott-continuous map that measures the quotient domain. We also comment on how
some of our results relate to recent discoveries about quantum channels and
free affine monoids.
|
1207.7167
|
Predicate Generation for Learning-Based Quantifier-Free Loop Invariant
Inference
|
cs.LO cs.LG
|
We address the predicate generation problem in the context of loop invariant
inference. Motivated by the interpolation-based abstraction refinement
technique, we apply the interpolation theorem to synthesize predicates
implicitly implied by program texts. Our technique is able to improve the
effectiveness and efficiency of the learning-based loop invariant inference
algorithm in [14]. We report experimental results on examples from Linux,
SPEC2000, and Tar utility.
|
1207.7179
|
Novel Modulation Techniques using Isomers as Messenger Molecules for
Nano Communication Networks via Diffusion
|
cs.IT math.IT q-bio.QM
|
In this paper, we propose three novel modulation techniques, i.e.,
concentration-based, molecular-type-based, and molecular-ratio-based, using
isomers as messenger molecules for nano communication networks via diffusion.
To evaluate achievable rate performance, we compare the proposed techniques
with conventional insulin-based concepts under practical scenarios. Analytical
and numerical results confirm that the proposed modulation techniques using
isomers achieve higher data transmission rate performance (max 7.5 dB
signal-to-noise ratio gain) than the insulin-based concepts. We also
investigate the tradeoff between messenger sizes and modulation orders and
provide guidelines for selecting from among several possible candidates.
|
1207.7193
|
Canalizing Boolean Functions Maximize the Mutual Information
|
cs.IT math.IT nlin.AO q-bio.MN
|
The information-processing ability of biologically motivated Boolean
networks is of interest in recent information-theoretic research. One measure
to quantify this ability is the well known mutual information. Using Fourier
analysis we show that canalizing functions maximize the mutual information
between an input variable and the outcome of the function. We prove our result
for Boolean functions with uniformly distributed as well as product-distributed
input variables.
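The claim can be checked numerically for a small case. The sketch below is our own illustration, not the paper's Fourier-analytic proof: it computes I(X_i; f(X)) by enumeration and compares a canalizing function (OR, where input 1 on either argument forces the output) against the non-canalizing parity function:

```python
from itertools import product
from math import log2

def mutual_information(f, n, i):
    """I(X_i ; f(X)) for n uniform i.i.d. Boolean inputs, by enumeration."""
    joint = {}  # (x_i, y) -> probability
    p = 1 / 2 ** n
    for x in product((0, 1), repeat=n):
        key = (x[i], f(x))
        joint[key] = joint.get(key, 0.0) + p
    mi = 0.0
    for (xi, y), pxy in joint.items():
        px = 0.5                                      # X_i is uniform
        py = sum(q for (a, b), q in joint.items() if b == y)
        mi += pxy * log2(pxy / (px * py))
    return mi

# Canalizing OR versus non-canalizing XOR, both on two uniform inputs.
mi_or  = mutual_information(lambda x: x[0] | x[1], 2, 0)   # ~0.311 bits
mi_xor = mutual_information(lambda x: x[0] ^ x[1], 2, 0)   # exactly 0 bits
```

For OR the value is H(1/4) - 1/2 ≈ 0.311 bits, while parity carries no single-input information, consistent with the abstract's maximization claim.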
|
1207.7199
|
Message in a Sealed Bottle: Privacy Preserving Friending in Social
Networks
|
cs.SI cs.CR
|
Many proximity-based mobile social networks are developed to facilitate
connections between any two people, or to help a user find people with a
matched profile within a certain distance. A challenging task in these
applications is to protect the privacy of the participants' profiles and
personal interests.
In this paper, we design novel mechanisms that, given a preference profile
submitted by a user, search for a person with a matching profile in
decentralized multi-hop mobile social networks. Our mechanisms are
privacy-preserving: neither the participants' profiles nor the submitted
preference profile is exposed. Our mechanisms establish a secure communication
channel between the initiator and matching users at the time when the matching
user is found. Our rigorous analysis shows that our mechanism is secure,
privacy-preserving, verifiable, and efficient both in communication and
computation. Extensive evaluations using real social network data, and actual
system implementation on smart phones show that our mechanisms are
significantly more efficient than existing solutions.
|
1207.7222
|
Multi-Dimensional Nonsystematic Reed-Solomon Codes
|
cs.IT math.IT
|
This paper proposes a new class of multi-dimensional nonsystematic
Reed-Solomon codes that are constructed based on the multi-dimensional Fourier
transform over a finite field. The proposed codes are the extension of the
nonsystematic Reed-Solomon codes to multi-dimension. This paper also discusses
the performance of the multi-dimensional nonsystematic Reed-Solomon codes.
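A one-dimensional instance of the construction can be sketched as follows (our own illustration under assumed parameters: GF(7), primitive element 3, code length 6, message length 3; the paper's multi-dimensional codes apply such a transform along each axis):

```python
p = 7        # prime field GF(7)
alpha = 3    # primitive element of GF(7); its multiplicative order is 6
n = 6        # code length = order of alpha
k = 3        # message length

def dft(msg):
    """Nonsystematic RS encoding: the codeword is the length-n Fourier
    transform of the zero-padded message over GF(p), i.e. the evaluations
    of the message polynomial at the n powers of alpha."""
    padded = list(msg) + [0] * (n - len(msg))
    return [sum(c * pow(alpha, i * j, p) for j, c in enumerate(padded)) % p
            for i in range(n)]
```

Because a nonzero polynomial of degree below k has at most k-1 roots, two distinct messages yield codewords agreeing in at most k-1 positions, giving the MDS distance n-k+1.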
|
1207.7241
|
Gathering an even number of robots in an odd ring without global
multiplicity detection
|
cs.DC cs.RO
|
We propose a gathering protocol for an even number of robots in a ring-shaped
network that allows symmetric but not periodic configurations as initial
configurations, yet uses only local weak multiplicity detection. Robots are
assumed to be anonymous and oblivious, and the execution model is the
non-atomic CORDA model with asynchronous fair scheduling. In our scheme, the
number of robots k must be greater than 8, and the number of nodes n on the
network must be odd and greater than k+3. The running time of our protocol is
O(n^2) asynchronous rounds.
|
1207.7244
|
Visual Vocabulary Learning and Its Application to 3D and Mobile Visual
Search
|
cs.CV
|
In this technical report, we review related works and recent trends in visual
vocabulary based web image search, object recognition, mobile visual search,
and 3D object retrieval. Special focus is also given to recent trends in
supervised/unsupervised vocabulary optimization, compact descriptors for visual
search, and multi-view-based 3D object representation.
|
1207.7245
|
Autofocus Correction of Azimuth Phase Error and Residual Range Cell
Migration in Spotlight SAR Polar Format Imagery
|
astro-ph.IM cs.CV
|
Synthetic aperture radar (SAR) images are often blurred by phase
perturbations induced by uncompensated sensor motion and/or unknown
propagation effects caused by turbulent media. To obtain refocused images,
autofocus proves to be a useful post-processing technique for estimating and
compensating the unknown phase errors. However, a severe drawback of the
conventional autofocus algorithms is that they are only capable of removing
one-dimensional azimuth phase errors (APE). As the resolution becomes finer,
residual range cell migration (RCM), which makes the defocus inherently
two-dimensional, becomes a new challenge. In this paper, correction of APE and
residual RCM is presented in the framework of the polar format algorithm (PFA).
First, an insight into the underlying mathematical mechanism of polar
reformatting is presented. Then based on this new formulation, the effect of
polar reformatting on the uncompensated APE and residual RCM is investigated in
detail. By using the derived analytical relationship between APE and residual
RCM, an efficient two-dimensional (2-D) autofocus method is proposed.
Experimental results indicate the effectiveness of the proposed method.
|
1207.7251
|
Dynamics of Influence on Hierarchical Structures
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Dichotomous spin dynamics on a pyramidal hierarchical structure (the Bethe
lattice) are studied. The system embodies a number of \emph{classes}, where a
class comprises nodes that are equidistant from the root (head node).
Weighted links exist between nodes from the same and different classes. The
spin (hereafter, \emph{state}) of the head node is fixed. We solve for the
dynamics of the system for different boundary conditions. We find necessary
conditions so that the classes eventually repudiate or acquiesce in the state
imposed by the head node. The results indicate that to reach unanimity across
the hierarchy, it suffices that the bottom-most class adopts the same state as
the head node. Then the rest of the hierarchy will inevitably comply. This also
sheds light on the importance of mass media as a means of synchronization
between the top-most and bottom-most classes. Surprisingly, in the case of
discord between the head node and the bottom-most classes, the average state
over all nodes inclines towards that of the bottom-most class regardless of the
link weights and intra-class configurations. Hence the role of the bottom-most
class is signified.
|
1207.7253
|
Learning a peptide-protein binding affinity predictor with kernel ridge
regression
|
q-bio.QM cs.LG q-bio.BM stat.ML
|
We propose a specialized string kernel for small bio-molecules, peptides and
pseudo-sequences of binding interfaces. The kernel incorporates
physico-chemical properties of amino acids and elegantly generalizes eight
kernels, such as the Oligo, the Weighted Degree, the Blended Spectrum, and the
Radial Basis Function. We provide a low complexity dynamic programming
algorithm for the exact computation of the kernel and a linear time algorithm
for its approximation. Combined with kernel ridge regression and SupCK, a
novel binding pocket kernel, the proposed kernel yields biologically relevant
and good prediction accuracy on the PepX database. For the first time, a
machine learning predictor is capable of accurately predicting the binding
affinity of any peptide to any protein. The method was also applied to both
single-target and pan-specific Major Histocompatibility Complex class II
benchmark datasets and three Quantitative Structure Affinity Model benchmark
datasets.
On all benchmarks, our method significantly (p-value < 0.057) outperforms the
current state-of-the-art methods at predicting peptide-protein binding
affinities. The proposed approach is flexible and can be applied to predict any
quantitative biological activity. The method should be of value to a large
segment of the research community with the potential to accelerate
peptide-based drug and vaccine development.
|
1207.7261
|
Dynamical phase transition due to preferential cluster growth of
collective emotions in online communities
|
physics.soc-ph cs.SI
|
We consider a preferential cluster growth in a one-dimensional stochastic
model describing the dynamics of a binary chain with long-range memory. The
model is driven by data corresponding to emotional patterns observed during
online communities' discussions. The system undergoes a dynamical phase
transition. For low values of the preference exponent, both states are observed
during the string evolution in the majority of simulated discussion threads.
When the exponent crosses a critical value, in the majority of threads an
ordered phase emerges, i.e. from a certain time moment only one state is
represented. The transition becomes discontinuous in the thermodynamical limit
when the discussions are infinitely long and even an infinitely small
preference exponent leads to the ordering behavior in every discussion thread.
Numerical simulations are in good agreement with an approximate analytical
formula.
|
1207.7274
|
The Dynamics of Health Behavior Sentiments on a Large Online Social
Network
|
cs.SI physics.soc-ph
|
Modifiable health behaviors, a leading cause of illness and death in many
countries, are often driven by individual beliefs and sentiments about health
and disease. Individual behaviors affecting health outcomes are increasingly
modulated by social networks, for example through the associations of
like-minded individuals - homophily - or through peer influence effects. Using
a statistical approach to measure the individual temporal effects of a large
number of variables pertaining to social network statistics, we investigate the
spread of a health sentiment towards a new vaccine on Twitter, a large online
social network. We find that the effects of neighborhood size and exposure
intensity are qualitatively very different depending on the type of sentiment.
Generally, we find that larger numbers of opinionated neighbors inhibit the
expression of sentiments. We also find that exposure to negative sentiment is
contagious - by which we merely mean predictive of future negative sentiment
expression - while exposure to positive sentiments is generally not. In fact,
exposure to positive sentiments can even predict increased negative sentiment
expression. Our results suggest that the effects of peer influence and social
contagion on the dynamics of behavioral spread on social networks are strongly
content-dependent.
|
1207.7298
|
Throughput of Rateless Codes over Broadcast Erasure Channels
|
cs.NI cs.IT math.IT
|
In this paper, we characterize the throughput of a broadcast network with n
receivers using rateless codes with block size K. We assume that the underlying
channel is a Markov modulated erasure channel that is i.i.d. across users, but
can be correlated in time. We characterize the system throughput asymptotically
in n. Specifically, we explicitly show how the throughput behaves for different
values of the coding block size K as a function of n, as n approaches infinity.
For finite values of K and n, under the more restrictive assumption of
Gilbert-Elliott channels, we are able to provide a lower bound on the maximum
achievable throughput. Using simulations we show the tightness of the bound
with respect to system parameters n and K, and find that its performance is
significantly better than the previously known lower bounds.
|
1207.7321
|
Universality in polytope phase transitions and message passing
algorithms
|
math.PR cs.IT math.IT
|
We consider a class of nonlinear mappings $\mathsf{F}_{A,N}$ in
$\mathbb{R}^N$ indexed by symmetric random matrices $A\in\mathbb{R}^{N\times
N}$ with independent entries. Within spin glass theory, special cases of these
mappings correspond to iterating the TAP equations and were studied by
Bolthausen [Comm. Math. Phys. 325 (2014) 333-366]. Within information theory,
they are known as "approximate message passing" algorithms. We study the
high-dimensional (large $N$) behavior of the iterates of $\mathsf{F}$ for
polynomial functions $\mathsf{F}$, and prove that it is universal; that is, it
depends only on the first two moments of the entries of $A$, under a
sub-Gaussian tail condition. As an application, we prove the universality of a
certain phase transition arising in polytope geometry and compressed sensing.
This solves, for a broad class of random projections, a conjecture by David
Donoho and Jared Tanner.
|
1207.7347
|
RIP Analysis of Modulated Sampling Schemes for Recovering Spectrally
Sparse Signals
|
cs.IT cs.SY math.IT
|
In this work, we analyze modulated sampling schemes, such as the Nyquist
Folding Receiver, which are highly efficient, readily implementable,
non-uniform sampling schemes that allow for the blind estimation of a
narrow-band signal's spectral content and location in a wide-band environment.
This non-uniform sampling, achieved by narrow-band modulation of the RF
instantaneous sample rate, results in a frequency domain point spread function
that is between the extremes obtained by uniform sampling and totally random
sampling. As a result, while still preserving structured aliasing, the
modulated sampling scheme is also useful in a compressive sensing (CS) setting.
We estimate restricted isometry property (RIP) constants for CS matrices
induced by such modulated sampling schemes and use those estimates to determine
the amount of sparsity needed for signal recovery. This is followed by a
demonstration and analysis of Orthogonal Matching Pursuit's ability to
reconstruct signals from noisy non-uniform samples.
|
1208.0055
|
Large-scale continuous subgraph queries on streams
|
cs.DB cs.DC
|
Graph pattern matching involves finding exact or approximate matches for a
query subgraph in a larger graph. It has been studied extensively and has
strong applications in domains such as computer vision, computational biology,
social networks, security and finance. The problem of exact graph pattern
matching is often described in terms of subgraph isomorphism which is
NP-complete. The exponential growth in streaming data from online social
networks, news and video streams and the continual need for situational
awareness motivates a solution for finding patterns in streaming updates. This
is also the prime driver for the real-time analytics market. Development of
incremental algorithms for graph pattern matching on streaming inputs to a
continually evolving graph is a nascent area of research. Some of the
challenges associated with this problem are the same as found in continuous
query (CQ) evaluation on streaming databases. This paper reviews some of the
representative work from the exhaustively researched field of CQ systems and
identifies important semantics, constraints and architectural features that are
also appropriate for HPC systems performing real-time graph analytics. For each
of these features we present a brief discussion of the challenge encountered in
the database realm, the approach to the solution and state their relevance in a
high-performance, streaming graph processing framework.
|
1208.0063
|
Capacity Results for Two Classes of Three-Way Channels
|
cs.IT math.IT
|
This paper considers the three-way channel, consisting of three nodes, where
each node broadcasts a message to the two other nodes. The capacity of the
finite-field three-way channel is derived, and is shown to be achievable using
a non-cooperative scheme without feedback. The same scheme is also shown to
achieve the equal-rate capacity (when all nodes transmit at the same rate) of
the sender-symmetrical (each node receives the same SNR from the other two
nodes) phase-fading AWGN channel. Since the non-cooperative scheme is not
optimal in general, a cooperative feedback scheme that utilizes relaying
and network coding is proposed and is shown to achieve the equal-rate capacity
of the reciprocal (each pair of nodes has the same forward and backward SNR)
phase-fading AWGN three-way channel.
|
1208.0072
|
Streaming Codes for Channels with Burst and Isolated Erasures
|
cs.IT cs.MM math.IT
|
We study low-delay error correction codes for streaming recovery over a class
of packet-erasure channels that introduce both burst-erasures and isolated
erasures. We propose a simple, yet effective class of codes whose parameters
can be tuned to obtain a tradeoff between the capability to correct burst and
isolated erasures. Our construction generalizes previously proposed low-delay
codes which are effective only against burst erasures. We establish an
information theoretic upper bound on the capability of any code to
simultaneously correct burst and isolated erasures and show that our proposed
constructions meet the upper bound in some special cases. We discuss the
operational significance of column-distance and column-span metrics and
establish that the rate 1/2 codes discovered by Martinian and Sundberg [IT
Trans., 2004] through a computer search indeed attain the optimal
column-distance and column-span tradeoff. Numerical simulations over a
Gilbert-Elliott channel model and a Fritchman model show significant
performance gains over previously proposed low-delay codes and random linear
codes for a certain range of channel parameters.
|
1208.0073
|
A Scalable Algorithm for Maximizing Range Sum in Spatial Databases
|
cs.DB
|
This paper investigates the MaxRS problem in spatial databases. Given a set O
of weighted points and a rectangular region r of a given size, the goal of the
MaxRS problem is to find a location of r such that the sum of the weights of
all the points covered by r is maximized. This problem is useful in many
location-based applications such as finding the best place for a new franchise
store with a limited delivery range and finding the most attractive place for a
tourist with a limited reachable range. However, the problem has been studied
mainly in theory, particularly, in computational geometry. The existing
algorithms from the computational geometry community are in-memory algorithms
which do not guarantee scalability. In this paper, we propose a scalable
external-memory algorithm (ExactMaxRS) for the MaxRS problem, which is optimal
in terms of the I/O complexity. Furthermore, we propose an approximation
algorithm (ApproxMaxCRS) for the MaxCRS problem that is a circle version of the
MaxRS problem. We prove the correctness and optimality of the ExactMaxRS
algorithm along with the approximation bound of the ApproxMaxCRS algorithm.
From extensive experimental results, we show that the ExactMaxRS algorithm is
two orders of magnitude faster than methods adapted from existing algorithms,
and the approximation bound in practice is much better than the theoretical
bound of the ApproxMaxCRS algorithm.
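The MaxRS problem statement above admits a simple in-memory baseline (our own illustration; this naive O(n^3) enumeration is precisely the kind of non-scalable approach the paper's external-memory ExactMaxRS improves upon). It relies on the standard observation that some optimal placement has its left and bottom edges each passing through an input point:

```python
def max_rs(points, w, h):
    """Brute-force MaxRS: place a closed w-by-h rectangle so that the total
    weight of the covered points is maximized.
    points: list of (x, y, weight) tuples."""
    best, best_pos = 0.0, None
    xs = sorted({px for px, _, _ in points})   # candidate left edges
    ys = sorted({py for _, py, _ in points})   # candidate bottom edges
    for x in xs:
        for y in ys:
            score = sum(wt for px, py, wt in points
                        if x <= px <= x + w and y <= py <= y + h)
            if score > best:
                best, best_pos = score, (x, y)
    return best, best_pos
```

Sliding an optimal rectangle right and up until its edges touch covered points never decreases coverage with closed boundaries, which is why restricting to point coordinates is safe.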
|
1208.0074
|
Spatial Queries with Two kNN Predicates
|
cs.DB
|
The widespread use of location-aware devices has led to countless
location-based services in which a user query can be arbitrarily complex, i.e.,
one that embeds multiple spatial selection and join predicates. Amongst these
predicates, the k-Nearest-Neighbor (kNN) predicate stands as one of the most
important and widely used predicates. Unlike related research, this paper goes
beyond the optimization of queries with single kNN predicates, and shows how
queries with two kNN predicates can be optimized. In particular, the paper
addresses the optimization of queries with: (i) two kNN-select predicates, (ii)
two kNN-join predicates, and (iii) one kNN-join predicate and one kNN-select
predicate. For each type of query, conceptually correct query evaluation
plans (QEPs) and new algorithms that optimize the query execution time are
presented. Experimental results demonstrate that the proposed algorithms
outperform the conceptually correct QEPs by orders of magnitude.
|
1208.0075
|
Optimal Algorithms for Crawling a Hidden Database in the Web
|
cs.DB
|
A hidden database refers to a dataset that an organization makes accessible
on the web by allowing users to issue queries through a search interface. In
other words, data acquisition from such a source is not by following static
hyper-links. Instead, data are obtained by querying the interface and reading
the dynamically generated result pages. This, together with the fact that the
interface may answer a query only partially, has prevented hidden databases
from being crawled effectively by existing search engines. This paper remedies
the problem by giving algorithms to extract all the tuples from a hidden
database. Our algorithms are provably efficient, namely, they accomplish the
task by performing only a small number of queries, even in the worst case. We
also establish theoretical results indicating that these algorithms are
asymptotically optimal -- i.e., it is impossible to improve their efficiency by
more than a constant factor. The derivation of our upper and lower bound
results reveals significant insight into the characteristics of the underlying
problem. Extensive experiments confirm the proposed techniques work very well
on all the real datasets examined.
|
1208.0076
|
Diversifying Top-K Results
|
cs.DB
|
Top-k query processing finds a list of k results that have the largest scores
w.r.t. the user-given query, under the assumption that all k results are
independent of each other. In practice, some of the top-k results returned can
be very similar to each other and are therefore redundant. In the literature,
diversified top-k search has been studied to
return k results that take both score and diversity into consideration. Most
existing solutions on diversified top-k search assume that scores of all the
search results are given, and some works solve the diversity problem on a
specific problem and can hardly be extended to general cases. In this paper, we
study the diversified top-k search problem. We define a general diversified
top-k search problem that only considers the similarity of the search results
themselves. We propose a framework, such that most existing solutions for top-k
query processing can be extended easily to handle diversified top-k search, by
simply applying three new functions, a sufficient stop condition sufficient(),
a necessary stop condition necessary(), and an algorithm for diversified top-k
search on the current set of generated results, div-search-current(). We
propose three new algorithms, namely, div-astar, div-dp, and div-cut to solve
the div-search-current() problem. div-astar is an A* based algorithm, div-dp is
an algorithm that decomposes the results into components which are searched
using div-astar independently and combined using dynamic programming. div-cut
further decomposes the current set of generated results using cut points and
combines the results using sophisticated operations. We conducted extensive
performance studies using two real datasets, enwiki and reuters. Our div-cut
algorithm finds the optimal solution for diversified top-k search problem in
seconds even for k as large as 2,000.
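For concreteness, a common baseline for the problem as defined above is a greedy scan (our own illustration; the paper's div-astar, div-dp, and div-cut algorithms are exact methods, not this heuristic). Results are taken in decreasing score order and kept only if they stay dissimilar to everything already chosen:

```python
def diversified_top_k(results, k, sim, tau):
    """Greedy diversified top-k: results is a list of (item, score) pairs,
    sim(a, b) a similarity function, tau the similarity threshold.
    Keeps an item only if its similarity to every kept item is below tau."""
    chosen = []
    for item, score in sorted(results, key=lambda r: -r[1]):
        if all(sim(item, c) < tau for c, _ in chosen):
            chosen.append((item, score))
            if len(chosen) == k:
                break
    return chosen
```

This satisfies the pairwise-dissimilarity constraint but can miss the optimal total score, which is exactly the gap the exact div-search-current() algorithms close.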
|
1208.0077
|
Keyword-aware Optimal Route Search
|
cs.DB
|
Identifying a preferable route is an important problem that finds
applications in map services. When a user plans a trip within a city, the user
may want to find "a most popular route such that it passes by a shopping mall,
a restaurant, and a pub, and the travel time to and from his hotel is within 4
hours." However, none of the algorithms in the existing work on route planning
can be used to answer such queries. Motivated by this, we define the problem of
keyword-aware optimal route query, denoted by KOR, which is to find an optimal
route such that it covers a set of user-specified keywords, a specified budget
constraint is satisfied, and an objective score of the route is optimal. The
problem of answering KOR queries is NP-hard. We devise an approximation
algorithm OSScaling with provable approximation bounds. Based on this
algorithm, another more efficient approximation algorithm BucketBound is
proposed. We also design a greedy approximation algorithm. Results of empirical
studies show that all the proposed algorithms are capable of answering KOR
queries efficiently, while the BucketBound and Greedy algorithms run faster.
The empirical studies also offer insight into the accuracy of the proposed
algorithms.
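Since KOR is NP-hard, an exhaustive baseline only works at toy scale, but it makes the problem structure concrete (our own illustration under assumed interfaces; the paper's OSScaling and BucketBound are approximation algorithms, not this):

```python
from itertools import permutations, product

def kor_brute_force(start, pois, keywords, budget, dist, obj):
    """Exhaustive KOR baseline: pick one candidate location per required
    keyword, try every visiting order, keep routes within the budget
    constraint, and return the one with the best (smallest) objective score.
    pois: dict keyword -> list of candidate locations; dist(a, b): travel
    cost; obj(route): objective score to minimize."""
    best_route, best_score = None, float("inf")
    for choice in product(*(pois[kw] for kw in keywords)):
        for order in permutations(choice):
            route = (start,) + order + (start,)
            cost = sum(dist(a, b) for a, b in zip(route, route[1:]))
            if cost <= budget:                 # budget constraint satisfied
                score = obj(route)
                if score < best_score:
                    best_route, best_score = route, score
    return best_route, best_score
```

With m keywords and c candidates each, this explores O(c^m * m!) routes, which is why the approximation algorithms with provable bounds are needed in practice.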
|
1208.0078
|
Answering Queries using Views over Probabilistic XML: Complexity and
Tractability
|
cs.DB
|
We study the complexity of query answering using views in a probabilistic XML
setting, identifying large classes of XPath queries -- with child and
descendant navigation and predicates -- for which there are efficient (PTime)
algorithms. We consider this problem under the two possible semantics for XML
query results: with persistent node identifiers and in their absence.
Accordingly, we consider rewritings that can exploit a single view, by means of
compensation, and rewritings that can use multiple views, by means of
intersection. Since in a probabilistic setting queries return answers with
probabilities, the problem of rewriting goes beyond the classic one of
retrieving XML answers from views. For both semantics of XML queries, we show
that, even when XML answers can be retrieved from views, their probabilities
may not be computable. For rewritings that use only compensation, we describe a
PTime decision procedure, based on easily verifiable criteria that distinguish
between the feasible cases -- when probabilistic XML results are computable --
and the unfeasible ones. For rewritings that can use multiple views, with
compensation and intersection, we identify the most permissive conditions that
make probabilistic rewriting feasible, and we describe an algorithm that is
sound in general, and becomes complete under fairly permissive restrictions,
running in PTime modulo worst-case exponential time equivalence tests. This is
the best we can hope for since intersection makes query equivalence intractable
already over deterministic data. Our algorithm runs in PTime whenever
deterministic rewritings can be found in PTime.
|
1208.0079
|
Probabilistic Databases with MarkoViews
|
cs.DB
|
Most of the work on query evaluation in probabilistic databases has focused
on the simple tuple-independent data model, where tuples are independent random
events. Several efficient query evaluation techniques exists in this setting,
such as safe plans, algorithms based on OBDDs, tree-decomposition and a variety
of approximation algorithms. However, complex data analytics tasks often
require complex correlations, and query evaluation then is significantly more
expensive, or more restrictive. In this paper, we propose MVDB as a framework
both for representing complex correlations and for efficient query evaluation.
An MVDB specifies correlations by views, called MarkoViews, on the
probabilistic relations, and by declaring the weights of the views' outputs. An
MVDB is a (very large) Markov Logic Network. We make two sets of contributions.
First, we show that query evaluation on an MVDB is equivalent to evaluating a
Union of Conjunctive Query(UCQ) over a tuple-independent database. The
translation is exact (thus allowing the techniques developed for tuple
independent databases to be carried over to MVDB), yet it is novel and quite
non-obvious (some resulting probabilities may be negative!). This translation
in itself though may not lead to much gain since the translated query gets
complicated as we try to capture more correlations. Our second contribution is
to propose a new query evaluation strategy that exploits offline compilation to
speed up online query evaluation. Here we utilize and extend our prior work on
compilation of UCQ. We validate experimentally our techniques on a large
probabilistic database with MarkoViews inferred from the DBLP data.
|
1208.0080
|
The Complexity of Social Coordination
|
cs.DB
|
Coordination is a challenging everyday task; just think of the last time you
organized a party or a meeting involving several people. As a growing part of
our social and professional life goes online, an opportunity for an improved
coordination process arises. Recently, Gupta et al. proposed entangled queries
as a declarative abstraction for data-driven coordination, where the difficulty
of the coordination task is shifted from the user to the database.
Unfortunately, evaluating entangled queries is very hard, and thus previous
work considered only a restricted class of queries that satisfy safety (the
coordination partners are fixed) and uniqueness (all queries need to be
satisfied). In this paper we significantly extend the class of feasible
entangled queries beyond uniqueness and safety. First, we show that we can
simply drop uniqueness and still efficiently evaluate a set of safe entangled
queries. Second, we show that as long as all users coordinate on the same set
of attributes, we can give an efficient algorithm for coordination even if the
set of queries does not satisfy safety. In an experimental evaluation we show
that our algorithms are feasible for a wide spectrum of coordination scenarios.
|
1208.0081
|
Efficient Multi-way Theta-Join Processing Using MapReduce
|
cs.DB
|
Multi-way Theta-join queries are powerful in describing complex relations and
are therefore widely employed in practice. However, existing solutions from
traditional distributed and parallel databases for multi-way Theta-join queries
cannot be easily extended to fit a shared-nothing distributed computing
paradigm, which has proven able to support OLAP applications over immense
data volumes. In this work, we study the problem of efficient processing of
multi-way Theta-join queries using MapReduce from a cost-effective perspective.
Although some prior work has used the (key,value) pair-based programming model
to support join operations, efficient processing of multi-way Theta-join
queries has never been fully explored. The substantial challenge
lies in mapping a multi-way Theta-join query, given a number of processing
units (that can run Map or Reduce tasks), to a number of MapReduce jobs and
having them executed in a well-scheduled sequence, such that the total processing time
span is minimized. Our solution mainly includes two parts: 1) cost metrics for
both a single MapReduce job and a sequence of MapReduce jobs executed in a
certain order; 2) the efficient execution of a chain-typed Theta-join with only one
MapReduce job. Compared with the query evaluation strategy proposed in [23]
and the widely adopted Pig Latin and Hive SQL solutions, our method achieves a
significant improvement in join processing efficiency.
|
1208.0082
|
Stubby: A Transformation-based Optimizer for MapReduce Workflows
|
cs.DB
|
There is a growing trend of performing analysis on large datasets using
workflows composed of MapReduce jobs connected through producer-consumer
relationships based on data. This trend has spurred the development of a number
of interfaces--ranging from program-based to query-based interfaces--for
generating MapReduce workflows. Studies have shown that the performance gap
between optimized and unoptimized workflows can be quite large. However,
automatic cost-based optimization of MapReduce workflows remains a challenge
due to the multitude of interfaces, large size of the execution plan space, and
the frequent unavailability of all types of information needed for
optimization. We introduce a comprehensive plan space for MapReduce workflows
generated by popular workflow generators. We then propose Stubby, a cost-based
optimizer that searches selectively through the subspace of the full plan space
that can be enumerated correctly and costed based on the information available
in any given setting. Stubby enumerates the plan space based on plan-to-plan
transformations and an efficient search algorithm. Stubby is designed to be
extensible to new interfaces and new types of optimizations, which is a
desirable feature given how rapidly MapReduce systems are evolving. Stubby's
efficiency and effectiveness have been evaluated using representative workflows
from many domains.
|
1208.0083
|
Labeling Workflow Views with Fine-Grained Dependencies
|
cs.DB
|
This paper considers the problem of efficiently answering reachability
queries over views of provenance graphs, derived from executions of workflows
that may include recursion. Such views include composite modules and model
fine-grained dependencies between module inputs and outputs. A novel
view-adaptive dynamic labeling scheme is developed for efficient query
evaluation, in which view specifications are labeled statically (i.e. as they
are created) and data items are labeled dynamically as they are produced during
a workflow execution. Although the combination of fine-grained dependencies and
recursive workflows entails, in general, long (linear-size) data labels, we show
that for a large natural class of workflows and views, labels are compact
(logarithmic-size) and reachability queries can be evaluated in constant time.
Experimental results demonstrate the benefit of this approach over the
state-of-the-art technique when applied for labeling multiple views.
|
1208.0084
|
Fundamentals of Order Dependencies
|
cs.DB
|
Dependencies have played a significant role in database design for many
years. They have also been shown to be useful in query optimization. In this
paper, we discuss dependencies between lexicographically ordered sets of
tuples. We introduce formally the concept of order dependency and present a set
of axioms (inference rules) for them. We show how query rewrites based on these
axioms can be used for query optimization. We present several interesting
theorems that can be derived using the inference rules. We prove that
functional dependencies are subsumed by order dependencies and that our set of
axioms for order dependencies is sound and complete.
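As an illustrative aside (not part of the original abstract), the semantics of an order dependency can be checked directly on a small relation. A minimal Python sketch, assuming the usual pairwise reading: left-hand order forces right-hand order, and ties on the left force ties on the right, which also makes the subsumption of functional dependencies visible as the equality case:

```python
def satisfies_od(tuples, lhs, rhs):
    """Check whether the order dependency lhs -> rhs holds: any list of the
    tuples sorted lexicographically by lhs is also sorted by rhs.  Pairwise,
    s[lhs] < t[lhs] must imply s[rhs] <= t[rhs], and tuples tied on lhs must
    be tied on rhs (otherwise some tie-breaking order would violate rhs).
    The tie condition alone is exactly the functional dependency lhs -> rhs."""
    key = lambda t, attrs: tuple(t[a] for a in attrs)
    for s in tuples:
        for t in tuples:
            sx, tx = key(s, lhs), key(t, lhs)
            sy, ty = key(s, rhs), key(t, rhs)
            if sx < tx and sy > ty:
                return False
            if sx == tx and sy != ty:
                return False
    return True

# Hypothetical example: tax rate increases monotonically with tax bracket.
rows = [{"bracket": 1, "rate": 10}, {"bracket": 2, "rate": 20},
        {"bracket": 3, "rate": 30}]
```

The quadratic pairwise loop is only for clarity; a single sort-and-scan suffices in practice.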
|
1208.0086
|
Optimization of Analytic Window Functions
|
cs.DB
|
Analytic functions represent the state-of-the-art way of performing complex
data analysis within a single SQL statement. In particular, an important class
of analytic functions that has been frequently used in commercial systems to
support OLAP and decision support applications is the class of window
functions. A window function returns for each input tuple a value derived from
applying a function over a window of neighboring tuples. However, existing
window function evaluation approaches are based on a naive sorting scheme. In
this paper, we study the problem of optimizing the evaluation of window
functions. We propose several efficient techniques, and identify optimization
opportunities that allow us to optimize the evaluation of a set of window
functions. We have integrated our scheme into PostgreSQL. Our comprehensive
experimental study on the TPC-DS datasets, as well as synthetic datasets and
queries, demonstrates significant speedup over existing approaches.
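To illustrate the naive sorting-based evaluation the abstract refers to (a sketch, not the paper's optimized scheme), here is a toy window function over a sliding ROWS frame:

```python
def moving_avg(values, frame=1):
    """Naive window-function evaluation: sort the input once, then for each
    row average the neighbors within `frame` positions, i.e. the SQL frame
    ROWS BETWEEN frame PRECEDING AND frame FOLLOWING."""
    ordered = sorted(values)
    out = []
    for i in range(len(ordered)):
        lo, hi = max(0, i - frame), min(len(ordered), i + frame + 1)
        window = ordered[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

Each window function in a query would trigger such a sort in the naive scheme, which is what sharing and reordering optimizations aim to avoid.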
|
1208.0087
|
Opening the Black Boxes in Data Flow Optimization
|
cs.DB
|
Many systems for big data analytics employ a data flow abstraction to define
parallel data processing tasks. In this setting, custom operations expressed as
user-defined functions are very common. We address the problem of performing
data flow optimization at this level of abstraction, where the semantics of
operators are not known. Traditionally, query optimization is applied to
queries with known algebraic semantics. In this work, we find that a handful of
properties, rather than a full algebraic specification, suffice to establish
reordering conditions for data processing operators. We show that these
properties can be accurately estimated for black box operators by statically
analyzing the general-purpose code of their user-defined functions. We design
and implement an optimizer for parallel data flows that does not assume
knowledge of semantics or algebraic properties of operators. Our evaluation
confirms that the optimizer can apply common rewritings such as selection
reordering, bushy join-order enumeration, and limited forms of aggregation
push-down, hence yielding similar rewriting power as modern relational DBMS
optimizers. Moreover, it can optimize the operator order of non-relational data
flows, a unique feature among today's systems.
|
1208.0088
|
Spinning Fast Iterative Data Flows
|
cs.DB
|
Parallel dataflow systems are a central part of most analytic pipelines for
big data. The iterative nature of many analysis and machine learning
algorithms, however, is still a challenge for current systems. While certain
types of bulk iterative algorithms are supported by novel dataflow frameworks,
these systems cannot exploit computational dependencies present in many
algorithms, such as graph algorithms. As a result, these algorithms are
inefficiently executed and have led to specialized systems based on other
paradigms, such as message passing or shared memory. We propose a method to
integrate incremental iterations, a form of workset iterations, with parallel
dataflows. After showing how to integrate bulk iterations into a dataflow
system and its optimizer, we present an extension to the programming model for
incremental iterations. The extension compensates for the lack of mutable state
in dataflows and allows for exploiting the sparse computational dependencies
inherent in many iterative algorithms. The evaluation of a prototypical
implementation shows that those aspects lead to up to two orders of magnitude
speedup in algorithm runtime, when exploited. In our experiments, the improved
dataflow system is highly competitive with specialized systems while
maintaining a transparent and unified dataflow abstraction.
|
1208.0089
|
REX: Recursive, Delta-Based Data-Centric Computation
|
cs.DB
|
In today's Web and social network environments, query workloads include ad
hoc and OLAP queries, as well as iterative algorithms that analyze data
relationships (e.g., link analysis, clustering, learning). Modern DBMSs support
ad hoc and OLAP queries, but most are not robust enough to scale to large
clusters. Conversely, "cloud" platforms like MapReduce execute chains of batch
tasks across clusters in a fault tolerant way, but have too much overhead to
support ad hoc queries.
Moreover, both classes of platform incur significant overhead in executing
iterative data analysis algorithms. Most such iterative algorithms repeatedly
refine portions of their answers, until some convergence criterion is reached.
However, general cloud platforms typically must reprocess all data in each
step. DBMSs that support recursive SQL are more efficient in that they
propagate only the changes in each step -- but they still accumulate each
iteration's state, even if it is no longer useful. User-defined functions are
also typically harder to write for DBMSs than for cloud platforms.
We seek to unify the strengths of both styles of platforms, with a focus on
supporting iterative computations in which changes, in the form of deltas, are
propagated from iteration to iteration, and state is efficiently updated in an
extensible way. We present a programming model oriented around deltas, describe
how we execute and optimize such programs in our REX runtime system, and
validate that our platform also handles failures gracefully. We experimentally
validate our techniques, and show speedups over the competing methods ranging
from 2.5 to nearly 100 times.
|
1208.0090
|
K-Reach: Who is in Your Small World
|
cs.DB
|
We study the problem of answering k-hop reachability queries in a directed
graph, i.e., whether there exists a directed path of length k, from a source
query vertex to a target query vertex in the input graph. The problem of k-hop
reachability is a generalization of classic reachability (where k = infinity).
Existing indexes for processing classic reachability queries, as
well as for processing shortest path queries, are not applicable or not
efficient for processing k-hop reachability queries. We propose an index for
processing k-hop reachability queries, which is simple in design and efficient
to construct. Our experimental results on a wide range of real datasets show
that our index is more efficient than the state-of-the-art indexes even for
processing classic reachability queries, for which these indexes are primarily
designed. We also show that our index is efficient in answering k-hop
reachability queries.
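The query semantics (though not the paper's index) can be sketched as a depth-bounded BFS; `k_hop_reachable` below is a hypothetical helper assuming the "within k hops" reading of the query:

```python
from collections import deque

def k_hop_reachable(adj, source, target, k):
    """Breadth-first search truncated at depth k: returns True iff there is
    a directed path of at most k edges from source to target in adj."""
    if source == target:
        return True
    frontier = deque([(source, 0)])
    best = {source: 0}  # fewest hops seen so far per vertex
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # cannot extend the path any further
        for nxt in adj.get(node, ()):
            if nxt == target:
                return True
            if best.get(nxt, k + 1) > depth + 1:
                best[nxt] = depth + 1
                frontier.append((nxt, depth + 1))
    return False

# Hypothetical chain graph: a -> b -> c -> d
adj = {"a": ["b"], "b": ["c"], "c": ["d"]}
```

An index is needed precisely because this online BFS is too slow on large graphs.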
|
1208.0091
|
Performance Guarantees for Distributed Reachability Queries
|
cs.DB
|
In the real world a graph is often fragmented and distributed across
different sites. This highlights the need for evaluating queries on distributed
graphs. This paper proposes distributed evaluation algorithms for three classes
of queries: reachability for determining whether one node can reach another,
bounded reachability for deciding whether there exists a path of a bounded
length between a pair of nodes, and regular reachability for checking whether
there exists a path connecting two nodes such that the node labels on the path
form a string in the language of a given regular expression. We develop these
algorithms based on partial evaluation to exploit parallel computation. When evaluating a query
Q on a distributed graph G, we show that these algorithms possess the following
performance guarantees, no matter how G is fragmented and distributed: (1) each
site is visited only once; (2) the total network traffic is determined by the
size of Q and the fragmentation of G, independent of the size of G; and (3) the
response time is decided by the largest fragment of G rather than the entire G.
In addition, we show that these algorithms can be readily implemented in the
MapReduce framework. Using synthetic and real-life data, we experimentally
verify that these algorithms are scalable on large graphs, regardless of how
the graphs are distributed.
|
1208.0092
|
Efficient Indexing and Querying over Syntactically Annotated Trees
|
cs.DB
|
Natural language text corpora are often available as sets of syntactically
parsed trees. A wide range of expressive tree queries are possible over such
parsed trees that open a new avenue in searching over natural language text.
They not only allow for querying roles and relationships within sentences, but
also improve search effectiveness compared to flat keyword queries. One major
drawback of current systems supporting querying over parsed text is the
performance of evaluating queries over large data. In this paper we propose a
novel indexing scheme over unique subtrees as index keys. We also propose a
novel root-split coding scheme that stores subtree structural information only
partially, thus reducing index size and improving querying performance. Our
extensive set of experiments shows that root-split coding reduces the index
size of any interval coding that stores individual node numbers by 50% to 80%,
depending on the sizes of the subtrees indexed. Moreover, we show that our
index using root-split coding outperforms previous approaches by at least an
order of magnitude in terms of query response time.
|
1208.0093
|
PrivBasis: Frequent Itemset Mining with Differential Privacy
|
cs.DB
|
The discovery of frequent itemsets can serve valuable economic and research
purposes. Releasing discovered frequent itemsets, however, presents privacy
challenges. In this paper, we study the problem of how to perform frequent
itemset mining on transaction databases while satisfying differential privacy.
We propose an approach, called PrivBasis, which leverages a novel notion called
basis sets. A theta-basis set has the property that any itemset with frequency
higher than theta is a subset of some basis. We introduce algorithms for
privately constructing a basis set and then using it to find the most frequent
itemsets. Experiments show that our approach greatly outperforms the current
state of the art.
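The theta-basis property itself can be verified by brute force on toy data; a hypothetical sketch (exponential in the number of items, so illustrative only, and not the paper's differentially private construction algorithm):

```python
from itertools import combinations

def is_theta_basis(transactions, basis, theta):
    """Check the theta-basis property: every itemset whose relative frequency
    exceeds theta must be a subset of some itemset in `basis`."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    for size in range(1, len(items) + 1):
        for cand in combinations(items, size):
            freq = sum(1 for t in transactions if set(cand) <= t) / n
            if freq > theta and not any(set(cand) <= set(b) for b in basis):
                return False
    return True

# Hypothetical transaction database over items a, b, c.
tx = [{"a", "b"}, {"a", "b"}, {"a", "c"}, {"b"}]
```

A small basis set lets the private algorithm estimate only the frequencies of basis subsets instead of all itemsets.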
|
1208.0094
|
Low-Rank Mechanism: Optimizing Batch Queries under Differential Privacy
|
cs.DB
|
Differential privacy is a promising privacy-preserving paradigm for
statistical query processing over sensitive data. It works by injecting random
noise into each query result, such that it is provably hard for the adversary
to infer the presence or absence of any individual record from the published
noisy results. The main objective in differentially private query processing is
to maximize the accuracy of the query results, while satisfying the privacy
guarantees. Previous work, notably the matrix mechanism, has suggested that
processing a batch of correlated queries as a whole can potentially achieve
considerable accuracy gains, compared to answering them individually. However,
as we point out in this paper, the matrix mechanism is mainly of theoretical
interest; in particular, several inherent problems in its design limit its
accuracy in practice, which almost never exceeds that of naive methods. In
fact, we are not aware of any existing solution that can effectively optimize a
query batch under differential privacy. Motivated by this, we propose the
Low-Rank Mechanism (LRM), the first practical differentially private technique
for answering batch queries with high accuracy, based on a low rank
approximation of the workload matrix. We prove that the accuracy provided by
LRM is close to the theoretical lower bound for any mechanism to answer a batch
of queries under differential privacy. Extensive experiments using real data
demonstrate that LRM consistently outperforms state-of-the-art query processing
solutions under differential privacy, by large margins.
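A heavily simplified sketch of the low-rank idea, assuming a factorization W ~ B L of the m-by-n workload matrix is already given (the paper's contribution is finding a good one): Laplace noise is injected into the r low-rank aggregates L x rather than the m query answers, which is the source of the accuracy gain when r is small. All names below are illustrative, not the paper's API.

```python
import numpy as np

def l1_sensitivity(L):
    """L1 sensitivity of x -> L @ x under a one-record change:
    the largest column L1 norm of L."""
    return np.abs(L).sum(axis=0).max()

def lrm_answer(B, L, x, eps, rng=None):
    """Answer the batch W @ x (with W ~ B @ L) under eps-differential
    privacy: perturb the r aggregates L @ x with calibrated Laplace noise,
    then recombine with B."""
    rng = rng or np.random.default_rng()
    noisy = L @ x + rng.laplace(scale=l1_sensitivity(L) / eps, size=L.shape[0])
    return B @ noisy
```

With a very large privacy budget the answer approaches the exact W @ x, as the test below checks.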
|
1208.0095
|
The Simmel effect and babies names
|
physics.soc-ph cs.SI
|
Simulations of the Simmel effect are performed for agents in a scale-free
social network. The social hierarchy of an agent is determined by the degree of
her node. Particular features, once selected by a highly connected agent,
become common in the lower classes but soon fall out of fashion and go
extinct. Numerical results reflect the frequency dynamics of American baby
names in 1880-2011.
|
1208.0107
|
Search Me If You Can: Privacy-preserving Location Query Service
|
cs.CR cs.SI
|
Location-Based Service (LBS) becomes increasingly popular with the dramatic
growth of smartphones and social network services (SNS), and its context-rich
functionalities attract considerable users. Many LBS providers use users'
location information to offer them convenience and useful functions. However,
the LBS could greatly breach personal privacy because location itself contains
much information. Hence, preserving location privacy while still deriving
utility from it remains a challenging question. This paper tackles this
non-trivial challenge by designing a novel fine-grained
Privacy-preserving Location Query Protocol (PLQP). Our protocol allows
different levels of location query on encrypted location information for
different users, and it is efficient enough to be applied in mobile platforms.
|
1208.0129
|
Oracle inequalities for computationally adaptive model selection
|
stat.ML cs.LG
|
We analyze general model selection procedures using penalized empirical loss
minimization under computational constraints. While classical model selection
approaches do not consider computational aspects of performing model selection,
we argue that any practical model selection procedure must not only trade off
estimation and approximation error, but also the computational effort required
to compute empirical minimizers for different function classes. We provide a
framework for analyzing such problems, and we give algorithms for model
selection under a computational budget. These algorithms satisfy oracle
inequalities that show that the risk of the selected model is not much worse
than if we had devoted all of our computational budget to the optimal function
class.
|
1208.0153
|
Personalization in Geographic information systems: A survey
|
cs.IR cs.DB
|
Geographic Information Systems (GIS) are widely used in different domains of
applications, such as maritime navigation, museum visits and route planning,
as well as ecological, demographical and economical applications. Nowadays,
organizations need sophisticated and adapted GIS-based Decision Support System
(DSS) to get quick access to relevant information and to analyze data with
respect to geographic information, represented not only as spatial objects, but
also as maps.
Several research works on GIS personalization have been proposed to face the
great challenge of developing both the theory and practice of personalized
GIS visualization systems. This paper aims to provide a comprehensive review of
literature on presented GIS personalization approaches. A benchmarking study of
GIS personalization methods is proposed. Several evaluation criteria are used
to identify the existence of trends as well as potential needs for further
investigations.
|
1208.0163
|
Spatial and Spatio-Temporal Multidimensional Data Modelling: A Survey
|
cs.DB
|
Data warehouses store and provide access to large volumes of historical data
supporting the strategic decisions of organisations. A data warehouse is based
on a multidimensional model which allows users to express their needs for
supporting the decision-making process. Since it is estimated that 80% of data used for
decision making has a spatial or location component [1, 2], spatial data have
been widely integrated in Data Warehouses and in OLAP systems. Extending a
multidimensional data model by the inclusion of spatial data provides a concise
and organised spatial data warehouse representation. This paper aims to provide
a comprehensive review of the literature on developed and suggested spatial and
spatio-temporal multidimensional models. A benchmarking study of the proposed
models is presented. Several evaluation criteria are used to identify the
existence of trends as well as potential needs for further investigations.
|
1208.0186
|
Opportunistic Forwarding with Partial Centrality
|
cs.NI cs.SI
|
In opportunistic networks, the use of social metrics (e.g., degree, closeness
and betweenness centrality) of the human mobility network has recently been shown
to be an effective solution to improve the performance of opportunistic
forwarding algorithms. Most of the current social-based forwarding schemes
exploit some globally defined node centrality, resulting in a bias towards the
most popular nodes. However, these nodes may not be appropriate relay
candidates for some target nodes, because they may have low importance relative
to these subsets of target nodes. In this paper, to improve the opportunistic
forwarding efficiency, we exploit the relative importance (called partial
centrality) of a node with respect to a group of nodes. We design a new
opportunistic forwarding scheme, opportunistic forwarding with partial
centrality (OFPC), and theoretically quantify the influence of the partial
centrality on the data forwarding performance using graph spectrum. By applying
our scheme on three real opportunistic networking scenarios, our extensive
evaluations show that our scheme achieves significantly better mean delivery
delay and cost compared to the state-of-the-art works, while achieving delivery
ratios sufficiently close to those by Epidemic under different TTL
requirements.
|
1208.0193
|
Matched Decoding for Punctured Convolutional Encoded Transmission Over
ISI-Channels
|
cs.IT math.IT
|
Matched decoding is a technique that enables the efficient maximum-likelihood
sequence estimation of convolutionally encoded PAM-transmission over
ISI-channels. Recently, we have shown that the super-trellis of encoder and
channel can be described with significantly fewer states without loss in
Euclidean distance, by introducing a non-linear representation of the trellis.
This paper extends the matched decoding concept to punctured convolutional
codes and introduces a time-variant, non-linear trellis description.
|
1208.0200
|
Adaptation of pedagogical resources description standard (LOM) with the
specificity of Arabic language
|
cs.CL
|
In this article we focus first on the principles of pedagogical indexing and
the characteristics of the Arabic language, and second on the possibility of
adapting the standard used for describing learning resources (the LOM and its
Application Profiles) to learning conditions such as students' educational
levels and levels of understanding, and to the educational context, taking
into account representative elements of the text, text length, etc. In
particular, we highlight the specificity of Arabic, a complex language
characterized by its inflection, vowelization and agglutination.
|
1208.0203
|
Towards the Next Generation of Data Warehouse Personalization System: A
Survey and a Comparative Study
|
cs.DB
|
Multidimensional databases are a great asset for decision making. Their users
express complex OLAP (On-Line Analytical Processing) queries, often returning
huge volumes of facts, sometimes providing little or no information.
Furthermore, due to the huge volume of historical data stored in DWs, OLAP
applications may return a large amount of irrelevant information that makes
the data exploration process inefficient and slow. OLAP personalization
systems play a major role in reducing the effort of decision-makers to find the
most interesting information. Several works dealing with OLAP personalization
were presented in the last few years. This paper aims to provide a
comprehensive review of literature on OLAP personalization approaches. A
benchmarking study of OLAP personalization methods is proposed. Several
evaluation criteria are used to identify the existence of trends as well as
potential needs for further investigations.
|
1208.0219
|
Functional Mechanism: Regression Analysis under Differential Privacy
|
cs.DB
|
Epsilon-differential privacy is the state-of-the-art model for releasing
sensitive information while protecting privacy. Numerous methods have been
proposed to enforce epsilon-differential privacy in various analytical tasks,
e.g., regression analysis. Existing solutions for regression analysis, however,
are either limited to non-standard types of regression or unable to produce
accurate regression results. Motivated by this, we propose the Functional
Mechanism, a differentially private method designed for a large class of
optimization-based analyses. The main idea is to enforce epsilon-differential
privacy by perturbing the objective function of the optimization problem,
rather than its results. As case studies, we apply the functional mechanism to
two of the most widely used regression models, namely linear regression and
logistic regression. Both theoretical analysis and thorough experimental
evaluations show that the functional mechanism is highly effective and
efficient, and it significantly outperforms existing solutions.
|
1208.0220
|
Publishing Microdata with a Robust Privacy Guarantee
|
cs.DB
|
Today, the publication of microdata poses a privacy threat. Vast research has
striven to define the privacy condition that microdata should satisfy before it
is released, and devise algorithms to anonymize the data so as to achieve this
condition. Yet, no method proposed to date explicitly bounds the percentage of
information an adversary gains after seeing the published data for each
sensitive value therein. This paper introduces beta-likeness, an appropriately
robust privacy model for microdata anonymization, along with two anonymization
schemes designed for it, one based on generalization and the other based
on perturbation. Our model postulates that an adversary's confidence on the
likelihood of a certain sensitive-attribute (SA) value should not increase, in
relative difference terms, by more than a predefined threshold. Our techniques
aim to satisfy a given beta threshold with little information loss. We
experimentally demonstrate that (i) our model provides an effective privacy
guarantee in a way that predecessor models cannot, (ii) our generalization
scheme is more effective and efficient in its task than methods adapting
algorithms for the k-anonymity model, and (iii) our perturbation method
outperforms a baseline approach. Moreover, we discuss in detail the resistance
of our model and methods to attacks proposed in previous research.
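The core beta-likeness condition can be stated in a few lines; a hypothetical sketch of the check (not the anonymization algorithms themselves), assuming prior and posterior distributions over sensitive values are given as dicts:

```python
def satisfies_beta_likeness(prior, posterior, beta):
    """Check the beta-likeness condition as described above: for every
    sensitive value, the adversary's posterior probability may exceed the
    prior by a relative difference of at most beta."""
    for value, p in prior.items():
        q = posterior.get(value, 0.0)
        if q > p and (q - p) / p > beta:
            return False
    return True
```

Bounding the *relative* gain is what distinguishes this model from predecessors that bound absolute confidence.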
|
1208.0221
|
Measuring Two-Event Structural Correlations on Graphs
|
cs.DB
|
Real-life graphs usually have various kinds of events happening on them,
e.g., product purchases in online social networks and intrusion alerts in
computer networks. The occurrences of events on the same graph could be
correlated, exhibiting either attraction or repulsion. Such structural
correlations can reveal important relationships between different events.
Unfortunately, correlation relationships on graph structures are not well
studied and cannot be captured by traditional measures. In this work, we design
a novel measure for assessing two-event structural correlations on graphs.
Given the occurrences of two events, we choose uniformly a sample of "reference
nodes" from the vicinity of all event nodes and employ the Kendall's tau rank
correlation measure to compute the average concordance of event density
changes. Significance can be efficiently assessed by tau's nice property of
being asymptotically normal under the null hypothesis. In order to compute the
measure in large scale networks, we develop a scalable framework using
different sampling strategies. The complexity of these strategies is analyzed.
Experiments on real graph datasets with both synthetic and real events
demonstrate that the proposed framework is not only efficacious, but also
efficient and scalable.
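A minimal sketch of the measure's core, assuming paired event-density changes at the sampled reference nodes have already been computed (plain tau-a, ignoring the paper's sampling and significance-testing machinery):

```python
def kendall_tau(xs, ys):
    """Kendall's tau-a over paired density changes: +1 means the two events'
    densities always move concordantly (attraction), -1 always discordantly
    (repulsion), with tied pairs counting toward neither."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

The O(n^2) pair loop is for clarity; an O(n log n) merge-sort variant exists for large samples.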
|
1208.0222
|
Ranking Large Temporal Data
|
cs.DB
|
Ranking temporal data has not been studied until recently, even though
ranking is an important operator (being promoted as a first-class citizen) in
database systems. However, only instant top-k queries on temporal data have
been studied, where objects with the k highest scores at a query time instance t
are to be retrieved. The instant top-k definition clearly comes with
limitations (sensitive to outliers, difficult to choose a meaningful query time
t). A more flexible and general ranking operation is to rank objects based on
the aggregation of their scores in a query interval, which we dub the aggregate
top-k query on temporal data. For example, return the top-10 weather stations
having the highest average temperature from 10/01/2010 to 10/07/2010; find the
top-20 stocks having the largest total transaction volumes from 02/05/2011 to
02/07/2011. This work presents a comprehensive study of this problem by
designing both exact and approximate methods (with approximation quality
guarantees). We also provide theoretical analysis on the construction cost, the
index size, the update and the query costs of each approach. Extensive
experiments on large real datasets clearly demonstrate the efficiency, the
effectiveness, and the scalability of our methods compared to the baseline
methods.
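A naive baseline for the aggregate top-k query described above (illustrative only; the paper's contribution is exact and approximate indexes that avoid this linear scan), with hypothetical weather-station data:

```python
def aggregate_topk(series, t1, t2, k, agg=lambda v: sum(v) / len(v)):
    """Baseline aggregate top-k: aggregate each object's scores inside the
    query interval [t1, t2] and return the k objects with the highest
    aggregate (average by default)."""
    scores = {}
    for obj, readings in series.items():
        vals = [v for t, v in readings if t1 <= t <= t2]
        if vals:
            scores[obj] = agg(vals)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical (timestamp, temperature) readings per station.
stations = {
    "S1": [(1, 20.0), (2, 22.0), (3, 24.0)],
    "S2": [(1, 25.0), (2, 15.0)],
    "S3": [(4, 30.0)],
}
```

Swapping `agg` for `sum` yields the total-volume variant in the stock example.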
|
1208.0223
|
The bistable brain: a neuronal model with symbiotic interactions
|
nlin.CD cs.NE math.DS
|
In general, the behavior of large and complex aggregates of elementary
components cannot be understood or extrapolated from the properties of a few
components. The brain is a good example of this type of networked system where
some patterns of behavior are observed independently of the topology and of the
number of coupled units. Following this insight, we have studied the dynamics
of different aggregates of logistic maps according to a particular
"symbiotic" coupling scheme that imitates the neuronal excitation coupling. All
these aggregates show some common dynamical properties, namely a bistable
behavior that is reported here in some detail. Thus, the qualitative
relationship with neural systems is suggested through a naive model of many of
such networked logistic maps whose behavior mimics the waking-sleeping
bistability displayed by brain systems. Due to its relevance, some regions of
multistability are determined and sketched for all these logistic models.
|
1208.0224
|
Compacting Transactional Data in Hybrid OLTP & OLAP Databases
|
cs.DB
|
Growing main memory sizes have facilitated database management systems that
keep the entire database in main memory. The drastic performance improvements
that came along with these in-memory systems have made it possible to reunite
the two areas of online transaction processing (OLTP) and online analytical
processing (OLAP): An emerging class of hybrid OLTP and OLAP database systems
allows analytical queries to be processed directly on the transactional data. By
offering arbitrarily current snapshots of the transactional data for OLAP,
these systems enable real-time business intelligence. Despite memory sizes of
several Terabytes in a single commodity server, RAM is still a precious
resource: Since free memory can be used for intermediate results in query
processing, the amount of memory determines query performance to a large
extent. Consequently, we propose the compaction of memory-resident databases.
Compaction consists of two tasks: First, separating the mutable working set
from the immutable "frozen" data. Second, compressing the immutable data and
optimizing it for efficient, memory-consumption-friendly snapshotting. Our
approach reorganizes and compresses transactional data online and yet hardly
affects the mission-critical OLTP throughput. This is achieved by unburdening
the OLTP threads from all additional processing and performing these tasks
asynchronously.
|
1208.0225
|
Processing a Trillion Cells per Mouse Click
|
cs.DB
|
Column-oriented database systems have been a real game changer for the
industry in recent years. Highly tuned, performant systems have evolved that
let users answer ad hoc queries over large datasets interactively. In this
paper we present the column-oriented
datastore developed as one of the central components of PowerDrill. It combines
the advantages of columnar data layout with other known techniques (such as
using composite range partitions) and extensive algorithmic engineering on key
data structures, the main goal of the latter being to reduce the main-memory
footprint and to increase the efficiency of processing typical user queries.
In combination, these achieve large speed-ups that enable a highly interactive
Web UI where it is common that a single mouse click leads to processing a
trillion values in the underlying dataset.
|
1208.0227
|
OLTP on Hardware Islands
|
cs.DB
|
Modern hardware is abundantly parallel and increasingly heterogeneous. The
numerous processing cores have non-uniform access latencies to the main memory
and to the processor caches, which causes variability in the communication
costs. Unfortunately, database systems mostly assume that all processing cores
are the same and that microarchitecture differences are not significant enough
to appear in critical database execution paths. As we demonstrate in this
paper, however, hardware heterogeneity does appear in the critical path and
conventional database architectures achieve suboptimal and, even worse,
unpredictable performance. We perform a detailed performance analysis of OLTP
deployments in servers with multiple cores per CPU (multicore) and multiple
CPUs per server (multisocket). We compare different database deployment
strategies where we vary the number and size of independent database instances
running on a single server, from a single shared-everything instance to
fine-grained shared-nothing configurations. We quantify the impact of
non-uniform hardware on various deployments by (a) examining how efficiently
each deployment uses the available hardware resources and (b) measuring the
impact of distributed transactions and skewed requests on different workloads.
Finally, we argue in favor of shared-nothing deployments that are topology- and
workload-aware and take advantage of fast on-chip communication between islands
of cores on the same socket.
|
1208.0228
|
Initial Version of State Transition Algorithm
|
math.OC cs.NE
|
In terms of the concepts of state and state transition, a new algorithm, the
State Transition Algorithm (STA), is proposed as a probe into classical and
intelligent optimization algorithms. Formulated on the basis of state and
state transition, the algorithm becomes much simpler and easier to understand.
For continuous function optimization problems, three special operators named
rotation, translation and expansion are presented, while for discrete function
optimization problems an operator called general elementary transformation is
introduced. Finally, four common continuous benchmark functions and a discrete
problem are used to test the performance of STA; the experiments show that STA
is a promising algorithm due to its good search capability.
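A minimal sketch of how one STA operator can drive a search loop, using the commonly published form of the rotation transform x' = x + alpha * (1/(n*||x||)) * R x with R uniform in [-1, 1]^{n x n}; the greedy acceptance rule and all parameter values here are illustrative assumptions, not the paper's exact procedure:

```python
import random

def rotation(x, alpha=1.0):
    """STA-style rotation: x' = x + alpha * (1 / (n * ||x||)) * R x,
    with R a random matrix with entries in [-1, 1]; the candidate stays
    within distance alpha of x (sketch of the operator form only)."""
    n = len(x)
    norm = sum(v * v for v in x) ** 0.5 or 1e-12
    R = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    return [x[i] + alpha / (n * norm) * sum(R[i][j] * x[j] for j in range(n))
            for i in range(n)]

def sphere(x):
    """Simple continuous benchmark: f(x) = sum of squares, minimum 0."""
    return sum(v * v for v in x)

best = [2.0, -1.5]
for _ in range(2000):
    cand = rotation(best)
    if sphere(cand) < sphere(best):  # greedy acceptance of better states
        best = cand
print(sphere(best))
```

The full algorithm would interleave rotation with translation and expansion to balance local and global search; the loop above isolates only the hypersphere-constrained local step.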
|
1208.0270
|
Serializability, not Serial: Concurrency Control and Availability in
Multi-Datacenter Datastores
|
cs.DB
|
We present a framework for concurrency control and availability in
multi-datacenter datastores. While we consider Google's Megastore as our
motivating example, we define general abstractions for key components, making
our solution extensible to any system that satisfies the abstraction
properties. We first develop and analyze a transaction management and
replication protocol based on a straightforward implementation of the Paxos
algorithm. Our investigation reveals that this protocol acts as a concurrency
prevention mechanism rather than a concurrency control mechanism. We then
propose an enhanced protocol called Paxos with Combination and Promotion
(Paxos-CP) that provides true transaction concurrency while requiring the same
per-instance message complexity as the basic Paxos protocol. Finally, we
compare the performance of Paxos and Paxos-CP in a multi-datacenter
experimental study, and we demonstrate that Paxos-CP results in significantly
fewer aborted transactions than basic Paxos.
|
1208.0271
|
Automatic Partitioning of Database Applications
|
cs.DB
|
Database-backed applications are nearly ubiquitous in our daily lives.
Applications that make many small accesses to the database create two
challenges for developers: increased latency and wasted resources from numerous
network round trips. A well-known technique to improve transactional database
application performance is to convert part of the application into stored
procedures that are executed on the database server. Unfortunately, this
conversion is often difficult. In this paper we describe Pyxis, a system that
takes database-backed applications and automatically partitions their code into
two pieces, one of which is executed on the application server and the other on
the database server. Pyxis profiles the application and server loads,
statically analyzes the code's dependencies, and produces a partitioning that
minimizes the number of control transfers as well as the amount of data sent
during each transfer. Our experiments using TPC-C and TPC-W show that Pyxis is
able to generate partitions with up to 3x reduction in latency and 1.7x
improvement in throughput when compared to a traditional non-partitioned
implementation and has comparable performance to that of a custom stored
procedure implementation.
|
1208.0273
|
Whom to Ask? Jury Selection for Decision Making Tasks on Micro-blog
Services
|
cs.DB
|
It is common for people to obtain knowledge on micro-blog services by asking
others decision-making questions. In this paper, we study the Jury Selection
Problem (JSP), which uses crowdsourcing for decision-making tasks on micro-blog
services. Specifically, the problem is to enroll, under a limited budget, a
subset of the crowd whose aggregated wisdom via the majority-voting scheme has
the lowest probability of drawing a wrong answer (the Jury Error Rate, JER).
Due to the varying individual error rates of the crowd, the calculation of JER
is non-trivial. Firstly, we define JER as the probability that the number of
wrong jurors is larger than half the size of the jury. To avoid the
exponentially growing cost of calculating JER directly, we propose two
efficient algorithms and an effective bounding technique. Furthermore, we study
the Jury Selection Problem on two crowdsourcing models: one for altruistic
users (AltrM) and one for incentive-requiring users (PayM), who require extra
payment when enrolled in a task. For the AltrM model, we prove the monotonicity
of JER in the individual error rate and propose an efficient exact algorithm
for JSP. For the PayM model, we prove the NP-hardness of JSP and propose an
efficient greedy-based heuristic algorithm. Finally, we conduct a series of
experiments to investigate the traits of JSP and to validate the efficiency
and effectiveness of our proposed algorithms on both synthetic and real
micro-blog data.
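The quantity being minimized can be made concrete: under majority voting, JER is the probability that more than half the jurors err, and for a fixed jury it can be computed exactly in O(n^2) by dynamic programming over the Poisson binomial distribution, rather than enumerating all 2^n vote outcomes. The paper's algorithms and bounds are more elaborate; this sketch only fixes the definition:

```python
def jury_error_rate(error_rates):
    """Probability that a strict majority of jurors err, computed
    exactly via dynamic programming: dist[k] = P(k wrong votes so far)
    over the Poisson binomial distribution of wrong votes."""
    dist = [1.0]
    for p in error_rates:
        new = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            new[k] += prob * (1 - p)   # this juror votes correctly
            new[k + 1] += prob * p     # this juror votes wrongly
        dist = new
    n = len(error_rates)
    return sum(dist[n // 2 + 1:])      # more than half wrong

print(round(jury_error_rate([0.1, 0.2, 0.3]), 4))  # -> 0.098
```

With this exact evaluator, a brute-force jury selector would score every feasible subset; the monotonicity and bounding results in the paper exist to avoid exactly that blow-up.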
|
1208.0274
|
ALAE: Accelerating Local Alignment with Affine Gap Exactly in
Biosequence Databases
|
cs.DB
|
We study the problem of local alignment, i.e., finding pairs of similar
subsequences with gaps, which arises in biosequence databases. BLAST is a
typical piece of software for finding local alignments; it is based on
heuristics and can therefore miss results. Using the Smith-Waterman algorithm,
we can find all local alignments in O(mn) time, where m and n are the lengths
of a query and a text, respectively. A recent exact approach, BWT-SW, improves
on the complexity of the Smith-Waterman algorithm under constraints, but is
still much slower than BLAST. This paper takes on the challenge of designing
an accurate and efficient algorithm for evaluating local-alignment searches,
especially for long queries. We propose an efficient software tool called ALAE
that speeds up BWT-SW using a compressed suffix array. ALAE utilizes a family
of filtering techniques to prune meaningless calculations and an algorithm for
reusing score calculations. We also give a mathematical analysis showing that
the upper bound on the total number of entries calculated by ALAE varies from
4.50mn^0.520 to 9.05mn^0.896 for random DNA sequences and from 8.28mn^0.364 to
7.49mn^0.723 for random protein sequences. We demonstrate the significant
performance improvement of ALAE over BWT-SW through a thorough experimental
study on real biosequences. ALAE guarantees correctness and accelerates BLAST
for most parameter settings.
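For reference, the O(mn) Smith-Waterman recurrence that both BWT-SW and ALAE ultimately evaluate can be sketched as follows; it is shown with a linear gap penalty for brevity (the paper's setting uses affine gaps, handled via the Gotoh extension), and the scoring parameters are illustrative:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """O(mn) Smith-Waterman local-alignment score with a linear gap
    penalty, using two rolling rows of the dynamic-programming grid."""
    m, n = len(a), len(b)
    prev = [0] * (n + 1)
    best = 0
    for i in range(1, m + 1):
        cur = [0] * (n + 1)
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(0,                  # restart: local alignment
                         prev[j - 1] + s,    # match / mismatch
                         prev[j] + gap,      # gap in b
                         cur[j - 1] + gap)   # gap in a
            best = max(best, cur[j])
        prev = cur
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))
```

The "calculated entries" counted in the analysis above correspond to the cells of this grid; ALAE's filters prune cells whose scores provably cannot contribute to a reported alignment.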
|
1208.0275
|
sDTW: Computing DTW Distances using Locally Relevant Constraints based
on Salient Feature Alignments
|
cs.DB
|
Many applications generate and consume temporal data and retrieval of time
series is a key processing step in many application domains. Dynamic time
warping (DTW) distance between time series of size N and M is computed relying
on a dynamic programming approach which creates and fills an NxM grid to search
for an optimal warp path. Since this can be costly, various heuristics have
been proposed to cut away the potentially unproductive portions of the DTW
grid. In this paper, we argue that time series often carry structural features
that can be used for identifying locally relevant constraints to eliminate
redundant work. Relying on this observation, we propose salient feature based
sDTW algorithms which first identify robust salient features in the given time
series and then find a consistent alignment of these to establish the
boundaries for the warp path search. More specifically, we propose alternative
fixed core & adaptive width, adaptive core & fixed width, and adaptive core &
adaptive width strategies, which enforce different constraints reflecting the
high-level structural characteristics of the series in the data set.
Experimental results show that the proposed sDTW algorithms achieve much higher
accuracy in DTW computation and time series retrieval than fixed core & fixed
width algorithms that do not leverage local features of the given time series.
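As a point of reference for the constraint families above, the classic fixed core & fixed width baseline (a Sakoe-Chiba-style band around the grid diagonal) can be sketched as follows; the band placement and parameter names are illustrative:

```python
def banded_dtw(x, y, width):
    """DTW distance restricted to a band of the given width around the
    (scaled) diagonal of the NxM grid -- the 'fixed core & fixed width'
    baseline that sDTW's feature-derived constraints generalize."""
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        # only fill cells within `width` of the diagonal in this row
        center = round(i * m / n)
        for j in range(max(1, center - width), min(m, center + width) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

print(banded_dtw([1, 2, 3, 4], [1, 2, 2, 3, 4], width=2))  # -> 0.0
```

sDTW replaces the single fixed band with a sequence of local constraints anchored at matched salient features, so the filled region follows the structure of the series instead of the raw diagonal.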
|
1208.0276
|
SCOUT: Prefetching for Latent Feature Following Queries
|
cs.DB
|
Today's scientists are quickly moving from in vitro to in silico
experimentation: they no longer analyze natural phenomena in a petri dish, but
instead they build models and simulate them. Managing and analyzing the massive
amounts of data involved in simulations is a major task. Yet, they lack the
tools to efficiently work with data of this size. One problem many scientists
share is the analysis of the massive spatial models they build. For several
types of analysis they need to interactively follow the structures in the
spatial model, e.g., the arterial tree, neuron fibers, etc., and issue range
queries along the way. Each query takes a long time to execute, and the total
time for
executing a sequence of queries significantly delays data analysis. Prefetching
the spatial data reduces the response time considerably, but known approaches
do not prefetch with high accuracy. We develop SCOUT, a structure-aware method
for prefetching data along interactive spatial query sequences. SCOUT uses an
approximate graph model of the structures involved in past queries and attempts
to identify what particular structure the user follows. Our experiments with
neuroscience data show that SCOUT prefetches with an accuracy from 71% to 92%,
which translates to a speedup of 4x-15x. SCOUT also improves the prefetching
accuracy on datasets from other scientific domains, such as medicine and
biology.
|
1208.0277
|
Accelerating Pathology Image Data Cross-Comparison on CPU-GPU Hybrid
Systems
|
cs.DB
|
As an important application of spatial databases in pathology imaging
analysis, cross-comparing the spatial boundaries of a huge amount of segmented
micro-anatomic objects demands extremely data- and compute-intensive
operations, requiring high throughput at an affordable cost. However, the
performance of spatial database systems has not been satisfactory since their
implementations of spatial operations cannot fully utilize the power of modern
parallel hardware. In this paper, we provide a customized software solution
that exploits GPUs and multi-core CPUs to accelerate spatial cross-comparison
in a cost-effective way. Our solution consists of an efficient GPU algorithm
and a pipelined system framework with task migration support. Extensive
experiments with real-world data sets demonstrate the effectiveness of our
solution, which improves the performance of spatial cross-comparison by over 18
times compared with a parallelized spatial database approach.
|
1208.0278
|
Robust Estimation of Resource Consumption for SQL Queries using
Statistical Techniques
|
cs.DB
|
The ability to estimate resource consumption of SQL queries is crucial for a
number of tasks in a database system such as admission control, query
scheduling and costing during query optimization. Recent work has explored the
use of statistical techniques for resource estimation in place of the manually
constructed cost models used in query optimization. Such techniques, which
require examples of query resource usage as training data, offer the
promise of superior estimation accuracy since they can account for factors such
as hardware characteristics of the system or bias in cardinality estimates.
However, the proposed approaches lack robustness in that they do not generalize
well to queries that are different from the training examples, resulting in
significant estimation errors. Our approach aims to address this problem by
combining knowledge of database query processing with statistical models. We
model resource usage at the level of individual operators, with different
models and features for each operator type, and explicitly model the asymptotic
behavior of each operator. This results in significantly better estimation
accuracy and the ability to estimate resource usage of arbitrary plans, even
when they are very different from the training instances. We validate our
approach using various large scale real-life and benchmark workloads on
Microsoft SQL Server.
|
1208.0285
|
Who Tags What? An Analysis Framework
|
cs.DB
|
The rise of Web 2.0 is signaled by sites such as Flickr, del.icio.us, and
YouTube, and social tagging is essential to their success. A typical tagging
action involves three components, user, item (e.g., photos in Flickr), and tags
(i.e., words or phrases). Analyzing how tags are assigned by certain users to
certain items has important implications in helping users search for desired
information. In this paper, we explore common analysis tasks and propose a dual
mining framework for social tagging behavior mining. This framework is centered
around two opposing measures, similarity and diversity, being applied to one or
more tagging components, and therefore enables a wide range of analysis
scenarios such as characterizing similar users tagging diverse items with
similar tags, or diverse users tagging similar items with diverse tags, etc. By
adopting different concrete measures for similarity and diversity in the
framework, we show that a wide range of concrete analysis problems can be
defined and they are NP-Complete in general. We design efficient algorithms for
solving many of those problems and demonstrate, through comprehensive
experiments over real data, that our algorithms significantly outperform the
exact brute-force approach without compromising analysis result quality.
|
1208.0286
|
A Generic Framework for Efficient and Effective Subsequence Retrieval
|
cs.DB
|
This paper proposes a general framework for matching similar subsequences in
both time series and string databases. The matching results are pairs of query
subsequences and database subsequences. The framework finds all possible pairs
of similar subsequences if the distance measure satisfies the "consistency"
property, which is a property introduced in this paper. We show that most
popular distance functions, such as the Euclidean distance, DTW, ERP, the
Frechet distance for time series, and the Hamming distance and Levenshtein
distance for strings, are all "consistent". We also propose a generic index
structure for metric spaces named the "reference net". The reference net
occupies O(n) space, where n is the size of the dataset, and is optimized to
work well with our framework. The experiments demonstrate the ability of our
method to
improve retrieval performance when combined with diverse distance measures. The
experiments also illustrate that the reference net scales well in terms of
space overhead and query time.
|
1208.0287
|
Only Aggressive Elephants are Fast Elephants
|
cs.DB
|
Yellow elephants are slow. A major reason is that they consume their inputs
entirely before responding to an elephant rider's orders. Some clever riders
have trained their yellow elephants to only consume parts of the inputs before
responding. However, the teaching time to make an elephant do that is high. So
high that the teaching lessons often do not pay off. We take a different
approach. We make elephants aggressive; only this will make them very fast. We
propose HAIL (Hadoop Aggressive Indexing Library), an enhancement of HDFS and
Hadoop MapReduce that dramatically improves runtimes of several classes of
MapReduce jobs. HAIL changes the upload pipeline of HDFS in order to create
different clustered indexes on each data block replica. An interesting feature
of HAIL is that we typically create a win-win situation: we improve both data
upload to HDFS and the runtime of the actual Hadoop MapReduce job. In terms of
data upload, HAIL improves over HDFS by up to 60% with the default replication
factor of three. In terms of query execution, we demonstrate that HAIL runs up
to 68x faster than Hadoop. In our experiments, we use six clusters including
physical and EC2 clusters of up to 100 nodes. A series of scalability
experiments also demonstrates the superiority of HAIL.
|