| id | title | categories | abstract |
|---|---|---|---|
1208.3254
|
Carrier Frequency Offset Estimation for Two-Way Relaying: Optimal
Preamble and Estimator Design
|
cs.IT math.IT
|
We consider the problem of carrier frequency offset (CFO) estimation for a
two-way relaying system based on the amplify-and-forward (AF) protocol. Our
contributions are in designing an optimal preamble, and the corresponding
estimator, to closely achieve the minimum Cramer-Rao bound (CRB) for the CFO.
This optimality is asserted with respect to the novel class of preambles,
referred to as the block-rotated preambles (BRPs). This class includes the
periodic preamble that is used widely in practice, yet it provides an
additional degree of design freedom via a block rotation angle. We first
identify the catastrophic scenario of an arbitrarily large CRB when a
conventional periodic preamble is used. We next resolve this problem by using a
BRP with a non-zero block rotation angle. This angle creates, in effect, an
artificial frequency offset that separates the desired relayed signal from the
self-interference that is introduced in the AF protocol. With appropriate
optimization, the CRB incurs only marginal loss from one-way relaying under
practical channel conditions. To facilitate implementation, a specific
low-complexity class of estimators is examined, and conditions for these
estimators to achieve the optimized CRB are established. Numerical results are
given that corroborate the theoretical findings.
|
1208.3261
|
Analyticity of Entropy Rate of Continuous-State Hidden Markov Chains
|
math.PR cs.IT math.IT
|
We prove that under certain mild assumptions, the entropy rate of a hidden
Markov chain, observed when passing a finite-state stationary Markov chain
through a discrete-time continuous-output channel, is jointly analytic as a
function of the input Markov chain parameters and the channel parameters. In
particular, as consequences of the main theorems, we obtain analyticity for the
entropy rate associated with representative channels: Cauchy and Gaussian.
|
1208.3279
|
Structured Prediction Cascades
|
stat.ML cs.LG
|
Structured prediction tasks pose a fundamental trade-off between the need for
model complexity to increase predictive power and the limited computational
resources for inference in the exponentially-sized output spaces such models
require. We formulate and develop the Structured Prediction Cascade
architecture: a sequence of increasingly complex models that progressively
filter the space of possible outputs. The key principle of our approach is that
each model in the cascade is optimized to accurately filter and refine the
structured output state space of the next model, speeding up both learning and
inference in the next layer of the cascade. We learn cascades by optimizing a
novel convex loss function that controls the trade-off between the filtering
efficiency and the accuracy of the cascade, and provide generalization bounds
for both accuracy and efficiency. We also extend our approach to intractable
models using tree-decomposition ensembles, and provide algorithms and theory
for this setting. We evaluate our approach on several large-scale problems,
achieving state-of-the-art performance in handwriting recognition and human
pose recognition. We find that structured prediction cascades allow tremendous
speedups and the use of previously intractable features and models in both
settings.
|
1208.3290
|
The building up of individual inflexibility in opinion dynamics
|
physics.soc-ph cs.SI
|
Two models of opinion dynamics are entangled in order to build a more
realistic model of inflexibility. The first one is the Galam Unifying Frame
(GUF), which incorporates rational and inflexible agents, and the other one
considers the combination of Continuous Opinions and Discrete Actions (CODA).
While in GUF inflexibility is initially a fixed, given feature of an agent, it
is now the result of accumulation by an agent who makes the same choice through
repeated updates. Inflexibility thus emerges as an internal property of agents,
becoming a continuous function of the strength of their opinions. Therefore an
agent can be more or less inflexible and can shift from inflexibility along one
choice to inflexibility along the opposite choice.
These individual dynamics of the building up and falling off of an agent's
inflexibility are driven by the successive local updates of the associated
individual opinions. New results are obtained and discussed in terms of
predicting outcomes of public debates.
|
1208.3291
|
When to look at a noisy Markov chain in sequential decision making if
measurements are costly?
|
math.OC cs.IT math.IT
|
A decision maker records measurements of a finite-state Markov chain
corrupted by noise. The goal is to decide when the Markov chain hits a specific
target state. The decision maker can choose from a finite set of sampling
intervals to pick the next time to look at the Markov chain. The aim is to
optimize an objective comprising false alarm, delay, and cumulative
measurement sampling costs. Taking more frequent measurements yields accurate
estimates but incurs a higher measurement cost. Making an erroneous decision
too soon incurs a false alarm penalty. Waiting too long to declare the target
state incurs a delay penalty. What is the optimal sequential strategy for the
decision maker? The paper shows that under reasonable conditions, the optimal
strategy has the following intuitive structure: when the Bayesian estimate
(posterior distribution) of the Markov chain is away from the target state,
look less frequently; while if the posterior is close to the target state, look
more frequently. Bounds are derived for the optimal strategy. Also the
achievable optimal cost of the sequential detector as a function of transition
dynamics and observation distribution is analyzed. The sensitivity of the
optimal achievable cost to parameter variations is bounded in terms of the
Kullback divergence. To prove the results in this paper, novel stochastic
dominance results on the Bayesian filtering recursion are derived. The
formulation in this paper generalizes quickest time change detection to
consider optimal sampling and also yields useful results in sensor scheduling
(active sensing).
|
1208.3307
|
Impedance mismatch is not an "Objects vs. Relations" problem
|
cs.DB
|
The problem of impedance mismatch between applications written in OO languages
and relational databases is not a problem of discrepancy between the
object-oriented and relational approaches themselves. Its real causes can be
found in the usual implementation of the OO approach. A direct comparison of
the two approaches cannot be used as a basis for the conclusion that they are
discrepant or mismatched. Experimental proof of the absence of contradiction
between the object-oriented paradigm and the relational data model is also
presented.
1208.3390
|
A Unified Linear MSE Minimization MIMO Beamforming Design Based on
Quadratic Matrix Programming
|
cs.IT math.IT
|
In this paper, we investigate a unified linear transceiver design with
mean-square-error (MSE) as the objective function for a wide range of wireless
systems. The unified design is based on an elegant mathematical programming
technique, namely quadratic matrix programming (QMP). It is revealed that for
different wireless systems, such as multi-cell coordination systems, multi-user
MIMO systems, MIMO cognitive radio systems, and amplify-and-forward MIMO
relaying systems, the MSE minimization beamforming design problems can always
be solved
by solving a number of QMP problems. A comprehensive framework on how to solve
QMP problems is also given.
|
1208.3398
|
How Agreement and Disagreement Evolve over Random Dynamic Networks
|
cs.SI cs.MA cs.SY math.OC
|
The dynamics of an agreement protocol interacting with a disagreement process
over a common random network is considered. The model can represent the
spreading of true and false information over a communication network, the
propagation of faults in a large-scale control system, or the development of
trust and mistrust in a society. At each time instance and with a given
probability, a pair of network nodes are selected to interact. At random each
of the nodes then updates its state towards the state of the other node
(attraction), away from the other node (repulsion), or sticks to its current
state (neglect). Agreement convergence and disagreement divergence results are
obtained for various strengths of the updates for both symmetric and asymmetric
update rules. Impossibility theorems show that a specific level of attraction
is required for almost sure asymptotic agreement and a specific level of
repulsion is required for almost sure asymptotic disagreement. A series of
sufficient and/or necessary conditions are then established for agreement
convergence or disagreement divergence. In particular, under symmetric updates,
a critical convergence measure in the attraction and repulsion update strength
is found, in the sense that the asymptotic property of the network state
evolution transits from agreement convergence to disagreement divergence when
this measure goes from negative to positive. The result can be interpreted as a
tight bound on how much bad action needs to be injected in a dynamic network in
order to consistently steer its overall behavior away from consensus.
|
1208.3422
|
Distance Metric Learning for Kernel Machines
|
stat.ML cs.LG
|
Recent work in metric learning has significantly improved the
state-of-the-art in k-nearest neighbor classification. Support vector machines
(SVM), particularly with RBF kernels, are amongst the most popular
classification algorithms that use distance metrics to compare examples. This
paper provides an empirical analysis of the efficacy of three of the most
popular Mahalanobis metric learning algorithms as pre-processing for SVM
training. We show that none of these algorithms generate metrics that lead to
particularly satisfying improvements for SVM-RBF classification. As a remedy we
introduce support vector metric learning (SVML), a novel algorithm that
seamlessly combines the learning of a Mahalanobis metric with the training of
the RBF-SVM parameters. We demonstrate the capabilities of SVML on nine
benchmark data sets of varying sizes and difficulties. In our study, SVML
outperforms all alternative state-of-the-art metric learning algorithms in
terms of accuracy and establishes itself as a serious alternative to the
standard Euclidean metric with model selection by cross validation.
|
1208.3428
|
Comparative Bi-stochastizations and Associated
Clusterings/Regionalizations of the 1995-2000 U. S. Intercounty Migration
Network
|
cs.SI physics.soc-ph stat.AP
|
Wang, Li and Konig have recently compared the cluster-theoretic properties of
bi-stochasticized symmetric data similarity (e.g., kernel) matrices, produced
by minimizing two different forms of Bregman divergences. We extend their
investigation to non-symmetric matrices, specifically studying the 1995-2000
U.S. 3,107 x 3,107 intercounty migration matrix. A particular bi-stochastized
form of it had been obtained (arXiv:1207.0437), using the well-established
Sinkhorn-Knopp (SK) (biproportional) algorithm--which minimizes the
Kullback-Leibler form of the divergence. This matrix has but a single entry
equal to (the maximal possible value of) 1. In sharp contrast, the
bi-stochastic matrix obtained here, implementing the Wang-Li-Konig algorithm
for the minimum of the alternative, squared-norm form of the divergence, has
2,707 such unit entries. The corresponding 3,107-vertex, 2,707-link directed
graph has 2,352 strong components. These consist of 1,659 single/isolated
counties, 654 doublets (thirty-one interstate in nature), 22 triplets (one
being interstate), 13 quartets (one being interstate), three quintets and one
septet. Not manifest in these graph-theoretic results, however, are the
five-county states of Hawaii and Rhode Island and the eight-county state of
Connecticut. These--among other regional configurations--appealingly emerged as
well-defined entities in the SK-based strong-component hierarchical clustering.
|
1208.3432
|
A Novel Strategy Selection Method for Multi-Objective Clustering
Algorithms Using Game Theory
|
cs.GT cs.AI
|
The most important factors which contribute to the efficiency of
game-theoretical algorithms are time and game complexity. In this study, we
have offered an elegant method to deal with high complexity of game theoretic
multi-objective clustering methods in large-sized data sets. Here, we have
developed a method which selects a subset of strategies from strategies profile
for each player. In this case, the size of payoff matrices reduces
significantly which has a remarkable impact on time complexity. Therefore,
practical problems with more data are tractable with less computational
complexity. Although the strategy set may grow as the number of data points
increases, the presented model of strategy selection reduces the strategy space
considerably, where clusters are subdivided into several sub-clusters in each
local game. The remarkable results demonstrate the efficiency of the presented
approach in reducing computational complexity of the problem of concern.
|
1208.3512
|
Contour Completion Around a Fixation Point
|
cs.CV
|
The paper presents two edge grouping algorithms for finding a closed contour
starting from a particular edge point and enclosing a fixation point. Both
algorithms search for a shortest simple cycle in \textit{an angularly ordered
graph} derived from an edge image where a vertex is an end point of a contour
fragment and an undirected arc is drawn between a pair of end-points whose
visual angle from the fixation point is less than a threshold value, which is
set to $\pi/2$ in our experiments. The first algorithm restricts the search
space by disregarding arcs that cross the line extending from the fixation
point to the starting point. The second algorithm improves the solution of the
first algorithm in a greedy manner. The algorithms were tested with a large
number of natural images with manually placed fixation and starting points. The
results are promising.
|
1208.3530
|
Leveraging Subjective Human Annotation for Clustering Historic Newspaper
Articles
|
cs.IR cs.CL cs.DL
|
The New York Public Library is participating in the Chronicling America
initiative to develop an online searchable database of historically significant
newspaper articles. Microfilm copies of the newspapers are scanned and high
resolution Optical Character Recognition (OCR) software is run on them. The
text from the OCR provides a wealth of data and opinion for researchers and
historians. However, categorization of articles provided by the OCR engine is
rudimentary and a large number of the articles are labeled "editorial" without
further grouping. Manually sorting articles into fine-grained categories is
time consuming if not impossible given the size of the corpus. This paper
studies techniques for automatic categorization of newspaper articles so as to
enhance search and retrieval on the archive. We explore unsupervised (e.g.
KMeans) and semi-supervised (e.g. constrained clustering) learning algorithms
to develop article categorization schemes geared towards the needs of
end-users. A pilot study was designed to understand whether there was unanimous
agreement amongst patrons regarding how articles can be categorized. It was
found that the task was very subjective and consequently automated algorithms
that could deal with subjective labels were used. While the small scale pilot
study was extremely helpful in designing machine learning algorithms, a much
larger system needs to be developed to collect annotations from users of the
archive. The "BODHI" system currently being developed is a step in that
direction, allowing users to correct wrongly scanned OCR text and to provide
keywords and tags for frequently used newspaper articles. On successful
implementation
of the beta version of this system, we hope that it can be integrated with
existing software being developed for the Chronicling America project.
|
1208.3533
|
DisC Diversity: Result Diversification based on Dissimilarity and
Coverage
|
cs.DB
|
Recently, result diversification has attracted a lot of attention as a means
to improve the quality of results retrieved by user queries. In this paper, we
propose a new, intuitive definition of diversity called DisC diversity. A DisC
diverse subset of a query result contains objects such that each object in the
result is represented by a similar object in the diverse subset and the objects
in the diverse subset are dissimilar to each other. We show that locating a
minimum DisC diverse subset is an NP-hard problem and provide heuristics for
its approximation. We also propose adapting DisC diverse subsets to a different
degree of diversification. We call this operation zooming. We present efficient
implementations of our algorithms based on the M-tree, a spatial index
structure, and experimentally evaluate their performance.
|
1208.3546
|
Identifiability of multivariate logistic mixture models
|
math.PR cs.IT math.IT
|
Mixture models have been widely used in modeling of continuous observations.
For the possibility to estimate the parameters of a mixture model consistently
on the basis of observations from the mixture, identifiability is a necessary
condition. In this study, we give some results on the identifiability of
multivariate logistic mixture models.
|
1208.3549
|
Explicit Simplicial Discretization of Distributed-Parameter
Port-Hamiltonian Systems
|
cs.SY math.OC
|
Simplicial Dirac structures as finite analogues of the canonical Stokes-Dirac
structure, capturing the topological laws of the system, are defined on
simplicial manifolds in terms of primal and dual cochains related by the
coboundary operators. These finite-dimensional Dirac structures offer a
framework for the formulation of standard input-output finite-dimensional
port-Hamiltonian systems that emulate the behavior of distributed-parameter
port-Hamiltonian systems. This paper elaborates on the matrix representations
of simplicial Dirac structures and the resulting port-Hamiltonian systems on
simplicial manifolds. Employing these representations, we consider the
existence of structural invariants and demonstrate how they pertain to the
energy shaping of port-Hamiltonian systems on simplicial manifolds.
|
1208.3561
|
Efficient Active Learning of Halfspaces: an Aggressive Approach
|
cs.LG
|
We study pool-based active learning of half-spaces. We revisit the aggressive
approach for active learning in the realizable case, and show that it can be
made efficient and practical, while also having theoretical guarantees under
reasonable assumptions. We further show, both theoretically and experimentally,
that it can be preferable to mellow approaches. Our efficient aggressive active
learner of half-spaces has formal approximation guarantees that hold when the
pool is separable with a margin. While our analysis is focused on the
realizable setting, we show that a simple heuristic allows using the same
algorithm successfully for pools with low error as well. We further compare the
aggressive approach to the mellow approach, and prove that there are cases in
which the aggressive approach results in significantly better label complexity
compared to the mellow approach. We demonstrate experimentally that substantial
improvements in label complexity can be achieved using the aggressive approach,
for both realizable and low-error settings.
|
1208.3598
|
Improved Successive Cancellation Decoding of Polar Codes
|
cs.IT math.IT
|
As improved versions of successive cancellation (SC) decoding algorithm,
successive cancellation list (SCL) decoding and successive cancellation stack
(SCS) decoding are used to improve the finite-length performance of polar
codes. Unified descriptions of SC, SCL and SCS decoding algorithms are given as
path searching procedures on the code tree of polar codes. Combining the ideas
of SCL and SCS, a new decoding algorithm named successive cancellation hybrid
(SCH) is proposed, which can achieve a better trade-off between computational
complexity and space complexity. Further, to reduce the complexity, a pruning
technique is proposed to avoid unnecessary path searching operations.
Performance and complexity analysis based on simulations show that, with proper
configurations, all three improved successive cancellation (ISC) decoding
algorithms can have a performance very close to that of maximum-likelihood (ML)
decoding with acceptable complexity. Moreover, with the help of the proposed
pruning technique, the complexities of ISC decoders can be very close to that
of SC decoder in the moderate and high signal-to-noise ratio (SNR) regime.
|
1208.3600
|
Modeling and Control of CSTR using Model based Neural Network Predictive
Control
|
cs.AI cs.NE nlin.AO
|
This paper presents a predictive control strategy based on a neural network
model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This
system is a highly nonlinear process; therefore, a nonlinear predictive method,
e.g., neural network predictive control, can be a better match to govern the
system dynamics. In the paper, the NN model and the way in which it can be used
to predict the behavior of the CSTR process over a certain prediction horizon
are described, and some comments about the optimization procedure are made.
Predictive control algorithm is applied to control the concentration in a
continuous stirred tank reactor (CSTR), whose parameters are optimally
determined by solving a quadratic performance index using the optimization
algorithm. Efficient control of the product concentration in the CSTR can be
achieved only through an accurate model. Here an attempt is made to alleviate
the modeling difficulties using artificial intelligence techniques such as
neural networks. Simulation results demonstrate the feasibility and
effectiveness of
the NNMPC technique.
|
1208.3619
|
SASeq: A Selective and Adaptive Shrinkage Approach to Detect and
Quantify Active Transcripts using RNA-Seq
|
q-bio.QM cs.CE q-bio.GN
|
Identification and quantification of condition-specific transcripts using
RNA-Seq is vital in transcriptomics research. While initial efforts using
mathematical or statistical modeling of read counts or per-base exonic signal
have been successful, they may suffer from model overfitting since not all the
reference transcripts in a database are expressed under a specific biological
condition. Standard shrinkage approaches, such as Lasso, shrink all the
transcript abundances to zero in a non-discriminative manner, and hence do not
necessarily yield the set of condition-specific transcripts. Informed shrinkage
approaches, using the observed exonic coverage signal, are thus desirable.
Motivated by ubiquitous uncovered exonic regions in RNA-Seq data, termed as
"naked exons", we propose a new computational approach that first filters out
the reference transcripts not supported by splicing and paired-end reads, and
then fits a new mathematical model of the per-base exonic coverage signal and
the underlying transcript structure. We introduce a tuning parameter to
penalize the specific regions of the selected transcripts that were not
supported by the naked exons. Our approach compares favorably with the selected
competing methods in terms of both time complexity and accuracy using simulated
and real-world data. Our method is implemented in SAMMate, a GUI software suite
freely available from http://sammate.sourceforge.net
|
1208.3623
|
Content-based Text Categorization using Wikitology
|
cs.IR cs.AI
|
A major computational burden, while performing document clustering, is the
calculation of a similarity measure between a pair of documents. A similarity
measure is a function that assigns a real number between 0 and 1 to a pair of
documents, depending upon the degree of similarity between them. A value of
zero means that the documents are completely dissimilar whereas a value of one
indicates that the documents are practically identical. Traditionally,
vector-based models have been used for computing the document similarity. The
vector-based models represent several features present in documents. These
approaches to similarity measures, in general, cannot account for the semantics
of the document. Documents written in human languages contain contexts and the
words used to describe these contexts are generally semantically related.
Motivated by this fact, many researchers have proposed semantic-based
similarity measures by utilizing text annotation through external thesauruses
like WordNet (a lexical database). In this paper, we define a semantic
similarity measure based on documents represented in topic maps. Topic maps are
rapidly becoming an industrial standard for knowledge representation with a
focus for later search and extraction. The documents are transformed into a
topic map based coded knowledge and the similarity between a pair of documents
is represented as a correlation between the common patterns. The experimental
studies on the text mining datasets reveal that this new similarity measure is
more effective as compared to commonly used similarity measures in text
clustering.
|
1208.3653
|
Using Location-Based Social Networks to Validate Human Mobility and
Relationships Models
|
cs.SI physics.soc-ph
|
We propose to use social networking data to validate mobility models for
pervasive mobile ad-hoc networks (MANETs) and delay tolerant networks (DTNs).
The Random Waypoint (RWP) and Erdos-Renyi (ER) models have been a popular
choice among researchers for generating mobility traces of nodes and
relationships between them. Not only are RWP and ER useful in evaluating
networking protocols in a simulation environment, but they are also used for
theoretical analysis of such dynamic networks. However, it has been observed
that neither relationships among people nor their movements are random.
Instead, human movements frequently contain repeated patterns and friendship is
bounded by distance. We used the social networking site Gowalla to collect,
create
and validate models of human mobility and relationships for analysis and
evaluations of applications in opportunistic networks such as sensor networks
and transportation models in civil engineering. In doing so, we hope to provide
more human-like movements and social relationship models to researchers to
study problems in complex and mobile networks.
|
1208.3665
|
An Evaluation of Popular Copy-Move Forgery Detection Approaches
|
cs.CV
|
A copy-move forgery is created by copying and pasting content within the same
image, and potentially post-processing it. In recent years, the detection of
copy-move forgeries has become one of the most actively researched topics in
blind image forensics. A considerable number of different algorithms have been
proposed focusing on different types of postprocessed copies. In this paper, we
aim to answer which copy-move forgery detection algorithms and processing steps
(e.g., matching, filtering, outlier detection, affine transformation
estimation) perform best in various postprocessing scenarios. The focus of our
analysis is to evaluate the performance of previously proposed feature sets. We
achieve this by casting existing algorithms in a common pipeline. In this
paper, we examined the 15 most prominent feature sets. We analyzed the
detection performance on a per-image basis and on a per-pixel basis. We created
a challenging real-world copy-move dataset, and a software framework for
systematic image manipulation. Experiments show that the keypoint-based
features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA and
Zernike features perform very well. These feature sets exhibit the best
robustness against various noise sources and downsampling, while reliably
identifying the copied regions.
|
1208.3667
|
2.5K-Graphs: from Sampling to Generation
|
cs.SI physics.data-an physics.soc-ph
|
Understanding network structure and having access to realistic graphs play a
central role in computer and social networks research. In this paper, we
propose a complete and practical methodology for generating graphs that
resemble a real graph of interest. The metrics of the original topology we
target to match are the joint degree distribution (JDD) and the
degree-dependent average clustering coefficient ($\bar{c}(k)$). We start by
developing efficient estimators for these two metrics based on a node sample
collected via either independence sampling or random walks. Then, we process
the output of the estimators to ensure that the target properties are
realizable. Finally, we propose an efficient algorithm for generating
topologies that have the exact target JDD and a $\bar{c}(k)$ close to the
target. Extensive simulations using real-life graphs show that the graphs
generated by our methodology are similar to the original graph with respect to,
not only the two target metrics, but also a wide range of other topological
metrics; furthermore, our generator is orders of magnitude faster than
state-of-the-art techniques.
|
1208.3670
|
A Survey of Recent View-based 3D Model Retrieval Methods
|
cs.CV
|
Extensive research efforts have been dedicated to 3D model retrieval in
recent decades. Recently, view-based methods have attracted much research
attention due to the high discriminative property of multi-views for 3D object
representation. In this report, we summarize view-based 3D model retrieval
methods and outline further research trends. This paper focuses on the scheme
for matching between multiple views of 3D models and the application of the
bag-of-visual-words method in 3D model retrieval. For matching between multiple
views, many-to-many matching, probabilistic matching, and semisupervised
learning methods are introduced. For the bag-of-visual-words application in 3D
model retrieval, we first briefly review bag-of-visual-words work on multimedia
and computer vision tasks, where the visual dictionary is introduced in detail.
Then a series of 3D model retrieval methods using the bag-of-visual-words
description are surveyed. Finally, we summarize further research directions in
view-based 3D model retrieval.
|
1208.3681
|
Calculations of Frequency Response Functions (FRF) Using Computer Smart
Office Software and Nyquist Plot under Gyroscopic Effect Rotation
|
cs.CE
|
Regeneration and synthesis of frequency response function (FRF) curves serve
two main requirements in the response model: the first is regenerating a
"theoretical" curve for the frequency response function actually measured and
analyzed, and the second is synthesizing the other functions that were not
measured. The FRF isolates the inherent dynamic properties of a mechanical
structure. Experimental modal parameters (frequency, damping, and mode shape)
are also obtained from a set of FRF measurements. The FRF describes the
input-output relationship between two points on a structure as a function of
frequency. Therefore, an FRF is actually defined between a single input DOF
(point & direction) and a single output DOF, although the FRF was previously
defined as a ratio of the Fourier transforms of an output and input signal. In
this paper we detect the FRF curve using the Nyquist plot under gyroscopic
effect in a revolving structure, using the Computer Smart Office software.
Keywords - FRF curve; modal test; Nyquist plot; software engineering;
gyroscopic effect; smart office.
|
1208.3687
|
Information-theoretic Dictionary Learning for Image Classification
|
cs.CV cs.IT math.IT stat.ML
|
We present a two-stage approach for learning dictionaries for object
classification tasks based on the principle of information maximization. The
proposed method seeks a dictionary that is compact, discriminative, and
generative. In the first stage, dictionary atoms are selected from an initial
dictionary by maximizing the mutual information measure on dictionary
compactness, discrimination and reconstruction. In the second stage, the
selected dictionary atoms are updated for improved reconstructive and
discriminative power using a simple gradient ascent algorithm on mutual
information. Experiments using real datasets demonstrate the effectiveness of
our approach for image classification tasks.
|
1208.3689
|
An improvement direction for filter selection techniques using
information theory measures and quadratic optimization
|
cs.LG cs.IT math.IT
|
Filter selection techniques are known for their simplicity and efficiency.
However, this kind of method does not take the inter-redundancy of the
features into consideration. Consequently, redundant features that go
unremoved remain in the final classification model, lowering its
generalization performance. In this paper we propose a mathematical
optimization method that reduces inter-feature redundancy and maximizes the
relevance between each feature and the target variable.
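As a hedged illustration of the relevance-versus-redundancy trade-off, here is a greedy mRMR-style selector — a simpler stand-in for the quadratic optimization proposed above. The helper names and the plug-in mutual information estimator are mine, not the paper's.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in MI estimate (in nats) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def select_features(features, target, k):
    """Greedily maximize MI(f; target) minus mean MI(f; already chosen).

    `features` maps name -> list of discrete values; `target` is the class
    labels. An mRMR-style stand-in for the paper's quadratic program.
    """
    chosen = []
    remaining = dict(features)
    while remaining and len(chosen) < k:
        def score(name):
            rel = mutual_information(remaining[name], target)
            red = (sum(mutual_information(remaining[name], features[c])
                       for c in chosen) / len(chosen)) if chosen else 0.0
            return rel - red
        best = max(remaining, key=score)
        chosen.append(best)
        del remaining[best]
    return chosen
```

With a fully informative feature, an exact duplicate of it, and an uninformative constant, the redundancy term drives the duplicate's score to zero after the first pick.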
|
1208.3691
|
On the genericity properties in networked estimation: Topology design
and sensor placement
|
cs.MA cs.IT math.IT
|
In this paper, we consider networked estimation of linear, discrete-time
dynamical systems monitored by a network of agents. In order to minimize the
power requirement at the (possibly, battery-operated) agents, we require that
the agents can exchange information with their neighbors only \emph{once per
dynamical system time-step}; in contrast to consensus-based estimation where
the agents exchange information until they reach a consensus. It can be
verified that with this restriction on information exchange, measurement fusion
alone results in an unbounded estimation error at every such agent that does
not have an observable set of measurements in its neighborhood. To overcome
this challenge, state-estimate fusion has been proposed to recover the system
observability. However, we show that adding state-estimate fusion may not
recover observability when the system matrix is structured-rank ($S$-rank)
deficient.
In this context, we characterize the state-estimate fusion and measurement
fusion under both full $S$-rank and $S$-rank deficient system matrices.
|
1208.3700
|
Synthetic Aperture Radar Imaging and Motion Estimation via Robust
Principle Component Analysis
|
cs.IT math.IT math.NA
|
We consider the problem of synthetic aperture radar (SAR) imaging and motion
estimation of complex scenes. By complex we mean scenes with multiple targets,
stationary and in motion. We use the usual setup with one moving antenna
emitting and receiving signals. We address two challenges: (1) the detection of
moving targets in the complex scene and (2) the separation of the echoes from
the stationary targets and those from the moving targets. Such separation
allows high resolution imaging of the stationary scene and motion estimation
with the echoes from the moving targets alone. We show that the robust
principal component analysis (PCA) method which decomposes a matrix in two
parts, one low rank and one sparse, can be used for motion detection and data
separation. The matrix that is decomposed is the pulse and range compressed SAR
data indexed by two discrete time variables: the slow time, which parametrizes
the location of the antenna, and the fast time, which parametrizes the echoes
received between successive emissions from the antenna. We present an analysis
of the rank of the data matrix to motivate the use of the robust PCA method. We
also show with numerical simulations that successful data separation with
robust PCA requires proper data windowing. Results of motion estimation and
imaging with the separated data are presented, as well.
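A minimal sketch of the robust PCA (principal component pursuit) decomposition described above, via a standard augmented-Lagrangian iteration; the default parameters follow common choices in the robust PCA literature and are not taken from the paper.

```python
import numpy as np

def shrink(x, tau):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svd_shrink(x, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u @ np.diag(shrink(s, tau)) @ vt

def robust_pca(m, lam=None, mu=None, iters=200, tol=1e-7):
    """Split m into low-rank l plus sparse s by principal component pursuit."""
    m = np.asarray(m, dtype=float)
    lam = lam or 1.0 / np.sqrt(max(m.shape))
    mu = mu or 0.25 * m.size / (np.abs(m).sum() + 1e-12)
    l = np.zeros_like(m); s = np.zeros_like(m); y = np.zeros_like(m)
    for _ in range(iters):
        l = svd_shrink(m - s + y / mu, 1.0 / mu)      # low-rank update
        s = shrink(m - l + y / mu, lam / mu)          # sparse update
        r = m - l - s
        y += mu * r                                   # dual ascent
        if np.linalg.norm(r) <= tol * np.linalg.norm(m):
            break
    return l, s
```

Applied to the pulse- and range-compressed SAR data matrix indexed by slow time and fast time, the low-rank part would correspond to the stationary scene and the sparse part to the echoes from the moving targets.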
|
1208.3716
|
Improved Total Variation based Image Compressive Sensing Recovery by
Nonlocal Regularization
|
cs.CV
|
Recently, total variation (TV) based minimization algorithms have achieved
great success in compressive sensing (CS) recovery for natural images due to
its virtue of preserving edges. However, the use of TV is not able to recover
the fine details and textures, and often suffers from undesirable staircase
artifact. To reduce these effects, this letter presents an improved TV based
image CS recovery algorithm by introducing a new nonlocal regularization
constraint into CS optimization problem. The nonlocal regularization is built
on the well known nonlocal means (NLM) filtering and takes advantage of
self-similarity in images, which helps to suppress the staircase effect and
restore the fine details. Furthermore, an efficient augmented Lagrangian based
algorithm is developed to solve the above combined TV and nonlocal
regularization constrained problem. Experimental results demonstrate that the
proposed algorithm achieves significant performance improvements over the
state-of-the-art TV based algorithm in both PSNR and visual perception.
|
1208.3719
|
Auto-WEKA: Combined Selection and Hyperparameter Optimization of
Classification Algorithms
|
cs.LG
|
Many different machine learning algorithms exist; taking into account each
algorithm's hyperparameters, there is a staggeringly large number of possible
alternatives overall. We consider the problem of simultaneously selecting a
learning algorithm and setting its hyperparameters, going beyond previous work
that addresses these issues in isolation. We show that this problem can be
addressed by a fully automated approach, leveraging recent innovations in
Bayesian optimization. Specifically, we consider a wide range of feature
selection techniques (combining 3 search and 8 evaluator methods) and all
classification approaches implemented in WEKA, spanning 2 ensemble methods, 10
meta-methods, 27 base classifiers, and hyperparameter settings for each
classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup
09, variants of the MNIST dataset and CIFAR-10, we show classification
performance often much better than using standard selection/hyperparameter
optimization methods. We hope that our approach will help non-expert users to
more effectively identify machine learning algorithms and hyperparameter
settings appropriate to their applications, and hence to achieve improved
performance.
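The combined algorithm selection and hyperparameter optimization problem can be sketched with a toy random-search stand-in. Auto-WEKA itself uses Bayesian optimization over WEKA's far larger space; everything below — the two classifiers, the search space, and the fold scheme — is hypothetical and for illustration only.

```python
import random
from collections import Counter
from statistics import mean

def knn_predict(train, labels, x, k):
    """k-nearest-neighbour vote by squared Euclidean distance."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    return Counter(labels[i] for i in order[:k]).most_common(1)[0][0]

def centroid_predict(train, labels, x, _):
    """Nearest class centroid (no hyperparameter)."""
    cents = {}
    for cls in set(labels):
        pts = [p for p, y in zip(train, labels) if y == cls]
        cents[cls] = [mean(col) for col in zip(*pts)]
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(cents[c], x)))

SPACE = [  # (predictor, hyperparameter sampler) -- a toy stand-in for WEKA's space
    (knn_predict, lambda rng: rng.choice([1, 3, 5, 7])),
    (centroid_predict, lambda rng: None),
]

def cash_random_search(xs, ys, budget=20, folds=4, seed=0):
    """Jointly pick a classifier and its hyperparameter by random search + CV."""
    rng = random.Random(seed)
    best = (None, None, -1.0)
    for _ in range(budget):
        model, sample = rng.choice(SPACE)
        hp = sample(rng)
        hits = 0
        for i, (x, y) in enumerate(zip(xs, ys)):   # held-out fold per point
            tr = [p for j, p in enumerate(xs) if j % folds != i % folds]
            tl = [l for j, l in enumerate(ys) if j % folds != i % folds]
            hits += model(tr, tl, x, hp) == y
        acc = hits / len(xs)
        if acc > best[2]:
            best = (model, hp, acc)
    return best
```

The point of treating selection and tuning jointly, as the paper argues, is that the search can trade evaluations between choosing among models and tuning within one.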
|
1208.3723
|
Image Super-Resolution via Dual-Dictionary Learning And Sparse
Representation
|
cs.CV
|
Learning-based image super-resolution aims to reconstruct high-frequency (HF)
details from the prior model trained by a set of high- and low-resolution image
patches. In this paper, HF to be estimated is considered as a combination of
two components: main high-frequency (MHF) and residual high-frequency (RHF),
and we propose a novel image super-resolution method via dual-dictionary
learning and sparse representation, which consists of the main dictionary
learning and the residual dictionary learning, to recover MHF and RHF
respectively. Extensive experimental results on test images validate that by
employing the proposed two-layer progressive scheme, more image details can be
recovered and much better results can be achieved than the state-of-the-art
algorithms in terms of both PSNR and visual perception.
|
1208.3728
|
Online Learning with Predictable Sequences
|
stat.ML cs.LG
|
We present methods for online linear optimization that take advantage of
benign (as opposed to worst-case) sequences. Specifically if the sequence
encountered by the learner is described well by a known "predictable process",
the algorithms presented enjoy tighter bounds as compared to the typical worst
case bounds. Additionally, the methods achieve the usual worst-case regret
bounds if the sequence is not benign. Our approach can be seen as a way of
adding prior knowledge about the sequence within the paradigm of online
learning. The setting is shown to encompass partial and side information.
Variance and path-length bounds can be seen as particular examples of online
learning with simple predictable sequences.
We further extend our methods and results to include competing with a set of
possible predictable processes (models), that is "learning" the predictable
process itself concurrently with using it to obtain better regret guarantees.
We show that such model selection is possible under various assumptions on the
available feedback. Our results suggest a promising direction of further
research with potential applications to stock market and time series
prediction.
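A one-dimensional sketch of the optimistic idea: play using a predicted gradient (here simply the last observed one, which corresponds to path-length-style guarantees), then correct with the true gradient. This is a simplified stand-in for the paper's optimistic mirror descent, with parameters chosen by me.

```python
def optimistic_ogd(grads, eta=0.1, lo=-1.0, hi=1.0):
    """Optimistic online gradient steps on the interval [lo, hi].

    Before each round the learner plays using a hint M_t (here: the last
    observed gradient); after seeing the true gradient g_t it corrects a
    secondary iterate. When the gradients form a benign, predictable
    sequence, the hints are accurate and regret shrinks accordingly.
    """
    clip = lambda v: max(lo, min(hi, v))
    x_half = 0.0   # secondary iterate kept between rounds
    hint = 0.0     # predicted next gradient (last observed one)
    played = []
    for g in grads:
        played.append(clip(x_half - eta * hint))  # optimistic play
        x_half = clip(x_half - eta * g)           # correction with true g
        hint = g
    return played
```

On a perfectly predictable (constant) gradient sequence, after the first round the optimistic iterates stay one gradient step ahead of plain online gradient descent.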
|
1208.3774
|
Graphical Query Builder in Opportunistic Sensor Networks to discover
Sensor Information
|
cs.IR
|
Many sensor network applications are data-driven. We believe that querying is
the most natural way to discover sensor services. Users are normally unaware
of the available sensors, so they need to pose different types of queries over
the sensor network to obtain the desired information. Users may even need to
input more complicated queries with higher levels of aggregation, requiring
more complex interactions with the system. Since users have no prior knowledge
of the sensor data or services, our aim is to develop a visual query interface
where users can formulate user-friendly queries that the machine can
understand. In this paper, we have developed an interactive visual query
interface. To accomplish this, we considered several use cases and derived
graphical representations of queries from their text-based format for those
scenarios. We assist the user by extracting classes, subclasses, and
properties from an ontology: the interface parses an OWL file, and users build
visual queries based on the parsed information. The visual query language is
then translated into a SPARQL query, a machine-understandable format that lets
the system communicate with the underlying technology.
|
1208.3779
|
Multiple graph regularized protein domain ranking
|
cs.LG cs.CE cs.IR q-bio.QM
|
Background Protein domain ranking is a fundamental task in structural
biology. Most protein domain ranking methods rely on the pairwise comparison of
protein domains while neglecting the global manifold structure of the protein
domain database. Recently, graph regularized ranking that exploits the global
structure of the graph defined by the pairwise similarities has been proposed.
However, the existing graph regularized ranking methods are very sensitive to
the choice of the graph model and parameters, and this remains a difficult
problem for most of the protein domain ranking methods.
Results To tackle this problem, we have developed the Multiple Graph
regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to
regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold
of protein domain distribution by combining multiple initial graphs for the
regularization. Graph weights are learned with ranking scores jointly and
automatically, by alternately minimizing an objective function in an
iterative algorithm. Experimental results on a subset of the ASTRAL SCOP
protein domain database demonstrate that MultiG-Rank achieves a better ranking
performance than single graph regularized ranking methods and pairwise
similarity based ranking methods.
Conclusion The problem of graph model and parameter selection in graph
regularized protein domain ranking can be solved effectively by combining
multiple graphs. This aspect of generalization introduces a new frontier in
applying multiple graphs to solving protein domain ranking applications.
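The single-graph building block can be sketched as follows; MultiG-Rank additionally learns the combination weights jointly with the ranking scores, whereas this hedged illustration fixes them (the function name and defaults are mine).

```python
import numpy as np

def multi_graph_rank(graphs, y, alpha=0.9, weights=None):
    """Rank with a convex combination of graph Laplacians.

    Solves (I + alpha * L_mix) f = y with L_mix = sum_i w_i (D_i - A_i):
    a simplified MultiG-Rank-style scorer where the weights are fixed,
    not learned. `graphs` holds symmetric nonnegative affinity matrices;
    `y` marks the query items.
    """
    n = len(y)
    w = weights or [1.0 / len(graphs)] * len(graphs)
    l_mix = np.zeros((n, n))
    for wi, a in zip(w, graphs):
        d = np.diag(a.sum(axis=1))
        l_mix += wi * (d - a)         # combined graph Laplacian
    return np.linalg.solve(np.eye(n) + alpha * l_mix, np.asarray(y, float))
```

On a chain graph with the query at one end, the scores decay monotonically with graph distance, which is the manifold-smoothness behaviour the regularizer enforces.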
|
1208.3789
|
On Global Stability of Financial Networks
|
q-fin.GN cs.CE
|
The recent financial crisis has generated renewed interest in the fragility
of global financial networks among economists and regulatory authorities. In
particular, a potential vulnerability of the financial networks is the
"financial contagion" process in which insolvencies of individual entities
propagate through the "web of dependencies" to affect the entire system. In
this paper, we formalize an extension of a financial network model originally
proposed by Nier et al. for scenarios such as the OTC derivatives market,
define a suitable global stability measure for this model, and perform a
comprehensive empirical evaluation of this stability measure over more than
700,000 combinations of network types and parameters. From these evaluations,
we draw many interesting implications of this stability measure, and derive
topological properties and parameter combinations that may be used to flag a
network as potentially fragile. An interactive software tool, FIN-STAB, for
computing the stability is
available from the website www2.cs.uic.edu/~dasgupta/financial-simulator-files
|
1208.3790
|
Secret Key Generation from Sparse Wireless Channels: Ergodic Capacity
and Secrecy Outage
|
cs.CR cs.IT math.IT
|
This paper investigates generation of a secret key from a reciprocal wireless
channel. In particular we consider wireless channels that exhibit sparse
structure in the wideband regime and the impact of the sparsity on the secret
key capacity. We explore this problem in two steps. First, we study key
generation from a state-dependent discrete memoryless multiple source. The
state of the source captures the effect of channel sparsity. Second, we consider
a wireless channel model that captures channel sparsity and correlation between
the legitimate users' channel and the eavesdropper's channel. Such dependency
can significantly reduce the secret key capacity.
According to system delay requirements, two performance measures are
considered: (i) ergodic secret key capacity and (ii) outage probability. We
show that in the wideband regime when a white sounding sequence is adopted, a
sparser channel can achieve a higher ergodic secret key rate than a richer
channel can. For outage performance, we show that if the users generate secret
keys at a fraction of the ergodic capacity, the outage probability will decay
exponentially in signal bandwidth. Moreover, a larger exponent is achieved by a
richer channel.
|
1208.3802
|
OntoAna: Domain Ontology for Human Anatomy
|
cs.AI
|
Today we can find many search engines, but the information they provide is
general-purpose in nature; none of them provide domain-specific information.
This is troublesome for a novice user who wishes to find information in a
particular domain. In this paper, we develop an ontology that can be used by a
domain-specific search engine. The ontology covers human anatomy, capturing
information about the cardiovascular system, the digestive system, the
skeleton, and the nervous system. This information can be used by people
working in the medical and health-care domains.
|
1208.3806
|
Dynamic Rate Adaptation for Improved Throughput and Delay in Wireless
Network Coded Broadcast
|
cs.IT math.IT
|
In this paper we provide theoretical and simulation-based study of the
delivery delay performance of a number of existing throughput optimal coding
schemes and use the results to design a new dynamic rate adaptation scheme that
achieves improved overall throughput-delay performance.
Under a baseline rate control scheme, the receivers' delay performance is
examined. Based on a receiver's Markov state, namely the knowledge difference
between the sender and receiver, three distinct methods for packet delivery
are identified:
zero state, leader state and coefficient-based delivery. We provide analyses of
each of these and show that, in many cases, zero state delivery alone presents
a tractable approximation of the expected packet delivery behaviour.
Interestingly, while coefficient-based delivery has so far been treated as a
secondary effect in the literature, we find that the choice of coefficients is
extremely important in determining the delay, and a well chosen encoding scheme
can, in fact, contribute a significant improvement to the delivery delay.
Based on our delivery delay model, we develop a dynamic rate adaptation
scheme which uses performance prediction models to determine the sender
transmission rate. Surprisingly, taking this approach leads us to the simple
conclusion that the sender should regulate its addition rate based on the total
number of undelivered packets stored at the receivers. We show that despite its
simplicity, our proposed dynamic rate adaptation scheme results in noticeably
improved throughput-delay performance over existing schemes in the literature.
|
1208.3809
|
Lifted Variable Elimination: A Novel Operator and Completeness Results
|
cs.AI
|
Various methods for lifted probabilistic inference have been proposed, but
our understanding of these methods and the relationships between them is still
limited, compared to their propositional counterparts. The only existing
theoretical characterization of lifting is for weighted first-order model
counting (WFOMC), which was shown to be complete domain-lifted for the class of
2-logvar models. This paper makes two contributions to lifted variable
elimination (LVE). First, we introduce a novel inference operator called group
inversion. Second, we prove that LVE augmented with this operator is complete
in the same sense as WFOMC.
|
1208.3811
|
State distributions and minimum relative entropy noise sequences in
uncertain stochastic systems: the discrete time case
|
cs.SY cs.IT math.IT math.OC math.PR
|
The paper is concerned with a dissipativity theory and robust performance
analysis of discrete-time stochastic systems driven by a statistically
uncertain random noise. The uncertainty is quantified by the conditional
relative entropy of the actual probability law of the noise with respect to a
nominal product measure corresponding to a white noise sequence. We discuss a
balance equation, dissipation inequality and superadditivity property for the
corresponding conditional relative entropy supply as a function of time. The
problem of minimizing the supply required to drive the system between given
state distributions over a specified time horizon is considered. Such
variational problems, involving entropy and probabilistic boundary conditions,
are known in the literature as Schroedinger bridge problems. In application to
control systems, this minimum required conditional relative entropy supply
characterizes the robustness of the system with respect to an uncertain noise.
We obtain a dynamic programming Bellman equation for the minimum required
conditional relative entropy supply and establish a Markov property of the
worst-case noise with respect to the state of the system. For multivariable
linear systems with a Gaussian white noise sequence as the nominal noise model
and Gaussian initial and terminal state distributions, the minimum required
supply is obtained using an algebraic Riccati equation which admits a
closed-form solution. We propose a computable robustness index for such systems
in the framework of an entropy theoretic formulation of uncertainty and provide
an example to illustrate this approach.
|
1208.3812
|
Algorithms for Efficient Mining of Statistically Significant Attribute
Association Information
|
cs.DB
|
Knowledge of the association information between the attributes in a data set
provides insight into the underlying structure of the data and explains the
relationships (independence, synergy, redundancy) between the attributes and
class (if present). Complex models learnt computationally from the data are
more interpretable to a human analyst when such interdependencies are known. In
this paper, we focus on mining two types of association information among the
attributes - correlation information and interaction information for both
supervised (class attribute present) and unsupervised analysis (class attribute
absent). Identifying the statistically significant attribute associations is a
computationally challenging task - the number of possible associations
increases exponentially and many associations contain redundant information
when a number of correlated attributes are present. In this paper, we explore
efficient data mining methods to discover non-redundant attribute sets that
contain significant association information indicating the presence of
informative patterns in the data.
|
1208.3815
|
Hardy-Schatten Norms of Systems, Output Energy Cumulants and Linear
Quadro-Quartic Gaussian Control
|
cs.SY math.OC math.PR
|
This paper is concerned with linear stochastic control systems in state
space. The integral of the squared norm of the system output over a bounded
time interval is interpreted as energy. The cumulants of the output energy in
the infinite-horizon limit are related to Schatten norms of the system in the
Hardy space of transfer functions and the risk-sensitive performance index. We
employ a novel performance criterion which seeks to minimize a combination of
the average value and the variance of the output energy of the system per unit
time. The resulting linear quadro-quartic Gaussian control problem involves the
H2 and H4-norms of the closed-loop system. We obtain equations for the optimal
controller and outline a homotopy method which reduces the solution of the
problem to the numerical integration of a differential equation initialized by
the standard linear quadratic Gaussian controller.
|
1208.3822
|
Joint-ViVo: Selecting and Weighting Visual Words Jointly for
Bag-of-Features based Tissue Classification in Medical Images
|
cs.CV stat.ML
|
Automatically classifying the tissue type of a Region of Interest (ROI) in
medical imaging is an important application in Computer-Aided Diagnosis (CAD),
e.g., classifying breast parenchymal tissue in mammograms or lung disease
patterns in High-Resolution Computed Tomography (HRCT). Recently, the
bag-of-features method has shown its power in this field, treating each ROI as
a set of local features. In this paper, we investigate using the
bag-of-features strategy to classify tissue types in medical imaging
applications. Two important issues are considered: visual vocabulary learning
and weighting. Although plenty of algorithms already address these issues,
they all treat them independently, i.e., the vocabulary is learned first and
the histogram is weighted afterwards. Inspired by Auto-Context, which learns
features and a classifier jointly, we aim to develop
a novel algorithm that learns the vocabulary and weights jointly. The new
algorithm, called Joint-ViVo, works in an iterative way. In each iteration, we
first learn the weights for each visual word by maximizing the margin of ROI
triplets, and then select the most discriminative visual words based on the
learned weights for the next iteration. We test our algorithm on three tissue
classification tasks: identifying brain tissue type in magnetic resonance
imaging (MRI), classifying lung tissue in HRCT images, and classifying breast
tissue density in mammograms. The results show that Joint-ViVo can perform
effectively for classifying tissues.
|
1208.3830
|
On the Stability of Receding Horizon Control for Continuous-Time
Stochastic Systems
|
math.OC cs.SY
|
We study the stability of receding horizon control for continuous-time
non-linear stochastic differential equations. We illustrate the results with a
simulation example in which we employ receding horizon control to design an
investment strategy to repay a debt.
|
1208.3839
|
Discriminative Sparse Coding on Multi-Manifold for Data Representation
and Classification
|
cs.CV cs.LG stat.ML
|
Sparse coding has been popularly used as an effective data representation
method in various applications, such as computer vision, medical imaging and
bioinformatics. However, conventional sparse coding algorithms and their
manifold regularized variants (graph sparse coding and Laplacian sparse
coding) learn the codebook and codes in an unsupervised manner and neglect the
class information available in the training set. To address this problem, in
this paper we propose a novel discriminative sparse coding method based on
multi-manifold, by learning discriminative class-conditional codebooks and
sparse codes from both data feature space and class labels. First, the entire
training set is partitioned into multiple manifolds according to the class
labels. Then, we formulate the sparse coding as a manifold-manifold matching
problem and learn class-conditional codebooks and codes to maximize the
manifold margins of different classes. Lastly, we present a data point-manifold
matching error based strategy to classify the unlabeled data point.
Experimental results on somatic mutations identification and breast tumors
classification in ultrasonic images tasks demonstrate the efficacy of the
proposed data representation-classification approach.
|
1208.3845
|
Adaptive Graph via Multiple Kernel Learning for Nonnegative Matrix
Factorization
|
cs.LG cs.CV stat.ML
|
Nonnegative Matrix Factorization (NMF) has been continuously evolving in
several areas, such as pattern recognition and information retrieval. It
factorizes a matrix into a product of two low-rank non-negative matrices that
define a parts-based, linear representation of nonnegative data. Recently,
Graph regularized NMF (GrNMF) was proposed to find a compact representation
which uncovers the hidden semantics and simultaneously respects the intrinsic
geometric structure. In GrNMF, an affinity graph is constructed
from the original data space to encode the geometrical information. In this
paper, we propose a novel idea which engages a Multiple Kernel Learning
approach into refining the graph structure that reflects the factorization of
the matrix and the new data space. The GrNMF is improved by utilizing the graph
refined by the kernel learning, and then a novel kernel learning method is
introduced under the GrNMF framework. Our approach shows encouraging results
compared to state-of-the-art clustering algorithms such as NMF, GrNMF, and
SVD.
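For reference, the graph-regularized NMF baseline that the proposal refines can be sketched with the standard multiplicative updates. This is a hedged illustration: the parameter values and names are mine, and the kernel-learned graph refinement is not shown.

```python
import numpy as np

def gnmf(x, k, a, lam=1.0, iters=300, seed=0):
    """Graph-regularized NMF: X (m x n) ~= U @ V.T, with V smooth on graph A.

    `a` is an n x n symmetric nonnegative affinity matrix over the data
    columns; the multiplicative updates are the standard GrNMF scheme,
    which keeps both factors nonnegative.
    """
    rng = np.random.default_rng(seed)
    m, n = x.shape
    u = rng.random((m, k)) + 0.1
    v = rng.random((n, k)) + 0.1
    d = np.diag(a.sum(axis=1))            # degree matrix of the graph
    eps = 1e-12
    for _ in range(iters):
        u *= (x @ v) / (u @ (v.T @ v) + eps)
        v *= (x.T @ u + lam * (a @ v)) / (v @ (u.T @ u) + lam * (d @ v) + eps)
    return u, v
```

The kernel-learning idea in the abstract amounts to replacing the fixed affinity matrix `a` with one refined from the factorized data space on each outer iteration.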
|
1208.3848
|
Modelling the effect of gap junctions on tissue-level cardiac
electrophysiology
|
cs.CE physics.bio-ph q-bio.CB q-bio.TO
|
When modelling tissue-level cardiac electrophysiology, continuum
approximations to the discrete cell-level equations are used to maintain
computational tractability. One of the most commonly used models is represented
by the bidomain equations, the derivation of which relies on a homogenisation
technique to construct a suitable approximation to the discrete model. This
derivation does not explicitly account for the presence of gap junctions
connecting one cell to another. It has been seen experimentally [Rohr,
Cardiovasc. Res. 2004] that these gap junctions have a marked effect on the
propagation of the action potential, specifically as the upstroke of the wave
passes through the gap junction.
In this paper we explicitly include gap junctions in both a 2D discrete
model of cardiac electrophysiology, and the corresponding continuum model, on a
simplified cell geometry. Using these models we compare the results of
simulations using both continuum and discrete systems. We see that the form of
the action potential as it passes through gap junctions cannot be replicated
using a continuum model, and that the underlying propagation speed of the
action potential ceases to match up between models when gap junctions are
introduced. In addition, the results of the discrete simulations match the
characteristics of those shown in Rohr 2004. From this, we suggest that a
hybrid model -- a discrete system following the upstroke of the action
potential, and a continuum system elsewhere -- may give a more accurate
description of cardiac electrophysiology.
|
1208.3849
|
Analysis of parametric biological models with non-linear dynamics
|
cs.CE
|
In this paper we present recent results on parametric analysis of biological
models. The underlying method is based on the algorithms for computing
trajectory sets of hybrid systems with polynomial dynamics. The method is then
applied to two case studies of biological systems: one is a cardiac cell model
for studying the conditions for cardiac abnormalities, and the second is a
model of insect nest-site choice.
|
1208.3850
|
A subsystems approach for parameter estimation of ODE models of hybrid
systems
|
cs.CE q-bio.QM
|
We present a new method for parameter identification of ODE system
descriptions based on data measurements. Our method works by splitting the
system into a number of subsystems and working on each of them separately; it
is therefore easily parallelisable and can also deal with noise in the
observations.
|
1208.3851
|
A Model of the Cellular Iron Homeostasis Network Using Semi-Formal
Methods for Parameter Space Exploration
|
cs.CE q-bio.MN q-bio.QM
|
This paper presents a novel framework for the modeling of biological
networks. It makes use of recent tools analyzing the robust satisfaction of
properties of (hybrid) dynamical systems. The main challenge of this approach
as applied to biological systems is to get access to the relevant parameter
sets despite gaps in the available knowledge. An initial estimate of useful
parameters was sought by formalizing the known behavior of the biological
network in the STL logic using the tool Breach. Then, once a set of parameter
values consistent with known biological properties was found, we tried to
locally expand it into the largest possible valid region. We applied this
methodology in an effort to model and better understand the complex network
regulating iron homeostasis in mammalian cells. This system plays an important
role in many biological functions, including erythropoiesis, resistance against
infections, and proliferation of cancer cells.
|
1208.3852
|
Hybrid Automata and \epsilon-Analysis on a Neural Oscillator
|
cs.CE cs.LO cs.SC
|
In this paper we propose a hybrid model of a neural oscillator, obtained by
partially discretizing a well-known continuous model. Our construction points
out that in this case the standard techniques, based on replacing sigmoids with
step functions, is not satisfactory. Then, we study the hybrid model through
both symbolic methods and approximation techniques. This last analysis, in
particular, allows us to show the differences between the considered
approximation approaches. Finally, we focus on approximations via
epsilon-semantics, proving how these can be computed in practice.
|
1208.3853
|
On Expressing and Monitoring Oscillatory Dynamics
|
cs.CE cs.LO cs.NA cs.SY
|
To express temporal properties of dense-time real-valued signals, the Signal
Temporal Logic (STL) has been defined by Maler et al. The work presented a
monitoring algorithm deciding the satisfaction of STL formulae on finite
discrete samples of continuous signals. The logic has been used to express and
analyse biological systems, but it is not expressive enough to sufficiently
distinguish oscillatory properties important in biology. In this paper we
define the extended logic STL* in which STL is augmented with a signal-value
freezing operator allowing us to express (and distinguish) detailed properties
of biological oscillations. The logic is supported by a monitoring algorithm
prototyped in Matlab. The monitoring procedure of STL* is evaluated on a
biologically-relevant case study.
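The value-freezing idea can be illustrated with a toy discrete-trace monitor: remember the signal value at one sample and ask whether the signal leaves that level and later returns to it. This is only a hedged sketch of the flavor of STL*, not its actual syntax or monitoring algorithm.

```python
def oscillates(signal, min_gap, eps):
    """Does the sampled trace leave and later return to a remembered level?

    Mimics the value-freezing operator: freeze x at time t, then require a
    later time t2 (at least min_gap samples on) where the signal is back
    within eps of the frozen value, with an excursion beyond eps in between.
    """
    n = len(signal)
    for t in range(n):
        frozen = signal[t]                   # the frozen signal value
        for t2 in range(t + min_gap, n):
            returned = abs(signal[t2] - frozen) <= eps
            excursion = max(abs(v - frozen) for v in signal[t:t2 + 1]) > eps
            if returned and excursion:
                return True
    return False
```

Plain STL over the same trace can assert "eventually high" or "always bounded", but without the frozen value it cannot compare a later sample against an earlier one, which is exactly the distinction drawn above.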
|
1208.3854
|
Hybrid models of the cell cycle molecular machinery
|
cs.CE cs.SY q-bio.QM
|
Piecewise smooth hybrid systems, involving continuous and discrete variables,
are suitable models for describing the multiscale regulatory machinery of the
biological cells. In hybrid models, the discrete variables can switch on and
off some molecular interactions, simulating cell progression through a series
of functioning modes. The advancement through the cell cycle is the archetype
of such an organized sequence of events. We present an approach, inspired by
tropical geometry ideas, that allows us to reduce, hybridize and analyse cell
cycle models consisting of polynomial or rational ordinary differential
equations.
|
1208.3855
|
Effects of delayed immune-response in tumor immune-system interplay
|
cs.CE q-bio.CB
|
Tumors constitute a wide family of diseases kinetically characterized by the
co-presence of multiple spatio-temporal scales. Tumor cells ecologically
interplay with other kinds of cells, e.g. endothelial cells or immune system
effectors, producing and exchanging various chemical signals. As such, tumor
growth is an ideal object of hybrid modeling where discrete stochastic
processes model agents at low concentrations, and mean-field equations model
chemical signals. In previous works we proposed a hybrid version of the
well-known Panetta-Kirschner mean-field model of tumor cells, effector cells
and Interleukin-2. Our hybrid model suggested, at variance with the inferences
from its original formulation, that immune surveillance, i.e. tumor elimination
by the immune system, may occur as a sort of side-effect of large stochastic
oscillations. However, that model did not account for the fact that, due to
both chemical transportation and cellular differentiation/division, the
tumor-induced recruitment of immune effectors is not instantaneous but instead
exhibits a lag period. To capture this, we here integrate a
mean-field equation for Interleukins-2 with a bi-dimensional delayed stochastic
process describing such delayed interplay. An algorithm to realize trajectories
of the underlying stochastic process is obtained by coupling the Piecewise
Deterministic Markov process (for the hybrid part) with a Generalized
Semi-Markovian clock structure (to account for delays). We (i) relate tumor
mass growth with delays via simulations and via parametric sensitivity analysis
techniques, (ii) we quantitatively determine probabilistic eradication times,
and (iii) we prove, in the oscillatory regime, the existence of a heuristic
stochastic bifurcation resulting in delay-induced tumor eradication, which is
neither predicted by the mean-field nor by the hybrid non-delayed models.
|
1208.3856
|
Statistical Model Checking for Stochastic Hybrid Systems
|
cs.CE cs.SE
|
This paper presents novel extensions and applications of the UPPAAL-SMC model
checker. The extensions allow for statistical model checking of stochastic
hybrid systems. We show how our race-based stochastic semantics extends to
networks of hybrid systems, and indicate the integration technique applied for
implementing this semantics in the UPPAAL-SMC simulation engine. We report on
two applications of the resulting tool-set coming from systems biology and
energy aware buildings.
|
1208.3857
|
Towards Cancer Hybrid Automata
|
cs.SY cs.FL
|
This paper introduces Cancer Hybrid Automata (CHAs), a formalism to model the
progression of cancers through discrete phenotypes. The classification of
cancer progression using discrete states like stages and hallmarks has become
common in the biology literature, but primarily as an organizing principle, and
not as an executable formalism. The precise computational model developed here
aims to exploit this untapped potential, namely, through automatic verification
of progression models (e.g., consistency, causal connections, etc.),
classification of unreachable or unstable states and computer-generated
(individualized or universal) therapy plans. The paper builds on a
phenomenological approach, and as such does not need to assume a model for the
biochemistry of the underlying natural progression. Rather, it abstractly
models transition timings between states as well as the effects of drugs and
clinical tests, and thus allows formalization of temporal statements about the
progression as well as notions of timed therapies. The model proposed here is
ultimately based on hybrid automata, and we show how existing controller
synthesis algorithms can be generalized to CHA models, so that therapies can be
generated automatically. Throughout this paper we use cancer hallmarks to
represent the discrete states through which cancer progresses, but other
notions of discretely or continuously varying state formalisms could also be
used to derive similar therapies.
|
1208.3858
|
Disease processes as hybrid dynamical systems
|
cs.LO cs.CE cs.SY q-bio.QM
|
We investigate the use of hybrid techniques in complex processes of
infectious diseases. Since predictive disease models in biomedicine require a
multiscale approach for understanding the molecule-cell-tissue-organ-body
interactions, heterogeneous methodologies are often employed for describing the
different biological scales. Hybrid models provide effective means for complex
disease modelling where the action and dosage of a drug or a therapy could be
meaningfully investigated: the infection dynamics can be classically described
in a continuous fashion, while the scheduling of multiple treatments is
described discretely. We define an algebraic language for specifying general
disease processes and multiple treatments, from which a semantics in terms of
hybrid dynamical systems can be derived. Then, the application of
control-theoretic tools is proposed in
order to compute the optimal scheduling of multiple therapies. The
potentialities of our approach are shown in the case study of the SIR epidemic
model and we discuss its applicability on osteomyelitis, a bacterial infection
affecting the bone remodelling system in a specific and multiscale manner. We
report that formal languages are helpful in giving a general homogeneous
formulation for the different scales involved in a multiscale disease process;
and that the combination of hybrid modelling and control theory provides solid
grounds for computational medicine.
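The SIR epidemic model used as a case study above can be illustrated with a minimal simulation (a generic textbook SIR integration with illustrative parameter values, not the authors' control-theoretic formulation):

```python
# Minimal SIR epidemic simulation by forward Euler integration.
# beta: infection rate, gamma: recovery rate (illustrative values only).
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, dt=0.1, steps=1000):
    s, i, r = s0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(steps):
        ds = -beta * s * i          # susceptibles become infected
        di = beta * s * i - gamma * i  # infected grow, then recover
        dr = gamma * i              # recovered accumulate
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        history.append((s, i, r))
    return history

traj = simulate_sir()
s, i, r = traj[-1]
```

With a basic reproduction number beta/gamma = 3, the epidemic burns through most of the population; treatment scheduling, as in the paper, would act on these dynamics discretely.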
|
1208.3876
|
Digging Deeper into Deep Web Databases by Breaking Through the Top-k
Barrier
|
cs.DB
|
A large number of web databases are only accessible through proprietary
form-like interfaces which require users to query the system by entering
desired values for a few attributes. A key restriction enforced by such an
interface is the top-k output constraint - i.e., when there are a large number
of matching tuples, only a few (top-k) of them are preferentially selected and
returned by the website, often according to a proprietary ranking function.
Since most web database owners set k to be a small value, the top-k output
constraint prevents many interesting third-party (e.g., mashup) services from
being developed over real-world web databases. In this paper we consider the
novel problem of "digging deeper" into such web databases. Our main
contribution is the meta-algorithm GetNext that can retrieve the next ranked
tuple from the hidden web database using only the restrictive interface of a
web database without any prior knowledge of its ranking function. This
algorithm can then be called iteratively to retrieve as many top ranked tuples
as necessary. We develop principled and efficient algorithms that are based on
generating and executing multiple reformulated queries and inferring the next
ranked tuple from their returned results. We provide theoretical analysis of
our algorithms, as well as extensive experimental results over synthetic and
real-world databases that illustrate the effectiveness of our techniques.
|
1208.3901
|
Trace transform based method for color image domain identification
|
cs.CV
|
Context categorization is a fundamental pre-requisite for multi-domain
multimedia content analysis applications in order to manage contextual
information in an efficient manner. In this paper, we introduce a new color
image context categorization method (DITEC) based on the trace transform. The
problem of dimensionality reduction of the obtained trace transform signal is
addressed through statistical descriptors that preserve the underlying
information.
These extracted features offer a highly discriminant behavior for content
categorization. The theoretical properties of the method are analyzed and
validated experimentally through two different datasets.
|
1208.3943
|
Performance Tuning Of J48 Algorithm For Prediction Of Soil Fertility
|
cs.LG cs.DB cs.PF stat.ML
|
Data mining involves the systematic analysis of large data sets, and data
mining in agricultural soil datasets is an exciting and modern research area.
The productive capacity of a soil depends on soil fertility. Achieving and
maintaining appropriate levels of soil fertility is of utmost importance if
agricultural land is to remain capable of nourishing crop production. In this
research, the steps for building a predictive model of soil fertility are
explained.
This paper aims at predicting the soil fertility class using decision tree
algorithms in data mining. Further, it focuses on performance tuning of the
J48 decision tree algorithm with the help of meta-techniques such as attribute
selection and boosting.
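Boosting, one of the meta-techniques mentioned above, can be sketched in miniature (a generic AdaBoost with one-dimensional decision stumps on toy data, not the Weka/J48 setup used in the paper):

```python
import math

# Minimal AdaBoost with 1-D decision stumps, illustrating the boosting
# meta-technique; the stump plays the role of the base learner (J48 in
# the paper's setting).
def stump_predict(threshold, polarity, x):
    return polarity if x >= threshold else -polarity

def train_adaboost(xs, ys, rounds=10):
    n = len(xs)
    w = [1.0 / n] * n
    model = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for t in xs:                      # candidate thresholds
            for pol in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(t, pol, x) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, t, pol))
        # Re-weight: misclassified points gain weight for the next round.
        w = [wi * math.exp(-alpha * y * stump_predict(t, pol, x))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return model

def predict(model, x):
    s = sum(a * stump_predict(t, p, x) for a, t, p in model)
    return 1 if s >= 0 else -1

xs = [1, 2, 3, 4, 5, 6]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
```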
|
1208.3952
|
Dealing with Sparse Document and Topic Representations: Lab Report for
CHiC 2012
|
cs.IR
|
We will report on the participation of GESIS at the first CHiC workshop
(Cultural Heritage in CLEF). Being held for the first time, no prior experience
with the new data set, a document dump of Europeana with ca. 23 million
documents, exists. The most prominent issues that arose from pretests with this
test collection were the very unspecific topics and sparse document
representations. Only half of the topics (26/50) contained a description and
the titles were usually short with just around two words. Therefore we focused
on three different term suggestion and query expansion mechanisms to overcome
the sparse topical descriptions. We used two methods that build on concept
extraction from Wikipedia and one method that applies co-occurrence statistics
on the available Europeana corpus. In the following paper we will present the
approaches and preliminary results from their assessments.
|
1208.3966
|
Network Coding Based on Chinese Remainder Theorem
|
cs.IT math.IT
|
Random linear network coding has to sacrifice part of the bandwidth to
transfer the coding vectors; thus a header of size k log|T| is appended to
each packet. We present a distributed random network coding approach based on
the Chinese remainder theorem for general multicast networks. It uses a pair
of moduli as the header, thus reducing the size of the header to O(log k).
This makes it more suitable for scenarios where the number of source nodes is
large and the bandwidth is limited. We estimate the multicast rate and show
that its performance is satisfactory for randomly designed networks.
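The header-compression idea can be illustrated with the Chinese remainder theorem in a few lines (an illustrative sketch with made-up parameters, not the paper's exact construction):

```python
import math

# Illustrative sketch: the Chinese remainder theorem lets one large
# integer -- which can encode a whole coding-coefficient vector -- be
# carried as a few small residues instead of k explicit coefficients.
def crt(residues, moduli):
    # Reconstruct x mod prod(moduli) from x mod m_i (pairwise coprime m_i).
    M = math.prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
    return x % M

# Pack k small coefficients (base-q digits) into one integer.
def pack(coeffs, q):
    x = 0
    for c in reversed(coeffs):
        x = x * q + c
    return x

def unpack(x, q, k):
    return [(x // q**i) % q for i in range(k)]

moduli = [101, 103, 107]   # pairwise coprime, product > q**k
coeffs = [3, 7, 1, 9]      # toy coding coefficients over GF(11)
x = pack(coeffs, 11)
residues = [x % m for m in moduli]          # the compact "header"
recovered = unpack(crt(residues, moduli), 11, 4)
```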
|
1208.3981
|
Minimum Relative Entropy State Transitions in Linear Stochastic Systems:
the Continuous Time Case
|
math.OC cs.IT cs.SY math.DS math.IT math.PR
|
This paper is concerned with a dissipativity theory for dynamical systems
governed by linear Ito stochastic differential equations driven by random noise
with an uncertain drift. The deviation of the noise from a standard Wiener
process in the nominal model is quantified by relative entropy. We discuss a
dissipation inequality for the noise relative entropy supply. The problem of
minimizing the supply required to drive the system between given Gaussian state
distributions over a specified time horizon is considered. This problem, known
in the literature as the Schroedinger bridge, was treated previously in the
context of reciprocal processes. A closed-form smooth solution is obtained for
a Hamilton-Jacobi equation for the minimum required relative entropy supply by
using nonlinear algebraic techniques.
|
1208.3984
|
On the Capacity of the Cognitive Interference Channel with a Common
Cognitive Message
|
cs.IT math.IT
|
In this paper the cognitive interference channel with a common message, a
variation of the classical cognitive interference channel in which the
cognitive message is decoded at both receivers, is studied. For this channel
model new outer and inner bounds are developed as well as new capacity results
for both the discrete memoryless and the Gaussian case. The outer bounds are
derived using bounding techniques originally developed by Sato for the
classical interference channel and Nair and El Gamal for the broadcast channel.
A general inner bound is obtained combining rate-splitting, superposition
coding and binning. Inner and outer bounds are shown to coincide in the "very
strong interference" and the "primary decodes cognitive" regimes. The first
regime consists of channels in which there is no loss of optimality in having
both receivers decode both messages while in the latter regime interference
pre-cancellation at the cognitive receiver achieves capacity. The capacity of
the Gaussian channel is determined to within a constant additive gap and a
constant multiplicative factor.
|
1208.3994
|
Coordination in Network Security Games: a Monotone Comparative Statics
Approach
|
cs.GT cs.NI cs.SI
|
Malicious software, or malware for short, has become a major security threat.
While originating in criminal behavior, its impact is also influenced by the
decisions of legitimate end users. Getting agents in the
Internet, and in networks in general, to invest in and deploy security features
and protocols is a challenge, in particular because of economic reasons arising
from the presence of network externalities.
In this paper, we focus on the question of incentive alignment for agents of
a large network towards a better security. We start with an economic model for
a single agent, that determines the optimal amount to invest in protection. The
model takes into account the vulnerability of the agent to a security breach
and the potential loss if a security breach occurs. We derive conditions on the
quality of the protection to ensure that the optimal amount spent on security
is an increasing function of the agent's vulnerability and potential loss. We
also show that for a large class of risks, only a small fraction of the
expected loss should be invested.
Building on these results, we study a network of interconnected agents
subject to epidemic risks. We derive conditions to ensure that the incentives
of all agents are aligned towards a better security. When agents are strategic,
we show that security investments are always socially inefficient due to the
network externalities. Moreover alignment of incentives typically implies a
coordination problem, leading to an equilibrium with a very high price of
anarchy.
|
1208.4009
|
Learning sparse messages in networks of neural cliques
|
cs.NE
|
An extension to a recently introduced binary neural network is proposed in
order to allow the learning of sparse messages, in large numbers and with high
memory efficiency. This new network is justified both in biological and
informational terms. The learning and retrieval rules are detailed and
illustrated by various simulation results.
|
1208.4016
|
Concept driven framework for Latent Table Discovery
|
cs.DB
|
Database systems have to cater to the growing demands of the information age.
The growth of the new age information retrieval powerhouses like search engines
has thrown a challenge to the data management community to come up with novel
mechanisms for feeding information to end users. The burgeoning use of natural
language query interfaces compels system designers to present meaningful and
customised information. Conventional query languages like SQL do not cater to
these requirements due to their syntax-oriented design. Providing a semantic
cover over these systems was the aim of latent table discovery, which focuses
on semantically connecting unrelated tables that were not syntactically
related by design and on documenting the discovered knowledge. This paper
proposes a new direction for improving the semantic capabilities of database
systems by introducing a concept-driven framework over the latent table
discovery method.
|
1208.4037
|
Viability of an elementary syntactic structure in a population playing
Naming Games
|
physics.soc-ph cs.SI physics.comp-ph
|
We explore how the social dynamics of communication and learning can bring
about the rise of a syntactic communication in a population of speakers. Our
study is developed starting from a version of the Naming Game model where an
elementary syntactic structure is introduced. This analysis shows how the
transition from non-syntactic to syntactic communication is socially favored in
communities which need to exchange a large number of concepts.
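A minimal non-syntactic Naming Game, the baseline that the model above extends, can be sketched as follows (a standard textbook variant with assumed parameter values, not the authors' syntactic extension):

```python
import random

# Minimal Naming Game: agents negotiate names for a single object until
# the whole population converges on one shared name.
def naming_game(n_agents=20, max_steps=100000, seed=1):
    random.seed(seed)
    vocab = [set() for _ in range(n_agents)]
    next_name = 0
    for step in range(max_steps):
        s, h = random.sample(range(n_agents), 2)  # speaker, hearer
        if not vocab[s]:
            vocab[s].add(next_name)  # speaker invents a new name
            next_name += 1
        name = random.choice(sorted(vocab[s]))
        if name in vocab[h]:
            vocab[s] = {name}   # success: both collapse to the winning name
            vocab[h] = {name}
        else:
            vocab[h].add(name)  # failure: hearer learns the name
        if all(v == vocab[0] and len(v) == 1 for v in vocab):
            return step + 1, vocab[0]
    return max_steps, None

steps, consensus = naming_game()
```

The transition studied above concerns what happens when such agents must agree not on one name but on many concepts combined through an elementary syntax.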
|
1208.4042
|
Measuring quality, reputation and trust in online communities
|
physics.soc-ph cs.SI
|
In the Internet era the information overload and the challenge to detect
quality content has raised the issue of how to rank both resources and users in
online communities. In this paper we develop a general ranking method that can
simultaneously evaluate users' reputation and objects' quality in an iterative
procedure, and that exploits the trust relationships and social acquaintances
of users as an additional source of information. We test our method on two real
online communities, the EconoPhysics forum and the Last.fm music catalogue, and
determine how different variants of the algorithm influence the resultant
ranking. We show the benefits of considering trust relationships, and identify
the variant of the algorithm best suited to common situations.
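The iterative coupling of user reputation and object quality described above can be sketched generically (a simplified inverse-error update with toy ratings; the paper's actual update rules and trust-based variants differ):

```python
# Generic iterative quality-reputation scheme: object quality is a
# reputation-weighted average of ratings, and user reputation is the
# inverse of that user's mean squared disagreement with consensus.
def rank(ratings, n_users, n_objects, iters=50):
    # ratings: list of (user, object, value)
    rep = [1.0] * n_users
    for _ in range(iters):
        num = [0.0] * n_objects
        den = [0.0] * n_objects
        for u, o, v in ratings:
            num[o] += rep[u] * v
            den[o] += rep[u]
        quality = [n / d if d else 0.0 for n, d in zip(num, den)]
        err = [0.0] * n_users
        cnt = [0] * n_users
        for u, o, v in ratings:
            err[u] += (v - quality[o]) ** 2
            cnt[u] += 1
        rep = [1.0 / (e / c + 1e-6) if c else 0.0
               for e, c in zip(err, cnt)]
    return quality, rep

# Two users agree; a third, noisy user deviates on object 0.
ratings = [(0, 0, 5), (1, 0, 5), (2, 0, 1),
           (0, 1, 3), (1, 1, 3), (2, 1, 3)]
quality, rep = rank(ratings, 3, 2)
```

The iteration downweights the dissenting user, so the quality of object 0 moves toward the consensus value 5 rather than the plain average.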
|
1208.4043
|
Dynamic Anomalography: Tracking Network Anomalies via Sparsity and Low
Rank
|
cs.NI cs.IT math.IT
|
In the backbone of large-scale networks, origin-to-destination (OD) traffic
flows experience abrupt unusual changes known as traffic volume anomalies,
which can result in congestion and limit the extent to which end-user quality
of service requirements are met. As a means of maintaining seamless end-user
experience in dynamic environments, as well as for ensuring network security,
this paper deals with a crucial network monitoring task termed dynamic
anomalography. Given link traffic measurements (noisy superpositions of
unobserved OD flows) periodically acquired by backbone routers, the goal is to
construct an estimated map of anomalies in real time, and thus summarize the
network `health state' along both the flow and time dimensions. Leveraging the
low intrinsic-dimensionality of OD flows and the sparse nature of anomalies, a
novel online estimator is proposed based on an exponentially-weighted
least-squares criterion regularized with the sparsity-promoting $\ell_1$-norm
of the anomalies, and the nuclear norm of the nominal traffic matrix. After
recasting the non-separable nuclear norm into a form amenable to online
optimization, a real-time algorithm for dynamic anomalography is developed and
its convergence established under simplifying technical assumptions. For
operational conditions where computational complexity reductions are at a
premium, a lightweight stochastic gradient algorithm based on Nesterov's
acceleration technique is developed as well. Comprehensive numerical tests with
both synthetic and real network data corroborate the effectiveness of the
proposed online algorithms and their tracking capabilities, and demonstrate
that they outperform state-of-the-art approaches developed to diagnose traffic
anomalies.
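The sparsity-promoting l1 regularization above corresponds, in proximal form, to a soft-thresholding step; a minimal sketch of that one building block (a generic operator, not the full online estimator, which also handles the nuclear-norm term and the flow structure):

```python
# Soft-thresholding: the proximal operator of lam*|x|. Entries smaller
# than the threshold are zeroed; survivors are shrunk toward zero.
def soft_threshold(x, lam):
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Residuals that survive shrinkage are flagged as anomalies.
residuals = [0.1, -0.05, 3.2, 0.02, -2.7]
anomalies = [soft_threshold(r, 0.5) for r in residuals]
```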
|
1208.4048
|
Degrees of Freedom for MIMO Two-Way X Relay Channel
|
cs.IT math.IT
|
We study the degrees of freedom (DOF) of a multiple-input multiple-output
(MIMO) two-way X relay channel, where there are two groups of source nodes and
one relay node, each equipped with multiple antennas, and each of the two
source nodes in one group exchanges independent messages with the two source
nodes in the other group via the relay node. It is assumed that every source
node is equipped with M antennas while the relay is equipped with N antennas.
We first show that the upper bound on the total DOF for this network is
2min{2M,N} and then focus on the case of N \leq 2M so that the DOF is upper
bounded by the number of antennas at the relay. By applying signal alignment
for network coding and joint transceiver design for interference cancellation,
we show that this upper bound can be achieved when N \leq 8M/5. We also show
that with signal alignment only but no joint transceiver design, the upper
bound is achievable when N \leq 4M/3. Simulation results are provided to
corroborate the theoretical results and to demonstrate the performance of the
proposed scheme in the finite signal-to-noise ratio regime.
|
1208.4079
|
Recent Technological Advances in Natural Language Processing and
Artificial Intelligence
|
cs.CL
|
Recent advances in computer technology have permitted scientists to implement
and test algorithms that were known for quite some time (or not) but were
computationally expensive. Two such projects are IBM's Jeopardy system, part
of its DeepQA project [1], and Wolfram's WolframAlpha [2]. Both implement
natural language processing (another goal of AI scientists) and try to answer
questions as asked by the user. Though the goals of the two projects are
similar, each has a different procedure at its core. In the following
sections, the mechanism and history of IBM's Jeopardy and WolframAlpha are
explained, followed by the implications of these projects for realizing Ray
Kurzweil's [3] dream of passing the Turing test by 2029. A recipe for taking
the above projects to a new level is also explained.
|
1208.4080
|
A Simple Proof of Threshold Saturation for Coupled Vector Recursions
|
cs.IT math.IT
|
Convolutional low-density parity-check (LDPC) codes (or spatially-coupled
codes) have now been shown to achieve capacity on binary-input memoryless
symmetric channels. The principle behind this surprising result is the
threshold-saturation phenomenon, which is defined by the belief-propagation
threshold of the spatially-coupled ensemble saturating to a fundamental
threshold defined by the uncoupled system.
Previously, the authors demonstrated that potential functions can be used to
provide a simple proof of threshold saturation for coupled scalar recursions.
In this paper, we present a simple proof of threshold saturation that applies
to a wide class of coupled vector recursions. The conditions of the theorem are
verified for the density-evolution equations of: (i) joint decoding of
irregular LDPC codes for a Slepian-Wolf problem with erasures, (ii) joint
decoding of irregular LDPC codes on an erasure multiple-access channel, and
(iii) general protograph codes on the BEC. This proves threshold saturation for
these systems.
|
1208.4081
|
Anisotropic Norm Bounded Real Lemma for Linear Discrete Time Varying
Systems
|
cs.SY cs.IT math.IT math.OC
|
We consider a finite horizon linear discrete time varying system whose input
is a random noise with an imprecisely known probability law. The statistical
uncertainty is described by a nonnegative parameter a which constrains the
anisotropy of the noise as an entropy theoretic measure of deviation of the
actual noise distribution from Gaussian white noise laws with scalar covariance
matrices. The worst-case disturbance attenuation capabilities of the system
with respect to the statistically uncertain random inputs are quantified by the
a-anisotropic norm which is an appropriately constrained operator norm of the
system. We establish an anisotropic norm bounded real lemma which provides a
state-space criterion for the a-anisotropic norm of the system not to exceed a
given threshold. The criterion is organized as an inequality on the
determinants of matrices associated with a difference Riccati equation and
extends the Bounded Real Lemma of the H-infinity-control theory. We also
provide a necessary background on the anisotropy-based robust performance
analysis.
|
1208.4138
|
Semi-supervised Clustering Ensemble by Voting
|
cs.LG stat.ML
|
Clustering ensemble is one of the most recent advances in unsupervised
learning. It aims to combine the clustering results obtained using different
algorithms, or from different runs of the same clustering algorithm on the
same data set; this is accomplished using a consensus function, and the
efficiency and accuracy of this method have been proven in many works in the
literature. In the first part of this paper we compare current approaches to
clustering ensemble in the literature. All of these approaches consist of two
main steps: ensemble generation and the consensus function. In the second part
of the paper, we suggest engaging supervision in the clustering ensemble
procedure to further enhance the clustering results. Supervision can be
applied in two places: either by using semi-supervised algorithms in the
ensemble generation step or in the form of feedback used by the consensus
function stage. We also introduce a flexible two-parameter weighting
mechanism: the first parameter describes the compatibility between the
datasets under study and the semi-supervised clustering algorithms used to
generate the base partitions, and the second parameter provides the user's
feedback on these partitions. The two parameters are engaged in a "relabeling
and voting" based consensus function to produce the final clustering.
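A "relabeling and voting" consensus function can be sketched as follows (a simplified greedy relabeling on toy partitions; the paper's version additionally engages the two weighting parameters for algorithm compatibility and user feedback):

```python
from collections import Counter

# Greedy relabeling: map each label of a base partition to the reference
# label it overlaps most, so that votes become comparable across partitions.
def relabel(reference, partition):
    overlap = Counter(zip(partition, reference))
    mapping = {}
    for (p_lab, r_lab), _ in overlap.most_common():
        if p_lab not in mapping:
            mapping[p_lab] = r_lab
    return [mapping[p] for p in partition]

def vote_consensus(partitions):
    reference = partitions[0]
    aligned = [reference] + [relabel(reference, p) for p in partitions[1:]]
    # Majority vote per data point across the aligned base partitions.
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*aligned)]

partitions = [
    [0, 0, 1, 1, 1],
    [1, 1, 0, 0, 0],   # same clustering, labels swapped
    [0, 1, 1, 1, 1],   # one disagreement, on point 1
]
consensus = vote_consensus(partitions)
```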
|
1208.4145
|
Injecting Uncertainty in Graphs for Identity Obfuscation
|
cs.DB
|
Data collected nowadays by social-networking applications create fascinating
opportunities for building novel services, as well as expanding our
understanding about social structures and their dynamics. Unfortunately,
publishing social-network graphs is considered an ill-advised practice due to
privacy concerns. To alleviate this problem, several anonymization methods have
been proposed, aiming at reducing the risk of a privacy breach on the
published data while still allowing the data to be analyzed and relevant
conclusions to be drawn. In
this paper we introduce a new anonymization approach that is based on injecting
uncertainty in social graphs and publishing the resulting uncertain graphs.
While existing approaches obfuscate graph data by adding or removing edges
entirely, we propose using a finer-grained perturbation that adds or removes
edges partially: this way we can achieve the same desired level of obfuscation
with smaller changes in the data, thus maintaining higher utility. Our
experiments on real-world networks confirm that at the same level of identity
obfuscation our method provides higher usefulness than existing randomized
methods that publish standard graphs.
|
1208.4147
|
Generating ordered list of Recommended Items: a Hybrid Recommender
System of Microblog
|
cs.IR cs.LG cs.SI
|
Precise recommendation of followers helps in improving the user experience
and maintaining the prosperity of Twitter and microblog platforms. In this
paper, we design a hybrid recommender system for microblogs as a solution to
the KDD Cup 2012 Track 1 task, which requires predicting which users a user
might follow in Tencent Microblog. We describe the background of the problem
and present an algorithm consisting of keyword analysis, user taxonomy,
(potential) interest extraction and item recommendation. Experimental results
show the high performance of our algorithm. Some possible improvements are
discussed, which point to further study.
|
1208.4161
|
Robust Distributed Maximum Likelihood Estimation with Dependent
Quantized Data
|
cs.IT math.IT
|
In this paper, we consider distributed maximum likelihood estimation (MLE)
with dependent quantized data under the assumption that the structure of the
joint probability density function (pdf) is known, but it contains unknown
deterministic parameters. The parameters may include different vector
parameters corresponding to marginal pdfs and parameters that describe
dependence of observations across sensors. Since MLE with a single quantizer is
sensitive to the choice of thresholds due to the uncertainty of pdf, we
concentrate on MLE with multiple groups of quantizers (which can be determined
by the use of prior information or some heuristic approaches) to guard
against the risk of a poor/outlier quantizer. The asymptotic efficiency of the
MLE scheme with multiple quantizers is proved under some regularity conditions
and the asymptotic variance is derived to be the inverse of a weighted linear
combination of Fisher information matrices based on multiple different
quantizers which can be used to show the robustness of our approach. As an
illustrative example, we consider an estimation problem with a bivariate
non-Gaussian pdf that has applications in distributed constant false alarm rate
(CFAR) detection systems. Simulations show the robustness of the proposed MLE
scheme especially when the number of quantized measurements is small.
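In symbols, the asymptotic variance statement above reads (with G quantizer groups, weights w_g and per-group Fisher information matrices I_g; this notation is ours, not necessarily the paper's):

```latex
\operatorname{AsyVar}\big(\hat{\theta}\big)
  \;=\; \Big( \sum_{g=1}^{G} w_g \, I_g(\theta) \Big)^{-1}
```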
|
1208.4165
|
The MADlib Analytics Library or MAD Skills, the SQL
|
cs.DB
|
MADlib is a free, open source library of in-database analytic methods. It
provides an evolving suite of SQL-based algorithms for machine learning, data
mining and statistics that run at scale within a database engine, with no need
for data import/export to other tools. The goal is for MADlib to eventually
serve a role for scalable database systems that is similar to the CRAN library
for R: a community repository of statistical methods, this time written with
scale and parallelism in mind. In this paper we introduce the MADlib project,
including the background that led to its beginnings, and the motivation for its
open source nature. We provide an overview of the library's architecture and
design patterns, and provide a description of various statistical methods in
that context. We include performance and speedup results of a core design
pattern from one of those methods over the Greenplum parallel DBMS on a
modest-sized test cluster. We then report on two initial efforts at
incorporating academic research into MADlib, which is one of the project's
goals. MADlib is freely available at http://madlib.net, and the project is open
for contributions of both new methods, and ports to additional database
platforms.
|
1208.4166
|
Can the Elephants Handle the NoSQL Onslaught?
|
cs.DB
|
In this new era of "big data", traditional DBMSs are under attack from two
sides. At one end of the spectrum, the use of document store NoSQL systems
(e.g. MongoDB) threatens to move modern Web 2.0 applications away from
traditional RDBMSs. At the other end of the spectrum, big data DSS analytics
that used to be the domain of parallel RDBMSs is now under attack by another
class of NoSQL data analytics systems, such as Hive on Hadoop. So, are the
traditional RDBMSs, aka "big elephants", doomed as they are challenged from
both ends of this "big data" spectrum? In this paper, we compare one
representative NoSQL system from each end of this spectrum with SQL Server, and
analyze the performance and scalability aspects of each of these approaches
(NoSQL vs. SQL) on two workloads (decision support analysis and interactive
data-serving) that represent the two ends of the application spectrum. We
present insights from this evaluation and speculate on potential trends for the
future.
|
1208.4167
|
Solving Big Data Challenges for Enterprise Application Performance
Management
|
cs.DB
|
As the complexity of enterprise systems increases, the need for monitoring
and analyzing such systems also grows. A number of companies have built
sophisticated monitoring tools that go far beyond simple resource utilization
reports. For example, based on instrumentation and specialized APIs, it is now
possible to monitor single method invocations and trace individual transactions
across geographically distributed systems. This high-level of detail enables
more precise forms of analysis and prediction but comes at the price of high
data rates (i.e., big data). To maximize the benefit of data monitoring, the
data has to be stored for an extended period of time for ulterior analysis.
This new wave of big data analytics imposes new challenges especially for the
application performance monitoring systems. The monitoring data has to be
stored in a system that can sustain the high data rates and at the same time
enable an up-to-date view of the underlying infrastructure. With the advent of
modern key-value stores, a variety of data storage systems have emerged that
are built with a focus on scalability and the high data rates predominant in
this monitoring use case. In this work, we present our experience and a
comprehensive performance evaluation of six modern (open-source) data stores in
the context of application performance monitoring as part of a CA Technologies
initiative. We evaluated these systems with data and workloads that can be
found in application performance monitoring, as well as in on-line advertisement,
power monitoring, and many other use cases. We present our insights not only as
performance results but also as lessons learned and our experience relating to
the setup and configuration complexity of these data stores in an industry
setting.
|
1208.4168
|
M3R: Increased performance for in-memory Hadoop jobs
|
cs.DB
|
Main Memory Map Reduce (M3R) is a new implementation of the Hadoop Map Reduce
(HMR) API targeted at online analytics on high mean-time-to-failure clusters.
It does not support resilience, and supports only those workloads which can fit
into cluster memory. In return, it can run HMR jobs unchanged -- including jobs
produced by compilers for higher-level languages such as Pig, Jaql, and
SystemML and interactive front-ends like IBM BigSheets -- while providing
significantly better performance than the Hadoop engine on several workloads
(e.g. 45x on some input sizes for sparse matrix vector multiply). M3R also
supports extensions to the HMR API which can enable Map Reduce jobs to run
faster on the M3R engine, while not affecting their performance under the
Hadoop engine.
|
1208.4169
|
A Storage Advisor for Hybrid-Store Databases
|
cs.DB
|
With the SAP HANA database, SAP offers a high-performance in-memory
hybrid-store database. Hybrid-store databases---that is, databases supporting
row- and column-oriented data management---are becoming increasingly prominent.
While the columnar management offers high-performance capabilities for
analyzing large quantities of data, the row-oriented store can handle
transactional point queries as well as inserts and updates more efficiently. To
effectively take advantage of both stores at the same time, the novel question
arises of whether to store the given data row- or column-oriented. We tackle
this problem with a storage advisor tool that supports database administrators
in this decision. Our proposed storage advisor recommends the optimal store based
on data and query characteristics; its core is a cost model to estimate and
compare query execution times for the different stores. Besides a per-table
decision, our tool also considers horizontally and vertically partitioning the
data and managing the partitions on different stores. We evaluated the storage
advisor for use in the SAP HANA database; we show the recommendation
quality as well as the benefit of having the data in the optimal store with
respect to increased query performance.
|
1208.4170
|
From Cooperative Scans to Predictive Buffer Management
|
cs.DB
|
In analytical applications, database systems often need to sustain workloads
with multiple concurrent scans hitting the same table. The Cooperative Scans
(CScans) framework, which introduces an Active Buffer Manager (ABM) component
into the database architecture, has been the most effective and elaborate
response to this problem, and was initially developed in the X100 research
prototype. We now report on the experiences of integrating Cooperative
Scans into its industrial-strength successor, the Vectorwise database product.
During this implementation we invented a simpler optimization of concurrent
scan buffer management, called Predictive Buffer Management (PBM). PBM is based
on the observation that in a workload with long-running scans, the buffer
manager has quite a bit of information on the workload in the immediate future,
such that an approximation of the ideal OPT algorithm becomes feasible. In the
evaluation on both synthetic benchmarks as well as a TPC-H throughput run we
compare the benefits of naive buffer management (LRU) versus CScans, PBM and
OPT; showing that PBM achieves benefits close to Cooperative Scans, while
incurring much lower architectural impact.
|
1208.4171
|
The Unified Logging Infrastructure for Data Analytics at Twitter
|
cs.DB
|
In recent years, there has been a substantial amount of work on large-scale
data analytics using Hadoop-based platforms running on large clusters of
commodity machines. A less-explored topic is how those data, dominated by
application logs, are collected and structured to begin with. In this paper, we
present Twitter's production logging infrastructure and its evolution from
application-specific logging to a unified "client events" log format, where
messages are captured in common, well-formatted, flexible Thrift messages.
Since most analytics tasks consider the user session as the basic unit of
analysis, we pre-materialize "session sequences", which are compact summaries
that can answer a large class of common queries quickly. The development of
this infrastructure has streamlined log collection and data analysis, thereby
improving our ability to rapidly experiment and iterate on various aspects of
the service.
|
1208.4172
|
Transaction Log Based Application Error Recovery and Point In-Time Query
|
cs.DB
|
Database backups have traditionally been used as the primary mechanism to
recover from hardware and user errors. High availability solutions maintain
redundant copies of data that can be used to recover from most failures except
user or application errors. Database backups are neither space nor time
efficient for recovering from user errors which typically occur in the recent
past and affect a small portion of the database. Moreover, periodic full backups
impact user workload and increase storage costs. In this paper we present a
scheme that can be used for both user and application error recovery starting
from the current state and rewinding the database back in time using the
transaction log. While we provide a consistent view of the entire database as
of a point in time in the past, the actual prior versions are produced only for
data that is accessed. We make the as-of data accessible to arbitrary
point-in-time queries by integrating with the database snapshot feature in
Microsoft SQL
Server.
|
1208.4173
|
The Vertica Analytic Database: C-Store 7 Years Later
|
cs.DB
|
This paper describes the system architecture of the Vertica Analytic Database
(Vertica), a commercialization of the design of the C-Store research prototype.
Vertica demonstrates a modern commercial RDBMS system that presents a classical
relational interface while at the same time achieving the high performance
expected from modern "web scale" analytic systems by making appropriate
architectural choices. Vertica is also an instructive lesson in how academic
systems research can be directly commercialized into a successful product.
|
1208.4174
|
Interactive Analytical Processing in Big Data Systems: A Cross-Industry
Study of MapReduce Workloads
|
cs.DB
|
Within the past few years, organizations in diverse industries have adopted
MapReduce-based systems for large-scale data processing. Along with these new
users, important new workloads have emerged which feature many small, short,
and increasingly interactive jobs in addition to the large, long-running batch
jobs for which MapReduce was originally designed. As interactive, large-scale
query processing is a strength of the RDBMS community, it is important that
lessons from that field be carried over and applied where possible in this new
domain. However, these new workloads have not yet been described in the
literature. We fill this gap with an empirical analysis of MapReduce traces
from six separate business-critical deployments inside Facebook and at Cloudera
customers in e-commerce, telecommunications, media, and retail. Our key
contribution is a characterization of new MapReduce workloads which are driven
in part by interactive analysis, and which make heavy use of query-like
programming frameworks on top of MapReduce. These workloads display diverse
behaviors which invalidate prior assumptions about MapReduce such as uniform
data access, regular diurnal patterns, and prevalence of large jobs. A
secondary contribution is a first step towards creating a TPC-like data
processing benchmark for MapReduce.
|
1208.4175
|
Muppet: MapReduce-Style Processing of Fast Data
|
cs.DB
|
MapReduce has emerged as a popular method to process big data. In the past
few years, however, not just big data, but fast data has also exploded in
volume and availability. Examples of such data include sensor data streams, the
Twitter Firehose, and Facebook updates. Numerous applications must process fast
data. Can we provide a MapReduce-style framework so that developers can quickly
write such applications and execute them over a cluster of machines, to achieve
low latency and high scalability? In this paper we report on our investigation
of this question, as carried out at Kosmix and WalmartLabs. We describe
MapUpdate, a framework like MapReduce, but specifically developed for fast
data. We describe Muppet, our implementation of MapUpdate. Throughout the
description we highlight the key challenges, argue why MapReduce is not well
suited to address them, and briefly describe our current solutions. Finally, we
describe our experience and lessons learned with Muppet, which has been used
extensively at Kosmix and WalmartLabs to power a broad range of applications in
social media and e-commerce.
|
1208.4176
|
Building User-defined Runtime Adaptation Routines for Stream Processing
Applications
|
cs.DB
|
Stream processing applications are deployed as continuous queries that run
from the time of their submission until their cancellation. This deployment
mode limits developers who need their applications to perform runtime
adaptation, such as algorithmic adjustments, incremental job deployment, and
application-specific failure recovery. Currently, developers do runtime
adaptation by using external scripts and/or by inserting operators into the
stream processing graph that are unrelated to the data processing logic. In
this paper, we describe a component called orchestrator that allows users to
write routines for automatically adapting the application to runtime
conditions. Developers build an orchestrator by registering and handling events
as well as specifying actuations. Events can be generated due to changes in the
system state (e.g., application component failures), built-in system metrics
(e.g., throughput of a connection), or custom application metrics (e.g.,
quality score). Once the orchestrator receives an event, users can take
adaptation actions by using the orchestrator actuation APIs. We demonstrate the
use of the orchestrator in IBM's System S in the context of three different
applications, illustrating application adaptation to changes in the incoming
data distribution, to application failures, and to on-demand dynamic composition.
|
1208.4178
|
MOIST: A Scalable and Parallel Moving Object Indexer with School
Tracking
|
cs.DB
|
Location-Based Service (LBS) is rapidly becoming the next ubiquitous
technology for a wide range of mobile applications. To support applications
that demand nearest-neighbor and history queries, an LBS spatial indexer must
be able to efficiently update, query, archive and mine location records, which
can be in contention with each other. In this work, we propose MOIST, whose
baseline is a recursive spatial partitioning indexer built upon BigTable. To
reduce update and query contention, MOIST groups nearby objects of similar
trajectory into the same school, and keeps track of only the history of school
leaders. This dynamic clustering scheme can eliminate redundant updates and
hence reduce update latency. To improve history query processing, MOIST keeps
some history data in memory, while it flushes aged data onto parallel disks in
a locality-preserving way. Through experimental studies, we show that MOIST can
support highly efficient nearest-neighbor and history queries and can scale
well with an increasing number of users and update frequency.
|
1208.4179
|
Serializable Snapshot Isolation in PostgreSQL
|
cs.DB
|
This paper describes our experience implementing PostgreSQL's new
serializable isolation level. It is based on the recently-developed
Serializable Snapshot Isolation (SSI) technique. This is the first
implementation of SSI in a production database release as well as the first in
a database that did not previously have a lock-based serializable isolation
level. We reflect on our experience and describe how we overcame some of the
resulting challenges, including the implementation of a new lock manager, a
technique for ensuring memory usage is bounded, and integration with other
PostgreSQL features. We also introduce an extension to SSI that improves
performance for read-only transactions. We evaluate PostgreSQL's serializable
isolation level using several benchmarks and show that it achieves performance
only slightly below that of snapshot isolation, and significantly outperforms
the traditional two-phase locking approach on read-intensive workloads.
|
1208.4188
|
Network information theory for classical-quantum channels
|
quant-ph cs.IT math.IT
|
Network information theory is the study of communication problems involving
multiple senders, multiple receivers and intermediate relay stations. The
purpose of this thesis is to extend the main ideas of classical network
information theory to the study of classical-quantum channels. We prove coding
theorems for quantum multiple access channels, quantum interference channels,
quantum broadcast channels and quantum relay channels.
A quantum model for a communication channel describes more accurately the
channel's ability to transmit information. By using physically faithful models
for the channel outputs and the detection procedure, we obtain better
communication rates than would be possible using a classical strategy. In this
thesis, we are interested in the transmission of classical information, so we
restrict our attention to the study of classical-quantum channels. These are
channels with classical inputs and quantum outputs, and so the coding theorems
we present will use classical encoding and quantum decoding. We study the
asymptotic regime where many copies of the channel are used in parallel, and
the uses are assumed to be independent. In this context, we can exploit
information-theoretic techniques to calculate the maximum rates for error-free
communication for any channel, given the statistics of the noise on that
channel. These theoretical bounds can be used as a benchmark to evaluate the
rates achieved by practical communication protocols.
Most of the results in this thesis consider classical-quantum channels with
finite dimensional output systems, which are analogous to classical discrete
memoryless channels. In the last chapter, we will show some applications of our
results to a practical optical communication scenario, in which the information
is encoded in continuous quantum degrees of freedom, which are analogous to
classical channels with Gaussian noise.
|
1208.4208
|
Reciprocity of weighted networks
|
physics.data-an cs.SI physics.soc-ph
|
All types of networks arise as intricate combinations of dyadic building
blocks formed by pairs of vertices. In directed networks, the dyadic patterns
are entirely determined by reciprocity, i.e. the tendency to form, or to avoid,
mutual links. Reciprocity has dramatic effects on every network's dynamical
processes and the emergence of structures like motifs and communities. Binary
reciprocity has been extensively studied; that of weighted networks is still
poorly understood. We introduce a general approach to it, by defining
quantities capturing the observed patterns (from dyad-specific to
vertex-specific and network-wide) and introducing analytically solved models
(Exponential Random Graph-type). Counter-intuitively, the previous reciprocity
measures based on the similarity of the mutual links' weights are uninformative.
By contrast, our measures can classify different weighted networks, track the
temporal evolution of a network's reciprocity, and identify patterns. We show that
in some networks the local reciprocity structure can be inferred from the
global one.
|
1208.4269
|
Spreaders in the Network SIR Model: An Empirical Study
|
cs.SI physics.soc-ph
|
We use the susceptible-infected-recovered (SIR) model for disease spread over
a network, and empirically study how well various centrality measures perform
at identifying which nodes in a network will be the best spreaders of disease
on 10 real-world networks. We find that the relative performance of degree,
shell number and other centrality measures can be sensitive to B, the
probability that an infected node will transmit the disease to a susceptible
node. We also find that eigenvector centrality performs very well in general
for values of B above the epidemic threshold.
|
1208.4270
|
ODYS: A Massively-Parallel Search Engine Using a DB-IR
Tightly-Integrated Parallel DBMS
|
cs.DB
|
Recently, parallel search engines have been implemented based on scalable
distributed file systems such as Google File System. However, we claim that
building a massively-parallel search engine using a parallel DBMS can be an
attractive alternative since it supports a higher-level (i.e., SQL-level)
interface than that of a distributed file system for easy and less error-prone
application development while providing scalability. In this paper, we propose
a new approach of building a massively-parallel search engine using a DB-IR
tightly-integrated parallel DBMS and demonstrate its commercial-level
scalability and performance. In addition, we present a hybrid (i.e., analytic
and experimental) performance model for the parallel search engine. We have
built a five-node parallel search engine according to the proposed architecture
using a DB-IR tightly-integrated DBMS. Through extensive experiments, we show
the correctness of the model by comparing the projected output with the
experimental results of the five-node engine. Our model demonstrates that ODYS
is capable of handling 1 billion queries per day (81 queries/sec) for 30
billion web pages by using only 43,472 nodes with an average query response
time of 211 ms, which is equivalent to or better than those of commercial
search engines. We also show that, by using twice as many (86,944) nodes, ODYS
can provide an average query response time of 162 ms, which is significantly
lower than those of commercial search engines.
|
1208.4289
|
A Quantitative Study of Social Organisation in Open Source Software
Communities
|
cs.SE cs.SI nlin.AO physics.soc-ph
|
The success of open source projects crucially depends on the voluntary
contributions of a sufficiently large community of users. Apart from the mere
size of the community, interesting questions arise when looking at the
evolution of structural features of collaborations between community members.
In this article, we discuss several network analytic proxies that can be used
to quantify different aspects of the social organisation in social
collaboration networks. We particularly focus on measures that can be related
to the cohesiveness of the communities, the distribution of responsibilities
and the resilience against turnover of community members. We present a
comparative analysis on a large-scale dataset that covers the full history of
collaborations between users of 14 major open source software communities. Our
analysis covers both aggregate and time-evolving measures and highlights
differences in the social organisation across communities. We argue that our
results are a promising step towards the definition of suitable, potentially
multi-dimensional, resilience and risk indicators for open source software
communities.
|
1208.4290
|
A Learning Theoretic Approach to Energy Harvesting Communication System
Optimization
|
cs.LG cs.NI
|
A point-to-point wireless communication system in which the transmitter is
equipped with an energy harvesting device and a rechargeable battery, is
studied. Both the energy and the data arrivals at the transmitter are modeled
as Markov processes. Delay-limited communication is considered assuming that
the underlying channel is block fading with memory, and the instantaneous
channel state information is available at both the transmitter and the
receiver. The expected total transmitted data during the transmitter's
activation time is maximized under three different sets of assumptions
regarding the information available at the transmitter about the underlying
stochastic processes. A learning theoretic approach is introduced, which does
not assume any a priori information on the Markov processes governing the
communication system. In addition, online and offline optimization problems are
studied for the same setting. Full statistical knowledge and causal information
on the realizations of the underlying stochastic processes are assumed in the
online optimization problem, while the offline optimization problem assumes
non-causal knowledge of the realizations in advance. Comparing the optimal
solutions in all three frameworks, the performance loss due to the lack of the
transmitter's information regarding the behaviors of the underlying Markov
processes is quantified.
|