| id | title | categories | abstract |
|---|---|---|---|
1402.6763 | Linear Programming for Large-Scale Markov Decision Problems | math.OC cs.AI cs.NA | We consider the problem of controlling a Markov decision process (MDP) with a
large state space, so as to minimize average cost. Since it is intractable to
compete with the optimal policy for large scale problems, we pursue the more
modest goal of competing with a low-dimensional family of policies. We use the
dual linear programming formulation of the MDP average cost problem, in which
the variable is a stationary distribution over state-action pairs, and we
consider a neighborhood of a low-dimensional subset of the set of stationary
distributions (defined in terms of state-action features) as the comparison
class. We propose two techniques, one based on stochastic convex optimization,
and one based on constraint sampling. In both cases, we give bounds that show
that the performance of our algorithms approaches the best achievable by any
policy in the comparison class. Most importantly, these results depend on the
size of the comparison class, but not on the size of the state space.
Preliminary experiments show the effectiveness of the proposed algorithms in a
queuing application.
|
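The dual-LP feasibility conditions described in abstract 1402.6763 above can be made concrete on a toy example. The sketch below (pure Python; the 2-state, 2-action MDP and all its numbers are invented, and this is not the paper's algorithm) checks that a state-action stationary distribution satisfies the dual LP's normalization and flow-conservation constraints, and evaluates its average cost:

```python
# Hypothetical 2-state, 2-action MDP (all numbers invented for illustration).
# P[s][a][s2] = transition probability; c[s][a] = per-step cost.
P = {0: {0: [0.9, 0.1], 1: [0.2, 0.8]},
     1: {0: [0.5, 0.5], 1: [0.1, 0.9]}}
c = {0: {0: 1.0, 1: 4.0}, 1: {0: 2.0, 1: 0.5}}

def is_feasible(mu, tol=1e-9):
    """Dual-LP constraints: mu is a probability distribution over
    state-action pairs that is stationary under the transition kernel."""
    if abs(sum(mu.values()) - 1.0) > tol or min(mu.values()) < -tol:
        return False
    for s2 in (0, 1):
        outflow = sum(mu[(s2, a)] for a in (0, 1))
        inflow = sum(mu[(s, a)] * P[s][a][s2]
                     for s in (0, 1) for a in (0, 1))
        if abs(outflow - inflow) > tol:
            return False
    return True

def average_cost(mu):
    """Dual-LP objective: expected per-step cost under mu."""
    return sum(m * c[s][a] for (s, a), m in mu.items())

# Stationary distribution of the deterministic policy "action 0 in state 0,
# action 1 in state 1" (worked out by hand for this toy chain).
mu = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}
```

Minimizing `average_cost` over all feasible `mu` is a linear program whose size grows with the state space; the paper's point is to restrict `mu` to a low-dimensional, feature-defined comparison class so that the bounds are independent of the number of states.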
1402.6764 | A method to identify potential ambiguous Malay words through Ambiguity
Attributes mapping: An exploratory Study | cs.SE cs.CL | We describe a methodology for identifying ambiguous Malay words that are
commonly used in Malay documentation such as requirement specifications. We
compiled relevant requirement-quality attributes and sentence rules from the
literature and adapted them into a set of ambiguity attributes suited to Malay
words. The extracted potentially ambiguous Malay words were then mapped onto
the constructed ambiguity attributes to confirm their vagueness, and the
resulting list was verified by Malay linguistics experts. This paper aims to
identify potentially ambiguous Malay words so as to help writers avoid vague
wording when documenting Malay requirement specifications and other related
Malay documentation. The result of this study is a list of 120 potentially
ambiguous Malay words that can serve as a guideline for writing Malay
sentences.
|
1402.6771 | On Linear Codes over $\mathbb{Z}_4+v\mathbb{Z}_4$ | cs.IT math.IT | Linear codes are considered over the ring $\mathbb{Z}_4+v\mathbb{Z}_4$, where
$v^2=v$. The Gray weight and Gray maps for linear codes are defined, and the
MacWilliams identity for the Gray weight enumerator is given. Self-dual codes, the construction
of Euclidean isodual codes, unimodular complex lattices, MDS codes and MGDS
codes over $\mathbb{Z}_4+v\mathbb{Z}_4$ are studied. Cyclic codes and quadratic
residue codes are also considered. Finally, some examples for illustrating the
main work are given.
|
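As a quick illustration of the underlying ring in abstract 1402.6771 (the ring itself, not the codes), here is a minimal sketch of arithmetic in $\mathbb{Z}_4+v\mathbb{Z}_4$ with $v^2=v$, representing $a+vb$ as the pair $(a,b)$:

```python
# Element a + v*b of Z_4 + v Z_4 stored as the pair (a, b); arithmetic mod 4.

def add(x, y):
    (a, b), (c, d) = x, y
    return ((a + c) % 4, (b + d) % 4)

def mul(x, y):
    # (a + vb)(c + vd) = ac + v(ad + bc + bd), since v^2 = v.
    (a, b), (c, d) = x, y
    return ((a * c) % 4, (a * d + b * c + b * d) % 4)

ONE, V = (1, 0), (0, 1)
W = add(ONE, (0, 3))   # the element 1 - v, i.e. 1 + 3v (since -1 = 3 mod 4)
```

Both $v$ and $1-v$ are idempotent and their product is zero, so the ring decomposes as a product of two copies of $\mathbb{Z}_4$; rings of this shape are typically studied through that decomposition.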
1402.6775 | Analysis of Barcode sequence features to find anomalies due to
amplification Bias | cs.CE q-bio.QM | In this paper we investigate whether barcode sequence features can predict
the read-count ambiguities that arise during PCR-based next-generation
sequencing. We use mutual-information-based motif discovery and Lasso
regression on features generated from the barcode sequence. The results
indicate a certain degree of correlation between motifs discovered in the
sequences and the read counts. Our main contribution is a thorough
investigation of the barcode features, which yields useful information about
the significance of the sequence features, and of the sequences containing the
discovered motifs, for predicting read counts.
|
1402.6779 | Resourceful Contextual Bandits | cs.LG cs.DS cs.GT | We study contextual bandits with ancillary constraints on resources, which
are common in real-world applications such as choosing ads or dynamic pricing
of items. We design the first algorithm for solving these problems that handles
constrained resources other than time, and improves over a trivial reduction to
the non-contextual case. We consider very general settings for both contextual
bandits (arbitrary policy sets, e.g. Dudik et al. (UAI'11)) and bandits with
resource constraints (bandits with knapsacks, Badanidiyuru et al. (FOCS'13)),
and prove a regret guarantee with near-optimal statistical properties.
|
1402.6785 | Synthesis of Parametric Programs using Genetic Programming and Model
Checking | cs.SE cs.AI cs.NE | Formal methods apply algorithms based on mathematical principles to enhance
the reliability of systems. It would only be natural to try to progress from
verification, model checking or testing a system against its formal
specification into constructing it automatically. Classical algorithmic
synthesis theory provides interesting algorithms, but also alarmingly high
complexity and undecidability results. The use of genetic programming, in
combination with model checking and testing, provides a powerful heuristic to
synthesize programs. The method is not completely automatic: it is fine-tuned
by a user who sets up the specification and parameters, and it is not
guaranteed to succeed and converge to a solution that satisfies all
the required properties. However, we have applied it successfully to quite
nontrivial examples and managed to find solutions to hard programming
challenges, as well as to improve and to correct code. We describe here several
versions of our method for synthesizing sequential and concurrent systems.
|
1402.6787 | Learning multifractal structure in large networks | cs.SI | Generating random graphs to model networks has a rich history. In this paper,
we analyze and improve upon the multifractal network generator (MFNG)
introduced by Palla et al. We provide a new result on the probability of
subgraphs existing in graphs generated with MFNG. From this result it follows
that we can quickly compute moments of an important set of graph properties,
such as the expected number of edges, stars, and cliques. Specifically, we show
how to compute these moments in time complexity independent of the size of the
graph and the number of recursive levels in the generative model. We leverage
this theory in a new method-of-moments algorithm for fitting large networks to
MFNG. Empirically, this new approach effectively simulates the properties of
several social and information networks. In terms of matching subgraph counts,
our method outperforms similar algorithms used with the Stochastic Kronecker
Graph model. Furthermore, we present a fast approximation algorithm to generate
graph instances following the multifractal structure. The approximation
scheme is an improvement over previous methods, which ran in time complexity
quadratic in the number of vertices. Combined, our method of moments and fast
sampling scheme provide the first scalable framework for effectively modeling
large networks with MFNG.
|
1402.6792 | Information Evolution in Social Networks | cs.SI cs.CL physics.soc-ph | Social networks readily transmit information, albeit with less than perfect
fidelity. We present a large-scale measurement of this imperfect information
copying mechanism by examining the dissemination and evolution of thousands of
memes, collectively replicated hundreds of millions of times in the online
social network Facebook. The information undergoes an evolutionary process that
exhibits several regularities. A meme's mutation rate characterizes the
population distribution of its variants, in accordance with the Yule process.
Variants further apart in the diffusion cascade have greater edit distance, as
would be expected in an iterative, imperfect replication process. Some text
sequences can confer a replicative advantage; these sequences are abundant and
transfer "laterally" between different memes. Subpopulations of the social
network can preferentially transmit a specific variant of a meme if the variant
matches their beliefs or culture. Understanding the mechanism driving change in
diffusing information has important implications for how we interpret and
harness the information that reaches us through our social networks.
|
1402.6794 | Trellis-Extended Codebooks and Successive Phase Adjustment: A Path from
LTE-Advanced to FDD Massive MIMO Systems | cs.IT math.IT | It is of great interest to develop efficient ways to acquire accurate channel
state information (CSI) for frequency division duplexing (FDD) massive
multiple-input multiple-output (MIMO) systems for backward compatibility. It is
theoretically well known that the codebook size for CSI quantization should be
increased as the number of transmit antennas becomes larger, and 3GPP long term
evolution (LTE) and LTE-Advanced codebooks also follow this trend. Thus, in
massive MIMO, it is hard to apply the conventional approach of using
pre-defined vector-quantized codebooks for CSI quantization mainly because of
codeword search complexity. In this paper, we propose a trellis-extended
codebook (TEC) that can be easily harmonized with current wireless standards
such as LTE or LTE-Advanced by extending standardized codebooks designed for 2,
4, or 8 antennas with trellis structures. TEC exploits a Viterbi decoder and
convolutional encoder in channel coding as the CSI quantizer and the CSI
reconstructer, respectively. By quantizing multiple channel entries
simultaneously using standardized codebooks in a state transition of trellis
search, TEC can achieve fractional bits per channel entry quantization to have
a practical feedback overhead. Thus, TEC can solve both the complexity and the
feedback overhead issues of CSI quantization in massive MIMO systems. We also
develop trellis-extended successive phase adjustment (TE-SPA) which works as a
differential codebook of TEC. This is similar to the dual codebook concept of
LTE-Advanced. TE-SPA can reduce CSI quantization error even with lower feedback
overhead in temporally correlated channels. Numerical results verify the
effectiveness of the proposed schemes in FDD massive MIMO systems.
|
1402.6809 | Analyzing Cascading Failures in Smart Grids under Random and Targeted
Attacks | cs.SI cs.DM cs.NI math.CO physics.soc-ph | We model smart grids as complex interdependent networks, and study targeted
attacks on smart grids for the first time. A smart grid consists of two
networks: the power network and the communication network, interconnected by
edges. Occurrence of failures (attacks) in one network triggers failures in the
other network, and propagates in cascades across the networks. Such cascading
failures can result in disintegration of either (or both) of the networks.
Earlier works considered only random failures. In practical situations, an
attacker is more likely to compromise nodes selectively.
We study cascading failures in smart grids, where an attacker selectively
compromises the nodes with probabilities proportional to their degrees; high
degree nodes are compromised with higher probability. We mathematically analyze
the sizes of the giant components of the networks under targeted attacks, and
compare the results with the corresponding sizes under random attacks. We show
that networks disintegrate faster for targeted attacks compared to random
attacks. A targeted attack on a small fraction of high degree nodes
disintegrates one or both of the networks, whereas both the networks contain
giant components for random attack on the same fraction of nodes.
|
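The intuition behind abstract 1402.6809, that high-degree nodes hold the network together, can be illustrated on a single network (rather than the interdependent pair studied in the paper) with a deterministic "remove the highest-degree nodes" caricature of the degree-proportional attack. The graph below is an invented double star:

```python
from collections import deque

def giant_component(nodes, edges):
    """Size of the largest connected component (BFS over an adjacency map)."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        if u in adj and v in adj:      # skip edges touching removed nodes
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        seen.add(s)
        size, queue = 0, deque([s])
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

# Double star: hubs 0 and 1 joined by an edge, each with 15 leaves.
nodes = set(range(32))
edges = ([(0, 1)]
         + [(0, i) for i in range(2, 17)]
         + [(1, i) for i in range(17, 32)])

hubs_removed = giant_component(nodes - {0, 1}, edges)     # targeted attack
leaves_removed = giant_component(nodes - {2, 17}, edges)  # same budget, low degree
```

Removing the two hubs shatters the graph into isolated leaves, while removing two leaves barely dents the giant component; the paper's analysis makes this contrast quantitative for interdependent power/communication networks under probabilistic, degree-proportional attacks.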
1402.6859 | Outlier Detection using Improved Genetic K-means | cs.LG cs.DB | The outlier detection problem in some cases is similar to the classification
problem. For example, the main concern of clustering-based outlier detection
algorithms is to find clusters and outliers, which are often regarded as noise
that should be removed in order to make more reliable clustering. In this
article, we present an algorithm that provides outlier detection and data
clustering simultaneously. The algorithm improves the estimation of the
centroids of the generative distribution during the process of clustering and
outlier discovery. The proposed algorithm consists of two stages. The first
stage runs an improved genetic k-means algorithm (IGK), while the second stage
iteratively removes the vectors that are far from their cluster centroids.
|
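The two-stage procedure in abstract 1402.6859 can be sketched minimally as follows (plain Lloyd's k-means stands in for the improved genetic k-means stage, and the data, initial centroids, and distance threshold are all invented):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def kmeans(points, centroids, iters=20):
    """Stage 1 stand-in: Lloyd's k-means from given initial centroids."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))
            groups[j].append(p)
        centroids = [(sum(x for x, _ in g) / len(g),
                      sum(y for _, y in g) / len(g)) if g else c
                     for g, c in zip(groups, centroids)]
    return centroids

def remove_outliers(points, centroids, threshold):
    """Stage 2: iteratively drop vectors far from their nearest centroid,
    re-estimating the centroids after each removal pass."""
    while True:
        keep = [p for p in points
                if min(dist(p, c) for c in centroids) <= threshold]
        if len(keep) == len(points):
            return points, centroids
        points, centroids = keep, kmeans(keep, centroids)

# Two tight clusters plus one gross outlier (synthetic data).
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1),
        (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1),
        (20.0, 20.0)]
cents = kmeans(data, [(0.0, 0.0), (5.0, 5.0)])
clean, cents = remove_outliers(data, cents, threshold=5.0)
```

Note how removing the outlier lets the second centroid snap back to the true cluster mean, which is exactly the "more reliable clustering" effect the abstract describes.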
1402.6862 | A Fast, robust algorithm for power line interference cancellation in
neural recording | cs.SY physics.med-ph | Power line interference may severely corrupt neural recordings at 50/60 Hz
and harmonic frequencies. In this paper, we present a robust and
computationally efficient algorithm for removing power line interference from
neural recordings. The algorithm includes four steps. First, an adaptive notch
filter is used to estimate the fundamental frequency of the interference.
Subsequently, based on the estimated frequency, harmonics are generated by
using discrete-time oscillators, and then the amplitude and phase of each
harmonic are estimated using a modified recursive least-squares
algorithm. Finally, the estimated interference is subtracted from the recorded
data. The algorithm does not require any reference signal, and can track the
frequency, phase, and amplitude of each harmonic. When benchmarked with other
popular approaches, our algorithm performs better in terms of noise immunity,
convergence speed, and output signal-to-noise ratio (SNR). While minimally
affecting the signal bands of interest, the algorithm consistently yields fast
convergence and substantial interference rejection in different conditions of
interference strengths (input SNR from -30 dB to 30 dB), power line frequencies
(45-65 Hz), and phase and amplitude drifts. In addition, the algorithm features
a straightforward parameter adjustment since the parameters are independent of
the input SNR, input signal power, and the sampling rate. The proposed
algorithm features a highly robust operation, fast adaptation to interference
variations, significant SNR improvement, low computational complexity and
memory requirement, and straightforward parameter adjustment. These features
render the algorithm suitable for wearable and implantable sensor applications,
where reliable and real-time cancellation of the interference is desired.
|
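Steps two to four of the algorithm in abstract 1402.6862 can be sketched in a much-simplified batch form: the mains frequency is assumed known rather than tracked by the adaptive notch filter, and projection onto sin/cos over one second stands in for the recursive least-squares tracker. All signal parameters below are invented:

```python
import math

FS, N = 1000.0, 1000                    # sampling rate (Hz) and one second of samples
t = [n / FS for n in range(N)]

# Synthetic recording: a 7 Hz "neural" component plus 50 Hz mains
# interference and its second harmonic (amplitudes and phases invented).
neural = [math.sin(2 * math.pi * 7 * ti) for ti in t]
recorded = [neural[n]
            + 10.0 * math.sin(2 * math.pi * 50 * t[n] + 0.3)
            + 3.0 * math.cos(2 * math.pi * 100 * t[n] - 1.1)
            for n in range(N)]

def cancel(signal, f0, n_harmonics):
    """Estimate amplitude and phase of each harmonic of f0 by projecting
    the signal onto sin/cos at that frequency, then subtract the estimate."""
    cleaned = list(signal)
    for h in range(1, n_harmonics + 1):
        s = [math.sin(2 * math.pi * h * f0 * ti) for ti in t]
        c = [math.cos(2 * math.pi * h * f0 * ti) for ti in t]
        a = 2.0 / N * sum(xi * si for xi, si in zip(cleaned, s))
        b = 2.0 / N * sum(xi * ci for xi, ci in zip(cleaned, c))
        cleaned = [xi - a * si - b * ci
                   for xi, si, ci in zip(cleaned, s, c)]
    return cleaned

cleaned = cancel(recorded, 50.0, 2)
```

Because the 7 Hz component is orthogonal to the 50 Hz and 100 Hz basis over an integer number of cycles, the subtraction leaves the neural component essentially untouched; the paper's contribution is doing this recursively, tracking a drifting frequency, phase, and amplitude in real time without a reference signal.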
1402.6865 | Applications of Structural Balance in Signed Social Networks | cs.SI physics.soc-ph | We present measures, models and link prediction algorithms based on the
structural balance in signed social networks. Certain social networks contain,
in addition to the usual 'friend' links, 'enemy' links. These networks are
called signed social networks. A classical and major concept for signed social
networks is that of structural balance, i.e., the tendency of triangles to be
'balanced' towards including an even number of negative edges, such as
friend-friend-friend and friend-enemy-enemy triangles. In this article, we
introduce several new signed network analysis methods that exploit structural
balance for measuring partial balance, for finding communities of people based
on balance, for drawing signed social networks, and for solving the problem of
link prediction. Notably, the introduced methods are based on the signed graph
Laplacian and on the concept of signed resistance distances. We evaluate our
methods on a collection of four signed social network datasets.
|
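One classical, easy-to-compute balance measure (far simpler than the signed-Laplacian methods introduced in abstract 1402.6865) is the fraction of balanced triangles, i.e. triangles whose edge signs multiply to +1. A sketch on an invented four-person signed network:

```python
from itertools import combinations

# Toy signed network: +1 = friend, -1 = enemy (edges invented).
sign = {("a", "b"): +1, ("b", "c"): +1, ("a", "c"): +1,
        ("a", "d"): +1, ("b", "d"): -1, ("c", "d"): -1}

def edge_sign(u, v):
    return sign.get((u, v)) or sign.get((v, u))

def triangle_balance(nodes):
    """Count balanced vs unbalanced triangles: a triangle is balanced
    when the product of its edge signs is positive (even number of -1s)."""
    balanced = unbalanced = 0
    for u, v, w in combinations(sorted(nodes), 3):
        signs = [edge_sign(u, v), edge_sign(v, w), edge_sign(u, w)]
        if None in signs:
            continue                       # missing edge: not a triangle
        if signs[0] * signs[1] * signs[2] > 0:
            balanced += 1
        else:
            unbalanced += 1
    return balanced, unbalanced

balanced, unbalanced = triangle_balance({"a", "b", "c", "d"})
```

Here the friend-friend-friend triangle and the friend-enemy-enemy triangle count as balanced, matching the tendency the abstract describes, while the two triangles with a single negative edge are unbalanced.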
1402.6880 | It's distributions all the way down!: Second order changes in
statistical distributions also occur | cs.CL | The textual big-data literature misses Bentley, O'Brien, & Brock's (Bentley
et al.'s) message on distributions: it largely examines the first-order effects
of how a single, signature distribution can predict population behaviour,
neglecting second-order effects involving distributional shifts, either between
signature distributions or within a given signature distribution. Indeed,
Bentley et al. themselves under-emphasise the potential richness of the latter,
within-distribution effects.
|
1402.6882 | An Optimal Decoding Strategy for Physical-layer Network Coding over
Multipath Fading Channels | cs.IT math.IT | We present an optimal decoder for physical-layer network coding (PNC) in
multipath fading channels. Previous studies of PNC have largely focused on the
single path case. For PNC, multipath not only introduces inter-symbol
interference (ISI), but also cross-symbol interference (Cross-SI) between
signals simultaneously transmitted by multiple users. In this paper, we assume
the transmitters do not have channel state information (CSI). The relay in the
PNC system, however, has CSI. The relay makes use of a belief propagation (BP)
algorithm to decode the multipath-distorted signals received from multiple
users into a network-coded packet. We refer to our multipath decoding algorithm
as MP-PNC. Our simulation results show that, benchmarked against synchronous
PNC over a one-path channel, the bit error rate (BER) performance penalty of
MP-PNC under a two-tap ITU channel model can be kept within 0.5 dB. Moreover, it
outperforms a MUD-XOR algorithm by 3 dB -- MUD-XOR decodes the individual
information from both users explicitly before performing the XOR network-coding
mapping. Although the framework of fading-channel PNC presented in this paper
is demonstrated based on two-path and three-path channel models, our algorithm
can be easily extended to cases with more than three paths.
|
1402.6888 | CriPS: Critical Dynamics in Particle Swarm Optimization | cs.NE | Particle Swarm Optimisation (PSO) makes use of a dynamical system for solving
a search task. Instead of adding search biases in order to improve performance
in certain problems, we aim to remove algorithm-induced scales by controlling
the swarm with a mechanism that is scale-free except possibly for a suppression
of scales beyond the system size. In this way, very promising performance is
achieved due to the balance of large-scale exploration and local search. The
resulting algorithm shows evidence for self-organised criticality, brought
about via the intrinsic dynamics of the swarm as it interacts with the
objective function, rather than being explicitly specified. The Critical
Particle Swarm (CriPS) can be easily combined with many existing extensions
such as chaotic exploration, additional force terms or non-trivial topologies.
|
1402.6926 | Sequential Complexity as a Descriptor for Musical Similarity | cs.IR cs.LG cs.SD | We propose string compressibility as a descriptor of temporal structure in
audio, for the purpose of determining musical similarity. Our descriptors are
based on computing track-wise compression rates of quantised audio features,
using multiple temporal resolutions and quantisation granularities. To verify
that our descriptors capture musically relevant information, we incorporate our
descriptors into similarity rating prediction and song year prediction tasks.
We base our evaluation on a dataset of 15500 track excerpts of Western popular
music, for which we obtain 7800 web-sourced pairwise similarity ratings. To
assess the agreement among similarity ratings, we perform an evaluation under
controlled conditions, obtaining a rank correlation of 0.33 between intersected
sets of ratings. Combined with bag-of-features descriptors, we obtain
performance gains of 31.1% and 10.9% for similarity rating prediction and song
year prediction. For both tasks, analysis of selected descriptors reveals that
representing features at multiple time scales benefits prediction accuracy.
|
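The core descriptor of abstract 1402.6926, the track-wise compression rate of a quantised feature sequence, can be sketched with the standard DEFLATE compressor (zlib). The bin counts, hop sizes, and synthetic "tracks" below are invented, and the paper's actual features and compressor may differ:

```python
import random
import zlib

def compression_rate(values, n_bins, hop=1):
    """Quantise a feature sequence into n_bins levels at a temporal hop,
    then return compressed size / uncompressed size under DEFLATE."""
    sub = values[::hop]
    lo, hi = min(sub), max(sub)
    span = (hi - lo) or 1.0
    symbols = bytes(min(n_bins - 1, int((v - lo) / span * n_bins))
                    for v in sub)
    return len(zlib.compress(symbols, 9)) / len(symbols)

# A repetitive "track" compresses far better than an unstructured one.
periodic = [float(i % 8) for i in range(4096)]
random.seed(0)
noisy = [random.random() for _ in range(4096)]

# A small multi-resolution descriptor vector for one track.
descriptor = [compression_rate(periodic, n_bins, hop)
              for n_bins in (4, 16) for hop in (1, 4)]
```

Computing the rate at several (bin count, hop) combinations yields a per-track descriptor vector, mirroring the multiple temporal resolutions and quantisation granularities used in the abstract.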
1402.6932 | Low-Cost Compressive Sensing for Color Video and Depth | cs.CV | A simple and inexpensive (low-power and low-bandwidth) modification is made
to a conventional off-the-shelf color video camera, from which we recover
multiple color frames for each of the original measured frames, and each of
the recovered frames can be focused at a different depth. The recovery of
multiple frames for each measured frame is made possible via high-speed coding,
manifested via translation of a single coded aperture; the inexpensive
translation is constituted by mounting the binary code on a piezoelectric
device. To simultaneously recover depth information, a liquid lens is
modulated at high speed, via a variable voltage. Consequently, during the
aforementioned coding process, the liquid lens allows the camera to sweep the
focus through multiple depths. In addition to designing and implementing the
camera, fast recovery is achieved by an anytime algorithm exploiting the
group-sparsity of wavelet/DCT coefficients.
|
1402.6942 | A Parallel Memetic Algorithm to Solve the Vehicle Routing Problem with
Time Windows | cs.DC cs.NE | This paper presents a parallel memetic algorithm for solving the vehicle
routing problem with time windows (VRPTW). The VRPTW is a well-known NP-hard
discrete optimization problem with two objectives. The main objective is to
minimize the number of vehicles serving customers scattered on the map, and the
second one is to minimize the total distance traveled by the vehicles. Here,
the fleet size is minimized in the first phase of the proposed method using the
parallel heuristic algorithm (PHA), and the traveled distance is minimized in
the second phase by the parallel memetic algorithm (PMA). In both parallel
algorithms, the parallel components co-operate periodically in order to
exchange the best solutions found so far. An extensive experimental study
performed on the Gehring and Homberger's benchmark proves the high convergence
capabilities and robustness of both PHA and PMA. Also, we present the speedup
analysis of the PMA.
|
1402.6964 | Scalable methods for nonnegative matrix factorizations of near-separable
tall-and-skinny matrices | cs.LG cs.DC cs.NA stat.ML | Numerous algorithms are used for nonnegative matrix factorization under the
assumption that the matrix is nearly separable. In this paper, we show how to
make these algorithms efficient for data matrices that have many more rows than
columns, so-called "tall-and-skinny matrices". One key component to these
improved methods is an orthogonal matrix transformation that preserves the
separability of the NMF problem. Our final methods need a single pass over the
data matrix and are suitable for streaming, multi-core, and MapReduce
architectures. We demonstrate the efficacy of these algorithms on
terabyte-sized synthetic matrices and real-world matrices from scientific
computing and bioinformatics.
|
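The key structural fact behind abstract 1402.6964, that an orthogonal transformation preserves the column geometry (and hence the near-separability) of the data, can be checked on a small invented example: the R factor of a thin QR has the same column inner products, and the same convex relations among columns, as the original tall matrix.

```python
def thin_qr_R(A):
    """R factor of a thin QR decomposition via classical Gram-Schmidt;
    A is a tall matrix (rows >> columns) given as a list of rows."""
    m, n = len(A), len(A[0])
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(m)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(m))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(m)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        if R[j][j] > 1e-12:          # dependent column: leave q_j at zero
            for i in range(m):
                Q[i][j] = v[i] / R[j][j]
    return R

def gram(M):
    """M^T M for a matrix given as a list of rows."""
    n = len(M[0])
    return [[sum(row[a] * row[b] for row in M) for b in range(n)]
            for a in range(n)]

# Separable toy data: column 2 is the midpoint of the two "anchor" columns.
cols = [[1.0, 2.0, 0.0, 1.0, 3.0, 1.0],
        [0.0, 1.0, 4.0, 2.0, 1.0, 1.0]]
A = [[cols[0][i], cols[1][i], 0.5 * cols[0][i] + 0.5 * cols[1][i]]
     for i in range(6)]
R = thin_qr_R(A)
```

Because A^T A = R^T R, any separability structure among A's columns carries over to the small n-by-n matrix R, so an anchor-selection algorithm for separable NMF can run on R after a single pass over the tall data, which is the efficiency gain the abstract exploits.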
1402.6978 | Fundamental Limits of Video Coding: A Closed-form Characterization of
Rate Distortion Region from First Principles | cs.IT cs.MM math.IT | Classical motion-compensated video coding methods have been standardized by
MPEG over the years and video codecs have become integral parts of media
entertainment applications. Despite the ubiquitous use of video coding
techniques, it is interesting to note that a closed form rate-distortion
characterization for video coding is not available in the literature. In this
paper, we develop a simple yet fundamental characterization of the
rate-distortion region in video coding based on information-theoretic first
principles. The concept of conditional motion estimation is used to derive the
closed-form expression for the rate-distortion region without loss of generality.
Conditional motion estimation offers an elegant means to analyze the
rate-distortion trade-offs and demonstrates the viability of achieving the
bounds derived. The concept involves classifying image regions into active and
inactive based on the amount of motion activity. By appropriately modeling the
residuals corresponding to active and inactive regions, a closed form
expression for rate-distortion function is derived in terms of motion activity
and spatio-temporal correlation that commonly exist in video content.
Experiments on real video clips using H.264 codec are presented to demonstrate
the practicality and validity of the proposed rate-distortion analysis.
|
1402.7001 | Marginalizing Corrupted Features | cs.LG | The goal of machine learning is to develop predictors that generalize well to
test data. Ideally, this is achieved by training on an almost infinitely large
training data set that captures all variations in the data distribution. In
practical learning settings, however, we do not have infinite data and our
predictors may overfit. Overfitting may be combatted, for example, by adding a
regularizer to the training objective or by defining a prior over the model
parameters and performing Bayesian inference. In this paper, we propose a
third, alternative approach to combat overfitting: we extend the training set
with infinitely many artificial training examples that are obtained by
corrupting the original training data. We show that this approach is practical
and efficient for a range of predictors and corruption models. Our approach,
called marginalized corrupted features (MCF), trains robust predictors by
minimizing the expected value of the loss function under the corruption model.
We show empirically on a variety of data sets that MCF classifiers can be
trained efficiently, may generalize substantially better to test data, and are
also more robust to feature deletion at test time.
|
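For quadratic loss and "blankout" corruption (each feature independently zeroed with probability q), the marginalisation over infinitely many corrupted copies described in abstract 1402.7001 has a closed form. The sketch below (invented weights and data) checks it against brute-force Monte Carlo corruption:

```python
import random

def marginalized_sq_loss(w, x, y, q):
    """E[(y - w . x_corrupt)^2] under blankout corruption: the closed form
    is a biased squared error plus a weight/feature variance penalty."""
    mean = (1 - q) * sum(wi * xi for wi, xi in zip(w, x))
    var = q * (1 - q) * sum((wi * xi) ** 2 for wi, xi in zip(w, x))
    return (y - mean) ** 2 + var

def monte_carlo_sq_loss(w, x, y, q, n=200000, seed=0):
    """Brute force: average the loss over n explicitly corrupted copies."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        pred = sum(wi * xi for wi, xi in zip(w, x) if rng.random() >= q)
        total += (y - pred) ** 2
    return total / n

w, x, y, q = [0.5, -1.0, 2.0], [1.0, 2.0, 0.5], 1.5, 0.3
exact = marginalized_sq_loss(w, x, y, q)
approx = monte_carlo_sq_loss(w, x, y, q)
```

Training a linear model by minimizing this closed-form objective over the training set yields the MCF-regularized predictor without ever materializing a single corrupted example, which is what makes the approach efficient.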
1402.7005 | Bayesian Multi-Scale Optimistic Optimization | stat.ML cs.LG | Bayesian optimization is a powerful global optimization technique for
expensive black-box functions. One of its shortcomings is that it requires
auxiliary optimization of an acquisition function at each iteration. This
auxiliary optimization can be costly and very hard to carry out in practice.
Moreover, it creates serious theoretical concerns, as most of the convergence
results assume that the exact optimum of the acquisition function can be found.
In this paper, we introduce a new technique for efficient global optimization
that combines Gaussian process confidence bounds and treed simultaneous
optimistic optimization to eliminate the need for auxiliary optimization of
acquisition functions. The experiments with global optimization benchmarks and
a novel application to automatic information extraction demonstrate that the
resulting technique is more efficient than the two approaches from which it
draws inspiration. Unlike most theoretical analyses of Bayesian optimization
with Gaussian processes, our finite-time convergence rate proofs do not require
exact optimization of an acquisition function. That is, our approach eliminates
the unsatisfactory assumption that a difficult, potentially NP-hard, problem
has to be solved in order to obtain vanishing regret rates.
|
1402.7011 | Saving Human Lives: What Complexity Science and Information Systems can
Contribute | physics.soc-ph cs.SI nlin.AO | We discuss models and data of crowd disasters, crime, terrorism, war and
disease spreading to show that conventional recipes, such as deterrence
strategies, are often not effective and sufficient to contain them. Many common
approaches do not provide a good picture of the actual system behavior, because
they neglect feedback loops, instabilities and cascade effects. The complex and
often counter-intuitive behavior of social systems and their macro-level
collective dynamics can be better understood by means of complexity science. We
highlight that a suitable system design and management can help to stop
undesirable cascade effects and to enable favorable kinds of self-organization
in the system. In such a way, complexity science can help to save human lives.
|
1402.7015 | Data-driven HRF estimation for encoding and decoding models | cs.CE cs.LG | Despite the common usage of a canonical, data-independent, hemodynamic
response function (HRF), it is known that the shape of the HRF varies across
brain regions and subjects. This suggests that a data-driven estimation of this
function could lead to more statistical power when modeling BOLD fMRI data.
However, unconstrained estimation of the HRF can yield highly unstable results
when the number of free parameters is large. We develop a method for the joint
estimation of activation and HRF using a rank constraint causing the estimated
HRF to be equal across events/conditions, yet permitting it to be different
across voxels. Model estimation leads to an optimization problem that we
propose to solve with an efficient quasi-Newton method exploiting fast gradient
computations. This model, called GLM with Rank-1 constraint (R1-GLM), can be
extended to the setting of GLM with separate designs which has been shown to
improve decoding accuracy in brain activity decoding experiments. We compare 10
different HRF modeling methods in terms of encoding and decoding score in two
different datasets. Our results show that the R1-GLM model significantly
outperforms competing methods in both encoding and decoding settings,
positioning it as an attractive method both from the points of view of accuracy
and computational efficiency.
|
1402.7025 | Exploiting the Statistics of Learning and Inference | cs.LG | When dealing with datasets containing a billion instances or with simulations
that require a supercomputer to execute, computational resources become part of
the equation. We can improve the efficiency of learning and inference by
exploiting their inherent statistical nature. We propose algorithms that
exploit the redundancy of data relative to a model by subsampling data-cases
for every update and reasoning about the uncertainty created in this process.
In the context of learning we propose to test for the probability that a
stochastically estimated gradient points more than 180 degrees in the wrong
direction. In the context of MCMC sampling we use stochastic gradients to
improve the efficiency of MCMC updates, and hypothesis tests based on adaptive
mini-batches to decide whether to accept or reject a proposed parameter update.
Finally, we argue that in the context of likelihood free MCMC one needs to
store all the information revealed by all simulations, for instance in a
Gaussian process. We conclude that Bayesian methods will remain to play a
crucial role in the era of big data and big simulations, but only if we
overcome a number of computational challenges.
|
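The learning-side idea of abstract 1402.7025, testing whether a stochastically estimated gradient's direction can be trusted, can be sketched for a single parameter with a plain t-statistic over per-example gradients (the paper's actual test and models are more general; the data here are synthetic):

```python
import math
import random

def gradient_t_stat(grads):
    """t-statistic of a mini-batch of per-example gradients: a large |t|
    means the averaged gradient's sign is statistically trustworthy, while
    a small |t| means the update direction may well be wrong."""
    n = len(grads)
    mean = sum(grads) / n
    var = sum((g - mean) ** 2 for g in grads) / (n - 1)
    return mean / math.sqrt(var / n)

# Synthetic 1-D least squares y ~ w*x with true w = 2; the per-example
# gradient of (w*x - y)^2 is 2*x*(w*x - y). All numbers invented.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(256)]
data = [(x, 2.0 * x + random.gauss(0, 0.5)) for x in xs]

t_far = gradient_t_stat([2 * x * (0.0 * x - y) for x, y in data])   # at w = 0
t_near = gradient_t_stat([2 * x * (2.0 * x - y) for x, y in data])  # at w = 2
```

Far from the optimum the test is confident about the descent direction, while near the optimum the mini-batch gradient is mostly noise; in the abstract's setting such a test decides whether a subsample is large enough to commit to an update, and the MCMC side applies the same logic to accept/reject decisions.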
1402.7032 | Parameter security characterization of knapsack public-key crypto under
quantum computing | cs.CR cs.IT math.IT | To investigate the security of the knapsack problem against quantum
algorithm attacks, we study a quantum algorithm for the knapsack problem over
Z_r, based on the relation between the dimension n of the knapsack vector and
r. First, an oracle function is designed from the knapsack vectors B and S,
and the quantum algorithm for the knapsack problem over Z_r is presented. The
observation probability of the target state is improved not by designing the
unitary transform but through the oracle function. The algorithm's complexity
is polynomial, and its success probability depends on the relation between n
and r. From this analysis, we derive the essential condition for the knapsack
problem over Z_r to resist the existing quantum algorithm attacks, namely
r < O(2^n). We then analyze the security of the Chor-Rivest public-key
cryptosystem.
|
1402.7035 | 'Beating the news' with EMBERS: Forecasting Civil Unrest using Open
Source Indicators | cs.SI cs.CY physics.soc-ph | We describe the design, implementation, and evaluation of EMBERS, an
automated, 24x7 continuous system for forecasting civil unrest across 10
countries of Latin America using open source indicators such as tweets, news
sources, blogs, economic indicators, and other data sources. Unlike
retrospective studies, EMBERS has been making forecasts into the future since
Nov 2012, which have been (and continue to be) evaluated by an independent T&E
team (MITRE). Of note, EMBERS has successfully forecast the uptick and downtick
of incidents during the June 2013 protests in Brazil. We outline the system
architecture of EMBERS, individual models that leverage specific data sources,
and a fusion and suppression engine that supports trading off specific
evaluation criteria. EMBERS also provides an audit trail interface that enables
the investigation of why specific predictions were made along with the data
utilized for forecasting. Through numerous evaluations, we demonstrate the
superiority of EMBERS over baserate methods and its capability to forecast
significant societal happenings.
|
1402.7050 | Tools for dynamics simulation of robots: a survey based on user feedback | cs.RO | The number of tools for dynamics simulation has grown in recent years. The
robotics community needs elements with which to judge which of the available
tools is best suited to its research. As a complement to an objective and
quantitative comparison, which is difficult to obtain since not all the tools
are open-source, one element of evaluation is user feedback. With this goal in
mind, we created an online survey about the use of dynamical simulation in
robotics. This paper reports the analysis of the participants' answers and a
descriptive information fiche for the most relevant tools. We believe this
report will be helpful for roboticists in choosing the best simulation tool for
their research.
|
1402.7063 | Rapid AkNN Query Processing for Fast Classification of Multidimensional
Data in the Cloud | cs.DB | A $k$-nearest neighbor ($k$NN) query determines the $k$ nearest points, using
distance metrics, from a specific location. An all $k$-nearest neighbor
(A$k$NN) query constitutes a variation of a $k$NN query and retrieves the $k$
nearest points for each point inside a database. They are mainly used in
spatial databases and form the backbone of many location-based applications,
among others (e.g., $k$NN joins in databases, classification in data mining).
It is therefore crucial to develop methods that answer them efficiently. In
this work, we propose a novel method for classifying
multidimensional data using an A$k$NN algorithm in the MapReduce framework. Our
approach exploits space decomposition techniques for processing the
classification procedure in a parallel and distributed manner. To our
knowledge, we are the first to study the classification of multidimensional
objects under this perspective. Through an extensive experimental evaluation we
prove that our solution is efficient and scalable in processing the given
queries. We investigate many different perspectives that can affect the total
computational cost, such as different dataset distributions, number of
dimensions, growth of the $k$ value and granularity of space decomposition, and
prove that our system is efficient, robust and scalable.
|
1402.7122 | Nested Regular Path Queries in Description Logics | cs.LO cs.AI cs.DB | Two-way regular path queries (2RPQs) have received increased attention
recently due to their ability to relate pairs of objects by flexibly navigating
graph-structured data. They are present in property paths in SPARQL 1.1, the
new standard RDF query language, and in the XML query language XPath. In line
with XPath, we consider the extension of 2RPQs with nesting, which allows one
to require that objects along a path satisfy complex conditions, in turn
expressed through (nested) 2RPQs. We study the computational complexity of
answering nested 2RPQs and conjunctions thereof (CN2RPQs) in the presence of
domain knowledge expressed in description logics (DLs). We establish tight
complexity bounds in data and combined complexity for a variety of DLs, ranging
from lightweight DLs (DL-Lite, EL) up to highly expressive ones. Interestingly,
we are able to show that adding nesting to (C)2RPQs does not affect worst-case
data complexity of query answering for any of the considered DLs. However, in
the case of lightweight DLs, adding nesting to 2RPQs leads to a surprising jump
in combined complexity, from P-complete to Exp-complete.
|
1402.7136 | Neural Network Approach to Railway Stand Lateral Skew Control | cs.SY cs.NE | The paper presents a study of an adaptive approach to lateral skew control
for an experimental railway stand. Preliminary experiments with the real
experimental railway stand and simulations with its 3-D mechanical model
indicate the difficulties of model-based control of the device. Thus, the use
of neural networks for identification and control of lateral skew is
investigated. This paper focuses on real-data-based modeling of the railway
stand by various neural network models, i.e., linear neural unit and quadratic
neural unit architectures. Furthermore, training methods for these neural
architectures, such as real-time recurrent learning and a variation of
backpropagation through time, are examined, accompanied by a discussion of the
experimental results produced.
|
1402.7143 | Identifying Users with Opposing Opinions in Twitter Debates | cs.SI cs.CY physics.soc-ph | In recent times, social media sites such as Twitter have been extensively
used for debating politics and public policies. These debates span millions of
tweets and numerous topics of public importance. Thus, it is imperative that
this vast trove of data is tapped in order to gain insights into public opinion
especially on hotly contested issues such as abortion, gun reforms etc. Thus,
in our work, we aim to gauge users' stance on such topics in Twitter. We
propose ReLP, a semi-supervised framework using a retweet-based label
propagation algorithm coupled with a supervised classifier to identify users
with differing opinions. In particular, our framework is designed such that it
can be easily adapted to different domains with little human supervision while
still producing excellent accuracy.
|
1402.7162 | Visual Saliency Model using SIFT and Comparison of Learning Approaches | cs.CV | Humans' ability to detect and locate salient objects on images is remarkably
fast and successful. Performing this process by using eye tracking equipment is
expensive and cannot be easily applied, and computer modeling of this human
behavior is still a problem to be solved. In our study, one of the largest
public eye-tracking databases which has fixation points of 15 observers on 1003
images is used. In addition to low, medium and high-level features which have
been used in previous studies, SIFT features extracted from the images are used
to improve the classification accuracy of the models. A second contribution of
this paper is the comparison and statistical analysis of different machine
learning methods that can be used to train our model. As a result, the best
feature set and learning model for predicting where humans look in images are
determined.
|
1402.7170 | Improving the Finite-Length Performance of Spatially Coupled LDPC Codes
by Connecting Multiple Code Chains | cs.IT math.IT | In this paper, we analyze the finite-length performance of codes on graphs
constructed by connecting spatially coupled low-density parity-check (SC-LDPC)
code chains. Successive (peeling) decoding is considered for the binary erasure
channel (BEC). The evolution of the undecoded portion of the bipartite graph
remaining after each iteration is analyzed as a dynamical system. When
connecting short SC-LDPC chains, we show that, in addition to superior
iterative decoding thresholds, connected chain ensembles have better
finite-length performance than single chain ensembles of the same rate and
length. In addition, we present a novel encoding/transmission scheme to improve
the performance of a system using long SC-LDPC chains, where, instead of
transmitting codewords corresponding to a single SC-LDPC chain independently,
we connect consecutive chains in a multi-layer format to form a connected chain
ensemble. We refer to such a transmission scheme as continuous chain (CC)
transmission of SC-LDPC codes. We show that CC transmission can be implemented
with no significant increase in encoding/decoding complexity or decoding delay
with respect to a system using a single SC-LDPC code chain for encoding.
|
1402.7184 | The Hegselmann-Krause dynamics for continuous agents and a regular
opinion function do not always lead to consensus | math.DS cs.SI cs.SY | We present an example of a regular opinion function which, as it evolves in
accordance with the discrete-time Hegselmann-Krause bounded confidence
dynamics, always retains opinions which are separated by more than two. This
confirms a conjecture of Blondel, Hendrickx and Tsitsiklis.
|
1402.7190 | Two Stage Prediction Process with Gradient Descent Methods Aligning with
the Data Privacy Preservation | cs.DB | Privacy preservation emphasizes the authorization of data, which signifies that
data should be accessed only by authorized users. Ensuring the privacy of data
is considered one of the most challenging tasks in data management. The
generalization of data with varying concept hierarchies seems to be an
interesting solution. This paper proposes a two-stage prediction process on
privacy-preserved data. Privacy is preserved using generalization, and other
communicating parties are betrayed by disguising the generalized data, which
adds another level of privacy. Generalization with betraying is performed in
the first stage to define the knowledge or hypothesis, which is further
optimized using a gradient descent method in the second-stage prediction for
accurate prediction of the data. The experiments were carried out with both
batch and stochastic gradient methods, and it is shown that the bulk operation
performed by the batch method takes more time and more iterations than the
stochastic method to give a more accurate solution.
|
1402.7200 | Mathematical Model of Semantic Look - An Efficient Context Driven Search
Engine | cs.IR | The World Wide Web (WWW) is a huge repository of web pages. Search Engines
are key applications that fetch web pages for the user query. In the current
generation web architecture, search engines treat keywords provided by the user
as isolated keywords without considering the context of the user query. This
results in a lot of unrelated pages or links being displayed to the user.
Semantic Web is based on the current web with a revised framework to display a
more precise result set in response to a user query. The current web pages need
to be annotated by finding relevant meta data to be added to each of them, so
that they become useful to Semantic Web search engines. Semantic Look explores
the context of user query by processing the Semantic information recorded in
the web pages. It is compared with an existing algorithm called OntoLook and it
is shown that Semantic Look is a better optimized search engine by being more
than twice as fast as OntoLook.
|
1402.7216 | Large-Scale Molecular Dynamics Simulations for Highly Parallel
Infrastructures | cs.DC cs.CE physics.comp-ph | Computational chemistry allows researchers to experiment in silico: by
running computer simulations of biological or chemical processes of interest.
Molecular dynamics with a molecular mechanics model of interactions simulates
the N-body problem of atoms: it computes the movements of atoms according to
Newtonian physics and empirical descriptions of atomic electrostatic
interactions. These simulations require high performance computing resources,
as evaluations within each step are computationally demanding and billions of
steps are needed to reach interesting timescales. Current methods decompose the
spatial domain of the problem and calculate on parallel/distributed
infrastructures. Even the methods with the highest strong scaling hit the limit
at half a million cores: they are not able to cut the time to result if
provided with more processors. At the dawn of exascale computing with massively
parallel computational resources, we want to increase the level of parallelism
by incorporating parallel-in-time computation to molecular dynamics
simulations. Calculation of results in several successive time points
simultaneously without a priori knowledge has been examined with no major
success. We will study and implement a novel combination of methods that,
according to our theoretical analyses, should achieve a promising speed-up
compared to sequential-in-time calculation.
|
1402.7223 | SPARQL for Networks of Embedded Systems | cs.DB cs.DC | The Semantic Web (or Web of Data) represents the successful efforts towards
linking and sharing data over the Web. The cornerstones of the Web of Data are
RDF as data format and SPARQL as de-facto standard query language. Recent
trends show the evolution of the Web of Data towards the Web of Things,
integrating embedded devices and smart objects. Data stemming from such devices
do not share a common format, making integration and querying impossible.
To overcome this problem, we present our approach to make embedded systems
first-class citizens of the Web of Things. Our framework abstracts from
individual deployments to represent them as common data sources in line with
the ideas behind the Semantic Web. This includes the execution of arbitrary
SPARQL queries over the data from a pool of embedded devices and/or external
data sources. Handling verbose RDF data and executing SPARQL queries in an
embedded network poses major challenges to minimize the involved processing and
communication cost. We therefore present an in-network query processor aiming
to push processing steps onto devices. We demonstrate the practical application
and the potential benefits of our framework in a comprehensive evaluation using
a real-world deployment and a range of SPARQL queries stemming from a common
use case of the Web of Things.
|
1402.7228 | The Wiselib TupleStore: A Modular RDF Database for the Internet of
Things | cs.DB cs.DC | The Internet of Things movement provides self-configuring and universally
interoperable devices. While such devices are often built with a specific
application in mind, they often turn out to be useful in other contexts as
well. We claim that by describing the devices' knowledge in a universal way,
IoT devices can become first-class citizens in the Internet. They can then
exchange data between heterogeneous hardware, different applications and large
data sources on the Web. Our key idea --- in contrast to most existing
approaches --- is to not restrict the domain of knowledge that can be expressed
on the device in any way and, at the same time, allow this knowledge to be
machine-understandable and linkable across different locations.
We propose an architecture that allows embedded devices to be connected to the
Semantic Web by expressing their knowledge in the Resource Description
Framework (RDF). We present the Wiselib TupleStore, a modular embedded database
tailored specifically for the storage of RDF. The Wiselib TupleStore is
portable to many platforms including Contiki and TinyOS and allows a variety of
trade-offs, making it able to scale to a large variety of hardware scenarios.
We discuss the applicability of RDF to heterogeneous resource-constrained
devices and compare our system to the existing embedded tuple stores Antelope
and TeenyLIME.
|
1402.7247 | Optimal Discrete Power Control in Poisson-Clustered Ad Hoc Networks | cs.IT math.IT | Power control in a digital handset is practically implemented in a discrete
fashion and usually such a discrete power control (DPC) scheme is suboptimal.
In this paper, we first show that in a Poisson-distributed ad hoc network, if
DPC is properly designed with a certain condition satisfied, it can strictly
work better than constant power control (i.e. no power control) in terms of
average signal-to-interference ratio, outage probability and spatial reuse.
This motivates us to propose an $N$-layer DPC scheme in a wireless clustered ad
hoc network, where transmitters and their intended receivers in circular
clusters are characterized by a Poisson cluster process (PCP) on the plane
$\mathbb{R}^2$. The cluster of each transmitter is tessellated into $N$-layer
annuli with transmit power $P_i$ adopted if the intended receiver is located at
the $i$-th layer. Two performance metrics of transmission capacity (TC) and
outage-free spatial reuse factor are redefined based on the $N$-layer DPC. The
outage probability of each layer in a cluster is characterized and used to
derive the optimal power scaling law
$P_i=\Theta\left(\eta_i^{-\frac{\alpha}{2}}\right)$, with $\eta_i$ the
probability of selecting power $P_i$ and $\alpha$ the path loss exponent.
Moreover, the specific design approaches to optimize $P_i$ and $N$ based on
$\eta_i$ are also discussed. Simulation results indicate that the proposed
optimal $N$-layer DPC significantly outperforms other existing power control
schemes in terms of TC and spatial reuse.
|
1402.7258 | An Information Theoretic Characterization of Channel Shortening
Receivers | cs.IT math.IT | Optimal detection of data transmitted over a linear channel can always
be implemented through the Viterbi algorithm (VA). However, in many cases of
interest the memory of the channel prohibits application of the VA. A popular
and conceptually simple method in this case, studied since the early 70s, is to
first filter the received signal in order to shorten the memory of the channel,
and then to apply a VA that operates with the shorter memory. We shall refer to
this as a channel shortening (CS) receiver. Although studied for almost four
decades, an information theoretic understanding of what such a simple receiver
solution is actually doing is not available.
In this paper we will show that an optimized CS receiver is implementing the
chain rule of mutual information, but only up to the shortened memory that the
receiver is operating with. Further, we will show that the tools for analyzing
the ensuing achievable rates from an optimized CS receiver are precisely the
same as those used for analyzing the achievable rates of a minimum mean square
error (MMSE) receiver.
|
1402.7265 | Semantics, Modelling, and the Problem of Representation of Meaning -- a
Brief Survey of Recent Literature | cs.CL | Over the past 50 years many have debated what representation should be used
to capture the meaning of natural language utterances. Recently, new
requirements for such representations have been raised in research. Here I
survey some of the interesting representations suggested to meet these new
needs.
|
1402.7268 | Predicting Scientific Success Based on Coauthorship Networks | physics.soc-ph cs.DL cs.SI | We address the question to what extent the success of scientific articles is
due to social influence. Analyzing a data set of over 100000 publications from
the field of Computer Science, we study how centrality in the coauthorship
network differs between authors who have highly cited papers and those who do
not. We further show that a machine learning classifier, based only on
coauthorship network centrality measures at time of publication, is able to
predict with high precision whether an article will be highly cited five years
after publication. By this we provide quantitative insight into the social
dimension of scientific publishing - challenging the perception of citations as
an objective, socially unbiased measure of scientific success.
|
1402.7276 | Robot Location Estimation in the Situation Calculus | cs.AI cs.LO | Location estimation is a fundamental sensing task in robotic applications,
where the world is uncertain, and sensors and effectors are noisy. Most systems
make various assumptions about the dependencies between state variables, and
especially about how these dependencies change as a result of actions. Building
on a general framework by Bacchus, Halpern and Levesque for reasoning about
degrees of belief in the situation calculus, and a recent extension to it for
continuous domains, in this paper we illustrate location estimation in the
presence of a rich theory of actions using an example. We also show that while
actions might affect prior distributions in nonstandard ways, suitable
posterior beliefs are nonetheless entailed as a side-effect of the overall
specification.
|
1402.7305 | Similarity Decomposition Approach to Oscillatory Synchronization for
Multiple Mechanical Systems With a Virtual Leader | cs.SY math.OC | This paper addresses the oscillatory synchronization problem for multiple
uncertain mechanical systems with a virtual leader, and the interaction
topology among them is assumed to contain a directed spanning tree. We propose
an adaptive control scheme to achieve the goal of oscillatory synchronization.
Using the similarity decomposition approach, we show that the position and
velocity synchronization errors between each mechanical system (or follower)
and the virtual leader converge to zero. The performance of the proposed
adaptive scheme is shown by numerical simulation results.
|
1402.7324 | Geometrical approach to modeling of nonlinear systems from experimental
data | cs.CE | This monograph presents a geometric method for modeling nonlinear dynamical
systems from experimental data. The basis of the method is a qualitative
approach to the analysis of linear models and the construction of the symmetry
groups of attractors of dynamical systems with controls. A theoretical study,
including the center manifold theorem, defines the conditions for the existence
of the class of models in question in a local area, taking into account group
properties, algorithms for estimating invariant characteristics, and methods of
constructing models, together with an identifiable description of the results
obtained using the method for the simulation of driven engineering processes.
Two applications of the proposed approach are included: identification of
symmetry groups on the phase portraits of dynamical systems, and a method of
constructing neural network predictive models.
|
1402.7340 | Hierarchical community structure in complex (social) networks | physics.soc-ph cs.SI | The investigation of community structure in networks is a task of great
importance in many disciplines, namely physics, sociology, biology and computer
science where systems are often represented as graphs. One of the challenges is
to find local communities from a local viewpoint in a graph without global
information in order to reproduce the subjective hierarchical vision for each
vertex. In this paper we present the improvement of an information dynamics
algorithm in which the label propagation of nodes is based on the Markovian
flow of information in the network under cognitive-inspired constraints
\cite{Massaro2012}. In this framework we have introduced two more complex
heuristics that allow the algorithm to detect the multi-resolution hierarchical
community structure of networks from a source vertex or communities, adopting
fixed values of the model's parameters. Experimental results show that the proposed
methods are efficient and well-behaved in both real-world and synthetic
networks.
|
1402.7341 | A Novel approach as Multi-place Watermarking for Security in Database | cs.DB cs.CR cs.MM | Digital multimedia watermarking technology was suggested in the last decade
to embed copyright information in digital objects such as images, audio and
video. However, the increasing use of relational database systems in many
real-life applications has created an ever-increasing need for watermarking
database systems. As a result, watermarking relational database systems is now
emerging as a research area that deals with the legal issue of copyright
protection of database systems. The main goal of database watermarking is to
generate a robust and imperceptible watermark for a database. In this paper we
propose a method based on an image as the watermark; this watermark is embedded
in the database at two different attributes of a tuple: one in a numeric
attribute and another in the seconds field of a date attribute's time. Our
approach can be applied to numerical and categorical databases.
|
1402.7344 | An Incidence Geometry approach to Dictionary Learning | cs.LG stat.ML | We study the Dictionary Learning (aka Sparse Coding) problem of obtaining a
sparse representation of data points, by learning \emph{dictionary vectors}
upon which the data points can be written as sparse linear combinations. We
view this problem from a geometry perspective as the spanning set of a subspace
arrangement, and focus on understanding the case when the underlying hypergraph
of the subspace arrangement is specified. For this Fitted Dictionary Learning
problem, we completely characterize the combinatorics of the associated
subspace arrangements (i.e.\ their underlying hypergraphs). Specifically, a
combinatorial rigidity-type theorem is proven for a type of geometric incidence
system. The theorem characterizes the hypergraphs of subspace arrangements that
generically yield (a) at least one dictionary (b) a locally unique dictionary
(i.e.\ at most a finite number of isolated dictionaries) of the specified size.
We are unaware of prior application of combinatorial rigidity techniques in the
setting of Dictionary Learning, or even in machine learning. We also provide a
systematic classification of problems related to Dictionary Learning together
with various algorithms, their assumptions and performance.
|
1402.7350 | Phase Retrieval with Application to Optical Imaging | cs.IT math.IT | This review article provides a contemporary overview of phase retrieval in
optical imaging, linking the relevant optical physics to the information
processing methods and algorithms. Its purpose is to describe the current state
of the art in this area, identify challenges, and suggest vision and areas
where signal processing methods can have a large impact on optical imaging and
on the world of imaging at large, with applications in a variety of fields
ranging from biology and chemistry to physics and engineering.
|
1402.7351 | A Machine Learning Model for Stock Market Prediction | cs.CE cs.NE | Stock market prediction is the act of trying to determine the future value of
a company stock or other financial instrument traded on a financial exchange.
|
1402.7352 | Second-Order Consensus of Networked Mechanical Systems With
Communication Delays | cs.SY math.OC | In this paper, we consider the second-order consensus problem for networked
mechanical systems subjected to nonuniform communication delays, and the
mechanical systems are assumed to interact on a general directed topology. We
propose an adaptive controller plus a distributed velocity observer to realize
the objective of second-order consensus. It is shown that both the positions
and velocities of the mechanical agents synchronize, and furthermore, the
velocities of the mechanical agents converge to the scaled weighted average
value of their initial ones. We further demonstrate that the proposed
second-order consensus scheme can be used to solve the leader-follower
synchronization problem with a constant-velocity leader and under constant
communication delays. Simulation results are provided to illustrate the
performance of the proposed adaptive controllers.
|
1403.0012 | A Stochastic Geometry Analysis of Inter-cell Interference Coordination
and Intra-cell Diversity | cs.IT math.IT | Inter-cell interference coordination (ICIC) and intra-cell diversity (ICD)
play important roles in improving cellular downlink coverage. Modeling cellular
base stations (BSs) as a homogeneous Poisson point process (PPP), this paper
provides explicit finite-integral expressions for the coverage probability with
ICIC and ICD, taking into account the temporal/spectral correlation of the
signal and interference. In addition, we show that in the high-reliability
regime, where the user outage probability goes to zero, ICIC and ICD affect the
network coverage in drastically different ways: ICD can provide order gain
while ICIC only offers linear gain. In the high-spectral efficiency regime
where the SIR threshold goes to infinity, the order difference in the coverage
probability does not exist, however the linear difference makes ICIC a better
scheme than ICD for realistic path loss exponents. Consequently, depending on
the SIR requirements, different combinations of ICIC and ICD optimize the
coverage probability.
|
1403.0017 | Intensional RDB Manifesto: a Unifying NewSQL Model for Flexible Big Data | cs.DB | In this paper we present a new family of Intensional RDBs (IRDBs) which
extends traditional RDBs with Big Data and flexible 'open schema' features,
able to preserve the user-defined relational database schemas and all
preexisting user applications containing SQL statements for a deployment of
such relational data. The standard RDB data is parsed into an internal
vector key/value relation, so that we obtain a column representation of data
used in Big Data applications, covering the key/value and column-based Big Data
applications as well, into a unifying RDB framework. We define a query
rewriting algorithm, based on the GAV Data Integration methods, so that each
user-defined SQL query is rewritten into a SQL query over this vector relation,
and hence the user-defined standard RDB schema is maintained as an empty global
schema for the RDB schema modeling of data and as the SQL interface to stored
vector relation. Such an IRDB architecture is adequate for the massive
migrations from the existing slow RDBMSs into this new family of fast IRDBMSs
by offering Big Data and new flexible schema features as well.
|
1403.0034 | Tractable Epistemic Reasoning with Functional Fluents, Static Causal
Laws and Postdiction | cs.AI | We present an epistemic action theory for tractable epistemic reasoning as an
extension to the h-approximation (HPX) theory. In contrast to existing
tractable approaches, the theory supports functional fluents and postdictive
reasoning with static causal laws. We argue that this combination is
particularly synergistic because it allows one not only to perform direct
postdiction about the conditions of actions, but also indirect postdiction
about the conditions of static causal laws. We show that despite the richer
expressiveness, the temporal projection problem remains tractable (polynomial),
and therefore the planning problem remains in NP. We present the operational
semantics of our theory as well as its formulation as Answer Set Programming.
|
1403.0036 | Dynamic Decision Process Modeling and Relation-line Handling in
Distributed Cooperative Modeling System | cs.AI | The Distributed Cooperative Modeling System (DCMS) solves complex decision
problems involving a lot of participants with different viewpoints by network
based distributed modeling and multi-template aggregation.
This thesis aims at extending the system with support for a dynamic decision
making process. First, the thesis presents a discussion of the characteristics
and optimal policy finding of the Markov Decision Process (MDP), as well as a
brief introduction to the dynamic Bayesian decision network, which is
inherently equivalent to the MDP. After that, a discussion and implementation
of prediction in the Markov process for both discrete and continuous random
variables are given, as well as several different kinds of correlation analysis
among multiple indices which could help decision-makers realize the interaction
of indices and design appropriate policies.
The appending of historical data of Macau's industry, as the foundation for
extending DCMS, is introduced. Additional work includes a rearrangement of the graphical class
hierarchy in DCMS, which in turn allows convenient implementation of curve
relation-line, which makes template modeling clearer and friendlier.
|
1403.0041 | Individual dynamics induces symmetry in network controllability | math.OC cs.SI physics.soc-ph | Controlling complex networked systems to a desired state is a key research
goal in contemporary science. Despite recent advances in studying the impact of
network topology on controllability, a comprehensive understanding of the
synergistic effect of network topology and individual dynamics on
controllability is still lacking. Here we offer a theoretical study with
particular interest in the diversity of dynamic units characterized by
different types of individual dynamics. Interestingly, we find a global
symmetry accounting for the invariance of controllability with respect to
exchanging the densities of any two different types of dynamic units,
irrespective of the network topology. The highest controllability arises at the
global symmetry point, at which different types of dynamic units are of the
same density. The lowest controllability occurs when all self-loops are either
completely absent or present with identical weights. These findings further
improve our understanding of network controllability and have implications for
devising the optimal control of complex networked systems in a wide range of
fields.
|
1403.0052 | TBX goes TEI -- Implementing a TBX basic extension for the Text Encoding
Initiative guidelines | cs.CL | This paper presents an attempt to customise the TEI (Text Encoding
Initiative) guidelines in order to offer the possibility to incorporate TBX
(TermBase eXchange) based terminological entries within any kind of TEI
documents. After presenting the general historical, conceptual and technical
contexts, we describe the various design choices we had to take while creating
this customisation, which in turn have led to make various changes in the
actual TBX serialisation. Keeping in mind the objective of providing the TEI
guidelines with, again, an onomasiological model, we try to identify the best
compromise in maintaining both isomorphism with the existing TBX Basic standard
and the characteristics of the TEI framework.
|
1403.0054 | Multi-Objective Resource Allocation for Secure Communication in
Cognitive Radio Networks with Wireless Information and Power Transfer | cs.IT math.IT | In this paper, we study resource allocation for multiuser multiple-input
single-output secondary communication systems with multiple system design
objectives. We consider cognitive radio networks where the secondary receivers
are able to harvest energy from the radio frequency when they are idle. The
secondary system provides simultaneous wireless power and secure information
transfer to the secondary receivers. We propose a multi-objective optimization
framework for the design of a Pareto optimal resource allocation algorithm
based on the weighted Tchebycheff approach. In particular, the algorithm design
incorporates three important system objectives: total transmit power
minimization, energy harvesting efficiency maximization, and interference power
leakage-to-transmit power ratio minimization. The proposed framework takes into
account a quality of service requirement regarding communication secrecy in the
secondary system and the imperfection of the channel state information of
potential eavesdroppers (idle secondary receivers and primary receivers) at the
secondary transmitter. The adopted multi-objective optimization problem is
non-convex and is recast as a convex optimization problem via semidefinite
programming (SDP) relaxation. It is shown that the global optimal solution of
the original problem can be constructed by exploiting both the primal and the
dual optimal solutions of the SDP relaxed problem. Besides, two suboptimal
resource allocation schemes for the case when the solution of the dual problem
is unavailable for constructing the optimal solution are proposed. Numerical
results not only demonstrate the close-to-optimal performance of the proposed
suboptimal schemes, but also unveil an interesting trade-off between the
considered conflicting system design objectives.
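The weighted Tchebycheff scalarization underlying the proposed framework can be illustrated on its own. Below is a minimal sketch, not the paper's SDP-based design: two hypothetical scalar stand-ins for conflicting objectives are scalarized as max_i w_i (f_i(x) - z_i*), with z* the utopia point of per-objective minima, and minimized by grid search.

```python
import numpy as np

# Two hypothetical conflicting objectives (stand-ins, e.g. for transmit power
# and interference leakage); each has individual minimum value 0.
f1 = lambda x: x ** 2                # minimized at x = 0
f2 = lambda x: (x - 2.0) ** 2        # minimized at x = 2
z_star = np.array([0.0, 0.0])        # utopia point: per-objective minima

def tchebycheff_point(weights, grid):
    """Minimize the weighted Tchebycheff scalarization over a 1-D grid."""
    vals = np.column_stack([f1(grid), f2(grid)])
    scal = np.max(weights * (vals - z_star), axis=1)
    return grid[np.argmin(scal)]     # a Pareto-optimal point for these weights

grid = np.linspace(-1.0, 3.0, 4001)
x_balanced = tchebycheff_point(np.array([1.0, 1.0]), grid)  # even trade-off
x_skewed = tchebycheff_point(np.array([1.0, 0.1]), grid)    # favor objective 1
```

Sweeping the weight vector traces out different Pareto-optimal points, which is how such a framework exposes the trade-off between the three system objectives above.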
|
1403.0057 | Real-time Topic-aware Influence Maximization Using Preprocessing | cs.SI cs.LG | Influence maximization is the task of finding a set of seed nodes in a social
network such that the influence spread of these seed nodes based on certain
influence diffusion model is maximized. Topic-aware influence diffusion models
have recently been proposed to address the issue that influence between a pair
of users is often topic-dependent, and that the information, ideas,
innovations, etc. being propagated in networks (referred to collectively as
items in this paper) are typically mixtures of topics. In this paper, we focus
on the topic-aware
influence maximization task. In particular, we study preprocessing methods for
these topics to avoid redoing influence maximization for each item from
scratch. We explore two preprocessing algorithms with theoretical
justifications. Our empirical results on data obtained in a couple of existing
studies demonstrate that one of our algorithms stands out as a strong candidate
providing microsecond online response time and competitive influence spread,
with reasonable preprocessing effort.
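As a point of reference for what must be recomputed per item without preprocessing, here is a minimal greedy influence-maximization sketch under the plain (single-topic) Independent Cascade model with Monte Carlo spread estimation; the toy graph, propagation probability, and run count are illustrative, and the paper's topic-aware models and preprocessing algorithms are not reproduced.

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One Monte Carlo run of the Independent Cascade model; returns the spread."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, p=0.5, runs=300, seed=0):
    """Greedy seed selection maximizing the estimated expected spread."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    seeds = []
    for _ in range(k):
        def est(u):
            return sum(simulate_ic(graph, seeds + [u], p, rng)
                       for _ in range(runs)) / runs
        seeds.append(max(sorted(nodes - set(seeds)), key=est))
    return seeds

# Toy network: node 0 is a hub; {5, 6} is a small separate component.
demo_graph = {0: [1, 2, 3, 4], 5: [6]}
chosen = greedy_im(demo_graph, 2)
```

Each seed choice above costs many cascade simulations, which is exactly the per-item work the paper's preprocessing is designed to avoid.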
|
1403.0068 | Semantic Annotation and Search for Educational Resources Supporting
Distance Learning | cs.IR cs.CY cs.DL | Multimedia educational resources play an important role in education,
particularly for distance learning environments. With the rapid growth of the
multimedia web, large numbers of educational article and video resources are
increasingly being created by many different organizations. It is crucial to
explore, share, reuse, and link these educational resources for better
e-learning experiences. Most video resources are currently annotated in an
isolated way, which means that they lack semantic connections. Facilities for
annotating these video resources are thus in high demand. Such facilities
create semantic connections among video
resources and allow their metadata to be understood globally. Adopting Linked
Data technology, this paper introduces a video annotation and browser platform
with two online tools: Notitia and Sansu-Wolke. Notitia enables users to
semantically annotate video resources using vocabularies defined in the Linked
Data cloud. Sansu-Wolke allows users to browse semantically linked educational
video resources with enhanced web information from different online resources.
In the prototype development, the platform uses existing video resources for
educational articles. The results of the initial development demonstrate the
benefits of applying Linked Data technology in terms of reusability,
scalability, and extensibility.
|
1403.0087 | Temporal Image Fusion | cs.CV cs.GR | This paper introduces temporal image fusion. The proposed technique builds
upon previous research in exposure fusion and expands it to deal with the
limited Temporal Dynamic Range of existing sensors and camera technologies. In
particular, temporal image fusion enables the rendering of long-exposure
effects on full frame-rate video, as well as the generation of arbitrarily long
exposures from a sequence of images of the same scene taken over time. We
explore the problem of temporal under-exposure, and show how it can be
addressed by selectively enhancing dynamic structure. Finally, we show that the
use of temporal image fusion together with content-selective image filters can
produce a range of striking visual effects on a given input sequence.
|
1403.0093 | Robust Nonlinear L2 Filtering of Uncertain Lipschitz Systems via Pareto
Optimization | cs.SY math.OC | A new approach for robust H-infinity filtering for a class of Lipschitz nonlinear
systems with time-varying uncertainties both in the linear and nonlinear parts
of the system is proposed in an LMI framework. The admissible Lipschitz
constant of the system and the disturbance attenuation level are maximized
simultaneously through convex multiobjective optimization. The resulting H-infinity
filter guarantees asymptotic stability of the estimation error dynamics with
exponential convergence and is robust against nonlinear additive uncertainty
and time-varying parametric uncertainties. Explicit bounds on the nonlinear
uncertainty are derived based on norm-wise and element-wise robustness
analysis.
|
1403.0135 | A survey on tidal analysis and forecasting methods for Tsunami detection | cs.CE math.OC physics.ao-ph | Accurate analysis and forecasting of tidal level are very important tasks for
human activities in oceanic and coastal areas. They can be crucial in
catastrophic situations, such as the occurrence of tsunamis, in order to
provide rapid alerts to the affected population and to save lives. Conventional
tidal forecasting methods are based on harmonic analysis using the least
squares method to determine harmonic parameters. However, a large number of
parameters and long-term measured data are required for precise tidal level
predictions with harmonic analysis. Furthermore, traditional harmonic methods
rely on models based on the analysis of astronomical components and they can be
inadequate when the contribution of non-astronomical components, such as the
weather, is significant. Other alternative approaches have been developed in
the literature in order to deal with these situations and provide predictions
with the desired accuracy, with respect also to the length of the available
tidal record. These methods include standard high or band pass filtering
techniques, although the relatively deterministic character and large amplitude
of tidal signals make special techniques, like artificial neural networks and
wavelets transform analysis methods, more effective. This paper is intended to
provide the communities of both researchers and practitioners with a broadly
applicable, up to date coverage of tidal analysis and forecasting methodologies
that have proven to be successful in a variety of circumstances, and that hold
particular promise for success in the future. Classical and novel methods are
reviewed in a systematic and consistent way, outlining their main concepts and
components, similarities and differences, advantages and disadvantages.
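The conventional harmonic method described above reduces to an ordinary linear least-squares problem once the constituent frequencies are fixed. A minimal sketch on synthetic data follows; the two periods and amplitudes are illustrative stand-ins (roughly M2- and S2-like), not real constituent values.

```python
import numpy as np

def design_matrix(t, omegas):
    """Columns: constant mean level, then a cos/sin pair per constituent."""
    cols = [np.ones_like(t)]
    for w in omegas:
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

def fit_harmonics(t, levels, omegas):
    """Least-squares estimate of the harmonic parameters (classical approach)."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(t, omegas), levels, rcond=None)
    return coeffs

t = np.linspace(0.0, 30 * 24, 2000)                 # hours over ~30 days
omegas = [2 * np.pi / 12.42, 2 * np.pi / 12.0]      # semidiurnal-like frequencies
rng = np.random.default_rng(0)
true_level = (1.5 + 0.8 * np.cos(omegas[0] * t + 0.3)
              + 0.2 * np.cos(omegas[1] * t - 1.0))
observed = true_level + 0.05 * rng.normal(size=t.size)

coeffs = fit_harmonics(t, observed, omegas)
predicted = design_matrix(t, omegas) @ coeffs
residual_rms = float(np.sqrt(np.mean((observed - predicted) ** 2)))
```

This also exposes the weakness noted above: only the listed astronomical frequencies can be fit, so non-astronomical (e.g. meteorological) contributions end up in the residual.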
|
1403.0153 | Size Adaptive Region Based Huffman Compression Technique | cs.IT math.IT | A lossless compression technique is proposed which uses a variable-length
region formation technique to divide the input file into a number of
variable-length regions. Huffman codes are obtained for the entire file after
the regions are formed, and the symbols of each region are compressed one by
one. Comparisons are made among the proposed technique, the Region Based
Huffman compression technique, and the classical Huffman technique. The
proposed technique offers a better compression ratio for some files than the
other two.
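A minimal sketch of the scheme as described: a single Huffman table is built from the whole file's symbol frequencies, and each variable-length region is then encoded in turn. The region boundaries below are arbitrary placeholders; the paper's size-adaptive region-formation rule is not reproduced.

```python
import heapq
from collections import Counter

def huffman_table(data):
    """Build a prefix-free code from symbol frequencies of the entire input."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol input
        return {next(iter(heap[0][2])): "0"}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def encode_regions(data, boundaries):
    """Encode each region with the single whole-file table."""
    table = huffman_table(data)
    regions, start = [], 0
    for end in list(boundaries) + [len(data)]:
        regions.append("".join(table[s] for s in data[start:end]))
        start = end
    return table, regions

def decode(bits, table):
    """Greedy decoding; valid because the code is prefix-free."""
    rev, out, cur = {c: s for s, c in table.items()}, [], ""
    for b in bits:
        cur += b
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return "".join(out)

data = "abracadabra_region_based_huffman"
table, regions = encode_regions(data, [11, 18])   # placeholder boundaries
restored = decode("".join(regions), table)
```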
|
1403.0156 | Sleep Analytics and Online Selective Anomaly Detection | cs.LG | We introduce a new problem, the Online Selective Anomaly Detection (OSAD), to
model a specific scenario emerging from research in sleep science. Scientists
have segmented sleep into several stages and stage two is characterized by two
patterns (or anomalies) in the EEG time series recorded on sleep subjects.
These two patterns are sleep spindle (SS) and K-complex. The OSAD problem was
introduced to design a residual system, where all anomalies (known and unknown)
are detected but the system only triggers an alarm when non-SS anomalies
appear. The solution of the OSAD problem required us to combine techniques from
both machine learning and control theory. Experiments on data from real
subjects attest to the effectiveness of our approach.
|
1403.0157 | Network Traffic Decomposition for Anomaly Detection | cs.LG cs.NI | In this paper we focus on the detection of network anomalies like Denial of
Service (DoS) attacks and port scans in a unified manner. While there has been
an extensive amount of research in network anomaly detection, current
state-of-the-art methods are only able to detect one class of anomalies at the
cost of others. The key tool we use is based on the spectral decomposition of a
trajectory/Hankel matrix, which is able to detect deviations from both the
between- and within-correlation structure present in the observed network
traffic data. Detailed experiments on synthetic and real network traces show a
significant improvement in detection capability over competing approaches. In
the process we also address the issue of the robustness of anomaly detection
systems in a principled fashion.
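A toy sketch of the core tool: build a trajectory (Hankel) matrix from normal traffic, take its spectral decomposition, and score new windows by their residual outside the dominant subspace. The sinusoidal "traffic", window length, rank, and injected spike are all illustrative choices, not the paper's setup.

```python
import numpy as np

def hankel(x, L):
    """Trajectory matrix whose columns are length-L sliding windows of x."""
    return np.column_stack([x[i:i + L] for i in range(len(x) - L + 1)])

def anomaly_scores(train, test, L=20, rank=3):
    """Residual norm of each test window outside the dominant training subspace."""
    U, _, _ = np.linalg.svd(hankel(train, L), full_matrices=False)
    P = U[:, :rank]                      # dominant left singular subspace
    Ht = hankel(test, L)
    resid = Ht - P @ (P.T @ Ht)          # per-window projection residual
    return np.linalg.norm(resid, axis=0)

rng = np.random.default_rng(1)
train = np.sin(0.2 * np.arange(400)) + 0.05 * rng.normal(size=400)
test = np.sin(0.2 * np.arange(200)) + 0.05 * rng.normal(size=200)
test[120:130] += 3.0                     # injected burst (a DoS-like deviation)
scores = anomaly_scores(train, test)
```

Windows overlapping the burst deviate from the learned correlation structure and receive large scores, while normal windows stay near the noise floor.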
|
1403.0173 | Coordinated Direct and Relay Schemes for Two-Hop Communication in VANETS | cs.NI cs.IT math.IT | In order to accommodate increasing demand and offer high-performance
communication, both vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V)
communications are exploited. The advantages of static nodes and vehicular
nodes are combined to achieve an optimal routing scheme. In this paper, we
consider the communications between a static node and the vehicular nodes
moving in its adjacent area. The adjacent area is defined as the zone where a
vehicle can communicate with the static node within at most two hops. We only
consider single-hop and two-hop transmissions because these
transmissions can be considered as building blocks to construct transmissions
with a higher number of hops. Different cases, in which an uplink or a downlink
for the two-hop user is combined with an uplink or a downlink for the
single-hop user, correspond to different Coordinated Direct and Relay (CDR)
schemes. Using side information to
intentionally cancel the interference, Network Coding (NC), CDR, overhearing
and multi-way schemes aggregate communications flows in order to increase the
performance of the network. We apply the mentioned schemes to a V2I network and
propose novel schemes to optimally arrange and combine the transmissions.
|
1403.0190 | RZA-NLMF algorithm based adaptive sparse sensing for realizing
compressive sensing problems | cs.IT math.IT | Nonlinear sparse sensing (NSS) techniques have been adopted for realizing
compressive sensing in many applications such as Radar imaging. Unlike the NSS,
in this paper, we propose an adaptive sparse sensing (ASS) approach using
reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm
which depends on several given parameters, i.e., reweighted factor,
regularization parameter and initial step-size. First, under an independence
assumption, the Cramer-Rao lower bound (CRLB) is derived as a benchmark for
performance comparisons. In addition, a reweighting-factor selection method is
proposed for achieving robust estimation performance. Finally, to verify the
algorithm, Monte Carlo based computer simulations are given to show that the
ASS achieves much better mean square error (MSE) performance than the NSS.
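A sketch of an RZA-NLMF-style update for adaptive sparse estimation. The particular NLMF normalization, the reweighted zero-attractor form, and all parameter values below are illustrative assumptions, as is the synthetic sparse system; this is not the paper's exact algorithm or its CRLB analysis.

```python
import numpy as np

def rza_nlmf(x, d, n_taps, mu=8.0, rho=1e-4, eps_rw=10.0, delta=1e-4):
    """Normalized least-mean-fourth step plus a reweighted zero attractor
    that shrinks inactive taps harder than active ones (l1-like reweighting)."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor, newest sample first
        e = d[n] - w @ u                    # a priori estimation error
        norm = u @ u
        w += mu * e ** 3 * u / (norm * (norm * e ** 2 + delta))  # stabilized NLMF
        w -= rho * np.sign(w) / (1.0 + eps_rw * np.abs(w))       # zero attractor
    return w

rng = np.random.default_rng(0)
h = np.zeros(16)
h[2], h[9] = 0.8, -0.5                      # sparse system: two active taps
x = rng.normal(size=5000)
d = np.convolve(x, h)[:5000] + 0.01 * rng.normal(size=5000)
w_hat = rza_nlmf(x, d, 16)
```

The reweighting denominator 1 + eps_rw * |w| weakens the attraction on taps that are clearly active, which is what keeps the shrinkage bias on the true taps small.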
|
1403.0192 | Compressive sensing based Bayesian sparse channel estimation for OFDM
communication systems: high performance and low complexity | cs.IT math.IT | In orthogonal frequency division multiplexing (OFDM) communication systems,
channel state information (CSI) is required at the receiver because the
frequency-selective fading channel introduces severe inter-symbol interference
(ISI) over data transmission. A broadband channel is often described by only a
few dominant channel taps, which can be probed by compressive sensing based
sparse channel estimation (SCE) methods, e.g., the orthogonal matching pursuit
algorithm, which can effectively exploit the sparse structure of the channel as
prior information. However, these methods are vulnerable to both noise
interference and column coherence in the training signal matrix. In other
words, the primary objective of these conventional methods is to catch the
dominant channel taps without reporting the posterior channel uncertainty. To
improve the estimation performance, we propose a compressive sensing based
Bayesian sparse channel estimation (BSCE) method which can not only exploit the
channel sparsity but also mitigate the unexpected channel uncertainty without
sacrificing computational complexity. The proposed method can reveal potential
ambiguity among multiple channel estimators that are ambiguous due to
observation noise or correlation interference among columns of the training
matrix. Computer simulations show that the proposed method improves the
estimation performance compared with conventional SCE methods.
|
1403.0214 | Variable-Rate Linear Network Error Correction MDS Codes | cs.IT math.IT | In network communication, the source often transmits messages at several
different information rates within a session. How to deal with information
transmission and network error correction simultaneously under different rates
is introduced in this paper as a variable-rate network error correction
problem. Naturally, linear network error correction MDS codes are expected to
be used for these different rates. For this purpose, designing a linear network
error correction MDS code based on the existing results for each information
rate is an efficient solution. In order to solve the problem more efficiently,
we present the concept of variable-rate linear network error correction MDS
codes, that is, these linear network error correction MDS codes of different
rates have the same local encoding kernel at each internal node. Further, we
propose an approach to construct such a family of variable-rate network MDS
codes and give an algorithm for efficient implementation. This approach saves
the storage space for each internal node, and resources and time for the
transmission on networks. Moreover, the performance of our proposed algorithm
is analyzed, including the field size, the time complexity, the encoding
complexity at the source node, and the decoding methods. Finally, a random
method is introduced for constructing variable-rate network MDS codes and we
obtain a lower bound on the success probability of this random method, which
shows that this probability approaches one as the base field size goes to
infinity.
|
1403.0222 | Beyond Q-Resolution and Prenex Form: A Proof System for Quantified
Constraint Satisfaction | cs.LO cs.AI cs.CC | We consider the quantified constraint satisfaction problem (QCSP) which is to
decide, given a structure and a first-order sentence (not assumed here to be in
prenex form) built from conjunction and quantification, whether or not the
sentence is true on the structure. We present a proof system for certifying the
falsity of QCSP instances and develop its basic theory; for instance, we
provide an algorithmic interpretation of its behavior. Our proof system places
the established Q-resolution proof system in a broader context, and also allows
us to derive QCSP tractability results.
|
1403.0230 | Research Traceability using Provenance Services for Biomedical Analysis | cs.DB | We outline the approach being developed in the neuGRID project to use
provenance management techniques for the purposes of capturing and preserving
the provenance data that emerges in the specification and execution of
workflows in biomedical analyses. In the neuGRID project a provenance service
has been designed and implemented that is intended to capture, store, retrieve
and reconstruct the workflow information needed to facilitate users in
conducting user analyses. We describe the architecture of the neuGRID
provenance service and discuss how the CRISTAL system from CERN is being
adapted to address the requirements of the project and then consider how a
generalised approach for provenance management could emerge for more generic
application to the (Health)Grid community.
|
1403.0240 | Particle methods enable fast and simple approximation of Sobolev
gradients in image segmentation | cs.CV cs.CE cs.NA q-bio.QM | Bio-image analysis is challenging due to inhomogeneous intensity
distributions and high levels of noise in the images. Bayesian inference
provides a principled way for regularizing the problem using prior knowledge. A
fundamental choice is how one measures "distances" between shapes in an image.
It has been shown that the straightforward geometric L2 distance is degenerate
and leads to pathological situations. This is avoided when using Sobolev
gradients, rendering the segmentation problem less ill-posed. The high
computational cost and implementation overhead of Sobolev gradients, however,
have hampered practical applications. We show how particle methods as applied
to image segmentation allow for a simple and computationally efficient
implementation of Sobolev gradients. We show that the evaluation of Sobolev
gradients amounts to particle-particle interactions along the contour in an
image. We extend an existing particle-based segmentation algorithm to using
Sobolev gradients. Using synthetic and real-world images, we benchmark the
results for both 2D and 3D images using piecewise smooth and piecewise constant
region models. The present particle approximation of Sobolev gradients is 2.8
to 10 times faster than the previous reference implementation, but retains the
known favorable properties of Sobolev gradients. This speedup is achieved by
using local particle-particle interactions instead of solving a global Poisson
equation at each iteration. The computational time per iteration is higher for
Sobolev gradients than for L2 gradients. Since Sobolev gradients precondition
the optimization problem, however, a smaller number of overall iterations may
be necessary for the algorithm to converge, which can in some cases amortize
the higher per-iteration cost.
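The idea of replacing the global solve with local particle-particle interactions can be sketched in 1D on a closed contour: the Sobolev gradient (I - lam * d2/ds2)^(-1) g_L2 has an exponentially decaying Green's function, so a truncated pairwise kernel approximates it well. The contour discretization, lam, and cutoff below are illustrative, not the paper's setup.

```python
import numpy as np

def sobolev_fft(g, lam):
    """Exact periodic Sobolev gradient: solve (I - lam d2/ds2) gS = g globally."""
    k = 2.0 * np.pi * np.fft.fftfreq(g.size)
    return np.real(np.fft.ifft(np.fft.fft(g) / (1.0 + lam * k ** 2)))

def sobolev_particles(g, lam, cutoff):
    """Local approximation: each contour particle interacts only with neighbors
    within `cutoff`, weighted by the decaying kernel exp(-|s| / sqrt(lam))."""
    offsets = np.arange(-cutoff, cutoff + 1)
    w = np.exp(-np.abs(offsets) / np.sqrt(lam))
    w /= w.sum()                    # preserve constants, as the exact operator does
    gS = np.zeros_like(g)
    for o, wo in zip(offsets, w):
        gS += wo * np.roll(g, -o)   # periodic neighbors along the closed contour
    return gS

s = np.arange(128)
g_l2 = np.sin(2 * np.pi * s / 128) + 0.3 * np.cos(4 * np.pi * s / 128)
exact = sobolev_fft(g_l2, lam=4.0)
approx = sobolev_particles(g_l2, lam=4.0, cutoff=10)
rel_err = float(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

The local version costs O(n * cutoff) per evaluation instead of a global solve, mirroring the source of the speedup reported above.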
|
1403.0258 | Decentralized Hybrid Formation Control of Unmanned Aerial Vehicles | cs.SY | This paper presents a decentralized hybrid supervisory control approach for a
team of unmanned helicopters that are involved in a leader-follower formation
mission. Using a polar partitioning technique, the motion dynamics of the
follower helicopters are abstracted to finite state machines. Then, a discrete
supervisor is designed in a modular way for different components of the
formation mission including reaching the formation, keeping the formation, and
collision avoidance. Furthermore, a formal technique is developed to design
the local supervisors in a decentralized manner, so that the team of
helicopters as a whole can cooperatively accomplish a collision-free formation
task.
|
1403.0259 | The Effect of Block-wise Feedback on the Throughput-Delay Trade-off in
Streaming | cs.IT cs.MM math.IT | Unlike traditional file transfer where only total delay matters, streaming
applications impose delay constraints on each packet and require them to be in
order. To achieve fast in-order packet decoding, we have to compromise on the
throughput. We study this trade-off between throughput and in-order decoding
delay, and in particular how it is affected by the frequency of block-wise
feedback to the source. When there is immediate feedback, we can achieve the
optimal throughput and delay simultaneously. But as the feedback delay
increases, we have to compromise on at least one of these metrics. We present a
spectrum of coding schemes that span different points on the throughput-delay
trade-off. Depending upon the delay-sensitivity and bandwidth limitations of
the application, one can choose an appropriate operating point on this
trade-off.
|
1403.0268 | Tropical optimization problems with application to project scheduling
with minimum makespan | math.OC cs.SY | We consider multidimensional optimization problems in the framework of
tropical mathematics. The problems are formulated to minimize a nonlinear
objective function that is defined on vectors over an idempotent semifield and
calculated by means of multiplicative conjugate transposition. We start with an
unconstrained problem and offer two complete direct solutions to demonstrate
different practicable argumentation schemes. The first solution consists of the
derivation of a sharp lower bound for the objective function and the solving of
an equation to find all vectors that yield the bound. The second is based on
extremal properties of the spectral radius of matrices and involves the
evaluation of this radius for a certain matrix. This solution is then extended
to problems with boundary constraints that specify the feasible solution set by
a double inequality, and with a linear inequality constraint given by a matrix.
To illustrate one application of the results obtained, we solve problems in
project scheduling under the minimum makespan criterion subject to various
precedence constraints on the time of initiation and completion of activities
in the project. Simple numerical examples are given to show the computational
technique used for solutions.
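The idempotent-semifield setting can be made concrete in the max-plus semifield (R with -inf, where "addition" is max and "multiplication" is +), in which precedence-constrained scheduling becomes linear algebra: earliest start times are x = A* (x) x0, with A* the Kleene star (all-longest-paths closure) of the precedence/duration matrix. The 3-activity project below is a hypothetical illustration, not an example from the paper.

```python
import numpy as np

NEG = -np.inf  # the tropical "zero" of the max-plus semifield

def mp_matmul(A, B):
    """Max-plus matrix product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_closure(A):
    """Kleene star A* = I (+) A (+) A^2 (+) ... (longest paths; A acyclic)."""
    n = A.shape[0]
    I = np.full((n, n), NEG)
    np.fill_diagonal(I, 0.0)
    S, P = I.copy(), I.copy()
    for _ in range(n - 1):
        P = mp_matmul(P, A)
        S = np.maximum(S, P)
    return S

# Hypothetical project: A[i, j] = duration of activity j when j precedes i.
A = np.array([[NEG, NEG, NEG],
              [3.0, NEG, NEG],    # activity 1 starts after activity 0 (3 units)
              [NEG, 2.0, NEG]])   # activity 2 starts after activity 1 (2 units)
x0 = np.array([[0.0], [NEG], [NEG]])   # release time of activity 0 only
start = mp_matmul(mp_closure(A), x0)   # earliest start times
```

The makespan is then the largest completion time, and minimizing it over admissible release times is the kind of tropical optimization problem the abstract solves in closed form.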
|
1403.0270 | Quantum Random State Generation with Predefined Entanglement Constraint | quant-ph cs.IT math.IT | Entanglement plays an important role in quantum communication, algorithms,
and error correction. The Schmidt coefficients are directly related to the
eigenvalues of the reduced density matrix, and these eigenvalues are used in
the von Neumann entropy to quantify the amount of bipartite entanglement. In
this paper, we map the
Schmidt basis and the associated coefficients to quantum circuits to generate
random quantum states. We also show that it is possible to adjust the
entanglement between subsystems by changing the quantum gates corresponding to
the Schmidt coefficients. In this manner, random quantum states with predefined
bipartite entanglement amounts can be generated using random Schmidt basis.
This provides a technique for generating equivalent quantum states for given
weighted graph states, which are very useful in the study of entanglement,
quantum computing, and quantum error correction.
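A minimal sketch of the mapping from prescribed Schmidt coefficients to a random bipartite state (as a state vector rather than a quantum circuit, which is the paper's actual target): fix the squared Schmidt coefficients lambda_i, draw random orthonormal Schmidt bases, and check that the reduced-state von Neumann entropy matches the prescribed value. The dimensions and coefficients are illustrative.

```python
import numpy as np

def random_state_with_schmidt(lambdas, dA, dB, rng):
    """|psi> = sum_i sqrt(lambda_i) |a_i>|b_i> with random orthonormal bases."""
    lambdas = np.asarray(lambdas, dtype=float)   # squared Schmidt coefficients
    Ua, _ = np.linalg.qr(rng.normal(size=(dA, dA)) + 1j * rng.normal(size=(dA, dA)))
    Ub, _ = np.linalg.qr(rng.normal(size=(dB, dB)) + 1j * rng.normal(size=(dB, dB)))
    psi = np.zeros((dA, dB), dtype=complex)
    for i, lam in enumerate(lambdas):
        psi += np.sqrt(lam) * np.outer(Ua[:, i], Ub[:, i])
    return psi.reshape(-1)                       # joint state vector

def entanglement_entropy(psi, dA, dB):
    """Von Neumann entropy (in bits) of the reduced density matrix on A."""
    M = psi.reshape(dA, dB)
    evals = np.linalg.eigvalsh(M @ M.conj().T)   # eigenvalues of rho_A
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

rng = np.random.default_rng(7)
lambdas = [0.5, 0.3, 0.2]                        # predefined entanglement
psi = random_state_with_schmidt(lambdas, 4, 4, rng)
S = entanglement_entropy(psi, 4, 4)
```

Adjusting the lambda_i dials the bipartite entanglement directly, which is the role the Schmidt-coefficient gates play in the circuit construction.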
|
1403.0284 | Bayes Merging of Multiple Vocabularies for Scalable Image Retrieval | cs.CV | The Bag-of-Words (BoW) representation is widely applied in recent
state-of-the-art image retrieval work. Typically, multiple vocabularies are
generated to correct quantization artifacts and improve recall. However, this
routine is corrupted by vocabulary correlation, i.e., overlapping among
different vocabularies. Vocabulary correlation leads to an over-counting of the
indexed features in the overlapped area, or the intersection set, thus
compromising the retrieval accuracy. In order to address the correlation
problem while preserving the benefit of high recall, this paper proposes a Bayes
merging approach to down-weight the indexed features in the intersection set.
Through explicitly modeling the correlation problem in a probabilistic view, a
joint similarity on both image- and feature-level is estimated for the indexed
features in the intersection set.
We evaluate our method through extensive experiments on three benchmark
datasets. Albeit simple, Bayes merging can be well applied in various merging
tasks, and consistently improves the baselines on multi-vocabulary merging.
Moreover, Bayes merging is efficient in terms of both time and memory cost, and
yields competitive performance compared with the state-of-the-art methods.
|
1403.0300 | A Proposed Improvement Equalizer for Telephone and Mobile Circuit
Channels | cs.IT math.IT | In the transmission of digital data at a relatively high rate over a
band-limited channel, it is normally necessary to employ an equalizer at the
receiver in order to correct the signal distortion introduced by the channel.
Inter-symbol interference (ISI) leads to a large error probability if it is
not suppressed; equalization is one possible technique for coping with it.
Maximum Likelihood Sequence Estimation (MLSE), implemented with the Viterbi
algorithm, is the optimal equalizer for the ISI problem in the sense that it
minimizes the sequence error rate. This estimator involves a very considerable
amount of equipment complexity, especially when detecting a multilevel digital
signal with a large alphabet and/or operating over a channel with a long
impulse response; this gives rise to a need for detection algorithms with
reduced complexity that do not sacrifice performance. The aim of this work is
to study the various ways to remove ISI, concentrating on the decision-based
algorithms (DFE, MLSE, and near-MLSE) and analyzing the differences between
them from both performance and complexity points of view. An improved
nonlinear equalizer with a perturbation algorithm is suggested, which aims to
enhance the performance and reduce the computational complexity; it is
compared with the other existing detection algorithms.
|
1403.0306 | An extended isogeometric analysis for vibration of cracked FGM plates
using higher-order shear deformation theory | cs.CE math.NA | A novel and effective formulation that combines the eXtended IsoGeometric
Approach (XIGA) and Higher-order Shear Deformation Theory (HSDT) is proposed to
study the free vibration of cracked Functionally Graded Material (FGM) plates.
Herein, the general HSDT model with five unknown variables per node is applied
for calculating the stiffness matrix without needing Shear Correction Factor
(SCF). In order to model the discontinuous and singular phenomena in the
cracked plates, IsoGeometric Analysis (IGA) utilizing the Non-Uniform Rational
B-Spline (NURBS) functions is incorporated with enrichment functions through
the partition of unity method. NURBS basis functions with their inherent
arbitrary high order smoothness permit the C1 requirement of the HSDT model.
The material properties of the FGM plates vary continuously through the plate
thickness according to an exponent function. The effects of gradient index,
crack length, crack location, and length-to-thickness ratio on the natural
frequencies and mode shapes of simply supported and clamped FGM plates are
studied.
Numerical examples are provided to show excellent performance of the proposed
method compared with other published solutions in the literature.
|
1403.0307 | Isogeometric finite element analysis of laminated composite plates based
on a four variable refined plate theory | cs.CE math.NA | In this paper, a novel and effective formulation based on isogeometric
approach (IGA) and Refined Plate Theory (RPT) is proposed to study the behavior
of laminated composite plates. Using many kinds of higher-order distributed
functions, RPT model naturally satisfies the traction-free boundary conditions
at plate surfaces and describes the non-linear distribution of shear stresses
without requiring a shear correction factor (SCF). IGA utilizes basis
functions, namely B-splines or non-uniform rational B-splines (NURBS), which
easily achieve smoothness of arbitrary order and hence satisfy the C1
requirement of the RPT model. The static, dynamic and buckling analysis of
rectangular plates is investigated for different boundary conditions. Numerical
results show high effectiveness of the present formulation.
|
1403.0309 | Object Tracking via Non-Euclidean Geometry: A Grassmann Approach | cs.CV math.MG stat.ML | A robust visual tracking system requires an object appearance model that is
able to handle occlusion, pose, and illumination variations in the video
stream. This can be difficult to accomplish when the model is trained using
only a single image. In this paper, we first propose a tracking approach based
on affine subspaces (constructed from several images) which are able to
accommodate the abovementioned variations. We use affine subspaces not only to
represent the object, but also the candidate areas that the object may occupy.
We furthermore propose a novel approach to measure affine subspace-to-subspace
distance via the use of non-Euclidean geometry of Grassmann manifolds. The
tracking problem is then considered as an inference task in a Markov Chain
Monte Carlo framework via particle filtering. Quantitative evaluation on
challenging video sequences indicates that the proposed approach obtains
considerably better performance than several recent state-of-the-art methods
such as Tracking-Learning-Detection and MILtrack.
|
1403.0315 | Summarisation of Short-Term and Long-Term Videos using Texture and
Colour | cs.CV stat.AP | We present a novel approach to video summarisation that makes use of a
Bag-of-visual-Textures (BoT) approach. Two systems are proposed, one based
solely on the BoT approach and another which exploits both colour information
and BoT features. On 50 short-term videos from the Open Video Project we show
that our BoT and fusion systems both achieve state-of-the-art performance,
obtaining an average F-measure of 0.83 and 0.86 respectively, a relative
improvement of 9% and 13% when compared to the previous state-of-the-art. When
applied to a new underwater surveillance dataset containing 33 long-term
videos, the proposed system reduces the amount of footage by a factor of 27,
with only minor degradation in the information content. This order of magnitude
reduction in video data represents significant savings in terms of time and
potential labour cost when manually reviewing such footage.
|
1403.0316 | Cross-Scale Cost Aggregation for Stereo Matching | cs.CV | Human beings process stereoscopic correspondence across multiple scales.
However, this bio-inspiration is ignored by state-of-the-art cost aggregation
methods for dense stereo correspondence. In this paper, a generic cross-scale
cost aggregation framework is proposed to allow multi-scale interaction in cost
aggregation. We first reformulate cost aggregation from a unified
optimization perspective and show that different cost aggregation methods
essentially differ in their choices of similarity kernels. Then, an inter-scale
regularizer is introduced into the optimization, and solving this new
problem leads to the proposed framework. Since the regularization term is
independent of the similarity kernel, various cost aggregation methods can be
integrated into the proposed general framework. We show that the cross-scale
framework is important as it effectively and efficiently expands
state-of-the-art cost aggregation methods and leads to significant
improvements, when evaluated on Middlebury, KITTI and New Tsukuba datasets.
|
1403.0320 | Matching Image Sets via Adaptive Multi Convex Hull | cs.CV stat.ML | Traditional nearest points methods use all the samples in an image set to
construct a single convex or affine hull model for classification. However,
strong artificial features and noisy data may be generated from combinations of
training samples when significant intra-class variations and/or noise occur in
the image set. Existing multi-model approaches extract local models by
clustering each image set individually only once, with fixed clusters used for
matching with various image sets. This may not be optimal for discrimination,
as undesirable environmental conditions (e.g. illumination and pose variations)
may result in the two closest clusters representing different characteristics
of an object (e.g. a frontal face being compared to a non-frontal face). To address
the above problem, we propose a novel approach to enhance nearest points based
methods by integrating affine/convex hull classification with an adapted
multi-model approach. We first extract multiple local convex hulls from a query
image set via maximum margin clustering to diminish the artificial variations
and constrain the noise in local convex hulls. We then propose adaptive
reference clustering (ARC) to constrain the clustering of each gallery image
set by forcing the clusters to have resemblance to the clusters in the query
image set. By applying ARC, noisy clusters in the query set can be discarded.
Experiments on Honda, MoBo and ETH-80 datasets show that the proposed method
outperforms single model approaches and other recent techniques, such as Sparse
Approximated Nearest Points, Mutual Subspace Method and Manifold Discriminant
Analysis.
|
1403.0353 | Personalized recommendation against crowd's popular selection | cs.IR cs.SI physics.soc-ph | The problem of personalized recommendation in an ocean of data has attracted
increasing attention in recent years. Most traditional approaches ignore the
popularity of the recommended object, which results in low personalization and
accuracy. In this Letter, we propose a personalized recommendation method based
on a weighted object network that penalizes objects favored by the crowd,
namely the Anti-popularity index (AP). It yields enhanced personalization,
accuracy and diversity over mainstream baselines at a low computational
complexity.
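The idea of penalizing crowd-popular objects can be sketched as a toy scorer: items are scored by co-occurrence with the target user's items, then divided by their global popularity raised to a penalty exponent. The data, the `alpha` exponent, and the exact scoring rule are our illustrative assumptions, not the paper's formulation:

```python
from collections import defaultdict

# Toy user-item interactions (user -> set of chosen items), our invention.
ratings = {
    "u1": {"a", "b", "c"},
    "u2": {"a", "b"},
    "u3": {"a", "d"},
}

def recommend(target, alpha=1.0):
    """Score unseen items by overlap with similar users' baskets,
    penalising globally popular items by popularity ** alpha."""
    popularity = defaultdict(int)
    for items in ratings.values():
        for i in items:
            popularity[i] += 1
    seen = ratings[target]
    scores = defaultdict(float)
    for user, items in ratings.items():
        if user == target:
            continue
        overlap = len(seen & items)          # similarity to this user
        for i in items - seen:               # candidate unseen items
            scores[i] += overlap / popularity[i] ** alpha
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("u2"))  # → ['c', 'd']
```

Raising `alpha` pushes the ranking further away from the crowd's popular selection, which is the trade-off between accuracy and diversity the abstract describes.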
|
1403.0354 | Energy Harvesting Cooperative Networks: Is the Max-Min Criterion Still
Diversity-Optimal? | cs.IT math.IT | This paper considers a general energy harvesting cooperative network with M
source-destination (SD) pairs and one relay, where the relay schedules only m
user pairs for transmissions. For the special case of m = 1, the addressed
scheduling problem is equivalent to relay selection for the scenario with one
SD pair and M relays. In conventional cooperative networks, the max-min
selection criterion has been recognized as a diversity-optimal strategy for
relay selection and user scheduling. The main contribution of this paper is to
show that the use of the max-min criterion will result in loss of diversity
gains in energy harvesting cooperative networks. Particularly when only a
single user is scheduled, analytical results are developed to demonstrate that
the diversity gain achieved by the max-min criterion is only (M+1)/2, much less
than the maximal diversity gain M. The max-min criterion suffers this diversity
loss because it does not reflect the fact that the source-relay channels are
more important than the relay-destination channels in energy harvesting
networks. Motivated by this fact, a few user scheduling approaches tailored to
energy harvesting networks are developed and their performance is analyzed.
Simulation results are provided to demonstrate the accuracy of the developed
analytical results and facilitate the performance comparison.
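The conventional max-min criterion under discussion is simple to state in code: schedule the pair whose weaker hop (source-relay or relay-destination gain) is largest. The toy channel gains below are our invention; the sketch only shows the selection rule, not the energy-harvesting analysis:

```python
def max_min_select(pairs):
    """Max-min scheduling: pick the SD pair whose weaker hop is strongest.
    The abstract argues this loses diversity under energy harvesting
    because it treats both hops as equally important, whereas the
    source-relay hop matters more there."""
    return max(range(len(pairs)), key=lambda i: min(pairs[i]))

# Toy channel gains (source->relay, relay->destination) per pair.
gains = [(0.9, 0.2), (0.5, 0.6), (0.3, 0.8)]
print(max_min_select(gains))  # → 1: its weakest hop (0.5) beats 0.2 and 0.3
```

Pair 0 has the best source-relay gain (0.9), yet max-min discards it for its weak second hop; a criterion weighted toward the source-relay channel, as the paper advocates, could decide differently.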
|
1403.0355 | Ergodic Sum-Rate Maximization for Fading Cognitive Multiple Access
Channels without Successive Interference Cancellation | cs.IT math.IT | In this paper, the ergodic sum-rate of a fading cognitive multiple access
channel (C-MAC) is studied, where a secondary network (SN) with multiple
secondary users (SUs) transmitting to a secondary base station (SBS) shares the
spectrum band with a primary user (PU). An interference power constraint (IPC)
is imposed on the SN to protect the PU. Under such a constraint and the
individual transmit power constraint (TPC) imposed on each SU, we investigate
the power allocation strategies to maximize the ergodic sum-rate of a fading
C-MAC without successive interference cancellation (SIC). In particular, this
paper considers two types of constraints: (1) average TPC and average IPC, (2)
peak TPC and peak IPC. For the first case, it is proved that the optimal power
allocation is dynamic time-division multiple-access (D-TDMA), which is exactly
the same as the optimal power allocation to maximize the ergodic sum-rate of
the fading C-MAC with SIC under the same constraints. For the second case, it
is proved that the optimal solution must be at the extreme points of the
feasible region. It is shown that D-TDMA is optimal with high probability when
the number of SUs is large. Besides, we show that, when the SUs can be sorted
in a certain order, an algorithm with linear complexity can be used to find the
optimal power allocation.
|
1403.0388 | Cascading Randomized Weighted Majority: A New Online Ensemble Learning
Algorithm | stat.ML cs.LG | With the increasing volume of data in the world, the best approach for
learning from this data is to exploit an online learning algorithm. Online
ensemble methods are online algorithms which take advantage of an ensemble of
classifiers to predict labels of data. Prediction with expert advice is a
well-studied problem in the online ensemble learning literature. The Weighted
Majority algorithm and the randomized weighted majority (RWM) are the most
well-known solutions to this problem, aiming to converge to the best expert.
Since, among a set of experts, the best one does not necessarily have the
minimum error in all regions of the data space, defining specific regions and
converging to the best expert in each of them leads to a better result. In this
paper, we aim to resolve this defect of RWM algorithms by proposing a novel
online ensemble algorithm to the problem of prediction with expert advice. We
propose a cascading version of RWM to achieve not only better experimental
results but also a better error bound for sufficiently large datasets.
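The base algorithm being cascaded here is the classic randomized weighted majority: follow an expert drawn in proportion to its weight, then multiplicatively penalize every expert that was wrong. This is a minimal sketch of standard RWM with penalty factor beta, not the paper's cascading variant:

```python
import random

def rwm_predict(weights, advice, rng):
    """Follow one expert chosen with probability proportional to its weight."""
    r = rng.random() * sum(weights)
    for w, a in zip(weights, advice):
        r -= w
        if r <= 0:
            return a
    return advice[-1]

def rwm_update(weights, advice, label, beta=0.5):
    """Multiplicatively penalise every expert whose advice was wrong."""
    return [w * (beta if a != label else 1.0)
            for w, a in zip(weights, advice)]

weights = [1.0, 1.0, 1.0]
for _ in range(10):          # expert 0 is always right, the others wrong
    weights = rwm_update(weights, advice=[1, 0, 0], label=1)
print(weights)               # → [1.0, 0.0009765625, 0.0009765625]
```

After ten rounds the wrong experts' weights have decayed by beta**10, so predictions concentrate on the best expert; the cascading idea in the paper applies this convergence per region rather than globally.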
|
1403.0429 | Extending Agents by Transmitting Protocols in Open Systems | cs.MA | Agents in an open system communicate using interaction protocols. Suppose
that we have a system of agents and that we want to add a new protocol that all
(or some) agents should be able to understand. Clearly, modifying the source
code for each agent implementation is not practical. A solution to this problem
of upgrading an open system is to have a mechanism that allows agents to
receive a description of an interaction protocol and use it. In this paper we
propose a representation for protocols based on extending Petri nets. However,
this is not enough: in an open system the source of a protocol may not be
trusted and a protocol that is received may contain steps that are erroneous or
that make confidential information public. We therefore also describe an
analysis method that infers whether a protocol is safe. Finally, we give an
execution model for extended Petri nets.
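The execution model rests on the standard Petri-net token game: a transition is enabled when every input place holds enough tokens, and firing it consumes those tokens and produces tokens on the output places. The sketch below shows plain Petri-net semantics with a made-up protocol step, not the paper's extended nets or safety analysis:

```python
def enabled(marking, transition):
    """A transition is enabled when each input place has enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce outputs.
    Returns a new marking; the original is left untouched."""
    assert enabled(marking, transition)
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Toy protocol step: 'send' moves a token from place 'ready' to 'sent'.
send = {"in": {"ready": 1}, "out": {"sent": 1}}
m0 = {"ready": 1, "sent": 0}
print(fire(m0, send))  # → {'ready': 0, 'sent': 1}
```

A safety analysis of the kind the paper describes would reason over which markings are reachable by repeated firing, e.g. checking that no reachable marking leaks a confidential token.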
|
1403.0448 | Hybrid evolving clique-networks and their communicability | cs.SI physics.soc-ph | Aiming to understand real-world hierarchical networks whose degree
distributions are neither power law nor exponential, we construct a hybrid
clique network that includes both homogeneous and inhomogeneous parts, and
introduce an inhomogeneity parameter to tune the ratio between the homogeneous
part and the inhomogeneous one. We perform Monte-Carlo simulations to study
various properties of such a network, including the degree distribution, the
average shortest-path-length, the clustering coefficient, the clustering
spectrum, and the communicability.
|
1403.0461 | Timed Soft Concurrent Constraint Programs: An Interleaved and a Parallel
Approach | cs.PL cs.AI | We propose a timed and soft extension of Concurrent Constraint Programming.
The time extension is based on the hypothesis of bounded asynchrony: the
computation takes a bounded period of time and is measured by a discrete global
clock. Action prefixing is then considered as the syntactic marker which
distinguishes a time instant from the next one. Supported by soft constraints
instead of crisp ones, tell and ask agents are now equipped with a preference
(or consistency) threshold which is used to determine their success or
suspension. In the paper we provide a language to describe the agents' behavior,
together with its operational and denotational semantics, for which we also
prove the compositionality and correctness properties. After presenting a
semantics using maximal parallelism of actions, we also describe a version for
their interleaving on a single processor (with maximal parallelism for time
elapsing). Coordinating agents that need to take decisions both on preference
values and time events may benefit from this language. To appear in Theory and
Practice of Logic Programming (TPLP).
|
1403.0466 | Automatic exploration of structural regularities in networks | cs.SI physics.soc-ph | Complex networks provide a powerful mathematical representation of complex
systems in nature and society. To understand complex networks, it is crucial to
explore their internal structures, also called structural regularities. The
task of network structure exploration is to determine how many groups a
complex network contains and how to group the nodes of the network. Most existing
structure exploration methods need to specify either a group number or a
certain type of structure when they are applied to a network. In the real
world, however, not only the group number but also the certain type of
structure that a network has are usually unknown in advance. To automatically
explore structural regularities in complex networks, without any prior
knowledge about the group number or the type of structure, we use Bayesian
nonparametric theory to extend a probabilistic mixture model that can handle
networks with any type of structure but requires a specified group number, and
we propose a novel Bayesian nonparametric model, called the Bayesian nonparametric
mixture (BNPM) model. Experiments conducted on a large number of networks with
different structures show that the BNPM model is able to automatically explore
structural regularities in networks with a stable and state-of-the-art
performance.
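The Bayesian nonparametric machinery that lets the group number grow with the data is commonly built on priors such as the Chinese restaurant process. The sketch below is that standard prior as an illustrative stand-in, not the paper's BNPM model: each node joins an existing group with probability proportional to its size, or opens a new group with mass alpha.

```python
import random

def crp_assign(n, alpha, rng):
    """Chinese restaurant process: assign n nodes to groups whose number
    is not fixed in advance but grows (slowly) with the data."""
    tables = []                          # tables[k] = current size of group k
    assign = []
    for i in range(n):
        r = rng.random() * (i + alpha)   # total mass: i seated + alpha new
        k = 0
        while k < len(tables) and r >= tables[k]:
            r -= tables[k]
            k += 1
        if k == len(tables):             # landed in the alpha mass: new group
            tables.append(0)
        tables[k] += 1
        assign.append(k)
    return assign

print(crp_assign(10, alpha=1.0, rng=random.Random(1)))
```

Larger alpha favors more groups; inference in a model like BNPM couples such a prior with a likelihood over the observed edges so the data determine both the grouping and the group count.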
|
1403.0468 | Identification of Structural Model for Chaotic Systems | math.DS cs.CE cs.SY nlin.CD | This article studies a constructive method for the structural
identification of systems with chaotic dynamics. It is shown that
reconstructed attractors are a source of information not only about the
dynamics but also about the very form of candidate models, which can be
identified from the attractors themselves. It is known that knowledge of the
symmetry group allows one to specify the form of a minimal system, and the
group transformation can be found in the reconstructed attractor. An affine
system is selected as the basic model, and the type of nonlinearity is the
subject of calculation. A theoretical analysis is performed, and the
possibility of constructing reduced models on the central invariant manifold
is proved. An algorithm is developed for detecting the observed symmetry in
the attractor. The results are applied to the identification of real systems.
|
1403.0481 | Support Vector Machine Model for Currency Crisis Discrimination | cs.LG stat.ML | Support Vector Machine (SVM) is a powerful classification technique based on
the idea of structural risk minimization. The use of a kernel function enables
the curse of dimensionality to be addressed. However, the proper kernel
function for a given problem depends on the specific dataset, and there is no
general method for choosing it. In this paper, SVM is used to build empirical
models of the currency crisis in Argentina. An estimation technique is
developed by training the model on a real-life dataset, which provides
reasonably accurate model outputs and helps policy makers identify situations
in which a currency crisis may happen. Third- and fourth-order polynomial
kernels are generally the best choice for achieving high generalization of
classifier performance. SVM is a mature technique, with algorithms that are
simple and easy to implement, tolerate the curse of dimensionality, and show
good empirical performance. The satisfactory results show that the currency
crisis situation is properly emulated using only a small fraction of the
database, and the model could be used as an evaluation tool as well as an
early warning system. To the best of our knowledge, this is the first work on
an SVM approach for currency crisis evaluation of Argentina.
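The polynomial kernel the abstract singles out, and the SVM decision function it plugs into, can be written out directly. The kernel formula is standard; the support vectors, multipliers and bias below are illustrative values of our own, not parameters fitted to any currency-crisis data:

```python
def poly_kernel(x, y, degree=3, coef0=1.0):
    """Polynomial kernel K(x, y) = (x . y + coef0) ** degree; degrees
    3 and 4 are the family the paper reports working best."""
    return (sum(a * b for a, b in zip(x, y)) + coef0) ** degree

def decision(x, support_vectors, alphas, labels, bias, degree=3):
    """SVM decision value: sum_i alpha_i * y_i * K(sv_i, x) + b.
    A positive value classifies x into the +1 class."""
    s = sum(a * y * poly_kernel(sv, x, degree)
            for a, y, sv in zip(alphas, labels, support_vectors))
    return s + bias

# Two made-up support vectors, one per class.
svs = [[1.0, 0.0], [0.0, 1.0]]
print(decision([1.0, 0.0], svs, alphas=[1.0, 1.0],
               labels=[+1, -1], bias=0.0))  # → 7.0 (firmly in the +1 class)
```

The kernel lets the classifier act as if the inputs were mapped into a much higher-dimensional polynomial feature space without ever constructing that space, which is how the curse of dimensionality is sidestepped.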
|
1403.0485 | Face Recognition Methods & Applications | cs.CV | Face recognition presents a challenging problem in the field of image
analysis and computer vision. Securing information is becoming increasingly
important and difficult. Security cameras are now common in airports, offices,
universities, ATMs, banks, and other locations with a security system. Face
recognition is a biometric technique used to identify or verify a person from
a digital image, and such systems are widely used in security. A face
recognition system should be able to automatically detect a face in an image,
extract its features, and then recognize it regardless of lighting,
expression, illumination, ageing, transformations (translation, rotation and
scaling) and pose, which is a difficult task. This paper contains three
sections. The first describes common methods such as holistic matching,
feature extraction and hybrid methods. The second describes applications with
examples, and the third outlines future research directions for face
recognition.
|
1403.0500 | Automating Fault Tolerance in High-Performance Computational Biological
Jobs Using Multi-Agent Approaches | cs.DC cs.CE cs.MA | Background: Large-scale biological jobs on high-performance computing systems
require manual intervention if one or more computing cores on which they
execute fail. This places not only a cost on the maintenance of the job, but
also a cost on the time taken for reinstating the job and the risk of losing
data and execution accomplished by the job before it failed. Approaches which
can proactively detect computing core failures and take action to relocate the
computing core's job onto reliable cores can make a significant step towards
automating fault tolerance.
Method: This paper describes an experimental investigation into the use of
multi-agent approaches for fault tolerance. Two approaches are studied, the
first at the job level and the second at the core level. The approaches are
investigated for single core failure scenarios that can occur in the execution
of parallel reduction algorithms on computer clusters. A third approach is
proposed that incorporates multi-agent technology both at the job and core
level. Experiments are pursued in the context of genome searching, a popular
computational biology application.
Result: The key conclusion is that the approaches proposed are feasible for
automating fault tolerance in high-performance computing systems with minimal
human intervention. In a typical experiment in which the fault tolerance is
studied, centralised and decentralised checkpointing approaches on an average
add 90% to the actual time for executing the job. On the other hand, in the
same experiment the multi-agent approaches add only 10% to the overall
execution time.
|