id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
1403.5118 | Geotagged tweets to inform a spatial interaction model: a case study of
museums | stat.ME cs.CY cs.SI | This paper explores the potential of volunteered geographical information
from social media for informing geographical models of behavior, based on a
case study of museums in Yorkshire, UK. A spatial interaction model of visitors
to 15 museums from 179 administrative zones is constructed to test this
potential. The main input dataset comprises geo-tagged messages harvested using
the Twitter Streaming Application Programming Interface (API), filtered,
analyzed and aggregated to allow direct comparison with the model's output.
Comparison between model output and tweet information allowed the calibration
of model parameters to optimize the fit between flows to museums inferred from
tweets and flow matrices generated by the spatial interaction model. We
conclude that volunteered geographic information from social media sites has
great potential for informing geographical models of behavior, especially if
the volume of geo-tagged social media messages continues to increase. However,
we caution that volunteered geographical information from social media has
some major limitations, so it should be used only as a supplement to more
consistent data sources or when official datasets are unavailable.
|
1403.5142 | Interactive Debugging of ASP Programs | cs.AI | Broad application of answer set programming (ASP) for declarative problem
solving requires the development of tools supporting the coding process.
Program debugging is one of the crucial activities within this process.
Recently suggested ASP debugging approaches allow efficient computation of
possible explanations of a fault. However, even for a small program a debugger
might return a large number of possible explanations and selection of the
correct one must be done manually. In this paper we present an interactive
query-based ASP debugging method which extends previous approaches and finds a
preferred explanation by means of observations. The system queries a programmer
whether a set of ground atoms must be true in all (cautiously) or some
(bravely) answer sets of the program. Since some queries can be more
informative than others, we discuss query selection strategies which, given
the user's preferences for an explanation, can find the best query, that is,
the query whose answer reduces the overall number of queries required to
identify a preferred explanation.
|
1403.5156 | Synergy and redundancy in the Granger causal analysis of dynamical
networks | q-bio.QM cs.IT math.IT physics.data-an | We analyze by means of Granger causality the effect of synergy and redundancy
in the inference (from time series data) of the information flow between
subsystems of a complex network. Whilst we show that fully conditioned Granger
causality is not affected by synergy, the pairwise analysis fails to put in
evidence synergetic effects.
In cases when the number of samples is low, thus making the fully conditioned
approach unfeasible, we show that partially conditioned Granger causality is an
effective approach if the set of conditioning variables is properly chosen. We
consider here two different strategies (based either on informational content
for the candidate driver or on selecting the variables with highest pairwise
influences) for partially conditioned Granger causality and show that depending
on the data structure either one or the other might be valid. On the other
hand, we observe that fully conditioned approaches do not work well in the
presence of redundancy; we therefore suggest separating the pairwise links
into two subsets: those corresponding to indirect connections of the fully
conditioned Granger causality (which should thus be excluded), and links that
can be ascribed to redundancy effects and that, together with the results from
the fully conditioned approach, provide a better description of the causality
pattern in the presence of redundancy. We finally apply these methods to two
different real
datasets. First, analyzing electrophysiological data from an epileptic brain,
we show that synergetic effects are dominant just before seizure occurrences.
Second, our analysis applied to gene expression time series from HeLa culture
shows that the underlying regulatory networks are characterized by both
redundancy and synergy.
|
1403.5162 | General Centrality in a hypergraph | cs.SI math.CO physics.soc-ph | The goal of this paper is to present a centrality measurement for the nodes
of a hypergraph, by using existing literature which extends eigenvector
centrality from a graph to a hypergraph, and literature which gives a general
centrality measurement for a graph. We will use this measurement to say more
about the number of communications in a hypergraph, to implement a learning
mechanism, and to construct certain networks.
|
1403.5169 | Defuzzify firstly or finally: Does it matter in fuzzy DEMATEL under
uncertain environment? | cs.AI | Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is widely
used in many real applications. Thanks to its ability to handle uncertain
information in decision making efficiently, the fuzzy DEMATEL method has been
studied heavily. Recently, Dytczak and Ginda suggested defuzzifying the fuzzy
numbers first and then using the classical DEMATEL to obtain the final result.
In this short paper, we show that this is not reasonable in some situations:
the results of defuzzification at the first step do not coincide with the
results of defuzzification at the final step. It seems preferable to defuzzify
at the final step in fuzzy DEMATEL.
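The order-dependence this abstract describes can be seen in a toy scalar
calculation. As an illustrative sketch (the values and the scalar reduction
are assumptions, not taken from the paper), take a triangular fuzzy number
defuzzified by the centroid (l + m + u)/3 and the scalar analogue
t(d) = d/(1 - d) of the DEMATEL total-relation operator T = D(I - D)^{-1};
because t is nonlinear, the two defuzzification orders disagree:

```python
def centroid(l, m, u):
    # Centroid defuzzification of a triangular fuzzy number (l, m, u).
    return (l + m + u) / 3.0

def total_relation(d):
    # Scalar analogue of the DEMATEL total-relation operator T = D (I - D)^{-1}.
    return d / (1.0 - d)

l, m, u = 0.1, 0.2, 0.6   # an illustrative triangular fuzzy influence value

# Defuzzify first, then apply the (crisp) DEMATEL step:
first = total_relation(centroid(l, m, u))

# Apply the DEMATEL step component-wise, then defuzzify:
final = centroid(total_relation(l), total_relation(m), total_relation(u))

print(first, final)                  # the two orders give different results
print(abs(first - final) > 1e-6)
```

Since the total-relation step is nonlinear, it does not commute with the
(linear) centroid, which is the essence of the paper's observation.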
|
1403.5172 | SMT-Based Bounded Model Checking of Fixed-Point Digital Controllers | cs.SY cs.SE | Digital controllers have several advantages with respect to their flexibility
and design simplicity. However, they are subject to problems that are not
faced by analog controllers. In particular, these problems are related to the
finite word-length implementation that might lead to overflows, limit cycles,
and time constraints in fixed-point processors. This paper proposes a new
method to detect design errors in digital controllers using a state-of-the-art
bounded model checker based on satisfiability modulo theories. The experiments
with digital controllers for a ball and beam plant demonstrate that the
proposed method is more effective in finding errors in digital controllers
than other existing approaches based on traditional simulation tools.
|
1403.5180 | Inverse optimal control with polynomial optimization | math.OC cs.SY | In the context of optimal control, we consider the inverse problem of
Lagrangian identification given system dynamics and optimal trajectories. Many
of its theoretical and practical aspects are still open. Potential applications
are very broad as a reliable solution to the problem would provide a powerful
modeling tool in many areas of experimental science. We propose to use the
Hamilton-Jacobi-Bellman sufficient optimality conditions for the direct problem
as a tool for analyzing the inverse problem and propose a general method that
attempts to solve it numerically with techniques of polynomial optimization
and linear matrix inequalities. The relevance of the method is illustrated
based on simulations on academic examples under various settings.
|
1403.5195 | Experimental Implementation of an Invariant Extended Kalman Filter-based
Scan Matching SLAM | cs.SY cs.RO | We describe an application of the Invariant Extended Kalman Filter (IEKF)
design methodology to the scan matching SLAM problem. We review the theoretical
foundations of the IEKF and its practical benefit of guaranteeing robustness
to poor state estimates, then implement the filter on a wheeled robot hardware
platform. The proposed design is successfully validated in experimental
testing.
|
1403.5199 | Obtaining Information about Queries behind Views and Dependencies | cs.DB cs.LO | We consider the problems of finding and determining certain query answers and
of determining containment between queries; each problem is formulated in
presence of materialized views and dependencies under the closed-world
assumption. We show a tight relationship between the problems in this setting.
Further, we introduce algorithms for solving each problem for those inputs
where all the queries and views are conjunctive, and the dependencies are
embedded weakly acyclic. We also determine the complexity of each problem under
the security-relevant complexity measure introduced by Zhang and Mendelzon in
2005. The problems studied in this paper are fundamental in ensuring correct
specification of database access-control policies, in particular in case of
fine-grained access control. Our approaches can also be applied in the areas of
inference control, secure data publishing, and database auditing.
|
1403.5204 | Adaptive Control of Robot Manipulators With Uncertain Kinematics and
Dynamics | cs.SY cs.RO math.OC | In this paper, we investigate the adaptive control problem for robot
manipulators with both the uncertain kinematics and dynamics. We propose two
adaptive control schemes to realize the objective of task-space trajectory
tracking irrespective of the uncertain kinematics and dynamics. The proposed
controllers have the desirable separation property, and we also show that the
first adaptive controller with appropriate modifications can yield improved
performance, without the expense of conservative gain choice. The performance
of the proposed controllers is shown by numerical simulations.
|
1403.5206 | What is Tumblr: A Statistical Overview and Comparison | cs.SI physics.soc-ph | Tumblr, as one of the most popular microblogging platforms, has gained
momentum recently. It is reported to have 166.4 million users and 73.4
billion posts as of January 2014. While many articles about Tumblr have been
published in the major press, there is little scholarly work so far. In this
paper, we provide some pioneering analysis of Tumblr from a variety of
aspects. We study
the social network structure among Tumblr users, analyze its user generated
content, and describe reblogging patterns to analyze its user behavior. We aim
to provide a comprehensive statistical overview of Tumblr and compare it with
other popular social services, including blogosphere, Twitter and Facebook, in
answering a couple of key questions: What is Tumblr? How is Tumblr different
from other social media networks? In short, we find Tumblr has richer content
than other microblogging platforms, and it contains hybrid
characteristics of social networking, traditional blogosphere, and social
media. This work serves as an early snapshot of Tumblr that later work can
leverage.
|
1403.5287 | Online Local Learning via Semidefinite Programming | cs.LG | In many online learning problems we are interested in predicting local
information about some universe of items. For example, we may want to know
whether two items are in the same cluster rather than computing an assignment
of items to clusters; we may want to know which of two teams will win a game
rather than computing a ranking of teams. Although finding the optimal
clustering or ranking is typically intractable, it may be possible to predict
the relationships between items as well as if you could solve the global
optimization problem exactly.
Formally, we consider an online learning problem in which a learner
repeatedly guesses a pair of labels (l(x), l(y)) and receives an adversarial
payoff depending on those labels. The learner's goal is to receive a payoff
nearly as good as the best fixed labeling of the items. We show that a simple
algorithm based on semidefinite programming can obtain asymptotically optimal
regret in the case where the number of possible labels is O(1), resolving an
open problem posed by Hazan, Kale, and Shalev-Shwartz. Our main technical
contribution is a novel use and analysis of the log determinant regularizer,
exploiting the observation that log det(A + I) upper bounds the entropy of any
distribution with covariance matrix A.
|
1403.5290 | Nonlinear Feedback Control of Axisymmetric Aerial Vehicles | cs.SY math.DS | We investigate the use of simple aerodynamic models for the feedback control
of aerial vehicles with large flight envelopes. Thrust-propelled vehicles with
a body shape symmetric with respect to the thrust axis are considered. Upon a
condition on the aerodynamic characteristics of the vehicle, we show that the
equilibrium orientation can be explicitly determined as a function of the
desired flight velocity. This allows for the adaptation of previously proposed
control design approaches based on the thrust direction control paradigm.
Simulation results conducted by using measured aerodynamic characteristics of
quasi-axisymmetric bodies illustrate the soundness of the proposed approach.
|
1403.5315 | A Deterministic Annealing Optimization Approach for Witsenhausen's and
Related Decentralized Control Settings | cs.SY cs.IT math.IT math.OC | This paper studies the problem of mapping optimization in decentralized
control problems. A global optimization algorithm is proposed based on the
ideas of "deterministic annealing" - a powerful non-convex optimization
framework derived from information theoretic principles with analogies to
statistical physics. The key idea is to randomize the mappings and control the
Shannon entropy of the system during optimization. The entropy constraint is
gradually relaxed in a deterministic annealing process while tracking the
minimum, to obtain the ultimate deterministic mappings. Deterministic annealing
has been successfully employed in several problems including clustering, vector
quantization, regression, as well as Witsenhausen's counterexample in our
recent work [1]. We extend our method to a more involved setting, a variation of
Witsenhausen's counterexample, where there is a side channel between the two
controllers. The problem can be viewed as a two-stage cancellation problem. We
demonstrate that there exist complex strategies that can exploit the side
channel efficiently, obtaining significant gains over the best affine and known
non-linear strategies.
|
1403.5326 | Analytic Expressions and Bounds for Special Functions and Applications
in Communication Theory | cs.IT math.IT | This work is devoted to the derivation of novel analytic expressions and
bounds for a family of special functions that are useful in wireless
communication theory. These functions are the well-known Nuttall
$Q{-}$function, the incomplete Toronto function, the Rice $Ie$-function and the
incomplete Lipschitz-Hankel integrals.
Capitalizing on the offered results, useful identities are additionally
derived between the above functions and the Humbert $\Phi_{1}$ function as
well as for specific cases of the Kamp\'{e} de F\'{e}riet function. These
functions can be considered useful mathematical
tools that can be employed in applications relating to the analytic performance
evaluation of modern wireless communication systems such as cognitive radio,
cooperative and free-space optical communications as well as radar, diversity
and multi-antenna systems. As an example, new closed-form expressions are
derived for the outage probability over non-linear generalized fading channels,
namely, $\alpha{-}\eta{-}\mu$, $\alpha{-}\lambda{-}\mu$ and
$\alpha{-}\kappa{-}\mu$ as well as for specific cases of the $\eta{-}\mu$ and
$\lambda{-}\mu$ fading channels. Furthermore, simple expressions are presented
for the channel capacity for the truncated channel inversion with fixed rate
and the corresponding optimum cut-off signal-to-noise ratio for single- and
multi-antenna communication systems over Rician fading channels. The accuracy
and validity of the derived expressions are justified through extensive
comparisons with respective numerical results.
|
1403.5330 | Differential Dual-Hop Relaying over Time-Varying Rayleigh-Fading
Channels | cs.IT math.IT | This paper studies dual-hop amplify-and-forward relaying over time-varying
Rayleigh fading channels with differential M-PSK modulation and non-coherent
detection. For the case of "two-symbol" detection, a first order time-series
model is utilized to characterize the time-varying nature of the cascaded
channel. Based on this model, an exact bit error rate (BER) expression is
derived and confirmed with simulation results. The obtained expression shows
that the BER is related to the auto-correlation of the cascaded channel and
that an irreducible error floor exists at high transmit power. To overcome the
error
floor experienced with fast-fading, a nearly optimal multiple-symbol
differential sphere detection (MSDSD) is also developed. The error performance
of MSDSD is illustrated with simulation results under different fading
scenarios.
|
1403.5331 | Differential Amplify-and-Forward Relaying in Time-Varying Rayleigh
Fading Channels | cs.IT cs.SY math.IT | This paper considers the performance of differential amplify-and-forward
(D-AF) relaying over time-varying Rayleigh fading channels. Using the
auto-regressive time-series model to characterize the time-varying nature of
the wireless channels, new weights for the maximum ratio combining (MRC) of the
received signals at the destination are proposed. An expression for the
pairwise error probability (PEP) is provided and used to obtain an
approximation of the total average bit error probability (BEP). The obtained
BEP approximation clearly shows how the system performance depends on the
auto-correlation of the direct and the cascaded channels, and that an
irreducible error floor exists at high signal-to-noise ratio (SNR). Simulation
results also demonstrate that, for
fast-fading channels, the new MRC weights lead to a better performance when
compared to the classical combining scheme. Our analysis is verified with
simulation results in different fading scenarios.
|
1403.5341 | An Information-Theoretic Analysis of Thompson Sampling | cs.LG | We provide an information-theoretic analysis of Thompson sampling that
applies across a broad range of online optimization problems in which a
decision-maker must learn from partial feedback. This analysis inherits the
simplicity and elegance of information theory and leads to regret bounds that
scale with the entropy of the optimal-action distribution. This strengthens
preexisting results and yields new insight into how information improves
performance.
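As a concrete instance of the partial-feedback problems this analysis covers,
here is a minimal sketch of Thompson sampling for a two-armed Bernoulli
bandit; the arm means, horizon, and Beta(1, 1) priors are illustrative
assumptions, not taken from the paper:

```python
import random

random.seed(0)
true_means = [0.3, 0.7]   # unknown to the learner (illustrative values)
alpha = [1, 1]            # Beta(alpha, beta) posterior per arm,
beta = [1, 1]             # starting from a uniform Beta(1, 1) prior
pulls = [0, 0]

for _ in range(2000):
    # Sample a plausible mean from each arm's posterior and play the argmax.
    samples = [random.betavariate(alpha[a], beta[a]) for a in range(2)]
    arm = max(range(2), key=lambda a: samples[a])
    reward = 1 if random.random() < true_means[arm] else 0
    alpha[arm] += reward
    beta[arm] += 1 - reward
    pulls[arm] += 1

# The posterior concentrates on the better arm, which dominates the pulls.
print(pulls)
```

In this toy setting the optimal-action distribution has low entropy (one arm
is clearly best), so the entropy-scaled regret bound the paper derives would
be correspondingly small.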
|
1403.5345 | A Physarum-Inspired Approach to Optimal Supply Chain Network Design at
Minimum Total Cost with Demand Satisfaction | cs.NE | A supply chain is a system which moves products from a supplier to customers.
Supply chains are ubiquitous and play a key role in all economic activities.
Inspired by the biological principles of nutrient distribution in the
protoplasmic networks of the slime mould Physarum polycephalum, we propose a
novel algorithm for supply chain design. The algorithm handles supply networks
where capacity investments and product flows are variables. The networks are
constrained by a need to satisfy product demands. Two features of the slime
mould are adopted in our algorithm. The first is the continuity of a flux
during the iterative process, which is used in real-time update of the costs
associated with the supply links. The second feature is adaptivity. The supply
chain can converge to an equilibrium state when costs are changed. The
practicality and flexibility of our algorithm are illustrated with numerical
examples.
|
1403.5346 | Modeling Collaborations with Persistent Homology | math.AT cs.SI physics.soc-ph | In this paper we present a model based on persistent homology that describes
interactions between mathematicians in terms of collaborations, drawing on
some ideas from classical data analysis.
|
1403.5348 | Coherent-Classical Estimation versus Purely-Classical Estimation for
Linear Quantum Systems | quant-ph cs.SY math.OC | We consider a coherent-classical estimation scheme for a class of linear
quantum systems. It comprises an estimator that is a mixed quantum-classical
system without involving coherent feedback. The estimator yields a classical
estimate of a variable for the quantum plant. We demonstrate that for a passive
plant that can be characterized by annihilation operators only, such
coherent-classical estimation provides no improvement over purely-classical
estimation. An example is also given which shows that if the plant is not
assumed to be an annihilation operator only quantum system, it is possible to
get better estimates with such coherent-classical estimation compared with
purely-classical estimation.
|
1403.5352 | An ESPRIT-Based Approach for 2-D Localization of Incoherently
Distributed Sources in Massive MIMO Systems | cs.IT math.IT | In this paper, an approach of estimating signal parameters via rotational
invariance technique (ESPRIT) is proposed for two-dimensional (2-D)
localization of incoherently distributed (ID) sources in large-scale/massive
multiple-input multiple-output (MIMO) systems. The traditional ESPRIT-based
methods are valid only for one-dimensional (1-D) localization of the ID
sources. By contrast, in the proposed approach the signal subspace is
constructed for estimating the nominal azimuth and elevation
directions-of-arrival and the angular spreads. The proposed estimator enjoys
closed-form expressions and hence bypasses searching over the entire feasible
field. Therefore, it imposes significantly lower computational
complexity than the conventional 2-D estimation approaches. Our analysis shows
that the estimation performance of the proposed approach improves when the
large-scale/massive MIMO systems are employed. The approximate Cram\'{e}r-Rao
bound of the proposed estimator for the 2-D localization is also derived.
Numerical results demonstrate that although the proposed estimation method is
comparable with the traditional 2-D estimators in terms of performance, it
benefits from a remarkably lower computational complexity.
|
1403.5361 | Parameter Estimation of Social Forces in Crowd Dynamics Models via a
Probabilistic Method | physics.data-an cs.SI math.PR math.ST physics.soc-ph stat.TH | Focusing on a specific crowd dynamics situation, including real life
experiments and measurements, our paper targets a twofold aim: (1) we present a
Bayesian probabilistic method to estimate the value and the uncertainty (in the
form of a probability density function) of parameters in crowd dynamic models
from the experimental data; and (2) we introduce a fitness measure for the
models to classify a couple of model structures (forces) according to their
fitness to the experimental data, preparing the stage for a more general
model-selection and validation strategy inspired by probabilistic data
analysis. Finally, we review the essential aspects of our experimental setup
and measurement technique.
|
1403.5364 | Control Contraction Metrics, Robust Control and Observer Duality | math.OC cs.SY | This paper addresses the problems of stabilization, robust control, and
observer design for nonlinear systems. We build upon a recently proposed method
based on contraction theory and convex optimization, extending the class of
systems to which it is applicable. We prove converse results for mechanical
systems and feedback-linearizable systems. Next we consider robust control, and
give a simple construction of a controller guaranteeing an L2-gain condition,
and discuss connections to nonlinear H-infinity control. Finally, we discuss a
"duality" result between nonlinear stabilization problems and observer
construction, in the process constructing globally stable reduced-order
observers for a class of nonlinear systems.
|
1403.5370 | Using n-grams models for visual semantic place recognition | stat.ML cs.CV cs.LG | The aim of this paper is to present a new method for visual place
recognition. Our system combines global image characterization and visual
words, which allows us to use efficient Bayesian filtering methods to integrate
several images. More precisely, we extend the classical HMM model with
techniques inspired by the field of Natural Language Processing. This paper
presents our system and the Bayesian filtering algorithm. The performance of
our system and the influence of the main parameters are evaluated on a standard
database. The discussion highlights the interest of using such models and
proposes improvements.
|
1403.5374 | Transverse Contraction Criteria for Stability of Nonlinear Hybrid Limit
Cycles | math.OC cs.RO cs.SY | In this paper, we derive differential conditions guaranteeing the orbital
stability of nonlinear hybrid limit cycles. These conditions are represented as
a series of pointwise linear matrix inequalities (LMI), enabling the search for
stability certificates via convex optimization tools such as sum-of-squares
programming. Unlike traditional Lyapunov-based methods, the transverse
contraction framework developed in this paper enables proof of stability for
hybrid systems, without prior knowledge of the exact location of the stable
limit cycle in state space. This methodology is illustrated on a dynamic
walking example.
|
1403.5381 | ({\alpha}, k)-Minimal Sorting and Skew Join in MPI and MapReduce | cs.DB | As computer clusters are found to be highly effective for handling massive
datasets, the design of efficient parallel algorithms for such a computing
model is of great interest. We consider ({\alpha}, k)-minimal algorithms for
such a purpose, where {\alpha} is the number of rounds in the algorithm, and k
is a bound on the deviation from perfect workload balance. We focus on new
({\alpha}, k)-minimal algorithms for sorting and skew equijoin operations for
computer clusters. To the best of our knowledge the proposed sorting and skew
join algorithms achieve the best workload balancing guarantee when compared to
previous works. Our empirical study shows that they are close to optimal in
workload balancing. In particular, our proposed sorting algorithm is around 25%
more efficient than the state-of-the-art Terasort algorithm and achieves a
workload distribution that is more even by over 50%.
|
1403.5384 | NUROA: A Numerical Roadmap Algorithm | cs.RO | Motion planning has been studied for nearly four decades now. Complete,
combinatorial motion planning approaches are theoretically well-rooted with
completeness guarantees but they are hard to implement. Sampling-based and
heuristic methods are easy to implement and quite simple to customize but they
lack completeness guarantees. Can the best of both worlds be ever achieved,
particularly for mission critical applications such as robotic surgery, space
explorations, and handling hazardous material? In this paper, we answer
affirmatively to that question. We present a new methodology, NUROA, to
numerically approximate Canny's roadmap, which is a network of
one-dimensional algebraic curves. Our algorithm encloses the roadmap with a
chain of tiny boxes each of which contains a piece of the roadmap and whose
connectivity captures the roadmap connectivity. It starts by enclosing the
entire space with a box. In each iteration, remaining boxes are shrunk on all
sides and then split into smaller sized boxes. Those boxes that are empty are
detected in the shrink phase and removed. The algorithm terminates when all
remaining boxes are smaller than a resolution that can be either given as input
or automatically computed using root separation lower bounds. Shrink operation
is cast as a polynomial optimization with semialgebraic constraints, which is
in turn transformed into a (series of) semidefinite programs (SDP) using
Lasserre's approach. NUROA's success is due to fast SDP solvers. NUROA
correctly captured the connectivity of multiple curves/skeletons whereas
competitors such as IBEX and Realpaver failed in some cases. Since boxes are
independent from one another, NUROA can be parallelized particularly on GPUs.
NUROA is available as an open source package at http://nuroa.sourceforge.net/.
|
1403.5403 | A Non-Local Structure Tensor Based Approach for Multicomponent Image
Recovery Problems | cs.CV cs.NA math.OC | Non-Local Total Variation (NLTV) has emerged as a useful tool in variational
methods for image recovery problems. In this paper, we extend the NLTV-based
regularization to multicomponent images by taking advantage of the Structure
Tensor (ST) resulting from the gradient of a multicomponent image. The proposed
approach allows us to penalize the non-local variations, jointly for the
different components, through various $\ell_{1,p}$ matrix norms with $p \ge 1$.
To facilitate the choice of the hyper-parameters, we adopt a constrained convex
optimization approach in which we minimize the data fidelity term subject to a
constraint involving the ST-NLTV regularization. The resulting convex
optimization problem is solved with a novel epigraphical projection method.
This formulation can be efficiently implemented thanks to the flexibility
offered by recent primal-dual proximal algorithms. Experiments are carried out
for multispectral and hyperspectral images. The results demonstrate the
interest of introducing a non-local structure tensor regularization and show
that the proposed approach leads to significant improvements in terms of
convergence speed over current state-of-the-art methods.
|
1403.5427 | The quasispecies regime for the simple genetic algorithm with ranking
selection | math.PR cs.NE | We study the simple genetic algorithm with a ranking selection mechanism
(linear ranking or tournament). We denote by $\ell$ the length of the
chromosomes, by $m$ the population size, by $p_C$ the crossover probability and
by $p_M$ the mutation probability. We introduce a parameter $\sigma$, called
the selection drift, which measures the selection intensity of the fittest
chromosome. We show that the dynamics of the genetic algorithm depend in a
critical way on the parameter $$\pi \,=\,\sigma(1-p_C)(1-p_M)^\ell\,.$$ If
$\pi<1$, then the genetic algorithm operates in a disordered regime: an
advantageous mutant disappears with probability larger than $1-1/m^\beta$,
where $\beta$ is a positive exponent. If $\pi>1$, then the genetic algorithm
operates in a quasispecies regime: an advantageous mutant invades a positive
fraction of the population with probability larger than a constant $p^*$ (which
does not depend on $m$). We estimate next the probability of the occurrence of
a catastrophe (the whole population falls below a fitness level which was
previously reached by a positive fraction of the population). The asymptotic
results suggest the following rules: $\pi=\sigma(1-p_C)(1-p_M)^\ell$ should be
slightly larger than $1$; $p_M$ should be of order $1/\ell$; $m$ should be
larger than $\ell\ln\ell$; the running time should be of exponential order in
$m$. The first condition requires that $ \ell p_M +p_C< \ln\sigma$. These
conclusions must be taken with great care: they come from an asymptotic regime,
and it is a formidable task to understand the relevance of this regime for a
real-world problem. At least, we hope that these conclusions provide
interesting guidelines for the practical implementation of the simple genetic
algorithm.
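The regime condition above is simple enough to check numerically. A minimal
sketch with illustrative parameter values (the numbers are assumptions, not
taken from the paper):

```python
import math

# Illustrative parameter values, not taken from the paper.
ell = 100          # chromosome length
p_M = 1.0 / ell    # mutation probability of order 1/ell, as suggested
p_C = 0.3          # crossover probability
sigma = 4.0        # selection drift

# Critical parameter: pi = sigma * (1 - p_C) * (1 - p_M)^ell.
pi = sigma * (1 - p_C) * (1 - p_M) ** ell
regime = "quasispecies" if pi > 1 else "disordered"
print(pi, regime)

# The heuristic requirement ell * p_M + p_C < ln(sigma) approximates pi > 1.
print(ell * p_M + p_C < math.log(sigma))
```

With these values pi is slightly above 1, matching the paper's rule that pi
should be slightly larger than 1 for the quasispecies regime.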
|
1403.5462 | Saliency Based Control in Random Feature Networks | cs.SY | The ability to rapidly focus attention and react to salient environmental
features enables animals to move agilely through their habitats. To replicate
this kind of high-performance control of movement in synthetic systems, we
propose a new approach to feedback control that bases control actions on
randomly perceived features. Connections will be made with recent work
incorporating communication protocols into networked control systems. The
concepts of {\em random channel controllability} and {\em random channel
observability} for LTI control systems are introduced and studied.
|
1403.5473 | Image Fusion Techniques in Remote Sensing | cs.CV | Remote sensing image fusion is an effective way to use a large volume of data
from multisensor images. Most earth satellites such as SPOT, Landsat 7, IKONOS
and QuickBird provide both panchromatic (Pan) images at a higher spatial
resolution and multispectral (MS) images at a lower spatial resolution. Many
remote sensing applications require both high spatial and high spectral
resolution, especially GIS-based applications, and an effective image fusion
technique can produce such remotely sensed images. Image fusion is the
combination of two or more different images into a new image, using a certain
algorithm, to obtain more and better information about an object or a study
area than any single input image provides. Image fusion is performed at three
different processing levels, namely pixel level, feature level and decision
level, according to the stage at which the fusion takes place. There are many
image fusion methods that can be used to produce high-resolution multispectral
images from a high-resolution Pan image and low-resolution multispectral
images. This paper explores the major remote sensing data fusion techniques at
the pixel level and reviews the concept, principles, limitations and
advantages of each technique. The focus is on traditional techniques such as
intensity-hue-saturation (IHS), Brovey, principal component analysis (PCA) and
wavelet methods.
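Of the traditional techniques surveyed, the Brovey transform is the simplest
to state: each MS band is rescaled by the ratio of the Pan image to the sum of
the MS bands. A minimal sketch (the function name and array shapes are
assumptions for illustration):

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-12):
    """Brovey transform: scale each MS band by Pan / sum(MS bands),
    injecting Pan's spatial detail while preserving spectral ratios.

    ms  : array (bands, H, W), low-resolution MS resampled to the Pan grid
    pan : array (H, W), high-resolution panchromatic image
    """
    total = ms.sum(axis=0) + eps   # per-pixel sum over bands
    return ms * (pan / total)      # broadcasts over the band axis
```

A useful sanity check: summing the fused bands at each pixel recovers the Pan
image (up to the small eps guard).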
|
1403.5475 | An Efficient Method for Face Recognition System In Various Assorted
Conditions | cs.CV | In the early stages, face verification was done using simple geometric
models, but the verification process has since developed into a science of
sophisticated geometric representation and matching procedures. In recent
years, technology has brought face recognition systems into sharp focus.
Researchers are currently conducting intensive research on face recognition
for wide-area imagery taken under drastic illumination variation. The proposed
face recognition system consists of a novel illumination-insensitive
preprocessing method, a hybrid Fourier-based facial feature extraction and a
score fusion scheme. We have verified face recognition under different
lighting conditions (day or night) and at different locations (indoor or
outdoor). Preprocessing, image detection, feature extraction and face
recognition are the stages of the face verification system. This paper focuses
mainly on the issue of robustness to lighting variations. The proposed system
obtains an average verification rate of 88.1% on two-dimensional images under
different lighting conditions.
|
1403.5488 | Missing Data Prediction and Classification: The Use of Auto-Associative
Neural Networks and Optimization Algorithms | cs.NE cs.LG | This paper presents methods which are aimed at finding approximations to
missing data in a dataset by using optimization algorithms to optimize the
network parameters after which prediction and classification tasks can be
performed. The optimization methods that are considered are genetic algorithm
(GA), simulated annealing (SA), particle swarm optimization (PSO), random
forest (RF) and negative selection (NS) and these methods are individually used
in combination with auto-associative neural networks (AANN) for missing data
estimation and the results obtained are compared. The methods suggested use the
optimization algorithms to minimize an error function derived from training the
auto-associative neural network during which the interrelationships between the
inputs and the outputs are obtained and stored in the weights connecting the
different layers of the network. The error function is expressed as the square
of the difference between the actual observations and predicted values from an
auto-associative neural network. In the event of missing data, not all values
of the actual observations are known; hence, the error function is decomposed
into terms depending on the known and the unknown variable values. A
multi-layer perceptron (MLP) architecture is used for the neural networks,
which are trained with the scaled conjugate gradient (SCG) method. Prediction
accuracy is determined
by mean squared error (MSE), root mean squared error (RMSE), mean absolute
error (MAE), and correlation coefficient (r) computations. Accuracy in
classification is obtained by plotting ROC curves and calculating the areas
under these. Analysis of the results shows that the approach using RF with AANN
produces the most accurate predictions and classifications while on the other
end of the scale is the approach which entails using NS with AANN.
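The core estimation step described above, minimizing the network's
reconstruction error over the unknown entries, can be sketched with a toy
stand-in. Everything here is hypothetical for illustration: the "network" is
a hand-coded linear map encoding the relation x2 = 2*x1, and a dense grid
search stands in for the GA/SA/PSO optimizers the paper compares:

```python
import numpy as np

def aann(x):
    """Toy stand-in for a trained auto-associative network that has
    learned the (assumed) interrelationship x2 = 2 * x1."""
    x1, x2 = x
    return np.array([x2 / 2.0, 2.0 * x1])

def reconstruction_error(x):
    """Square of the difference between the record and its reconstruction."""
    return float(np.sum((x - aann(x)) ** 2))

def estimate_missing(known_x1, candidates):
    """Minimize the error over the unknown entry; a real system would run
    GA, SA or PSO here instead of a grid search."""
    errors = [reconstruction_error(np.array([known_x1, c])) for c in candidates]
    return candidates[int(np.argmin(errors))]
```

With the known entry x1 = 3.0, the minimizer lands near x2 = 6.0, recovering
the encoded relationship.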
|
1403.5508 | Towards Active Logic Programming | cs.AI | In this paper we present the new logic programming language DALI, aimed at
defining agents and agent systems. A main design objective for DALI has been
that of introducing in a declarative fashion all the essential features, while
keeping the language as close as possible to the syntax and semantics of the
plain Horn--clause language. Special atoms and rules have been introduced, for
representing: external events, to which the agent is able to respond
(reactivity); actions (reactivity and proactivity); internal events (previous
conclusions which can trigger further activity); past and present events (to be
aware of what has happened). An extended resolution is provided, so that a DALI
agent is able to answer queries like in the plain Horn--clause language, but is
also able to cope with the different kinds of events, and exhibit a (rational)
reactive and proactive behaviour.
|
1403.5521 | Scenario optimization with certificates and applications to anti-windup
design | cs.SY math.OC | In this paper, we introduce a significant extension, called scenario with
certificates (SwC), of the so-called scenario approach for uncertain
optimization problems. This extension is motivated by the observation that in
many control problems only some of the optimization variables are used in the
design phase, while the other variables play the role of certificates. Examples
are all those control problems that can be reformulated in terms of linear
matrix inequalities involving parameter-dependent Lyapunov functions. These
control problems include static anti-windup compensator design for uncertain
linear systems with input saturation, where the goal is the minimization of the
nonlinear gain from an exogenous input to a performance output. The main
contribution of this paper is to show that randomization is a useful tool,
specifically for anti-windup design, to make the overall approach less
conservative compared to its robust counterpart. In particular, we demonstrate
that the scenario with certificates reformulation is appealing because it
provides a way to implicitly design the parameter-dependent Lyapunov functions.
Finally, to further reduce the computational cost of this one-shot approach, we
present a sequential randomized algorithm for iteratively solving this problem.
|
1403.5553 | Slepian Spatial-Spectral Concentration on the Ball | math.CA astro-ph.IM cs.IT math.IT | We formulate and solve the Slepian spatial-spectral concentration problem on
the three-dimensional ball. Both the standard Fourier-Bessel and also the
Fourier-Laguerre spectral domains are considered since the latter exhibits a
number of practical advantages (spectral decoupling and exact computation). The
Slepian spatial and spectral concentration problems are formulated as
eigenvalue problems, the eigenfunctions of which form an orthogonal family of
concentrated functions. Equivalence between the spatial and spectral problems
is shown. The spherical Shannon number on the ball is derived, which acts as
the analog of the space-bandwidth product in the Euclidean setting, giving an
estimate of the number of concentrated eigenfunctions and thus the dimension of
the space of functions that can be concentrated in both the spatial and
spectral domains simultaneously. Various symmetries of the spatial region are
considered that reduce considerably the computational burden of recovering
eigenfunctions, either by decoupling the problem into smaller subproblems or by
affording analytic calculations. The family of concentrated eigenfunctions
forms a Slepian basis that can be used to represent concentrated signals
efficiently. We illustrate our results with numerical examples and show that
the Slepian basis indeed permits a sparse representation of concentrated
signals.
|
1403.5556 | Learning to Optimize via Information-Directed Sampling | cs.LG | We propose information-directed sampling -- a new approach to online
optimization problems in which a decision-maker must balance between
exploration and exploitation while learning from partial feedback. Each action
is sampled in a manner that minimizes the ratio between squared expected
single-period regret and a measure of information gain: the mutual information
between the optimal action and the next observation. We establish an expected
regret bound for information-directed sampling that applies across a very
general class of models and scales with the entropy of the optimal action
distribution. We illustrate through simple analytic examples how
information-directed sampling accounts for kinds of information that
alternative approaches do not adequately address and that this can lead to
dramatic performance gains. For the widely studied Bernoulli, Gaussian, and
linear bandit problems, we demonstrate state-of-the-art simulation performance.
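The selection rule described above picks the action minimizing the ratio of
squared expected single-period regret to information gain. The sketch below
uses posterior samples of Bernoulli arm means and a variance-based surrogate
for the information gain (a common cheap proxy, not necessarily the exact
mutual-information computation used in the paper):

```python
import numpy as np

def ids_action(samples):
    """Approximate information-directed sampling for a K-armed bandit.

    samples : array (M, K) of posterior samples of the arm means.
    Picks argmin_a Delta(a)^2 / g(a), where Delta(a) is the expected
    single-period regret and g(a) = Var_{a*}( E[theta_a | a*] ) is a
    variance-based surrogate for the information gain about the optimal
    action a*.
    """
    M, K = samples.shape
    a_star = samples.argmax(axis=1)              # optimal arm per sample
    best = samples.max(axis=1).mean()            # E[max_a theta_a]
    mean = samples.mean(axis=0)
    delta = best - mean                          # expected regret per arm
    g = np.zeros(K)
    for a in np.unique(a_star):
        mask = a_star == a
        p = mask.mean()                          # P(a* = a)
        cond_mean = samples[mask].mean(axis=0)   # E[theta | a* = a]
        g += p * (cond_mean - mean) ** 2
    return int(np.argmin(delta ** 2 / (g + 1e-12)))
```

Actions with small regret or large potential information gain score well; an
arm that is certainly suboptimal and uninformative is never chosen.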
|
1403.5571 | On the Outage Capacity of Orthogonal Space-time Block Codes Over
Multi-cluster Scattering MIMO Channels | cs.IT math.IT | The multi-cluster scattering MIMO channel is a useful model for pico-cellular
MIMO networks. In this paper, orthogonal space-time block coded transmission
over such a channel is considered, where the effective channel equals the
product of n complex Gaussian matrices. A simple and accurate closed-form
approximation to the channel outage capacity has been derived in this setting.
The result is valid for an arbitrary number n-1 of scattering clusters and
an arbitrary antenna configuration. Numerical results are provided to study the
relative outage performance between the multi-cluster and the Rayleigh-fading
MIMO channels for which n=1.
|
1403.5590 | Continuous Optimization for Fields of Experts Denoising Works | cs.CV | Several recent papers use image denoising with a Fields of Experts prior to
benchmark discrete optimization methods. We show that a non-linear least
squares solver significantly outperforms all known discrete methods on this
problem.
|
1403.5596 | A Lemma Based Evaluator for Semitic Language Text Summarization Systems | cs.CL cs.IR | Matching texts in highly inflected languages such as Arabic by simple
stemming strategy is unlikely to perform well. In this paper, we present a
strategy for an automatic text matching technique for inflectional languages,
using Arabic as the test case. The system is an extension of the ROUGE test in
which texts are matched at the token's lemma level. The experimental results show
an enhancement of detecting similarities between different sentences having
same semantics but written in different lexical forms.
|
1403.5603 | Forecasting Popularity of Videos using Social Media | cs.LG cs.SI | This paper presents a systematic online prediction method (Social-Forecast)
that is capable of accurately forecasting the popularity of videos promoted by
social media. Social-Forecast explicitly considers the dynamically changing and
evolving propagation patterns of videos in social media when making popularity
forecasts, thereby being situation and context aware. Social-Forecast aims to
maximize the forecast reward, which is defined as a tradeoff between the
popularity prediction accuracy and the timeliness with which a prediction is
issued. The forecasting is performed online and requires no training phase or a
priori knowledge. We analytically bound the prediction performance loss of
Social-Forecast as compared to that obtained by an omniscient oracle and prove
that the bound is sublinear in the number of video arrivals, thereby
guaranteeing its short-term performance as well as its asymptotic convergence
to the optimal performance. In addition, we conduct extensive experiments using
real-world data traces collected from the videos shared in RenRen, one of the
largest online social networks in China. These experiments show that our
proposed method outperforms existing view-based approaches for popularity
prediction (which are not context-aware) by more than 30% in terms of
prediction rewards.
|
1403.5607 | Bayesian Optimization with Unknown Constraints | stat.ML cs.LG | Recent work on Bayesian optimization has shown its effectiveness in global
optimization of difficult black-box objective functions. Many real-world
optimization problems of interest also have constraints which are unknown a
priori. In this paper, we study Bayesian optimization for constrained problems
in the general case that noise may be present in the constraint functions, and
the objective and constraints may be evaluated independently. We provide
motivating practical examples, and present a general framework to solve such
problems. We demonstrate the effectiveness of our approach on optimizing the
performance of online latent Dirichlet allocation subject to topic sparsity
constraints, tuning a neural network given test-time memory constraints, and
optimizing Hamiltonian Monte Carlo to achieve maximal effectiveness in a fixed
time, subject to passing standard convergence diagnostics.
|
1403.5616 | Quantum-noise limited communication with low probability of detection | cs.IT math.IT quant-ph | We demonstrate the achievability of a square root limit on the amount of
information transmitted reliably and with low probability of detection (LPD)
over the single-mode lossy bosonic channel if either the eavesdropper's
measurements or the channel itself is subject to the slightest amount of excess
noise. Specifically, Alice can transmit $\mathcal{O}(\sqrt{n})$ bits to Bob
over $n$ channel uses such that Bob's average codeword error probability is
upper-bounded by an arbitrarily small $\delta>0$ while a passive eavesdropper,
Warden Willie, who is assumed to be able to collect all the transmitted photons
that do not reach Bob, has an average probability of detection error that is
lower-bounded by $1/2-\epsilon$ for an arbitrarily small $\epsilon>0$. We
analyze the thermal noise and pure loss channels. The square root law holds for
the thermal noise channel even if Willie employs a quantum-optimal measurement,
while Bob is equipped with a standard coherent detection receiver. We also show
that LPD communication is not possible on the pure loss channel. However, this
result assumes Willie to possess an ideal receiver that is not subject to
excess noise. If Willie is restricted to a practical receiver with a non-zero
dark current, the square root law is achievable on the pure loss channel.
|
1403.5617 | On the Rise and Fall of Online Social Networks | cs.SI physics.soc-ph | The rise and fall of online social networks recently generated an enormous
amount of interest among people, both inside and outside of academia. Gillette
[Businessweek magazine, 2011] did a detailed analysis of MySpace, which started
losing its popularity since 2008. Cannarella and Spechler [ArXiv, 2014] used a
model of disease spread to explain the rise and fall of MySpace. In this paper,
we present a graph theoretical model that may be able to provide an alternative
explanation for the rise and fall of online social networks. Our model is
motivated by the well-known Barabasi-Albert model of generating random
scale-free networks using preferential attachment or `rich-gets-richer'
phenomenon. As shown by our empirical analysis, we conjecture that such an
online social network growth model is inherently flawed as it fails to maintain
the stability of such networks while ensuring their growth. In the process, we
also conjecture that our preferential attachment model exhibits the scale-free
phenomenon.
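The Barabasi-Albert growth process that motivates the model is easy to
sketch: each new node attaches m edges to existing nodes with probability
proportional to their degree ("rich-gets-richer"). A minimal implementation
using the standard repeated-targets trick (function name and defaults are
illustrative):

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow an n-node graph by preferential attachment, m edges per new node.
    `repeated` holds each node once per incident edge, so uniform sampling
    from it is degree-proportional sampling."""
    rng = random.Random(seed)
    edges = []
    repeated = []
    targets = list(range(m))           # first new node links to the m seeds
    for new in range(m, n):
        edges.extend((new, t) for t in targets)
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = set()
        while len(targets) < m:        # m distinct degree-weighted targets
            targets.add(rng.choice(repeated))
        targets = list(targets)
    return edges
```

Early nodes accumulate high degree, producing the heavy-tailed degree
distribution the abstract refers to.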
|
1403.5618 | Belief-Rule-Based Expert Systems for Evaluation of E- Government: A Case
Study | cs.AI cs.CY | Little knowledge exists on the impact and results associated with
e-government projects in many specific use domains. Therefore it is necessary
to evaluate the efficiency and effectiveness of e-government systems. Since the
development of e-government is a continuous process of improvement, it requires
continuous evaluation of the overall e-government system as well as evaluation
of its various dimensions such as determinants, characteristics and results.
E-government development is often complex with multiple stakeholders, large
user bases and complex goals. Consequently, even experts have difficulties in
evaluating these systems, especially in an integrated and comprehensive way as
well as on an aggregate level. Expert systems are a candidate solution to
evaluate such complex e-government systems. However, it is difficult for expert
systems to cope with uncertain evaluation data that are vague, inconsistent,
highly subjective or in other ways challenging to formalize. This paper
presents an approach that can handle uncertainty in e-government evaluation:
The combination of Belief Rule Base (BRB) knowledge representation and
Evidential Reasoning (ER). This approach is illustrated with a concrete
prototype, known as Belief Rule Based Expert System (BRBES) and put to use in
the local e-government of Bangladesh. The results have been compared with a
recently developed method of evaluating e-Government, and it is shown that the
results of BRBES are more accurate and reliable. BRBES can be used to identify
the factors that need to be improved to achieve the overall aim of an
e-government project. In addition, various "what if" scenarios can be generated
and developers and managers can get a forecast of the outcomes. In this way,
the system can be used to facilitate decision making processes under
uncertainty.
|
1403.5628 | Vulnerabilities and Attacks Targeting Social Networks and Industrial
Control Systems | cs.SI cs.CR physics.soc-ph | Vulnerability is a weakness, shortcoming or flaw in the system or network
infrastructure which can be used by an attacker to harm the system, disrupt its
normal operation and use it for his financial, competitive or other motives or
just for cyber escapades. In this paper, we re-examine the various types of
attacks on industrial control systems as well as on social networking users.
We list the vulnerabilities that were exploited to execute these attacks and
their effects on these systems and social networks. The focus is mainly on
the vulnerabilities that turn a social network into an antisocial network,
which can then be used to launch further attacks on the users associated with
the victim, thereby creating a consecutive chain of attacks on an increasing
number of social networking users. Another type of attack, the Stuxnet
attack, originally designed to attack Iran's nuclear facilities, is also
discussed; it harms the system it controls by changing the code in the target
system.
The Stuxnet worm is a very treacherous and hazardous means of attack and is the
first of its kind as it allows the attacker to manipulate real time equipment.
|
1403.5638 | Convex separable problems with linear and box constraints | cs.IT math.IT | In this work, we focus on separable convex optimization problems with linear
and box constraints and compute the solution in closed-form as a function of
some Lagrange multipliers that can be easily computed in a finite number of
iterations. This allows us to bridge the gap between a wide family of power
allocation problems of practical interest in signal processing and
communications and their efficient implementation in practice.
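A classic member of this problem family is water-filling power allocation
with a total-power budget and per-channel box constraints: the solution is
closed-form in a single Lagrange multiplier, which is then found in a finite
(or here, fixed) number of bisection steps. The sketch below is a generic
water-filling instance, not the paper's exact algorithm:

```python
import math

def waterfilling(gains, budget, p_max, iters=100):
    """Maximize sum_i log(1 + g_i p_i) s.t. sum_i p_i = budget,
    0 <= p_i <= p_max.  Closed form in the multiplier lam:
    p_i(lam) = clip(1/lam - 1/g_i, 0, p_max); the scalar lam is found by
    bisection on the monotonically decreasing budget equation."""
    def total(lam):
        return sum(min(max(1.0 / lam - 1.0 / g, 0.0), p_max) for g in gains)
    lo, hi = 1e-9, 1e9                 # total(lo) > budget > total(hi)
    for _ in range(iters):
        mid = math.sqrt(lo * hi)       # bisect in log scale
        if total(mid) > budget:
            lo = mid
        else:
            hi = mid
    lam = math.sqrt(lo * hi)
    return [min(max(1.0 / lam - 1.0 / g, 0.0), p_max) for g in gains]
```

Stronger channels receive more power until the box constraint binds; weak
channels may receive none.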
|
1403.5641 | Control over adversarial packet-dropping communication networks
revisited | cs.SY math.OC | We revisit a one-step control problem over an adversarial packet-dropping
link. The link is modeled as a set of binary channels controlled by a strategic
jammer whose intention is to wage a `denial of service' attack on the plant by
choosing a most damaging channel-switching strategy. The paper introduces a
class of zero-sum games between the jammer and controller as a scenario for
such an attack, and derives necessary and sufficient conditions for these games to
have a nontrivial saddle-point equilibrium. At this equilibrium, the jammer's
optimal policy is to randomize in a region of the plant's state space, thus
requiring the controller to undertake a nontrivial response which is different
from what one would expect in a standard stochastic control problem over a
packet dropping channel.
|
1403.5645 | Transaction Repair: Full Serializability Without Locks | cs.DB | Transaction Repair is a method for lock-free, scalable transaction processing
that achieves full serializability. It demonstrates parallel speedup even in
inimical scenarios where all pairs of transactions have significant read-write
conflicts. In the transaction repair approach, each transaction runs in
complete isolation in a branch of the database; when conflicts occur, we detect
and repair them. These repairs are performed efficiently in parallel, and the
net effect is that of serial processing. Within transactions, we use no locks.
This frees users from the complications and performance hazards of locks, and
from the anomalies of sub-SERIALIZABLE isolation levels. Our approach builds on
an incrementalized variant of leapfrog triejoin, a worst-case optimal algorithm
for $\exists_1$ formulae, and on well-established techniques from programming
languages: declarative languages, purely functional data structures,
incremental computation, and fixpoint equations.
|
1403.5647 | CUR Algorithm with Incomplete Matrix Observation | cs.LG stat.ML | CUR matrix decomposition is a randomized algorithm that can efficiently
compute a low-rank approximation of a given rectangular matrix. One limitation
of the existing CUR algorithms is that they require access to the full
matrix A for computing U. In this work, we aim to alleviate this limitation. In
particular, we assume that besides having access to randomly sampled d rows
and d columns from A, we only observe a subset of randomly sampled entries from
A. Our goal is to develop a low rank approximation algorithm, similar to CUR,
based on (i) randomly sampled rows and columns from A, and (ii) randomly
sampled entries from A. The proposed algorithm is able to perfectly recover the
target matrix A with only O(rn log n) observed entries. In addition, instead
of having to solve an optimization problem involving trace-norm
regularization, the proposed algorithm only needs to solve a standard
regression problem. Finally, unlike most matrix completion theories that hold
only when the target matrix is of low rank, we show a strong guarantee for the
proposed algorithm even when the target matrix is not low rank.
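For reference, the standard CUR decomposition that the paper improves upon
can be sketched in a few lines. Note that computing U below touches the full
matrix A, which is precisely the limitation the abstract describes:

```python
import numpy as np

def cur(A, row_idx, col_idx):
    """Standard CUR: C and R are sampled columns/rows of A, and
    U = pinv(C) @ A @ pinv(R).  The resulting C @ U @ R projects A onto
    the spans of the sampled columns and rows."""
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R
```

When the sampled columns span A's column space and the sampled rows span its
row space (e.g. a rank-2 matrix with three generic rows and columns), the
reconstruction is exact.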
|
1403.5648 | Information and Energy Cooperation in Cognitive Radio Networks | cs.IT math.IT | Cooperation between the primary and secondary systems can improve the
spectrum efficiency in cognitive radio networks. The key idea is that the
secondary system helps to boost the primary system's performance by relaying
and in return the primary system provides more opportunities for the secondary
system to access the spectrum. In contrast to most existing works that only
consider information cooperation, this paper studies joint information and
energy cooperation between the two systems, i.e., the primary transmitter sends
information for relaying and feeds the secondary system with energy as well.
This is particularly useful when the secondary transmitter has good channel
quality to the primary receiver but is energy constrained. We propose and study
three schemes that enable this cooperation. Firstly, we assume there exists an
ideal backhaul between the two systems for information and energy transfer. We
then consider two wireless information and energy transfer schemes from the
primary transmitter to the secondary transmitter using power splitting and time
splitting energy harvesting techniques, respectively. For each scheme, the
optimal and zero-forcing solutions are derived. Simulation results demonstrate
promising performance gain for both systems due to the additional energy
cooperation. It is also revealed that the power splitting scheme can achieve
a larger rate region than the time splitting scheme when the efficiency of the
energy transfer is sufficiently large.
|
1403.5683 | Ranking structures and Rank-Rank Correlations of Countries. The FIFA and
UEFA cases | physics.soc-ph cs.SI nlin.AO physics.data-an | Ranking of agents competing with each other in complex systems may lead to
paradoxes depending on the pre-chosen measures. A discussion is
presented on such rank-rank, similar or not, correlations based on the case of
European countries ranked by UEFA and FIFA from different soccer competitions.
The first question to be answered is whether an empirical and simple law is
obtained for such (self-) organizations of complex sociological systems with
such different measuring schemes. It is found that the power law form is not
the best description contrary to many modern expectations. The stretched
exponential is much more adequate. Moreover, it is found that the measuring
rules lead to some inner structures, in both cases.
|
1403.5686 | Iterative Learning for Reference-Guided DNA Sequence Assembly from Short
Reads: Algorithms and Limits of Performance | q-bio.GN cs.CE cs.IT math.IT | Recent emergence of next-generation DNA sequencing technology has enabled
acquisition of genetic information at unprecedented scales. In order to
determine the genetic blueprint of an organism, sequencing platforms typically
employ so-called shotgun sequencing strategy to oversample the target genome
with a library of relatively short overlapping reads. The order of nucleotides
in the reads is determined by processing the acquired noisy signals generated
by the sequencing instrument. Assembly of a genome from potentially erroneous
short reads is a computationally daunting task even in the scenario where a
reference genome exists. Errors and gaps in the reference, and perfect repeat
regions in the target, further render the assembly challenging and cause
inaccuracies. In this paper, we formulate the reference-guided sequence
assembly problem as the inference of the genome sequence on a bipartite graph
and solve it using a message-passing algorithm. The proposed algorithm can be
interpreted as the well-known classical belief propagation scheme under a
certain prior. Unlike existing state-of-the-art methods, the proposed algorithm
combines the information provided by the reads without needing to know
reliability of the short reads (so-called quality scores). Relation of the
message-passing algorithm to a provably convergent power iteration scheme is
discussed. To evaluate and benchmark the performance of the proposed technique,
we find an analytical expression for the probability of error of a genie-aided
maximum a posteriori (MAP) decision scheme. Results on both simulated and
experimental data demonstrate that the proposed message-passing algorithm
outperforms commonly used state-of-the-art tools, and it nearly achieves the
performance of the aforementioned MAP decision scheme.
|
1403.5693 | Firefly Monte Carlo: Exact MCMC with Subsets of Data | stat.ML cs.LG stat.CO | Markov chain Monte Carlo (MCMC) is a popular and successful general-purpose
tool for Bayesian inference. However, MCMC cannot be practically applied to
large data sets because of the prohibitive cost of evaluating every likelihood
term at every iteration. Here we present Firefly Monte Carlo (FlyMC), an
auxiliary variable MCMC algorithm that only queries the likelihoods of a
potentially small subset of the data at each iteration yet simulates from the
exact posterior distribution, in contrast to recent proposals that are
approximate even in the asymptotic limit. FlyMC is compatible with a wide
variety of modern MCMC algorithms, and only requires a lower bound on the
per-datum likelihood factors. In experiments, we find that FlyMC generates
samples from the posterior more than an order of magnitude faster than regular
MCMC, opening up MCMC methods to larger datasets than were previously
considered feasible.
|
1403.5701 | Cortex simulation system proposal using distributed computer network
environments | cs.AI | At the dawn of computer science and the eve of neuroscience, we are
participating in a rebirth of neuroscience, thanks to new technology that
allows us to deeply and precisely explore the whole new world that dwells in
our brains.
|
1403.5711 | Large-Scale MIMO Detection for 3GPP LTE: Algorithms and FPGA
Implementations | cs.IT math.IT | Large-scale (or massive) multiple-input multiple-output (MIMO) is expected to
be one of the key technologies in next-generation multi-user cellular systems,
based on the upcoming 3GPP LTE Release 12 standard, for example. In this work,
we propose - to the best of our knowledge - the first VLSI design enabling
high-throughput data detection in single-carrier frequency-division multiple
access (SC-FDMA)-based large-scale MIMO systems. We propose a new approximate
matrix inversion algorithm relying on a Neumann series expansion, which
substantially reduces the complexity of linear data detection. We analyze the
associated error, and we compare its performance and complexity to those of an
exact linear detector. We present corresponding VLSI architectures, which
perform exact and approximate soft-output detection for large-scale MIMO
systems with various antenna/user configurations. Reference implementation
results for a Xilinx Virtex-7 XC7VX980T FPGA show that our designs are able to
achieve more than 600 Mb/s for a 128 antenna, 8 user 3GPP LTE-based large-scale
MIMO system. We finally provide a performance/complexity trade-off comparison
using the presented FPGA designs, which reveals that the detector circuit of
choice is determined by the ratio between BS antennas and users, as well as the
desired error-rate performance.
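The truncated Neumann series at the heart of the proposed detector
approximates a matrix inverse as A^{-1} ~ sum_{k=0}^{K} (I - D^{-1}A)^k D^{-1}
with D = diag(A), which converges when A is sufficiently diagonally dominant
(as the regularized Gram matrices in large-scale MIMO typically are). A
minimal numerical sketch, not the paper's fixed-point VLSI datapath:

```python
import numpy as np

def neumann_inverse(A, K):
    """Truncated Neumann-series approximation of A^{-1}:
       sum_{k=0}^{K} (I - D^{-1} A)^k D^{-1},  D = diag(A).
    Each extra term shrinks the residual by roughly the spectral radius
    of E = I - D^{-1} A."""
    D_inv = np.diag(1.0 / np.diag(A))
    E = np.eye(A.shape[0]) - D_inv @ A
    term = D_inv.copy()
    approx = D_inv.copy()
    for _ in range(K):
        term = E @ term
        approx = approx + term
    return approx
```

On a strongly diagonally dominant test matrix, a handful of terms already
yields a near-exact inverse, while avoiding the cubic cost of exact inversion.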
|
1403.5715 | Mining Attribute-Based Access Control Policies from Logs | cs.CR cs.DB | Attribute-based access control (ABAC) provides a high level of flexibility
that promotes security and information sharing. ABAC policy mining algorithms
have potential to significantly reduce the cost of migration to ABAC, by
partially automating the development of an ABAC policy from information about
the existing access-control policy and attribute data. This paper presents an
algorithm for mining ABAC policies from operation logs and attribute data. To
the best of our knowledge, it is the first algorithm for this problem.
|
1403.5718 | SmartAnnotator: An Interactive Tool for Annotating RGBD Indoor Images | cs.CV | RGBD images with high quality annotations in the form of geometric (i.e.,
segmentation) and structural (i.e., how the segments are mutually related in
3D) information provide valuable priors to a large number of scene and image
manipulation applications. While it is now simple to acquire RGBD images,
annotating them, automatically or manually, remains challenging especially in
cluttered noisy environments. We present SmartAnnotator, an interactive system
to facilitate annotating RGBD images. The system performs the tedious tasks of
grouping pixels, creating potential abstracted cuboids, inferring object
interactions in 3D, and coming up with various hypotheses. The user simply has
to flip through a list of suggestions for segment labels, finalize a selection,
and the system updates the remaining hypotheses. As objects are finalized, the
process speeds up with fewer ambiguities to resolve. Further, as more scenes
are annotated, the system makes better suggestions based on structural and
geometric priors learned from previous annotation sessions. We test our
system on a large number of database scenes and report significant improvements
over naive low-level annotation tools.
|
1403.5730 | Resource Allocation for Coordinated Multipoint Networks with Wireless
Information and Power Transfer | cs.IT math.IT | This paper studies the resource allocation algorithm design for multiuser
coordinated multipoint (CoMP) networks with simultaneous wireless information
and power transfer (SWIPT). In particular, remote radio heads (RRHs) are
connected to a central processor (CP) via capacity-limited backhaul links to
facilitate CoMP joint transmission. In addition, the CP transfers energy to the
RRHs for more efficient network operation. The considered resource allocation
algorithm design is formulated as a non-convex optimization problem with a
minimum required signal-to-interference-plus-noise ratio (SINR) constraint at
multiple information receivers and a minimum required power transfer constraint
at the energy harvesting receivers. By optimizing the transmit beamforming
vectors at the CP and energy sharing between the CP and the RRHs, we aim at
jointly minimizing the total network transmit power and the maximum capacity
consumption per backhaul link. The resulting non-convex optimization problem is
NP-hard. In light of the intractability of the problem, we reformulate it by
replacing the non-convex objective function with its convex hull, which enables
the derivation of an efficient iterative resource allocation algorithm. In each
iteration, a non-convex optimization problem is solved by semi-definite
programming (SDP) relaxation and the proposed iterative algorithm converges to
a local optimal solution of the original problem. Simulation results illustrate
that our proposed algorithm achieves a close-to-optimal performance and
provides a significant reduction in backhaul capacity consumption compared to
full cooperation. Moreover, the considered CoMP network is shown to provide
superior system performance as far as power consumption is concerned compared
to a traditional system with multiple antennas co-located.
|
1403.5734 | Software Agents Interaction Algorithms in Virtual Learning Environment | cs.MA cs.CY | This paper highlights the multi-agent virtual learning environment and agent
communication algorithms. The paper proposes three algorithms required for
software agent interaction in a virtual learning information system
environment: an agent interaction localization algorithm, a dynamic agent
distribution (load distribution) algorithm, and an agent communication
algorithm based on intermediary agents. The main objectives of these
algorithms are to reduce the response time to agent changes in the virtual
learning environment (VLE) by increasing the intensity of information exchange
between software agents, to reduce the overall network load, and to improve
communication between mobile agents in a distributed information system.
Finally, the paper describes algorithms for information exchange between
mobile agents in a VLE based on an expanded address structure and the use of
intermediary agents, matchmaking agents, and brokers with their
entrepreneurial functions.
|
1403.5735 | Cooperative Energy Trading in CoMP Systems Powered by Smart Grids | cs.IT math.IT | This paper studies the energy management in the coordinated multi-point
(CoMP) systems powered by smart grids, where each base station (BS) with local
renewable energy generation is allowed to implement the two-way energy trading
with the grid. Due to the uneven renewable energy supply and communication
energy demand over distributed BSs as well as the difference in the prices for
their buying/selling energy from/to the grid, it is beneficial for the
cooperative BSs to jointly manage their energy trading with the grid and energy
consumption in CoMP based communication for reducing the total energy cost.
Specifically, we consider the downlink transmission in one CoMP cluster by
jointly optimizing the BSs' purchased/sold energy units from/to the grid and
their cooperative transmit precoding, so as to minimize the total energy cost
subject to the given quality of service (QoS) constraints for the users. First,
we obtain the optimal solution to this problem by developing an algorithm based
on techniques from convex optimization and the uplink-downlink duality. Next,
we propose a sub-optimal solution of lower complexity than the optimal
solution, where zero-forcing (ZF) based precoding is implemented at the BSs.
Finally, through extensive simulations, we show the performance gain achieved
by our proposed joint energy trading and communication cooperation schemes in
terms of energy cost reduction, as compared to conventional schemes that
separately design communication cooperation and energy trading.
|
1403.5753 | D-CFPR: D numbers extended consistent fuzzy preference relations | cs.AI | How to express an expert's or a decision maker's preference for alternatives
is an open issue. The consistent fuzzy preference relation (CFPR) has
significant advantages in handling this problem because it can be constructed
from a smaller number of pairwise comparisons and satisfies the additive
transitivity property.
However, the CFPR is incapable of dealing with the cases involving uncertain
and incomplete information. In this paper, a D numbers extended consistent
fuzzy preference relation (D-CFPR) is proposed to overcome the weakness. The
D-CFPR extends the classical CFPR by using a new model of expressing uncertain
information called D numbers. The D-CFPR inherits the merits of classical CFPR
and can be totally reduced to the classical CFPR. This study can be integrated
into our previous study about D-AHP (D numbers extended AHP) model to provide a
systematic solution for multi-criteria decision making (MCDM).
|
1403.5761 | The Lyapunov Concept of Stability from the Standpoint of Poincare
Approach: General Procedure of Utilization of Lyapunov Functions for
Non-Linear Non-Autonomous Parametric Differential Inclusions | cs.SY | The objective of the research is to develop a general method of constructing
Lyapunov functions for non-linear non-autonomous differential inclusions
described by ordinary differential equations with parameters. The goal has been
attained through the following ideas and tools. First, the three-point Poincare
strategy for the investigation of differential equations and manifolds has been
used. Second, the geometric-topological structure of the non-linear
non-autonomous parametric differential inclusions has been presented and
analyzed in the framework of hierarchical fiber bundles. Third, a special
canonizing transformation of the differential inclusions has been found that
allows presenting them in a special canonical form for which certain standard
forms of Lyapunov functions exist. The conditions establishing the relation
between the local asymptotic stability of two corresponding particular
integral curves of a given differential inclusion in its initial and canonical
forms are ascertained. The global asymptotic stability of the entire free
dynamical systems as some restrictions of a given parametric differential
inclusion and the whole latter one per se has been investigated in terms of the
classificational stability of the typical fiber of the meta-bundle. The
prospects of developing and modifying the Lyapunov second method in light of
the newly discovered features of Lyapunov functions are also discussed.
|
1403.5768 | Optimizing Your Online-Advertisement Asynchronously | cs.SY cs.GT | We consider the problem of designing optimal online-ad investment strategies
for a single advertiser, who invests at multiple sponsored search sites
simultaneously, with the objective of maximizing his average revenue subject to
the advertising budget constraint. A greedy online investment scheme is
developed to achieve an average revenue that can be pushed to within
$O(\epsilon)$ of the optimal, for any $\epsilon>0$, with a tradeoff that the
temporal budget violation is $O(1/\epsilon)$. Different from many existing
algorithms, our scheme allows the advertiser to \emph{asynchronously} update
his investments on each search engine site, hence applies to systems where the
timescales of action update intervals are heterogeneous for different sites. We
also quantify the impact of inaccurate estimation of the system dynamics and
show that the algorithm is robust against imperfect system knowledge.
|
1403.5771 | A Novel Method to Calculate Click Through Rate for Sponsored Search | cs.IR | Sponsored search adopts the generalized second price (GSP) auction mechanism,
based on the pay-per-click model, which is most commonly used for the
allocation of slots on the results page. Two main aspects associated with GSP
are the bidding amount and the click through rate (CTR). The CTR learning
algorithms currently in use work on the basic principle of (#clicks_i /
#impressions_i) under a fixed window of clicks, impressions, or time. CTRs are
prone to fraudulent clicks, which cause sudden increases in CTR. Current
algorithms are unable to stop this, although machine learning methods can
detect that fraudulent clicks are being generated. In our paper, we use the
concept of relative ranking
which works on the basic principle of (#clicks_i / #clicks_t). In this
algorithm, the numerator and the denominator are linked. Because #clicks_t is
larger than the denominators used by previous algorithms and is linked to
#clicks_i, the small click fluctuations of the normal scenario produce only a
very small change in the result; in the case of fraudulent clicks, however,
the click count changes rapidly, and the fraudulent clicks add to the normal
clicks in the denominator, thereby decreasing the CTR.
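The relative-ranking idea above can be sketched in a few lines; the helper name and the click counts are illustrative, not the paper's implementation:

```python
def relative_ctr(clicks):
    """Relative-ranking CTR sketch: each ad's score is its clicks divided by
    the total clicks across all ads (#clicks_i / #clicks_t), so a fraudulent
    spike on one ad also inflates the shared denominator."""
    total = sum(clicks.values())
    if total == 0:
        return {ad: 0.0 for ad in clicks}
    return {ad: c / total for ad, c in clicks.items()}

# Normal scenario: clicks spread across three ads.
normal = relative_ctr({"ad1": 50, "ad2": 30, "ad3": 20})

# Fraud scenario: ad1's clicks jump tenfold; its relative CTR rises,
# but far less than tenfold, because the denominator grows too.
fraud = relative_ctr({"ad1": 500, "ad2": 30, "ad3": 20})
```

Because the denominator is shared, a burst of fraudulent clicks on one ad inflates #clicks_t and therefore depresses the relative CTR of every other ad, rather than distorting only the attacked ad's score.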
|
1403.5787 | Scalable detection of statistically significant communities and
hierarchies, using message-passing for modularity | physics.soc-ph cond-mat.stat-mech cs.SI stat.ML | Modularity is a popular measure of community structure. However, maximizing
the modularity can lead to many competing partitions, with almost the same
modularity, that are poorly correlated with each other. It can also produce
illusory "communities" in random graphs where none exist. We address this
problem by using the modularity as a Hamiltonian at finite temperature, and
using an efficient Belief Propagation algorithm to obtain the consensus of many
partitions with high modularity, rather than looking for a single partition
that maximizes it. We show analytically and numerically that the proposed
algorithm works all the way down to the detectability transition in networks
generated by the stochastic block model. It also performs well on real-world
networks, revealing large communities in some networks where previous work has
claimed no communities exist. Finally we show that by applying our algorithm
recursively, subdividing communities until no statistically-significant
subcommunities can be found, we can detect hierarchical structure in real-world
networks more efficiently than previous methods.
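As background, the modularity used as a Hamiltonian above is the standard Newman measure; a minimal pure-Python sketch for an undirected, unweighted graph (the example graph and partition are illustrative):

```python
def modularity(edges, community):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * delta(c_i, c_j)
    for an undirected, unweighted graph given as a list of edges."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    m = len(edges)
    # Edge term: fraction of edges falling inside communities.
    inside = sum(1 for u, v in edges if community[u] == community[v]) / m
    # Degree term: expected such fraction under the configuration model.
    expected = sum(
        degree[u] * degree[v]
        for u in degree for v in degree
        if community[u] == community[v]
    ) / (4 * m * m)
    return inside - expected

# Two triangles joined by one edge, split into their natural communities.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
q = modularity(edges, part)  # positive: the split beats the null model
```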
|
1403.5815 | Heterogeneous epidemic model for assessing data dissemination in
opportunistic networks | cs.SI physics.soc-ph q-bio.PE | In this paper we investigate a susceptible-infected-susceptible (SIS)
epidemic model describing data dissemination in opportunistic networks with
a heterogeneous setting of transmission parameters. We obtain an estimate of
the final epidemic size assuming that the amount of data transferred between
network nodes follows a Pareto distribution, implying scale-free properties.
In this context, more heterogeneity in susceptibility means less severe
epidemic progression, whereas more heterogeneity in infectivity leads to more
severe epidemics, assuming that the other parameter (infectivity or
susceptibility, respectively) stays fixed. The results are general enough to
be useful in general epidemic theory for estimating epidemic progression for
diseases with no significant acquired immunity, in cases where the Pareto
distribution holds.
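The comparative claim about heterogeneity in susceptibility can be illustrated with a toy deterministic mean-field SIS model; the rate parameters, the discrete-time update, and the mean-1 normalization are illustrative assumptions, not the paper's model:

```python
def pareto_susceptibilities(n, alpha):
    """Deterministic Pareto(alpha) quantiles, normalized to mean 1 so that
    only the heterogeneity (tail heaviness) differs across settings."""
    raw = [(1.0 - (k + 0.5) / n) ** (-1.0 / alpha) for k in range(n)]
    mean = sum(raw) / n
    return [x / mean for x in raw]

def sis_prevalence(susceptibility, beta=0.6, gamma=0.3, dt=0.2, steps=4000):
    """Discrete-time mean-field SIS: node i is infected at rate
    beta * s_i * (average prevalence) and recovers at rate gamma.
    Returns the endemic (steady-state) average prevalence."""
    p = [0.01] * len(susceptibility)
    for _ in range(steps):
        avg = sum(p) / len(p)
        p = [min(max(pi + dt * ((1 - pi) * beta * s * avg - gamma * pi), 0.0), 1.0)
             for pi, s in zip(p, susceptibility)]
    return sum(p) / len(p)

# Heavier tail (smaller alpha) = more heterogeneity at the same mean
# susceptibility; the endemic prevalence drops, matching the claim.
prev_heterogeneous = sis_prevalence(pareto_susceptibilities(500, alpha=1.5))
prev_homogeneous = sis_prevalence(pareto_susceptibilities(500, alpha=8.0))
```

With the mean held fixed, each node's endemic prevalence is concave in its susceptibility, so spreading susceptibility out lowers the average prevalence (a Jensen's-inequality effect).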
|
1403.5824 | Energy-Throughput Trade-offs in a Wireless Sensor Network with Mobile
Relay | cs.IT cs.NI math.IT | In this paper we analyze the trade-offs between energy and throughput for
links in a wireless sensor network. Our application of interest is one in which
a number of low-powered sensors need to wirelessly communicate their
measurements to a communications sink, or destination node, for communication
to a central processor. We focus on one particular sensor source, and consider
the case where the destination lies beyond the range achievable at the peak
transmit power of the source, so a relay node is required. The transmission
energy of the sensor and the
relay can be adjusted to minimize the total energy for a given throughput of
the connection from sensor source to destination. We introduce a bounded random
walk model for movement of the relay between the sensor and destination nodes,
and characterize the total transmission energy and throughput performance using
Markov steady state analysis. Based on the trade-offs between total energy and
throughput we propose a new time-sharing protocol to exploit the movement of
the relay to reduce the total energy. We demonstrate the effectiveness of
time-sharing for minimizing the total energy consumption while achieving the
throughput requirement. We then show that the time-sharing scheme is more
energy efficient than the popular sleep mode scheme.
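A minimal sketch of the Markov steady-state analysis for a bounded random walk of the relay; the "stay put at the bounds" behavior, the step probability, and the 5-position grid are illustrative assumptions, not the paper's exact model:

```python
def stationary(P, iters=10000):
    """Stationary distribution of a finite Markov chain by power iteration.
    P[i][j] is the probability of moving from state i to state j."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def bounded_walk(n, p=0.5):
    """Transition matrix of a relay doing a bounded random walk on n
    positions between sensor and destination: step right with prob p,
    left with prob 1-p, and stay put when a step would leave the bounds."""
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if i + 1 < n:
            P[i][i + 1] = p
        else:
            P[i][i] += p
        if i - 1 >= 0:
            P[i][i - 1] = 1 - p
        else:
            P[i][i] += 1 - p
    return P

# Symmetric steps with sticky bounds give a uniform stationary distribution,
# so the relay is equally likely to be found at any position in the long run.
pi = stationary(bounded_walk(5))
```

The stationary distribution is what weights the per-position transmission energies when computing the long-run total energy and throughput of the relayed link.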
|
1403.5865 | Step and Search Control Method to Track the Maximum Power in Wind Energy
Conversion Systems A Study | cs.SY | A simple step and search control strategy for extracting maximum output power
from grid connected Variable Speed Wind Energy Conversion System (VSWECS) is
implemented in this work. This system consists of a variable speed wind turbine
coupled to a Permanent Magnet Synchronous Generator (PMSG) through a gear box,
a DC-DC boost converter and a hysteresis current controlled Voltage Source
Converter (VSC). The Maximum Power Point Tracking (MPPT) scheme extracts
maximum power from the wind turbine from cut-in to rated wind velocity by
sensing only the DC link power. This system can be connected to a micro-grid.
It can also be used to supply an isolated local load by converting the output
of the Permanent Magnet Synchronous Generator (PMSG) to DC and then to AC by
means of the hysteresis current controlled Voltage Source Converter (VSC).
|
1403.5869 | Block Motion Based Dynamic Texture Analysis: A Review | cs.CV | Dynamic texture refers to image sequences of non-rigid objects that exhibit
some regularity in their movement. Videos of smoke, fire etc. fall under the
category of dynamic texture. Researchers have investigated different ways to
analyze dynamic textures since the early nineties. Both appearance based (image
intensities) and motion based approaches are investigated. Motion based
approaches turn out to be more effective. A group of researchers have
investigated ways to utilize the motion vectors readily available with the
blocks in video codecs such as MPEG/H.26x. In this paper we provide a review of the
dynamic texture analysis methods using block motion. Research into dynamic
texture analysis using block motion includes recognition, motion computation,
segmentation, and synthesis. We provide a comprehensive review of these
approaches.
|
1403.5874 | On Compressive Sensing in Coding Problems: A Rigorous Approach | cs.IT math.IT | We take an information theoretic perspective on a classical sparse-sampling
noisy linear model and present an analytical expression for the mutual
information, which plays a central role in a variety of
communications/processing problems. Such an expression was previously
addressed either by bounds, by simulations, or by the (non-rigorous) replica
method. The expression of the
mutual information is based on techniques used in [1], addressing the minimum
mean square error (MMSE) analysis. Using these expressions, we study
specifically a variety of sparse linear communications models which include
coding in different settings, accounting also for multiple access channels and
different wiretap problems. For those, we provide single-letter expressions and
derive achievable rates, capturing the communications/processing features of
these timely models.
|
1403.5877 | Non-uniform Feature Sampling for Decision Tree Ensembles | stat.ML cs.IT cs.LG math.IT stat.AP | We study the effectiveness of non-uniform randomized feature selection in
decision tree classification. We experimentally evaluate two feature selection
methodologies, based on information extracted from the provided dataset: $(i)$
\emph{leverage scores-based} and $(ii)$ \emph{norm-based} feature selection.
Experimental evaluation of the proposed feature selection techniques indicates
that such approaches can be more effective than naive uniform feature
selection while achieving performance comparable to the random forest
algorithm [3].
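The norm-based scheme, the simpler of the two evaluated methodologies, amounts to sampling features with probability proportional to their squared column norms; a pure-Python sketch (the toy matrix and helper names are illustrative):

```python
import random

def norm_based_probabilities(X):
    """Sampling probabilities proportional to squared column (feature) norms,
    one of the two non-uniform schemes evaluated in the paper."""
    n_features = len(X[0])
    sq_norms = [sum(row[j] ** 2 for row in X) for j in range(n_features)]
    total = sum(sq_norms)
    return [s / total for s in sq_norms]

def sample_features(X, k, seed=0):
    """Draw k feature indices (with replacement) non-uniformly by norm."""
    probs = norm_based_probabilities(X)
    rng = random.Random(seed)
    return rng.choices(range(len(probs)), weights=probs, k=k)

# Feature 1 has tiny norm, so it is rarely picked for tree splits.
X = [[1.0, 0.1, 3.0],
     [2.0, 0.2, 0.0],
     [2.0, 0.1, 0.0]]
probs = norm_based_probabilities(X)
chosen = sample_features(X, k=5)
```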
|
1403.5912 | The state of play of ASC-Inclusion: An Integrated Internet-Based
Environment for Social Inclusion of Children with Autism Spectrum Conditions | cs.HC cs.CV cs.CY | Individuals with Autism Spectrum Conditions (ASC) have marked difficulties
using verbal and non-verbal communication for social interaction. The running
ASC-Inclusion project aims to help children with ASC by allowing them to learn
how emotions can be expressed and recognised via playing games in a virtual
world. The platform includes analysis of users' gestures, facial, and vocal
expressions using a standard microphone and web-cam or a depth sensor, training
through games, text communication with peers, animation, video and audio clips.
We present the state of play in realising such a serious game platform and
provide results for the different modalities.
|
1403.5919 | SRA: Fast Removal of General Multipath for ToF Sensors | cs.CV | A major issue with Time of Flight sensors is the presence of multipath
interference. We present Sparse Reflections Analysis (SRA), an algorithm for
removing this interference which has two main advantages. First, it allows for
very general forms of multipath, including interference with three or more
paths, diffuse multipath resulting from Lambertian surfaces, and combinations
thereof. SRA removes this general multipath with robust techniques based on
$L_1$ optimization. Second, due to a novel dimension reduction, we are able to
produce a very fast version of SRA, which is able to run at frame rate.
Experimental results on both synthetic data with ground truth, as well as real
images of challenging scenes, validate the approach.
|
1403.5928 | Viewing the Welch bound inequality from the kernel trick viewpoint | cs.IT math.IT | This brief note views the Welch bound inequality through the idea of the
kernel trick from the machine learning research area. From this angle, some
novel insights into the inequality are obtained.
|
1403.5933 | AIS-INMACA: A Novel Integrated MACA Based Clonal Classifier for Protein
Coding and Promoter Region Prediction | cs.CE cs.LG | Many problems in bioinformatics are now challenges in computing. This paper
aims at building a classifier based on Multiple Attractor Cellular Automata
(MACA) that uses fuzzy logic. It is strengthened with an Artificial Immune
System (AIS) technique, the clonal algorithm, for identifying protein coding
and promoter regions in a given DNA sequence. The proposed classifier, named
AIS-INMACA, introduces a novel concept of combining cellular automata with an
artificial immune system to produce a better classifier that can address major
problems in bioinformatics. It is the first integrated algorithm that can
predict both promoter and protein coding regions. To obtain good fitness
rules, the basic concept of the clonal selection algorithm was used. The
proposed classifier can handle DNA sequences of lengths 54, 108, 162, 252, and
354, and gives the exact boundaries of both protein coding and promoter
regions with an average accuracy of 89.6%. It was tested with 97,000 data
components taken from Fickett & Toung, MPromDb, and other sequences from a
renowned medical university. The classifier can handle huge data sets and can
find protein coding and promoter regions even in mixed and overlapped DNA
sequences. This work also aims at identifying the logical connections between
the major problems in bioinformatics and at obtaining a common framework for
addressing problems such as protein structure prediction, RNA structure
prediction, predicting the splicing pattern of any primary transcript, and
analysis of the information content in DNA, RNA, and protein sequences and
structures. This work should attract more researchers to the application of
cellular automata as a potential pattern classifier for many important
problems in bioinformatics.
|
1403.5946 | Metadata for Energy Disaggregation | cs.DB | Energy disaggregation is the process of estimating the energy consumed by
individual electrical appliances given only a time series of the whole-home
power demand. Energy disaggregation researchers require datasets of the power
demand from individual appliances and the whole-home power demand. Multiple
such datasets have been released over the last few years but provide metadata
in a disparate array of formats including CSV files and plain-text README
files. At best, the lack of a standard metadata schema makes it unnecessarily
time-consuming to write software to process multiple datasets and, at worst,
the lack of a standard means that crucial information is simply absent from
some datasets. We propose a metadata schema for representing appliances,
meters, buildings, datasets, prior knowledge about appliances and appliance
models. The schema is relational and provides a simple but powerful inheritance
mechanism.
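A toy sketch of a relational-style schema with a simple inheritance mechanism of the kind described: a concrete appliance record inherits fields from its appliance-type record and may override them. The field names and appliance types here are hypothetical, not the proposed schema:

```python
# Hypothetical parent records: per-type defaults.
APPLIANCE_TYPES = {
    "fridge": {"category": "cold", "typical_power_watts": 100},
    "kettle": {"category": "heating", "typical_power_watts": 2000},
}

def resolve(appliance, types=APPLIANCE_TYPES):
    """Merge an appliance record over its parent type's defaults:
    inherited fields fill in, explicit fields win."""
    merged = dict(types[appliance["type"]])
    merged.update({k: v for k, v in appliance.items() if k != "type"})
    return merged

# A specific fridge in building 3 that overrides the typical power.
fridge_1 = {"type": "fridge", "building": 3, "typical_power_watts": 85}
record = resolve(fridge_1)
```

Inheritance lets dataset authors record prior knowledge once per appliance type while still capturing per-instance measurements.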
|
1403.5969 | Random Matrices and Erasure Robust Frames | cs.IT math.IT | Data erasure can often occur in communication. Guarding against erasures
involves redundancy in data representation. Mathematically this may be achieved
by redundancy through the use of frames. One way to measure the robustness of a
frame against erasures is to examine the worst case condition number of the
frame with a certain number of vectors erased from the frame. The term {\em
numerically erasure-robust frames (NERFs)} was introduced in \cite{FicMix12} to
give a more precise characterization of erasure robustness of frames. In the
paper the authors established that random frames whose entries are drawn
independently from the standard normal distribution can be robust against up to
approximately 15\% erasures, and asked whether there exist frames that are
robust against erasures of more than 50\%. In this paper we show that with very
high probability random frames are, independent of the dimension, robust
against any amount of erasures as long as the number of remaining vectors is at
least $1+\delta$ times the dimension for some $\delta>0$. This is the best
possible result, and it also implies that the proportion of erasures can be
arbitrarily close to 1 while still maintaining robustness. Our result depends
crucially on a new estimate for the smallest singular value of a rectangular
random matrix with independent standard normal entries.
|
1403.5970 | Mental ability and common sense in an artificial society | physics.soc-ph cs.SI | We read newspapers and watch TV every day. There are many issues and many
controversies. Since media are free, we can hear arguments from every possible
side. How do we decide what is wrong or right? The first condition to accept a
message is to understand it; messages that are too sophisticated are ignored.
So it seems reasonable to assume that our understanding depends on our ability
and our current knowledge. Here we show that the consequences of this statement
are surprising and funny.
|
1403.5971 | On Projection-Based Model Reduction of Biochemical Networks-- Part II:
The Stochastic Case | math.OC cs.SY q-bio.QM | In this paper, we consider the problem of model order reduction of stochastic
biochemical networks. In particular, we reduce the order of (the number of
equations in) the Linear Noise Approximation of the Chemical Master Equation,
which is often used to describe biochemical networks. In contrast to other
biochemical network reduction methods, the presented one is projection-based.
Projection-based methods are powerful tools, but the cost of their use is the
loss of physical interpretation of the nodes in the network. In order to alleviate
this drawback, we employ structured projectors, which means that some nodes in
the network will keep their physical interpretation. For many models in
engineering, finding structured projectors is not always feasible; however, in
the context of biochemical networks it is much more likely as the networks are
often (almost) monotonic. To summarise, the method can serve as a trade-off
between approximation quality and physical interpretation, which is illustrated
on numerical examples.
|
1403.5986 | Controllability Analysis for Multirotor Helicopter Rotor Degradation and
Failure | cs.SY cs.RO | This paper considers the controllability analysis problem for a class of
multirotor systems subject to rotor failure/wear. It is shown that classical
controllability theories of linear systems are not sufficient to test the
controllability of the considered multirotors. Owing to this, an easy-to-use
measurement index is introduced to assess the available control authority.
Based on it, a new necessary and sufficient condition for the controllability
of multirotors is derived. Furthermore, a controllability test procedure is
developed. The proposed controllability test method is applied to a class of
hexacopters with different rotor configurations and different rotor efficiency
parameters to show its effectiveness. The analysis results show that
hexacopters with different rotor configurations have different fault-tolerant
capabilities. It is therefore necessary to test the controllability of the
multirotors before any fault-tolerant control strategies are employed.
|
1403.5997 | Bayesian calibration for forensic evidence reporting | stat.ML cs.LG stat.AP | We introduce a Bayesian solution for the problem in forensic speaker
recognition, where there may be very little background material for estimating
score calibration parameters. We work within the Bayesian paradigm of evidence
reporting and develop a principled probabilistic treatment of the problem,
which results in a Bayesian likelihood-ratio as the vehicle for reporting
weight of evidence. We show, in contrast, that reporting a likelihood-ratio
distribution does not solve this problem. Our solution is experimentally
exercised on a simulated forensic scenario, using NIST SRE'12 scores, which
demonstrates a clear advantage for the proposed method compared to the
traditional plugin calibration recipe.
|
1403.6002 | Brain Tumor Detection Based On Mathematical Analysis and Symmetry
Information | cs.CV | Brain magnetic resonance (MR) image tumor segmentation faces several
challenging issues caused by the weak correlation between MR imaging intensity
and anatomical meaning. With the objective of using more meaningful
information to improve brain tumor segmentation, we propose an approach that
employs bilateral symmetry information as an additional feature for
segmentation. This is motivated by the potential performance improvement in
general automatic brain tumor segmentation systems, which are important for
many medical and scientific applications. Brain MR image segmentation remains
a complex problem in the field of medical imaging despite the variety of
presented methods. An MR image of the human brain can be divided into several
sub-regions, especially soft tissues such as gray matter, white matter, and
cerebrospinal fluid. Although edge information is the main clue in image
segmentation, it cannot yield good results when analyzing image content
without being combined with other information. Our goal is to detect the
position and boundary of tumors automatically. Experiments were conducted on
real images, and the results show that the algorithm is flexible and
convenient.
|
1403.6023 | Ensemble Detection of Single & Multiple Events at Sentence-Level | cs.CL cs.LG | Event classification at sentence level is an important Information Extraction
task with applications in several NLP, IR, and personalization systems.
Multi-label binary relevance (BR) methods are the state of the art. In this work,
we explored new multi-label methods known for capturing relations between event
types. These new methods, such as the ensemble Chain of Classifiers, improve
the F1 on average across the 6 labels by 2.8% over the Binary Relevance. The
low occurrence of multi-label sentences motivated reducing the hard,
imbalanced multi-label classification problem, which has few occurrences of
multiple labels per instance, to a more tractable imbalanced multiclass
problem, with better results (+4.6%). We report the results of adding new features,
such as sentiment strength, rhetorical signals, domain-id (source-id and date),
and key-phrases in both single-label and multi-label event classification
scenarios.
|
1403.6025 | Web-Based Visualization of Very Large Scientific Astronomy Imagery | astro-ph.IM cs.CE cs.MM | Visualizing and navigating through large astronomy images from a remote
location with current astronomy display tools can be a frustrating experience
in terms of speed and ergonomics, especially on mobile devices. In this paper,
we present a high performance, versatile and robust client-server system for
remote visualization and analysis of extremely large scientific images.
Applications of this work include survey image quality control, interactive
data query and exploration, citizen science, as well as public outreach. The
proposed software is entirely open source and is designed to be generic and
applicable to a variety of datasets. It provides access to floating point data
at terabyte scales, with the ability to precisely adjust image settings in
real-time. The proposed clients are light-weight, platform-independent web
applications built on standard HTML5 web technologies and compatible with both
touch and mouse-based devices. We assess the performance of the system and
show that a single server can comfortably handle
more than a hundred simultaneous users accessing full precision 32 bit
astronomy data.
|
1403.6036 | Adaptive MCMC-Based Inference in Probabilistic Logic Programs | cs.AI | Probabilistic Logic Programming (PLP) languages enable programmers to specify
systems that combine logical models with statistical knowledge. The inference
problem, to determine the probability of query answers in PLP, is intractable
in general, thereby motivating the need for approximate techniques. In this
paper, we present a technique for approximate inference of conditional
probabilities for PLP queries. It is an Adaptive Markov Chain Monte Carlo
(MCMC) technique, where the distribution from which samples are drawn is
modified as the Markov Chain is explored. In particular, the distribution is
progressively modified to increase the likelihood that a generated sample is
consistent with evidence. In our context, each sample is uniquely characterized
by the outcomes of a set of random variables. Inspired by reinforcement
learning, our technique propagates rewards to random variable/outcome pairs
used in a sample based on whether the sample was consistent or not. The
cumulative reward of each outcome is used to derive a new "adapted
distribution" for each random variable. For a sequence of samples, the
distributions are progressively adapted after each sample. For a query with
"Markovian evaluation structure", we show that the adapted distribution of
samples converges to the query's conditional probability distribution. For
Markovian queries, we present a modified adaptation process that can be used in
adaptive MCMC as well as adaptive independent sampling. We empirically evaluate
the effectiveness of the adaptive sampling methods for queries with and without
Markovian evaluation structure.
|
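The reward-propagation idea in the abstract above can be illustrated with a small sketch. This is not the paper's algorithm: the blending scheme, the function names, and the parameters (`weight`, `n_samples`) are our own illustrative assumptions.

```python
import random

def adapt_distribution(probs, rewards, weight=0.5):
    """Blend a base categorical distribution with normalized cumulative
    rewards to form an 'adapted distribution' (illustrative only)."""
    total = sum(rewards)
    if total == 0:
        return list(probs)
    adapted = [(1 - weight) * p + weight * (r / total)
               for p, r in zip(probs, rewards)]
    s = sum(adapted)
    return [a / s for a in adapted]

def sample_and_learn(probs, consistent_with_evidence, n_samples=1000, seed=0):
    """Draw outcomes, reward those that produced evidence-consistent
    samples, and progressively adapt the sampling distribution."""
    rng = random.Random(seed)
    rewards = [0.0] * len(probs)
    current = list(probs)
    for _ in range(n_samples):
        outcome = rng.choices(range(len(current)), weights=current)[0]
        if consistent_with_evidence(outcome):
            rewards[outcome] += 1.0
        current = adapt_distribution(probs, rewards)
    return current
```

With a toy evidence predicate that only accepts one outcome, the adapted distribution shifts mass toward that outcome, mimicking the progressive adaptation the abstract describes.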
1403.6046 | Decentralized Primary Frequency Control in Power Networks | cs.SY math.OC | We augment existing generator-side primary frequency control with load-side
controls that are local, ubiquitous, and continuous. The mechanisms on both the
generator and the load sides are decentralized in that their control decisions
are functions of locally measurable frequency deviations. These local
algorithms interact over the network through nonlinear power flows. We design
the local frequency feedback control so that any equilibrium point of the
closed-loop system is the solution to an optimization problem that minimizes
the total generation cost and user disutility subject to power balance across
the entire network. Using the Lyapunov method, we derive a sufficient condition
ensuring that an equilibrium point of the closed-loop system is asymptotically
stable. Simulations demonstrate improvement in both transient and steady-state
performance over traditional generator-only control, even when the total
control capacity remains the same.
|
1403.6048 | Computer-Aided Discovery and Categorisation of Personality Axioms | cs.CE cs.CY cs.LO | We propose a computer-algebraic, order-theoretic framework based on
intuitionistic logic for the computer-aided discovery of personality axioms
from personality-test data and their mathematical categorisation into formal
personality theories in the spirit of F.~Klein's Erlanger Programm for
geometrical theories. As a result, formal personality theories can be
automatically generated, diagrammatically visualised, and mathematically
characterised in terms of categories of invariant-preserving transformations in
the sense of Klein and category theory. Our personality theories and categories
are induced by implicational invariants that are ground instances of
intuitionistic implication, which we postulate as axioms. In our mindset, the
essence of personality, and thus mental health and illness, is its invariance.
The truth of these axioms is algorithmically extracted from histories of
partially-ordered, symbolic data of observed behaviour. The personality-test
data and the personality theories are related by a Galois-connection in our
framework. As data format, we adopt the format of the symbolic values generated
by the Szondi-test, a personality test based on L.~Szondi's unifying,
depth-psychological theory of fate analysis.
|
1403.6067 | Why Do You Spread This Message? Understanding Users Sentiment in Social
Media Campaigns | cs.SI physics.soc-ph | Twitter has been increasingly used for spreading messages about campaigns.
Such campaigns try to gain followers through their Twitter accounts, influence
the followers and spread messages through them. In this paper, we explore the
relationship between followers' sentiment towards the campaign topic and their
rate of retweeting of messages generated by the campaign. Our analysis of
followers of multiple social-media campaigns found statistically significant
correlations between such sentiment and retweeting rate. Based on our analysis,
we have conducted an online intervention study among the followers of different
social-media campaigns. Our study shows that targeting followers based on their
sentiment towards the campaign can yield a higher retweet rate than a number of
other baseline approaches.
|
1403.6089 | Intensional RDB for Big Data Interoperability | cs.DB | A new family of Intensional RDBs (IRDBs), introduced in [1], extends
traditional RDBs with Big Data and flexible 'open schema' features, while
preserving user-defined relational database schemas and all preexisting user
applications containing SQL statements for deployment of such relational data.
The standard RDB data is parsed into an internal vector key/value relation,
yielding the column representation of data used in Big Data applications and
thus covering key/value and column-based Big Data applications within a
unifying RDB framework. Such an IRDB architecture is well suited for massive
migrations from existing slow RDBMSs to this new family of fast IRDBMSs, as it
offers both Big Data and new flexible schema features. Here we present the
interoperability features of IRDBs, which permit queries also over the internal
vector relations created by parsing each federated database in a given
multidatabase system. We show that SchemaLog, with its second-order syntax and
ad hoc Logic Programming, together with its querying fragment, can be embedded
into standard SQL IRDBMSs, so that we obtain full interoperability for IRDBs
using only standard relational SQL to query both data and metadata.
|
1403.6090 | Column Weight Two and Three LDPC Codes with High Rates and Large Girths | cs.IT math.IT | In this paper, the concept of the {\it broken diagonal pair} in the
chess-like square board is used to define some well-structured block designs
whose incidence matrices can be considered as the parity-check matrices of some
high rate cycle codes with girth 12. The structure of the proposed parity-check
matrices significantly reduces the complexity of encoding and decoding.
Interestingly, the constructed regular cycle codes with row-weights $t$, $3\leq
t \leq 20$, $t\neq 7, 15, 16$, have the best lengths among the known regular
girth-12 cycle codes. In addition, the proposed cycle codes can be easily
extended to some high-rate column-weight-3 LDPC codes with girth 6. Simulation
results show that the constructed codes achieve excellent performance; in
particular, the constructed column-weight-3 LDPC codes outperform LDPC codes
based on Steiner triple systems (STS).
|
1403.6102 | Renyi generalizations of the conditional quantum mutual information | quant-ph cond-mat.stat-mech cs.IT hep-th math-ph math.IT math.MP | The conditional quantum mutual information $I(A;B|C)$ of a tripartite state
$\rho_{ABC}$ is an information quantity which lies at the center of many
problems in quantum information theory. Three of its main properties are that
it is non-negative for any tripartite state, that it decreases under local
operations applied to systems $A$ and $B$, and that it obeys the duality
relation $I(A;B|C)=I(A;B|D)$ for a four-party pure state on systems $ABCD$. The
conditional mutual information also underlies the squashed entanglement, an
entanglement measure that satisfies all of the axioms desired for an
entanglement measure. As such, it has been an open question to find R\'enyi
generalizations of the conditional mutual information, that would allow for a
deeper understanding of the original quantity and find applications beyond the
traditional memoryless setting of quantum information theory. The present paper
addresses this question, by defining different $\alpha$-R\'enyi generalizations
$I_{\alpha}(A;B|C)$ of the conditional mutual information, some of which we can
prove converge to the conditional mutual information in the limit
$\alpha\rightarrow1$. Furthermore, we prove that many of these generalizations
satisfy non-negativity, duality, and monotonicity with respect to local
operations on one of the systems $A$ or $B$ (with it being left as an open
question to prove that monotonicity holds with respect to local operations on
both systems). The quantities defined here should find applications in quantum
information theory and perhaps even in other areas of physics, but we leave
this for future work. We also state a conjecture regarding the monotonicity of
the R\'enyi conditional mutual informations defined here with respect to the
R\'enyi parameter $\alpha$. We prove that this conjecture is true in some
special cases and when $\alpha$ is in a neighborhood of one.
|
1403.6106 | Fragmentation transition in a coevolving network with link-state
dynamics | physics.soc-ph cs.SI | We study a network model that couples the dynamics of link states with the
evolution of the network topology. The state of each link, either A or B, is
updated according to the majority rule or zero-temperature Glauber dynamics, in
which links adopt the state of the majority of their neighboring links in the
network. Additionally, a link that is in a local minority is rewired to a
randomly chosen node. While large systems evolving under the majority rule
alone always fall into disordered topological traps composed of frustrated
links, any amount of rewiring is able to drive the network to complete order,
by relinking frustrated links and so releasing the system from traps. However,
depending on the relative rate of the majority rule and the rewiring processes,
the system evolves towards different ordered absorbing configurations: either a
one-component network with all links in the same state or a network fragmented
into two components with opposite states. For low rewiring rates and
finite-size networks there is a domain of bistability between fragmented and
non-fragmented final states. Finite-size scaling indicates that fragmentation
is the only possible scenario for large systems and any nonzero rate of
rewiring.
|
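As a toy illustration of the zero-temperature Glauber/majority-rule link dynamics described in the abstract above (without the rewiring move), one can update randomly chosen links on a fixed graph. Everything here, including the tie-breaking rule and the data layout, is our own sketch rather than the authors' code.

```python
import random

def majority_step(states, neighbors, rng):
    """One zero-temperature Glauber update: a randomly chosen link adopts
    the state ('A' or 'B') held by the majority of its neighboring links;
    ties are broken at random."""
    i = rng.randrange(len(states))
    counts = {"A": 0, "B": 0}
    for j in neighbors[i]:
        counts[states[j]] += 1
    if counts["A"] != counts["B"]:
        states[i] = "A" if counts["A"] > counts["B"] else "B"
    else:
        states[i] = rng.choice(["A", "B"])
    return states

# A consensus configuration is absorbing: on a ring of 6 links, each
# neighboring the two adjacent links, an all-'A' state never changes.
rng = random.Random(0)
states = ["A"] * 6
neighbors = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]
for _ in range(50):
    majority_step(states, neighbors, rng)
```

Adding the model's rewiring move would relink a link in a local minority to a randomly chosen node; the fragmented versus connected outcomes then depend on the relative rates of the two processes, as the abstract describes.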
1403.6143 | Exact correct-decoding exponent of the wiretap channel decoder | cs.IT math.IT | The security level of the achievability scheme for Wyner's wiretap channel
model is examined from the perspective of the probability of correct decoding,
$P_c$, at the wiretap channel decoder. In particular, for finite-alphabet
memoryless channels, the exact random coding exponent of $P_c$ is derived as a
function of the total coding rate $R_1$ and the rate of each sub-code $R_2$.
Two different representations are given for this function and its basic
properties are provided. We also characterize the region of pairs of rates
$(R_1,R_2)$ of full security in the sense of the random coding exponent of
$P_c$, in other words, the region where the exponent of this achievability
scheme is the same as that of blind guessing at the eavesdropper side. Finally,
an analogous derivation of the correct-decoding exponent is outlined for the
case of the Gaussian channel.
|
1403.6150 | Optimal Design of Energy-Efficient Multi-User MIMO Systems: Is Massive
MIMO the Answer? | cs.IT cs.NI math.IT | Assume that a multi-user multiple-input multiple-output (MIMO) system is
designed from scratch to uniformly cover a given area with maximal energy
efficiency (EE). What are the optimal number of antennas, active users, and
transmit power? The aim of this paper is to answer this fundamental question.
We consider jointly the uplink and downlink with different processing schemes
at the base station and propose a new realistic power consumption model that
reveals how the above parameters affect the EE. Closed-form expressions for the
EE-optimal value of each parameter, when the other two are fixed, are provided
for zero-forcing (ZF) processing in single-cell scenarios. These expressions
prove how the parameters interact. For example, in sharp contrast to common
belief, the transmit power is found to increase (not to decrease) with the
number of antennas. This implies that energy-efficient systems can operate in
high signal-to-noise ratio regimes in which interference-suppressing signal
processing is mandatory. Numerical and analytical results show that the maximal
EE is achieved by a massive MIMO setup wherein hundreds of antennas are
deployed to serve a relatively large number of users using ZF processing. The
numerical results show the same behavior under imperfect channel state
information and in symmetric multi-cell scenarios.
|
1403.6164 | Wireless Information and Power Transfer in Cooperative Networks with
Spatially Random Relays | cs.IT math.IT | In this paper, the application of wireless information and power transfer to
cooperative networks is investigated, where the relays in the network are
randomly located and based on the decode-forward strategy. For the scenario
with one source-destination pair, three different strategies for using the
available relays are studied, and their impact on the outage probability and
diversity gain is characterized by applying stochastic geometry. By using the
assumptions that the path loss exponent is two and that the relay-destination
distances are much larger than the source-relay distances, closed form
analytical results can be developed to demonstrate that the use of energy
harvesting relays can achieve the same diversity gain as the case with
conventional self-powered relays. For the scenario with multiple sources, the
relays can be viewed as a type of scarce resource, where the sources compete
with each other to get help from the relays. Such a competition is modeled as a
coalition formation game, and two distributed game theoretic algorithms are
developed based on different payoff functions. Simulation results are provided
to confirm the accuracy of the developed analytical results and facilitate a
better performance comparison.
|
1403.6167 | MoM-SO: a Complete Method for Computing the Impedance of Cable Systems
Including Skin, Proximity, and Ground Return Effects | cs.CE | The availability of accurate and broadband models for underground and
submarine cable systems is of paramount importance for the correct prediction
of electromagnetic transients in power grids. Recently, we proposed the MoM-SO
method for extracting the series impedance of power cables while accounting for
skin and proximity effect in the conductors. In this paper, we extend the
method to include ground return effects and to handle cables placed inside a
tunnel. Numerical tests show that the proposed method is more accurate than
widely-used analytic formulas, and is much faster than existing proximity-aware
approaches like finite elements. For a three-phase cable system in a tunnel,
the proposed method requires only 0.3 seconds of CPU time per frequency point,
against the 8.3 minutes taken by finite elements, for a speed-up beyond 1000×.
|
1403.6173 | Coherent Multi-Sentence Video Description with Variable Level of Detail | cs.CV cs.CL | Humans can easily describe what they see in a coherent way and at varying
levels of detail. However, existing approaches for automatic video description
are mainly focused on single sentence generation and produce descriptions at a
fixed level of detail. In this paper, we address both of these limitations: for
a variable level of detail we produce coherent multi-sentence descriptions of
complex videos. We follow a two-step approach where we first learn to predict a
semantic representation (SR) from video and then generate natural language
descriptions from the SR. To produce consistent multi-sentence descriptions, we
model across-sentence consistency at the level of the SR by enforcing a
consistent topic. We also contribute both to the visual recognition of
objects, by proposing a hand-centric approach, and to the robust generation of
sentences, by using a word lattice. Human judges rate our multi-sentence
descriptions as more readable, correct, and relevant than related work. To
understand the difference between more detailed and shorter descriptions, we
collect and analyze a video description corpus of three levels of detail.
|
1403.6183 | Development and evaluation of a 3D model observer with nonlinear
spatiotemporal contrast sensitivity | cs.CV | We investigate improvements to our 3D model observer with the goal of better
matching human observer performance as a function of viewing distance,
effective contrast, maximum luminance, and browsing speed. Two nonlinear
methods of applying the human contrast sensitivity function (CSF) to a 3D model
observer are proposed, namely the Probability Map (PM) and Monte Carlo (MC)
methods. In the PM method, the visibility probability for each frequency
component of the image stack, p, is calculated taking into account Barten's
spatiotemporal CSF, the component modulation, and the human psychometric
function. The probability p is considered to be equal to the perceived
amplitude of the frequency component and thus can be used by a traditional
model observer (e.g., LG-msCHO) in the space-time domain. In the MC method,
each component is randomly kept with probability p or discarded with 1-p. The
amplitude of the retained components is normalized to unity. The methods were
tested using DBT stacks of an anthropomorphic breast phantom processed in a
comprehensive simulation pipeline. Our experiments indicate that both the PM
and MC methods yield results that match human observer performance better than
the linear filtering method as a function of viewing distance, effective
contrast, maximum luminance, and browsing speed.
|
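The Monte Carlo (MC) method's keep-or-discard step from the abstract above can be sketched as follows; the function name and the representation of frequency components as flat lists are illustrative assumptions, not the authors' implementation.

```python
import random

def monte_carlo_filter(amplitudes, visibilities, seed=0):
    """MC-style component selection: each frequency component is kept with
    its visibility probability p (its amplitude then normalized to unity)
    or discarded (zeroed) with probability 1 - p."""
    rng = random.Random(seed)
    kept = []
    for _amplitude, p in zip(amplitudes, visibilities):
        # rng.random() is uniform on [0, 1), so the component survives
        # with probability exactly p.
        kept.append(1.0 if rng.random() < p else 0.0)
    return kept
```

A component with visibility probability 1 is always retained at unit amplitude, and one with probability 0 is always discarded, matching the limiting cases of the psychometric-function-based probability p.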