id | title | categories | abstract |
|---|---|---|---|
1403.1497 | Active Learning for Autonomous Intelligent Agents: Exploration,
Curiosity, and Interaction | cs.AI | In this survey we present different approaches that allow an intelligent
agent to explore its environment autonomously to gather information and learn
multiple tasks. Different communities have proposed different solutions that
are, in many cases, similar and/or complementary. These solutions include
active learning, exploration/exploitation trade-offs, online learning and
social learning. The common aspect of all these approaches is that the agent
itself selects and decides what information to gather next. Applications of
these approaches already include tutoring systems, autonomous grasp learning,
navigation and mapping, and human-robot interaction. We discuss how these approaches are
related, explaining their similarities and their differences in terms of
problem assumptions and metrics of success. We consider that such an integrated
discussion will improve inter-disciplinary research and applications.
|
1403.1521 | Approximation Models of Combat in StarCraft 2 | cs.AI | Real-time strategy (RTS) games make heavy use of artificial intelligence
(AI), especially in the design of computerized opponents. Because of the
computational complexity involved in managing all aspects of these games, many
AI opponents are designed to optimize only a few areas of playing style. In
games like StarCraft 2, a very popular and recently released RTS, most AI
strategies revolve around economic and building efficiency: AI opponents try to
gather and spend all resources as quickly and effectively as possible while
ensuring that no units are idle. The aim of this work was to help address the
need for AI combat strategies that are not computationally intensive. Our goal
was to produce a computationally efficient model that is accurate at predicting
the results of complex battles between diverse armies, including which army
will win and how many units will remain. Our results suggest it may be possible
to develop a relatively simple approximation model of combat that can
accurately predict many battles that do not involve micromanagement. Future
designs of AI opponents may be able to incorporate such an approximation model
into their decision and planning systems to provide a challenge that is
strategically balanced across all aspects of play.
|
1403.1523 | A Novel Method for Comparative Analysis of DNA Sequences by
Ramanujan-Fourier Transform | cs.CE cs.AI | Alignment-free sequence analysis approaches provide important alternatives
to multiple sequence alignment (MSA) in biological sequence analysis because
they have low computational complexity and do not depend on a high level of
sequence identity. However, most existing alignment-free methods do not employ
the full information content of sequences and thus cannot accurately reveal
similarities and differences among DNA sequences. We present a novel
alignment-free computational method for sequence analysis based on the
Ramanujan-Fourier transform (RFT), in which the complete information of DNA
sequences is retained. We represent DNA sequences as four binary indicator
sequences and apply the RFT to convert them into the frequency domain. The
Euclidean distance between the complete RFT coefficients of DNA sequences is
used as the similarity measure. To compare sequences of different lengths in
the Euclidean space of RFT coefficients, we zero-pad the shorter binary
indicator sequences to the length of the longest sequence in the comparison
data. Thus, the DNA sequences are compared in the same
dimensional frequency space without information loss. We demonstrate the
usefulness of the proposed method by presenting experimental results on
hierarchical clustering of genes and genomes. The proposed method opens a new
channel to biological sequence analysis, classification, and structural module
identification.
|
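The pipeline in the abstract above (four binary indicator sequences, RFT via Ramanujan sums, zero-padding to a common length, Euclidean distance) can be sketched compactly. This is a minimal illustration only: the normalization of the RFT coefficients and the number of coefficients retained (here 16) are assumptions, not the paper's exact choices.

```python
from math import gcd, cos, pi

def ramanujan_sum(q, n):
    # c_q(n) = sum of cos(2*pi*k*n/q) over k in [1, q] with gcd(k, q) == 1
    return sum(cos(2 * pi * k * n / q) for k in range(1, q + 1) if gcd(k, q) == 1)

def rft(x, num_coeffs):
    # Project the signal onto the first num_coeffs Ramanujan sums
    # (one common normalization; the paper's may differ).
    N = len(x)
    return [sum(x[n] * ramanujan_sum(q, n + 1) for n in range(N)) / N
            for q in range(1, num_coeffs + 1)]

def rft_distance(seq1, seq2, num_coeffs=16):
    # Euclidean distance between the RFT coefficients of the four binary
    # indicator sequences, zero-padded to the length of the longer sequence.
    L = max(len(seq1), len(seq2))
    d2 = 0.0
    for base in "ACGT":
        x = [1 if c == base else 0 for c in seq1] + [0] * (L - len(seq1))
        y = [1 if c == base else 0 for c in seq2] + [0] * (L - len(seq2))
        d2 += sum((a - b) ** 2
                  for a, b in zip(rft(x, num_coeffs), rft(y, num_coeffs)))
    return d2 ** 0.5
```

Identical sequences yield distance zero, and the zero-padding lets sequences of different lengths be compared in the same coefficient space.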
1403.1541 | Aligned Image Sets under Channel Uncertainty: Settling a Conjecture by
Lapidoth, Shamai and Wigger on the Collapse of Degrees of Freedom under
Finite Precision CSIT | cs.IT math.IT | A conjecture made by Lapidoth, Shamai and Wigger at Allerton 2005 (also an
open problem presented at ITA 2006) states that the DoF of a 2 user broadcast
channel, where the transmitter is equipped with 2 antennas and each user is
equipped with 1 antenna, must collapse under finite precision CSIT. In this
work we prove that the conjecture is true in all non-degenerate settings (e.g.,
where the probability density function of unknown channel coefficients exists
and is bounded). The DoF collapse even when perfect channel knowledge for one
user is available to the transmitter. This also settles a related recent
conjecture by Tandon et al. The key to our proof is a bound on the number of
codewords that can cast the same image (within noise distortion) at the
undesired receiver whose channel is subject to finite precision CSIT, while
remaining resolvable at the desired receiver whose channel is precisely known
by the transmitter. We are also able to generalize the result along two
directions. First, if the peak of the probability density function is allowed
to scale as O(P^(\alpha/2)), representing the concentration of probability
density (improving CSIT) due to, e.g., quantized feedback at rate
(\alpha/2)\log(P), then the DoF are bounded above by 1+\alpha, which is also
achievable under quantized feedback. Second, we generalize the result to the K
user broadcast channel with K antennas at the transmitter and a single antenna
at each receiver. Here also the DoF collapse under non-degenerate channel
uncertainty. The result directly implies a collapse of DoF to unity under
non-degenerate channel uncertainty for the general K-user interference and MxN
user X networks as well.
|
1403.1546 | Measuring and modelling correlations in multiplex networks | physics.soc-ph cs.SI | The interactions among the elementary components of many complex systems can
be qualitatively different. Such systems are therefore naturally described in
terms of multiplex or multi-layer networks, i.e. networks where each layer
stands for a different type of interaction between the same set of nodes. There
is today a growing interest in understanding when and why a description in
terms of a multiplex network is necessary and more informative than a
single-layer projection. Here, we contribute to this debate by presenting a
comprehensive study of correlations in multiplex networks. Correlations in node
properties, especially degree-degree correlations, have been thoroughly studied
in single-layer networks. Here we extend this idea to investigate and
characterize correlations between the different layers of a multiplex network.
Such correlations are intrinsically multiplex, and we first study them
empirically by constructing and analyzing several multiplex networks from the
real world. In particular, we introduce various measures to characterize
correlations in the activity of the nodes and in their degree at the different
layers, and between activities and degrees. We show that real-world networks
indeed exhibit non-trivial multiplex correlations. For instance, we find cases
where two layers of the same multiplex network are positively correlated in
terms of node degrees, while two other layers are negatively correlated. We
then focus on constructing synthetic multiplex networks, proposing a series of
models to reproduce the correlations observed empirically and/or to assess
their relevance.
|
1403.1569 | On Effects of Imperfect Channel State Information on Null Space Based
Cognitive MIMO Communication | cs.IT math.IT | In cognitive radio networks, secondary users can avoid interference by
transmitting in the null space of their interference channel with the primary
user. However, the performance of this scheme depends on the secondary user's
knowledge of the channel state information needed to perform inverse
waterfilling. We evaluate the effects of imperfect channel estimation on error
rates and on the performance degradation of the primary user, and elucidate the
tradeoffs involved, such as the amount of interference and the guard distance.
Results show that, depending on the amount of perturbation in the channel
matrices, the performance of the null space based technique can degrade to that
of open loop MIMO. The outcomes presented in this paper also apply to null
space based MIMO radar waveform design for avoiding interference with
commercial communication systems operating in the same or adjacent bands.
|
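The core mechanism discussed above can be sketched in a few lines: the secondary transmitter projects its signal onto the null space of the (estimated) interference channel, so with perfect CSI the primary user sees essentially no interference, while estimation error leaks residual interference. The antenna dimensions and the perturbation level below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def crandn(*shape):
    # Complex Gaussian channel coefficients.
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Interference channel from a 4-antenna secondary transmitter to a
# 2-antenna primary receiver (illustrative dimensions).
H = crandn(2, 4)

def null_space_precoder(H_est):
    # Right singular vectors beyond the rank of H_est span its null space.
    _, _, Vh = np.linalg.svd(H_est)
    return Vh[H_est.shape[0]:].conj().T

s = crandn(2, 1)                          # secondary data streams

x_perfect = null_space_precoder(H) @ s    # perfect CSI: H @ x is ~0
leak_perfect = np.linalg.norm(H @ x_perfect)

# Imperfect channel estimate: residual interference reaches the primary user.
H_est = H + 0.1 * crandn(2, 4)
x_imperfect = null_space_precoder(H_est) @ s
leak_imperfect = np.linalg.norm(H @ x_imperfect)
```

`leak_perfect` is at machine-precision level, while `leak_imperfect` grows with the channel estimation error, which is the degradation the paper quantifies.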
1403.1572 | Binary birth-death dynamics and the expansion of cooperation by means of
self-organized growth | physics.soc-ph cs.SI q-bio.PE | Natural selection favors the more successful individuals. This is the
elementary premise that pervades common models of evolution. Under extreme
conditions, however, the process may no longer be probabilistic. Those that
meet certain conditions survive and may reproduce while others perish. By
introducing the corresponding binary birth-death dynamics to spatial
evolutionary games, we observe solutions that are fundamentally different from
those reported previously based on imitation dynamics. Social dilemmas
transform to collective enterprises, where the availability of free expansion
ranges and limited exploitation possibilities dictates self-organized growth.
Strategies that dominate are those that are collectively most apt in meeting
the survival threshold, rather than those that succeed in exploiting others for
unfair benefits. Revisiting Darwinian principles with the focus on survival
rather than imitation thus reveals the most counterintuitive ways of
reconciling cooperation with competition.
|
1403.1591 | Robust PCA with Partial Subspace Knowledge | cs.IT math.IT | In recent work, robust Principal Components Analysis (PCA) has been posed as
a problem of recovering a low-rank matrix $\mathbf{L}$ and a sparse matrix
$\mathbf{S}$ from their sum, $\mathbf{M}:= \mathbf{L} + \mathbf{S}$, and a
provably exact convex optimization solution called PCP has been proposed. This
work studies the following problem. Suppose that we have partial knowledge
about the column space of the low rank matrix $\mathbf{L}$. Can we use this
information to improve the PCP solution, i.e. allow recovery under weaker
assumptions? We propose here a simple but useful modification of the PCP idea,
called modified-PCP, that allows us to use this knowledge. We derive its
correctness result which shows that, when the available subspace knowledge is
accurate, modified-PCP indeed requires significantly weaker incoherence
assumptions than PCP. Extensive simulations are also used to illustrate this.
Comparisons with PCP and other existing work are shown for a stylized real
application as well. Finally, we explain how this problem naturally occurs in
many applications involving time series data, i.e. in what is called the online
or recursive robust PCA problem. A corollary for this case is also given.
|
1403.1596 | Energy Consumption in multi-user MIMO systems: Impact of user mobility | cs.IT math.IT | In this work, we consider the downlink of a single-cell multi-user
multiple-input multiple-output system in which zero-forcing precoding is used
at the base station (BS) to serve a certain number of user equipments (UEs). A
fixed data rate is guaranteed at each UE. The UEs move around in the cell
according to a Brownian motion, thus the path losses change over time and the
energy consumption fluctuates accordingly. We aim at determining the
distribution of the energy consumption. To this end, we analyze the asymptotic
regime where the number of antennas at the BS and the number of UEs grow large
with a given ratio. It turns out that the energy consumption is asymptotically
a Gaussian random variable whose mean and variance are derived analytically.
These results can, for example, be used to approximate the probability that a
battery-powered BS runs out of energy within a certain time period.
|
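The zero-forcing precoding step at the BS, around which the energy analysis above revolves, is compact: the precoder is the right pseudo-inverse of the channel matrix, so each UE receives its own stream with no intra-cell interference, and the transmit power then tracks the path losses that fluctuate as users move. Dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M_ANT, K_UE = 8, 4                       # BS antennas and UEs (illustrative)

H = rng.standard_normal((K_UE, M_ANT))   # one row per UE; path loss absorbed into H

# Zero-forcing precoder: right pseudo-inverse of H.
W = H.T @ np.linalg.inv(H @ H.T)

effective = H @ W                        # identity: no inter-user interference

# For fixed per-UE rate targets, the consumed power scales with the precoder
# norm, which grows as path losses worsen -- the quantity whose distribution
# the paper characterizes.
power = np.trace(W.T @ W)
```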
1403.1600 | Collaborative Filtering with Information-Rich and Information-Sparse
Entities | stat.ML cs.IT cs.LG math.IT | In this paper, we consider a popular model for collaborative filtering in
recommender systems where some users of a website rate some items, such as
movies, and the goal is to recover the ratings of some or all of the unrated
items of each user. In particular, we consider both the clustering model, where
only users (or items) are clustered, and the co-clustering model, where both
users and items are clustered, and further, we assume that some users rate many
items (information-rich users) and some users rate only a few items
(information-sparse users). When users (or items) are clustered, our algorithm
can recover the rating matrix with $\omega(MK \log M)$ noisy entries while $MK$
entries are necessary, where $K$ is the number of clusters and $M$ is the
number of items. In the case of co-clustering, we prove that $K^2$ entries are
necessary for recovering the rating matrix, and our algorithm achieves this
lower bound within a logarithmic factor when $K$ is sufficiently large. We
compare our algorithms with a well-known algorithm called alternating
minimization (AM), and a similarity score-based algorithm known as the
popularity-among-friends (PAF) algorithm by applying all three to the MovieLens
and Netflix data sets. Our co-clustering algorithm and AM have similar overall
error rates when recovering the rating matrix, both of which are lower than the
error rate under PAF. But more importantly, the error rate of our co-clustering
algorithm is significantly lower than AM and PAF in the scenarios of interest
in recommender systems: when recommending a few items to each user or when
recommending items to users who only rated a few items (these users are the
majority of the total user population). The performance difference increases
even more when noise is added to the datasets.
|
1403.1615 | Interference Localization for Uplink OFDMA Systems in Presence of CFOs | cs.IT math.IT | Multiple carrier frequency offsets (CFOs) present in the uplink of orthogonal
frequency division multiple access (OFDMA) systems adversely affect subcarrier
orthogonality and impose a serious performance loss. In this paper, we propose
the application of time domain receiver windowing to concentrate the leakage
caused by CFOs to a few adjacent subcarriers with almost no additional
computational complexity. This allows us to approximate the interference matrix
with a quasi-banded matrix by neglecting small elements outside a certain band
which enables robust and computationally efficient signal detection. The
proposed CFO compensation technique is applicable to all types of subcarrier
assignment techniques. Simulation results show that the quasi-banded
approximation of the interference matrix is accurate enough to provide almost
the same bit error rate performance as that of the optimal solution. The
excellent performance of our proposed method is also proven through running an
experiment using our FPGA-based system setup.
|
1403.1618 | Design a Persian Automated Plagiarism Detector (AMZPPD) | cs.AI cs.CL | Currently there are many plagiarism detection approaches, but few of them
have been implemented and adapted for the Persian language. In this paper, we
describe the design and implementation of a plagiarism detection system based
on pre-processing and NLP techniques, and present the results of testing it on
a corpus.
|
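A minimal core of such a detector is an overlap measure on normalized word n-grams. The sketch below is language-agnostic (the AMZPPD system itself adds Persian-specific pre-processing and NLP steps not shown here); the Jaccard measure and n = 3 are illustrative choices.

```python
def ngrams(text, n=3):
    # Lowercase, tokenize on whitespace, and collect word n-grams.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(doc1, doc2, n=3):
    # Fraction of shared n-grams: 1.0 for identical documents,
    # 0.0 for documents with no n-gram in common.
    a, b = ngrams(doc1, n), ngrams(doc2, n)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```

A score near 1 flags a candidate pair for closer inspection; real systems combine such scores with stemming and stop-word removal during pre-processing.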
1403.1626 | Can Image-Level Labels Replace Pixel-Level Labels for Image Parsing | cs.CV | This paper presents a weakly supervised sparse learning approach to the
problem of noisily tagged image parsing, or segmenting all the objects within a
noisily tagged image and identifying their categories (i.e. tags). Different
from the traditional image parsing that takes pixel-level labels as strong
supervisory information, our noisily tagged image parsing is provided with
noisy tags of all the images (i.e. image-level labels), which is a natural
setting for social image collections (e.g. Flickr). By oversegmenting all the
images into regions, we formulate noisily tagged image parsing as a weakly
supervised sparse learning problem over all the regions, where the initial
labels of each region are inferred from image-level labels. Furthermore, we
develop an efficient algorithm to solve such weakly supervised sparse learning
problem. The experimental results on two benchmark datasets show the
effectiveness of our approach. More notably, the reported surprising results
shed some light on the question: can image-level labels replace pixel-level
labels (which are hard to obtain) as supervisory information for image
parsing?
|
1403.1639 | Optimal Patching in Clustered Malware Epidemics | cs.CR cs.NI cs.SI cs.SY math.OC | Studies on the propagation of malware in mobile networks have revealed that
the spread of malware can be highly inhomogeneous. Platform diversity, contact
list utilization by the malware, clustering in the network structure, etc. can
also lead to differing spreading rates. In this paper, a general formal
framework is proposed for leveraging such heterogeneity to derive optimal
patching policies that attain the minimum aggregate cost due to the spread of
malware and the surcharge of patching. Using Pontryagin's Maximum Principle for
a stratified epidemic model, it is analytically proven that in the mean-field
deterministic regime, optimal patch disseminations are simple single-threshold
policies. Through numerical simulations, the behavior of optimal patching
policies is investigated in sample topologies and their advantages are
demonstrated.
|
1403.1642 | Optimal Energy-Aware Epidemic Routing in DTNs | cs.SY cs.DC cs.NI math.OC | In this work, we investigate the use of epidemic routing in energy
constrained Delay Tolerant Networks (DTNs). In epidemic routing, messages are
relayed by intermediate nodes at contact opportunities, i.e., when pairs of
nodes come within the transmission range of each other. Each node needs to
decide whether to forward its message upon contact with a new node based on its
own residual energy level and the age of that message. We mathematically
characterize the fundamental trade-off between energy conservation and a
measure of Quality of Service as a dynamic energy-dependent optimal control
problem. We prove that in the mean-field regime, the optimal dynamic forwarding
decisions follow simple threshold-based structures in which the forwarding
threshold for each node depends on its current remaining energy. We then
characterize the nature of this dependence. Our simulations reveal that the
optimal dynamic policy significantly outperforms heuristics.
|
1403.1653 | Automated Tracking and Estimation for Control of Non-rigid Cloth | cs.CV | This report is a summary of research conducted on cloth tracking for
automated textile manufacturing during a two-semester research course at
Georgia Tech. This work was completed in 2009. Advances in current sensing
technology such as the Microsoft Kinect would now allow me to relax certain
assumptions and generally improve the tracking performance. This is because a
major part of my approach described in this paper was to track features in a 2D
image and use these to estimate the cloth deformation. Innovations such as the
Kinect would improve estimation due to the automatic depth information obtained
when tracking 2D pixel locations. Additionally, higher resolution camera images
would probably give better quality feature tracking. However, although I would
use different technology now to implement this tracker, the algorithm described
and implemented in this paper is still a viable approach which is why I am
publishing this as a tech report for reference. In addition, although the
related work is a bit exhaustive, it will be useful to a reader who is new to
methods for tracking and estimation as well as modeling of cloth.
|
1403.1660 | Feature Extraction of ECG Signal Using HHT Algorithm | cs.CV | This paper describes a feature extraction algorithm for electrocardiogram
(ECG) signals using the Hilbert-Huang transform and the wavelet transform. The
ECG signal of each individual is distinct due to the unique structure of the
heart, and feature extraction from the ECG signal enables reliable abnormality
detection and efficient prognosis of heart disorders. Major features such as
amplitude, duration, pre-gradient and post-gradient are extracted from the ECG
signal. Extracting such parameters requires a strong mathematical model; here,
the adaptive analysis model used is the Hilbert-Huang transform (HHT). This
approach is applied to analyze non-linear and non-stationary data; it differs
from existing methods of data analysis in that it does not require an a priori
functional basis. The effectiveness of the proposed scheme is verified through
simulation.
|
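The Hilbert-spectral step of the HHT extracts instantaneous amplitude and phase from the analytic signal; a minimal FFT-based sketch follows. In a full HHT this step is applied to each intrinsic mode function produced by empirical mode decomposition, which is omitted here, and the test tone is illustrative.

```python
import numpy as np

def analytic_signal(x):
    # Analytic signal via the frequency domain: zero the negative
    # frequencies and double the positive ones.
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Instantaneous amplitude (envelope) and phase of a test tone.
t = np.arange(1024) / 1024.0
x = np.cos(2 * np.pi * 50 * t)
z = analytic_signal(x)
envelope = np.abs(z)            # constant for a pure tone
phase = np.unwrap(np.angle(z))  # slope gives the instantaneous frequency
```

For a pure 50 Hz tone the envelope is flat and the phase slope recovers the tone frequency; on real ECG intrinsic mode functions these quantities vary with time and form the extracted features.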
1403.1687 | Rigid-Motion Scattering for Texture Classification | cs.CV | A rigid-motion scattering computes adaptive invariants along translations and
rotations, with a deep convolutional network. Convolutions are calculated on
the rigid-motion group, with wavelets defined on the translation and rotation
variables. It preserves joint rotation and translation information, while
providing global invariants at any desired scale. Texture classification is
studied, through the characterization of stationary processes from a single
realization. State-of-the-art results are obtained on multiple texture
databases, with important rotation and scaling variabilities.
|
1403.1696 | Exact Performance Analysis of the Oracle Receiver for Compressed Sensing
Reconstruction | cs.IT math.IT | A sparse or compressible signal can be recovered from a certain number of
noisy random projections, smaller than what is dictated by classic
Shannon/Nyquist theory. In this paper, we derive the closed-form expression of
the mean square error performance of the oracle receiver, which knows the
sparsity pattern of the
signal. With respect to existing bounds, our result is exact and does not
depend on a particular realization of the sensing matrix. Moreover, our result
holds irrespective of whether the noise affecting the measurements is white or
correlated. Numerical results show a perfect match between equations and
simulations, confirming the validity of the result.
|
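The oracle receiver admits a compact description: knowing the support of the sparse signal, it simply least-squares-inverts the corresponding columns of the sensing matrix. A sketch under white measurement noise; the dimensions and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 64, 8                   # signal length, measurements, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = rng.standard_normal(k)

y = A @ x + 0.01 * rng.standard_normal(m)      # noisy random projections

# Oracle receiver: least squares restricted to the known support.
x_hat = np.zeros(n)
x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
```

The resulting error depends only on the noise and on the conditioning of the support submatrix, which is what makes a closed-form MSE expression tractable.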
1403.1697 | Compressive Hyperspectral Imaging Using Progressive Total Variation | cs.IT cs.CV math.IT | Compressed Sensing (CS) is suitable for remote acquisition of hyperspectral
images for earth observation, since it could exploit the strong spatial and
spectral correlations, allowing a simpler architecture for the onboard
sensors. Solutions proposed so far tend to decouple spatial and spectral
dimensions to reduce the complexity of the reconstruction, not taking into
account that onboard sensors progressively acquire spectral rows rather than
acquiring spectral channels. For this reason, we propose a novel progressive CS
architecture based on separate sensing of spectral rows and joint
reconstruction employing Total Variation. Experimental results run on raw
AVIRIS and AIRS images confirm the validity of the proposed system.
|
1403.1727 | On the Sequence of State Configurations in the Garden of Eden | cs.NE | Autonomous threshold element circuit networks are used to investigate the
structure of neural networks. Because the transition functions of these
circuits are threshold functions, it is necessary to consider the existence of
sequences of state configurations that no transition can produce. In this
study, we focus on all logical functions of four or fewer variables, and we
discuss the periodic sequences and transient series reached from all sequences
of state configurations. Furthermore, by using the sequences of state
configurations in the Garden of Eden, we show that it is easy to obtain
functions that determine the operation of circuit networks.
|
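Garden-of-Eden configurations (those that no transition can produce) of a small threshold-element network can be found by exhaustively enumerating the state space. The 3-element network below, in which each element fires when at least one of the other two elements was on, is a hypothetical example, not one of the paper's circuits.

```python
from itertools import product

def step(state, weights, thresholds):
    # Synchronous update: element i fires iff its weighted input sum
    # meets its threshold.
    return tuple(
        int(sum(w * s for w, s in zip(weights[i], state)) >= thresholds[i])
        for i in range(len(state))
    )

# Hypothetical 3-element network: each element fires when at least one
# of the other two elements was on.
weights = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
thresholds = [1, 1, 1]

states = list(product((0, 1), repeat=3))
image = {step(s, weights, thresholds) for s in states}

# States outside the image of the transition map have no predecessor:
# they belong to the Garden of Eden.
garden_of_eden = sorted(s for s in states if s not in image)
```

For this network the three single-element states (0,0,1), (0,1,0) and (1,0,0) have no predecessor; scaling the same enumeration to four-variable functions matches the setting the paper studies.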
1403.1729 | Continuous Features Discretization for Anomaly Intrusion Detectors
Generation | cs.NI cs.CR cs.NE | Network security is a growing concern, given the evolution of computer
systems and the expansion of attacks. Biological systems have been inspiring
scientists with designs for new adaptive solutions, such as genetic algorithms.
In this paper, we present an approach that uses a genetic algorithm to generate
anomaly network intrusion detectors. The proposed algorithm applies a
discretization method to the continuous features selected for intrusion
detection, to create some homogeneity between values that have different data
types. The intrusion detection system is then tested against the NSL-KDD data
set using different distance methods. A comparison of the results shows that
the proposed approach performs well, and recommendations are given for future
experiments.
|
1403.1732 | Chromatic Dispersion Compensation Using Filter Bank Based Complex-Valued
All-Pass Filter | cs.IT math.IT | A long-haul transmission of 100 Gb/s without optical chromatic-dispersion
(CD) compensation provides a range of benefits regarding cost effectiveness,
power budget, and nonlinearity tolerance. The channel memory is largely
dominated by CD in this case with an intersymbol-interference spread of more
than 100 symbol durations. In this paper, we propose a CD equalization technique
based on nonmaximally decimated discrete Fourier transform (NMDFT) filter bank
(FB) with non-trivial prototype filter and complex-valued infinite impulse
response (IIR) all-pass filter per sub-band. The design of the sub-band IIR
all-pass filter is based on minimizing the mean square error (MSE) in group
delay and phase cost functions in an optimization framework. Necessary
conditions are derived and incorporated in a multi-step and multi-band
optimization framework to ensure the stability of the resulting IIR filter. It
is shown that the complexity of the proposed method grows logarithmically with
the channel memory, therefore, larger CD values can be tolerated with our
approach.
|
1403.1734 | Model Reduction by Moment Matching for Linear Switched Systems | cs.SY | Two moment-matching methods for model reduction of linear switched systems
(LSSs) are presented. The methods are similar to the Krylov subspace methods
used for moment matching for linear systems. The more general of the two
methods is based on the so-called "nice selection" of some vectors in the
reachability or observability space of the LSS. The underlying theory is
closely related to the (partial) realization theory of LSSs. In this paper, the
connection of the methods to the realization theory of LSSs is provided, and
algorithms are developed for the purpose of model reduction. Conditions for
applicability of the methods for model reduction are stated and finally the
results are illustrated on numerical examples.
|
1403.1735 | Ant Colony based Feature Selection Heuristics for Retinal Vessel
Segmentation | cs.NE cs.CV | Feature selection is an essential step for successful data classification,
since it reduces data dimensionality by removing redundant features.
Consequently, it minimizes the classification complexity and time while
maximizing accuracy. In this article, a comparative study of six feature
selection heuristics is conducted in order to select the best subset of
relevant features. The tested feature vector consists of fourteen features
computed for each pixel in the field of view of the retinal images in the
DRIVE database. The comparison is assessed in terms of the sensitivity,
specificity, and accuracy of the feature subset recommended by each heuristic
when applied with the ant colony system. Experimental results indicate that
the feature subset recommended by the relief heuristic outperformed the
subsets recommended by the other tested heuristics.
|
1403.1738 | A Fast Active Set Block Coordinate Descent Algorithm for
$\ell_1$-regularized least squares | math.OC cs.IT math.IT | The problem of finding sparse solutions to underdetermined systems of linear
equations arises in several applications (e.g. signal and image processing,
compressive sensing, statistical inference). A standard tool for dealing with
sparse recovery is the $\ell_1$-regularized least-squares approach that has
been recently attracting the attention of many researchers. In this paper, we
describe an active set estimate (i.e. an estimate of the indices of the zero
variables in the optimal solution) for the considered problem that tries to
quickly identify as many active variables as possible at a given point, while
guaranteeing that some approximate optimality conditions are satisfied. A
relevant feature of the estimate is that it gives a significant reduction of
the objective function when setting to zero all those variables estimated
active. This makes it easy to embed the estimate into a given globally
convergent algorithmic framework. In particular, we include our estimate into a block
coordinate descent algorithm for $\ell_1$-regularized least squares, analyze
the convergence properties of this new active set method, and prove that its
basic version converges with linear rate. Finally, we report some numerical
results showing the effectiveness of the approach.
|
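The problem the active-set method targets, $\min_x \frac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1$, can be solved by plain iterative soft thresholding, which schemes like the paper's active-set estimate are designed to accelerate. A minimal ISTA sketch (not the paper's block coordinate descent method); the step size comes from the Lipschitz constant of the smooth part.

```python
import numpy as np

def ista(A, b, lam, num_iter=2000):
    # Iterative soft thresholding for (1/2)||Ax - b||^2 + lam * ||x||_1.
    t = 1.0 / np.linalg.norm(A, 2) ** 2       # step size 1 / ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(num_iter):
        g = x - t * (A.T @ (A @ x - b))       # gradient step on the smooth part
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # prox of lam*||.||_1
    return x
```

Variables driven to zero by the shrinkage step are exactly the "active" variables the paper's estimate tries to identify early, which is where the speedup comes from.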
1403.1757 | Hilberg Exponents: New Measures of Long Memory in the Process | cs.IT math.IT | The paper concerns the rates of power-law growth of mutual information
computed for a stationary measure or for a universal code. The rates are called
Hilberg exponents and four such quantities are defined for each measure and
each code: two random exponents and two expected exponents. A particularly
interesting case arises for conditional algorithmic mutual information. In this
case, the random Hilberg exponents are almost surely constant on ergodic
sources and are bounded by the expected Hilberg exponents. This property is a
"second-order" analogue of the Shannon-McMillan-Breiman theorem, proved without
invoking the ergodic theorem. It carries over to Hilberg exponents for the
underlying probability measure via Shannon-Fano coding and the Barron inequality.
Moreover, the expected Hilberg exponents can be linked for different universal
codes. Namely, if one code dominates another, the expected Hilberg exponents
are greater for the former than for the latter. The paper is concluded by an
evaluation of Hilberg exponents for certain sources such as the mixture
Bernoulli process and the Santa Fe processes.
|
1403.1773 | Finding Eyewitness Tweets During Crises | cs.CL cs.CY | Disaster response agencies have started to incorporate social media as a
source of fast-breaking information to understand the needs of people affected
by the many crises that occur around the world. These agencies look for tweets
from within the region affected by the crisis to get the latest updates of the
status of the affected region. However, only 1% of all tweets are geotagged with
explicit location information. First responders lose valuable information
because they cannot assess the origin of many of the tweets they collect. In
this work we seek to identify non-geotagged tweets that originate from within
the crisis region. Towards this, we address three questions: (1) is there a
difference between the language of tweets originating within a crisis region
and tweets originating outside the region, (2) what are the linguistic patterns
that can be used to differentiate within-region and outside-region tweets, and
(3) for non-geotagged tweets, can we automatically identify those originating
within the crisis region in real-time?
|
1403.1818 | Gray Codes and Overlap Cycles for Restricted Weight Words | math.CO cs.DM cs.IT math.IT | A Gray code is a listing structure for a set of combinatorial objects such
that some consistent (usually minimal) change property is maintained throughout
adjacent elements in the list. While Gray codes for m-ary strings have been
considered in the past, we provide a new, simple Gray code for fixed-weight
m-ary strings. In addition, we consider a relatively new type of Gray code
known as overlap cycles and prove basic existence results concerning overlap
cycles for fixed-weight and weight-range m-ary words.
|
1403.1824 | Distributed Localization and Tracking of Mobile Networks Including
Noncooperative Objects - Extended Version | cs.IT cs.DC math.IT | We propose a Bayesian method for distributed sequential localization of
mobile networks composed of both cooperative agents and noncooperative objects.
Our method provides a consistent combination of cooperative self-localization
(CS) and distributed tracking (DT). Multiple mobile agents and objects are
localized and tracked using measurements between agents and objects and between
agents. For a distributed operation and low complexity, we combine
particle-based belief propagation with a consensus or gossip scheme. High
localization accuracy is achieved through a probabilistic information transfer
between the CS and DT parts of the underlying factor graph. Simulation results
demonstrate significant improvements in both agent self-localization and object
localization performance compared to separate CS and DT, and very good scaling
properties with respect to the numbers of agents and objects.
|
1403.1835 | Hierarchical Recovery in Compressive Sensing | cs.IT math.CO math.IT | A combinatorial approach to compressive sensing based on a deterministic
column replacement technique is proposed. Informally, it takes as input a
pattern matrix and ingredient measurement matrices, and results in a larger
measurement matrix by replacing elements of the pattern matrix with columns
from the ingredient matrices. This hierarchical technique yields great
flexibility in sparse signal recovery. Specifically, recovery for the resulting
measurement matrix does not depend on any fixed algorithm but rather on the
recovery scheme of each ingredient matrix. In this paper, we investigate
certain trade-offs for signal recovery, considering the computational
investment required. Coping with noise in signal recovery requires additional
conditions, both on the pattern matrix and on the ingredient measurement
matrices.
|
1403.1840 | Multi-scale Orderless Pooling of Deep Convolutional Activation Features | cs.CV | Deep convolutional neural networks (CNN) have shown their promise as a
universal representation for recognition. However, global CNN activations lack
geometric invariance, which limits their robustness for classification and
matching of highly variable scenes. To improve the invariance of CNN
activations without degrading their discriminative power, this paper presents a
simple but effective scheme called multi-scale orderless pooling (MOP-CNN).
This scheme extracts CNN activations for local patches at multiple scale
levels, performs orderless VLAD pooling of these activations at each level
separately, and concatenates the result. The resulting MOP-CNN representation
can be used as a generic feature for either supervised or unsupervised
recognition tasks, from image classification to instance-level retrieval; it
consistently outperforms global CNN activations without requiring any joint
training of prediction layers for a particular target dataset. In absolute
terms, it achieves state-of-the-art results on the challenging SUN397 and MIT
Indoor Scenes classification datasets, and competitive results on
ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets.
|
1403.1861 | Credible Autocoding of Convex Optimization Algorithms | cs.SY | The efficiency of modern optimization methods, coupled with increasing
computational resources, has led to the possibility of real-time optimization
algorithms acting in safety critical roles. There is a considerable body of
mathematical proofs on on-line optimization programs which can be leveraged to
assist in the development and verification of their implementation. In this
paper, we demonstrate how theoretical proofs of real-time optimization
algorithms can be used to describe functional properties at the level of the
code, thereby making it accessible for the formal methods community. The
running example used in this paper is a generic semi-definite programming (SDP)
solver. Semi-definite programs can encode a wide variety of optimization
problems and can be solved in polynomial time at a given accuracy. We describe
a top-down approach that transforms a high-level analysis of the algorithm
into useful code annotations. We formulate some general remarks about how such
a task can be incorporated into a convex programming autocoder. We then take a
first step towards the automatic verification of the optimization program by
identifying key issues to be addressed in future work.
|
1403.1863 | Statistical Structure Learning, Towards a Robust Smart Grid | cs.LG cs.SY | Robust control and maintenance of the grid relies on accurate data. Both PMUs
and state estimators are prone to false data injection attacks. Thus, it is
crucial to have a mechanism for fast and accurate detection of an agent
maliciously tampering with the data---for both preventing attacks that may lead
to blackouts, and for routine monitoring and control tasks of current and
future grids. We propose a decentralized false data injection detection scheme
based on Markov graph of the bus phase angles. We utilize the Conditional
Covariance Test (CCT) to learn the structure of the grid. Using the DC power
flow model, we show that under normal circumstances, and because of
walk-summability of the grid graph, the Markov graph of the voltage angles can
be determined by the power grid graph. Therefore, a discrepancy between
calculated Markov graph and learned structure should trigger the alarm. Local
grid topology is available online from the protection system and we exploit it
to check for mismatch. Should a mismatch be detected, we use correlation
anomaly score to detect the set of attacked nodes. Our method can detect the
most recent stealthy deception attack on the power grid that assumes knowledge
of bus-branch model of the system and is capable of deceiving the state
estimator, damaging power network observability, control, monitoring, demand
response and pricing schemes. Specifically, under the stealthy deception
attack, the Markov graph of phase angles changes. In addition to detecting a state
of attack, our method can detect the set of attacked nodes. To the best of our
knowledge, our remedy is the first to comprehensively detect this sophisticated
attack and it does not need additional hardware. Moreover, our detection scheme
is successful no matter the size of the attacked subset. Simulation of various
power networks confirms our claims.
|
1403.1891 | Counterfactual Estimation and Optimization of Click Metrics for Search
Engines | cs.LG cs.AI stat.AP stat.ML | Optimizing an interactive system against a predefined online metric is
particularly challenging, when the metric is computed from user feedback such
as clicks and payments. The key challenge is the counterfactual nature: in the
case of Web search, any change to a component of the search engine may result
in a different search result page for the same query, but we normally cannot
infer reliably from search log how users would react to the new result page.
Consequently, it appears impossible to accurately estimate online metrics that
depend on user feedback, unless the new engine is run to serve users and
compared with a baseline in an A/B test. This approach, while valid and
successful, is unfortunately expensive and time-consuming. In this paper, we
propose to address this problem using causal inference techniques, under the
contextual-bandit framework. This approach effectively allows one to run
(potentially infinitely) many A/B tests offline from search log, making it
possible to estimate and optimize online metrics quickly and inexpensively.
Focusing on an important component in a commercial search engine, we show how
these ideas can be instantiated and applied, and obtain very promising results
that suggest the wide applicability of these techniques.
|
1403.1893 | Becoming More Robust to Label Noise with Classifier Diversity | stat.ML cs.AI cs.LG | It is widely known in the machine learning community that class noise can be
(and often is) detrimental to inducing a model of the data. Many current
approaches use a single, often biased, measurement to determine if an instance
is noisy. A biased measure may work well on certain data sets, but it can also
be less effective on a broader set of data sets. In this paper, we present
noise identification using classifier diversity (NICD) -- a method for deriving
a less biased noise measurement and integrating it into the learning process.
To lessen the bias of the noise measure, NICD selects a diverse set of
classifiers (based on their predictions of novel instances) to determine which
instances are noisy. We examine NICD as a technique for filtering, instance
weighting, and selecting the base classifiers of a voting ensemble. We compare
NICD with several other noise handling techniques that do not consider
classifier diversity on a set of 54 data sets and 5 learning algorithms. NICD
significantly increases the classification accuracy over the other considered
approaches and is effective across a broad set of data sets and learning
algorithms.
|
1403.1897 | On the Duality of Erasures and Defects | cs.IT math.IT | In this paper, the duality of erasures and defects will be investigated by
comparing the binary erasure channel (BEC) and the binary defect channel (BDC).
The duality holds for channel capacities, capacity achieving schemes, minimum
distances, and upper bounds on the probability of failure to retrieve the
original message. Also, the binary defect and erasure channel (BDEC) will be
introduced by combining the properties of the BEC and the BDC. It will be shown
that the capacity of the BDEC can be achieved by the coding scheme that
combines the encoding for the defects and the decoding for the erasures. This
coding scheme for the BDEC has two separate redundancy parts for correcting
erasures and masking defects. Thus, we will investigate the problem of
redundancy allocation between these two parts.
|
1403.1902 | Quality-based Multimodal Classification Using Tree-Structured Sparsity | cs.CV | Recent studies have demonstrated advantages of information fusion based on
sparsity models for multimodal classification. Among several sparsity models,
tree-structured sparsity provides a flexible framework for extraction of
cross-correlated information from different sources and for enforcing group
sparsity at multiple granularities. However, the existing algorithm only solves
an approximated version of the cost functional and the resulting solution is
not necessarily sparse at the group level. This paper reformulates the
tree-structured sparse model for the multimodal classification task. An accelerated
proximal algorithm is proposed to solve the optimization problem, which is an
efficient tool for feature-level fusion among either homogeneous or
heterogeneous sources of information. In addition, a (fuzzy-set-theoretic)
possibilistic scheme is proposed to weight the available modalities, based on
their respective reliability, in a joint optimization problem for finding the
sparsity codes. This approach provides a general framework for quality-based
fusion that offers added robustness to several sparsity-based multimodal
classification algorithms. To demonstrate their efficacy, the proposed methods
are evaluated on three different applications - multiview face recognition,
multimodal face recognition, and target classification.
|
1403.1937 | A fast eikonal equation solver using the Schrodinger wave equation | math.NA cs.CV cs.NA | We use a Schr\"odinger wave equation formalism to solve the eikonal equation.
In our framework, a solution to the eikonal equation is obtained in the limit
as Planck's constant $\hbar$ (treated as a free parameter) tends to zero of the
solution to the corresponding linear Schr\"odinger equation. The Schr\"odinger
equation corresponding to the eikonal turns out to be a \emph{generalized,
screened Poisson equation}. Despite being linear, it does not have a
closed-form solution for arbitrary forcing functions. We present two different
techniques to solve the screened Poisson equation. In the first approach we use
a standard perturbation analysis approach to derive a new algorithm which is
guaranteed to converge provided the forcing function is bounded and positive.
The perturbation technique requires a sequence of discrete convolutions which
can be performed in $O(N\log N)$ using the Fast Fourier Transform (FFT) where
$N$ is the number of grid points. In the second method we discretize the linear
Laplacian operator by the finite difference method leading to a sparse linear
system of equations which can be solved using the plethora of sparse solvers.
The eikonal solution is recovered from the exponent of the resultant scalar
field. Our approach eliminates the need to explicitly construct viscosity
solutions as customary with direct solutions to the eikonal. Since the linear
equation is computed for a small but non-zero $\hbar$, the obtained solution is
an approximation. Though our solution framework is applicable to the general
class of eikonal problems, we detail specifics for the popular vision
applications of shape-from-shading, vessel segmentation, and path planning.
|
1403.1939 | Extraction of Core Contents from Web Pages | cs.IR | The information available on web pages mostly consists of semi-structured text
documents, represented in XML, HTML, or XHTML format, which lack a formal
document structure. The document does not discriminate between the text and the
schema that represents the text. Also, the amount of structure used to
represent the text depends on the purpose and size of the text document. No
semantics are applied to semi-structured documents. This requires extracting
the core contents of a text document to analyse words or sentences and generate useful
knowledge. This paper discusses several techniques and approaches useful for
extracting core content from semi-structured text documents and their merits
and demerits.
|
1403.1942 | Predictive Overlapping Co-Clustering | cs.LG | In the past few years co-clustering has emerged as an important data mining
tool for two-way data analysis. Co-clustering is more advantageous than
traditional one-dimensional clustering in many ways, such as its ability to
find highly correlated sub-groups of rows and columns. However, one of the
overlooked benefits of co-clustering is that it can be used to extract
meaningful knowledge for various other knowledge extraction purposes. For
example, building predictive models with high dimensional data and
heterogeneous population is a non-trivial task. Co-clusters extracted from such
data, which show similar patterns in both dimensions, can be used for a more
accurate predictive model building. Several applications such as finding
patient-disease cohorts in health care analysis, finding user-genre groups in
recommendation systems and community detection problems can benefit from
co-clustering technique that utilizes the predictive power of the data to
generate co-clusters for improved data analysis.
In this paper, we present the novel idea of Predictive Overlapping
Co-Clustering (POCC) as an optimization problem for a more effective and
improved predictive analysis. Our algorithm generates optimal co-clusters by
maximizing predictive power of the co-clusters subject to the constraints on
the number of row and column clusters. In this paper precision, recall and
f-measure have been used as evaluation measures of the resulting co-clusters.
Results of our algorithm have been compared with two other well-known
techniques, K-means and Spectral co-clustering, over four real data sets,
namely Leukemia, Internet-Ads, Ovarian cancer and MovieLens. The results demonstrate
the effectiveness and utility of our algorithm POCC in practice.
|
1403.1944 | Multi-label ensemble based on variable pairwise constraint projection | cs.LG cs.CV stat.ML | Multi-label classification has attracted an increasing amount of attention in
recent years. To this end, many algorithms have been developed to classify
multi-label data in an effective manner. However, they usually do not consider
the pairwise relations indicated by sample labels, which actually play
important roles in multi-label classification. Inspired by this, we naturally
extend the traditional pairwise constraints to the multi-label scenario via a
flexible thresholding scheme. Moreover, to improve the generalization ability
of the classifier, we adopt a boosting-like strategy to construct a multi-label
ensemble from a group of base classifiers. To achieve these goals, this paper
presents a novel multi-label classification framework named Variable Pairwise
Constraint projection for Multi-label Ensemble (VPCME). Specifically, we take
advantage of the variable pairwise constraint projection to learn a
lower-dimensional data representation, which preserves the correlations between
samples and labels. Thereafter, the base classifiers are trained in the new
data space. For the boosting-like strategy, we employ both the variable
pairwise constraints and the bootstrap steps to diversify the base classifiers.
Empirical studies have shown the superiority of the proposed method in
comparison with other approaches.
|
1403.1946 | Improving Performance of a Group of Classification Algorithms Using
Resampling and Feature Selection | cs.LG | In recent years, the task of finding a meaningful pattern in huge
datasets has become more challenging. Data miners try to adopt innovative
methods to face this problem by applying feature selection methods. In this
paper we propose a new hybrid method in which we use a combination of
resampling, filtering the sample domain and wrapper subset evaluation method
with genetic search to reduce dimensions of Lung-Cancer dataset that we
received from UCI Repository of Machine Learning databases. Finally, we apply
some well-known classification algorithms (Na\"ive Bayes, Logistic, Multilayer
Perceptron, Best First Decision Tree and JRIP) to the resulting dataset and
compare the results and prediction rates before and after the application of
our feature selection method on that dataset. The results show a substantial
progress in the average performance of five classification algorithms
simultaneously and the classification error for these classifiers decreases
considerably. The experiments also show that this method outperforms other
feature selection methods with a lower cost.
|
1403.1949 | Combination of PCA with SMOTE Resampling to Boost the Prediction Rate in
Lung Cancer Dataset | cs.LG cs.CE | Classification algorithms are unable to build reliable models on datasets of
huge size. These datasets contain many irrelevant and redundant features
that mislead the classifiers. Furthermore, many huge datasets have imbalanced
class distribution which leads to bias over majority class in the
classification process. In this paper combination of unsupervised
dimensionality reduction methods with resampling is proposed and the results
are tested on Lung-Cancer dataset. In the first step PCA is applied on
Lung-Cancer dataset to compact the dataset and eliminate irrelevant features
and in the second step SMOTE resampling is carried out to balance the class
distribution and increase the variety of sample domain. Finally, Naive Bayes
classifier is applied on the resulting dataset and the results are compared and
evaluation metrics are calculated. The experiments show the effectiveness of
the proposed method across four evaluation metrics: Overall accuracy, False
Positive Rate, Precision, Recall.
|
1403.1956 | Effect of Social Media on Website Popularity: Differences between Public
and Private Universities in Indonesia | cs.CY cs.SI | Social media have become important for enhancing social
networking and the sharing of information through websites. Social media have
not only changed social networking; they also provide a valuable tool for
social organization, activism, politics, healthcare and even academic relations
in the university. The researchers conducted the present study with two
objectives: (a) to examine the academic use of social media by universities,
and (b) to measure the popularity and visibility of social media owned by
universities. This study was limited to universities in Indonesia. The
population of the study consisted of both public and private universities. The
sample comprised a total of 264 universities whose ranks were included in both
Webometrics and 4ICU in the July 2012 edition. The social media examined
included Facebook, Twitter, Flickr, LinkedIn, YouTube, Wikipedia, blogs, social
network communities owned by the universities, and Open Course Ware. This study
used Alexa and Majestic SEO for data collection and measurement. Data analysis
used the Pearson chi-square test for social media ownership (ordinal data) and
an independent t-test to examine the effects of social media on website
popularity. The study revealed that the majority of social media users used
Facebook, followed by Twitter. There are also significant differences in
popularity (by Alexa Rank) and visibility (by Majestic SEO) between
universities that used social media and those that did not.
|
1403.1974 | Designing an FPGA Synthesizable Computer Vision Algorithm to Detect the
Greening of Potatoes | cs.CV | Potato quality control has improved in recent years thanks to automation
techniques like machine vision, mainly by making the classification between
different quality grades faster, safer and less subjective. In this study we
design a computer vision algorithm for grading potatoes according to the
greening of the surface color of the potato. The ratio of green pixels to the
total number of pixels on the potato surface is found; the higher the ratio,
the worse the potato. First the image is converted into serial data, and then
processing is done in RGB colour space. The green part of the potato is also
shown by de-serializing the output. The same algorithm is then synthesized on
an FPGA, and the result shows a thousand-fold speed improvement in the case of
hardware synthesis.
|
1403.2000 | A Galois-Connection between Myers-Briggs' Type Indicators and Szondi's
Personality Profiles | cs.CE cs.CY | We propose a computable Galois-connection between Myers-Briggs' Type
Indicators (MBTIs), the most widely-used personality measure for
non-psychiatric populations (based on C.G. Jung's personality types), and
Szondi's personality profiles (SPPs), a less well-known but, as we show, finer
personality measure for psychiatric as well as non-psychiatric populations
(conceived as a unification of the depth psychology of S. Freud, C.G. Jung, and
A. Adler). The practical significance of our result is that our
Galois-connection provides a pair of computable, interpreting translations
between the two personality spaces of MBTIs and SPPs: one concrete from
MBTI-space to SPP-space (because SPPs are finer) and one abstract from
SPP-space to MBTI-space (because MBTIs are coarser). Thus Myers-Briggs' and
Szondi's personality-test results are mutually interpretable and
inter-translatable, even automatically by computers.
|
1403.2001 | EEG Compression of Scalp Recordings based on Dipole Fitting | cs.IT math.IT | A novel technique for Electroencephalogram (EEG) compression is proposed in
this article. This technique models the intrinsic dependency inherent between
the different EEG channels. It is based on dipole fitting that is usually used
in order to find a solution to the classic problems in EEG analysis: inverse
and forward problems. The suggested compression system uses dipole fitting as a
first building block to provide an approximation of the recorded signals. Then,
(based on a smoothness factor) appropriate coding techniques are suggested to
compress the residuals of the fitting process. Results show that this technique
works well for different types of recordings and is even able to provide
near-lossless compression for event-related potentials.
|
1403.2002 | Time Series Analysis on Stock Market for Text Mining Correlation of
Economy News | cs.CE cs.IR | This paper proposes an information retrieval method for economy news. The
effects of economy news are researched at the word level, and stock market
values are considered as the ground truth. The correlation between stock market
prices and economy news is an already addressed problem for most of the
countries. The most well-known approach is applying the text mining approaches
to the news and some time series analysis techniques over stock market closing
values in order to apply classification or clustering algorithms over the
features extracted. This study goes further and asks the question: what are
the available time series analysis techniques for the stock market closing
values, and which one is the most suitable? In this study, the news and their
dates are collected into a database and text mining is applied over the news,
the text mining part has been kept simple with only term frequency-inverse
document frequency method. For the time series analysis part, we have studied
10 different methods such as random walk, moving average, acceleration,
Bollinger band, price rate of change, periodic average, difference, momentum or
relative strength index and their variation. In this study we have also
explained these techniques in a comparative way and we have applied the methods
over Turkish Stock Market closing values for more than a 2 year period. On the
other hand, we have applied the term frequency-inverse document frequency
method on the economy news of one of the high-circulating newspapers in Turkey.
|
1403.2003 | The Impact of Employment Web Sites' Traffic on Unemployment: A Cross
Country Comparison | stat.AP cs.CY cs.IR | Although employment web sites have recently become the main source for the
recruitment and selection process, the relation between those sites and
unemployment rates is seldom addressed. Deriving data from 32 countries and 427 web
sites, this study explores the correlation between unemployment rates of
European countries and the attractiveness of country specific employment web
sites. It also compares the changes in unemployment rates and traffic on all
the aforementioned web sites. The results showed that there is a strong
correlation between web site traffic and unemployment rates.
|
1403.2004 | Natural Language Feature Selection via Cooccurrence | cs.CL | Specificity is important for extracting collocations, keyphrases, multi-word
and index terms [Newman et al. 2012]. It is also useful for tagging, ontology
construction [Ryu and Choi 2006], and automatic summarization of documents
[Louis and Nenkova 2011, Chali and Hassan 2012]. Term frequency and
inverse-document frequency (TF-IDF) are typically used to do this, but fail to
take advantage of the semantic relationships between terms [Church and Gale
1995]. The result is that general idiomatic terms are mistaken for specific
terms. We demonstrate use of relational data for estimation of term
specificity. The specificity of a term can be learned from its distribution of
relations with other terms. This technique is useful for identifying relevant
words or terms for other natural language processing tasks.
|
1403.2006 | An IAC Approach for Detecting Profile Cloning in Online Social Networks | cs.SI cs.CR | Nowadays, Online Social Networks (OSNs) are popular websites on the internet,
on which millions of users register and share their personal information with
others. Privacy threats and the disclosure of personal information are the
most important concerns of OSN users. Recently, a new attack named the
Identity Cloned Attack has been detected on OSNs. In this attack the attacker
tries to create a fake identity of a real user in order to access the private
information of the user's friends, which they do not publish on their public
profiles. In today's OSNs there are some verification services, but they are
not active services and are useful only for users who are familiar with online
identity issues. In this paper, identity cloned attacks are explained in more
detail and a new, precise method to detect profile cloning in online social
networks is proposed. In this method, first, the social network is represented
in the form of a graph; then, according to similarities among users, this
graph is divided into smaller communities. Afterwards, all of the profiles
similar to the real profile are gathered (from the same community), the
strength of relationship (between each selected profile and the real profile)
is calculated, and those with the least strength of relationship are verified
by a mutual-friend system. In this study, in order to evaluate the effectiveness of
proposed method, all steps are applied on a dataset of Facebook, and finally
this work is compared with two previous works by applying them on the dataset.
|
1403.2010 | Optimal Power Allocation for Distributed BLUE Estimation with Linear
Spatial Collaboration | cs.IT math.IT | This paper investigates the problem of linear spatial collaboration for
distributed estimation in wireless sensor networks. In this context, the
sensors share their local noisy (and potentially spatially correlated)
observations with each other through error-free, low cost links based on a
pattern defined by an adjacency matrix. Each sensor connected to a central
entity, known as the fusion center (FC), forms a linear combination of the
observations to which it has access and sends the resulting signal to the FC
through an orthogonal fading channel. The FC combines these received signals to
find the best linear unbiased estimator of the vector of unknown signals
observed by individual sensors. The main novelty of this paper is the
derivation of an optimal power-allocation scheme in which the coefficients used
to form linear combinations of noisy observations at the sensors connected to
the FC are optimized. Through this optimization, the total estimation
distortion at the FC is minimized, given a constraint on the maximum cumulative
transmit power in the entire network. Numerical results show that even with a
moderate connectivity across the network, spatial collaboration among sensors
significantly reduces the estimation distortion at the FC.
|
1403.2013 | Performance and Robustness Analysis of Stochastic Jump Linear Systems
using Wasserstein metric | cs.SY math.PR | This paper focuses on the performance and the robustness analysis of
stochastic jump linear systems. The state trajectory under a stochastic jump
process becomes a random variable, which gives rise to probability
distributions of the system state. Therefore, we need to adopt a proper metric
to measure the system performance with respect to stochastic switching. In this
perspective, the Wasserstein metric, which assesses the distance between
probability density functions, is applied to provide the performance and robustness
analysis. Both the transient and steady-state performance of the systems with
given initial state uncertainties can be measured in this framework. Also, we
prove that the convergence of this metric implies the mean square stability.
Overall, this study provides a unifying framework for the performance and the
robustness analysis of general stochastic jump linear systems, not
necessarily the Markovian jump processes commonly used for stochastic
switching. The practical usefulness and efficiency of the proposed method are
verified through numerical examples.
|
1403.2024 | Node Removal Vulnerability of the Largest Component of a Network | cs.SI cs.NI | The connectivity structure of a network can be very sensitive to removal of
certain nodes in the network. In this paper, we study the sensitivity of the
largest component size to node removals. We prove that minimizing the largest
component size is equivalent to solving a matrix one-norm minimization problem
whose column vectors are orthogonal and sparse and form a basis of the
null space of the associated graph Laplacian matrix. A greedy node removal
algorithm is then proposed based on the matrix one-norm minimization. In
comparison with other node centralities such as node degree and betweenness,
experimental results on the US power grid dataset validate the effectiveness of the
proposed approach in terms of reduction of the largest component size with
relatively few node removals.
|
1403.2031 | Texture Defect Detection in Gradient Space | cs.CV | In this paper, we propose a machine vision algorithm for automatically
detecting defects in patterned textures with the help of gradient space and its
energy. Experiments on real fabric images with defects show that the proposed
method can be used for automatic detection of fabric defects in textile
industries.
|
1403.2065 | Categorization Axioms for Clustering Results | cs.LG | Cluster analysis has attracted more and more attention in the field of
machine learning and data mining. Numerous clustering algorithms have been
proposed and are being developed due to diverse theories and various
requirements of emerging applications. It is therefore well worth establishing
a unified axiomatic framework for data clustering. In the literature, this is
an open problem that has proved very challenging. In this paper, clustering
results are axiomatized by assuming that a proper clustering result should
satisfy categorization axioms. The proposed axioms not only introduce a
classification of clustering results and inequalities between clustering
results, but are also consistent with the prototype and exemplar theories of
categorization models in cognitive science. Moreover, the proposed axioms lead
to three principles for designing clustering algorithms and cluster validity
indices, which many popular clustering algorithms and cluster validity
indices follow.
|
1403.2077 | Application of Asynchronous Weak Commitment Search in Autonomous Quality
of Service Provision in Cognitive Radio Networks | cs.NI cs.MA | This article presents a distributed solution to autonomous quality of service
provision in cognitive radio networks. Specifically, cognitive STDMA and CDMA
communication networks are studied. Based on asynchronous weak commitment
search, the task of QoS provision is distributed among different network nodes.
Simulation results verify that this scheme converges very quickly to the
optimal solution, which makes it suitable for practical real-time systems. This
application of artificial intelligence in wireless and mobile communications
can be used in home automation and networking, and in vehicular technology. The
generalizations and extensions of this approach can be used in Long Term
Evolution Self-Organizing Networks (LTE-SONs). In addition, it can pave the way
for decentralized and autonomous QoS provision in capillary networks that reach
end nodes in the Internet of Things, where central management is either
unavailable or inefficient.
|
1403.2079 | Joint Power Control in Wiretap Interference Channels | cs.IT math.IT | Interference in wireless networks degrades the signal quality at the
terminals. However, it can potentially enhance the secrecy rate. This paper
investigates the secrecy rate in a two-user interference network where one of
the users, namely user 1, needs to establish a confidential connection. User
1 wants to prevent an unintended user of the network from decoding its
transmission. User 1 has to transmit such that its secrecy rate is maximized
while the quality of service at the destination of the other user, user 2, is
satisfied, and both users' power limits are taken into account. We consider two
scenarios: 1) user 2 changes its power in favor of user 1, an altruistic
scenario; 2) user 2 is selfish and only aims to maintain the minimum quality of
service at its destination, an egoistic scenario. It is shown that there is a
threshold for user 2's transmission power, only below or above which (depending
on the channel qualities) user 1 can achieve a positive secrecy rate.
Closed-form solutions are obtained in order to perform joint optimal power
control. Further, a new metric called secrecy energy efficiency is introduced.
We show that in general, the secrecy energy efficiency of user 1 in an
interference channel scenario is higher than that of an interference-free
channel.
|
1403.2081 | Diversity of Linear Transceivers in MIMO AF Half-duplex Relaying
Channels | cs.IT math.IT | Linear transceiving schemes between the relay and the destination have
recently attracted much interest in MIMO amplify-and-forward (AF) relaying
systems due to their low implementation complexity. In this paper, we provide a
comprehensive analysis of the diversity order of the linear zero-forcing (ZF)
and minimum mean squared error (MMSE) transceivers. Firstly, we obtain a
compact closed-form expression for the diversity-multiplexing tradeoff (DMT)
through tight upper and lower bounds. While our DMT analysis accurately
predicts the performance of the ZF transceivers, it is observed that the MMSE
transceivers exhibit a complicated rate dependent behavior, and thus are very
unpredictable via DMT for finite rate cases. Secondly, we highlight this
interesting behavior of the MMSE transceivers and characterize the diversity
order at all finite rates. This leads to a closed-form expression for the
diversity-rate tradeoff (DRT) which reveals the relationship between the
diversity, the rate, and the number of antennas at each node. Our DRT analysis
complements our previous work on DMT, thereby providing a complete
understanding of the diversity order of linear transceiving schemes in MIMO AF
relaying channels.
|
1403.2111 | Protograph-Based Raptor-Like LDPC Codes | cs.IT math.IT | This paper proposes a class of rate-compatible LDPC codes, called
protograph-based Raptor-like (PBRL) codes. The construction is focused on
binary codes for BI-AWGN channels. As with the Raptor codes, additional parity
bits are produced by exclusive-OR operations on the precoded bits, providing
extensive rate compatibility. Unlike Raptor codes, the structure of each
additional parity bit in the protograph is explicitly designed through density
evolution. The construction method provides low iterative decoding thresholds
and the lifted codes result in excellent error rate performance for
long-blocklength PBRL codes. For short-blocklength PBRL codes the protograph
design and lifting must avoid undesired graphical structures such as trapping
sets and absorbing sets while also seeking to minimize the density evolution
threshold. Simulation results are shown for information block sizes of $k=192$,
$16368$ and $16384$. At the same information block size of $k=16368$
bits, the PBRL codes outperform the best known standardized codes, the AR4JA
codes, in the waterfall region. The PBRL codes also perform comparably to DVB-S2
codes even though the DVB-S2 codes use LDPC codes with longer blocklengths and
are concatenated with outer BCH codes.
|
1403.2116 | Global Synchronization of Pulse-Coupled Oscillators Interacting on Cycle
Graphs | cs.SY | The importance of pulse-coupled oscillators (PCOs) in biology and engineering
has motivated research to understand basic properties of PCO networks. Despite
the large body of work addressing PCOs, a global synchronization result for
networks that are more general than all-to-all connected is still unavailable.
In this paper we address global synchronization of PCO networks described by
cycle graphs. It is shown for the bidirectional cycle case that as the number
of oscillators in the cycle grows, the coupling strength must be increased in
order to guarantee synchronization for arbitrary initial conditions. For the
unidirectional cycle case, the strongest coupling cannot ensure global
synchronization yet a refractory period in the phase response curve is
sufficient to enable global synchronization. Analytical findings are confirmed
by numerical simulations.
|
1403.2124 | Generating Music from Literature | cs.CL | We present a system, TransProse, that automatically generates musical pieces
from text. TransProse uses known relations between elements of music such as
tempo and scale, and the emotions they evoke. Further, it uses a novel
mechanism to determine sequences of notes that capture the emotional activity
in the text. The work has applications in information visualization, in
creating audio-visual e-books, and in developing music apps.
|
1403.2140 | Scientometrics: Untangling the topics | cs.DL cs.SI physics.data-an physics.soc-ph | Measuring science is based on comparing articles to similar others. However,
keyword-based groups of thematically similar articles are predominantly small.
These small sizes keep the statistical errors of comparisons high. With the
growing availability of bibliographic data, such statistical errors can be
reduced by merging methods of thematic grouping, citation networks and keyword
co-usage.
|
1403.2150 | Constraint-based Causal Discovery from Multiple Interventions over
Overlapping Variable Sets | stat.ML cs.AI | Scientific practice typically involves repeatedly studying a system, each
time trying to unravel a different perspective. In each study, the scientist
may take measurements under different experimental conditions (interventions,
manipulations, perturbations) and measure different sets of quantities
(variables). The result is a collection of heterogeneous data sets coming from
different data distributions. In this work, we present algorithm COmbINE, which
accepts a collection of data sets over overlapping variable sets under
different experimental conditions; COmbINE then outputs a summary of all causal
models indicating the invariant and variant structural characteristics of all
models that simultaneously fit all of the input data sets. COmbINE converts
estimated dependencies and independencies in the data into path constraints on
the data-generating causal model and encodes them as a SAT instance. The
algorithm is sound and complete in the sample limit. To account for conflicting
constraints arising from statistical errors, we introduce a general method for
sorting constraints in order of confidence, computed as a function of their
corresponding p-values. In our empirical evaluation, COmbINE outperforms the
only pre-existing similar algorithm in terms of efficiency; the latter
additionally admits feedback cycles but does not admit conflicting constraints,
which hinders its applicability to real data. As a proof-of-concept, COmbINE is
employed to co-analyze 4 real, mass-cytometry data sets measuring
phosphorylated protein concentrations of overlapping protein sets under 3
different interventions.
|
1403.2152 | Parsing using a grammar of word association vectors | cs.CL cs.NE | This paper was first drafted in 2001 as a formalization of the system
described in U.S. patent 7,392,174. It describes a system for implementing
a parser based on a kind of cross-product over vectors of contextually similar
words. It is being published now in response to nascent interest in vector
combination models of syntax and semantics. The method used aggressive
substitution of contextually similar words and word groups to enable product
vectors to stay in the same space as their operands and make entire sentences
comparable syntactically, and potentially semantically. The vectors generated
had sufficient representational strength to generate parse trees at least
comparable with contemporary symbolic parsers.
|
1403.2170 | On the Harmonic Oscillation of High-order Linear Time Invariant Systems | cs.DM cs.SY | Linear time invariant (LTI) systems are widely used for modeling system
dynamics in science and engineering problems. The harmonic oscillation of LTI
systems is widely used for the modeling and analysis of periodic physical
phenomena. This study investigates sufficient conditions for obtaining harmonic
oscillation in high-order LTI systems. The paper presents a design procedure
for controlling the harmonic oscillation of single-input single-output
high-order LTI systems. The LTI system coefficients are calculated by solving a
linear equation set, which imposes a stable sinusoidal oscillation solution on
the characteristic polynomials of the LTI systems. An example design is
demonstrated for fourth-order LTI systems, and the control of harmonic
oscillations is discussed by illustrating the Hilbert transform and spectrogram
of the oscillation signals.
|
1403.2174 | A New Technique for INS/GNSS Attitude and Parameter Estimation Using
Online Optimization | cs.RO cs.SY | Integration of inertial navigation system (INS) and global navigation
satellite system (GNSS) is usually implemented in engineering applications by
way of Kalman-like filtering. This form of INS/GNSS integration is prone to
attitude initialization failure, especially when the host vehicle is moving
freely. This paper proposes an online constrained-optimization method to
simultaneously estimate the attitude and other related parameters including
GNSS antenna's lever arm and inertial sensor biases. This new technique
benefits from self-initialization in which no prior attitude or sensor
measurement noise information is required. Numerical results are reported to
validate its effectiveness and promise in highly accurate INS/GNSS applications.
|
1403.2189 | Joint Wireless Information and Energy Transfer with Reduced Feedback in
MIMO Interference Channels | cs.IT math.IT | To determine the transmission strategy for joint wireless information and
energy transfer (JWIET) in the MIMO interference channel (IFC), the information
access point (IAP) and energy access point (EAP) require the channel state
information (CSI) of their associated links to both the information-decoding
(ID) mobile stations (MSs) and energy-harvesting (EH) MSs (so-called local
CSI). In this paper, to reduce the feedback overhead of MSs for the JWIET in
two-user MIMO IFC, we propose a Geodesic energy beamforming scheme that
requires partial CSI at the EAP. Furthermore, in the two-user MIMO IFC, it is
proved that the Geodesic energy beamforming is the optimal strategy. By adding
a rank-one constraint on the transmit signal covariance of IAP, we can further
reduce the feedback overhead to IAP by exploiting Geodesic information
beamforming. Under the rank-one constraint on the IAP's transmit signal, we
prove that the Geodesic information/energy beamforming approach is the optimal strategy
for JWIET in the two-user MIMO IFC. We also discuss the extension of the
proposed rank-one Geodesic information/energy beamforming strategies to general
K-user MIMO IFC. Finally, by analyzing the achievable rate-energy performance
statistically under imperfect partial CSIT, we propose an adaptive bit
allocation strategy for both EH MS and ID MS.
|
1403.2194 | Querying Geometric Figures Using a Controlled Language, Ontological
Graphs and Dependency Lattices | cs.CG cs.AI cs.DB cs.IR | Dynamic geometry systems (DGS) have become basic tools in many areas of
geometry as, for example, in education. Geometry Automated Theorem Provers
(GATP) are an active area of research and are considered as being basic tools
in future enhanced educational software as well as in a next generation of
mechanized mathematics assistants. Recently emerged Web repositories of
geometric knowledge, like TGTP and Intergeo, are an attempt to make the already
vast data set of geometric knowledge widely available. Considering the large
amount of geometric information already available, we face the need for a query
mechanism for descriptions of geometric constructions.
In this paper we discuss two approaches for describing geometric figures
(declarative and procedural), and present algorithms for querying geometric
figures in declaratively and procedurally described corpora, by using a DGS or
a dedicated controlled natural language for queries.
|
1403.2201 | SMML estimators for linear regression and tessellations of hyperbolic
space | cs.IT math.IT | The strict minimum message length (SMML) principle links data compression
with inductive inference. The corresponding estimators have many useful
properties but they can be hard to calculate. We investigate SMML estimators
for linear regression models and we show that they have close connections to
hyperbolic geometry. When equipped with the Fisher information metric, the
linear regression model with $p$ covariates and a sample size of $n$ becomes a
Riemannian manifold, and we show that this is isometric to $(p+1)$-dimensional
hyperbolic space $\mathbb{H}^{p+1}$ equipped with a metric tensor which is $2n$
times the usual metric tensor on $\mathbb{H}^{p+1}$. A natural identification
then allows us to also view the set of sufficient statistics for the linear
regression model as a hyperbolic space. We show that the partition of an SMML
estimator corresponds to a tessellation of this hyperbolic space.
|
1403.2226 | Accelerating Community Detection by Using K-core Subgraphs | physics.soc-ph cs.SI | Community detection is expensive, and the cost generally depends at least
linearly on the number of vertices in the graph. We propose working with a
reduced graph that has many fewer nodes but nonetheless captures key community
structure. The K-core of a graph is the largest subgraph within which each node
has at least K connections. We propose a framework that accelerates community
detection by applying an expensive algorithm (modularity optimization, the
Louvain method, spectral clustering, etc.) to the K-core and then using an
inexpensive heuristic (such as local modularity maximization) to infer
community labels for the remaining nodes. Our experiments demonstrate that the
proposed framework can reduce the running time by more than 80% while
preserving the quality of the solutions. Recent theoretical investigations
provide support for using the K-core as a reduced representation.
|
1403.2239 | Super-Resolution from Short-Time Fourier Transform Measurements | cs.IT math.IT | While spike trains are obviously not band-limited, the theory of
super-resolution tells us that perfect recovery of unknown spike locations and
weights from low-pass Fourier transform measurements is possible provided that
the minimum spacing, $\Delta$, between spikes is not too small. Specifically,
for a cutoff frequency of $f_c$, Donoho [2] shows that exact recovery is
possible if $\Delta > 1/f_c$, but does not specify a corresponding recovery
method. On the other hand, Cand\`es and Fernandez-Granda [3] provide a recovery
method based on convex optimization, which provably succeeds as long as $\Delta
> 2/f_c$. In practical applications one often has access to windowed Fourier
transform measurements, i.e., short-time Fourier transform (STFT) measurements,
only. In this paper, we develop a theory of super-resolution from STFT
measurements, and we propose a method that provably succeeds in recovering
spike trains from STFT measurements provided that $\Delta > 1/f_c$.
|
1403.2295 | Sublinear Models for Graphs | cs.LG cs.CV | This contribution extends linear models for feature vectors to sublinear
models for graphs and analyzes their properties. The results are (i) a
geometric interpretation of sublinear classifiers, (ii) a generic learning rule
based on the principle of empirical risk minimization, (iii) a convergence
theorem for the margin perceptron in the sublinearly separable case, and (iv)
the VC-dimension of sublinear functions. Empirical results on graph data show
that sublinear models on graphs have similar properties as linear models for
feature vectors.
|
1403.2301 | Phase Retrieval using Lipschitz Continuous Maps | math.FA cs.IT math.IT stat.ML | In this note we prove that reconstruction from magnitudes of frame
coefficients (the so called "phase retrieval problem") can be performed using
Lipschitz continuous maps. Specifically we show that when the nonlinear
analysis map $\alpha:{\mathcal H}\rightarrow\mathbb{R}^m$ is injective, with
$(\alpha(x))_k=|\langle x,f_k\rangle|^2$, where $\{f_1,\ldots,f_m\}$ is a frame for the
Hilbert space ${\mathcal H}$, then there exists a left inverse map
$\omega:\mathbb{R}^m\rightarrow {\mathcal H}$ that is Lipschitz continuous.
Additionally we obtain the Lipschitz constant of this inverse map in terms of
the lower Lipschitz constant of $\alpha$. Surprisingly the increase in
Lipschitz constant is independent of the space dimension or frame redundancy.
|
1403.2307 | The Homeostasis Protocol: Avoiding Transaction Coordination Through
Program Analysis | cs.DB | Datastores today rely on distribution and replication to achieve improved
performance and fault-tolerance. But correctness of many applications depends
on strong consistency properties - something that can impose substantial
overheads, since it requires coordinating the behavior of multiple nodes. This
paper describes a new approach to achieving strong consistency in distributed
systems while minimizing communication between nodes. The key insight is to
allow the state of the system to be inconsistent during execution, as long as
this inconsistency is bounded and does not affect transaction correctness. In
contrast to previous work, our approach uses program analysis to extract
semantic information about permissible levels of inconsistency and is fully
automated. We then employ a novel homeostasis protocol to allow sites to
operate independently, without communicating, as long as any inconsistency is
governed by appropriate treaties between the nodes. We discuss mechanisms for
optimizing treaties based on workload characteristics to minimize
communication, as well as a prototype implementation and experiments that
demonstrate the benefits of our approach on common transactional benchmarks.
|
1403.2330 | Subspace clustering using a symmetric low-rank representation | cs.CV | In this paper, we propose a low-rank representation with symmetric constraint
(LRRSC) method for robust subspace clustering. Given a collection of data
points approximately drawn from multiple subspaces, the proposed technique can
simultaneously recover the dimension and members of each subspace. LRRSC
extends the original low-rank representation algorithm by integrating a
symmetric constraint into the low-rankness property of high-dimensional data
representation. The symmetric low-rank representation, which preserves the
subspace structures of high-dimensional data, guarantees weight consistency for
each pair of data points so that highly correlated data points of subspaces are
represented together. Moreover, it can be efficiently calculated by solving a
convex optimization problem. We provide a rigorous proof for minimizing the
nuclear-norm regularized least square problem with a symmetric constraint. The
affinity matrix for spectral clustering can be obtained by further exploiting
the angular information of the principal directions of the symmetric low-rank
representation. This is a critical step towards evaluating the memberships
between data points. Experimental results on benchmark databases demonstrate
the effectiveness and robustness of LRRSC compared with several
state-of-the-art subspace clustering algorithms.
|
1403.2345 | Home Location Identification of Twitter Users | cs.SI cs.CL cs.CY | We present a new algorithm for inferring the home location of Twitter users
at different granularities, including city, state, time zone or geographic
region, using the content of users' tweets and their tweeting behavior. Unlike
existing approaches, our algorithm uses an ensemble of statistical and
heuristic classifiers to predict locations and makes use of a geographic
gazetteer dictionary to identify place-name entities. We find that a
hierarchical classification approach, where time zone, state or geographic
region is predicted first and city is predicted next, can improve prediction
accuracy. We have also analyzed the movement variations of Twitter users, built
a classifier to predict whether a user was travelling in a certain period of
time, and used that to further improve the location detection accuracy. Experimental
evidence suggests that our algorithm works well in practice and outperforms the
best existing algorithms for predicting the home location of Twitter users.
|
1403.2360 | Matching theory for priority-based cell association in the downlink of
wireless small cell networks | cs.IT cs.GT math.IT | The deployment of small cells, overlaid on existing cellular infrastructure,
is seen as a key feature in next-generation cellular systems. In this paper,
the problem of user association in the downlink of small cell networks (SCNs)
is considered. The problem is formulated as a many-to-one matching game in
which the users and SCBSs rank one another based on utility functions that
account for both the achievable performance, in terms of rate and fairness to
cell edge users, as captured by newly proposed priorities. To solve this game,
a novel distributed algorithm that can reach a stable matching is proposed.
Simulation results show that the proposed approach yields an average utility
gain of up to 65% compared to a common association algorithm that is based on
received signal strength. Compared to the classical deferred acceptance
algorithm, the results also show a 40% utility gain and a more fair utility
distribution among the users.
|
1403.2372 | A Hybrid Feature Selection Method to Improve Performance of a Group of
Classification Algorithms | cs.LG | In this paper a hybrid feature selection method is proposed which takes
advantage of wrapper subset evaluation at a lower cost and improves the
performance of a group of classifiers. The method uses a combination of sample
domain filtering and resampling to refine the sample domain and two feature
subset evaluation methods to select reliable features. This method utilizes
both feature space and sample domain in two phases. The first phase filters and
resamples the sample domain and the second phase adopts a hybrid procedure by
information gain, wrapper subset evaluation and genetic search to find the
optimal feature space. Experiments were carried out on different types of
datasets from the UCI Machine Learning Repository, and the results show a rise
in the average performance of five classifiers (Naive Bayes, Logistic,
Multilayer Perceptron, Best First Decision Tree and JRIP) simultaneously and
the classification error for these classifiers decreases considerably. The
experiments also show that this method outperforms other feature selection
methods with a lower cost.
|
1403.2395 | A-infinity Persistence | math.AT cs.CG cs.CV | We introduce and study A-infinity persistence of a given homology filtration
of topological spaces. This is a family, one for each n > 0, of homological
invariants which provide information not readily available from the (persistent)
Betti numbers of the given filtration. This may help to detect noise, not just
in the simplicial structure of the filtration but in further geometrical
properties in which the higher codiagonals of the A-infinity structure are
translated. Based on the classification of zigzag modules, a characterization
of A-infinity persistence in terms of its associated barcode is given.
|
1403.2400 | Batch latency analysis and phase transitions for a tandem of queues with
exponentially distributed service times | math.PR cs.IT math.IT | We analyze the latency or sojourn time L(m,n) for the last customer in a
batch of n customers to exit from the m-th queue in a tandem of m queues in the
setting where the queues are in equilibrium before the batch of customers
arrives at the first queue. We first characterize the distribution of L(m,n)
exactly for every m and n, under the assumption that the queues have unlimited
buffers and that each server has customer independent, exponentially
distributed service times with an arbitrary, known rate. We then evaluate the
first two leading order terms of the distributions in the large m and n limit
and bring into sharp focus the existence of phase transitions in the system
behavior. The phase transition occurs due to the presence of either slow
bottleneck servers or a high external arrival rate. We determine the critical
thresholds for the service rate and the arrival rate, respectively, about which
this phase transition occurs; it turns out that they are the same. This
critical threshold depends, in a manner we make explicit, on the individual
service rates, the number of customers and the number of queues but not on the
external arrival rate.
|
1403.2404 | Scalable RDF Data Compression using X10 | cs.DC cs.DB | The Semantic Web comprises enormous volumes of semi-structured data elements.
For interoperability, these elements are represented by long strings. Such
representations are not efficient for the purposes of Semantic Web applications
that perform computations over large volumes of information. A typical method
for alleviating the impact of this problem is through the use of compression
methods that produce more compact representations of the data. The use of
dictionary encoding for this purpose is particularly prevalent in Semantic Web
database systems. However, centralized implementations present performance
bottlenecks, giving rise to the need for scalable, efficient distributed
encoding schemes. In this paper, we describe an encoding implementation based
on the asynchronous partitioned global address space (APGAS) parallel
programming model. We evaluate performance on a cluster of up to 384 cores and
datasets of up to 11 billion triples (1.9 TB). Compared to the state-of-the-art
MapReduce algorithm, we demonstrate a speedup of 2.6-7.4x and excellent
scalability. These results illustrate the strong potential of the APGAS model
for the efficient implementation of dictionary encoding and contribute to the
engineering of larger-scale Semantic Web applications.
|
1403.2407 | A Composable Method for Real-Time Control of Active Distribution
Networks with Explicit Power Setpoints | cs.SY | The conventional approach for the control of distribution networks, in the
presence of active generation and/or controllable loads and storage, involves a
combination of both frequency and voltage regulation at different time scales.
With the increased penetration of stochastic resources, distributed generation
and demand response, this approach shows severe limitations in both the optimal
and feasible operation of these networks, as well as in the aggregation of the
network resources for upper-layer power systems. An alternative approach is to
directly control the targeted grid by defining explicit and real-time setpoints
for active/reactive power absorptions/injections defined by a solution of a
specific optimization problem; but this quickly becomes intractable when
systems get large or diverse. In this paper, we address this problem and
propose a method for the explicit control of the grid status, based on a common
abstract model characterized by the main property of being composable. That is
to say, subsystems can be aggregated into virtual devices that hide their
internal complexity. Thus the proposed method can easily cope with systems of
any size or complexity. The framework is presented in this Part I, whilst in
Part II we illustrate its application to a CIGR\'E low voltage benchmark
microgrid. In particular, we provide implementation examples with respect to
typical devices connected to distribution networks and evaluate the
performance and benefits of the proposed control framework.
|
1403.2411 | Probabilistic Robustness Analysis of Stochastic Jump Linear Systems | cs.SY math.DS | In this paper, we propose a new method to measure the probabilistic
robustness of stochastic jump linear systems with respect to both initial
state uncertainties and the randomness in switching. The Wasserstein distance,
which defines a metric on the manifold of probability density functions, is
used as a tool for the performance and stability measures. Starting with a
Gaussian distribution representing the initial state uncertainties, the
probability density function of the system state evolves into a mixture of
Gaussians, where the number of Gaussian components grows exponentially. To
cope with the computational complexity caused by this mixture of Gaussians, we
prove that there exists an alternative probability density function that
preserves the exact information at the Wasserstein level. The usefulness and
efficiency of the proposed methods are demonstrated by an example.
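For Gaussian densities the Wasserstein distance used here has a closed form, which is what makes it attractive as a computational tool. A minimal sketch of the standard 2-Wasserstein formula between two Gaussians (generic textbook formula, not the paper's code):

```python
import numpy as np
from scipy.linalg import sqrtm

def wasserstein2_gaussian(m1, S1, m2, S2):
    """2-Wasserstein distance between N(m1, S1) and N(m2, S2).

    Closed form: W2^2 = ||m1 - m2||^2
                        + Tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})
    """
    root = sqrtm(sqrtm(S2) @ S1 @ sqrtm(S2))
    w2_sq = np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * np.real(root))
    return np.sqrt(max(w2_sq, 0.0))
```

When the covariances coincide, the distance reduces to the Euclidean distance between the means; in 1D it reduces to sqrt((m1-m2)^2 + (s1-s2)^2).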
|
1403.2433 | Generalised Mixability, Constant Regret, and Bayesian Updating | cs.LG stat.ML | Mixability of a loss is known to characterise when constant regret bounds are
achievable in games of prediction with expert advice through the use of Vovk's
aggregating algorithm. We provide a new interpretation of mixability via convex
analysis that highlights the role of the Kullback-Leibler divergence in its
definition. This naturally generalises to what we call $\Phi$-mixability where
the Bregman divergence $D_\Phi$ replaces the KL divergence. We prove that
losses that are $\Phi$-mixable also enjoy constant regret bounds via a
generalised aggregating algorithm that is similar to mirror descent.
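For reference, the Bregman divergence that replaces the KL divergence in $\Phi$-mixability is the standard object (this is the generic definition, not notation specific to the paper):

```latex
D_\Phi(x, y) \;=\; \Phi(x) - \Phi(y) - \langle \nabla \Phi(y),\, x - y \rangle .
```

Taking $\Phi$ to be the negative Shannon entropy, $\Phi(p) = \sum_i p_i \log p_i$, recovers $D_\Phi(p, q) = \mathrm{KL}(p \,\|\, q)$ on the probability simplex, so ordinary mixability is the special case $\Phi = -H$.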
|
1403.2439 | String Reconstruction from Substring Compositions | cs.DM cs.DS cs.IT math.IT | Motivated by mass-spectrometry protein sequencing, we consider a
simply-stated problem of reconstructing a string from the multiset of its
substring compositions. We show that all strings of length 7, one less than a
prime, or one less than twice a prime, can be reconstructed uniquely up to
reversal. For all other lengths we show that reconstruction is not always
possible and provide sometimes-tight bounds on the largest number of strings
with given substring compositions. The lower bounds are derived by
combinatorial arguments and the upper bounds by algebraic considerations that
precisely characterize the set of strings with the same substring compositions
in terms of the factorization of bivariate polynomials. The problem can be
viewed as a combinatorial simplification of the turnpike problem, and its
solution may shed light on this long-standing problem as well. Using well known
results on transience of multi-dimensional random walks, we also provide a
reconstruction algorithm that reconstructs random strings over alphabets of
size $\ge4$ in optimal near-quadratic time.
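Since a composition records only the letter counts of a substring, ignoring order, a string and its reversal always yield the same multiset of substring compositions, which is why reconstruction is possible at best up to reversal. A short sketch (illustrative helper names, not the paper's code):

```python
from collections import Counter

def substring_compositions(s):
    """Multiset of compositions (letter-count tuples) over all substrings of s."""
    comps = Counter()
    for i in range(len(s)):
        c = Counter()
        for j in range(i, len(s)):
            c[s[j]] += 1
            # Freeze the current counts as an order-independent key.
            comps[tuple(sorted(c.items()))] += 1
    return comps
```

For a length-n string there are n(n+1)/2 substrings, and `substring_compositions(s) == substring_compositions(s[::-1])` holds for every s.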
|
1403.2471 | Mean Square Stability for Stochastic Jump Linear Systems via Optimal
Transport | cs.SY math.DS | In this note, we provide a unified framework for the mean square stability of
stochastic jump linear systems via optimal transport. The Wasserstein metric,
which arises from optimal transport and measures the distance between
probability density functions, enables the stability analysis. Without any
assumption on the underlying jump process, this Wasserstein distance
guarantees mean square stability for general stochastic jump linear systems,
not necessarily Markovian jump systems. The validity of the proposed methods
is verified by recovering already-known stability conditions within this
framework.
|
1403.2482 | Removing Mixture of Gaussian and Impulse Noise by Patch-Based Weighted
Means | cs.CV | We first establish a law of large numbers and a convergence theorem in
distribution to show the rate of convergence of the non-local means filter for
removing Gaussian noise. We then introduce the notion of degree of similarity
to measure the role of similarity for the non-local means filter. Based on the
convergence theorems, we propose a patch-based weighted means filter for
removing impulse noise and its mixture with Gaussian noise by combining the
essential idea of the trilateral filter and that of the non-local means filter.
Our experiments show that our filter is competitive with recently proposed
methods.
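The non-local means idea underlying this filter replaces each sample by a weighted mean of all samples, with weights given by the similarity of the surrounding patches. A minimal 1D sketch of plain non-local means (illustrative parameters; this is the base filter, not the authors' trilateral/patch-based combination):

```python
import numpy as np

def nlm_1d(x, patch=2, h=0.3):
    """Non-local means on a 1D signal."""
    n = len(x)
    xp = np.pad(x, patch, mode="edge")
    # patches[i] is the window of width 2*patch+1 centred on x[i]
    patches = np.array([xp[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = np.mean((patches - patches[i]) ** 2, axis=1)  # patch distances
        w = np.exp(-d2 / h ** 2)                           # similarity weights
        out[i] = np.sum(w * x) / np.sum(w)
    return out
```

The "degree of similarity" in the abstract quantifies how much the weights `w` concentrate on truly similar patches; a constant signal is left unchanged because all weights are equal.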
|
1403.2483 | Optimal Sampling-Based Motion Planning under Differential Constraints:
the Driftless Case | cs.RO | Motion planning under differential constraints is a classic problem in
robotics. To date, the state of the art is represented by sampling-based
techniques, with the Rapidly-exploring Random Tree algorithm as a leading
example. Yet, the problem is still open in many aspects, including guarantees
on the quality of the obtained solution. In this paper we provide a thorough
theoretical framework to assess optimality guarantees of sampling-based
algorithms for planning under differential constraints. We exploit this
framework to design and analyze two novel sampling-based algorithms that are
guaranteed to converge, as the number of samples increases, to an optimal
solution (namely, the Differential Probabilistic RoadMap algorithm and the
Differential Fast Marching Tree algorithm). Our focus is on driftless
control-affine dynamical models, which accurately model a large class of
robotic systems. In this paper we use the notion of convergence in probability
(as opposed to convergence almost surely): the extra mathematical flexibility
of this approach yields convergence rate bounds - a first in the field of
optimal sampling-based motion planning under differential constraints.
Numerical experiments corroborating our theoretical results are presented and
discussed.
|
1403.2484 | Transfer Learning across Networks for Collective Classification | cs.LG cs.SI | This paper addresses the problem of transferring useful knowledge from a
source network to predict node labels in a newly formed target network. While
existing transfer learning research has primarily focused on vector-based data,
in which the instances are assumed to be independent and identically
distributed, how to effectively transfer knowledge across different information
networks has not been well studied, mainly because networks may have their
distinct node features and link relationships between nodes. In this paper, we
propose a new transfer learning algorithm that attempts to transfer common
latent structure features across the source and target networks. The proposed
algorithm discovers these latent features by constructing label propagation
matrices in the source and target networks, and mapping them into a shared
latent feature space. The latent features capture common structure patterns
shared by two networks, and serve as domain-independent features to be
transferred between networks. Together with domain-dependent node features, we
thereafter propose an iterative classification algorithm that leverages label
correlations to predict node labels in the target network. Experiments on
real-world networks demonstrate that our proposed algorithm can successfully
achieve knowledge transfer between networks to help improve the accuracy of
classifying nodes in the target network.
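The algorithm's label propagation matrices build on the standard closed-form label propagation scheme on a single graph. As a minimal sketch of that ingredient only (Zhou et al. style with a symmetrically normalised adjacency; this is not the transfer algorithm itself, and `alpha` is an illustrative parameter):

```python
import numpy as np

def label_propagation(A, Y, alpha=0.9):
    """Predict node labels on one graph: F = (I - alpha*S)^{-1} Y,
    with S = D^{-1/2} A D^{-1/2} the normalised adjacency matrix."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ A @ D_inv_sqrt
    n = A.shape[0]
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)  # propagate seed labels
    return F.argmax(axis=1)
```

With one labelled seed per community, the remaining nodes inherit the label of their community.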
|
1403.2485 | Optimal interval clustering: Application to Bregman clustering and
statistical mixture learning | cs.IT cs.LG math.IT | We present a generic dynamic programming method to compute the optimal
clustering of $n$ scalar elements into $k$ pairwise disjoint intervals. This
case includes 1D Euclidean $k$-means, $k$-medoids, $k$-medians, $k$-centers,
etc. We extend the method to incorporate cluster size constraints and show how
to choose the appropriate $k$ by model selection. Finally, we illustrate and
refine the method on two case studies: Bregman clustering and statistical
mixture learning maximizing the complete likelihood.
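The special case of the 1D Euclidean k-means cost illustrates the dynamic program: with prefix sums, the cost of any interval is O(1), giving an O(k n^2) algorithm. A minimal sketch under that squared-error cost (the paper's method is generic over Bregman divergences and other costs):

```python
import numpy as np

def optimal_interval_kmeans(x, k):
    """Optimal clustering of 1D points into k contiguous intervals,
    minimising within-cluster sum of squared errors, via dynamic programming."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    p1 = np.concatenate(([0.0], np.cumsum(x)))        # prefix sums
    p2 = np.concatenate(([0.0], np.cumsum(x ** 2)))   # prefix sums of squares

    def cost(i, j):  # SSE of cluster x[i:j] (half-open interval)
        s, s2, m = p1[j] - p1[i], p2[j] - p2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    dp = np.full((k + 1, n + 1), INF)   # dp[c][j]: best cost of x[:j] in c intervals
    cut = np.zeros((k + 1, n + 1), dtype=int)
    dp[0][0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):
                v = dp[c - 1][i] + cost(i, j)
                if v < dp[c][j]:
                    dp[c][j], cut[c][j] = v, i
    bounds, j = [], n                   # recover interval boundaries
    for c in range(k, 0, -1):
        i = cut[c][j]
        bounds.append((i, j))
        j = i
    return dp[k][n], bounds[::-1]
```

For well-separated data the optimal intervals coincide with the visual clusters, e.g. [1, 2, 10, 11] with k = 2 splits into [1, 2] and [10, 11].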
|
1403.2498 | Cognitive Internet of Things: A New Paradigm beyond Connection | cs.AI | Current research on Internet of Things (IoT) mainly focuses on how to enable
general objects to see, hear, and smell the physical world for themselves, and
make them connected to share their observations. In this paper, we argue that
being connected alone is not enough: beyond that, general objects should have
the capability to learn, think, and understand both the physical and social
worlds by themselves. This practical need impels us to develop a new paradigm,
named Cognitive Internet of Things (CIoT), to empower the current IoT with a `brain'
for high-level intelligence. Specifically, we first present a comprehensive
definition for CIoT, primarily inspired by the effectiveness of human
cognition. Then, we propose an operational framework of CIoT, which mainly
characterizes the interactions among five fundamental cognitive tasks:
perception-action cycle, massive data analytics, semantic derivation and
knowledge discovery, intelligent decision-making, and on-demand service
provisioning. Furthermore, we provide a systematic tutorial on key enabling
techniques involved in the cognitive tasks. In addition, we discuss the
design of proper performance metrics for evaluating the enabling techniques.
Last but not least, we present the research challenges and open issues ahead.
Building on the present work and potentially fruitful future studies, CIoT has
the capability to bridge the physical world (with objects, resources, etc.) and
the social world (with human demand, social behavior, etc.), and enhance smart
resource allocation, automatic network operation, and intelligent service
provisioning.
|
1403.2499 | Application of Constacyclic codes to Quantum MDS Codes | cs.IT math.IT | Quantum maximal-distance-separable (MDS) codes form an important class of
quantum codes. To get $q$-ary quantum MDS codes, it suffices to find linear MDS
codes $C$ over $\mathbb{F}_{q^2}$ satisfying $C^{\perp_H}\subseteq C$ by the
Hermitian construction and the quantum Singleton bound. If
$C^{\perp_{H}}\subseteq C$, we say that $C$ is a dual-containing code. Many new
quantum MDS codes with relatively large minimum distance have been produced by
constructing dual-containing constacyclic MDS codes (see \cite{Guardia11},
\cite{Kai13}, \cite{Kai14}). These works motivate us to carefully study the
existence conditions for nontrivial dual-containing constacyclic codes. This
helps to avoid unnecessary attempts and provides effective ideas for
constructing dual-containing codes. Several classes of dual-containing
MDS constacyclic codes are constructed and their parameters are computed.
Consequently, new quantum MDS codes are derived from these parameters. The
quantum MDS codes exhibited here have parameters better than the ones available
in the literature.
|
1403.2535 | A Delay-Constrained Protocol with Adaptive Mode Selection for
Bidirectional Relay Networks | cs.IT math.IT | In this paper, we consider a bidirectional relay network with half-duplex
nodes and block fading where the nodes transmit with a fixed transmission rate.
Thereby, user 1 and user 2 exchange information only via a relay node, i.e., a
direct link between both users is not present. Recently in [1], it was shown
that a considerable gain in terms of sum throughput can be obtained in
bidirectional relaying by optimally selecting the transmission modes or,
equivalently, the states of the nodes, i.e., the transmit, the receive, and the
silent states, in each time slot based on the qualities of the involved links.
To enable adaptive transmission mode selection, the relay has to be equipped
with two buffers for storage of the data received from the two users. However,
the protocol proposed in [1] is delay-unconstrained and provides an upper
bound for the performance of practical delay-constrained protocols. In this
paper, we propose a heuristic but efficient delay-constrained protocol which
can approach the performance upper bound reported in [1], even in cases where
only a small average delay is permitted. In particular, the proposed protocol
takes into account not only the instantaneous qualities of the involved
links for adaptive mode selection but also the states of the queues at the
buffers. The average throughput and the average delay of the proposed
delay-constrained protocol are evaluated by analyzing the Markov chain of the
states of the queues.
|
1403.2541 | Turing: Then, Now and Still Key | cs.AI | This paper looks at Turing's postulations about Artificial Intelligence in
his paper 'Computing Machinery and Intelligence', published in 1950. It notes
how accurate they were and how relevant they still are today. This paper
reviews the arguments and mechanisms that he suggested and tries to expand on them
further. The paper however is mostly about describing the essential ingredients
for building an intelligent model and the problems related with that. The
discussion includes recent work by the author himself, who adds his own
thoughts on the matter that come from a purely technical investigation into the
problem. These are personal and quite speculative, but provide an interesting
insight into the mechanisms that might be used for building an intelligent
system.
|
1403.2580 | Optimal Resource Allocation in Full-Duplex Wireless-Powered
Communication Network | cs.IT math.IT | This paper studies optimal resource allocation in the wireless-powered
communication network (WPCN), where one hybrid access-point (H-AP) operating in
full-duplex (FD) broadcasts wireless energy to a set of distributed users in
the downlink (DL) and at the same time receives independent information from
the users via time-division-multiple-access (TDMA) in the uplink (UL). We
design an efficient protocol to support simultaneous wireless energy transfer
(WET) in the DL and wireless information transmission (WIT) in the UL for the
proposed FD-WPCN. We jointly optimize the time allocations to the H-AP for DL
WET and different users for UL WIT as well as the transmit power allocations
over time at the H-AP to maximize the users' weighted sum-rate of UL
information transmission with harvested energy. We consider both the cases with
perfect and imperfect self-interference cancellation (SIC) at the H-AP, for
which we obtain optimal and suboptimal time and power allocation solutions,
respectively. Furthermore, we consider the half-duplex (HD) WPCN as a baseline
scheme and derive its optimal resource allocation solution. Simulation results
show that the FD-WPCN outperforms the HD-WPCN when effective SIC can be
implemented and a more stringent peak power constraint is applied at the H-AP.
|
1403.2625 | Pattern Formation for Asynchronous Robots without Agreement in Chirality | cs.DC cs.RO | This paper presents a deterministic algorithm for forming a given asymmetric
pattern in finite time by a set of autonomous, homogeneous, oblivious mobile
robots under the CORDA model. The robots are represented as points on the 2D
plane. There is no explicit communication between the robots. The robots
coordinate among themselves by observing the positions of the other robots on
the plane. Initially all the robots are assumed to be stationary. The robots
have local coordinate systems defined by Sense of Direction (SoD), orientation
or chirality and scale. Initially, the robots are in an asymmetric configuration.
We show that these robots can form any given asymmetric pattern in finite time.
|
1403.2654 | Flying Insect Classification with Inexpensive Sensors | cs.LG cs.CE | The ability to use inexpensive, noninvasive sensors to accurately classify
flying insects would have significant implications for entomological research,
and allow for the development of many useful applications in vector control for
both medical and agricultural entomology. Given this, the last sixty years have
seen many research efforts on this task. To date, however, none of this
research has had a lasting impact. In this work, we explain this lack of
progress. We attribute the stagnation to several factors,
including the use of acoustic sensing devices, the over-reliance on the single
feature of wingbeat frequency, and the attempts to learn complex models with
relatively little data. In contrast, we show that pseudo-acoustic optical
sensors can produce vastly superior data, that we can exploit additional
features, both intrinsic and extrinsic to the insect's flight behavior, and
that a Bayesian classification approach allows us to efficiently learn
classification models that are very robust to over-fitting. We demonstrate our
findings with large scale experiments that dwarf all previous works combined,
as measured by the number of insects and the number of species considered.
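The Bayesian classification approach mentioned here can be caricatured with a single feature: one Gaussian likelihood per class plus a class prior. A toy sketch on synthetic wingbeat frequencies (the data, species, and parameters below are hypothetical; the paper's classifier uses richer intrinsic and extrinsic features):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wingbeat-frequency samples (Hz) for two insect classes.
freq_a = rng.normal(400.0, 20.0, 200)   # e.g. one mosquito species
freq_b = rng.normal(600.0, 30.0, 200)   # e.g. a fly species

def fit_gaussian(x):
    """Per-class likelihood model: a single Gaussian."""
    return x.mean(), x.std()

def classify(f, params_a, params_b, prior_a=0.5):
    """Bayes rule: pick the class with the larger posterior log-score."""
    def loglik(f, mu, sigma):
        return -0.5 * ((f - mu) / sigma) ** 2 - np.log(sigma)
    score_a = loglik(f, *params_a) + np.log(prior_a)
    score_b = loglik(f, *params_b) + np.log(1.0 - prior_a)
    return "a" if score_a > score_b else "b"

pa, pb = fit_gaussian(freq_a), fit_gaussian(freq_b)
```

Because the model has only a handful of parameters per class, it needs little data and is hard to overfit, which is the point the abstract makes.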
|
1403.2660 | Robust and Scalable Bayes via a Median of Subset Posterior Measures | math.ST cs.DC cs.LG stat.TH | We propose a novel approach to Bayesian analysis that is provably robust to
outliers in the data and often has computational advantages over standard
methods. Our technique is based on splitting the data into non-overlapping
subgroups, evaluating the posterior distribution given each independent
subgroup, and then combining the resulting measures. The main novelty of our
approach is the proposed aggregation step, which is based on the evaluation of
a median in the space of probability measures equipped with a suitable
collection of distances that can be quickly and efficiently evaluated in
practice. We present both theoretical and numerical evidence illustrating the
improvements achieved by our method.
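A scalar caricature of the split-then-median idea is the classical median-of-means estimator: the paper's median is taken in a space of probability measures, but the robustness mechanism is the same (an outlier can corrupt only its own subgroup). A minimal stdlib sketch:

```python
import random
import statistics

def median_of_means(data, n_groups):
    """Split the data into disjoint groups, average each group, and return
    the median of the group means -- robust to a minority of bad groups."""
    data = list(data)          # work on a copy
    random.shuffle(data)
    size = len(data) // n_groups
    groups = [data[i * size:(i + 1) * size] for i in range(n_groups)]
    return statistics.median(statistics.fmean(g) for g in groups)
```

With 99 samples at 1.0 and one huge outlier, the plain mean is dragged to roughly 1e4, while the median of 10 subgroup means stays at 1.0, since at most one group is contaminated.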
|