| id | title | categories | abstract |
|---|---|---|---|
1403.0503 | Distributed Cooperative Localization in Wireless Sensor Networks without
NLOS Identification | cs.NI cs.IT math.IT | In this paper, a 2-stage robust distributed algorithm is proposed for
cooperative sensor network localization using time of arrival (TOA) data
without identification of non-line of sight (NLOS) links. In the first stage,
to overcome the effect of outliers, a convex relaxation of the Huber loss
function is applied so that by using iterative optimization techniques, good
estimates of the true sensor locations can be obtained. In the second stage,
the original (non-relaxed) Huber cost function is further optimized to obtain
refined location estimates based on those obtained in the first stage. In both
stages, a simple gradient descent technique is used to carry out the
optimization. Through simulations and real data analysis, it is shown that the
proposed convex relaxation generally achieves a lower root mean squared error
(RMSE) compared to other convex relaxation techniques in the literature. Moreover,
performing the second stage improves the position estimates, achieving an RMSE
close to that of other distributed algorithms which know \textit{a priori}
which links are in NLOS.
|
1403.0504 | A Compilation Target for Probabilistic Programming Languages | cs.AI cs.PL stat.ML | Forward inference techniques such as sequential Monte Carlo and particle
Markov chain Monte Carlo for probabilistic programming can be implemented in
any programming language by creative use of standardized operating system
functionality including processes, forking, mutexes, and shared memory.
Exploiting this, we have defined, developed, and tested an intermediate
representation for probabilistic programming languages, which we call
probabilistic C, and which itself can be compiled to machine code by standard compilers and
linked to operating system libraries yielding an efficient, scalable, portable
probabilistic programming compilation target. This opens up a new hardware and
systems research path for optimizing probabilistic programming systems.
|
1403.0515 | A Primal Dual Active Set with Continuation Algorithm for the
\ell^0-Regularized Optimization Problem | math.OC cs.IT math.IT stat.ML | We develop a primal dual active set with continuation algorithm for solving
the \ell^0-regularized least-squares problem that frequently arises in
compressed sensing. The algorithm couples the primal dual active set method
with a continuation strategy on the regularization parameter. At each inner
iteration, it first identifies the active set from both primal and dual
variables, and then updates the primal variable by solving a (typically small)
least-squares problem defined on the active set, from which the dual variable
can be updated explicitly. Under certain conditions on the sensing matrix,
i.e., mutual incoherence property or restricted isometry property, and the
noise level, the finite step global convergence of the algorithm is
established. Extensive numerical examples are presented to illustrate the
efficiency and accuracy of the algorithm and the convergence analysis.
|
1403.0522 | Expert System Based On Neural-Fuzzy Rules for Thyroid Diseases Diagnosis | cs.AI | The thyroid, an endocrine gland, secretes hormones into the blood, which
circulates them to all tissues of the body, where they control vital
functions in every cell. Normal levels of thyroid hormone help the brain,
heart, intestines, muscles and reproductive system function normally. Thyroid
hormones control the metabolism of the body. Abnormalities of thyroid function
are usually related to production of too little thyroid hormone
(hypothyroidism) or production of too much thyroid hormone (hyperthyroidism).
Therefore, the correct diagnosis of these diseases is a very important task. In
this study, Linguistic Hedges Neural-Fuzzy Classifier with Selected Features
(LHNFCSF) is presented for diagnosis of thyroid diseases. The performance of
this system is evaluated using classification accuracy and k-fold
cross-validation. The results indicated that the classification accuracy
without feature selection was 98.6047% and 97.6744% during the training and
testing phases, respectively, with an RMSE of 0.02335. After applying the feature selection
algorithm, LHNFCSF achieved 100% for all cluster sizes during training phase.
However, in the testing phase LHNFCSF achieved 88.3721% using one cluster for
each class, 90.6977% using two clusters, 91.8605% using three clusters and
97.6744% using four clusters for each class and 12 fuzzy rules. The obtained
classification accuracy is very promising compared with other classification
results reported in the literature for this problem.
|
1403.0531 | We Tweet Like We Talk and Other Interesting Observations: An Analysis of
English Communication Modalities | cs.CL | Modalities of communication for human beings are gradually increasing in
number with the advent of new forms of technology. Many human beings can
readily transition between these different forms of communication with little
or no effort, which brings about the question: How similar are these different
communication modalities? To understand technology's influence on
English communication, four different corpora were analyzed and compared:
Writing from Books using the 1-grams database from the Google Books project,
Twitter, IRC Chat, and transcribed Talking. Multi-word confusion matrices
revealed that Talking has the most similarity when compared to the other modes
of communication, while 1-grams were the least similar form of communication
analyzed. Based on the analysis of word usage, word usage frequency
distributions, and word class usage, among other things, Talking is also the
most similar to Twitter and IRC Chat. This suggests that communicating using
Twitter and IRC Chat evolved from Talking rather than Writing. When we
communicate online, even though we are writing, we do not Tweet or Chat how we
write books; we Tweet and Chat how we Speak. Nonfiction and Fiction writing
were clearly differentiable from our analysis with Twitter and Chat being much
more similar to Fiction than Nonfiction writing. These hypotheses were then
tested using the author and journalist Cory Doctorow. Mr. Doctorow's
Writing, Twitter usage, and Talking were all found to have vocabulary usage
patterns very similar to those of the amalgamated populations, as long as the
writing was Fiction. However, Mr. Doctorow's Nonfiction writing is
different from 1-grams and other collected Nonfiction writings. This data could
perhaps be used to create more entertaining works of Nonfiction.
|
1403.0537 | A New Framework for the Performance Analysis of Wireless Communications
under Hoyt (Nakagami-q) Fading | cs.IT math.IT | We present a novel relationship between the distribution of circular and
non-circular complex Gaussian random variables. Specifically, we show that the
distribution of the squared norm of a non-circular complex Gaussian random
variable, usually referred to as squared Hoyt distribution, can be constructed
from a conditional exponential distribution. From this fundamental connection
we introduce a new approach, the Hoyt transform method, which allows us to analyze
the performance of a wireless link under Hoyt (Nakagami-q) fading in a very
simple way. We illustrate that many performance metrics for Hoyt fading can be
calculated by leveraging well-known results for Rayleigh fading and only
performing a finite-range integral. We use this technique to obtain novel
results for some information and communication-theoretic metrics in Hoyt fading
channels.
|
1403.0541 | Representing, reasoning and answering questions about biological
pathways - various applications | cs.AI cs.CE cs.CL | Biological organisms are composed of numerous interconnected biochemical
processes. Diseases occur when normal functionality of these processes is
disrupted. Thus, understanding these biochemical processes and their
interrelationships is a primary task in biomedical research and a prerequisite
for diagnosing diseases and developing drugs. Scientists studying these
processes have identified various pathways responsible for drug metabolism,
signal transduction, and other functions.
Newer techniques and speed improvements have resulted in deeper knowledge
about these pathways, resulting in refined models that tend to be large and
complex, making it difficult for a person to remember all their aspects. Thus,
computer models are needed to analyze them. We want to build a system that
allows modeling of biological systems and pathways in such a way that we can
answer questions about them.
Many existing models focus on structural and/or factoid questions, using
surface-level knowledge that does not require understanding the underlying
model. We believe these are not the kind of questions that a biologist may ask
someone to test their understanding of the biological processes. We want our
system to answer the kind of questions a biologist may ask. Such questions
appear in early college level text books.
Thus the main goal of our thesis is to develop a system that allows us to
encode knowledge about biological pathways and answer such questions about them,
demonstrating understanding of the pathway. To that end, we develop a language
that will allow posing such questions and illustrate the utility of our
framework with various applications in the biological domain. We use some
existing tools with modifications to accomplish our goal.
Finally, we apply our system to real world applications by extracting pathway
knowledge from text and answering questions related to drug development.
|
1403.0543 | Quantum tunneling and evolution speed in an exactly solvable coupled
double-well system | quant-ph cs.IT math.IT | Exact analytical calculations of eigenvalues and eigenstates are presented
for quantum coupled double-well (DW) systems with Razavy's hyperbolic
potential. With the use of four kinds of initial wavepackets, we have
calculated the tunneling period $T$ and the orthogonality time $\tau$ which
signifies a time interval for an initial state to evolve to its orthogonal
state. We discuss the coupling dependence of $T$ and $\tau$, and the relation
between $\tau$ and the concurrence $C$ which is a typical measure of the
entanglement in two qubits. Our calculations have shown that it is not clear
whether the speed of quantum evolution may be measured by $T$ or $\tau$ and
that the evolution speed measured by $\tau$ (or $T$) is not necessarily
increased with increasing $C$. This is in contrast with the earlier study [V.
Giovannetti, S. Lloyd and L. Maccone, Europhys. Lett. {\bf 62} (2003) 615]
which pointed out that the evolution speed measured by $\tau$ is enhanced by
the entanglement in the two-level model.
|
1403.0598 | The Structurally Smoothed Graphlet Kernel | cs.LG | A commonly used paradigm for representing graphs is to use a vector that
contains normalized frequencies of occurrence of certain motifs or sub-graphs.
This vector representation can be used in a variety of applications, such as
computing similarity between graphs. The graphlet kernel of Shervashidze et
al. [32] uses induced sub-graphs of k nodes (christened as graphlets by Przulj
[28]) as motifs in the vector representation, and computes the kernel via a dot
product between these vectors. One can easily show that this is a valid kernel
between graphs. However, such a vector representation suffers from a few
drawbacks. As k becomes larger we encounter the sparsity problem; most higher
order graphlets will not occur in a given graph. This leads to diagonal
dominance, that is, a given graph is similar to itself but not to any other
graph in the dataset. On the other hand, since lower order graphlets tend to be
more numerous, using lower values of k does not provide enough discrimination
ability. We propose a smoothing technique to tackle the above problems. Our
method is based on a novel extension of Kneser-Ney and Pitman-Yor smoothing
techniques from natural language processing to graphs. We use the relationships
between lower order and higher order graphlets in order to derive our method.
Consequently, our smoothing algorithm not only respects the dependency between
sub-graphs but also tackles the diagonal dominance problem by distributing the
probability mass across graphlets. In our experiments, the smoothed graphlet
kernel outperforms graph kernels based on raw frequency counts.
|
1403.0600 | Modeling Website Popularity Competition in the Attention-Activity
Marketplace | physics.soc-ph cs.SI | How does a new startup drive the popularity of competing websites into
oblivion like Facebook famously did to MySpace? This question is of great
interest to academics, technologists, and financial investors alike. In this
work we exploit the singular way in which Facebook wiped out the popularity of
MySpace, Hi5, Friendster, and Multiply to guide the design of a new popularity
competition model. Our model provides new insights into what Nobel Laureate
Herbert A. Simon called the "marketplace of attention," which we recast as the
attention-activity marketplace. Our model design is further substantiated by
user-level activity of 250,000 MySpace users obtained between 2004 and 2009.
The resulting model not only accurately fits the observed Daily Active Users
(DAU) of Facebook and its competitors but also predicts their fate four years
into the future.
|
1403.0603 | Efficient Distributed Online Prediction and Stochastic Optimization with
Approximate Distributed Averaging | cs.IT cs.DC cs.SY math.IT math.OC | We study distributed methods for online prediction and stochastic
optimization. Our approach is iterative: in each round nodes first perform
local computations and then communicate in order to aggregate information and
synchronize their decision variables. Synchronization is accomplished through
the use of a distributed averaging protocol. When an exact distributed
averaging protocol is used, it is known that the optimal regret bound of
$\mathcal{O}(\sqrt{m})$ can be achieved using the distributed mini-batch
algorithm of Dekel et al. (2012), where $m$ is the total number of samples
processed across the network. We focus on methods using approximate distributed
averaging protocols and show that the optimal regret bound can also be achieved
in this setting. In particular, we propose a gossip-based optimization method
which achieves the optimal regret bound. The amount of communication required
depends on the network topology through the second largest eigenvalue of the
transition matrix of a random walk on the network. In the setting of stochastic
optimization, the proposed gossip-based approach achieves nearly-linear
scaling: the optimization error is guaranteed to be no more than $\epsilon$
after $\mathcal{O}(\frac{1}{n \epsilon^2})$ rounds, each of which involves
$\mathcal{O}(\log n)$ gossip iterations, when nodes communicate over a
well-connected graph. This scaling law is also observed in numerical
experiments on a cluster.
|
1403.0613 | On Redundant Topological Constraints | cs.AI | The Region Connection Calculus (RCC) is a well-known calculus for
representing part-whole and topological relations. It plays an important role
in qualitative spatial reasoning, geographical information science, and
ontology. The computational complexity of reasoning with RCC5 and RCC8 (two
fragments of RCC) as well as other qualitative spatial/temporal calculi has
been investigated in depth in the literature. Most of these works focus on the
consistency of qualitative constraint networks. In this paper, we consider the
important problem of redundant qualitative constraints. For a set $\Gamma$ of
qualitative constraints, we say a constraint $(x R y)$ in $\Gamma$ is redundant
if it is entailed by the rest of $\Gamma$. A prime subnetwork of $\Gamma$ is a
subset of $\Gamma$ which contains no redundant constraints and has the same
solution set as $\Gamma$. It is natural to ask how to compute such a prime
subnetwork, and when it is unique.
In this paper, we show that this problem is in general intractable, but
becomes tractable if $\Gamma$ is over a tractable subalgebra $\mathcal{S}$ of a
qualitative calculus. Furthermore, if $\mathcal{S}$ is a subalgebra of RCC5 or
RCC8 in which weak composition distributes over nonempty intersections, then
$\Gamma$ has a unique prime subnetwork, which can be obtained in cubic time by
removing all redundant constraints simultaneously from $\Gamma$. As a
byproduct, we show that any path-consistent network over such a distributive
subalgebra is weakly globally consistent and minimal. A thorough empirical
analysis of the prime subnetwork on real geographical data sets demonstrates
that the approach identifies significantly more redundant constraints than
previously proposed algorithms, especially in constraint networks with larger
proportions of partial overlap relations.
|
1403.0623 | Global solar irradiation prediction using a multi-gene genetic
programming approach | cs.NE cs.CE stat.AP | In this paper, a nonlinear symbolic regression technique using an
evolutionary algorithm known as multi-gene genetic programming (MGGP) is
applied for data-driven modelling of the relationship between the dependent
and independent variables. The technique is applied to model the measured global solar
irradiation and validated through numerical simulations. The proposed modelling
technique shows improved results over the fuzzy logic and artificial neural
network (ANN) based approaches as attempted by contemporary researchers. The
method proposed here results in nonlinear analytical expressions, unlike
neural networks, which are essentially a black-box modelling approach.
additional flexibility is an advantage from the modelling perspective and helps
to discern the important variables which affect the prediction. Due to the
evolutionary nature of the algorithm, it is able to get out of local minima and
converge to a global optimum unlike the back-propagation (BP) algorithm used
for training neural networks. This results in a better percentage fit than the
ones obtained using neural networks by contemporary researchers. A hold-out
cross-validation is also performed on the obtained genetic programming (GP)
results, which shows that the results generalize well to new data and do not
over-fit the training samples. The multi-gene GP results are compared with
those obtained using its single-gene version and with four classical
regression models in order to show the effectiveness of the adopted
approach.
|
1403.0628 | Unconstrained Online Linear Learning in Hilbert Spaces: Minimax
Algorithms and Normal Approximations | cs.LG | We study algorithms for online linear optimization in Hilbert spaces,
focusing on the case where the player is unconstrained. We develop a novel
characterization of a large class of minimax algorithms, recovering, and even
improving, several previous results as immediate corollaries. Moreover, using
our tools, we develop an algorithm that provides a regret bound of
$\mathcal{O}\Big(U \sqrt{T \log(U \sqrt{T} \log^2 T +1)}\Big)$, where $U$ is
the $L_2$ norm of an arbitrary comparator and both $T$ and $U$ are unknown to
the player. This bound is optimal up to $\sqrt{\log \log T}$ terms. When $T$ is
known, we derive an algorithm with an optimal regret bound (up to constant
factors). For both the known and unknown $T$ case, a Normal approximation to
the conditional value of the game proves to be the key analysis tool.
|
1403.0636 | The path most travelled: Mining road usage patterns from massive call
data | physics.soc-ph cs.SI | Rapid urbanization places increasing stress on already burdened
transportation systems, resulting in delays and poor levels of service.
Billions of spatiotemporal call detail records (CDRs) collected from mobile
devices create new opportunities to quantify and solve these problems. However,
there is a need for tools to map new data onto existing transportation
infrastructure. In this work, we propose a system that leverages this data to
identify patterns in road usage. First, we develop an algorithm to mine
billions of calls and learn location transition probabilities of callers. These
transition probabilities are then upscaled with demographic data to estimate
origin-destination (OD) flows of residents between any two intersections of a
city. Next, we implement a distributed incremental traffic assignment algorithm
to route these flows on road networks and estimate congestion and level of
service for each roadway. From this assignment, we construct a bipartite usage
network by connecting census tracts to the roads used by their inhabitants.
Comparing the topologies of the physical road network and bipartite usage
network allows us to classify each road's role in a city's transportation
network and detect causes of local bottlenecks. Finally, we demonstrate an
interactive, web-based visualization platform that allows researchers,
policymakers, and drivers to explore road congestion and usage in a new
dimension. To demonstrate the flexibility of this system, we perform these
analyses in multiple cities across the globe with diverse geographical and
sociodemographic qualities. This platform provides a foundation to build
congestion mitigation solutions and generate new insights into urban mobility.
|
1403.0648 | Multi-period Trading Prediction Markets with Connections to Machine
Learning | cs.GT cs.LG q-fin.TR stat.ML | We present a new model for prediction markets, in which we use risk measures
to model agents and introduce a market maker to describe the trading process.
This specific choice of modelling tools brings us mathematical convenience. The
analysis shows that the whole market effectively approaches a global objective,
even though the market is designed such that each agent only cares about its
own goal. Additionally, the market dynamics provides a sensible algorithm for
optimising the global objective. An intimate connection between machine
learning and our markets is thus established, such that we could 1) analyse a
market by applying machine learning methods to the global objective, and 2)
solve machine learning problems by setting up and running certain markets.
|
1403.0667 | The Hidden Convexity of Spectral Clustering | cs.LG stat.ML | In recent years, spectral clustering has become a standard method for data
analysis used in a broad range of applications. In this paper we propose a new
class of algorithms for multiway spectral clustering based on optimization of a
certain "contrast function" over the unit sphere. These algorithms, partly
inspired by certain Independent Component Analysis techniques, are simple, easy
to implement and efficient.
Geometrically, the proposed algorithms can be interpreted as hidden basis
recovery by means of function optimization. We give a complete characterization
of the contrast functions admissible for provable basis recovery. We show how
these conditions can be interpreted as a "hidden convexity" of our optimization
problem on the sphere; interestingly, we use efficient convex maximization
rather than the more common convex minimization. We also show encouraging
experimental results on real and simulated data.
|
1403.0686 | Performance Analysis of Multi-Antenna Relay Networks over Nakagami-m
Fading Channel | cs.IT math.IT | In this chapter, the authors present the performance of multi-antenna
selective combining decode-and-forward (SC-DF) relay networks over independent
and identically distributed (i.i.d.) Nakagami-m fading channels. The outage
probability, moment generating function, symbol error probability, and average
channel capacity are derived in closed form using the signal-to-noise ratio
(SNR) statistical characteristics. The authors then formulate the outage
probability problem, approximate it with a tractable problem, and solve it
analytically. Finally, for comparison with the analytical formulas, the authors
perform some Monte-Carlo simulations.
|
1403.0699 | Multi-Shot Person Re-Identification via Relational Stein Divergence | cs.CV stat.ML | Person re-identification is particularly challenging due to significant
appearance changes across separate camera views. In order to re-identify
people, a representative human signature should effectively handle differences
in illumination, pose and camera parameters. While general appearance-based
methods are modelled in Euclidean spaces, it has been argued that some
applications in image and video analysis are better modelled via non-Euclidean
manifold geometry. To this end, recent approaches represent images as
covariance matrices, and interpret such matrices as points on Riemannian
manifolds. As direct classification on such manifolds can be difficult, in this
paper we propose to represent each manifold point as a vector of similarities
to class representers, via a recently introduced form of Bregman matrix
divergence known as the Stein divergence. This is followed by using a
discriminative mapping of similarity vectors for final classification. The use
of similarity vectors is in contrast to the traditional approach of embedding
manifolds into tangent spaces, which can suffer from representing the manifold
structure inaccurately. Comparative evaluations on benchmark ETHZ and iLIDS
datasets for the person re-identification task show that the proposed approach
obtains better performance than recent techniques such as Histogram Plus
Epitome, Partial Least Squares, and Symmetry-Driven Accumulation of Local
Features.
|
1403.0700 | Random Projections on Manifolds of Symmetric Positive Definite Matrices
for Image Classification | cs.CV stat.ML | Recent advances suggest that encoding images through Symmetric Positive
Definite (SPD) matrices and then interpreting such matrices as points on
Riemannian manifolds can lead to increased classification performance. Taking
into account manifold geometry is typically done via (1) embedding the
manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert
Spaces (RKHS). While embedding into tangent spaces allows the use of existing
Euclidean-based learning algorithms, manifold shape is only approximated which
can cause loss of discriminatory information. The RKHS approach retains more of
the manifold structure, but may require non-trivial effort to kernelise
Euclidean-based learning algorithms. In contrast to the above approaches, in
this paper we offer a novel solution that allows SPD matrices to be used with
unmodified Euclidean-based learning algorithms, with the true manifold shape
well-preserved. Specifically, we propose to project SPD matrices using a set of
random projection hyperplanes over RKHS into a random projection space, which
leads to representing each matrix as a vector of projection coefficients.
Experiments on face recognition, person re-identification and texture
classification show that the proposed approach outperforms several recent
methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian
Locality Preserving Projection and Relational Divergence Classification.
|
1403.0701 | GraphChi-DB: Simple Design for a Scalable Graph Database System -- on
Just a PC | cs.DB | We propose a new data structure, Parallel Adjacency Lists (PAL), for
efficiently managing graphs with billions of edges on disk. The PAL structure
is based on the graph storage model of GraphChi (Kyrola et al., OSDI 2012),
but we extend it to enable online database features such as queries and fast
insertions. In addition, we extend the model with edge and vertex attributes.
Compared to previous data structures, PAL can store graphs more compactly while
allowing fast access to both the incoming and the outgoing edges of a vertex,
without duplicating data. Based on PAL, we design a graph database management
system, GraphChi-DB, which can also execute powerful analytical graph
computation.
We evaluate our design experimentally and demonstrate that GraphChi-DB
achieves state-of-the-art performance on graphs that are much larger than the
available memory. GraphChi-DB enables anyone with just a laptop or a PC to work
with extremely large graphs.
|
1403.0728 | A Novel Method for Vectorization | cs.CV cs.CG cs.GR | Vectorization of images is a key concern uniting computer graphics and
computer vision communities. In this paper we are presenting a novel idea for
efficient, customizable vectorization of raster images, based on Catmull-Rom
spline fitting. The algorithm maintains a good balance between photo-realism
and photo abstraction, and hence is applicable to applications with artistic
concerns or applications where less information loss is crucial. The resulting
algorithm is fast, parallelizable and can satisfy general soft real-time
requirements. Moreover, the smoothness of the vectorized images aesthetically
outperforms the outputs of many polygon-based methods.
|
1403.0736 | Fast Prediction with SVM Models Containing RBF Kernels | stat.ML cs.LG | We present an approximation scheme for support vector machine models that use
an RBF kernel. A second-order Maclaurin series approximation is used for
exponentials of inner products between support vectors and test instances. The
approximation is applicable to all kernel methods featuring sums of kernel
evaluations and makes no assumptions regarding data normalization. The
prediction speed of approximated models no longer depends on the number of
support vectors but is quadratic in the number of input dimensions. If
the number of input dimensions is small compared to the number of support
vectors, the approximated model is significantly faster in prediction and has a
smaller memory footprint. An optimized C++ implementation was made to assess
the gain in prediction speed in a set of practical tests. We additionally
provide a method to verify the approximation accuracy, prior to training models
or during run-time, to ensure the loss in accuracy remains acceptable and
within known bounds.
|
1403.0745 | EnsembleSVM: A Library for Ensemble Learning Using Support Vector
Machines | stat.ML cs.LG | EnsembleSVM is a free software package containing efficient routines to
perform ensemble learning with support vector machine (SVM) base models. It
currently offers ensemble methods based on binary SVM models. Our
implementation avoids duplicate storage and evaluation of support vectors which
are shared between constituent models. Experimental results show that using
ensemble approaches can drastically reduce training complexity while
maintaining high predictive accuracy. The EnsembleSVM software package is
freely available online at http://esat.kuleuven.be/stadius/ensemblesvm.
|
1403.0761 | The Obvious Solution to Semantic Mapping -- Ask an Expert | cs.IR | The semantic mapping problem is probably the main obstacle to
computer-to-computer communication. If computer A knows that its concept X is
the same as computer B's concept Y, then the two machines can communicate. They
will in effect be talking the same language. This paper describes a relatively
straightforward way of enhancing the semantic descriptions of Web Service
interfaces by using online sources of keyword definitions. Method interface
descriptions can be enhanced using these standard dictionary definitions.
Because the generated metadata is now standardised, this means that any other
computer that has access to the same source, or understands standard language
concepts, can now understand the description. This helps to remove a lot of the
heterogeneity that would otherwise build up through humans creating their own
descriptions independently of each other. The description comes in the form of
an XML script that can be retrieved and read through the Web Service interface
itself. An additional use for these scripts would be for adding descriptions in
different languages, which would mean that human users that speak a different
language would also understand what the service was about.
|
1403.0764 | Clustering Concept Chains from Ordered Data without Path Descriptions | cs.AI | This paper describes a process for clustering concepts into chains from data
presented randomly to an evaluating system. There are a number of rules or
guidelines that help the system to determine more accurately what concepts
belong to a particular chain and which ones do not, but it should be possible to
write these in a generic way. This mechanism also uses a flat structure without
any hierarchical path information, where the link between two concepts is made
at the level of the concept itself. It does not require related metadata, but
instead, a simple counting mechanism is used. Key to this is a count for both
the concept itself and also the group or chain that it belongs to. To test the
possible success of the mechanism, concept chain parts taken randomly from a
larger ontology were presented to the system, but only at a depth of 2 concepts
each time. That is - root concept plus a concept that it is linked to. The
results show that this can still lead to very variable structures being formed
and can also accommodate some level of randomness.
|
1403.0770 | A Metric for Modelling and Measuring Complex Behavioural Systems | cs.MA | This paper describes a metric for measuring the success of a complex system
composed of agents performing autonomous behaviours. Because of the difficulty
in evaluating such systems, this metric will help to give an initial indication
as to how suitable the agents would be for solving the problem. The system is
modelled as a script, or behavioural ontology, with a number of variables to
represent each of the behaviour attributes. The set of equations can be used
both for modelling and as part of the simulation evaluation. Behaviours can be
nested, allowing for compound behaviours of arbitrary complexity to be built.
There is also the capability for including rules or decision making into the
script. The paper also gives some test examples to show how the metric might be
used.
|
1403.0778 | Dynamic Move Chains -- a Forward Pruning Approach to Tree Search in
Computer Chess | cs.AI cs.NE | This paper proposes a new mechanism for pruning a search game-tree in
computer chess. The algorithm stores and then reuses chains or sequences of
moves, built up from previous searches. These move sequences have a built-in
forward-pruning mechanism that can radically reduce the search space. A typical
search process might retrieve a move from a Transposition Table, where the
decision of what move to retrieve would be based on the position itself. This
algorithm stores move sequences based on what previous sequences were better,
or caused cutoffs. This is therefore position independent and so it could also
be useful in games with imperfect information or uncertainty, where the whole
situation is not known at any one time. Over a small set of tests, the
algorithm was shown to clearly out-perform Transposition Tables, both in terms
of search reduction and game-play results.
|
1403.0779 | Hop Doubling Label Indexing for Point-to-Point Distance Querying on
Scale-Free Networks | cs.DB | We study the problem of point-to-point distance querying for massive
scale-free graphs, which is important for numerous applications. Given a
directed or undirected graph, we propose to build an index for answering such
queries based on a hop-doubling labeling technique. We derive bounds on the
index size, the computation costs and I/O costs based on the properties of
unweighted scale-free graphs. We show that our method is much more efficient
compared to the state-of-the-art technique, in terms of both querying time and
indexing time. Our empirical study shows that our method can handle graphs that
are orders of magnitude larger than existing methods.
|
1403.0783 | Uncertainty in Crowd Data Sourcing under Structural Constraints | cs.DB | Applications extracting data from crowdsourcing platforms must deal with the
uncertainty of crowd answers in two different ways: first, by deriving
estimates of the correct value from the answers; second, by choosing crowd
questions whose answers are expected to minimize this uncertainty relative to
the overall data collection goal. Such problems are already challenging when we
assume that questions are unrelated and answers are independent, but they are
even more complicated when we assume that the unknown values follow hard
structural constraints (such as monotonicity).
In this vision paper, we examine how to formally address this issue with an
approach inspired by [Amsterdamer et al., 2013]. We describe a generalized
setting where we model constraints as linear inequalities, and use them to
guide the choice of crowd questions and the processing of answers. We present
the main challenges arising in this setting, and propose directions to solve
them.
|
1403.0801 | Is getting the right answer just about choosing the right words? The
role of syntactically-informed features in short answer scoring | cs.CL | Developments in the educational landscape have spurred greater interest in
the problem of automatically scoring short answer questions. A recent shared
task on this topic revealed a fundamental divide in the modeling approaches
that have been applied to this problem, with the best-performing systems split
between those that employ a knowledge engineering approach and those that
almost solely leverage lexical information (as opposed to higher-level
syntactic information) in assigning a score to a given response. This paper
aims to introduce the NLP community to the largest corpus currently available
for short-answer scoring, provide an overview of methods used in the shared
task using this data, and explore the extent to which more
syntactically-informed features can contribute to the short answer scoring task
in a way that avoids the question-specific manual effort of the knowledge
engineering approach.
|
1403.0802 | Large-Scale Geospatial Processing on Multi-Core and Many-Core
Processors: Evaluations on CPUs, GPUs and MICs | cs.DB cs.DC | Geospatial Processing, such as queries based on point-to-polyline shortest
distance and point-in-polygon test, are fundamental to many scientific and
engineering applications, including post-processing large-scale environmental
and climate model outputs and analyzing traffic and travel patterns from
massive GPS collections in transportation engineering and urban studies.
Commodity parallel hardware, such as multi-core CPUs, many-core GPUs and Intel
MIC accelerators, provide enormous computing power which can potentially
achieve significant speedups on existing geospatial processing and open the
opportunities for new applications. However, the realizable potential for
geospatial processing on these new hardware devices is largely unknown due to
the complexity in porting serial algorithms to diverse parallel hardware
platforms. In this study, we aim at experimenting our data-parallel designs and
implementations of point-to-polyline shortest distance computation (P2P) and
point-in-polygon topological test (PIP) on different commodity hardware using
real large-scale geospatial data, comparing their performance and discussing
important factors that may significantly affect the performance. Our
experiments have shown that, while GPUs can be several times faster than
multi-core CPUs without utilizing the increasingly available SIMD computing
power on Vector Processing Units (VPUs) that come with multi-core CPUs and
MICs, multi-core CPUs and MICs can be several times faster than GPUs when VPUs
are utilized. By adopting a Domain Specific Language (DSL) approach to
exploiting the VPU computing power in geospatial processing, we are free from
programming SIMD intrinsic functions directly which makes the new approach more
effective, portable and scalable. Our designs, implementations and experiments
can serve as case studies for parallel geospatial computing on modern commodity
parallel hardware.
|
1403.0804 | Double Cylinder Cycle codes of Arbitrary Girth | cs.IT cs.DM math.CO math.IT | A particular class of low-density parity-check codes referred to as
cylinder-type BC-LDPC codes was proposed by Gholami and Esmaeili. In this paper
we represent a double cylinder-type parity-check matrix H by a graph called the
block-structure graph of H, denoted BSG(H). Using the properties of BSG(H), we
propose some mother matrices with column weight two such that the rates of the
corresponding cycle codes are greater than those of the cycle codes constructed
by Gholami with the same girth.
|
1403.0811 | A Potential Game Approach for Information-Maximizing Cooperative
Planning of Sensor Networks | cs.SY cs.GT | This paper presents a potential game approach for distributed cooperative
selection of informative sensors, when the goal is to maximize the mutual
information between the measurement variables and the quantities of interest.
It is proved that a local utility function defined by the conditional mutual
information of an agent conditioned on the other agents' sensing decisions
leads to a potential game with the global potential being the original mutual
information of the cooperative planning problem. The joint strategy fictitious
play method is then applied to obtain a distributed solution that provably
converges to a pure strategy Nash equilibrium. Two numerical examples on
simplified weather forecasting and range-only target tracking verify
convergence and performance characteristics of the proposed game-theoretic
approach.
|
1403.0820 | Geometry-based Adaptive Symbolic Approximation for Fast Sequence
Matching on Manifolds | cs.CV math.DG | In this paper, we consider the problem of fast and efficient indexing
techniques for sequences evolving in non-Euclidean spaces. This problem has
several applications in the areas of human activity analysis, where there is a
need to perform fast search, and recognition in very high dimensional spaces.
The problem is made more challenging when representations such as landmarks,
contours, and human skeletons etc. are naturally studied in a non-Euclidean
setting where even simple operations are much more computationally intensive
than their Euclidean counterparts. We propose a geometry and data adaptive
symbolic framework that is shown to enable the deployment of fast and accurate
algorithms for activity recognition, dynamic texture recognition, and motif
discovery. Toward this end, we present generalizations of key concepts of
piece-wise aggregation and symbolic approximation for the case of non-Euclidean
manifolds. We show that one can replace expensive geodesic computations with
much faster symbolic computations with little loss of accuracy in activity
recognition and discovery applications. The framework is general enough to work
across both Euclidean and non-Euclidean spaces, depending on appropriate
feature representations without compromising on the ultra-low bandwidth, high
speed and high accuracy. The proposed methods are ideally suited for real-time
systems and low complexity scenarios.
|
1403.0829 | Multiview Hessian regularized logistic regression for action recognition | cs.CV cs.LG stat.ML | With the rapid development of social media sharing, people often need to
manage the growing volume of multimedia data such as large scale video
classification and annotation, especially to organize those videos containing
human activities. Recently, manifold regularized semi-supervised learning
(SSL), which explores the intrinsic data probability distribution and then
improves the generalization ability with only a small number of labeled data,
has emerged as a promising paradigm for semiautomatic video classification. In
addition, human action videos often have multi-modal content and different
representations. To tackle the above problems, in this paper we propose
multiview Hessian regularized logistic regression (mHLR) for human action
recognition. Compared with existing work, the advantages of mHLR are threefold:
(1) mHLR combines multiple Hessian regularizers, each obtained from a
particular representation of an instance, to better exploit local geometry; (2)
mHLR naturally handles multi-view instances with multiple representations; (3)
mHLR employs a smooth loss function and thus
can be effectively optimized. We carefully conduct extensive experiments on the
unstructured social activity attribute (USAA) dataset and the experimental
results demonstrate the effectiveness of the proposed multiview Hessian
regularized logistic regression for human action recognition.
|
1403.0836 | Locally-Optimized Reweighted Belief Propagation for Decoding LDPC Codes
with Finite-Length | cs.IT math.IT | In practice, LDPC codes are decoded using message passing methods. These
methods offer good performance but tend to converge slowly and sometimes fail
to converge and to decode the desired codewords correctly. Recently,
tree-reweighted message passing methods have been modified to improve the
convergence speed at little or no additional complexity cost. This paper
extends this line of work and proposes a new class of locally optimized
reweighting strategies, which are suitable for both regular and irregular LDPC
codes. The proposed decoding algorithm first splits the factor graph into
subgraphs and subsequently performs a local optimization of reweighting
parameters. Simulations show that the proposed decoding algorithm significantly
outperforms the standard message passing and existing reweighting techniques.
|
1403.0847 | Knowledge-Aided Reweighted Belief Propagation LDPC Decoding using
Regular and Irregular Designs | cs.IT math.IT | In this paper a new message passing algorithm, which takes advantage of both
tree-based re-parameterization and the knowledge of short cycles, is introduced
for the purpose of decoding LDPC codes with short block lengths. The proposed
algorithm is called variable factor appearance probability belief propagation
(VFAP-BP) algorithm and is suitable for wireless communications applications,
where both good decoding performance and low-latency are expected. Our
simulation results show that the VFAP-BP algorithm outperforms the standard BP
algorithm and requires a significantly smaller number of iterations than
existing algorithms when decoding both regular and irregular LDPC codes.
|
1403.0850 | How to Network in Online Social Networks | cs.SI physics.soc-ph | In this paper, we consider how to maximize users' influence in Online Social
Networks (OSNs) by exploiting social relationships only. Our first contribution
is to extend to OSNs the model of Kempe et al. [1] on the propagation of
information in a social network and to show that a greedy algorithm is a good
approximation of the optimal algorithm that is NP-hard. However, the greedy
algorithm requires global knowledge, which is hardly practical. Our second
contribution is to show on simulations on the full Twitter social graph that
simple and practical strategies perform close to the greedy algorithm.
|
1403.0873 | Matroid Regression | math.ST cs.DM cs.LG stat.ME stat.ML stat.TH | We propose an algebraic combinatorial method for solving large sparse linear
systems of equations locally - that is, a method which can compute single
evaluations of the signal without computing the whole signal. The method scales
only in the sparsity of the system and not in its size, and allows error
estimates to be provided for any solution method. At the heart of our approach
is the so-called regression matroid, a combinatorial object associated with
sparsity patterns, which allows inversion of the large matrix to be replaced
with the inversion of a kernel matrix of constant size. We show that our method
provides the best linear unbiased estimator (BLUE) for this setting and the
minimum variance unbiased estimator (MVUE) under Gaussian noise assumptions,
and furthermore we show that the size of the kernel matrix which is to be
inverted can be traded off with accuracy.
|
1403.0879 | Robustness: a new SLIP model based criterion for gait transitions in
bipedal locomotion | cs.RO | Bipedal locomotion is a phenomenon that still eludes a fundamental and
concise mathematical understanding. Conceptual models that capture some
relevant aspects of the process exist but their full explanatory power is not
yet exhausted. In the current study, we introduce the robustness criterion
which defines the conditions for stable locomotion when steps are taken with
imprecise angle of attack. Intuitively, the necessity of a higher precision
indicates the difficulty of continuing to move with a given gait. We show that the
spring-loaded inverted pendulum model, under the robustness criterion, is
consistent with previously reported findings on attentional demand during human
locomotion. This criterion allows transitions between running and walking, many
of which conserve forward speed. Simulations of transitions predict Froude
numbers below those observed in humans; nevertheless, the model
satisfactorily reproduces several biomechanical indicators such as hip
excursion, gait duty factor and vertical ground reaction force profiles.
Furthermore, we identify reversible robust walk-run transitions, which allow
the system to execute a robust version of the hopping gait. These findings
foster the spring-loaded inverted pendulum model as the unifying framework for
the understanding of bipedal locomotion.
|
1403.0921 | Dynamic stochastic blockmodels for time-evolving social networks | cs.SI cs.LG physics.soc-ph stat.ME | Significant efforts have gone into the development of statistical models for
analyzing data in the form of networks, such as social networks. Most existing
work has focused on modeling static networks, which represent either a single
time snapshot or an aggregate view over time. There has been recent interest in
statistical modeling of dynamic networks, which are observed at multiple points
in time and offer a richer representation of many complex phenomena. In this
paper, we present a state-space model for dynamic networks that extends the
well-known stochastic blockmodel for static networks to the dynamic setting. We
fit the model in a near-optimal manner using an extended Kalman filter (EKF)
augmented with a local search. We demonstrate that the EKF-based algorithm
performs competitively with a state-of-the-art algorithm based on Markov chain
Monte Carlo sampling but is significantly less computationally demanding.
|
1403.0930 | Spectrum Sensing Via Reconfigurable Antennas: Fundamental Limits and
Potential Gains | cs.NI cs.IT math.IT | We propose a novel paradigm for spectrum sensing in cognitive radio networks
that provides diversity and capacity benefits using a single antenna at the
Secondary User (SU) receiver. The proposed scheme is based on a reconfigurable
antenna: an antenna that is capable of altering its radiation characteristics
by changing its geometric configuration. Each configuration is designated as an
antenna mode or state and corresponds to a distinct channel realization. Based
on an abstract model for the reconfigurable antenna, we tackle two different
settings for the cognitive radio problem and present fundamental limits on the
achievable diversity and throughput gains. First, we explore the (to cooperate
or not to cooperate) tradeoff between the diversity and coding gains in
conventional cooperative and noncooperative spectrum sensing schemes, showing
that cooperation is not always beneficial. Based on this analysis, we propose
two sensing schemes based on reconfigurable antennas that we term state
switching and state selection. It is shown that each of these schemes
outperforms both cooperative and non-cooperative spectrum sensing under a global
energy constraint. Next, we study the (sensing-throughput) trade-off, and
demonstrate that using reconfigurable antennas, the optimal sensing time is
reduced allowing for a longer transmission time, and thus better throughput.
Moreover, state selection can be applied to boost the capacity of SU
transmission.
|
1403.0950 | On the connection between compression learning and scenario based
optimization | cs.SY | We investigate the connections between compression learning and scenario
based optimization. We first show how to strengthen, or relax the consistency
assumption at the basis of compression learning and study the learning and
generalization properties of the algorithm involved. We then consider different
constrained optimization problems affected by uncertainty represented by means
of scenarios. We show that the issue of providing guarantees on the probability
of constraint violation reduces to a learning problem for an appropriately
chosen algorithm that enjoys compression learning properties. The compression
learning perspective provides a unifying framework for scenario based
optimization and allows us to revisit the scenario approach and the
probabilistically robust design, a recently developed technique based on a
mixture of randomized and robust optimization, and to extend the guarantees on
the probability of constraint violation to cascading optimization problems.
|
1403.0952 | Algorithmic Verification of Continuous and Hybrid Systems | cs.SY cs.FL cs.LO cs.NA | We provide a tutorial introduction to reachability computation, a class of
computational techniques that exports verification technology toward continuous
and hybrid systems. For open under-determined systems, this technique can
sometimes replace an infinite number of simulations.
|
1403.0957 | On the Symmetric $K$-user Interference Channels with Limited Feedback | cs.IT math.IT | In this paper, we develop achievability schemes for symmetric $K$-user
interference channels with a rate-limited feedback from each receiver to the
corresponding transmitter. We study this problem under two different channel
models: the linear deterministic model, and the Gaussian model. For the
deterministic model, the proposed scheme achieves a symmetric rate that is the
minimum of the symmetric capacity with infinite feedback, and the sum of the
symmetric capacity without feedback and the symmetric amount of feedback. For
the Gaussian interference channel, we use lattice codes to propose a
transmission strategy that incorporates the techniques of Han-Kobayashi message
splitting, interference decoding, and decode and forward. This strategy
achieves a symmetric rate which is within a constant number of bits to the
minimum of the symmetric capacity with infinite feedback, and the sum of the
symmetric capacity without feedback and the amount of symmetric feedback. This
constant is obtained as a function of the number of users, $K$. The symmetric
achievable rate is used to characterize the achievable generalized degrees of
freedom which exhibits a gradual increase from no feedback to perfect feedback
in the presence of feedback links with limited capacity.
|
1403.0965 | Design Challenges of Millimeter Wave Communications: A MAC Layer
Perspective | cs.IT cs.NI math.IT | As spectrum becomes increasingly scarce due to the exponential demand for
data, the new millimeter-wave (mmW) band is considered an enabling player in 5G
communications, providing multi-gigabit wireless access. MmW communications
exhibit high attenuation and blockage, directionality due to massive
beamforming, deafness, and low interference, and may need microwave networks
for coordination and fallback support. The current
mmW standardizations are challenged by the overwhelming complexity given by
such heterogeneous communication systems and mmW band characteristics. This
demands new substantial protocol developments at all layers. In this paper, the
medium access control issues for mmW communications are reviewed. It is
discussed that while existing standards address some of these issues for
personal and local area networks, little has been done for cellular networks.
It is argued that the medium access control layer should be equipped with
adaptation mechanisms that are aware of the special mmW characteristics.
Recommendations for mmW medium access control design in 5G are provided. It is
concluded that the design of efficient access control techniques for mmW is in
its infancy and much work still has to be done.
|
1403.0989 | Detecting change points in the large-scale structure of evolving
networks | cs.SI physics.soc-ph stat.ML | Interactions among people or objects are often dynamic in nature and can be
represented as a sequence of networks, each providing a snapshot of the
interactions over a brief period of time. An important task in analyzing such
evolving networks is change-point detection, in which we both identify the
times at which the large-scale pattern of interactions changes fundamentally
and quantify how large and what kind of change occurred. Here, we formalize for
the first time the network change-point detection problem within an online
probabilistic learning framework and introduce a method that can reliably solve
it. This method combines a generalized hierarchical random graph model with a
Bayesian hypothesis test to quantitatively determine if, when, and precisely
how a change point has occurred. We analyze the detectability of our method
using synthetic data with known change points of different types and
magnitudes, and show that this method is more accurate than several previously
used alternatives. Applied to two high-resolution evolving social networks,
this method identifies a sequence of change points that align with known
external "shocks" to these networks.
|
1403.1013 | Covert Communication Gains from Adversary's Ignorance of Transmission
Time | cs.IT math.IT | The recent square root law (SRL) for covert communication demonstrates that
Alice can reliably transmit $\mathcal{O}(\sqrt{n})$ bits to Bob in $n$ uses of
an additive white Gaussian noise (AWGN) channel while keeping ineffective any
detector employed by the adversary; conversely, exceeding this limit either
results in detection by the adversary with high probability or non-zero
decoding error probability at Bob. This SRL is under the assumption that the
adversary knows when Alice transmits (if she transmits); however, in many
operational scenarios he does not know this. Hence, here we study the impact of
the adversary's ignorance of the time of the communication attempt. We employ a
slotted AWGN channel model with $T(n)$ slots each containing $n$ symbol
periods, where Alice may use a single slot out of $T(n)$. Provided that Alice's
slot selection is secret, the adversary needs to monitor all $T(n)$ slots for
possible transmission. We show that this allows Alice to reliably transmit
$\mathcal{O}(\min\{\sqrt{n\log T(n)},n\})$ bits to Bob (but no more) while
keeping the adversary's detector ineffective. To achieve this gain over SRL,
Bob does not have to know the time of transmission provided $T(n)<2^{c_{\rm
T}n}$, $c_{\rm T}=\mathcal{O}(1)$.
|
1403.1023 | Active Hypothesis Testing for Quickest Anomaly Detection | cs.IT math.IT | The problem of quickest detection of an anomalous process among M processes
is considered. At each time, a subset of the processes can be observed, and the
observations from each chosen process follow two different distributions,
depending on whether the process is normal or abnormal. The objective is a
sequential search strategy that minimizes the expected detection time subject
to an error probability constraint. This problem can be considered as a special
case of active hypothesis testing first considered by Chernoff in 1959 where a
randomized strategy, referred to as the Chernoff test, was proposed and shown
to be asymptotically (as the error probability approaches zero) optimal. For
the special case considered in this paper, we show that a simple deterministic
test achieves asymptotic optimality and offers better performance in the finite
regime. We further extend the problem to the case where multiple anomalous
processes are present. In particular, we examine the case where only an upper
bound on the number of anomalous processes is known.
|
1403.1024 | On learning to localize objects with minimal supervision | cs.CV cs.LG | Learning to localize objects with minimal supervision is an important problem
in computer vision, since large fully annotated datasets are extremely costly
to obtain. In this paper, we propose a new method that achieves this goal with
only image-level labels of whether the objects are present or not. Our approach
combines a discriminative submodular cover problem for automatically
discovering a set of positive object windows with a smoothed latent SVM
formulation. The latter allows us to leverage efficient quasi-Newton
optimization techniques. Our experiments demonstrate that the proposed approach
provides a 50% relative improvement in mean average precision over the current
state-of-the-art on PASCAL VOC 2007 detection.
|
1403.1056 | K-Tangent Spaces on Riemannian Manifolds for Improved Pedestrian
Detection | cs.CV | For covariance-based image descriptors, taking into account the curvature of
the corresponding feature space has been shown to improve discrimination
performance. This is often done through representing the descriptors as points
on Riemannian manifolds, with the discrimination accomplished on a tangent
space. However, such treatment is restrictive as distances between arbitrary
points on the tangent space do not represent true geodesic distances, and hence
do not represent the manifold structure accurately. In this paper we propose a
general discriminative model based on the combination of several tangent
spaces, in order to preserve more details of the structure. The model can be
used as a weak learner in a boosting-based pedestrian detection framework.
Experiments on the challenging INRIA and DaimlerChrysler datasets show that the
proposed model leads to considerably higher performance than methods based on
histograms of oriented gradients as well as previous Riemannian-based
techniques.
|
1403.1070 | How to Apply Markov Chains for Modeling Sequential Edit Patterns in
Collaborative Ontology-Engineering Projects | cs.HC cs.SI | With the growing popularity of large-scale collaborative ontology-engineering
projects, such as the creation of the 11th revision of the International
Classification of Diseases, we need new methods and insights to help project-
and community-managers to cope with the constantly growing complexity of such
projects. In this paper, we present a novel application of Markov chains to
model sequential usage patterns that can be found in the change-logs of
collaborative ontology-engineering projects. We provide a detailed presentation
of the analysis process, describing all the required steps that are necessary
to apply and determine the best fitting Markov chain model. Amongst others, the
model and results allow us to identify structural properties and regularities
as well as predict future actions based on usage sequences. We are specifically
interested in determining the appropriate Markov chain orders which postulate
on how many previous actions future ones depend on. To demonstrate the
practical usefulness of the extracted Markov chains we conduct sequential
pattern analyses on a large-scale collaborative ontology-engineering dataset,
the International Classification of Diseases in its 11th revision. To further
expand on the usefulness of the presented analysis, we show that the collected
sequential patterns provide potentially actionable information for
user-interface designers, ontology-engineering tool developers and
project-managers to monitor, coordinate and dynamically adapt to the natural
development processes that occur when collaboratively engineering an ontology.
We hope that the presented work will spur a new line of ontology-development
tools, evaluation techniques and new insights, further taking the interactive nature
of the collaborative ontology-engineering process into consideration.
|
1403.1073 | Artificial Neuron Modelling Based on Wave Shape | cs.NE | This paper describes a new model for an artificial neural network processing
unit or neuron. It differs slightly from a traditional feedforward network
in that it favours a mechanism of trying to match the wave-like
'shape' of the input with the shape of the output against specific value error
corrections. The expectation is then that a best fit shape can be transposed
into the desired output values more easily. This allows for notions of
reinforcement through resonance and also the construction of synapses.
|
1403.1076 | Is Intelligence Artificial? | cs.AI | Our understanding of intelligence is directed primarily at the human level.
This paper attempts to give a more unifying definition that can be applied to
the natural world in general and then to Artificial Intelligence. The definition
would be used more to qualify intelligence than to quantify it and might help when making
judgements on the matter. While correct behaviour is the preferred definition,
a metric that is grounded in Kolmogorov's Complexity Theory is suggested, which
leads to a measurement about entropy. A version of an accepted AI test is then
put forward as the 'acid test' and might be what a free-thinking program would
try to achieve. Recent work by the author has been more from a direction of
mechanical processes, or ones that might operate automatically. This paper
agrees that intelligence is a pro-active event, but also notes a second aspect
to it that is in the background and mechanical. The paper suggests looking at
intelligence and the conscious as being slightly different, where the conscious
is this more mechanical aspect. In fact, a surprising conclusion can be a
passive but intelligent brain being invoked by active and less intelligent
senses.
|
1403.1078 | A network centrality method for the rating problem | physics.soc-ph cs.SI | We propose a new method for aggregating the information of multiple reviewers
rating multiple products. Our approach is based on the network relations
induced between products by the rating activity of the reviewers. We show that
our method is algorithmically implementable even for large numbers of both
products and consumers, as is the case for many online sites. Moreover,
comparing it with the simple average, which is mostly used in practice, and
with other methods previously proposed in the literature, it performs very well
under various dimensions, proving itself to be an optimal trade-off between
computational efficiency, accordance with the reviewers' original orderings, and
robustness with respect to the inclusion of systematically biased reports.
|
1403.1080 | New Ideas for Brain Modelling | cs.AI q-bio.NC | This paper describes some biologically-inspired processes that could be used
to build the sort of networks that we associate with the human brain. New to
this paper, a 'refined' neuron will be proposed. This is a group of neurons
that by joining together can produce a more analogue system, but with the same
level of control and reliability that a binary neuron would have. With this new
structure, it will be possible to think of an essentially binary system in
terms of a more variable set of values. The paper also shows how recent
research associated with the new model can be combined with established
theories to produce a more complete picture. The propositions are largely in
line with conventional thinking, but possibly with one or two more radical
suggestions. An earlier cognitive model can be filled in with more specific
details, based on the new research results, where the components appear to fit
together almost seamlessly. The intention of the research has been to describe
plausible 'mechanical' processes that can produce the appropriate brain
structures and mechanisms, but that could be used without the magical
'intelligence' part that is still not fully understood. There are also some
important updates from an earlier version of this paper.
|
1403.1091 | Signal Estimation from Nonuniform Samples with RMS Error Bound --
Application to OFDM Channel Estimation | cs.IT math.IT | We present a channel spectral estimator for OFDM signals containing pilot
carriers, assuming a known delay spread or a bound on this parameter. The
estimator is based on modeling the channel's spectrum as a band-limited
function, instead of as the discrete Fourier transform of a tapped delay line
(TDL). Its main advantage is its immunity to the truncation mismatch in usual
TDL models (Gibbs phenomenon). In order to assess the estimator, we compare it
with the well-known TDL maximum likelihood (ML) estimator in terms of
root-mean-square (RMS) error. The main result is that the proposed estimator
improves on the ML estimator significantly, whenever the average spectral
sampling rate is above the channel's delay spread. The improvement increases
with the spectral oversampling ratio.
|
1403.1104 | Proposal for a Correction to the Temporal Correlation Coefficient
Calculation for Temporal Networks | physics.soc-ph cs.SI | Measuring the topological overlap of two graphs becomes important when
assessing the changes between temporally adjacent graphs in a time-evolving
network. Current methods depend on the fraction of nodes that have persisting
edges. This breaks down when there are nodes with no edges, persisting or
otherwise. The following outlines a proposed correction to ensure that
correlation metrics have the expected behavior.
|
1403.1124 | Estimating complex causal effects from incomplete observational data | stat.ME cs.LG math.ST stat.ML stat.TH | Despite the major advances taken in causal modeling, causality is still an
unfamiliar topic for many statisticians. In this paper, it is demonstrated from
the beginning to the end how causal effects can be estimated from observational
data assuming that the causal structure is known. To make the problem more
challenging, the causal effects are highly nonlinear and the data are missing
at random. The tools used in the estimation include causal models with design,
causal calculus, multiple imputation and generalized additive models. The main
message is that a trained statistician can estimate causal effects by
judiciously combining existing tools.
|
1403.1168 | Loud and Trendy: Crowdsourcing Impressions of Social Ambiance in Popular
Indoor Urban Places | cs.SI physics.soc-ph | New research cutting across architecture, urban studies, and psychology is
contextualizing the understanding of urban spaces according to the perceptions
of their inhabitants. One fundamental construct that relates place and
experience is ambiance, which is defined as "the mood or feeling associated
with a particular place". We posit that the systematic study of ambiance
dimensions in cities is a new domain for which multimedia research can make
pivotal contributions. We present a study to examine how images collected from
social media can be used for the crowdsourced characterization of indoor
ambiance impressions in popular urban places. We design a crowdsourcing
framework to understand the suitability of social images as a data source to convey
place ambiance, to examine what types of images are most suitable to describe
ambiance, and to assess how people perceive places socially from the
perspective of ambiance along 13 dimensions. Our study is based on 50,000
Foursquare images collected from 300 popular places across six cities
worldwide. The results show that reliable estimates of ambiance can be obtained
for several of the dimensions. Furthermore, we found that most aggregate
impressions of ambiance are similar across popular places in all studied
cities. We conclude by presenting a multidisciplinary research agenda for
future research in this domain.
|
1403.1169 | A proof challenge: multiple alignment and information compression | cs.AI | These notes pose a "proof challenge": a proof, or disproof, of the
proposition that "For any given body of information, I, expressed as a
one-dimensional sequence of atomic symbols, a multiple alignment concept,
described in the document, provides a means of encoding all the redundancy that
may exist in I." Aspects of the challenge are described.
|
1403.1177 | Effects of temporal correlations on cascades: Threshold models on
temporal networks | physics.soc-ph cs.SI physics.data-an | A person's decision to adopt an idea or product is often driven by the
decisions of peers, mediated through a network of social ties. A common way of
modeling adoption dynamics is to use threshold models, where a node may become
an adopter given a high enough rate of contacts with adopted neighbors. We
study the dynamics of threshold models that take both the network topology and
the timings of contacts into account, using empirical contact sequences as
substrates. The models are designed such that adoption is driven by the number
of contacts with different adopted neighbors within a chosen time. We find that
while some networks support cascades leading to network-level adoption, some do
not: the propagation of adoption depends on several factors from the frequency
of contacts to burstiness and timing correlations of contact sequences. More
specifically, burstiness is seen to suppress cascade sizes when compared to
randomised contact timings, while timing correlations between contacts on
adjacent links facilitate cascades.
|
1403.1180 | A distributed Integrity Catalog for digital repositories | cs.DB cs.DC cs.DL | Digital repositories, either digital preservation systems or archival
systems, periodically check the integrity of stored objects to assure users of
their correctness. To do so, prior solutions calculate integrity metadata and
require the repository to store it alongside the actual data objects. This
integrity metadata is essential for regularly verifying the correctness of the
stored data objects. To safeguard and detect damage to this metadata, prior
solutions rely on widely visible media, that is, unaffiliated third parties, to
store and provide back digests of the metadata to verify it is intact. However,
they do not address recovery of the integrity metadata in case of damage or
attack by an adversary. In essence, they do not preserve this metadata. We
introduce IntegrityCatalog, a system that collects all integrity related
metadata in a single component, and treats them as first class objects,
managing both their integrity and their preservation. We introduce a
treap-based persistent authenticated dictionary managing arbitrary length
key/value pairs, which we use to store all integrity metadata, accessible
simply by object name. Additionally, IntegrityCatalog is a distributed system
that includes a network protocol that manages both corruption detection and
preservation of this metadata, using administrator-selected network peers with
two possible roles. Verifiers store and offer attestations on digests and have
minimal storage requirements, while preservers efficiently synchronize a
complete copy of the catalog to assist in recovery in case of a detected
catalog compromise on the local system. We describe our prototype
implementation of IntegrityCatalog, measure its performance empirically, and
demonstrate its effectiveness in real-world situations, with worst measured
throughput of approximately 1K insertions per second, and 2K verified search
operations per second.
|
1403.1185 | Phase transitions in the condition number distribution of Gaussian
random matrices | cond-mat.stat-mech cs.CC cs.IT math-ph math.IT math.MP stat.OT | We study the statistics of the condition number
$\kappa=\lambda_{\mathrm{max}}/\lambda_{\mathrm{min}}$ (the ratio between
largest and smallest squared singular values) of $N\times M$ Gaussian random
matrices. Using a Coulomb fluid technique, we derive analytically and for large
$N$ the cumulative $\mathcal{P}[\kappa<x]$ and tail-cumulative
$\mathcal{P}[\kappa>x]$ distributions of $\kappa$. We find that these
distributions decay as $\mathcal{P}[\kappa<x]\approx\exp\left(-\beta N^2
\Phi_{-}(x)\right)$ and $\mathcal{P}[\kappa>x]\approx\exp\left(-\beta N
\Phi_{+}(x)\right)$, where $\beta$ is the Dyson index of the ensemble. The left
and right rate functions $\Phi_{\pm}(x)$ are independent of $\beta$ and
calculated exactly for any choice of the rectangularity parameter
$\alpha=M/N-1>0$. Interestingly, they show a weak non-analytic behavior at
their minimum $\langle\kappa\rangle$ (corresponding to the average condition
number), a direct consequence of a phase transition in the associated Coulomb
fluid problem. Matching the behavior of the rate functions around
$\langle\kappa\rangle$, we determine exactly the scale of typical fluctuations
$\sim\mathcal{O}(N^{-2/3})$ and the tails of the limiting distribution of
$\kappa$. The analytical results are in excellent agreement with numerical
simulations.
|
1403.1194 | Latent Semantic Word Sense Disambiguation Using Global Co-occurrence
Information | cs.CL cs.IR | In this paper, I propose a novel word sense disambiguation method based on
the global co-occurrence information using NMF. When I calculate the dependency
relation matrix, the existing method tends to produce a very sparse co-occurrence
matrix from a small training set. Therefore, the NMF algorithm sometimes does
not converge to desired solutions. To obtain a large number of co-occurrence
relations, I propose to use co-occurrence frequencies of dependency relations
between word features in the whole training set. This enables us to solve the
data sparseness problem and induce more effective latent features. To evaluate
the efficiency of the method for word sense disambiguation, I conduct
experiments comparing it with the results of two baseline methods. The results
of the experiments show this method is effective for word sense disambiguation
in comparison with all the baseline methods. Moreover, the proposed method is
effective for obtaining a stable effect by analyzing the global co-occurrence
information.
|
1403.1202 | Flocking and turning: a new model for self-organized collective motion | cond-mat.stat-mech cs.RO cs.SY physics.bio-ph q-bio.PE | Birds in a flock move in a correlated way, resulting in large polarization of
velocities. A good understanding of this collective behavior exists for linear
motion of the flock. Yet observing actual birds, the center of mass of the
group often turns giving rise to more complicated dynamics, still keeping
strong polarization of the flock. Here we propose novel dynamical equations for
the collective motion of polarized animal groups that account for correlated
turning including solely social forces. We exploit rotational symmetries and
conservation laws of the problem to formulate a theory in terms of generalized
coordinates of motion for the velocity directions akin to a Hamiltonian
formulation for rotations. We explicitly derive the correspondence between this
formulation and the dynamics of the individual velocities, thus obtaining a new
model of collective motion. In the appropriate overdamped limit we recover the
well-known Vicsek model, which dissipates rotational information and does not
allow for polarized turns. Although the new model has its most vivid success in
describing turning groups, its dynamics is intrinsically different from
previous ones in a wide dynamical regime, while reducing to the hydrodynamic
description of Toner and Tu at very large length-scales. The derived framework
is therefore general and it may describe the collective motion of any strongly
polarized active matter system.
|
1403.1214 | A fast clustering algorithm for mining social network data | cs.SI physics.soc-ph | Many groups with diverse convictions are interacting online. Interactions in
online communities help people to engage each other and enhance understanding
across groups. Online communities include multiple sub-communities whose
members are similar due to social ties, characteristics, or ideas on a topic.
In this research, we are interested in understanding the changes in the
relative size and activity of these sub-communities, their merging or splitting
patterns, and the changes in the perspectives of the members of these
sub-communities due to endogenous dynamics inside the community.
|
1403.1218 | Cyclic Orbit Codes and Stabilizer Subfields | cs.IT math.IT | Cyclic orbit codes are constant dimension subspace codes that arise as the
orbit of a cyclic subgroup of the general linear group acting on subspaces in
the given ambient space. With the aid of the largest subfield over which the
given subspace is a vector space, the cardinality of the orbit code can be
determined, and estimates for its distance can be found. This subfield is
closely related to the stabilizer of the generating subspace. Finally, with a
linkage construction, larger and longer constant dimension codes can be
derived from cyclic orbit codes without compromising the distance.
|
1403.1228 | Topological implications of negative curvature for biological and social
networks | q-bio.MN cs.DM cs.SI physics.soc-ph | Network measures that reflect the most salient properties of complex
large-scale networks are in high demand in the network research community. In
this paper we adapt a combinatorial measure of negative curvature (also called
hyperbolicity) to parameterized finite networks, and show that a variety of
biological and social networks are hyperbolic. This hyperbolicity property has
strong implications on the higher-order connectivity and other topological
properties of these networks. Specifically, we derive and prove bounds on the
distance among shortest or approximately shortest paths in hyperbolic networks.
We describe two implications of these bounds to cross-talk in biological
networks, and to the existence of central, influential neighborhoods in both
biological and social networks.
|
1403.1241 | Vaccines, Contagion, and Social Networks | stat.ME cs.SI physics.soc-ph | Consider the causal effect that one individual's treatment may have on
another individual's outcome when the outcome is contagious, with specific
application to the effect of vaccination on an infectious disease outcome. The
effect of one individual's vaccination on another's outcome can be decomposed
into two different causal effects, called the "infectiousness" and "contagion"
effects. We present identifying assumptions and estimation or testing
procedures for infectiousness and contagion effects in two different settings:
(1) using data sampled from independent groups of observations, and (2) using
data collected from a single interdependent social network. The methods that we
propose for social network data require fitting generalized linear models
(GLMs). GLMs and other statistical models that require independence across
subjects have been used widely to estimate causal effects in social network
data, but, because the subjects in networks are presumably not independent, the
use of such models is generally invalid, resulting in inference that is
expected to be anticonservative. We introduce a way to ensure that GLM
residuals are uncorrelated across subjects despite the fact that outcomes are
non-independent. This simultaneously demonstrates the possibility of using GLMs
and related statistical models for network data and highlights their
limitations.
|
1403.1243 | Estimation of Toeplitz Covariance Matrices in Large Dimensional Regime
with Application to Source Detection | cs.IT math.IT | In this article, we derive concentration inequalities for the spectral norm
of two classical sample estimators of large dimensional Toeplitz covariance
matrices, demonstrating in particular their asymptotic almost sure consistency.
The consistency is then extended to the case where the aggregated matrix of
time samples is corrupted by a rank one (or more generally, low rank) matrix.
As an application of the latter, the problem of source detection in the context
of large dimensional sensor networks within a temporally correlated noise
environment is studied. As opposed to standard procedures, this application is
performed online, i.e. without the need to possess a learning set of pure noise
samples.
|
1403.1248 | Integrating Energy Storage into the Smart Grid: A Prospect Theoretic
Approach | cs.GT cs.IT math.IT | In this paper, the interactions and energy exchange decisions of a number of
geographically distributed storage units are studied under decision-making
involving end-users. In particular, a noncooperative game is formulated between
customer-owned storage units where each storage unit's owner can decide on
whether to charge or discharge energy with a given probability so as to
maximize a utility that reflects the tradeoff between the monetary transactions
from charging/discharging and the penalty from power regulation. Unlike
existing game-theoretic works which assume that players make their decisions
rationally and objectively, we use the new framework of prospect theory (PT) to
explicitly incorporate the users' subjective perceptions of their expected
utilities. For the two-player game, we show the existence of a proper mixed
Nash equilibrium for both the standard game-theoretic case and the case with PT
considerations. Simulation results show that incorporating user behavior via PT
reveals several important insights into load management as well as economics of
energy storage usage. For instance, the results show that deviations from
conventional game theory, as predicted by PT, can lead to undesirable grid
loads and revenues thus requiring the power company to revisit its pricing
schemes and the customers to reassess their energy storage usage choices.
|
1403.1252 | Inducing Language Networks from Continuous Space Word Representations | cs.LG cs.CL cs.SI | Recent advancements in unsupervised feature learning have developed powerful
latent representations of words. However, it is still not clear what makes one
representation better than another and how we can learn the ideal
representation. Understanding the structure of latent spaces attained is key to
any future advancement in unsupervised learning. In this work, we introduce a
new view of continuous space word representations as language networks. We
explore two techniques to create language networks from learned features by
inducing them for two popular word representation methods and examining the
properties of their resulting networks. We find that the induced networks
differ from other methods of creating language networks, and that they contain
meaningful community structure.
|
1403.1276 | Quantifying the Information Leakage in Timing Side Channels in
Deterministic Work-Conserving Schedulers | cs.IT math.IT | When multiple job processes are served by a single scheduler, the queueing
delays of one process are often affected by the others, resulting in a timing
side channel that leaks the arrival pattern of one process to the others. In
this work, we study such a timing side channel between a regular user and a
malicious attacker. Utilizing Shannon's mutual information as a measure of
information leakage between the user and attacker, we analyze
privacy-preserving behaviors of common work-conserving schedulers. We find that
the attacker can always learn perfectly the user's arrival process in a
longest-queue-first (LQF) scheduler. When the user's job arrival rate is very
low (near zero), first-come-first-serve (FCFS) and round robin schedulers both
completely reveal the user's arrival pattern. The near-complete information
leakage in the low-rate traffic region is proven to be reduced by half in a
work-conserving version of TDMA (WC-TDMA) scheduler, which turns out to be
privacy-optimal in the class of deterministic work-conserving (det-WC)
schedulers, according to a universal lower bound on information leakage we
derive for all det-WC schedulers.
|
1403.1310 | AntiPlag: Plagiarism Detection on Electronic Submissions of Text Based
Assignments | cs.IR cs.CL cs.DL | Plagiarism is one of the growing issues in academia and is always a concern
in Universities and other academic institutions. The situation is becoming even
worse with the availability of ample resources on the web. This paper focuses
on creating an effective and fast tool for plagiarism detection for text based
electronic assignments. Our plagiarism detection tool named AntiPlag is
developed using the tri-gram sequence matching technique. Three sets of text
based assignments were tested by AntiPlag and the results were compared against
an existing commercial plagiarism detection tool. AntiPlag showed better
results in terms of false positives compared to the commercial tool due to the
pre-processing steps performed in AntiPlag. In addition, to reduce the
detection latency, AntiPlag applies a data clustering technique making it four
times faster than the commercial tool considered. AntiPlag could be used to
isolate plagiarized text based assignments from non-plagiarized assignments
easily. Therefore, we present AntiPlag, a fast and effective tool for
plagiarism detection on text based electronic assignments.
|
1403.1313 | Accelerating motif finding in DNA sequences with multicore CPUs | cs.CE cs.DC | Motif discovery in DNA sequences is a challenging task in molecular biology.
In computational motif discovery, Planted (l, d) motif finding is a widely
studied problem and numerous algorithms are available to solve it. Both
hardware and software accelerators have been introduced to accelerate the motif
finding algorithms. However, the use of hardware accelerators such as FPGAs
needs hardware specialists to design such systems. Software based acceleration
methods on the other hand are easier to implement than hardware acceleration
techniques. Grid computing is one such software based acceleration technique
which has been used in acceleration of motif finding. However, drawbacks such
as network communication delays and the need of fast interconnection between
nodes in the grid can limit its usage and scalability. As the use of multicore
CPUs to accelerate CPU-intensive tasks is becoming increasingly popular and
common nowadays, we can employ them to accelerate motif finding, which can be
faster than grid-based acceleration. In this paper, we have explored the use of
multicore CPUs to accelerate motif finding. We have accelerated the Skip-Brute
Force algorithm on multicore CPUs parallelizing it using the POSIX thread
library. Our method yielded an average speed up of 34x on a 32-core processor
compared to a speed up of 21x on a grid based implementation of 32 nodes.
|
1403.1314 | Authorship detection of SMS messages using unigrams | cs.CL cs.IR | SMS messaging is a popular media of communication. Because of its popularity
and privacy, it could be used for many illegal purposes. Additionally, since
they are part of the day to day life, SMSes can be used as evidence for many
legal disputes. Since a cellular phone might be accessible to people close to
the owner, it is important to establish the fact that the sender of the message
is indeed the owner of the phone. For this purpose, the straightforward
solutions seem to be the use of popular stylometric methods. However, in
comparison with the data used for stylometry in the literature, SMSes have
unusual characteristics making it hard or impossible to apply these methods in
a conventional way. Our target is to come up with a method of authorship
detection of SMS messages that could still give a usable accuracy. We argue
that, considering the methods of author attribution, the best method that could
be applied to SMS messages is an n-gram method. To prove our point, we checked
two different methods of distribution comparison with varying number of
training and testing data. We specifically try to compare how well our
algorithms work under less amount of testing data and large number of candidate
authors (which we believe to be the real world scenario) against controlled
tests with less number of authors and selected SMSes with large number of
words. To counter the lack of information in an SMS message, we propose the
method of stacking together a few SMSes.
|
1403.1317 | Hardware software co-design of the Aho-Corasick algorithm: Scalable for
protein identification? | cs.CE | Pattern matching is commonly required in many application areas and
bioinformatics is a major area of interest that requires both exact and
approximate pattern matching. Much work has been done in this area, yet there
is still a significant space for improvement in efficiency, flexibility, and
throughput. This paper presents a hardware software co-design of Aho-Corasick
algorithm in Nios II soft-processor and a study on its scalability for a
pattern matching application. A software only approach is used to compare the
throughput and the scalability of the hardware software co-design approach.
According to the results we obtained, we conclude that the hardware software
co-design implementation shows a maximum of 10 times speed up for pattern size
of 1200 peptides compared to the software only implementation. The results also
show that the hardware software co-design approach scales well for increasing
data size compared to the software only approach.
|
1403.1319 | Hardware accelerated protein inference framework | cs.CE | Protein inference plays a vital role in the proteomics study. Two major
approaches could be used to handle the problem of protein inference; top-down
and bottom-up. This paper presents a hardware accelerated framework for
protein inference, which handles the most important step in a bottom-up
approach, viz. peptide identification during the
assembling process. In our framework, identified peptides and their
probabilities are used to predict the most suitable reference protein cluster
for a given input amino acid sequence with the probability of identified
peptides. The framework is developed on an FPGA where hardware software
co-design techniques are used to accelerate the computationally intensive parts
of the protein inference process. In the paper we have measured, compared and
reported the time taken for the protein inference process in our framework
against a pure software implementation.
|
1403.1323 | Performance of ML Range Estimator in Radio Interferometric Positioning
Systems | cs.IT cs.NI math.IT | The radio interferometric positioning system (RIPS) is a novel positioning
solution used in wireless sensor networks. This letter explores the ranging
accuracy of RIPS in two configurations. In the linear step-frequency (LSF)
configuration, we derive the mean square error (MSE) of the maximum likelihood
(ML) estimator. In the random step-frequency (RSF) configuration, we introduce
average MSE to characterize the performance of the ML estimator. The simulation
results fit well with theoretical analysis. It is revealed that RSF is superior
to LSF in that the former is more robust in a jamming environment with similar
ranging accuracy.
|
1403.1327 | Multi-view Face Analysis Based on Gabor Features | cs.CV | Facial analysis has attracted much attention in the technology for
human-machine interface. Different methods of classification based on sparse
representation and Gabor kernels have been widely applied in the fields of
facial analysis. However, most of these methods treat face from a whole view
standpoint. In terms of the importance of different facial views, in this
paper, we present multi-view face analysis based on sparse representation and
Gabor wavelet coefficients. To evaluate the performance, we conduct face
analysis experiments including face recognition (FR) and face expression
recognition (FER) on JAFFE database. Experiments are conducted from two parts:
(1) Face images are divided into three facial parts which are forehead, eye and
mouth. (2) Face images are divided into 8 parts by the orientation of Gabor
kernels. Experimental results demonstrate that the proposed methods can
significantly boost performance and outperform competing methods.
|
1403.1329 | Integer Programming Relaxations for Integrated Clustering and Outlier
Detection | cs.LG | In this paper we present methods for exemplar based clustering with outlier
selection based on the facility location formulation. Given a distance function
and the number of outliers to be found, the methods automatically determine the
number of clusters and outliers. We formulate the problem as an integer program
to which we present relaxations that allow for solutions that scale to large
data sets. The advantages of combining clustering and outlier selection
include: (i) the resulting clusters tend to be compact and semantically
coherent, (ii) the clusters are more robust against data perturbations, and
(iii) the outliers are contextualised by the clusters and thus more
interpretable, i.e. it is easier to distinguish outliers that are the result of
data errors from those that may be indicative of a new pattern emerging in the
data. We present and contrast three relaxations of the integer program
formulation: (i) a linear programming formulation (LP), (ii) an extension of
affinity propagation to outlier detection (APOC), and (iii) a
Lagrangian-duality-based formulation (LD). Evaluation on synthetic as well as
real data shows the quality and
scalability of these different methods.
|
1403.1336 | An Extensive Report on the Efficiency of AIS-INMACA (A Novel Integrated
MACA based Clonal Classifier for Protein Coding and Promoter Region
Prediction) | cs.CE cs.LG | This paper exclusively reports the efficiency of AIS-INMACA. AIS-INMACA has
made a strong impact on major problems in bioinformatics, such as protein
region identification and promoter region prediction, with reduced processing
time (Pokkuluri Kiran Sree, 2014). AIS-INMACA is now available in several
variations (Pokkuluri Kiran Sree, 2014), with the aim of establishing it as a
bioinformatics tool for solving a wide range of problems. This paper will
therefore be useful to researchers working at the intersection of
bioinformatics and cellular automata.
|
1403.1343 | Ubic: Bridging the gap between digital cryptography and the physical
world | cs.CR cs.CV | Advances in computing technology increasingly blur the boundary between the
digital domain and the physical world. Although the research community has
developed a large number of cryptographic primitives and has demonstrated their
usability in all-digital communication, many of them have not yet made their
way into the real world due to usability issues. We aim to take another step
towards a tighter integration of digital cryptography into real world
interactions. We describe Ubic, a framework that allows users to bridge the gap
between digital cryptography and the physical world. Ubic relies on
head-mounted displays, like Google Glass, resource-friendly computer vision
techniques as well as mathematically sound cryptographic primitives to provide
users with better security and privacy guarantees. The framework covers key
cryptographic primitives, such as secure identification, document verification
using a novel secure physical document format, as well as content hiding. To
make a contribution of practical value, we focused on making Ubic as simple,
easily deployable, and user-friendly as possible.
|
1403.1347 | Deep Supervised and Convolutional Generative Stochastic Network for
Protein Secondary Structure Prediction | q-bio.QM cs.CE cs.LG | Predicting protein secondary structure is a fundamental problem in protein
structure prediction. Here we present a new supervised generative stochastic
network (GSN) based method to predict local secondary structure with deep
hierarchical representations. GSN is a recently proposed deep learning
technique (Bengio & Thibodeau-Laufer, 2013) for globally training deep
generative models. We present a supervised extension of GSN, which learns a
Markov chain to sample from a conditional distribution, and apply it to protein
structure
prediction. To scale the model to full-sized, high-dimensional data, like
protein sequences with hundreds of amino acids, we introduce a convolutional
architecture, which allows efficient learning across multiple layers of
hierarchical representations. Our architecture uniquely focuses on predicting
structured low-level labels informed with both low and high-level
representations learned by the model. In our application this corresponds to
labeling the secondary structure state of each amino-acid residue. We trained
and tested the model on separate sets of non-homologous proteins sharing less
than 30% sequence identity. Our model achieves 66.4% Q8 accuracy on the CB513
dataset, better than the previously reported best performance of 64.9% (Wang et
al., 2011) for this challenging secondary structure prediction problem.
|
1403.1349 | Learning Soft Linear Constraints with Application to Citation Field
Extraction | cs.CL cs.DL cs.IR | Accurately segmenting a citation string into fields for authors, titles, etc.
is a challenging task because the output typically obeys various global
constraints. Previous work has shown that modeling soft constraints, where the
model is encouraged but not required to obey the constraints, can substantially
improve segmentation performance. On the other hand, for imposing hard
constraints, dual decomposition is a popular technique for efficient prediction
given existing algorithms for unconstrained inference. We extend the technique
to perform prediction subject to soft constraints. Moreover, with a technique
for performing inference given soft constraints, it is easy to automatically
generate large families of constraints and learn their costs with a simple
convex optimization problem during training. This allows us to obtain
substantial gains in accuracy on a new, challenging citation extraction
dataset.
|
1403.1353 | Collaborative Representation for Classification, Sparse or Non-sparse? | cs.CV cs.AI cs.LG | Sparse representation based classification (SRC) has proven to be a
simple, effective and robust solution to face recognition. As it has grown
popular, doubts about the necessity of enforcing sparsity have arisen, and
preliminary
experimental results showed that simply changing the $l_1$-norm based
regularization to the computationally much more efficient $l_2$-norm based
non-sparse version would lead to similar or even better performance. However,
this is not always the case. Given a new classification task, it is still
unclear which regularization strategy (i.e., making the coefficients sparse or
non-sparse) is the better choice without trying both for comparison. In this
paper, we present what is, to our knowledge, the first study addressing this
issue, based on a wide range of diverse classification experiments. We propose a scoring
function for pre-selecting the regularization strategy using only the dataset
size, the feature dimensionality and a discrimination score derived from a
given feature representation. Moreover, we show that when dictionary learning
is taken into account, non-sparse representation holds an even more significant
advantage over sparse representation. This work is expected to enrich our
understanding of sparse/non-sparse collaborative representation for
classification and motivate further research activities.
|
1403.1362 | Illumination, Expression and Occlusion Invariant Pose-Adaptive Face
Recognition System for Real-Time Applications | cs.CV | Face recognition in real-time scenarios is mainly affected by illumination,
expression and pose variations and also by occlusion. This paper presents the
framework for a pose-adaptive component-based face recognition system. The
proposed framework addresses all of the above-mentioned issues. The steps
involved in the presented framework are (i) facial landmark localisation, (ii)
facial component extraction, (iii) pre-processing of the facial image, (iv)
facial pose estimation, (v) feature extraction using Local Binary Pattern
Histograms of each component, followed by (vi) fusion of pose-adaptive
classification of components. By employing pose-adaptive classification, the
recognition process is carried out on only part of the database, based on the
estimated pose, instead of on the whole database. Pre-processing
techniques employed to overcome the problems due to illumination variation are
also discussed in this paper. Component-based techniques provide better
recognition rates when face images are occluded compared to the holistic
methods. Our method is simple, feasible, and provides better results than
holistic methods.
|
1403.1366 | An Accurate and Efficient Analysis of a MBSFN Network | cs.IT math.IT | A new accurate analysis is presented for an OFDM-based multicast-broadcast
single-frequency network (MBSFN). The topology of the network is modeled by a
constrained random spatial model involving a fixed number of base stations
placed over a finite area with a minimum separation. The analysis is driven by
a new closed-form expression for the conditional outage probability at each
location of the network, where the conditioning is with respect to the network
realization. The analysis accounts for the diversity combining of signals
transmitted by different base stations of a given MBSFN area, and also accounts
for the interference caused by the base stations of other MBSFN areas. The
analysis features a flexible channel model, accounting for path loss, Nakagami
fading, and correlated shadowing. The analysis is used to investigate the
influence of the minimum base-station separation and provides insight regarding
the optimal size of the MBSFN areas. In order to highlight the percentage of
the network that will fail to successfully receive the broadcast, the area
below an outage threshold (ABOT) is used, defined as the fraction of
the network that provides an outage probability (averaged over the fading) that
meets a threshold.
|
1403.1403 | Mining Concurrent Topical Activity in Microblog Streams | physics.soc-ph cs.SI | Streams of user-generated content in social media exhibit patterns of
collective attention across diverse topics, with temporal structures determined
both by exogenous factors and endogenous factors. Teasing apart different
topics and resolving their individual, concurrent, activity timelines is a key
challenge in extracting knowledge from microblog streams. Facing this challenge
requires the use of methods that expose latent signals by using term
correlations across posts and over time. Here we focus on content posted to
Twitter during the London 2012 Olympics, for which a detailed schedule of
events is independently available and can be used for reference. We mine the
temporal structure of topical activity by using two methods based on
non-negative matrix factorization. We show that for events in the Olympics
schedule that can be semantically matched to Twitter topics, the extracted
Twitter activity timeline closely matches the known timeline from the schedule.
Our results show that, given appropriate techniques to detect latent signals,
Twitter can be used as a social sensor to extract topical-temporal information
on real-world events at high temporal resolution.
|
1403.1412 | Rate Prediction and Selection in LTE systems using Modified Source
Encoding Techniques | stat.AP cs.IT cs.LG math.IT | In current wireless systems, the base station (eNodeB) tries to serve its
user-equipment (UE) at the highest possible rate that the UE can reliably
decode. The eNodeB obtains this rate information as a quantized feedback from
the UE at time n and uses this, for rate selection till the next feedback is
received at time n + {\delta}. The feedback received at n can become outdated
before n + {\delta}, because of a) Doppler fading, and b) Change in the set of
active interferers for a UE. Therefore, rate prediction becomes essential.
Since the rates belong to a discrete set, we propose a discrete sequence
prediction approach, wherein, frequency trees for the discrete sequences are
built using source encoding algorithms like Prediction by Partial Match (PPM).
Finding the optimal depth of the frequency tree used for prediction is cast as
a model order selection problem. The rate sequence complexity is analysed to
provide an upper bound on model order. Information-theoretic criteria are then
used to solve the model order problem. Finally, two prediction algorithms are
proposed using PPM with the optimal model order, and system-level simulations
demonstrate the improvement in packet loss and throughput due to these
algorithms.
|
1403.1430 | Sparse Principal Component Analysis via Rotation and Truncation | cs.LG cs.CV stat.ML | Sparse principal component analysis (sparse PCA) aims at finding a sparse
basis to improve the interpretability over the dense basis of PCA, meanwhile
the sparse basis should cover the data subspace as much as possible. In
contrast to most existing work, which deals with the problem by adding
sparsity penalties on various objectives of PCA, in this paper, we propose a
new method SPCArt, whose motivation is to find a rotation matrix and a sparse
basis such that the sparse basis approximates the basis of PCA after the
rotation. The algorithm of SPCArt consists of three alternating steps: rotate
PCA basis, truncate small entries, and update the rotation matrix. Its
performance bounds are also given. SPCArt is efficient, with each iteration
scaling linearly with the data dimension. The parameters of SPCArt are easy to
choose, owing to their explicit physical interpretations. In addition, we give a
unified
view to several existing sparse PCA methods and discuss the connection with
SPCArt. Some ideas in SPCArt are extended to GPower, a popular sparse PCA
algorithm, to overcome its drawback. Experimental results demonstrate that
SPCArt achieves the state-of-the-art performance. It also achieves a good
tradeoff among various criteria, including sparsity, explained variance,
orthogonality, balance of sparsity among loadings, and computational speed.
|
1403.1437 | Evolution of the digital society reveals balance between viral and mass
media influence | physics.soc-ph cond-mat.dis-nn cs.SI physics.comp-ph | Online social networks (OSNs) enable researchers to study the social universe
at a previously unattainable scale. Their worldwide impact and the need to
sustain their rapid growth underscore the importance of unraveling the laws
governing their evolution. We present a quantitative two-parameter model which
reproduces the entire topological evolution of a quasi-isolated OSN with
unprecedented precision from the birth of the network. This allows us to
precisely gauge the fundamental macroscopic and microscopic mechanisms
involved. Our findings suggest that the coupling between the real pre-existing
underlying social structure, a viral spreading mechanism, and mass media
influence govern the evolution of OSNs. The empirical validation of our model,
on a macroscopic scale, reveals that virality is four to five times stronger
than mass media influence and, on a microscopic scale, individuals have a
higher subscription probability if invited by weaker social contacts, in
agreement with the "strength of weak ties" paradigm.
|
1403.1451 | Real-Time Classification of Twitter Trends | cs.IR cs.CL cs.SI | Social media users give rise to social trends as they share about common
interests, which can be triggered by different reasons. In this work, we
explore the types of triggers that spark trends on Twitter, introducing a
typology with the following four types: 'news', 'ongoing events', 'memes', and
'commemoratives'. While previous research has analyzed trending topics over the
long term, we look at the earliest tweets that produce a trend, with the aim of
categorizing trends early on. This would make it possible to provide a filtered
subset of trends to end users. We analyze and experiment with a set of
straightforward
language-independent features based on the social spread of trends to
categorize them into the introduced typology. Our method provides an efficient
way to accurately categorize trending topics without need of external data,
enabling news organizations to discover breaking news in real-time, or to
quickly identify viral memes that might enrich marketing decisions, among
others. The analysis of social features also reveals patterns associated with
each type of trend, such as tweets about ongoing events being shorter as many
were likely sent from mobile devices, or memes having more retweets originating
from a few trend-setters.
|
1403.1455 | Non-singular assembly mode changing trajectories in the workspace for
the 3-RPS parallel robot | cs.RO | The existence of non-singular assembly-mode-changing trajectories for the
3-RPS parallel robot is a well-known feature. The only known method for defining
such a trajectory is to encircle a cusp point in the joint space. In this paper,
the aspects and the characteristic surfaces are computed for each operation
mode to define the uniqueness of the domains. Thus, we can easily see in the
workspace that at least three assembly modes can be reached for each operation
mode. To validate this property, a mathematical analysis of the determinant
of the Jacobian is carried out. The image of these trajectories in the joint space is
depicted with the curves associated with the cusp points.
|
1403.1458 | Phase Transitions in Phase Retrieval | cs.IT math.AG math.FA math.IT | Consider a scenario in which an unknown signal is transformed by a known
linear operator, and then the pointwise absolute value of the unknown output
function is reported. This scenario appears in several applications, and the
goal is to recover the unknown signal -- this is called phase retrieval. Phase
retrieval has been a popular subject of research in the last few years, both in
determining whether complete information is available with a given linear
operator, and in finding efficient and stable phase retrieval algorithms in the
cases where complete information is available. Interestingly, there are a few
ways to measure information completeness, and each way appears to be governed
by a phase transition of sorts. This chapter will survey the state of the art
with some of these phase transitions, and identify a few open problems for
further research.
|
1403.1460 | Decentralized Subspace Pursuit for Joint Sparsity Pattern Recovery | cs.IT math.IT | To solve the problem of joint sparsity pattern recovery in a decentralized
network, we propose an algorithm named decentralized and collaborative subspace
pursuit (DCSP). The basic idea of DCSP is to embed collaboration among nodes
and fusion strategy into each iteration of the standard subspace pursuit (SP)
algorithm. In DCSP, each node collaborates with several of its neighbors by
sharing high-dimensional coefficient estimates and communicates with other
remote nodes by exchanging low-dimensional support set estimates. Experimental
evaluations show that, compared with several existing algorithms for sparsity
pattern recovery, DCSP produces satisfactory results in terms of accuracy of
sparsity pattern recovery with much less communication cost.
|
1403.1476 | Cooperative Radar and Communications Signaling: The Estimation and
Information Theory Odd Couple | cs.IT math.IT | We investigate cooperative radar and communications signaling. While each
system typically considers the other system a source of interference, by
considering the radar and communications operations to be a single joint
system, the performance of both systems can, under certain conditions, be
improved by the existence of the other. As an initial demonstration, we focus
on the radar as relay scenario and present an approach denoted multiuser
detection radar (MUDR). A novel joint estimation and information theoretic
bound formulation is constructed for a receiver that observes communications
and radar return in the same frequency allocation. The joint performance bound
is presented in terms of the communication rate and the estimation rate of the
system.
|
1403.1486 | Lifespan and propagation of information in On-line Social Networks: a
Case Study | cs.SI cs.IR physics.soc-ph | Since the 1950s, information flows have been at the centre of scientific
research.
Up until internet penetration in the late 90s, these studies were based on
traditional offline social networks. Several observations from offline
information flow studies, such as the two-step flow of communication and the
importance of weak ties, were verified in several online studies, showing that
diffused information flows from one Online Social Network (OSN) to several
others. Within that flow, information is shared with and reproduced by the
users of each network. Furthermore, the original content is enhanced or
weakened according to its topic and the dynamics and exposure of each OSN. In
this view, each OSN is considered a layer of information flow that interacts
with the others. In this paper, we examine such flows in several social
networks, as well as their diffusion and lifespan across multiple OSNs, in
terms of user-generated content. Our results confirm the interconnection of
content and information across various OSNs.
|