| id | title | categories | abstract |
|---|---|---|---|
1312.4108 | A MapReduce based distributed SVM algorithm for binary classification | cs.LG cs.DC | Although the Support Vector Machine (SVM) algorithm generalizes well to
unseen examples after training and attains a small loss value, it is not
suitable for many real-life classification and regression problems: SVMs
cannot handle training datasets with hundreds of thousands of examples. In
previous studies of distributed machine learning algorithms, SVMs were trained
on costly, preconfigured computing environments. In this research, we present
a MapReduce based distributed parallel SVM training algorithm for binary
classification problems. This work shows how to distribute the optimization
problem over cloud computing systems with the MapReduce technique. In the
second step of this work, we use statistical learning theory to find the
predictive hypothesis that minimizes the empirical risk over the hypothesis
spaces created with the reduce function of MapReduce. The results of this
research are important for training SVM-based classifiers on big datasets. We
prove that, under iterative training of the split dataset with the MapReduce
technique, the accuracy of the classifier function converges to the accuracy
of the globally optimal classifier function within a finite number of
iterations. The algorithm's performance was measured on samples from the
letter recognition and pen-based recognition of handwritten digits datasets.
|
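The abstract above describes an iterative split-train-merge scheme without giving code. The following is a minimal plain-Python sketch of that idea; the Pegasos-style local trainer, the margin-based support-vector rule, and the merge-and-retrain loop are illustrative assumptions for the sketch, not the paper's exact algorithm:

```python
def train_linear_svm(data, lam=0.05, epochs=500):
    # Toy 2-D linear SVM trained with Pegasos-style sub-gradient descent;
    # a stand-in for whatever exact SVM solver runs inside each map task.
    w, b, t = [0.0, 0.0], 0.0, 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)
            if y * (w[0] * x[0] + w[1] * x[1] + b) < 1:  # hinge-loss violation
                w = [wi - eta * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += eta * y
            else:                                        # only regularize
                w = [wi - eta * lam * wi for wi in w]
    return w, b

def support_vectors(data, w, b):
    # Points on or near the margin act as local "support vectors"; the small
    # slack (1.05) keeps the set non-empty under approximate training.
    return [(x, y) for x, y in data if y * (w[0] * x[0] + w[1] * x[1] + b) <= 1.05]

def mapreduce_svm(splits, iterations=3):
    # Map: train an SVM on each split plus the globally merged support
    # vectors. Reduce: union the local support-vector sets. Iterating this
    # is the split-and-merge training scheme sketched in the abstract.
    merged = []
    for _ in range(iterations):
        local = []
        for split in splits:                             # map phase
            part = split + merged
            w, b = train_linear_svm(part)
            local.extend(support_vectors(part, w, b) or part)  # never empty
        # reduce phase: union with duplicates removed
        merged = [(list(x), y) for x, y in {(tuple(x), y) for x, y in local}]
    return train_linear_svm(merged)
```

In a real MapReduce job, the inner loop over splits would run as parallel map tasks, with the union of support vectors produced by the reduce task and broadcast to the next iteration.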
1312.4124 | A robust Iris recognition method on adverse conditions | cs.CV | As a stable biometric trait, the iris has recently attracted great attention
among researchers. However, research is still needed to provide appropriate
solutions that ensure the system's resistance against error factors. The
present study applies a mask to the image so that unexpected factors affecting
the localization of the iris can be removed, making pupil localization faster
and more robust. Then, to locate the exact position of the iris, a simple
boundary-displacement stage based on the Canny edge detector is applied, and
by searching for the left and right iris edge points, the outer radius of the
iris is detected. In the feature extraction process, distinctive iris texture
features are obtained using a two-dimensional discrete stationary wavelet
transform (DSWT2) with the symlet-4 wavelet. To reduce the computational cost,
the features obtained from the wavelet are examined and a feature selection
procedure using similarity criteria is applied. Finally, iris matching is
performed using a semi-correlation criterion. The localization accuracy of the
proposed method on CASIA-v1 and CASIA-v3 is 99.73%, 98.24% and 97.04%,
respectively. The feature extraction accuracy of the proposed method on the
CASIA-v3 iris image database is 97.82%, which confirms the efficiency of the
proposed method.
|
1312.4125 | Model Counting of Query Expressions: Limitations of Propositional
Methods | cs.DB cs.CC | Query evaluation in tuple-independent probabilistic databases is the problem
of computing the probability of an answer to a query given independent
probabilities of the individual tuples in a database instance. There are two
main approaches to this problem: (1) in `grounded inference' one first obtains
the lineage for the query and database instance as a Boolean formula, then
performs weighted model counting on the lineage (i.e., computes the probability
of the lineage given probabilities of its independent Boolean variables); (2)
in methods known as `lifted inference' or `extensional query evaluation', one
exploits the high-level structure of the query as a first-order formula.
Although it is widely believed that lifted inference is strictly more powerful
than grounded inference on the lineage alone, no formal separation has
previously been shown for query evaluation. In this paper we show such a formal
separation for the first time.
We exhibit a class of queries for which model counting can be done in
polynomial time using extensional query evaluation, whereas the algorithms used
in state-of-the-art exact model counters on their lineages provably require
exponential time. Our lower bounds on the running times of these exact model
counters follow from new exponential size lower bounds on the kinds of d-DNNF
representations of the lineages that these model counters (either explicitly or
implicitly) produce. Though some of these queries have been studied before, no
non-trivial lower bounds on the sizes of these representations for these
queries were previously known.
|
1312.4132 | An introduction to synchronous self-learning Pareto strategy | cs.NE | In recent decades, the optimization and control of complex systems with
several simultaneously conflicting objectives has attracted growing interest
from scientists, owing to the vast applications of such systems in many fields
of real-life engineering, where problems are generally multimodal, non-convex
and multi-criterion. Hence, many researchers have employed versatile
intelligent models such as Pareto-based techniques, game theory (cooperative
and non-cooperative games), neuro-evolutionary systems, fuzzy logic and
advanced neural networks to handle these types of problems. In this paper a
novel method called the Synchronous Self-Learning Pareto Strategy Algorithm
(SSLPSA) is presented, which simultaneously combines Evolutionary Computing
(EC), Swarm Intelligence (SI) techniques and an adaptive Classical
Self-Organizing Map (CSOM), incorporating a data-shuffling behavior.
Evolutionary Algorithms (EA), which attempt to simulate the phenomenon of
natural evolution, are powerful numerical optimization algorithms that reach
an approximate global optimum of a complex multi-variable function over a wide
search space, and swarm-based techniques can improve the intensity and
robustness of EA. CSOM is a neural network capable of learning that can
improve the quality of the obtained optimal Pareto front. To demonstrate the
efficient performance of the proposed algorithm, the authors used well-known
benchmark test functions. The obtained results indicate that the proposed
method is well suited to vector optimization.
|
1312.4149 | Autonomous Quantum Perceptron Neural Network | cs.NE | Recently, with the rapid development of technology, many applications have
come to require low-cost learning. Despite the computational power of
classical artificial neural networks, they are not capable of providing
low-cost learning. In contrast, quantum neural networks may represent a good
computational alternative to classical neural network approaches, based on the
computational power of the quantum bit (qubit) over the classical bit. In this
paper we present a new computational approach to the quantum perceptron neural
network that can achieve learning at low computational cost. The proposed
approach has only one neuron, which constructs self-adaptive activation
operators capable of accomplishing the learning process in a limited number of
iterations and, thereby, reducing the overall computational cost. The approach
constructs its own set of activation operators, which can be applied widely in
both quantum and classical applications to overcome the linearity limitation
of the classical perceptron. The computational power of the proposed approach
is illustrated by solving a variety of problems, for which promising and
comparable results are reported.
|
1312.4162 | New Method for Localization and Human Being Detection using UWB
Technology: Helpful Solution for Rescue Robots | cs.RO | Two challenges for rescue robots are to detect human beings and to have an
accurate positioning system. In indoor positioning, GPS receivers cannot be
used due to the reflections or attenuation caused by obstacles. To detect
human beings, sensors such as thermal cameras, ultrasonic sensors and
microphones can be embedded on the rescue robot. The drawback of these sensors
is their detection range: they have to be in close proximity to the victim in
order to detect them. UWB technology is therefore very helpful for ensuring
precise localization of the rescue robot inside the disaster site and for
detecting human beings.
  We propose a new method to both detect human beings and locate the rescue
robot at the same time. To achieve these goals we optimize the design of UWB
pulses based on B-splines. The spectral efficiency is optimized so that the
symbols are easier to detect and the impact of noise is reduced. Our
positioning system locates the rescue robot with an accuracy of about 2
centimeters. During testing we discovered that UWB signal characteristics
change abruptly after passing through a human body, and our system uses this
particular signature to detect human bodies.
|
1312.4176 | Distributed k-means algorithm | cs.LG cs.DC | In this paper we provide a fully distributed implementation of the k-means
clustering algorithm, intended for wireless sensor networks where each agent is
endowed with a possibly high-dimensional observation (e.g., position, humidity,
temperature, etc.). The proposed algorithm, by means of one-hop communication,
partitions the agents into measure-dependent groups that have small in-group
and large out-group "distances". Since the partitions may not have a relation
with the topology of the network--members of the same clusters may not be
spatially close--the algorithm is provided with a mechanism to compute the
clusters' centroids even when the clusters are disconnected in several
sub-clusters. The results of the proposed distributed algorithm coincide, in
terms of minimization of the objective function, with the centralized k-means
algorithm. Some numerical examples illustrate the capabilities of the proposed
solution.
|
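The distributed centroid computation described above can be sketched with one-hop averaging consensus. Below is a minimal plain-Python illustration; the ring topology, the fixed step size, and the ratio-of-averages trick for disconnected clusters are assumptions made for the sketch, not details taken from the paper:

```python
def consensus_round(vals, neighbors, step=0.3):
    # One gossip round: each agent moves toward the mean of its one-hop
    # neighbours. On a connected regular graph with step < 1 this iteration
    # preserves the global average and converges to it at every node.
    return [v + step * (sum(vals[j] for j in nb) / len(nb) - v)
            for v, nb in zip(vals, neighbors)]

def distributed_cluster_mean(obs, member, neighbors, rounds=300):
    # Each agent holds the pair (m_i * x_i, m_i), where m_i is 1 if the
    # agent currently belongs to the cluster. Averaging both components by
    # consensus and taking their ratio yields the cluster centroid at every
    # agent, even when the cluster is disconnected in the network topology.
    num = [m * x for x, m in zip(obs, member)]
    den = [float(m) for m in member]
    for _ in range(rounds):
        num = consensus_round(num, neighbors)
        den = consensus_round(den, neighbors)
    return [n / d for n, d in zip(num, den)]

# Ring of 6 agents; cluster members {0, 1, 4} are NOT contiguous on the ring.
ring = [[5, 1], [0, 2], [1, 3], [2, 4], [3, 5], [4, 0]]
obs = [1.0, 2.0, 9.0, 10.0, 3.0, 11.0]
member = [1, 1, 0, 0, 1, 0]
estimates = distributed_cluster_mean(obs, member, ring)  # -> ~2.0 at every agent
```

Even though the cluster is split into disconnected pieces on the communication graph, every agent converges to the same centroid, which is the behavior the abstract attributes to its centroid-computation mechanism.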
1312.4182 | Adaptive Protocols for Interactive Communication | cs.DS cs.IT math.IT | How much adversarial noise can protocols for interactive communication
tolerate? This question was examined by Braverman and Rao (IEEE Trans. Inf.
Theory, 2014) for the case of "robust" protocols, where each party sends
messages only in fixed and predetermined rounds. We consider a new class of
non-robust protocols for Interactive Communication, which we call adaptive
protocols. Such protocols adapt structurally to the noise induced by the
channel in the sense that both the order of speaking, and the length of the
protocol may vary depending on observed noise.
We define models that capture adaptive protocols and study upper and lower
bounds on the permissible noise rate in these models. When the length of the
protocol may adaptively change according to the noise, we demonstrate a
protocol that tolerates noise rates up to $1/3$. When the order of speaking may
adaptively change as well, we demonstrate a protocol that tolerates noise rates
up to $2/3$. Hence, adaptivity circumvents an impossibility result of $1/4$ on
the fraction of tolerable noise (Braverman and Rao, 2014).
|
1312.4185 | Comment: Causal entropic forces | cond-mat.stat-mech cs.SY | In this comment I argue that the causal entropy proposed in [1] is
state-independent and the entropic force is zero for state-independent noise in
a discrete time formulation and that the causal entropy description is
incomplete in the continuous time case.
|
1312.4190 | One-Shot-Learning Gesture Recognition using HOG-HOF Features | cs.CV | The purpose of this paper is to describe one-shot-learning gesture
recognition systems developed on the \textit{ChaLearn Gesture Dataset}. We use
RGB and depth images and combine appearance (Histograms of Oriented Gradients)
and motion descriptors (Histogram of Optical Flow) for parallel temporal
segmentation and recognition. The Quadratic-Chi distance family is used to
measure differences between histograms to capture cross-bin relationships. We
also propose a new algorithm for trimming videos, removing all the
unimportant frames. We present two methods that use combinations of
HOG-HOF descriptors together with variants of the Dynamic Time Warping technique.
Both methods outperform other published methods and help narrow down the gap
between human performance and algorithms on this task. The code has been made
publicly available in the MLOSS repository.
|
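Both recognition methods above rest on Dynamic Time Warping; as a reference point, here is the classic DTW recurrence in a minimal plain-Python sketch (scalar sequences with an absolute-difference ground cost stand in for the paper's per-frame HOG-HOF descriptors compared under the Quadratic-Chi distance):

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    # D[i][j] = cost of the best monotone alignment of a[:i] with b[:j];
    # each cell extends a match, an insertion, or a deletion.
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],      # skip a frame of a
                D[i][j - 1],      # skip a frame of b
                D[i - 1][j - 1],  # match both frames
            )
    return D[n][m]

# A repeated frame costs nothing to absorb, which is what makes DTW robust
# to gestures performed at different speeds:
assert dtw([1, 2, 3], [1, 2, 2, 3]) == 0.0
```

For real gesture sequences, `a` and `b` would be lists of per-frame descriptor histograms and `dist` a histogram distance.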
1312.4207 | On the Energy Self-Sustainability of IoT via Distributed Compressed
Sensing | cs.IT cs.NI math.IT | This paper advocates the use of the distributed compressed sensing (DCS)
paradigm to deploy energy harvesting (EH) Internet of Things (IoT) devices for
energy self-sustainability. We consider networks with signal/energy models that
capture the fact that both the collected signals and the harvested energy of
different devices can exhibit correlation. We provide theoretical analysis on
the performance of both the classical compressive sensing (CS) approach and the
proposed distributed CS (DCS)-based approach to data acquisition for EH IoT.
Moreover, we perform an in-depth comparison of the proposed DCS-based approach
against the distributed source coding (DSC) system. These performance
characterizations and comparisons embody the effect of various system phenomena
and parameters including signal correlation, EH correlation, network size, and
energy availability level. Our results reveal that the proposed approach
offers a significant increase in data-gathering capability with respect to the
CS-based approach, and a substantial reduction of the mean-squared error
distortion with respect to the DSC system.
|
1312.4209 | Feature Graph Architectures | cs.LG | In this article we propose feature graph architectures (FGA), which are deep
learning systems employing a structured initialisation and training method
based on a feature graph which facilitates improved generalisation performance
compared with a standard shallow architecture. The goal is to explore
alternative perspectives on the problem of deep network training. We evaluate
FGA performance for deep SVMs on some experimental datasets, and show how
generalisation and stability results may be derived for these models. We
describe the effect of permutations on the model accuracy, and give a criterion
for the optimal permutation in terms of feature correlations. The experimental
results show that the algorithm produces robust and significant test set
improvements over a standard shallow SVM training method for a range of
datasets. These gains are achieved with a moderate increase in time complexity.
|
1312.4224 | A paradox in community detection | physics.soc-ph cs.SI | Recent research has shown that virtually all algorithms aimed at the
identification of communities in networks are affected by the same main
limitation: the impossibility of detecting communities, even when these are
well-defined, if the average difference between internal and external node
degrees does not exceed a strictly positive value, known in the literature as
the detectability threshold. Here, we counterintuitively show that the
value of this threshold is inversely proportional to the intrinsic quality of
communities: the detection of well-defined modules is thus more difficult than
the identification of ill-defined communities.
|
1312.4231 | Dependence space of matroids and its application to attribute reduction | cs.AI | Attribute reduction is a basic issue in knowledge representation and data
mining. Rough sets provide a theoretical foundation for this issue. Matroids,
which generalize matrices, have been widely used in many fields, particularly
in greedy algorithm design, which plays an important role in attribute
reduction. Therefore, it is meaningful to combine matroids with rough sets to
solve such optimization problems. In this paper, we introduce an existing algebraic
structure called dependence space to study the reduction problem in terms of
matroids. First, a dependence space of matroids is constructed. Second, the
characterizations for the space such as consistent sets and reducts are studied
through matroids. Finally, we investigate matroids by means of the space
and present two expressions for their bases. In short, this paper provides new
approaches to study attribute reduction.
|
1312.4232 | Geometric lattice structure of covering and its application to attribute
reduction through matroids | cs.AI | The reduction of covering decision systems is an important problem in data
mining, and covering-based rough sets serve as an efficient technique to
process the problem. Geometric lattices have been widely used in many fields,
especially greedy algorithm design which plays an important role in the
reduction problems. Therefore, it is meaningful to combine coverings with
geometric lattices to solve such optimization problems. In this paper, we obtain
geometric lattices from coverings through matroids and then apply them to the
issue of attribute reduction. First, a geometric lattice structure of a
covering is constructed through transversal matroids. Then its atoms are
studied and used to describe the lattice. Second, considering that all the
closed sets of a finite matroid form a geometric lattice, we propose a
dependence space through matroids and study the attribute reduction issues of
the space, which realizes the application of geometric lattices to attribute
reduction. Furthermore, a special type of information system is taken as an
example to illustrate the application. In short, this work points out an
interesting viewpoint, namely, the use of geometric lattices to study the
attribute reduction issues of information systems.
|
1312.4234 | Connectedness of graphs and its application to connected matroids
through covering-based rough sets | cs.AI | Graph-theoretical ideas are widely used in computer science,
especially in data mining, where a data structure can be designed in the
form of a tree. Covering is a widely used form of data representation in data
mining and covering-based rough sets provide a systematic approach to this type
of representation. In this paper, we study the connectedness of graphs through
covering-based rough sets and apply it to connected matroids. First, we present
an approach to inducing a covering by a graph, and then study the connectedness
of the graph from the viewpoint of the covering approximation operators.
Second, we construct a graph from a matroid, and find that the matroid and
the graph have the same connectedness, which allows us to use covering-based
rough sets to study connected matroids. In summary, this paper provides a new approach to
studying graph theory and matroid theory.
|
1312.4252 | Three New Families of Zero-difference Balanced Functions with
Applications | cs.IT math.IT | Zero-difference balanced (ZDB) functions integrate a number of subjects in
combinatorics and algebra, and have many applications in coding theory,
cryptography and communications engineering. In this paper, three new families
of ZDB functions are presented. The first construction, inspired by the recent
work \cite{Cai13}, gives ZDB functions defined on the abelian groups $(\gf(q_1)
\times \cdots \times \gf(q_k), +)$ with new and flexible parameters. The other
two constructions are based on $2$-cyclotomic cosets and yield ZDB functions on
$\Z_n$ with new parameters. The parameters of optimal constant composition
codes, optimal and perfect difference systems of sets obtained from these new
families of ZDB functions are also summarized.
|
1312.4259 | Modification of Contract Net Protocol(CNP) : A Rule-Updation Approach | cs.MA | Coordination in a multi-agent system (MAS) is essential for performing
complex tasks and leading the MAS towards its goal. The member agents of a
multi-agent system should also be both autonomous and collaborative in order
to accomplish the complex task for which the system is designed. The
Contract-Net Protocol (CNP) is one of the coordination mechanisms used by
multi-agent systems that prefer coordination through interaction protocols.
To overcome the limitations of conventional CNP, this paper proposes a
modification called updated-CNP, an effort to address conventional CNP's
limitations of modifiability and communication overhead. In updated-CNP,
tasks that have been allocated to contractor agents by manager agents can be
modified if the task requirements change at any instant; this is not possible
in conventional CNP, which has to be restarted whenever a task is modified.
This in turn reduces the communication overhead of CNP, i.e., the time taken
by the agents using CNP to pass messages to each other. To illustrate the
updated CNP, we use a predator-prey case study.
|
1312.4280 | Uniqueness Conditions for A Class of l0-Minimization Problems | cs.IT math.IT math.OC | We consider a class of l0-minimization problems, which is to search for the
partial sparsest solution to an underdetermined linear system with additional
constraints. We introduce several concepts, including lp-induced norm (0 < p <
1), maximal scaled spark and scaled mutual coherence, to develop several new
uniqueness conditions for the partial sparsest solution to this class of
l0-minimization problems. A further improvement of some of these uniqueness
criteria has also been achieved through concepts such as the maximal
scaled (sub)coherence rank.
|
1312.4283 | On Load Shedding in Complex Event Processing | cs.DB | Complex Event Processing (CEP) is a stream processing model that focuses on
detecting event patterns in continuous event streams. While the CEP model has
gained popularity in the research communities and commercial technologies, the
problem of gracefully degrading performance under heavy load in the presence of
resource constraints, or load shedding, has been largely overlooked. CEP is
similar to "classical" stream data management, but addresses a substantially
different class of queries. This unfortunately renders the load shedding
algorithms developed for stream data processing inapplicable. In this paper we
study CEP load shedding under various resource constraints. We formalize broad
classes of CEP load-shedding scenarios as different optimization problems. We
demonstrate an array of complexity results that reveal the hardness of these
problems and construct shedding algorithms with performance guarantees. Our
results shed some light on the difficulty of developing load-shedding
algorithms that maximize utility.
|
1312.4287 | Strategic Argumentation is NP-Complete | cs.LO cs.AI cs.CC | In this paper we study the complexity of strategic argumentation for dialogue
games. A dialogue game is a 2-player game where the parties play arguments. We
show how to model dialogue games in a skeptical, non-monotonic formalism, and
we show that the problem of deciding what move (set of rules) to play at each
turn is an NP-complete problem.
|
1312.4314 | Learning Factored Representations in a Deep Mixture of Experts | cs.LG | Mixtures of Experts combine the outputs of several "expert" networks, each of
which specializes in a different part of the input space. This is achieved by
training a "gating" network that maps each input to a distribution over the
experts. Such models show promise for building larger networks that are still
cheap to compute at test time, and more parallelizable at training time. In
this work, we extend the Mixture of Experts to a stacked model, the Deep
Mixture of Experts, with multiple sets of gating and experts. This
exponentially increases the number of effective experts by associating each
input with a combination of experts at each layer, yet maintains a modest model
size. On a randomly translated version of the MNIST dataset, we find that the
Deep Mixture of Experts automatically learns to develop location-dependent
("where") experts at the first layer, and class-specific ("what") experts at
the second layer. In addition, we see that the different combinations are in
use when the model is applied to a dataset of speech monophones. These
demonstrate effective use of all expert combinations.
|
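A forward pass through the two-level gating scheme described above can be sketched in a few lines of plain Python. The linear experts, Gaussian initialisation, and layer sizes below are illustrative assumptions; the point is only how a per-layer softmax gate multiplies the number of effective expert combinations:

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def linear(x, W, b):
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

class MoELayer:
    # One mixture-of-experts layer: a softmax gate mixes the outputs of
    # several (here linear) experts. Stacking two such layers yields the
    # deep variant, whose effective expert count is the product of the
    # per-layer expert counts.
    def __init__(self, d_in, d_out, n_experts, rng):
        def mat(rows, cols):
            return [[rng.gauss(0.0, 0.1) for _ in range(cols)] for _ in range(rows)]
        self.experts = [(mat(d_out, d_in), [0.0] * d_out) for _ in range(n_experts)]
        self.gate = (mat(n_experts, d_in), [0.0] * n_experts)

    def forward(self, x):
        g = softmax(linear(x, *self.gate))             # mixture weights
        outs = [linear(x, W, b) for W, b in self.experts]
        mixed = [sum(gi * o[k] for gi, o in zip(g, outs))
                 for k in range(len(outs[0]))]
        return mixed, g

# Two stacked layers: 2 experts then 3 experts = 6 effective expert paths.
rng = random.Random(0)
layer1, layer2 = MoELayer(4, 3, 2, rng), MoELayer(3, 2, 3, rng)
hidden, g1 = layer1.forward([0.5, -1.0, 0.25, 2.0])
output, g2 = layer2.forward(hidden)
```

A trained model would learn the gate and expert weights jointly (and would typically place a nonlinearity between layers); the sketch only shows how each input is routed through a combination of experts at every layer.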
1312.4318 | Computing Scalable Multivariate Glocal Invariants of Large (Brain-)
Graphs | cs.SY q-bio.QM | Graphs are quickly emerging as a leading abstraction for the representation
of data. One important application domain originates from an emerging
discipline called "connectomics". Connectomics studies the brain as a graph;
vertices correspond to neurons (or collections thereof) and edges correspond to
structural or functional connections between them. To explore the variability
of connectomes---to address both basic science questions regarding the
structure of the brain, and medical health questions about psychiatry and
neurology---one can study the topological properties of these brain-graphs. We
define multivariate glocal graph invariants: these are features of the graph
that capture various local and global topological properties of the graphs. We
show that the collection of features can collectively be computed via a
combination of daisy-chaining, sparse matrix representation and computations,
and efficient approximations. Our custom open-source Python package serves as a
back-end to a Web-service that we have created to enable researchers to upload
graphs, and download the corresponding invariants in a number of different
formats. Moreover, we built this package to support distributed processing on
multicore machines. This is therefore an enabling technology for network
science, lowering the barrier of entry by providing tools to biologists and
analysts who otherwise lack these capabilities. As a demonstration, we run our
code on 120 brain-graphs, each with approximately 16M vertices and up to 90M
edges.
|
1312.4346 | Teleoperation System Using Past Image Records Considering Narrow
Communication Band | cs.RO cs.CV | Teleoperation is necessary when a robot is deployed on real missions, for
example surveillance, search and rescue. We have proposed a teleoperation
system using past image records (SPIR). SPIR virtually generates a bird's-eye
view image by overlaying a CG model of the robot, at its corresponding current
position, on a background image that was captured at an earlier time by the
camera mounted on the robot. The problem for SPIR is that the communication
bandwidth is often narrow in some teleoperation tasks; in this case, few
candidate background images are available and the reported position of the
robot is often delayed. In this study, we propose a zoom function to
compensate for the shortage of candidate background images, and additional
interpolation lines to compensate for the delay in the robot's position data.
To evaluate the proposed system, outdoor experiments were carried out on a
training course of a driving school.
|
1312.4353 | Abstraction in decision-makers with limited information processing
capabilities | cs.AI cs.IT math.IT stat.ML | A distinctive property of human and animal intelligence is the ability to
form abstractions by neglecting irrelevant information, which makes it
possible to separate structure from noise. From an information-theoretic point of view, abstractions
are desirable because they allow for very efficient information processing. In
artificial systems abstractions are often implemented through computationally
costly formations of groups or clusters. In this work we establish the relation
between the free-energy framework for decision making and rate-distortion
theory and demonstrate how the application of rate-distortion for
decision-making leads to the emergence of abstractions. We argue that
abstractions are induced due to a limit in information processing capacity.
|
1312.4354 | Decomposition of Optical Flow on the Sphere | math.OC cs.CV | We propose a number of variational regularisation methods for the estimation
and decomposition of motion fields on the $2$-sphere. While motion estimation
is based on the optical flow equation, the presented decomposition models are
motivated by recent trends in image analysis. In particular we treat $u+v$
decomposition as well as hierarchical decomposition. Helmholtz decomposition of
motion fields is obtained as a natural by-product of the chosen numerical
method based on vector spherical harmonics. All models are tested on time-lapse
microscopy data depicting fluorescently labelled endodermal cells of a
zebrafish embryo.
|
1312.4359 | Legendre transform structure and extremal properties of the relative
Fisher information | cond-mat.stat-mech cs.IT math-ph math.IT math.MP | Variational extremization of the relative Fisher information (RFI, hereafter)
is performed. Reciprocity relations, akin to those of thermodynamics are
derived, employing the extremal results of the RFI expressed in terms of
probability amplitudes. A time independent Schr\"{o}dinger-like equation
(Schr\"{o}dinger-like link) for the RFI is derived. The concomitant Legendre
transform structure (LTS, hereafter) is developed by utilizing a generalized
RFI-Euler theorem, which shows that the entire mathematical structure of
thermodynamics translates into the RFI framework, both for equilibrium and
non-equilibrium cases. The qualitatively distinct nature of the present results
\textit{vis-\'{a}-vis} those of prior studies utilizing the Shannon entropy
and/or the Fisher information measure (FIM, hereafter) is discussed. A
principled relationship between the RFI and the FIM frameworks is derived. The
utility of this relationship is demonstrated by an example wherein the energy
eigenvalues of the Schr\"{o}dinger-like link for the RFI are inferred solely
using the quantum mechanical virial theorem and the LTS of the RFI.
|
1312.4370 | Sampling-based Learning Control for Quantum Systems with Hamiltonian
Uncertainties | cs.SY | Robust control design for quantum systems has been recognized as a key task
in the development of practical quantum technology. In this paper, we present a
systematic numerical methodology of sampling-based learning control (SLC) for
control design of quantum systems with Hamiltonian uncertainties. The SLC
method includes two steps of "training" and "testing and evaluation". In the
training step, an augmented system is constructed by sampling uncertainties
according to possible distributions of uncertainty parameters. A gradient flow
based learning and optimization algorithm is adopted to find the control for
the augmented system. In the process of testing and evaluation, a number of
samples obtained through sampling the uncertainties are tested to evaluate the
control performance. Numerical results demonstrate the success of the SLC
approach. The SLC method has potential applications for robust control design
of quantum systems.
|
1312.4378 | Is Non-Unique Decoding Necessary? | cs.IT math.IT | In multi-terminal communication systems, signals carrying messages meant for
different destinations are often observed together at any given destination
receiver. Han and Kobayashi (1981) proposed a receiving strategy which performs
a joint unique decoding of messages of interest along with a subset of messages
which are not of interest. It is now well-known that this provides an
achievable region which is, in general, larger than if the receiver treats all
messages not of interest as noise. Nair and El Gamal (2009) and Chong, Motani,
Garg, and El Gamal (2008) independently proposed a generalization called
indirect or non-unique decoding where the receiver uses the codebook structure
of the messages to uniquely decode only its messages of interest. Non-unique
decoding has since been used in various scenarios.
The main result in this paper is to provide an interpretation and a
systematic proof technique for why non-unique decoding, in all known cases
where it has been employed, can be replaced by a particularly designed joint
unique decoding strategy, without any penalty from a rate region viewpoint.
|
1312.4384 | Rectifying Self Organizing Maps for Automatic Concept Learning from Web
Images | cs.CV cs.LG cs.NE | We attack the problem of learning concepts automatically from noisy web image
search results. Going beyond low level attributes, such as colour and texture,
we explore weakly-labelled datasets for the learning of higher level concepts,
such as scene categories. The idea is based on discovering common
characteristics shared among subsets of images by proposing a method that is able
to organise the data while eliminating irrelevant instances. We propose a novel
clustering and outlier detection method, namely Rectifying Self Organizing Maps
(RSOM). Given an image collection returned for a concept query, RSOM provides
clusters pruned of outliers. Each cluster is used to train a model
representing a different characteristic of the concept. The proposed method
outperforms the state-of-the-art studies on the task of learning low-level
concepts, and it is competitive in learning higher level concepts as well. It
is capable of working at large scale with no supervision by exploiting the
available sources.
|
1312.4400 | Network In Network | cs.NE cs.CV cs.LG | We propose a novel deep network structure called "Network In Network" (NIN)
to enhance model discriminability for local patches within the receptive field.
The conventional convolutional layer uses linear filters followed by a
nonlinear activation function to scan the input. Instead, we build micro neural
networks with more complex structures to abstract the data within the receptive
field. We instantiate the micro neural network with a multilayer perceptron,
which is a potent function approximator. The feature maps are obtained by
sliding the micro networks over the input in a similar manner as CNN; they are
then fed into the next layer. Deep NIN can be implemented by stacking multiple
of the above-described structures. With enhanced local modeling via the micro
network, we are able to utilize global average pooling over feature maps in the
classification layer, which is easier to interpret and less prone to
overfitting than traditional fully connected layers. We demonstrate
state-of-the-art classification performance with NIN on CIFAR-10 and
CIFAR-100, and reasonable performance on the SVHN and MNIST datasets.
|
1312.4405 | Learning Deep Representations By Distributed Random Samplings | cs.LG | In this paper, we propose an extremely simple deep model for unsupervised
nonlinear dimensionality reduction -- deep distributed random samplings, which
performs like a stack of unsupervised bootstrap aggregating. First, its network
structure is novel: each layer of the network is a group of mutually
independent $k$-centers clusterings. Second, its learning method is extremely
simple: the $k$ centers of each clustering are only $k$ randomly selected
examples from the training data; for small-scale data sets, the $k$ centers are
further randomly reconstructed by a simple cyclic-shift operation. Experimental
results on nonlinear dimensionality reduction show that the proposed method can
learn abstract representations on both large-scale and small-scale problems,
and meanwhile is much faster than deep neural networks on large-scale problems.
|
1312.4415 | Adaptive Penalty-Based Distributed Stochastic Convex Optimization | math.OC cs.DC cs.MA | In this work, we study the task of distributed optimization over a network of
learners in which each learner possesses a convex cost function, a set of
affine equality constraints, and a set of convex inequality constraints. We
propose a fully-distributed adaptive diffusion algorithm based on penalty
methods that allows the network to cooperatively optimize the global cost
function, which is defined as the sum of the individual costs over the network,
subject to all constraints. We show that when small constant step-sizes are
employed, the expected distance between the optimal solution vector and that
obtained at each node in the network can be made arbitrarily small. Two
distinguishing features of the proposed solution relative to other related
approaches are that the developed strategy does not require the use of
projections and is able to adapt to and track drifts in the location of the
minimizer due to changes in the constraints or in the aggregate cost itself.
The proposed strategy is also able to cope with changing network topology, is
robust to network disruptions, and does not require global information or rely
on central processors.
|
1312.4423 | Achievable Diversity-Rate Tradeoff of MIMO AF Relaying Systems with MMSE
Transceivers | cs.IT math.IT | This paper investigates the diversity order of the minimum mean squared error
(MMSE) based optimal transceivers in multiple-input multiple-output (MIMO)
amplify-and-forward (AF) relaying systems. While the diversity-multiplexing
tradeoff (DMT) analysis accurately predicts the behavior of the MMSE receiver
for positive multiplexing gains, it turns out that the performance is very
unpredictable via DMT in the case of fixed rates, because MMSE strategies
exhibit a complicated rate dependent behavior. In this paper, we establish the
diversity-rate tradeoff performance of MIMO AF relaying systems with the MMSE
transceivers in closed form for all fixed rates, thereby providing a complete
characterization of the diversity order together with the earlier work on DMT.
|
1312.4425 | An Ontology-based Model for Indexing and Retrieval | cs.IR | Starting from an unsolved problem of information retrieval, this paper
presents an ontology-based model for indexing and retrieval. The model combines
the methods and experiences of cognitive-to-interpret indexing languages with
the strengths and possibilities of formal knowledge representation. The core
component of the model uses inferences along the paths of typed relations
between the entities of a knowledge representation for enabling the
determination of hit quantities in the context of retrieval processes. The
entities are arranged in aspect-oriented facets to ensure a consistent
hierarchical structure. The possible consequences for indexing and retrieval
are discussed.
|
1312.4426 | Optimization for Compressed Sensing: the Simplex Method and Kronecker
Sparsification | stat.ML cs.LG | In this paper we present two new approaches to efficiently solve large-scale
compressed sensing problems. These two ideas are independent of each other and
can therefore be used either separately or together. We consider all
possibilities.
For the first approach, we note that the zero vector can be taken as the
initial basic (infeasible) solution for the linear programming problem and
therefore, if the true signal is very sparse, some variants of the simplex
method can be expected to take only a small number of pivots to arrive at a
solution. We implemented one such variant and demonstrate a dramatic
improvement in computation time on very sparse signals.
The second approach requires a redesigned sensing mechanism in which the
vector signal is stacked into a matrix. This allows us to exploit the Kronecker
compressed sensing (KCS) mechanism. We show that the Kronecker sensing requires
stronger conditions for perfect recovery compared to the original vector
problem. However, the Kronecker sensing, modeled correctly, is a much sparser
linear optimization problem. Hence, algorithms that benefit from sparse problem
representation, such as interior-point methods, can solve the Kronecker sensing
problems much faster than the corresponding vector problem. In our numerical
studies, we demonstrate a ten-fold improvement in the computation time.
|
1312.4461 | Low-Rank Approximations for Conditional Feedforward Computation in Deep
Neural Networks | cs.LG | Scalability properties of deep neural networks raise key research questions,
particularly as the problems considered become larger and more challenging.
This paper expands on the idea of conditional computation introduced by Bengio
et al., where the nodes of a deep network are augmented by a set of gating
units that determine when a node should be calculated. By factorizing the
weight matrix into a low-rank approximation, an estimation of the sign of the
pre-nonlinearity activation can be efficiently obtained. For networks using
rectified-linear hidden units, this implies that the computation of a hidden
unit with an estimated negative pre-nonlinearity can be omitted altogether, as
its value will become zero when nonlinearity is applied. For sparse neural
networks, this can result in considerable speed gains. Experimental results
using the MNIST and SVHN data sets with a fully-connected deep neural network
demonstrate the performance robustness of the proposed scheme with respect to
the error introduced by the conditional computation process.
|
1312.4468 | Extremality for Gallager's Reliability Function $E_0$ | cs.IT math.IT | We describe certain extremalities for Gallager's $E_0$ function evaluated
under the uniform input distribution for binary input discrete memoryless
channels. The results characterize the extremality of the $E_0(\rho)$ curves of
the binary erasure channel and the binary symmetric channel among all the
$E_0(\rho)$ curves that can be generated by the class of binary discrete
memoryless channels whose $E_0(\rho)$ curves pass through a given point
$(\rho_0, e_0)$, for some $\rho_0 > -1$.
|
1312.4476 | Social Media Monitoring of the Campaigns for the 2013 German Bundestag
Elections on Facebook and Twitter | cs.SI cs.CY | As more and more people use social media to communicate their view and
perception of elections, researchers have increasingly been collecting and
analyzing data from social media platforms. Our research focuses on social
media communication related to the 2013 election of the German parliament
(Bundestagswahl 2013). We constructed several social media
datasets using data from Facebook and Twitter. First, we identified the most
relevant candidates (n=2,346) and checked whether they maintained social media
accounts. The Facebook data was collected in November 2013 for the period of
January 2009 to October 2013. On Facebook we identified 1,408 Facebook walls
containing approximately 469,000 posts. Twitter data was collected between June
and December 2013 finishing with the constitution of the government. On Twitter
we identified 1,009 candidates and 76 other agents, for example, journalists.
We estimated the number of relevant tweets to exceed eight million for the
period from July 27 to September 27 alone. In this document we summarize past
research in the literature, discuss possibilities for research with our data
set, explain the data collection procedures, and provide a description of the
data and a discussion of issues for archiving and dissemination of social media
data.
|
1312.4477 | GCG: Mining Maximal Complete Graph Patterns from Large Spatial Data | cs.DB | Recent research on pattern discovery has progressed from mining frequent
patterns and sequences to mining structured patterns, such as trees and graphs.
Graphs, as a general data structure, can model complex relations among data,
with wide applications in web exploration and social networks. However,
mining large graph patterns is challenging due to the existence of a large
number of subgraphs. In this paper, we aim to mine only frequent complete graph
patterns. A graph g in a database is complete if every pair of distinct
vertices is connected by a unique edge. Grid Complete Graph (GCG) is a mining
algorithm developed to explore interesting pruning techniques to extract
maximal complete graphs from large spatial dataset existing in Sloan Digital
Sky Survey (SDSS) data. Using a divide and conquer strategy, GCG shows high
efficiency, especially in the presence of a large number of patterns. In this
paper, we describe GCG that can mine not only simple co-location spatial
patterns but also complex ones. To the best of our knowledge, this is the first
algorithm used to exploit the extraction of maximal complete graphs in the
process of mining complex co-location patterns in large spatial dataset.
|
1312.4479 | Parametric Modelling of Multivariate Count Data Using Probabilistic
Graphical Models | stat.ML cs.LG stat.ME | Multivariate count data are defined as the number of items of different
categories issued from sampling within a population whose individuals are
grouped into categories. The analysis of multivariate count data is a recurrent
and crucial issue in numerous modelling problems, particularly in the fields of
biology and ecology (where the data can represent, for example, children counts
associated with multitype branching processes), sociology and econometrics. We
focus on I) Identifying categories that appear simultaneously, or on the
contrary that are mutually exclusive. This is achieved by identifying
conditional independence relationships between the variables; II) Building
parsimonious parametric models consistent with these relationships; III)
Characterising and testing the effects of covariates on the joint distribution
of the counts. To achieve these goals, we propose an approach based on
graphical probabilistic models, and more specifically partially directed
acyclic graphs.
|
1312.4490 | eXamine: a Cytoscape app for exploring annotated modules in networks | q-bio.MN cs.CE cs.DS | Background. Biological networks have growing importance for the
interpretation of high-throughput "omics" data. Statistical and combinatorial
methods allow one to obtain mechanistic insights through the extraction of
smaller subnetwork modules. Further enrichment analyses provide set-based annotations
of these modules.
Results. We present eXamine, a set-oriented visual analysis approach for
annotated modules that displays set membership as contours on top of a
node-link layout. Our approach extends upon Self Organizing Maps to
simultaneously lay out nodes, links, and set contours.
Conclusions. We implemented eXamine as a freely available Cytoscape app.
Using eXamine we study a module that is activated by the virally-encoded
G-protein coupled receptor US28 and formulate a novel hypothesis about its
functioning.
|
1312.4511 | Who Watches (and Shares) What on YouTube? And When? Using Twitter to
Understand YouTube Viewership | cs.SI physics.soc-ph | We combine user-centric Twitter data with video-centric YouTube data to
analyze who watches and shares what on YouTube. The combination of the two
data sets, with 87k Twitter users, 5.6 million YouTube videos and 15 million
video sharing events, allows a rich analysis going beyond what could be
obtained with either of the two data sets individually. For Twitter, we
generate user features relating to
activity, interests and demographics. For YouTube, we obtain video features for
topic, popularity and polarization. These two feature sets are combined through
sharing events for YouTube URLs on Twitter. This combination is done in a
user-, a video- and a sharing-event-centric manner. For the user-centric
analysis, we show how Twitter user features correlate both with YouTube
features and with sharing-related features. As two examples, we show urban
users are quicker to share than rural users and for some notions of "influence"
influential users on Twitter share videos with a higher number of views. For
the video-centric analysis, we find a superlinear relation between initial
Twitter shares and the final number of views, showing the correlated behavior
of Twitter. On user impact, we find the total number of followers of users
that shared the video in the first week does not affect its final popularity.
However, aggregated user retweet rates serve as a better predictor for YouTube
video popularity. For the sharing-centric analysis, we reveal the existence of
correlated behavior concerning the time between video creation and sharing
within certain timescales, showing the time onset for a coherent response, and
the time limit after which collective responses are extremely unlikely. We show
that response times depend on video category, revealing that Twitter sharing of
a video is highly dependent on its content. To the best of our knowledge this
is the first large-scale study combining YouTube and Twitter data.
|
1312.4521 | Weyl-Heisenberg Spaces for Robust Orthogonal Frequency Division
Multiplexing | cs.IT math.IT | Design of Weyl-Heisenberg sets of waveforms for robust orthogonal frequency
division multiplexing (OFDM) has been the subject of a considerable volume of
work. In this paper, a complete parameterization of orthogonal Weyl-Heisenberg
sets and their corresponding biorthogonal sets is given. Several examples of
Weyl-Heisenberg sets designed using this parameterization are presented,
which in simulations show a high potential for enabling OFDM robust to
frequency offset, timing mismatch, and narrow-band interference.
|
1312.4527 | Probable convexity and its application to Correlated Topic Models | cs.LG stat.ML | Non-convex optimization problems often arise from probabilistic modeling,
such as estimation of posterior distributions. Non-convexity makes the problems
intractable, and poses various obstacles to the design of efficient algorithms.
In this work, we attack non-convexity by first introducing the concept of
\emph{probable convexity} for analyzing convexity of real functions in
practice. We then use the new concept to analyze an inference problem in the
\emph{Correlated Topic Model} (CTM) and related nonconjugate models. Contrary
to the existing belief of intractability, we show that this inference problem
is concave under certain conditions. One consequence of our analyses is a novel
algorithm for learning CTM which is significantly more scalable than existing
methods while producing results of comparable or better quality. Finally, we
highlight that stochastic gradient algorithms might be a practical choice for
efficiently resolving non-convex problems. This finding might be beneficial in
many contexts beyond probabilistic modeling.
|
1312.4551 | Comparative Analysis of Viterbi Training and Maximum Likelihood
Estimation for HMMs | stat.ML cs.LG | We present an asymptotic analysis of Viterbi Training (VT) and contrast it
with a more conventional Maximum Likelihood (ML) approach to parameter
estimation in Hidden Markov Models. While the ML estimator works by (locally)
maximizing the likelihood of the observed data, VT seeks to maximize the
probability of the most likely hidden state sequence. We develop an analytical
framework based on a generating function formalism and illustrate it on an
exactly solvable model of HMM with one unambiguous symbol. For this particular
model the ML objective function is continuously degenerate. The VT objective,
in contrast, is shown to have only finite degeneracy. Furthermore, VT converges
faster and results in sparser (simpler) models, thus realizing an automatic
Occam's razor for HMM learning. In more general scenarios VT can be worse
than ML but is still capable of correctly recovering most of the
parameters.
|
1312.4552 | Intelligent Bug Algorithm (IBA): A Novel Strategy to Navigate Mobile
Robots Autonomously | cs.RO | This research proposes an intelligent obstacle avoidance algorithm to
navigate an autonomous mobile robot. The presented Intelligent Bug Algorithm
(IBA) outperforms existing Bug algorithms and reaches the goal in less time.
The improved algorithm offers a goal-oriented strategy
by following a smooth and short trajectory. This has been achieved by
continuously considering the goal position during obstacle avoidance. The
proposed algorithm is computationally inexpensive and easy to tune. The paper
also presents the performance comparison of IBA and reported Bug algorithms.
Simulation results of robot navigation in an environment with obstacles
demonstrate the performance of the improved algorithm.
|
1312.4564 | Adaptive Stochastic Alternating Direction Method of Multipliers | stat.ML cs.LG | The Alternating Direction Method of Multipliers (ADMM) has been studied for
years. The traditional ADMM algorithm needs to compute, at each iteration, an
(empirical) expected loss function on all training examples, resulting in a
computational complexity proportional to the number of training examples. To
reduce the time complexity, stochastic ADMM algorithms were proposed to replace
the expected function with a random loss function associated with one uniformly
drawn example plus a Bregman divergence. The Bregman divergence, however, is
derived from a simple second order proximal function, the half squared norm,
which could be a suboptimal choice.
In this paper, we present a new family of stochastic ADMM algorithms with
optimal second order proximal functions, which produce a new family of adaptive
subgradient methods. We theoretically prove that their regret bounds are as
good as the bounds which could be achieved by the best proximal function that
can be chosen in hindsight. Encouraging empirical results on a variety of
real-world datasets confirm the effectiveness and efficiency of the proposed
algorithms.
|
1312.4568 | Functions with Diffusive Properties | cs.IT cs.CR math.IT | While exploring desirable properties of hash functions in cryptography, the
author was led to investigate three notions of functions with scattering or
"diffusive" properties, where the functions map between binary strings of fixed
finite length. These notions of diffusion ask for some property to be fulfilled
by the Hamming distances between outputs corresponding to pairs of inputs that
lie on the endpoints of edges of an $n$-dimensional hypercube. Given the
dimension of the input space, we explicitly construct such functions for every
dimension of the output space that allows for the functions to exist.
|
1312.4569 | Dropout improves Recurrent Neural Networks for Handwriting Recognition | cs.CV cs.LG cs.NE | Recurrent neural networks (RNNs) with Long Short-Term Memory cells currently
hold the best known results in unconstrained handwriting recognition. We show
that their performance can be greatly improved using dropout - a recently
proposed regularization method for deep architectures. While previous works
showed that dropout gave superior performance in the context of convolutional
networks, it had never been applied to RNNs. In our approach, dropout is
carefully used in the network so that it does not affect the recurrent
connections, hence the power of RNNs in modeling sequences is preserved.
Extensive experiments on a broad range of handwritten databases confirm the
effectiveness of dropout on deep architectures even when the network mainly
consists of recurrent and shared connections.
|
1312.4575 | Branching MERA codes: a natural extension of polar codes | quant-ph cs.IT math.IT | We introduce a new class of circuits for constructing efficiently decodable
error-correction codes, based on a recently discovered contractible tensor
network. We perform an in-depth study of a particular example that can be
thought of as an extension of Arikan's polar code. Notably, our numerical
simulations show that this code polarizes the logical channels more strongly
while retaining the log-linear decoding complexity using the successive
cancellation decoder. These codes also display improved error-correcting
capability with only a minor impact on decoding complexity. Efficient decoding
is realized using powerful graphical calculus tools developed in the field of
quantum many-body physics. In a companion paper, we generalize our construction
to the quantum setting and describe more in-depth the relation between
classical and quantum error correction and the graphical calculus of tensor
networks.
|
1312.4578 | Tensor Networks and Quantum Error Correction | quant-ph cs.IT math.IT | We establish several relations between quantum error correction (QEC) and
tensor network (TN) methods of quantum many-body physics. We exhibit
correspondences between well-known families of QEC codes and TNs, and
demonstrate a formal equivalence between decoding a QEC code and contracting a
TN. We build on this equivalence to propose a new family of quantum codes and
decoding algorithms that generalize and improve upon quantum polar codes and
successive cancellation decoding in a natural way.
|
1312.4587 | FFTPL: An Analytic Placement Algorithm Using Fast Fourier Transform for
Density Equalization | cs.CE cs.AR cs.NA | We propose a flat nonlinear placement algorithm FFTPL using fast Fourier
transform for density equalization. The placement instance is modeled as an
electrostatic system with the analogy of density cost to the potential energy.
A well-defined Poisson's equation is proposed for gradient and cost
computation. Our placer outperforms state-of-the-art placers with better
solution quality and efficiency.
|
1312.4598 | Tethered Flying Robot for Information Gathering System | cs.SY | Information from the sky is important for rescue activities in large-scale
disasters or dangerous areas. Observation systems using balloons or airplanes
have been studied as means of gathering information from the sky. A balloon
observation system needs helium gas and a relatively long time to be ready. An
airplane observation system can be prepared in a short time and has good
mobility. However, long flights are difficult because of the limited amount of
fuel.
This paper proposes a kite-based observation system that complements the
activities of balloon and airplane observation systems through its short
preparation time and long flight duration. This research aims at the
construction of an autonomous flight information gathering system using a
tethered flying unit that consists of a kite and a ground tether line control
unit with a winding machine. This paper reports the development of the kite
type tethered flying robot and an autonomous flight control system inspired by
how a human flies a kite.
|
1312.4599 | Evolution and Computational Learning Theory: A survey on Valiant's paper | cs.LG | Darwin's theory of evolution is considered to be one of the greatest
scientific gems in modern science. It not only gives us a description of how
living things evolve, but also shows how a population evolves through time
and why only the fittest individuals carry the generation forward. The
paper gives a high-level analysis of the work of Valiant [1]. Though we know
the mechanisms of evolution, it seems that there does not exist any strong
quantitative and mathematical theory of the evolution of certain mechanisms.
What exactly defines the fitness of an individual, why do only certain
individuals in a population tend to mutate, and how is computation done in
finite time when we have exponentially many examples: there seem to be many
questions which need to be answered. [1] treats Darwinian
theory as a form of computational learning theory, which calculates the net
fitness of the hypotheses and thus distinguishes functions and their classes
which could be evolvable using a polynomial amount of resources. Evolution is
considered as a function of the environment and the previous evolutionary
stages that chooses the best hypothesis using learning techniques that make
mutation possible, and hence gives a quantitative idea of why only the
fittest individuals tend to survive and have the power to mutate.
|
1312.4601 | Strategic Control of Proximity Relationships in Heterogeneous Search and
Rescue Teams | cs.RO | In the context of search and rescue, we consider the problem of mission
planning for heterogeneous teams that can include human, robotic, and animal
agents. The problem is tackled using a mixed integer mathematical programming
formulation that jointly determines the path and the activity scheduling of
each agent in the team. Based on the mathematical formulation, we propose the
use of soft constraints and penalties that allow the flexible strategic control
of spatio-temporal relations among the search trajectories of the agents. In
this way, we can enable the mission planner to obtain solutions that maximize
the area coverage and, at the same time, control the spatial proximity among
the agents (e.g., to minimize mutual task interference, or to promote local
cooperation and data sharing). Through simulation experiments, we show the
application of the strategic framework considering a number of scenarios of
interest for real-world search and rescue missions.
|
1312.4617 | A Survey of Data Mining Techniques for Social Media Analysis | cs.SI cs.CL | Social networks have gained remarkable attention in the last decade. Accessing
social network sites such as Twitter, Facebook, LinkedIn and Google+ through
the internet and web 2.0 technologies has become more affordable. People are
becoming more interested in and reliant on social networks for information,
news and the opinions of other users on diverse subject matters. The heavy
reliance on social network sites causes them to generate massive data
characterised by three computational issues, namely size, noise and dynamism.
These issues often
make social network data very complex to analyse manually, resulting in the
pertinent use of computational means of analysing them. Data mining provides
a wide range of techniques for detecting useful knowledge, such as trends,
patterns and rules, from massive datasets [44]. Data mining techniques are used for
information retrieval, statistical modelling and machine learning. These
techniques employ data pre-processing, data analysis, and data interpretation
processes in the course of data analysis. This survey discusses different data
mining techniques used in mining diverse aspects of social networks over the
decades, going from historical techniques to up-to-date models, including our
novel technique named TRCM. All the techniques covered in this survey are
listed in Table 1, including the tools employed as well as the names of their
authors.
|
1312.4626 | Compact Random Feature Maps | stat.ML cs.LG | Kernel approximation using randomized feature maps has recently gained a lot
of interest. In this work, we identify that previous approaches for polynomial
kernel approximation create maps that are rank deficient, and therefore do not
utilize the capacity of the projected feature space effectively. To address
this challenge, we propose compact random feature maps (CRAFTMaps) to
approximate polynomial kernels more concisely and accurately. We prove the
error bounds of CRAFTMaps demonstrating their superior kernel reconstruction
performance compared to the previous approximation schemes. We show how
structured random matrices can be used to efficiently generate CRAFTMaps, and
present a single-pass algorithm using CRAFTMaps to learn non-linear multi-class
classifiers. We present experiments on multiple standard data-sets with
performance competitive with state-of-the-art results.
|
1312.4634 | Implementation of WSN which can simultaneously monitor Temperature
conditions and control robot for positional accuracy | cs.RO cs.NI | Sensor networks and robots are both quickly evolving fields, and the union of
the two seems inherently symbiotic. Collecting data from stationary sensors
can be a time-consuming task and can thus be automated by adding wireless
communication capabilities to the sensors. This proposed project takes
advantage of wireless sensor networks in a remote handling environment: the
network can send signals over long distances by using a mesh topology,
transfers the data wirelessly and also consumes little power.
In this paper a testbed is created for a wireless sensor network using
custom-built sensor nodes for temperature monitoring in labs and for
controlling a robot moving in another lab. The two temperature sensor nodes
used here consist of an Arduino microcontroller and an XBee wireless
communication module based on the IEEE 802.15.4 standard, while the robot has
an onboard FPGA board as a processing unit with an XBee module connected via
an RS-232 cable for serial communication between the ZigBee device and the
FPGA. A simple custom packet is designed so that uniformity is maintained
while collecting data from the temperature nodes and the moving robot and
passing it to a remote terminal. The ZigBee coordinator is connected to the
remote terminal (PC) through its USB port, where a graphical user interface
(GUI) can be run to dynamically monitor temperature readings and the robot's
position and save those readings in a database.
|
1312.4637 | Constraint Reduction using Marginal Polytope Diagrams for MAP LP
Relaxations | cs.CV cs.AI | LP relaxation-based message passing algorithms provide an effective tool for
MAP inference over Probabilistic Graphical Models. However, different LP
relaxations often have different objective functions and variables of differing
dimensions, which presents a barrier to effective comparison and analysis. In
addition, the computational complexity of LP relaxation-based methods grows
quickly with the number of constraints. Reducing the number of constraints
without sacrificing the quality of the solutions is thus desirable.
We propose a unified formulation under which existing MAP LP relaxations may
be compared and analysed. Furthermore, we propose a new tool called Marginal
Polytope Diagrams. Some properties of Marginal Polytope Diagrams are exploited
such as node redundancy and edge equivalence. We show that using Marginal
Polytope Diagrams allows the number of constraints to be reduced without
loosening the LP relaxations. Then, using Marginal Polytope Diagrams and
constraint reduction, we develop three novel message passing algorithms, and
demonstrate that two of these show a significant improvement in speed over
state-of-the-art algorithms while delivering a competitive, and sometimes higher,
quality of solution.
|
1312.4638 | A Class of Five-weight Cyclic Codes and Their Weight Distribution | cs.IT math.IT | In this paper, a family of five-weight reducible cyclic codes is presented.
Furthermore, the weight distribution of these cyclic codes is determined, which
follows from the determination of value distributions of certain exponential
sums.
|
1312.4640 | A Review of Temporal Aspects of Hand Gesture Analysis Applied to
Discourse Analysis and Natural Conversation | cs.HC cs.AI | Lately, there has been an increasing interest in hand gesture analysis
systems. Recent works have employed pattern recognition techniques and have
focused on the development of systems with more natural user interfaces. These
systems may use gestures to control interfaces or recognize sign language
gestures, which can provide systems with multimodal interaction; or consist of
multimodal tools to help psycholinguists to understand new aspects of discourse
analysis and to automate laborious tasks. Gestures are characterized by several
aspects, mainly by movements and sequence of postures. Since data referring to
movements or sequences carry temporal information, this paper presents a
literature review about temporal aspects of hand gesture analysis, focusing on
applications related to natural conversation and psycholinguistic analysis,
using the Systematic Literature Review methodology. In our results, we organized
the works according to type of analysis, methods (highlighting the use of
Machine Learning techniques), and applications.
|
1312.4659 | DeepPose: Human Pose Estimation via Deep Neural Networks | cs.CV | We propose a method for human pose estimation based on Deep Neural Networks
(DNNs). The pose estimation is formulated as a DNN-based regression problem
towards body joints. We present a cascade of such DNN regressors which results
in high precision pose estimates. The approach has the advantage of reasoning
about pose in a holistic fashion and has a simple yet powerful formulation
which capitalizes on recent advances in Deep Learning. We present a detailed
empirical analysis with state-of-the-art or better performance on four academic
benchmarks of diverse real-world images.
|
1312.4676 | Une m\'ethode pour caract\'eriser les communaut\'es des r\'eseaux
dynamiques \`a attributs | cs.SI | Many complex systems are modeled through complex networks whose analysis
reveals typical topological properties. Amongst those, the community structure
is one of the most studied. Many methods are proposed to detect communities,
not only in plain, but also in attributed, directed or even dynamic networks. A
community structure takes the form of a partition of the node set, which must
then be characterized relatively to the properties of the studied system. We
propose a method to support such a characterization task. We define a
sequence-based representation of networks, combining temporal information,
topological measures, and nodal attributes. We then characterize communities
using the most representative emerging sequential patterns of their nodes. This
also allows the detection of unusual behavior in a community. We describe an
empirical study of a network of scientific collaborations.
|
1312.4678 | Simple, compact and robust approximate string dictionary | cs.DS cs.DB | This paper is concerned with practical implementations of approximate string
dictionaries that allow edit errors. In this problem, we have as input a
dictionary $D$ of $d$ strings of total length $n$ over an alphabet of size
$\sigma$. Given a bound $k$ and a pattern $x$ of length $m$, a query has to
return all the strings of the dictionary which are at edit distance at most $k$
from $x$, where the edit distance between two strings $x$ and $y$ is defined as
the minimum-cost sequence of edit operations that transform $x$ into $y$. The
cost of a sequence of operations is defined as the sum of the costs of the
operations involved in the sequence. In this paper, we assume that each of
these operations has unit cost and consider only three operations: deletion of
one character, insertion of one character and substitution of a character by
another. We present a practical implementation of the data structure we
recently proposed, which works only for one error, and extend the scheme to
$2\leq k<m$. Our implementation has many desirable properties: it has a very
fast and space-efficient building algorithm. The dictionary data structure is
compact and has fast and robust query time. Finally our data structure is
simple to implement as it only uses basic techniques from the literature,
mainly hashing (linear probing and hash signatures) and succinct data
structures (bitvectors supporting rank queries).
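As a minimal illustration of the unit-cost edit distance used in this abstract (deletion, insertion and substitution, each at cost one), the following Python sketch computes the distance being queried; the function name is ours, and this is not the paper's dictionary data structure:

```python
def edit_distance(x: str, y: str) -> int:
    """Unit-cost edit distance with deletion, insertion and substitution."""
    m, n = len(x), len(y)
    prev = list(range(n + 1))  # distances from the empty prefix of x
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete x[i-1]
                          curr[j - 1] + 1,     # insert y[j-1]
                          prev[j - 1] + cost)  # substitute (or match)
        prev = curr
    return prev[n]
```

A dictionary query with bound $k$ then returns every string $y$ in $D$ with `edit_distance(x, y) <= k`.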
|
1312.4692 | An error event sensitive trade-off between rate and coding gain in MIMO
MAC | cs.IT math.IT | This work considers space-time block coding for the Rayleigh fading
multiple-input multiple-output (MIMO) multiple access channel (MAC). If we
suppose that the receiver is performing joint maximum-likelihood (ML) decoding,
optimizing a MIMO MAC code against a fixed error event leads to a situation
where the joint codewords of the users in error can be seen as a single user
MIMO code. In such a case the pair-wise error probability (PEP) based
determinant criterion of Tarokh et al. can be used to upper-bound the error
probability.
It was already proven by Lahtonen et al. that irrespective of the used codes
the determinants of the differences of codewords of the overall codematrices
will decay as a function of the rates of the users.
This work will study this decay phenomenon further and derive upper bounds
for the decay of determinants corresponding to any error event. Lower bounds for
the optimal decay are studied by constructions based on algebraic number theory
and Diophantine approximation. For some error profiles the constructed codes
will be proven to be optimal.
While the perspective of the paper is that of PEP, the final part of the
paper proves how the achieved decay results can be turned into statements about
the diversity-multiplexing gain trade-off (DMT).
|
1312.4695 | Sparse, complex-valued representations of natural sounds learned with
phase and amplitude continuity priors | cs.LG cs.SD q-bio.NC | Complex-valued sparse coding is a data representation which employs a
dictionary of two-dimensional subspaces, while imposing a sparse, factorial
prior on complex amplitudes. When trained on a dataset of natural image
patches, it learns phase invariant features which closely resemble receptive
fields of complex cells in the visual cortex. Features trained on natural
sounds, however, rarely reveal phase invariance and instead capture other aspects of the
data. This observation is a starting point of the present work. As its first
contribution, it provides an analysis of natural sound statistics by means of
learning sparse, complex representations of short speech intervals. Secondly,
it proposes priors over the basis function set, which bias them towards
phase-invariant solutions. In this way, a dictionary of complex basis functions
can be learned from the data statistics, while preserving the phase invariance
property. Finally, representations trained on speech sounds with and without
priors are compared. Prior-based basis functions reveal performance comparable
to unconstrained sparse coding, while explicitly representing phase as a
temporal shift. Such representations can find applications in many perceptual
and machine learning tasks.
|
1312.4704 | RDF Translator: A RESTful Multi-Format Data Converter for the Semantic
Web | cs.DL cs.AI | The interdisciplinary nature of the Semantic Web and the many projects put
forward by the community led to a large number of widely accepted serialization
formats for RDF. Most of these RDF syntaxes have been developed out of a
necessity to serve specific purposes better than existing ones, e.g. RDFa was
proposed as an extension to HTML for embedding non-intrusive RDF statements in
human-readable documents. Nonetheless, the RDF serialization formats are
generally transducible among themselves given that they are commonly based on
the RDF model. In this paper, we present (1) a RESTful Web service based on the
HTTP protocol that translates between different serializations. In addition to
its core functionality, our proposed solution provides (2) features to
accommodate frequent needs of Semantic Web developers, namely a straightforward
user interface with copy-to-clipboard functionality, syntax highlighting,
persistent URI links for easy sharing, cool URI patterns, and content
negotiation using respective HTTP headers. We demonstrate the benefit of our
converter by presenting two use cases.
|
1312.4706 | Designing Spontaneous Speech Search Interface for Historical Archives | cs.HC cs.CL | Spontaneous speech in the form of conversations, meetings, voice-mail,
interviews, oral history, etc. is one of the most ubiquitous forms of human
communication. Search engines providing access to such speech collections have
the potential to better inform intelligence and make relevant data over vast
audio/video archives available to users. This project presents a search user
interface design supporting search tasks over a speech collection consisting of
a historical archive with nearly 52,000 audiovisual testimonies of survivors
and witnesses of the Holocaust and other genocides. The design incorporates
faceted search, along with other UI elements like highlighted search items,
tags, snippets, etc., to promote discovery and exploratory search. Two
different designs have been created to support both manual and automated
transcripts. Evaluation was performed with human subjects to measure accuracy
in retrieving results, understand the user perspective on the design elements,
and assess the ease of parsing information.
|
1312.4707 | The Multiple Instances of Node Centrality and their Implications on the
Vulnerability of ISP Networks | cs.SI | The position of the nodes within a network topology largely determines the
level of their involvement in various networking functions. Yet numerous node
centrality indices, proposed to quantify how central individual nodes are in
this respect, yield very different views of their relative significance. Our
first contribution in this paper is then an exhaustive survey and
categorization of centrality indices along several attributes including the
type of information (local vs. global) and processing complexity required for
their computation. We next study the seven most popular of those indices in the
context of Internet vulnerability to address issues that remain under-explored
in literature so far. First, we carry out a correlation study to assess the
consistency of the node rankings those indices generate over ISP router-level
topologies. For each pair of indices, we compute the full ranking correlation,
which is the standard choice in literature, and the percentage overlap between
the k top nodes. Then, we let these rankings guide the removal of highly
central nodes and assess the impact on both the connectivity properties and
traffic-carrying capacity of the network. Our results confirm that the top-k
overlap predicts the comparative impact of indices on the network vulnerability
better than the full-ranking correlation. Importantly, the locally computed
degree centrality index approximates closely the global indices with the most
dramatic impact on the traffic-carrying capacity; whereas, its approximative
power in terms of connectivity is more topology-dependent.
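The percentage overlap between the k top nodes of two rankings, used above to compare centrality indices, can be sketched as follows; `top_k_overlap` is a hypothetical helper and the survey's exact normalization may differ:

```python
def top_k_overlap(rank_a: list, rank_b: list, k: int) -> float:
    """Fraction of nodes shared by the k highest-ranked nodes of two
    centrality rankings (each ranking lists nodes, most central first)."""
    return len(set(rank_a[:k]) & set(rank_b[:k])) / k
```

Unlike a full-ranking correlation, this score only looks at the heads of the two rankings, which is where node-removal attacks operate.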
|
1312.4716 | More Classes of Complete Permutation Polynomials over $\F_q$ | cs.IT math.IT math.NT | In this paper, by using a powerful criterion for permutation polynomials
given by Zieve, we give several classes of complete permutation monomials over
$\F_{q^r}$. In addition, we present a class of complete permutation
multinomials, which is a generalization of recent work.
|
1312.4740 | Learning High-level Image Representation for Image Retrieval via
Multi-Task DNN using Clickthrough Data | cs.CV | Image retrieval refers to finding relevant images in an image database for
a given query, which is considered difficult due to the gap between the
low-level representation of images and the high-level representation of
queries. Recently developed Deep Neural Networks shed light on automatically
learning high-level image representations from raw pixels. In this paper, we
propose a multi-task DNN for image retrieval, which contains two parts, i.e.,
query-sharing layers for image representation computation and query-specific
layers for relevance estimation. The weights of the multi-task DNN are learned
on clickthrough data by Ring Training. Experimental results on both simulated
and real datasets show the effectiveness of the proposed method.
|
1312.4746 | Co-Sparse Textural Similarity for Image Segmentation | cs.CV | We propose an algorithm for segmenting natural images based on texture and
color information, which leverages the co-sparse analysis model for image
segmentation within a convex multilabel optimization framework. As a key
ingredient of this method, we introduce a novel textural similarity measure,
which builds upon the co-sparse representation of image patches. We propose a
Bayesian approach to merge textural similarity with information about color and
location. Combined with recently developed convex multilabel optimization
methods this leads to an efficient algorithm for both supervised and
unsupervised segmentation, which is easily parallelized on graphics hardware.
The approach provides competitive results in unsupervised segmentation and
outperforms state-of-the-art interactive segmentation methods on the Graz
Benchmark.
|
1312.4752 | BW - Eye Ophthalmologic decision support system based on clinical
workflow and data mining techniques-image registration algorithm | cs.CV | Blueworks - Medical Expert Diagnosis is developing an application, BWEye, to
be used as an ophthalmology consultation decision support system. The
implementation of this application involves several different tasks, one of
which is the implementation of an ophthalmology image registration algorithm.
The work reported in this document relates to the implementation of an
algorithm to register images of angiography, colour retinography and red-free
retinography. The implementations described were developed in MATLAB.
The implemented algorithm is based on the detection of the bifurcation points
(y-features) of the vascular structures of the retina that are usually visible
in the referred types of images. Two approaches are proposed to establish
an initial set of feature correspondences. The first approach is based on
maximizing the mutual information of the bifurcation regions of the image
features. The second approach is based on characterizing each bifurcation
point and minimizing the Euclidean distance between the feature descriptors of
the images in descriptor space. The final set of matching features for a pair
of images is defined through the application of the RANSAC algorithm.
Although a fully functional algorithm was not achieved, several analyses were
made that can be important for future improvement of the current
implementation.
|
1312.4794 | Semantic Annotation: The Mainstay of Semantic Web | cs.DL cs.AI cs.IR | Given that the realization of the Semantic Web depends on a critical mass of
accessible metadata and on the representation of data with formal knowledge,
it needs metadata that are specific, easy to understand and well-defined.
Semantic annotation of web documents is a successful way to make the Semantic
Web vision a reality. This paper introduces the Semantic Web and its vision
(stack layers) with regard to some concept definitions that help the
understanding of semantic annotation. Additionally, this paper introduces
semantic annotation categories, tools, domains and models.
|
1312.4798 | Finite Horizon Online Lazy Scheduling with Energy Harvesting
Transmitters over Fading Channels | cs.IT math.IT | Lazy scheduling, i.e. setting transmit power and rate as low as possible in
response to data traffic while still satisfying delay constraints, is a known
method for energy-efficient transmission. This paper addresses an online lazy
scheduling problem over a finite time-slotted transmission window and
introduces low-complexity heuristics which attain near-optimal performance. In
particular, this paper generalizes the lazy scheduling problem for energy
harvesting systems to deal with the packet arrival, energy harvesting and
time-varying channel processes simultaneously. The time-slotted formulation of
the problem and the depiction of its offline optimal solution provide explicit
expressions that allow us to derive good online policies and algorithms.
|
1312.4800 | New Approach to Optimize the Time of Association Rules Extraction | cs.DB | Knowledge discovery algorithms have become ineffective given the abundance
of data, and fast algorithms or optimization methods are required. To
address this limitation, the objective of this work is to adapt a new method
for optimizing the time of association rule extraction from large databases.
Given a relational database (one relation) represented as a set of tuples
over a set of attributes, we transform the original database into a binary
(Bitmap) table containing binary values. We then use this Bitmap table to
construct a data structure called a Peano tree, stored as a binary file, on
which we apply a new algorithm called BF-ARM (an extension of the well-known
Apriori algorithm). Since the database is loaded into a binary file, the
proposed algorithm traverses this file, and the association rule extraction
process is based on the file stored on disk. The BF-ARM algorithm is
implemented and compared with the Apriori, Apriori+ and RS-Rules+ algorithms.
The evaluation process is based on three benchmarks (Mushroom, Car Evaluation
and Adult). Our preliminary experimental results show that our algorithm
produces association rules in minimal time compared to the other algorithms.
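The Apriori-style mining that BF-ARM extends can be sketched as below; this is the classic textbook frequent-itemset algorithm, not the paper's Bitmap/Peano-tree variant, and the names are ours:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Classic Apriori: all itemsets whose support (fraction of
    transactions containing them) is at least min_support."""
    n = len(transactions)
    tsets = [set(t) for t in transactions]
    current = [frozenset([i]) for i in sorted({i for t in tsets for i in t})]
    result, k = {}, 1
    while current:
        counts = {c: sum(1 for t in tsets if c <= t) for c in current}
        frequent = {c: cnt / n for c, cnt in counts.items()
                    if cnt / n >= min_support}
        result.update(frequent)
        # join frequent k-itemsets into (k+1)-candidates, pruning any
        # candidate with an infrequent k-subset
        current, seen = [], set()
        for a, b in combinations(sorted(frequent, key=sorted), 2):
            u = a | b
            if len(u) == k + 1 and u not in seen:
                if all(frozenset(s) in frequent for s in combinations(u, k)):
                    current.append(u)
                    seen.add(u)
        k += 1
    return result
```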
|
1312.4805 | Array Convolutional Low-Density Parity-Check Codes | cs.IT math.IT | This paper presents a design technique for obtaining regular time-invariant
low-density parity-check convolutional (RTI-LDPCC) codes with low complexity
and good performance. We start from previous approaches which unwrap a
low-density parity-check (LDPC) block code into an RTI-LDPCC code, and we
obtain a new method to design RTI-LDPCC codes with better performance and
shorter constraint length. Unlike previous techniques, we start the
design from an array LDPC block code. We show that, for codes with high rate, a
performance gain and a reduction in the constraint length are achieved with
respect to previous proposals. Additionally, an increase in the minimum
distance is observed.
|
1312.4811 | Finite-Length Analysis of BATS Codes | cs.IT math.IT | BATS codes were proposed for communication through networks with packet loss.
A BATS code consists of an outer code and an inner code. The outer code is a
matrix generation of a fountain code, which works with the inner code that
comprises random linear coding at the intermediate network nodes. In this
paper, the performance of finite-length BATS codes is analyzed with respect to
both belief propagation (BP) decoding and inactivation decoding. Our results
enable us to evaluate efficiently the finite-length performance in terms of the
number of batches used for decoding ranging from 1 to a given maximum number,
and provide new insights on the decoding performance. Specifically, for a fixed
number of input symbols and a range of the number of batches used for decoding,
we obtain recursive formulae to calculate respectively the stopping time
distribution of BP decoding and the inactivation probability in inactivation
decoding. We also find that both the failure probability of BP decoding and the
expected number of inactivations in inactivation decoding can be expressed in a
power-sum form where the number of batches appears only as the exponent. This
power-sum expression reveals clearly how the decoding failure probability and
the expected number of inactivations decrease with the number of batches. When
the number of batches used for decoding follows a Poisson distribution, we
further derive recursive formulae with potentially lower computational
complexity for both decoding algorithms. For the BP decoder that consumes
batches one by one, three formulae are provided to characterize the expected
number of consumed batches until all the input symbols are decoded.
|
1312.4814 | Mining Malware Specifications through Static Reachability Analysis | cs.CR cs.AI cs.LO | The amount of malicious software (malware) is growing out of control.
Syntactic signature based detection cannot cope with such growth and manual
construction of malware signature databases needs to be replaced by computer
learning based approaches. Currently, a single modern signature capturing the
semantics of a malicious behavior can be used to replace an arbitrarily large
number of old-fashioned syntactic signatures. However, teaching computers to
learn such behaviors is a challenge. Existing work relies on dynamic analysis
to extract malicious behaviors, but such techniques do not guarantee the
coverage of all behaviors. To sidestep this limitation, we show how to learn
malware signatures using static reachability analysis. The idea is to model
binary programs using pushdown systems (that can be used to model the stack
operations occurring during the binary code execution), use reachability
analysis to extract behaviors in the form of trees, and use subtrees that are
common among the trees extracted from a training set of malware files as
signatures. To detect malware we propose to use a tree automaton to compactly
store malicious behavior trees and check if any of the subtrees extracted from
the file under analysis is malicious. Experimental data shows that our approach
can be used to learn signatures from a training set of malware files and use
them to detect a test set of malware that is 5 times the size of the training
set.
|
1312.4824 | Generation, Implementation and Appraisal of an N-gram based Stemming
Algorithm | cs.IR cs.CL | A language-independent stemmer has long been sought. The single N-gram
tokenization technique works well; however, it often generates stems that
start with intermediate characters rather than initial ones. We present a novel
technique that takes the concept of N-gram stemming one step further and
compare our method with an established algorithm in the field, Porter's
stemmer. Results indicate that our N-gram stemmer is not inferior to Porter's
linguistic stemmer.
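A toy sketch of single N-gram tokenization for stemming, the baseline this abstract refers to; the names and the corpus-frequency heuristic are ours, not the paper's algorithm:

```python
from collections import Counter

def char_ngrams(word: str, n: int = 4) -> list:
    """All overlapping character n-grams of a word (the word itself if
    it is shorter than n)."""
    return [word[i:i + n] for i in range(len(word) - n + 1)] or [word]

def most_common_ngram_stem(words, n: int = 4) -> dict:
    """Pick, for each word, its n-gram that is most frequent across the
    corpus as a crude stem. Note the chosen n-gram may start with
    intermediate characters -- the weakness the abstract points out."""
    counts = Counter(g for w in words for g in char_ngrams(w, n))
    return {w: max(char_ngrams(w, n), key=lambda g: counts[g])
            for w in words}
```

On a family of inflections such as "connect" / "connected" / "connecting", the shared prefix 4-gram dominates the counts and is selected as the stem.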
|
1312.4826 | Geometric Methods for Invariant-Zero Cancellation in Linear
Multivariable Systems: Illustrative Examples | cs.SY | This note presents some numerical examples worked out in order to show the
reader how to implement, within a widely accessible computational setting, the
methodology for achieving zero cancellation in linear multivariable systems
discussed in [1]. The results are evaluated in the light of applicability and
performance of different methods available in the literature.
|
1312.4828 | Subjective Logic Operators in Trust Assessment: an Empirical Study | cs.CR cs.AI cs.LO | Computational trust mechanisms aim to produce trust ratings from both direct
and indirect information about agents' behaviour. Subjective Logic (SL) has
been widely adopted as the core of such systems via its fusion and discount
operators. In recent research we revisited the semantics of these operators to
explore an alternative, geometric interpretation. In this paper we present a
principled set of desiderata for discounting and fusion operators in SL. Building upon
this we present operators that satisfy these desirable properties, including a
family of discount operators. We then show, through a rigorous empirical study,
that specific, geometrically interpreted operators significantly outperform
standard SL operators in estimating ground truth. These novel operators offer
real advantages for computational models of trust and reputation, in which they
may be employed without modifying other aspects of an existing system.
|
1312.4833 | Toward Security Verification against Inference Attacks on Data Trees | cs.CR cs.DB cs.FL | This paper describes our ongoing work on security verification against
inference attacks on data trees. We focus on infinite secrecy against inference
attacks, which means that attackers cannot narrow down the candidates for the
value of the sensitive information to a finite set using the information
available to them. Our purpose is to propose a model under which infinite secrecy is
decidable. To be specific, we first propose tree transducers which are
expressive enough to represent practical queries. Then, in order to represent
attackers' knowledge, we propose data tree types such that type inference and
inverse type inference on those tree transducers are possible with respect to
data tree types, and infiniteness of data tree types is decidable.
|
1312.4839 | Reasoning about the Impacts of Information Sharing | cs.AI | In this paper we describe a decision process framework allowing an agent to
decide what information it should reveal to its neighbours within a
communication graph in order to maximise its utility. We assume that these
neighbours can pass information onto others within the graph. The inferences
made by agents receiving the messages can have a positive or negative impact on
the information providing agent, and our decision process seeks to identify how
a message should be modified in order to be most beneficial to the information
producer. Our decision process is based on the provider's subjective beliefs
about others in the system, and therefore makes extensive use of the notion of
trust. Our core contributions are therefore the construction of a model of
information propagation; the description of the agent's decision procedure; and
an analysis of some of its properties.
|
1312.4851 | Representing, Simulating and Analysing Ho Chi Minh City Tsunami Plan by
Means of Process Models | cs.CY cs.AI | This paper considers the textual plan (guidelines) proposed by the People's
Committee of Ho Chi Minh City (Vietnam) to manage earthquakes and tsunamis,
and tries to represent it in a more formal way, in order to provide means to
simulate, analyse and adapt it. We first present a state of the art about
coordination models for disaster management with a focus on process oriented
approaches. We give an overview of the different dimensions of the textual
tsunami plan of Ho Chi Minh City and then the graphical representation of its
process with BPMN (Business Process Model and Notation). We finally show how to
exploit this process with workflow tools to simulate (YAWL tool) and analyse it
(ProM tool).
|
1312.4852 | Identification of Gaussian Process State-Space Models with Particle
Stochastic Approximation EM | stat.ML cs.SY | Gaussian process state-space models (GP-SSMs) are a very flexible family of
models of nonlinear dynamical systems. They comprise a Bayesian nonparametric
representation of the dynamics of the system and additional (hyper-)parameters
governing the properties of this nonparametric representation. The Bayesian
formalism enables systematic reasoning about the uncertainty in the system
dynamics. We present an approach to maximum likelihood identification of the
parameters in GP-SSMs, while retaining the full nonparametric description of
the dynamics. The method is based on a stochastic approximation version of the
EM algorithm that employs recent developments in particle Markov chain Monte
Carlo for efficient identification.
|
1312.4860 | Low-rank Similarity Measure for Role Model Extraction | cs.SI physics.soc-ph | Computing meaningful clusters of nodes is crucial to analyze large networks.
In this paper, we present a pairwise node similarity measure that allows us to
extract roles, i.e. groups of nodes sharing similar flow patterns within a
network. We propose a low-rank iterative scheme to approximate the similarity
measure for very large networks. Finally, we show that our low-rank similarity
score successfully extracts the different roles in random graphs and that its
performance is similar to that of the pairwise similarity measure.
|
1312.4875 | MIGRAINE: MRI Graph Reliability Analysis and Inference for Connectomics | q-bio.QM cs.CE | Currently, connectomes (e.g., functional or structural brain graphs) can be
estimated in humans at $\approx 1~mm^3$ scale using a combination of diffusion
weighted magnetic resonance imaging, functional magnetic resonance imaging and
structural magnetic resonance imaging scans. This manuscript summarizes a
novel, scalable implementation of open-source algorithms to rapidly estimate
magnetic resonance connectomes, using both anatomical regions of interest
(ROIs) and voxel-size vertices. To assess the reliability of our pipeline, we
develop a novel nonparametric non-Euclidean reliability metric. Here we provide
an overview of the methods used, demonstrate our implementation, and discuss
available user extensions. We conclude with results showing the efficacy and
reliability of the pipeline over previous state-of-the-art.
|
1312.4892 | A Fast Algorithm for Sparse Controller Design | math.OC cs.DC cs.SY | We consider the task of designing sparse control laws for large-scale systems
by directly minimizing an infinite horizon quadratic cost with an $\ell_1$
penalty on the feedback controller gains. Our focus is on an improved algorithm
that allows us to scale to large systems (i.e. those where sparsity is most
useful) with convergence times that are several orders of magnitude faster than
existing algorithms. In particular, we develop an efficient proximal Newton
method which minimizes per-iteration cost with a coordinate descent active set
approach and fast numerical solutions to the Lyapunov equations. Experimentally
we demonstrate the appeal of this approach on synthetic examples and real power
networks significantly larger than those previously considered in the
literature.
|
1312.4894 | Deep Convolutional Ranking for Multilabel Image Annotation | cs.CV | Multilabel image annotation is one of the most important challenges in
computer vision with many real-world applications. While existing work usually
use conventional visual features for multilabel annotation, features based on
Deep Neural Networks have shown potential to significantly boost performance.
In this work, we propose to leverage the advantage of such features and analyze
key components that lead to better performances. Specifically, we show that a
significant performance gain could be obtained by combining convolutional
architectures with approximate top-$k$ ranking objectives, as they naturally
fit the multilabel tagging problem. In our experiments on the NUS-WIDE dataset,
our approach outperforms conventional visual features by about 10%, achieving
the best reported performance in the literature.
|
1312.4895 | Recursive Compressed Sensing | stat.ML cs.IT math.IT | We introduce a recursive algorithm for performing compressed sensing on
streaming data. The approach consists of a) recursive encoding, where we sample
the input stream via overlapping windowing and make use of the previous
measurement in obtaining the next one, and b) recursive decoding, where the
signal estimate from the previous window is utilized in order to achieve faster
convergence in an iterative optimization scheme applied to decode the new one.
To remove estimation bias, a two-step estimation procedure is proposed
comprising support set detection and signal amplitude estimation. Estimation
accuracy is enhanced by a non-linear voting method and averaging estimates over
multiple windows. We analyze the computational complexity and estimation error,
and show that the normalized error variance asymptotically goes to zero for
sublinear sparsity. Our simulation results show speed up of an order of
magnitude over traditional CS, while obtaining significantly lower
reconstruction error under mild conditions on the signal magnitudes and the
noise level.
|
1312.4967 | Estimation of Human Body Shape and Posture Under Clothing | cs.CV cs.GR | Estimating the body shape and posture of a dressed human subject in motion
represented as a sequence of (possibly incomplete) 3D meshes is important for
virtual change rooms and security. To solve this problem, statistical shape
spaces encoding human body shape and posture variations are commonly used to
constrain the search space for the shape estimate. In this work, we propose a
novel method that uses a posture-invariant shape space to model body shape
variation combined with a skeleton-based deformation to model posture
variation. Our method can estimate the body shape and posture of both static
scans and motion sequences of dressed human body scans. In case of motion
sequences, our method takes advantage of motion cues to solve for a single body
shape estimate along with a sequence of posture estimates. We apply our
approach to both static scans and motion sequences and demonstrate that using
our method achieves higher fitting accuracy than when a variant of the popular
SCAPE model is used as the statistical model.
|
1312.4986 | A Comparative Evaluation of Curriculum Learning with Filtering and
Boosting | cs.LG | Not all instances in a data set are equally beneficial for inferring a model
of the data. Some instances (such as outliers) are detrimental to inferring a
model of the data. Several machine learning techniques treat instances in a
data set differently during training such as curriculum learning, filtering,
and boosting. However, an automated method for determining how beneficial an
instance is for inferring a model of the data does not exist. In this paper, we
present an automated method that orders the instances in a data set by
complexity based on their likelihood of being misclassified (instance
hardness). The underlying assumption of this method is that instances with a
high likelihood of being misclassified represent more complex concepts in a
data set. Ordering the instances in a data set allows a learning algorithm to
focus on the most beneficial instances and ignore the detrimental ones. We
compare ordering the instances in a data set in curriculum learning, filtering
and boosting. We find that ordering the instances significantly increases
classification accuracy and that filtering has the largest impact on
classification accuracy. On a set of 52 data sets, ordering the instances
increases the average accuracy from 81% to 84%.
|
1312.5021 | Efficient Online Bootstrapping for Large Scale Learning | cs.LG | Bootstrapping is a useful technique for estimating the uncertainty of a
predictor, for example, confidence intervals for prediction. It is typically
used on small to moderate sized datasets, due to its high computation cost.
This work describes a highly scalable online bootstrapping strategy,
implemented inside Vowpal Wabbit, that is several times faster than traditional
strategies. Our experiments indicate that, in addition to providing a black
box-like method for estimating uncertainty, our implementation of online
bootstrapping may also help to train models with better prediction performance
due to model averaging.
|
1312.5023 | Contextually Supervised Source Separation with Application to Energy
Disaggregation | stat.ML cs.LG math.OC | We propose a new framework for single-channel source separation that lies
between the fully supervised and unsupervised setting. Instead of supervision,
we provide input features for each source signal and use convex methods to
estimate the correlations between these features and the unobserved signal
decomposition. We analyze the case of $\ell_2$ loss theoretically and show that
recovery of the signal components depends only on cross-correlation between
features for different signals, not on correlations between features for the
same signal. Contextually supervised source separation is a natural fit for
domains with large amounts of data but no explicit supervision; our motivating
application is energy disaggregation of hourly smart meter data (the separation
of whole-home power signals into different energy uses). Here we apply
contextual supervision to disaggregate the energy usage of thousands of homes over
four years, a significantly larger scale than previously published efforts, and
demonstrate on synthetic data that our method outperforms the unsupervised
approach.
|
1312.5033 | Evaluation of Plane Detection with RANSAC According to Density of 3D
Point Clouds | cs.RO cs.CV | We have implemented a method that detects planar regions from 3D scan data
using Random Sample Consensus (RANSAC) algorithm to address the issue of a
trade-off between the scanning speed and the point density of 3D scanning.
However, the limitations of the implemented method have not yet been clarified. In
this paper, we conducted an additional experiment to evaluate the implemented
method by changing its parameter and environments in both high and low point
density data. As a result, the number of detected planes in high point density
data was different from that in low point density data with the same parameter
value.
|
1312.5035 | SybilBelief: A Semi-supervised Learning Approach for Structure-based
Sybil Detection | cs.CR cs.SI | Sybil attacks are a fundamental threat to the security of distributed
systems. Recently, there has been a growing interest in leveraging social
networks to mitigate Sybil attacks. However, the existing approaches suffer
from one or more drawbacks, including bootstrapping from either only known
benign or known Sybil nodes, failing to tolerate noise in their prior knowledge
about known benign or Sybil nodes, and being not scalable.
In this work, we aim to overcome these drawbacks. Towards this goal, we
introduce SybilBelief, a semi-supervised learning framework, to detect Sybil
nodes. SybilBelief takes a social network of the nodes in the system, a small
set of known benign nodes, and, optionally, a small set of known Sybils as
input. Then SybilBelief propagates the label information from the known benign
and/or Sybil nodes to the remaining nodes in the system.
We evaluate SybilBelief using both synthetic and real world social network
topologies. We show that SybilBelief is able to accurately identify Sybil nodes
with low false positive rates and low false negative rates. SybilBelief is
resilient to noise in our prior knowledge about known benign and Sybil nodes.
Moreover, SybilBelief performs orders of magnitude better than existing Sybil
classification mechanisms and significantly better than existing Sybil ranking
mechanisms.
|
1312.5045 | Comparative analysis of evolutionary algorithms for image enhancement | cs.CV cs.NE | Evolutionary algorithms are metaheuristic techniques that derive inspiration
from the natural process of evolution. They can efficiently solve complex
(NP-hard) optimization problems, i.e., generate solutions of acceptable quality
in a reasonable time. In this paper, automatic image enhancement is considered as
an optimization problem and three evolutionary algorithms (Genetic Algorithm,
Differential Evolution and Self Organizing Migration Algorithm) are employed to
search for an optimum solution. They are used to find an optimum parameter set
for an image enhancement transfer function. The aim is to maximize a fitness
criterion which is a measure of image contrast and the visibility of details in
the enhanced image. The enhancement results obtained using all three
evolutionary algorithms are compared amongst themselves and also with the
output of the histogram equalization method.
|
1312.5047 | Stable Camera Motion Estimation Using Convex Programming | cs.CV | We study the inverse problem of estimating n locations $t_1, ..., t_n$ (up to
global scale, translation and negation) in $R^d$ from noisy measurements of a
subset of the (unsigned) pairwise lines that connect them, that is, from noisy
measurements of $\pm (t_i - t_j)/\|t_i - t_j\|$ for some pairs (i,j) (where the
signs are unknown). This problem is at the core of the structure from motion
(SfM) problem in computer vision, where the $t_i$'s represent camera locations
in $R^3$. The noiseless version of the problem, with exact line measurements,
has been considered previously under the general title of parallel rigidity
theory, mainly in order to characterize the conditions for unique realization
of locations. For noisy pairwise line measurements, current methods tend to
produce spurious solutions that are clustered around a few locations. This
sensitivity of the location estimates is a well-known problem in SfM,
especially for large, irregular collections of images.
In this paper we introduce a semidefinite programming (SDP) formulation,
specially tailored to overcome the clustering phenomenon. We further identify
the implications of parallel rigidity theory for the location estimation
problem to be well-posed, and prove exact (in the noiseless case) and stable
location recovery results. We also formulate an alternating direction method to
solve the resulting semidefinite program, and provide a distributed version of
our formulation for large numbers of locations. Specifically for the camera
location estimation problem, we formulate a pairwise line estimation method
based on robust camera orientation and subspace estimation. Lastly, we
demonstrate the utility of our algorithm through experiments on real images.
|