| id | title | categories | abstract |
|---|---|---|---|
1312.5096 | Analysis of MIMO Systems used in planning a 4G-WiMAX Network in Ghana | cs.IT math.IT | With the increasing demand for mobile data services, Broadband Wireless
Access (BWA) is emerging as one of the fastest growing areas within mobile
communications. Innovative wireless communication systems, such as WiMAX, are
expected to offer highly reliable broadband radio access in order to meet the
increasing demands of emerging high speed data and multimedia services. In
Ghana, deployment of WiMAX technology has recently begun. Planning these
high-capacity networks in the presence of multiple interferers, so that users
can enjoy affordable and reliable internet services, is a critical design
issue. This paper uses a deterministic approach to simulate the Bit-Error-Rate
(BER) of the initial MIMO antenna configurations considered in deploying a
high-capacity 4G-WiMAX network in Ghana. The radiation pattern of the antenna
used in deploying the network has been simulated with Genex-Unet and NEC, and
the results are presented. An adaptive 4x4 MIMO antenna configuration with
optimally suppressed sidelobes is suggested for future network deployment,
since the adaptive 2x2 MIMO configuration used in the initial deployment
yields poor average BER performance compared to the 4x4 configuration, which
seems less affected by multiple interferers.
|
1312.5097 | A Cellular Automaton Based Controller for a Ms. Pac-Man Agent | cs.AI | Video games can be used as an excellent test bed for Artificial Intelligence
(AI) techniques. They are challenging and non-deterministic, which makes it
very difficult to write strong AI players. An example of such a video game is
Ms. Pac-Man. In this paper we outline some of the previous techniques used to
build AI controllers for Ms. Pac-Man and present a new and novel solution. Our
technique utilises a Cellular Automaton (CA) to build a
representation of the environment at each time step of the game. The basis of
the representation is a 2-D grid of cells. Each cell has a state and a set of
rules which determine whether or not that cell will dominate (i.e. pass its
state value onto) adjacent cells at each update. Once a certain number of
update iterations have been completed, the CA represents the state of the
environment in the game, the goals and the dangers. At this point, Ms. Pac-Man
decides her next move based only on her adjacent cells; that is to say, she
has no knowledge of the state of the environment as a whole and simply follows
the strongest path. This technique shows promise, allowing the controller to
achieve high scores in a live game, and the behaviour it exhibits is
interesting and complex. Moreover, this behaviour is produced by very
simple rules which are applied many times to each cell in the grid. Simple
local interactions with complex global results are truly achieved.
|
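As an illustration of the mechanism described in the abstract above, here is a minimal, hypothetical sketch of a CA-style grid in which goal and danger values propagate to adjacent cells and an agent moves using only local information. All rules, values, and names are illustrative assumptions, not the paper's actual controller.

```python
import numpy as np

def propagate(grid, steps, decay=0.9):
    """Spread each cell's state to its 4-neighbours: a cell adopts the
    strongest (largest-magnitude) value among itself and its decayed
    neighbours. A toy analogue of a CA 'domination' update rule."""
    g = grid.astype(float)
    for _ in range(steps):
        padded = np.pad(g, 1, constant_values=0.0)
        # candidate values flowing in from the four neighbours
        neighbours = np.stack([
            padded[:-2, 1:-1], padded[2:, 1:-1],
            padded[1:-1, :-2], padded[1:-1, 2:],
        ]) * decay
        candidates = np.concatenate([g[None], neighbours])
        # keep, per cell, the candidate with the largest magnitude
        idx = np.abs(candidates).argmax(axis=0)
        g = np.take_along_axis(candidates, idx[None], axis=0)[0]
    return g

def best_move(grid, pos):
    """Pick the adjacent cell with the highest value (local knowledge only)."""
    r, c = pos
    moves = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    moves = [(i, j) for i, j in moves
             if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]]
    return max(moves, key=lambda m: grid[m])

field = np.zeros((5, 5))
field[0, 4] = 1.0    # a goal (e.g. a pill)
field[4, 0] = -1.0   # a danger (e.g. a ghost)
field = propagate(field, steps=8)
```

After propagation, an agent at the centre moves toward a positive-valued neighbour, i.e. toward the goal and away from the danger, despite never inspecting the full grid.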
1312.5106 | Codes between MBR and MSR Points with Exact Repair Property | cs.IT math.IT | In this paper distributed storage systems with exact repair are studied. A
construction for regenerating codes between the minimum storage regenerating
(MSR) and the minimum bandwidth regenerating (MBR) points is given. To the
best of the authors' knowledge, the only previous construction of
exact-regenerating codes between the MBR and MSR points is the work by Tian et
al. In contrast to their work, the methods used here are elementary.
  It is also shown that when the parameters $n$, $k$, and $d$ are close to
each other, the given construction is close to optimal compared to the known
functional-repair capacity. This is done by showing that when the pairwise
differences of the parameters $n$, $k$, and $d$ are fixed but their actual
values approach infinity, the ratio of the performance of the constructed
exact-repair codes to the known capacity of functional-repair codes approaches
one. A simple variation of the constructed codes with almost the same
performance is also given.
|
1312.5111 | Long Time No See: The Probability of Reusing Tags as a Function of
Frequency and Recency | cs.IR | In this paper, we introduce a tag recommendation algorithm that mimics the
way humans draw on items in their long-term memory. This approach uses the
frequency and recency of previous tag assignments to estimate the probability
of reusing a particular tag. Using three real-world folksonomies gathered from
bookmarks in BibSonomy, CiteULike and Flickr, we show how adding a
time-dependent component outperforms conventional "most popular tags"
approaches, as well as an existing, very effective but less theory-driven
time-dependent recommendation mechanism. By combining our approach with a
simple resource-specific frequency analysis, our algorithm outperforms other
well-established algorithms, such as FolkRank, Pairwise Interaction Tensor
Factorization and Collaborative Filtering. We conclude that our approach
provides an accurate and computationally efficient model of a user's temporal
tagging behavior. We show how effective principles for information retrieval
can be designed and implemented if human memory processes are taken into
account.
|
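The frequency-and-recency weighting described above can be illustrated with the ACT-R base-level activation equation, A = ln(sum_j (now - t_j)^(-d)), which models of this kind build on. The sketch below uses made-up data and is only an assumed illustration, not the authors' implementation.

```python
import math

def base_level_activation(timestamps, now, d=0.5):
    """ACT-R style base-level activation: frequent AND recent use both
    raise the estimated probability of reusing a tag."""
    return math.log(sum((now - t) ** (-d) for t in timestamps))

def recommend(tag_history, now, k=2):
    """Rank a user's past tags by activation and return the top-k."""
    scored = {tag: base_level_activation(ts, now)
              for tag, ts in tag_history.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

history = {
    "python": [1, 5, 9],   # frequent and recent -> highest activation
    "linux":  [1, 2],      # frequent but old
    "vim":    [9],         # rare but recent
}
top = recommend(history, now=10)
```

Note how "vim", used only once but recently, outranks the older "linux": this is the time-dependent component that a pure "most popular tags" baseline lacks.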
1312.5124 | Permuted NMF: A Simple Algorithm Intended to Minimize the Volume of the
Score Matrix | stat.AP cs.LG stat.ML | Non-Negative Matrix Factorization (NMF) attempts to find a number of
archetypal response profiles, or parts, such that any sample profile in the
dataset can be approximated by a close profile among these archetypes or a
linear combination of these profiles. The non-negativity constraint is imposed
while estimating archetypal profiles, due to the non-negative nature of the
observed signal. Apart from non-negativity, a volume constraint can be applied
to the score matrix W to enhance NMF's ability to learn parts. In this
report, we describe a very simple algorithm, which in effect achieves volume
minimization, although indirectly.
|
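For context, here is a minimal standard NMF with Lee-Seung multiplicative updates, the baseline such reports modify; the permutation/volume-minimization step itself is not shown, and all sizes and parameters are illustrative assumptions.

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Plain multiplicative-update NMF: V ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update archetype profiles
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update the score matrix
    return W, H

rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))  # exactly rank-2, non-negative
W, H = nmf(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form keeps W and H non-negative throughout, which is why it is the usual starting point for constrained variants such as the volume-minimizing one described above.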
1312.5129 | Deep Learning Embeddings for Discontinuous Linguistic Units | cs.CL | Deep learning embeddings have been successfully used for many natural
language processing problems. Embeddings are mostly computed for word forms
although a number of recent papers have extended this to other linguistic units
like morphemes and phrases. In this paper, we argue that learning embeddings
for discontinuous linguistic units should also be considered. In an
experimental evaluation on coreference resolution, we show that such embeddings
perform better than word form embeddings.
|
1312.5138 | Locating Multiple Ultrasound Targets in Chorus | cs.IT math.IT | Ranging by Time of Arrival (TOA) of Narrow-band ultrasound (NBU) has been
widely used by many locating systems for its low cost and high accuracy.
However, because it is hard to support code-division multiple access with
narrowband signals, existing NBU-based locating systems generally need to
assign an exclusive time slot to each target to avoid signal conflicts when
tracking multiple targets. Because ultrasound propagates slowly in air,
dividing exclusive time slots on a single channel keeps the location updating
rate for each target rather low, leading to unsatisfactory tracking
performance as the number of targets increases. In this paper, we investigate
a new multiple-target locating method using NBU, called UltraChorus, which
locates multiple targets while allowing them to send NBU signals
simultaneously, i.e., in chorus mode. It can dramatically increase the
location updating rate. In particular, we investigated, by both experiments
and theoretical analysis, the necessary and sufficient conditions for
resolving the conflicts of multiple NBU signals on a single channel, referred
to as the conditions for chorus ranging and chorus locating. To tackle the
difficulty caused by the anonymity of the measured distances, we further
developed a consistent position generation algorithm and a probabilistic
particle filter algorithm to label the distances by source, generate
reasonable location estimates, and disambiguate the motion trajectories of the
multiple concurrent targets based on the anonymous distance measurements.
Extensive evaluations by both simulation and testbed were carried out,
verifying the effectiveness of our proposed theories and algorithms.
|
1312.5148 | Object Selection under Team Context | cs.DB | Context-aware databases have drawn increasing attention from both industry
and academia recently by taking users' current situation and environment into
consideration. However, most of the literature focuses on individual context,
overlooking team users. In this paper, we investigate how to integrate team
context into the database query process to help users get top-ranked database
tuples and make the team more competitive. We introduce a naive and an
optimized query algorithm to select suitable records and show that they output
the same results, while the latter is more computationally efficient.
Extensive empirical studies are conducted to evaluate the query approaches and
demonstrate their effectiveness and efficiency.
|
1312.5155 | On the Effectiveness of Polynomial Realization of Reed-Solomon Codes for
Storage Systems | cs.IT cs.PF math.IT | There are different ways to realize Reed-Solomon (RS) codes. While in the
storage community, using the generator matrices to implement RS codes is more
popular, in the coding theory community the generator polynomials are typically
used to realize RS codes. Prominent exceptions include HDFS-RAID, which uses
generator polynomial based erasure codes, and extends the Apache Hadoop's file
system.
In this paper we evaluate the performance of an implementation of polynomial
realization of Reed-Solomon codes, along with our optimized version of it,
against that of a widely-used library (Jerasure) that implements the main
matrix realization alternatives. Our experimental study shows that despite
significant performance gains yielded by our optimizations, the polynomial
implementations' performance is consistently inferior to that of the matrix
realization alternatives in general, and to that of Cauchy bit matrices in
particular.
|
1312.5162 | Decision Support System for the Eligibility of Indonesian Migrant Workers (TKI) Abroad Using
FMADM | cs.AI | BP3TKI Palembang is the government agency that coordinates and carries out
the registration, selection, and placement of prospective migrant workers. To
simplify the existing procedures and improve decision-making, it is necessary
to build a decision support system (DSS) that determines eligibility for
employment abroad by applying Fuzzy Multiple Attribute Decision Making
(FMADM), using the linear sequential systems development method. The system is
built using Microsoft Visual Basic .NET 2010 and a SQL Server 2008 database.
The design of the system uses use case diagrams and class diagrams to identify
the needs of users and the system, as well as to guide implementation. This
decision support system is able to rank prospective migrant workers, making it
easier for BP3TKI to decide which workers will be placed abroad.
|
1312.5173 | New Repair strategy of Hadamard Minimum Storage Regenerating Code for
Distributed Storage System | cs.IT math.IT | The newly presented $(k+2,k)$ Hadamard minimum storage regenerating (MSR)
code is the first class of high-rate storage codes with the optimal repair
property for all single-node failures. In this paper, we propose a new, simple
repair strategy, which considerably reduces the computational load of node
repair compared to the original one.
|
1312.5179 | The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited | stat.ML cs.LG math.OC | Hypergraphs allow one to encode higher-order relationships in data and are
thus a very flexible modeling tool. Current learning methods are either based
on approximations of the hypergraphs via graphs or on tensor methods which are
only applicable under special conditions. In this paper, we present a new
learning framework on hypergraphs which fully uses the hypergraph structure.
The key element is a family of regularization functionals based on the total
variation on hypergraphs.
|
1312.5192 | Nonlinear Eigenproblems in Data Analysis - Balanced Graph Cuts and the
RatioDCA-Prox | stat.ML cs.LG math.OC | It has been recently shown that a large class of balanced graph cuts allows
for an exact relaxation into a nonlinear eigenproblem. We review briefly some
of these results and propose a family of algorithms to compute nonlinear
eigenvectors which encompasses previous work as special cases. We provide a
detailed analysis of the properties and the convergence behavior of these
algorithms and then discuss their application in the area of balanced graph
cuts.
|
1312.5198 | Learning Semantic Script Knowledge with Event Embeddings | cs.LG cs.AI cs.CL stat.ML | Induction of common sense knowledge about prototypical sequences of events
has recently received much attention. Instead of inducing this knowledge in the
form of graphs, as in much of the previous work, in our method, distributed
representations of event realizations are computed based on distributed
representations of predicates and their arguments, and then these
representations are used to predict prototypical event orderings. The
parameters of the compositional process for computing the event representations
and the ranking component of the model are jointly estimated from texts. We
show that this approach results in a substantial boost in ordering performance
with respect to previous methods.
|
1312.5202 | Consensus in the presence of interference | cs.SY | This paper studies distributed strategies for average-consensus of arbitrary
vectors in the presence of network interference. We assume that the underlying
communication on any \emph{link} suffers from \emph{additive interference}
caused by the communication of other agents following their own consensus
protocols. Additionally, no agent knows how many or which agents are interfering
with its communication. Clearly, the standard consensus protocol does not
remain applicable in such scenarios. In this paper, we cast an algebraic
structure over the interference and show that the standard protocol can be
modified such that the average is reachable in a subspace whose dimension is
complementary to the maximal dimension of the interference subspaces (over all
of the communication links). To develop the results, we use \emph{information
alignment} to align the intended transmission (over each link) to the
null-space of the interference (on that link). We show that this alignment is
indeed invertible, i.e., the intended transmission can be recovered, and the
consensus protocol is subsequently implemented on it. That \emph{local} protocols
exist even when the collection of the interference subspaces span the entire
vector space is somewhat surprising.
|
1312.5242 | Unsupervised feature learning by augmenting single images | cs.CV cs.LG cs.NE | When deep learning is applied to visual object recognition, data augmentation
is often used to generate additional training data without extra labeling cost.
It helps to reduce overfitting and increase the performance of the algorithm.
In this paper we investigate if it is possible to use data augmentation as the
main component of an unsupervised feature learning architecture. To that end we
sample a set of random image patches and declare each of them to be a separate
single-image surrogate class. We then extend these trivial one-element classes
by applying a variety of transformations to the initial 'seed' patches. Finally
we train a convolutional neural network to discriminate between these surrogate
classes. The feature representation learned by the network can then be used in
various vision tasks. We find that this simple feature learning algorithm is
surprisingly successful, achieving competitive classification results on
several popular vision datasets (STL-10, CIFAR-10, Caltech-101).
|
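The surrogate-class construction described above can be sketched as follows. This is a toy illustration with an assumed, much smaller transformation set (flips and brightness jitter) than the paper uses, and the CNN training step is omitted.

```python
import numpy as np

def make_surrogate_classes(image, n_classes=10, patch=8, n_aug=16, seed=0):
    """Sample random 'seed' patches, then grow each into a surrogate class
    by applying random label-free transformations. Each seed patch defines
    its own one-image class, later expanded by augmentation."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    X, y = [], []
    for cls in range(n_classes):
        r = rng.integers(0, H - patch)
        c = rng.integers(0, W - patch)
        seed_patch = image[r:r + patch, c:c + patch]
        for _ in range(n_aug):
            p = seed_patch.copy()
            if rng.random() < 0.5:
                p = np.fliplr(p)
            if rng.random() < 0.5:
                p = np.flipud(p)
            # small brightness jitter, clipped back to [0, 1]
            p = np.clip(p + rng.normal(0, 0.05, p.shape), 0.0, 1.0)
            X.append(p)
            y.append(cls)
    return np.stack(X), np.array(y)

img = np.random.default_rng(42).random((64, 64))
X, y = make_surrogate_classes(img)  # (X, y) would train the discriminator
```

A classifier trained to tell these surrogate classes apart must learn transformation-invariant features, which is the unsupervised-learning signal the paper exploits.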
1312.5258 | On the Challenges of Physical Implementations of RBMs | stat.ML cs.LG | Restricted Boltzmann machines (RBMs) are powerful machine learning models,
but learning and some kinds of inference in the model require sampling-based
approximations, which, in classical digital computers, are implemented using
expensive MCMC. Physical computation offers the opportunity to reduce the cost
of sampling by building physical systems whose natural dynamics correspond to
drawing samples from the desired RBM distribution. Such a system avoids the
burn-in and mixing cost of a Markov chain. However, hardware implementations of
this variety usually entail limitations such as low-precision and limited range
of the parameters and restrictions on the size and topology of the RBM. We
conduct software simulations to determine how harmful each of these
restrictions is. Our simulations are designed to reproduce aspects of the
D-Wave quantum computer, but the issues we investigate arise in most forms of
physical computation.
|
1312.5271 | Systematic and multifactor risk models revisited | q-fin.RM cs.CE math.LO q-fin.CP stat.ML | Systematic and multifactor risk models are revisited via methods that were
already successfully developed in signal processing and automatic control. The
results, which bypass the usual criticisms of such risk models, are
illustrated by several successful computer experiments.
|
1312.5276 | Integration by parts and representation of information functionals | math.PR cs.IT math.IT | We introduce a new formalism for computing expectations of functionals of
arbitrary random vectors, by using generalised integration by parts formulae.
In doing so we extend recent representation formulae for the score function
introduced in Nourdin, Peccati and Swan (JFA, to appear) and also provide a new
proof of a central identity first discovered in Guo, Shamai, and Verd{\'u}
(IEEE Trans. Information Theory, 2005). We derive a representation for the
standardized Fisher information of sums of i.i.d. random vectors which uses our
identities to provide rates of convergence in information-theoretic central
limit theorems (both in Fisher information distance and in relative entropy).
|
1312.5297 | Tweet, but Verify: Epistemic Study of Information Verification on
Twitter | cs.SI cs.CY | While Twitter provides an unprecedented opportunity to learn about breaking
news and current events as they happen, it often produces skepticism among
users, since not all the information is accurate and hoaxes are sometimes
spread. While avoiding the diffusion of hoaxes is a major concern during
fast-paced events such as natural disasters, the study of how users trust and
verify information from tweets in these contexts has received little attention
so far. We survey users on credibility perceptions regarding witness pictures
posted on Twitter related to Hurricane Sandy. By examining credibility
perceptions on features suggested for information verification in the field of
Epistemology, we evaluate their accuracy in determining whether pictures were
real or fake compared to professional evaluations performed by experts. Our
study unveils insight about tweet presentation, as well as features that users
should look at when assessing the veracity of tweets in the context of
fast-paced events. Among our main findings: while author details not readily
available on Twitter feeds should be emphasized in order to facilitate
verification of tweets, showing multiple tweets corroborating a fact can
mislead users into trusting what is actually a hoax. We contrast some of the
behavioral patterns found on tweets with literature in Psychology research.
|
1312.5306 | Network histograms and universality of blockmodel approximation | stat.ME cs.SI math.CO math.ST stat.TH | In this article we introduce the network histogram: a statistical summary of
network interactions, to be used as a tool for exploratory data analysis. A
network histogram is obtained by fitting a stochastic blockmodel to a single
observation of a network dataset. Blocks of edges play the role of histogram
bins, and community sizes that of histogram bandwidths or bin sizes. Just as
standard histograms allow for varying bandwidths, different blockmodel
estimates can all be considered valid representations of an underlying
probability model, subject to bandwidth constraints. Here we provide methods
for automatic bandwidth selection, by which the network histogram approximates
the generating mechanism that gives rise to exchangeable random graphs. This
makes the blockmodel a universal network representation for unlabeled graphs.
With this insight, we discuss the interpretation of network communities in
light of the fact that many different community assignments can all give an
equally valid representation of such a network. To demonstrate the
fidelity-versus-interpretability tradeoff inherent in considering different
numbers and sizes of communities, we analyze two publicly available networks -
political weblogs and student friendships - and discuss how to interpret the
network histogram when additional information related to node and edge labeling
is present.
|
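A toy numerical illustration of the histogram analogy described above, assuming a planted two-block network with a known community assignment (the paper's automatic bandwidth selection is not shown):

```python
import numpy as np

def network_histogram(A, labels):
    """Estimate the blockmodel 'histogram' of a network: the empirical edge
    density within and between the given communities (the histogram bins)."""
    ks = np.unique(labels)
    D = np.zeros((len(ks), len(ks)))
    for i, a in enumerate(ks):
        for j, b in enumerate(ks):
            block = A[np.ix_(labels == a, labels == b)]
            if a == b:
                n = block.shape[0]  # within-block: exclude self-loops
                D[i, j] = block.sum() / (n * (n - 1)) if n > 1 else 0.0
            else:
                D[i, j] = block.mean()
    return D

# two planted communities: dense within blocks, sparse between them
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)
P = np.where(labels[:, None] == labels[None, :], 0.5, 0.05)
A = (rng.random((100, 100)) < P).astype(int)
A = np.triu(A, 1)
A = A + A.T                  # symmetric adjacency, no self-loops
D = network_histogram(A, labels)
```

The 2x2 matrix D plays the role of histogram bin heights; choosing the number and sizes of the communities is the bandwidth-selection problem the article addresses.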
1312.5345 | Min Flow Rate Maximization for Software Defined Radio Access Networks | cs.IT math.IT | We consider a heterogeneous network (HetNet) of base stations (BSs) connected
via a backhaul network of routers and wired/wireless links with limited
capacity. The optimal provision of such networks requires proper resource
allocation across the radio access links in conjunction with appropriate
traffic engineering within the backhaul network. In this paper we propose an
efficient algorithm for joint resource allocation across the wireless links and
the flow control within the backhaul network. The proposed algorithm, which
maximizes the minimum rate among all the users and/or flows, is based on a
decomposition approach that leverages both the Alternating Direction Method of
Multipliers (ADMM) and the weighted-MMSE (WMMSE) algorithm. We show that this
algorithm is easily parallelizable and converges globally to a stationary
solution of the joint optimization problem. The proposed algorithm can also be
extended to deal with per-flow quality-of-service constraints, or to networks
with multi-antenna nodes.
|
1312.5349 | Moving-Horizon Dynamic Power System State Estimation Using Semidefinite
Relaxation | cs.SY | Accurate power system state estimation (PSSE) is an essential prerequisite
for reliable operation of power systems. Different from static PSSE, dynamic
PSSE can exploit past measurements based on a dynamical state evolution model,
offering improved accuracy and state predictability. A key challenge is the
nonlinear measurement model, which is often tackled using linearization,
despite divergence and local optimality issues. In this work, a moving-horizon
estimation (MHE) strategy is advocated, where model nonlinearity can be
accurately captured with strong performance guarantees. To mitigate local
optimality, a semidefinite relaxation approach is adopted, which often provides
solutions close to the global optimum. Numerical tests show that the proposed
method can markedly improve upon an extended Kalman filter (EKF)-based
alternative.
|
1312.5354 | Classification of Human Ventricular Arrhythmia in High Dimensional
Representation Spaces | cs.CE cs.LG | We studied classification of human ECGs labelled as normal sinus rhythm,
ventricular fibrillation and ventricular tachycardia by means of support vector
machines in different representation spaces, using different observation
lengths. ECG waveform segments of duration 0.5-4 s, their Fourier magnitude
spectra, and lower dimensional projections of Fourier magnitude spectra were
used for classification. All considered representations were of much higher
dimension than in published studies. Classification accuracy improved with
segment duration up to 2 s, with 4 s providing little improvement. We found
that it is possible to discriminate between ventricular tachycardia and
ventricular fibrillation by the present approach with much shorter runs of ECG
(2 s, minimum 86% sensitivity per class) than previously imagined. Ensembles of
classifiers acting on 1 s segments taken over 5 s observation windows gave best
results, with sensitivities of detection for all classes exceeding 93%.
|
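As a toy illustration of classifying signals by their Fourier magnitude spectra, the sketch below replaces the paper's real ECG data, class labels, and SVMs with synthetic sinusoid "rhythms" and a nearest-centroid rule; every element here is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 250, 2                         # 250 Hz sampling, 2 s segments
t = np.arange(fs * dur) / fs

def segment(freq):
    """A crude synthetic 'rhythm': a sinusoid at a class-specific rate
    plus noise (a stand-in for an ECG waveform segment)."""
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

# two classes distinguished only by their dominant frequency
train = {0: [segment(5) for _ in range(20)],
         1: [segment(9) for _ in range(20)]}

def features(x):
    return np.abs(np.fft.rfft(x))        # Fourier magnitude spectrum

centroids = {c: np.mean([features(x) for x in xs], axis=0)
             for c, xs in train.items()}

def classify(x):
    f = features(x)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

acc = np.mean([classify(segment(f)) == c
               for c, f in [(0, 5), (1, 9)] for _ in range(20)])
```

Working in the magnitude-spectrum representation discards phase (and hence time alignment), which is one reason spectral features are attractive for rhythm classification.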
1312.5355 | Generative NeuroEvolution for Deep Learning | cs.NE cs.CV | An important goal for the machine learning (ML) community is to create
approaches that can learn solutions with human-level capability. One domain
where humans have held a significant advantage is visual processing. A
significant approach to addressing this gap has been machine learning methods
inspired by natural systems, such as artificial
neural networks (ANNs), evolutionary computation (EC), and generative and
developmental systems (GDS). Research into deep learning has demonstrated that
such architectures can achieve performance competitive with humans on some
visual tasks; however, these systems have been primarily trained through
supervised and unsupervised learning algorithms. Alternatively, research is
showing that evolution may have a significant role in the development of visual
systems. Thus this paper investigates the role neuro-evolution (NE) can take in
deep learning. In particular, Hypercube-based NeuroEvolution of Augmenting
Topologies (HyperNEAT) is a NE approach that can effectively learn large neural structures
by training an indirect encoding that compresses the ANN weight pattern as a
function of geometry. The results show that HyperNEAT struggles with performing
image classification by itself, but can be effective in training a feature
extractor that other ML approaches can learn from. Thus NeuroEvolution combined
with other ML methods provides an intriguing area of research that can
replicate the processes in nature.
|
1312.5371 | Investigating early warning signs of oscillatory instability in
simulated phasor measurements | nlin.CD cs.SY physics.soc-ph | This paper shows that the variance of load bus voltage magnitude in a small
power system test case increases monotonically as the system approaches a Hopf
bifurcation. This property can potentially be used as a method for monitoring
oscillatory stability in power grid using high-resolution phasor measurements.
Increasing variance in data from a dynamical system is a common sign of a
phenomenon known as critical slowing down (CSD): the slower recovery of
dynamical systems from perturbations as they approach critical transitions.
Earlier work has focused on studying CSD in systems approaching voltage
collapse; in this paper, we investigate its occurrence as a power system
approaches a Hopf bifurcation.
|
1312.5378 | Skolemization for Weighted First-Order Model Counting | cs.AI | First-order model counting emerged recently as a novel reasoning task, at the
core of efficient algorithms for probabilistic logics. We present a
Skolemization algorithm for model counting problems that eliminates existential
quantifiers from a first-order logic theory without changing its weighted model
count. For certain subsets of first-order logic, lifted model counters were
shown to run in time polynomial in the number of objects in the domain of
discourse, where propositional model counters require exponential time.
However, these guarantees apply only to Skolem normal form theories (i.e., no
existential quantifiers) as the presence of existential quantifiers reduces
lifted model counters to propositional ones. Since textbook Skolemization is
not sound for model counting, these restrictions precluded efficient model
counting for directed models, such as probabilistic logic programs, which rely
on existential quantification. Our Skolemization procedure extends the
applicability of first-order model counters to these representations. Moreover,
it simplifies the design of lifted model counting algorithms.
|
1312.5394 | Missing Value Imputation With Unsupervised Backpropagation | cs.NE cs.LG stat.ML | Many data mining and data analysis techniques operate on dense matrices or
complete tables of data. Real-world data sets, however, often contain unknown
values. Even many classification algorithms that are designed to operate with
missing values still exhibit deteriorated accuracy. One approach to handling
missing values is to fill in (impute) the missing values. In this paper, we
present a technique for unsupervised learning called Unsupervised
Backpropagation (UBP), which trains a multi-layer perceptron to fit to the
manifold sampled by a set of observed point-vectors. We evaluate UBP with the
task of imputing missing values in datasets, and show that UBP is able to
predict missing values with significantly lower sum-squared error than other
collaborative filtering and imputation techniques. We also demonstrate with 24
datasets and 9 supervised learning algorithms that classification accuracy is
usually higher when randomly-withheld values are imputed using UBP, rather than
with other methods.
|
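The imputation idea above can be sketched in a deliberately simplified form: UBP trains a multi-layer perceptron on intrinsic input vectors, while the toy below fits a single low-rank (linear) model to the observed entries only, an assumed single-layer analogue rather than the authors' method.

```python
import numpy as np

def impute(X, mask, rank=2, lr=0.01, iters=3000, seed=0):
    """Fit a low-rank model U @ V to the OBSERVED entries only by gradient
    descent, then read predictions off the missing entries."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.normal(0, 0.1, (n, rank))
    V = rng.normal(0, 0.1, (rank, m))
    for _ in range(iters):
        E = (U @ V - X) * mask          # error on observed cells only
        U -= lr * E @ V.T
        V -= lr * U.T @ E
    return U @ V

rng = np.random.default_rng(1)
true = rng.random((15, 2)) @ rng.random((2, 10))   # rank-2 ground truth
mask = rng.random(true.shape) < 0.7                # ~70% of cells observed
filled = impute(true * mask, mask)
err = np.abs(filled - true)[~mask].mean()          # error on missing cells
```

Because the loss is computed only where `mask` is true, the model never sees the held-out cells, yet the low-rank structure lets it fill them in; UBP generalizes this by replacing the linear map with a multi-layer network.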
1312.5398 | Continuous Learning: Engineering Super Features With Feature Algebras | cs.LG stat.ML | In this paper we consider the problem of searching a space of predictive models
for a given training data set. We propose an iterative procedure for deriving a
sequence of improving models and a corresponding sequence of sets of non-linear
features on the original input space. After a finite number of iterations N,
the non-linear features become 2^N -degree polynomials on the original space.
We show that in a limit of an infinite number of iterations derived non-linear
features must form an associative algebra: a product of two features is equal
to a linear combination of features from the same feature space for any given
input point. Because each iteration consists of solving a series of convex
problems that contain all previous solutions, the likelihood of the models in
the sequence increases with each iteration, while the dimension of the model
parameter space is kept at a limited, controlled value.
|
1312.5402 | Some Improvements on Deep Convolutional Neural Network Based Image
Classification | cs.CV | We investigate multiple techniques to improve upon the current state-of-the-art
deep convolutional neural network based image classification pipeline. The
techniques include adding more image transformations to training data, adding
more transformations to generate additional predictions at test time and using
complementary models applied to higher resolution images. This paper summarizes
our entry in the Imagenet Large Scale Visual Recognition Challenge 2013. Our
system achieved a top 5 classification error rate of 13.55% using no external
data which is over a 20% relative improvement on the previous year's winner.
|
1312.5412 | Approximated Infomax Early Stopping: Revisiting Gaussian RBMs on Natural
Images | stat.ML cs.LG | We pursue an early stopping technique that helps Gaussian Restricted
Boltzmann Machines (GRBMs) to gain good natural image representations in terms
of overcompleteness and data fitting. GRBMs are widely considered as an
unsuitable model for natural images because they gain non-overcomplete
representations which include uniform filters that do not represent useful
image features. Contrary to this common perspective, we have recently found
that GRBMs first gain and subsequently lose useful filters during training.
We attribute this phenomenon to a tradeoff between overcompleteness of GRBM
representations and data fitting. To gain GRBM representations that are
overcomplete and fit data well, we propose a measure for GRBM representation
quality, approximated mutual information, and an early stopping technique based
on this measure. The proposed method boosts performance of classifiers trained
on GRBM representations.
|
1312.5419 | Large-scale Multi-label Text Classification - Revisiting Neural Networks | cs.LG | Neural networks have recently been proposed for multi-label classification
because they are able to capture and model label dependencies in the output
layer. In this work, we investigate limitations of BP-MLL, a neural network
(NN) architecture that aims at minimizing pairwise ranking error. Instead, we
propose to use a comparably simple NN approach with recently proposed learning
techniques for large-scale multi-label text classification tasks. In
particular, we show that BP-MLL's ranking loss minimization can be efficiently
and effectively replaced with the commonly used cross entropy error function,
and demonstrate that several advances in neural network training that have been
developed in the realm of deep learning can be effectively employed in this
setting. Our experimental results show that simple NN models equipped with
advanced techniques such as rectified linear units, dropout, and AdaGrad
perform as well as or even outperform state-of-the-art approaches on six
large-scale textual datasets with diverse characteristics.
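The replacement of BP-MLL's ranking loss described above can be sketched as follows (a generic illustration of sigmoid outputs with a cross-entropy error, using made-up toy values; not the authors' code):

```python
import numpy as np

def multilabel_cross_entropy(logits, targets):
    """Cross-entropy error over independent sigmoid outputs, one per
    label -- the commonly used loss that the paper shows can replace
    BP-MLL's pairwise ranking loss."""
    p = 1.0 / (1.0 + np.exp(-logits))          # per-label probabilities
    return -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))

# toy minibatch: one document, two labels (first relevant, second not)
logits = np.array([[10.0, -10.0]])
targets = np.array([[1.0, 0.0]])
loss_good = multilabel_cross_entropy(logits, targets)    # near zero
loss_bad = multilabel_cross_entropy(-logits, targets)    # large
```

Each label is scored independently, so the loss decomposes per label and trains well with standard stochastic gradient methods.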
|
1312.5434 | Asynchronous Adaptation and Learning over Networks --- Part I: Modeling
and Stability Analysis | cs.SY cs.IT cs.LG math.IT math.OC | In this work and the supporting Parts II [2] and III [3], we provide a rather
detailed analysis of the stability and performance of asynchronous strategies
for solving distributed optimization and adaptation problems over networks. We
examine asynchronous networks that are subject to fairly general sources of
uncertainties, such as changing topologies, random link failures, random data
arrival times, and agents turning on and off randomly. Under this model, agents
in the network may stop updating their solutions or may stop sending or
receiving information in a random manner and without coordination with other
agents. We establish in Part I conditions on the first and second-order moments
of the relevant parameter distributions to ensure mean-square stable behavior.
We derive in Part II expressions that reveal how the various parameters of the
asynchronous behavior influence network performance. We compare in Part III the
performance of asynchronous networks to the performance of both centralized
solutions and synchronous networks. One notable conclusion is that the
mean-square-error performance of asynchronous networks shows a degradation only
of the order of $O(\nu)$, where $\nu$ is a small step-size parameter, while the
convergence rate remains largely unaltered. The results provide a solid
justification for the remarkable resilience of cooperative networks in the face
of random failures at multiple levels: agents, links, data arrivals, and
topology.
|
1312.5438 | Asynchronous Adaptation and Learning over Networks - Part II:
Performance Analysis | cs.SY cs.IT cs.LG math.IT math.OC | In Part I \cite{Zhao13TSPasync1}, we introduced a fairly general model for
asynchronous events over adaptive networks including random topologies, random
link failures, random data arrival times, and agents turning on and off
randomly. We performed a stability analysis and established the notable fact
that the network is still able to converge in the mean-square-error sense to
the desired solution. Once stable behavior is guaranteed, it becomes important
to evaluate how fast the iterates converge and how close they get to the
optimal solution. This is a demanding task due to the various asynchronous
events and due to the fact that agents influence each other. In this Part II,
we carry out a detailed analysis of the mean-square-error performance of
asynchronous strategies for solving distributed optimization and adaptation
problems over networks. We derive analytical expressions for the mean-square
convergence rate and the steady-state mean-square-deviation. The expressions
reveal how the various parameters of the asynchronous behavior influence
network performance. In the process, we establish the interesting conclusion
that even under the influence of asynchronous events, all agents in the
adaptive network can still reach an $O(\nu^{1 + \gamma_o'})$ near-agreement
with some $\gamma_o' > 0$ while approaching the desired solution within
$O(\nu)$ accuracy, where $\nu$ is proportional to the small step-size parameter
for adaptation.
|
1312.5439 | Asynchronous Adaptation and Learning over Networks - Part III:
Comparison Analysis | cs.SY cs.IT cs.LG math.IT math.OC | In Part II [3] we carried out a detailed mean-square-error analysis of the
performance of asynchronous adaptation and learning over networks under a
fairly general model for asynchronous events including random topologies,
random link failures, random data arrival times, and agents turning on and off
randomly. In this Part III, we compare the performance of synchronous and
asynchronous networks. We also compare the performance of decentralized
adaptation against centralized stochastic-gradient (batch) solutions. Two
interesting conclusions stand out. First, the results establish that the
performance of adaptive networks is largely immune to the effect of
asynchronous events: the mean and mean-square convergence rates and the
asymptotic bias values are not degraded relative to synchronous or centralized
implementations. Only the steady-state mean-square-deviation suffers a
degradation in the order of $\nu$, which represents the small step-size
parameters used for adaptation. Second, the results show that the adaptive
distributed network matches the performance of the centralized solution. These
conclusions highlight another critical benefit of cooperation by networked
agents: cooperation not only enhances performance in comparison to
stand-alone single-agent processing, but also endows the network with
remarkable resilience to various forms of random failure events, enabling it
to deliver performance as powerful as that of batch solutions.
|
1312.5444 | Blind Denoising with Random Greedy Pursuits | cs.IT math.IT | Denoising methods require some assumptions about the signal of interest and
the noise. While most denoising procedures require some knowledge about the
noise level, which may be unknown in practice, here we assume that the signal
expansion in a given dictionary has a distribution that is more heavy-tailed
than the noise. We show how this hypothesis leads to a stopping criterion for
greedy pursuit algorithms which is independent from the noise level. Inspired
by the success of ensemble methods in machine learning, we propose a strategy
to reduce the variance of greedy estimates by averaging pursuits obtained from
randomly subsampled dictionaries. We call this denoising procedure Blind Random
Pursuit Denoising (BIRD). We offer a generalization to multidimensional
signals, with a structured sparse model (S-BIRD). The relevance of this
approach is demonstrated on synthetic and experimental MEG signals where,
without any parameter tuning, BIRD outperforms state-of-the-art algorithms even
when they are informed by the noise level. Code is available to reproduce all
experiments.
|
1312.5457 | Codebook based Audio Feature Representation for Music Information
Retrieval | cs.IR cs.LG cs.MM | Digital music has become prolific on the web in recent decades. Automated
recommendation systems are essential for users to discover music they love and
for artists to reach an appropriate audience. When manual annotations and user
preference data are lacking (e.g. for new artists), these systems must rely on
\emph{content based} methods. Besides powerful machine learning tools for
classification and retrieval, a key component for successful recommendation is
the \emph{audio content representation}.
Good representations should capture informative musical patterns in the audio
signal of songs. These representations should be concise, to enable efficient
(low storage, easy indexing, fast search) management of huge music
repositories, and should also be easy and fast to compute, to enable real-time
interaction with a user supplying new songs to the system.
Before designing new audio features, we explore the usage of traditional
local features, while adding a stage of encoding with a pre-computed
\emph{codebook} and a stage of pooling to get compact vectorial
representations. We experiment with different encoding methods, namely
\emph{the LASSO}, \emph{vector quantization (VQ)} and \emph{cosine similarity
(CS)}. We evaluate the representations' quality in two music information
retrieval applications: query-by-tag and query-by-example. Our results show
that concise representations can be used for successful performance in both
applications. We recommend using top-$\tau$ VQ encoding, which consistently
performs well in both applications, and requires much less computation time
than the LASSO.
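As a concrete illustration, the top-$\tau$ VQ encoding recommended above can be sketched as follows (the mean-pooling stage, function names, and toy data are illustrative assumptions, not details from the paper):

```python
import numpy as np

def top_tau_vq(features, codebook, tau=2):
    """Top-tau vector quantization with mean pooling (sketch): each
    local audio feature activates its tau nearest codewords with
    weight 1/tau, and the per-feature codes are mean-pooled into a
    single compact vector representing the whole song."""
    # pairwise squared distances: features (n, d) vs codebook (k, d)
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = np.zeros((features.shape[0], codebook.shape[0]))
    nearest = np.argsort(d2, axis=1)[:, :tau]      # tau nearest codewords
    rows = np.arange(features.shape[0])[:, None]
    codes[rows, nearest] = 1.0 / tau
    return codes.mean(axis=0)                      # pooled representation

# toy example: 3 local features, codebook of 4 codewords
feats = np.array([[0.0, 0.0], [0.15, 0.0], [1.0, 1.0]])
cb = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [5.0, 5.0]])
rep = top_tau_vq(feats, cb, tau=1)   # one-hot per feature, then pooled
```

The pooled vector has the codebook's fixed length regardless of how many local features a song produces, which is what makes the representation concise and easy to index.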
|
1312.5465 | Learning rates of $l^q$ coefficient regularization learning with
Gaussian kernel | cs.LG stat.ML | Regularization is a well recognized powerful strategy to improve the
performance of a learning machine and $l^q$ regularization schemes with
$0<q<\infty$ are in widespread use. It is known that different choices of $q$
lead to different properties of the deduced estimators; say, $l^2$ regularization leads
to smooth estimators while $l^1$ regularization leads to sparse estimators.
How, then, do the generalization capabilities of $l^q$ regularization learning
vary with $q$? In this paper, we study this problem in the framework of
statistical learning theory and show that implementing $l^q$ coefficient
regularization schemes in the sample dependent hypothesis space associated with
Gaussian kernel can attain the same almost optimal learning rates for all
$0<q<\infty$. That is, the upper and lower bounds of learning rates for $l^q$
regularization learning are asymptotically identical for all $0<q<\infty$. Our
finding tentatively reveals that, in some modeling contexts, the choice of $q$
might not have a strong impact with respect to the generalization capability.
From this perspective, $q$ can be specified arbitrarily, or chosen merely by
other, non-generalization criteria such as smoothness, computational
complexity, or sparsity.
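For concreteness, the $l^q$ coefficient regularization scheme in the sample-dependent hypothesis space associated with a Gaussian kernel $K_\sigma$ can be written in the usual form (a standard formulation assumed from the abstract's description, not quoted from the paper):

```latex
% hypothesis space spanned by the Gaussian kernel at the sample points
f_{\mathbf{a}} = \sum_{i=1}^{m} a_i K_\sigma(x_i, \cdot),
\qquad K_\sigma(x, x') = \exp\!\left(-\|x - x'\|^2 / \sigma^2\right),
% l^q coefficient regularization scheme
\hat{\mathbf{a}} = \arg\min_{\mathbf{a} \in \mathbb{R}^m}
  \left\{ \frac{1}{m} \sum_{i=1}^{m}
    \bigl( f_{\mathbf{a}}(x_i) - y_i \bigr)^2
    + \lambda \sum_{i=1}^{m} |a_i|^q \right\},
\qquad 0 < q < \infty .
```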
|
1312.5479 | Sparse similarity-preserving hashing | cs.CV cs.DS | In recent years, a lot of attention has been devoted to efficient nearest
neighbor search by means of similarity-preserving hashing. One of the drawbacks
of existing hashing techniques is the intrinsic trade-off between performance
and computational complexity: while longer hash codes allow for lower false
positive rates, it is very difficult to increase the embedding dimensionality
without incurring very high false negative rates or prohibitive
computational costs. In this paper, we propose a way to overcome this
limitation by enforcing the hash codes to be sparse. Sparse high-dimensional
codes enjoy the low false positive rates typical of long hashes, while
keeping the false negative rates similar to those of a shorter dense hashing
scheme with an equal number of degrees of freedom. We use a tailored feed-forward
neural network for the hashing function. Extensive experimental evaluation
involving visual and multi-modal data shows the benefits of the proposed
method.
|
1312.5486 | Molecular communication networks with general molecular circuit
receivers | q-bio.MN cs.IT math.IT | In a molecular communication network, transmitters may encode information in
the concentration or frequency of signalling molecules. When the signalling
molecules reach the receivers, they react, via a set of chemical reactions or a
molecular circuit, to produce output molecules. The count of output molecules
over time forms the output signal of the receiver. The aim of this paper is to
investigate the impact of different reaction types on the information
transmission capacity of molecular communication networks. We realise this aim
by using a general molecular circuit model. We derive general expressions of
mean receiver output, and signal and noise spectra. We use these expressions to
investigate the information transmission capacities of a number of molecular
circuits.
|
1312.5515 | Conservative, Proportional and Optimistic Contextual Discounting in the
Belief Functions Theory | cs.AI | Information discounting plays an important role in the theory of belief
functions and, generally, in information fusion. Nevertheless, neither
classical uniform discounting nor contextual discounting can model certain use
cases, notably temporal discounting. In this article, new contextual discounting
schemes, conservative, proportional and optimistic, are proposed. Some
properties of these discounting operations are examined. Classical discounting
is shown to be a special case of these schemes. Two motivating cases are
discussed: modelling of source reliability and application to temporal
discounting.
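For reference, classical uniform discounting, which the abstract states is a special case of the proposed schemes, can be sketched as follows (the dictionary-based representation of a mass function and the toy frame are illustrative assumptions):

```python
def discount(m, alpha, omega):
    """Classical (uniform) discounting of a mass function m with
    reliability factor alpha: every focal mass is scaled by alpha,
    and the remaining 1 - alpha is transferred to total ignorance
    (the whole frame omega)."""
    out = {A: alpha * v for A, v in m.items() if A != omega}
    out[omega] = 1.0 - alpha + alpha * m.get(omega, 0.0)
    return out

# toy mass function on the frame omega = {a, b}
omega = frozenset("ab")
m = {frozenset("a"): 0.6, omega: 0.4}
md = discount(m, alpha=0.9, omega=omega)   # {a}: 0.54, omega: 0.46
```

The contextual schemes proposed in the paper refine this by letting the reliability depend on the context in which the source reports.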
|
1312.5542 | Word Embeddings through Hellinger PCA | cs.CL cs.LG | Word embeddings resulting from neural language models have been shown to be
successful for a large variety of NLP tasks. However, such architectures can
be difficult to train and time-consuming. Instead, we propose to drastically
simplify the word embeddings computation through a Hellinger PCA of the word
co-occurrence matrix. We compare those new word embeddings with some well-known
embeddings on NER and movie review tasks and show that we can reach similar or
even better performance. Although deep learning is not really necessary for
generating good word embeddings, we show that it can provide an easy way to
adapt embeddings to specific tasks.
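The simplified pipeline described above can be sketched roughly as follows (the function name, the centering step, and the toy co-occurrence counts are illustrative assumptions):

```python
import numpy as np

def hellinger_pca(cooc, dim):
    """Sketch of the pipeline: row-normalize word co-occurrence counts
    into distributions P(context | word), apply the square-root
    (Hellinger) transform, and reduce dimensionality with PCA via SVD."""
    P = cooc / cooc.sum(axis=1, keepdims=True)   # P(context | word)
    H = np.sqrt(P)                               # Hellinger transform
    H = H - H.mean(axis=0)                       # center for PCA
    U, S, Vt = np.linalg.svd(H, full_matrices=False)
    return U[:, :dim] * S[:dim]                  # low-dim word embeddings

# toy co-occurrence matrix: 4 words x 3 context words
# (words 0 and 1 share contexts, as do words 2 and 3)
cooc = np.array([[8., 1., 1.],
                 [7., 2., 1.],
                 [1., 1., 8.],
                 [2., 1., 7.]])
emb = hellinger_pca(cooc, dim=2)
```

Because the whole computation is a normalization plus one SVD, it avoids the training cost of a neural language model entirely.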
|
1312.5548 | My First Deep Learning System of 1991 + Deep Learning Timeline 1962-2013 | cs.NE | Deep Learning has attracted significant attention in recent years. Here I
present a brief overview of my first Deep Learner of 1991, and its historic
context, with a timeline of Deep Learning highlights.
|
1312.5559 | Distributional Models and Deep Learning Embeddings: Combining the Best
of Both Worlds | cs.CL | There are two main approaches to the distributed representation of words:
low-dimensional deep learning embeddings and high-dimensional distributional
models, in which each dimension corresponds to a context word. In this paper,
we combine these two approaches by learning embeddings based on
distributional-model vectors - as opposed to one-hot vectors as is standardly
done in deep learning. We show that the combined approach has better
performance on a word relatedness judgment task.
|
1312.5568 | An Adaptive Dictionary Learning Approach for Modeling Dynamical Textures | cs.CV | Video representation is an important and challenging task in the computer
vision community. In this paper, we assume that image frames of a moving scene
can be modeled as a Linear Dynamical System. We propose a sparse coding
framework, named adaptive video dictionary learning (AVDL), to model a video
adaptively. The developed framework is able to capture the dynamics of a moving
scene by exploring both sparse properties and the temporal correlations of
consecutive video frames. The proposed method is compared with
state-of-the-art video processing methods on several benchmark data sequences,
which exhibit
appearance changes and heavy occlusions.
|
1312.5578 | Multimodal Transitions for Generative Stochastic Networks | cs.LG stat.ML | Generative Stochastic Networks (GSNs) have been recently introduced as an
alternative to traditional probabilistic modeling: instead of parametrizing the
data distribution directly, one parametrizes a transition operator for a Markov
chain whose stationary distribution is an estimator of the data generating
distribution. The result of training is therefore a machine that generates
samples through this Markov chain. However, the previously introduced GSN
consistency theorems suggest that in order to capture a wide class of
distributions, the transition operator in general should be multimodal,
something that has not been done before this paper. We introduce for the first
time multimodal transition distributions for GSNs, in particular using models
in the NADE family (Neural Autoregressive Density Estimator) as output
distributions of the transition operator. A NADE model is related to an RBM
(and can thus model multimodal distributions) but its likelihood (and
likelihood gradient) can be computed easily. The parameters of the NADE are
obtained as a learned function of the previous state of the learned Markov
chain. Experiments clearly illustrate the advantage of such multimodal
transition distributions over unimodal GSNs.
|
1312.5598 | Vulnerability and power on networks | cs.SI physics.soc-ph | Inspired by socio-political scenarios, like dictatorships, in which a
minority of people exercise control over a majority of weakly interconnected
individuals, we propose vulnerability and power measures defined on groups of
actors of networks. We establish an unexpected connection between network
vulnerability and graph regularizability. We use the Shapley value of coalition
games to introduce fresh notions of vulnerability and power at node level
defined in terms of the corresponding measures at group level. We investigate
the computational complexity of computing the defined measures, both at group
and node levels, and provide effective methods to quantify them. Finally we
test vulnerability and power on both artificial and real networks.
|
1312.5602 | Playing Atari with Deep Reinforcement Learning | cs.LG | We present the first deep learning model to successfully learn control
policies directly from high-dimensional sensory input using reinforcement
learning. The model is a convolutional neural network, trained with a variant
of Q-learning, whose input is raw pixels and whose output is a value function
estimating future rewards. We apply our method to seven Atari 2600 games from
the Arcade Learning Environment, with no adjustment of the architecture or
learning algorithm. We find that it outperforms all previous approaches on six
of the games and surpasses a human expert on three of them.
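The Q-learning target behind such a variant can be sketched as follows (a generic illustration of the standard one-step target $y = r + \gamma \max_{a'} Q(s', a')$ with toy numbers, not the paper's training code):

```python
import numpy as np

def q_learning_targets(rewards, q_next, terminal, gamma=0.99):
    """One-step Q-learning targets for a minibatch of transitions:
    y = r for terminal transitions, and
    y = r + gamma * max_a' Q(s', a') otherwise."""
    return rewards + gamma * q_next.max(axis=1) * (~terminal)

# toy minibatch: 3 transitions, 2 actions available in the next state
rewards = np.array([1.0, 0.0, -1.0])
q_next = np.array([[0.5, 2.0], [1.0, 0.0], [3.0, 1.0]])
terminal = np.array([False, False, True])
y = q_learning_targets(rewards, q_next, terminal)
# y = [1 + 0.99*2, 0 + 0.99*1, -1] = [2.98, 0.99, -1.0]
```

In the paper's setting, the network producing `q_next` takes raw pixels as input and outputs one value per action, and these targets drive the regression loss.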
|
1312.5604 | Learning Transformations for Classification Forests | cs.CV cs.LG stat.ML | This work introduces a transformation-based learner model for classification
forests. The weak learner at each split node plays a crucial role in a
classification tree. We propose to optimize the splitting objective by learning
a linear transformation on subspaces using nuclear norm as the optimization
criteria. The learned linear transformation restores a low-rank structure for
data from the same class, and, at the same time, maximizes the separation
between different classes, thereby improving the performance of the split
function. Theoretical and experimental results support the proposed framework.
|
1312.5641 | Recursive Robust PCA or Recursive Sparse Recovery in Large but
Structured Noise (parts 1 and 2 combined) | cs.IT math.IT | This work studies the recursive robust principal components analysis (PCA)
problem. If the outlier is the signal-of-interest, this problem can be
interpreted as one of recursively recovering a time sequence of sparse vectors,
$S_t$, in the presence of large but structured noise, $L_t$. The structure that
we assume on $L_t$ is that $L_t$ is dense and lies in a low dimensional
subspace that is either fixed or changes "slowly enough". A key application
where this problem occurs is in video surveillance where the goal is to
separate a slowly changing background ($L_t$) from moving foreground objects
($S_t$) on-the-fly. To solve the above problem, in recent work, we introduced a
novel solution called Recursive Projected CS (ReProCS). In this work we develop
a simple modification of the original ReProCS idea and analyze it. This
modification assumes knowledge of a subspace change model on the $L_t$'s. Under
mild assumptions and a denseness assumption on the unestimated part of the
subspace of $L_t$ at various times, we show that, with high probability
(w.h.p.), the proposed approach can exactly recover the support set of $S_t$ at
all times; and the reconstruction errors of both $S_t$ and $L_t$ are upper
bounded by a time-invariant and small value. In simulation experiments, we
observe that the last assumption holds as long as there is some support change
of $S_t$ every few frames.
|
1312.5650 | Zero-Shot Learning by Convex Combination of Semantic Embeddings | cs.LG | Several recent publications have proposed methods for mapping images into
continuous semantic embedding spaces. In some cases the embedding space is
trained jointly with the image transformation. In other cases the semantic
embedding space is established by an independent natural language processing
task, and then the image transformation into that space is learned in a second
stage. Proponents of these image embedding systems have stressed their
advantages over the traditional \nway{} classification framing of image
understanding, particularly in terms of the promise for zero-shot learning --
the ability to correctly annotate images of previously unseen object
categories. In this paper, we propose a simple method for constructing an image
embedding system from any existing \nway{} image classifier and a semantic word
embedding model, which contains the $\n$ class labels in its vocabulary. Our
method maps images into the semantic embedding space via convex combination of
the class label embedding vectors, and requires no additional training. We show
that this simple and direct method confers many of the advantages associated
with more complex image embedding schemes, and indeed outperforms
state-of-the-art methods on the ImageNet zero-shot learning task.
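The convex-combination mapping described above can be sketched as follows (the toy vocabulary, two-dimensional word vectors, and top-$T$ truncation value are illustrative assumptions):

```python
import numpy as np

# hypothetical toy setup: word embeddings for 3 training classes
word_vecs = {
    "cat": np.array([1.0, 0.0]),
    "dog": np.array([0.8, 0.2]),
    "car": np.array([0.0, 1.0]),
}
classes = list(word_vecs)

def convex_embedding(class_probs, top_t=2):
    """Sketch of the convex-combination mapping: embed an image as the
    probability-weighted average of the word vectors of its top-T
    predicted classes, with the probabilities renormalized to sum to 1."""
    top = sorted(zip(classes, class_probs), key=lambda p: -p[1])[:top_t]
    z = sum(p for _, p in top)
    return sum(p / z * word_vecs[c] for c, p in top)

# classifier output for one image: mostly "cat", somewhat "dog"
probs = [0.6, 0.3, 0.1]
emb = convex_embedding(probs)
# emb = (0.6/0.9)*vec(cat) + (0.3/0.9)*vec(dog)
```

Zero-shot annotation then amounts to a nearest-neighbor search from `emb` to the word vectors of unseen class labels; no additional training is involved.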
|
1312.5663 | k-Sparse Autoencoders | cs.LG | Recently, it has been observed that when representations are learnt in a way
that encourages sparsity, improved performance is obtained on classification
tasks. These methods involve combinations of activation functions, sampling
steps and different kinds of penalties. To investigate the effectiveness of
sparsity by itself, we propose the k-sparse autoencoder, which is an
autoencoder with linear activation function, where in hidden layers only the k
highest activities are kept. When applied to the MNIST and NORB datasets, we
find that this method achieves better classification results than denoising
autoencoders, networks trained with dropout, and RBMs. k-sparse autoencoders
are simple to train and the encoding stage is very fast, making them
well-suited to large problem sizes, where conventional sparse coding algorithms
cannot be applied.
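The hidden-layer rule described above (linear activities, keep only the k largest) can be sketched as follows (names and toy values are illustrative):

```python
import numpy as np

def k_sparse_forward(x, W, b, k):
    """Hidden layer of a k-sparse autoencoder (sketch): compute linear
    activities z = W x + b, then keep only the k highest activities
    and set all others to zero."""
    z = W @ x + b
    if k < z.size:
        # indices of all but the k highest activities
        idx = np.argpartition(z, -k)[:-k]
        z[idx] = 0.0
    return z

# toy example: 5 hidden units, keep the k = 2 largest activities
W = np.eye(5)
b = np.zeros(5)
x = np.array([0.1, 0.9, 0.3, 0.7, 0.2])
h = k_sparse_forward(x, W, b, k=2)
# only the two largest activities (0.9 and 0.7) survive
```

Because the encoder is just a matrix product followed by a top-k selection, encoding is very fast, which is what makes the method attractive for large problem sizes.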
|
1312.5673 | Flower Pollination Algorithm for Global Optimization | math.OC cs.NE nlin.AO | Flower pollination is an intriguing process in the natural world. Its
evolutionary characteristics can be used to design new optimization algorithms.
In this paper, we propose a new algorithm, namely, flower pollination
algorithm, inspired by the pollination process of flowers. We first use ten
test functions to validate the new algorithm, and compare its performance with
genetic algorithms and particle swarm optimization. Our simulation results show
that the flower algorithm is more efficient than both GA and PSO. We also use the
flower algorithm to solve a nonlinear design benchmark, which shows the
convergence rate is almost exponential.
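A minimal sketch of a flower pollination algorithm in this spirit, assuming the commonly used Lévy-flight global step and difference-based local step (the switch probability, scaling, and test setup are illustrative assumptions):

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, rng, beta=1.5):
    """Levy-flight step via Mantegna's algorithm, a standard choice
    for the global pollination move."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def flower_pollination(f, dim=2, n=15, iters=200, p=0.8, seed=0):
    """Sketch of the flower pollination algorithm: with probability p,
    global pollination moves a flower toward the current best along a
    Levy flight; otherwise local pollination mixes two random flowers.
    Improvements are accepted greedily."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-2, 2, (n, dim))
    best = min(X, key=f).copy()
    for _ in range(iters):
        for i in range(n):
            if rng.random() < p:                    # global pollination
                cand = X[i] + 0.1 * levy(dim, rng) * (best - X[i])
            else:                                   # local pollination
                j, k = rng.integers(0, n, size=2)
                cand = X[i] + rng.random() * (X[j] - X[k])
            if f(cand) < f(X[i]):                   # greedy acceptance
                X[i] = cand
                if f(cand) < f(best):
                    best = cand.copy()
    return best

sphere = lambda x: float(np.sum(x ** 2))   # simple convex test function
best = flower_pollination(sphere)
```

The heavy-tailed Lévy steps occasionally jump far from the current best, which helps the population escape local minima on multimodal test functions.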
|
1312.5697 | Using Web Co-occurrence Statistics for Improving Image Categorization | cs.CV cs.LG | Object recognition and localization are important tasks in computer vision.
The focus of this work is the incorporation of contextual information in order
to improve object recognition and localization. For instance, one naturally
does not expect an elephant to appear in the middle of an ocean. We consider
a simple approach to encapsulate such common sense knowledge using
co-occurrence statistics from web documents. By merely counting the number of
times nouns (such as elephants, sharks, oceans, etc.) co-occur in web
documents, we obtain a good estimate of expected co-occurrences in visual data.
We then cast the problem of combining textual co-occurrence statistics with the
predictions of image-based classifiers as an optimization problem. The
resulting optimization problem serves as a surrogate for our inference
procedure. Despite its simplicity, the resulting optimization problem is
effective in improving both recognition and localization accuracy. Concretely,
we observe significant improvements in recognition and localization rates for
both ImageNet Detection 2012 and Sun 2012 datasets.
|
1312.5704 | Design of Field Programmable Gate Array (FPGA) Based Emulators for Motor
Control Applications | cs.SY | Problem Statement: Field Programmable Gate Array (FPGA) circuits play a
significant role in major recent embedded process control designs. However,
exploiting these platforms requires deep hardware conception skills and remains
an important time-consuming stage in a design flow. The High Level Synthesis
technique avoids this bottleneck and increases design productivity, as witnessed
by industry specialists. Approach: This study proposes to apply this technique
for the conception and implementation of a Real Time Direct Current Machine
(RTDCM) emulator for an embedded control application. Results: Several
FPGA-based configuration scenarios are studied. A series of tests including
design and timing-precision analysis were conducted to discuss and validate the
obtained hardware architectures. Conclusion/Recommendations: The proposed
methodology accelerated the design process and, in addition, provided extra
time to refine the hardware core of the DCM emulator. The high-level synthesis
technique can be applied to the control field, especially to test new
algorithms and motor models at low cost and with short delays.
|
1312.5713 | Giving the AI definition a form suitable for the engineer | cs.AI | Artificial Intelligence - what is this? That is the question! In earlier
papers we already gave a formal definition for AI, but if one desires to build
an actual AI implementation, the following issues require attention and are
treated here: the data format to be used, the idea of Undef and Nothing
symbols, various ways for defining the "meaning of life", and finally, a new
notion of "incorrect move". These questions are of minor importance in the
theoretical discussion, but we already know the answer of the question "Does AI
exist?" Now we want to make the next step and to create this program.
|
1312.5714 | Avoiding Confusion between Predictors and Inhibitors in Value Function
Approximation | cs.AI | In reinforcement learning, the goal is to seek rewards and avoid punishments.
A simple scalar captures the value of a state or of taking an action, where
expected future rewards increase and punishments decrease this quantity.
Naturally an agent should learn to predict this quantity to take beneficial
actions, and many value function approximators exist for this purpose. In the
present work, however, we show how value function approximators can cause
confusion between predictors of an outcome of one valence (e.g., a signal of
reward) and the inhibitor of the opposite valence (e.g., a signal canceling
expectation of punishment). We show this to be a problem for both linear and
non-linear value function approximators, especially when the amount of data (or
experience) is limited. We propose and evaluate a simple resolution: to instead
predict reward and punishment values separately, and rectify and add them to
get the value needed for decision making. We evaluate several function
approximators in this slightly different value function approximation
architecture and show that this approach is able to circumvent the confusion
and thereby achieve lower value-prediction errors.
|
1312.5734 | Time-varying Learning and Content Analytics via Sparse Factor Analysis | stat.ML cs.LG math.OC stat.AP | We propose SPARFA-Trace, a new machine learning-based framework for
time-varying learning and content analytics for education applications. We
develop a novel message passing-based, blind, approximate Kalman filter for
sparse factor analysis (SPARFA), that jointly (i) traces learner concept
knowledge over time, (ii) analyzes learner concept knowledge state transitions
(induced by interacting with learning resources, such as textbook sections,
lecture videos, etc, or the forgetting effect), and (iii) estimates the content
organization and intrinsic difficulty of the assessment questions. These
quantities are estimated solely from binary-valued (correct/incorrect) graded
learner response data and a summary of the specific actions each learner
performs (e.g., answering a question or studying a learning resource) at each
time instance. Experimental results on two online course datasets demonstrate
that SPARFA-Trace is capable of tracing each learner's concept knowledge
evolution over time, as well as analyzing the quality and content organization
of learning resources, the question-concept associations, and the question
intrinsic difficulties. Moreover, we show that SPARFA-Trace achieves comparable
or better performance in predicting unobserved learner responses than existing
collaborative filtering and knowledge tracing approaches for personalized
education.
|
1312.5753 | SOMz: photometric redshift PDFs with self organizing maps and random
atlas | astro-ph.IM astro-ph.CO cs.LG stat.ML | In this paper we explore the applicability of the unsupervised machine
learning technique of Self Organizing Maps (SOM) to estimate galaxy photometric
redshift probability density functions (PDFs). This technique takes a
spectroscopic training set, and maps the photometric attributes, but not the
redshifts, to a two dimensional surface by using a process of competitive
learning where neurons compete to more closely resemble the training data
multidimensional space. The key feature of a SOM is that it retains the
topology of the input set, revealing correlations between the attributes that
are not easily identified. We test three different 2D topological mapping:
rectangular, hexagonal, and spherical, by using data from the DEEP2 survey. We
also explore different implementations and boundary conditions on the map and
also introduce the idea of a random atlas where a large number of different
maps are created and their individual predictions are aggregated to produce a
more robust photometric redshift PDF. We also introduce a new metric, the
$I$-score, which efficiently incorporates different metrics, making it easier
to compare different results (from different parameters or different
photometric redshift codes). We find that by using a spherical topology mapping
we obtain a better representation of the underlying multidimensional topology,
which provides more accurate results that are comparable to other,
state-of-the-art machine learning algorithms. Our results illustrate that
unsupervised approaches have great potential for many astronomical problems,
and in particular for the computation of photometric redshifts.
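The competitive-learning process described above can be sketched for a rectangular topology as follows (the decay schedules, grid size, and toy data are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def som_train(data, grid_w=4, grid_h=4, iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Sketch of SOM competitive learning on a rectangular grid: each
    sample pulls its best-matching neuron, and that neuron's grid
    neighbours, toward itself with a distance-decayed strength."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    W = rng.uniform(data.min(), data.max(), (grid_h, grid_w, dim))
    # grid coordinates of every neuron, used by the neighbourhood kernel
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        lr = lr0 * (1 - t / iters)                 # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 0.5     # decaying neighbourhood
        # best-matching unit (BMU): the neuron closest to the sample
        d = ((W - x) ** 2).sum(-1)
        by, bx = np.unravel_index(d.argmin(), d.shape)
        # Gaussian neighbourhood on the 2D grid, centred at the BMU
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        W += lr * h[..., None] * (x - W)
    return W

# toy data: two well-separated clusters in 2D
data = np.vstack([np.random.default_rng(1).normal(0, 0.1, (50, 2)),
                  np.random.default_rng(2).normal(3, 0.1, (50, 2))])
W = som_train(data)
```

Because neighbouring neurons are updated together, nearby grid cells end up representing nearby regions of the input space, which is the topology preservation the abstract relies on.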
|
1312.5763 | Identification of Employees Using RFID in IE-NTUA | cs.SY | During the last decade, with the rapid increase in indoor wireless
communications, location-aware services have received a great deal of attention
for commercial, public-safety, and military applications. The greatest
challenge associated with indoor positioning methods is moving-object data and
identification. Mobility tracking and localization are multifaceted problems,
which have been studied for a long time in different contexts. Many potential
applications in the domain of WSNs require such capabilities, and mobility
tracking is inherently needed in many surveillance, security, and logistics
applications. This paper presents the identification of employees at the
National Technical University of Athens (IE-NTUA) when employees access a
certain area of the building (entering or leaving the college): Radio
Frequency Identification (RFID) is applied for identification, by offering
special badges containing RFID tags.
|
1312.5765 | Multi-Branch Matching Pursuit with applications to MIMO radar | cs.IT math.IT math.OC | We present an algorithm, dubbed Multi-Branch Matching Pursuit (MBMP), to
solve the sparse recovery problem over redundant dictionaries. MBMP combines
three different paradigms: being a greedy method, it performs iterative signal
support estimation; as a rank-aware method, it is able to exploit signal
subspace information when multiple snapshots are available; and, as its name
foretells, it leverages a multi-branch (i.e., tree-search) strategy that allows
us to trade off hardware complexity (e.g., measurements) for computational
complexity. We derive a sufficient condition under which MBMP can recover a
sparse signal from noiseless measurements. This condition, named MB-coherence,
is met when the dictionary is sufficiently incoherent. It incorporates the
number of branches of MBMP and it requires fewer measurements than other
conditions (e.g., the Neumann ERC or the cumulative coherence). As such,
successful recovery with MBMP is guaranteed for dictionaries that do not
satisfy previously known conditions.
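As context for the greedy, single-branch strategy that MBMP generalizes into a tree search, a minimal Orthogonal Matching Pursuit sketch is shown below (an illustrative NumPy implementation, not code from the paper; the dictionary size and sparsity level are invented for the demo):

```python
import numpy as np

def omp(A, y, k):
    """Greedy iterative support estimation (Orthogonal Matching Pursuit).
    MBMP generalizes this single branch into a tree search over candidate
    supports; here only the best-correlated atom is kept at each step."""
    support, residual = [], y.copy()
    for _ in range(k):
        # Select the dictionary atom most correlated with the residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit the signal on the current support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Noiseless recovery of a 2-sparse signal over a redundant 20 x 50 dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)          # unit-norm atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, k=2)
print(np.sort(np.nonzero(x_hat)[0]))
```

Keeping several candidate atoms per step instead of one turns this loop into the multi-branch search described above.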
|
1312.5766 | Structure-Aware Dynamic Scheduler for Parallel Machine Learning | stat.ML cs.LG | Training large machine learning (ML) models with many variables or parameters
can take a long time if one employs sequential procedures even with stochastic
updates. A natural solution is to turn to distributed computing on a cluster;
however, naive, unstructured parallelization of ML algorithms does not usually
lead to a proportional speedup and can even result in divergence, because
dependencies between model elements can attenuate the computational gains from
parallelization and compromise correctness of inference. Recent efforts to address
this issue have benefited from exploiting the static, a priori block structures
residing in ML algorithms. In this paper, we take this path further by
exploring the dynamic block structures and workloads present during ML
program execution, which offers new opportunities for improving convergence,
correctness, and load balancing in distributed ML. We propose and showcase a
general-purpose scheduler, STRADS, for coordinating distributed updates in ML
algorithms, which harnesses the aforementioned opportunities in a systematic
way. We provide theoretical guarantees for our scheduler, and demonstrate its
efficacy versus static block structures on Lasso and Matrix Factorization.
|
1312.5770 | Consistency of Causal Inference under the Additive Noise Model | cs.LG stat.ML | We analyze a family of methods for statistical causal inference from samples
under the so-called Additive Noise Model. While most work on the subject has
concentrated on establishing the soundness of the Additive Noise Model, the
statistical consistency of the resulting inference methods has received little
attention. We derive general conditions under which the given family of
inference methods consistently infers the causal direction in a nonparametric
setting.
|
1312.5783 | Unsupervised Feature Learning by Deep Sparse Coding | cs.LG cs.CV cs.NE | In this paper, we propose a new unsupervised feature learning framework,
namely Deep Sparse Coding (DeepSC), that extends sparse coding to a multi-layer
architecture for visual object recognition tasks. The main innovation of the
framework is that it connects the sparse-encoders from different layers by a
sparse-to-dense module. The sparse-to-dense module is a composition of a local
spatial pooling step and a low-dimensional embedding process, which takes
advantage of the spatial smoothness information in the image. As a result, the
new method is able to learn several levels of sparse representation of the
image which capture features at a variety of abstraction levels and
simultaneously preserve the spatial smoothness between the neighboring image
patches. Combining the feature representations from multiple layers, DeepSC
achieves the state-of-the-art performance on multiple object recognition tasks.
|
1312.5785 | EXMOVES: Classifier-based Features for Scalable Action Recognition | cs.CV | This paper introduces EXMOVES, learned exemplar-based features for efficient
recognition of actions in videos. The entries in our descriptor are produced by
evaluating a set of movement classifiers over spatial-temporal volumes of the
input sequence. Each movement classifier is a simple exemplar-SVM trained on
low-level features, i.e., an SVM learned using a single annotated positive
space-time volume and a large number of unannotated videos.
Our representation offers two main advantages. First, since our mid-level
features are learned from individual video exemplars, they require a minimal
amount of supervision. Second, we show that simple linear classification models
trained on our global video descriptor yield action recognition accuracy
approaching the state-of-the-art but at orders of magnitude lower cost, since
at test-time no sliding window is necessary and linear models are efficient to
train and test. This enables scalable action recognition, i.e., efficient
classification of a large number of different actions even in large video
databases. We show the generality of our approach by building our mid-level
descriptors from two different low-level feature representations. The accuracy
and efficiency of the approach are demonstrated on several large-scale action
recognition benchmarks.
|
1312.5794 | Random Basketball Routing for ZigBee based Sensor Networks | cs.NI cs.IT math.IT | Random basketball routing (BR) \cite {Hwang} is a simple protocol that
integrates MAC and multihop routing in a cross-layer optimized manner. Due to
its lightness and performance, BR would be quite suitable for sensor networks,
where communication nodes are usually simple devices. In this paper, we
describe how we implemented BR in a ZigBee-based (IEEE 802.15.4) sensor
network. In \cite{Hwang}, it is verified that BR takes advantage of dynamic
environments (in particular, node mobility); here, however, we focus on how BR
works under static conditions. For implementation purposes, we add some
features, such as destination RSSI measurement and a loop-free procedure, to
the original BR. With the implemented testbed, we compare the performance of BR
with that of simplified AODV with CSMA/CA. The result is that BR has merits in
terms of the number of hops needed to traverse the network. Considering the
simple structure of BR and its possible energy efficiency, we conclude that BR
can be a good candidate for sensor networks under both dynamic and static
environments.
|
1312.5797 | Network Coded Rate Scheduling for Two-way Relay Networks | cs.IT math.IT | We investigate a scheduling scheme incorporating network coding and channel
varying information for the two-way relay networks. Our scheduler aims at
minimizing the time span needed to send all the data of each source of the
network. We consider three channel models: time-invariant channels,
time-varying channels with finite states, and time-varying Rayleigh fading
channels. We formulate the problem as a manageable optimization problem and
obtain a closed-form scheduler under time-invariant channels and time-varying
channels with finite channel states. For Rayleigh fading channels, we focus on
the relay node operation and propose a heuristic power allocation algorithm
resembling the water-filling algorithm. Through simulations, we find that even
if the channel fluctuates randomly, the average time span is minimized when the
relay node transmits/schedules network-coded data as much as possible.
|
1312.5800 | An Iterative Algorithm for Optimal Carrier Sensing Threshold in Random
CSMA/CA Wireless Networks | cs.IT math.IT | We investigate the optimal carrier sensing threshold in random CSMA/CA
networks considering the effect of binary exponential backoff. We propose an
iterative algorithm for optimizing the carrier sensing threshold and hence
maximizing the area spectral efficiency. We verify that simulations are
consistent with our analytical results.
|
1312.5813 | Unsupervised Pretraining Encourages Moderate-Sparseness | cs.LG cs.NE | It is well known that direct training of deep neural networks will generally
lead to poor results. Major progress in recent years has come from the
invention of various pretraining methods to initialize network parameters, and
it was shown that such methods lead to good prediction performance. However,
the reason for the success of pretraining has not been fully understood,
although it has been argued that regularization and better optimization play
certain roles. This paper provides another explanation for the effectiveness of
pretraining: we show that pretraining leads to sparseness of hidden-unit
activation in the resulting neural networks. The main reason is that the
pretraining models can be interpreted as performing adaptive sparse coding. Our
experimental results on MNIST and Birdsong, compared against deep neural
networks with sigmoid activations, further support this sparseness observation.
|
1312.5814 | Optimal parameter selection for unsupervised neural network using
genetic algorithm | cs.NE | K-means Fast Learning Artificial Neural Network (K-FLANN) is an unsupervised
neural network that requires two parameters: tolerance and vigilance. The best
clustering results are feasible only with well-chosen parameters, and selecting
optimal values for these parameters is a major problem. To solve this issue, a
Genetic Algorithm (GA) is used to determine optimal parameters of K-FLANN for
finding groups in multidimensional data. K-FLANN is a simple topological
network in which output nodes grow dynamically during the clustering process
upon receiving input patterns. The original K-FLANN is enhanced to select the
winner unit out of the matched nodes so that stable clusters are formed within
fewer epochs. The experimental results, obtained on artificial and synthetic
data sets, show that the GA is efficient in finding optimal parameter values
from the large search space.
|
1312.5830 | An Introduction to Socially Connected Machines: Characteristics and
Applications | cs.SI cs.CY | Due to the development of information and communication technologies, it is
difficult to handle the billions of connected machines. In this paper, to cope
with the problem, we introduce machine social networks, in which machines
freely follow each other and share common interests with their neighbors. We
classify the characteristics and describe the required functionalities of
socially connected machines. We also illustrate two examples: a twit-bot and a
maze scenario.
|
1312.5841 | Fisher Information and the Fourth Moment Theorem | math.PR cs.IT math.IT | Using a representation of the score function by means of the divergence
operator we exhibit a sufficient condition, in terms of the negative moments of
the norm of the Malliavin derivative, under which convergence in Fisher
information to the standard Gaussian of sequences belonging to a given Wiener
chaos is actually equivalent to convergence of only the fourth moment. Thus,
our result may be considered as a further building block associated to the
recent but already rich literature dedicated to the Fourth Moment Theorem of
Nualart and Peccati. To illustrate the power of our approach we prove a local
limit theorem together with some rates of convergence for the normal
convergence of a standardized version of the quadratic variation of the
fractional Brownian motion.
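For reference (a statement added here for context, not part of the original abstract): if $(F_n)$ is a sequence of random variables living in a fixed Wiener chaos of order $q \ge 2$ with $\mathbb{E}[F_n^2] \to 1$, the Fourth Moment Theorem of Nualart and Peccati states that

$$ F_n \xrightarrow{\;d\;} \mathcal{N}(0,1) \quad \Longleftrightarrow \quad \mathbb{E}[F_n^4] \to 3 = \mathbb{E}[N^4], \qquad N \sim \mathcal{N}(0,1). $$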
|
1312.5845 | Competitive Learning with Feedforward Supervisory Signal for Pre-trained
Multilayered Networks | cs.NE cs.CV cs.LG stat.ML | We propose a novel learning method for multilayered neural networks which
uses feedforward supervisory signal and associates classification of a new
input with that of pre-trained input. The proposed method effectively uses rich
input information in the earlier layers for robust learning and revising internal
representation in a multilayer neural network.
|
1312.5847 | Deep learning for neuroimaging: a validation study | cs.NE cs.LG stat.ML | Deep learning methods have recently made notable advances in the tasks of
classification and representation learning. These tasks are important for brain
imaging and neuroscience discovery, making the methods attractive for porting
to a neuroimager's toolbox. Success of these methods is, in part, explained by
the flexibility of deep learning models. However, this flexibility makes the
process of porting to new areas a difficult parameter optimization problem. In
this work we demonstrate our results (and feasible parameter ranges) in
application of deep learning methods to structural and functional brain imaging
data. We also describe a novel constraint-based approach to visualizing high
dimensional data. We use it to analyze the effect of parameter choices on data
transformations. Our results show that deep learning methods are able to learn
physiologically important representations and detect latent relations in
neuroimaging data.
|
1312.5851 | Fast Training of Convolutional Networks through FFTs | cs.CV cs.LG cs.NE | Convolutional networks are one of the most widely employed architectures in
computer vision and machine learning. In order to leverage their ability to
learn complex functions, large amounts of data are required for training.
Training a large convolutional network to produce state-of-the-art results can
take weeks, even when using modern GPUs. Producing labels using a trained
network can also be costly when dealing with web-scale datasets. In this work,
we present a simple algorithm which accelerates training and inference by a
significant factor, and can yield improvements of over an order of magnitude
compared to existing state-of-the-art implementations. This is done by
computing convolutions as pointwise products in the Fourier domain while
reusing the same transformed feature map many times. The algorithm is
implemented on a GPU architecture and addresses a number of related challenges.
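The Fourier-domain trick at the heart of this speedup can be illustrated in a few lines of NumPy (a hedged sketch of the general idea, not the paper's GPU implementation; array sizes are arbitrary):

```python
import numpy as np

def fft_conv2d(image, kernel):
    """2-D linear convolution computed as a pointwise product in the Fourier
    domain. Zero-pad both operands to the full output size to avoid circular
    wrap-around. In a convnet, the image transform F_img would be computed
    once and reused for every filter, which is where the savings come from."""
    H, W = image.shape
    h, w = kernel.shape
    shape = (H + h - 1, W + w - 1)
    F_img = np.fft.rfft2(image, shape)
    F_ker = np.fft.rfft2(kernel, shape)
    return np.fft.irfft2(F_img * F_ker, shape)

def naive_conv2d(image, kernel):
    # Direct O(H*W*h*w) spatial convolution, for comparison.
    H, W = image.shape
    h, w = kernel.shape
    out = np.zeros((H + h - 1, W + w - 1))
    for i in range(h):
        for j in range(w):
            out[i:i + H, j:j + W] += kernel[i, j] * image
    return out

rng = np.random.default_rng(1)
img = rng.standard_normal((16, 16))
ker = rng.standard_normal((5, 5))
print(np.allclose(fft_conv2d(img, ker), naive_conv2d(img, ker)))  # → True
```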
|
1312.5853 | Multi-GPU Training of ConvNets | cs.LG cs.NE | In this work we evaluate different approaches to parallelize computation of
convolutional neural networks across several GPUs.
|
1312.5857 | A Generative Product-of-Filters Model of Audio | stat.ML cs.LG | We propose the product-of-filters (PoF) model, a generative model that
decomposes audio spectra as sparse linear combinations of "filters" in the
log-spectral domain. PoF makes similar assumptions to those used in the classic
homomorphic filtering approach to signal processing, but replaces hand-designed
decompositions built of basic signal processing operations with a learned
decomposition based on statistical inference. This paper formulates the PoF
model and derives a mean-field method for posterior inference and a variational
EM algorithm to estimate the model's free parameters. We demonstrate PoF's
potential for audio processing on a bandwidth expansion task, and show that PoF
can serve as an effective unsupervised feature extractor for a speaker
identification task.
|
1312.5869 | Principled Non-Linear Feature Selection | cs.LG | Recent non-linear feature selection approaches employing greedy optimisation
of Centred Kernel Target Alignment (KTA) exhibit strong results in terms of
generalisation accuracy and sparsity. However, they are computationally
prohibitive for large datasets. We propose randSel, a randomised feature
selection algorithm, with attractive scaling properties. Our theoretical
analysis of randSel provides strong probabilistic guarantees for correct
identification of relevant features. RandSel's characteristics make it an ideal
candidate for identifying informative learned representations. We have
conducted experiments to establish the performance of this approach and present
encouraging results, including a 3rd position result in the recent ICML black
box learning challenge as well as competitive results for signal peptide
prediction, an important problem in bioinformatics.
|
1312.5891 | The Sparse Principal Component of a Constant-rank Matrix | cs.IT math.IT stat.ML | The computation of the sparse principal component of a matrix is equivalent
to the identification of its principal submatrix with the largest maximum
eigenvalue. Finding this optimal submatrix is what renders the problem
${\mathcal{NP}}$-hard. In this work, we prove that, if the matrix is positive
semidefinite and its rank is constant, then its sparse principal component is
polynomially computable. Our proof utilizes the auxiliary unit vector technique
that has been recently developed to identify problems that are polynomially
solvable. Moreover, we use this technique to design an algorithm which, for any
sparsity value, computes the sparse principal component with complexity
${\mathcal O}\left(N^{D+1}\right)$, where $N$ and $D$ are the matrix size and
rank, respectively. Our algorithm is fully parallelizable and memory efficient.
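The equivalence stated in the first sentence can be made concrete with a brute-force sketch (illustrative only; it enumerates all principal submatrices and is exponential in general, unlike the polynomial rank-aware algorithm described above):

```python
import itertools
import numpy as np

def sparse_pc_bruteforce(A, s):
    """Sparse principal component of a symmetric matrix A by exhaustive
    search: the optimal s-sparse unit vector maximizing x^T A x is the top
    eigenvector of the s x s principal submatrix with the largest maximum
    eigenvalue, zero-padded back to full dimension."""
    n = A.shape[0]
    best_val, best_x = -np.inf, None
    for support in itertools.combinations(range(n), s):
        idx = np.array(support)
        sub = A[np.ix_(idx, idx)]
        vals, vecs = np.linalg.eigh(sub)     # ascending eigenvalues
        if vals[-1] > best_val:              # largest eigenvalue of submatrix
            best_val = vals[-1]
            x = np.zeros(n)
            x[idx] = vecs[:, -1]
            best_x = x
    return best_val, best_x

# Small positive semidefinite example of rank 2 (constant rank, as in the
# polynomial-time setting of the abstract).
rng = np.random.default_rng(2)
B = rng.standard_normal((6, 2))
A = B @ B.T
val, x = sparse_pc_bruteforce(A, s=3)
print(np.count_nonzero(x))
```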
|
1312.5892 | Support for Error Tolerance in the Real-Time Transport Protocol | cs.NI cs.IT cs.OS math.IT | Streaming applications often tolerate bit errors in their received data well.
This contrasts with network protocols, which enforce the correctness of packet
headers and payloads. We investigate a solution for the Real-time
Transport Protocol (RTP) that is tolerant to errors by accepting erroneous
data. It passes potentially corrupted stream data payloads to the codecs. If
errors occur in the header, our solution recovers from these by leveraging the
known state and expected header values for each stream. The solution is fully
receiver-based and incrementally deployable, and as such requires neither
support from the sender nor changes to the RTP specification. Evaluations show
that our header error recovery scheme can recover from almost all errors, with
virtually no erroneous recoveries, up to bit error rates of about 10%.
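The receiver-side recovery idea can be sketched as follows (a hypothetical toy model assuming RFC 3550 header fields such as sequence number, timestamp, and SSRC; the field handling and thresholds below are invented for illustration and are not the paper's scheme):

```python
def predict_header(state):
    """Expected next header fields for a stream, from per-stream state:
    the sequence number increments by 1, the timestamp by one frame
    duration, and the SSRC stays fixed (RFC 3550 semantics)."""
    return {
        "seq": (state["seq"] + 1) & 0xFFFF,
        "ts": (state["ts"] + state["ts_step"]) & 0xFFFFFFFF,
        "ssrc": state["ssrc"],
    }

def recover(received, state, max_mismatches=1):
    """Accept the packet if at most `max_mismatches` fields deviate from
    the prediction, substituting the expected values for the deviant ones;
    otherwise reject the header as too damaged to trust."""
    expected = predict_header(state)
    mismatches = [k for k in expected if received[k] != expected[k]]
    if len(mismatches) > max_mismatches:
        return None
    repaired = dict(received)
    for k in mismatches:
        repaired[k] = expected[k]      # overwrite the corrupted field
    return repaired

state = {"seq": 41, "ts": 16000, "ts_step": 160, "ssrc": 0xDEADBEEF}
corrupted = {"seq": 42, "ts": 16160, "ssrc": 0x00ADBEEF}  # bit-flipped SSRC
repaired = recover(corrupted, state)
print(hex(repaired["ssrc"]))  # → 0xdeadbeef
```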
|
1312.5912 | Containment of Schema Mappings for Data Exchange (Preliminary Report) | cs.DB | In data exchange, data are materialised from a source schema to a target
schema, according to suitable source-to-target constraints. Constraints are
also expressed on the target schema to represent the domain of interest. A
schema mapping is the union of the source-to-target and of the target
constraints.
In this paper, we address the problem of containment of schema mappings for
data exchange, which has been recently proposed in this framework as a step
towards the optimization of data exchange settings. We refer to a natural
notion of containment that relies on the behaviour of schema mappings with
respect to conjunctive query answering, in the presence of so-called LAV TGDs
as target constraints. Our contribution is a practical technique for testing
the containment based on the existence of a homomorphism between special
"dummy" instances, which can be easily built from schema mappings.
We argue that containment of schema mappings is decidable for most practical
cases, and we set the basis for further investigations in the topic. This paper
extends our preliminary results.
|
1312.5914 | Deep Separability of Ontological Constraints | cs.DB | When data schemata are enriched with expressive constraints that aim at
representing the domain of interest, in order to answer queries one needs to
consider the logical theory consisting of both the data and the constraints.
Query answering in such a context is called ontological query answering.
Commonly adopted database constraints in this field are tuple-generating
dependencies (TGDs) and equality-generating dependencies (EGDs). It is well
known that their interaction leads to intractability or undecidability of query
answering even in the case of simple subclasses. Several conditions have been
found to guarantee separability, that is, lack of interaction, between TGDs and
EGDs. Separability makes EGDs (mostly) irrelevant for query answering and
therefore often guarantees tractability, as long as the theory is satisfiable.
In this paper we review the two notions of separability found in the
literature, as well as several syntactic conditions that are sufficient to
prove them. We then shed light on the issue of satisfiability checking, showing
that under a sufficient condition called deep separability it can be done by
considering the TGDs only.
We show that, fortunately, in the case of TGDs and EGDs, separability implies
deep separability. This result generalizes several analogous ones, proved ad
hoc for particular classes of constraints. Applications include the class of
sticky TGDs and EGDs, for which we provide a syntactic separability condition
which extends the analogous one for linear TGDs; preliminary experiments show
the feasibility of query answering in this case.
|
1312.5921 | Group-sparse Embeddings in Collective Matrix Factorization | stat.ML cs.LG | CMF is a technique for simultaneously learning low-rank representations based
on a collection of matrices with shared entities. A typical example is the
joint modeling of user-item, item-property, and user-feature matrices in a
recommender system. The key idea in CMF is that the embeddings are shared
across the matrices, which enables transferring information between them. The
existing solutions, however, break down when the individual matrices have
low-rank structure not shared with others. In this work we present a novel CMF
solution that allows each of the matrices to have a separate low-rank structure
that is independent of the other matrices, as well as structures that are
shared only by a subset of them. We compare MAP and variational Bayesian
solutions based on alternating optimization algorithms and show that the model
automatically infers the nature of each factor using group-wise sparsity. Our
approach supports in a principled way continuous, binary and count observations
and is efficient for sparse matrices involving missing data. We illustrate the
solution on a number of examples, focusing in particular on an interesting
use-case of augmented multi-view learning.
|
1312.5938 | Dual-Branch MRC Receivers under Spatial Interference Correlation and
Nakagami Fading | cs.IT math.IT math.PR | Despite being ubiquitous in practice, the performance of maximal-ratio
combining (MRC) in the presence of interference is not well understood. Because
the interference received at each antenna originates from the same set of
interferers, but partially de-correlates over the fading channel, it possesses
a complex correlation structure. This work develops a realistic analytic model
that accurately accounts for the interference correlation using stochastic
geometry. Modeling interference by a Poisson shot noise process with
independent Nakagami fading, we derive the link success probability for
dual-branch interference-aware MRC. Using this result, we show that the common
assumption that all receive antennas experience equal interference power
underestimates the true performance, although this gap rapidly decays with
increasing the Nakagami parameter $m_{\text{I}}$ of the interfering links. In
contrast, ignoring interference correlation leads to a highly optimistic
performance estimate for MRC, especially for large $m_{\text{I}}$. In the low
outage probability regime, our success probability expression can be
considerably simplified. Observations following from the analysis include: (i)
for small path loss exponents, MRC and minimum mean square error combining
exhibit similar performance, and (ii) the gains of MRC over selection combining
are smaller in the interference-limited case than in the well-studied
noise-limited case.
|
1312.5940 | Generic Deep Networks with Wavelet Scattering | cs.CV | We introduce a two-layer wavelet scattering network for object
classification. This scattering transform computes a spatial wavelet transform
on the first layer and a new joint wavelet transform along spatial, angular,
and scale variables in the second layer. Numerical experiments demonstrate that
this two-layer convolution network, which involves no learning and no max
pooling, performs efficiently on complex image data sets such as Caltech, with
structural object variability and clutter. It opens the possibility of
simplifying deep neural network learning by initializing the first layers with
wavelet filters.
|
1312.5941 | Developing a model of evacuation after an earthquake in Lebanon | cs.MA | This article describes the development of an agent-based model (AMEL,
Agent-based Model for Earthquake evacuation in Lebanon) that aims at simulating
the movement of pedestrians shortly after an earthquake. The GAMA platform was
chosen to implement the model. AMEL is applied to a real case study, a district
of the city of Beirut, Lebanon, which could potentially be struck by an M7
earthquake. The objective of the model is to reproduce real-life mobility
behaviours that have been gathered through a survey in Beirut and to test
different future scenarios, which may help the local authorities to target
information campaigns.
|
1312.5946 | Adaptive Seeding for Gaussian Mixture Models | cs.LG | We present new initialization methods for the expectation-maximization
algorithm for multivariate Gaussian mixture models. Our methods are
adaptations of the well-known $K$-means++ initialization and the Gonzalez
algorithm. We thereby aim to close the gap between simple random methods
(e.g., uniform seeding) and complex methods that crucially depend on the right
choice of hyperparameters. Our extensive experiments on artificial as well as
real-world data sets indicate the usefulness of our methods compared to common
techniques, e.g., applying the original $K$-means++ and Gonzalez algorithms
directly.
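A minimal NumPy sketch of $K$-means++-style seeding, one of the two initializations adapted here (illustrative only; the data and parameters are invented for the demo, and the paper's adaptation to GMMs is not reproduced):

```python
import numpy as np

def kmeanspp_seeds(X, k, rng):
    """K-means++ seeding: the first seed is drawn uniformly, and each
    subsequent seed is drawn with probability proportional to its squared
    distance to the closest seed chosen so far. The resulting seeds can
    serve as initial means for EM on a Gaussian mixture."""
    n = X.shape[0]
    seeds = [X[rng.integers(n)]]
    for _ in range(k - 1):
        # Squared distance of every point to its nearest current seed.
        d2 = np.min([((X - s) ** 2).sum(axis=1) for s in seeds], axis=0)
        seeds.append(X[rng.choice(n, p=d2 / d2.sum())])
    return np.stack(seeds)

rng = np.random.default_rng(3)
# Three well-separated synthetic clusters in 2-D.
X = np.concatenate([rng.normal(c, 0.1, size=(50, 2)) for c in (0.0, 5.0, 10.0)])
means = kmeanspp_seeds(X, k=3, rng=rng)
print(means.shape)  # → (3, 2)
```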
|
1312.5952 | Proceedings of the Fourth International Workshop on Domain-Specific
Languages and Models for Robotic Systems (DSLRob 2013) | cs.RO | The Fourth International Workshop on Domain-Specific Languages and Models for
Robotic Systems (DSLRob'13) was held in conjunction with the 2013 IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS 2013),
November 2013 in Tokyo, Japan.
The main topics of the workshop were Domain-Specific Languages (DSLs) and
Model-driven Software Development (MDSD) for robotics. A domain-specific
language is a programming language dedicated to a particular problem domain
that offers specific notations and abstractions that increase programmer
productivity within that domain. Model-driven software development offers a
high-level way for domain users to specify the functionality of their system at
the right level of abstraction. DSLs and models have historically been used for
programming complex systems. Recently, however, they have garnered interest as a
separate field of study. Robotic systems blend hardware and software in a
holistic way that intrinsically raises many crosscutting concerns (concurrency,
uncertainty, time constraints, ...), for which reason, traditional
general-purpose languages often lead to a poor fit between the language
features and the implementation requirements. DSLs and models offer a powerful,
systematic way to overcome this problem, enabling the programmer to quickly and
precisely implement novel software solutions to complex problems within the
robotics domain.
|
1312.5985 | Learning Type-Driven Tensor-Based Meaning Representations | cs.CL cs.LG | This paper investigates the learning of 3rd-order tensors representing the
semantics of transitive verbs. The meaning representations are part of a
type-driven tensor-based semantic framework, from the newly emerging field of
compositional distributional semantics. Standard techniques from the neural
networks literature are used to learn the tensors, which are tested on a
selectional preference-style task with a simple 2-dimensional sentence space.
Promising results are obtained against a competitive corpus-based baseline. We
argue that extending this work beyond transitive verbs, and to
higher-dimensional sentence spaces, is an interesting and challenging problem
for the machine learning community to consider.
|
1312.5990 | Universal Polar Codes for More Capable and Less Noisy Channels and
Sources | cs.IT math.IT | We prove two results on the universality of polar codes for source coding and
channel communication. First, we show that for any polar code built for a
source $P_{X,Z}$ there exists a slightly modified polar code - having the same
rate, the same encoding and decoding complexity and the same error rate - that
is universal for every source $P_{X,Y}$ when using successive cancellation
decoding, at least when the channel $P_{Y|X}$ is more capable than $P_{Z|X}$
and $P_X$ is such that it maximizes $I(X;Y) - I(X;Z)$ for the given channels
$P_{Y|X}$ and $P_{Z|X}$. This result extends to channel coding for discrete
memoryless channels. Second, we prove that polar codes using successive
cancellation decoding are universal for less noisy discrete memoryless
channels.
|
1312.6002 | Stochastic Gradient Estimate Variance in Contrastive Divergence and
Persistent Contrastive Divergence | cs.NE cs.LG stat.ML | Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are
popular methods for training the weights of Restricted Boltzmann Machines.
However, both methods use an approximate method for sampling from the model
distribution. As a side effect, these approximations yield significantly
different biases and variances for stochastic gradient estimates of individual
data points. It is well known that CD yields a biased gradient estimate. In
this paper we however show empirically that CD has a lower stochastic gradient
estimate variance than exact sampling, while the mean of subsequent PCD
estimates has a higher variance than exact sampling. The results give one
explanation for the finding that CD can be used with smaller minibatches or
higher learning rates than PCD.
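For concreteness, a minimal NumPy sketch of the CD-1 gradient estimate whose variance is under study (an illustrative toy RBM without bias terms, not the experimental setup of the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_gradient(v0, W, rng):
    """One-step Contrastive Divergence estimate of the RBM log-likelihood
    gradient w.r.t. W. The Gibbs chain is truncated after a single step,
    which is the source of the bias discussed above."""
    ph0 = sigmoid(W @ v0)                          # P(h = 1 | v0)
    h0 = (rng.random(len(ph0)) < ph0).astype(float)  # sample hidden units
    pv1 = sigmoid(W.T @ h0)                        # P(v = 1 | h0)
    v1 = (rng.random(len(pv1)) < pv1).astype(float)  # one-step reconstruction
    ph1 = sigmoid(W @ v1)
    # Positive phase (data) minus negative phase (reconstruction).
    return np.outer(ph0, v0) - np.outer(ph1, v1)

rng = np.random.default_rng(5)
n_v, n_h = 6, 4
W = rng.standard_normal((n_h, n_v)) * 0.1
v = (rng.random(n_v) < 0.5).astype(float)
# Repeated estimates for the same data point: their spread is the
# stochastic gradient estimate variance compared in the abstract.
grads = np.stack([cd1_gradient(v, W, rng) for _ in range(100)])
print(grads.mean(axis=0).shape)  # → (4, 6)
```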
|
1312.6024 | Occupancy Detection in Vehicles Using Fisher Vector Image Representation | cs.CV | Due to the high volume of traffic on modern roadways, transportation agencies
have proposed High Occupancy Vehicle (HOV) lanes and High Occupancy Tolling
(HOT) lanes to promote car pooling. However, enforcement of the rules of these
lanes is currently performed by roadside enforcement officers using visual
observation. Manual roadside enforcement is known to be inefficient, costly,
potentially dangerous, and ultimately ineffective. Violation rates up to
50%-80% have been reported, while manual enforcement rates of less than 10% are
typical. Therefore, there is a need for automated vehicle occupancy detection
to support HOV/HOT lane enforcement. A key component of determining vehicle
occupancy is to determine whether or not the vehicle's front passenger seat is
occupied. In this paper, we examine two methods of determining vehicle front
seat occupancy using a near infrared (NIR) camera system pointed at the
vehicle's front windshield. The first method examines a state-of-the-art
deformable part model (DPM) based face detection system that is robust to
facial pose. The second method examines state-of-the-art local aggregation
based image classification using bag-of-visual-words (BOW) and Fisher vectors
(FV). A dataset of 3000 images was collected on a public roadway and is used to
perform the comparison. From these experiments it is clear that the image
classification approach is superior for this problem.
|
1312.6026 | How to Construct Deep Recurrent Neural Networks | cs.NE cs.LG stat.ML | In this paper, we explore different ways to extend a recurrent neural network
(RNN) to a \textit{deep} RNN. We start by arguing that the concept of depth in
an RNN is not as clear as it is in feedforward neural networks. By carefully
analyzing and understanding the architecture of an RNN, however, we find three
points of an RNN which may be made deeper: (1) input-to-hidden function, (2)
hidden-to-hidden transition and (3) hidden-to-output function. Based on this
observation, we propose two novel architectures of a deep RNN which are
orthogonal to an earlier attempt of stacking multiple recurrent layers to build
a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an
alternative interpretation of these deep RNNs using a novel framework based on
neural operators. The proposed deep RNNs are empirically evaluated on the tasks
of polyphonic music prediction and language modeling. The experimental result
supports our claim that the proposed deep RNNs benefit from the depth and
outperform the conventional, shallow RNNs.
|
1312.6034 | Deep Inside Convolutional Networks: Visualising Image Classification
Models and Saliency Maps | cs.CV | This paper addresses the visualisation of image classification models, learnt
using deep Convolutional Networks (ConvNets). We consider two visualisation
techniques, based on computing the gradient of the class score with respect to
the input image. The first one generates an image, which maximises the class
score [Erhan et al., 2009], thus visualising the notion of the class, captured
by a ConvNet. The second technique computes a class saliency map, specific to a
given image and class. We show that such maps can be employed for weakly
supervised object segmentation using classification ConvNets. Finally, we
establish the connection between the gradient-based ConvNet visualisation
methods and deconvolutional networks [Zeiler et al., 2013].
|
1312.6042 | Learning States Representations in POMDP | cs.LG | We propose to deal with sequential processes where only partial observations
are available by learning a latent representation space on which policies may
be accurately learned.
|
1312.6055 | Unit Tests for Stochastic Optimization | cs.LG | Optimization by stochastic gradient descent is an important component of many
large-scale machine learning algorithms. A wide variety of such optimization
algorithms have been devised; however, it is unclear whether these algorithms
are robust and widely applicable across many different optimization landscapes.
In this paper we develop a collection of unit tests for stochastic
optimization. Each unit test rapidly evaluates an optimization algorithm on a
small-scale, isolated, and well-understood difficulty, rather than in
real-world scenarios where many such issues are entangled. Passing these unit
tests is not sufficient, but absolutely necessary for any algorithms with
claims to generality or robustness. We give initial quantitative and
qualitative results on numerous established algorithms. The testing framework
is open-source, extensible, and easy to apply to new algorithms.
|
1312.6057 | On the Joint Impact of Beamwidth and Orientation Error on Throughput in
Directional Wireless Poisson Networks | cs.IT cs.NI math.IT | We introduce a model for capturing the effects of beam misdirection on
coverage and throughput in a directional wireless network using stochastic
geometry. In networks employing ideal sector antennas without sidelobes, we
find that concavity of the orientation error distribution is sufficient to
prove monotonicity and quasi-concavity (both with respect to antenna beamwidth)
of spatial throughput and transmission capacity, respectively. Additionally, we
identify network conditions that produce opposite extremal choices in beamwidth
(absolutely directed versus omni-directional) that maximize the two related
throughput metrics. We conclude our paper with a numerical exploration of the
relationship between mean orientation error, throughput-maximizing beamwidths,
and maximum throughput, across radiation patterns of varied complexity.
|
1312.6061 | Driving forces in researchers mobility | physics.soc-ph cs.SI | Starting from the dataset of the publication corpus of the APS during the
period 1955-2009, we reconstruct the individual researchers' trajectories,
namely the list of the consecutive affiliations for each scholar. Crossing this
information with different geographic datasets we embed these trajectories in a
spatial framework. Using methods from network theory and complex systems
analysis we characterise these patterns in terms of topological network
properties and we analyse the dependence of an academic path across different
dimensions: the distance between two subsequent positions, the relative
importance of the institutions (in terms of number of publications) and some
socio-cultural traits. We show that distance is not always a good predictor of
the next affiliation, while other factors, such as the "previous steps" of a
researcher's career (in particular the first position) or the linguistic and
historical similarity between two countries, can have an important impact.
Finally, we show that the dataset exhibits a memory effect: the fate of a
career strongly depends on the first two affiliations.
|
1312.6062 | Stopping Criteria in Contrastive Divergence: Alternatives to the
Reconstruction Error | cs.LG | Restricted Boltzmann Machines (RBMs) are general unsupervised learning
devices to ascertain generative models of data distributions. RBMs are often
trained using the Contrastive Divergence learning algorithm (CD), an
approximation to the gradient of the data log-likelihood. A simple
reconstruction error is often used to decide whether the approximation provided
by the CD algorithm is good enough, though several authors (Schulz et al.,
2010; Fischer & Igel, 2010) have raised doubts concerning the feasibility of
this procedure. However, not many alternatives to the reconstruction error have
been used in the literature. In this manuscript we investigate simple
alternatives to the reconstruction error in order to detect as soon as possible
the decrease in the log-likelihood during learning.
|
1312.6064 | Extending Construction X for Quantum Error-Correcting Codes | cs.IT math.IT | In this paper we extend the work of Lisonek and Singh on construction X for
quantum error-correcting codes to finite fields of order $p^2$, where $p$ is
prime. The results obtained are applied to the dual of Hermitian repeated root
cyclic codes to generate new quantum error-correcting codes.
|