| id | title | categories | abstract |
|---|---|---|---|
1312.6077 | Efficient Visual Coding: From Retina To V2 | cs.CV q-bio.NC | The human visual system has a hierarchical structure consisting of layers of
processing, such as the retina, V1, V2, etc. Understanding the functional roles
of these visual processing layers would help to integrate the
psychophysiological and neurophysiological models into a consistent theory of
human vision, and would also provide insights to computer vision research. One
classical theory of the early visual pathway hypothesizes that it serves to
capture the statistical structure of the visual inputs by efficiently coding
the visual information in its outputs. Until recently, most computational
models following this theory have focused upon explaining the receptive field
properties of one or two visual layers. Recent work on deep networks has
eliminated this concern; however, there is still the retinal layer to consider.
Here we improve on a previously described hierarchical model, Recursive ICA
(RICA) [1], which starts with PCA, followed by a layer of sparse coding or ICA,
followed by a component-wise nonlinearity derived from considerations of the
variable distributions expected by ICA. This process is then repeated. In this
work, we improve on this model by using a new version of sparse PCA (sPCA),
which results in biologically-plausible receptive fields for both the sPCA and
ICA/sparse coding. When applied to natural image patches, our model learns
visual features exhibiting the receptive field properties of retinal ganglion
cells/lateral geniculate nucleus (LGN) cells, V1 simple cells, V1 complex
cells, and V2 cells. Our work provides predictions for experimental
neuroscience studies. For example, our result suggests that a previous
neurophysiological study improperly discarded some of their recorded neurons;
we predict that their discarded neurons capture the shape contour of objects.
|
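The three-stage recipe above (PCA whitening, then ICA/sparse coding, then a component-wise nonlinearity, applied recursively) can be sketched in NumPy. This is a hedged illustration, not the authors' code: `fastica` is textbook symmetric FastICA with a tanh contrast, `gaussianize` is a crude compressive stand-in for the density-derived nonlinearity the model actually uses, and the abstract's sPCA is approximated here by plain PCA whitening.

```python
import numpy as np

def pca_whiten(X, n_components):
    """Stage 1: PCA whitening -- decorrelate the data and equalize variances.
    (The paper uses a sparse PCA variant; plain PCA stands in here.)"""
    Xc = X - X.mean(axis=1, keepdims=True)
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    K = (U[:, :n_components] / S[:n_components]).T * np.sqrt(X.shape[1])
    return K @ Xc

def fastica(Z, n_iter=200, seed=0):
    """Stage 2: symmetric FastICA with a tanh contrast on whitened data Z."""
    d, n = Z.shape
    W = np.random.default_rng(seed).standard_normal((d, d))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
        w, V = np.linalg.eigh(W_new @ W_new.T)       # symmetric decorrelation:
        W = V @ np.diag(1.0 / np.sqrt(w)) @ V.T @ W_new  # W <- (W W^T)^(-1/2) W
    return W

def gaussianize(S):
    """Stage 3: component-wise compressive nonlinearity -- a crude stand-in for
    the transform RICA derives from the variable distributions expected by ICA."""
    return np.sign(S) * np.log1p(np.abs(S))

def rica_layer(X, n_components):
    """One whiten -> ICA -> nonlinearity stage; the model stacks these recursively."""
    Z = pca_whiten(X, n_components)
    return gaussianize(fastica(Z) @ Z)
```

Stacking `rica_layer` twice on image patches is what yields the V1-like and then V2-like features the abstract describes.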
1312.6079 | An Improved Outer Bound on the Storage-Repair-Bandwidth Tradeoff of
Exact-Repair Regenerating Codes | cs.IT math.IT | In this paper we establish an improved outer bound on the
storage-repair-bandwidth tradeoff of regenerating codes under exact repair. The
result shows that in particular, it is not possible to construct exact-repair
regenerating codes that asymptotically achieve the tradeoff that holds for
functional repair. While this had been shown earlier by Tian for the special
case of $[n,k,d]=[4,3,3]$, the present result holds for general $[n,k,d]$. The
new outer bound is obtained by building on the framework established earlier by
Shah et al.
|
1312.6082 | Multi-digit Number Recognition from Street View Imagery using Deep
Convolutional Neural Networks | cs.CV | Recognizing arbitrary multi-character text in unconstrained natural
photographs is a hard problem. In this paper, we address an equally hard
sub-problem in this domain viz. recognizing arbitrary multi-digit numbers from
Street View imagery. Traditional approaches to solve this problem typically
separate out the localization, segmentation, and recognition steps. In this
paper we propose a unified approach that integrates these three steps via the
use of a deep convolutional neural network that operates directly on the image
pixels. We employ the DistBelief implementation of deep neural networks in
order to train large, distributed neural networks on high quality images. We
find that the performance of this approach increases with the depth of the
convolutional network, with the best performance occurring in the deepest
architecture we trained, with eleven hidden layers. We evaluate this approach
on the publicly available SVHN dataset and achieve over $96\%$ accuracy in
recognizing complete street numbers. We show that on a per-digit recognition
task, we improve upon the state-of-the-art, achieving $97.84\%$ accuracy. We
also evaluate this approach on an even more challenging dataset generated from
Street View imagery containing several tens of millions of street number
annotations and achieve over $90\%$ accuracy. To further explore the
applicability of the proposed system to broader text recognition tasks, we
apply it to synthetic distorted text from reCAPTCHA. reCAPTCHA is one of the
most secure reverse Turing tests that use distorted text to distinguish humans
from bots. We report a $99.8\%$ accuracy on the hardest category of reCAPTCHA.
Our evaluations on both tasks indicate that at specific operating thresholds,
the performance of the proposed system is comparable to, and in some cases
exceeds, that of human operators.
|
1312.6086 | The return of AdaBoost.MH: multi-class Hamming trees | cs.LG | Within the framework of AdaBoost.MH, we propose to train vector-valued
decision trees to optimize the multi-class edge without reducing the
multi-class problem to $K$ binary one-against-all classifications. The key
element of the method is a vector-valued decision stump, factorized into an
input-independent vector of length $K$ and label-independent scalar classifier.
At inner tree nodes, the label-dependent vector is discarded and the binary
classifier can be used for partitioning the input space into two regions. The
algorithm retains the conceptual elegance, power, and computational efficiency
of binary AdaBoost. In experiments it is on par with support vector machines
and with the best existing multi-class boosting algorithm AOSOLogitBoost, and
it is significantly better than other known implementations of AdaBoost.MH.
|
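The factorized stump described above, h(x) = v · φ(x) with an input-independent vote vector v of length K and a label-independent scalar classifier φ(x) ∈ {−1, +1}, can be sketched as follows. This is a schematic under my own naming, not the paper's training algorithm: φ is a simple feature threshold, and v is chosen sign-wise to maximize the per-class weighted edge.

```python
import numpy as np

def hamming_stump(X, Y, W, feature, threshold):
    """One vector-valued decision stump h(x) = v * phi(x).
    X: (n, d) inputs; Y: (n, K) +/-1 label matrix; W: (n, K) AdaBoost.MH weights.
    Returns the (n, K) stump outputs, the vote vector v, and the multi-class edge."""
    phi = np.where(X[:, feature] > threshold, 1.0, -1.0)   # scalar binary classifier
    edge_per_class = (W * Y * phi[:, None]).sum(axis=0)    # weighted edge gamma_k
    v = np.sign(edge_per_class)                            # votes maximizing |gamma_k|
    v[v == 0] = 1.0
    edge = np.abs(edge_per_class).sum()                    # total multi-class edge
    return phi[:, None] * v[None, :], v, edge
```

At inner tree nodes, per the abstract, only φ is kept to split the input space; the vote vector v matters at the leaves.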
1312.6094 | Energy Efficient Control of an Induction Machine under Load Torque Step
Change | cs.SY | Optimal control of the magnetizing current for minimizing induction motor
power losses during a load torque step change is developed. The obtained
strategy has a feedback form and is exactly optimal under ideal speed-controller
performance and in the absence of magnetic saturation in the motor. The impact
of the limited bandwidth of a real speed controller is analyzed. For the case of
main-inductance saturation, a sub-optimal control is suggested, and the relative
accuracy of this sub-optimality is studied. The optimal strategy was implemented
in hardware, and experiments were conducted with induction motors under vector
control.
|
1312.6095 | Multi-View Priors for Learning Detectors from Sparse Viewpoint Data | cs.CV | While the majority of today's object class models provide only 2D bounding
boxes, far richer output hypotheses are desirable including viewpoint,
fine-grained category, and 3D geometry estimate. However, models trained to
provide richer output require larger amounts of training data, preferably well
covering the relevant aspects such as viewpoint and fine-grained categories. In
this paper, we address this issue from the perspective of transfer learning,
and design an object class model that explicitly leverages correlations between
visual features. Specifically, our model represents prior distributions over
permissible multi-view detectors in a parametric way -- the priors are learned
once from training data of a source object class, and can later be used to
facilitate the learning of a detector for a target class. As we show in our
experiments, this transfer is not only beneficial for detectors based on
basic-level category representations, but also enables the robust learning of
detectors that represent classes at finer levels of granularity, where training
data is typically even scarcer and more unbalanced. As a result, we report
largely improved performance in simultaneous 2D object localization and
viewpoint estimation on a recent dataset of challenging street scenes.
|
1312.6096 | Properties of Answer Set Programming with Convex Generalized Atoms | cs.AI | In recent years, Answer Set Programming (ASP), logic programming under the
stable model or answer set semantics, has seen several extensions by
generalizing the notion of an atom in these programs: be it aggregate atoms,
HEX atoms, generalized quantifiers, or abstract constraints, the idea is to
have more complicated satisfaction patterns in the lattice of Herbrand
interpretations than traditional, simple atoms. In this paper we refer to any
of these constructs as generalized atoms. Several semantics with differing
characteristics have been proposed for these extensions, rendering the big
picture somewhat blurry. In this paper, we analyze the class of programs that
have convex generalized atoms (originally proposed by Liu and Truszczynski in
[10]) in rule bodies and show that for this class many of the proposed
semantics coincide. This is an interesting result, since recently it has been
shown that this class is the precise complexity boundary for the FLP semantics.
We investigate whether similar results also hold for other semantics, and
discuss the implications of our findings.
|
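The convexity property the paper builds on can be stated operationally: a generalized atom is convex if, whenever interpretations I ⊆ K both satisfy it, every J with I ⊆ J ⊆ K satisfies it too. A minimal brute-force checker (my own sketch, feasible only for tiny Herbrand bases) makes the definition concrete:

```python
from itertools import combinations

def powerset(universe):
    s = list(universe)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_convex(satisfies, universe):
    """Check convexity of a generalized atom given its satisfaction function:
    convex iff I subseteq J subseteq K and satisfies(I) and satisfies(K)
    imply satisfies(J). Exponential in |universe| -- illustration only."""
    subsets = powerset(universe)
    sat = [S for S in subsets if satisfies(S)]
    for I in sat:
        for K in sat:
            if I <= K:
                for J in subsets:
                    if I <= J <= K and not satisfies(J):
                        return False
    return True
```

For example, a cardinality-interval aggregate (satisfied when 1 ≤ |S| ≤ 2) is convex, while a parity constraint is not.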
1312.6098 | On the number of response regions of deep feed forward networks with
piece-wise linear activations | cs.LG cs.NE | This paper explores the complexity of deep feedforward networks with linear
pre-synaptic couplings and rectified linear activations. This is a contribution
to the growing body of work contrasting the representational power of deep and
shallow network architectures. In particular, we offer a framework for
comparing deep and shallow models that belong to the family of piecewise linear
functions based on computational geometry. We look at a deep rectifier
multi-layer perceptron (MLP) with linear output units and compare it with a
single layer version of the model. In the asymptotic regime, when the number of
inputs stays constant, if the shallow model has $kn$ hidden units and $n_0$
inputs, then the number of linear regions is $O(k^{n_0}n^{n_0})$. For a $k$
layer model with $n$ hidden units on each layer it is $\Omega(\left\lfloor
{n}/{n_0}\right\rfloor^{k-1}n^{n_0})$. The number
$\left\lfloor{n}/{n_0}\right\rfloor^{k-1}$ grows faster than $k^{n_0}$ when $n$
tends to infinity or when $k$ tends to infinity and $n \geq 2n_0$.
Additionally, even when $k$ is small, if we restrict $n$ to be $2n_0$, we can
show that a deep model has considerably more linear regions than a shallow one.
We consider this as a first step towards understanding the complexity of these
models and specifically towards providing suitable mathematical tools for
future analysis.
|
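The two counting bounds from the abstract can be evaluated directly (constants and asymptotic caveats aside, this just computes the bound expressions): with $n_0=2$ inputs and $n=4$ units per layer, the deep lower bound $\lfloor n/n_0\rfloor^{k-1} n^{n_0}$ overtakes the shallow upper bound $(kn)^{n_0}$ as the depth $k$ grows.

```python
def shallow_upper(k, n, n0):
    """O((k*n)**n0): upper bound on linear regions of a one-layer net
    with k*n hidden units and n0 inputs."""
    return (k * n) ** n0

def deep_lower(k, n, n0):
    """Omega(floor(n/n0)**(k-1) * n**n0): lower bound for a k-layer net
    with n hidden units per layer."""
    return (n // n0) ** (k - 1) * n ** n0
```

At $k=1$ the two expressions coincide; by $k=10$ (with $n=4$, $n_0=2$) the deep lower bound is 8192 versus a shallow upper bound of 1600, illustrating the exponential-in-depth gap.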
1312.6101 | Concatenated Raptor Codes in NAND Flash Memory | cs.IT math.IT | Two concatenated coding schemes based on fixed-rate Raptor codes are proposed
for error control in NAND flash memory. One is geared for off-line recovery of
uncorrectable pages and the other is designed for page error correction during
the normal read mode. Both proposed coding strategies assume hard-decision
decoding of the inner code with inner decoding failure generating erasure
symbols for the outer Raptor code. Raptor codes allow low-complexity decoding
of very long codewords while providing capacity-approaching performance for
erasure channels. For the off-line page recovery scheme, one whole NAND block
forms a Raptor codeword with each inner codeword typically made up of several
Raptor symbols. An efficient look-up-table strategy is devised for Raptor
encoding and decoding which avoids using large buffers in the controller
despite the substantial size of the Raptor code employed. The potential
performance benefit of the proposed scheme is evaluated in terms of the
probability of block recovery conditioned on the presence of uncorrectable
pages. In the suggested page-error-correction strategy, on the other hand, a
hard-decision-iterating product code is used as the inner code. The specific
product code employed in this work is based on row-column concatenation with
multiple intersecting bits allowing the use of longer component codes. In this
setting the collection of bits captured within each intersection of the
row-column codes acts as the Raptor symbol(s), and the intersections of failed
row codes and column codes are declared as erasures. The error rate analysis
indicates that the proposed concatenation provides a considerable performance
boost relative to the existing error correcting system based on long
Bose-Chaudhuri-Hocquenghem (BCH) codes.
|
1312.6105 | Hybrid Automated Reasoning Tools: from Black-box to Clear-box
Integration | cs.AI | Recently, researchers in answer set programming and constraint programming
spent significant efforts in the development of hybrid languages and solving
algorithms combining the strengths of these traditionally separate fields.
These efforts resulted in a new research area: constraint answer set
programming (CASP). CASP languages and systems proved to be largely successful
at providing efficient solutions to problems involving hybrid reasoning tasks,
such as scheduling problems with elements of planning. Yet, the development of
CASP systems is difficult, requiring non-trivial expertise in multiple areas.
This suggests a need for a study identifying general development principles of
hybrid systems. Once these principles and their implications are well
understood, the development of hybrid languages and systems may become a
well-established and well-understood routine process. As a step in this
direction, in this paper we conduct a case study aimed at evaluating various
integration schemas of CASP methods.
|
1312.6108 | Modeling correlations in spontaneous activity of visual cortex with
centered Gaussian-binary deep Boltzmann machines | cs.NE cs.LG q-bio.NC | Spontaneous cortical activity -- the ongoing cortical activities in absence
of intentional sensory input -- is considered to play a vital role in many
aspects of both normal brain functions and mental dysfunctions. We present a
centered Gaussian-binary Deep Boltzmann Machine (GDBM) for modeling the
activity in early cortical visual areas and relate the random sampling in GDBMs
to the spontaneous cortical activity. After training the proposed model on
natural image patches, we show that the samples collected from the model's
probability distribution encompass similar activity patterns as found in the
spontaneous activity. Specifically, filters having the same orientation
preference tend to be active together during random sampling. Our work
demonstrates that the centered GDBM is a meaningful modeling approach for basic
receptive field properties and the emergence of spontaneous activity patterns
in early cortical visual areas. Moreover, we show empirically that centered
GDBMs do not suffer from the training difficulties of ordinary GDBMs and can be
properly trained without layer-wise pretraining.
|
1312.6110 | Learning Generative Models with Visual Attention | cs.CV | Attention has long been proposed by psychologists as important for
effectively dealing with the enormous sensory stimulus available in the
neocortex. Inspired by the visual attention models in computational
neuroscience and the need for object-centric data in generative models, we
describe a generative learning framework that uses attentional mechanisms.
Attentional mechanisms can propagate signals from a region of interest in a scene
to an aligned canonical representation, where generative modeling takes place.
By ignoring background clutter, generative models can concentrate their
resources on the object of interest. Our model is a proper graphical model
where the 2D Similarity transformation is a part of the top-down process. A
ConvNet is employed to provide good initializations during posterior inference
which is based on Hamiltonian Monte Carlo. Upon learning images of faces, our
model can robustly attend to face regions of novel test subjects. More
importantly, our model can learn generative models of new faces from a novel
dataset of large images where the face locations are not known.
|
1312.6113 | Aspartame: Solving Constraint Satisfaction Problems with Answer Set
Programming | cs.AI | Encoding finite linear CSPs as Boolean formulas and solving them by using
modern SAT solvers has proven to be highly effective, as exemplified by the
award-winning sugar system. We here develop an alternative approach based on
ASP. This allows us to use first-order encodings providing us with a high
degree of flexibility for easy experimentation with different implementations.
The resulting system aspartame re-uses parts of sugar for parsing and
normalizing CSPs. The obtained set of facts is then combined with an ASP
encoding that can be grounded and solved by off-the-shelf ASP systems. We
establish the competitiveness of our approach by empirically contrasting
aspartame and sugar.
|
1312.6114 | Auto-Encoding Variational Bayes | stat.ML cs.LG | How can we perform efficient inference and learning in directed probabilistic
models, in the presence of continuous latent variables with intractable
posterior distributions, and large datasets? We introduce a stochastic
variational inference and learning algorithm that scales to large datasets and,
under some mild differentiability conditions, even works in the intractable
case. Our contributions are two-fold. First, we show that a reparameterization
of the variational lower bound yields a lower bound estimator that can be
straightforwardly optimized using standard stochastic gradient methods. Second,
we show that for i.i.d. datasets with continuous latent variables per
datapoint, posterior inference can be made especially efficient by fitting an
approximate inference model (also called a recognition model) to the
intractable posterior using the proposed lower bound estimator. Theoretical
advantages are reflected in experimental results.
|
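The reparameterization idea at the heart of this abstract can be shown in a few lines: writing $z = \mu + \sigma\epsilon$ with $\epsilon \sim \mathcal{N}(0,1)$ makes $z$ a differentiable function of $(\mu, \sigma)$, so a Monte Carlo average of the pathwise derivative estimates the gradient of an expectation. A minimal sketch (not the paper's full VAE, just the estimator) for $\nabla_\mu\,\mathbb{E}_{z\sim\mathcal{N}(\mu,\sigma^2)}[z^2]$, whose analytic value is $2\mu$:

```python
import numpy as np

def reparam_grad_mu(mu, sigma, n_samples=200_000, seed=0):
    """Pathwise (reparameterization) gradient estimate of
    d/dmu E_{z ~ N(mu, sigma^2)}[z^2]. With z = mu + sigma*eps,
    d(z^2)/dmu = 2z, so we simply average 2z over samples of eps."""
    eps = np.random.default_rng(seed).standard_normal(n_samples)
    z = mu + sigma * eps
    return (2 * z).mean()
```

The same trick applied to the variational lower bound gives the low-variance, SGD-friendly estimator the abstract refers to.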
1312.6115 | Neuronal Synchrony in Complex-Valued Deep Networks | stat.ML cs.LG cs.NE q-bio.NC | Deep learning has recently led to great successes in tasks such as image
recognition (e.g., Krizhevsky et al., 2012). However, deep networks are still
outmatched by the power and versatility of the brain, perhaps in part due to
the richer neuronal computations available to cortical circuits. The challenge
is to identify which neuronal mechanisms are relevant, and to find suitable
abstractions to model them. Here, we show how aspects of spike timing, long
hypothesized to play a crucial role in cortical information processing, could
be incorporated into deep networks to build richer, versatile representations.
We introduce a neural network formulation based on complex-valued neuronal
units that is not only biologically meaningful but also amenable to a variety
of deep learning frameworks. Here, units are attributed both a firing rate and
a phase, the latter indicating properties of spike timing. We show how this
formulation qualitatively captures several aspects thought to be related to
neuronal synchrony, including gating of information processing and dynamic
binding of distributed object representations. Focusing on the latter, we
demonstrate the potential of the approach in several simple experiments. Thus,
neuronal synchrony could be a flexible mechanism that fulfills multiple
functional roles in deep networks.
|
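The rate-and-phase coding the abstract describes maps naturally onto complex numbers: magnitude as firing rate, angle as a proxy for spike timing. A toy unit (my illustration, not the paper's full formulation) shows the synchrony-gating effect: inputs with aligned phases add constructively, while misaligned inputs cancel.

```python
import numpy as np

def complex_unit(inputs, weights):
    """A unit with complex-valued activations: |z| is the output firing rate,
    arg(z) its phase. Phase alignment of the inputs gates how much signal
    gets through."""
    z = np.sum(weights * inputs)
    return np.abs(z), np.angle(z)

w = np.array([1.0, 1.0])
aligned = np.exp(1j * np.array([0.5, 0.5]))          # same phase, rate 1 each
opposed = np.exp(1j * np.array([0.5, 0.5 + np.pi]))  # opposite phases
rate_aligned, _ = complex_unit(aligned, w)  # constructive interference
rate_opposed, _ = complex_unit(opposed, w)  # destructive interference
```

Here the aligned pair yields rate 2 while the opposed pair yields rate ~0, the mechanism the abstract links to gating and dynamic binding.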
1312.6116 | Improving Deep Neural Networks with Probabilistic Maxout Units | stat.ML cs.LG cs.NE | We present a probabilistic variant of the recently introduced maxout unit.
The success of deep neural networks utilizing maxout can partly be attributed
to favorable performance under dropout, when compared to rectified linear
units. It however also depends on the fact that each maxout unit performs a
pooling operation over a group of linear transformations and is thus partially
invariant to changes in its input. Starting from this observation we ask the
question: Can the desirable properties of maxout units be preserved while
improving their invariance properties? We argue that our probabilistic maxout
(probout) units successfully achieve this balance. We quantitatively verify
this claim and report classification performance matching or exceeding the
current state of the art on three challenging image classification benchmarks
(CIFAR-10, CIFAR-100 and SVHN).
|
1312.6117 | Comparison three methods of clustering: k-means, spectral clustering and
hierarchical clustering | cs.LG | We compare three kinds of clustering, deriving a cost function and a loss
function for each and computing them. Since the error rate of a clustering
method, and how to calculate it, is an important factor in evaluating
clustering methods, this paper introduces one way to calculate that error
rate. Clustering algorithms can be divided into several categories, including
partitioning algorithms, hierarchical algorithms, and density-based
algorithms. Generally speaking, clustering algorithms should be compared on
scalability, the ability to work with different attribute types, the shapes of
clusters they can form, how much prior knowledge is needed to set the input
parameters, robustness to noise, insensitivity to the order of the input data,
and the ability to handle high-dimensional data. K-means, one of the simplest
approaches, is studied alongside spectral and hierarchical clustering;
clustering itself is an unsupervised problem.
|
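Of the three methods compared, k-means is the simplest to state concretely. A minimal sketch of Lloyd's algorithm (a generic illustration, not this paper's implementation) alternates nearest-centroid assignment with centroid recomputation, monotonically decreasing the within-cluster sum of squared distances:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means (Lloyd's algorithm). X: (n, d) data; returns cluster
    labels and centroids. The objective minimized is the within-cluster
    sum of squared distances to the centroid."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # init from data points
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                       # assignment step
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):                   # converged
            break
        centers = new                                   # update step
    return labels, centers
```

Given ground-truth labels, an error rate such as the one the paper proposes can then be computed by matching predicted clusters to true classes and counting disagreements.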
1312.6119 | A New Frequency Control Reserve Framework based on Energy-Constrained
Units | cs.SY | Frequency control reserves are an essential ancillary service in any electric
power system, guaranteeing that generation and demand of active power are
balanced at all times. Traditionally, conventional power plants are used for
frequency reserves. There are economical and technical benefits of instead
using energy constrained units such as storage systems and demand response, but
so far they have not been widely adopted as their energy constraints prevent
them from following traditional regulation signals, which sometimes are biased
over long time-spans. This paper proposes a frequency control framework that
splits the control signals according to the frequency spectrum. This guarantees
that all control signals are zero-mean over well-defined time-periods, which is
a crucial requirement for the use of energy-constrained units such as
batteries. A case-study presents a possible implementation, and shows how
different technologies with widely varying characteristics can all participate
in frequency control reserve provision, while guaranteeing that their
respective energy constraints are always fulfilled.
|
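The core mechanism, splitting a regulation signal by frequency so that the fast component is guaranteed zero-mean, can be illustrated with a simple FFT mask (my sketch; the paper's actual filter design may differ). Zeroing the low-frequency bins of the fast component removes its DC content, so its mean over the horizon is exactly zero, which is what allows a battery to track it without drifting against its energy limits.

```python
import numpy as np

def split_by_frequency(signal, cutoff_bin):
    """Split a regulation signal into a slow component (for conventional plants)
    and a fast, zero-mean residual (for energy-constrained units) using an
    FFT mask. The two components sum exactly to the original signal."""
    F = np.fft.rfft(signal)
    low = F.copy()
    low[cutoff_bin:] = 0            # keep only the slow spectral content
    high = F - low                  # fast content; DC bin is zero by construction
    n = len(signal)
    return np.fft.irfft(low, n=n), np.fft.irfft(high, n=n)
```

In a real implementation this would run causally (e.g., with moving-average filters) rather than on the full horizon at once.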
1312.6120 | Exact solutions to the nonlinear dynamics of learning in deep linear
neural networks | cs.NE cond-mat.dis-nn cs.CV cs.LG q-bio.NC stat.ML | Despite the widespread practical success of deep learning methods, our
theoretical understanding of the dynamics of learning in deep neural networks
remains quite sparse. We attempt to bridge the gap between the theory and
practice of deep learning by systematically analyzing learning dynamics for the
restricted case of deep linear neural networks. Despite the linearity of their
input-output map, such networks have nonlinear gradient descent dynamics on
weights that change with the addition of each new hidden layer. We show that
deep linear networks exhibit nonlinear learning phenomena similar to those seen
in simulations of nonlinear networks, including long plateaus followed by rapid
transitions to lower error solutions, and faster convergence from greedy
unsupervised pretraining initial conditions than from random initial
conditions. We provide an analytical description of these phenomena by finding
new exact solutions to the nonlinear dynamics of deep learning. Our theoretical
analysis also reveals the surprising finding that as the depth of a network
approaches infinity, learning speed can nevertheless remain finite: for a
special class of initial conditions on the weights, very deep networks incur
only a finite, depth independent, delay in learning speed relative to shallow
networks. We show that, under certain conditions on the training data,
unsupervised pretraining can find this special class of initial conditions,
while scaled random Gaussian initializations cannot. We further exhibit a new
class of random orthogonal initial conditions on weights that, like
unsupervised pre-training, enjoys depth independent learning times. We further
show that these initial conditions also lead to faithful propagation of
gradients even in deep nonlinear networks, as long as they operate in a special
regime known as the edge of chaos.
|
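The random orthogonal initializations the abstract highlights are easy to construct: take the QR decomposition of a Gaussian matrix and fix the signs so the result is uniformly distributed over orthogonal matrices. In a deep *linear* network, the forward map is just the product of the weight matrices, so orthogonal layers preserve norms exactly at any depth, a simple proxy (my sketch, not the paper's analysis) for the faithful gradient propagation it describes.

```python
import numpy as np

def orthogonal_init(n, seed=0):
    """Random orthogonal n x n weight matrix via QR of a Gaussian matrix.
    The sign fix makes the distribution Haar-uniform."""
    rng = np.random.default_rng(seed)
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))

def forward(Ws, x):
    """A deep linear network is just the product of its weight matrices."""
    for W in Ws:
        x = W @ x
    return x
```

With scaled Gaussian initializations, by contrast, the norm of the signal grows or shrinks exponentially with depth, which is one intuition behind the depth-dependent slowdowns the paper analyzes.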
1312.6122 | Shadow networks: Discovering hidden nodes with models of information
flow | physics.soc-ph cond-mat.dis-nn cs.SI physics.data-an | Complex, dynamic networks underlie many systems, and understanding these
networks is the concern of a great span of important scientific and engineering
problems. Quantitative description is crucial for this understanding yet, due
to a range of measurement problems, many real network datasets are incomplete.
Here we explore how accidentally missing or deliberately hidden nodes may be
detected in networks by the effect of their absence on predictions of the speed
with which information flows through the network. We use Symbolic Regression
(SR) to learn models relating information flow to network topology. These
models show localized, systematic, and non-random discrepancies when applied to
test networks with intentionally masked nodes, demonstrating the ability to
detect the presence of missing nodes and where in the network those nodes are
likely to reside.
|
1312.6130 | A Functional View of Strong Negation in Answer Set Programming | cs.AI | The distinction between strong negation and default negation has been useful
in answer set programming. We present an alternative account of strong
negation, which lets us view strong negation in terms of the functional stable
model semantics by Bartholomew and Lee. More specifically, we show that, under
complete interpretations, minimizing both positive and negative literals in the
traditional answer set semantics is essentially the same as ensuring the
uniqueness of Boolean function values under the functional stable model
semantics. The same account lets us view Lifschitz's two-valued logic programs
as a special case of the functional stable model semantics. In addition, we
show how non-Boolean intensional functions can be eliminated in favor of
Boolean intensional functions, and furthermore can be represented using strong
negation, which provides a way to compute the functional stable model semantics
using existing ASP solvers. We also note that similar results hold with the
functional stable model semantics by Cabalar.
|
1312.6134 | An Algebra of Causal Chains | cs.AI | In this work we propose a multi-valued extension of logic programs under the
stable models semantics where each true atom in a model is associated with a
set of justifications, in a similar spirit to a set of proof trees. The main
contribution of this paper is that we capture justifications into an algebra of
truth values with three internal operations: an addition '+' representing
alternative justifications for a formula, a commutative product '*'
representing joint interaction of causes and a non-commutative product '.'
acting as a concatenation or proof constructor. Using this multi-valued
semantics, we obtain a one-to-one correspondence between the syntactic proof
tree of a standard (non-causal) logic program and the interpretation of each
true atom in a model. Furthermore, thanks to this algebraic characterization we
can detect semantic properties like redundancy and relevance of the obtained
justifications. We also identify a lattice-based characterization of this
algebra, defining a direct consequences operator, proving its continuity and
that its least fixpoint can be computed after a finite number of iterations.
Finally, we define the concept of causal stable model by introducing an
analogous transformation to Gelfond and Lifschitz's program reduct.
|
1312.6138 | Query Answering in Object Oriented Knowledge Bases in Logic Programming:
Description and Challenge for ASP | cs.AI | Research on developing efficient and scalable ASP solvers can substantially
benefit from the availability of data sets to experiment with. KB_Bio_101
contains knowledge from a biology textbook, has been developed as part of
Project Halo, and has recently become available for research use. KB_Bio_101 is
one of the largest KBs available in ASP and the reasoning with it is
undecidable in general. We give a description of this KB and ASP programs for a
suite of queries that have been of practical interest. We explain why these
queries pose significant practical challenges for the current ASP solvers.
|
1312.6140 | The DIAMOND System for Argumentation: Preliminary Report | cs.AI | Abstract dialectical frameworks (ADFs) are a powerful generalisation of
Dung's abstract argumentation frameworks. In this paper we present an answer
set programming based software system, called DIAMOND (DIAlectical MOdels
eNcoDing). It translates ADFs into answer set programs whose stable models
correspond to models of the ADF with respect to several semantics (i.e.
admissible, complete, stable, grounded).
|
1312.6143 | A System for Interactive Query Answering with Answer Set Programming | cs.AI | Reactive answer set programming has paved the way for incorporating online
information into operative solving processes. Although this technology was
originally devised for dealing with data streams in dynamic environments, like
assisted living and cognitive robotics, it can likewise be used to incorporate
facts, rules, or queries provided by a user. As a result, we present the design
and implementation of a system for interactive query answering with reactive
answer set programming. Our system quontroller is based on the reactive solver
oclingo and implemented as a dedicated front-end. We describe its functionality
and implementation, and we illustrate its features by some selected use cases.
|
1312.6146 | Generating Shortest Synchronizing Sequences using Answer Set Programming | cs.AI | For a finite state automaton, a synchronizing sequence is an input sequence
that takes all the states to the same state. Checking the existence of a
synchronizing sequence and finding a synchronizing sequence, if one exists, can
be performed in polynomial time. However, the problem of finding a shortest
synchronizing sequence is known to be NP-hard. In this work, the usefulness of
Answer Set Programming to solve this optimization problem is investigated, in
comparison with brute-force algorithms and SAT-based approaches.
Keywords: finite automata, shortest synchronizing sequence, ASP
|
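The brute-force baseline mentioned above can be sketched directly: breadth-first search over *sets* of states finds a shortest synchronizing sequence, since BFS explores words in order of length. This is exponential in the number of states (consistent with the NP-hardness of the optimization problem), so it only works for small automata; it is my generic illustration, not the paper's ASP encoding.

```python
from collections import deque

def shortest_sync_sequence(transitions, n_states):
    """BFS over subsets of states. transitions: dict letter -> tuple where
    transitions[letter][s] is the successor of state s. Returns a shortest
    synchronizing word as a list of letters, or None if none exists."""
    start = frozenset(range(n_states))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        states, word = queue.popleft()
        if len(states) == 1:            # all states merged: word synchronizes
            return word
        for letter, t in transitions.items():
            nxt = frozenset(t[s] for s in states)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + [letter]))
    return None
```

On the 3-state Cerny automaton (a: cyclic shift, b: merges states 2 and 0) this returns a word of the known extremal length (n-1)^2 = 4.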
1312.6149 | On the Semantics of Gringo | cs.AI cs.LO | Input languages of answer set solvers are based on the mathematically simple
concept of a stable model. But many useful constructs available in these
languages, including local variables, conditional literals, and aggregates,
cannot be easily explained in terms of stable models in the sense of the
original definition of this concept and its straightforward generalizations.
Manuals written by designers of answer set solvers usually explain such
constructs using examples and informal comments that appeal to the user's
intuition, without references to any precise semantics. We propose to approach
the problem of defining the semantics of gringo programs by translating them
into the language of infinitary propositional formulas. This semantics allows
us to study equivalent transformations of gringo programs using natural
deduction in infinitary propositional logic.
|
1312.6150 | A Review on Automated Brain Tumor Detection and Segmentation from MRI of
Brain | cs.CV | Tumor segmentation from magnetic resonance imaging (MRI) data is an important
but time consuming manual task performed by medical experts. Automating this
process is a challenging task because of the high diversity in the appearance
of tumor tissues among different patients and in many cases similarity with the
normal tissues. MRI is an advanced medical imaging technique providing rich
information about the human soft-tissue anatomy. There are different brain
tumor detection and segmentation methods to detect and segment a brain tumor
from MRI images. These detection and segmentation approaches are reviewed with
an emphasis on the advantages and drawbacks of each method for brain tumor
detection and segmentation. The use of MRI image detection and segmentation in
different procedures is also described. A brief review of different
segmentation methods for detecting brain tumors from MRI of the brain is thus
presented.
|
1312.6151 | Abstract Modular Systems and Solvers | cs.AI | Integrating diverse formalisms into modular knowledge representation systems
offers increased expressivity, modeling convenience and computational benefits.
We introduce concepts of abstract modules and abstract modular systems to study
general principles behind the design and analysis of model-finding programs, or
solvers, for integrated heterogeneous multi-logic systems. We show how abstract
modules and abstract modular systems give rise to transition systems, which are
a natural and convenient representation of solvers pioneered by the SAT
community. We illustrate our approach by showing how it applies to answer set
programming and propositional logic, and to multi-logic systems based on these
two formalisms.
|
1312.6156 | Negation in the Head of CP-logic Rules | cs.AI | CP-logic is a probabilistic extension of the logic FO(ID). Unlike ASP, both
of these logics adhere to a Tarskian informal semantics, in which
interpretations represent objective states-of-affairs. In other words, these
logics lack the epistemic component of ASP, in which interpretations represent
the beliefs or knowledge of a rational agent. Consequently, neither CP-logic
nor FO(ID) has any need for two kinds of negation: there is only one
negation, and its meaning is that of objective falsehood. Nevertheless, the
formal semantics of this objective negation is mathematically more similar to
ASP's negation-as-failure than to its classical negation. The reason is that
both CP-logic and FO(ID) have a constructive semantics in which all atoms start
out as false, and may only become true as the result of a rule application.
This paper investigates the possibility of adding the well-known ASP feature of
allowing negation in the head of rules to CP-logic. Because CP-logic only has
one kind of negation, it is of necessity this ''negation-as-failure like''
negation that will be allowed in the head. We investigate the intuitive meaning
of such a construct and the benefits that arise from it.
|
1312.6157 | Distinction between features extracted using deep belief networks | cs.LG cs.NE | Data representation is an important pre-processing step in many machine
learning algorithms. There are a number of methods used for this task such as
Deep Belief Networks (DBNs) and Discrete Fourier Transforms (DFTs). Since some
of the features extracted using automated feature extraction methods may not
always be related to a specific machine learning task, in this paper we propose
two methods in order to make a distinction between extracted features based on
their relevancy to the task. We applied these two methods to a Deep Belief
Network trained for a face recognition task.
|
1312.6158 | Deep Belief Networks for Image Denoising | cs.LG cs.CV cs.NE | Deep Belief Networks which are hierarchical generative models are effective
tools for feature representation and extraction. Furthermore, DBNs can be used
in numerous aspects of Machine Learning such as image denoising. In this paper,
we propose a novel method for image denoising which relies on the DBNs' ability
in feature representation. This work is based upon learning the noise
behavior. Generally, features extracted using DBNs are presented as
the values of the last-layer nodes. We train a DBN in such a way that the
network fully distinguishes between nodes representing noise and nodes
representing image content in the last layer of the DBN, i.e. the nodes in the
last layer of the trained DBN are divided into two distinct groups. After
detecting the nodes that represent the noise, we are able to deactivate the noise nodes
and reconstruct a noiseless image. In section 4 we explore the results of
applying this method on the MNIST dataset of handwritten digits which is
corrupted with additive white Gaussian noise (AWGN). A reduction of 65.9% in
average mean square error (MSE) was achieved when the proposed method was used
for the reconstruction of the noisy images.
|
1312.6159 | Learned versus Hand-Designed Feature Representations for 3d
Agglomeration | cs.CV | For image recognition and labeling tasks, recent results suggest that machine
learning methods that rely on manually specified feature representations may be
outperformed by methods that automatically derive feature representations based
on the data. Yet for problems that involve analysis of 3d objects, such as mesh
segmentation, shape retrieval, or neuron fragment agglomeration, there remains
a strong reliance on hand-designed feature descriptors. In this paper, we
evaluate a large set of hand-designed 3d feature descriptors alongside features
learned from the raw data using both end-to-end and unsupervised learning
techniques, in the context of agglomeration of 3d neuron fragments. By
combining unsupervised learning techniques with a novel dynamic pooling scheme,
we show that pure learning-based methods are, for the first time, competitive with
hand-designed 3d shape descriptors. We investigate data augmentation strategies
for dramatically increasing the size of the training set, and show how
combining both learned and hand-designed features leads to the highest
accuracy.
|
1312.6168 | Factorial Hidden Markov Models for Learning Representations of Natural
Language | cs.LG cs.CL | Most representation learning algorithms for language and image processing are
local, in that they identify features for a data point based on surrounding
points. Yet in language processing, the correct meaning of a word often depends
on its global context. As a step toward incorporating global context into
representation learning, we develop a representation learning algorithm that
incorporates joint prediction into its technique for producing features for a
word. We develop efficient variational methods for learning Factorial Hidden
Markov Models from large texts, and use variational distributions to produce
features for each word that are sensitive to the entire input sequence, not
just to a local context window. Experiments on part-of-speech tagging and
chunking indicate that the features are competitive with or better than
existing state-of-the-art representation learning methods.
|
1312.6169 | Learning Information Spread in Content Networks | cs.LG cs.SI physics.soc-ph | We introduce a model for predicting the diffusion of content information on
social media. Whereas propagation is usually modeled on discrete graph structures,
we introduce here a continuous diffusion model, where nodes in a diffusion
cascade are projected onto a latent space with the property that their
proximity in this space reflects the temporal diffusion process. We focus on
the task of predicting contaminated users for an initial information
source and provide preliminary results on different datasets.
|
1312.6171 | Learning Paired-associate Images with An Unsupervised Deep Learning
Architecture | cs.NE cs.CV cs.LG | This paper presents an unsupervised multi-modal learning system that learns
associative representation from two input modalities, or channels, such that
input on one channel will correctly generate the associated response at the
other and vice versa. In this way, the system develops a kind of supervised
classification model meant to simulate aspects of human associative memory. The
system uses a deep learning architecture (DLA) composed of two input/output
channels formed from stacked Restricted Boltzmann Machines (RBM) and an
associative memory network that combines the two channels. The DLA is trained
on pairs of MNIST handwritten digit images to develop hierarchical features and
associative representations that are able to reconstruct one image given its
paired-associate. Experiments show that the multi-modal learning system
generates models that are as accurate as back-propagation networks but with the
advantage of a bi-directional network and unsupervised learning from either
paired or non-paired training examples.
|
1312.6173 | Multilingual Distributed Representations without Word Alignment | cs.CL | Distributed representations of meaning are a natural way to encode covariance
relationships between words and phrases in NLP. By overcoming data sparsity
problems, as well as providing information about semantic relatedness which is
not available in discrete representations, distributed representations have
proven useful in many NLP tasks. Recent work has shown how compositional
semantic representations can successfully be applied to a number of monolingual
applications such as sentiment analysis. At the same time, there has been some
initial success in work on learning shared word-level representations across
languages. We combine these two approaches by proposing a method for learning
distributed representations in a multilingual setup. Our model learns to assign
similar embeddings to aligned sentences and dissimilar ones to sentences which
are not aligned, while not requiring word alignments. We show that our
representations are semantically informative and apply them to a cross-lingual
document classification task where we outperform the previous state of the art.
Further, by employing parallel corpora of multiple language pairs we find that
our model learns representations that capture semantic relationships across
languages for which no parallel data was used.
|
1312.6180 | Manifold regularized kernel logistic regression for web image annotation | cs.LG cs.MM | With the rapid advance of Internet technology and smart devices, users often
need to manage large amounts of multimedia information using smart devices,
such as personal image and video accessing and browsing. These requirements
heavily rely on the success of image (video) annotation, and thus large scale
image annotation through innovative machine learning methods has attracted
intensive attention in recent years. One representative work is support vector
machine (SVM). Although it works well in binary classification, SVM has a
non-smooth loss function and cannot naturally cover the multi-class case. In this
paper, we propose manifold regularized kernel logistic regression (KLR) for web
image annotation. Compared to SVM, KLR has the following advantages: (1) the
KLR has a smooth loss function; (2) the KLR produces an explicit estimate of
the probability instead of the class label; and (3) the KLR can naturally be
generalized to the multi-class case. We carefully conduct experiments on MIR
FLICKR dataset and demonstrate the effectiveness of manifold regularized kernel
logistic regression for image annotation.
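To make the claimed advantages concrete, here is a minimal, hypothetical sketch of binary manifold-regularized kernel logistic regression (not the authors' implementation): a smooth logistic loss plus an RKHS-norm term and a graph-Laplacian term, minimized by plain gradient descent. The kernel width, kNN graph construction, and regularization weights are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, Y, s=1.0):
    # Gaussian kernel between row sets X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

def knn_laplacian(K, k=3):
    # graph Laplacian of a symmetrized kNN graph built from kernel similarities
    W = np.zeros_like(K)
    for i in range(len(K)):
        idx = np.argsort(-K[i])[1:k + 1]   # skip self, keep k nearest
        W[i, idx] = K[i, idx]
    W = np.maximum(W, W.T)
    return np.diag(W.sum(1)) - W

def fit_klr(X, y, lam_k=1e-3, lam_m=1e-2, lr=0.1, iters=500):
    # minimize: logistic loss + lam_k * ||f||_K^2 + lam_m * f^T L f,  f = K @ a
    K = rbf_kernel(X, X)
    L = knn_laplacian(K)
    a = np.zeros(len(X))
    for _ in range(iters):
        f = K @ a
        p = 1.0 / (1.0 + np.exp(-f))       # smooth loss -> explicit probabilities
        grad = K @ (p - (y + 1) / 2) + 2 * lam_k * (K @ a) + 2 * lam_m * (K @ (L @ f))
        a -= lr * grad / len(X)
    return a, K

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
a, K = fit_klr(X, y)
acc = np.mean(np.sign(K @ a) == y)
print(acc)
```

The smoothness of the logistic loss is what allows the simple gradient loop above; a hinge loss (as in SVM) would not be differentiable everywhere.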
|
1312.6182 | Large-Scale Paralleled Sparse Principal Component Analysis | cs.MS cs.LG cs.NA stat.ML | Principal component analysis (PCA) is a statistical technique commonly used
in multivariate data analysis. However, PCA can be difficult to interpret and
explain since the principal components (PCs) are linear combinations of the
original variables. Sparse PCA (SPCA) aims to balance statistical fidelity and
interpretability by approximating sparse PCs whose projections capture the
maximal variance of original data. In this paper we present an efficient and
paralleled method of SPCA using graphics processing units (GPUs), which can
process large blocks of data in parallel. Specifically, we construct parallel
implementations of the four optimization formulations of the generalized power
method of SPCA (GP-SPCA), one of the most efficient and effective SPCA
approaches, on a GPU. The parallel GPU implementation of GP-SPCA (using CUBLAS)
is up to eleven times faster than the corresponding CPU implementation (using
CBLAS), and up to 107 times faster than a MATLAB implementation. Extensive
comparative experiments on several real-world datasets confirm that SPCA offers
a practical advantage.
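As a rough CPU-only illustration of the idea behind the generalized power method of SPCA, here is a simplified sketch (a thresholded power iteration; this is not the paper's GP-SPCA formulation or its CUBLAS/GPU code, and the threshold and data are made up):

```python
import numpy as np

def sparse_pc(X, gamma=0.5, iters=100):
    """One sparse leading component via a soft-thresholded power iteration --
    a simplified stand-in for the generalized power method of SPCA."""
    S = X.T @ X / len(X)                      # sample covariance
    z = np.ones(S.shape[1]) / np.sqrt(S.shape[1])
    for _ in range(iters):
        w = S @ z                             # power-method step
        w = np.sign(w) * np.maximum(np.abs(w) - gamma, 0.0)  # soft-threshold
        nrm = np.linalg.norm(w)
        if nrm == 0.0:                        # gamma too aggressive: all zeroed
            break
        z = w / nrm
    return z

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
X[:, 0] *= 5.0                                # one high-variance direction
z = sparse_pc(X)
print(z)                                      # loading concentrates on feature 0
```

The sparse loading keeps the interpretability benefit the abstract describes: each component involves only a few original variables, unlike a dense PC.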
|
1312.6184 | Do Deep Nets Really Need to be Deep? | cs.LG cs.NE | Currently, deep neural networks are the state of the art on problems such as
speech recognition and computer vision. In this extended abstract, we show that
shallow feed-forward networks can learn the complex functions previously
learned by deep nets and achieve accuracies previously only achievable with
deep models. Moreover, in some cases the shallow neural nets can learn these
deep functions using a total number of parameters similar to the original deep
model. We evaluate our method on the TIMIT phoneme recognition task and are
able to train shallow fully-connected nets that perform similarly to complex,
well-engineered, deep convolutional architectures. Our success in training
shallow neural nets to mimic deeper models suggests that there probably exist
better algorithms for training shallow feed-forward nets than those currently
available.
|
1312.6186 | GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network
Training | cs.CV cs.DC cs.LG cs.NE | The ability to train large-scale neural networks has resulted in
state-of-the-art performance in many areas of computer vision. These results
have largely come from computational breakthroughs of two forms: model
parallelism, e.g. GPU accelerated training, which has seen quick adoption in
computer vision circles, and data parallelism, e.g. A-SGD, whose large scale
has been used mostly in industry. We report early experiments with a system
that makes use of both model parallelism and data parallelism, which we call
GPU A-SGD. We show that using GPU A-SGD it is possible to speed up training of large
convolutional neural networks useful for computer vision. We believe GPU A-SGD
will make it possible to train larger networks on larger training sets in a
reasonable amount of time.
|
1312.6189 | Coping with Physical Attacks on Random Network Structures | cs.SY cs.SI math.OC | Communication networks are vulnerable to natural disasters, such as
earthquakes or floods, as well as to physical attacks, such as an
Electromagnetic Pulse (EMP) attack. Such real-world events happen at specific
geographical locations and disrupt specific parts of the network. Therefore,
the geographical layout of the network determines the impact of such events on
the network's physical topology in terms of capacity, connectivity, and flow.
Recent works focused on assessing the vulnerability of a deterministic
network to such events. In this work, we focus on assessing the vulnerability
of (geographical) random networks to such disasters. We consider stochastic
graph models in which nodes and links are probabilistically distributed on a
plane, and model the disaster event as a circular cut that destroys any node or
link within or intersecting the circle.
We develop algorithms for assessing the damage of both targeted and
non-targeted (random) attacks and determining which attack locations have the
expected most disruptive impact on the network. Then, we provide experimental
results for assessing the impact of circular disasters to communications
networks in the USA, where the network's geographical layout was modeled
probabilistically, relying on demographic information only. Our results
demonstrate the applicability of our algorithms to real-world scenarios.
Our algorithms allow one to examine how valuable public information about the
network's geographical area (e.g., demography, topography, economy) is to an
attacker's destruction-assessment capabilities when the network's physical
topology is hidden, and to examine the effect of hiding the actual physical
location of the fibers on the attack strategy. Thereby, our schemes can be used
as a tool for policy makers and engineers to design more robust networks and to
identify locations which require additional protection efforts.
|
1312.6190 | Adaptive Feature Ranking for Unsupervised Transfer Learning | cs.LG | Transfer Learning is concerned with the application of knowledge gained from
solving a problem to a different but related problem domain. In this paper, we
propose a method and efficient algorithm for ranking and selecting
representations from a Restricted Boltzmann Machine trained on a source domain
to be transferred onto a target domain. Experiments carried out using the
MNIST, ICDAR and TiCC image datasets show that the proposed adaptive feature
ranking and transfer learning method offers statistically significant
improvements on the training of RBMs. Our method is general in that the
knowledge chosen by the ranking function does not depend on its relation to any
specific target domain, and it works with unsupervised learning and
knowledge-based transfer.
|
1312.6192 | Can recursive neural tensor networks learn logical reasoning? | cs.CL cs.LG | Recursive neural network models and their accompanying vector representations
for words have seen success in an array of increasingly semantically
sophisticated tasks, but almost nothing is known about their ability to
accurately capture the aspects of linguistic meaning that are necessary for
interpretation or reasoning. To evaluate this, I train a recursive model on a
new corpus of constructed examples of logical reasoning in short sentences,
like the inference of "some animal walks" from "some dog walks" or "some cat
walks," given that dogs and cats are animals. This model learns representations
that generalize well to new types of reasoning pattern in all but a few cases,
a result which is promising for the ability of learned representation models to
capture logical reasoning.
|
1312.6197 | An empirical analysis of dropout in piecewise linear networks | stat.ML cs.LG cs.NE | The recently introduced dropout training criterion for neural networks has
been the subject of much attention due to its simplicity and remarkable
effectiveness as a regularizer, as well as its interpretation as a training
procedure for an exponentially large ensemble of networks that share
parameters. In this work we empirically investigate several questions related
to the efficacy of dropout, specifically as it concerns networks employing the
popular rectified linear activation function. We investigate the quality of the
test time weight-scaling inference procedure by evaluating the geometric
average exactly in small models, as well as compare the performance of the
geometric mean to the arithmetic mean more commonly employed by ensemble
techniques. We explore the effect of tied weights on the ensemble
interpretation by training ensembles of masked networks without tied weights.
Finally, we investigate an alternative criterion based on a biased estimator of
the maximum likelihood ensemble gradient.
|
1312.6199 | Intriguing properties of neural networks | cs.CV cs.LG cs.NE | Deep neural networks are highly expressive models that have recently achieved
state of the art performance on speech and visual recognition tasks. While
their expressiveness is the reason they succeed, it also causes them to learn
uninterpretable solutions that could have counter-intuitive properties. In this
paper we report two such properties.
First, we find that there is no distinction between individual high level
units and random linear combinations of high level units, according to various
methods of unit analysis. This suggests that it is the space, rather than the
individual units, that contains the semantic information in the high layers
of neural networks.
Second, we find that deep neural networks learn input-output mappings that
are fairly discontinuous to a significant extent. We can cause the network to
misclassify an image by applying a certain imperceptible perturbation, which is
found by maximizing the network's prediction error. In addition, the specific
nature of these perturbations is not a random artifact of learning: the same
perturbation can cause a different network, that was trained on a different
subset of the dataset, to misclassify the same input.
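The second property can be illustrated on a toy model. Below is a hypothetical sketch (not the paper's box-constrained optimization procedure): for a small linear softmax classifier, we ascend the gradient of the prediction error with respect to the input, which is the mechanism the abstract describes for finding the perturbation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))                 # toy 3-class linear "network"
x = rng.normal(size=4)
label = int(np.argmax(softmax(W @ x)))      # the model's original prediction
p0 = softmax(W @ x)[label]

r = np.zeros_like(x)                        # the perturbation we search for
for _ in range(200):
    p = softmax(W @ (x + r))
    grad = W.T @ (p - np.eye(3)[label])     # d(cross-entropy)/d(input)
    r += 0.05 * grad                        # ascend the prediction error
    r = np.clip(r, -1.0, 1.0)               # keep the perturbation bounded

p1 = softmax(W @ (x + r))[label]
print(p0, p1)                               # confidence in the original label drops
```

For deep networks the same gradient-ascent principle applies, except the gradient is obtained by backpropagation through the trained model rather than from a closed-form linear map.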
|
1312.6203 | Spectral Networks and Locally Connected Networks on Graphs | cs.LG cs.CV cs.NE | Convolutional Neural Networks are extremely efficient architectures in image
and audio recognition tasks, thanks to their ability to exploit the local
translational invariance of signal classes over their domain. In this paper we
consider possible generalizations of CNNs to signals defined on more general
domains without the action of a translation group. In particular, we propose
two constructions, one based upon a hierarchical clustering of the domain, and
another based on the spectrum of the graph Laplacian. We show through
experiments that for low-dimensional graphs it is possible to learn
convolutional layers with a number of parameters independent of the input size,
resulting in efficient deep architectures.
|
1312.6204 | One-Shot Adaptation of Supervised Deep Convolutional Models | cs.CV cs.LG cs.NE | Dataset bias remains a significant barrier towards solving real world
computer vision tasks. Though deep convolutional networks have proven to be a
competitive approach for image classification, a question remains: have these
models solved the dataset bias problem? In general, training or
fine-tuning a state-of-the-art deep model on a new domain requires a
significant amount of data, which for many applications is simply not
available. Transfer of models directly to new domains without adaptation has
historically led to poor recognition performance. In this paper, we pose the
following question: is a single image dataset, much larger than previously
explored for adaptation, comprehensive enough to learn general deep models that
may be effectively applied to new image domains? In other words, are deep CNNs
trained on large amounts of labeled data as susceptible to dataset bias as
previous methods have been shown to be? We show that a generic supervised deep
CNN model trained on a large dataset reduces, but does not remove, dataset
bias. Furthermore, we propose several methods for adaptation with deep models
that are able to operate with little (one example per category) or no labeled
domain specific data. Our experiments show that adaptation of deep models on
benchmark visual domain adaptation datasets can provide a significant
performance boost.
|
1312.6205 | Relaxations for inference in restricted Boltzmann machines | stat.ML cs.LG | We propose a relaxation-based approximate inference algorithm that samples
near-MAP configurations of a binary pairwise Markov random field. We experiment
on MAP inference tasks in several restricted Boltzmann machines. We also use
our underlying sampler to estimate the log-partition function of restricted
Boltzmann machines and compare against other sampling-based methods.
|
1312.6208 | Total variation with overlapping group sparsity for image deblurring
under impulse noise | math.NA cs.CV | The total variation (TV) regularization method is an effective method for
image deblurring in preserving edges. However, the TV based solutions usually
have some staircase effects. In this paper, in order to alleviate the staircase
effect, we propose a new model for restoring blurred images with impulse noise.
The model consists of an $\ell_1$-fidelity term and a TV with overlapping group
sparsity (OGS) regularization term. Moreover, we impose a box constraint on the
proposed model for getting more accurate solutions. An efficient and effective
algorithm is proposed to solve the model under the framework of the alternating
direction method of multipliers (ADMM). We use an inner loop which is nested
inside the majorization minimization (MM) iteration for the subproblem of the
proposed method. Compared with other methods, numerical results illustrate that
the proposed method can significantly improve the restoration quality, both in
avoiding staircase effects and in terms of peak signal-to-noise ratio (PSNR)
and relative error (ReE).
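In symbols, the model described above can be written as follows (notation assumed, not taken from the paper: $K$ is the blurring operator, $f$ the observed image, and $\Phi_{\mathrm{OGS}}$ the TV term with overlapping group sparsity):

```latex
\min_{u \in [0,255]^{n}} \; \|Ku - f\|_{1} \;+\; \lambda\, \Phi_{\mathrm{OGS}}(\nabla u)
```

The $\ell_1$ fidelity term suits impulse noise, the box constraint $[0,255]^n$ reflects the valid pixel range, and ADMM decouples the two nonsmooth terms into tractable subproblems.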
|
1312.6211 | An Empirical Investigation of Catastrophic Forgetting in Gradient-Based
Neural Networks | stat.ML cs.LG cs.NE | Catastrophic forgetting is a problem faced by many machine learning models
and algorithms. When trained on one task, then trained on a second task, many
machine learning models "forget" how to perform the first task. This is widely
believed to be a serious problem for neural networks. Here, we investigate the
extent to which the catastrophic forgetting problem occurs for modern neural
networks, comparing both established and recent gradient-based training
algorithms and activation functions. We also examine the effect of the
relationship between the first task and the second task on catastrophic
forgetting. We find that it is always best to train using the dropout
algorithm--the dropout algorithm is consistently best at adapting to the new
task, remembering the old task, and has the best tradeoff curve between these
two extremes. We find that different tasks and relationships between tasks
result in very different rankings of activation function performance. This
suggests the choice of activation function should always be cross-validated.
|
1312.6214 | Volumetric Spanners: an Efficient Exploration Basis for Learning | cs.LG cs.AI cs.DS | Numerous machine learning problems require an exploration basis - a mechanism
to explore the action space. We define a novel geometric notion of exploration
basis with low variance, called volumetric spanners, and give efficient
algorithms to construct such a basis.
We show how efficient volumetric spanners give rise to the first efficient
and optimal regret algorithm for bandit linear optimization over general convex
sets. Previously such results were known only for specific convex sets, or
under special conditions such as the existence of an efficient self-concordant
barrier for the underlying set.
|
1312.6215 | Sensor management for multi-target tracking via Multi-Bernoulli
filtering | cs.SY | In multi-object stochastic systems, the issue of sensor management is a
theoretically and computationally challenging problem. In this paper, we
present a novel random finite set (RFS) approach to the multi-target sensor
management problem within the partially observed Markov decision process
(POMDP) framework. The multi-target state is modelled as a multi-Bernoulli RFS,
and the multi-Bernoulli filter is used in conjunction with two different
control objectives: maximizing the expected R\'enyi divergence between the
predicted and updated densities, and minimizing the expected posterior
cardinality variance. Numerical studies are presented in two scenarios where a
mobile sensor tracks five moving targets with different levels of
observability.
|
1312.6219 | Extracting Region of Interest for Palm Print Authentication | cs.CV | Biometric authentication is an effective method for automatically
recognizing individuals. The authentication consists of an enrollment phase and
an identification or verification phase. In the enrollment stage, known
(training) samples, after pre-processing, are used for suitable feature
extraction to generate the template database. In the verification stage, the
test sample is similarly pre-processed and subjected to feature extraction
modules, and then it is matched with the training feature templates to decide
whether it is genuine or not. This paper presents the use of a region of interest
(ROI) for palm print technology. First some of the existing methods for palm
print identification have been introduced. Then focus has been given on
extraction of a suitable smaller region from the acquired palm print to improve
the identification accuracy of the method. Several existing works on the topic of
region extraction have been examined. Subsequently, a simple and original
method is then proposed for locating the ROI that can be effectively used for
palm print analysis. The ROI extracted using this new technique is suitable for
different types of processing as it creates a rectangular or square area around
the center of activity represented by the lines, wrinkles and ridges of the
palm print. The effectiveness of the ROI approach has been tested by
integrating it with a texture based identification / authentication system
proposed earlier. The improvement has been shown by comparing the
identification accuracy rate before and after the ROI pre-processing.
|
1312.6224 | The Cauchy-Schwarz divergence for Poisson point processes | cs.IT math.IT | In this paper, we extend the notion of Cauchy-Schwarz divergence to point
processes and establish that the Cauchy-Schwarz divergence between the
probability densities of two Poisson point processes is half the squared
$\mathbf{L^{2}}$-distance between their intensity functions. Extension of this
result to mixtures of Poisson point processes and, in the case where the
intensity functions are Gaussian mixtures, closed form expressions for the
Cauchy-Schwarz divergence are presented. Our result also implies that the
Bhattacharyya distance between the probability distributions of two Poisson
point processes is equal to the square of the Hellinger distance between their
intensity measures. We illustrate the result via a sensor management
application where the system states are modeled as point processes.
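The central identity claimed in the abstract can be stated as follows (notation assumed: $u_1$, $u_2$ are the intensity functions of the two Poisson point processes $\mu_1$, $\mu_2$):

```latex
D_{CS}(\mu_1, \mu_2)
\;=\; \tfrac{1}{2}\,\|u_1 - u_2\|_{L^2}^2
\;=\; \tfrac{1}{2}\int \bigl(u_1(x) - u_2(x)\bigr)^2\,dx
```

where $D_{CS}(p,q) = -\ln\bigl(\langle p, q\rangle / (\|p\|\,\|q\|)\bigr)$ is the Cauchy-Schwarz divergence between probability densities. The identity reduces an information-theoretic quantity on the point processes to an ordinary $L^2$ distance between their intensities.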
|
1312.6229 | OverFeat: Integrated Recognition, Localization and Detection using
Convolutional Networks | cs.CV | We present an integrated framework for using Convolutional Networks for
classification, localization and detection. We show how a multiscale and
sliding window approach can be efficiently implemented within a ConvNet. We
also introduce a novel deep learning approach to localization by learning to
predict object boundaries. Bounding boxes are then accumulated rather than
suppressed in order to increase detection confidence. We show that different
tasks can be learned simultaneously using a single shared network. This
integrated framework is the winner of the localization task of the ImageNet
Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very
competitive results for the detection and classification tasks. In
post-competition work, we establish a new state of the art for the detection
task. Finally, we release a feature extractor from our best model called
OverFeat.
|
1312.6273 | Parallel architectures for fuzzy triadic similarity learning | cs.DC cs.LG stat.ML | In a context of document co-clustering, we define a new similarity measure
which iteratively computes similarity while combining fuzzy sets in a
three-partite graph. The fuzzy triadic similarity (FT-Sim) model can deal with
uncertainty offered by the fuzzy sets. Moreover, with the development of the Web
and the high availability of storage spaces, more and more documents become
accessible. Documents can be provided from multiple sites, which makes
similarity computation an expensive process. This problem motivated us to use parallel
computing. In this paper, we introduce parallel architectures which are able to
treat large and multi-source data sets by a sequential, a merging or a
splitting-based process. Then, we proceed to a local and a central (or global)
computing using the basic FT-Sim measure. The idea behind these architectures
is to reduce both time and space complexities thanks to parallel computation.
|
1312.6282 | Dimension-free Concentration Bounds on Hankel Matrices for Spectral
Learning | cs.LG | Learning probabilistic models over strings is an important issue for many
applications. Spectral methods propose elegant solutions to the problem of
inferring weighted automata from finite samples of variable-length strings
drawn from an unknown target distribution. These methods rely on a singular
value decomposition of a matrix $H_S$, called the Hankel matrix, that records
the frequencies of (some of) the observed strings. The accuracy of the learned
distribution depends both on the quantity of information embedded in $H_S$ and
on the distance between $H_S$ and its mean $H_r$. Existing concentration bounds
seem to indicate that the concentration over $H_r$ gets looser with the size of
$H_r$, suggesting a trade-off between the quantity of information used
and the size of $H_r$. We propose new dimension-free concentration bounds for
several variants of Hankel matrices. Experiments demonstrate that these bounds
are tight and that they significantly improve existing bounds. These results
suggest that the concentration rate of the Hankel matrix around its mean does
not constitute an argument for limiting its size.
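As a concrete illustration of the object these bounds concern, the sketch below builds an empirical Hankel matrix from a finite sample of strings and inspects its singular values. The prefix/suffix basis and the sample are hypothetical, and this is only the estimation step, not a full spectral learning algorithm:

```python
import numpy as np
from collections import Counter

def empirical_hankel(samples, prefixes, suffixes):
    """H_S[u, v] = empirical frequency of the concatenated string u + v."""
    counts = Counter(samples)
    n = len(samples)
    H = np.zeros((len(prefixes), len(suffixes)))
    for i, u in enumerate(prefixes):
        for j, v in enumerate(suffixes):
            H[i, j] = counts[u + v] / n
    return H

# Hypothetical sample of variable-length strings over {a, b}
samples = ["ab", "ab", "a", "b", "abb", "ab"]
H = empirical_hankel(samples, ["", "a", "ab"], ["", "b", "bb"])
# Spectral methods proceed by taking the SVD of H
singular_values = np.linalg.svd(H, compute_uv=False)
```

The concentration question in the abstract is about how close this empirical $H_S$ stays to its mean as the basis, and hence the matrix, grows.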
|
1312.6290 | Information-based measure of nonlocality | quant-ph cs.IT math.IT | Quantum nonlocality concerns correlations among spatially separated systems
that cannot be classically explained without post-measurement communication
among the parties. Thus, a natural measure of nonlocal correlations is provided
by the minimal amount of communication required for classically simulating
them. In this paper, we present a method to compute the minimal communication
cost, which we call nonlocal capacity, for any general nonsignaling
correlations. This measure turns out to have an important role in communication
complexity and can be used to discriminate between local and nonlocal
correlations, as an alternative to the violation of Bell's inequalities.
|
1312.6293 | PRIMEBALL: a Parallel Processing Framework Benchmark for Big Data
Applications in the Cloud | cs.DC cs.DB | In this paper, we draw the specifications of a novel benchmark for comparing
parallel processing frameworks in the context of big data applications hosted
in the cloud. We aim at filling several gaps in already existing cloud data
processing benchmarks, which lack a real-life context for their processes, thus
losing relevance when trying to assess performance for real applications.
Hence, we propose a fictitious news site hosted in the cloud that is to be
managed by the framework under analysis, together with several objective use
case scenarios and measures for evaluating system performance. The main
strengths of our benchmark are parallelization capabilities supporting cloud
features and big data properties.
|
1312.6335 | Spreading dynamics in complex networks | physics.soc-ph cs.SI | Searching for influential spreaders in complex networks is an issue of great
significance for applications across various domains, ranging from the epidemic
control, innovation diffusion, viral marketing, social movement to idea
propagation. In this paper, we first present some of the most important
theoretical models that describe spreading processes, and then discuss the
problem of locating both the individual and multiple influential spreaders
respectively. Recent approaches in these two topics are presented. For the
identification of privileged single spreaders, we summarize several widely used
centralities, such as degree, betweenness centrality, PageRank, k-shell, etc.
We investigate the empirical diffusion data in a large scale online social
community -- LiveJournal. With this extensive dataset, we find that various
measures can convey very distinct information about nodes. Of all the users in
the LiveJournal social network, only a small fraction are involved in spreading.
For the spreading processes in LiveJournal, while degree can locate nodes
participating in information diffusion with higher probability, k-shell is more
effective in finding nodes with large influence. Our results should provide
useful information for designing efficient spreading strategies in reality.
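The k-shell index used above can be computed by iterative peeling; a minimal sketch on a toy adjacency-list graph (not the LiveJournal data):

```python
def k_shell(adj):
    """Node -> k-shell index via iterative peeling: repeatedly remove all
    nodes of degree <= k, raising k whenever no such node remains."""
    adj = {u: set(vs) for u, vs in adj.items()}
    shell, k = {}, 0
    while adj:
        k = max(k, min(len(vs) for vs in adj.values()))
        low = [u for u, vs in adj.items() if len(vs) <= k]
        while low:
            for u in low:
                shell[u] = k
                for v in adj.pop(u):
                    if v in adj:
                        adj[v].discard(u)
            low = [u for u, vs in adj.items() if len(vs) <= k]
    return shell

# Toy graph: a triangle {a, b, c} with a pendant node d attached to a
g = {"a": ["b", "c", "d"], "b": ["a", "c"], "c": ["a", "b"], "d": ["a"]}
shells = k_shell(g)
```

In this toy graph node a has the highest degree, yet it lands in the same shell as b and c, illustrating how degree and k-shell can rank nodes differently.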
|
1312.6370 | An Efficient Edge Detection Technique by Two Dimensional Rectangular
Cellular Automata | cs.CV | This paper proposes a new pattern of two dimensional cellular automata linear
rules that are used for efficient edge detection of an image. Since cellular
automata are inherently parallel in nature, they produce the desired output
within a unit time interval. We have identified four linear rules, among the 512
total linear rules of a rectangular cellular automaton under the adiabatic or
reflexive boundary condition, that produce an optimal result. These four rules
are applied directly to the images once to produce edge-detected output. We
compare our results with existing edge detection algorithms and find that our
method yields better edge detection with an enhancement of edges.
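The four specific rules are not reproduced here; as a hedged illustration of the general scheme, the sketch below applies one synchronous update of a simple hypothetical boundary-keeping rule under an adiabatic boundary condition:

```python
def ca_edge_step(img):
    """One synchronous CA update on a binary image: a cell stays 1 only if it
    is 1 and at least one von Neumann neighbour is 0, so region interiors are
    erased and boundaries kept. Out-of-range neighbours take the cell's own
    value (adiabatic/reflexive boundary condition)."""
    h, w = len(img), len(img[0])

    def nbr(i, j, di, dj):
        ni, nj = i + di, j + dj
        if 0 <= ni < h and 0 <= nj < w:
            return img[ni][nj]
        return img[i][j]  # adiabatic boundary

    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nbrs = [nbr(i, j, d, e) for d, e in ((-1, 0), (1, 0), (0, -1), (0, 1))]
            out[i][j] = 1 if img[i][j] == 1 and min(nbrs) == 0 else 0
    return out

# 5x5 image containing a filled 3x3 square; one step keeps only its border
img = [[1 if 1 <= i <= 3 and 1 <= j <= 3 else 0 for j in range(5)] for i in range(5)]
edges = ca_edge_step(img)
```

Because every cell updates independently from the previous configuration, the step parallelizes trivially, which is the property the abstract exploits.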
|
1312.6410 | A Survey on Eye-Gaze Tracking Techniques | cs.CV | The study of eye movement is widely employed in Human Computer Interaction (HCI)
research. Eye-gaze tracking is one of the most challenging problems in the
area of computer vision. The goal of this paper is to present a review of the
latest research in the continually growing field of remote eye-gaze tracking. This
overview includes the basic definitions and terminologies, recent advances in
the field, and finally the need for future development in the field.
|
1312.6415 | Measurement Analysis and Channel Modeling for TOA-Based Ranging in
Tunnels | cs.IT math.IT | A robust and accurate positioning solution is required to increase the safety
in GPS-denied environments. Although there is a lot of available research in
this area, little has been done for confined environments such as tunnels.
Therefore, we organized a measurement campaign in a basement tunnel of
Link\"{o}ping university, in which we obtained ultra-wideband (UWB) complex
impulse responses for line-of-sight (LOS), and three non-LOS (NLOS) scenarios.
This paper is focused on time-of-arrival (TOA) ranging since this technique can
provide the most accurate range estimates, which are required for range-based
positioning. We describe the measurement setup and procedure, select the
threshold for TOA estimation, analyze the channel propagation parameters
obtained from the power delay profile (PDP), and provide a statistical model for
ranging. According to our results, the rise-time should be used for NLOS
identification, and the maximum excess delay should be used for NLOS error
mitigation. However, the NLOS condition cannot be perfectly determined, so the
distance likelihood has to be represented in a Gaussian mixture form. We also
compared these results with measurements from a mine tunnel, and found a
similar behavior.
|
1312.6421 | Output Synchronization of Nonlinear Systems under Input Disturbances | cs.SY math.OC | We study synchronization of nonlinear systems that satisfy an incremental
passivity property. We consider the case where the control input is subject to
a class of disturbances, including constant and sinusoidal disturbances with
unknown phases and magnitudes and known frequencies. We design a distributed
control law that recovers the synchronization of the nonlinear systems in the
presence of the disturbances. Simulation results of Goodwin oscillators
illustrate the effectiveness of the control law. Finally, we highlight the
connection of the proposed control law to the dynamic average consensus
estimator developed in [1].
|
1312.6430 | Growing Regression Forests by Classification: Applications to Object
Pose Estimation | cs.CV cs.LG stat.ML | In this work, we propose a novel node splitting method for regression trees
and incorporate it into the regression forest framework. Unlike traditional
binary splitting, where the splitting rule is selected from a predefined set of
binary splitting rules via trial-and-error, the proposed node splitting method
first finds clusters of the training data which at least locally minimize the
empirical loss without considering the input space. Then splitting rules which
preserve the found clusters as much as possible are determined by casting the
problem into a classification problem. Consequently, our new node splitting
method enjoys more freedom in choosing the splitting rules, resulting in more
efficient tree structures. In addition to the Euclidean target space, we
present a variant which can naturally deal with a circular target space by the
proper use of circular statistics. We apply the regression forest employing our
node splitting to head pose estimation (Euclidean target space) and car
direction estimation (circular target space) and demonstrate that the proposed
method significantly outperforms state-of-the-art methods (38.5% and 22.5%
error reduction respectively).
|
1312.6456 | Exact Simulation of Non-stationary Reflected Brownian Motion | math.PR cs.CE q-fin.CP | This paper develops the first method for the exact simulation of reflected
Brownian motion (RBM) with non-stationary drift and infinitesimal variance. The
running time of generating exact samples of non-stationary RBM at any time $t$
is uniformly bounded by $\mathcal{O}(1/\bar\gamma^2)$ where $\bar\gamma$ is the
average drift of the process. The method can be used as a guide for planning
simulations of complex queueing systems with non-stationary arrival rates
and/or service times.
|
1312.6461 | Nonparametric Weight Initialization of Neural Networks via Integral
Representation | cs.LG cs.NE | A new initialization method for hidden parameters in a neural network is
proposed. Derived from the integral representation of the neural network, a
nonparametric probability distribution of hidden parameters is introduced. In
this proposal, hidden parameters are initialized by samples drawn from this
distribution, and output parameters are fitted by ordinary linear regression.
Numerical experiments show that backpropagation with proposed initialization
converges faster than uniformly random initialization. Also it is shown that
the proposed method achieves enough accuracy by itself without backpropagation
in some cases.
|
1312.6468 | Suppressing epidemics on networks by exploiting observer nodes | physics.soc-ph cs.SI | To control infection spreading on networks, we investigate the effect of
observer nodes that recognize infection in a neighboring node and make the rest
of the neighbor nodes immune. We numerically show that random placement of
observer nodes works better on networks with clustering than on locally
treelike networks, implying that our model is promising for realistic social
networks. The efficiency of several heuristic schemes for observer placement is
also examined for synthetic and empirical networks. In parallel with numerical
simulations of epidemic dynamics, we also show that the effect of observer
placement can be assessed by the size of the largest connected component of
networks remaining after removing observer nodes and links between their
neighboring nodes.
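The component-size criterion in the last sentence can be sketched directly: delete the observers, cut the links between each observer's neighbours, and measure the largest remaining component. The toy graph and placements below are illustrative only:

```python
def residual_lcc(adj, observers):
    """Size of the largest connected component after removing observer
    nodes and the links between each observer's neighbouring nodes."""
    observers = set(observers)
    cut = set()  # ordered pairs whose link is severed
    for o in observers:
        nbrs = [v for v in adj[o] if v not in observers]
        for a in nbrs:
            for b in nbrs:
                if a != b:
                    cut.add((a, b))
    seen, best = set(), 0
    for s in adj:
        if s in observers or s in seen:
            continue
        comp, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in observers or v in comp or (u, v) in cut:
                    continue
                comp.add(v)
                stack.append(v)
        seen |= comp
        best = max(best, len(comp))
    return best

# 6-cycle: one observer leaves a path of 5; two opposite observers leave pairs
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
```

A placement that drives this residual size down is, by the abstract's argument, a placement that suppresses epidemics well.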
|
1312.6490 | Book inequalities | cs.IT math.IT | Information theoretical inequalities have strong ties with polymatroids and
their representability. A polymatroid is entropic if its rank function is given
by the Shannon entropy of the subsets of some discrete random variables. The
book is a special iterated adhesive extension of a polymatroid with the
property that entropic polymatroids have $n$-page book extensions over an
arbitrary spine. We prove that every polymatroid has an $n$-page book extension
over a single element and over an all-but-one-element spine. Consequently, for
polymatroids on four elements, only book extensions over a two-element spine
should be considered. F. Mat\'{u}\v{s} proved that the Zhang-Yeung inequalities
characterize polymatroids on four elements which have such a 2-page book
extension. The $n$-page book inequalities, defined in this paper, are
conjectured to characterize polymatroids on four elements which have $n$-page
book extensions over a two-element spine. We prove that the condition is
necessary; consequently every book inequality is an information inequality on
four random variables. Using computer-aided multiobjective optimization, the
sufficiency of the condition is verified up to 9-page book extensions.
|
1312.6494 | Generic criticality of community structure in random graphs | cond-mat.stat-mech cs.SI physics.soc-ph | We examine a community structure in random graphs of size $n$ and link
probability $p/n$ determined with the Newman greedy optimization of modularity.
Calculations show that for $p<1$ communities are nearly identical to
clusters. For $p=1$ the average sizes of a community $s_{av}$ and of the giant
community $s_g$ show a power-law increase $s_{av}\sim n^{\alpha'}$ and $s_g\sim
n^{\alpha}$. From numerical results we estimate $\alpha'\approx 0.26(1)$,
$\alpha\approx 0.50(1)$, and using the probability distribution of sizes of
communities we suggest that $\alpha'=\alpha/2$ should hold. For $p>1$ the
community structure remains critical: (i) $s_{av}$ and $s_g$ have a power law
increase with $\alpha'\approx\alpha <1$; (ii) the probability distribution of
sizes of communities is very broad and nearly flat for all sizes up to $s_g$.
For large $p$ the modularity $Q$ decays as $Q\sim p^{-0.55}$, which is
intermediate between some previous estimations. To check the validity of the
results, we also determined the community structure using another method,
namely a non-greedy optimization of modularity. Tests with some benchmark
networks show that the method outperforms the greedy version. For random
graphs, however, the characteristics of the community structure determined
using both greedy and non-greedy optimizations are, within small statistical
fluctuations, the same.
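For reference, the modularity Q that both the greedy and non-greedy methods optimize can be evaluated directly from a partition; a minimal sketch for an undirected graph in adjacency-list form, on a toy two-triangle graph:

```python
def modularity(adj, community):
    """Newman modularity: Q = sum over communities c of
    (fraction of edges inside c) - (fraction of degree in c squared)."""
    m = sum(len(vs) for vs in adj.values()) / 2  # total number of edges
    intra, deg = {}, {}
    for u, vs in adj.items():
        c = community[u]
        deg[c] = deg.get(c, 0) + len(vs)
        for v in vs:
            if community[v] == c:
                intra[c] = intra.get(c, 0) + 1  # counts each edge twice
    return sum(intra.get(c, 0) / (2 * m) - (deg[c] / (2 * m)) ** 2
               for c in deg)

# Two triangles joined by a single bridge edge
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(adj, part)
```

Greedy optimization repeatedly merges the pair of communities whose merger increases this Q the most; a non-greedy variant explores merges that temporarily decrease it.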
|
1312.6506 | Top Down Approach to Multiple Plane Detection | cs.CV | Detecting multiple planes in images is a challenging problem, but one with
many applications. Recent work such as J-Linkage and Ordered Residual Kernels
has focused on developing a domain-independent approach to detecting multiple
structures. These multiple structure detection methods are then used for
estimating multiple homographies given feature matches between two images.
The features participating in the detected homographies then give us the
multiple scene planes. We show that these methods produce locally optimal
results and fail to merge detected planar patches into the true scene planes.
These methods use only the residuals obtained by applying the homography of one
plane to another as a cue for merging. In this paper, we develop additional
cues such as local consistency of planes, local normals, and texture to perform
better classification and merging. We formulate the classification as an MRF
problem and use the TRW-S message-passing algorithm to handle non-metric energy
terms and a complex sparse graph structure. We show results on a challenging
dataset common in robotics navigation scenarios, where our method achieves an
average accuracy of more than 85 percent while estimating a number of planes
close or equal to the actual number of scene planes.
|
1312.6533 | A General, Fast, and Robust Implementation of the Time-Optimal Path
Parameterization Algorithm | cs.RO | Finding the Time-Optimal Parameterization of a given Path (TOPP) subject to
kinodynamic constraints is an essential component in many robotic theories and
applications. The objective of this article is to provide a general, fast and
robust implementation of this component. For this, we give a complete solution
to the issue of dynamic singularities, which are the main cause of failure in
existing implementations. We then present an open-source implementation of the
algorithm in C++/Python and demonstrate its robustness and speed in various
robotics settings.
|
1312.6546 | Fair assignment of indivisible objects under ordinal preferences | cs.GT cs.AI | We consider the discrete assignment problem in which agents express ordinal
preferences over objects and these objects are allocated to the agents in a
fair manner. We use the stochastic dominance relation between fractional or
randomized allocations to systematically define varying notions of
proportionality and envy-freeness for discrete assignments. The computational
complexity of checking whether a fair assignment exists is studied for these
fairness notions. We also characterize the conditions under which a fair
assignment is guaranteed to exist. For a number of fairness concepts,
polynomial-time algorithms are presented to check whether a fair assignment
exists. Our algorithmic results also extend to the case of unequal entitlements
of agents. Our NP-hardness result, which holds for several variants of
envy-freeness, answers an open question posed by Bouveret, Endriss, and Lang
(ECAI 2010). We also propose fairness concepts that always suggest a non-empty
set of assignments with meaningful fairness properties. Among these concepts,
optimal proportionality and optimal weak proportionality appear to be desirable
fairness concepts.
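The stochastic dominance relation underlying these fairness notions reduces to a prefix-sum test over each agent's ranking; a minimal sketch (agent names and allocations are illustrative):

```python
def sd_at_least(pref, p, q, eps=1e-12):
    """True iff fractional allocation p stochastically dominates q for an
    agent whose ordinal preference list pref runs from best to worst: every
    prefix of the ranking gets at least as much probability under p as q."""
    cp = cq = 0.0
    for obj in pref:
        cp += p.get(obj, 0.0)
        cq += q.get(obj, 0.0)
        if cp < cq - eps:
            return False
    return True

def sd_envy_free(prefs, alloc):
    """SD envy-freeness: each agent SD-prefers her own allocation to
    every other agent's."""
    return all(sd_at_least(prefs[i], alloc[i], alloc[j])
               for i in prefs for j in alloc if i != j)

# Two agents with opposite rankings, each receiving her favourite object
prefs = {"1": ["a", "b"], "2": ["b", "a"]}
alloc = {"1": {"a": 1.0}, "2": {"b": 1.0}}
```

SD-proportionality is the analogous test against the uniform 1/n split of all objects rather than against the other agents' shares.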
|
1312.6552 | Socially-Aware Networking: A Survey | cs.SI cs.NI physics.soc-ph | The widespread proliferation of handheld devices enables mobile carriers to
be connected at anytime and anywhere. Meanwhile, the mobility patterns of
mobile devices strongly depend on the users' movements, which are closely
related to their social relationships and behaviors. Consequently, today's
mobile networks are becoming increasingly human centric. This leads to the
emergence of a new field which we call socially-aware networking (SAN). One of
the major features of SAN is that social awareness becomes indispensable
information for the design of networking solutions. This emerging paradigm is
applicable to various types of networks (e.g. opportunistic networks, mobile
social networks, delay tolerant networks, ad hoc networks, etc) where the users
have social relationships and interactions. By exploiting social properties of
nodes, SAN can provide better networking support to innovative applications and
services. In addition, it facilitates the convergence of human society and
cyber physical systems. In this paper, for the first time, to the best of our
knowledge, we present a survey of this emerging field. Basic concepts of SAN
are introduced. We intend to generalize the widely-used social properties in
this regard. The state-of-the-art research on SAN is reviewed with focus on
three aspects: routing and forwarding, incentive mechanisms and data
dissemination. Some important open issues with respect to mobile social sensing
and learning, privacy, node selfishness and scalability are discussed.
|
1312.6558 | Predictive User Modeling with Actionable Attributes | cs.AI | Different machine learning techniques have been proposed and used for
modeling individual and group user needs, interests, and preferences. In
traditional predictive modeling, instances are described by observable
variables, called attributes. The goal is to learn a model for predicting the
target variable for unseen instances. For example, for marketing purposes a
company may consider profiling a new user based on her observed web browsing
behavior, referral keywords or other relevant information. In many real world
applications the values of some attributes are not only observable, but can be
actively decided by a decision maker. Furthermore, in some of such applications
the decision maker is interested not only in generating accurate predictions, but
also in maximizing the probability of the desired outcome. For example, a direct
marketing manager can choose which type of a special offer to send to a client
(actionable attribute), hoping that the right choice will result in a positive
response with a higher probability. We study how to learn to choose the value
of an actionable attribute in order to maximize the probability of a desired
outcome in predictive modeling. We emphasize that not all instances are equally
sensitive to changes in actions. Accurate choice of an action is critical for
those instances, which are on the borderline (e.g. users who do not have a
strong opinion one way or the other). We formulate three supervised learning
approaches for learning to select the value of an actionable attribute at an
instance level. We also introduce a focused training procedure which puts more
emphasis on the situations where varying the action is most likely to take
effect. The proof-of-concept experimental validation on two real-world case
studies in web analytics and e-learning domains highlights the potential of the
proposed approaches.
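At prediction time, choosing the value of an actionable attribute amounts to an argmax of the model's predicted outcome probability over the feasible actions; a minimal sketch with a hypothetical fitted model:

```python
def best_action(model, instance, actions):
    """Pick the actionable-attribute value that maximizes the predicted
    probability of the desired outcome for this particular instance."""
    return max(actions, key=lambda a: model({**instance, "action": a}))

# Hypothetical fitted model: offer "B" works best for frequent visitors,
# offer "A" for infrequent ones (all numbers are made up)
def toy_model(x):
    if x["visits"] > 10:
        return 0.7 if x["action"] == "B" else 0.3
    return 0.5 if x["action"] == "A" else 0.3
```

For borderline instances the two predicted probabilities are close, which is exactly where the focused training procedure described above concentrates its effort.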
|
1312.6565 | Mobile Multimedia Recommendation in Smart Communities: A Survey | cs.IR cs.MM | Due to the rapid growth of internet broadband access and proliferation of
modern mobile devices, various types of multimedia (e.g. text, images, audio
and video) have become ubiquitously available. Mobile device users
usually store and use multimedia contents based on their personal interests and
preferences. However, mobile device challenges such as limited storage have
introduced the problem of mobile multimedia overload for users. In order to
tackle this problem, researchers have developed various techniques that
recommend multimedia for mobile users. In this survey paper, we examine the
importance of mobile multimedia recommendation systems from the perspective of
three smart communities, namely, mobile social learning, mobile event guide and
context-aware services. A careful analysis of existing research reveals that
the implementation of proactive, sensor-based and hybrid recommender systems
can improve mobile multimedia recommendations. Nevertheless, there are still
challenges and open issues such as the incorporation of context and social
properties, which need to be tackled in order to generate accurate and
trustworthy mobile multimedia recommendations.
|
1312.6573 | Trackability with Imprecise Localization | cs.RO cs.SY | Imagine a tracking agent $P$ who wants to follow a moving target $Q$ in
$d$-dimensional Euclidean space. The tracker has access to a noisy location
sensor that reports an estimate $\tilde{Q}(t)$ of the target's true location
$Q(t)$ at time $t$, where $||Q(t) - \tilde{Q}(t)||$ represents the sensor's
localization error. We study the limits of tracking performance under this kind
of sensing imprecision. In particular, we investigate (1) what is $P$'s best
strategy to follow $Q$ if both $P$ and $Q$ can move with equal speed, (2) at
what rate does the distance $||Q(t) - P(t)||$ grow under worst-case
localization noise, (3) if $P$ wants to keep $Q$ within a prescribed distance
$L$, how much faster does it need to move, and (4) what is the effect of
obstacles on the tracking performance, etc. Under a relative error model of
noise, we are able to give upper and lower bounds for the worst-case tracking
performance, both with or without obstacles.
|
1312.6594 | Sequentially Generated Instance-Dependent Image Representations for
Classification | cs.CV cs.LG | In this paper, we investigate a new framework for image classification that
adaptively generates spatial representations. Our strategy is based on a
sequential process that learns to explore the different regions of any image in
order to infer its category. In particular, the choice of regions is specific
to each image, directed by the actual content of previously selected
regions. The capacity of the system to handle incomplete image information, as
well as its adaptive region selection, allows the system to perform well in
budgeted classification tasks by exploiting a dynamically generated
representation of each image. We demonstrate the system's abilities in a series
of image-based exploration and classification tasks that highlight its learned
exploration and inference abilities.
|
1312.6597 | Co-Multistage of Multiple Classifiers for Imbalanced Multiclass Learning | cs.LG cs.IR | In this work, we propose two stochastic architectural models (CMC and CMC-M)
with two layers of classifiers applicable to datasets with one and multiple
skewed classes. This distinction becomes important when the datasets have a
large number of classes. Therefore, we present a novel solution to imbalanced
multiclass learning with several skewed majority classes, which improves
minority classes identification. This fact is particularly important for text
classification tasks, such as event detection. Our models combined with
pre-processing sampling techniques improved the classification results on six
well-known datasets. Finally, we have also introduced a new metric SG-Mean to
overcome the multiplication by zero limitation of G-Mean.
|
1312.6599 | Image Processing based Systems and Techniques for the Recognition of
Ancient and Modern Coins | cs.CV cs.AI | Coins are frequently used in everyday life at various places like in banks,
grocery stores, supermarkets, automated weighing machines, vending machines
etc. So, there is a basic need to automate the counting and sorting of coins.
For this, machines need to recognize coins quickly and accurately, as
further transaction processing depends on this recognition. Three types of
systems are available in the market: Mechanical method based systems,
Electromagnetic method based systems and Image processing based systems. This
paper presents an overview of available systems and techniques based on image
processing to recognize ancient and modern coins.
|
1312.6606 | Structural Vulnerability Assessment of Electric Power Grids | physics.soc-ph cs.SY | Cascading failures are the typical causes of blackouts in power grids. The
grid topology plays an important role in determining the dynamics of cascading
failures in power grids. Measures for vulnerability analysis are crucial to
assure a higher level of robustness of power grids. Metrics from Complex
Networks are widely used to investigate the grid vulnerability. Yet, these
purely topological metrics fail to capture the real behaviour of power grids.
This paper proposes a metric, the effective graph resistance, as a
vulnerability measure to determine the critical components in a power grid.
Unlike the existing purely topological measures, the effective graph
resistance accounts for the electrical properties of power grids such as power
flow allocation according to Kirchhoff's laws. To demonstrate the applicability of
the effective graph resistance, a quantitative vulnerability assessment of the
IEEE 118-bus power system is performed. The simulation results verify the
effectiveness of the effective graph resistance to identify the critical
transmission lines in a power grid.
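The effective graph resistance equals the sum of pairwise effective resistances and can be computed from the Laplacian spectrum; a small numpy sketch (ranking lines by the increase in R_G upon removal is one plausible use, not necessarily the paper's exact procedure):

```python
import numpy as np

def effective_graph_resistance(A):
    """R_G = n * sum of reciprocals of the nonzero Laplacian eigenvalues,
    i.e. the sum of effective resistances over all node pairs."""
    A = np.asarray(A, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    mu = np.linalg.eigvalsh(L)
    return len(A) * float(np.sum(1.0 / mu[mu > 1e-9]))

# Triangle: each pair has effective resistance 2/3, so R_G = 3 * (2/3) = 2
r_triangle = effective_graph_resistance([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
```

A critical transmission line is then one whose removal (zeroing the corresponding entries of A) increases R_G the most, or disconnects the grid.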
|
1312.6607 | Using Latent Binary Variables for Online Reconstruction of Large Scale
Systems | math.PR cs.LG stat.ML | We propose a probabilistic graphical model realizing a minimal encoding of
real variables dependencies based on possibly incomplete observation and an
empirical cumulative distribution function per variable. The target application
is a large scale partially observed system, like e.g. a traffic network, where
a small proportion of real valued variables are observed, and the other
variables have to be predicted. Our design objective is therefore to have good
scalability in a real-time setting. Instead of attempting to encode the
dependencies of the system directly in the description space, we propose a way
to encode them in a latent space of binary variables, reflecting a rough
perception of the observable (congested/non-congested for a traffic road). The
method relies in part on message passing algorithms, i.e. belief propagation,
but the core of the work concerns the definition of meaningful latent variables
associated to the variables of interest and their pairwise dependencies.
Numerical experiments demonstrate the applicability of the method in practice.
|
1312.6609 | A comprehensive review of firefly algorithms | cs.NE | The firefly algorithm has become an increasingly important tool of Swarm
Intelligence that has been applied in almost all areas of optimization, as well
as engineering practice. Many problems from various areas have been
successfully solved using the firefly algorithm and its variants. In order to
use the algorithm to solve diverse problems, the original firefly algorithm
needs to be modified or hybridized. This paper carries out a comprehensive
review of this living and evolving discipline of Swarm Intelligence, in order
to show that the firefly algorithm could be applied to every problem arising in
practice. On the other hand, it encourages new researchers and algorithm
developers to use this simple and yet very efficient algorithm for problem
solving. It often guarantees that the obtained results will meet the
expectations.
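For readers new to the method, a compact sketch of the original (unmodified) firefly update is given below; the parameter values are illustrative defaults chosen for this toy run, not recommendations from the survey:

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=200, alpha=0.3, beta0=1.0,
                     gamma=0.01, seed=0):
    """Original firefly algorithm: every firefly moves toward each brighter
    one with attractiveness beta0 * exp(-gamma * r^2), plus a shrinking
    random step. Brightness is the negated objective (we minimize f)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        bright = [-f(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if bright[j] > bright[i]:
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for k in range(dim):
                        xs[i][k] += (beta * (xs[j][k] - xs[i][k])
                                     + alpha * (rng.random() - 0.5))
        alpha *= 0.97  # gradually damp the random walk
    return min(xs, key=f)

# Minimize the 2-D sphere function; the optimum is the origin
best = firefly_minimize(lambda x: sum(v * v for v in x), dim=2)
```

The modifications and hybridizations surveyed above typically alter this update: the decay schedule of alpha, the form of the attractiveness, or the random-step distribution.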
|
1312.6615 | Automated Coin Recognition System using ANN | cs.CV cs.AI | Coins are an integral part of our day-to-day life. We use coins everywhere:
in grocery stores, banks, buses, trains, etc. So there is a basic need for coins
to be sorted and counted automatically, which in turn requires that coins be
recognized automatically. In this paper we develop an ANN (Artificial Neural
Network) based Automated Coin Recognition System for the recognition of Indian
coins of denominations Rs. 1, 2, 5 and 10 with rotation invariance. We have
taken images from both sides of each coin, so the system is capable of
recognizing coins from either side. Features are extracted from the images
using techniques such as the Hough Transform and Pattern Averaging. Then the
extracted features are passed as input to a trained Neural Network. A 97.74%
recognition rate was achieved during the experiments, i.e., only 2.26%
misrecognition, which is quite encouraging.
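The "Pattern Averaging" feature step is commonly implemented as block-mean downsampling; the sketch below is a hedged reading of that step on a toy grayscale image, not the paper's exact pipeline:

```python
def pattern_average(img, k):
    """Reduce an h x w grayscale image to a k x k grid of block means,
    giving a small fixed-length feature vector for the neural network."""
    h, w = len(img), len(img[0])
    feats = []
    for bi in range(k):
        for bj in range(k):
            rows = range(bi * h // k, (bi + 1) * h // k)
            cols = range(bj * w // k, (bj + 1) * w // k)
            block = [img[i][j] for i in rows for j in cols]
            feats.append(sum(block) / len(block))
    return feats

# 4x4 toy image whose four 2x2 quadrants are constant
img = [[10, 10, 20, 20],
       [10, 10, 20, 20],
       [30, 30, 40, 40],
       [30, 30, 40, 40]]
features = pattern_average(img, 2)
```

Averaging over blocks keeps the feature vector small and tolerant to pixel-level noise before it is fed to the trained network.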
|
1312.6635 | Topic and Sentiment Analysis on OSNs: a Case Study of Advertising
Strategies on Twitter | cs.SI physics.soc-ph | Social media have substantially altered the way brands and businesses
advertise: Online Social Networks provide brands with more versatile and
dynamic channels for advertisement than traditional media (e.g., TV and radio).
Levels of engagement in such media are usually measured in terms of content
adoption (e.g., likes and retweets) and sentiment, around a given topic.
However, sentiment analysis and topic identification are both non-trivial
tasks.
In this paper, using data collected from Twitter as a case study, we analyze
how engagement and sentiment in promoted content spread over a 10-day period.
We find that promoted tweets lead to higher positive sentiment than promoted
trends; although promoted trends pay off in response volume. We observe that
levels of engagement for the brand and promoted content are highest on the
first day of the campaign, and fall considerably thereafter. However, we show
that these insights depend on the use of robust machine learning and natural
language processing techniques to gather focused, relevant datasets, and to
accurately gauge sentiment, rather than relying on the simple keyword- or
frequency-based metrics sometimes used in social media research.
|
1312.6652 | Rounding Sum-of-Squares Relaxations | cs.DS cs.LG quant-ph | We present a general approach to rounding semidefinite programming
relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our
approach is based on using the connection between these relaxations and the
Sum-of-Squares proof system to transform a *combining algorithm* -- an
algorithm that maps a distribution over solutions into a (possibly weaker)
solution -- into a *rounding algorithm* that maps a solution of the relaxation
to a solution of the original problem.
Using this approach, we obtain algorithms that yield improved results for
natural variants of three well-known problems:
1) We give a quasipolynomial-time algorithm that approximates the maximum of
a low degree multivariate polynomial with non-negative coefficients over the
Euclidean unit sphere. Beyond being of interest in its own right, this is
related to an open question in quantum information theory, and our techniques
have already led to improved results in this area (Brand\~{a}o and Harrow, STOC
'13).
2) We give a polynomial-time algorithm that, given a d dimensional subspace
of R^n that (almost) contains the characteristic function of a set of size n/k,
finds a vector $v$ in the subspace satisfying $|v|_4^4 > c(k/d^{1/3}) |v|_2^2$,
where $|v|_p = (E_i v_i^p)^{1/p}$. Aside from being a natural relaxation, this
is also motivated by a connection to the Small Set Expansion problem shown by
Barak et al. (STOC 2012) and our results yield a certain improvement for that
problem.
3) We use this notion of L_4 vs. L_2 sparsity to obtain a polynomial-time
algorithm with substantially improved guarantees for recovering a planted
$\mu$-sparse vector $v$ in a random $d$-dimensional subspace of $R^n$. If $v$
has $\mu n$ nonzero coordinates, we can recover it with high probability
whenever $\mu < O(\min(1,n/d^2))$, improving for $d < n^{2/3}$ on prior
methods, which intrinsically required $\mu < O(1/\sqrt{d})$.
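The L_4-versus-L_2 notion used in parts 2 and 3 is easy to illustrate numerically. With the normalized norms $|v|_p = (E_i v_i^p)^{1/p}$ defined above, a vector whose mass is spread flat over a $\mu$ fraction of coordinates has $|v|_4^4 / |v|_2^4 = 1/\mu$, whereas a dense Gaussian vector stays near the constant 3 (its kurtosis). A minimal numpy sketch:

```python
import numpy as np

def norm_p(v, p):
    # normalized p-norm |v|_p = (E_i |v_i|^p)^(1/p), as in the abstract
    return np.mean(np.abs(v) ** p) ** (1.0 / p)

n, mu = 10_000, 0.01
rng = np.random.default_rng(0)

# planted sparse vector: mu*n nonzero coordinates of equal magnitude
v = np.zeros(n)
v[: int(mu * n)] = 1.0

# sparsity certificate: |v|_4^4 / |v|_2^4 equals 1/mu for the flat sparse
# vector, but stays near 3 for a dense Gaussian vector
ratio_sparse = norm_p(v, 4) ** 4 / norm_p(v, 2) ** 4   # = 1/mu = 100
g = rng.standard_normal(n)
ratio_dense = norm_p(g, 4) ** 4 / norm_p(g, 2) ** 4    # ≈ 3
```

A large ratio thus certifies the presence of a sparse direction, which is what the recovery algorithm exploits.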
|
1312.6661 | Rapid and deterministic estimation of probability densities using
scale-free field theories | physics.data-an cs.LG math.ST q-bio.QM stat.ML stat.TH | The question of how best to estimate a continuous probability density from
finite data is an intriguing open problem at the interface of statistics and
physics. Previous work has argued that this problem can be addressed in a
natural way using methods from statistical field theory. Here I describe new
results that allow this field-theoretic approach to be rapidly and
deterministically computed in low dimensions, making it practical for use in
day-to-day data analysis. Importantly, this approach does not impose a
privileged length scale for smoothness of the inferred probability density, but
rather learns a natural length scale from the data due to the tradeoff between
goodness-of-fit and an Occam factor. Open source software implementing this
method in one and two dimensions is provided.
|
1312.6675 | Data Mining on Social Interaction Networks | cs.SI cs.DB physics.soc-ph | Social media and social networks have already woven themselves into the very
fabric of everyday life. This results in a dramatic increase of social data
capturing various relations between the users and their associated artifacts,
both in online networks and the real world using ubiquitous devices. In this
work, we consider social interaction networks from a data mining perspective -
also with a special focus on real-world face-to-face contact networks: We
combine data mining and social network analysis techniques for examining the
networks in order to improve our understanding of the data, the modeled
behavior, and its underlying emergent processes. Furthermore, we adapt, extend
and apply known predictive data mining algorithms on social interaction
networks. Additionally, we present novel methods for descriptive data mining
for uncovering and extracting relations and patterns for hypothesis generation
and exploration, in order to provide characteristic information about the data
and networks. The presented approaches and methods aim at extracting valuable
knowledge for enhancing the understanding of the respective data, and for
supporting the users of the respective systems. We consider data from several
social systems, like the social bookmarking system BibSonomy, the social
resource sharing system flickr, and ubiquitous social systems: Specifically, we
focus on data from the social conference guidance system Conferator and the
social group interaction system MyGroup. This work first gives a short
introduction into social interaction networks, before we describe several
analysis results in the context of online social networks and real-world
face-to-face contact networks. Next, we present predictive data mining methods,
i.e., for localization, recommendation and link prediction. After that, we
present novel descriptive data mining methods for mining communities and
patterns.
|
1312.6712 | Invariant Factorization Of Time-Series | cs.LG | Time-series classification is an important domain of machine learning and a
plethora of methods have been developed for the task. In comparison to existing
approaches, this study presents a novel method which decomposes a time-series
dataset into latent patterns and membership weights of local segments to those
patterns. The process is formalized as a constrained objective function and a
tailored stochastic coordinate descent optimization is applied. The time-series
are projected to a new feature representation consisting of the sums of the
membership weights, which captures frequencies of local patterns. Features from
various sliding window sizes are concatenated in order to encapsulate the
interaction of patterns from different sizes. Finally, a large-scale
experimental comparison against 6 state-of-the-art baselines on 43 real-life
datasets is conducted. The proposed method outperforms all the baselines with
statistically significant margins in terms of prediction accuracy.
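The feature construction described above can be sketched as follows, under the simplifying assumption that membership weights come from a plain least-squares fit rather than the paper's constrained objective with stochastic coordinate descent:

```python
import numpy as np

def pattern_frequency_features(x, patterns):
    """Project every sliding segment of x onto K latent patterns of
    length L and sum the membership weights per pattern, giving a
    feature vector that reflects how frequently each local pattern
    occurs. Illustrative stand-in only: the paper fits weights under
    a constrained objective, not ordinary least squares."""
    K, L = patterns.shape
    feats = np.zeros(K)
    for start in range(len(x) - L + 1):
        segment = x[start:start + L]
        # membership weights of this segment w.r.t. the patterns
        weights, *_ = np.linalg.lstsq(patterns.T, segment, rcond=None)
        feats += weights
    return feats
```

Features from several window sizes L would then be concatenated, as the abstract describes, to capture interactions between patterns of different lengths.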
|
1312.6715 | The expert game -- Cooperation in social communication | cs.SI nlin.AO physics.soc-ph | Large parts of professional human communication proceed in a request-reply
fashion, whereby requests contain specifics of the information desired while
replies can deliver the required information. However, time limitations often
force individuals to prioritize some messages while neglecting others. This dilemma will
inevitably force individuals into defecting against some communication partners
to give attention to others. Furthermore, communication entirely breaks down
when individuals act purely egoistically as replies would never be issued and
the quest for desired information would always be prioritized. Here we present an
experiment, termed "The expert game", where a number of individuals communicate
with one another through an electronic messaging system. By imposing a strict
limit on the number of sent messages, individuals were required to decide
between requesting information that is beneficial for themselves or helping
others by replying to their requests. In the experiment, individuals were
assigned the task to find the expert on a specific topic and receive a reply
from that expert. Tasks and expertise of each player were periodically
re-assigned to randomize the required interactions. Resisting this
randomization, a non-random network of cooperative communication between
individuals formed. We use a simple Bayesian inference algorithm to model each
player's trust in the cooperativity of others, with good experimental agreement.
Our results suggest that human communication in groups of individuals is
strategic and favors cooperation with trusted parties at the cost of defection
against others. To establish and maintain trusted links a significant fraction
of time-resources is allocated, even in situations where the information
transmitted is negligible.
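The abstract does not spell out the inference model; the simplest consistent reading is a Beta-Bernoulli update of each partner's cooperation probability. The sketch below is that hypothetical instantiation, not necessarily the paper's exact algorithm:

```python
class TrustModel:
    """Beta-Bernoulli trust estimate: each partner's probability of
    cooperating (replying to a request) is Beta(a, b)-distributed and
    updated after every interaction. Hypothetical sketch only."""

    def __init__(self, prior_a=1.0, prior_b=1.0):
        self.prior = (prior_a, prior_b)
        self.counts = {}

    def update(self, partner, replied):
        a, b = self.counts.get(partner, self.prior)
        # a counts observed replies, b counts observed defections
        self.counts[partner] = (a + replied, b + (not replied))

    def trust(self, partner):
        a, b = self.counts.get(partner, self.prior)
        return a / (a + b)  # posterior mean cooperation probability
```

A player would then direct scarce messages toward partners with high trust, reproducing the cooperative, non-random communication network observed in the experiment.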
|
1312.6720 | Weighted Multiplex Networks | physics.soc-ph cond-mat.dis-nn cond-mat.stat-mech cs.DL cs.SI | One of the most important challenges in network science is to quantify the
information encoded in complex network structures. Disentangling randomness
from organizational principles is even more demanding when networks have a
multiplex nature. Multiplex networks are multilayer systems of $N$ nodes that
can be linked in multiple interacting and co-evolving layers. In these
networks, relevant information might not be captured if the single layers were
analyzed separately. Here we demonstrate that such partial analysis of layers
fails to capture significant correlations between weights and topology of
complex multiplex networks. To this end, we study two weighted multiplex
co-authorship and citation networks involving the authors included in the
American Physical Society. We show that in these networks weights are strongly
correlated with multiplex structure, and provide empirical evidence in favor of
the advantage of studying weighted measures of multiplex networks, such as
multistrength and the inverse multiparticipation ratio. Finally, we introduce a
theoretical framework based on the entropy of multiplex ensembles to quantify
the information stored in multiplex networks that would remain undetected if
the single layers were analyzed in isolation.
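For a single layer, these weighted measures reduce to per-node quantities that are easy to state. One common definition is sketched below; the paper's multistrength additionally conditions on the multilink type, which is omitted here:

```python
import numpy as np

def layer_measures(W):
    """Per-layer node measures for one weighted layer W (N x N):
    strength s_i = sum_j w_ij, and inverse participation ratio
    Y_i = sum_j (w_ij / s_i)^2, whose reciprocal estimates over how
    many links node i's weight is effectively spread. Simplified
    sketch; the paper conditions these on multilink type."""
    s = W.sum(axis=1)
    with np.errstate(divide="ignore", invalid="ignore"):
        Y = np.where(s > 0, ((W / s[:, None]) ** 2).sum(axis=1), 0.0)
    return s, Y
```

In a multiplex network these are computed layer by layer, and the correlations between them and the multiplex structure are what the single-layer analysis misses.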
|
1312.6722 | On the limiting behavior of parameter-dependent network centrality
measures | math.NA cs.SI physics.soc-ph | We consider a broad class of walk-based, parameterized node centrality
measures for network analysis. These measures are expressed in terms of
functions of the adjacency matrix and generalize various well-known centrality
indices, including Katz and subgraph centrality. We show that the parameter can
be "tuned" to interpolate between degree and eigenvector centrality, which
appear as limiting cases. Our analysis helps explain certain correlations often
observed between the rankings obtained using different centrality measures, and
provides some guidance for the tuning of parameters. We also highlight the
roles played by the spectral gap of the adjacency matrix and by the number of
triangles in the network. Our analysis covers both undirected and directed
networks, including weighted ones. A brief discussion of PageRank is also
given.
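The interpolation claim can be checked numerically for Katz centrality $x = (I - \alpha A)^{-1}\mathbf{1}$: as $\alpha \to 0^+$ the induced ranking follows degree, and as $\alpha \to 1/\lambda_{\max}$ it aligns with the leading eigenvector. A small sketch on a toy graph (the graph is an arbitrary choice, not from the paper):

```python
import numpy as np

def katz(A, alpha):
    # Katz centrality x = (I - alpha*A)^(-1) * 1, for 0 < alpha < 1/lambda_max
    n = len(A)
    return np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

# small connected undirected graph: a star centered at node 0,
# linked through node 3 to a triangle around node 4
A = np.array([
    [0, 1, 1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 1, 1],
    [0, 0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 1, 0],
], dtype=float)

lam_max = np.max(np.linalg.eigvalsh(A))
degree = A.sum(axis=1)
v1 = np.abs(np.linalg.eigh(A)[1][:, -1])   # leading eigenvector

x_small = katz(A, 1e-4)             # alpha -> 0: degree-like ranking
x_large = katz(A, 0.999 / lam_max)  # alpha -> 1/lambda_max: eigenvector-like
```

The spectral gap of A controls how quickly the eigenvector limit dominates as alpha approaches the critical value, consistent with the role the abstract assigns to it.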
|
1312.6724 | Local algorithms for interactive clustering | cs.DS cs.LG | We study the design of interactive clustering algorithms for data sets
satisfying natural stability assumptions. Our algorithms start with any initial
clustering and only make local changes in each step; both are desirable
features in many applications. We show that in this constrained setting one can
still design provably efficient algorithms that produce accurate clusterings.
We also show that our algorithms perform well on real-world data.
|
1312.6726 | Bounded Rational Decision-Making in Changing Environments | cs.AI | A perfectly rational decision-maker chooses the best action with the highest
utility gain from a set of possible actions. The optimality principles that
describe such decision processes do not take into account the computational
costs of finding the optimal action. Bounded rational decision-making addresses
this problem by specifically trading off information-processing costs and
expected utility. Interestingly, a similar trade-off between energy and entropy
arises when describing changes in thermodynamic systems. This similarity has
been recently used to describe bounded rational agents. Crucially, this
framework assumes that the environment does not change while the decision-maker
is computing the optimal policy. When this requirement is not fulfilled, the
decision-maker will suffer inefficiencies in utility that arise because the
current policy is optimal for an environment in the past. Here we borrow
concepts from non-equilibrium thermodynamics to quantify these inefficiencies
and illustrate through simulations their relationship with computational resources.
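In the standard information-theoretic formulation of bounded rationality, the trade-off has a closed-form optimum: maximizing $E[U] - \frac{1}{\beta}\,\mathrm{KL}(p\,\|\,p_0)$ yields the softmax $p(a) \propto p_0(a)\,e^{\beta U(a)}$, with the inverse temperature $\beta$ pricing information processing. A minimal sketch (the notation is assumed, not taken from this paper):

```python
import numpy as np

def bounded_rational_policy(U, p0, beta):
    """Optimal policy for E[U] - (1/beta) * KL(p || p0):
    p(a) proportional to p0(a) * exp(beta * U(a)).
    beta -> infinity recovers the perfectly rational arg-max agent;
    beta -> 0 leaves the prior p0 untouched (no computation spent)."""
    w = p0 * np.exp(beta * (U - U.max()))   # shift U for numerical stability
    return w / w.sum()

U = np.array([1.0, 2.0, 3.0])
p0 = np.ones(3) / 3
lazy = bounded_rational_policy(U, p0, beta=0.0)    # equals the prior
sharp = bounded_rational_policy(U, p0, beta=50.0)  # concentrates on action 2
```

In a changing environment, the policy computed for yesterday's utilities is applied to today's; the resulting utility gap is the inefficiency the abstract quantifies with non-equilibrium thermodynamics.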
|
1312.6743 | Joint Transmitter and Receiver Energy Minimization in Multiuser OFDM
Systems | cs.IT math.IT | In this paper, we formulate and solve a weighted-sum transmitter and receiver
energy minimization (WSTREMin) problem in the downlink of an orthogonal
frequency division multiplexing (OFDM) based multiuser wireless system. The
proposed approach offers the flexibility of assigning different levels of
importance to base station (BS) and mobile terminal (MT) power consumption,
corresponding to the BS being connected to the grid and the MT relying on
batteries. To obtain insights into the characteristics of the problem, we first
consider two extreme cases separately, i.e., weighted-sum receiver-side energy
minimization (WSREMin) for MTs and transmitter-side energy minimization (TEMin)
for the BS. It is shown that Dynamic TDMA (D-TDMA), where MTs are scheduled for
single-user OFDM transmissions over orthogonal time slots, is the optimal
transmission strategy for WSREMin at MTs, while OFDMA is optimal for TEMin at
the BS. As a hybrid of the two extreme cases, we further propose a new multiple
access scheme, i.e., Time-Slotted OFDMA (TS-OFDMA) scheme, in which MTs are
grouped into orthogonal time slots with OFDMA applied to users assigned within
the same slot. TS-OFDMA can be shown to include both D-TDMA and OFDMA as
special cases. Numerical results confirm that the proposed schemes enable a
flexible range of energy consumption tradeoffs between the BS and MTs.
|
1312.6756 | Multi-dimensional Conversation Analysis across Online Social Networks | cs.SI physics.soc-ph | With the advance of the Internet, ordinary users have created multiple
personal accounts on online social networks, and interactions among these
social network users have recently been tagged with location information. In
this work, we observe user interactions across two popular online social
networks, Facebook and Twitter, and analyze which factors lead to retweet/like
interactions for tweets/posts. In addition to the named entities, lexical
errors and expressed sentiments in these data items, we also consider the
impact of shared user locations on user interactions. In particular, we show
that geolocations of users can greatly affect which social network post/tweet
will be liked/retweeted. We believe that the results of our analysis can help
researchers to understand which social network content will have better
visibility.
|
1312.6764 | Bounded Recursive Self-Improvement | cs.AI | We have designed a machine that becomes increasingly better at behaving in
underspecified circumstances, in a goal-directed way, on the job, by modeling
itself and its environment as experience accumulates. Based on principles of
autocatalysis, endogeny, and reflectivity, the work provides an architectural
blueprint for constructing systems with high levels of operational autonomy in
underspecified circumstances, starting from a small seed. Through value-driven
dynamic priority scheduling controlling the parallel execution of a vast number
of reasoning threads, the system achieves recursive self-improvement after it
leaves the lab, within the boundaries imposed by its designers. A prototype
system has been implemented and demonstrated to learn a complex real-world
task, real-time multimodal dialogue with humans, by on-line observation. Our
work presents solutions to several challenges that must be solved for achieving
artificial general intelligence.
|
1312.6782 | IVSS Integration of Color Feature Extraction Techniques for Intelligent
Video Search Systems | cs.CV cs.IR cs.MM | A large amount of visual information is available on the web in the form of
images, graphics, animations, and videos, so an effective video search system
is important in the internet era. A number of video search engines (blinkx,
VideoSurf, Google, YouTube, etc.) retrieve relevant videos based on user
keywords or terms, but very few commercial video search engines can retrieve
videos based on a visual image, clip, or video query. In this paper we propose
a system that searches for relevant videos using the color features of a video
in response to a user query.
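One common way to realize such color-based matching (a plausible instantiation only; the paper's exact feature extraction is not specified here) is a quantized color histogram per frame, compared by histogram intersection:

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Normalized, quantized RGB histogram of a frame
    (H x W x 3 array of uint8 values). Hypothetical sketch of a
    color feature; the paper's exact descriptor may differ."""
    q = frame.astype(np.int64) // (256 // bins)   # quantize each channel
    q = q.reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / h.sum()

def similarity(h1, h2):
    # histogram intersection: 1.0 for identical color distributions
    return np.minimum(h1, h2).sum()
```

A query image's histogram would then be compared against histograms of indexed video keyframes, and the best-scoring videos returned.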
|
1312.6784 | Relay Broadcast Channel with Confidential Messages | cs.IT math.IT | We investigate the effects of an additional relay node on the secrecy of
broadcast channels by considering the model of relay broadcast channels with
confidential messages. We show that this additional relay node can increase the
achievable secrecy rate region of the broadcast channels with confidential
messages. More specifically, first, we investigate the discrete memoryless
relay broadcast channels with two confidential messages and one common message.
Three inner bounds (with respect to decode-forward, generalized noise-forward
and compress-forward strategies) and an outer bound on the
capacity-equivocation region are provided. Second, we investigate the discrete
memoryless relay broadcast channels with two confidential messages. Inner and
outer bounds on the capacity-equivocation region are provided. Finally, we
investigate the discrete memoryless relay broadcast channels with one
confidential message and one common message. Inner and outer bounds on the
capacity-equivocation region are provided, and the results are further
explained via a Gaussian example. Compared with Csiszar-Korner's work on
broadcast channels with confidential messages (BCC), we find that with the help
of the relay node, the secrecy capacity region of the Gaussian BCC is enhanced.
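For orientation, the no-relay baseline that these results improve upon is the scalar Gaussian wiretap secrecy capacity (Leung-Yan-Cheong and Hellman): with transmit power P, legitimate-receiver noise N1, and eavesdropper noise N2, C_s = [1/2 log2(1 + P/N1) - 1/2 log2(1 + P/N2)]^+. A quick numeric check (the numbers are arbitrary, not from the paper):

```python
import math

def gaussian_secrecy_capacity(P, N1, N2):
    """No-relay baseline: secrecy capacity of the scalar Gaussian
    wiretap channel, with legitimate-receiver noise power N1,
    eavesdropper noise power N2, and transmit power P
    (rates in bits per channel use)."""
    c_main = 0.5 * math.log2(1.0 + P / N1)
    c_eave = 0.5 * math.log2(1.0 + P / N2)
    return max(c_main - c_eave, 0.0)

# better main channel (N1 < N2) gives a positive secrecy rate
cs = gaussian_secrecy_capacity(15.0, 1.0, 3.0)   # = 2 - 0.5*log2(6) ≈ 0.7075
```

The abstract's point is that decode-forward, noise-forward, and compress-forward relaying can push the achievable secrecy region beyond this relay-free benchmark.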
|