| id | title | categories | abstract |
|---|---|---|---|
1210.8440 | Large Scale Language Modeling in Automatic Speech Recognition | cs.CL | Large language models have proven quite beneficial for a variety of
automatic speech recognition tasks at Google. We summarize results on Voice
Search and a few YouTube speech transcription tasks to highlight the impact
that one can expect from increasing both the amount of training data, and the
size of the language model estimated from such data. Depending on the task, the
availability and amount of training data used, the language model size, and the
amount of work and care put into integrating them in the lattice rescoring step,
we observe reductions in word error rate between 6% and 10% relative, for systems
on a wide range of operating points between 17% and 52% word error rate.
|
1210.8441 | Very Low-Rate Variable-Length Channel Quantization for Minimum Outage
Probability | cs.IT math.IT | We identify a practical vector quantizer design problem where any
fixed-length quantizer (FLQ) yields non-zero distortion at any finite rate,
while there is a variable-length quantizer (VLQ) that can achieve zero
distortion with arbitrarily low rate. The problem arises in a $t \times 1$
multiple-antenna fading channel where we would like to minimize the channel
outage probability by employing beamforming via quantized channel state
information at the transmitter (CSIT). It is well-known that in such a
scenario, finite-rate FLQs cannot achieve the full-CSIT (zero distortion)
outage performance. We construct VLQs that can achieve the full-CSIT
performance with finite rate. In particular, with $P$ denoting the power
constraint of the transmitter, we show that the necessary and sufficient VLQ
rate that guarantees the full-CSIT performance is $\Theta(1/P)$. We also
discuss several extensions (e.g. to precoding) of this result.
|
1210.8442 | Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On
Boltzmann Machines | cs.AI cs.NE q-bio.NC stat.ML | A conjecture shared by deep learning and the classical connectionist viewpoint is
that the biological brain implements certain kinds of deep networks as its
back-end. However, to our knowledge, a detailed correspondence has not yet been
established, which is important if we want to bridge neuroscience and
machine learning. Recent research has emphasized the biological plausibility of
the Linear-Nonlinear-Poisson (LNP) neuron model. We show that with neurally
plausible settings, the whole network is capable of representing any Boltzmann
machine and performing a semi-stochastic Bayesian inference algorithm lying
between Gibbs sampling and variational inference.
|
1211.0025 | Venn-Abers predictors | cs.LG stat.ML | This paper continues the study, both theoretical and empirical, of the method of
Venn prediction, concentrating on binary prediction problems. Venn predictors
produce probability-type predictions for the labels of test objects which are
guaranteed to be well calibrated under the standard assumption that the
observations are generated independently from the same distribution. We give a
simple formalization and proof of this property. We also introduce Venn-Abers
predictors, a new class of Venn predictors based on the idea of isotonic
regression, and report promising empirical results both for Venn-Abers
predictors and for their more computationally efficient simplified version.
|
1211.0028 | Understanding the Interaction between Interests, Conversations and
Friendships in Facebook | cs.SI cs.LG stat.ML | In this paper, we explore salient questions about user interests,
conversations and friendships in the Facebook social network, using a novel
latent space model that integrates several data types. A key challenge of
studying Facebook's data is the wide range of data modalities such as text,
network links, and categorical labels. Our latent space model seamlessly
combines all three data modalities over millions of users, allowing us to study
the interplay between user friendships, interests, and higher-order
network-wide social trends on Facebook. The recovered insights not only answer
our initial questions, but also reveal surprising facts about user interests in
the context of Facebook's ecosystem. We also confirm that our results are
significant with respect to evidential information from the study subjects.
|
1211.0053 | The Emerging Field of Signal Processing on Graphs: Extending
High-Dimensional Data Analysis to Networks and Other Irregular Domains | cs.DM cs.LG cs.SI | In applications such as social, energy, transportation, sensor, and neuronal
networks, high-dimensional data naturally reside on the vertices of weighted
graphs. The emerging field of signal processing on graphs merges algebraic and
spectral graph theoretic concepts with computational harmonic analysis to
process such signals on graphs. In this tutorial overview, we outline the main
challenges of the area, discuss different ways to define graph spectral
domains, which are the analogues to the classical frequency domain, and
highlight the importance of incorporating the irregular structures of graph
data domains when processing signals on graphs. We then review methods to
generalize fundamental operations such as filtering, translation, modulation,
dilation, and downsampling to the graph setting, and survey the localized,
multiscale transforms that have been proposed to efficiently extract
information from high-dimensional data on graphs. We conclude with a brief
discussion of open issues and possible extensions.
|
1211.0055 | Dimensionality Reduction and Classification Feature Using Mutual
Information Applied to Hyperspectral Images: A Wrapper Strategy Algorithm
Based on Minimizing the Error Probability Using the Inequality of Fano | cs.CV | In the feature classification domain, the choice of data strongly affects the
results. In a hyperspectral image, not all bands carry useful information: some
bands are irrelevant, such as those affected by various atmospheric effects
(see Figure 4), and decrease the classification accuracy. Redundant bands also
exist, which complicate the learning system and produce incorrect predictions
[14]. Even when the bands contain enough information about the scene, they may
fail to predict the classes correctly if the dimension of the image space (see
Figure 3) is so large that many samples are needed to detect the relationship
between the bands and the scene (the Hughes phenomenon) [10]. We can reduce the
dimensionality of hyperspectral images by selecting only the relevant bands
(the feature selection or subset selection methodology), or by extracting from
the original bands new bands containing the maximal information about the
classes, using any logical or numerical functions (the feature extraction
methodology) [11][9]. Here we focus on feature selection using mutual
information. Hyperspectral images have three advantages over multispectral
images [6],
|
1211.0056 | Iterative Hard Thresholding Methods for $l_0$ Regularized Convex Cone
Programming | math.OC cs.LG math.NA stat.CO stat.ML | In this paper we consider $l_0$ regularized convex cone programming problems.
In particular, we first propose an iterative hard thresholding (IHT) method and
its variant for solving $l_0$ regularized box constrained convex programming.
We show that the sequence generated by these methods converges to a local
minimizer. Also, we establish the iteration complexity of the IHT method for
finding an $\epsilon$-local-optimal solution. We then propose a method for
solving $l_0$ regularized convex cone programming by applying the IHT method to
its quadratic penalty relaxation and establish its iteration complexity for
finding an $\epsilon$-approximate local minimizer. Finally, we propose a
variant of this method in which the associated penalty parameter is dynamically
updated, and show that every accumulation point is a local minimizer of the
problem.
|
1211.0071 | Randomness and Non-determinism | cs.CC cs.CR cs.IT math.IT | Exponentiation makes the difference between the bit-size of this line and the
number (<< 2^{300}) of particles in the known Universe. The expulsion of
exponential time algorithms from Computer Theory in the 60's broke its
umbilical cord from Mathematical Logic. It created a deep gap between
deterministic computation and -- formerly its unremarkable tools -- randomness
and non-determinism. Little did we learn in the past decades about the power of
either of these two basic "freedoms" of computation, but some vague pattern is
emerging in relationships between them. The pattern of similar techniques
instrumental for quite different results in this area seems even more
interesting. Ideas like multilinear and low-degree multivariate polynomials,
Fourier transformation over low-periodic groups seem very illuminating. The
talk surveyed some recent results. One of them, given in a stronger form than
previously published, is described below.
|
1211.0074 | Transition-Based Dependency Parsing With Pluggable Classifiers | cs.CL | In principle, the design of transition-based dependency parsers makes it
possible to experiment with any general-purpose classifier without other
changes to the parsing algorithm. In practice, however, it often takes
substantial software engineering to bridge between the different
representations used by two software packages. Here we present extensions to
MaltParser that allow the drop-in use of any classifier conforming to the
interface of the Weka machine learning package, a wrapper for the TiMBL
memory-based learner to this interface, and experiments on multilingual
dependency parsing with a variety of classifiers. While earlier work had
suggested that memory-based learners might be a good choice for low-resource
parsing scenarios, we cannot support that hypothesis in this work. We observed
that support-vector machines give better parsing performance than the
memory-based learner, regardless of the size of the training set.
|
1211.0122 | On Rational-Interpolation Based List-Decoding and List-Decoding Binary
Goppa Codes | cs.IT math.IT | We derive the Wu list-decoding algorithm for Generalised Reed-Solomon (GRS)
codes by using Gr\"obner bases over modules and the Euclidean algorithm (EA) as
the initial algorithm instead of the Berlekamp-Massey algorithm (BMA). We
present a novel method for constructing the interpolation polynomial fast. We
give a new application of the Wu list decoder by decoding irreducible binary
Goppa codes up to the binary Johnson radius. Finally, we point out a connection
between the governing equations of the Wu algorithm and the Guruswami-Sudan
algorithm (GSA), immediately leading to equality in the decoding range and a
duality in the choice of parameters needed for decoding, both in the case of
GRS codes and in the case of Goppa codes.
|
1211.0135 | Sampling and Reconstruction of Spatial Fields using Mobile Sensors | cs.MM cs.CV cs.IT math.IT | Spatial sampling is traditionally studied in a static setting where static
sensors scattered around space take measurements of the spatial field at their
locations. In this paper we study the emerging paradigm of sampling and
reconstructing spatial fields using sensors that move through space. We show
that mobile sensing offers some unique advantages over static sensing in
sensing time-invariant bandlimited spatial fields. Since a moving sensor
encounters such a spatial field along its path as a time-domain signal, a
time-domain anti-aliasing filter can be employed prior to sampling the signal
received at the sensor. Such a filtering procedure, when used by a
configuration of sensors moving at constant speeds along equispaced parallel
lines, leads to a complete suppression of spatial aliasing in the direction of
motion of the sensors. We analytically quantify the advantage of using such a
sampling scheme over a static sampling scheme by computing the reduction in
sampling noise due to the filter. We also analyze the effects of non-uniform
sensor speeds on the reconstruction accuracy. Using simulation examples we
demonstrate the advantages of mobile sampling over static sampling in practical
problems.
We extend our analysis to sampling and reconstruction schemes for monitoring
time-varying bandlimited fields using mobile sensors. We demonstrate that in
some situations we require a lower density of sensors when using a mobile
sensing scheme instead of the conventional static sensing scheme. The exact
advantage is quantified for a problem of sampling and reconstructing an audio
field.
|
1211.0156 | Attention Competition with Advertisement | cs.SI nlin.AO physics.soc-ph | In the new digital age, information is available in large quantities. Since
information consumes primarily the attention of its recipients, the scarcity of
attention is becoming the main limiting factor. In this study, we investigate
the impact of advertisement pressure on a cultural market where consumers have
a limited attention capacity. A model of competition for attention is developed
and investigated analytically and by simulation. Advertisement is found to be
much more effective when the attention capacity of agents is extremely scarce. We
have observed that the market share of the advertised item improves if dummy
items are introduced to the market while the strength of the advertisement is
kept constant.
|
1211.0169 | Multi-Stratum Networks: toward a unified model of on-line identities | cs.SI physics.soc-ph | One of the reasons behind the success of Social Network Analysis is its
simple and general graph model made of nodes (representing individuals) and
ties. However, when we focus on our daily on-line experience we must confront a
more complex scenario: people inhabit several on-line spaces, interacting with
several communities active on various technological infrastructures such as
Twitter, Facebook, YouTube or FourSquare, each with distinct social objectives.
This constitutes a complex network of interconnected networks where users'
identities are spread and where information propagates navigating through
different communities and social platforms. In this article we introduce a
model for this layered scenario that we call multi-stratum network. Through a
theoretical discussion and the analysis of real-world data we show how not only
focusing on a single network may provide a very partial understanding of the
role of its users, but also that considering all the networks separately may
not reveal the information contained in the whole multi-stratum model.
|
1211.0176 | Joining relations under discrete uncertainty | cs.DB | In this paper we introduce and experimentally compare alternative algorithms
to join uncertain relations. Different algorithms are based on specific
principles, e.g., sorting, indexing, or building intermediate relational tables
to apply traditional approaches. As a consequence, their performance is affected
by different features of the input data, and each algorithm is shown to be more
efficient than the others in specific cases. In this way, statistics explicitly
representing the amount and kind of uncertainty in the input uncertain
relations can be used to choose the most efficient algorithm.
|
1211.0191 | Performance Evaluation of Random Set Based Pedestrian Tracking
Algorithms | cs.CV | The paper evaluates the error performance of three random finite set based
multi-object trackers in the context of pedestrian video tracking. The
evaluation is carried out using a publicly available video dataset of 4500
frames (town centre street) for which the ground truth is available. The input
to all pedestrian tracking algorithms is an identical set of head and body
detections, obtained using the Histogram of Oriented Gradients (HOG) detector.
The tracking error is measured using the recently proposed OSPA metric for
tracks, adopted as the only known mathematically rigorous metric for measuring
the distance between two sets of tracks. A comparative analysis is presented
under various conditions.
|
1211.0210 | Extension of TSVM to Multi-Class and Hierarchical Text Classification
Problems With General Losses | cs.LG | Transductive SVM (TSVM) is a well known semi-supervised large margin learning
method for binary text classification. In this paper we extend this method to
multi-class and hierarchical classification problems. We point out that the
determination of labels of unlabeled examples with fixed classifier weights is
a linear programming problem. We devise an efficient technique for solving it.
The method is applicable to general loss functions. We demonstrate the value of
the new method using large margin loss on a number of multi-class and
hierarchical classification datasets. For maxent loss we show empirically that
our method is better than expectation regularization/constraint and posterior
regularization methods, and competitive with the version of entropy
regularization method which uses label constraints.
|
1211.0224 | Views over RDF Datasets: A State-of-the-Art and Open Challenges | cs.DB | Views on RDF datasets have been discussed in several works; nevertheless,
there is no consensus on their definition or on the requirements they should
fulfill. In traditional data management systems, views have proved to be useful
in different application scenarios such as data integration, query answering,
data security, and query modularization.
In this work we have reviewed existing work on views over RDF datasets, and
discussed the application of existing view definition mechanisms to four
scenarios in which views have proved to be useful in traditional (relational)
data management systems. To give a framework for the discussion we provided a
definition of views over RDF datasets, an issue over which there is no
consensus so far. We finally chose the three proposals closest to this
definition, and analyzed them with respect to four selected goals.
|
1211.0290 | Super-Resolution from Noisy Data | cs.IT math.IT math.NA | This paper studies the recovery of a superposition of point sources from
noisy bandlimited data. In the fewest possible words, we only have information
about the spectrum of an object in a low-frequency band bounded by a certain
cut-off frequency and seek to obtain a higher resolution estimate by
extrapolating the spectrum up to a higher frequency. We show that as long as
the sources are separated by twice the inverse of the cut-off frequency,
solving a simple convex program produces a stable estimate in the sense that
the approximation error between the higher-resolution reconstruction and the
truth is proportional to the noise level times the square of the
super-resolution factor (SRF), which is the ratio between the desired high
frequency and the cut-off frequency of the data.
|
1211.0320 | TrackMeNot-so-good-after-all | cs.IR | TrackMeNot is a Firefox plugin with laudable intentions - protecting your
privacy. By issuing a customizable stream of random search queries on its
users' behalf, TrackMeNot surmises that enough search noise will prevent its
users' true query profiles from being discerned. However, we find that
clustering queries by semantic relatedness allows us to disentangle a
nontrivial subset of true user queries from TrackMeNot issued noise.
|
1211.0358 | Deep Gaussian Processes | stat.ML cs.LG math.PR | In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a
deep belief network based on Gaussian process mappings. The data is modeled as
the output of a multivariate GP. The inputs to that Gaussian process are then
governed by another GP. A single layer model is equivalent to a standard GP or
the GP latent variable model (GP-LVM). We perform inference in the model by
approximate variational marginalization. This results in a strict lower bound
on the marginal likelihood of the model which we use for model selection
(number of layers and nodes per layer). Deep belief networks are typically
applied to relatively large data sets using stochastic gradient descent for
optimization. Our fully Bayesian treatment allows for the application of deep
models even when data is scarce. Model selection by our variational bound shows
that a five layer hierarchy is justified even when modelling a digit data set
containing only 150 examples.
|
1211.0361 | Sketched SVD: Recovering Spectral Features from Compressive Measurements | cs.IT cs.DS math.IT | We consider a streaming data model in which n sensors observe individual
streams of data, presented in a turnstile model. Our goal is to analyze the
singular value decomposition (SVD) of the matrix of data defined implicitly by
the stream of updates. Each column i of the data matrix is given by the stream
of updates seen at sensor i. Our approach is to sketch each column of the
matrix, forming a "sketch matrix" Y, and then to compute the SVD of the sketch
matrix. We show that the singular values and right singular vectors of Y are
close to those of X, with small relative error. We also believe that this bound
is of independent interest in non-streaming and non-distributed data collection
settings.
Assuming that the data matrix X is of size Nxn, then with m linear
measurements of each column of X, we obtain a smaller matrix Y with dimensions
mxn. If m = O(k \epsilon^{-2} (log(1/\epsilon) + log(1/\delta))), where k
denotes the rank of X, then with probability at least 1-\delta, the singular
values \sigma'_j of Y satisfy the following relative error result
(1-\epsilon)^(1/2) <= \sigma'_j/\sigma_j <= (1+\epsilon)^(1/2) as compared
to the singular values \sigma_j of the original matrix X. Furthermore, the
right singular vectors v'_j of Y satisfy
||v_j-v_j'||_2 <= min(sqrt{2},
(\epsilon\sqrt{1+\epsilon})/(\sqrt{1-\epsilon}) max_{i\neq j}
(\sqrt{2}\sigma_i\sigma_j)/(min_{c\in[-1,1]}(|\sigma^2_i-\sigma^2_j(1+c\epsilon)|)))
as compared to the right singular vectors v_j of X. We apply this result to
obtain a streaming graph algorithm to approximate the eigenvalues and
eigenvectors of the graph Laplacian in the case where the graph has low rank
(many connected components).
|
1211.0390 | Rating through Voting: An Iterative Method for Robust Rating | cs.IR cs.HC cs.SI | In this paper we introduce an iterative voting algorithm and then use it to
obtain a rating method which is very robust against collusion attacks as well
as random and biased raters. Unlike the previous iterative methods, our method
is not based on comparing submitted evaluations to an approximation of the
final rating scores, and it entirely decouples credibility assessment of the
cast evaluations from the ranking itself. The convergence of our algorithm
relies on the existence of a fixed point of a continuous mapping which is also
a stationary point of a constrained optimization objective. We have implemented
and tested our rating method using both simulated data as well as real world
data. In particular, we have applied our method to movie evaluations obtained
from MovieLens and compared our results with IMDb and Rotten Tomatoes movie
rating sites. Not only are the ratings provided by our system very close to
IMDb rating scores, but when we differ from the IMDb ratings, the direction of
such differences is essentially always towards the ratings provided by the
critics in Rotten Tomatoes. Our tests demonstrate high efficiency of our
method, especially for very large online rating systems, for which trust
management is both of the highest importance and one of the most challenging
problems.
|
1211.0415 | Capacity and Security of Heterogeneous Distributed Storage Systems | cs.DC cs.IT cs.NI math.IT | We study the capacity of heterogeneous distributed storage systems under
repair dynamics. Examples of these systems include peer-to-peer storage clouds,
wireless, and Internet caching systems. Nodes in a heterogeneous system can
have different storage capacities and different repair bandwidths. We give
lower and upper bounds on the system capacity. These bounds depend on either
the average resources per node, or on a detailed knowledge of the node
characteristics. Moreover, we study the case in which nodes may be compromised
by an eavesdropper, and give bounds on the system secrecy capacity. One
implication of our results is that symmetric repair maximizes the capacity of a
homogeneous system, which justifies the model widely used in the literature.
|
1211.0418 | Verbalizing Ontologies in Controlled Baltic Languages | cs.CL cs.AI | Controlled natural languages (mostly English-based) have recently emerged as
a seemingly informal supplementary means for OWL ontology authoring, compared
to the formal notations used by professional knowledge engineers. In this
paper we present, by example, a controlled Latvian language that has been
designed to be compliant with the state-of-the-art Attempto Controlled English.
We also discuss its relation to a controlled Lithuanian language that is being
designed in parallel.
|
1211.0424 | Learning classifier systems with memory condition to solve non-Markov
problems | cs.NE cs.AI | In the family of Learning Classifier Systems, the classifier system XCS has
been successfully used for many applications. However, the standard XCS has no
memory mechanism and can only learn an optimal policy in Markov environments,
where the optimal action is determined solely by the current sensory input. In
practice, most environments are partially observable with respect to the
agent's sensation; these are also known as non-Markov environments. Within
these environments, XCS either fails or develops only a suboptimal policy,
since it has no memory. In this work, we develop a new classifier system based
on XCS to tackle this problem. It adds an internal message list to XCS as a
memory list to record the input sensation history, and extends a small number
of classifiers with memory conditions. A classifier's memory condition, as a
foothold for disambiguating non-Markov states, is used to sense a specified
element in the memory list. In addition, a detection method is employed to
recognize non-Markov states in environments, to prevent these states from
controlling the classifiers' memory conditions. The proposed method has been
tested on four sets of complex maze environments. Experimental results show
that our system is one of the best techniques for solving partially observable
environments, compared with several well-known classifier systems proposed for
these environments.
|
1211.0439 | Learning curves for multi-task Gaussian process regression | cs.LG cond-mat.dis-nn stat.ML | We study the average case performance of multi-task Gaussian process (GP)
regression as captured in the learning curve, i.e. the average Bayes error for
a chosen task versus the total number of examples $n$ for all tasks. For GP
covariances that are the product of an input-dependent covariance function and
a free-form inter-task covariance matrix, we show that accurate approximations
for the learning curve can be obtained for an arbitrary number of tasks $T$. We
use these to study the asymptotic learning behaviour for large $n$.
Surprisingly, multi-task learning can be asymptotically essentially useless, in
the sense that examples from other tasks help only when the degree of
inter-task correlation, $\rho$, is near its maximal value $\rho=1$. This effect
is most extreme for learning of smooth target functions as described by e.g.
squared exponential kernels. We also demonstrate that when learning many tasks,
the learning curves separate into an initial phase, where the Bayes error on
each task is reduced down to a plateau value by "collective learning" even
though most tasks have not seen examples, and a final decay that occurs once
the number of examples is proportional to the number of tasks.
|
1211.0447 | Ordinal Rating of Network Performance and Inference by Matrix Completion | cs.NI cs.LG | This paper addresses the large-scale acquisition of end-to-end network
performance. We make two distinct contributions: ordinal rating of network
performance and inference by matrix completion. The former reduces measurement
costs and unifies various metrics, which eases their processing in applications.
The latter enables scalable and accurate inference that requires neither
structural information about the network nor geometric constraints. By combining
both, the acquisition problem bears strong similarities to recommender systems.
This paper investigates the applicability of various matrix factorization
models used in recommender systems. We found that the simple regularized matrix
factorization is not only practical but also produces accurate results that are
beneficial for peer selection.
|
1211.0479 | Parameterized Complexity and Kernel Bounds for Hard Planning Problems | cs.DS cs.AI | The propositional planning problem is a notoriously difficult computational
problem. Downey et al. (1999) initiated the parameterized analysis of planning
(with plan length as the parameter) and B\"ackstr\"om et al. (2012) picked up
this line of research and provided an extensive parameterized analysis under
various restrictions, leaving open only one stubborn case. We continue this
work and provide a full classification. In particular, we show that the case
when actions have no preconditions and at most $e$ postconditions is
fixed-parameter tractable if $e\leq 2$ and W[1]-complete otherwise. We show
fixed-parameter tractability by a reduction to a variant of the Steiner Tree
problem; this problem has been shown fixed-parameter tractable by Guo et al.
(2007). If a problem is fixed-parameter tractable, then it admits a
polynomial-time self-reduction to instances whose input size is bounded by a
function of the parameter, called the kernel. For some problems, this function
is even polynomial which has desirable computational implications. Recent
research in parameterized complexity has focused on classifying fixed-parameter
tractable problems on whether they admit polynomial kernels or not. We revisit
all the previously obtained restrictions of planning that are fixed-parameter
tractable and show that none of them admits a polynomial kernel unless the
polynomial hierarchy collapses to its third level.
|
1211.0498 | Detecting English Writing Styles For Non-native Speakers | cs.CL | Analyzing writing styles of non-native speakers is a challenging task. In
this paper, we analyze the comments written in the discussion pages of the
English Wikipedia. Using learning algorithms, we are able to detect native
speakers' writing style with an accuracy of 74%. Given the diversity of the
English Wikipedia users and the large number of languages they speak, we
measure the similarities among their native languages by comparing the
influence they have on their English writing style. Our results show that
languages known to have the same origin and development path leave a similar
footprint on their speakers' English writing style. To enable further studies,
the dataset we extracted from Wikipedia will be made available publicly.
|
1211.0501 | Surprisingly Rational: Probability theory plus noise explains biases in
judgment | physics.data-an cs.AI stat.AP | The systematic biases seen in people's probability judgments are typically
taken as evidence that people do not reason about probability using the rules
of probability theory, but instead use heuristics which sometimes yield
reasonable judgments and sometimes systematic biases. This view has had a major
impact in economics, law, medicine, and other fields; indeed, the idea that
people cannot reason with probabilities has become a widespread truism. We
present a simple alternative to this view, where people reason about
probability according to probability theory but are subject to random variation
or noise in the reasoning process. In this account the effect of noise is
cancelled for some probabilistic expressions: analysing data from two
experiments we find that, for these expressions, people's probability judgments
are strikingly close to those required by probability theory. For other
expressions this account produces systematic deviations in probability
estimates. These deviations explain four reliable biases in human probabilistic
reasoning (conservatism, subadditivity, conjunction and disjunction fallacies).
These results suggest that people's probability judgments embody the rules of
probability theory, and that biases in those judgments are due to the effects
of random noise.
|
1211.0518 | Complex social contagion makes networks more vulnerable to disease
outbreaks | physics.soc-ph cs.SI q-bio.PE | Social network analysis is now widely used to investigate the dynamics of
infectious disease spread from person to person. Vaccination dramatically
disrupts the disease transmission process on a contact network, and indeed,
sufficiently high vaccination rates can disrupt the process to such an extent
that disease transmission on the network is effectively halted. Here, we build
on mounting evidence that health behaviors - such as vaccination, and refusal
thereof - can spread through social networks through a process of complex
contagion that requires social reinforcement. Using network simulations that
model both the health behavior and the infectious disease spread, we find that
under otherwise identical conditions, the process by which the health behavior
spreads has a very strong effect on disease outbreak dynamics. This variability
in dynamics results from differences in the topology within susceptible
communities that arise during the health behavior spreading process, which in
turn depends on the topology of the overall social network. Our findings point
to the importance of health behavior spread in predicting and controlling
disease outbreaks.
|
1211.0587 | Partition Tree Weighting | cs.IT cs.LG math.IT stat.ML | This paper introduces the Partition Tree Weighting technique, an efficient
meta-algorithm for piecewise stationary sources. The technique works by
performing Bayesian model averaging over a large class of possible partitions
of the data into locally stationary segments. It uses a prior, closely related
to the Context Tree Weighting technique of Willems, that is well suited to data
compression applications. Our technique can be applied to any coding
distribution at an additional time and space cost only logarithmic in the
sequence length. We provide a competitive analysis of the redundancy of our
method, and explore its application in a variety of settings. The order of the
redundancy and the complexity of our algorithm matches those of the best
competitors available in the literature, and the new algorithm exhibits a
superior complexity-performance trade-off in our experiments.
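As a rough sketch of the recursion behind Partition Tree Weighting (not the paper's full algorithm, which achieves only logarithmic cost per symbol), the technique can be illustrated on binary data with a Krichevsky-Trofimov (KT) base estimator: the PTW probability at depth d averages the base estimator on the whole segment with the product of PTW applied to the two halves of the partition tree.

```python
def kt(seq):
    """Krichevsky-Trofimov probability of a binary sequence (a memoryless
    Bayesian coding distribution with a Beta(1/2, 1/2) prior)."""
    p, zeros, ones = 1.0, 0, 0
    for s in seq:
        p *= (ones + 0.5) / (zeros + ones + 1) if s == 1 else \
             (zeros + 0.5) / (zeros + ones + 1)
        zeros, ones = zeros + (s == 0), ones + (s == 1)
    return p

def ptw(seq, depth):
    """Partition Tree Weighting at a given depth: mix the base estimator
    on the whole segment with the product of PTW on the two halves,
    ptw_d = 1/2 * kt(seq) + 1/2 * ptw_{d-1}(left) * ptw_{d-1}(right)."""
    if depth == 0 or len(seq) <= 1:
        return kt(seq)
    half = 2 ** (depth - 1)
    left, right = seq[:half], seq[half:]
    if not right:                      # segment shorter than half the span
        return 0.5 * kt(seq) + 0.5 * ptw(left, depth - 1)
    return 0.5 * kt(seq) + 0.5 * ptw(left, depth - 1) * ptw(right, depth - 1)

# A piecewise stationary source: all zeros, then all ones.  PTW assigns it
# a much higher probability (a shorter code length) than a single KT model.
seq = [0] * 8 + [1] * 8
print(ptw(seq, 4) > kt(seq))  # True
```

This naive recursion recomputes KT on every node and so runs in superlinear time; the paper's contribution includes doing the equivalent averaging with only logarithmic additional time and space per symbol.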
|
1211.0602 | Segmentation of ultrasound images of thyroid nodule for assisting fine
needle aspiration cytology | cs.CV | The incidence of thyroid nodules is very high and generally increases with
age. A thyroid nodule may presage the emergence of thyroid cancer, and it can
be completely cured if detected early. Fine needle aspiration cytology is a
recognized method for early diagnosis of thyroid nodules, but it still has some
limitations, and ultrasound has become the first choice for auxiliary
examination of thyroid nodular disease. Combining medical imaging technology
with fine needle aspiration cytology would significantly improve the
diagnostic rate of thyroid nodules. However, the physical properties of
ultrasound degrade image quality, which makes it difficult for physicians to
recognize edges. Image segmentation based on graph theory has become a research
hotspot, and normalized cut (Ncut) is a representative method well suited to
segmenting feature parts of medical images. Solving the normalized cut,
however, requires large memory capacity and heavy computation of the weight
matrix, and it often produces over-segmentation or under-segmentation, leading
to inaccurate results. Speckle noise in B-mode ultrasound images of thyroid
tumors further deteriorates image quality. In light of these characteristics,
we combine the anisotropic diffusion model with the normalized cut in this
paper. Anisotropic diffusion removes the noise in the B-mode ultrasound image
while preserving important edges and local details. This reduces the amount of
computation needed to construct the weight matrix of the improved normalized
cut and improves the accuracy of the final segmentation results. The
feasibility of the method is demonstrated by experimental results.
|
1211.0611 | Matrix approach to rough sets through vector matroids over a field | cs.AI | Rough sets were proposed to deal with the vagueness and incompleteness of
knowledge in information systems. There are many optimization issues in this
field, such as attribute reduction. Matroids, which generalize matrices, are
widely used in optimization, so it is natural to connect matroids with rough
sets. In this paper, we take fields into consideration and use matrices to
study rough sets through vector matroids. First, a matrix representation of an
equivalence relation is proposed, and a matroidal structure of rough sets over
a field is then induced by the matrix. Second, the properties of this matroidal
structure, including its circuits and bases, are studied through two special
matrix solution spaces, especially the null space. Third, over a binary field,
we construct an equivalence relation from the matrix null space, and establish
an algebra isomorphism from the collection of equivalence relations to a
collection of sets, each member of which is the family of minimal non-empty
sets that are supports of members of the null space of a binary dependence
matrix. In a word, matrices provide a new viewpoint for studying rough sets.
|
1211.0613 | Application of Symmetric Uncertainty and Mutual Information to
Dimensionality Reduction and Classification of Hyperspectral Images | cs.CV | Remote sensing is a technology for acquiring data about distant objects,
needed to build knowledge models for applications such as classification.
Hyperspectral imaging (HSI) has recently become a powerful tool whose main
goal is to classify the points of a region. An HSI consists of more than a
hundred two-dimensional measures, called bands (or simply images), of the same
region, accompanied by a Ground Truth map (GT). Some bands are not relevant
because they are affected by atmospheric effects; others contain redundant
information; and the high dimensionality of HSI features lowers classification
accuracy. All of these bands may matter for some applications, but for
classification only a small subset is relevant. The central problem for HSI is
therefore dimensionality reduction. Many studies use mutual information (MI)
to select the relevant bands; others use normalized forms of MI, such as
Symmetric Uncertainty, in medical imaging applications. In this paper we
introduce an algorithm, also based on MI, that selects relevant bands and
applies the Symmetric Uncertainty coefficient to control redundancy and
increase classification accuracy. The algorithm is a feature selection tool
following a filter strategy. We evaluate it on the AVIRIS 92AV3C hyperspectral
image and find it an effective and fast scheme for controlling redundancy.
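The Symmetric Uncertainty coefficient used to control redundancy can be sketched for discrete (quantized) band values; this is a minimal illustration of the measure itself, not the paper's band-selection algorithm:

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """MI(X;Y) = H(X) + H(Y) - H(X,Y) for paired discrete sequences."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def symmetric_uncertainty(xs, ys):
    """SU(X,Y) = 2*MI(X;Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    hx, hy = entropy(xs), entropy(ys)
    if hx + hy == 0:
        return 0.0
    return 2.0 * mutual_information(xs, ys) / (hx + hy)

# A band identical to the ground truth has SU = 1; an independent one SU = 0.
gt   = [0, 0, 1, 1, 0, 1, 0, 1]
band = [0, 0, 1, 1, 0, 1, 0, 1]
print(symmetric_uncertainty(gt, band))  # 1.0
```

A filter strategy of the kind described would rank bands by SU against the ground truth and discard bands with high SU against already-selected ones.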
|
1211.0616 | The complexity of learning halfspaces using generalized linear methods | cs.LG cs.DS | Many popular learning algorithms (e.g., regression, Fourier-transform-based
algorithms, kernel SVM, and kernel ridge regression) operate by reducing the
problem to a convex optimization problem over a vector space of functions.
These methods offer the currently best approach to several central problems
such as learning halfspaces and learning DNFs. In addition, they are widely
used in numerous application domains. Despite their importance, there are still
very few proof techniques to show limits on the power of these algorithms.
We study the performance of this approach in the problem of (agnostically and
improperly) learning halfspaces with margin $\gamma$. Let $\mathcal{D}$ be a
distribution over labeled examples. The $\gamma$-margin error of a hyperplane
$h$ is the probability that an example falls on the wrong side of $h$ or at a
distance $\le\gamma$ from it. The $\gamma$-margin error of the best $h$ is
denoted $\mathrm{Err}_\gamma(\mathcal{D})$. An $\alpha(\gamma)$-approximation
algorithm receives $\gamma,\epsilon$ as input and, using i.i.d. samples of
$\mathcal{D}$, outputs a classifier with error rate $\le
\alpha(\gamma)\mathrm{Err}_\gamma(\mathcal{D}) + \epsilon$. Such an algorithm
is efficient if it uses $\mathrm{poly}(\frac{1}{\gamma},\frac{1}{\epsilon})$
samples and runs in time polynomial in the sample size.
The best approximation ratio achievable by an efficient algorithm is
$O\left(\frac{1/\gamma}{\sqrt{\log(1/\gamma)}}\right)$ and is achieved using an
algorithm from the above class. Our main result shows that the approximation
ratio of every efficient algorithm from this family must be $\ge
\Omega\left(\frac{1/\gamma}{\mathrm{poly}\left(\log\left(1/\gamma\right)\right)}\right)$,
essentially matching the best known upper bound.
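The $\gamma$-margin error defined above has a direct empirical counterpart over a finite sample; a sketch with hypothetical data (a homogeneous hyperplane through the origin, labels in $\{-1,+1\}$):

```python
import math

def margin_error(w, samples, gamma):
    """Empirical gamma-margin error of the hyperplane {x : <w, x> = 0}:
    the fraction of labeled examples (x, y) that are either misclassified
    or lie within distance gamma of the hyperplane."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    bad = 0
    for x, y in samples:
        signed_dist = y * sum(wi * xi for wi, xi in zip(w, x)) / norm
        if signed_dist <= gamma:   # wrong side, or inside the margin
            bad += 1
    return bad / len(samples)

# One margin violation and one misclassification out of four points.
samples = [((1.0, 0.0), +1), ((-1.0, 0.0), -1),
           ((0.05, 0.0), +1), ((0.5, 0.0), -1)]
print(margin_error((1.0, 0.0), samples, gamma=0.1))  # 0.5
```

An $\alpha(\gamma)$-approximation algorithm would have to output a classifier whose ordinary error is at most $\alpha(\gamma)$ times the smallest such margin error, plus $\epsilon$.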
|
1211.0632 | Stochastic ADMM for Nonsmooth Optimization | cs.LG math.OC stat.ML | We present a stochastic setting for optimization problems with nonsmooth
convex separable objective functions over linear equality constraints. To solve
such problems, we propose a stochastic Alternating Direction Method of
Multipliers (ADMM) algorithm. Our algorithm applies to a more general class of
nonsmooth convex functions for which directly minimizing the augmented
function does not necessarily yield a closed-form solution. We also demonstrate the
rates of convergence for our algorithm under various structural assumptions of
the stochastic functions: $O(1/\sqrt{t})$ for convex functions and $O(\log
t/t)$ for strongly convex functions. Compared to previous literature, we
establish the convergence rate of the ADMM algorithm, for the first time, in terms
of both the objective value and the feasibility violation.
|
1211.0654 | On Threshold Models over Finite Networks | cs.DM cs.GT cs.SI | We study a model for cascade effects over finite networks based on a
deterministic binary linear threshold model. Our starting point is a networked
coordination game where each agent's payoff is the sum of the payoffs coming
from pairwise interactions with each of the neighbors. We first establish that
the best response dynamics in this networked game is equivalent to the linear
threshold dynamics with heterogeneous thresholds over the agents. While the
previous literature has studied such linear threshold models under the
assumption that each agent may change actions at most once, a study of best
response dynamics in such networked games necessitates an analysis that allows
for multiple switches in actions. In this paper, we develop such an analysis
and construct a combinatorial framework to understand the behavior of the
model. To this end, we establish that the agents' behavior cycles among
different actions in the limit and provide three sets of results.
We first characterize the limiting behavioral properties of the dynamics. We
determine the length of the limit cycles and reveal bounds on the time steps
required to reach such cycles for different network structures. We then study
the complexity of decision/counting problems that arise within the context.
Specifically, we consider the tractability of counting the number of limit
cycles and fixed-points, and deciding the reachability of action profiles. We
finally propose a measure of network resilience that captures the nature of the
involved dynamics. We prove bounds and investigate the resilience of different
network structures under this measure.
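The linear threshold dynamics with multiple switches can be sketched as synchronous best-response iteration, with the limit cycle found by detecting the first repeated state. This is a toy illustration under assumed conventions (synchronous updates, "at least threshold many active neighbors" rule); the paper's exact model may differ in detail:

```python
def best_response_step(adj, thresholds, state):
    """Synchronous best response: agent i plays 1 iff the number of its
    neighbors playing 1 meets its threshold.  Agents may switch back."""
    return tuple(
        1 if sum(state[j] for j in adj[i]) >= thresholds[i] else 0
        for i in range(len(adj))
    )

def limit_cycle_length(adj, thresholds, state):
    """Iterate until a state repeats; return the length of the limit cycle
    (1 means the dynamics reached a fixed point)."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = best_response_step(adj, thresholds, state)
        t += 1
    return t - seen[state]

# 4-cycle graph; each agent needs one active neighbor.  The alternating
# state oscillates between (1,0,1,0) and (0,1,0,1).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(limit_cycle_length(adj, [1, 1, 1, 1], (1, 0, 1, 0)))  # 2
```

Since the state space is finite and the update is deterministic, every trajectory must eventually enter such a cycle, which is the starting point of the paper's analysis.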
|
1211.0656 | Electoral Susceptibility | physics.soc-ph cond-mat.stat-mech cs.SI stat.AP | In the United States electoral system, a candidate is elected indirectly by
winning a majority of electoral votes cast by individual states, the election
usually being decided by the votes cast by a small number of "swing states"
where the two candidates historically have roughly equal probabilities of
winning. The effective value of a swing state in deciding the election is
determined not only by the number of its electoral votes but by the frequency
of its appearance in the set of winning partitions of the electoral college.
Since the electoral vote values of swing states are not identical, the presence
or absence of a state in a winning partition is generally correlated with the
frequency of appearance of other states and, hence, their effective values. We
quantify the effective value of states by an {\sl electoral susceptibility},
$\chi_j$, the variation of the winning probability with the "cost" of changing
the probability of winning state $j$. We study $\chi_j$ for realistic data
accumulated for the 2012 U.S. presidential election and for a simple model with
a Zipf's law type distribution of electoral votes. In the latter model we show
that the susceptibility for small states is largest in "one-sided" electoral
contests and smallest in close contests. We draw an analogy to models of
entropically driven interactions in poly-disperse colloidal solutions.
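The winning-probability machinery underlying the susceptibility can be sketched with a simple dynamic program over the electoral-vote total. This is a hypothetical toy assuming independent state outcomes, not the paper's model; $\chi_j$ could then be approximated by a finite difference of this probability with respect to the win probability of state $j$:

```python
def win_probability(states, needed):
    """Probability of reaching `needed` electoral votes, where `states` is
    a list of (electoral_votes, win_probability) pairs and states are won
    independently.  DP over the distribution of total votes won."""
    total = sum(v for v, _ in states)
    dist = [1.0] + [0.0] * total          # dist[k] = P(exactly k votes so far)
    for votes, p in states:
        new = [0.0] * (total + 1)
        for k, mass in enumerate(dist):
            if mass:
                new[k] += mass * (1 - p)          # state lost
                new[k + votes] += mass * p        # state won
        dist = new
    return sum(dist[needed:])

# Three toy "states" of one vote each, each a coin flip; majority is 2.
print(round(win_probability([(1, 0.5), (1, 0.5), (1, 0.5)], 2), 6))  # 0.5
```

The DP also yields the full distribution over winning partitions, which is what determines how often each state appears in a winning coalition.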
|
1211.0658 | On the Non-existence of Lattice Tilings by Quasi-crosses | cs.IT math.CO math.IT | We study necessary conditions for the existence of lattice tilings of $\R^n$
by quasi-crosses. We prove non-existence results, and focus in particular on
the two smallest unclassified shapes, the $(3,1,n)$-quasi-cross and the
$(3,2,n)$-quasi-cross. We show that for dimensions $n\leq 250$, apart from the
known constructions, there are no lattice tilings of $\R^n$ by
$(3,1,n)$-quasi-crosses except for ten remaining cases, and no lattice tilings
of $\R^n$ by $(3,2,n)$-quasi-crosses except for eleven remaining cases.
|
1211.0660 | Generation of Two-Layer Monotonic Functions | cs.NE | The problem of implementing a class of functions with particular conditions
by using monotonic multilayer functions is considered. A genetic algorithm is
used to create monotonic functions of a certain class, and these are
implemented with two-layer monotonic functions. The existence of a solution to
the given problem suggests that, from two monotonic functions, a monotonic
function of the same dimension can be created. A new algorithm based on the
genetic algorithm is proposed, which easily implements two-layer monotonic
functions of a specific class for up to six variables.
|
1211.0689 | Enhancing Invenio Digital Library With An External Relevance Ranking
Engine | cs.IR cs.DL | Invenio is a comprehensive web-based free digital library software suite
originally developed at CERN. In order to improve its information retrieval and
word similarity ranking capabilities, the goal of this thesis is to enhance
Invenio by bridging it with modern external information retrieval systems. In
the first part a comparison of various information retrieval systems such as
Solr and Xapian is made. In the second part a system-independent bridge for
word similarity ranking is designed and implemented. Subsequently, Solr and
Xapian are integrated in Invenio via adapters to the bridge. In the third part
scalability tests are performed. Finally, a future outlook is briefly
discussed.
|
1211.0709 | Shaping Operations to Attack Robust Terror Networks | cs.SI physics.soc-ph | Security organizations often attempt to disrupt terror or insurgent networks
by targeting "high value targets" (HVTs). However, there have been numerous
examples that illustrate how such networks are able to quickly re-generate
leadership after such an operation. Here, we introduce the notion of a
"shaping" operation in which the terrorist network is first targeted for the
purpose of reducing its leadership re-generation ability before targeting
HVTs. We aim to conduct shaping by maximizing the network-wide degree
centrality through node removal. We formally define this problem and prove
solving it is NP-Complete. We introduce a mixed integer-linear program that
solves this problem exactly as well as a greedy heuristic for more practical
use. We implement the greedy heuristic and find, in examining five real-world
terrorist networks, that removing only 12% of nodes can increase the
network-wide centrality by between 17% and 45%. We also show our algorithm can
scale to large social networks of 1,133 nodes and 5,541 edges on commodity
hardware.
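A hedged sketch of the greedy heuristic's flavor: the abstract says nodes are removed so as to maximize network-wide degree centrality. Here we assume Freeman's (unnormalized) degree centralization as the objective, which is our assumption for illustration, not necessarily the authors' exact measure:

```python
def centralization(adj):
    """Freeman degree centralization (unnormalized): the sum over nodes of
    (max degree - node degree).  Assumes a non-empty graph."""
    degs = {v: len(nbrs) for v, nbrs in adj.items()}
    m = max(degs.values())
    return sum(m - d for d in degs.values())

def remove_node(adj, v):
    """Return a copy of the adjacency dict with node v deleted."""
    return {u: nbrs - {v} for u, nbrs in adj.items() if u != v}

def greedy_shaping(adj, budget):
    """Greedily remove `budget` nodes, each time picking the node whose
    removal maximizes the network-wide degree centralization."""
    removed = []
    for _ in range(budget):
        best = max(adj, key=lambda v: centralization(remove_node(adj, v)))
        removed.append(best)
        adj = remove_node(adj, best)
    return removed, adj

# Triangle plus a pendant node: removing a triangle node concentrates
# degree on the remaining hub.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
removed, remaining = greedy_shaping(adj, budget=1)
print(removed)
```

Since the exact problem is NP-complete (as the abstract notes), such a greedy pass trades optimality for tractability on large networks.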
|
1211.0719 | Social cohesion, structural holes, and a tale of two measures | physics.soc-ph cs.SI | In the social sciences, the debate over the structural foundations of social
capital has long vacillated between two positions on the relative benefits
associated with two types of social structures: closed structures, rich in
third-party relationships, and open structures, rich in structural holes and
brokerage opportunities. In this paper, we engage with this debate by focusing
on the measures typically used for formalising the two conceptions of social
capital: clustering and effective size. We show that these two measures are
simply two sides of the same coin, as they can be expressed one in terms of the
other through a simple functional relation. Building on this relation, we then
attempt to reconcile closed and open structures by proposing a new measure,
Simmelian brokerage, that captures opportunities of brokerage between otherwise
disconnected cohesive groups of contacts. Implications of our findings for
research on social capital and complex networks are discussed.
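One way to see the two measures as two sides of the same coin is to compute them together for a single ego node: for an unweighted, undirected ego network, Burt's effective size and the local clustering coefficient satisfy the identity S = k - (k - 1)C, which is the kind of functional relation the abstract refers to. A sketch assuming integer node labels:

```python
def ego_stats(adj, i):
    """Degree, local clustering coefficient, and Burt's effective size of
    node i in an undirected graph given as a dict of neighbor sets."""
    nbrs = adj[i]
    k = len(nbrs)
    if k < 2:
        return k, 0.0, float(k)
    # ties among i's neighbors (nodes assumed comparable, e.g. ints)
    t = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    clustering = 2.0 * t / (k * (k - 1))
    effective_size = k - 2.0 * t / k
    # the identity relating the two measures: S = k - (k - 1) * C
    assert abs(effective_size - (k - (k - 1) * clustering)) < 1e-12
    return k, clustering, effective_size

# Triangle plus a pendant node attached to node 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(ego_stats(adj, 0))  # (3, 0.333..., 2.333...)
```

Higher clustering (closure) thus mechanically means lower effective size (fewer structural holes), which is why the two notions of social capital can be expressed one in terms of the other.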
|
1211.0722 | Sub-Nyquist Radar via Doppler Focusing | cs.IT math.IT | We investigate the problem of a monostatic pulse-Doppler radar transceiver
trying to detect targets, sparsely populated in the radar's unambiguous
time-frequency region. Several past works employ compressed sensing (CS)
algorithms to this type of problem, but either do not address sample rate
reduction, impose constraints on the radar transmitter, propose CS recovery
methods with prohibitive dictionary size, or perform poorly in noisy
conditions. Here we describe a sub-Nyquist sampling and recovery approach
called Doppler focusing which addresses all of these problems: it performs low
rate sampling and digital processing, imposes no restrictions on the
transmitter, and uses a CS dictionary with size which does not increase with
increasing number of pulses P. Furthermore, in the presence of noise, Doppler
focusing enjoys an SNR increase which scales linearly with P, obtaining good
detection performance even at SNRs as low as -25 dB. The recovery is based on
the Xampling framework, which allows reducing the number of samples needed to
accurately represent the signal, directly in the analog-to-digital conversion
process. After sampling, the entire digital recovery process is performed on
the low rate samples without having to return to the Nyquist rate. Finally, our
approach can be implemented in hardware using a previously suggested Xampling
prototype.
|
1211.0728 | Fast Algorithm for N-2 Contingency Problem | physics.soc-ph cs.SI math-ph math.MP | We present a novel selection algorithm for N-2 contingency analysis problem.
The algorithm is based on the iterative bounding of line outage distribution
factors and successive pruning of the set of contingency pair candidates. The
selection procedure is non-heuristic, and is certified to identify all events
that lead to thermal constraints violations in DC approximation. The complexity
of the algorithm is O(N^2) comparable to the complexity of N-1 contingency
problem. We validate and test the algorithm on the Polish grid network with
around 3000 lines. For this test case two iterations of the pruning procedure
reduce the total number of candidate pairs by a factor of almost 1000, from 5
million line pairs to only 6128.
|
1211.0730 | Intelligent Algorithm for Optimum Solutions Based on the Principles of
Bat Sonar | cs.NE | This paper presents a new intelligent algorithm that can solve the problems
of finding the optimum solution in the state space among which the desired
solution resides. The algorithm mimics the principles of bat sonar in finding
its targets. The algorithm introduces three search approaches. The first search
approach considers a single sonar unit (SSU) with a fixed beam length and a
single starting point. In this approach, although the results converge toward
the optimum fitness, it is not guaranteed to find the global optimum solution,
especially for complex problems; it settles for finding 'acceptably good'
solutions to these problems. The second approach considers multisonar units
(MSU) working in parallel in the same state space. Each unit has its own
starting point and tries to find the optimum solution. In this approach the
probability that the algorithm converges toward the optimum solution is
significantly increased. It is found that this approach is suitable for complex
functions and for problems with a wide state space. In the third approach, a single
sonar unit with momentum (SSM) is used to handle the problem of
convergence toward a local optimum rather than the global optimum. The momentum
term is added to the length of the transmitted beams, which gives the chance
to find the best fitness in a wider range within the state space. In this paper
a comparison between the proposed algorithm and the genetic algorithm (GA) is
made. It shows that both algorithms find approximately optimal solutions for
all of the testbed functions except the one with a local minimum, for which the
proposed algorithm's result is much better than the GA's. On the other hand,
the comparison shows that the execution time required to obtain the optimum
solution with the proposed algorithm is much less than that of the GA.
|
1211.0736 | A Threshold For Clusters in Real-World Random Networks | cs.SI physics.soc-ph | Recent empirical work [Leskovec2009] has suggested the existence of a size
threshold for the existence of clusters within many real-world networks. We
give the first proof that this clustering size threshold exists within a
real-world random network model, and determine the asymptotic value at which it
occurs.
More precisely, we choose the Community Guided Attachment (CGA) random
network model of Leskovec, Kleinberg, and Faloutsos [Leskovec2005]. The model
is non-uniform and contains self-similar communities, and has been shown to
have many properties of real-world networks. To capture the notion of
clustering, we follow Mishra et al. [Mishra2007], who defined a type of
clustering for real-world networks: an (\alpha,\beta)-cluster is a set that is
both internally dense (to the extent given by the parameter \beta), and
externally sparse (to the extent given by the parameter \alpha). With this
definition of clustering, we show the existence of a size threshold of (\ln
n)^{1/2} for the existence of clusters in the CGA model. For all \epsilon>0,
a.a.s. clusters larger than (\ln n)^{1/2-\epsilon} exist, whereas a.a.s.
clusters larger than (\ln n)^{1/2+\epsilon} do not exist. Moreover, we show a
size bound on the existence of small, constant-size clusters.
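The (\alpha,\beta)-cluster definition can be made concrete with a small checker. This is a sketch of one common formalisation (every inside vertex has at least \beta|C| neighbors in the cluster, every outside vertex at most \alpha|C|); whether inequalities are strict, and whether a vertex counts itself, varies between treatments:

```python
def is_alpha_beta_cluster(adj, cluster, alpha, beta):
    """Check that `cluster` is internally dense (each member has >= beta*|C|
    neighbors inside) and externally sparse (each non-member has <= alpha*|C|
    neighbors inside).  `adj` maps each vertex to a set of its neighbors."""
    c = set(cluster)
    size = len(c)
    for v in adj:
        inside = len(adj[v] & c)
        if v in c:
            if inside < beta * size:
                return False          # not internally dense enough
        else:
            if inside > alpha * size:
                return False          # not externally sparse enough
    return True

# A 4-clique with one outside vertex touching a single cluster member.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(is_alpha_beta_cluster(adj, {0, 1, 2, 3}, alpha=0.3, beta=0.7))  # True
```

The size threshold result then says that in the CGA model, sets passing such a check asymptotically exist up to size about (ln n)^{1/2} and not beyond.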
|
1211.0737 | Optimal Information-Theoretic Wireless Location Verification | cs.IT cs.CR math.IT | We develop a new Location Verification System (LVS) focussed on network-based
Intelligent Transport Systems and vehicular ad hoc networks. The algorithm we
develop is based on an information-theoretic framework which uses the received
signal strength (RSS) from a network of base-stations and the claimed position.
Based on this information we derive the optimal decision regarding the
verification of the user's location. Our algorithm is optimal in the sense of
maximizing the mutual information between its input and output data. Our
approach is based on the practical scenario in which a non-colluding malicious
user some distance from a highway optimally boosts his transmit power in an
attempt to fool the LVS that he is on the highway. We develop a practical
threat model for this attack scenario, and investigate in detail the
performance of the LVS in terms of its input/output mutual information. We show
how our LVS decision rule can be implemented straightforwardly with a
performance that delivers near-optimality under realistic threat conditions,
with information-theoretic optimality approached as the malicious user moves
further from the highway. The practical advantages our new
information-theoretic scheme delivers relative to more traditional Bayesian
verification frameworks are discussed.
|
1211.0749 | Student Modeling using Case-Based Reasoning in Conventional Learning
System | cs.AI cs.CY | Conventional face-to-face classrooms are still the main learning system
in Indonesia. To steer such conventional learning toward optimal learning,
formative evaluations are needed to monitor the progress of the class, a task
that can be very hard when the class is large. Hence, this research attempted
to create a classroom monitoring system based on student data from the
Department of Electrical Engineering and Information Technology. To achieve
this goal, student modeling using Case-Based Reasoning was proposed, and a
generic student model based on a framework was developed. The model represents
student knowledge of a subject. The results showed that the system was able to
store and retrieve student data to provide suggestions for the current
situation and formative evaluation for one of the subjects in the Department.
|
1211.0757 | Efficient Point-to-Subspace Query in $\ell^1$: Theory and Applications
in Computer Vision | stat.ML cs.CV stat.AP | Motivated by vision tasks such as robust face and object recognition, we
consider the following general problem: given a collection of low-dimensional
linear subspaces in a high-dimensional ambient (image) space and a query point
(image), efficiently determine the nearest subspace to the query in $\ell^1$
distance. We show in theory that Cauchy random embedding of the objects into
significantly-lower-dimensional spaces helps preserve the identity of the
nearest subspace with constant probability. This offers the possibility of
efficiently selecting several candidates for accurate search. We sketch
preliminary experiments on robust face and digit recognition to corroborate our
theory.
|
1211.0779 | Large Deviation Delay Analysis of Queue-Aware Multi-user MIMO Systems
with Multi-timescale Mobile-Driven Feedback | cs.SY cs.IT math.IT | Multi-user multi-input-multi-output (MU-MIMO) systems transmit data to
multiple users simultaneously using the spatial degrees of freedom with user
feedback channel state information (CSI). Most of the existing literatures on
the reduced feedback user scheduling focus on the throughput performance and
the user queueing delay is usually ignored. As the delay is very important for
real-time applications, a low feedback queue-aware user scheduling algorithm is
desired for the MU-MIMO system. This paper proposed a two-stage queue-aware
user scheduling algorithm, which consists of a queue-aware mobile-driven
feedback filtering stage and a SINR-based user scheduling stage, where the
feedback filtering policy is obtained from the solution of an optimization
problem. We evaluate the queueing performance of the proposed scheduling
algorithm by using the sample path large deviation analysis. We show that the
large deviation decay rate for the proposed algorithm is much larger than that
of the CSI-only user scheduling algorithm. The numerical results also
demonstrate that the proposed algorithm performs much better than the CSI-only
algorithm requiring only a small amount of feedback.
|
1211.0801 | Discussion: Latent variable graphical model selection via convex
optimization | math.ST cs.LG stat.ML stat.TH | Discussion of "Latent variable graphical model selection via convex
optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky
[arXiv:1008.1290].
|
1211.0806 | Discussion: Latent variable graphical model selection via convex
optimization | math.ST cs.LG stat.ML stat.TH | Discussion of "Latent variable graphical model selection via convex
optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky
[arXiv:1008.1290].
|
1211.0808 | Discussion: Latent variable graphical model selection via convex
optimization | math.ST cs.LG stat.ML stat.TH | Discussion of "Latent variable graphical model selection via convex
optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky
[arXiv:1008.1290].
|
1211.0817 | Discussion: Latent variable graphical model selection via convex
optimization | math.ST cs.LG stat.ML stat.TH | Discussion of "Latent variable graphical model selection via convex
optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky
[arXiv:1008.1290].
|
1211.0834 | On Hidden Markov Processes with Infinite Excess Entropy | cs.IT math.IT | We investigate stationary hidden Markov processes for which mutual
information between the past and the future is infinite. It is assumed that the
number of observable states is finite and the number of hidden states is
countably infinite. Under this assumption, we show that the block mutual
information of a hidden Markov process is upper bounded by a power law
determined by the tail index of the hidden state distribution. Moreover, we
exhibit three examples of processes. The first example, considered previously,
is nonergodic and the mutual information between the blocks is bounded by the
logarithm of the block length. The second example is also nonergodic but the
mutual information between the blocks obeys a power law. The third example
obeys the power law and is ergodic.
|
1211.0835 | Rejoinder: Latent variable graphical model selection via convex
optimization | math.ST cs.LG stat.ML stat.TH | Rejoinder to "Latent variable graphical model selection via convex
optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky
[arXiv:1008.1290].
|
1211.0872 | Phase Retrieval: Stability and Recovery Guarantees | cs.IT math.IT math.NA | We consider stability and uniqueness in real phase retrieval problems over
general input sets. Specifically, we assume the data consists of noisy
quadratic measurements of an unknown input x in R^n that lies in a general set
T and study conditions under which x can be stably recovered from the
measurements. In the noise-free setting we derive a general expression on the
number of measurements needed to ensure that a unique solution can be found in
a stable way, that depends on the set T through a natural complexity parameter.
This parameter can be computed explicitly for many sets T of interest. For
example, for k-sparse inputs we show that O(k\log(n/k)) measurements are
needed, and when x can be any vector in R^n, O(n) measurements suffice. In the
noisy case, we show that if one can find a value for which the empirical risk
is bounded by a given, computable constant (that depends on the set T), then
the error with respect to the true input is bounded above by another,
closely related complexity parameter of the set. By choosing an appropriate
number N of measurements, this bound can be made arbitrarily small, and it
decays at a rate faster than N^{-1/2+\delta} for any \delta>0. In particular,
for k-sparse vectors stable recovery is possible from O(k\log(n/k)\log k) noisy
measurements, and when x can be any vector in R^n, O(n \log n) noisy
measurements suffice. We also show that the complexity parameter for the
quadratic problem is the same as the one used for analyzing stability in linear
measurements under very general conditions. Thus, no substantial price has to
be paid in terms of stability if there is no knowledge of the phase.
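A quick illustration, in plain Python with hypothetical random measurement vectors, of why recovery in real phase retrieval is only ever meant up to a global sign: the quadratic measurements y_i = (a_i . x)^2 are identical for x and -x, so uniqueness and stability statements are always modulo this ambiguity.

```python
import random

def measure(A, x):
    """Noise-free quadratic (phaseless) measurements y_i = (a_i . x)^2."""
    return [sum(a_j * x_j for a_j, x_j in zip(a, x)) ** 2 for a in A]

random.seed(0)
n, m = 4, 10
# Hypothetical Gaussian measurement vectors and a random input.
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x = [random.gauss(0, 1) for _ in range(n)]
neg_x = [-v for v in x]

print(measure(A, x) == measure(A, neg_x))  # True: x and -x are indistinguishable
```

The measurement counts quoted in the abstract, such as O(k log(n/k)) for k-sparse inputs, are what is needed for stable recovery up to exactly this sign ambiguity.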
|
1211.0879 | Comparing K-Nearest Neighbors and Potential Energy Method in
classification problem. A case study using KNN applet by E.M. Mirkes and real
life benchmark data sets | stat.ML cs.LG | The K-nearest neighbors (KNN) method is used in many supervised learning
classification problems. The Potential Energy (PE) method has also been developed
for classification problems, based on a physical metaphor. The energy potentials
used in the experiments are the Yukawa potential and the Gaussian potential. In this
paper, I use both an applet and a MATLAB program with real-life benchmark data to
analyze the performance of the KNN and PE methods in classification problems. The
results show that, in general, the KNN and PE methods have similar performance. In
particular, PE with the Yukawa potential performs worse than KNN when the
data density is higher in the distribution of the database. When the
Gaussian potential is applied, the results from PE and KNN behave similarly.
The indicators used are correlation coefficients and information gain.
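The two classifiers being compared can be sketched in a few lines of numpy: KNN votes among the nearest neighbors, while a PE-style classifier assigns a point to the class exerting the largest total potential on it. This is an illustrative reconstruction from the abstract, not the paper's code; the kernel parameter `a` and the toy data are assumptions.

```python
import numpy as np

def knn_predict(Xtr, ytr, x, k=3):
    # majority vote among the k nearest training points
    d = np.linalg.norm(Xtr - x, axis=1)
    nearest = ytr[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

def pe_predict(Xtr, ytr, x, kernel="gaussian", a=1.0):
    # assign x to the class exerting the largest total "potential" on it
    d = np.linalg.norm(Xtr - x, axis=1)
    if kernel == "gaussian":
        phi = np.exp(-d**2 / (2 * a**2))
    else:  # Yukawa-type potential: screened 1/r
        phi = np.exp(-d / a) / np.maximum(d, 1e-9)
    return max(set(ytr.tolist()), key=lambda c: phi[ytr == c].sum())

# two well-separated Gaussian blobs as toy data
rng = np.random.default_rng(1)
X0 = rng.normal(0, 0.5, (20, 2)); X1 = rng.normal(3, 0.5, (20, 2))
Xtr = np.vstack([X0, X1]); ytr = np.array([0]*20 + [1]*20)

print(knn_predict(Xtr, ytr, np.array([0.1, 0.0])))  # class 0
print(pe_predict(Xtr, ytr, np.array([2.9, 3.1])))   # class 1
```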
|
1211.0889 | APPLE: Approximate Path for Penalized Likelihood Estimators | stat.ML cs.LG | In high-dimensional data analysis, penalized likelihood estimators are shown
to provide superior results in both variable selection and parameter
estimation. A new algorithm, APPLE, is proposed for calculating the Approximate
Path for Penalized Likelihood Estimators. Both the convex penalty (such as
LASSO) and the nonconvex penalty (such as SCAD and MCP) cases are considered.
The APPLE efficiently computes the solution path for the penalized likelihood
estimator using a hybrid of the modified predictor-corrector method and the
coordinate-descent algorithm. APPLE is compared with several well-known
packages via simulation and analysis of two gene expression data sets.
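APPLE combines a modified predictor-corrector method with coordinate descent; the coordinate-descent building block for the convex (LASSO) case can be sketched as follows. This is a minimal illustration of that component under standard assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ b
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * b[j]                 # remove j-th contribution
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r = r - X[:, j] * b[j]                 # add it back
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta = np.zeros(10); beta[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
y = X @ beta + 0.05 * rng.standard_normal(100)
b = lasso_cd(X, y, lam=0.1)
print(np.round(b, 2))   # large entries near the true support, rest near zero
```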
|
1211.0897 | An Elementary Derivation of Mean Wait Time in Polling Systems | cs.SY math.PR | Polling systems are a well-established subject in queueing theory. However,
their formal treatments generally rely heavily on relatively sophisticated
theoretical tools, such as moment generating functions and Laplace transforms,
and solutions often require solving large systems of equations. We show
that, if one only requires the average waiting time of a system
rather than higher moments, it can be found through an elementary derivation based
only on algebra and some well-known properties of Poisson processes. Our result
is simple enough to be easily used in real-world applications, and the
simplicity of our derivation makes it ideal for pedagogical purposes.
|
1211.0906 | Algorithm Runtime Prediction: Methods & Evaluation | cs.AI cs.LG cs.PF stat.ML | Perhaps surprisingly, it is possible to predict how long an algorithm will
take to run on a previously unseen input, using machine learning techniques to
build a model of the algorithm's runtime as a function of problem-specific
instance features. Such models have important applications to algorithm
analysis, portfolio-based algorithm selection, and the automatic configuration
of parameterized algorithms. Over the past decade, a wide variety of techniques
have been studied for building such models. Here, we describe extensions and
improvements of existing models, new families of models, and -- perhaps most
importantly -- a much more thorough treatment of algorithm parameters as model
inputs. We also comprehensively describe new and existing features for
predicting algorithm runtime for propositional satisfiability (SAT), travelling
salesperson (TSP) and mixed integer programming (MIP) problems. We evaluate
these innovations through the largest empirical analysis of its kind, comparing
to a wide range of runtime modelling techniques from the literature. Our
experiments consider 11 algorithms and 35 instance distributions; they also
span a very wide range of SAT, MIP, and TSP instances, with the least
structured having been generated uniformly at random and the most structured
having emerged from real industrial applications. Overall, we demonstrate that
our new models yield substantially better runtime predictions than previous
approaches in terms of their generalization to new problem instances, to new
algorithms from a parameterized space, and to both simultaneously.
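The basic idea, fitting a model of runtime as a function of instance features, can be sketched with ordinary least squares on log-runtimes. The paper's models are far more sophisticated (e.g. tree-based); the synthetic features and coefficients below are purely illustrative.

```python
import numpy as np

# Synthetic data: runtime grows roughly exponentially in a "hardness"
# feature, so we model log-runtime as a linear function of the features.
rng = np.random.default_rng(0)
n = 200
feats = rng.random((n, 3))                       # instance features in [0, 1]
log_rt = 3.0*feats[:, 0] + 1.0*feats[:, 1] + 0.1*rng.standard_normal(n)

X = np.hstack([feats, np.ones((n, 1))])          # append an intercept column
w, *_ = np.linalg.lstsq(X, log_rt, rcond=None)   # least-squares runtime model

x_new = np.array([0.5, 0.5, 0.5, 1.0])           # a previously unseen instance
print(np.exp(x_new @ w))                         # predicted runtime
```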
|
1211.0951 | Decoding Delay Minimization in Inter-Session Network Coding | cs.IT cs.NI math.IT | Intra-session network coding has been shown to offer significant gains in
terms of achievable throughput and delay in settings where one source
multicasts data to several clients. In this paper, we consider a more general
scenario where multiple sources transmit data to sets of clients and study the
benefits of inter-session network coding, when network nodes have the
opportunity to combine packets from different sources. In particular, we
propose a novel framework for optimal rate allocation in inter-session network
coding systems. We formulate the problem as the minimization of the average
decoding delay in the client population and solve it with a gradient-based
stochastic algorithm. Our optimized inter-session network coding solution is
evaluated in different network topologies and compared with basic intra-session
network coding solutions. Our results show the benefits of proper coding
decisions and effective rate allocation for lowering the decoding delay when
the network is used by concurrent multicast sessions.
|
1211.0954 | Jointly Optimal Sensing and Resource Allocation for Multiuser Overlay
Cognitive Radios | cs.NI cs.IT cs.SY math.IT | Successful deployment of cognitive radios requires efficient sensing of the
spectrum and dynamic adaptation of the available resources according to the
sensed (imperfect) information. While most works design these two tasks
separately, in this paper we address them jointly. In particular, we
investigate an overlay cognitive radio with multiple secondary users that
access orthogonally a set of frequency bands originally devoted to primary
users. The schemes are designed to minimize the cost of sensing, maximize the
performance of the secondary users (weighted sum rate), and limit the
probability of interfering with the primary users. The joint design is addressed
using dynamic programming and nonlinear optimization techniques. A two-step
strategy that first finds the optimal resource allocation for any sensing
scheme and then uses that solution as input to solve for the optimal sensing
policy is implemented. The two-step strategy is optimal, gives rise to
intuitive optimal policies, and entails a computational complexity much lower
than that required to solve the original formulation.
|
1211.0957 | Adaptive Bee Colony in an Artificial Bee Colony for Solving Engineering
Design Problems | cs.CE q-bio.QM | A wide range of engineering design problems have been solved by
algorithms that simulate collective intelligence in swarms of birds or
insects. The Artificial Bee Colony (ABC) is one of the recent additions to the
class of swarm intelligence based algorithms that mimics the foraging behavior
of honey bees. ABC consists of three groups of bees, namely employed, onlooker
and scout bees. In ABC, the food locations represent the potential candidate
solutions. In the present study, an attempt is made to generate the population of
food sources (colony size) adaptively; the resulting variant is named A-ABC. A-ABC
is further enhanced to improve convergence speed and exploitation capability
by employing the concept of elitism, which guides the bees towards the best
food source. This enhanced variant is called E-ABC. The proposed algorithms are
validated on a set of standard benchmark problems with varying dimensions taken
from the literature, and on five engineering design problems. The numerical results
are compared with the basic ABC and three recent variants of ABC. Numerically
and statistically simulated results illustrate that the proposed method is very
efficient and competitive.
|
1211.0963 | Detecting, Representing and Querying Collusion in Online Rating Systems | cs.CR cs.HC cs.IR | Online rating systems are subject to malicious behaviors, mainly the posting of
unfair rating scores. Users may try to individually or collaboratively promote
or demote a product. Collaborative unfair rating, or 'collusion', is more damaging
than individual unfair rating. Although collusion detection in general has been
widely studied, identifying collusion groups in online rating systems is less
studied and needs more investigation. In this paper, we study the impact of
collusion in online rating systems and assess their susceptibility to collusion
attacks. The proposed model uses a frequent itemset mining algorithm to detect
candidate collusion groups. Then, several indicators are used for identifying
collusion groups and for estimating how damaging such colluding groups might
be. We also propose an algorithm for finding possible collusive subgroups
inside larger groups which are not identified as collusive. The model has been
implemented, and we present the results of an experimental evaluation of our
methodology.
|
1211.0970 | Early Prediction of Movie Box Office Success based on Wikipedia Activity
Big Data | physics.soc-ph cs.CY cs.SI physics.data-an | Use of socially generated "big data" to access information about collective
states of the minds in human societies has become a new paradigm in the
emerging field of computational social science. A natural application of this
would be the prediction of the society's reaction to a new product in the sense
of popularity and adoption rate. However, bridging the gap between "real time
monitoring" and "early predicting" remains a big challenge. Here we report on
an endeavor to build a minimalistic predictive model for the financial success
of movies based on collective activity data of online users. We show that the
popularity of a movie can be predicted much before its release by measuring and
analyzing the activity level of editors and viewers of the corresponding entry
to the movie in Wikipedia, the well-known online encyclopedia.
|
1211.0985 | Interactive Interference Alignment | cs.IT math.IT | We study interference channels (IFC) where interaction among sources and
destinations is enabled, e.g., both sources and destinations can talk to each
other using full-duplex radios. The interaction can come in two ways: 1) {\em
In-band interaction:} sources and destinations can transmit and listen in the
same channel simultaneously, enabling interaction. 2) {\em out-of-band
interaction:} destinations talk back to the sources on an out-of-band channel,
possibly from white-space channels. The flexibility afforded by interaction
among sources and destinations allows for the derivation of interference
alignment (IA) strategies that have desirable "engineering properties":
insensitivity to the rationality or irrationality of channel parameters, small
block lengths and finite SNR operations. We show that for several classes of
interference channels the interactive interference alignment scheme can achieve
the optimal degrees of freedom. In particular, we show the {\em first simple
scheme} (having finite block length, for channels having no diversity) for
$K=3,4$ that can achieve the optimal degrees of freedom of $\frac{K}{2}$ even
after accounting for the cost of interaction. We also give simulation results
on the finite SNR performance of interactive alignment under some settings.
On the technical side, we show using a Gr\"{o}bner basis argument that in a
general network potentially utilizing cooperation and feedback, the optimal
degrees of freedom under linear schemes of a fixed block length is the same for
channel coefficients with probability 1. Furthermore, a numerical method to
estimate this value is also presented. These tools have potentially wider
utility in studying other wireless networks as well.
|
1211.0986 | New constructions of RIP matrices with fast multiplication and fewer
rows | cs.DS cs.IT math.IT math.PR | In compressed sensing, the "restricted isometry property" (RIP) is a
sufficient condition for the efficient reconstruction of a nearly k-sparse
vector x in C^d from m linear measurements Phi x. It is desirable for m to be
small, and for Phi to support fast matrix-vector multiplication. In this work,
we give a randomized construction of RIP matrices Phi in C^{m x d}, preserving
the L_2 norms of all k-sparse vectors with distortion 1+eps, where the
matrix-vector multiply Phi x can be computed in nearly linear time. The number
of rows m is on the order of eps^{-2} k log d log^2(k log d). Previous analyses of
constructions of RIP matrices supporting fast matrix-vector multiplies, such as
the sampled discrete Fourier matrix, required m to be larger by roughly a log k
factor.
Supporting fast matrix-vector multiplication is useful for iterative recovery
algorithms which repeatedly multiply by Phi or Phi^*. Furthermore, our
construction, together with a connection between RIP matrices and the
Johnson-Lindenstrauss lemma in [Krahmer-Ward, SIAM. J. Math. Anal. 2011],
implies fast Johnson-Lindenstrauss embeddings with asymptotically fewer rows
than previously known.
Our approach is a simple twist on previous constructions. Rather than
choosing the rows for the embedding matrix to be rows sampled from some larger
structured matrix (such as the discrete Fourier transform or a random circulant
matrix), we instead choose each row of the embedding matrix to be a linear
combination of a small number of rows of the original matrix, with random sign
flips as coefficients. The main tool in our analysis is a recent bound for the
supremum of certain types of Rademacher chaos processes in
[Krahmer-Mendelson-Rauhut, arXiv:1207.0235].
|
1211.0995 | Sparsity Lower Bounds for Dimensionality Reducing Maps | cs.DS cs.IT math.IT | We give near-tight lower bounds for the sparsity required in several
dimensionality reducing linear maps. First, consider the JL lemma which states
that for any set of n vectors in R^d there is a matrix A in R^{m x d} with m =
O(eps^{-2} log n) such that mapping by A preserves pairwise Euclidean distances
of these n vectors up to a 1 +/- eps factor. We show that there exists a set of
n vectors such that any such matrix A with at most s non-zero entries per
column must have s = Omega(eps^{-1}log n/log(1/eps)) as long as m <
O(n/log(1/eps)). This bound improves the lower bound of Omega(min{eps^{-2},
eps^{-1}sqrt{log_m d}}) by [Dasgupta-Kumar-Sarlos, STOC 2010], which only held
against the stronger property of distributional JL, and only against a certain
restricted class of distributions. Meanwhile our lower bound is against the JL
lemma itself, with no restrictions. Our lower bound matches the sparse
Johnson-Lindenstrauss upper bound of [Kane-Nelson, SODA 2012] up to an
O(log(1/eps)) factor.
Next, we show that any m x n matrix with the k-restricted isometry property
(RIP) with constant distortion must have at least Omega(k log(n/k)) non-zeroes
per column if the number of rows is the optimal value m = O(k log(n/k)),
and if k < n/polylog n. This improves the previous lower bound of Omega(min{k,
n/m}) by [Chandar, 2010] and shows that for virtually all k it is impossible to
have a sparse RIP matrix with an optimal number of rows.
Lastly, we show that any oblivious distribution over subspace embedding
matrices with 1 non-zero per column and preserving all distances in a d
dimensional-subspace up to a constant factor with constant probability must
have at least Omega(d^2) rows. This matches one of the upper bounds in
[Nelson-Nguyen, 2012] and shows the impossibility of obtaining the best of both
of constructions in that work, namely 1 non-zero per column and \~O(d) rows.
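The extreme case of one non-zero per column corresponds to a CountSketch-style map. As a small illustration consistent with the lower bounds above (which concern embedding whole subspaces, not single vectors), such a map still preserves the norm of a single vector well when m is large; the sketch below is illustrative only.

```python
import numpy as np

def countsketch(d, m, rng):
    """JL-style map with exactly one non-zero per column: each input
    coordinate is hashed to a random row with a random sign."""
    rows = rng.integers(0, m, size=d)
    signs = rng.choice([-1.0, 1.0], size=d)
    S = np.zeros((m, d))
    S[rows, np.arange(d)] = signs
    return S

rng = np.random.default_rng(0)
d, m = 50, 4096
x = rng.standard_normal(d)
S = countsketch(d, m, rng)
distortion = abs(np.linalg.norm(S @ x) / np.linalg.norm(x) - 1.0)
print(distortion)   # small with high probability when m is large
```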
|
1211.0996 | Learning using Local Membership Queries | cs.LG cs.AI | We introduce a new model of membership query (MQ) learning, where the
learning algorithm is restricted to query points that are \emph{close} to
random examples drawn from the underlying distribution. The learning model is
intermediate between the PAC model (Valiant, 1984) and the PAC+MQ model (where
the queries are allowed to be arbitrary points).
Membership query algorithms are not popular among machine learning
practitioners. Apart from the obvious difficulty of adaptively querying
labelers, it has also been observed that querying \emph{unnatural} points leads
to increased noise from human labelers (Lang and Baum, 1992). This motivates
our study of learning algorithms that make queries that are close to examples
generated from the data distribution.
We restrict our attention to functions defined on the $n$-dimensional Boolean
hypercube and say that a membership query is local if its Hamming distance from
some example in the (random) training data is at most $O(\log(n))$. We show the
following results in this model:
(i) The class of sparse polynomials (with coefficients in R) over $\{0,1\}^n$
is polynomial time learnable under a large class of \emph{locally smooth}
distributions using $O(\log(n))$-local queries. This class also includes the
class of $O(\log(n))$-depth decision trees.
(ii) The class of polynomial-sized decision trees is polynomial time
learnable under product distributions using $O(\log(n))$-local queries.
(iii) The class of polynomial size DNF formulas is learnable under the
uniform distribution using $O(\log(n))$-local queries in time
$n^{O(\log(\log(n)))}$.
(iv) In addition we prove a number of results relating the proposed model to
the traditional PAC model and the PAC+MQ model.
|
1211.1035 | Asymmetries of Men and Women in Selecting Partner | cs.CY cs.SI | This paper investigates human dynamics in a large online dating site with
3,000 new users daily who stay in the system for 3 months on average. The
daily activity is also quite large: for example, 500,000 message transactions, 5,000
photo uploads, and 20,000 votes.
The data investigated covers 276,210 male and 483,963 female users. Based on
their activity, there are clear distinctions between men and women
in their patterns of behavior: men prefer lower, and women prefer higher,
qualifications in their partners.
|
1211.1041 | Algorithms and Hardness for Robust Subspace Recovery | cs.CC cs.DS cs.IT cs.LG math.IT | We consider a fundamental problem in unsupervised learning called
\emph{subspace recovery}: given a collection of $m$ points in $\mathbb{R}^n$,
if many but not necessarily all of these points are contained in a
$d$-dimensional subspace $T$ can we find it? The points contained in $T$ are
called {\em inliers} and the remaining points are {\em outliers}. This problem
has received considerable attention in computer science and in statistics. Yet
efficient algorithms from computer science are not robust to {\em adversarial}
outliers, and the estimators from robust statistics are hard to compute in high
dimensions.
Are there algorithms for subspace recovery that are both robust to outliers
and efficient? We give an algorithm that finds $T$ when it contains more than a
$\frac{d}{n}$ fraction of the points. Hence, for say $d = n/2$ this estimator
is both easy to compute and well-behaved when there are a constant fraction of
outliers. We prove that it is Small Set Expansion hard to find $T$ when the
fraction of errors is any larger, thus giving evidence that our estimator is an
{\em optimal} compromise between efficiency and robustness.
As it turns out, this basic problem has a surprising number of connections to
other areas including small set expansion, matroid theory and functional
analysis that we make use of here.
|
1211.1043 | Soft (Gaussian CDE) regression models and loss functions | cs.LG stat.ML | Regression, unlike classification, has lacked a comprehensive and effective
approach to deal with cost-sensitive problems by the reuse (and not a
re-training) of general regression models. In this paper, a wide variety of
cost-sensitive problems in regression (such as bids, asymmetric losses and
rejection rules) can be solved effectively by a lightweight but powerful
approach, consisting of: (1) the conversion of any traditional one-parameter
crisp regression model into a two-parameter soft regression model, seen as a
normal conditional density estimator, by the use of newly-introduced enrichment
methods; and (2) the reframing of an enriched soft regression model to new
contexts by an instance-dependent optimisation of the expected loss derived
from the conditional normal distribution.
|
1211.1044 | Low-Latency Data Sharing in Erasure Multi-Way Relay Channels | cs.IT cs.NI math.IT | We consider an erasure multi-way relay channel (EMWRC) in which several users
share their data through a relay over erasure links. Assuming no feedback
channel between the users and the relay, we first identify the challenges for
designing a data sharing scheme over an EMWRC. Then, to overcome these
challenges, we propose practical low-latency and low-complexity data sharing
schemes based on fountain coding. Later, we introduce the notion of end-to-end
erasure rate (EEER) and analytically derive it for the proposed schemes. EEER
is then used to calculate the achievable rate and transmission overhead of the
proposed schemes. Using EEER and computer simulations, the achievable rates and
transmission overhead of our proposed schemes are compared with the ones of
one-way relaying. This comparison implies that when the number of users and the
channel erasure rates are not large, our proposed schemes outperform one-way
relaying. We also find an upper bound on the achievable rates of EMWRC and
observe that depending on the number of users and channel erasure rates, our
proposed solutions can perform very close to this bound.
|
1211.1073 | Computational and Statistical Tradeoffs via Convex Relaxation | math.ST cs.IT math.IT math.OC stat.TH | In modern data analysis, one is frequently faced with statistical inference
problems involving massive datasets. Processing such large datasets is usually
viewed as a substantial computational challenge. However, if data are a
statistician's main resource then access to more data should be viewed as an
asset rather than as a burden. In this paper we describe a computational
framework based on convex relaxation to reduce the computational complexity of
an inference procedure when one has access to increasingly larger datasets.
Convex relaxation techniques have been widely used in theoretical computer
science as they give tractable approximation algorithms to many computationally
intractable tasks. We demonstrate the efficacy of this methodology in
statistical estimation in providing concrete time-data tradeoffs in a class of
denoising problems. Thus, convex relaxation offers a principled approach to
exploit the statistical gains from larger datasets to reduce the runtime of
inference algorithms.
|
1211.1082 | Active and passive learning of linear separators under log-concave
distributions | cs.LG math.ST stat.ML stat.TH | We provide new results concerning label efficient, polynomial time, passive
and active learning of linear separators. We prove that active learning
provides an exponential improvement over PAC (passive) learning of homogeneous
linear separators under nearly log-concave distributions. Building on this, we
provide a computationally efficient PAC algorithm with optimal (up to a
constant factor) sample complexity for such problems. This resolves an open
question concerning the sample complexity of efficient PAC algorithms under the
uniform distribution in the unit ball. Moreover, it provides the first bound
for a polynomial-time PAC algorithm that is tight for an interesting infinite
class of hypothesis functions under a general and natural class of
data-distributions, providing significant progress towards a longstanding open
question.
We also provide new bounds for active and passive learning in the case that
the data might not be linearly separable, both in the agnostic case and
under the Tsybakov low-noise condition. To derive our results, we provide new
structural results for (nearly) log-concave distributions, which might be of
independent interest as well.
|
1211.1107 | An effective web document clustering for information retrieval | cs.IR | The size of the web has increased exponentially over the past few years, with
thousands of documents related to a subject available to the user. With this
much information available, it is not possible to take full
advantage of the World Wide Web without a proper framework to search
through the available data. This requisite organization can be done in many
ways. In this paper we introduce a combined approach to cluster web pages
which first finds the frequent sets and then clusters the documents. These
frequent sets are generated using the Frequent Pattern growth technique. Then, by
applying the Fuzzy C-Means algorithm to them, we find clusters of documents
which are highly related and have similar features. We used the Gensim package to
implement our approach because of its simplicity and robust nature. We have
compared our results with the combined approaches of (Frequent Pattern growth,
K-means) and (Frequent Pattern growth, Cosine_Similarity). Experimental results
show that our approach is more efficient than the above two combined approaches
and handles more efficiently a serious limitation of the traditional Fuzzy
C-Means algorithm, namely its sensitivity to the initial centroids and the number of
clusters to be formed.
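The Fuzzy C-Means step of such a combined pipeline can be sketched in numpy (the FP-growth stage is omitted; the fuzzifier m = 2 and the toy blobs are assumptions, not from the paper):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Basic Fuzzy C-Means: returns soft memberships U (n x c) and centroids V."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # rows of U sum to 1
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]  # fuzzy-weighted centroids
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        W = D ** (-2.0 / (m - 1))                 # standard membership update
        U = W / W.sum(axis=1, keepdims=True)
    return U, V

# two well-separated blobs; each should map to its own fuzzy cluster
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(4, 0.3, (30, 2))])
U, V = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)
print(labels)
```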
|
1211.1119 | A Survey on Techniques of Improving Generalization Ability of Genetic
Programming Solutions | cs.NE | In the field of empirical modeling using Genetic Programming (GP), it is
important to evolve solutions with good generalization ability. The generalization
ability of GP solutions is affected by two important issues: bloat and
over-fitting. We survey and classify the existing literature on the
different techniques used by the GP research community to deal with these issues.
We also point out the limitations of these techniques, if any. Moreover, the
classification of different bloat control approaches and of measures for bloat and
over-fitting is also discussed. We believe that this work will be useful to GP
practitioners in the following ways: (i) to better understand the concepts of
generalization in GP, (ii) to compare existing bloat and over-fitting control
techniques, and (iii) to select an appropriate approach to improve the generalization
ability of GP-evolved solutions.
|
1211.1121 | Numerical Schemes for Nonlinear Predictor Feedback | math.OC cs.SY | Implementation is a common problem with feedback laws with distributed
delays. This paper focuses on a specific aspect of the implementation problem
for predictor-based feedback laws: the problem of the approximation of the
predictor mapping. It is shown that the numerical approximation of the
predictor mapping by means of a numerical scheme in conjunction with a hybrid
feedback law that uses sampled measurements, can be used for the global
stabilization of all forward complete nonlinear systems that are globally
asymptotically stabilizable and locally exponentially stabilizable in the
delay-free case. Special results are provided for the linear time invariant
case. Explicit formulae are provided for the estimation of the parameters of
the resulting hybrid control scheme.
|
1211.1123 | Feedback Stabilization Methods for the Solution of Nonlinear Programming
Problems | math.OC cs.SY math.DS | In this work we show that given a nonlinear programming problem, it is
possible to construct a family of dynamical systems defined on the feasible set
of the given problem, so that: (a) the equilibrium points are the unknown
critical points of the problem, (b) each dynamical system admits the objective
function of the problem as a Lyapunov function, and (c) explicit formulae are
available without involving the unknown critical points of the problem. The
construction of the family of dynamical systems is based on the Control
Lyapunov Function methodology, which is used in mathematical control theory for
the construction of stabilizing feedback. The knowledge of a dynamical system
with the previously mentioned properties allows the construction of algorithms
which guarantee global convergence to the set of the critical points.
|
1211.1127 | Visual Transfer Learning: Informal Introduction and Literature Overview | cs.CV cs.LG | Transfer learning techniques are important to handle small training sets and
to allow for quick generalization even from only a few examples. The following
paper is the introduction as well as the literature overview part of my thesis
related to the topic of transfer learning for visual recognition problems.
|
1211.1137 | Wireless Compressive Sensing for Energy Harvesting Sensor Nodes | cs.IT math.IT | We consider the scenario in which multiple sensors send spatially correlated
data to a fusion center (FC) via independent Rayleigh-fading channels with
additive noise. Assuming that the sensor data is sparse in some basis, we show
that the recovery of this sparse signal can be formulated as a compressive
sensing (CS) problem. To model the scenario in which the sensors operate with
intermittently available energy that is harvested from the environment, we
propose that each sensor transmits independently with some probability, and
adapts the transmit power to its harvested energy. Due to the probabilistic
transmissions, the elements of the equivalent sensing matrix are not Gaussian.
Besides, since the sensors have different energy harvesting rates and different
sensor-to-FC distances, the FC has different receive signal-to-noise ratios
(SNRs) for each sensor. This is referred to as the inhomogeneity of SNRs. Thus,
the elements of the sensing matrix are also not identically distributed. For
this unconventional setting, we provide theoretical guarantees on the number of
measurements for reliable and computationally efficient recovery, by showing
that the sensing matrix satisfies the restricted isometry property (RIP), under
reasonable conditions. We then compute an achievable system delay under an
allowable mean-squared-error (MSE). Furthermore, using techniques from large
deviations theory, we analyze the impact of inhomogeneity of SNRs on the
so-called k-restricted eigenvalues, which governs the number of measurements
required for the RIP to hold. We conclude that the number of measurements
required for the RIP is not sensitive to the inhomogeneity of SNRs, when the
number of sensors n is large and the sparsity of the sensor data (signal) k
grows slower than the square root of n. Our analysis is corroborated by
extensive numerical results.
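Recovery of the sparse signal at the FC can be illustrated with a generic compressive-sensing decoder. Orthogonal Matching Pursuit is used below purely as a stand-in, and the sensing matrix is plain i.i.d. Gaussian rather than the non-Gaussian, non-identically-distributed matrix analyzed in the paper:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = Phi x."""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ r)))   # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef           # project out the chosen columns
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
m, n, k = 40, 100, 4
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. Gaussian sensing matrix
x = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x[idx] = np.sign(rng.standard_normal(k)) * (1 + rng.random(k))
x_hat = omp(Phi, Phi @ x, k)
print(np.linalg.norm(x - x_hat))   # near zero: exact recovery in the noiseless case
```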
|
1211.1138 | Motion Planning for Continuous Time Stochastic Processes: A Dynamic
Programming Approach | math.OC cs.SY math.PR | We study stochastic motion planning problems which involve a controlled
process, with possibly discontinuous sample paths, visiting certain subsets of
the state-space while avoiding others in a sequential fashion. For this
purpose, we first introduce two basic notions of motion planning, and then
establish a connection to a class of stochastic optimal control problems
concerned with sequential stopping times. A weak dynamic programming principle
(DPP) is then proposed, which characterizes the set of initial states that
admit a control enabling the process to execute the desired maneuver with
probability no less than some pre-specified value. The proposed DPP comprises
auxiliary value functions defined in terms of discontinuous payoff functions. A
concrete instance of the use of this novel DPP in the case of diffusion
processes is also presented. In this case, we establish that the aforementioned
set of initial states can be characterized as the level set of a discontinuous
viscosity solution to a sequence of partial differential equations, for which
the first one has a known boundary condition, while the boundary conditions of
the subsequent ones are determined by the solutions to the preceding steps.
Finally, the generality and flexibility of the theoretical results are
illustrated on an example involving biological switches.
|
1211.1146 | Discrete modelling of bacterial conjugation dynamics | cs.MA physics.bio-ph q-bio.CB | In bacterial populations, cells are able to cooperate in order to yield
complex collective functionalities. Interest in population-level cellular
behaviour is increasing, due to both our expanding knowledge of the underlying
biological principles, and the growing range of possible applications for
engineered microbial consortia. Researchers in the field of synthetic biology -
the application of engineering principles to living systems - have, for
example, recently shown how useful decision-making circuits may be distributed
across a bacterial population. The ability of cells to interact through small
signalling molecules (a mechanism known as quorum sensing) is the basis for
the majority of existing engineered systems. However, horizontal gene transfer
(or conjugation) offers the possibility of cells exchanging messages (using
DNA) that are much more information-rich. The potential of engineering this
conjugation mechanism to suit specific goals will guide future developments in
this area. Motivated by a lack of computational models for examining the
specific dynamics of conjugation, we present a simulation framework for its
further study. We present an agent-based model for conjugation dynamics, with
realistic handling of physical forces. Our framework combines the management of
intercellular interactions together with simulation of intracellular genetic
networks, to provide a general-purpose platform. We validate our simulations
against existing experimental data, and then demonstrate how the emergent
mixing patterns of multi-strain populations can affect conjugation dynamics.
Our model of conjugation, based on a probability distribution, may be easily
tuned to correspond to the behaviour of different cell types. Simulation code
and movies are available at http://code.google.com/p/discus/.
|
1211.1188 | How can social herding enhance cooperation? | physics.soc-ph cs.SI | We study a system in which N agents have to decide between two strategies
\theta_i (i \in 1... N), for defection or cooperation, when interacting with
other n agents (either spatial neighbors or randomly chosen ones). After each
round, they update their strategy responding nonlinearly to two different
information sources: (i) the payoff a_i(\theta_i, f_i) received from the
strategic interaction with their n counterparts, (ii) the fraction f_i of
cooperators in this interaction. For the latter response, we assume social
herding, i.e. agents adopt their strategy based on the frequencies of the
different strategies in their neighborhood, without taking into account the
consequences of this decision. We note that f_i already determines the payoff,
so no additional information is assumed. A parameter \zeta defines to what
extent agents take the two different information sources into account. For
the strategic interaction, we assume a Prisoner's Dilemma game, i.e. one in
which defection is the evolutionary stable strategy. However, if the additional
dimension of social herding is taken into account, we find instead a stable
outcome where cooperators are the majority. By means of agent-based computer
simulations and analytical investigations, we evaluate the critical conditions
for this transition towards cooperation. We find that, in addition to a high
degree of social herding, there has to be a nonlinear response to the fraction
of cooperators. We argue that the transition to cooperation in our model is
based on less information, i.e. on agents that are not informed about the
payoff matrix and therefore rely on simply observing the strategies of others
in order to adopt them. By designing the right mechanisms to respond to this
information, the
transition to cooperation can be remarkably enhanced.
|
1211.1234 | A Framework for Investigating the Performance of Chaotic-Map Truly
Random Number Generators | cs.IT math.DS math.IT | In this paper, we approximate the hidden Markov model of chaotic-map truly
random number generators (TRNGs) and describe its fundamental limits based on
the approximate entropy-rate of the underlying bit-generation process. We
demonstrate that entropy-rate plays a key role in the performance and
robustness of chaotic-map TRNGs, which must be taken into account in the
circuit design optimization. We further derive optimality conditions for
post-processing units that extract truly random bits from a raw RNG.
|
1211.1250 | Detection-Directed Sparse Estimation using Bayesian Hypothesis Test and
Belief Propagation | cs.IT math.IT | In this paper, we propose a sparse recovery algorithm called
detection-directed (DD) sparse estimation using Bayesian hypothesis test (BHT)
and belief propagation (BP). In this framework, we consider the use of
sparse-binary sensing matrices which have the tree-like property, together
with the sampled-message approach for the implementation of BP.
The key idea behind the proposed algorithm is that the recovery takes
DD-estimation structure consisting of two parts: support detection and signal
value estimation. BP and BHT perform the support detection, and an MMSE
estimator finds the signal values using the detected support set. The proposed
algorithm provides robustness against measurement noise beyond the
conventional MAP approach, as well as a way to remove the quantization effect
of the sampled-message-based BP independently of the memory size used for
message sampling.
We explain how the proposed algorithm attains these characteristics through
illustrative discussion. In addition, our experiments validate the superiority
of the proposed algorithm over recent algorithms in the noisy setup.
Interestingly, the experimental results show that the performance of the
proposed algorithm approaches that of the oracle estimator as the SNR
increases.
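The detect-then-estimate structure described above can be illustrated with a minimal numpy sketch. Here the support detector is replaced by a simple correlation ranking (a stand-in for the paper's BP/BHT detection stage) and the value estimator by ordinary least squares on the detected support (a simplification of the MMSE step); the function name and setup are illustrative, not the paper's implementation.

```python
import numpy as np

def detect_then_estimate(A, y, k):
    """Two-stage sparse recovery sketch: (1) detect a support of size k by
    ranking column correlations with the measurements (stand-in for BP/BHT),
    (2) estimate the signal values on that support by least squares."""
    scores = np.abs(A.T @ y)                  # correlation-based support scores
    support = np.sort(np.argsort(scores)[-k:])
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x = np.zeros(A.shape[1])
    x[support] = sol                          # values only on detected support
    return x, support
```

In the noiseless, well-conditioned case this two-stage scheme recovers the sparse vector exactly once the support is detected correctly, which mirrors why performance approaches the oracle estimator at high SNR.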
|
1211.1252 | Implementation of Radon Transformation for Electrical Impedance
Tomography (EIT) | cs.CV | The Radon transform is generally used to construct optical images (such as
CT images) from projection data in biomedical imaging. In this paper, the
Radon transform is implemented to reconstruct an Electrical Impedance
Tomographic image (conductivity or resistivity distribution) of a circular
subject. A parallel resistance model of a subject is proposed for Electrical
Impedance Tomography (EIT) or Magnetic Induction Tomography (MIT). A circular
subject with embedded circular objects is segmented into equal-width slices
from different angles. For each angle, the conductance and conductivity of
each slice are calculated and stored in an array. A back projection method is
used to generate a two-dimensional image from the one-dimensional projections:
the inverse Radon transform is applied to the calculated conductance and
conductivity to reconstruct two-dimensional images, which are compared to the
target image. During image reconstruction, different filters are applied, and
the resulting images are compared with each other and with the target image.
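The back projection step can be sketched in a few lines of numpy: each one-dimensional projection is smeared back across the image plane along its acquisition angle and the results are averaged. This is the unfiltered variant for intuition only; the paper's filtered inverse Radon transform adds a ramp-filtering step, and all names here are illustrative.

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Unfiltered back projection: smear each 1-D projection back across
    a size x size image plane along its acquisition angle and average."""
    img = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - c, ys - c
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # Detector-axis coordinate of every pixel for this angle.
        t = xs * np.cos(theta) + ys * np.sin(theta) + c
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        img += proj[idx]
    return img / len(angles_deg)
```

A uniform projection at every angle back-projects to a uniform image, which is a quick sanity check on the geometry.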
|
1211.1255 | Handwritten digit recognition by bio-inspired hierarchical networks | cs.LG cs.CV q-bio.NC | The human brain processes information, showing learning and prediction
abilities, but the underlying neuronal mechanisms remain largely unknown.
Recently, many studies have shown that neuronal networks are capable of both
generalization and association of sensory inputs. In this paper, following a
set of neurophysiological evidence, we propose a learning framework with
strong biological plausibility that mimics prominent functions of cortical
circuitry. We developed the Inductive Conceptual Network (ICN), a hierarchical
bio-inspired network able to learn invariant patterns through variable-order
Markov models implemented in its nodes. The outputs of the top-most node of
the ICN hierarchy, representing the highest input generalization, allow for
automatic classification of inputs. We found that the ICN clustered MNIST
images with an error of 5.73% and USPS images with an error of 12.56%.
|
1211.1265 | From Bits to Images: Inversion of Local Binary Descriptors | cs.CV cs.IT math.IT | Local Binary Descriptors are becoming increasingly popular for image
matching tasks, especially on mobile platforms. While they are extensively
studied in this context, their ability to carry enough information in order to
infer the original image is seldom addressed.
In this work, we leverage an inverse problem approach to show that it is
possible to directly reconstruct the image content from Local Binary
Descriptors. This process relies on very broad assumptions besides the
knowledge of the pattern of the descriptor at hand. This generalizes previous
results that required either a prior learning database or non-binarized
features.
Furthermore, our reconstruction scheme reveals differences in the way
different Local Binary Descriptors capture and encode image information. Hence,
the potential applications of our work are multiple, ranging from privacy
issues caused by eavesdropping image keypoints streamed by mobile devices to
the design of better descriptors through the visualization and the analysis of
their geometric content.
|
1211.1302 | Calculating Kolmogorov Complexity from the Output Frequency
Distributions of Small Turing Machines | cs.IT cs.CC math.IT nlin.PS | Drawing on various notions from theoretical computer science, we present a
novel numerical approach, motivated by the notion of algorithmic probability,
to the problem of approximating the Kolmogorov-Chaitin complexity of short
strings. The method is an alternative to the traditional lossless compression
algorithms, which it may complement, the two being serviceable for different
string lengths. We provide a thorough analysis for all $\sum_{n=1}^{11} 2^n$
binary strings of length $n<12$ and for most strings of length $12\leq n
\leq16$ by running all $\sim 2.5 \times 10^{13}$ Turing machines with 5 states
and 2 symbols ($8\times 22^9$ with reduction techniques) using the most
standard formalism of Turing machines, used for example in the Busy Beaver
problem. We address the question of stability and error estimation, the
sensitivity of the continued application of the method for wider coverage and
better accuracy, and provide statistical evidence suggesting robustness. As
with compression algorithms, this work promises to deliver a range of
applications, and to provide insight into the question of complexity
calculation of finite (and short) strings.
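For intuition on the compression baseline that the algorithmic-probability method is meant to complement, a crude upper bound on the Kolmogorov complexity of a string is simply its losslessly compressed size. The snippet below uses stdlib zlib; it is a sketch of the baseline, not the paper's method, which is precisely what compression cannot do well for very short strings.

```python
import zlib

def compressed_size(s: bytes) -> int:
    """Crude upper bound on Kolmogorov complexity: the length of the
    string's zlib-compressed representation (maximum compression level)."""
    return len(zlib.compress(s, 9))
```

A highly regular string compresses far better than an irregular one of the same length, which is the behaviour any complexity proxy should reproduce.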
|
1211.1328 | Random walk kernels and learning curves for Gaussian process regression
on random graphs | stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.LG | We consider learning on graphs, guided by kernels that encode similarity
between vertices. Our focus is on random walk kernels, the analogues of squared
exponential kernels in Euclidean spaces. We show that on large, locally
tree-like graphs these have some counter-intuitive properties, specifically in
the limit of large kernel lengthscales. We consider using these kernels as
covariance matrices of e.g.\ Gaussian processes (GPs). In this situation one
typically scales the prior globally to normalise the average of the prior
variance across vertices. We demonstrate that, in contrast to the Euclidean
case, this generically leads to significant variation in the prior variance
across vertices, which is undesirable from the probabilistic modelling point of
view. We suggest the random walk kernel should be normalised locally, so that
each vertex has the same prior variance, and analyse the consequences of this
by studying learning curves for Gaussian process regression. Numerical
calculations as well as novel theoretical predictions for the learning curves
using belief propagation make it clear that one obtains distinctly different
probabilistic models depending on the choice of normalisation. Our method for
predicting the learning curves using belief propagation is significantly more
accurate than previous approximations and should become exact in the limit of
large random graphs.
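The locally normalised random walk kernel described above can be sketched in numpy. One common form of the kernel is K = (I - L/a)^p with L the normalised graph Laplacian (the exact parametrisation here is an assumption, not necessarily the paper's); local normalisation then rescales K so every vertex has unit prior variance.

```python
import numpy as np

def random_walk_kernel(A, a=2.0, p=10):
    """Random walk kernel K = (I - L/a)^p on a graph with adjacency A,
    where L is the normalised Laplacian, followed by local normalisation
    so that each vertex has the same (unit) prior variance."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - Dinv @ A @ Dinv        # normalised Laplacian
    K = np.linalg.matrix_power(np.eye(len(A)) - L / a, p)
    s = np.sqrt(np.diag(K))
    return K / np.outer(s, s)                    # local normalisation
```

After local normalisation the diagonal of K is exactly one at every vertex, in contrast to the global scaling the abstract argues against, which only fixes the average prior variance.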
|
1211.1332 | Use of PSO in Parameter Estimation of Robot Dynamics; Part Two:
Robustness | cs.RO | In this paper, we analyze the robustness of the PSO-based approach to
parameter estimation of robot dynamics presented in Part One. We have made
attempts to make the PSO method more robust by experimenting with potential
cost functions. The simulated system is a cylindrical robot; through
simulation, the robot is excited, samples are taken, error is added to the
samples, and the noisy samples are used for estimating the robot parameters
through the presented method. Comparisons are made with the least squares,
total least squares, and robust least squares methods of estimation.
|
1211.1335 | Ball Striking Algorithm for a 3 DOF Ping-Pong Playing Robot Based on
Particle Swarm Optimization | cs.RO | This paper illustrates how a 3-degree-of-freedom Cartesian robot can be
given the task of playing ping pong against a human player. We present an
algorithm based on particle swarm optimization for the robot to calculate when
and how to hit an approaching ball. Simulation results are shown to depict the
effectiveness of our approach. Although emphasis is placed on sending the ball
to a desired point on the ping pong table, it is shown that our method may be
adjusted to meet the requirements of a variety of ball hitting strategies.
|
1211.1339 | Use of PSO in Parameter Estimation of Robot Dynamics; Part One: No Need
for Parameterization | cs.RO | Offline procedures for estimating parameters of robot dynamics are
typically based on the parameterized inverse dynamic model. In this paper, we
present a novel approach to parameter estimation of robot dynamics which
removes the necessity of parameterization (i.e. finding the minimum number of
parameters from which the dynamics can be calculated through a linear model
with respect to these parameters). This offline approach is based on a simple
and powerful swarm intelligence tool: the particle swarm optimization (PSO). In
this paper, we discuss and validate the method through simulated experiments.
In Part Two we analyze our method in terms of robustness and compare it to
robust analytical methods of estimation.
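The core optimiser behind both parts can be illustrated with a minimal PSO sketch: each particle tracks its personal best position while the swarm tracks the global best, and velocities are pulled toward both. The inertia and acceleration constants below are common textbook choices, not the values used in the papers, and the demo cost is a stand-in for the robot-dynamics estimation error.

```python
import numpy as np

def pso(cost, dim, n=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimiser: velocities are drawn toward each
    particle's personal best and the swarm's global best at every step."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pcost                       # update personal bests
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()
```

Because PSO needs only cost evaluations, no gradient and no linear-in-parameters model, it sidesteps the parameterization step that Part One argues is unnecessary.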
|
1211.1361 | On the constrained growth of complex critical systems | physics.soc-ph cs.SI | Critical, or scale independent, systems are so ubiquitous, that gaining
theoretical insights on their nature and properties has many direct
repercussions in social and natural sciences. In this report, we start from the
simplest possible growth model for critical systems and deduce constraints in
their growth: the well-known preferential attachment principle and, mainly, a
new law of temporal scaling. We then support our scaling law with a number of
calculations and simulations of more complex theoretical models: critical
percolation, self-organized criticality and fractal growth. Perhaps more
importantly, the scaling law is also observed in a number of empirical systems
of quite different nature: prose samples, artistic and scientific
productivity, citation networks, and the topology of the Internet. We believe
that these observations pave the way towards a general and analytical framework
for predicting the growth of complex systems.
|
1211.1364 | A shadowing problem in the detection of overlapping communities: lifting
the resolution limit through a cascading procedure | physics.soc-ph cond-mat.stat-mech cs.SI | Community detection is the process of assigning nodes and links to
significant communities (e.g. clusters, functional modules), and its development
has led to a better understanding of complex networks. When applied to sizable
networks, we argue that most detection algorithms correctly identify prominent
communities, but fail to do so across multiple scales. As a result, a
significant fraction of the network is left uncharted. We show that this
problem stems from larger or denser communities overshadowing smaller or
sparser ones, and that this effect accounts for most of the undetected
communities and unassigned links. We propose a generic cascading approach to
community detection that circumvents the problem. Using real and artificial
network datasets with three widely used community detection algorithms, we show
how a simple cascading procedure allows for the detection of the missing
communities. This work highlights a new detection limit of community structure,
and we hope that our approach can inspire better community detection
algorithms.
|