id | title | categories | abstract |
|---|---|---|---|
1212.0467 | Low-rank Matrix Completion using Alternating Minimization | stat.ML cs.LG math.OC | Alternating minimization represents a widely applicable and empirically
successful approach for finding low-rank matrices that best fit the given data.
For example, for the problem of low-rank matrix completion, this method is
believed to be one of the most accurate and efficient, and formed a major
component of the winning entry in the Netflix Challenge.
In the alternating minimization approach, the low-rank target matrix is
written in a bi-linear form, i.e. $X = UV^\dag$; the algorithm then alternates
between finding the best $U$ and the best $V$. Typically, each alternating step
in isolation is convex and tractable. However the overall problem becomes
non-convex and there has been almost no theoretical understanding of when this
approach yields a good result.
In this paper we present the first theoretical analysis of the performance of
alternating minimization for matrix completion, and the related problem of
matrix sensing. For both these problems, celebrated recent results have shown
that they become well-posed and tractable once certain (now standard)
conditions are imposed on the problem. We show that alternating minimization
also succeeds under similar conditions. Moreover, compared to existing results,
our paper shows that alternating minimization guarantees faster (in particular,
geometric) convergence to the true matrix, while allowing a simpler analysis.
|
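The alternating scheme described in this abstract can be illustrated with a short NumPy sketch. This is a generic alternating-least-squares variant; the small ridge term and the random initialization are illustrative assumptions, not the initialization scheme the paper actually analyzes:

```python
import numpy as np

def alt_min_complete(M, mask, rank, n_iters=50, reg=1e-6):
    """Alternating minimization for matrix completion: write X = U V^T and
    alternate between the two convex least-squares subproblems.
    `reg` is a small ridge term added for numerical stability (an assumption,
    not part of the abstract's formulation)."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    ridge = reg * np.eye(rank)
    for _ in range(n_iters):
        for i in range(m):                    # best U for fixed V, row by row
            obs = mask[i]
            A = V[obs]
            U[i] = np.linalg.solve(A.T @ A + ridge, A.T @ M[i, obs])
        for j in range(n):                    # best V for fixed U, row by row
            obs = mask[:, j]
            A = U[obs]
            V[j] = np.linalg.solve(A.T @ A + ridge, A.T @ M[obs, j])
    return U, V

# Toy demo: recover a random rank-2 matrix from ~60% of its entries.
rng = np.random.default_rng(1)
X_true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = rng.random((20, 15)) < 0.6
U, V = alt_min_complete(X_true, mask, rank=2)
rel_err = np.linalg.norm(U @ V.T - X_true) / np.linalg.norm(X_true)
```

Each inner solve is a tiny `rank x rank` linear system, which is what makes each alternating step tractable even though the joint problem is non-convex.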
1212.0493 | Multi-user Diversity in Spectrum Sharing Systems over Fading Channels
with Average Power Constraints | cs.IT math.IT | The multi-user diversity in spectrum sharing cognitive radio systems with
average power constraints over fading channels is investigated. Average power
constraints are imposed for both the transmit power at the secondary
transmitter and the interference power received at the primary receiver in
order to provide optimal power allocation for capacity maximization at the
secondary system and protection at the primary system, respectively. Multiple
secondary and primary receivers are considered and the corresponding fading
distributions for the Rayleigh and Nakagami-m fading channels are derived.
Based on the derived formulation of the fading distributions, the average
achievable channel capacity and the outage probability experienced at the
secondary system are obtained, revealing the impact of the average power
constraints on optimal power allocation in the multi-user diversity technique in
fading environments with multiple secondary and primary receivers that share
the same channel. The obtained results highlight the advantage of having, on
the one hand, more secondary receivers and, on the other hand, fewer primary
receivers, manifested as an increase in the achievable capacity.
|
1212.0494 | Identification Via Quantum Channels | quant-ph cs.IT math-ph math.IT math.MP | We review the development of the quantum version of Ahlswede and Dueck's
theory of identification via channels. As is often the case in quantum
probability, there is not just one but several quantizations: we know at least
two different concepts of identification of classical information via quantum
channels, and three different identification capacities for quantum
information. In the present summary overview we concentrate on conceptual
points and open problems, referring the reader to the small set of original
articles for details.
|
1212.0504 | Machine learning prediction of cancer cell sensitivity to drugs based on
genomic and chemical properties | q-bio.GN cs.CE cs.LG q-bio.CB | Predicting the response of a specific cancer to a therapy is a major goal in
modern oncology that should ultimately lead to a personalised treatment.
High-throughput screenings of potentially active compounds against a panel of
genomically heterogeneous cancer cell lines have unveiled multiple
relationships between genomic alterations and drug responses. Various
computational approaches have been proposed to predict sensitivity based on
genomic features, while others have used the chemical properties of the drugs
to ascertain their effect. In an effort to integrate these complementary
approaches, we developed machine learning models to predict the response of
cancer cell lines to drug treatment, quantified through IC50 values, based on
both the genomic features of the cell lines and the chemical properties of the
considered drugs. Models predicted IC50 values in an 8-fold cross-validation
and an independent blind test, with coefficients of determination (R2) of 0.72
and 0.64, respectively. Furthermore, models were able to predict with
comparable accuracy
(R2 of 0.61) IC50s of cell lines from a tissue not used in the training stage.
Our in silico models can be used to optimise the experimental design of
drug-cell screenings by estimating a large proportion of missing IC50 values
rather than measuring them experimentally. The implications of our results go
beyond virtual drug screening design: potentially thousands of drugs could be
probed in silico to systematically test their potential efficacy as anti-tumour
agents based on their structure, thus providing a computational framework to
identify new drug repositioning opportunities, and could ultimately be useful
for personalized medicine by linking the genomic traits of patients to drug
sensitivity.
|
1212.0511 | Design of Experiments for Calibration of Planar Anthropomorphic
Manipulators | cs.RO | The paper presents a novel technique for the design of optimal calibration
experiments for a planar anthropomorphic manipulator with n degrees of freedom.
The proposed approach to selecting manipulator configurations substantially
improves calibration accuracy and reduces parameter identification errors. The
results are illustrated by application examples that
deal with typical anthropomorphic manipulators.
|
1212.0518 | Sublinear but Never Superlinear Preferential Attachment by Local Network
Growth | cond-mat.stat-mech cs.SI physics.soc-ph | We investigate a class of network growth rules that are based on a
redirection algorithm wherein new nodes are added to a network by linking to a
randomly chosen target node with some probability 1-r or linking to the parent
node of the target node with probability r. For fixed 0<r<1, the redirection
algorithm is equivalent to linear preferential attachment. We show that when r
is a decaying function of the degree of the parent of the initial target, the
redirection algorithm produces sublinear preferential attachment network
growth. We also argue that no local redirection algorithm can produce
superlinear preferential attachment.
|
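The redirection rule is simple to simulate. The sketch below implements the fixed-r case, which the abstract identifies with linear preferential attachment; the degree-dependent r(k) variant that yields sublinear attachment is not reproduced, and the seed-node convention is an assumption:

```python
import random
from collections import Counter

def grow_redirection(n_nodes, r=0.5, seed=0):
    """Redirection growth rule: each new node picks a uniformly random
    target; with probability r it attaches to the target's parent instead.
    `parent[v]` records the node v attached to. The initial node points to
    itself (an illustrative convention, not specified in the abstract)."""
    rng = random.Random(seed)
    parent = {0: 0}
    degree = Counter({0: 1})
    for v in range(1, n_nodes):
        target = rng.randrange(v)       # uniformly random existing node
        if rng.random() < r:
            target = parent[target]     # redirect to the target's parent
        parent[v] = target
        degree[target] += 1
        degree[v] += 1
    return parent, degree

parent, degree = grow_redirection(5000, r=0.5)
max_deg = max(degree.values())          # hubs emerge, as under linear PA
```

For fixed 0 < r < 1 this produces the heavy-tailed degree distribution characteristic of linear preferential attachment, which is what the simulated `max_deg` hub reflects.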
1212.0520 | A modular framework for randomness extraction based on Trevisan's
construction | cs.IT cs.MS math.IT quant-ph | Informally, an extractor delivers perfect randomness from a source that may
be far away from the uniform distribution, yet contains some randomness. This
task is a crucial ingredient of any attempt to produce perfectly random
numbers---required, for instance, by cryptographic protocols, numerical
simulations, or randomised computations. Trevisan's extractor raised
considerable theoretical interest not only because of its data parsimony
compared to other constructions, but particularly because it is secure against
quantum adversaries, making it applicable to quantum key distribution.
We discuss a modular, extensible and high-performance implementation of the
construction based on various building blocks that can be flexibly combined to
satisfy the requirements of a wide range of scenarios. Besides quantitatively
analysing the properties of many combinations in practical settings, we improve
previous theoretical proofs, and give explicit results for non-asymptotic
cases. The self-contained description does not assume familiarity with
extractors.
|
1212.0575 | Sparse and Optimal Acquisition Design for Diffusion MRI and Beyond | physics.med-ph cs.CE math.OC physics.comp-ph | The focus of this paper is on the development of a sparse and optimal
acquisition (SOA) design for diffusion MRI multiple-shell acquisition and
beyond. A novel optimality criterion is proposed for sparse multiple-shell
acquisition and quasi multiple-shell designs in diffusion MRI, together with a
novel and effective semi-stochastic, moderately greedy combinatorial search
strategy with simulated annealing to locate the optimum design or
configuration. Even
though the number of distinct configurations for a given set of diffusion
gradient directions is very large in general---e.g., on the order of 10^{232}
for a set of 144 diffusion gradient directions---the proposed search strategy
was found to be effective in finding the optimum configuration. It was found
that the square design is the most robust (i.e., with stable condition numbers
and A-optimal measures under varying experimental conditions) among many other
possible designs of the same sample size. Under the same performance
evaluation, the square design was found to be more robust than the widely used
sampling schemes similar to that of 3D radial MRI and of diffusion spectrum
imaging (DSI).
|
1212.0578 | Max-plus algebra models of queueing networks | math.OC cs.SY | A class of queueing networks which may have an arbitrary topology and
consist of single-server fork-join nodes with both infinite and finite buffers
is examined to derive a representation of the network dynamics in terms of
max-plus algebra. For the networks, we present a common dynamic state equation
which relates the departure epochs of customers from the network nodes in an
explicit vector form determined by a state transition matrix. It is shown how
the matrices inherent in particular networks may be calculated from the service
times of customers. Since, in general, an explicit dynamic equation may not
exist for a network, related existence conditions are established in terms of
the network topology.
|
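The dynamic state equation x(k) = A ⊗ x(k-1), with ⊕ = max and ⊗ = +, can be iterated directly once the transition matrix is known. Below is a minimal sketch for a hypothetical two-node tandem network with deterministic service times s1 and s2; the topology, service times, and transition matrix are illustrative assumptions, not taken from the abstract:

```python
import numpy as np

def maxplus_matvec(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (A_ij + x_j)."""
    return np.max(A + x, axis=1)

# Hypothetical tandem of two single-server nodes with service times s1, s2.
# x(k) holds the departure epochs of the k-th customer from each node; the
# matrix below is the standard state transition matrix for this topology.
s1, s2 = 2.0, 3.0
NEG = -np.inf                       # the max-plus "zero" element
A = np.array([[s1,      NEG],
              [s1 + s2, s2 ]])

x = np.zeros(2)                     # both servers start empty at time 0
for _ in range(10):                 # departure epochs of customers 1..10
    x = maxplus_matvec(A, x)
# -> x == [20.0, 32.0]: node 1 departs every s1 = 2 time units, while
#    node 2 settles into the bottleneck rate s2 = 3.
```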
1212.0582 | Compositional Stochastic Modeling and Probabilistic Programming | cs.AI cs.PL | Probabilistic programming is related to a compositional approach to
stochastic modeling by switching from discrete to continuous time dynamics. In
continuous time, an operator-algebra semantics is available in which processes
proceeding in parallel (and possibly interacting) have summed time-evolution
operators. From this foundation, algorithms for simulation, inference and model
reduction may be systematically derived. The useful consequences are
potentially far-reaching in computational science, machine learning and beyond.
Hybrid compositional stochastic modeling/probabilistic programming approaches
may also be possible.
|
1212.0610 | Building Confidential and Efficient Query Services in the Cloud with
RASP Data Perturbation | cs.DB cs.CR | With the wide deployment of public cloud computing infrastructures, using
clouds to host data query services has become an appealing solution for the
advantages of scalability and cost saving. However, some data might be so
sensitive that the data owner does not want to move it to the cloud unless the
data confidentiality and query privacy are guaranteed. On the other hand, a
secured query service should still provide efficient query processing and
significantly reduce the in-house workload to fully realize the benefits of
cloud computing. We propose the RASP data perturbation method to provide secure
and efficient range query and kNN query services for protected data in the
cloud. The RASP data perturbation method combines order preserving encryption,
dimensionality expansion, random noise injection, and random projection, to
provide strong resilience to attacks on the perturbed data and queries. It also
preserves multidimensional ranges, which allows existing indexing techniques to
be applied to speed up range query processing. The kNN-R algorithm is designed
to work with the RASP range query algorithm to process the kNN queries. We have
carefully analyzed the attacks on data and queries under a precisely defined
threat model and realistic security assumptions. Extensive experiments have
been conducted to show the advantages of this approach on efficiency and
security.
|
1212.0639 | Evaluation of Particle Swarm Optimization Algorithms for Weighted
Max-Sat Problem: Technical Report | cs.NE | An experimental evaluation is conducted to assess the performance of four
different Particle Swarm Optimization neighborhood structures in solving the
weighted Max-Sat problem. The experiment has shown that none of the algorithms
achieves statistically significant performance over the others at a confidence
level of 0.05.
|
1212.0655 | G-invariant Persistent Homology | math.AT cs.CG cs.CV | Classical persistent homology is a powerful mathematical tool for shape
comparison. Unfortunately, it is not tailored to study the action of
transformation groups that are different from the group Homeo(X) of all
self-homeomorphisms of a topological space X. This fact restricts its use in
applications. In order to obtain better lower bounds for the natural
pseudo-distance d_G associated with a subgroup G of Homeo(X), we need to adapt
persistent homology and consider G-invariant persistent homology. Roughly
speaking, the main idea consists in defining persistent homology by means of a
set of chains that is invariant under the action of G. In this paper we
formalize this idea, and prove the stability of the persistent Betti number
functions in G-invariant persistent homology with respect to the natural
pseudo-distance d_G. We also show how G-invariant persistent homology could be
used in applications concerning shape comparison, when the invariance group is
a proper subgroup of the group of all self-homeomorphisms of a topological
space. In this paper we will assume that the space X is triangulable, in order
to guarantee that the persistent Betti number functions are finite without
using any tameness assumption.
|
1212.0657 | Modeling Risk Perception in Networks with Community Structure | physics.soc-ph cs.SI | We study the influence of global, local and community-level risk perception
on the extinction probability of a disease in several models of social
networks. In particular, we study the infection progression as a
susceptible-infected-susceptible (SIS) model on several modular networks,
formed by a certain number of random and scale-free communities. We find that
in the scale-free networks the progression is faster than in random ones with
the same average connectivity degree. As for the role of perception, we find
that knowledge of the infection level in one's own neighborhood is the most
effective property in stopping the spread of a disease, but at the same time
the most expensive one in terms of the quantity of required information; the
cost/effectiveness optimum is thus a tradeoff between several parameters.
|
1212.0689 | Multiscale Community Mining in Networks Using Spectral Graph Wavelets | physics.soc-ph cs.DM cs.SI | For data represented by networks, the community structure of the underlying
graph is of great interest. A classical clustering problem is to uncover the
overall ``best'' partition of nodes in communities. Here, a more elaborate
description is proposed in which community structures are identified at
different scales. To this end, we take advantage of the local and
scale-dependent information encoded in graph wavelets. After new developments
for the practical use of graph wavelets, studying proper scale boundaries and
parameters and introducing scaling functions, we propose a method to mine for
communities in complex networks in a scale-dependent manner. It relies on
classifying nodes according to their wavelets or scaling functions, using a
scale-dependent modularity function. An example on a graph benchmark having
hierarchical communities shows that we successfully estimate its multiscale
structure.
|
1212.0692 | An Empirical Evaluation of Portfolios Approaches for solving CSPs | cs.AI cs.LG | Recent research in areas such as SAT solving and Integer Linear Programming
has shown that a single arbitrarily efficient solver can be significantly
outperformed by a portfolio of solvers that are possibly slower on average. We
report an empirical evaluation and comparison of portfolio
approaches applied to Constraint Satisfaction Problems (CSPs). We compared
models developed on top of off-the-shelf machine learning algorithms with
respect to approaches used in the SAT field and adapted for CSPs, considering
different portfolio sizes and using as evaluation metrics the number of solved
problems and the time taken to solve them. Results indicate that the best SAT
approaches also achieve top performance in the CSP field and are slightly more
competitive than simple models built on top of classification algorithms.
|
1212.0695 | Training Support Vector Machines Using Frank-Wolfe Optimization Methods | cs.LG cs.CV math.OC stat.ML | Training a Support Vector Machine (SVM) requires the solution of a quadratic
programming problem (QP) whose computational complexity becomes prohibitively
expensive for large scale datasets. Traditional optimization methods cannot be
directly applied in these cases, mainly due to memory restrictions.
By adopting a slightly different objective function and under mild conditions
on the kernel used within the model, efficient algorithms to train SVMs have
been devised under the name of Core Vector Machines (CVMs). This framework
exploits the equivalence of the resulting learning problem with the task of
building a Minimal Enclosing Ball (MEB) problem in a feature space, where data
is implicitly embedded by a kernel function.
In this paper, we improve on the CVM approach by proposing two novel methods
to build SVMs based on the Frank-Wolfe algorithm, recently revisited as a fast
method to approximate the solution of a MEB problem. In contrast to CVMs, our
algorithms do not require computing the solutions of a sequence of
increasingly complex QPs and are defined by using only analytic optimization
steps. Experiments on a large collection of datasets show that our methods
scale better than CVMs in most cases, sometimes at the price of a slightly
lower accuracy. As CVMs, the proposed methods can be easily extended to machine
learning problems other than binary classification. However, effective
classifiers are also obtained using kernels which do not satisfy the condition
required by CVMs and can thus be used for a wider set of problems.
|
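The MEB subproblem mentioned above can be illustrated with the classical Badoiu-Clarkson iteration, a Frank-Wolfe-type method whose steps are purely analytic. This is a sketch of the geometric primitive only, not the authors' SVM training algorithms, and the toy point set is an illustrative assumption:

```python
import numpy as np

def meb_frank_wolfe(P, n_iters=500):
    """Badoiu-Clarkson iteration for the Minimal Enclosing Ball: move the
    current center toward the farthest point with step size 1/(k+2).
    Every step is analytic -- no quadratic program is solved."""
    c = P.mean(axis=0)
    for k in range(n_iters):
        far = np.argmax(np.linalg.norm(P - c, axis=1))
        c = c + (P[far] - c) / (k + 2)
    radius = np.linalg.norm(P - c, axis=1).max()
    return c, radius

# Toy instance: the corners of a square. The optimal enclosing ball is
# centered at the origin with radius sqrt(2), and the iteration recovers it.
P = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
c, radius = meb_frank_wolfe(P)
```

In the CVM setting the points P live in a kernel-induced feature space, so the farthest-point search is carried out through kernel evaluations rather than explicit coordinates.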
1212.0748 | Twisted Radio Waves and Twisted Thermodynamics | physics.class-ph cs.IT math.IT | We present and analyze a gedanken experiment and show that the assumption
that an antenna operating at a single frequency can transmit more than two
independent information channels to the far field violates the Second Law of
Thermodynamics. Transmission of a large number of channels, each associated
with an angular momentum "twisted wave" mode, to the far field in free space is
therefore not possible.
|
1212.0749 | A large-scale study of the World Wide Web: network correlation functions
with scale-invariant boundaries | physics.soc-ph cs.SI | We performed a large-scale crawl of the World Wide Web, covering 6.9 million
domains and 57 million subdomains, including all high-traffic sites of the
Internet. We present a study of the correlations found between quantities
measuring the structural relevance of each node in the network (the in- and
out-degree, the local clustering coefficient, the first-neighbor in-degree and
the Alexa rank). We find that some of these properties show strong correlation
effects and that the dependencies occurring out of these correlations follow
power laws not only for the averages, but also for the boundaries of the
respective density distributions. In addition, these scale-free limits do not
follow the same exponents as the corresponding averages. In our study we retain
the directionality of the hyperlinks and develop a statistical estimate for the
clustering coefficient of directed graphs.
We include in our study the correlations between the in-degree and the Alexa
traffic rank, a popular index for the traffic volume, finding non-trivial
power-law correlations. We find that sites with more/less than about one
thousand links from different domains have remarkably different statistical
properties for all correlation functions studied, pointing to an underlying
hierarchical structure of the World Wide Web.
|
1212.0750 | Problem Solving and Computational Thinking in a Learning Environment | cs.AI | Computational thinking is a new problem-solving method named for its extensive
use of computer science techniques. It synthesizes critical thinking and
existing knowledge and applies them to solving complex technological problems.
The term was coined by J. Wing, but the relationship between computational and
critical thinking, the two modes of thinking in solving problems, has not yet
been clearly established. This paper aims at shedding some light on this
relationship. We also present two classroom experiments performed recently at
the Graduate Technological Educational Institute of Patras in Greece. The
results of these experiments give a strong indication that the use of computers
as a tool for problem solving enhances the students' abilities in solving real
world problems involving mathematical modelling. This is also corroborated by
earlier findings of other researchers for the problem solving process in
general (not only for mathematical problems).
|
1212.0763 | Dynamic recommender system : using cluster-based biases to improve the
accuracy of the predictions | cs.LG cs.DB cs.IR | It is today accepted that matrix factorization models allow a high quality of
rating prediction in recommender systems. However, a major drawback of matrix
factorization is its static nature that results in a progressive declining of
the accuracy of the predictions after each factorization. This is due to the
fact that the new obtained ratings are not taken into account until a new
factorization is computed, which cannot be done very often because of the high
cost of matrix factorization.
In this paper, aiming at improving the accuracy of recommender systems, we
propose a cluster-based matrix factorization technique that enables online
integration of new ratings. Thus, we significantly enhance the obtained
predictions between two matrix factorizations. We use finer-grained user biases
by clustering similar items into groups, and allocating in these groups a bias
to each user. Experiments on large datasets demonstrate the efficiency of our
approach.
|
1212.0767 | Robust Predictor Feedback for Discrete-Time Systems with Input Delays | math.OC cs.SY | This work studies the design problem of feedback stabilizers for
discrete-time systems with input delays. A backstepping procedure is proposed
for disturbance-free discrete-time systems. The feedback law designed by using
backstepping coincides with the predictor-based feedback law used in
continuous-time systems with input delays. However, simple examples demonstrate
that the sensitivity of the closed-loop system with respect to modeling errors
increases as the value of the delay increases. The paper proposes a Lyapunov
redesign procedure which can minimize the effect of the uncertainty. Specific
results are provided for linear single-input discrete-time systems with
multiplicative uncertainty. The feedback law that guarantees robust global
exponential stability is a nonlinear feedback law, homogeneous of degree one.
|
1212.0768 | An ontology-based approach to relax traffic regulation for autonomous
vehicle assistance | cs.AI | Traffic regulation must be respected by all vehicles, either human- or
computer-driven. However, extreme traffic situations might exhibit practical
cases in which a vehicle should safely and reasonably relax traffic regulation,
e.g., in order not to be indefinitely blocked and to keep circulating. In this
paper, we propose a high-level representation of an automated vehicle, other
vehicles and their environment, which can assist drivers in taking such
"illegal" but practical relaxation decisions. This high-level representation
(an ontology) includes topological knowledge and inference rules, in order to
compute the next high-level motion an automated vehicle should take, as
assistance to a driver. Results on practical cases are presented.
|
1212.0819 | A Topological Code for Plane Images | cs.CV math.GT | A new code for contours of plane images is proposed. The code was applied to
optical character recognition of printed and handwritten characters, and it
can be applied to the recognition of any visual images.
|
1212.0873 | Parallel Coordinate Descent Methods for Big Data Optimization | math.OC cs.AI stat.ML | In this work we show that randomized (block) coordinate descent methods can
be accelerated by parallelization when applied to the problem of minimizing the
sum of a partially separable smooth convex function and a simple separable
convex function. The theoretical speedup, as compared to the serial method, and
referring to the number of iterations needed to approximately solve the problem
with high probability, is a simple expression depending on the number of
parallel processors and a natural and easily computable measure of separability
of the smooth component of the objective function. In the worst case, when no
degree of separability is present, there may be no speedup; in the best case,
when the problem is separable, the speedup is equal to the number of
processors. Our analysis also covers the setting in which the number of blocks
updated at each iteration is random, which allows for modeling situations with
busy or unreliable processors. We show that our algorithm is able to solve a
LASSO problem involving a matrix with 20 billion nonzeros in 2 hours on a large
memory node with 24 cores.
|
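A serial sketch of the underlying coordinate descent method, applied to a small LASSO instance, is shown below. The parallel block sampling and the speedup analysis are not reproduced, and the problem sizes, step rule, and regularization value are illustrative assumptions:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the prox of the l1 term."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def rcd_lasso(A, b, lam, n_iters=5000, seed=0):
    """Serial randomized coordinate descent for the LASSO
    min_x 0.5*||Ax - b||^2 + lam*||x||_1. Each step updates one uniformly
    random coordinate with an exact prox-gradient step (Lipschitz constant
    ||A_j||^2 per coordinate), maintaining the residual incrementally."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    resid = -b.copy()                   # invariant: resid = A @ x - b
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(n_iters):
        j = rng.integers(n)
        g = A[:, j] @ resid             # partial derivative of the smooth part
        x_new = soft(x[j] - g / col_sq[j], lam / col_sq[j])
        resid += A[:, j] * (x_new - x[j])
        x[j] = x_new
    return x

# Toy sparse recovery problem (noiseless, 3-sparse ground truth).
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true
x = rcd_lasso(A, b, lam=0.1)
```

The paper's contribution is to quantify how many such coordinate (block) updates can be applied in parallel before they start interfering, as a function of the separability of the smooth term.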
1212.0877 | Toeplitz Matrix Based Sparse Error Correction in System Identification:
Outliers and Random Noises | cs.IT math.IT | In this paper, we consider robust system identification under sparse outliers
and random noises. In our problem, system parameters are observed through a
Toeplitz matrix. All observations are subject to random noises and a few are
corrupted with outliers. We reduce this problem of system identification to a
sparse error correcting problem using a Toeplitz structured real-numbered
coding matrix. We prove performance guarantees for Toeplitz structured
matrices in sparse error correction. Thresholds on the percentage of
correctable errors
for Toeplitz structured matrices are also established. When both outliers and
observation noise are present, we show that the estimation error goes to
0 asymptotically as long as the probability density function for observation
noise is not "vanishing" around 0.
|
1212.0884 | Maximizing Social Influence in Nearly Optimal Time | cs.DS cs.SI physics.soc-ph | Diffusion is a fundamental graph process, underpinning such phenomena as
epidemic disease contagion and the spread of innovation by word-of-mouth. We
address the algorithmic problem of finding a set of k initial seed nodes in a
network so that the expected size of the resulting cascade is maximized, under
the standard independent cascade model of network diffusion. Runtime is a
primary consideration for this problem due to the massive size of the relevant
input networks.
We provide a fast algorithm for the influence maximization problem, obtaining
the near-optimal approximation factor of (1 - 1/e - epsilon), for any epsilon >
0, in time O((m+n)k log(n) / epsilon^2). Our algorithm is runtime-optimal (up
to a logarithmic factor) and substantially improves upon the previously
best-known algorithms which run in time Omega(mnk POLY(1/epsilon)).
Furthermore, our algorithm can be modified to allow early termination: if it is
terminated after O(beta(m+n)k log(n)) steps for some beta < 1 (which can depend
on n), then it returns a solution with approximation factor O(beta). Finally,
we show that this runtime is optimal (up to logarithmic factors) for any beta
and fixed seed size k.
|
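The objective being maximized can be illustrated by Monte Carlo estimation of the expected cascade size under the independent cascade model. This is the standard simulation of the diffusion process, not the authors' near-optimal algorithm, and the toy graph and activation probability are hypothetical:

```python
import random

def simulate_cascade(edges, seeds, p, rng):
    """One run of the independent cascade model: each newly activated node
    gets a single chance to activate each out-neighbor with probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in edges.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(edges, seeds, p, n_runs=2000, seed=0):
    """Monte Carlo estimate of the expected cascade size from `seeds`."""
    rng = random.Random(seed)
    return sum(simulate_cascade(edges, seeds, p, rng)
               for _ in range(n_runs)) / n_runs

# Toy directed graph: a hub (node 0) feeding a short chain.
edges = {0: [1, 2, 3], 3: [4], 4: [5]}
spread_hub = expected_spread(edges, [0], p=0.5)    # good seed choice
spread_leaf = expected_spread(edges, [5], p=0.5)   # poor seed choice
```

Influence maximization asks for the k seeds maximizing this expectation; the abstract's contribution is doing so in near-linear time rather than by the naive simulate-and-greedily-add approach sketched here.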
1212.0888 | Unmixing of Hyperspectral Data Using Robust Statistics-based NMF | cs.CV | Mixed pixels are present in hyperspectral images due to the low spatial
resolution of hyperspectral sensors. Spectral unmixing decomposes mixed pixel
spectra into endmember spectra and abundance fractions. In this paper, the use
of robust statistics-based nonnegative matrix factorization (RNMF) for
spectral unmixing of hyperspectral data is investigated. RNMF uses a robust
cost function and an iterative updating procedure, and is therefore not
sensitive to outliers. This method has been applied to simulated data using
the USGS spectral library, and to the AVIRIS and ROSIS datasets. Unmixing
results are compared to the traditional NMF
method based on SAD and AAD measures. Results demonstrate that this method can
be used efficiently for hyperspectral unmixing purposes.
|
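For context, the plain (non-robust) NMF baseline that RNMF is compared against can be sketched with Lee-Seung multiplicative updates; the robust cost function of RNMF itself is not reproduced here, and the matrix sizes and iteration count are illustrative:

```python
import numpy as np

def nmf(X, r, n_iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for plain NMF, X ~= W @ H, with
    squared-error cost. The updates keep W and H nonnegative throughout;
    `eps` guards against division by zero."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "unmixing" instance: an exactly rank-3 nonnegative matrix, standing in
# for pixel spectra generated from 3 endmembers and abundance fractions.
rng = np.random.default_rng(1)
X = rng.random((30, 3)) @ rng.random((3, 40))
W, H = nmf(X, r=3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In the unmixing interpretation, the columns of W play the role of endmember spectra and the rows of H the abundance fractions; the squared-error cost above is exactly what makes plain NMF sensitive to the outliers that RNMF's robust cost downweights.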
1212.0892 | An Intuitive Approach to Inertial Sensor Bias Estimation | cs.SY math.OC | A simple approach to gyro and accelerometer bias estimation is proposed. It
does not involve Kalman filtering or similar formal techniques. Instead, it is
based on physical intuition and exploits a duality between gimbaled and
strapdown inertial systems. The estimation problem is decoupled into two
separate stages. At the first stage, inertial system attitude errors are
corrected by means of a feedback from an external aid. In the presence of
uncompensated biases, the steady-state feedback rebalances those biases and can
be used to estimate them. At the second stage, the desired bias estimates are
expressed in a closed form in terms of the feedback signal. The estimator has
only three tunable parameters and is easy to implement and use. The tests
proved the feasibility of the proposed approach for the estimation of low-cost
MEMS inertial sensor biases on a moving land vehicle.
|
1212.0895 | The max-plus algebra approach in modelling of queueing networks | math.OC cs.SY | A class of queueing networks which consist of single-server fork-join nodes
with infinite buffers is examined to derive a representation of the network
dynamics in terms of max-plus algebra. For the networks, we present a common
dynamic state equation which relates the departure epochs of customers from the
network nodes in an explicit vector form determined by a state transition
matrix. We show how the matrix may be calculated from the service time of
customers in the general case, and give examples of matrices inherent in
particular networks.
|
1212.0901 | Advances in Optimizing Recurrent Networks | cs.LG | After a more than decade-long period of relatively little research activity
in the area of recurrent neural networks, several new developments will be
reviewed here that have allowed substantial progress both in understanding and
in technical solutions towards more efficient training of recurrent networks.
These advances have been motivated by and related to the optimization issues
surrounding deep learning. Although recurrent networks are extremely powerful
in what they can in principle represent in terms of modelling sequences, their
training is plagued by two aspects of the same issue regarding the learning of
long-term dependencies. Experiments reported here evaluate the use of clipping
gradients, spanning longer time ranges with leaky integration, advanced
momentum techniques, using more powerful output probability models, and
encouraging sparser gradients to help symmetry breaking and credit assignment.
The experiments are performed on text and music data and show off the combined
effects of these techniques in generally improving both training and test
error.
|
1212.0927 | Two Algorithms for Finding $k$ Shortest Paths of a Weighted Pushdown
Automaton | cs.CL cs.DS cs.FL | We introduce efficient algorithms for finding the $k$ shortest paths of a
weighted pushdown automaton (WPDA), a compact representation of a weighted set
of strings with potential applications in parsing and machine translation. Both
of our algorithms are derived from the same weighted deductive logic
description of the execution of a WPDA using different search strategies.
Experimental results show our Algorithm 2 adds very little overhead vs. the
single shortest path algorithm, even with a large $k$.
|
1212.0935 | Computing Consensus Curves | cs.CG cs.CV cs.GT cs.MA | We consider the problem of extracting accurate average ant trajectories from
many (possibly inaccurate) input trajectories contributed by citizen
scientists. Although there are many generic software tools for motion tracking
and specific ones for insect tracking, even untrained humans are much better at
this task, provided a robust method for computing the average trajectories is
available. We implemented and tested several local (one ant at a time) and
global (all ants together) methods. Our best-performing algorithm uses a novel global method,
based on finding edge-disjoint paths in an ant-interaction graph constructed
from the input trajectories. The underlying optimization problem is a new and
interesting variant of network flow. Even though the problem is NP-hard, we
implemented two heuristics, which work very well in practice, outperforming all
other approaches, including the best automated system.
|
1212.0945 | Multiclass Diffuse Interface Models for Semi-Supervised Learning on
Graphs | stat.ML cs.LG math.ST physics.data-an stat.TH | We present a graph-based variational algorithm for multiclass classification
of high-dimensional data, motivated by total variation techniques. The energy
functional is based on a diffuse interface model with a periodic potential. We
augment the model by introducing an alternative measure of smoothness that
preserves symmetry among the class labels. Through this modification of the
standard Laplacian, we construct an efficient multiclass method that allows for
sharp transitions between classes. The experimental results demonstrate that
our approach is competitive with the state of the art among other graph-based
algorithms.
|
1212.0950 | A General Formulation for the Stiffness Matrix of Parallel Mechanisms | physics.class-ph cs.RO | Starting from the definition of a stiffness matrix, the authors present a new
formulation of the Cartesian stiffness matrix of parallel mechanisms. The
proposed formulation is more general than any other stiffness matrix found in
the literature since it can take into account the stiffness of the passive
joints, it can consider additional compliances in the joints or in the links
and it remains valid for large displacements. Then, the validity, the
conservative property, the positive definiteness and the relation with other
formulations of stiffness matrices are discussed theoretically. Finally, a
numerical example is given in order to illustrate the correctness of this
matrix.
|
1212.0952 | Self-Organizing Flows in Social Networks | cs.SI cs.GT cs.NI physics.soc-ph | Social networks offer users new means of accessing information, essentially
relying on "social filtering", i.e. propagation and filtering of information by
social contacts. The sheer amount of data flowing in these networks, combined
with the limited budget of attention of each user, makes it difficult to ensure
that social filtering brings relevant content to the interested users. Our
motivation in this paper is to measure to what extent self-organization of the
social network results in efficient social filtering. To this end we introduce
flow games, a simple abstraction that models network formation under selfish
user dynamics, featuring user-specific interests and budget of attention. In
the context of homogeneous user interests, we show that selfish dynamics
converge to a stable network structure (namely a pure Nash equilibrium) with
close-to-optimal information dissemination. We show in contrast, for the more
realistic case of heterogeneous interests, that convergence, if it occurs, may
lead to information dissemination that can be arbitrarily inefficient, as
captured by an unbounded "price of anarchy". Nevertheless the situation differs
when users' interests exhibit a particular structure, captured by a metric
space with low doubling dimension. In that case, natural autonomous dynamics
converge to a stable configuration. Moreover, users obtain all the information
of interest to them in the corresponding dissemination, provided their budget
of attention is logarithmic in the size of their interest set.
|
1212.0960 | Evaluating Classifiers Without Expert Labels | cs.LG cs.IR stat.ML | This paper considers the challenge of evaluating a set of classifiers, as
done in shared task evaluations like the KDD Cup or NIST TREC, without expert
labels. While expert labels provide the traditional cornerstone for evaluating
statistical learners, limited or expensive access to experts represents a
practical bottleneck. Instead, we seek methodology for estimating performance
of the classifiers which is more scalable than expert labeling yet preserves
high correlation with evaluation based on expert labels. We consider both: 1)
using only labels automatically generated by the classifiers (blind
evaluation); and 2) using labels obtained via crowdsourcing. While
crowdsourcing methods are lauded for scalability, using such data for
evaluation raises serious concerns given the prevalence of label noise. In
regard to blind evaluation, two broad strategies are investigated: combine &
score and score & combine. Combine & score methods infer a single pseudo-gold
label set by aggregating classifier labels; classifiers are then evaluated
based on this single pseudo-gold label set. On the other hand, score & combine methods: 1)
sample multiple label sets from classifier outputs, 2) evaluate classifiers on
each label set, and 3) average classifier performance across label sets. When
additional crowd labels are also collected, we investigate two alternative
avenues for exploiting them: 1) direct evaluation of classifiers; or 2)
supervision of combine & score methods. To assess generality of our techniques,
classifier performance is measured using four common classification metrics,
with statistical significance tests. Finally, we measure both score and rank
correlations between estimated classifier performance vs. actual performance
according to expert judgments. Rigorous evaluation of classifiers from the TREC
2011 Crowdsourcing Track shows reliable evaluation can be achieved without
reliance on expert labels.
|
1212.0967 | Compiling Relational Database Schemata into Probabilistic Graphical
Models | cs.AI cs.DB cs.LG stat.ML | Instead of requiring a domain expert to specify the probabilistic
dependencies of the data, in this work we present an approach that uses the
relational DB schema to automatically construct a Bayesian graphical model for
a database. This resulting model contains customized distributions for columns,
latent variables that cluster the data, and factors that reflect and represent
the foreign key links. Experiments demonstrate the accuracy of the model and
the scalability of inference on synthetic and real-world data.
|
1212.0975 | Cost-Sensitive Support Vector Machines | cs.LG stat.ML | A new procedure for learning cost-sensitive SVM (CS-SVM) classifiers is
proposed. The SVM hinge loss is extended to the cost sensitive setting, and the
CS-SVM is derived as the minimizer of the associated risk. The extension of the
hinge loss draws on recent connections between risk minimization and
probability elicitation. These connections are generalized to cost-sensitive
classification, in a manner that guarantees consistency with the cost-sensitive
Bayes risk, and associated Bayes decision rule. This ensures that optimal
decision rules, under the new hinge loss, implement the Bayes-optimal
cost-sensitive classification boundary. Minimization of the new hinge loss is
shown to be a generalization of the classic SVM optimization problem, and can
be solved by identical procedures. The dual problem of CS-SVM is carefully
scrutinized by means of regularization theory and sensitivity analysis, and the
CS-SVM algorithm is substantiated. The proposed algorithm is also extended to
cost-sensitive learning with example dependent costs. The minimum cost
sensitive risk is proposed as the performance measure and is connected to ROC
analysis through vector optimization. The resulting algorithm avoids the
shortcomings of previous approaches to cost-sensitive SVM design, and is shown
to have superior experimental performance on a large number of cost sensitive
and imbalanced datasets.
|
1212.1002 | Stochastic Models of Misinformation Distribution in Online Social
Networks | cs.SI physics.soc-ph | This report contains results of an experimental study of the distribution of
misinformation in online social networks (OSNs). We consider the classification
of the topologies of OSNs and analyze the parameters identified in order to
relate the topology of a real network with one of the classes. We propose an
algorithm for conducting a search for the percolation cluster in the social
graph.
|
1212.1037 | Modeling Movements in Oil, Gold, Forex and Market Indices using Search
Volume Index and Twitter Sentiments | cs.CE cs.SI q-fin.GN | Study of the forecasting models using large scale microblog discussions and
the search behavior data can provide a good insight for better understanding
the market movements. In this work we collected a dataset of 2 million tweets
and search volume index (SVI from Google) for a period of June 2010 to
September 2011. We study a comprehensive set of causative relationships and
develop a unified modeling approach for various market
securities like equity (Dow Jones Industrial Average-DJIA and NASDAQ-100),
commodity markets (oil and gold) and Euro Forex rates. We also investigate the
lagged and statistically causative relations of Twitter sentiments developed
during active trading days and market inactive days in combination with the
search behavior of the public before any change in the prices/indices. Our
results show the extent of lagged significance, with correlation values of up
to 0.82 between search volumes and the gold price in USD. We find weekly
accuracy in direction (up and down prediction) of up to 94.3% for DJIA and 90%
for NASDAQ-100, with a significant reduction in mean average percentage error for all the
forecasting models.
|
1212.1046 | Latency Bounding by Trading off Consistency in NoSQL Store: A Staging
and Stepwise Approach | cs.DB cs.DC | Latency is a key service factor for user satisfaction. Consistency is in a
trade-off relation with operation latency in distributed and replicated
scenarios. Existing NoSQL stores guarantee either strong or weak consistency,
but none provides the best consistency attainable under a given response latency. In this
paper, we introduce dConssandra, a NoSQL store enabling users to specify
latency bounds for data access operations. dConssandra dynamically bounds data
access latency by trading off replica consistency. dConssandra is based on
Cassandra. In comparison to Cassandra's implementation, dConssandra has a
staged replication strategy enabling synchronous or asynchronous replication on
demand. The main idea to bound latency by trading off consistency is to
decompose the replication process into minute steps and bound latency by
executing only a subset of these steps. dConssandra also implements a different
in-memory storage architecture to support the above features. Experimental
results for dConssandra over an actual cluster demonstrate that (1) the actual
response latency is bounded by the given latency constraint; (2) greater write
latency bounds lead to a lower latency in reading the latest value; and, (3)
greater read latency bounds lead to the return of more recently written values.
|
1212.1061 | Study of a Market Model with Conservative Exchanges on Complex Networks | physics.soc-ph cs.SI q-fin.GN | Many models of market dynamics make use of the idea of conservative wealth
exchanges among economic agents. A few years ago an exchange model using
extremal dynamics was developed and a very interesting result was obtained: a
self-generated minimum wealth or poverty line. On the other hand, the wealth
distribution exhibited an exponential shape as a function of the square of the
wealth. These results have been obtained both when considering exchanges
between nearest neighbors and in a mean-field scheme. In the present paper we study the
effect of distributing the agents on a complex network. We have considered
archetypical complex networks: Erd\H{o}s-R\'enyi random networks and scale-free
networks. The presence of a poverty line with finite wealth is preserved but
spatial correlations are important, particularly between the degree of the node
and the wealth. We present a detailed study of the correlations, as well as the
changes in the Gini coefficient, that measures the inequality, as a function of
the type and average degree of the considered networks.
|
1212.1068 | Spectral properties of Google matrix of Wikipedia and other networks | cs.IR cs.SI physics.soc-ph | We study the properties of eigenvalues and eigenvectors of the Google matrix
of the Wikipedia articles hyperlink network and other real networks. With the
help of the Arnoldi method we analyze the distribution of eigenvalues in the
complex plane and show that eigenstates with significant eigenvalue modulus are
located on well defined network communities. We also show that the correlator
between PageRank and CheiRank vectors distinguishes different organizations of
information flow on BBC and Le Monde web sites.
|
1212.1073 | Kernel Estimation from Salient Structure for Robust Motion Deblurring | cs.CV | Blind image deblurring algorithms have been improving steadily in the past
years. Most state-of-the-art algorithms, however, still cannot perform
perfectly in challenging cases, especially in large blur setting. In this
paper, we focus on how to obtain a good kernel estimate from a single blurred
image based on the image structure. We found that image details caused by
blurring could adversely affect the kernel estimation, especially when the blur
kernel is large. One effective way to eliminate these details is to apply an
image denoising model based on Total Variation (TV). First, we developed a
novel method for computing image structures based on the TV model, such that
the structures undermining the kernel estimation will be removed. Second, to
mitigate the possible adverse effect of salient edges and improve the
robustness of kernel estimation, we applied a gradient selection method. Third,
we proposed a novel kernel estimation method, which is capable of preserving
the continuity and sparsity of the kernel and reducing the noises. Finally, we
developed an adaptive weighted spatial prior, for the purpose of preserving
sharp edges in latent image restoration. The effectiveness of our method is
demonstrated by experiments on various kinds of challenging examples.
|
1212.1098 | Extremes of Error Exponents | cs.IT math.IT | This paper determines the range of feasible values of standard error
exponents for binary-input memoryless symmetric channels of fixed capacity $C$
and shows that extremes are attained by the binary symmetric and the binary
erasure channel. The proof technique also provides analogous extremes for other
quantities related to Gallager's $E_0$ function, such as the cutoff rate, the
Bhattacharyya parameter, and the channel dispersion.
|
1212.1100 | Making Early Predictions of the Accuracy of Machine Learning
Applications | cs.LG cs.AI stat.ML | The accuracy of machine learning systems is a widely studied research topic.
Established techniques such as cross-validation predict the accuracy on unseen
data of the classifier produced by applying a given learning method to a given
training data set. However, they do not predict whether incurring the cost of
obtaining more data and undergoing further training will lead to higher
accuracy. In this paper we investigate techniques for making such early
predictions. We note that when a machine learning algorithm is presented with a
training set the classifier produced, and hence its error, will depend on the
characteristics of the algorithm, on the training set's size, and also on its
specific composition. In particular we hypothesise that if a number of
classifiers are produced, and their observed error is decomposed into bias and
variance terms, then although these components may behave differently, their
behaviour may be predictable.
We test our hypothesis by building models that, given a measurement taken
from the classifier created from a limited number of samples, predict the
values that would be measured from the classifier produced when the full data
set is presented. We create separate models for bias, variance and total error.
Our models are built from the results of applying ten different machine
learning algorithms to a range of data sets, and tested with "unseen"
algorithms and datasets. We analyse the results for various numbers of initial
training samples, and total dataset sizes. Results show that our predictions
are very highly correlated with the values observed after undertaking the extra
training. Finally we consider the more complex case where an ensemble of
heterogeneous classifiers is trained, and show how we can accurately estimate
an upper bound on the accuracy achievable after further training.
|
1212.1107 | Twitter Sentiment Analysis: How To Hedge Your Bets In The Stock Markets | cs.CE | The emerging interest of trading companies and hedge funds in mining the social web
has created new avenues for intelligent systems that make use of public opinion
in driving investment decisions. It is well accepted that at high frequency
trading, investors track memes rising in microblogging forums to account for
public behavior as an important feature when making short-term investment
decisions. We investigate the complex relationship between tweet board
literature (bullishness, volume, agreement, etc.) and the financial
market instruments (like volatility, trading volume and stock prices). We have
analyzed Twitter sentiments for more than 4 million tweets between June 2010
and July 2011 for DJIA, NASDAQ-100 and 11 other big cap technological stocks.
Our results show high correlation (up to 0.88 for returns) between stock prices
and Twitter sentiments. Further, using Granger causality analysis, we have
validated that the movement of stock prices and indices are greatly affected in
the short term by Twitter discussions. Finally, we have implemented Expert
Model Mining System (EMMS) to demonstrate that our forecasted returns give a
high value of R-square (0.952) with low Maximum Absolute Percentage Error
(MaxAPE) of 1.76% for Dow Jones Industrial Average (DJIA). We introduce a novel
way to make use of market monitoring elements derived from public mood to
retain a portfolio within limited risk state (highly improved hedging bets)
during typical market conditions.
|
1212.1108 | On the Convergence Properties of Optimal AdaBoost | cs.LG cs.AI stat.ML | AdaBoost is one of the most popular ML algorithms. It is simple to implement
and often found very effective by practitioners, while still being
mathematically elegant and theoretically sound. AdaBoost's interesting behavior
in practice still puzzles the ML community. We address the algorithm's
stability and establish multiple convergence properties of "Optimal AdaBoost,"
a term coined by Rudin, Daubechies, and Schapire in 2004. We prove, in a
reasonably strong computational sense, the almost universal existence of time
averages, and with that, the convergence of the classifier itself, its
generalization error, and its resulting margins, among many other objects, for
fixed data sets under arguably reasonable conditions. Specifically, we frame
Optimal AdaBoost as a dynamical system and, employing tools from ergodic
theory, prove that, under the condition that Optimal AdaBoost eventually has no
ties for the best weak classifier, a condition for which we provide empirical
evidence from high dimensional real-world datasets, the algorithm's update
behaves like a continuous map. We provide constructive proofs of several
arbitrarily accurate approximations of Optimal AdaBoost; prove that they
exhibit certain cycling behavior in finite time, and that the resulting
dynamical system is ergodic; and establish sufficient conditions for the same
to hold for the actual Optimal-AdaBoost update. We believe that our results
provide reasonably strong evidence for the affirmative answer to two open
conjectures, at least from a broad computational-theory perspective: AdaBoost
always cycles and is an ergodic dynamical system. We present empirical evidence
that cycles are hard to detect while time averages stabilize quickly. Our
results ground future convergence-rate analysis and may help optimize
generalization ability and alleviate a practitioner's burden of deciding how
long to run the algorithm.
|
1212.1115 | Energy-efficient transmission for wireless energy harvesting nodes | cs.IT math.IT | Energy harvesting is increasingly gaining importance as a means to charge
battery powered devices such as sensor nodes. Efficient transmission strategies
must be developed for Wireless Energy Harvesting Nodes (WEHNs) that take into
account both the availability of energy and data in the node. We consider a
scenario where data and energy packets arrive at the node and where the time
instants and amounts of the packets are known (offline approach). In this
paper, the best data transmission strategy is found for a finite battery
capacity WEHN that has to fulfill some Quality of Service (QoS) constraints, as
well as the energy and data causality constraints. As a result of our analysis,
we can state that losing energy due to overflows of the battery is inefficient
unless there is no more data to transmit and that the problem may not have a
feasible solution. Finally, an algorithm that computes the data transmission
curve minimizing the total transmission time that satisfies the aforementioned
constraints has been developed.
|
1212.1131 | Using Wikipedia to Boost SVD Recommender Systems | cs.LG cs.IR stat.ML | Singular Value Decomposition (SVD) has been used successfully in recent years
in the area of recommender systems. In this paper we present how this model can
be extended to consider both user ratings and information from Wikipedia. By
mapping items to Wikipedia pages and quantifying their similarity, we are able
to use this information in order to improve recommendation accuracy, especially
when the sparsity is high. Another advantage of the proposed approach is the
fact that it can be easily integrated into any other SVD implementation,
regardless of additional parameters that may have been added to it. Preliminary
experimental results on the MovieLens dataset are encouraging.
|
1212.1139 | Efficient Majority-Logic Decoding of Short-Length Reed--Muller Codes at
Information Positions | cs.IT cs.DM cs.ET math.CO math.IT | Short-length Reed--Muller codes under majority-logic decoding are of
particular importance for efficient hardware implementations in real-time and
embedded systems. This paper significantly improves Chen's two-step
majority-logic decoding method for binary Reed--Muller codes $\text{RM}(r,m)$,
$r \leq m/2$, if, assuming systematic encoding, only errors at information
positions are to be corrected. Some general results on the minimal number of
majority gates are presented that are particularly good for short codes.
Specifically, with its importance in applications as a 3-error-correcting,
self-dual code, the smallest non-trivial example, $\text{RM}(2,5)$ of dimension
16 and length 32, is investigated in detail. Further, the decoding complexity
of our procedure is compared with that of Chen's decoding algorithm for various
Reed--Muller codes up to length $2^{10}$.
|
1212.1143 | Multiscale Markov Decision Problems: Compression, Solution, and Transfer
Learning | cs.AI cs.SY math.OC stat.ML | Many problems in sequential decision making and stochastic control often have
natural multiscale structure: sub-tasks are assembled together to accomplish
complex goals. Systematically inferring and leveraging hierarchical structure,
particularly beyond a single level of abstraction, has remained a longstanding
challenge. We describe a fast multiscale procedure for repeatedly compressing,
or homogenizing, Markov decision processes (MDPs), wherein a hierarchy of
sub-problems at different scales is automatically determined. Coarsened MDPs
are themselves independent, deterministic MDPs, and may be solved using
existing algorithms. The multiscale representation delivered by this procedure
decouples sub-tasks from each other and can lead to substantial improvements in
convergence rates both locally within sub-problems and globally across
sub-problems, yielding significant computational savings. A second fundamental
aspect of this work is that these multiscale decompositions yield new transfer
opportunities across different problems, where solutions of sub-tasks at
different levels of the hierarchy may be amenable to transfer to new problems.
Localized transfer of policies and potential operators at arbitrary scales is
emphasized. Finally, we demonstrate compression and transfer in a collection of
illustrative domains, including examples involving discrete and continuous
state spaces.
|
1212.1180 | On Some Integrated Approaches to Inference | stat.ML cs.LG | We present arguments for the formulation of a unified approach to different
standard continuous inference methods from partial information. It is claimed
that an explicit partition of information into a priori (prior knowledge) and a
posteriori information (data) is an important way of standardizing inference
approaches so that they can be compared on a normative scale, and so that
notions of optimal algorithms become farther-reaching. The inference methods
considered include neural network approaches, information-based complexity, and
Monte Carlo, spline, and regularization methods. The model is an extension of
currently used continuous complexity models, with a class of algorithms in the
form of optimization methods, in which an optimization functional (involving
the data) is minimized. This extends the family of current approaches in
continuous complexity theory, which include the use of interpolatory algorithms
in worst and average case settings.
|
1212.1185 | Semidefinite programming for permutation codes | math.CO cs.IT math.IT | We initiate the study of the Terwilliger algebra and related semidefinite
programming techniques for the conjugacy scheme of the symmetric group
Sym$(n)$. In particular, we compute orbits of ordered pairs on Sym$(n)$ acted
upon by conjugation and inversion, explore a block diagonalization of the
associated algebra, and obtain improved upper bounds on the size $M(n,d)$ of
permutation codes of lengths up to 7. For instance, these techniques detect the
nonexistence of the projective plane of order six via $M(6,5)<30$ and yield a
new best bound $M(7,4) \le 535$ for a challenging open case. Each of these
represents an improvement on earlier Delsarte linear programming results.
|
1212.1187 | Compressed Sensing Recoverability In Imaging Modalities | cs.IT math.IT | The paper introduces a framework for the recoverability analysis in
compressive sensing for imaging applications such as CI cameras, rapid MRI and
coded apertures. This is done using the fact that the Spherical Section
Property (SSP) of a sensing matrix provides a lower bound for the unique sparse
recovery condition. The lower bound is evaluated for different sampling
paradigms adopted from the aforementioned imaging modalities. In particular, a
platform is provided to analyze the well-posedness of sub-sampling patterns
commonly used in practical scenarios. The effectiveness of the various designed
patterns for sparse image recovery is studied through numerical experiments.
|
1212.1192 | Using external sources of bilingual information for on-the-fly word
alignment | cs.CL | In this paper we present a new and simple language-independent method for
word-alignment based on the use of external sources of bilingual information
such as machine translation systems. We show that the few parameters of the
aligner can be trained on a very small corpus, which leads to results
comparable to those obtained by the state-of-the-art tool GIZA++ in terms of
precision. Regarding other metrics, such as alignment error rate or F-measure,
the parametric aligner, when trained on a very small gold-standard (450 pairs
of sentences), provides results comparable to those produced by GIZA++ when
trained on an in-domain corpus of around 10,000 pairs of sentences.
Furthermore, the results obtained indicate that the training is
domain-independent, which enables the use of the trained aligner 'on the fly'
on any new pair of sentences.
|
1212.1198 | Lattice Coding for the Two-way Two-relay Channel | cs.IT math.IT | Lattice coding techniques may be used to derive achievable rate regions which
outperform known independent, identically distributed (i.i.d.) random codes in
multi-source relay networks and in particular the two-way relay channel. Gains
stem from the ability to decode the sum of codewords (or messages) using
lattice codes at higher rates than possible with i.i.d. random codes. Here we
develop a novel lattice coding scheme for the Two-way Two-relay Channel: 1 <->
2 <-> 3 <-> 4, where Nodes 1 and 4 simultaneously communicate with each other
through two relay nodes 2 and 3. Each node only communicates with its
neighboring nodes. The key technical contribution is the lattice-based
achievability strategy, where each relay is able to remove the noise while
decoding the sum of several signals in a Block Markov strategy and then
re-encode the signal into another lattice codeword using the so-called
"Re-distribution Transform". This allows nodes further down the line to again
decode sums of lattice codewords. This transform is central to improving the
achievable rates, and ensures that the messages traveling in each of the two
directions fully utilize the relay's power, even under asymmetric channel
conditions. All decoders are lattice decoders and only a single nested lattice
codebook pair is needed. The symmetric rate achieved by the proposed lattice
coding scheme is within 0.5 log 3 bits/s/Hz of the symmetric rate capacity.
|
1212.1223 | Throughput Analysis of Primary and Secondary Networks in a Shared IEEE
802.11 System | cs.NI cs.IT math.IT | In this paper, we analyze the coexistence of a primary and a secondary
(cognitive) network when both networks use the IEEE 802.11 based distributed
coordination function for medium access control. Specifically, we consider the
problem of channel capture by a secondary network that uses spectrum sensing to
determine the availability of the channel, and its impact on the primary
throughput. We integrate the notion of transmission slots in Bianchi's Markov
model with the physical time slots, to derive the transmission probability of
the secondary network as a function of its scan duration. This is used to
obtain analytical expressions for the throughput achievable by the primary and
secondary networks. Our analysis considers both saturated and unsaturated
networks. By performing a numerical search, the secondary network parameters
are selected to maximize its throughput for a given level of protection of the
primary network throughput. The theoretical expressions are validated using
extensive simulations carried out in the Network Simulator 2. Our results
provide critical insights into the performance and robustness of different
schemes for medium access by the secondary network. In particular, we find that
channel capture by the secondary network does not significantly impact the
primary throughput, and that simply increasing the secondary contention window
size is only marginally inferior to silent-period based methods in terms of its
throughput performance.
|
1212.1224 | Random load fluctuations and collapse probability of a power system
operating near codimension 1 saddle-node bifurcation | physics.soc-ph cs.SY stat.AP | For a power system operating in the vicinity of the power transfer limit of
its transmission system, the effect of stochastic fluctuations of power loads
can become critical, as a sufficiently strong fluctuation may activate voltage
instability and lead to a large-scale collapse of the system. Considering the
effect of these stochastic fluctuations near a codimension 1 saddle-node
bifurcation, we explicitly calculate the autocorrelation function of the state
vector and show how its behavior explains the phenomenon of critical
slowing-down often observed for power systems on the threshold of blackout. We
also estimate the collapse probability/mean clearing time for the power system
and construct a new indicator function signaling the proximity to a large scale
collapse. The new indicator function is easy to estimate in real time using PMU
data feeds as well as SCADA information about fluctuations of power load on the
nodes of the power grid. We discuss control strategies leading to the
minimization of the collapse probability.
|
1212.1245 | Distributed Adaptive Networks: A Graphical Evolutionary Game-Theoretic
View | cs.GT cs.LG | Distributed adaptive filtering has been considered as an effective approach
for data processing and estimation over distributed networks. Most existing
distributed adaptive filtering algorithms focus on designing different
information diffusion rules, regardless of the natural evolutionary
characteristics of a distributed network. In this paper, we study the adaptive
network from the game theoretic perspective and formulate the distributed
adaptive filtering problem as a graphical evolutionary game. With the proposed
formulation, the nodes in the network are regarded as players and the local
combination of estimation information from different neighbors is regarded as
the selection of different strategies. We show that this graphical evolutionary game
framework is very general and can unify the existing adaptive network
algorithms. Based on this framework, as examples, we further propose two
error-aware adaptive filtering algorithms. Moreover, we use graphical
evolutionary game theory to analyze the information diffusion process over the
adaptive networks and evolutionarily stable strategy of the system. Finally,
simulation results are shown to verify the effectiveness of our analysis and
proposed methods.
|
1212.1269 | Approximate Dynamic Programming via Sum of Squares Programming | math.OC cs.SY | We describe an approximate dynamic programming method for stochastic control
problems on infinite state and input spaces. The optimal value function is
approximated by a linear combination of basis functions with coefficients as
decision variables. By relaxing the Bellman equation to an inequality, one
obtains a linear program in the basis coefficients with an infinite set of
constraints. We show that a recently introduced method, which obtains convex
quadratic value function approximations, can be extended to higher order
polynomial approximations via sum of squares programming techniques. An
approximate value function can then be computed offline by solving a
semidefinite program, without having to sample the infinite constraint. The
policy is evaluated online by solving a polynomial optimization problem, which
also turns out to be convex in some cases. We experimentally validate the
method on an autonomous helicopter testbed using a 10-dimensional helicopter
model.
|
1212.1283 | A Tractable Framework for Exact Probability of Node Isolation and
Minimum Node Degree Distribution in Finite Multi-hop Networks | cs.IT cs.NI math.IT | This paper presents a tractable analytical framework for the exact
calculation of probability of node isolation and minimum node degree
distribution when $N$ sensor nodes are independently and uniformly distributed
inside a finite square region. The proposed framework can accurately account
for the boundary effects by partitioning the square into subregions, based on
the transmission range and the node location. We show that for each subregion,
the probability that a random node falls inside a disk centered at an arbitrary
node located in that subregion can be expressed analytically in closed-form.
Using the results for the different subregions, we obtain the exact probability
of node isolation and minimum node degree distribution that serves as an upper
bound for the probability of $k$-connectivity. Our theoretical framework is
validated by comparison with the simulation results and shows that the minimum
node degree distribution serves as a tight upper bound for the probability of
$k$-connectivity. The proposed framework provides a very useful tool to
accurately account for the boundary effects in the design of finite wireless
networks.
|
1212.1296 | Distributed Model Predictive Consensus via the Alternating Direction
Method of Multipliers | math.OC cs.SY | We propose a distributed optimization method for solving a distributed model
predictive consensus problem. The goal is to design a distributed controller
for a network of dynamical systems to optimize a coupled objective function
while respecting state and input constraints. The distributed optimization
method is an augmented Lagrangian method called the Alternating Direction
Method of Multipliers (ADMM), which was introduced in the 1970s but has seen a
recent resurgence in the context of dramatic increases in computing power and
the development of widely available distributed computing platforms. The method
is applied to position and velocity consensus in a network of double
integrators. We find that a few tens of ADMM iterations yield closed-loop
performance near what is achieved by solving the optimization problem
centrally. Furthermore, the use of recent code generation techniques for
solving local subproblems yields fast overall computation times.
|
1212.1298 | On Abelian Group Representability of Finite Groups | math.GR cs.IT math.IT | A set of quasi-uniform random variables $X_1,...,X_n$ may be generated from a
finite group $G$ and $n$ of its subgroups, with the corresponding entropic
vector depending on the subgroup structure of $G$. It is known that the set of
entropic vectors obtained by considering arbitrary finite groups is much richer
than the one provided just by abelian groups. In this paper, we start to
investigate in more detail different families of non-abelian groups with
respect to the entropic vectors they yield. In particular, we address the
question of whether a given non-abelian group $G$ and some fixed subgroups
$G_1,...,G_n$ end up giving the same entropic vector as some abelian group $A$
with subgroups $A_1,...,A_n$, in which case we say that $(A, A_1,..., A_n)$
represents $(G, G_1, ..., G_n)$. If for any choice of subgroups $G_1,...,G_n$,
there exists some abelian group $A$ which represents $G$, we refer to $G$ as
being abelian (group) representable for $n$. We completely characterize
dihedral, quasi-dihedral and dicyclic groups with respect to their abelian
representability, as well as the case when $n=2$, for which we show a group is
abelian representable if and only if it is nilpotent. This problem is motivated
by understanding non-linear coding strategies for network coding, and network
information theory capacity regions.
|
1212.1313 | Autonomous Navigation by Robust Scan Matching Technique | cs.CV cs.AI | For effective autonomous navigation, estimation of the robot's pose is
essential at every sampling time. For computing an accurate estimate, odometric
error needs to be reduced with the help of data from external sensors. In this
work, a technique has been developed for accurate pose estimation of a mobile
robot using laser range data. The technique is robust to noisy data, which may
contain a considerable number of outliers. A grey image is formed from the
laser range data, and key points are extracted from this image by the Harris
corner detector. Key points from consecutive data sets are matched, while
outliers are rejected by the RANSAC method. The robot state is computed from
the correspondence between the two sets of key points. Finally, the optimal
robot state is estimated by an Extended Kalman Filter. The technique has been
applied to an operational robot in a laboratory environment to show its
robustness in the presence of noisy sensor data. The performance of this new
technique has been compared with that of the conventional ICP method. Through
this method, effective and accurate navigation has been achieved even in the
presence of substantial noise in the sensor data, at the cost of a small amount
of additional computational complexity.
|
1212.1329 | Automatic Detection of Texture Defects Using Texture-Periodicity and
Gabor Wavelets | cs.CV | In this paper, we propose a machine vision algorithm for automatically
detecting defects in textures belonging to 16 out of 17 wallpaper groups using
texture-periodicity and a family of Gabor wavelets. Input defective images are
subjected to Gabor wavelet transformation at multiple scales and orientations,
and a resultant image is obtained using the L2 norm. The resultant image is
split into several periodic blocks, and the energy of each block is used as
a feature space to automatically identify defective and defect-free blocks
using Ward's hierarchical clustering. Experiments on defective fabric images of
three major wallpaper groups, namely, pmm, p2 and p4m, show that the proposed
method is robust in finding fabric defects without human intervention and can
be used for automatic defect detection in fabric industries.
|
1212.1340 | Spatial Modulation in Zero-Padded Single Carrier Communication | cs.IT math.IT | In this paper, we consider the Spatial Modulation (SM) system in a frequency
selective channel under single carrier (SC) communication scenario and propose
zero-padding instead of cyclic prefix considered in the existing literature. We
show that the zero-padded single carrier (ZP-SC) SM system offers full
multipath diversity under maximum-likelihood (ML) detection, unlike the cyclic
prefixed SM system. Further, we show that the order of ML decoding complexity
in the proposed ZP-SC SM system is independent of the frame length and depends
only on the number of multipath links between the transmitter and the receiver.
Thus, we show that zero-padding in the SC SM system has a two-fold advantage
over cyclic prefixing: 1) it gives full multipath diversity, and 2) it offers
relatively low ML decoding complexity. Furthermore, we extend the partial
interference cancellation receiver (PIC-R) proposed by Guo and Xia for the
decoding of STBCs in order to convert the ZP-SC system into a set of
flat-fading subsystems. We show that the transmission of any full rank STBC
over these subsystems achieves full transmit, receive as well as multipath
diversity under PIC-R. With the aid of this extended PIC-R, we show that the
ZP-SC SM system achieves receive and multipath diversity with a decoding
complexity the same as that of the SM system in the flat-fading scenario.
|
1212.1360 | Physics inspired algorithms for (co)homology computation | cs.CE math.GT | The issue of computing (co)homology generators of a cell complex is gaining a
pivotal role in various branches of science. While this issue can be rigorously
solved in polynomial time, it is still overly demanding for large scale
problems. Drawing inspiration from low-frequency electrodynamics, this paper
presents a physics inspired algorithm for first cohomology group computations
on three-dimensional complexes. The algorithm is general and exhibits orders of
magnitude speed-up with respect to competing ones, allowing one to handle problems
not addressable before. In particular, when generators are employed in the
physical modeling of magneto-quasistatic problems, this algorithm solves one of
the most long-lasting problems in low-frequency computational electromagnetics.
In this case, the effectiveness of the algorithm and its ease of implementation
may be even improved by introducing the novel concept of \textit{lazy
cohomology generators}.
|
1212.1362 | Stochastic model for the vocabulary growth in natural languages | physics.soc-ph cs.CL physics.data-an | We propose a stochastic model for the number of different words in a given
database which incorporates the dependence on the database size and historical
changes. The main feature of our model is the existence of two different
classes of words: (i) a finite number of core-words which have higher frequency
and do not affect the probability of a new word to be used; and (ii) the
remaining virtually infinite number of noncore-words which have lower frequency
and once used reduce the probability of a new word to be used in the future.
Our model relies on a careful analysis of the Google Ngram database of books
published over the last centuries, and its main consequence is the generalization
of Zipf's and Heaps' laws to two scaling regimes. We confirm that these
generalizations yield the best simple description of the data among generic
descriptive models and that the two free parameters depend only on the language
but not on the database. From the point of view of our model the main change on
historical time scales is the composition of the specific words included in the
finite list of core-words, which we observe to decay exponentially in time with
a rate of approximately 30 words per year for English.
|
1212.1449 | Exploring associations between micro-level models of innovation
diffusion and emerging macro-level adoption patterns | cs.SI physics.soc-ph | A micro-level agent-based model of innovation diffusion was developed that
explicitly combines (a) an individual's perception of the advantages or
relative utility derived from adoption, and (b) social influence from members
of the individual's social network. The micro-model was used to simulate
macro-level diffusion patterns emerging from different configurations of
micro-model parameters. Micro-level simulation results matched very closely the
adoption patterns predicted by the widely-used Bass macro-level model (Bass,
1969). For a portion of the domain, results from micro-simulations were
consistent with aggregate-level adoption patterns reported in the literature.
Induced Bass macro-level parameters p and q responded to changes in
micro-parameters: (1) they increased with the number of innovators and with the
rate at which innovators are introduced; (2) they increased with the
probability of rewiring in small-world networks, as the characteristic path
length decreases; and (3) an increase in the overall perceived utility of an
innovation caused a corresponding increase in the induced p and q values.
Understanding micro-to-macro
linkages can inform the design and assessment of marketing interventions on
micro-variables - or processes related to them - to enhance adoption of future
products or technologies.
|
1212.1464 | Structure and Dynamics of Information Pathways in Online Media | cs.SI cs.DS cs.IR physics.soc-ph | Diffusion of information, spread of rumors and infectious diseases are all
instances of stochastic processes that occur over the edges of an underlying
network. Many times networks over which contagions spread are unobserved, and
such networks are often dynamic and change over time. In this paper, we
investigate the problem of inferring dynamic networks based on information
diffusion data. We assume there is an unobserved dynamic network that changes
over time, while we observe the results of a dynamic process spreading over the
edges of the network. The task then is to infer the edges and the dynamics of
the underlying network.
We develop an on-line algorithm that relies on stochastic convex optimization
to efficiently solve the dynamic network inference problem. We apply our
algorithm to information diffusion among 3.3 million mainstream media and blog
sites and experiment with more than 179 million different pieces of information
spreading over the network in a one year period. We study the evolution of
information pathways in the online media space and find interesting insights.
Information pathways for general recurrent topics are more stable across time
than for on-going news events. Clusters of news media sites and blogs often
emerge and vanish in a matter of days for on-going news events. Major social
movements and events involving the civil population, such as the Libyan civil
war or the Syrian uprising, lead to an increased number of information pathways
among blogs, as well as an overall increase in the network centrality of blogs and
social media sites.
|
1212.1469 | mqr-tree: A 2-dimensional Spatial Access Method | cs.DB | In this paper, we propose the mqr-tree, a two-dimensional spatial access
method that organizes spatial objects in a two-dimensional node based on
their spatial relationships. Previously proposed spatial access methods that
attempt to maintain spatial relationships between objects in their structures
are limited in their incorporation of existing one-dimensional spatial access
methods, or have lower space utilization in their nodes, and higher tree height,
overcoverage and overlap than is necessary. The mqr-tree utilizes a node
organization, set of spatial relationship rules and insertion strategy in order
to gain significant improvements in overlap and overcoverage. In addition,
other desirable properties are identified as a result of the chosen node
organization and insertion strategies. In particular, zero overlap is achieved
when the mqr-tree is used to index point data. A comparison of the mqr-tree
insertion strategy versus the R-tree shows significant improvements in overlap
and overcoverage, with comparable space utilization. In addition, a comparison
of region searching shows that the mqr-tree achieves a lower number of disk
accesses in many cases.
|
1212.1478 | The Clustering of Author's Texts of English Fiction in the Vector Space
of Semantic Fields | cs.CL cs.DL cs.IR | The clustering of text documents in the vector space of semantic fields and
in the semantic space with orthogonal basis has been analysed. It is shown that
using the vector space model with the basis of semantic fields is effective in
the cluster analysis algorithms of author's texts in English fiction. The
analysis of the author's texts distribution in cluster structure showed the
presence of the areas of semantic space that represent the idiolects
of individual authors. SVD factorization of the semantic fields matrix makes it
possible to reduce significantly the dimension of the semantic space in the
cluster analysis of author's texts.
|
1212.1496 | Excess risk bounds for multitask learning with trace norm regularization | stat.ML cs.LG | Trace norm regularization is a popular method of multitask learning. We give
excess risk bounds with explicit dependence on the number of tasks, the number
of examples per task and properties of the data distribution. The bounds are
independent of the dimension of the input space, which may be infinite as in
the case of reproducing kernel Hilbert spaces. A byproduct of the proof are
bounds on the expected norm of sums of random positive semidefinite matrices
with subexponential moments.
|
1212.1521 | Bounds on mean cycle time in acyclic fork-join queueing networks | math.OC cs.SY | Simple lower and upper bounds on mean cycle time in stochastic acyclic
fork-join networks are derived using the $(\max,+)$-algebra approach. The
behaviour of the bounds under various assumptions concerning the service times
in the networks is discussed, and related numerical examples are presented.
|
1212.1522 | Mechanism Design for Fair Division | cs.GT cs.DS cs.MA | We revisit the classic problem of fair division from a mechanism design
perspective, using {\em Proportional Fairness} as a benchmark. In particular,
we aim to allocate a collection of divisible items to a set of agents while
incentivizing the agents to be truthful in reporting their valuations. For the
very large class of homogeneous valuations, we design a truthful mechanism that
provides {\em every agent} with at least a $1/e\approx 0.368$ fraction of her
Proportionally Fair valuation. To complement this result, we show that no
truthful mechanism can guarantee more than a $0.5$ fraction, even for the
restricted class of additive linear valuations. We also propose another
mechanism for additive linear valuations that works really well when every item
is highly demanded. To guarantee truthfulness, our mechanisms discard a
carefully chosen fraction of the allocated resources; we conclude by uncovering
interesting connections between our mechanisms and known mechanisms that use
money instead.
|
1212.1524 | Layer-wise learning of deep generative models | cs.NE cs.LG stat.ML | When using deep, multi-layered architectures to build generative models of
data, it is difficult to train all layers at once. We propose a layer-wise
training procedure admitting a performance guarantee compared to the global
optimum. It is based on an optimistic proxy of future performance, the best
latent marginal. We interpret auto-encoders in this setting as generative
models, by showing that they train a lower bound of this criterion. We test the
new learning procedure against a state of the art method (stacked RBMs), and
find it to improve performance. Both theory and experiments highlight the
importance, when training deep architectures, of using an inference model (from
data to hidden variables) richer than the generative model (from hidden
variables to data).
|
1212.1527 | Learning Mixtures of Arbitrary Distributions over Large Discrete Domains | cs.LG cs.DS | We give an algorithm for learning a mixture of {\em unstructured}
distributions. This problem arises in various unsupervised learning scenarios,
for example in learning {\em topic models} from a corpus of documents spanning
several topics. We show how to learn the constituents of a mixture of $k$
arbitrary distributions over a large discrete domain $[n]=\{1,2,\dots,n\}$ and
the mixture weights, using $O(n\polylog n)$ samples. (In the topic-model
learning setting, the mixture constituents correspond to the topic
distributions.) This task is information-theoretically impossible for $k>1$
under the usual sampling process from a mixture distribution. However, there
are situations (such as the above-mentioned topic model case) in which each
sample point consists of several observations from the same mixture
constituent. This number of observations, which we call the {\em "sampling
aperture"}, is a crucial parameter of the problem. We obtain the {\em first}
bounds for this mixture-learning problem {\em without imposing any assumptions
on the mixture constituents.} We show that efficient learning is possible
exactly at the information-theoretically least-possible aperture of $2k-1$.
Thus, we achieve near-optimal dependence on $n$ and optimal aperture. While the
sample-size required by our algorithm depends exponentially on $k$, we prove
that such a dependence is {\em unavoidable} when one considers general
mixtures. A sequence of tools contribute to the algorithm, such as
concentration results for random matrices, dimension reduction, moment
estimations, and sensitivity analysis.
|
1212.1570 | A simple method for decision making in robocup soccer simulation 3d
environment | cs.AI cs.RO | In this paper, new hierarchical hybrid fuzzy-crisp methods for decision making
and action selection of an agent in the soccer simulation 3D environment are
presented. First, the skills of an agent are introduced, implemented and
classified in two layers: the basic skills and the high-level skills. In the
second layer, a two-phase mechanism for decision making is introduced. In phase
one, some useful methods are implemented which check the agent's situation for
performing the required skills. In the next phase, the team strategy, team
formation, the agent's role and the agent's positioning system are introduced.
A fuzzy logic approach is employed to recognize the team strategy and,
furthermore, to tell the player the best position to move to. Finally, we
evaluated the implemented algorithm in the RoboCup Soccer Simulation 3D
environment, and the results showed the efficiency of the introduced
methodology.
|
1212.1603 | Model Reduction using a Frequency-Limited H2-Cost | cs.SY math.DS | We propose a method for model reduction on a given frequency range, without
the use of input and output filter weights. The method uses a nonlinear
optimization approach to minimize a frequency-limited H2-like cost function.
An important contribution in the paper is the derivation of the gradient of
the proposed cost function. Having a closed-form expression for the gradient,
derived with care so that it is computationally efficient to evaluate, enables
us to use off-the-shelf optimization software to solve the optimization
problem.
|
1212.1611 | Nonlinearity of quartic rotation symmetric Boolean functions | cs.IT math.CO math.IT | The nonlinearity of rotation symmetric Boolean functions is an important topic
in cryptography. Let $e\ge 1$ be any given integer. In this paper, we
investigate the following question: Is the nonlinearity of the quartic rotation
symmetric Boolean function generated by the monomial $x_0x_ex_{2e}x_{3e}$ equal
to its weight? We introduce some new simple sub-functions and develop a new
technique to obtain several recursive formulas. Then we use these recursive
formulas to show that the nonlinearity of the quartic rotation symmetric
Boolean function generated by the monomial $x_0x_ex_{2e}x_{3e}$ is the same as
its weight. So we answer the above question affirmatively. Finally, we
conjecture that if $l\ge 4$ is an integer, then the nonlinearity of the
rotation symmetric Boolean function generated by the monomial
$x_0x_ex_{2e}...x_{le}$ equals its weight.
|
1212.1617 | Similarity of Polygonal Curves in the Presence of Outliers | cs.CG cs.CV cs.GR | The Fr\'{e}chet distance is a well studied and commonly used measure to
capture the similarity of polygonal curves. Unfortunately, it exhibits a high
sensitivity to the presence of outliers. Since the presence of outliers is a
frequently occurring phenomenon in practice, a robust variant of Fr\'{e}chet
distance is required which absorbs outliers. We study such a variant here. In
this modified variant, our objective is to minimize the length of subcurves of
two polygonal curves that need to be ignored (MinEx problem), or alternately,
maximize the length of subcurves that are preserved (MaxIn problem), to achieve
a given Fr\'{e}chet distance. An exact solution to one problem would imply an
exact solution to the other problem. However, we show that these problems are
not solvable by radicals over $\mathbb{Q}$ and that the degree of the
polynomial equations involved is unbounded in general. This motivates the
search for approximate solutions. We present an algorithm, which approximates,
for a given input parameter $\delta$, optimal solutions for the \MinEx\ and
\MaxIn\ problems up to an additive approximation error $\delta$ times the
length of the input curves. The resulting running time is upper bounded by
$\mathcal{O} \left(\frac{n^3}{\delta} \log \left(\frac{n}{\delta}
\right)\right)$, where $n$ is the complexity of the input polygonal curves.
|
1212.1625 | Testing the AgreementMaker System in the Anatomy Task of OAEI 2012 | cs.IR cs.AI | The AgreementMaker system was the leading system in the anatomy task of the
Ontology Alignment Evaluation Initiative (OAEI) competition in 2011. While
AgreementMaker did not compete in OAEI 2012, here we report on its performance
in the 2012 anatomy task, using the same configurations of AgreementMaker
submitted to OAEI 2011. Additionally, we test AgreementMaker using an
updated version of the UBERON ontology as a mediating ontology, and otherwise
identical configurations. AgreementMaker achieved an F-measure of 91.8% with
the 2011 configurations, and an F-measure of 92.2% with the updated UBERON
ontology. Thus, AgreementMaker would have been the second best system had it
competed in the anatomy task of OAEI 2012, and only 0.1% below the F-measure of
the best system.
|
1212.1629 | Modeling for Control of Symmetric Aerial Vehicles Subjected to
Aerodynamic Forces | cs.SY | This paper participates in the development of a unified approach to the
control of aerial vehicles with extended flight envelopes. More precisely,
modeling for control purposes of a class of thrust-propelled aerial vehicles
subjected to lift and drag aerodynamic forces is addressed assuming a
rotational symmetry of the vehicle's shape about the thrust force axis. A
condition upon aerodynamic characteristics that allows one to recast the
control problem into the simpler case of a spherical vehicle is pointed out.
Beside showing how to adapt nonlinear controllers developed for this latter
case, the paper extends a previous work by the authors in two directions.
First, the 3D case is addressed whereas only motions in a single vertical plane
were considered. Secondly, the family of models of aerodynamic forces for which
the aforementioned transformation holds is enlarged.
|
1212.1633 | Inferring Attitude in Online Social Networks Based On Quadratic
Correlation | cs.SI physics.soc-ph | The structure of an online social network in most cases cannot be described
just by links between its members. We study online social networks, in which
members may have certain attitude, positive or negative toward each other, and
so the network consists of a mixture of both positive and negative
relationships. Our goal is to predict the sign of a given relationship based on
the evidences provided in the current snapshot of the network. More precisely,
using machine learning techniques we develop a model that after being trained
on a particular network predicts the sign of an unknown or hidden link. The
model uses relationships and influences from peers as evidences for the guess,
however, the set of peers used is not predefined but rather learned during the
training process. We use quadratic correlation between peer members to train
the predictor. The model is tested on popular online datasets such as Epinions,
Slashdot, and Wikipedia. In many cases it shows almost perfect prediction
accuracy. Moreover, our model can also be efficiently updated as the
underlying social network evolves.
|
1212.1638 | Achieving Optimal Throughput and Near-Optimal Asymptotic Delay
Performance in Multi-Channel Wireless Networks with Low Complexity: A
Practical Greedy Scheduling Policy | cs.NI cs.IT cs.PF math.IT | In this paper, we focus on the scheduling problem in multi-channel wireless
networks, e.g., the downlink of a single cell in fourth generation (4G)
OFDM-based cellular networks. Our goal is to design practical scheduling
policies that can achieve provably good performance in terms of both throughput
and delay, at a low complexity. While a class of $O(n^{2.5} \log n)$-complexity
hybrid scheduling policies are recently developed to guarantee both
rate-function delay optimality (in the many-channel many-user asymptotic
regime) and throughput optimality (in the general non-asymptotic setting),
their practical complexity is typically high. To address this issue, we develop
a simple greedy policy called Delay-based Server-Side-Greedy (D-SSG) with a
lower complexity of $2n^2+2n$, and rigorously prove that D-SSG not only achieves
throughput optimality, but also guarantees near-optimal asymptotic delay
performance. Specifically, we show that the rate-function attained by D-SSG for
any delay-violation threshold $b$, is no smaller than the maximum achievable
rate-function by any scheduling policy for threshold $b-1$. Thus, we are able
to achieve a reduction in complexity (from $O(n^{2.5} \log n)$ of the hybrid
policies to $2n^2 + 2n$) with a minimal drop in the delay performance. More
importantly, in practice, D-SSG generally has a substantially lower complexity
than the hybrid policies that typically have a large constant factor hidden in
the $O(\cdot)$ notation. Finally, we conduct numerical simulations to validate
our theoretical results in various scenarios. The simulation results show that
D-SSG not only guarantees a near-optimal rate-function, but also empirically is
virtually indistinguishable from delay-optimal policies.
|
1212.1684 | Assessing the Bias in Communication Networks Sampled from Twitter | physics.soc-ph cs.SI | We collect and analyse messages exchanged in Twitter using two of the
platform's publicly available APIs (the search and stream specifications). We
assess the differences between the two samples, and compare the networks of
communication reconstructed from them. The empirical context is given by
political protests taking place in May 2012: we track online communication
around these protests for the period of one month, and reconstruct the network
of mentions and re-tweets according to the two samples. We find that the search
API over-represents the more central users and does not offer an accurate
picture of peripheral activity; we also find that the bias is greater for the
network of mentions. We discuss the implications of this bias for the study of
diffusion dynamics and collective action in the digital era, and advocate the
need for more uniform sampling procedures in the study of online communication.
|
1212.1703 | Non-Systematic Complex Number RS Coded OFDM by Unique Word Prefix | cs.IT math.IT | In this paper we expand our recently introduced concept of UW-OFDM (unique
word orthogonal frequency division multiplexing). In UW-OFDM the cyclic
prefixes (CPs) are replaced by deterministic sequences, the so-called unique
words (UWs). The UWs are generated by appropriately loading a set of redundant
subcarriers. In this way, a systematic complex-number Reed-Solomon (RS) code
construction is introduced quite naturally, because an RS code may be
defined as the set of vectors, for which a block of successive zeros occurs in
the other domain w.r.t. a discrete Fourier transform. (For a fixed block
different to zero, i.e., a UW, a coset code of an RS code is generated.) A
remaining problem in the original systematic coded UW-OFDM concept is the fact
that the redundant subcarrier symbols disproportionately contribute to the mean
OFDM symbol energy. In this paper we introduce the concept of non-systematic
coded UW-OFDM, where the redundancy is no longer allocated to dedicated
subcarriers, but distributed over all subcarriers. We derive optimum complex
valued code generator matrices matched to the BLUE (best linear unbiased
estimator) and to the LMMSE (linear minimum mean square error) data estimator,
respectively. With the help of simulations we highlight the advantageous
spectral properties and the superior BER (bit error ratio) performance of
non-systematic coded UW-OFDM compared to systematic coded UW-OFDM as well as to
CP-OFDM in AWGN (additive white Gaussian noise) and in frequency selective
environments.
|
1212.1707 | Lossy Compression via Sparse Linear Regression: Computationally
Efficient Encoding and Decoding | cs.IT math.IT stat.ML | We propose computationally efficient encoders and decoders for lossy
compression using a Sparse Regression Code. The codebook is defined by a design
matrix and codewords are structured linear combinations of columns of this
matrix. The proposed encoding algorithm sequentially chooses columns of the
design matrix to successively approximate the source sequence. It is shown to
achieve the optimal distortion-rate function for i.i.d. Gaussian sources under
the squared-error distortion criterion. For a given rate, the parameters of the
design matrix can be varied to trade off distortion performance with encoding
complexity. An example of such a trade-off as a function of the block length n
is the following. With computational resource (space or time) per source sample
of O((n/\log n)^2), for a fixed distortion-level above the Gaussian
distortion-rate function, the probability of excess distortion decays
exponentially in n. The Sparse Regression Code is robust in the following
sense: for any ergodic source, the proposed encoder achieves the optimal
distortion-rate function of an i.i.d. Gaussian source with the same variance.
Simulations show that the encoder has good empirical performance, especially at
low and moderate rates.
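The successive-approximation encoding described above can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the section structure (L sections of M columns, one column chosen per section), the fixed coefficients `c`, and all names are assumptions.

```python
import numpy as np

def greedy_sparc_encode(x, A, L, M, c):
    """Greedily pick one column per section of the design matrix A to
    successively approximate the source sequence x.

    A has L*M columns split into L sections of M columns; the codeword is
    sum_l c[l] * A[:, chosen[l]]. (Sketch only; coefficients and section
    layout are illustrative assumptions.)
    """
    r = x.copy()                               # residual to approximate
    chosen = []
    for l in range(L):
        section = A[:, l * M:(l + 1) * M]
        j = int(np.argmax(section.T @ r))      # column most aligned with residual
        chosen.append(l * M + j)
        r = r - c[l] * section[:, j]           # successive refinement step
    return chosen, x - r                       # chosen indices, reconstruction
```

The encoder's complexity is governed by the matrix size, which is how the design-matrix parameters trade distortion against encoding complexity.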
|
1212.1709 | Evolution of the most common English words and phrases over the
centuries | physics.soc-ph cs.CL cs.DL | By determining which were the most common English words and phrases since the
beginning of the 16th century, we obtain a unique large-scale view of the
evolution of written text. We find that the most common words and phrases in
any given year had a much shorter popularity lifespan in the 16th than they had
in the 20th century. By measuring how their usage propagated across the years,
we show that for the past two centuries the process has been governed by linear
preferential attachment. Along with the steady growth of the English lexicon,
this provides an empirical explanation for the ubiquity of Zipf's law in
language statistics and confirms that writing, although undoubtedly an
expression of art and skill, is not immune to the same influences of
self-organization that are known to regulate processes as diverse as the making
of new friends and World Wide Web growth.
|
1212.1710 | The information and its observer: external and internal information
processes, information cooperation, and the origin of the observer intellect | nlin.AO cs.IT math.IT | The aim is to establish formal principles for the origin of information and
for the information process that creates an information observer, which
self-creates information through interactive observations. The interactive
phenomenon creates the Yes-No actions of information Bits in its information
observer. Information emerges from an interacting random field of Kolmogorov
probabilities, which links Kolmogorov 0-1 law probabilities and Bayesian
probabilities observing a Markov diffusion process through probabilistic 0-1
impulses. Each No-0 action cuts the maximum of an impulse's minimal entropy,
while the following Yes-1 action transfers this maximum between impulses,
performing the dual principle of converting process entropy to information.
Merging Yes-No actions generate a microprocess within a bordered impulse,
producing a Bit with free information when the microprocess probability
approaches 1. Interacting Bits memorize free information, which attracts
multiple Bits whose moving macroprocess self-joins triplet macrounits.
Memorized information binds the reversible microprocess with the irreversible
macroprocess. The observation converts the cutting entropy to information
macrounits. Macrounits logically self-organize information networks (INs),
encoding the units in geometrical structures that enclose a triplet code.
Multiple INs bind their ending triplets, enclosing the observer's information
cognition and intelligence. The observer's cognition assembles common units
through multiple attraction and resonances, forming an IN triplet hierarchy
that accepts only units recognized by each IN node. The maximal number of
accepted triplet levels in multiple INs measures the observer's maximum
comparative information intelligence. The observation process carries
probabilistic and certain wave functions, which self-organize the spatial
hierarchical structures. These information regularities create integral logic
and intelligence that self-request needed information.
|
1212.1735 | Towards Design of System Hierarchy (research survey) | math.OC cs.AI cs.NI cs.SY | The paper addresses design/building frameworks for some kinds of tree-like
and hierarchical structures of systems. The following approaches are examined:
(1) expert-based procedures, (2) hierarchical clustering; (3) spanning problems
(e.g., minimum spanning tree, minimum Steiner tree, maximum leaf spanning tree
problem); (4) design of organizational 'optimal' hierarchies; (5) design of
multi-layer (e.g., three-layer) k-connected network; (6) modification of
hierarchies or networks: (i) modification of tree via condensing of neighbor
nodes, (ii) hotlink assignment, (iii) transformation of tree into Steiner tree,
(iv) restructuring, i.e., modification of an initial structural solution into
the solution that is closest to a goal solution while taking into account the
cost of the modification. Combinatorial optimization problems are considered as
basic ones (e.g., classification, knapsack problem, multiple choice problem,
assignment problem). Some numerical examples illustrate the suggested problems
and solving frameworks.
|
1212.1740 | A Graph Partitioning Approach to Predict Patterns in Lateral Inhibition
Systems | math.DS cs.SY | We analyze pattern formation on a network of cells where each cell inhibits
its neighbors through cell-to-cell contact signaling. The network is modeled as
an interconnection of identical dynamical subsystems each of which represents
the signaling reactions in a cell. We search for steady state patterns by
partitioning the graph vertices into disjoint classes, where the cells in the
same class have the same final fate. To prove the existence of steady states
with this structure, we use results from monotone systems theory. Finally, we
analyze the stability of these patterns with a block decomposition based on the
graph partition.
|
1212.1744 | Computational Capabilities of Random Automata Networks for Reservoir
Computing | nlin.AO cond-mat.dis-nn cs.NE | This paper underscores the conjecture that intrinsic computation is maximal
in systems at the "edge of chaos." We study the relationship between dynamics
and computational capability in Random Boolean Networks (RBN) for Reservoir
Computing (RC). RC is a computational paradigm in which a trained readout layer
interprets the dynamics of an excitable component (called the reservoir) that
is perturbed by external input. The reservoir is often implemented as a
homogeneous recurrent neural network, but there has been little investigation
into the properties of reservoirs that are discrete and heterogeneous. Random
Boolean networks are generic and heterogeneous dynamical systems and here we
use them as the reservoir. An RBN is typically a closed system; to use it as a
reservoir we extend it with an input layer. As a consequence of perturbation,
the RBN does not necessarily fall into an attractor. Computational capability
in RC arises from a trade-off between separability and fading memory of inputs.
We find the balance of these properties predictive of classification power and
optimal at critical connectivity. These results are relevant to the
construction of devices which exploit the intrinsic dynamics of complex
heterogeneous systems, such as biomolecular substrates.
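The reservoir substrate described above can be sketched as a random Boolean network with synchronous updates. The wiring and truth-table construction below, and all names, are illustrative assumptions; the paper's exact network statistics (e.g., how connectivity is distributed) may differ.

```python
import random

def make_rbn(n, k, seed=0):
    """Random Boolean network: n nodes, each reading k randomly chosen
    inputs through a random Boolean truth table. (Toy construction.)"""
    rng = random.Random(seed)
    wiring = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return wiring, tables

def step(state, wiring, tables):
    """Synchronous update: each node looks up its next value in its table,
    indexed by the current bits of its k inputs."""
    new = []
    for w, t in zip(wiring, tables):
        idx = 0
        for j in w:                     # pack the k input bits into an index
            idx = (idx << 1) | state[j]
        new.append(t[idx])
    return new
```

To use such a network as a reservoir in the sense described above, an input layer can be emulated by overwriting a subset of node states with external input bits before each call to `step`, so the network is continually perturbed rather than settling into an attractor.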
|
1212.1752 | Hybrid Optimized Back propagation Learning Algorithm For Multi-layer
Perceptron | cs.NE | Standard neural networks based on general back-propagation learning, using
the delta rule or gradient descent, suffer from serious drawbacks such as poor
optimization of the error-weight objective function, a low learning rate, and
instability. This paper introduces a hybrid supervised back-propagation
learning algorithm that applies the trust-region method of unconstrained
optimization to the error objective function using a quasi-Newton method. This
optimization leads to a more accurate weight-update system for minimizing the
learning error during the learning phase of a multi-layer perceptron
[13][14][15]. An augmented line search is used to find points that satisfy the
Wolfe conditions. The proposed hybrid back-propagation algorithm has strong
global convergence properties and is robust and efficient in practice.
|
1212.1798 | IK-PSO, PSO Inverse Kinematics Solver with Application to Biped Gait
Generation | cs.RO cs.AI | This paper describes a new approach to generating a simplified biped gait.
The approach combines classical dynamic modeling with an inverse kinematics
solver based on particle swarm optimization (PSO). First, an inverted pendulum
(IP) is used to obtain a simplified dynamic model of the robot and to compute
the target position of a key point in biped locomotion, the center of mass
(COM). The proposed algorithm, called IK-PSO (Inverse Kinematics PSO), returns
an inverse kinematics solution corresponding to that COM while respecting the
joint constraints. The inertia-weight PSO variant is used to generate a
feasible solution according to a stability-based fitness function and a set of
joint motion constraints. The method is applied successfully to leg motion
generation. Since it is based on a pre-calculated COM that satisfies biped
stability, the proposal also allows planning a walk, with application to a
small-size biped robot.
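The inertia-weight PSO variant mentioned above can be sketched generically as follows. All parameter values and names are illustrative assumptions, and IK-PSO's stability-based fitness over joint angles is replaced here by an arbitrary `fitness` argument over a box of bounds (standing in for joint limits).

```python
import random

def pso_minimize(fitness, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Inertia-weight PSO: each velocity blends momentum (w), attraction
    to the particle's personal best (c1), and attraction to the swarm
    best (c2). A generic sketch, not the paper's exact parameterization."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g_i = min(range(n_particles), key=lambda i: pbest_f[i])
    g, g_f = pbest[g_i][:], pbest_f[g_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clamp to limits
            f = fitness(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < g_f:
                    g, g_f = X[i][:], f
    return g, g_f
```

In an IK setting, `dim` would be the number of joint angles and `fitness` would penalize deviation of the resulting COM from its pre-calculated stable target.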
|
1212.1800 | Toward Intelligent Biped-Humanoids Gaits Generation | cs.RO | In this chapter we highlight our experimental studies on natural human
walking analysis and introduce a biologically inspired design for a simple
bipedal locomotion system for humanoid robots. Inspiration comes directly from
human walking analysis and from the mechanism and control of human muscles. A
hybrid algorithm for walking gait generation is then proposed as an innovative
alternative to the classical solving of kinematic and dynamic equations; the
gaits include knee, ankle and hip trajectories. The proposed algorithm is an
intelligent evolutionary method based on the particle swarm optimization
paradigm. This proposal can be used for small-size humanoid robots with a
knee, an ankle and a hip, and at least six degrees of freedom (DOF).
|
1212.1801 | Sequential Testing for Sparse Recovery | cs.IT math.IT | This paper studies sequential methods for recovery of sparse signals in high
dimensions. When compared to fixed sample size procedures, in the sparse
setting, sequential methods can result in a large reduction in the number of
samples needed for reliable signal support recovery. Starting with a lower
bound, we show any coordinate-wise sequential sampling procedure fails in the
high dimensional limit provided the average number of measurements per
dimension is less than log s/D(P_0||P_1), where s is the level of sparsity and
D(P_0||P_1) the Kullback-Leibler divergence between the underlying
distributions. A series of Sequential Probability Ratio Tests (SPRT) which
require complete knowledge of the underlying distributions is shown to achieve
this bound. Motivated by real world experiments and recent work in adaptive
sensing, we introduce a simple procedure termed Sequential Thresholding which
can be implemented when the underlying testing problem satisfies a monotone
likelihood ratio assumption. Sequential Thresholding guarantees exact support
recovery provided the average number of measurements per dimension grows faster
than log s/ D(P_0||P_1), achieving the lower bound. For comparison, we show any
non-sequential procedure fails provided the number of measurements grows at a
rate less than log n/D(P_1||P_0), where n is the total dimension of the
problem.
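Sequential Thresholding can be sketched as repeated one-sided tests that discard coordinates whose statistic falls below a threshold, so measurements quickly concentrate on candidate signal coordinates. The version below, with a generic per-coordinate sampling function, is an assumption-laden toy illustration, not the paper's exact procedure; all names and parameters are invented for the sketch.

```python
def sequential_thresholding(measure, n, passes, m_per_pass, thresh=0.0):
    """Keep only coordinates whose per-pass sample mean exceeds `thresh`
    in every pass. Under a monotone likelihood ratio, a one-sided
    threshold test per pass suffices; null coordinates are eliminated
    early, saving measurements. `measure(i)` draws one sample of
    coordinate i (illustrative interface)."""
    active = set(range(n))
    for _ in range(passes):
        survivors = set()
        for i in active:
            s = sum(measure(i) for _ in range(m_per_pass)) / m_per_pass
            if s > thresh:              # one-sided test for this pass
                survivors.add(i)
        active = survivors
    return active
```

Because most coordinates are null and each pass discards a large fraction of them, the average number of measurements per dimension stays close to `m_per_pass`, while signal coordinates are tested in every pass.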
|
1212.1819 | A fair comparison of many max-tree computation algorithms (Extended
version of the paper submitted to ISMM 2013) | cs.CV | With the development of connected filters over the last decade, many
algorithms have been proposed to compute the max-tree. The max-tree allows the
most advanced connected operators to be computed in a simple way. However, no
fair comparison of these algorithms has been proposed yet, and the choice of
one algorithm over another depends on many parameters. Since the need for fast
algorithms is obvious for production code, we present an in-depth comparison
of five algorithms and some of their variations within a unique framework.
Finally, a decision tree is proposed to help users choose the right algorithm
with respect to their data.
|