| id | title | categories | abstract |
|---|---|---|---|
1205.0986
|
Robot Navigation using Reinforcement Learning and Slow Feature Analysis
|
cs.AI cs.NE
|
The application of reinforcement learning algorithms to real-life problems
always bears the challenge of filtering the environmental state out of raw
sensor readings. While most approaches use heuristics, biology suggests that
there must exist an unsupervised method to construct such filters
automatically. Besides the extraction of environmental states, the filters have
to represent them in a fashion that supports modern reinforcement learning algorithms.
Many popular algorithms use a linear architecture, so one should aim at filters
that have good approximation properties in combination with linear functions.
This thesis proposes the unsupervised method slow feature analysis
(SFA) for this task. Presented with a random sequence of sensor readings, SFA
learns a set of filters. With growing model complexity and training examples,
the filters converge to trigonometric polynomial functions. These are
known to possess excellent approximation capabilities and should therefore
support the reinforcement algorithms well. We evaluate this claim on a robot.
The task is to learn a navigational control in a simple environment using the
least-squares policy iteration (LSPI) algorithm. The only accessible sensor is a
head-mounted video camera, but without meaningful filtering, video images are
not suited as LSPI input. We will show that filters learned by SFA, based on a
random walk video of the robot, allow the learned control to navigate
successfully in approximately 80% of the test trials.
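The "slowness" objective that SFA optimizes can be sketched with a toy example (an illustration only, not the thesis' full SFA, which also whitens the input and solves a generalized eigenproblem; the signals and names here are hypothetical):

```python
import math

def slowness(y):
    """Mean squared temporal difference of a signal after normalizing it to
    unit variance -- the quantity SFA minimizes over its learned filters."""
    n = len(y)
    mean = sum(v for v in y) / n
    var = sum((v - mean) ** 2 for v in y) / n
    y = [(v - mean) / math.sqrt(var) for v in y]
    return sum((y[t + 1] - y[t]) ** 2 for t in range(n - 1)) / (n - 1)

# Two candidate features computed from the same sensor stream:
ts = [i / 100 for i in range(500)]
slow_feature = [math.sin(2 * math.pi * 0.5 * t) for t in ts]   # 0.5 Hz
fast_feature = [math.sin(2 * math.pi * 10.0 * t) for t in ts]  # 10 Hz

# The low-frequency feature scores as "slower", so SFA would prefer it.
```

Ranking candidate features by this score is the selection principle; actual SFA finds the optimal linear filters in one eigendecomposition rather than by enumeration.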
|
1205.0997
|
Partial-MDS Codes and their Application to RAID Type of Architectures
|
cs.IT math.IT
|
A family of codes with a natural two-dimensional structure is presented,
inspired by an application to RAID-type architectures whose units are solid
state drives (SSDs). Arrays of SSDs behave differently from arrays of hard disk
drives (HDDs), since hard errors in sectors are common and traditional RAID
approaches (like RAID 5 or RAID 6) may be either insufficient or excessive. An
efficient solution to this problem is given by the new codes presented, called
partial-MDS (PMDS) codes.
|
1205.1010
|
Partisan Asymmetries in Online Political Activity
|
cs.SI cs.HC physics.soc-ph
|
We examine partisan differences in the behavior, communication patterns and
social interactions of more than 18,000 politically-active Twitter users to
produce evidence that points to changing levels of partisan engagement with the
American online political landscape. Analysis of a network defined by the
communication activity of these users in proximity to the 2010 midterm
congressional elections reveals a highly segregated, well clustered partisan
community structure. Using cluster membership as a high-fidelity (87% accuracy)
proxy for political affiliation, we characterize a wide range of differences in
the behavior, communication and social connectivity of left- and right-leaning
Twitter users. We find that in contrast to the online political dynamics of the
2008 campaign, right-leaning Twitter users exhibit greater levels of political
activity, a more tightly interconnected social structure, and a communication
network topology that facilitates the rapid and broad dissemination of
political information.
|
1205.1013
|
Sparse image reconstruction on the sphere: implications of a new
sampling theorem
|
cs.IT astro-ph.IM math.IT
|
We study the impact of sampling theorems on the fidelity of sparse image
reconstruction on the sphere. We discuss how a reduction in the number of
samples required to represent all information content of a band-limited signal
acts to improve the fidelity of sparse image reconstruction, through both the
dimensionality and sparsity of signals. To demonstrate this result we consider
a simple inpainting problem on the sphere and consider images sparse in the
magnitude of their gradient. We develop a framework for total variation (TV)
inpainting on the sphere, including fast methods to render the inpainting
problem computationally feasible at high-resolution. Recently a new sampling
theorem on the sphere was developed, reducing the required number of samples by
a factor of two for equiangular sampling schemes. Through numerical simulations
we verify the enhanced fidelity of sparse image reconstruction due to the more
efficient sampling of the sphere provided by the new sampling theorem.
|
1205.1053
|
Variable Selection for Latent Dirichlet Allocation
|
cs.LG stat.ML
|
In latent Dirichlet allocation (LDA), topics are multinomial distributions
over the entire vocabulary. However, the vocabulary usually contains many words
that are not relevant in forming the topics. We adopt a variable selection
method widely used in statistical modeling as a dimension reduction tool and
combine it with LDA. In this variable selection model for LDA (vsLDA), topics
are multinomial distributions over a subset of the vocabulary, and by excluding
words that are not informative for finding the latent topic structure of the
corpus, vsLDA finds topics that are more robust and discriminative. We compare
three models, vsLDA, LDA with symmetric priors, and LDA with asymmetric priors,
on heldout likelihood, MCMC chain consistency, and document classification. The
performance of vsLDA is better than symmetric LDA for likelihood and
classification, better than asymmetric LDA for consistency and classification,
and about the same in the other comparisons.
|
1205.1069
|
Asymptotic $L^4$ norm of polynomials derived from characters
|
math.NT cs.IT math.CO math.IT
|
Littlewood investigated polynomials with coefficients in $\{-1,1\}$
(Littlewood polynomials), to see how small their ratio of norms
$||f||_4/||f||_2$ on the unit circle can become as $deg(f)\to\infty$. A small
limit is equivalent to slow growth in the mean square autocorrelation of the
associated binary sequences of coefficients of the polynomials. The
autocorrelation problem for arrays and higher dimensional objects has also been
studied; it is the natural generalization to multivariable polynomials. Here we
find, for each $n > 1$, a family of $n$-variable Littlewood polynomials with
lower asymptotic $||f||_4/||f||_2$ than any known hitherto. We discover these
through a wide survey, infeasible with previous methods, of polynomials whose
coefficients come from finite field characters. This is the first time that the
lowest known asymptotic ratio of norms $||f||_4/||f||_2$ for multivariable
polynomials $f(z_1,...,z_n)$ is strictly less than what could be obtained by
using products $f_1(z_1)... f_n(z_n)$ of the best known univariate polynomials.
|
1205.1117
|
An Overview on Clustering Methods
|
cs.DS cs.DB
|
Clustering is a common technique for statistical data analysis, which is used
in many fields, including machine learning, data mining, pattern recognition,
image analysis and bioinformatics. Clustering is the process of grouping
similar objects into different groups or, more precisely, the partitioning of a
data set into subsets such that the data in each subset are similar according
to some defined distance measure. This paper covers clustering algorithms,
their benefits and applications, and concludes by discussing some limitations.
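As a minimal sketch of one widely used clustering algorithm (k-means via Lloyd's iteration, chosen here for illustration; the example data and function names are hypothetical, not from the paper):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and mean update."""
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Recompute each center as the mean of its assigned points.
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(col) / len(cl) for col in zip(*cl))
    return centers, clusters

pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]   # two well-separated blobs
centers, clusters = kmeans(pts, 2)
```

On well-separated data like this, the iteration recovers the two blobs regardless of the random initialization.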
|
1205.1125
|
Application Of Data Mining In Bioinformatics
|
cs.CE cs.DB
|
This article highlights some of the basic concepts of bioinformatics and data
mining. The major research areas of bioinformatics are outlined, and the
application of data mining in this domain is explained. The article also
discusses some of the current challenges and opportunities for data mining
in bioinformatics.
|
1205.1126
|
A Comprehensive Study of CRM through Data Mining Techniques
|
cs.DB
|
In today's competitive corporate world, the "customer retention"
strategy in Customer Relationship Management (CRM) is an increasingly pressing
issue. Data mining techniques play a vital role in better CRM. This paper
attempts to bring a new perspective by focusing on data mining
applications, opportunities and challenges in CRM. It covers topics such as
customer retention, customer services, risk assessment, fraud detection and
some of the data mining tools which are widely used in CRM.
|
1205.1143
|
Recommendation on Academic Networks using Direction Aware Citation
Analysis
|
cs.IR cs.DL
|
The literature search has always been an important part of academic
research. It greatly helps to improve the quality of the research process and
output, and increases the efficiency of researchers in terms of their novel
contribution to science. As the number of published papers increases every
year, a manual search becomes more exhausting, even with the help of today's
search engines, since they are not specialized for this task. In academics, two
relevant papers do not always have to share keywords, cite one another, or even
be in the same field. Although a well-known paper is usually easy prey in
such a hunt, relevant papers using a different terminology, especially recent
ones, are not obvious to the eye.
In this work, we propose paper recommendation algorithms by using the
citation information among papers. The proposed algorithms are direction aware
in the sense that they can be tuned to find either recent or traditional
papers. The algorithms require a set of papers as input and recommend a set of
related ones. If the user wants to give negative or positive feedback on the
suggested paper set, the recommendation is refined. The search process can be
easily guided in that sense by relevance feedback. We show that this slight
guidance helps the user to reach a desired paper in a more efficient way. We
adapt our models and algorithms also for the venue and reviewer recommendation
tasks. Accuracy of the models and algorithms is thoroughly evaluated by
comparison with multiple baselines and algorithms from the literature in terms
of several objectives specific to citation, venue, and reviewer recommendation
tasks. All of these algorithms are implemented within a publicly available
web-service framework (http://theadvisor.osu.edu/) which currently uses the
data from DBLP and CiteSeer to construct the proposed citation graph.
|
1205.1144
|
Rakeness in the design of Analog-to-Information Conversion of Sparse and
Localized Signals
|
cs.IT cs.CV math.IT
|
Design of Random Modulation Pre-Integration systems based on the
restricted-isometry property may be suboptimal when the energy of the signals
to be acquired is not evenly distributed, i.e. when they are both sparse and
localized. To counter this, we introduce an additional design criterion, that
we call rakeness, accounting for the amount of energy that the measurements
capture from the signal to be acquired. Hence, for localized signals a proper
system tuning increases the rakeness as well as the average SNR of the samples
used in its reconstruction. Yet, maximizing average SNR may go against the need
of capturing all the components that are potentially non-zero in a sparse
signal, i.e., against the restricted isometry requirement ensuring
reconstructability. What we propose is to administer the trade-off between
rakeness and restricted isometry in a statistical way by laying down an
optimization problem. The solution of such an optimization problem is the
statistic of the process generating the random waveforms onto which the signal
is projected to obtain the measurements. The formal definition of such a
problem is given, as well as its solution for signals that are localized either
in frequency or in a more generic domain. Sample applications, to ECG signals and
small images of printed letters and numbers, show that rakeness-based design
leads to non-negligible improvements in both cases.
|
1205.1173
|
Subset Typicality Lemmas and Improved Achievable Regions in
Multiterminal Source Coding
|
cs.IT math.IT
|
Consider the following information theoretic setup wherein independent
codebooks of N correlated random variables are generated according to their
respective marginals. The problem of determining the conditions on the rates of
codebooks to ensure the existence of at least one codeword tuple which is
jointly typical with respect to a given joint density (called the multivariate
covering lemma) has been studied fairly well and the associated rate regions
have found applications in several source coding scenarios. However, several
multiterminal source coding applications, such as the general multi-user
Gray-Wyner network, require joint typicality only within subsets of codewords
transmitted. Motivated by such applications, we ask ourselves the conditions on
the rates to ensure the existence of at least one codeword tuple which is
jointly typical within subsets according to given per subset joint densities.
This report focuses primarily on deriving a new achievable rate region for this
problem which strictly improves upon the direct extension of the multivariate
covering lemma that has been widely used in several earlier works.
Towards proving this result, we derive two important results called `subset
typicality lemmas' which can potentially have broader applicability in more
general scenarios beyond what is considered in this report. We finally apply
the results therein to derive a new achievable region for the general
multi-user Gray-Wyner network.
|
1205.1183
|
On the Complexity of Trial and Error
|
cs.CC cs.DS cs.LG
|
Motivated by certain applications from physics, biochemistry, economics, and
computer science, in which the objects under investigation are not accessible
because of various limitations, we propose a trial-and-error model to examine
algorithmic issues in such situations. Given a search problem with a hidden
input, we are asked to find a valid solution; to do so, we can propose
candidate solutions (trials) and use the observed violations (errors) to prepare
future proposals. In accordance with our motivating applications, we consider
the fairly broad class of constraint satisfaction problems, and assume that
errors are signaled by a verification oracle in the format of the index of a
violated constraint (with the content of the constraint still hidden).
Our discoveries are summarized as follows. On one hand, despite the seemingly
very little information provided by the verification oracle, efficient
algorithms do exist for a number of important problems. For the Nash, Core,
Stable Matching, and SAT problems, the unknown-input versions are as hard as
the corresponding known-input versions, up to a polynomial factor. We
further give almost tight bounds on the latter two problems' trial
complexities. On the other hand, there are problems whose complexities are
substantially increased in the unknown-input model. In particular, no
time-efficient algorithms exist (under standard hardness assumptions) for Graph
Isomorphism and Group Isomorphism problems. The tools used to achieve these
results include order theory, strong ellipsoid method, and some non-standard
reductions.
Our model investigates the value of information, and our results demonstrate
that the lack of input information can introduce various levels of extra
difficulty. The model exhibits intimate connections with (and we hope can also
serve as a useful supplement to) certain existing learning and complexity
theories.
|
1205.1190
|
An Approach For Robots To Deal With Objects
|
cs.RO
|
Understanding an object and its context is very important for robots when
dealing with objects to complete a mission. In this paper, an
Affordance-based Ontology (ABO) is proposed to help robots deal with
substantive and non-substantive objects. An ABO is a machine-understandable
representation of objects and their relationships, describing what an object is
related to and how. Using an ABO, when dealing with a substantive object, a robot
can understand the representation of the object and its relation to other
non-substantive objects. When the substantive object is not available, the
robot can reason about objects and their functions
to select a non-substantive object in order to complete the mission, such as
offering a raincoat or hat instead of getting stuck due to the unavailability of
the substantive object, e.g. an umbrella. The experiment is done in the Ubiquitous
Robotics Technology (u-RT) Space of the National Institute of Advanced Industrial
Science and Technology (AIST), Tsukuba, Japan.
|
1205.1225
|
Volumetric Mapping of Genus Zero Objects via Mass Preservation
|
cs.CG cs.CV
|
In this work, we present a technique to map any genus zero solid object onto
a hexahedral decomposition of a solid cube. This problem appears in many
applications ranging from finite element methods to visual tracking. From this,
one can then hopefully utilize the proposed technique for shape analysis,
registration, as well as other related computer graphics tasks. More
importantly, given that we seek to establish a one-to-one correspondence of an
input volume to that of a solid cube, our algorithm can naturally generate a
quality hexahedral mesh as an output. In addition, we constrain the mapping
itself to be volume preserving allowing for the possibility of further mesh
simplification. We demonstrate our method both qualitatively and quantitatively
on various 3D solid models.
|
1205.1240
|
Convex Relaxation for Combinatorial Penalties
|
stat.ML cs.LG
|
In this paper, we propose a unifying view of several recently proposed
structured sparsity-inducing norms. We consider the situation of a model
simultaneously (a) penalized by a set-function defined on the support of the
unknown parameter vector, which represents prior knowledge on supports, and (b)
regularized in Lp-norm. We show that the natural combinatorial optimization
problems obtained may be relaxed into convex optimization problems and
introduce a notion, the lower combinatorial envelope of a set-function, that
characterizes the tightness of our relaxations. We moreover establish links
with norms based on latent representations including the latent group Lasso and
block-coding, and with norms obtained from submodular functions.
|
1205.1242
|
Information Spectrum Approach to Overflow Probability of Variable-Length
Codes with Conditional Cost Function
|
cs.IT math.IT
|
Lossless variable-length source coding with unequal cost function is
considered for general sources. In this problem, the codeword cost instead of
codeword length is important. The infimum of average codeword cost has already
been determined for general sources. We consider the overflow probability of
codeword cost and determine the infimum of achievable overflow threshold. Our
analysis is based on information-spectrum methods and is hence valid
for general sources.
|
1205.1245
|
Sparse group lasso and high dimensional multinomial classification
|
stat.ML cs.LG stat.CO
|
The sparse group lasso optimization problem is solved using a coordinate
gradient descent algorithm. The algorithm is applicable to a broad class of
convex loss functions. Convergence of the algorithm is established, and the
algorithm is used to investigate the performance of the multinomial sparse
group lasso classifier. On three different real data examples the multinomial
group lasso clearly outperforms multinomial lasso in terms of achieved
classification error rate and in terms of including fewer features for the
classification. The run-time of our sparse group lasso implementation is of the
same order of magnitude as the multinomial lasso algorithm implemented in the R
package glmnet. Our implementation scales well with the problem size. One of
the high dimensional examples considered is a 50-class classification problem
with 10k features, which amounts to estimating 500k parameters. The
implementation is available as the R package msgl.
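The key computational step inside such coordinate descent solvers is the proximal operator of the sparse group lasso penalty, which can be sketched as follows (an illustration of the standard operator, not the msgl implementation; all names and numbers here are hypothetical):

```python
import math

def soft(x, t):
    """Scalar soft-thresholding: the prox of t * |x|."""
    return math.copysign(max(abs(x) - t, 0.0), x)

def prox_sparse_group_lasso(beta, groups, lam1, lam2):
    """Prox of lam1 * ||b||_1 + lam2 * sum_g ||b_g||_2: coordinate-wise
    soft-thresholding followed by group-wise shrinkage toward zero."""
    out = list(beta)
    for g in groups:
        v = [soft(out[i], lam1) for i in g]
        norm = math.sqrt(sum(x * x for x in v))
        scale = max(1.0 - lam2 / norm, 0.0) if norm > 0 else 0.0
        for i, x in zip(g, v):
            out[i] = scale * x   # a weak group is zeroed out entirely
    return out

b = [3.0, -0.5, 0.05, 0.0]
res = prox_sparse_group_lasso(b, [[0, 1], [2, 3]], lam1=0.1, lam2=0.5)
# The strong first group is merely shrunk; the weak second group vanishes.
```

This two-stage shrinkage is what produces sparsity both within and across groups, the property the classifier exploits to include fewer features.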
|
1205.1277
|
MacWilliams Identities for $m$-tuple Weight Enumerators
|
cs.IT math.CO math.IT
|
Since MacWilliams proved the original identity relating the Hamming weight
enumerator of a linear code to the weight enumerator of its dual code there
have been many different generalizations, leading to the development of
$m$-tuple support enumerators. We prove a generalization of theorems of Britz
and of Ray-Chaudhuri and Siap, which build on earlier work of Kl{\o}ve,
Shiromoto, Wan, and others. We then give illustrations of these $m$-tuple
weight enumerators.
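The original (single-weight, binary) MacWilliams identity can be checked numerically on a tiny code (a sanity-check illustration only, not the paper's m-tuple generalization; the example code is the [3,1] repetition code):

```python
from itertools import product

def weight_enum(code, n, x, y):
    """Hamming weight enumerator W_C(x, y) = sum over codewords of x^(n-w) y^w."""
    return sum(x ** (n - sum(c)) * y ** sum(c) for c in code)

n = 3
code = [(0, 0, 0), (1, 1, 1)]                 # the [3,1] binary repetition code
dual = [c for c in product([0, 1], repeat=n)  # its dual: all even-weight words
        if all(sum(a * b for a, b in zip(c, d)) % 2 == 0 for d in code)]

# MacWilliams identity: W_dual(x, y) = W_code(x + y, x - y) / |code|
lhs = weight_enum(dual, n, 3, 2)
rhs = weight_enum(code, n, 3 + 2, 3 - 2) // len(code)
```

Evaluating both sides at a point suffices here because the identity holds as a polynomial identity in x and y.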
|
1205.1287
|
Compressed Sensing for Energy-Efficient Wireless Telemonitoring of
Noninvasive Fetal ECG via Block Sparse Bayesian Learning
|
stat.ML cs.LG stat.AP
|
Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The
design of a telemonitoring system via a wireless body-area network with low
energy consumption for ambulatory use is highly desirable. As an emerging
technique, compressed sensing (CS) shows great promise in
compressing/reconstructing data with low energy consumption. However, due to
some specific characteristics of raw FECG recordings such as non-sparsity and
strong noise contamination, current CS algorithms generally fail in this
application.
This work proposes to use the block sparse Bayesian learning (BSBL) framework
to compress/reconstruct non-sparse raw FECG recordings. Experimental results
show that the framework can reconstruct the raw recordings with high quality.
Especially, the reconstruction does not destroy the interdependence relation
among the multichannel recordings. This ensures that the independent component
analysis decomposition of the reconstructed recordings has high fidelity.
Furthermore, the framework allows the use of a sparse binary sensing matrix
with much fewer nonzero entries to compress recordings. Particularly, each
column of the matrix can contain only two nonzero entries. This shows the
framework, compared to other algorithms such as current CS algorithms and
wavelet algorithms, can greatly reduce CPU execution time in the data
compression stage.
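The kind of sensing matrix described, binary with only two nonzero entries per column, can be sketched as follows (a hypothetical construction for illustration; the paper's actual matrix and FECG data are not reproduced here):

```python
import random

def sparse_binary_columns(m, n, k=2, seed=0):
    """A binary m x n sensing matrix with exactly k ones per column, stored as
    the list of nonzero row indices for each column."""
    rnd = random.Random(seed)
    return [rnd.sample(range(m), k) for _ in range(n)]

def compress(cols, x, m):
    """y = Phi @ x computed with additions only -- no multiplications,
    which is what makes the compression stage cheap on a body-area sensor."""
    y = [0.0] * m
    for xj, rows in zip(x, cols):
        for r in rows:
            y[r] += xj
    return y

n, m = 512, 128
cols = sparse_binary_columns(m, n)       # 2 nonzero entries per column
x = [float(i % 7) for i in range(n)]     # stand-in for a raw recording segment
y = compress(cols, x, m)                 # 4x fewer samples than x
```

Each input sample is touched exactly twice, so the energy cost of compression scales with 2n additions regardless of m.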
|
1205.1357
|
Detecting Spammers via Aggregated Historical Data Set
|
cs.CR cs.LG
|
The battle between email service providers and senders of mass unsolicited
emails (Spam) continues to gain traction. Vast numbers of Spam emails are sent
mainly from automatic botnets distributed over the world. One method for
mitigating Spam in a computationally efficient manner is fast and accurate
blacklisting of the senders. In this work we propose a new sender reputation
mechanism that is based on an aggregated historical data-set which encodes the
behavior of mail transfer agents over time. A historical data-set is created
from labeled logs of received emails. We use machine learning algorithms to
build a model that predicts the \emph{spammingness} of mail transfer agents in
the near future. The proposed mechanism is targeted mainly at large enterprises
and email service providers and can be used for updating both the black and the
white lists. We evaluate the proposed mechanism using 9.5M anonymized log
entries obtained from the biggest Internet service provider in Europe.
Experiments show that the proposed method detects more than 94% of the Spam emails
that escaped the blacklist (i.e., TPR), while having less than 0.5%
false alarms. Therefore, the effectiveness of the proposed method is much
higher than that of previously reported reputation mechanisms, which rely on email
logs. In addition, the proposed method, when used for updating both the black
and white lists, eliminated the need for automatic content inspection of 4 out
of 5 incoming emails, which resulted in a dramatic reduction in the filtering
computational load.
|
1205.1365
|
Image Enhancement with Statistical Estimation
|
cs.MM cs.CV
|
Contrast enhancement is an important area of research in image analysis.
Over the past decade, researchers have worked in this domain to develop efficient
and adequate algorithms. The proposed method enhances the contrast of an image
using a binarization method with the help of Maximum Likelihood Estimation (MLE).
The paper aims to enhance the image contrast of bimodal and multi-modal images.
The proposed methodology collects mathematical information retrieved from
the image. In this paper, we use a binarization method that generates the
desired histogram by separating image nodes, and produces the enhanced image
using histogram specification with the binarization method. The proposed method
shows an improvement in image contrast enhancement compared with other
methods.
|
1205.1366
|
Remote sensing via $\ell_1$ minimization
|
cs.IT math.IT math.NA math.PR
|
We consider the problem of detecting the locations of targets in the far
field by sending probing signals from an antenna array and recording the
reflected echoes. Drawing on key concepts from the area of compressive sensing,
we use an $\ell_1$-based regularization approach to solve this, in general
ill-posed, inverse scattering problem. As common in compressed sensing, we
exploit randomness, which in this context comes from choosing the antenna
locations at random. With $n$ antennas we obtain $n^2$ measurements of a vector
$x \in \mathbb{C}^{N}$ representing the target locations and reflectivities on a
discretized grid. It is common to assume that the scene $x$ is sparse due to a
limited number of targets. Under a natural condition on the mesh size of the
grid, we show that an $s$-sparse scene can be recovered via
$\ell_1$-minimization with high probability if $n^2 \geq C s \log^2(N)$. The
reconstruction is stable under noise and under passing from sparse to
approximately sparse vectors. Our theoretical findings are confirmed by
numerical simulations.
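One standard solver for such $\ell_1$-regularized recovery problems is iterative soft-thresholding (ISTA), sketched here in pure Python (an illustrative solver on a toy real-valued problem; the problem sizes, seed, and variable names are hypothetical and not the paper's setting):

```python
import math
import random

def ista(A, b, lam, iters=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    m, n = len(A), len(A[0])
    # Estimate the Lipschitz constant L = ||A||_2^2 by power iteration on A^T A.
    v = [1.0] * n
    for _ in range(50):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]
        nw = math.sqrt(sum(u * u for u in w))
        v = [u / nw for u in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    L = 1.1 * sum(u * u for u in Av)  # Rayleigh quotient plus a safety margin
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [math.copysign(max(abs(u) - lam / L, 0.0), u)
             for u in (xj - gj / L for xj, gj in zip(x, g))]
    return x

rnd = random.Random(1)
m, n = 20, 40
A = [[rnd.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
for idx, val in [(3, 2.0), (17, -1.5), (30, 1.0)]:
    x_true[idx] = val                  # a 3-sparse "scene"
b = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]
x_hat = ista(A, b, lam=0.1)
```

With 20 random measurements of a 3-sparse length-40 vector, the recovered support matches the true one, mirroring the sparse-recovery guarantee the abstract describes.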
|
1205.1389
|
A simpler derivation of the coding theorem
|
cs.IT math.IT
|
A simple proof for the Shannon coding theorem, using only the Markov
inequality, is presented. The technique is useful for didactic purposes, since
it does not require many preliminaries and the information density and mutual
information follow naturally in the proof. It may also be applicable to
situations where typicality is not natural.
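For reference, the single probabilistic tool the abstract invokes is Markov's inequality: for a nonnegative random variable $X$ and any $a > 0$,

```latex
\Pr[X \ge a] \le \frac{\mathbb{E}[X]}{a},
\qquad \text{since} \qquad
\mathbb{E}[X] \ge \mathbb{E}\!\left[X \, \mathbf{1}\{X \ge a\}\right] \ge a \, \Pr[X \ge a].
```

(The statement and one-line proof are standard; how the paper deploys it in the coding theorem is not reproduced here.)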
|
1205.1423
|
RIPless compressed sensing from anisotropic measurements
|
cs.IT math.IT
|
Compressed sensing is the art of reconstructing a sparse vector from its
inner products with respect to a small set of randomly chosen measurement
vectors. It is usually assumed that the ensemble of measurement vectors is in
isotropic position in the sense that the associated covariance matrix is
proportional to the identity matrix. In this paper, we establish bounds on the
number of required measurements in the anisotropic case, where the ensemble of
measurement vectors possesses a non-trivial covariance matrix. Essentially, we
find that the required sampling rate grows proportionally to the condition
number of the covariance matrix. In contrast to other recent contributions to
this problem, our arguments do not rely on any restricted isometry properties
(RIP's), but rather on ideas from convex geometry which have been
systematically studied in the theory of low-rank matrix recovery. This allows
for a simple argument and slightly improved bounds, but may lead to a worse
dependency on noise (which we do not consider in the present paper).
|
1205.1428
|
High Velocity Penetration/Perforation Using Coupled Smooth Particle
Hydrodynamics-Finite Element Method
|
cs.CE physics.flu-dyn
|
The finite element method (FEM) suffers from a serious mesh distortion problem
when used for high velocity impact analyses. The smooth particle hydrodynamics
(SPH) method is appropriate for this class of problems involving severe damages
but at considerable computational cost. It is beneficial if the latter is
adopted only in severely distorted regions and FEM further away. The coupled
smooth particle hydrodynamics - finite element method (SFM) has been adopted in
a commercial hydrocode LS-DYNA to study the perforation of Weldox 460E steel
and AA5083-H116 aluminum plates with varying thicknesses and various projectile
nose geometries including blunt, conical and ogival noses. Effects of the SPH
domain size and particle density are studied considering the friction effect
between the projectile and the target materials. The simulated residual
velocities and the ballistic limit velocities from the SFM agree well with the
published experimental data. The study shows that SFM is able to emulate the
same failure mechanisms of the steel and aluminum plates as observed in various
experimental investigations for initial impact velocity of 170 m/s and higher.
|
1205.1456
|
Dynamic Multi-Relational Chinese Restaurant Process for Analyzing
Influences on Users in Social Media
|
cs.SI cs.LG physics.soc-ph
|
We study the problem of analyzing influence of various factors affecting
individual messages posted in social media. The problem is challenging because
of various types of influences propagating through the social media network
that act simultaneously on any user. Additionally, the topic composition of the
influencing factors and the susceptibility of users to these influences evolve
over time. This problem has not been studied before, and off-the-shelf models are
unsuitable for this purpose. To capture the complex interplay of these various
factors, we propose a new non-parametric model called the Dynamic
Multi-Relational Chinese Restaurant Process. This accounts for the user network
for data generation and also allows the parameters to evolve over time.
Designing inference algorithms for this model suited for large scale
social-media data is another challenge. To this end, we propose a scalable and
multi-threaded inference algorithm based on online Gibbs Sampling. Extensive
evaluations on large-scale Twitter and Facebook data show that the extracted
topics when applied to authorship and commenting prediction outperform
state-of-the-art baselines. More importantly, our model produces valuable
insights on topic trends and user personality trends, beyond the capability of
existing approaches.
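The basic Chinese Restaurant Process underlying the model can be sketched as a sampler (the plain, static CRP only, for intuition; the paper's dynamic multi-relational extension is not reproduced, and the names and parameters here are hypothetical):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n customers from a Chinese Restaurant Process:
    customer i joins an occupied table with probability size/(i + alpha),
    or opens a new table with probability alpha/(i + alpha)."""
    rnd = random.Random(seed)
    tables, assignment = [], []
    for i in range(n):
        r = rnd.random() * (i + alpha)  # total unnormalized mass is i + alpha
        cum = 0.0
        for t, size in enumerate(tables):
            cum += size
            if r < cum:
                tables[t] += 1
                assignment.append(t)
                break
        else:
            tables.append(1)            # open a new table
            assignment.append(len(tables) - 1)
    return tables, assignment

tables, assignment = crp_partition(200, alpha=2.0)
```

The number of tables grows only logarithmically with n, which is what makes CRP-based models non-parametric: the number of clusters need not be fixed in advance.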
|
1205.1457
|
Efficient and reliable network tomography in heterogeneous networks
using BitTorrent broadcasts and clustering algorithms
|
cs.DC cs.NI cs.SI
|
In the area of network performance and discovery, network tomography focuses
on reconstructing network properties using only end-to-end measurements at the
application layer. One challenging problem in network tomography is
reconstructing available bandwidth along all links during multiple
source/multiple destination transmissions. The traditional measurement
procedures used for bandwidth tomography are extremely time consuming. We
propose a novel solution to this problem. Our method counts the fragments
exchanged during a BitTorrent broadcast. While this measurement has a high
level of randomness, it can be obtained very efficiently, and aggregated into a
reliable metric. This data is then analyzed with state-of-the-art algorithms,
which reliably reconstruct logical clusters of nodes inter-connected by high
bandwidth, as well as bottlenecks between these logical clusters. Our
experiments demonstrate that the proposed two-phase approach efficiently solves
the presented problem for a number of settings on a complex grid
infrastructure.
|
1205.1462
|
Almost Universal Hash Families are also Storage Enforcing
|
cs.IT cs.CC math.IT
|
We show that every almost universal hash function also has the storage
enforcement property. Almost universal hash functions have found numerous
applications and we show that this new storage enforcement property allows the
application of almost universal hash functions in a wide range of remote
verification tasks: (i) Proof of Secure Erasure (where we want to remotely
erase and securely update the code of a compromised machine with a memory-bounded
adversary), (ii) Proof of Ownership (where a storage server wants to check if a
client has the data it claims to have before giving access to deduplicated
data) and (iii) Data possession (where the client wants to verify whether the
remote storage server is storing its data). Specifically, the storage
enforcement guarantee in the classical data possession problem removes any
practical incentive for the storage server to cheat the client by saving on
storage space.
The proof of our result relies on a natural combination of Kolmogorov
Complexity and List Decoding. To the best of our knowledge, this is the first
work that combines these two techniques. We believe the newly introduced
storage enforcement property of almost universal hash functions will open
promising avenues of exciting research under the memory-bounded (bounded
storage) adversary model.
|
1205.1470
|
Random Hyperbolic Graphs: Degree Sequence and Clustering
|
math.CO cs.SI physics.soc-ph
|
In the last decades, the study of models for large real-world networks has
been a very popular and active area of research. A reasonable model should not
only replicate all the structural properties that are observed in real world
networks (for example, heavy tailed degree distributions, high clustering and
small diameter), but it should also be amenable to mathematical analysis. There
are plenty of models that succeed in the first task but are hard to analyze
rigorously. On the other hand, a multitude of proposed models, like classical
random graphs, can be studied mathematically, but fail in creating certain
aspects that are observed in real-world networks.
Recently, Papadopoulos, Krioukov, Boguna and Vahdat [INFOCOM'10] introduced a
random geometric graph model that is based on hyperbolic geometry. The authors
argued empirically and by some preliminary mathematical analysis that the
resulting graphs have many of the desired properties. Moreover, by computing
explicitly a maximum likelihood fit of the Internet graph, they demonstrated
impressively that this model is adequate for reproducing the structure of real
graphs with high accuracy.
In this work we initiate the rigorous study of random hyperbolic graphs. We
compute exact asymptotic expressions for the expected number of vertices of
degree k for all k up to the maximum degree and provide small probabilities for
large deviations. We also prove a constant lower bound for the clustering
coefficient. In particular, our findings confirm rigorously that the degree
sequence follows a power-law distribution with controllable exponent and that
the clustering is nonvanishing.
|
1205.1482
|
Risk estimation for matrix recovery with spectral regularization
|
math.OC cs.IT cs.LG math.IT math.ST stat.ML stat.TH
|
In this paper, we develop an approach to recursively estimate the quadratic
risk for matrix recovery problems regularized with spectral functions. Toward
this end, in the spirit of the SURE theory, a key step is to compute the (weak)
derivative and divergence of a solution with respect to the observations. As
such a solution is not available in closed form, but rather through a proximal
splitting algorithm, we propose to recursively compute the divergence from the
sequence of iterates. A second challenge that we resolve is the computation of
the (weak) derivative of the proximity operator of a spectral function. To show
the potential applicability of our approach, we exemplify it on a matrix
completion problem to objectively and automatically select the regularization
parameter.
|
1205.1483
|
Index Coding - An Interference Alignment Perspective
|
cs.IT math.CO math.IT
|
The index coding problem is studied from an interference alignment
perspective, providing new results as well as new insights into, and
generalizations of, previously known results. An equivalence is established
between multiple unicast index coding, where each message is desired by exactly
one receiver, and multiple groupcast index coding, where a message can be
desired by multiple receivers. This equivalence settles the heretofore open
question of the insufficiency of linear codes for the multiple unicast index
coding problem, by reduction to multiple groupcast settings where the question
has previously been answered. Necessary and sufficient conditions for the achievability of
rate half per message are shown to be a natural consequence of interference
alignment constraints, and generalizations to feasibility of rate
$\frac{1}{L+1}$ per message when each destination desires at least $L$
messages, are similarly obtained. Finally, capacity optimal solutions are
presented to a series of symmetric index coding problems inspired by the local
connectivity and local interference characteristics of wireless networks. The
solutions are based on vector linear coding.
|
1205.1496
|
Graph-based Learning with Unbalanced Clusters
|
stat.ML cs.LG
|
Graph construction is a crucial step in spectral clustering (SC) and
graph-based semi-supervised learning (SSL). Spectral methods applied on
standard graphs such as full-RBF, $\epsilon$-graphs and $k$-NN graphs can lead
to poor performance in the presence of proximal and unbalanced data. This is
because spectral methods based on minimizing RatioCut or normalized cut on
these graphs tend to put more importance on balancing cluster sizes over
reducing cut values. We propose a novel graph construction technique and show
that the RatioCut solution on this new graph is able to handle proximal and
unbalanced data. Our method is based on adaptively modulating the neighborhood
degrees in a $k$-NN graph, which tends to sparsify neighborhoods in low density
regions. Our method adapts to data with varying levels of unbalancedness and
can be naturally used for small cluster detection. We justify our ideas through
limit cut analysis. Unsupervised and semi-supervised experiments on synthetic
and real data sets demonstrate the superiority of our method.
|
1205.1505
|
Crossover phenomenon in the performance of an Internet search engine
|
cs.IR physics.data-an
|
In this work we explore the ability of the Google search engine to find
results for random N-letter strings. These random strings, dense over the set
of possible N-letter words, address the existence of typos, acronyms, and other
words without semantic meaning. Interestingly, we find that the probability of
finding such strings sharply drops from one to zero at Nc = 6. The behavior of
such an order parameter suggests the presence of a transition-like phenomenon in
the geometry of the search space. Furthermore, we define a susceptibility-like
parameter which reaches a maximum in the neighborhood of Nc, suggesting the presence
of criticality. We finally speculate on the possible connections to Ramsey
theory.
|
1205.1564
|
Characterizing Ranked Chinese Syllable-to-Character Mapping Spectrum: A
Bridge Between the Spoken and Written Chinese Language
|
cs.CL stat.AP
|
One important aspect of the relationship between spoken and written Chinese
is the ranked syllable-to-character mapping spectrum, which is the ranked list
of syllables by the number of characters that map to the syllable. Previously,
this spectrum was analyzed for more than 400 syllables, without distinguishing
the four intonations. In the current study, the spectrum with 1280 toned
syllables is fitted with a logarithmic function, a Beta rank function, and a
piecewise logarithmic function. Out of the three fitting functions, the
two-piece logarithmic function fits the data the best, both by the smallest sum
of squared errors (SSE) and by the lowest Akaike information criterion (AIC)
value. The Beta rank function is a close second. By sampling from a Poisson
distribution whose parameter value is chosen from the observed data, we
empirically estimate the p-value for testing the hypothesis that the two-piece
logarithmic function is better than the Beta rank function to be 0.16. For
practical purposes, the piecewise logarithmic function and the Beta rank
function can be considered a tie.
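The model comparison described above can be sketched numerically. The following is an illustrative stand-in, not the authors' code: it fits a one-piece logarithmic function y = a + b*ln(r) to synthetic rank data by ordinary least squares, scans interior breakpoints for a two-piece fit, and compares the fits by SSE and a Gaussian-error AIC. The synthetic data and the parameter count k fed to the AIC are assumptions.

```python
import math

def fit_log(ranks, ys):
    """OLS fit of y = a + b*ln(r); returns (a, b, sse)."""
    xs = [math.log(r) for r in ranks]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    sse = sum((y - (a + b * math.log(r))) ** 2 for r, y in zip(ranks, ys))
    return a, b, sse

def fit_two_piece(ranks, ys):
    """Best two-segment logarithmic fit over all interior breakpoints.
    Returns (breakpoint index, total SSE)."""
    best = None
    for cut in range(3, len(ranks) - 2):      # keep >= 3 points per segment
        _, _, s1 = fit_log(ranks[:cut], ys[:cut])
        _, _, s2 = fit_log(ranks[cut:], ys[cut:])
        if best is None or s1 + s2 < best[1]:
            best = (cut, s1 + s2)
    return best

def aic(sse, n, k):
    """Gaussian-error AIC: n*ln(SSE/n) + 2k, k = number of fitted parameters."""
    return n * math.log(sse / n) + 2 * k
```

On data that truly follows a two-piece logarithmic law, the two-piece fit should win on both SSE and AIC despite its extra parameters, mirroring the comparison reported in the abstract.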
|
1205.1580
|
Sharp recovery bounds for convex demixing, with applications
|
cs.IT math.IT
|
Demixing refers to the challenge of identifying two structured signals given
only the sum of the two signals and prior information about their structures.
Examples include the problem of separating a signal that is sparse with respect
to one basis from a signal that is sparse with respect to a second basis, and
the problem of decomposing an observed matrix into a low-rank matrix plus a
sparse matrix. This paper describes and analyzes a framework, based on convex
optimization, for solving these demixing problems, and many others. This work
introduces a randomized signal model which ensures that the two structures are
incoherent, i.e., generically oriented. For an observation from this model,
this approach identifies a summary statistic that reflects the complexity of a
particular signal. The difficulty of separating two structured, incoherent
signals depends only on the total complexity of the two structures. Some
applications include (i) demixing two signals that are sparse in mutually
incoherent bases; (ii) decoding spread-spectrum transmissions in the presence
of impulsive errors; and (iii) removing sparse corruptions from a low-rank
matrix. In each case, the theoretical analysis of the convex demixing method
closely matches its empirical behavior.
|
1205.1602
|
Indexing of Arabic documents automatically based on lexical analysis
|
cs.IR
|
The continuous information explosion through the Internet and all information
sources makes it necessary to perform all information processing activities
automatically, in a quick and reliable manner. In this paper, we propose and
implement a method to automatically create an index for books written in the
Arabic language. The process depends largely on text summarization and
abstraction to collect the main topics and statements in the book. The method
is evaluated in terms of accuracy and performance; results show that it can
effectively replace the effort of manually indexing books and documents, a
process that can be very useful in all information processing and retrieval
applications.
|
1205.1603
|
Parsing of Myanmar sentences with function tagging
|
cs.CL
|
This paper describes the use of Naive Bayes to address the task of assigning
function tags and context free grammar (CFG) to parse Myanmar sentences. Part
of the challenge of statistical function tagging for Myanmar sentences comes
from the fact that Myanmar has free-phrase-order and a complex morphological
system. Function tagging is a pre-processing step for parsing. In the task of
function tagging, we use the functional annotated corpus and tag Myanmar
sentences with correct segmentation, POS (part-of-speech) tagging and chunking
information. We propose Myanmar grammar rules and apply context free grammar
(CFG) to find out the parse tree of function tagged Myanmar sentences.
Experiments show that our analysis achieves a good result with parsing of
simple sentences and three types of complex sentences.
|
1205.1609
|
CSHURI - Modified HURI algorithm for Customer Segmentation and
Transaction Profitability
|
cs.DB
|
Association rule mining (ARM) is the process of generating rules based on the
correlation between the set of items that the customers purchase. Of late, data
mining researchers have improved the quality of association rule mining for
business development by incorporating factors like value (utility), quantity of
items sold (weight) and profit. Rules mined without considering utility values
(profit margin) lead to a probable loss of profitable rules. Knowledge of the
customers' needs, together with the mined rules, aids the retailer in designing
the store layout [9]. We propose an algorithm CSHURI, Customer Segmentation
using HURI, a modified version of HURI [6], which finds customers who purchase
highly profitable rare items and classifies the customers accordingly, based on
some criteria; for example, a retail business may need to identify valuable
customers who are major contributors to the company's overall profit. For a
potential customer arriving in the store, determining which customer group the
customer belongs to, which functional features or products the customer focuses
on, and what kind of offers will satisfy the customer is the key to targeting
customers and improving sales [9], and forms the basis for customer utility
mining.
|
1205.1621
|
An optimal consensus tracking control algorithm for autonomous
underwater vehicles with disturbances
|
cs.RO
|
The optimal disturbance rejection control problem is considered for consensus
tracking systems affected by external persistent disturbances and noise.
Optimal estimates of the system states are obtained by recursive Kalman
filtering for the multiple autonomous underwater vehicles, which are modeled as
a multi-agent system. Then the feedforward-feedback optimal control law is
deduced by solving the Riccati equations and matrix equations. The existence
and uniqueness condition of the feedforward-feedback optimal control law is
proposed and the optimal control law algorithm is presented. Lastly,
simulations show the effectiveness of the approach with respect to external
persistent disturbances and noise.
|
1205.1628
|
Communication activity in a social network: relation between long-term
correlations and inter-event clustering
|
physics.soc-ph cs.SI
|
The timing patterns of human communication in social networks are not random.
On the contrary, communication is dominated by emergent statistical laws such
as non-trivial correlations and clustering. Recently, we found long-term
correlations in the user's activity in social communities. Here, we extend this
work to study collective behavior of the whole community. The goal is to
understand the origin of clustering and long-term persistence. At the
individual level, we find that the correlations in activity are a byproduct of
the clustering expressed in the power-law distribution of inter-event times of
single users. On the contrary, the activity of the whole community presents
long-term correlations that are a true emergent property of the system, i.e.
they are not related to the distribution of inter-event times. This result
suggests the existence of collective behavior, possibly arising from nontrivial
communication patterns through the embedding social network.
|
1205.1630
|
A New UWB System Based on a Frequency Domain Transformation Of The
Received Signal
|
cs.IT math.IT
|
A differential system for ultra wide band (UWB) transmission is a very
attractive solution from a practical point of view. In this paper, we present a
new direct sequence (DS) UWB system based on converting the received signal
from the time domain to the frequency domain, hence the name FDR receiver.
Simulation results show that the proposed receiver structure outperforms the
classical differential one for both low and high data rate systems.
|
1205.1638
|
Document summarization using positive pointwise mutual information
|
cs.IR cs.AI
|
The degree of success in document summarization processes depends on the
performance of the method used in identifying significant sentences in the
documents. The collection of unique words characterizes the major signature of
the document, and forms the basis for Term-Sentence-Matrix (TSM). The Positive
Pointwise Mutual Information, which works well for measuring semantic
similarity in the Term-Sentence-Matrix, is used in our method to assign weights
for each entry in the Term-Sentence-Matrix. The Sentence-Rank-Matrix generated
from this weighted TSM, is then used to extract a summary from the document.
Our experiments show that such a method would outperform most of the existing
methods in producing summaries from large documents.
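As a rough illustration of the pipeline described above (term-sentence matrix, PPMI weighting, sentence ranking), here is a minimal self-contained sketch; the tokenization, the probability estimates, and the sentence score (sum of a sentence's PPMI entries) are illustrative assumptions, not the authors' implementation.

```python
import math
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Toy extractive summarizer: build a term-sentence count matrix,
    weight entries by Positive Pointwise Mutual Information, and keep
    the sentences with the largest total PPMI mass, in original order."""
    sents = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    toks = [re.findall(r'[a-z]+', s.lower()) for s in sents]
    counts = [Counter(t) for t in toks]          # term-sentence matrix (sparse)
    total = sum(sum(c.values()) for c in counts)
    term_tot = Counter()
    for c in counts:
        term_tot.update(c)
    sent_tot = [sum(c.values()) for c in counts]
    scores = []
    for j, c in enumerate(counts):
        score = 0.0
        for t, f in c.items():
            # PMI of term t and sentence j under the empirical joint distribution
            pmi = math.log((f / total) /
                           ((term_tot[t] / total) * (sent_tot[j] / total)))
            score += max(0.0, pmi)               # keep the positive part only
        scores.append((score, j))
    top = sorted(j for _, j in sorted(scores, reverse=True)[:n_sentences])
    return ' '.join(sents[j] for j in top)
```

A fuller implementation would rank sentences via a decomposition of the weighted matrix rather than simple column sums, but the PPMI weighting step is the same.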
|
1205.1639
|
Spectral Analysis of Projection Histogram for Enhancing Close matching
character Recognition in Malayalam
|
cs.CL cs.CV cs.IR
|
The success rate of Optical Character Recognition (OCR) systems for printed
Malayalam documents is quite impressive, with state of the art accuracy levels
in the range of 85-95%. However, for real applications, further enhancement of
this accuracy level is required. One of the bottlenecks in further enhancement
is identified as close-matching characters. In this paper, we delineate the
close-matching characters in Malayalam and report the development of a
specialised classifier for these close-matching characters. The output of a
state of the art OCR is taken, and characters falling into the close-matching
character set are further fed into this specialised classifier to enhance the
accuracy. The classifier is based on the support vector machine algorithm and
uses feature vectors derived from
spectral coefficients of projection histogram signals of close-matching
characters.
|
1205.1644
|
DBC based Face Recognition using DWT
|
cs.CV
|
Applications using face biometrics have proved their reliability in the last
decade. In this paper, we propose the DBC based Face Recognition using DWT
(DBC-FR) model. The Poly-U Near Infra Red (NIR) database images are scanned and
cropped to retain only the face part in pre-processing. The face part is
resized to 100*100 and the DWT is applied to derive the LL, LH, HL and HH
subbands. The LL subband of size 50*50 is divided into 100 cells of dimension
5*5 each. The Directional Binary Code (DBC) is applied on each 5*5 cell to
derive 100 features. The Euclidean distance measure is used to compare the
features of the test image and the database images. The proposed algorithm
renders a better recognition rate compared to the existing algorithm.
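A toy sketch of the pipeline just described, with stated simplifications: the Haar LL subband is computed as 2x2 block averages (equivalent up to scaling), and a crude directional statistic per 5*5 cell (the fraction of positive horizontal first derivatives) stands in for the full Directional Binary Code; the matching step is the Euclidean distance named in the abstract.

```python
import math

def haar_ll(img):
    """LL subband of a one-level Haar DWT: 2x2 block averages (up to scaling).
    img is a list of equal-length rows; a 100x100 input yields a 50x50 output."""
    h, w = len(img), len(img[0])
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def dbc_features(ll, cell=5):
    """One feature per cell of a square LL subband: the fraction of pixels
    whose horizontal first derivative is positive -- a simplified stand-in
    for the Directional Binary Code. A 50x50 input gives 100 features."""
    n = len(ll) // cell
    feats = []
    for bi in range(n):
        for bj in range(n):
            pos = tot = 0
            for i in range(bi * cell, (bi + 1) * cell):
                for j in range(bj * cell, (bj + 1) * cell - 1):
                    tot += 1
                    if ll[i][j + 1] - ll[i][j] > 0:
                        pos += 1
            feats.append(pos / tot)
    return feats

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Recognition then amounts to assigning a test image to the database image whose feature vector is nearest in Euclidean distance.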
|
1205.1645
|
Publishing and linking transport data on the Web
|
cs.AI
|
Without Linked Data, transport data is limited to applications exclusively
around transport. In this paper, we present a workflow for publishing and
linking transport data on the Web. This will enable us to develop transport
applications and to add other features created from other datasets, since
transport data will be linked to these
datasets. We apply this workflow to two datasets: NEPTUNE, a French standard
describing a transport line, and Passim, a directory containing relevant
information on transport services, in every French city.
|
1205.1648
|
A novel statistical fusion rule for image fusion and its comparison in
non subsampled contourlet transform domain and wavelet domain
|
cs.CV math.ST stat.TH
|
Image fusion produces a single fused image from a set of input images. A new
method for image fusion is proposed based on Weighted Average Merging Method
(WAMM) in the NonSubsampled Contourlet Transform (NSCT) domain. A performance
analysis of various statistical fusion rules is also presented, in both the
NSCT and wavelet domains. The analysis covers medical images, remote sensing
images and multi-focus images. Experimental results show that the proposed
WAMM method obtains better results in the NSCT domain than in the wavelet
domain, as it preserves more edges and keeps the visual quality intact in the
fused image.
|
1205.1650
|
Compressed Sensing with Nonlinear Observations and Related Nonlinear
Optimisation Problems
|
cs.IT math.IT math.OC
|
Non-convex constraints have recently proven a valuable tool in many
optimisation problems. In particular sparsity constraints have had a
significant impact on sampling theory, where they are used in Compressed
Sensing and allow structured signals to be sampled far below the rate
traditionally prescribed.
Nearly all of the theory developed for Compressed Sensing signal recovery
assumes that samples are taken using linear measurements. In this paper we
instead address the Compressed Sensing recovery problem in a setting where the
observations are non-linear. We show that, under conditions similar to those
required in the linear setting, the Iterative Hard Thresholding algorithm can
be used to accurately recover sparse or structured signals from few non-linear
observations.
Similar ideas can also be developed in a more general non-linear optimisation
framework. In the second part of this paper we therefore present related results
that show how this can be done under sparsity and union of subspaces
constraints, whenever a generalisation of the Restricted Isometry Property
traditionally imposed on the Compressed Sensing system holds.
|
1205.1671
|
Submodular Inference of Diffusion Networks from Multiple Trees
|
cs.SI cs.DS physics.soc-ph
|
Diffusion and propagation of information, influence and diseases take place
over increasingly larger networks. We observe when a node copies information,
makes a decision or becomes infected but networks are often hidden or
unobserved. Since networks are highly dynamic, changing and growing rapidly, we
only observe a relatively small set of cascades before a network changes
significantly. Scalable network inference based on a small cascade set is then
necessary for understanding the rapidly evolving dynamics that govern
diffusion. In this article, we develop a scalable approximation algorithm with
provable near-optimal performance based on submodular maximization which
achieves high accuracy in such a scenario, solving an open problem first
introduced by Gomez-Rodriguez et al (2010). Experiments on synthetic and real
diffusion data show that our algorithm in practice achieves an optimal
trade-off between accuracy and running time.
|
1205.1682
|
Influence Maximization in Continuous Time Diffusion Networks
|
cs.SI cs.DS physics.soc-ph
|
The problem of finding the optimal set of source nodes in a diffusion network
that maximizes the spread of information, influence, and diseases in a limited
amount of time depends dramatically on the underlying temporal dynamics of the
network. However, this still remains largely unexplored to date. To this end,
given a network and its temporal dynamics, we first describe how continuous
time Markov chains allow us to analytically compute the average total number of
nodes reached by a diffusion process starting in a set of source nodes. We then
show that selecting the set of most influential source nodes in the continuous
time influence maximization problem is NP-hard and develop an efficient
approximation algorithm with provable near-optimal performance. Experiments on
synthetic and real diffusion networks show that our algorithm outperforms other
state of the art algorithms by at least ~20% and is robust across different
network topologies.
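The greedy strategy at the heart of such submodular influence-maximization algorithms can be illustrated on a toy coverage objective. In the sketch below, each node's `reach` set stands in for the (expensive) continuous-time estimate of the nodes it would influence; these sets and node names are purely illustrative, not from the paper.

```python
def greedy_seed_selection(reach, k):
    """Greedy maximization of a monotone submodular coverage objective:
    repeatedly add the node with the largest marginal gain in covered
    nodes.  For such objectives the greedy solution is within a factor
    (1 - 1/e) of the optimum (Nemhauser, Wolsey and Fisher, 1978)."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max((v for v in reach if v not in chosen),
                   key=lambda v: len(reach[v] - covered))
        chosen.append(best)
        covered |= reach[best]
    return chosen, covered
```

In the actual influence-maximization setting the marginal gain would be the increase in the expected number of nodes reached within the time budget, computed from the network's temporal dynamics, but the greedy loop is the same.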
|
1205.1690
|
Chaotic Method for Generating q-Gaussian Random Variables
|
cs.IT cond-mat.stat-mech math.IT nlin.CD
|
This study proposes a pseudo random number generator of q-Gaussian random
variables for a range of q values, -infinity < q < 3, based on deterministic
chaotic map dynamics. Our method consists of chaotic maps on the unit circle
and map dynamics based on the piecewise linear map. We run the q-Gaussian
random number generator for several values of q and conduct both
Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) tests. The q-Gaussian samples
generated by our proposed method pass the KS test at more than 5% significance
level for values of q ranging from -1.0 to 2.7, while they pass the AD test at
more than 5% significance level for q ranging from -1 to 2.4.
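The chaotic-map construction itself is not reproduced in the abstract. As a point of comparison, a standard alternative generator for q-Gaussian variates with q < 3 is the generalized Box-Muller method of Thistleton, Marsh, Nelson and Tsallis (2007), sketched here; this is a different technique from the paper's, shown only to make the target distribution concrete.

```python
import math
import random

def q_log(x, q):
    """Tsallis q-logarithm: ln_q(x) = (x**(1-q) - 1)/(1-q); ordinary log at q = 1."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q, n, rng=random):
    """Generalized Box-Muller sampler for q-Gaussian variates, q < 3.
    Uses the auxiliary index q' = (1+q)/(3-q); q = 1 recovers the
    classical Box-Muller transform (standard Gaussian)."""
    qp = (1.0 + q) / (3.0 - q)
    out = []
    for _ in range(n):
        u1 = 1.0 - rng.random()          # uniform on (0, 1]
        u2 = rng.random()
        out.append(math.sqrt(-2.0 * q_log(u1, qp)) * math.cos(2.0 * math.pi * u2))
    return out
```

For q < 1 the resulting distribution has compact support (e.g. |z| <= sqrt(2) at q = -1), while q > 1 gives heavy tails, which is why goodness-of-fit tests such as KS and AD are natural checks on any generator for this family.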
|
1205.1712
|
On the strong converses for the quantum channel capacity theorems
|
quant-ph cs.IT math.IT
|
A unified approach to prove the converses for the quantum channel capacity
theorems is presented. These converses include the strong converse theorems for
classical or quantum information transfer with error exponents and novel
explicit upper bounds on the fidelity measures reminiscent of the Wolfowitz
strong converse for the classical channel capacity theorems. We provide a new
proof for the error exponents for the classical information transfer. A long
standing problem in quantum information theory has been to find out the strong
converse for the channel capacity theorem when quantum information is sent
across the channel. We give the quantum error exponent thereby giving a
one-shot exponential upper bound on the fidelity. We then apply our results to
show that the strong converse holds for the quantum information transfer across
an erasure channel for maximally entangled channel inputs.
|
1205.1720
|
Reconstruction of Arbitrary Biochemical Reaction Networks: A Compressive
Sensing Approach
|
cs.SY physics.bio-ph
|
Reconstruction of biochemical reaction networks is a central topic in systems
biology which raises crucial theoretical challenges in system identification.
Nonlinear Ordinary Differential Equations (ODEs) that involve polynomial and
rational functions are typically used to model biochemical reaction networks.
Such nonlinear models make the problem of determining the connectivity of
biochemical networks from time-series experimental data quite difficult. In
this paper, we present a network reconstruction algorithm that can deal with
model descriptions under the form of polynomial and rational functions. Rather
than identifying the parameters of linear or nonlinear ODEs characterised by
pre-defined equation structures, our methodology allows us to determine the
nonlinear ODEs structure together with their associated reaction constants. To
solve the network reconstruction problem, we cast it as a Compressive Sensing
(CS) problem and use Bayesian Sparse Learning (BSL) algorithms as an efficient
way to obtain its solution.
|
1205.1731
|
Stable Throughput in a Cognitive Wireless Network
|
cs.IT math.IT
|
We study, from a network layer perspective, the effect of an Ad-Hoc secondary
network with N nodes randomly accessing the spectrum licensed to a primary node
during the idle slots of the primary user. If the sensing is perfect, then the
secondary nodes do not interfere with the primary node and hence do not affect
its stable throughput. In case of imperfect sensing, it is shown that if the
primary user's arrival rate is less than some calculated finite value,
cognitive nodes can employ any transmission power or probabilities without
affecting the primary user's stability; otherwise, the secondary nodes should
control their transmission parameters to reduce the interference on the
primary. It is also shown that in contrast with the primary's maximum stable
throughput which strictly decreases with increased sensing errors, the
throughput of the secondary nodes might increase with sensing errors as more
transmission opportunities become available to them. Finally, we explore the
use of the secondary nodes as relays of the primary node's traffic to
compensate for the interference they might cause. We introduce a relaying
protocol based on distributed space-time coding that forces all the secondary
nodes that are able to decode a primary's unsuccessful packet to relay that
packet whenever the primary is idle. In this case, for appropriate modulation
scheme and under perfect sensing, it is shown that the more secondary nodes in
the system, the better for the primary user in terms of his stable throughput.
Meanwhile, the secondary nodes might benefit from relaying by having access to
a larger number of idle slots due to the increase of the service rate of the
primary. For the case of a single secondary node, the proposed relaying
protocol guarantees that either both the primary and the secondary benefit from
relaying or none of them does.
|
1205.1745
|
Reconfigurable Controller Design For Actuator Faults In A Four-Tank
System Benchmark
|
cs.SY
|
The purpose of this work is to design a state feedback controller using the
Parametric Eigenstructure Assignment (PEA) technique that has the capacity to
be reconfigured in the case that partial actuator faults occur. The proposed
controller is capable of compensating the gain losses in actuators and
maintaining the control performance in faulty situations. Simulations show the
performance enhancement in comparison to the non-reconfigurable controller
through Integral Absolute Error (IAE) index for different fault scenarios.
|
1205.1765
|
Chaotic multi-objective optimization based design of fractional order
PI{\lambda}D{\mu} controller in AVR system
|
cs.SY cs.NE
|
In this paper, a fractional order (FO) PI{\lambda}D\mu controller is designed
to take care of various contradictory objective functions for an Automatic
Voltage Regulator (AVR) system. An improved evolutionary Non-dominated Sorting
Genetic Algorithm II (NSGA II), which is augmented with a chaotic map for
greater effectiveness, is used for the multi-objective optimization problem.
The Pareto fronts showing the trade-off between different design criteria are
obtained for the PI{\lambda}D\mu and PID controller. A comparative analysis is
done with respect to the standard PID controller to demonstrate the merits and
demerits of the fractional order PI{\lambda}D\mu controller.
|
1205.1771
|
Quantum-Classical Transitions in Complex Networks
|
cond-mat.dis-nn cond-mat.quant-gas cs.SI physics.soc-ph
|
The inherent properties of specific physical systems can be used as metaphors
for investigation of the behavior of complex networks. This insight has already
been put into practice in previous work, e.g., studying the network evolution
in terms of phase transitions of quantum gases or representing distances among
nodes as if they were particle energies. This paper shows that the emergence of
different structures in complex networks, such as the scale-free and the
winner-takes-all networks, can be represented in terms of a quantum-classical
transition for quantum gases. In particular, we propose a model of fermionic
networks that allows us to investigate the network evolution and its dependence
on the system temperature. Simulations, performed in accordance with the cited
model, clearly highlight the separation between classical random and
winner-takes-all networks, in full correspondence with the separation between
classical and quantum regions for quantum gases. We deem this model useful for
the analysis of synthetic and real complex networks.
|
1205.1779
|
A Common Evaluation Setting for Just.Ask, Open Ephyra and Aranea QA
systems
|
cs.IR
|
Question Answering (QA) is not a new research field in Natural Language
Processing (NLP). However in recent years, QA has been a subject of growing
study. Nowadays, most QA systems have a similar pipelined architecture, and
each system uses a set of unique techniques to accomplish its state of the art
results. However, many things are not clear in QA processing. It is not clear
to what extent tasks performed in earlier stages impact the following stages of
the pipeline. It is not clear if techniques used in one QA system can be used
in another QA system to improve its results. And finally, it is not clear in
what setting these systems should be tested in order to properly analyze their
results.
|
1205.1782
|
Approximate Dynamic Programming By Minimizing Distributionally Robust
Bounds
|
stat.ML cs.LG
|
Approximate dynamic programming is a popular method for solving large Markov
decision processes. This paper describes a new class of approximate dynamic
programming (ADP) methods, distributionally robust ADP (DRADP), that address
the curse of dimensionality by minimizing a pessimistic bound on the policy
loss. This approach turns ADP into an optimization problem, for which we derive
new mathematical program formulations and analyze their properties. DRADP
improves on the theoretical guarantees of existing ADP methods: it guarantees
convergence and L1-norm based error bounds. The empirical evaluation of DRADP
shows that
the theoretical guarantees translate well into good performance on benchmark
problems.
|
1205.1794
|
A Novel Method For Speech Segmentation Based On Speakers'
Characteristics
|
cs.AI cs.CL
|
Speech segmentation is the process of change point detection for partitioning an
input audio stream into regions, each of which corresponds to only one audio
source or one speaker. One application of this system is in Speaker Diarization
systems. There are several methods for speaker segmentation; however, most of
the Speaker Diarization Systems use BIC-based Segmentation methods. The main
goal of this paper is to propose a new method for speaker segmentation with
higher speed than the current methods - e.g. BIC - and acceptable accuracy. Our
proposed method is based on the pitch frequency of the speech. The accuracy of
this method is similar to the accuracy of common speaker segmentation methods.
However, its computation cost is much less than theirs. We show that our method
is about 2.4 times faster than the BIC-based method, while the average accuracy
of pitch-based method is slightly higher than that of the BIC-based method.
|
1205.1796
|
Moving Object Trajectories Meta-Model And Spatio-Temporal Queries
|
cs.DB
|
In this paper, a general moving-object trajectories framework is put forward
to allow independent applications that process trajectory data to benefit from
a high level of interoperability and information sharing, as well as efficient
answers to a wide range of complex trajectory queries. Our proposed meta-model
is based on an ontology and event approach, incorporates existing
representations of trajectories, and integrates new patterns such as the
space-time path to describe activities in geographical space-time. We introduce
recursive Region of Interest concepts and handle mobile-object trajectories
with diverse spatio-temporal sampling protocols and different available
sensors, a task for which traditional data models alone are inadequate.
|
1205.1813
|
Graph spectra and the detectability of community structure in networks
|
cs.SI cond-mat.stat-mech physics.soc-ph
|
We study networks that display community structure -- groups of nodes within
which connections are unusually dense. Using methods from random matrix theory,
we calculate the spectra of such networks in the limit of large size, and hence
demonstrate the presence of a phase transition in matrix methods for community
detection, such as the popular modularity maximization method. The transition
separates a regime in which such methods successfully detect the community
structure from one in which the structure is present but is not detected. By
comparing these results with recent analyses of maximum-likelihood methods we
are able to show that spectral modularity maximization is an optimal detection
method in the sense that no other method will succeed in the regime where the
modularity method fails.
|
1205.1820
|
The non-algorithmic side of the mind
|
cs.AI quant-ph
|
The existence of a non-algorithmic side of the mind, conjectured by Penrose
on the basis of G\"odel's first incompleteness theorem, is investigated here in
terms of a quantum metalanguage. We suggest that, besides human ordinary
thought, which can be formalized in a computable, logical language, there is
another important kind of human thought, which is Turing-non-computable. This
is metathought, the process of thinking about ordinary thought. Metathought can
be formalized as a metalanguage, which speaks about and controls the logical
language of ordinary thought. Ordinary thought has two computational modes, the
quantum mode and the classical mode, the latter deriving from decoherence of
the former. In order to control the logical language of the quantum mode, one
needs to introduce a quantum metalanguage, which in turn requires a quantum
version of Tarski Convention T.
|
1205.1823
|
Pl\"ucker Embedding of Cyclic Orbit Codes
|
cs.IT math.IT
|
Cyclic orbit codes are a family of constant dimension codes used for random
network coding. We investigate the Pl\"ucker embedding of these codes and show
how to efficiently compute the Grassmann coordinates of the code words.
|
1205.1828
|
The Natural Gradient by Analogy to Signal Whitening, and Recipes and
Tricks for its Use
|
cs.LG stat.ML
|
The natural gradient allows for more efficient gradient descent by removing
dependencies and biases inherent in a function's parameterization. Several
papers present the topic thoroughly and precisely. However, it remains a very
difficult idea to get one's head around. The intent of this note is to provide
simple intuition for the natural gradient and its use. We review how an
ill-conditioned parameter space can undermine learning, introduce the natural
gradient by analogy to the more widely understood concept of signal whitening,
and present tricks and specific prescriptions for applying the natural gradient
to learning problems.
|
1205.1853
|
Goal Directed Relative Skyline Queries in Time Dependent Road Networks
|
cs.NI cs.DB
|
Wireless GIS technology is progressing rapidly in the area of mobile
communications. Location-based spatial queries are becoming an integral part of
many new mobile applications, and skyline queries are among the latest such
location-based services. In this paper we introduce Goal-Directed Relative
Skyline queries on Time-dependent (GD-RST) road networks. The algorithm uses
travel time as a metric in finding the data objects, considering multiple query
points (multi-source skyline) relative to the user's location and direction of
travel. We design an efficient algorithm based on a Filter phase, a Heap phase,
and a Refine-Skyline phase. Finally, we propose a dynamic
skyline caching (DSC) mechanism which helps to reduce the computation cost for
future skyline queries. The experimental evaluation reflects the performance of
GD-RST algorithm over the traditional branch and bound algorithm for skyline
queries in real road networks.
|
1205.1885
|
Distributed Multicell Beamforming Design Approaching Pareto Boundary
with Max-Min Fairness
|
cs.IT math.IT
|
This paper addresses coordinated downlink beamforming optimization in
multicell time-division duplex (TDD) systems where a small number of parameters
are exchanged between cells but with no data sharing. With the goal to reach
the point on the Pareto boundary with max-min rate fairness, we first develop a
two-step centralized optimization algorithm to design the joint beamforming
vectors. This algorithm can achieve a further sum-rate improvement over the
max-min optimal performance, and is shown to guarantee max-min Pareto
optimality for scenarios with two base stations (BSs) each serving a single
user. To realize a distributed solution with limited intercell communication,
we then propose an iterative algorithm by exploiting an approximate
uplink-downlink duality, in which only a small number of positive scalars are
shared between cells in each iteration. Simulation results show that the
proposed distributed solution achieves a fairness rate performance close to the
centralized algorithm while it has a better sum-rate performance, and
demonstrates a better tradeoff between sum-rate and fairness than the Nash
Bargaining solution especially at high signal-to-noise ratio.
|
1205.1923
|
Using data mining techniques for diagnosis and prognosis of cancer
disease
|
cs.DB
|
Breast cancer is one of the leading cancers for women in developed countries
including India. It is the second most common cause of cancer death in women.
The incidence of breast cancer in women has increased significantly in recent
years. In this paper we discuss various data mining approaches that have been
utilized for breast cancer diagnosis and prognosis. Breast cancer diagnosis
distinguishes benign from malignant breast lumps, while breast cancer prognosis
predicts when breast cancer is likely to recur in patients who have had their
cancers excised. This paper summarizes various review and technical articles on
breast cancer diagnosis and prognosis, and we also focus on current research
being carried out using data mining techniques to enhance breast cancer
diagnosis and prognosis.
|
1205.1925
|
Hamiltonian Annealed Importance Sampling for partition function
estimation
|
cs.LG physics.data-an
|
We introduce an extension to annealed importance sampling that uses
Hamiltonian dynamics to rapidly estimate normalization constants. We
demonstrate this method by computing log likelihoods in directed and undirected
probabilistic image models. We compare the performance of linear generative
models with both Gaussian and Laplace priors, product of experts models with
Laplace and Student's t experts, the mc-RBM, and a bilinear generative model.
We provide code to compare additional models.
|
1205.1928
|
The representer theorem for Hilbert spaces: a necessary and sufficient
condition
|
math.FA cs.LG
|
A family of regularization functionals is said to admit a linear representer
theorem if every member of the family admits minimizers that lie in a fixed
finite dimensional subspace. A recent characterization states that a general
class of regularization functionals with differentiable regularizer admits a
linear representer theorem if and only if the regularization term is a
non-decreasing function of the norm. In this report, we improve on this result
by replacing the differentiability assumption with lower semi-continuity
and deriving a proof that is independent of the dimensionality of the space.
|
1205.1939
|
Hamiltonian Monte Carlo with Reduced Momentum Flips
|
physics.data-an cs.LG
|
Hamiltonian Monte Carlo (or hybrid Monte Carlo) with partial momentum
refreshment explores the state space more slowly than it otherwise would due to
the momentum reversals which occur on proposal rejection. These cause
trajectories to double back on themselves, leading to random walk behavior on
timescales longer than the typical rejection time, and leading to slower
mixing. I present a technique by which the number of momentum reversals can be
reduced. This is accomplished by maintaining the net exchange of probability
between states with opposite momenta, but reducing the rate of exchange in both
directions such that it is 0 in one direction. An experiment illustrates these
reduced momentum flips accelerating mixing for a particular distribution.
|
1205.1975
|
Expressivity of Time-Varying Graphs and the Power of Waiting in Dynamic
Networks
|
cs.DC cs.CL
|
In infrastructure-less highly dynamic networks, computing and performing even
basic tasks (such as routing and broadcasting) is a very challenging activity
due to the fact that connectivity does not necessarily hold, and the network
may actually be disconnected at every time instant. Clearly the task of
designing protocols for these networks is less difficult if the environment
allows waiting (i.e., it provides the nodes with store-carry-forward-like
mechanisms such as local buffering) than if waiting is not feasible. No
quantitative corroborations of this fact exist (e.g., no answer to the
question: how much easier?). In this paper, we consider these quantitative
questions about dynamic networks, modeled as time-varying (or evolving) graphs,
where edges exist only at some times.
We examine the difficulty of the environment in terms of the expressivity of
the corresponding time-varying graph; that is in terms of the language
generated by the feasible journeys in the graph. We prove that the set of
languages $L_{nowait}$ when no waiting is allowed contains all computable
languages. On the other end, using algebraic properties of quasi-orders, we
prove that $L_{wait}$ is just the family of regular languages. In other words,
we prove that, when waiting is no longer forbidden, the power of the accepting
automaton (difficulty of the environment) drops drastically from being as
powerful as a Turing machine, to becoming that of a Finite-State machine. This
(perhaps surprisingly large) gap is a measure of the computational power of
waiting.
We also study bounded waiting; that is, when waiting is allowed at a node only
for at most $d$ time units. We prove the negative result that $L_{wait[d]} =
L_{nowait}$; that is, the expressivity decreases only if the waiting is finite
but unpredictable (i.e., under the control of the protocol designer and not of
the environment).
|
1205.1986
|
Evolutionary algorithms in genetic regulatory networks model
|
cs.CE q-bio.MN
|
Genetic Regulatory Networks (GRNs) play a vital role in the understanding of
complex biological processes. Modeling GRNs is important in order to reveal
fundamental cellular processes, examine gene functions, and understand their
complex relationships. Understanding the interactions between genes helps
develop better methods for drug discovery and disease diagnosis, since many
diseases are characterized by the abnormal behaviour of genes. In this paper we
review various evolutionary-algorithm-based approaches for modeling GRNs and
discuss various opportunities and challenges.
|
1205.1988
|
Fast Optimal Joint Tracking-Registration for Multi-Sensor Systems
|
cs.RO
|
Sensor fusion of multiple sources plays an important role in vehicular
systems to achieve refined target position and velocity estimates. In this
article, we address the general registration problem, which is a key module for
a fusion system to accurately correct systematic errors of sensors. A fast
maximum a posteriori (FMAP) algorithm for joint registration-tracking (JRT) is
presented. The algorithm uses a recursive two-step optimization that involves
orthogonal factorization to ensure numerical stability. A statistical
efficiency analysis based on Cram\'{e}r-Rao lower bound theory is presented to
show the asymptotic optimality of FMAP. Also, Givens rotation is used to derive
a fast implementation with complexity $O(n)$, where $n$ is the number of tracked
targets. Simulations and experiments are presented to demonstrate the promise
and effectiveness of FMAP.
|
1205.1997
|
Model-based clustering in networks with Stochastic Community Finding
|
stat.CO cs.SI physics.soc-ph
|
In the model-based clustering of networks, blockmodelling may be used to
identify roles in the network. We identify a special case of the Stochastic
Block Model (SBM) where we constrain the cluster-cluster interactions such that
the density inside the clusters of nodes is expected to be greater than the
density between clusters. This corresponds to the intuition behind
community-finding methods, where nodes tend to be clustered together if they link
to each other. We call this model Stochastic Community Finding (SCF) and
present an efficient MCMC algorithm which can cluster the nodes, given the
network. The algorithm is evaluated on synthetic data and is applied to a
social network of interactions at a karate club and at a monastery,
demonstrating how the SCF finds the 'ground truth' clustering where sometimes
the SBM does not. The SCF is only one possible form of constraint or
specialization that may be applied to the SBM. In a more supervised context, it
may be appropriate to use other specializations to guide the SBM.
|
1205.2026
|
Complexity and Information: Measuring Emergence, Self-organization, and
Homeostasis at Multiple Scales
|
cs.IT math.IT nlin.AO nlin.CG
|
Concepts used in the scientific study of complex systems have become so
widespread that their use and abuse has led to ambiguity and confusion in their
meaning. In this paper we use information theory to provide abstract and
concise measures of complexity, emergence, self-organization, and homeostasis.
The purpose is to clarify the meaning of these concepts with the aid of the
proposed formal measures. In a simplified version of the measures (focusing on
the information produced by a system), emergence becomes the opposite of
self-organization, while complexity represents their balance. Homeostasis can
be seen as a measure of the stability of the system. We use computational
experiments on random Boolean networks and elementary cellular automata to
illustrate our measures at multiple scales.
|
1205.2031
|
M-FISH Karyotyping - A New Approach Based on Watershed Transform
|
cs.CV
|
Karyotyping is a process in which chromosomes in a dividing cell are properly
stained, identified and displayed in a standard format, which helps geneticists
study and diagnose the genetic factors behind various genetic diseases and in
studying cancer. M-FISH (Multiplex Fluorescent In-Situ Hybridization) provides
color karyotyping. In this paper, an automated method for M-FISH chromosome
segmentation based on watershed transform followed by naive Bayes
classification of each region using the features, mean and standard deviation,
is presented. Also, a post processing step is added to re-classify the small
chromosome segments to the neighboring larger segment for reducing the chances
of misclassification. The approach provided improved accuracy when compared to
the pixel-by-pixel approach. The approach was tested on 40 images from the
dataset and achieved an accuracy of 84.21%.
|
1205.2046
|
Multiset Estimates and Combinatorial Synthesis
|
cs.SY cs.AI math.OC
|
The paper addresses an approach to ordinal assessment of alternatives based
on assignment of elements into an ordinal scale. Basic versions of the
assessment problems are formulated while taking into account the number of
levels at a basic ordinal scale [1,2,...,l] and the number of assigned elements
(e.g., 1,2,3). The obtained estimates are multisets (or bags) (cardinality of
the multiset equals a constant). Scale-posets for the examined assessment
problems are presented. 'Interval multiset estimates' are suggested. Further,
operations over multiset estimates are examined: (a) integration of multiset
estimates, (b) proximity for multiset estimates, (c) comparison of multiset
estimates, (d) aggregation of multiset estimates, and (e) alignment of multiset
estimates. Combinatorial synthesis based on morphological approach is examined
including the modified version of the approach with multiset estimates of
design alternatives. Knapsack-like problems with multiset estimates are briefly
described as well. The assessment approach, multiset-estimates, and
corresponding combinatorial problems are illustrated by numerical examples.
|
1205.2056
|
Dynamic Behavioral Mixed-Membership Model for Large Evolving Networks
|
cs.SI cs.LG physics.soc-ph stat.ML
|
The majority of real-world networks are dynamic and extremely large (e.g.,
Internet Traffic, Twitter, Facebook, ...). To understand the structural
behavior of nodes in these large dynamic networks, it may be necessary to model
the dynamics of behavioral roles representing the main connectivity patterns
over time. In this paper, we propose a dynamic behavioral mixed-membership
model (DBMM) that captures the roles of nodes in the graph and how they evolve
over time. Unlike other node-centric models, our model is scalable for
analyzing large dynamic networks. In addition, DBMM is flexible,
parameter-free, has no functional form or parameterization, and is
interpretable (identifies explainable patterns). The performance results
indicate our approach can be applied to very large networks while the
experimental results show that our model uncovers interesting patterns
underlying the dynamics of these networks.
|
1205.2077
|
Data Dissemination And Collection Algorithms For Collaborative Sensor
Networks Using Dynamic Cluster Heads
|
cs.NI cs.DS cs.IT math.IT
|
We develop novel data dissemination and collection algorithms for Wireless
Sensor Networks (WSNs) in which we consider $n$ sensor nodes distributed
randomly in a certain field to measure a physical phenomenon. Such sensors have
limited energy, short coverage range, and bandwidth and memory constraints. We
desire to disseminate nodes' data throughout the network such that a base
station will be able to collect the sensed data by querying a small number of
nodes. We propose two data dissemination and collection algorithms (DCA's) to
solve this problem. Data dissemination is achieved through dynamic selection
of some nodes. The selected nodes will be changed after a time slot $t$ and may
be repeated after a period $T$.
|
1205.2081
|
The Computational Complexity of the Restricted Isometry Property, the
Nullspace Property, and Related Concepts in Compressed Sensing
|
math.OC cs.IT math.IT
|
This paper deals with the computational complexity of conditions which
guarantee that the NP-hard problem of finding the sparsest solution to an
underdetermined linear system can be solved by efficient algorithms. In the
literature, several such conditions have been introduced. The most well-known
ones are the mutual coherence, the restricted isometry property (RIP), and the
nullspace property (NSP). While evaluating the mutual coherence of a given
matrix is easy, it has been suspected for some time that evaluating RIP and NSP
is computationally intractable in general. We confirm these conjectures by
showing that for a given matrix A and positive integer k, computing the best
constants for which the RIP or NSP hold is, in general, NP-hard. These results
are based on the fact that determining the spark of a matrix is NP-hard, which
is also established in this paper. Furthermore, we also give several complexity
statements about problems related to the above concepts.
|
1205.2114
|
The Extraction of Community Structures from Publication Networks to
Support Ethnographic Observations of Field Differences in Scientific
Communication
|
cs.SI cs.DL physics.soc-ph
|
The scientific community of researchers in a research specialty is an
important unit of analysis for understanding the field specific shaping of
scientific communication practices. These scientific communities are, however,
a challenging unit of analysis to capture and compare because they overlap,
have fuzzy boundaries, and evolve over time. We describe a network analytic
approach that reveals the complexities of these communities through examination
of their publication networks in combination with insights from ethnographic
field studies. We suggest that the structures revealed indicate overlapping
sub-communities within a research specialty, and we provide evidence that they
differ in disciplinary orientation and research practices. By mapping the
community structures of scientific fields we aim to increase confidence about
the domain of validity of ethnographic observations as well as of collaborative
patterns extracted from publication networks thereby enabling the systematic
study of field differences. The network analytic methods presented include
methods to optimize the delineation of a bibliographic data set in order to
adequately represent a research specialty, and methods to extract community
structures from this data. We demonstrate the application of these methods in a
case study of two research specialties in the physical and chemical sciences.
|
1205.2118
|
Performance Bounds for Grouped Incoherent Measurements in Compressive
Sensing
|
cs.IT math.IT
|
Compressive sensing (CS) allows for acquisition of sparse signals at sampling
rates significantly lower than the Nyquist rate required for bandlimited
signals. Recovery guarantees for CS are generally derived based on the
assumption that measurement projections are selected independently at random.
However, for many practical signal acquisition applications, including medical
imaging and remote sensing, this assumption is violated as the projections must
be taken in groups. In this paper, we consider such applications and derive
requirements on the number of measurements needed for successful recovery of
signals when groups of dependent projections are taken at random. We find a
penalty factor on the number of required measurements with respect to the
standard CS scheme that employs conventional independent measurement selection
and evaluate the accuracy of the predicted penalty through simulations.
|
1205.2141
|
Separating the Wheat from the Chaff: Sensing Wireless Microphones in
TVWS
|
cs.IT math.IT
|
This paper summarizes our attempts to establish a systematic approach that
overcomes a key difficulty in sensing wireless microphone signals, namely, the
inability for most existing detection methods to effectively distinguish
between a wireless microphone signal and a sinusoidal continuous wave (CW).
Such an inability has led to an excessively high false alarm rate and thus
severely limited the utility of sensing-based cognitive transmission in the TV
white space (TVWS) spectrum. Having recognized the root of the difficulty, we
propose two potential solutions. The first solution focuses on the periodogram
as an estimate of the power spectral density (PSD), utilizing the property that
a CW has a line spectral component while a wireless microphone signal has a
slightly dispersed PSD. In that approach, we formulate the resulting decision
model as a one-sided test for Gaussian vectors, based on Kullback-Leibler
distance type of decision statistics. The second solution goes beyond the PSD
and looks into the spectral correlation function (SCF), proposing an augmented
SCF that is capable of revealing more features in the cycle frequency domain
compared with the conventional SCF. Thus the augmented SCF exhibits the key
difference between CW and wireless microphone signals. Both simulation results
and experimental validation results indicate that the two proposed solutions
are promising for sensing wireless microphones in TVWS.
|
1205.2151
|
A Converged Algorithm for Tikhonov Regularized Nonnegative Matrix
Factorization with Automatic Regularization Parameters Determination
|
cs.LG
|
We present a converged algorithm for Tikhonov regularized nonnegative matrix
factorization (NMF). We specifically choose this regularization because
Tikhonov regularized least squares (LS) is known to be preferable to
conventional LS in solving linear inverse problems. Because an NMF
problem can be decomposed into LS subproblems, it can be expected that Tikhonov
regularized NMF will be the more appropriate approach in solving NMF problems.
The algorithm is derived using additive update rules which have been shown to
have convergence guarantee. We equip the algorithm with a mechanism to
automatically determine the regularization parameters based on the L-curve, a
well-known concept in the inverse-problems community but rather unknown in NMF
research. The introduction of this algorithm thus solves two inherent problems
in Tikhonov regularized NMF algorithm research, i.e., the convergence guarantee
and the determination of regularization parameters.
|
1205.2164
|
Discrimination of English to other Indian languages (Kannada and Hindi)
for OCR system
|
cs.CV
|
India is a multilingual, multi-script country. In every state of India there
are two languages: the state's local language and English. For
example in Andhra Pradesh, a state in India, the document may contain text
words in English and Telugu script. For Optical Character Recognition (OCR) of
such a bilingual document, it is necessary to identify the script before
feeding the text words to the OCRs of the individual scripts. In this paper, we
introduce a simple and efficient script-identification technique for
Kannada, English and Hindi text words of a printed document. The proposed
approach is based on the horizontal and vertical projection profile for the
discrimination of the three scripts. Feature extraction is based on the
horizontal projection profile of each text word. We analysed 700 different
words of Kannada, English and Hindi in order to extract the discriminating
features and to develop the knowledge base. The proposed system is tested on
100 different document images containing more than 1000 text words of each
script and a classification rate of 98.25%, 99.25% and 98.87% is achieved for
Kannada, English and Hindi respectively.
|
1205.2171
|
A Generalized Kernel Approach to Structured Output Learning
|
stat.ML cs.LG
|
We study the problem of structured output learning from a regression
perspective. We first provide a general formulation of the kernel dependency
estimation (KDE) problem using operator-valued kernels. We show that some of
the existing formulations of this problem are special cases of our framework.
We then propose a covariance-based operator-valued kernel that allows us to
take into account the structure of the kernel feature space. This kernel
operates on the output space and encodes the interactions between the outputs
without any reference to the input space. To address this issue, we introduce a
variant of our KDE method based on the conditional covariance operator that in
addition to the correlation between the outputs takes into account the effects
of the input variables. Finally, we evaluate the performance of our KDE
approach using both covariance and conditional covariance kernels on two
structured output problems, and compare it to the state-of-the-art kernel-based
structured output regression methods.
|
1205.2172
|
Modularity-Based Clustering for Network-Constrained Trajectories
|
stat.ML cs.LG physics.data-an
|
We present a novel clustering approach for moving object trajectories that
are constrained by an underlying road network. The approach builds a similarity
graph based on these trajectories, then uses modularity-optimization hierarchical
graph clustering to regroup trajectories with similar profiles. Our
experimental study shows the superiority of the proposed approach over classic
hierarchical clustering and gives a brief insight into the visualization of the
clustering results.
|
1205.2177
|
Locating dominating codes: Bounds and extremal cardinalities
|
math.CO cs.IT math.IT
|
In this work, two types of codes such that they both dominate and locate the
vertices of a graph are studied. Those codes might be sets of detectors in a
network or processors controlling a system whose set of responses should
determine a malfunctioning processor or an intruder. Here, we present our most
significant contributions on \lambda-codes and \eta-codes concerning bounds,
extremal values and realization theorems.
|
1205.2251
|
Combinatorial aspect of fashion
|
physics.soc-ph cs.SI
|
Simulations are performed according to the Axelrod model of culture
dissemination, with modified mechanism of repulsion. Previously, repulsion was
considered by Radillo-Diaz et al (Phys. Rev. E 80 (2009) 066107) as dependent
on a predefined threshold. Here the probabilities of attraction and repulsion
are calculated from the number of cells in the same states. We also investigate
the influence of some homogeneity introduced into the initial state. As a
result of the probabilistic definition of repulsion, the ordered state
vanishes. A small cluster of a few percent of the population is retained only
if, in the initial state, a set of agents is prepared in the same state. We
conclude that the modelled imitation is successful only with respect to agents,
and not only to their features.
|
1205.2265
|
Efficient Constrained Regret Minimization
|
cs.LG
|
Online learning constitutes a mathematical and compelling framework to
analyze sequential decision making problems in adversarial environments. The
learner repeatedly chooses an action, the environment responds with an outcome,
and then the learner receives a reward for the played action. The goal of the
learner is to maximize his total reward. However, there are situations in
which, in addition to maximizing the cumulative reward, there are some
additional constraints on the sequence of decisions that must be satisfied on
average by the learner. In this paper we study an extension of online learning
in which the learner aims to maximize the total reward given that some
additional constraints must be satisfied. By leveraging the theory of the
Lagrangian method in constrained optimization, we propose the Lagrangian
exponentially weighted average (LEWA) algorithm, which is a primal-dual variant
of the well known exponentially weighted average algorithm, to efficiently
solve constrained online decision making problems. Using novel theoretical
analysis, we establish bounds on the regret and on the constraint violation in
the full-information and bandit feedback models.
|
1205.2282
|
A Discussion on Parallelization Schemes for Stochastic Vector
Quantization Algorithms
|
stat.ML cs.DC cs.LG
|
This paper studies parallelization schemes for stochastic Vector Quantization
algorithms in order to obtain time speed-ups using distributed resources. We
show that the most intuitive parallelization scheme does not lead to better
performance than the sequential algorithm. Another distributed scheme is
therefore introduced which obtains the expected speed-ups. Then, it is improved
to fit implementation on distributed architectures where communications are
slow and inter-machine synchronization is too costly. The schemes are tested on
simulated distributed architectures and, for the last one, on the Microsoft
Windows Azure platform, obtaining speed-ups with up to 32 Virtual Machines.
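A displacement-averaging scheme of the kind described (workers run local stochastic VQ passes from the last synchronized prototypes; a master averages their prototype displacements once per epoch) can be sketched in one process with 1-D data. The shard layout, learning rate, and deterministic initialization are illustrative assumptions, not the paper's exact scheme.

```python
def nearest(protos, x):
    # index of the closest prototype (1-D squared distance)
    return min(range(len(protos)), key=lambda k: (protos[k] - x) ** 2)

def parallel_svq(data, n_protos=3, n_workers=4, epochs=20, lr=0.05):
    """Distributed stochastic VQ sketch: each 'worker' runs online
    competitive-learning updates on its shard; the master averages the
    resulting prototype displacements (one synchronization per epoch)."""
    srt = sorted(data)
    # deterministic spread-out initialization over the data range
    protos = [srt[i * (len(srt) - 1) // (n_protos - 1)] for i in range(n_protos)]
    shards = [data[i::n_workers] for i in range(n_workers)]
    for _ in range(epochs):
        deltas = [0.0] * n_protos
        for shard in shards:                     # sequential stand-in for parallel workers
            local = list(protos)                 # all workers start from the synced state
            for x in shard:
                k = nearest(local, x)
                local[k] += lr * (x - local[k])  # stochastic VQ / online k-means step
            for k in range(n_protos):
                deltas[k] += local[k] - protos[k]
        # master: average displacements and synchronize
        protos = [protos[k] + deltas[k] / n_workers for k in range(n_protos)]
    return sorted(protos)
```

Averaging displacements rather than final prototypes keeps every worker's contribution relative to a common starting point, which is what makes the once-per-epoch synchronization cheap.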
|
1205.2292
|
Diachronic Linked Data: Towards Long-Term Preservation of Structured
Interrelated Information
|
cs.DB cs.DL
|
The Linked Data Paradigm is one of the most promising technologies for
publishing, sharing, and connecting data on the Web, and offers a new way for
data integration and interoperability. However, the proliferation of
distributed, inter-connected sources of information and services on the Web
poses significant new challenges for consistently managing a huge number of
large datasets and their interdependencies. In this paper we focus on the key
problem of preserving evolving structured interlinked data. We argue that a
number of issues that hinder applications and users are related to the temporal
aspect that is intrinsic in linked data. We present a number of real use cases
to motivate our approach, we discuss the problems that occur, and propose a
direction for a solution.
|
1205.2318
|
Systems biology beyond degree, hubs and scale-free networks: the case
for multiple metrics in complex networks
|
q-bio.QM cond-mat.stat-mech cs.SI physics.soc-ph
|
Modeling and topological analysis of networks in biological and other complex
systems must venture beyond the limited consideration of very few network
metrics like degree, betweenness or assortativity. A proper identification of
informative and redundant entities from many different metrics, using recently
demonstrated techniques, is essential. A holistic comparison of networks and
growth models is best achieved only with the use of such methods.
|
1205.2320
|
Publishing Life Science Data as Linked Open Data: the Case Study of
miRBase
|
cs.DB
|
This paper presents our Linked Open Data (LOD) infrastructures for genomic
and experimental data related to microRNA biomolecules. Legacy data from two
well-known microRNA databases with experimental data and observations, as well
as change and version information about microRNA entities, are fused and
exported as LOD. Our LOD server assists biologists to explore biological
entities and their evolution, and provides a SPARQL endpoint for applications
and services to query historical miRNA data and track changes, their causes and
effects.
|
1205.2334
|
Sparse Approximation via Penalty Decomposition Methods
|
cs.LG math.OC stat.CO stat.ML
|
In this paper we consider sparse approximation problems, that is, general
$l_0$ minimization problems with the $l_0$-"norm" of a vector being a part of
constraints or objective function. In particular, we first study the
first-order optimality conditions for these problems. We then propose penalty
decomposition (PD) methods for solving them, in which a sequence of penalty
subproblems is solved by a block coordinate descent (BCD) method. Under some
suitable assumptions, we establish that any accumulation point of the sequence
generated by the PD methods satisfies the first-order optimality conditions of
the problems. Furthermore, for the problems in which the $l_0$ part is the only
nonconvex part, we show that such an accumulation point is a local minimizer of
the problems. In addition, we show that any accumulation point of the sequence
generated by the BCD method is a saddle point of the penalty subproblem.
Moreover, for the problems in which the $l_0$ part is the only nonconvex part,
we establish that such an accumulation point is a local minimizer of the
penalty subproblem. Finally, we test the performance of our PD methods by
applying them to sparse logistic regression, sparse inverse covariance
selection, and compressed sensing problems. The computational results
demonstrate that our methods generally outperform the existing methods in terms
of solution quality and/or speed.
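A minimal, dependency-free PD/BCD loop for the constrained form min ||Ax - b||^2 s.t. ||x||_0 <= K can be sketched as follows: the x-block has a closed-form ridge-type solution, the y-block is a hard-thresholding step, and the penalty parameter is inflated between rounds of BCD. The splitting, the penalty schedule, and the iteration counts are generic illustrations, not the paper's exact algorithmic choices.

```python
def solve(M, v):
    # naive Gaussian elimination with partial pivoting, for small dense systems
    n = len(M)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def penalty_decomposition(A, b, K, rhos=(1.0, 4.0, 16.0), inner=15):
    """PD sketch for min ||Ax - b||^2 s.t. ||x||_0 <= K via the split
    min ||Ax - b||^2 + rho * ||x - y||^2 with ||y||_0 <= K."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[r][i] * b[r] for r in range(m)) for i in range(n)]
    y = [0.0] * n
    for rho in rhos:                 # outer loop: inflate the penalty
        for _ in range(inner):       # inner loop: BCD on the (x, y) blocks
            # x-block: closed form, (AtA + rho*I) x = Atb + rho*y
            M = [[AtA[i][j] + (rho if i == j else 0.0) for j in range(n)]
                 for i in range(n)]
            x = solve(M, [Atb[i] + rho * y[i] for i in range(n)])
            # y-block: hard-threshold, keep the K largest-magnitude entries
            keep = set(sorted(range(n), key=lambda i: -abs(x[i]))[:K])
            y = [x[i] if i in keep else 0.0 for i in range(n)]
    return y
```

On a small consistent system with a 2-sparse true solution, the loop identifies the support in the first x-step and then converges to the sparse least-squares solution as the penalty grows.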
|
1205.2345
|
Hajj and Umrah Event Recognition Datasets
|
cs.CV cs.CY
|
In this note, new Hajj and Umrah Event Recognition datasets (HUER) are
presented. The demonstrated datasets are based on videos and images taken
during 2011-2012 Hajj and Umrah seasons. HUER is the first collection of
datasets covering the six types of Hajj and Umrah ritual events (rotating in
Tawaf around Kabaa, performing Sa'y between Safa and Marwa, standing on the
mount of Arafat, staying overnight in Muzdalifah, staying two or three days in
Mina, and throwing Jamarat). The HUER datasets also contain video and image
databases for nine types of human actions during Hajj and Umrah (walking,
drinking from Zamzam water, sleeping, smiling, eating, praying, sitting,
shaving hairs and ablutions, reading the holy Quran and making duaa). The
spatial resolution is 1280 x 720 pixels for images and 640 x 480 pixels for
videos; videos have an average length of 20 seconds at a rate of 30 frames per
second.
|
1205.2382
|
Mesh Learning for Classifying Cognitive Processes
|
cs.NE cs.AI cs.CV stat.ML
|
A relatively recent advance in cognitive neuroscience has been multi-voxel
pattern analysis (MVPA), which enables researchers to decode brain states
and/or the type of information represented in the brain during a cognitive
operation. MVPA methods utilize machine learning algorithms to distinguish
among types of information or cognitive states represented in the brain, based
on distributed patterns of neural activity. In the current investigation, we
propose a new approach for representation of neural data for pattern analysis,
namely a Mesh Learning Model. In this approach, at each time instant, a star
mesh is formed around each voxel, such that the voxel corresponding to the
center node is surrounded by its p-nearest neighbors. The arc weights of each
mesh are estimated from the voxel intensity values by least squares method. The
estimated arc weights of all the meshes, called Mesh Arc Descriptors (MADs),
are then used to train a classifier, such as Neural Networks, k-Nearest
Neighbor, Na\"ive Bayes and Support Vector Machines. The proposed Mesh Model
was tested on neuroimaging data acquired via functional magnetic resonance
imaging (fMRI) during a recognition memory experiment using categorized word
lists, employing a previously established experimental paradigm (\"Oztekin &
Badre, 2011). Results suggest that the proposed Mesh Learning approach can
provide an effective algorithm for pattern analysis of brain activity during
cognitive processing.
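For a mesh of size p=2 the least-squares arc weights have a closed-form solution via the 2x2 normal equations, which keeps a sketch of the Mesh Arc Descriptor construction very short. The data and the neighbor lists below are hypothetical (neighborhoods are supplied directly rather than computed from p-nearest voxels), so this illustrates only the regression-and-concatenation step.

```python
def arc_weights(center, nbrs):
    """Least-squares arc weights for a star mesh with two neighbors:
    fit center[t] ~ w1*nbrs[0][t] + w2*nbrs[1][t] via the normal equations."""
    a = sum(x * x for x in nbrs[0])
    b = sum(x * y for x, y in zip(nbrs[0], nbrs[1]))
    c = sum(y * y for y in nbrs[1])
    d = sum(x * z for x, z in zip(nbrs[0], center))
    e = sum(y * z for y, z in zip(nbrs[1], center))
    det = a * c - b * b            # assumes the two neighbor series are independent
    return [(c * d - b * e) / det, (a * e - b * d) / det]

def mesh_arc_descriptors(series, neighbors):
    """Concatenate the arc weights of every voxel's star mesh into one
    feature vector (the Mesh Arc Descriptors fed to a classifier)."""
    feats = []
    for i, nbr_ids in enumerate(neighbors):
        feats.extend(arc_weights(series[i], [series[j] for j in nbr_ids]))
    return feats
```

When a voxel's series is an exact linear combination of its neighbors, the fitted arc weights recover the combination coefficients exactly.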
|
1205.2450
|
MIMO Relaying Broadcast Channels with Linear Precoding and Quantized
Channel State Information Feedback
|
cs.IT math.IT
|
Multi-antenna relaying has emerged as a promising technology to enhance the
system performance in cellular networks. However, when precoding techniques are
utilized to obtain multi-antenna gains, the system generally requires channel
state information (CSI) at the transmitters. We consider a linear precoding
scheme in a MIMO relaying broadcast channel with quantized CSI feedback from
both two-hop links. With this scheme, each remote user feeds back its quantized
CSI to the relay, and the relay sends back the quantized precoding information
to the base station (BS). An upper bound on the rate loss due to quantized
channel knowledge is first characterized. Then, in order to maintain the rate
loss within a predetermined gap for growing SNRs, a strategy of scaling
quantization quality of both two-hop links is proposed. It is revealed that the
numbers of feedback bits of both links should scale linearly with the transmit
power at the relay, while only the bit number of feedback from the relay to the
BS needs to grow with the increasing transmit power at the BS. Numerical
results are provided to verify the proposed strategy for feedback quality
control.
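The link between feedback bits and quantization quality is easy to probe numerically with random vector quantization, a standard limited-feedback baseline (not necessarily the paper's codebook, and simplified to real-valued vectors): the error in quantizing the channel direction falls as the codebook size 2^B grows, which is why the bit budget must scale with transmit power to cap the rate loss.

```python
import math
import random

def rvq_error(dim, bits, trials=200, seed=1):
    """Mean squared chordal distance (sin^2 of the angle) between a random
    unit-norm channel direction and its best match in a random codebook
    of 2**bits unit-norm codewords. Real-valued sketch of RVQ feedback."""
    rng = random.Random(seed)

    def unit(n):
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]

    err = 0.0
    for _ in range(trials):
        h = unit(dim)                              # channel direction
        book = [unit(dim) for _ in range(2 ** bits)]
        # pick the codeword with the largest |inner product| (best alignment)
        best = max(abs(sum(a * b for a, b in zip(h, c))) for c in book)
        err += 1.0 - best * best                   # residual chordal distance
    return err / trials
```

Comparing a 2-bit and a 6-bit codebook at the same dimension shows the shrinking quantization error that the scaling strategy exploits.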
|
1205.2465
|
Identifying And Weighting Integration Hypotheses On Open Data Platforms
|
cs.DB
|
Open data platforms such as data.gov or opendata.socrata.com provide a huge
amount of valuable information. Their free-for-all nature, the lack of
publishing standards and the multitude of domains and authors represented on
these platforms lead to new integration and standardization problems. At the
same time, crowd-based data integration techniques are emerging as a new way of
dealing with these problems. However, these methods still require input in the
form of specific questions or tasks that can be passed to the crowd. This paper
discusses integration problems on Open Data Platforms, and proposes a method
for identifying and ranking integration hypotheses in this context. We validate
our approach by conducting a comprehensive evaluation on one of the largest
Open Data platforms.
|