id | title | categories | abstract |
|---|---|---|---|
1311.2241 | Learning Gaussian Graphical Models with Observed or Latent FVSs | cs.LG stat.ML | Gaussian Graphical Models (GGMs) or Gauss Markov random fields are widely
used in many applications, and the trade-off between the modeling capacity and
the efficiency of learning and inference has been an important research
problem. In this paper, we study the family of GGMs with small feedback vertex
sets (FVSs), where an FVS is a set of nodes whose removal breaks all the
cycles. Exact inference such as computing the marginal distributions and the
partition function has complexity $O(k^{2}n)$ using message-passing algorithms,
where k is the size of the FVS, and n is the total number of nodes. We propose
efficient structure learning algorithms for two cases: 1) All nodes are
observed, which is useful in modeling social or flight networks where the FVS
nodes often correspond to a small number of high-degree nodes, or hubs, while
the rest of the network is modeled by a tree. Regardless of the maximum
degree, without knowing the full graph structure, we can exactly compute the
maximum likelihood estimate in $O(kn^2+n^2\log n)$ if the FVS is known or in
polynomial time if the FVS is unknown but has bounded size. 2) The FVS nodes
are latent variables, where structure learning is equivalent to decomposing an
inverse covariance matrix (exactly or approximately) into the sum of a
tree-structured matrix and a low-rank matrix. By incorporating efficient
inference into the learning steps, we can obtain a learning algorithm using
alternating low-rank correction with complexity $O(kn^{2}+n^{2}\log n)$ per
iteration. We also perform experiments on both synthetic data and real
flight-delay data to demonstrate the modeling capacity with FVSs of
various sizes.
|
1311.2252 | Semantic Sort: A Supervised Approach to Personalized Semantic
Relatedness | cs.CL cs.LG | We propose and study a novel supervised approach to learning statistical
semantic relatedness models from subjectively annotated training examples. The
proposed semantic model consists of parameterized co-occurrence statistics
associated with textual units of a large background knowledge corpus. We
present an efficient algorithm for learning such semantic models from a
training sample of relatedness preferences. Our method is corpus independent
and can essentially rely on any sufficiently large (unstructured) collection of
coherent texts. Moreover, the approach facilitates the fitting of semantic
models for specific users or groups of users. We present the results of an
extensive range of experiments, from small to large scale, indicating that the
proposed method is effective and competitive with the state-of-the-art.
|
1311.2271 | More data speeds up training time in learning halfspaces over sparse
vectors | cs.LG | The increased availability of data in recent years has led several authors to
ask whether it is possible to use data as a {\em computational} resource. That
is, if more data is available, beyond the sample complexity limit, is it
possible to use the extra examples to speed up the computation time required to
perform the learning task?
We give the first positive answer to this question for a {\em natural
supervised learning problem} --- we consider agnostic PAC learning of
halfspaces over $3$-sparse vectors in $\{-1,1,0\}^n$. This class is
inefficiently learnable using $O\left(n/\epsilon^2\right)$ examples. Our main
contribution is a novel, non-cryptographic, methodology for establishing
computational-statistical gaps, which allows us to show that, under a widely
believed assumption that refuting random $\mathrm{3CNF}$ formulas is hard, it
is impossible to efficiently learn this class using only
$O\left(n/\epsilon^2\right)$ examples. We further show that under stronger
hardness assumptions, even $O\left(n^{1.499}/\epsilon^2\right)$ examples do not
suffice. On the other hand, we show a new algorithm that learns this class
efficiently using $\tilde{\Omega}\left(n^2/\epsilon^2\right)$ examples. This
formally establishes the tradeoff between sample and computational complexity
for a natural supervised learning problem.
|
1311.2272 | From average case complexity to improper learning complexity | cs.LG cs.CC | The basic problem in the PAC model of computational learning theory is to
determine which hypothesis classes are efficiently learnable. There is
presently a dearth of results showing hardness of learning problems. Moreover,
the existing lower bounds fall short of the best known algorithms.
The biggest challenge in proving complexity results is to establish hardness
of {\em improper learning} (a.k.a. representation-independent learning). The
difficulty in proving lower bounds for improper learning is that the standard
reductions from $\mathbf{NP}$-hard problems do not seem to apply in this
context. There is essentially only one known approach to proving lower bounds
on improper learning. It was initiated in (Kearns and Valiant 89) and relies on
cryptographic assumptions.
We introduce a new technique for proving hardness of improper learning, based
on reductions from problems that are hard on average. We put forward a (fairly
strong) generalization of Feige's assumption (Feige 02) about the complexity of
refuting random constraint satisfaction problems. Combining this assumption
with our new technique yields far reaching implications. In particular,
1. Learning $\mathrm{DNF}$'s is hard.
2. Agnostically learning halfspaces with a constant approximation ratio is
hard.
3. Learning an intersection of $\omega(1)$ halfspaces is hard.
|
1311.2276 | A Quantitative Evaluation Framework for Missing Value Imputation
Algorithms | cs.LG | We consider the problem of quantitatively evaluating missing value imputation
algorithms. Given a dataset with missing values and a choice of several
imputation algorithms to fill them in, there is currently no principled way to
rank the algorithms using a quantitative metric. We develop a framework based
on treating imputation evaluation as a problem of comparing two distributions
and show how it can be used to compute quantitative metrics. We present an
efficient procedure for applying this framework to practical datasets,
demonstrate several metrics derived from the existing literature on comparing
distributions, and propose a new metric called Neighborhood-based Dissimilarity
Score which is fast to compute and provides similar results. Results are shown
on several datasets, metrics, and imputation algorithms.
|
1311.2296 | Newton based Stochastic Optimization using q-Gaussian Smoothed
Functional Algorithms | math.OC cs.IT math.IT | We present the first q-Gaussian smoothed functional (SF) estimator of the
Hessian and the first Newton-based stochastic optimization algorithm that
estimates both the Hessian and the gradient of the objective function using
q-Gaussian perturbations. Our algorithm requires only two system simulations
(regardless of the parameter dimension) and estimates both the gradient and the
Hessian at each update epoch using these. We also present a proof of
convergence of the proposed algorithm. In a related recent work (Ghoshdastidar
et al., 2013), we presented gradient SF algorithms based on the q-Gaussian
perturbations. Our work extends prior work on smoothed functional algorithms by
generalizing the class of perturbation distributions: most distributions
reported in the literature for which SF algorithms are known to work turn out
to be special cases of the q-Gaussian distribution. Besides studying the
convergence properties of our algorithm analytically, we also show the results
of several numerical simulations on a model of a queuing network that
illustrate the significance of the proposed method. In particular, we observe
that our algorithm performs better in most cases, over a wide range of
q-values, in comparison to Newton SF algorithms with the Gaussian (Bhatnagar,
2007) and Cauchy perturbations, as well as the gradient q-Gaussian SF
algorithms (Ghoshdastidar et al., 2013).
|
1311.2311 | Direct solutions to tropical optimization problems with nonlinear
objective functions and boundary constraints | math.OC cs.SY | We examine two multidimensional optimization problems that are formulated in
terms of tropical mathematics. The problems are to minimize nonlinear objective
functions, which are defined through the multiplicative conjugate vector
transposition on vectors of a finite-dimensional semimodule over an idempotent
semifield, subject to boundary constraints. The solution approach involves
deriving sharp bounds on the objective functions, followed by determining
vectors that attain these bounds. Based on
the approach, direct solutions to the problems are obtained in a compact vector
form. To illustrate, we apply the results to solving constrained Chebyshev
approximation and location problems, and give numerical examples.
|
1311.2321 | On Improving the Balance between the Completion Time and Decoding Delay
in Instantly Decodable Network Coded Systems | cs.IT math.IT | This paper studies the complicated interplay of the completion time (as a
measure of throughput) and the decoding delay performance in instantly
decodable network coded (IDNC) systems over wireless broadcast erasure channels
with memory, and proposes two new algorithms that improve the balance between
the completion time and decoding delay of broadcasting a block of packets. We
first formulate the IDNC packet selection problem that provides joint control
of the completion time and decoding delay as a stochastic shortest path (SSP)
problem. However, since finding the optimal packet selection policy using the
SSP technique is computationally complex, we employ its geometric structure to
find some guidelines and use them to propose two heuristic packet selection
algorithms that can efficiently improve the balance between the completion time
and decoding delay for broadcast erasure channels with a wide range of memory
conditions. It is shown that each one of the two proposed algorithms is
superior for a specific range of memory conditions. Furthermore, we show that
the proposed algorithms achieve an improved fairness in terms of the decoding
delay across all receivers.
|
1311.2331 | Robust Adaptive Beamforming Based on Low-Complexity Shrinkage-Based
Mismatch Estimation | cs.IT math.IT | In this work, we propose a low-complexity robust adaptive beamforming (RAB)
technique which estimates the steering vector using a Low-Complexity
Shrinkage-Based Mismatch Estimation (LOCSME) algorithm. The proposed LOCSME
algorithm estimates the covariance matrix of the input data and the
interference-plus-noise covariance (INC) matrix by using the Oracle
Approximating Shrinkage (OAS) method. LOCSME only requires prior knowledge of
the angular sector in which the actual steering vector is located and the
antenna array geometry. LOCSME does not require a costly optimization algorithm
and does not need to know extra information from the interferers, which avoids
direction finding for all interferers. Simulations show that LOCSME outperforms
previously reported RAB algorithms and has a performance very close to the
optimum.
|
1311.2334 | Embed and Conquer: Scalable Embeddings for Kernel k-Means on MapReduce | cs.LG | The kernel $k$-means is an effective method for data clustering which extends
the commonly-used $k$-means algorithm to work on a similarity matrix over
complex data structures. The kernel $k$-means algorithm is however
computationally very complex, as it requires the complete kernel matrix to be
calculated and stored. Further, the kernelized nature of the kernel $k$-means
algorithm hinders the parallelization of its computations on modern
infrastructures for distributed computing. In this paper, we define a
family of kernel-based low-dimensional embeddings that allows for scaling
kernel $k$-means on MapReduce via an efficient and unified parallelization
strategy. Afterwards, we propose two methods for low-dimensional embedding that
adhere to our definition of the embedding family. Exploiting the proposed
parallelization strategy, we present two scalable MapReduce algorithms for
kernel $k$-means. We demonstrate the effectiveness and efficiency of the
proposed algorithms through an empirical evaluation on benchmark data sets.
|
1311.2337 | The Third-Order Term in the Normal Approximation for the AWGN Channel | cs.IT math.IT | This paper shows that, under the average error probability formalism, the
third-order term in the normal approximation for the additive white Gaussian
noise channel with a maximal or equal power constraint is at least $\frac{1}{2}
\log n + O(1)$. This matches the upper bound derived by
Polyanskiy-Poor-Verd\'{u} (2010).
|
1311.2342 | Anatomy of Graph Matching based on an XQuery and RDF Implementation | cs.DB | Graphs are becoming one of the most popular data modeling paradigms since
they are able to model complex relationships that cannot be easily captured
using traditional data models. One of the major tasks of graph management is
graph matching, which aims to find all of the subgraphs in a data graph that
match a query graph. In the literature, proposals in this context are
classified into two different categories: graph-at-a-time, which processes the
whole query graph at once, and vertex-at-a-time, which processes a single
vertex of the query graph at a time. In this paper, we propose a
new vertex-at-a-time proposal that is based on graphlets, each of which
comprises a vertex of a graph, all of the immediate neighbors of that vertex,
and all of the edges that relate those neighbors. Furthermore, we also use the
concept of minimum hub covers, each of which comprises a subset of vertices in
the query graph that account for all of the edges in that graph. We present the
algorithms of our proposal and describe an implementation based on XQuery and
RDF. Our evaluation results show that our proposal is an appealing approach to
graph matching.
|
1311.2346 | Some New Results on Equivalency of Collusion-Secure Properties for
Reed-Solomon Codes | cs.IT math.IT math.RA | A. Silverberg (IEEE Trans. Inform. Theory 49, 2003) proposed a question on
the equivalence of identifiable parent property and traceability property for
Reed-Solomon code family. Earlier studies on Silverberg's problem motivate us
to think of the stronger version of the question on equivalence of separation
and traceability properties. Both, however, still remain open. In this article,
we integrate all of the previous work on this problem in an algebraic framework
and present some new results. It is notable that the concept of a subspace
subcode of a Reed-Solomon code, introduced in error-correcting code theory,
provides an interesting perspective on our topic.
|
1311.2349 | Providing Trustworthy Contributions via a Reputation Framework in Social
Participatory Sensing Systems | cs.SI cs.IR | Social participatory sensing is a newly proposed paradigm that tries to
address the limitations of participatory sensing by leveraging online social
networks as an infrastructure. A critical issue in the success of this paradigm
is to assure the trustworthiness of contributions provided by participants. In
this paper, we propose an application-agnostic reputation framework for social
participatory sensing systems. Our framework considers both the quality of
contribution and the trustworthiness level of participant within the social
network. These two aspects are then combined via a fuzzy inference system to
arrive at a final trust rating for a contribution. A reputation score is also
calculated for each participant by aggregating the trust ratings assigned to
his or her contributions. We adopt the PageRank algorithm as the building block
for our reputation module. Extensive simulations demonstrate the efficacy of our
framework in achieving high overall trust and assigning accurate reputation
scores.
|
1311.2378 | An Empirical Evaluation of Sequence-Tagging Trainers | cs.LG | The task of assigning label sequences to a set of observed sequences is
common in computational linguistics. Several models for sequence labeling have
been proposed over the last few years. Here, we focus on discriminative models
for sequence labeling. Many batch and online (updating model parameters after
visiting each example) learning algorithms have been proposed in the
literature. On large datasets, online algorithms are preferred as batch
learning methods are slow. These online algorithms were designed to solve
either a primal or a dual problem. However, there has been no systematic
comparison of these algorithms in terms of their speed, generalization
performance (accuracy/likelihood), and their ability to reach steady-state
generalization performance quickly. With this aim, we compare different
algorithms and make recommendations useful for practitioners. We conclude that the
selection of an algorithm for sequence labeling depends on the evaluation
criterion used and its implementation simplicity.
|
1311.2433 | Joint recovery algorithms using difference of innovations for
distributed compressed sensing | cs.IT math.IT | Distributed compressed sensing is concerned with representing an ensemble of
jointly sparse signals using as few linear measurements as possible. Two novel
joint reconstruction algorithms for distributed compressed sensing are
presented in this paper. These algorithms are based on the idea of using one of
the signals as side information; this allows joint sparsity to be exploited
more effectively than in existing schemes. They provide gains in reconstruction
quality, especially when the nodes acquire few measurements, so that the system
is able to operate with fewer measurements than are required by other existing
schemes. We show that the algorithms achieve better performance than the
state-of-the-art.
|
1311.2448 | Recovery of Sparse Matrices via Matrix Sketching | cs.NA cs.IT math.IT | In this paper, we consider the problem of recovering an unknown sparse matrix
X from the matrix sketch Y = AX B^T. The dimension of Y is less than that of X,
and A and B are known matrices. This problem can be solved using standard
compressive sensing (CS) theory after converting it to vector form using the
Kronecker operation. In this case, the measurement matrix assumes a Kronecker
product structure. However, as the matrix dimensions increase, the associated
computational complexity makes this approach prohibitive. We extend two algorithms,
fast iterative shrinkage-thresholding algorithm (FISTA) and orthogonal matching
pursuit (OMP) to solve this problem in matrix form without employing the
Kronecker product. While both FISTA and OMP with matrix inputs are shown to be
equivalent in performance to their vector counterparts with the Kronecker
product, solving them in matrix form is shown to be computationally more
efficient. We show that the computational gain achieved by FISTA with matrix
inputs over its vector form is more significant compared to that achieved by
OMP.
|
1311.2460 | Vision-Guided Robot Hearing | cs.RO cs.CV | Natural human-robot interaction in complex and unpredictable environments is
one of the main research lines in robotics. In typical real-world scenarios,
humans are at some distance from the robot and the acquired signals are
strongly impaired by noise, reverberations and other interfering sources. In
this context, the detection and localisation of speakers plays a key role since
it is the pillar on which several tasks (e.g., speech recognition and speaker
tracking) rely. We address the problem of how to detect and localize people
that are both seen and heard by a humanoid robot. We introduce a hybrid
deterministic/probabilistic model. The deterministic component allows us to map
the visual information into the auditory space, while the probabilistic
component lets the visual features guide the grouping of the auditory features
into audio-visual (AV) objects. The proposed model and the associated
algorithm are implemented in real-time (17 FPS) using a stereoscopic camera
pair and two microphones embedded into the head of the humanoid robot NAO. We
performed experiments on (i) synthetic data, (ii) a publicly available data set
and (iii) data acquired using the robot. The results we obtained validate the
approach and encourage us to further investigate how vision can help robot
hearing.
|
1311.2483 | Global Sensitivity Analysis with Dependence Measures | math.ST cs.LG stat.ML stat.TH | Global sensitivity analysis with variance-based measures suffers from several
theoretical and practical limitations, since they focus only on the variance of
the output and handle multivariate variables in a limited way. In this paper,
we introduce a new class of sensitivity indices based on dependence measures
which overcomes these insufficiencies. Our approach originates from the idea of
comparing the output distribution with its conditional counterpart when one of
the input variables is fixed. We establish that this comparison yields
previously proposed indices when it is performed with Csisz\'ar f-divergences, as
well as sensitivity indices which are well-known dependence measures between
random variables. This leads us to investigate completely new sensitivity
indices based on recent state-of-the-art dependence measures, such as distance
correlation and the Hilbert-Schmidt independence criterion. We also emphasize
the potential of feature selection techniques relying on such dependence
measures as alternatives to screening in high dimension.
|
1311.2492 | Notes on Elementary Spectral Graph Theory. Applications to Graph
Clustering Using Normalized Cuts | cs.CV | These are notes on the method of normalized graph cuts and its applications
to graph clustering. I provide a fairly thorough treatment of this deeply
original method due to Shi and Malik, including complete proofs. I include the
necessary background on graphs and graph Laplacians. I then explain in detail
how the eigenvectors of the graph Laplacian can be used to draw a graph. This
is an attractive application of graph Laplacians. The main thrust of this paper
is the method of normalized cuts. I give a detailed account for K = 2 clusters,
and also for K > 2 clusters, based on the work of Yu and Shi. Three points that
do not appear to have been clearly articulated before are elaborated:
1. The solutions of the main optimization problem should be viewed as tuples
in the K-fold Cartesian product of the projective space RP^{N-1}.
2. When K > 2, the solutions of the relaxed problem should be viewed as
elements of the Grassmannian G(K,N).
3. Two possible Riemannian distances are available to compare the closeness
of solutions: (a) The distance on (RP^{N-1})^K. (b) The distance on the
Grassmannian.
I also clarify what should be the necessary and sufficient conditions for a
matrix to represent a partition of the vertices of a graph to be clustered.
|
1311.2495 | The Noisy Power Method: A Meta Algorithm with Applications | cs.DS cs.LG | We provide a new robust convergence analysis of the well-known power method
for computing the dominant singular vectors of a matrix that we call the noisy
power method. Our result characterizes the convergence behavior of the
algorithm when a significant amount of noise is introduced after each
matrix-vector multiplication. The noisy power method can be seen as a
meta-algorithm that has recently found a number of important applications in a
broad range of machine learning problems including alternating minimization for
matrix completion, streaming principal component analysis (PCA), and
privacy-preserving spectral analysis. Our general analysis subsumes several
existing ad-hoc convergence bounds and resolves a number of open problems in
multiple applications including streaming PCA and privacy-preserving singular
vector computation.
|
1311.2503 | Predictable Feature Analysis | cs.LG stat.ML | Every organism in an environment, whether biological, robotic or virtual,
must be able to predict certain aspects of its environment in order to survive
or perform whatever task is intended. It needs a model that is capable of
estimating the consequences of possible actions, so that planning, control, and
decision-making become feasible. For scientific purposes, such models are
usually created in a problem specific manner using differential equations and
other techniques from control- and system-theory. In contrast to that, we aim
for an unsupervised approach that builds up the desired model in a
self-organized fashion. Inspired by Slow Feature Analysis (SFA), our approach
is to extract sub-signals from the input that behave as predictably as
possible. These "predictable features" are highly relevant for modeling,
because, by definition, predictability is a desired property of the needed
consequence-estimating model. In our approach, we measure
predictability with respect to a certain prediction model. We focus here on the
solution of the arising optimization problem and present a tractable algorithm
based on algebraic methods which we call Predictable Feature Analysis (PFA). We
prove that the algorithm finds the globally optimal signal, if this signal can
be predicted with low error. To deal with cases where the optimal signal has a
significant prediction error, we provide a robust, heuristically motivated
variant of the algorithm and verify it empirically. Additionally, we give
formal criteria a prediction model must meet to be suitable for measuring
predictability in the PFA setting and also provide a suitable default-model
along with a formal proof that it meets these criteria.
|
1311.2504 | A Semi-automated Peer-review System | cs.DL cs.HC cs.SI physics.soc-ph | A semi-supervised model of peer review is introduced that is intended to
overcome the bias and incompleteness of traditional peer review. Traditional
approaches are subject to human biases, while consensus decision-making is
constrained by sparse information. Here, the architecture for one potential
improvement (a semi-supervised, human-assisted classifier) to the traditional
approach will be introduced and evaluated. To evaluate the potential advantages
of such a system, hypothetical receiver operating characteristic (ROC) curves
for both approaches will be assessed. This will provide more specific
indications of how automation would be beneficial in the manuscript evaluation
process. In conclusion, the implications for such a system on measurements of
scientific impact and improving the quality of open submission repositories
will be discussed.
|
1311.2505 | On optimal constacyclic codes | cs.IT math-ph math.IT math.MP | In this paper we investigate the class of constacyclic codes, which is a
natural generalization of the class of cyclic and negacyclic codes. This class
of codes is interesting in the sense that it contains codes with good or even
optimal parameters. In this light, we propose constructions of families of
classical block and convolutional maximum-distance-separable (MDS) constacyclic
codes, as well as families of asymmetric quantum MDS codes derived from
(classical-block) constacyclic codes. These results are mainly derived from
investigating suitable properties of the cyclotomic cosets of the
corresponding codes.
|
1311.2507 | Robust Beamforming for Secure Communication in Systems with Wireless
Information and Power Transfer | cs.IT math.IT | This paper considers a multiuser multiple-input single-output (MISO) downlink
system with simultaneous wireless information and power transfer. In
particular, we focus on secure communication in the presence of passive
eavesdroppers and potential eavesdroppers (idle legitimate receivers). We study
the design of a resource allocation algorithm minimizing the total transmit
power for the case when the legitimate receivers are able to harvest energy
from radio frequency signals. Our design advocates the dual use of both
artificial noise and energy signals in providing secure communication and
facilitating efficient wireless energy transfer. The algorithm design is
formulated as a non-convex optimization problem. The problem formulation takes
into account artificial noise and energy signal generation for protecting the
transmitted information against both considered types of eavesdroppers when
imperfect channel state information (CSI) of the potential eavesdroppers and no
CSI of the passive eavesdroppers are available at the transmitter. In light of
the intractability of the problem, we reformulate the considered problem by
replacing a non-convex probabilistic constraint with a convex deterministic
constraint. Then, a semi-definite programming (SDP) relaxation approach is
adopted to obtain the optimal solution for the reformulated problem.
Furthermore, we propose a suboptimal resource allocation scheme with low
computational complexity for providing communication secrecy and facilitating
efficient energy transfer. Simulation results demonstrate a close-to-optimal
performance achieved by the proposed schemes and significant transmit power
savings by optimization of the artificial noise and energy signal generation.
|
1311.2524 | Rich feature hierarchies for accurate object detection and semantic
segmentation | cs.CV | Object detection performance, as measured on the canonical PASCAL VOC
dataset, has plateaued in the last few years. The best-performing methods are
complex ensemble systems that typically combine multiple low-level image
features with high-level context. In this paper, we propose a simple and
scalable detection algorithm that improves mean average precision (mAP) by more
than 30% relative to the previous best result on VOC 2012---achieving a mAP of
53.3%. Our approach combines two key insights: (1) one can apply high-capacity
convolutional neural networks (CNNs) to bottom-up region proposals in order to
localize and segment objects and (2) when labeled training data is scarce,
supervised pre-training for an auxiliary task, followed by domain-specific
fine-tuning, yields a significant performance boost. Since we combine region
proposals with CNNs, we call our method R-CNN: Regions with CNN features. We
also compare R-CNN to OverFeat, a recently proposed sliding-window detector
based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by
a large margin on the 200-class ILSVRC2013 detection dataset. Source code for
the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
|
1311.2526 | User recommendation in reciprocal and bipartite social networks -- a
case study of online dating | cs.SI cs.IR physics.soc-ph | Many social networks in our daily life are bipartite networks built on
reciprocity. How can we recommend users/friends to a user so that the user is
both interested in and attractive to the recommended users? In this research, we propose
a new collaborative filtering model to improve user recommendations in
reciprocal and bipartite social networks. The model considers a user's "taste"
in picking others and "attractiveness" in being picked by others. A case study
of an online dating network shows that the new model has good performance in
recommending both initial and reciprocal contacts.
|
1311.2531 | Motility at the origin of life: Its characterization and a model | cs.AI cs.NE nlin.AO nlin.PS q-bio.PE | Due to recent advances in synthetic biology and artificial life, the origin
of life is currently a hot topic of research. We review the literature and
argue that the two traditionally competing "replicator-first" and
"metabolism-first" approaches are merging into one integrated theory of
individuation and evolution. We contribute to the maturation of this more
inclusive approach by highlighting some problematic assumptions that still lead
to an impoverished conception of the phenomenon of life. In particular, we
argue that the new consensus has so far failed to consider the relevance of
intermediate timescales. We propose that an adequate theory of life must
account for the fact that all living beings are situated in at least four
distinct timescales, which are typically associated with metabolism, motility,
development, and evolution. On this view, self-movement, adaptive behavior and
morphological changes could have already been present at the origin of life. In
order to illustrate this possibility we analyze a minimal model of life-like
phenomena, namely of precarious, individuated, dissipative structures that can
be found in simple reaction-diffusion systems. Based on our analysis we suggest
that processes in intermediate timescales could have already been operative in
prebiotic systems. They may have facilitated and constrained changes occurring
in the faster- and slower-paced timescales of chemical self-individuation and
evolution by natural selection, respectively.
|
1311.2540 | Asymmetric numeral systems: entropy coding combining speed of Huffman
coding with compression rate of arithmetic coding | cs.IT math.IT | Modern data compression is mainly based on two approaches to entropy
coding: Huffman coding (HC) and arithmetic/range coding (AC). The former is much
faster, but approximates probabilities with powers of 2, usually leading to
relatively low compression rates. The latter uses nearly exact probabilities,
easily approaching the theoretical compression rate limit (Shannon entropy), but
at the cost of much greater computation.
  Asymmetric numeral systems (ANS) is a new approach to accurate entropy
coding that ends this trade-off between speed and rate: the recent
implementation [1] provides about $50\%$ faster decoding than HC for a
256-symbol alphabet, with a compression rate similar to that provided by AC.
This advantage comes from being simpler than AC: a single natural number is
used as the state, instead of two numbers representing a range. Besides
simplifying renormalization, this allows the entire behavior for a given
probability distribution to be put into a relatively small table defining an
entropy coding automaton. The memory cost of such a table for a 256-symbol
alphabet is a few kilobytes. There is large freedom in choosing a specific
table; using a pseudorandom number generator initialized with a cryptographic
key for this purpose allows the data to be simultaneously encrypted.
  This article also introduces and discusses many other variants of this new
entropy coding approach, which can provide direct alternatives to standard AC,
to large-alphabet range coding, or to approximate quasi-arithmetic coding.
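The state-update idea (a single natural number as the state) can be sketched with a minimal range variant of ANS (rANS). This is an illustrative toy, not the tabled, renormalized coder benchmarked above: renormalization is omitted, so the state grows as an unbounded Python integer, and the two-symbol alphabet and frequencies are made up.

```python
def build_cum(freqs):
    """Cumulative frequency offsets; M is the total frequency."""
    cum, total = {}, 0
    for s, f in freqs.items():
        cum[s] = total
        total += f
    return cum, total

def encode(symbols, freqs):
    cum, M = build_cum(freqs)
    x = 0
    # Encode in reverse so that decoding pops symbols in original order.
    for s in reversed(symbols):
        f = freqs[s]
        x = (x // f) * M + cum[s] + (x % f)
    return x

def decode(x, n, freqs):
    cum, M = build_cum(freqs)
    out = []
    for _ in range(n):
        slot = x % M
        # The slot falls inside exactly one symbol's cumulative range.
        s = next(t for t in freqs if cum[t] <= slot < cum[t] + freqs[t])
        x = freqs[s] * (x // M) + slot - cum[s]
        out.append(s)
    return out

freqs = {"a": 3, "b": 1}            # P(a) = 3/4, P(b) = 1/4
msg = list("aababaaa")
state = encode(msg, freqs)          # the whole message in one integer
assert decode(state, len(msg), freqs) == msg
```

Frequent symbols grow the state more slowly than rare ones, which is where the near-optimal compression rate comes from.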
|
1311.2542 | Toward a unified theory of sparse dimensionality reduction in Euclidean
space | cs.DS cs.CG cs.IT math.IT math.PR stat.ML | Let $\Phi\in\mathbb{R}^{m\times n}$ be a sparse Johnson-Lindenstrauss
transform [KN14] with $s$ non-zeroes per column. For a subset $T$ of the unit
sphere, $\varepsilon\in(0,1/2)$ given, we study settings for $m,s$ required to
ensure $$ \mathop{\mathbb{E}}_\Phi \sup_{x\in T} \left|\|\Phi x\|_2^2 - 1
\right| < \varepsilon , $$ i.e. so that $\Phi$ preserves the norm of every
$x\in T$ simultaneously and multiplicatively up to $1+\varepsilon$. We
introduce a new complexity parameter, which depends on the geometry of $T$, and
show that it suffices to choose $s$ and $m$ such that this parameter is small.
Our result is a sparse analog of Gordon's theorem, which was concerned with a
dense $\Phi$ having i.i.d. Gaussian entries. We qualitatively unify several
results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and
Fourier-based restricted isometries. Our work also implies new results in using
the sparse Johnson-Lindenstrauss transform in numerical linear algebra,
classical and model-based compressed sensing, manifold learning, and
constrained least squares problems such as the Lasso.
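As a toy empirical illustration of the object being studied (not the paper's analysis), the sketch below builds a sparse sign matrix with exactly $s$ non-zeroes per column and measures the worst norm distortion over a small random set $T$ of unit vectors; all dimensions and thresholds are illustrative.

```python
import numpy as np

def sparse_jl(m, n, s, rng):
    """m x n matrix with exactly s non-zeroes per column, entries +-1/sqrt(s),
    so every column has unit Euclidean norm."""
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Phi

rng = np.random.default_rng(0)
n, m, s = 1000, 200, 8
Phi = sparse_jl(m, n, s, rng)

# T: a small random subset of the unit sphere (a stand-in for the
# structured sets T treated in the paper).
T = rng.standard_normal((50, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)

# sup over x in T of | ||Phi x||_2^2 - 1 |
distortion = np.abs(np.sum((T @ Phi.T) ** 2, axis=1) - 1.0)
assert distortion.max() < 0.5
```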
|
1311.2547 | Learning Mixtures of Linear Classifiers | cs.LG stat.ML | We consider a discriminative learning (regression) problem, whereby the
regression function is a convex combination of k linear classifiers. Existing
approaches are based on the EM algorithm, or similar techniques, without
provable guarantees. We develop a simple method based on spectral techniques
and a `mirroring' trick, that discovers the subspace spanned by the
classifiers' parameter vectors. Under a probabilistic assumption on the feature
vector distribution, we prove that this approach has nearly optimal statistical
efficiency.
|
1311.2549 | A doubling construction for self-orthogonal codes | cs.IT math.IT | A simple construction of quaternary Hermitian self-orthogonal codes with
parameters $[2n+1,k+1]$ and $[2n+2,k+2]$ from a given pair of self-orthogonal
$[n,k]$ codes, and its link to quantum codes is considered. As an application,
an optimal quaternary linear $[28,20,6]$ dual containing code is found that
yields a new optimal $[[28,12,6]]$ quantum code.
|
1311.2551 | Social Networks and Collective Intelligence: A Return to the Agora | cs.SI cs.CY physics.soc-ph | Nowadays, acquisition of trustable information is increasingly important in
both professional and private contexts. However, establishing what information
is trustable and what is not, is a very challenging task. For example, how can
information quality be reliably assessed? How can sources' credibility be
fairly assessed? How can gatekeeping processes be found trustworthy when
filtering out news and deciding ranking and priorities of traditional media? An
Internet-based solution to a human-based ancient issue is being studied, and it
is called Polidoxa, from Greek "poly", meaning "many" or "several" and "doxa",
meaning "common belief" or "popular opinion". This old problem will be solved
by means of ancient philosophies and processes with truly modern tools and
technologies. This is why this work required a collaborative and
interdisciplinary joint effort from researchers with very different backgrounds
and institutes with significantly different agendas. Polidoxa aims at offering:
1) a trust-based search engine algorithm, which exploits stigmergic behaviours
of users' networks, 2) a trust-based social network, where the notion of trust
derives from network activity and 3) a holonic system for bottom-up
self-protection and social privacy. By presenting the Polidoxa solution, this
work also describes the current state of traditional media as well as newer
ones, providing an accurate analysis of major search engines such as Google and
social networks (e.g., Facebook). The advantages that Polidoxa offers, compared
to these, are also clearly detailed and motivated. Finally, a Twitter
application (Polidoxa@twitter) which enables experimentation of basic Polidoxa
principles is presented.
|
1311.2561 | Performing edge detection by difference of Gaussians using q-Gaussian
kernels | cs.CV physics.comp-ph | In image processing, edge detection is a valuable tool to perform the
extraction of features from an image. This detection reduces the amount of
information to be processed, since redundant information (considered less
relevant) can be discarded. The technique of edge detection consists of
determining the points of a digital image where the intensity changes sharply.
These changes are due, for example, to discontinuities in the orientation of a
surface. A well-known method of edge detection is the Difference of Gaussians
(DoG). The method consists of subtracting two Gaussians, where one kernel has a
standard deviation smaller than the other's. The convolution between the
subtraction of kernels and the input image results in the edge detection of
this image. This paper introduces a method of extracting edges using DoG with
kernels based on the q-Gaussian probability distribution, derived from the
q-statistic proposed by Constantino Tsallis. To demonstrate the method's
potential, we compare the introduced method with the traditional DoG using
Gaussian kernels. The results show that the proposed method can extract
edges with more accurate details.
|
1311.2621 | Determining Leishmania Infection Levels by Automatic Analysis of
Microscopy Images | cs.CV | Analysis of microscopy images is one important tool in many fields of
biomedical research, as it allows the quantification of a multitude of
parameters at the cellular level. However, manual counting of these images is
both tiring and unreliable and ultimately very time-consuming for biomedical
researchers. Not only does this slow down the overall research process, it also
introduces counting errors due to a lack of objectivity and consistency
inherent to the researchers' own human nature.
This thesis addresses this issue by automatically determining infection
indexes of macrophages parasitized by Leishmania in microscopy images using
computer vision and pattern recognition methodologies. Initially images are
submitted to a pre-processing stage that normalizes the illumination
conditions. Three algorithms are then applied in parallel to each
image. Algorithm A intends to detect macrophage nuclei and consists of
segmentation via adaptive multi-threshold, and classification of resulting
regions using a set of collected features. Algorithm B intends to detect
parasites and is similar to Algorithm A but the adaptive multi-threshold is
parameterized with a different constraint vector. Algorithm C intends to
detect the macrophages and parasites cytoplasm and consists of a cut-off
version of the previous two algorithms, where the classification step is
skipped. Regions with multiple nuclei or parasites are processed by a voting
system that employs both a Support Vector Machine and a set of region features
for determining the number of objects present in each region. The previous vote
is then taken into account as the number of mixtures to be used in a Gaussian
Mixture Model to decluster the said region. Finally each parasite is assigned
to, at most, a single macrophage using minimum Euclidean distance to a cell
nucleus, thus quantifying Leishmania infection levels.
|
1311.2626 | Second-order Shape Optimization for Geometric Inverse Problems in Vision | cs.CV | We develop a method for optimization in shape spaces, i.e., sets of surfaces
modulo re-parametrization. Unlike previously proposed gradient flows, we
achieve superlinear convergence rates through a subtle approximation of the
shape Hessian, which is generally hard to compute and suffers from a series of
degeneracies. Our analysis highlights the role of mean curvature motion in
comparison with first-order schemes: instead of surface area, our approach
penalizes deformation, either by its Dirichlet energy or total variation.
The latter regularizer sparks the development of an alternating direction method
multipliers on triangular meshes. Therein, a conjugate-gradients solver enables
us to bypass formation of the Gaussian normal equations appearing in the course
of the overall optimization. We combine all of the aforementioned ideas in a
versatile geometric variation-regularized Levenberg-Marquardt-type method
applicable to a variety of shape functionals, depending on intrinsic properties
of the surface such as normal field and curvature as well as its embedding into
space. Promising experimental results are reported.
|
1311.2634 | A New Look at Dual-Hop Relaying: Performance Limits with Hardware
Impairments | cs.IT math.IT | Physical transceivers have hardware impairments that create distortions which
degrade the performance of communication systems. The vast majority of
technical contributions in the area of relaying neglect hardware impairments
and, thus, assume ideal hardware. Such approximations make sense in low-rate
systems, but can lead to very misleading results when analyzing future
high-rate systems. This paper quantifies the impact of hardware impairments on
dual-hop relaying, for both amplify-and-forward and decode-and-forward
protocols. The outage probability (OP) in these practical scenarios is a
function of the effective end-to-end signal-to-noise-and-distortion ratio
(SNDR). This paper derives new closed-form expressions for the exact and
asymptotic OPs, accounting for hardware impairments at the source, relay, and
destination. A similar analysis for the ergodic capacity is also pursued,
resulting in new upper bounds. We assume that both hops are subject to
independent but non-identically distributed Nakagami-m fading. This paper
validates that the performance loss is small at low rates, but otherwise can be
very substantial. In particular, it is proved that for high signal-to-noise
ratio (SNR), the end-to-end SNDR converges to a deterministic constant, coined
the SNDR ceiling, which is inversely proportional to the level of impairments.
This stands in contrast to the ideal hardware case in which the end-to-end SNDR
grows without bound in the high-SNR regime. Finally, we provide fundamental
design guidelines for selecting hardware that satisfies the requirements of a
practical relaying system.
|
1311.2637 | Self-Dual codes from $(-1,1)$-matrices of skew symmetric type | cs.IT math.CO math.IT | Previously, self-dual codes have been constructed from weighing matrices, and
in particular from conference matrices (skew and symmetric). In this paper,
codes constructed from matrices of skew symmetric type whose determinants reach
the Ehlich-Wojtas' bound are presented. A necessary and sufficient condition
for these codes to be self-dual is given, and examples are provided for lengths
up to 52.
|
1311.2642 | Volumetric Reconstruction Applied to Perceptual Studies of Size and
Weight | cs.CV | We explore the application of volumetric reconstruction from structured-light
sensors in cognitive neuroscience, specifically in the quantification of the
size-weight illusion, whereby humans tend to systematically perceive smaller
objects as heavier. We investigate the performance of two commercial
structured-light scanning systems in comparison to one we developed
specifically for this application. Our method has two main distinct features:
First, it only samples a sparse series of viewpoints, unlike other systems such
as the Kinect Fusion. Second, instead of building a distance field for the
purpose of points-to-surface conversion directly, we pursue a first-order
approach: the distance function is recovered from its gradient by a screened
Poisson reconstruction, which is very resilient to noise and yet preserves
high-frequency signal components. Our experiments show that the quality of
metric reconstruction from structured light sensors is subject to systematic
biases, and highlight the factors that influence it. Our main performance
index rates estimates of volume (a proxy of size), for which we review a
well-known formula applicable to incomplete meshes. Our code and data will be
made publicly available upon completion of the anonymous review process.
|
1311.2650 | Over-the-air Signaling in Cellular Networks: An Overview | cs.IT cs.NI math.IT | To improve the capacity and coverage of current cellular networks, many
advanced technologies such as massive MIMO, inter-cell coordination, small
cells, and device-to-device communications are under study. Many proposed
techniques have been shown to offer significant performance improvement, so
their enabler, the signaling necessary to guarantee their operation, is of
great importance. The design and transmission of such signaling, especially
over-the-air (OTA) signaling, is challenging. In this article, we provide an
overview of
OTA signaling in cellular networks to provide insights on the design of OTA
signaling. Specifically, we first give a brief introduction of the OTA
signaling in long term evolution (LTE), and then we discuss the challenges and
requirements in designing the OTA signaling in cellular networks in detail. To
better understand the OTA signaling, we give two important classifications of
OTA signaling and address their properties and applications. Finally, we
propose a signature-based signaling scheme, named single-tone signaling (STS),
which can be used for inter-cell OTA signaling and is especially useful and
robust in multi-signal scenarios. Simulation results are given to compare the
detection
performance of different OTA signaling.
|
1311.2651 | MIMO Broadcast Channel with an Unknown Eavesdropper: Secrecy Degrees of
Freedom | cs.IT math.IT | We study a multi-antenna broadcast channel with two legitimate receivers and
an external eavesdropper. We assume that the channel matrix of the eavesdropper
is unknown to the legitimate terminals but satisfies a maximum rank constraint.
As our main result we characterize the associated secrecy degrees of freedom
for the broadcast channel with common and private messages. We show that a
direct extension of the single-user wiretap codebook does not achieve the
secrecy degrees of freedom. Our proposed optimal scheme involves decomposing
the signal space into a common subspace, which can be observed by both
receivers, and private subspaces which can be observed by only one of the
receivers, and carefully transmitting a subset of messages in each subspace. We
also consider the case when each user's private message must additionally
remain confidential from the other legitimate receiver and characterize the
s.d.o.f.\ region in this case.
|
1311.2663 | DinTucker: Scaling up Gaussian process models on multidimensional arrays
with billions of elements | cs.LG cs.DC stat.ML | Infinite Tucker Decomposition (InfTucker) and random function prior models,
as nonparametric Bayesian models on infinite exchangeable arrays, are more
powerful models than widely-used multilinear factorization methods including
Tucker and PARAFAC decomposition, (partly) due to their capability of modeling
nonlinear relationships between array elements. Despite their great predictive
performance and sound theoretical foundations, they cannot handle massive data
due to a prohibitively high training time. To overcome this limitation, we
present Distributed Infinite Tucker (DINTUCKER), a large-scale nonlinear tensor
decomposition algorithm on MAPREDUCE. While maintaining the predictive accuracy
of InfTucker, it is scalable on massive data. DINTUCKER is based on a new
hierarchical Bayesian model that enables local training of InfTucker on
subarrays and information integration from all local training results. We use
distributed stochastic gradient descent, coupled with variational inference, to
train this model. We apply DINTUCKER to multidimensional arrays with billions
of elements from applications in the "Read the Web" project (Carlson et al.,
2010) and in information security and compare it with the state-of-the-art
large-scale tensor decomposition method, GigaTensor. On both datasets,
DINTUCKER achieves significantly higher prediction accuracy with less
computational time.
|
1311.2669 | Distance-based and continuum Fano inequalities with applications to
statistical estimation | cs.IT math.IT math.ST stat.TH | In this technical note, we give two extensions of the classical Fano
inequality in information theory. The first extends Fano's inequality to the
setting of estimation, providing lower bounds on the probability that an
estimator of a discrete quantity is within some distance $t$ of the quantity.
The second inequality extends our bound to a continuum setting and provides a
volume-based bound. We illustrate how these inequalities lead to direct and
simple proofs of several statistical minimax lower bounds.
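In the notation the abstract suggests, the distance-based extension takes roughly the following form (a sketch only; the note's exact statement and constants should be consulted) for $V$ uniform on a finite set $\mathcal{V}$, an estimate $\widehat{V}$ based on observations $X$, and a (pseudo)metric $\rho$:

```latex
\[
  \mathbb{P}\bigl(\rho(\widehat{V}, V) > t\bigr)
    \;\ge\; 1 - \frac{I(V;X) + \log 2}{\log\bigl(|\mathcal{V}| / N_t^{\max}\bigr)},
  \qquad
  N_t^{\max} := \max_{v \in \mathcal{V}}
    \bigl|\{\, v' \in \mathcal{V} : \rho(v, v') \le t \,\}\bigr|,
\]
```

where setting $t = 0$ (so that $N_t^{\max} = 1$) recovers the classical Fano inequality.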
|
1311.2670 | Social Network Integration: Towards Constructing the Social Graph | cs.SI physics.soc-ph | In this work, we formulate the problem of social network integration. It
takes multiple observed social networks as input and returns an integrated
global social graph where each node corresponds to a real person. The key
challenge for social network integration is to discover the correspondences or
interlinks across different social networks.
  We engage in an in-depth analysis across three online social networks, AMiner,
Linkedin, and Videolectures, in order to address what reveals users' social
identity, whether social factors are consistent across different social
networks, and how we can leverage this information to perform integration.
  We propose a unified framework for the social network integration task. It
crawls data from multiple social networks and then discovers accounts that
correspond to the same real person in the obtained networks. We use a
probabilistic model to determine such correspondences; it incorporates features
like the consistency of social status and social ties across different
networks, as well as a one-to-one mapping constraint and logical transitivity,
to jointly make the prediction. Empirical experiments verify the effectiveness
of our method.
|
1311.2677 | Sampling Based Approaches to Handle Imbalances in Network Traffic
Dataset for Machine Learning Techniques | cs.NI cs.CR cs.LG | Network traffic data is huge, varying and imbalanced because various classes
are not equally distributed. Machine learning (ML) algorithms for traffic
analysis use samples from this data for training and to recommend actions to
be taken by network administrators. Due to imbalances in the dataset, it is
difficult to train machine learning algorithms for traffic analysis, and these
may give biased or false results, leading to serious degradation in the
performance of these algorithms. Various techniques can be applied during
sampling to minimize the effect of imbalanced instances. In this paper, various
sampling techniques are analysed in order to compare how much they reduce the
imbalance of network traffic datasets sampled for these algorithms. Parameters
such as missing classes in samples and the sampling probability of different
instances are considered for comparison.
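One of the simplest such techniques, random oversampling of minority classes, can be sketched as follows (a generic baseline with toy data, not the specific techniques or datasets evaluated in the paper):

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate randomly chosen minority-class samples until every class
    reaches the majority-class count."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    Xr, yr = list(X), list(y)
    for cls, c in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == cls]
        for _ in range(target - c):
            i = rng.choice(idx)
            Xr.append(X[i])
            yr.append(cls)
    return Xr, yr

# Toy traffic dataset: 90 "normal" flows vs. 10 "attack" flows.
X = [[i] for i in range(100)]
y = ["normal"] * 90 + ["attack"] * 10
Xr, yr = random_oversample(X, y)
assert Counter(yr) == Counter(normal=90, attack=90)
```

Undersampling the majority class instead trades lost information for a smaller training set; which trade-off wins is exactly the kind of comparison the paper undertakes.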
|
1311.2694 | Hypothesis Testing for Automated Community Detection in Networks | stat.ML cs.LG cs.SI math.ST physics.soc-ph stat.TH | Community detection in networks is a key exploratory tool with applications
in a diverse set of areas, ranging from finding communities in social and
biological networks to identifying link farms in the World Wide Web. The
problem of finding communities or clusters in a network has received much
attention from statistics, physics and computer science. However, most
clustering algorithms assume knowledge of the number of clusters k. In this
paper we propose to automatically determine k in a graph generated from a
Stochastic Blockmodel. Our main contribution is twofold: first, we
theoretically establish the limiting distribution of the principal eigenvalue
of the suitably centered and scaled adjacency matrix, and use that distribution
for our hypothesis test. Second, we use this test to design a recursive
bipartitioning algorithm. Using quantifiable classification tasks on real world
networks with ground truth, we show that our algorithm outperforms existing
probabilistic models for learning overlapping clusters, and on unlabeled
networks, we show that we uncover nested community structure.
|
1311.2698 | Packet Travel Times in Wireless Relay Chains under Spatially and
Temporally Dependent Interference | cs.NI cs.IT math.IT | We investigate the statistics of the number of time slots $T$ that it takes a
packet to travel through a chain of wireless relays. Derivations are performed
assuming an interference model for which interference possesses spatiotemporal
dependency properties. When using this model, results are harder to arrive at
analytically, but they are more realistic than the ones obtained in many
related works that are based on independent interference models.
First, we present a method for calculating the distribution of $T$. As the
required computations are extensive, we also obtain simple expressions for the
expected value $\mathrm{E} [T]$ and variance $\mathrm{var} [T]$. Finally, we
calculate the asymptotic limit of the average speed of the packet. Our
numerical results show that spatiotemporal dependence has a significant impact
on the statistics of the travel time $T$. In particular, we show that, with
respect to the independent interference case, $\mathrm{E} [T]$ and
$\mathrm{var} [T]$ increase, whereas the packet speed decreases.
|
1311.2702 | Verifiable Source Code Documentation in Controlled Natural Language | cs.SE cs.AI cs.CL cs.HC cs.LO | Writing documentation about software internals is rarely considered a
rewarding activity. It is highly time-consuming and the resulting documentation
is fragile when the software is continuously evolving in a multi-developer
setting. Unfortunately, traditional programming environments poorly support the
writing and maintenance of documentation. Consequences are severe as the lack
of documentation on software structure negatively impacts the overall quality
of the software product. We show that using a controlled natural language with
a reasoner and a query engine is a viable technique for verifying the
consistency and accuracy of documentation and source code. Using ACE, a
state-of-the-art controlled natural language, we present positive results on
the comprehensibility and the general feasibility of creating and verifying
documentation. As a case study, we used automatic documentation verification to
identify and fix severe flaws in the architecture of a non-trivial piece of
software. Moreover, a user experiment shows that our language is faster and
easier to learn and understand than other formal languages for software
documentation.
|
1311.2705 | Quantum Stabilizer Codes from Maximal Curves | cs.IT math.AG math.IT | A curve attaining the Hasse-Weil bound is called a maximal curve. Usually
classical error-correcting codes obtained from a maximal curve have good
parameters. However, the quantum stabilizer codes obtained from such classical
error-correcting codes via Euclidean or Hermitian self-orthogonality do not
always possess good parameters. In this paper, the Hermitian self-orthogonality
of algebraic geometry codes obtained from two maximal curves is investigated.
It turns out that the stabilizer quantum codes produced from such Hermitian
self-orthogonal classical codes have good parameters.
|
1311.2745 | Sparse Phase Retrieval: Uniqueness Guarantees and Recovery Algorithms | cs.IT math.IT math.OC | The problem of signal recovery from its Fourier transform magnitude is of
paramount importance in various fields of engineering and has been around for
over 100 years. Due to the absence of phase information, some form of
additional information is required in order to be able to uniquely identify the
signal of interest. In this work, we focus our attention on discrete-time
sparse signals (of length $n$). We first show that, if the DFT dimension is
greater than or equal to $2n$, almost all signals with {\em aperiodic} support
can be uniquely identified by their Fourier transform magnitude (up to
time-shift, conjugate-flip and global phase).
Then, we develop an efficient Two-stage Sparse Phase Retrieval algorithm
(TSPR), which involves: (i) identifying the support, i.e., the locations of the
non-zero components, of the signal using a combinatorial algorithm (ii)
identifying the signal values in the support using a convex algorithm. We show
that TSPR can {\em provably} recover most $O(n^{1/2-\epsilon})$-sparse signals (up
to a time-shift, conjugate-flip and global phase). We also show that, for most
$O(n^{1/4-\epsilon})$-sparse signals, the recovery is {\em robust} in the presence
of measurement noise. Numerical experiments complement our theoretical analysis
and verify the effectiveness of TSPR.
|
1311.2746 | Deep neural networks for single channel source separation | cs.NE cs.LG | In this paper, a novel approach for single channel source separation (SCSS)
using a deep neural network (DNN) architecture is introduced. Unlike previous
studies in which DNN and other classifiers were used for classifying
time-frequency bins to obtain hard masks for each source, we use the DNN to
classify estimated source spectra to check for their validity during
separation. In the training stage, the training data for the source signals are
used to train a DNN. In the separation stage, the trained DNN is utilized to
aid in estimation of each source in the mixed signal. The single channel source
separation problem is formulated as an energy minimization problem where each
source spectra estimate is encouraged to fit the trained DNN model and the
mixed signal spectrum is encouraged to be written as a weighted sum of the
estimated source spectra. The proposed approach works regardless of the energy
scale differences between the source signals in the training and separation
stages. Nonnegative matrix factorization (NMF) is used to initialize the DNN
estimate for each source. The experimental results show that using DNN
initialized by NMF for source separation improves the quality of the separated
signal compared with using NMF for source separation.
|
1311.2795 | Complete solution of a constrained tropical optimization problem with
application to location analysis | math.OC cs.SY | We present a multidimensional optimization problem that is formulated and
solved in the tropical mathematics setting. The problem consists of minimizing
a nonlinear objective function defined on vectors over an idempotent semifield
by means of a conjugate transposition operator, subject to constraints in the
form of linear vector inequalities. A complete direct solution to the problem
under fairly general assumptions is given in a compact vector form suitable for
both further analysis and practical implementation. We apply the result to
solve a multidimensional minimax single facility location problem with
Chebyshev distance and with inequality constraints imposed on the feasible
location area.
|
1311.2796 | Mixed Human-Robot Team Surveillance | math.OC cs.RO cs.SY | We study the mixed human-robot team design in a system theoretic setting
using the context of a surveillance mission. The three key coupled components
of a mixed team design are (i) policies for the human operator, (ii) policies
to account for erroneous human decisions, and (iii) policies to control the
automaton. In this paper, we survey elements of human decision-making,
including evidence aggregation, situational awareness, fatigue, and memory
effects. We bring together the models for these elements in human
decision-making to develop a single coherent model for human decision-making in
a two-alternative choice task. We utilize the developed model to design
efficient attention allocation policies for the human operator. We propose an
anomaly detection algorithm that utilizes potentially erroneous decisions by the
operator to ascertain an anomalous region among the set of regions surveilled.
Finally, we propose a stochastic vehicle routing policy that surveils an
anomalous region with high probability. Our mixed team design relies on the
certainty-equivalent receding-horizon control framework.
|
1311.2799 | Aggregation of Affine Estimators | math.ST cs.LG stat.TH | We consider the problem of aggregating a general collection of affine
estimators for fixed design regression. Relevant examples include some commonly
used statistical estimators such as least squares, ridge and robust least
squares estimators. Dalalyan and Salmon (2012) have established that, for this
problem, exponentially weighted (EW) model selection aggregation leads to sharp
oracle inequalities in expectation, but similar bounds in deviation were not
previously known. While results indicate that the same aggregation scheme may
not satisfy sharp oracle inequalities with high probability, we prove a
weaker notion of oracle inequality for EW that holds with high probability.
Moreover, using a generalization of the newly introduced $Q$-aggregation scheme
we also prove sharp oracle inequalities that hold with high probability.
Finally, we apply our results to universal aggregation and show that our
proposed estimator leads simultaneously to all the best known bounds for
aggregation, including $\ell_q$-aggregation, $q \in (0,1)$, with high
probability.
|
1311.2838 | A PAC-Bayesian bound for Lifelong Learning | stat.ML cs.LG | Transfer learning has received a lot of attention in the machine learning
community over the last years, and several effective algorithms have been
developed. However, relatively little is known about their theoretical
properties, especially in the setting of lifelong learning, where the goal is
to transfer information to tasks for which no data have been observed so far.
In this work we study lifelong learning from a theoretical perspective. Our
main result is a PAC-Bayesian generalization bound that offers a unified view
on existing paradigms for transfer learning, such as the transfer of parameters
or the transfer of low-dimensional representations. We also use the bound to
derive two principled lifelong learning algorithms, and we show that these
yield results comparable with existing methods.
|
1311.2850 | Virtual Modules in Discrete-Event Systems: Achieving Modular
Diagnosability | cs.SY | This paper deals with the problem of enforcing modular diagnosability for
discrete-event systems that do not satisfy this property through their natural
modularity. We introduce an approach that achieves this property by combining
existing modules into new virtual modules. An underlying mathematical problem
is to find a partition of a set such that the partition satisfies the required
property. The time complexity of this problem is very high. To overcome it, the
paper introduces a structural analysis of the system's modules. In the analysis
we focus on the case when the modules participate in diagnosis with their
observations, rather than the case when indistinguishable observations are
blocked due to concurrency.
|
1311.2852 | Quantifying unique information | cs.IT math.IT | We propose new measures of shared information, unique information and
synergistic information that can be used to decompose the multi-information of
a pair of random variables $(Y,Z)$ with a third random variable $X$. Our
measures are motivated by an operational idea of unique information which
suggests that shared information and unique information should depend only on
the pair marginal distributions of $(X,Y)$ and $(X,Z)$. Although this
invariance property has not been studied before, it is satisfied by other
proposed measures of shared information. The invariance property does not
uniquely determine our new measures, but it implies that the functions that we
define are bounds to any other measures satisfying the same invariance
property. We study properties of our measures and compare them to other
candidate measures.
|
1311.2854 | Spectral Clustering via the Power Method -- Provably | cs.LG cs.NA | Spectral clustering is one of the most important algorithms in data mining
and machine intelligence; however, its computational complexity limits its
application to truly large scale data analysis. The computational bottleneck in
spectral clustering is computing a few of the top eigenvectors of the
(normalized) Laplacian matrix corresponding to the graph representing the data
to be clustered. One way to speed up the computation of these eigenvectors is
to use the "power method" from the numerical linear algebra literature.
Although the power method has been empirically used to speed up spectral
clustering, the theory behind this approach, to the best of our knowledge,
remains unexplored. This paper provides the \emph{first} such rigorous
theoretical justification, arguing that a small number of power iterations
suffices to obtain near-optimal partitionings using the approximate
eigenvectors. Specifically, we prove that solving the $k$-means clustering
problem on the approximate eigenvectors obtained via the power method gives an
additive-error approximation to solving the $k$-means problem on the optimal
eigenvectors.
|
1311.2878 | Selection Effects in Online Sharing: Consequences for Peer Adoption | cs.SI physics.soc-ph | Most models of social contagion take peer exposure to be a corollary of
adoption, yet in many settings, the visibility of one's adoption behavior
happens through a separate decision process. In online systems, product
designers can define how peer exposure mechanisms work: adoption behaviors can
be shared in a passive, automatic fashion, or occur through explicit, active
sharing. The consequences of these mechanisms are of substantial practical and
theoretical interest: passive sharing may increase total peer exposure but
active sharing may expose higher quality products to peers who are more likely
to adopt.
We examine selection effects in online sharing through a large-scale field
experiment on Facebook that randomizes whether or not adopters share Offers
(coupons) in a passive manner. We derive and estimate a joint discrete choice
model of adopters' sharing decisions and their peers' adoption decisions. Our
results show that active sharing enables a selection effect that exposes peers
who are more likely to adopt than the population exposed under passive sharing.
We decompose the selection effect into two distinct mechanisms: active
sharers expose peers to higher quality products, and the peers they share with
are more likely to adopt independently of product quality. Simulation results
show that the user-level mechanism comprises the bulk of the selection effect.
The study's findings are among the first to address downstream peer effects
induced by online sharing mechanisms, and can inform design in settings where a
surplus of sharing could be viewed as costly.
|
1311.2886 | A Fuzzy AHP Approach for Supplier Selection Problem: A Case Study in a
Gear Motor Company | cs.AI | Supplier selection is one of the most important functions of a purchasing
department, since by choosing the best supplier, companies can save material
costs and increase their competitive advantage. However, this decision becomes
complicated in the case of multiple suppliers, multiple conflicting criteria, and
imprecise parameters. In addition, the uncertainty and vagueness of the experts'
opinions are prominent characteristics of the problem. Therefore, Fuzzy AHP, an
extensively used multi-criteria decision making tool, can be utilized
as an approach for the supplier selection problem. This paper presents the
application of Fuzzy AHP in a gear motor company to determine the best supplier
with respect to selected criteria. The contribution of this study is not only
the application of the Fuzzy AHP methodology to the supplier selection problem,
but also a comprehensive literature review of multi-criteria decision
making problems. In addition, by stating the steps of Fuzzy AHP clearly and
numerically, this study can serve as a guide for applying the methodology to
other multiple criteria decision making problems.
|
1311.2887 | Are all Social Networks Structurally Similar? A Comparative Study using
Network Statistics and Metrics | cs.SI | The modern age has seen an exponential growth of social network data
available on the web. Analysis of these networks reveals important structural
information about these networks in particular and about our societies in
general. More often than not, analysis of these networks is concerned with
identifying similarities among social networks and how they differ from
other networks such as protein interaction networks, computer networks, and
food webs. In this paper, our objective is to perform a critical analysis of
different social networks using structural metrics in an effort to highlight
their similarities and differences. We use five different social network
datasets which are contextually and semantically different from each other. We
then analyze these networks using a number of different network statistics and
metrics. Our results show that although these social networks have been
constructed from different contexts, they are structurally similar. We also
review the snowball sampling method and show its vulnerability against
different network metrics.
|
1311.2889 | Reinforcement Learning for Matrix Computations: PageRank as an Example | cs.LG cs.SI stat.ML | Reinforcement learning has gained wide popularity as a technique for
simulation-driven approximate dynamic programming. A lesser-known aspect is that
the very reasons that make it effective in dynamic programming can also be
leveraged for using it for distributed schemes for certain matrix computations
involving non-negative matrices. In this spirit, we propose a reinforcement
learning algorithm for PageRank computation that is fashioned after analogous
schemes for approximate dynamic programming. The algorithm has the advantage of
ease of distributed implementation and more importantly, of being model-free,
i.e., not dependent on any specific assumptions about the transition
probabilities in the random web-surfer model. We analyze its convergence and
finite time behavior and present some supporting numerical experiments.
|
1311.2891 | The More, the Merrier: the Blessing of Dimensionality for Learning Large
Gaussian Mixtures | cs.LG cs.DS stat.ML | In this paper we show that very large mixtures of Gaussians are efficiently
learnable in high dimension. More precisely, we prove that a mixture with known
identical covariance matrices whose number of components is a polynomial of any
fixed degree in the dimension n is polynomially learnable as long as a certain
non-degeneracy condition on the means is satisfied. It turns out that this
condition is generic in the sense of smoothed complexity, as soon as the
dimensionality of the space is high enough. Moreover, we prove that no such
condition can possibly exist in low dimension and the problem of learning the
parameters is generically hard. In contrast, much of the existing work on
Gaussian Mixtures relies on low-dimensional projections and thus hits an
artificial barrier. Our main result on mixture recovery relies on a new
"Poissonization"-based technique, which transforms a mixture of Gaussians to a
linear map of a product distribution. The problem of learning this map can be
efficiently solved using some recent results on tensor decompositions and
Independent Component Analysis (ICA), thus giving an algorithm for recovering
the mixture. In addition, we combine our low-dimensional hardness results for
Gaussian mixtures with Poissonization to show how to embed difficult instances
of low-dimensional Gaussian mixtures into the ICA setting, thus establishing
exponential information-theoretic lower bounds for underdetermined ICA in low
dimension. To the best of our knowledge, this is the first such result in the
literature. In addition to contributing to the problem of Gaussian mixture
learning, we believe that this work is among the first steps toward better
understanding the rare phenomenon of the "blessing of dimensionality" in the
computational aspects of statistical inference.
|
1311.2897 | Exponential Stability of Homogeneous Positive Systems of Degree One With
Time-Varying Delays | cs.SY | While the asymptotic stability of positive linear systems in the presence of
bounded time delays has been thoroughly investigated, the theory for nonlinear
positive systems is considerably less well-developed. This paper presents a set
of conditions for establishing delay-independent stability and bounding the
decay rate of a significant class of nonlinear positive systems which includes
positive linear systems as a special case. Specifically, when the time delays
have a known upper bound, we derive necessary and sufficient conditions for
exponential stability of (a) continuous-time positive systems whose vector
fields are homogeneous and cooperative, and (b) discrete-time positive systems
whose vector fields are homogeneous and order preserving. We then present
explicit expressions that allow us to quantify the impact of delays on the
decay rate and show that the best decay rate of positive linear systems that
our bounds provide can be found via convex optimization. Finally, we extend the
results to general linear systems with time-varying delays.
|
1311.2901 | Visualizing and Understanding Convolutional Networks | cs.CV | Large Convolutional Network models have recently demonstrated impressive
classification performance on the ImageNet benchmark. However there is no clear
understanding of why they perform so well, or how they might be improved. In
this paper we address both issues. We introduce a novel visualization technique
that gives insight into the function of intermediate feature layers and the
operation of the classifier. We also perform an ablation study to discover the
performance contribution from different model layers. This enables us to find
model architectures that outperform Krizhevsky et al. on the ImageNet
classification benchmark. We show our ImageNet model generalizes well to other
datasets: when the softmax classifier is retrained, it convincingly beats the
current state-of-the-art results on Caltech-101 and Caltech-256 datasets.
|
1311.2903 | Protocol Design and Stability Analysis of Cooperative Cognitive Radio
Users | cs.NI cs.IT math.IT | A single cognitive radio transmitter--receiver pair shares the spectrum with
two primary users communicating with their respective receivers. Each primary
user has a local traffic queue, whereas the cognitive user has three queues;
one storing its own traffic while the other two are relaying queues used to
store primary relayed packets admitted from the two primary users. A new
cooperative cognitive medium access control protocol for the described network
is proposed, where the cognitive user exploits the idle periods of the primary
spectrum bands. Traffic arrival to each relaying queue is controlled using a
tuneable admittance factor, while relaying queues service scheduling is
controlled via channel access probabilities assigned to each queue based on the
band of operation. The stability region of the proposed protocol is
characterized shedding light on its maximum expected throughput. Numerical
results demonstrate the performance gains of the proposed cooperative cognitive
protocol.
|
1311.2906 | Centrality in Interconnected Multilayer Networks | physics.soc-ph cond-mat.dis-nn cs.SI | Real-world complex systems exhibit multiple levels of relationships. In many
cases, they need to be modeled by interconnected multilayer networks,
characterizing interactions on several levels simultaneously. It is of crucial
importance in many fields, from economics to biology, from urban planning to
social sciences, to identify the most (or the least) influential nodes in a
network. However, defining the centrality of actors in an interconnected
structure is not trivial.
In this paper, we capitalize on the tensorial formalism, recently proposed to
characterize and investigate this kind of complex topologies, to show how
several centrality measures -- well-known in the case of standard ("monoplex")
networks -- can be extended naturally to the realm of interconnected
multiplexes. We consider diagnostics widely used in different fields, e.g.,
computer science, biology, communication and social sciences, to cite only some
of them. We show, both theoretically and numerically, that using the weighted
monoplex obtained by aggregating the multilayer network leads, in general, to
relevant differences in ranking the nodes by their importance.
|
1311.2911 | Exploring universal patterns in human home-work commuting from mobile
phone data | cs.SI cs.CY physics.soc-ph | Home-work commuting has always attracted significant research attention
because of its impact on human mobility. One of the key assumptions in this
domain of study is the universal uniformity of commute times. However, a true
comparison of commute patterns has often been hindered by the intrinsic
differences in data collection methods, which make observation from different
countries potentially biased and unreliable. In the present work, we approach
this problem through the use of mobile phone call detail records (CDRs), which
offers a consistent method for investigating mobility patterns in wholly
different parts of the world. We apply our analysis to a broad range of
datasets, at both the country and city scale. Additionally, we compare these
results with those obtained from vehicle GPS traces in Milan. While different
regions have some unique commute time characteristics, we show that the
home-work time distributions and average values within a single region are
indeed largely independent of commute distance or country (Portugal, Ivory
Coast, and Boston)--despite substantial spatial and infrastructural
differences. Furthermore, a comparative analysis demonstrates that such
distance-independence holds true only if we consider multimodal commute
behaviors--as consistent with previous studies. In car-only (Milan GPS traces)
and car-heavy (Saudi Arabia) commute datasets, we see that commute time is
indeed influenced by commute distance.
|
1311.2912 | A Misanthropic Reinterpretation of the Chinese Room Problem | cs.AI | The Chinese room problem asks if computers can think; I ask here if most
humans can.
|
1311.2914 | A novel local search based on variable-focusing for random K-SAT | cs.AI cond-mat.dis-nn | We introduce a new local search algorithm for satisfiability problems. Usual
approaches focus uniformly on unsatisfied clauses. The new method works by
picking uniformly random variables in unsatisfied clauses. A Variable-based
Focused Metropolis Search (V-FMS) is then applied to random 3-SAT. We show that
it is quite comparable in performance to the clause-based FMS. Consequences for
algorithmic design are discussed.
|
1311.2971 | Approximate Inference in Continuous Determinantal Point Processes | stat.ML cs.LG stat.ME | Determinantal point processes (DPPs) are random point processes well-suited
for modeling repulsion. In machine learning, the focus of DPP-based models has
been on diverse subset selection from a discrete and finite base set. This
discrete setting admits an efficient sampling algorithm based on the
eigendecomposition of the defining kernel matrix. Recently, there has been
growing interest in using DPPs defined on continuous spaces. While the
discrete-DPP sampler extends formally to the continuous case, computationally,
the steps required are not tractable in general. In this paper, we present two
efficient DPP sampling schemes that apply to a wide range of kernel functions:
one based on low rank approximations via Nystrom and random Fourier feature
techniques and another based on Gibbs sampling. We demonstrate the utility of
continuous DPPs in repulsive mixture modeling and synthesizing human poses
spanning activity spaces.
|
1311.2972 | Learning Mixtures of Discrete Product Distributions using Spectral
Decompositions | stat.ML cs.CC cs.IT cs.LG math.IT | We study the problem of learning a distribution from samples, when the
underlying distribution is a mixture of product distributions over discrete
domains. This problem is motivated by several practical applications such as
crowd-sourcing, recommendation systems, and learning Boolean functions. The
existing solutions either heavily rely on the fact that the number of
components in the mixtures is finite or have sample/time complexity that is
exponential in the number of components. In this paper, we introduce a
polynomial time/sample complexity method for learning a mixture of $r$ discrete
product distributions over $\{1, 2, \dots, \ell\}^n$, for general $\ell$ and
$r$. We show that our approach is statistically consistent and further provide
finite sample guarantees.
We use techniques from the recent work on tensor decompositions for
higher-order moment matching. A crucial step in these moment matching methods
is to construct a certain matrix and a certain tensor with low-rank spectral
decompositions. These tensors are typically estimated directly from the
samples. The main challenge in learning mixtures of discrete product
distributions is that these low-rank tensors cannot be obtained directly from
the sample moments. Instead, we reduce the tensor estimation problem to: $a$)
estimating a low-rank matrix using only off-diagonal block elements; and $b$)
estimating a tensor using a small number of linear measurements. Leveraging on
recent developments in matrix completion, we give an alternating minimization
based method to estimate the low-rank matrix, and formulate the tensor
completion problem as a least-squares problem.
|
1311.2978 | Authorship Attribution Using Word Network Features | cs.CL | In this paper, we explore a set of novel features for authorship attribution
of documents. These features are derived from a word network representation of
natural language text. As has been noted in previous studies, natural language
tends to show complex network structure at word level, with low degrees of
separation and scale-free (power law) degree distribution. There has also been
work on authorship attribution that incorporates ideas from complex networks.
The goal of our paper is to explore properties of these complex networks that
are suitable as features for machine-learning-based authorship attribution of
documents. We performed experiments on three different datasets, and obtained
promising results.
|
1311.2987 | Learning Input and Recurrent Weight Matrices in Echo State Networks | cs.LG | Echo State Networks (ESNs) are a special type of the temporally deep network
model, the Recurrent Neural Network (RNN), where the recurrent matrix is
carefully designed and both the recurrent and input matrices are fixed. An ESN
uses the linearity of the activation function of the output units to simplify
the learning of the output matrix. In this paper, we devise a special technique
that takes advantage of this linearity in the output units of an ESN to learn
the input and recurrent matrices. This has not been done in earlier ESNs due to
their well known difficulty in learning those matrices. Compared to the
technique of BackPropagation Through Time (BPTT) in learning general RNNs, our
proposed method exploits linearity of activation function in the output units
to formulate the relationships amongst the various matrices in an RNN. These
relationships result in the gradient of the cost function having an analytical
form and being more accurate. This would enable us to compute the gradients
instead of obtaining them by recursion as in BPTT. Experimental results on
phone state classification show that learning one or both the input and
recurrent matrices in an ESN yields superior results compared to traditional
ESNs that do not learn these matrices, especially when longer time steps are
used.
|
1311.3001 | Informed Source Separation: A Bayesian Tutorial | stat.ML cs.LG | Source separation problems are ubiquitous in the physical sciences; any
situation where signals are superimposed calls for source separation to
estimate the original signals. In this tutorial I will discuss the Bayesian
approach to the source separation problem. This approach has a specific
advantage in that it requires the designer to explicitly describe the signal
model in addition to any other information or assumptions that go into the
problem description. This leads naturally to the idea of informed source
separation, where the algorithm design incorporates relevant information about
the specific problem. This approach promises to enable researchers to design
their own high-quality algorithms that are specifically tailored to the problem
at hand.
|
1311.3009 | A Construction of New Quantum MDS Codes | cs.IT math.IT | It has been a great challenge to construct new quantum MDS codes. In
particular, it is very hard to construct quantum MDS codes with relatively
large minimum distance. So far, except for some sparse lengths, all known
$q$-ary quantum MDS codes have minimum distance less than or equal to $q/2+1$.
In the present paper, we provide a construction of quantum MDS codes with
minimum distance bigger than $q/2+1$. In particular, we show existence of
$q$-ary quantum MDS codes with length $n=q^2+1$ and minimum distance $d$ for
any $d\le q-1$ and $d=q+1$ (this result extends those given in
\cite{Gu11,Jin1,KZ12}); and with length $(q^2+2)/3$ and minimum distance $d$
for any $d\le (2q+2)/3$ if $3|(q+1)$. Our method is through Hermitian
self-orthogonal codes. The main idea of constructing Hermitian self-orthogonal
codes is based on the solvability in $\F_q$ of a system of homogeneous equations
over $\F_{q^2}$.
|
1311.3011 | Cornell SPF: Cornell Semantic Parsing Framework | cs.CL | The Cornell Semantic Parsing Framework (SPF) is a learning and inference
framework for mapping natural language to formal representation of its meaning.
|
1311.3023 | Asynchronous Distributed Downlink Beamforming and Power Control in
Multi-cell Networks | cs.IT math.IT | In this paper, we consider a multi-cell network where every base station (BS)
serves multiple users with an antenna array. Each user is associated with only
one BS and has a single antenna. Assume that only long-term channel state
information (CSI) is available in the system. The objective is to minimize the
network downlink transmission power needed to meet the users'
signal-to-interference-plus-noise ratio (SINR) requirements. For this
objective, we propose an asynchronous distributed beamforming and power control
algorithm which provides the same optimal solution as given by centralized
algorithms. To design the algorithm, the power minimization problem is
formulated mathematically as a non-convex problem. For distributed
implementation, the non-convex problem is cast into the dual decomposition
framework. Resorting to the theory about matrix pencil, a novel asynchronous
iterative method is proposed for solving the dual of the non-convex problem.
The methods for beamforming and power control are obtained by investigating the
primal problem. At last, simulation results are provided to demonstrate the
convergence and performance of the algorithm.
|
1311.3037 | Practical Characterization of Large Networks Using Neighborhood
Information | cs.SI cs.CY physics.soc-ph | Characterizing large online social networks (OSNs) through node querying is a
challenging task. OSNs often impose severe constraints on the query rate, hence
limiting the sample size to a small fraction of the total network. Various
ad-hoc subgraph sampling methods have been proposed, but many of them give
biased estimates and provide no theoretical basis for their accuracy. In this work, we
focus on developing sampling methods for OSNs where querying a node also
reveals partial structural information about its neighbors. Our methods are
optimized for NoSQL graph databases (if the database can be accessed directly),
or utilize Web API available on most major OSNs for graph sampling. We show
that our sampling method has provable convergence guarantees on being an
unbiased estimator, and it is more accurate than current state-of-the-art
methods. We characterize metrics such as node label density estimation and edge
label density estimation, two of the most fundamental network characteristics
from which other network characteristics can be derived. We evaluate our
methods on-the-fly over several live networks using their native APIs. Our
simulation studies over a variety of offline datasets show that by including
neighborhood information, our method drastically (4-fold) reduces the number of
samples required to achieve the same estimation accuracy of state-of-the-art
methods.
|
1311.3045 | Joint Power and Admission Control: Non-Convex $L_q$ Approximation and An
Effective Polynomial Time Deflation Approach | cs.IT cs.NI math.IT math.OC | In an interference limited network, joint power and admission control (JPAC)
aims at supporting a maximum number of links at their specified signal to
interference plus noise ratio (SINR) targets while using a minimum total
transmission power. Various convex approximation deflation approaches have been
developed for the JPAC problem. In this paper, we propose an effective
polynomial time non-convex approximation deflation approach for solving the
problem. The approach is based on the non-convex $\ell_q$-minimization
approximation of an equivalent sparse $\ell_0$-minimization reformulation of
the JPAC problem where $q\in(0,1).$ We show that, for any instance of the JPAC
problem, there exists a $\bar q\in(0,1)$ such that it can be exactly solved by
solving its $\ell_q$-minimization approximation problem with any $q\in(0, \bar
q]$. We also show that finding the global solution of the $\ell_q$
approximation problem is NP-hard. Then, we propose a potential reduction
interior-point algorithm, which can return an $\epsilon$-KKT solution of the
NP-hard $\ell_q$-minimization approximation problem in polynomial time. The
returned solution can be used to check the simultaneous supportability of all
links in the network and to guide an iterative link removal procedure,
resulting in the polynomial time non-convex approximation deflation approach
for the JPAC problem. Numerical simulations show that the proposed approach
outperforms the existing convex approximation approaches in terms of the number
of supported links and the total transmission power, particularly exhibiting a
quite good performance in selecting which subset of links to support.
|
1311.3062 | Ants: Mobile Finite State Machines | cs.DC cs.MA | Consider the Ants Nearby Treasure Search (ANTS) problem introduced by
Feinerman, Korman, Lotker, and Sereni (PODC 2012), where $n$ mobile agents,
initially placed at the origin of an infinite grid, collaboratively search for
an adversarially hidden treasure. In this paper, the model of Feinerman et al.
is adapted such that the agents are controlled by a (randomized) finite state
machine: they possess a constant-size memory and are able to communicate with
each other through constant-size messages. Despite the restriction to
constant-size memory, we show that their collaborative performance remains the
same by presenting a distributed algorithm that matches a lower bound
established by Feinerman et al. on the run-time of any ANTS algorithm.
|
1311.3064 | Ranking users, papers and authors in online scientific communities | cs.SI cs.DL cs.IR physics.soc-ph | The ever-increasing quantity and complexity of scientific production have
made it difficult for researchers to keep track of advances in their own
fields. This, together with growing popularity of online scientific
communities, calls for the development of effective information filtering
tools. We propose here a method to simultaneously compute reputation of users
and quality of scientific artifacts in an online scientific community.
Evaluation on artificially-generated data and real data from the Econophysics
Forum is used to determine the method's best-performing variants. We show that
when the method is extended by considering author credit, its performance
improves on multiple levels. In particular, top papers have higher citation
count and top authors have higher $h$-index than top papers and top authors
chosen by other algorithms.
|
1311.3076 | An Efficient Method for Recognizing the Low Quality Fingerprint
Verification by Means of Cross Correlation | cs.CV | In this paper, we propose an efficient method to provide personal
identification using fingerprint to get better accuracy even in noisy
condition. The fingerprint matching based on the number of corresponding
minutia pairings, has been in use for a long time, which is not very efficient
for recognizing the low quality fingerprints. To overcome this problem,
correlation technique is used. The correlation-based fingerprint verification
system is capable of dealing with low quality images from which no minutiae can
be extracted reliably and with fingerprints that suffer from non-uniform shape
distortions, also in case of damaged and partial images. Orientation Field
Methodology (OFM) has been used as a preprocessing module, and it converts the
images into a field pattern based on the direction of the ridges, loops and
bifurcations in the image of a fingerprint. The input image is then Cross
Correlated (CC) with all the images in the cluster and the highest correlated
image is taken as the output. The result gives a good recognition rate, as the
proposed scheme uses Cross Correlation of Field Orientation (CCFO = OFM + CC)
for fingerprint identification.
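The cross-correlation matching step can be sketched as below, assuming orientation-field images of equal size (the OFM preprocessing itself is not reproduced):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equally sized
    field-orientation maps; higher means a better match."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_match(query, cluster):
    """Return the index of the cluster image most correlated with the query."""
    scores = [ncc(query, img) for img in cluster]
    return int(np.argmax(scores)), scores
```

The highest-correlated cluster image is taken as the identification output, as in the abstract's CCFO pipeline.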
|
1311.3085 | Community detection thresholds and the weak Ramanujan property | cs.SI | Decelle et al.\cite{Decelle11} conjectured the existence of a sharp threshold
for community detection in sparse random graphs drawn from the stochastic block
model. Mossel et al.\cite{Mossel12} established the negative part of the
conjecture, proving impossibility of meaningful detection below the threshold.
However, the positive part of the conjecture has so far remained elusive. Here we
solve the positive part of the conjecture. We introduce a modified adjacency
matrix $B$ that counts self-avoiding paths of a given length $\ell$ between
pairs of nodes and prove that for logarithmic $\ell$, the leading eigenvectors
of this modified matrix provide non-trivial detection, thereby settling the
conjecture. A key step in the proof consists in establishing a {\em weak
Ramanujan property} of the matrix $B$. Namely, the spectrum of $B$ consists of
two leading eigenvalues $\rho(B)$, $\lambda_2$ and $n-2$ eigenvalues of a lower
order $O(n^{\epsilon}\sqrt{\rho(B)})$ for all $\epsilon>0$, $\rho(B)$ denoting
$B$'s spectral radius. $d$-regular graphs are Ramanujan when their second
eigenvalue verifies $|\lambda|\le 2 \sqrt{d-1}$. Random $d$-regular graphs have
a second largest eigenvalue $\lambda$ of $2\sqrt{d-1}+o(1)$ (see
Friedman\cite{friedman08}), thus being {\em almost} Ramanujan.
Erd\H{o}s-R\'enyi graphs with average degree $d$ at least logarithmic
($d=\Omega(\log n)$) have a second eigenvalue of $O(\sqrt{d})$ (see Feige and
Ofek\cite{Feige05}), a slightly weaker version of the Ramanujan property.
However this spectrum separation property fails for sparse ($d=O(1)$)
Erd\H{o}s-R\'enyi graphs. Our result thus shows that by constructing matrix $B$
through neighborhood expansion, we regularize the original adjacency matrix to
eventually recover a weak form of the Ramanujan property.
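The path-counting matrix $B$ can be illustrated by brute-force enumeration on a toy two-community graph. This is only feasible for small graphs and small $\ell$ (the paper's regime is logarithmic $\ell$, where such enumeration is impractical):

```python
import numpy as np

def sap_matrix(adj, ell):
    """B[i, j] = number of self-avoiding paths of length ell from i to j.
    Brute-force enumeration; for illustration on small graphs only.
    For community detection, the leading eigenvectors of B give the labels."""
    n = len(adj)
    B = np.zeros((n, n))

    def extend(path):
        if len(path) == ell + 1:
            B[path[0], path[-1]] += 1
            return
        for k in range(n):
            if adj[path[-1]][k] and k not in path:  # self-avoiding constraint
                extend(path + [k])

    for i in range(n):
        extend([i])
    return B
```

On two 4-cliques joined by a single bridge edge, $B$ is symmetric, counts within-community paths far more than cross-community ones, and its second eigenvector splits the two communities.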
|
1311.3087 | A Fractal and Scale-free Model of Complex Networks with Hub Attraction
Behaviors | physics.soc-ph cs.SI | It is widely believed that the fractality of complex networks originates from
hub repulsion behaviors (anticorrelation, or disassortativity), meaning that
large-degree nodes tend to connect with small-degree nodes. This hypothesis was
demonstrated by a dynamical growth model, which evolves as the inverse of the
renormalization procedure proposed by Song et al. We find that the dynamical
growth model rests on the assumption that all cross-box links have the same
probability e of linking to the most-connected nodes inside each box. We
therefore modify the growth model by adopting a flexible probability e, which
gives hubs a higher probability of connecting with other hubs than with
non-hubs. With this model, we find that some fractal and scale-free networks
have hub attraction behaviors (correlation, or assortativity). These results
are counter-examples to the former belief.
|
1311.3123 | Polar Codes for Arbitrary DMCs and Arbitrary MACs | cs.IT math.IT | Polar codes are constructed for arbitrary channels by imposing an arbitrary
quasigroup structure on the input alphabet. Just as with "usual" polar codes,
the block error probability under successive cancellation decoding is
$o(2^{-N^{1/2-\epsilon}})$, where $N$ is the block length. Encoding and
decoding for these codes can be implemented with a complexity of $O(N\log N)$.
It is shown that the same technique can be used to construct polar codes for
arbitrary multiple access channels (MAC) by using an appropriate Abelian group
structure. Although the symmetric sum capacity is achieved by this coding
scheme, some points in the symmetric capacity region may not be achieved. In
the case where the channel is a combination of linear channels, we provide a
necessary and sufficient condition characterizing the channels whose symmetric
capacity region is preserved by the polarization process. We also provide a
sufficient condition for having a maximal loss in the dominant face.
|
1311.3141 | Cloud Compute-and-Forward with Relay Cooperation | cs.IT cs.GT cs.NI math.IT | We study a cloud network with M distributed receiving antennas and L users,
which transmit their messages towards a centralized decoder (CD), where M>=L.
We consider that the cloud network applies the Compute-and-Forward (C&F)
protocol, where L antennas/relays are selected to decode integer equations of
the transmitted messages. In this work, we focus on the best relay selection
and the optimization of the Physical-Layer Network Coding (PNC) at the relays,
aiming at the throughput maximization of the network. Existing literature
optimizes PNC with respect to the maximization of the minimum rate among users.
The proposed strategy maximizes the sum rate of the users allowing nonsymmetric
rates, while the optimal solution is explored with the aid of the Pareto
frontier. The problem of relay selection is matched to a coalition formation
game, where the relays and the CD cooperate in order to maximize their profit.
Efficient coalition formation algorithms are proposed, which perform joint
relay selection and PNC optimization. Simulation results show that a
considerable improvement is achieved compared to existing results, both in
terms of the network sum rate and the players' profits.
|
1311.3157 | Multiple Closed-Form Local Metric Learning for K-Nearest Neighbor
Classifier | cs.LG | Much research has been devoted to learning a Mahalanobis distance metric,
which can effectively improve the performance of kNN classification. Most
approaches are iterative and computationally expensive, and their linear
rigidity still critically limits how well metric learning algorithms can
perform. We propose a computationally economical framework for learning
multiple metrics in closed form.
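One classical closed-form Mahalanobis metric is the inverse of the within-class covariance (as in RCA). The sketch below uses it only to illustrate non-iterative metric learning for kNN; it is not the paper's multiple-local-metric framework:

```python
import numpy as np

def within_class_metric(X, y, reg=1e-3):
    """Closed-form Mahalanobis metric M = inv(within-class covariance).
    A classical non-iterative choice, shown for illustration only."""
    d = X.shape[1]
    S = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        Xc = Xc - Xc.mean(axis=0)
        S += Xc.T @ Xc
    S = S / len(X) + reg * np.eye(d)  # regularize for invertibility
    return np.linalg.inv(S)

def mahalanobis_knn(X_train, y_train, x, M, k=3):
    """Majority-vote kNN prediction under the metric M."""
    diff = X_train - x
    dists = np.einsum('ij,jk,ik->i', diff, M, diff)  # diff_i^T M diff_i
    idx = np.argsort(dists)[:k]
    vals, counts = np.unique(y_train[idx], return_counts=True)
    return vals[np.argmax(counts)]
```

Directions with large within-class spread are downweighted, so the learned metric can classify correctly where Euclidean kNN would be misled by noisy dimensions.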
|
1311.3175 | Architecture of an Ontology-Based Domain-Specific Natural Language
Question Answering System | cs.CL cs.IR | A question answering (QA) system aims at retrieving precise information from a
large collection of documents against a query. This paper describes the
architecture of a Natural Language Question Answering (NLQA) system for a
specific domain based on the ontological information, a step towards semantic
web question answering. The proposed architecture defines four basic modules
suitable for enhancing current QA capabilities with the ability of processing
complex questions. The first module is question processing, which analyses
and classifies the question and reformulates the user query. The second
module allows the process of retrieving the relevant documents. The next module
processes the retrieved documents, and the last module performs the extraction
and generation of a response. Natural language processing techniques are used
for processing the question and documents and also for answer extraction.
Ontology and domain knowledge are used for reformulating queries and
identifying the relations. The aim of the system is to generate a short and
specific answer to a question asked in natural language in a specific domain.
Our implementation achieves 94% accuracy on natural language question
answering.
|
1311.3192 | Localizing Grasp Affordances in 3-D Points Clouds Using Taubin Quadric
Fitting | cs.RO | Perception-for-grasping is a challenging problem in robotics. Inexpensive
range sensors such as the Microsoft Kinect provide sensing capabilities that
have given new life to the effort of developing robust and accurate perception
methods for robot grasping. This paper proposes a new approach to localizing
enveloping grasp affordances in 3-D point clouds efficiently. The approach is
based on modeling enveloping grasp affordances as cylindrical shells that
correspond to the geometry of the robot hand. A fast and accurate fitting
method for quadratic surfaces is the core of our approach. An evaluation on a
set of cluttered environments shows high precision and recall statistics. Our
results also show that the approach compares favorably with some alternatives,
and that it is efficient enough to be employed for robot grasping in real-time.
|
1311.3198 | Sound, Complete and Minimal UCQ-Rewriting for Existential Rules | cs.AI cs.LO | We address the issue of Ontology-Based Data Access, with ontologies
represented in the framework of existential rules, also known as Datalog+/-. A
well-known approach involves rewriting the query using ontological knowledge.
We focus here on the basic rewriting technique which consists of rewriting the
initial query into a union of conjunctive queries. First, we study a generic
breadth-first rewriting algorithm, which takes as input any rewriting operator,
and define properties of rewriting operators that ensure the correctness of the
algorithm. Then, we focus on piece-unifiers, which provide a rewriting operator
with the desired properties. Finally, we propose an implementation of this
framework and report some experiments.
|
1311.3211 | Stochastic inference with deterministic spiking neurons | q-bio.NC cond-mat.dis-nn cs.NE physics.bio-ph stat.ML | The seemingly stochastic transient dynamics of neocortical circuits observed
in vivo have been hypothesized to represent a signature of ongoing stochastic
inference. In vitro neurons, on the other hand, exhibit a highly deterministic
response to various types of stimulation. We show that an ensemble of
deterministic leaky integrate-and-fire neurons embedded in a spiking noisy
environment can attain the correct firing statistics in order to sample from a
well-defined target distribution. We provide an analytical derivation of the
activation function on the single cell level; for recurrent networks, we
examine convergence towards stationarity in computer simulations and
demonstrate sample-based Bayesian inference in a mixed graphical model. This
establishes a rigorous link between deterministic neuron models and functional
stochastic dynamics on the network level.
|
1311.3220 | Chaotic Arithmetic Coding for Secure Video Multicast | cs.MM cs.CR cs.IT math.IT | Arithmetic Coding (AC) is widely used for the entropy coding of text and
video data. It involves recursive partitioning of the range [0,1) in accordance
with the relative probabilities of occurrence of the input symbols. A data
(image or video) encryption scheme based on arithmetic coding, called Chaotic
Arithmetic Coding (CAC), has been presented in previous works. In CAC, a large
number of chaotic maps can be used to perform coding, each achieving Shannon
optimal compression performance. The exact choice of map is governed by a key.
CAC has the effect of scrambling the intervals without making any changes to
the width of interval in which the codeword must lie, thereby allowing
encryption without sacrificing any coding efficiency. In this paper, we use the
redundancy in the CAC procedure for secure multicast of videos, where different
keys are distributed to multiple users to decode the same encrypted file. By
encrypting once, we can generate multiple keys, any of which can be used to
decrypt the encoded file. This is very suitable for video distribution over
Internet where a single video can be distributed to multiple clients in a
privacy preserving manner.
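The recursive interval partitioning of plain arithmetic coding can be sketched as follows, using exact rational arithmetic for clarity (the key-dependent chaotic-map scrambling of CAC is not reproduced here):

```python
from fractions import Fraction

def _cum_intervals(probs):
    """Partition [0,1) into per-symbol sub-intervals by probability."""
    cum, acc = {}, Fraction(0)
    for s, p in probs.items():
        cum[s] = (acc, acc + Fraction(p))
        acc += Fraction(p)
    return cum

def ac_encode(symbols, probs):
    """Recursively narrow [0,1); return a fraction inside the final interval."""
    cum = _cum_intervals(probs)
    low, high = Fraction(0), Fraction(1)
    for s in symbols:
        lo, hi = cum[s]
        low, high = low + (high - low) * lo, low + (high - low) * hi
    return (low + high) / 2

def ac_decode(code, n, probs):
    """Invert the partitioning to recover n symbols from the codeword."""
    cum = _cum_intervals(probs)
    out, low, high = [], Fraction(0), Fraction(1)
    for _ in range(n):
        width = high - low
        for s, (lo, hi) in cum.items():
            if low + width * lo <= code < low + width * hi:
                out.append(s)
                low, high = low + width * lo, low + width * hi
                break
    return out
```

In CAC the same interval widths are kept, but a keyed chaotic map permutes which sub-interval each symbol occupies, so decoding without the key fails while compression efficiency is unchanged.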
|
1311.3269 | On a non-local spectrogram for denoising one-dimensional signals | cs.CV | In previous works, we investigated the use of local filters based on partial
differential equations (PDE) to denoise one-dimensional signals through the
image processing of time-frequency representations, such as the spectrogram. In
these image denoising algorithms, the particularity of the image was hardly
taken into account. In this paper, we turn to study the performance of
non-local filters, like Neighborhood or Yaroslavsky filters, in the same
problem. We show that, for certain iterative schemes involving the Neighborhood
filter, the computational time is drastically reduced with respect to
Yaroslavsky or nonlinear PDE based filters, while the outputs of the filtering
processes are similar. This is heuristically justified by the connection
between the (fast) Neighborhood filter applied to a spectrogram and the
corresponding Nonlocal Means filter (accurate) applied to the Wigner-Ville
distribution of the signal. This correspondence holds only for time-frequency
representations of one-dimensional signals, not for usual images, and in this
sense the particularity of the image is exploited. We compare, through a series
of experiments on synthetic and biomedical signals, the performance of local
and non-local filters.
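A minimal sketch of the Neighborhood (Yaroslavsky) filter applied to a spectrogram-like image, assuming a Gaussian grey-level weight with bandwidth h and a square window of half-width rho (parameter names are illustrative):

```python
import numpy as np

def neighborhood_filter(img, h, rho=2):
    """Neighborhood (Yaroslavsky) filter: each pixel is replaced by an
    average of spatially close pixels, weighted by grey-level similarity."""
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            i0, i1 = max(i - rho, 0), min(i + rho + 1, H)
            j0, j1 = max(j - rho, 0), min(j + rho + 1, W)
            patch = img[i0:i1, j0:j1].astype(float)
            # similar grey levels get weight near 1, dissimilar near 0
            w = np.exp(-((patch - img[i, j]) ** 2) / h ** 2)
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

Because the weights depend only on a small spatial window, iterating this filter on a spectrogram is far cheaper than a full Nonlocal Means pass, which is the computational advantage the abstract highlights.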
|
1311.3284 | A family of optimal locally recoverable codes | cs.IT math.IT | A code over a finite alphabet is called locally recoverable (LRC) if every
symbol in the encoding is a function of a small number (at most $r$) of other
symbols. We present a family of LRC codes that attain the maximum possible
value of the distance for a given locality parameter and code cardinality. The
codewords are obtained as evaluations of specially constructed polynomials over
a finite field, and reduce to a Reed-Solomon code if the locality parameter $r$
is set to be equal to the code dimension. The size of the code alphabet for
most parameters is only slightly greater than the code length. The recovery
procedure is performed by polynomial interpolation over $r$ points. We also
construct codes with several disjoint recovering sets for every symbol. This
construction enables the system to conduct several independent and simultaneous
recovery processes of a specific symbol by accessing different parts of the
codeword. This property enables high availability of frequently accessed data
("hot data").
|
1311.3287 | Nonparametric Estimation of Multi-View Latent Variable Models | cs.LG stat.ML | Spectral methods have greatly advanced the estimation of latent variable
models, generating a sequence of novel and efficient algorithms with strong
theoretical guarantees. However, current spectral algorithms are largely
restricted to mixtures of discrete or Gaussian distributions. In this paper, we
propose a kernel method for learning multi-view latent variable models,
allowing each mixture component to be nonparametric. The key idea of the method
is to embed the joint distribution of a multi-view latent variable model into a
reproducing kernel Hilbert space, and then the latent parameters are recovered
using a robust tensor power method. We establish that the sample complexity for
the proposed method is quadratic in the number of latent components and is a
low order polynomial in the other relevant parameters. Thus, our non-parametric
tensor approach to learning latent variable models enjoys good sample and
computational efficiencies. Moreover, the non-parametric tensor power method
compares favorably to the EM algorithm and other existing spectral algorithms in
our experiments.
|
1311.3312 | Synthetic Data Generation using Benerator Tool | cs.DB | Datasets of different characteristics are needed by the research community
for experimental purposes. However, real data may be difficult to obtain due to
privacy concerns. Moreover, real data may not meet specific characteristics
which are needed to verify new approaches under certain conditions. Given these
limitations, the use of synthetic data is a viable alternative to complement
the real data. In this report, we describe the process followed to generate
synthetic data using Benerator, a publicly available tool. The results show
that the synthetic data preserves a high level of accuracy compared to the
original data. The generated datasets correspond to microdata containing
records with social, economic and demographic data which mimics the
distribution of aggregated statistics from the 2011 Irish Census data.
|
1311.3315 | Sparse Matrix Factorization | cs.LG stat.ML | We investigate the problem of factorizing a matrix into several sparse
matrices and propose an algorithm for this under randomness and sparsity
assumptions. This problem can be viewed as a simplification of the deep
learning problem where finding a factorization corresponds to finding edges in
different layers and values of hidden units. We prove that under certain
assumptions for a sparse linear deep network with $n$ nodes in each layer, our
algorithm is able to recover the structure of the network and values of top
layer hidden units for depths up to $\tilde O(n^{1/6})$. We further discuss the
relation among sparse matrix factorization, deep learning, sparse recovery and
dictionary learning.
|
1311.3318 | A Study of Actor and Action Semantic Retention in Video Supervoxel
Segmentation | cs.CV | Existing methods in the semantic computer vision community seem unable to
deal with the explosion and richness of modern, open-source and social video
content. Although sophisticated methods such as object detection or
bag-of-words models have been well studied, they typically operate on low level
features and ultimately suffer from either scalability issues or a lack of
semantic meaning. On the other hand, video supervoxel segmentation has recently
been established and applied to large scale data processing, which potentially
serves as an intermediate representation to high level video semantic
extraction. The supervoxels are rich decompositions of the video content: they
capture object shape and motion well. However, it is not yet known if the
supervoxel segmentation retains the semantics of the underlying video content.
In this paper, we conduct a systematic study of how well the actor and action
semantics are retained in video supervoxel segmentation. Our study has human
observers watching supervoxel segmentation videos and trying to discriminate
both actor (human or animal) and action (one of eight everyday actions). We
gather and analyze a large set of 640 human perceptions over 96 videos in 3
different supervoxel scales. Furthermore, we conduct machine recognition
experiments on a feature defined on supervoxel segmentation, called supervoxel
shape context, which is inspired by the higher order processes in human
perception. Our ultimate findings suggest that a significant amount of
semantics has been well retained in the video supervoxel segmentation and can
be used for further video analysis.
|
1311.3353 | SUNNY: a Lazy Portfolio Approach for Constraint Solving | cs.AI | *** To appear in Theory and Practice of Logic Programming (TPLP) ***
Within the context of constraint solving, a portfolio approach allows one to
exploit the synergy between different solvers in order to create a globally
better solver. In this paper we present SUNNY: a simple and flexible algorithm
that takes advantage of a portfolio of constraint solvers in order to compute
--- without learning an explicit model --- a schedule of them for solving a
given Constraint Satisfaction Problem (CSP). Motivated by the performance
reached by SUNNY vs. different simulations of other state-of-the-art
approaches, we developed sunny-csp, an effective portfolio solver that exploits
the underlying SUNNY algorithm in order to solve a given CSP. Empirical tests
conducted on exhaustive benchmarks of MiniZinc models show that the actual
performance of SUNNY conforms to the predictions. This is encouraging both for
improving the power of CSP portfolio solvers and for trying to export them to
fields such as Answer Set Programming and Constraint Logic Programming.
|