| id | title | categories | abstract |
|---|---|---|---|
1404.4443 | Enhanced List-Based Group-Wise Overloaded Receiver with Application to
Satellite Reception | cs.IT math.IT | The market trends towards the use of smaller dish antennas for TV satellite
receivers, as well as the growing density of broadcasting satellites in orbit
require the application of robust adjacent satellite interference (ASI)
cancellation algorithms at the receivers. The wider beamwidth of a small size
dish and the growing number of satellites in orbit impose an overloaded
scenario, i.e., a scenario where the number of transmitting satellites exceeds
the number of receiving antennas. For such a scenario, we present a two stage
receiver to enhance signal detection from the satellite of interest, i.e., the
satellite that the dish is pointing to, while reducing interference from
neighboring satellites. Towards this objective, we propose an enhanced
List-based Group-wise Search Detection (LGSD) receiver architecture that takes
into account the spatially correlated additive noise and uses the
signal-to-interference-plus-noise ratio (SINR) maximization criterion to
improve detection performance. Simulations show that the proposed receiver
structure enhances the performance of satellite systems in the presence of ASI
when compared to existing methods.
|
1404.4448 | Overloaded Satellite Receiver Using SIC with Hybrid Beamforming and ML
Detection | cs.IT math.IT | In this paper, a new receiver structure that is intended to detect the
signals from multiple adjacent satellites in the presence of other interfering
satellites is proposed. We tackle the worst case interference conditions, i.e.,
it is assumed that uncoded signals that fully overlap in frequency arrive at a
multiple-element small-size parabolic antenna in a spatially correlated noise
environment. The proposed successive interference cancellation (SIC) receiver,
denoted by SIC Hy/ML, employs hybrid beamforming and disjoint maximum
likelihood (ML) detection. Depending on each individual signal's spatial
position, the proposed SIC Hy/ML scheme takes advantage of two types of
beamformers: a maximum ratio combining (MRC) beamformer and a compromised array
response (CAR) beamformer. The performance of the proposed receiver is compared
to an SIC receiver that uses only the MRC beamforming scheme with ML detection for
all signals, a joint ML detector, and a minimum mean square error detector. It
is found that SIC Hy/ML outperforms the other schemes by a large margin.
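The interplay of beamforming and successive cancellation described above can be sketched in a few lines. The following is a minimal illustrative BPSK simulation of SIC with MRC beamforming only; it is not the paper's SIC Hy/ML receiver (the CAR beamformer and the correlated-noise model are omitted), and all names and parameters are our own.

```python
def mrc_detect(y, h):
    """MRC beamforming: combine with w = h / ||h||^2, then BPSK hard decision."""
    num = sum(hi.conjugate() * yi for hi, yi in zip(h, y))
    den = sum(abs(hi) ** 2 for hi in h)
    return 1.0 if (num / den).real >= 0 else -1.0

def sic_receiver(y, channels):
    """Successive interference cancellation: detect the strongest signal first,
    re-modulate it, subtract its contribution, and move to the next signal."""
    order = sorted(range(len(channels)),
                   key=lambda k: -sum(abs(h) ** 2 for h in channels[k]))
    residual, decided = list(y), {}
    for k in order:
        s_hat = mrc_detect(residual, channels[k])
        decided[k] = s_hat
        residual = [r - h * s_hat for r, h in zip(residual, channels[k])]
    return [decided[k] for k in range(len(channels))]

# Two BPSK signals, four receive antennas, orthogonal channel vectors.
H = [[1, 1, 0, 0], [0, 0, 1, -1]]
s = [1.0, -1.0]
y = [H[0][a] * s[0] + H[1][a] * s[1] for a in range(4)]  # noiseless superposition
print(sic_receiver(y, H))  # -> [1.0, -1.0]
```

With orthogonal channel vectors the MRC combiner fully rejects the other signal, so both symbols are recovered; in an overloaded scenario with correlated channels the subtraction step is what keeps later detections reliable.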
|
1404.4453 | Efficient Decoding Algorithms for the Compute-and-Forward Strategy | cs.IT math.IT | We address in this paper decoding aspects of the Compute-and-Forward (CF)
physical-layer network coding strategy. It is known that the original decoder
for the CF is asymptotically optimal. However, its performance gap to optimal
decoders in practical settings is still not known. In this work, we develop
and assess the performance of novel decoding algorithms for the CF operating in
the multiple access channel. For the fading channel, we analyze the ML decoder
and develop a novel Diophantine approximation-based decoding algorithm shown
numerically to outperform the original CF decoder. For the Gaussian channel, we
investigate the maximum a posteriori (MAP) decoder. We derive a novel MAP
decoding metric and develop practical decoding algorithms shown numerically to
outperform the original one.
|
1404.4467 | Cube-Cut: Vertebral Body Segmentation in MRI-Data through Cubic-Shaped
Divergences | cs.CV | In this article, we present a graph-based method using a cubic template for
volumetric segmentation of vertebrae in magnetic resonance imaging (MRI)
acquisitions. The user can define the degree of deviation from a regular cube
via a smoothness value Delta. The Cube-Cut algorithm generates a directed graph
with two terminal nodes (s-t-network), where the nodes of the graph correspond
to a cubic-shaped subset of the image's voxels. The weightings of the graph's
terminal edges, which connect every node with a virtual source s or a virtual
sink t, represent the affinity of a voxel to the vertebra (source) and to the
background (sink). Furthermore, a set of infinite-weight non-terminal
edges implements the smoothness term. After graph construction, a minimal
s-t-cut is calculated within polynomial computation time, which splits the
nodes into two disjoint units. Subsequently, the segmentation result is
obtained from the source set. A quantitative evaluation of a C++
implementation of the algorithm resulted in an average Dice Similarity
Coefficient (DSC) of 81.33% and a running time of less than a minute.
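The minimal s-t-cut machinery the abstract relies on can be illustrated with a toy max-flow computation. Below is a generic Edmonds-Karp sketch on a four-node graph with one "vertebra-like" and one "background-like" voxel; it is not the Cube-Cut implementation, and the capacities are invented.

```python
from collections import deque

def max_flow_min_cut(n, edges, s, t):
    """Edmonds-Karp max flow; returns the flow value and the source side of a
    minimal s-t-cut (the nodes still reachable in the residual graph)."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        # Recover the path, push the bottleneck capacity along it.
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] += push
        flow += push
    # Source side of the min cut = residual-reachable nodes.
    source_side, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in source_side and cap[u][v] > 0:
                source_side.add(v)
                q.append(v)
    return flow, source_side

# s = 0, t = 1; voxel 2 leans towards the vertebra, voxel 3 towards background;
# the finite edge between the voxels plays the role of the smoothness term.
edges = [(0, 2, 10), (2, 1, 1), (0, 3, 1), (3, 1, 10), (2, 3, 2), (3, 2, 2)]
print(max_flow_min_cut(4, edges, 0, 1))  # voxel 2 ends up on the source side
```

After the cut, the segmentation is read off exactly as in the abstract: everything on the source side (here voxel 2) belongs to the vertebra.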
|
1404.4468 | On Independence Atoms and Keys | cs.DB cs.LO | Uniqueness and independence are two fundamental properties of data. Their
enforcement in database systems can lead to higher quality data, faster data
service response time, better data-driven decision making and knowledge
discovery from data. These applications can be unlocked effectively by providing
efficient solutions to the underlying implication problems of keys and
independence atoms. Indeed, for the sole class of keys and the sole class of
independence atoms the associated finite and general implication problems
coincide and enjoy simple axiomatizations. However, the situation changes
drastically when keys and independence atoms are combined. We show that the
finite and the general implication problems are already different for keys and
unary independence atoms. Furthermore, we establish a finite axiomatization for
the general implication problem, and show that the finite implication problem
does not enjoy a k-ary axiomatization for any k.
|
1404.4496 | 3-D Channel Characteristics for Molecular Communications with an
Absorbing Receiver | cs.IT math.IT q-bio.MN | Within the domain of molecular communications, researchers mimic the
techniques found in nature to devise alternative communication methods for
collaborating nanomachines. This work investigates the channel transfer
function for molecular communication via diffusion. In nature,
information-carrying molecules are generally absorbed by the target node via
receptors. Using the concentration function, without considering the absorption
process, as the channel transfer function implicitly assumes that the receiver
node does not affect the system. In this letter, we propose a solid analytical
formulation and analyze the signal metrics (attenuation and propagation delay)
for molecular communication via diffusion channel with an absorbing receiver in
a 3-D environment. The proposed model and the formulation match well with the
simulations without any normalization.
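For a point release at distance d from the centre of a fully absorbing sphere of radius r_r in 3-D, classical first-passage theory gives both the hitting-rate density and the cumulative fraction absorbed. The sketch below encodes those standard formulas; it is consistent with the letter's setting, but the function names and example parameters are ours, not the paper's.

```python
import math

def frac_absorbed(t, d, r_r, D):
    """Cumulative fraction of molecules absorbed by time t: point release at
    distance d from the centre of a fully absorbing sphere of radius r_r,
    diffusion coefficient D (classical 3-D first-passage result)."""
    return (r_r / d) * math.erfc((d - r_r) / math.sqrt(4.0 * D * t))

def hit_rate(t, d, r_r, D):
    """First-hitting-time density, i.e. the time derivative of frac_absorbed."""
    x = d - r_r
    return (r_r / d) * x / math.sqrt(4.0 * math.pi * D * t ** 3) \
        * math.exp(-x * x / (4.0 * D * t))

# Example parameters (ours): receiver radius 5 um, distance 10 um, D = 100 um^2/s.
d, r_r, D = 10.0, 5.0, 100.0
print(frac_absorbed(0.2, d, r_r, D))   # fraction absorbed within 0.2 s
print(frac_absorbed(1e9, d, r_r, D))   # long-time limit -> r_r / d = 0.5
```

In the long-time limit the absorbed fraction tends to r_r/d, the classical capture probability of an absorbing sphere, which is why an absorbing receiver attenuates the signal very differently from a passive concentration observer.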
|
1404.4502 | A Complete Solver for Constraint Games | cs.GT cs.AI | Game Theory studies situations in which multiple agents having conflicting
objectives have to reach a collective decision. The question of a compact
representation language for agents' utility functions is of crucial importance,
since the classical representation of an $n$-player game is given by an
$n$-dimensional matrix of exponential size for each player. In this paper we
use the framework of Constraint Games in which CSP are used to represent
utilities. Constraint Programming --including global constraints-- allows one
to easily give a compact and elegant model of many useful games. Constraint Games
come in two flavors: Constraint Satisfaction Games and Constraint Optimization
Games, the former using satisfaction to define Boolean utilities. In
addition to multimatrix games, it is also possible to model more complex games
where hard constraints forbid certain situations. In this paper we study
complete search techniques and show that our solver using the compact
representation of Constraint Games is faster than the classical game solver
Gambit by one to two orders of magnitude.
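The pure Nash equilibria that such solvers search for can be found on tiny instances by exhaustive deviation checking. Below is a generic brute-force sketch, not the paper's complete-search solver: utilities are plain Python functions standing in for the CSP-encoded utilities of Constraint Games, and the coordination game is an invented example.

```python
from itertools import product

def pure_nash_equilibria(strategies, utilities):
    """Enumerate all strategy profiles and keep those from which no player
    can improve by unilaterally deviating (pure Nash equilibria)."""
    equilibria = []
    for profile in product(*strategies):
        stable = True
        for i, options in enumerate(strategies):
            current = utilities[i](profile)
            if any(utilities[i](profile[:i] + (alt,) + profile[i + 1:]) > current
                   for alt in options if alt != profile[i]):
                stable = False
                break
        if stable:
            equilibria.append(profile)
    return equilibria

# Two-player coordination game: each player gets 1 when the actions match.
strategies = [("a", "b"), ("a", "b")]
u = lambda p: 1 if p[0] == p[1] else 0
eqs = pure_nash_equilibria(strategies, [u, u])
print(eqs)  # -> [('a', 'a'), ('b', 'b')]
```

A dedicated solver prunes this exponential enumeration; the point of the compact CSP representation is precisely to make such pruning effective.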
|
1404.4528 | The role of centrality for the identification of influential spreaders
in complex networks | physics.soc-ph cs.SI | The identification of the most influential spreaders in networks is important
to control and understand the spreading capabilities of the system as well as
to ensure an efficient information diffusion such as in rumor-like dynamics.
Recent works have suggested that the identification of influential spreaders is
not independent of the dynamics being studied. For instance, the key disease
spreaders might not necessarily be so when it comes to analyzing social contagion
or rumor propagation. Additionally, it has been shown that different metrics
(degree, coreness, etc.) might identify different influential nodes, with
varying degrees of accuracy, even for the same dynamical process. In this paper, we
investigate how nine centrality measures correlate with the disease and rumor
spreading capabilities of the nodes that make up different synthetic and
real-world (both spatial and non-spatial) networks. We also propose a
generalization of the random walk accessibility as a new centrality measure and
derive analytical expressions for the latter measure for simple network
configurations. Our results show that for non-spatial networks, the $k$-core
and degree centralities are most correlated to epidemic spreading, whereas the
average neighborhood degree, the closeness centrality and accessibility are
most related to rumor dynamics. On the contrary, for spatial networks, the
accessibility measure outperforms the rest of centrality metrics in almost all
cases regardless of the kind of dynamics considered. Therefore, an important
consequence of our analysis is that previous studies performed in synthetic
random networks cannot be generalized to the case of spatial networks.
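Of the centralities compared, degree is immediate and the $k$-core (coreness) is computed by the standard peeling procedure: repeatedly remove a minimum-degree node. A minimal sketch, using our own adjacency-dict representation rather than the paper's networks:

```python
def coreness(adj):
    """Core number of every node by iterative peeling: repeatedly remove a
    minimum-degree node; the largest degree seen at removal time so far is
    the removed node's core number."""
    deg = {u: len(vs) for u, vs in adj.items()}
    alive = set(adj)
    core, k = {}, 0
    while alive:
        u = min(alive, key=lambda v: deg[v])
        k = max(k, deg[u])
        core[u] = k
        alive.remove(u)
        for v in adj[u]:
            if v in alive:
                deg[v] -= 1
    return core

# A triangle (nodes 0, 1, 2) with a pendant node 3 attached to node 0.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
print(coreness(adj))  # triangle nodes get core number 2, the pendant node 1
```

Note how node 0 has the highest degree (3) but the same coreness as the other triangle nodes, which is exactly the kind of disagreement between metrics the abstract investigates.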
|
1404.4540 | Collective computation in a network with distributed information | cs.SI cs.DC physics.soc-ph | We analyze a distributed information network in which each node has access to
the information contained in a limited set of nodes (its neighborhood) at a
given time. A collective computation is carried out in which each node
calculates a value that depends on all the information contained in the network (in
our case, the average value of a variable that can take different values in
each network node). The neighborhoods can change dynamically by exchanging
neighbors with other nodes. The results of this collective calculation show
rapid convergence and good scalability with the network size. These results are
compared with those of a fixed network arranged as a square lattice, in which
the number of rounds to achieve a given accuracy is very high when the size of
the network increases. The results for the evolving networks are interpreted in
light of the properties of complex networks and are directly relevant to the
diameter and characteristic path length of the networks, which seem to express
"small world" properties.
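The collective averaging described above can be mimicked by a synchronous scheme in which every node repeatedly averages over its closed neighbourhood. On a regular network this update matrix is doubly stochastic, so it preserves and converges to the global mean. A toy sketch on a fixed ring; the paper's dynamically evolving neighbourhoods are not modelled here.

```python
def gossip_average(values, adj, rounds=50):
    """Each round, every node replaces its value with the average over its
    closed neighbourhood (itself plus its current neighbours)."""
    vals = list(values)
    for _ in range(rounds):
        vals = [(vals[u] + sum(vals[v] for v in adj[u])) / (1 + len(adj[u]))
                for u in range(len(vals))]
    return vals

# Ring of 6 nodes; the true average of the initial values is 3.5.
adj = {u: [(u - 1) % 6, (u + 1) % 6] for u in range(6)}
out = gossip_average([1, 2, 3, 4, 5, 6], adj, rounds=200)
print(out)  # every entry converges to 3.5
```

On a fixed lattice like this ring, the mixing time grows with the diameter, which matches the abstract's observation that fixed square lattices need many rounds while rewired small-world neighbourhoods converge quickly.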
|
1404.4560 | A Control Dichotomy for Pure Scoring Rules | cs.GT cs.CC cs.MA | Scoring systems are an extremely important class of election systems. A
length-$m$ (so-called) scoring vector applies only to $m$-candidate elections.
To handle general elections, one must use a family of vectors, one per length.
The most elegant approach to making sure such families are "family-like" is the
recently introduced notion of (polynomial-time uniform) pure scoring rules
[Betzler and Dorn 2010], where each scoring vector is obtained from its
precursor by adding one new coefficient. We obtain the first dichotomy theorem
for pure scoring rules for a control problem. In particular, for constructive
control by adding voters (CCAV), we show that CCAV is solvable in polynomial
time for $k$-approval with $k \leq 3$, $k$-veto with $k \leq 2$, every pure
scoring rule in which only the two top-rated candidates gain nonzero scores,
and a particular rule that is a "hybrid" of 1-approval and 1-veto. For all
other pure scoring rules, CCAV is NP-complete. We also investigate the
descriptive richness of different models for defining pure scoring rules,
proving how more rule-generation time gives more rules, proving that rationals
give more rules than do the natural numbers, and proving that some restrictions
previously thought to be "w.l.o.g." in fact do lose generality.
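For concreteness, the $k$-approval rule featured in the dichotomy gives one point to each of a voter's top $k$ candidates. A small sketch of winner determination; the ballots are invented data.

```python
def k_approval_winners(ballots, k):
    """Each voter approves the top k candidates of their ranking; the
    candidates with the highest approval count win."""
    scores = {}
    for ranking in ballots:
        for cand in ranking[:k]:
            scores[cand] = scores.get(cand, 0) + 1
        for cand in ranking[k:]:
            scores.setdefault(cand, 0)
    top = max(scores.values())
    return sorted(c for c, s in scores.items() if s == top), scores

ballots = [("a", "b", "c"), ("b", "a", "c"), ("c", "a", "b")]
print(k_approval_winners(ballots, 2))  # -> (['a'], {'a': 3, 'b': 2, 'c': 1})
```

Constructive control by adding voters (CCAV) then asks whether a distinguished candidate can be made a winner by adding at most a budgeted number of ballots from a given pool; the dichotomy says for which pure scoring rules this question is tractable.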
|
1404.4572 | The First Parallel Multilingual Corpus of Persian: Toward a Persian
BLARK | cs.CL | In this article, we introduce the first parallel corpus aligning Persian with
more than 10 European languages. This article describes the primary steps
toward preparing a Basic Language Resources Kit (BLARK) for Persian. Up to now,
we have proposed a morphosyntactic specification of Persian based on
EAGLE/MULTEXT guidelines and specific resources of MULTEXT-East. The article
introduces the Persian language, with emphasis on its orthography and
morphosyntactic features; then a new Part-of-Speech categorization and
orthography for Persian in digital environments is proposed. Finally, the
corpus and related statistics are analyzed.
|
1404.4606 | How Many Topics? Stability Analysis for Topic Models | cs.LG cs.CL cs.IR | Topic modeling refers to the task of discovering the underlying thematic
structure in a text corpus, where the output is commonly presented as a report
of the top terms appearing in each topic. Despite the diversity of topic
modeling algorithms that have been proposed, a common challenge in successfully
applying these techniques is the selection of an appropriate number of topics
for a given corpus. Choosing too few topics will produce results that are
overly broad, while choosing too many will result in the "over-clustering" of a
corpus into many small, highly-similar topics. In this paper, we propose a
term-centric stability analysis strategy to address this issue, the idea being
that a model with an appropriate number of topics will be more robust to
perturbations in the data. Using a topic modeling approach based on matrix
factorization, evaluations performed on a range of corpora show that this
strategy can successfully guide the model selection process.
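One way to operationalise term-centric stability is to fit the model several times on perturbed samples and measure how well the resulting top-term sets agree across runs. The sketch below scores agreement with a greedy best-match Jaccard overlap; this is a simplification of a proper matching-based agreement score, and the toy "runs" are invented rather than taken from the paper's corpora.

```python
from itertools import combinations

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def stability(runs):
    """Mean pairwise Jaccard overlap between top-term sets, after greedily
    matching each topic in one run to its best counterpart in the other."""
    pair_scores = []
    for r1, r2 in combinations(runs, 2):
        matched = [max(jaccard(t1, t2) for t2 in r2) for t1 in r1]
        pair_scores.append(sum(matched) / len(matched))
    return sum(pair_scores) / len(pair_scores)

# Three runs of a 2-topic model on perturbed samples of the same corpus.
runs = [
    [["ball", "team", "goal"], ["bank", "rate", "loan"]],
    [["team", "goal", "score"], ["bank", "loan", "cash"]],
    [["goal", "team", "ball"], ["rate", "bank", "loan"]],
]
print(stability(runs))
```

Model selection then amounts to computing this score for a range of topic counts and choosing the number of topics where the score peaks, the intuition being that an appropriate model is robust to the perturbations.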
|
1404.4641 | Multilingual Models for Compositional Distributed Semantics | cs.CL | We present a novel technique for learning semantic representations, which
extends the distributional hypothesis to multilingual data and joint-space
embeddings. Our models leverage parallel data and learn to strongly align the
embeddings of semantically equivalent sentences, while maintaining sufficient
distance between those of dissimilar sentences. The models do not rely on word
alignments or any syntactic information and are successfully applied to a
number of diverse languages. We extend our approach to learn semantic
representations at the document level, too. We evaluate these models on two
cross-lingual document classification tasks, outperforming the prior state of
the art. Through qualitative analysis and the study of pivoting effects we
demonstrate that our representations are semantically plausible and can capture
semantic relationships across languages without parallel data.
|
1404.4644 | A New Space for Comparing Graphs | stat.ME cs.IR cs.LG stat.ML | Finding a new mathematical representation for graphs, which allows direct
comparison between different graph structures, is an open-ended research
direction. Having such a representation is the first prerequisite for a variety
of machine learning algorithms like classification, clustering, etc., over
graph datasets. In this paper, we propose a symmetric positive semidefinite
matrix with the $(i,j)$-{th} entry equal to the covariance between normalized
vectors $A^ie$ and $A^je$ ($e$ being the vector of all ones) as a representation
for a graph with adjacency matrix $A$. We show that the proposed matrix
representation encodes the spectrum of the underlying adjacency matrix and it
also contains information about the counts of small sub-structures present in
the graph such as triangles and small paths. In addition, we show that this
matrix is a \emph{"graph invariant"}. All these properties make the proposed
matrix a suitable object for representing graphs.
The representation, being a covariance matrix in a fixed dimensional metric
space, gives a mathematical embedding for graphs. This naturally leads to a
measure of similarity on graph objects. We define similarity between two given
graphs as a Bhattacharyya similarity measure between their corresponding
covariance matrix representations. As shown in our experimental study on the
task of social network classification, such a similarity measure outperforms
other widely used state-of-the-art methodologies. Our proposed method is also
computationally efficient. The computation of both the matrix representation
and the similarity value can be performed in operations linear in the number of
edges. This makes our method scalable in practice.
We believe our theoretical and empirical results provide evidence for
studying truncated power iterations of the adjacency matrix to characterize
social networks.
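The proposed representation is straightforward to compute: take the covariance between the normalised vectors $A^i e$ and $A^j e$. The sketch below builds it in pure Python and checks the graph-invariance property on two labellings of the same path graph. Assumptions of ours, not the paper's: the choice of $k = 3$ powers and the population-covariance convention (dividing by $n$).

```python
def representation(A, k=3):
    """k x k matrix with (i, j) entry equal to the covariance between the
    normalised vectors A^i e and A^j e, where e is the all-ones vector."""
    n = len(A)
    vecs = []
    v = [1.0] * n
    for _ in range(k):
        v = [sum(A[r][c] * v[c] for c in range(n)) for r in range(n)]  # v <- A v
        norm = sum(x * x for x in v) ** 0.5
        vecs.append([x / norm for x in v])

    def cov(u, w):
        mu, mw = sum(u) / n, sum(w) / n
        return sum((a - mu) * (b - mw) for a, b in zip(u, w)) / n

    return [[cov(vecs[i], vecs[j]) for j in range(k)] for i in range(k)]

# Path graph 0-1-2 and a relabelled copy (centre renamed to node 0): being a
# graph invariant, the representation must coincide.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
P = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
R1, R2 = representation(A), representation(P)
print(all(abs(R1[i][j] - R2[i][j]) < 1e-12 for i in range(3) for j in range(3)))  # -> True
```

The invariance follows because every entry is a sum over vertices, which is unchanged by any relabelling; the k matrix-vector products also make clear why the whole construction costs time linear in the number of edges per power.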
|
1404.4646 | Advancing Matrix Completion by Modeling Extra Structures beyond
Low-Rankness | stat.ME cs.IT cs.LG math.IT math.ST stat.TH | A well-known method for completing low-rank matrices based on convex
optimization has been established by Cand{\`e}s and Recht. Although
theoretically complete, the method may not entirely solve the low-rank matrix
completion problem. This is because the method captures only the low-rankness
property, which gives merely a rough constraint that the data points lie on
some low-dimensional subspace, but generally ignores the extra structures which
specify in more detail how the data points lie on the subspace. Whenever the
geometric distribution of the data points is not uniform, the coherence
parameters of data might be large and, accordingly, the method might fail even
if the latent matrix we want to recover is fairly low-rank. To better handle
non-uniform data, in this paper we propose a method termed Low-Rank Factor
Decomposition (LRFD), which imposes an additional restriction that the data
points must be represented as linear combinations of the bases in a dictionary
constructed or learnt in advance. We show that LRFD can well handle non-uniform
data, provided that the dictionary is configured properly: We mathematically
prove that if the dictionary itself is low-rank then LRFD is immune to the
coherence parameters which might be large on non-uniform data. This provides an
elementary principle for learning the dictionary in LRFD and, naturally, leads
to a practical algorithm for advancing matrix completion. Extensive experiments
on randomly generated matrices and motion datasets show encouraging results.
|
1404.4655 | Hierarchical Quasi-Clustering Methods for Asymmetric Networks | cs.LG stat.ML | This paper introduces hierarchical quasi-clustering methods, a generalization
of hierarchical clustering for asymmetric networks where the output structure
preserves the asymmetry of the input data. We show that this output structure
is equivalent to a finite quasi-ultrametric space and study admissibility with
respect to two desirable properties. We prove that a modified version of single
linkage is the only admissible quasi-clustering method. Moreover, we show
stability of the proposed method and we establish invariance properties
fulfilled by it. Algorithms are further developed and the value of
quasi-clustering analysis is illustrated with a study of internal migration
within the United States.
|
1404.4661 | Learning Fine-grained Image Similarity with Deep Ranking | cs.CV | Learning fine-grained image similarity is a challenging task. It needs to
capture between-class and within-class image differences. This paper proposes a
deep ranking model that employs deep learning techniques to learn a similarity
metric directly from images. It has higher learning capability than models based
on hand-crafted features. A novel multiscale network structure has been
developed to describe the images effectively. An efficient triplet sampling
algorithm is proposed to learn the model with distributed asynchronous
stochastic gradient descent. Extensive experiments show that the proposed algorithm
outperforms models based on hand-crafted visual features and deep
classification models.
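The training signal behind such a ranking model is a triplet hinge loss: an anchor image should be closer to a positive (same fine-grained class) than to a negative by at least a margin. Below is a generic formulation with squared Euclidean distances; the margin value and the toy embeddings are ours, and the multiscale network itself is not reproduced.

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge ranking loss: the squared distance to the positive example should
    be smaller than the distance to the negative one by at least `margin`."""
    d = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    return max(0.0, margin + d(anchor, positive) - d(anchor, negative))

# Anchor far closer to the positive than to the negative: zero loss.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0]))   # -> 0.0
# Positive and negative nearly equidistant: the margin is violated, loss > 0.
print(triplet_loss([0.0, 0.0], [1.0, 0.0], [1.1, 0.0]))
```

The triplet sampling algorithm mentioned in the abstract decides which (anchor, positive, negative) triples to feed to this loss, since most random triples already satisfy the margin and contribute no gradient.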
|
1404.4667 | Subspace Learning and Imputation for Streaming Big Data Matrices and
Tensors | stat.ML cs.IT cs.LG math.IT | Extracting latent low-dimensional structure from high-dimensional data is of
paramount importance in timely inference tasks encountered with `Big Data'
analytics. However, increasingly noisy, heterogeneous, and incomplete datasets
as well as the need for {\em real-time} processing of streaming data pose major
challenges to this end. In this context, the present paper permeates benefits
from rank minimization to scalable imputation of missing data, via tracking
low-dimensional subspaces and unraveling latent (possibly multi-way) structure
from \emph{incomplete streaming} data. For low-rank matrix data, a subspace
estimator is proposed based on an exponentially-weighted least-squares
criterion regularized with the nuclear norm. After recasting the non-separable
nuclear norm into a form amenable to online optimization, real-time algorithms
with complementary strengths are developed and their convergence is established
under simplifying technical assumptions. In a stationary setting, the
asymptotic estimates obtained offer the well-documented performance guarantees
of the {\em batch} nuclear-norm regularized estimator. Under the same unifying
framework, a novel online (adaptive) algorithm is developed to obtain multi-way
decompositions of \emph{low-rank tensors} with missing entries, and perform
imputation as a byproduct. Simulated tests with both synthetic as well as real
Internet and cardiac magnetic resonance imagery (MRI) data confirm the efficacy
of the proposed algorithms, and their superior performance relative to
state-of-the-art alternatives.
|
1404.4679 | Graph-based Anomaly Detection and Description: A Survey | cs.SI cs.CR | Detecting anomalies in data is a vital task, with numerous high-impact
applications in areas such as security, finance, health care, and law
enforcement. While numerous techniques have been developed in past years for
spotting outliers and anomalies in unstructured collections of
multi-dimensional points, with graph data becoming ubiquitous, techniques for
structured {\em graph} data have come into focus recently. As objects in graphs
have long-range correlations, a suite of novel technology has been developed
for anomaly detection in graph data.
This survey aims to provide a general, comprehensive, and structured overview
of the state-of-the-art methods for anomaly detection in data represented as
graphs. As a key contribution, we provide a comprehensive exploration of both
data mining and machine learning algorithms for these {\em detection} tasks. We
give a general framework for the algorithms categorized under various settings:
unsupervised vs. (semi-)supervised approaches, for static vs. dynamic graphs,
for attributed vs. plain graphs. We highlight the effectiveness, scalability,
generality, and robustness aspects of the methods. What is more, we stress the
importance of anomaly {\em attribution} and highlight the major techniques that
facilitate digging out the root cause, or the `why', of the detected anomalies
for further analysis and sense-making. Finally, we present several real-world
applications of graph-based anomaly detection in diverse domains, including
financial, auction, computer traffic, and social networks. We conclude our
survey with a discussion on open theoretical and practical challenges in the
field.
|
1404.4699 | Modal occupation measures and LMI relaxations for nonlinear switched
systems control | math.OC cs.SY | This paper presents a linear programming approach for the optimal control of
nonlinear switched systems where the control is the switching sequence. This is
done by introducing modal occupation measures, which allow the problem to be
relaxed as a primal linear programming (LP) problem. Its dual linear program of
Hamilton-Jacobi-Bellman inequalities is also characterized. The LPs are then
solved numerically with a converging hierarchy of primal-dual
moment-sum-of-squares (SOS) linear matrix inequalities (LMI). Because of the
special structure of switched systems, we obtain a much more efficient method
than could be achieved by applying standard moment/SOS LMI hierarchies for
general optimal control problems.
|
1404.4702 | Tight Bounds on $\ell_1$ Approximation and Learning of Self-Bounding
Functions | cs.LG cs.DS | We study the complexity of learning and approximation of self-bounding
functions over the uniform distribution on the Boolean hypercube $\{0,1\}^n$.
Informally, a function $f:\{0,1\}^n \rightarrow \mathbb{R}$ is self-bounding if
for every $x \in \{0,1\}^n$, $f(x)$ upper bounds the sum of all the $n$ marginal
decreases in the value of the function at $x$. Self-bounding functions include
such well-known classes of functions as submodular and fractionally-subadditive
(XOS) functions. They were introduced by Boucheron et al. (2000) in the context
of concentration of measure inequalities. Our main result is a nearly tight
$\ell_1$-approximation of self-bounding functions by low-degree juntas.
Specifically, all self-bounding functions can be $\epsilon$-approximated in
$\ell_1$ by a polynomial of degree $\tilde{O}(1/\epsilon)$ over
$2^{\tilde{O}(1/\epsilon)}$ variables. We show that both the degree and
junta-size are optimal up to logarithmic terms. Previous techniques considered
stronger $\ell_2$ approximation and proved nearly tight bounds of
$\Theta(1/\epsilon^{2})$ on the degree and $2^{\Theta(1/\epsilon^2)}$ on the
number of variables. Our bounds rely on the analysis of noise stability of
self-bounding functions together with a stronger connection between noise
stability and $\ell_1$ approximation by low-degree polynomials. This technique
can also be used to get tighter bounds on $\ell_1$ approximation by low-degree
polynomials and faster learning algorithm for halfspaces.
These results lead to improved and in several cases almost tight bounds for
PAC and agnostic learning of self-bounding functions relative to the uniform
distribution. In particular, assuming hardness of learning juntas, we show that
PAC and agnostic learning of self-bounding functions have complexity of
$n^{\tilde{\Theta}(1/\epsilon)}$.
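The defining inequality is easy to check exhaustively for small $n$. The sketch below implements the abstract's informal definition (the sum of the $n$ marginal decreases is bounded by $f(x)$) and applies it to two toy functions of our choosing.

```python
from itertools import product

def is_self_bounding(f, n):
    """Brute-force check over {0,1}^n that f(x) upper-bounds the sum of the
    n marginal decreases of f at x (the informal self-bounding condition)."""
    for x in product((0, 1), repeat=n):
        total_decrease = 0.0
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]   # flip coordinate i
            total_decrease += max(0.0, f(x) - f(y))
        if total_decrease > f(x) + 1e-12:
            return False
    return True

count_ones = lambda x: float(sum(x))
print(is_self_bounding(count_ones, 4))                    # -> True
print(is_self_bounding(lambda x: float(sum(x)) ** 2, 4))  # -> False
```

Counting ones satisfies the condition with equality (flipping each 1 decreases the value by exactly 1), while its square fails as soon as two coordinates are set, since each flip then costs more than one unit.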
|
1404.4711 | Resource Allocation for Power Minimization in the Downlink of THP-based
Spatial Multiplexing MIMO-OFDMA Systems | cs.IT math.IT | In this work, we deal with resource allocation in the downlink of spatial
multiplexing MIMO-OFDMA systems. In particular, we concentrate on the problem
of jointly optimizing the transmit and receive processing matrices, the channel
assignment and the power allocation with the objective of minimizing the total
power consumption while satisfying different quality-of-service requirements. A
layered architecture is used in which users are first partitioned in different
groups on the basis of their channel quality and then channel assignment and
transceiver design are sequentially addressed starting from the group of users
with most adverse channel conditions. The multi-user interference among users
belonging to different groups is removed at the base station using a
Tomlinson-Harashima pre-coder operating at user level. Numerical results are
used to highlight the effectiveness of the proposed solution and to make
comparisons with existing alternatives.
|
1404.4714 | Radical-Enhanced Chinese Character Embedding | cs.CL | We present a method to leverage radicals for learning Chinese character
embeddings. A radical is a semantic and phonetic component of a Chinese character.
It plays an important role, as characters with the same radical usually have
similar semantic meaning and grammatical usage. However, existing Chinese
processing algorithms typically regard word or character as the basic unit but
ignore the crucial radical information. In this paper, we fill this gap by
leveraging radicals to learn continuous representations of Chinese characters.
We develop a dedicated neural architecture to effectively learn character
embedding and apply it on Chinese character similarity judgement and Chinese
word segmentation. Experimental results show that our radical-enhanced method
outperforms existing embedding learning algorithms on both tasks.
|
1404.4738 | On the Deployment of Cognitive Relay as Underlay Systems | cs.IT math.IT | The objective of this paper is to extend the idea of Cognitive Relay (CR).
CR, as a secondary user, follows an underlay paradigm to endorse secondary
usage of the spectrum to the indoor devices. To seek a spatial opportunity,
i.e., deciding its transmission over the primary user channels, CR models its
deployment scenario and the movements of the primary receivers and indoor
devices. Modeling is beneficial for theoretical analysis, however it is also
important to ensure the performance of CR in a real scenario. We consider
briefly, the challenges involved while deploying a hardware prototype of such a
system.
|
1404.4740 | Challenges in Persian Electronic Text Analysis | cs.CL | Farsi, also known as Persian, is the official language of Iran and Tajikistan
and one of the two main languages spoken in Afghanistan. Farsi uses a modified
Arabic script as its writing system.
writing standards of Farsi and highlight problems one would face when analyzing
Farsi electronic texts, especially during the development of Farsi corpora
with regard to the transcription and encoding of Farsi e-texts. The points mentioned
may sound easy, but they are crucial when developing and processing written
corpora of Farsi.
|
1404.4748 | Resilience of modular complex networks | physics.soc-ph cs.SI | Complex networks often have a modular structure, where a number of
tightly-connected groups of nodes (modules) have relatively few interconnections.
Modularity has been shown to have an important effect on the evolution and
stability of biological networks, on the scalability and efficiency of
large-scale infrastructure, and the development of economic and social systems.
An analytical framework for understanding modularity and its effects on network
vulnerability is still missing. Through recent advances in the understanding of
multilayer networks, however, it is now possible to develop a theoretical
framework to systematically study this critical issue. Here we study,
analytically and numerically, the resilience of modular networks under attacks
on interconnected nodes, which exhibit high betweenness values and are often
more exposed to failure. Our model provides new insight into the
feedback between structure and function in real-world systems, and consequently
has important implications for areas as diverse as developing efficient immunization
strategies, designing robust large-scale infrastructure, and understanding
brain function.
|
1404.4749 | Decoding binary node labels from censored edge measurements: Phase
transition and efficient recovery | cs.IT cs.DS math.IT | We consider the problem of clustering a graph $G$ into two communities by
observing a subset of the vertex correlations. Specifically, we consider the
inverse problem with observed variables $Y=B_G x \oplus Z$, where $B_G$ is the
incidence matrix of a graph $G$, $x$ is the vector of unknown vertex variables
(with a uniform prior) and $Z$ is a noise vector with Bernoulli$(\varepsilon)$
i.i.d. entries. All variables and operations are Boolean. This model is
motivated by coding, synchronization, and community detection problems. In
particular, it corresponds to a stochastic block model or a correlation
clustering problem with two communities and censored edges. Without noise,
exact recovery (up to a global flip) of $x$ is possible if and only if the graph $G$
is connected, with a sharp threshold at the edge probability $\log(n)/n$ for
Erd\H{o}s-R\'enyi random graphs. The first goal of this paper is to determine
how the edge probability $p$ needs to scale to allow exact recovery in the
presence of noise. Defining the degree (oversampling) rate of the graph by
$\alpha =np/\log(n)$, it is shown that exact recovery is possible if and only
if $\alpha >2/(1-2\varepsilon)^2+ o(1/(1-2\varepsilon)^2)$. In other words,
$2/(1-2\varepsilon)^2$ is the information-theoretic threshold for exact
recovery at low SNR. In addition, an efficient recovery algorithm based on
semidefinite programming is proposed and shown to succeed in the threshold
regime up to twice the optimal rate. For a deterministic graph $G$, defining
the degree rate as $\alpha=d/\log(n)$, where $d$ is the minimum degree of the
graph, it is shown that the proposed method achieves the rate $\alpha>
4((1+\lambda)/(1-\lambda)^2)/(1-2\varepsilon)^2+ o(1/(1-2\varepsilon)^2)$,
where $1-\lambda$ is the spectral gap of the graph $G$.
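As a concrete illustration of the noiseless case, where connectivity alone suffices, the following sketch recovers $x$ up to a global flip by propagating the observed edge parities along a spanning tree. Function and variable names are ours, not the paper's; handling noise requires the semidefinite programming approach the abstract describes.

```python
from collections import deque

def recover_labels(n, y):
    """Recover binary node labels x (up to a global flip) from noiseless
    observations y[(u, v)] = x[u] XOR x[v] on a connected graph."""
    adj = {u: [] for u in range(n)}
    for (u, v), bit in y.items():
        adj[u].append((v, bit))
        adj[v].append((u, bit))
    x = [None] * n
    x[0] = 0                       # pin node 0 to fix the global flip
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v, bit in adj[u]:
            if x[v] is None:
                x[v] = x[u] ^ bit  # propagate along a spanning tree
                queue.append(v)
    return x

x_true = [0, 1, 1, 0, 1]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]
y = {(u, v): x_true[u] ^ x_true[v] for (u, v) in edges}
x_hat = recover_labels(5, y)
# with x_true[0] == 0, x_hat recovers x_true exactly
```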
|
1404.4761 | Using Network Coding to Achieve the Capacity of Deterministic Relay
Networks with Relay Messages | cs.IT math.IT | In this paper, we derive the capacity of deterministic relay networks
with relay messages. We consider a network which consists of five nodes, four
of which can only communicate via the fifth one. However, the fifth node is not
merely a relay as it may exchange private messages with the other network
nodes. First, we develop an upper bound on the capacity region based on the
notion of a single sided genie. In the course of the achievability proof, we
also derive the deterministic capacity of a 4-user relay network (without
private messages at the relay). The capacity achieving schemes use a
combination of two network coding techniques: the Simple Ordering Scheme (SOS)
and Detour Schemes (DS). In the SOS, we order the transmitted bits at each user
such that the bi-directional messages will be received at the same channel
level at the relay, while the basic idea behind the DS is that some parts of
the message follow an indirect path to their respective destinations. This
paper, therefore, serves to show that user cooperation and network coding can
enhance throughput, even when the users are not directly connected to each
other.
|
1404.4772 | Approximating Pareto Curves using Semidefinite Relaxations | math.OC cs.RO | We consider the problem of constructing an approximation of the Pareto curve
associated with the multiobjective optimization problem $\min_{\mathbf{x} \in
\mathbf{S}}\{ (f_1(\mathbf{x}), f_2(\mathbf{x})) \}$, where $f_1$ and $f_2$ are
two conflicting polynomial criteria and $\mathbf{S} \subset \mathbb{R}^n$ is a
compact basic semialgebraic set. We provide a systematic numerical scheme to
approximate the Pareto curve. We start by reducing the initial problem to a
scalarized polynomial optimization problem (POP). Three scalarization methods
lead to different parametric POPs, namely (a) a weighted convex sum
approximation, (b) a weighted Chebyshev approximation, and (c) a parametric
sublevel set approximation. For each case, we have to solve a semidefinite
programming (SDP) hierarchy parametrized by the number of moments or
equivalently the degree of a polynomial sum-of-squares approximation of the
Pareto curve. When the degree of the polynomial approximation tends to
infinity, we provide guarantees of convergence to the Pareto curve in
$L^2$-norm for methods (a) and (b), and $L^1$-norm for method (c).
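A naive sketch of scalarization method (a) on an illustrative one-dimensional instance, with grid search standing in for the paper's SDP hierarchy; the functions, grid, and names below are our own assumptions:

```python
def weighted_sum_pareto(f1, f2, xs, weights):
    """Approximate the Pareto curve of (f1, f2) over a finite grid xs by
    solving min_x w*f1(x) + (1-w)*f2(x) for each weight w (scalarization
    method (a), with grid search replacing the SDP hierarchy)."""
    front = []
    for w in weights:
        x_star = min(xs, key=lambda x: w * f1(x) + (1 - w) * f2(x))
        front.append((f1(x_star), f2(x_star)))
    return front

# Two conflicting criteria on S = [-2, 2].
f1 = lambda x: (x - 1) ** 2
f2 = lambda x: (x + 1) ** 2
xs = [-2 + 4 * i / 400 for i in range(401)]
weights = [i / 10 for i in range(11)]
front = weighted_sum_pareto(f1, f2, xs, weights)
# front sweeps from (4, 0) (all weight on f2) to (0, 4) (all weight on f1)
```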
|
1404.4774 | Online Group Feature Selection | cs.CV | Online feature selection with dynamic features has become an active research
area in recent years. However, in some real-world applications such as image
analysis and email spam filtering, features may arrive by groups. Existing
online feature selection methods evaluate features individually, while existing
group feature selection methods cannot handle online processing. Motivated by
this, we formulate the online group feature selection problem, and propose a
novel selection approach for this problem. Our proposed approach consists of
two stages: online intra-group selection and online inter-group selection. In
the intra-group selection, we use spectral analysis to select discriminative
features in each group when it arrives. In the inter-group selection, we use
Lasso to select a globally optimal subset of features. This two-stage procedure
continues until there are no more features to come or some predefined stopping
conditions are met. Extensive experiments conducted on benchmark and real-world
data sets demonstrate that our proposed approach outperforms other
state-of-the-art online feature selection methods.
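A highly simplified sketch of the two-stage idea, with a plain correlation score standing in for both the spectral intra-group step and the Lasso inter-group step; all names and data here are illustrative, not from the paper:

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def online_group_select(groups, y, k):
    """Two-stage selection: as each group arrives, keep its best feature
    by |correlation with y| (stand-in for the spectral intra-group step);
    then keep the global top-k survivors (stand-in for the Lasso step)."""
    survivors = []
    for group in groups:                      # groups arrive one at a time
        name, vals = max(group.items(), key=lambda kv: abs(pearson(kv[1], y)))
        survivors.append((name, vals))
    survivors.sort(key=lambda kv: -abs(pearson(kv[1], y)))
    return [name for name, _ in survivors[:k]]

y = [1, 2, 3, 4, 5, 6]
group_a = {"f1": [1, 2, 3, 4, 5, 6], "f2": [3, 1, 4, 1, 5, 9]}
group_b = {"f3": [-1, -2, -3, -4, -5, -6], "f4": [2, 7, 1, 8, 2, 8]}
selected = online_group_select([group_a, group_b], y, k=2)
# the two perfectly correlated features f1 and f3 survive both stages
```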
|
1404.4780 | Robust Face Recognition via Adaptive Sparse Representation | cs.CV | Sparse Representation (or coding) based Classification (SRC) has achieved great
success in face recognition in recent years. However, SRC emphasizes the
sparsity too much and overlooks the correlation information which has been
demonstrated to be critical in real-world face recognition problems. Besides,
some work considers the correlation but overlooks the discriminative ability of
sparsity. Different from these existing techniques, in this paper, we propose a
framework called Adaptive Sparse Representation based Classification (ASRC) in
which sparsity and correlation are jointly considered. Specifically, when the
samples are of low correlation, ASRC selects the most discriminative samples
for representation, like SRC; when the training samples are highly correlated,
ASRC selects most of the correlated and discriminative samples for
representation, rather than choosing some related samples randomly. In general,
the representation model is adaptive to the correlation structure, which
benefits from both $\ell_1$-norm and $\ell_2$-norm.
Extensive experiments conducted on publicly available data sets verify the
effectiveness and robustness of the proposed algorithm by comparing it with
state-of-the-art methods.
|
1404.4785 | Ontology as a Source for Rule Generation | cs.AI | This paper explores the potential of OWL (Web Ontology Language) ontologies
for the generation of rules. The main purpose of this paper is to identify new
types of rules that may be generated from OWL ontologies. Rules generated
from OWL ontologies are necessary for the functioning of the Semantic Web
Expert System. It is expected that the Semantic Web Expert System (SWES) will
be able to process ontologies from the Web in order to supplement or even
extend its knowledge base.
|
1404.4789 | A new combination approach based on improved evidence distance | cs.AI | Dempster-Shafer evidence theory is a powerful tool in information fusion.
When pieces of evidence are highly conflicting, counter-intuitive results may
be produced. To address this open issue, a new method based on the evidence
distance of Jousselme and the Hausdorff distance is proposed. The weight of
each piece of evidence is computed and used to preprocess the original
evidence into new evidence, which is then combined with Dempster's rule.
Compared with existing methods, the proposed method is efficient.
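For reference, a minimal implementation of the combination step, Dempster's rule; the paper's distance-based weighting and preprocessing are omitted, and the mass functions below are our own toy example:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal elements
    to masses) with Dempster's rule, normalizing out the conflict K."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc           # mass on empty intersections
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.5, frozenset("ab"): 0.5}
m12 = dempster_combine(m1, m2)
# K = 0.3, so m12({a}) = 0.3/0.7, m12({b}) = m12({a,b}) = 0.2/0.7
```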
|
1404.4797 | Parallel Graph Partitioning for Complex Networks | cs.DC cs.DS cs.NE cs.SI physics.soc-ph | Processing large complex networks like social networks or web graphs has
recently attracted considerable interest. In order to do this in parallel, we
need to partition them into pieces of about equal size. Unfortunately, previous
parallel graph partitioners originally developed for more regular mesh-like
networks do not work well for these networks. This paper addresses this problem
by parallelizing and adapting the label propagation technique originally
developed for graph clustering. By introducing size constraints, label
propagation becomes applicable for both the coarsening and the refinement phase
of multilevel graph partitioning. We obtain very high quality by applying a
highly parallel evolutionary algorithm to the coarsened graph. The resulting
system is both more scalable and achieves higher quality than state-of-the-art
systems like ParMetis or PT-Scotch. For large complex networks, the performance
differences are substantial. For example, our algorithm can partition a web graph
with 3.3 billion edges in less than sixteen seconds using 512 cores of a high
performance cluster while producing a high quality partition -- none of the
competing systems can handle this graph on our system.
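A minimal, sequential sketch of size-constrained label propagation; the paper's version is parallel and embedded in a multilevel scheme, and the tie-breaking rule and toy graph below are our own choices:

```python
from collections import Counter

def size_constrained_lp(adj, max_block, rounds=5):
    """Size-constrained label propagation: each node adopts the most
    frequent label among its neighbors, unless the target block is full.
    Ties are broken toward the smallest label for determinism."""
    labels = {v: v for v in adj}
    size = Counter(labels.values())
    for _ in range(rounds):
        for v in sorted(adj):
            counts = Counter(labels[u] for u in adj[v])
            for lab, _ in sorted(counts.items(), key=lambda kv: (-kv[1], kv[0])):
                if lab == labels[v] or size[lab] < max_block:
                    if lab != labels[v]:
                        size[labels[v]] -= 1
                        size[lab] += 1
                        labels[v] = lab
                    break
    return labels

# Two triangles joined by a single edge (2, 3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = size_constrained_lp(adj, max_block=3)
# each triangle collapses to one block: {0,1,2} and {3,4,5}
```

The size constraint is what keeps label propagation from merging everything into one cluster, which is what makes it usable for partitioning rather than only clustering.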
|
1404.4800 | Automatic Annotation of Axoplasmic Reticula in Pursuit of Connectomes | cs.CV | In this paper, we present a new pipeline which automatically identifies and
annotates axoplasmic reticula, which are small subcellular structures present
only in axons. We run our algorithm on the Kasthuri11 dataset, which was color
corrected using gradient-domain techniques to adjust contrast. We use a
bilateral filter to smooth out the noise in this data while preserving edges,
which highlights axoplasmic reticula. These axoplasmic reticula are then
annotated using a morphological region growing algorithm. Additionally, we
perform Laplacian sharpening on the bilaterally filtered data to enhance edges,
and repeat the morphological region growing algorithm to annotate more
axoplasmic reticula. We track our annotations through the slices to improve
precision, and to create long objects to aid in segment merging. This method
annotates axoplasmic reticula with high precision. Our algorithm can easily be
adapted to annotate axoplasmic reticula in different sets of brain data by
changing a few thresholds. The contribution of this work is the introduction of
a straightforward and robust pipeline which annotates axoplasmic reticula with
high precision, contributing towards advancements in automatic feature
annotations in neural EM data.
|
1404.4801 | Generalized Evidence Theory | cs.AI | Conflict management is still an open issue in the application of Dempster
Shafer evidence theory. A lot of works have been presented to address this
issue. In this paper, a new theory, called generalized evidence theory
(GET), is proposed. Compared with existing methods, GET assumes an open
world due to uncertainty and incomplete knowledge. The conflicting
evidence is handled under the framework of GET. It
is shown that the new theory can explain and deal with the conflicting evidence
in a more reasonable way.
|
1404.4805 | iPiano: Inertial Proximal Algorithm for Non-Convex Optimization | cs.CV math.OC | In this paper we study an algorithm for solving a minimization problem
composed of a differentiable (possibly non-convex) and a convex (possibly
non-differentiable) function. The algorithm iPiano combines forward-backward
splitting with an inertial force. It can be seen as a non-smooth split version
of the Heavy-ball method from Polyak. A rigorous analysis of the algorithm for
the proposed class of problems yields global convergence of the function values
and the arguments. This makes the algorithm robust for usage on non-convex
problems. The convergence result is obtained based on the Kurdyka-Lojasiewicz (KL) inequality. This
is a very weak restriction, which was used to prove convergence for several
other gradient methods. First, an abstract convergence theorem for a generic
algorithm is proved, and, then iPiano is shown to satisfy the requirements of
this theorem. Furthermore, a convergence rate is established for the general
problem class. We demonstrate iPiano on computer vision problems: image
denoising with learned priors and diffusion based image compression.
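A one-dimensional sketch of the iPiano iteration. For simplicity the smooth part here is convex, whereas the paper's analysis covers non-convex $f$; the step sizes and test function are illustrative choices of ours:

```python
def soft_threshold(z, t):
    """Proximal operator of t*|x| (the l1 prox)."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def ipiano(grad_f, prox_g, x0, alpha, beta, iters=200):
    """iPiano: forward-backward splitting with an inertial term,
    x_{n+1} = prox_g(x_n - alpha*grad_f(x_n) + beta*(x_n - x_{n-1}))."""
    x_prev, x = x0, x0
    for _ in range(iters):
        x_next = prox_g(x - alpha * grad_f(x) + beta * (x - x_prev))
        x_prev, x = x, x_next
    return x

# Minimize 0.5*(x-3)^2 + |x|; the minimizer is x* = 2.
grad_f = lambda x: x - 3.0
alpha, beta = 0.5, 0.4                         # alpha < 2*(1-beta)/L, L = 1
prox_g = lambda z: soft_threshold(z, alpha)    # prox of alpha*|x|
x_star = ipiano(grad_f, prox_g, 0.0, alpha, beta)
# x_star converges to 2.0
```

Setting beta = 0 recovers plain forward-backward splitting; the inertial term is what makes the method a non-smooth analogue of Polyak's Heavy-ball iteration.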
|
1404.4820 | Topology optimization based on moving deformable components: A new
computational framework | cs.CE physics.comp-ph | In the present work, a new computational framework for structural topology
optimization based on the concept of moving deformable components is proposed.
Compared with the traditional pixel or node point-based solution framework, the
proposed solution paradigm can incorporate more geometry and mechanical
information into topology optimization directly and therefore render the
solution process more flexible. It also has the great potential to reduce the
computational burden associated with topology optimization substantially. Some
representative examples are presented to illustrate the effectiveness of the
proposed approach.
|
1404.4821 | A Technology for BigData Analysis Task Description using Domain-Specific
Languages | cs.DC cs.DB cs.PL | The article presents a technology for dynamic knowledge-based building of
Domain-Specific Languages (DSL) to describe data-intensive scientific discovery
tasks using BigData technology. The proposed technology supports a high-level
abstract definition of the analytic and simulation parts of the task, as well
as their integration into composite scientific solutions. Automatic translation
of the abstract task definition enables seamless integration of various data
sources within a single solution.
|
1404.4822 | Performance Analysis of Ambient RF Energy Harvesting: A Stochastic
Geometry Approach | cs.IT math.IT | Ambient RF (Radio Frequency) energy harvesting technique has recently been
proposed as a potential solution to provide proactive energy replenishment for
wireless devices. This paper aims to analyze the performance of a battery-free
wireless sensor powered by ambient RF energy harvesting using a
stochastic-geometry approach. Specifically, we consider a random network model
in which ambient RF sources are distributed as a Ginibre $\alpha$-determinantal
point process, which recovers the Poisson point process when $\alpha$ approaches
zero. We characterize the expected RF energy harvesting rate. We also perform a
worst-case study which derives the upper bounds of both power outage and
transmission outage probabilities. Numerical results show that our upper bounds
are accurate and that better performance is achieved when the distribution of
ambient sources exhibits stronger repulsion.
|
1404.4880 | Bias Correction and Modified Profile Likelihood under the Wishart
Complex Distribution | cs.CV stat.ME | This paper proposes improved methods for the maximum likelihood (ML)
estimation of the equivalent number of looks $L$. This parameter has a
meaningful interpretation in the context of polarimetric synthetic aperture
radar (PolSAR) images. Due to the presence of coherent illumination in their
processing, PolSAR systems generate images which present a granular noise
called speckle. As a potential solution for reducing such interference, the
parameter $L$ controls the signal-to-noise ratio. Thus, the proposal of efficient
estimation methodologies for $L$ has been sought. To that end, we consider
firstly that a PolSAR image is well described by the scaled complex Wishart
distribution. In recent years, Anfinsen et al. derived and analyzed estimation
methods based on the ML and on trace statistical moments for obtaining the
parameter $L$ of the unscaled version of such probability law. This paper
generalizes that approach. We present the second-order bias expression proposed
by Cox and Snell for the ML estimator of this parameter. Moreover, the formula
of the profile likelihood modified by Barndorff-Nielsen in terms of $L$ is
discussed. Such derivations yield two new ML estimators for the parameter $L$,
which are compared to the estimators proposed by Anfinsen et al. The
performance of these estimators is assessed by means of Monte Carlo
experiments, adopting three statistical measures as comparison criterion: the
mean square error, the bias, and the coefficient of variation. In agreement with
the simulation study, an application to actual PolSAR data shows that the
proposed estimators outperform all the others in homogeneous scenarios.
|
1404.4884 | Causal Interfaces | cs.AI math.ST stat.TH | The interaction of two binary variables, assumed to be empirical
observations, has three degrees of freedom when expressed as a matrix of
frequencies. Usually, the size of causal influence of one variable on the other
is calculated as a single value, such as the increase in recovery rate for a
medical treatment. We examine what is lost in this simplification, and
propose using two interface constants to represent positive and negative
implications separately. Given certain assumptions about non-causal outcomes,
the set of resulting epistemologies is a continuum. We derive a variety of
particular measures and contrast them with the one-dimensional index.
|
1404.4887 | (Semi-)External Algorithms for Graph Partitioning and Clustering | cs.DS cs.SI | In this paper, we develop semi-external and external memory algorithms for
graph partitioning and clustering problems. Graph partitioning and clustering
are key tools for processing and analyzing large complex networks. We address
both problems in the (semi-)external model by adapting the size-constrained
label propagation technique. Our (semi-)external size-constrained label
propagation algorithm can be used to compute graph clusterings and is a
prerequisite for the (semi-)external graph partitioning algorithm. The
algorithm is then used for both the coarsening and the refinement phase of a
multilevel algorithm to compute graph partitions. Our algorithm is able to
partition and cluster huge complex networks with billions of edges on cheap
commodity machines. Experiments demonstrate that the semi-external graph
partitioning algorithm is scalable and can compute high quality partitions in
time that is comparable to the running time of an efficient internal memory
implementation. A parallelization of the algorithm in the semi-external model
further reduces running time.
|
1404.4888 | Supervised detection of anomalous light-curves in massive astronomical
catalogs | cs.CE astro-ph.IM cs.LG | The development of synoptic sky surveys has led to a massive amount of data
for which resources needed for analysis are beyond human capabilities. To
process this information and to extract all possible knowledge, machine
learning techniques become necessary. Here we present a new method to
automatically discover unknown variable objects in large astronomical catalogs.
With the aim of taking full advantage of all the information we have about
known objects, our method is based on a supervised algorithm. In particular, we
train a random forest classifier using known variability classes of objects and
obtain votes for each of the objects in the training set. We then model this
voting distribution with a Bayesian network and obtain the joint voting
distribution among the training objects. Consequently, an unknown object is
considered an outlier insofar as it has a low joint probability. Our method is
suitable for exploring massive datasets given that the training process is
performed offline. We tested our algorithm on 20 million light-curves from the
MACHO catalog and generated a list of anomalous candidates. We divided the
candidates into two main classes of outliers: artifacts and intrinsic outliers.
Artifacts were principally due to air mass variation, seasonal variation, bad
calibration or instrumental errors and were consequently removed from our
outlier list and added to the training set. After retraining, we selected about
4000 objects, which we passed to a post-analysis stage by performing a
cross-match with all publicly available catalogs. Within these candidates we
identified certain known but rare objects such as eclipsing Cepheids, blue
variables, cataclysmic variables and X-ray sources. For some outliers there
was no additional information. Among them we identified three unknown
variability types and a few individual outliers that will be followed up for
deeper analysis.
|
1404.4893 | CTBNCToolkit: Continuous Time Bayesian Network Classifier Toolkit | cs.AI cs.LG cs.MS | Continuous time Bayesian network classifiers are designed for temporal
classification of multivariate streaming data when time duration of events
matters and the class does not change over time. This paper introduces the
CTBNCToolkit: an open source Java toolkit which provides a stand-alone
application for temporal classification and a library for continuous time
Bayesian network classifiers. CTBNCToolkit implements the inference algorithm,
the parameter learning algorithm, and the structural learning algorithm for
continuous time Bayesian network classifiers. The structural learning algorithm
is based on scoring functions: the marginal log-likelihood score and the
conditional log-likelihood score are provided. CTBNCToolkit also provides an
implementation of the expectation maximization algorithm for clustering
purposes. The paper introduces continuous time Bayesian network classifiers
and describes how to use CTBNCToolkit from the command line. Tutorial
examples are included to help users understand how the toolkit is used. A
section dedicated to the Java library is provided to support further code
extensions.
|
1404.4909 | Document Retrieval on Repetitive Collections | cs.DS cs.IR | Document retrieval aims at finding the most important documents where a
pattern appears in a collection of strings. Traditional pattern-matching
techniques yield brute-force document retrieval solutions, which has motivated
research on tailored indexes that offer near-optimal performance. However,
an experimental study establishing which alternatives are actually better than
brute force, and which perform best depending on the collection
characteristics, has not been carried out. In this paper we address this
shortcoming by exploring the relationship between the nature of the underlying
collection and the performance of current methods. Via extensive experiments we
show that established solutions are often beaten in practice by brute-force
alternatives. We also design new methods that offer superior time/space
trade-offs, particularly on repetitive collections.
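The brute-force baseline referred to above can be sketched in a few lines: scan every document, count pattern occurrences, and rank. Function names and the toy collection are ours:

```python
def top_k_documents(docs, pattern, k):
    """Brute-force document retrieval: scan every document, count
    (non-overlapping) occurrences of the pattern, and return the ids of
    the k documents with the highest counts."""
    scored = [(doc.count(pattern), i) for i, doc in enumerate(docs)]
    scored = [(c, i) for c, i in scored if c > 0]
    scored.sort(key=lambda ci: (-ci[0], ci[1]))   # count desc, id asc
    return [i for _, i in scored[:k]]

docs = ["abracadabra", "banana", "cabra"]
top = top_k_documents(docs, "ab", k=2)
# top == [0, 2]: "abracadabra" contains "ab" twice, "cabra" once
```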
|
1404.4911 | Communication Delay Co-Design in $\mathcal{H}_2$ Distributed Control
Using Atomic Norm Minimization | math.OC cs.SY | When designing distributed controllers for large-scale systems, the
actuation, sensing and communication architectures of the controller can no
longer be taken as given. In particular, controllers implemented using dense
architectures typically outperform controllers implemented using simpler ones
-- however, it is also desirable to minimize the cost of building the
architecture used to implement a controller. The recently introduced
Regularization for Design (RFD) framework poses the controller
architecture/control law co-design problem as one of jointly optimizing the
competing metrics of controller architecture cost and closed loop performance,
and shows that this task can be accomplished by augmenting the variational
solution to an optimal control problem with a suitable atomic norm penalty.
Although explicit constructions for atomic norms useful for the design of
actuation, sensing and joint actuation/sensing architectures are introduced, no
such construction is given for atomic norms used to design communication
architectures. This paper describes an atomic norm that can be used to design
communication architectures for which the resulting distributed optimal
controller is specified by the solution to a convex program. Using this atomic
norm we then show that in the context of $\mathcal{H}_2$ distributed optimal
control, the communication architecture/control law co-design task can be
performed through the use of finite dimensional second order cone programming.
|
1404.4923 | Unified Structured Learning for Simultaneous Human Pose Estimation and
Garment Attribute Classification | cs.CV | In this paper, we utilize structured learning to simultaneously address two
intertwined problems: human pose estimation (HPE) and garment attribute
classification (GAC), which are valuable for a variety of computer vision and
multimedia applications. Unlike previous works that usually handle the two
problems separately, our approach aims to produce a jointly optimal estimation
for both HPE and GAC via a unified inference procedure. To this end, we adopt a
preprocessing step to detect potential human parts from each image (i.e., a set
of "candidates") that allows us to have a manageable input space. In this way,
the simultaneous inference of HPE and GAC is converted to a structured learning
problem, where the inputs are the collections of candidate ensembles, the
outputs are the joint labels of human parts and garment attributes, and the
joint feature representation involves various cues such as pose-specific
features, garment-specific features, and cross-task features that encode
correlations between human parts and garment attributes. Furthermore, we
explore the "strong edge" evidence around the potential human parts so as to
derive more powerful representations for oriented human parts. Such evidence
can be seamlessly integrated into our structured learning model as a kind of
energy function, and the learning process could be performed by standard
structured Support Vector Machines (SVM) algorithm. However, the joint
structure of the two problems is a cyclic graph, which hinders efficient
inference. To resolve this issue, we compute instead approximate optima by
using an iterative procedure, where in each iteration the variables of one
problem are fixed. In this way, satisfactory solutions can be efficiently
computed by dynamic programming. Experimental results on two benchmark datasets
show the state-of-the-art performance of our approach.
|
1404.4927 | On the Number of Iterations for Convergence of CoSaMP and Subspace
Pursuit Algorithms | cs.IT math.IT | In compressive sensing, one important parameter that characterizes the
various greedy recovery algorithms is the iteration bound which provides the
maximum number of iterations by which the algorithm is guaranteed to converge.
In this letter, we present a new iteration bound for CoSaMP by certain
mathematical manipulations including formulation of appropriate sufficient
conditions that ensure passage of a chosen support through the two selection
stages of CoSaMP, Augment and Update. Subsequently, we extend the treatment to
the subspace pursuit (SP) algorithm. The proposed iteration bounds for both
CoSaMP and SP algorithms are seen to be improvements over their existing
counterparts, revealing that both CoSaMP and SP algorithms converge in fewer
iterations than suggested by results available in literature.
|
1404.4935 | Opinion Mining In Hindi Language: A Survey | cs.IR cs.CL | Opinions are very important in the lives of human beings, as they help people
make decisions. As the impact of the Web increases day by day, Web documents
can be seen as a new source of opinions. The Web contains a huge amount of
information generated by users through blogs, forum entries, social networking
websites, and so on. To analyze this large amount of information, it is
necessary to develop methods that automatically classify the information
available on the Web. This domain is called sentiment analysis and opinion
mining. Opinion mining, or sentiment analysis, is a natural language
processing task that mines information from various text forms such as
reviews, news, and blogs, and classifies it on the basis of its polarity as
positive, negative, or neutral. In the last few years, the amount of
Hindi-language content on the Web has increased enormously. Research in
opinion mining has mostly been carried out for English, but it is also
important to perform opinion mining in Hindi, since a large amount of
information in Hindi is now available on the Web. This paper gives an
overview of the work that has been done on the Hindi language.
|
1404.4936 | Promoting cold-start items in recommender systems | cs.IR cs.SI physics.soc-ph | As one of the major challenges, the cold-start problem plagues nearly all
recommender systems. In particular, new items will be overlooked, impeding the development
of new products online. Given limited resources, how to utilize the knowledge
of recommender systems and design efficient marketing strategy for new items is
extremely important. In this paper, we convert this thorny issue into a clear
mathematical problem based on a bipartite network representation. Under the
most widely used algorithm in real e-commerce recommender systems, the
so-called item-based collaborative filtering, we show that simply pushing new items
to active users is not a good strategy. To our surprise, experiments on real
recommender systems indicate that to connect new items with some less active
users will statistically yield better performance, namely these new items will
have more chance to appear in other users' recommendation lists. Further
analysis suggests that the disassortative nature of recommender systems
contributes to this observation. In a word, an in-depth understanding of
recommender systems could pave the way for owners to popularize their
cold-start products with low costs.
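A minimal sketch of the item-based collaborative filtering scoring that underlies such recommenders: item-item cosine similarity computed from co-occurrence counts on a binary user-item matrix. The data and function names are illustrative, not from the paper:

```python
def item_cf_scores(ratings, user):
    """Item-based collaborative filtering on a binary user-item matrix:
    score each unseen item by its summed cosine similarity (from
    co-occurrence counts) to the items the user already has."""
    n_users, n_items = len(ratings), len(ratings[0])
    pop = [sum(ratings[u][i] for u in range(n_users)) for i in range(n_items)]
    def sim(i, j):
        co = sum(ratings[u][i] * ratings[u][j] for u in range(n_users))
        return co / (pop[i] * pop[j]) ** 0.5 if pop[i] and pop[j] else 0.0
    owned = [i for i in range(n_items) if ratings[user][i]]
    return {j: sum(sim(i, j) for i in owned)
            for j in range(n_items) if j not in owned}

ratings = [[1, 1, 0, 0],   # rows: users, columns: items
           [1, 1, 1, 0],
           [0, 1, 1, 1],
           [0, 0, 1, 1]]
scores = item_cf_scores(ratings, user=0)
best = max(scores, key=scores.get)
# item 2 outscores item 3 for user 0
```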
|
1404.4939 | Bipartite Graph based Construction of Compressed Sensing Matrices | cs.IT math.IT | This paper proposes an efficient method to construct a bipartite graph with
as many edges as possible while avoiding cycles of length 4. The binary
matrix associated with this bipartite graph exhibits comparable and even
better phase transitions than Gaussian random matrices.
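The 4-cycle-free condition can be checked directly: a binary matrix is free of length-4 cycles in its bipartite graph exactly when every pair of rows shares at most one common '1' column. As an illustration (our example, not the paper's construction), the incidence matrix of the Fano plane satisfies it, since any two lines meet in exactly one point:

```python
def free_of_4cycles(M):
    """True iff no two rows of the binary matrix M share more than one
    common '1' column, i.e. the bipartite graph has no 4-cycles."""
    for i in range(len(M)):
        for j in range(i + 1, len(M)):
            overlap = sum(a & b for a, b in zip(M[i], M[j]))
            if overlap > 1:
                return False
    return True

# Fano plane incidence matrix: 7 lines (rows) over 7 points (columns).
fano = [[1, 1, 1, 0, 0, 0, 0],
        [1, 0, 0, 1, 1, 0, 0],
        [1, 0, 0, 0, 0, 1, 1],
        [0, 1, 0, 1, 0, 1, 0],
        [0, 1, 0, 0, 1, 0, 1],
        [0, 0, 1, 1, 0, 0, 1],
        [0, 0, 1, 0, 1, 1, 0]]
# fano passes the check; two identical rows with two shared ones fail it
```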
|
1404.4942 | Geometric Abstraction from Noisy Image-Based 3D Reconstructions | cs.CV | Creating geometric abstracted models from image-based scene reconstructions
is difficult due to noise and irregularities in the reconstructed model. In
this paper, we present a geometric modeling method for noisy reconstructions
dominated by planar horizontal and orthogonal vertical structures. We partition
the scene into horizontal slices and create an inside/outside labeling
represented by a floor plan for each slice by solving an energy minimization
problem. Consecutively, we create an irregular discretization of the volume
according to the individual floor plans and again label each cell as
inside/outside by minimizing an energy function. By adjusting the smoothness
parameter, we introduce different levels of detail. In our experiments, we show
results with varying regularization levels using synthetically generated and
real-world data.
|
1404.4944 | Unit commitment with valve-point loading effect | math.OC cs.CE | Valve-point loading affects the input-output characteristics of generating
units, making the fuel costs nonlinear and nonsmooth. This has been
considered in the solution of load dispatch problems, but not in the planning
phase of unit commitment. This paper presents a mathematical optimization model
for the thermal unit commitment problem considering valve-point loading. The
formulation is based on a careful linearization of the fuel cost function,
which is modeled in great detail on the power regions used in the current
solution, and only roughly elsewhere. A set of benchmark instances for this
problem is used for analyzing the method, with recourse to a general-purpose
mixed-integer optimization solver.
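A sketch of a fuel cost curve with valve-point loading: a smooth quadratic plus the standard rectified-sine ripple term, which is what makes the function nonsmooth. The coefficients below are illustrative, not taken from the paper:

```python
import math

def fuel_cost(p, a, b, c, e, f, p_min):
    """Fuel cost with valve-point loading: quadratic a + b*p + c*p^2 plus
    the nonsmooth ripple |e * sin(f * (p_min - p))|."""
    return a + b * p + c * p ** 2 + abs(e * math.sin(f * (p_min - p)))

# Illustrative generator coefficients (not from the paper).
a, b, c, e, f, p_min = 100.0, 2.0, 0.01, 50.0, 0.063, 100.0
cost_at_min = fuel_cost(p_min, a, b, c, e, f, p_min)  # ripple vanishes: 400.0
# at any other output the cost sits at or above the smooth quadratic
```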
|
1404.4960 | Agent Behavior Prediction and Its Generalization Analysis | cs.LG | Machine learning algorithms have been applied to predict agent behaviors in
real-world dynamic systems, such as advertiser behaviors in sponsored search
and worker behaviors in crowdsourcing. The behavior data in these systems are
generated by live agents: once the systems change due to the adoption of the
prediction models learnt from the behavior data, agents will observe and
respond to these changes by changing their own behaviors accordingly. As a
result, the behavior data will evolve and will not be identically and
independently distributed, posing great challenges to the theoretical analysis
on the machine learning algorithms for behavior prediction. To tackle this
challenge, in this paper, we propose to use Markov Chain in Random Environments
(MCRE) to describe the behavior data, and perform generalization analysis of
the machine learning algorithms on this basis. Since the one-step transition
probability matrix of MCRE depends on both previous states and the random
environment, conventional techniques for generalization analysis cannot be
directly applied. To address this issue, we propose a novel technique that
transforms the original MCRE into a higher-dimensional time-homogeneous Markov
chain. The new Markov chain involves more variables but is more regular, and
thus easier to deal with. We prove the convergence of the new Markov chain when
time approaches infinity. Then we prove a generalization bound for the machine
learning algorithms on the behavior data generated by the new Markov chain,
which depends on both the Markovian parameters and the covering number of the
function class compounded by the loss function for behavior prediction and the
behavior prediction model. To the best of our knowledge, this is the first work
that performs the generalization analysis on data generated by complex
processes in real-world dynamic systems.
|
1404.4963 | Functional dependencies with null markers | cs.DB | Functional dependencies are an integral part of database design. However,
they are only defined when we exclude null markers. Yet we commonly use null
markers in practice. To bridge this gap between theory and practice,
researchers have proposed definitions of functional dependencies over relations
with null markers. Though sound, these definitions lack some qualities that we
find desirable. For example, some fail to satisfy Armstrong's axioms---while
these axioms are part of the foundation of common database methodologies. We
propose a set of properties that any extension of functional dependencies over
relations with null markers should possess. We then propose two new extensions
having these properties. These extensions attempt to allow null markers where
they make sense to practitioners.
They both support Armstrong's axioms and provide realizable null markers: at
any time, some or all of the null markers can be replaced by actual values
without causing an anomaly. Our proposals may improve database designs.
|
1404.4975 | Joint Latency and Cost Optimization for Erasure-coded Data Center
Storage | cs.DC cs.IT math.IT math.OC | Modern distributed storage systems offer large capacity to satisfy the
exponentially increasing need for storage space. They often use erasure codes to
protect against disk and node failures to increase reliability, while trying to
meet the latency requirements of the applications and clients. This paper
provides an insightful upper bound on the average service delay of such
erasure-coded storage with arbitrary service time distribution and consisting
of multiple heterogeneous files. Not only does the result supersede known delay
bounds that only work for a single file or homogeneous files, it also enables a
novel problem of joint latency and storage cost minimization over three
dimensions: selecting the erasure code, placement of encoded chunks, and
optimizing scheduling policy. The problem is efficiently solved via the
computation of a sequence of convex approximations with provable convergence.
We further prototype our solution in an open-source, cloud storage deployment
over three geographically distributed data centers. Experimental results
validate our theoretical delay analysis and show significant latency reduction,
providing valuable insights into the proposed latency-cost tradeoff in
erasure-coded storage.
|
1404.4983 | Shiva++: An Enhanced Graph based Ontology Matcher | cs.AI | As the web grows and assimilates knowledge about different concepts and
domains, it is becoming very difficult for simple database-driven applications
to capture the data for a domain. Developers have therefore turned to
ontology-based systems, which can store large amounts of information, apply
reasoning, and produce timely information, thus facilitating effective
knowledge management. Though this approach has made our lives easier, it has
also given rise to another problem: two different ontologies assimilating the
same knowledge tend to use different terms for the same concepts. This creates
confusion among knowledge engineers and workers, as they do not know which
term is better than the other. We therefore need to merge ontologies covering
the same domain so that engineers can develop better applications over them.
This paper presents the development of one such matcher, which merges the
concepts available in two ontologies at two levels: 1) the string level and 2)
the semantic level, thus producing better merged ontologies. A graph matching
technique works at the core of the system. We have also evaluated the system
and compared its performance with that of its predecessor, which works only on
string matching; the current approach produces better results.
|
1404.4984 | Information Theoretic Analysis of Concurrent Information Transfer and
Power Gain | cs.IT math.IT | In this paper, we analyze the fundamental trade-off between information
transfer and power gain by means of an information-theoretic framework in
communications circuits. This analysis is of interest as many of today's
applications require that maximum information and maximum signal power are
extracted (or transferred) through the circuit at the same time for further
processing so that a compromise concerning the signal spectral shape as well as
the matching network has to be found. To this end, the optimization framework
is applied to a two-port circuit, which is used as an abstraction for a
broadband amplifier. Thereby, we characterize the involved Pareto bound by
considering different optimization problems. The first one aims at optimizing
the input power spectral density (PSD) as well as the source and load
admittances, whereas the second approach assumes the PSD to be fixed and
uniformly distributed within a fixed bandwidth and optimizes the source and
load admittances only. Moreover, we will show that additional matching networks
may help to improve the trade-off.
|
1404.4995 | A Generalized Cut-Set Bound for Deterministic Multi-Flow Networks and
its Applications | cs.IT math.IT | We present a new outer bound for the sum capacity of general multi-unicast
deterministic networks. Intuitively, this bound can be understood as applying
the cut-set bound to concatenated copies of the original network with a special
restriction on the allowed transmit signal distributions. We first study
applications to finite-field networks, where we obtain a general outer-bound
expression in terms of ranks of the transfer matrices. We then show that, even
though our outer bound is for deterministic networks, a recent result relating
the capacity of AWGN KxKxK networks and the capacity of a deterministic
counterpart allows us to establish an outer bound to the DoF of KxKxK wireless
networks with general connectivity. This bound is tight in the case of the
"adjacent-cell interference" topology, and yields graph-theoretic necessary and
sufficient conditions for K DoF to be achievable in general topologies.
|
1404.4997 | Tight bounds for learning a mixture of two gaussians | cs.LG cs.DS stat.ML | We consider the problem of identifying the parameters of an unknown mixture
of two arbitrary $d$-dimensional gaussians from a sequence of independent
random samples. Our main results are upper and lower bounds giving a
computationally efficient moment-based estimator with an optimal convergence
rate, thus resolving a problem introduced by Pearson (1894). Denoting by
$\sigma^2$ the variance of the unknown mixture, we prove that
$\Theta(\sigma^{12})$ samples are necessary and sufficient to estimate each
parameter up to constant additive error when $d=1.$ Our upper bound extends to
arbitrary dimension $d>1$ up to a (provably necessary) logarithmic loss in $d$
using a novel---yet simple---dimensionality reduction technique. We further
identify several interesting special cases where the sample complexity is
notably smaller than our optimal worst-case bound. For instance, if the means
of the two components are separated by $\Omega(\sigma)$ the sample complexity
reduces to $O(\sigma^2)$ and this is again optimal.
Our results also apply to learning each component of the mixture up to small
error in total variation distance, where our algorithm gives strong
improvements in sample complexity over previous work. We also extend our lower
bound to mixtures of $k$ Gaussians, showing that $\Omega(\sigma^{6k-2})$
samples are necessary to estimate each parameter up to constant additive error.
|
1404.5002 | A Geometric Distance Oracle for Large Real-World Graphs | cs.SI cs.DS | Many graph processing algorithms require determination of shortest-path
distances between arbitrary numbers of node pairs. Since computation of exact
distances between all node-pairs of a large graph, e.g., 10M nodes and up, is
prohibitively expensive both in computational time and storage space, distance
approximation is often used in place of exact computation. In this paper, we
present a novel and scalable distance oracle that leverages the hyperbolic core
of real-world large graphs for fast and scalable distance approximation. We
show empirically that the proposed oracle significantly outperforms prior
oracles on a random set of test cases drawn from public domain graph libraries.
There are two sets of prior work against which we benchmark our approach. The
first set, which often outperforms other oracles, employs embedding of the
graph into low dimensional Euclidean spaces with carefully constructed
hyperbolic distances, but provides no guarantees on the distance estimation
error. The second set leverages Gromov-type tree contraction of the graph with
the additive error guaranteed not to exceed $2\delta\log{n}$, where $\delta$ is
the hyperbolic constant of the graph. We show that our proposed oracle 1) is
significantly faster than those oracles that use hyperbolic embedding (first
set) with similar approximation error and, perhaps surprisingly, 2) exhibits
substantially lower average estimation error compared to Gromov-like tree
contractions (second set). We substantiate our claims through numerical
computations on a collection of a dozen real world networks and synthetic test
cases from multiple domains, ranging in size from tens of thousands to tens of
millions of nodes.
|
1404.5007 | Secure Degrees of Freedom of the MIMO Multiple Access Channel with
Multiple unknown Eavesdroppers | cs.IT math.IT | We investigate the secure degrees of freedom (SDoF) of a two-transmitter
Gaussian multiple access channel with multiple antennas at the transmitters
and the legitimate receiver, in the presence of an unknown number of
eavesdroppers, each with a number of antennas less than or equal to a known
value $N_E$. The channel matrices between the legitimate transmitters and the
receiver are available everywhere, while the legitimate pair does not know the
eavesdroppers' channel matrices. We provide the exact sum SDoF for the
considered system. A new comprehensive upper bound is deduced and a new
achievable scheme based on jamming is exploited. We prove that cooperative
jamming is SDoF optimal even without instantaneous eavesdropper CSI available
at the transmitters.
|
1404.5009 | Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference | cs.CV cs.LG cs.NA | We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF
inference problems. The core of our method is a very efficient bounding
procedure, which combines scalable semidefinite programming (SDP) and a
cutting-plane method for seeking violated constraints. In order to further
speed up the computation, several strategies have been exploited, including
model reduction, warm start and removal of inactive constraints.
We analyze the performance of the proposed method under different settings,
and demonstrate that our method either outperforms or performs on par with
state-of-the-art approaches. Especially when the connectivities are dense or
when the relative magnitudes of the unary costs are low, we achieve the best
reported results. Experiments show that the proposed algorithm achieves better
approximation than the state-of-the-art methods within a variety of time
budgets on challenging non-submodular MAP-MRF inference problems.
|
1404.5012 | On the MacWilliams Identity for Classical and Quantum Convolutional
Codes | cs.IT math.IT quant-ph | The weight generating functions associated with convolutional codes (CCs) are
based on state space realizations or the weight adjacency matrices (WAMs). The
MacWilliams identity for CCs on the WAMs was first established by
Gluesing-Luerssen and Schneider in the case of minimal encoders, and
generalized by Forney. We consider this problem from the viewpoint of
constraint codes and
obtain a simple and direct proof of this MacWilliams identity in the case of
minimal encoders. For our purpose, we choose a different representation for the
exact weight generating function (EWGF) of a block code, by defining it as a
linear combination of orthonormal vectors in Dirac bra-ket notation. This
representation provides great flexibility so that general split weight
generating functions and their MacWilliams identities can be easily obtained
from the MacWilliams identity for EWGFs. As a result, we also obtain the
MacWilliams identity for the input-parity weight adjacency matrices of a
systematic convolutional code and its dual. Finally, paralleling the
development of the classical case, we establish the MacWilliams identity for
quantum convolutional codes.
|
1404.5021 | Local Rank Modulation for Flash Memories II | cs.IT math.IT | The local rank modulation scheme was suggested recently for representing
information in flash memories in order to overcome drawbacks of rank
modulation. For $0 < s\leq t\leq n$ where $s$ divides $n$, an $(s,t,n)$-LRM
scheme is a local rank modulation scheme in which the $n$ cells are viewed
cyclically through a sliding window of size $t$, resulting in a sequence of
small permutations that requires fewer comparisons and fewer distinct values.
The gap between two such windows equals $s$. In this work, encoding,
decoding, and asymptotic enumeration of the $(1,3,n)$-LRM scheme are studied.
The suggested techniques have generalizations for $(1,t,n)$-LRM, $t > 3$, but
the proofs become more complicated. The enumeration problem is also presented
as a purely combinatorial problem. Finally, we prove the
conjecture that the size of a constant weight $(1,2,n)$-LRM Gray code with
weight two is at most $2n$.
|
1404.5037 | Multiresolution analysis on compact Riemannian manifolds | cs.IT math.IT | In the chapter "Multiresolution Analysis on Compact Riemannian Manifolds"
Isaac Pesenson describes multiscale analysis, sampling, interpolation and
approximation of functions defined on manifolds. His main achievements are
the construction on manifolds of bandlimited and space-localized frames with
the Parseval property, and the construction of variational splines on
manifolds. Such frames and splines enable multiscale analysis on arbitrary
compact manifolds, and they have already found a number of important
applications (statistics, CMB, crystallography) related to such manifolds as
the two-dimensional sphere and the group of its rotations.
|
1404.5043 | The predictable degree property, column reducedness, and minimality in
multidimensional convolutional coding | cs.IT math.IT | Higher-dimensional analogs of the predictable degree property and column
reducedness are defined, and it is proved that the two properties are
equivalent. It is shown that every multidimensional convolutional code has
what is called a minimal reduced polynomial resolution. It is uniquely
determined (up to isomorphism) and leads to a number of important integer
invariants of the code that generalize the classical Forney indices.
|
1404.5055 | Correlated Jamming in a Joint Source Channel Communication System | cs.IT math.IT | We study correlated jamming in joint source-channel communication systems. An
i.i.d. source is to be communicated over a memoryless channel in the presence
of a correlated jammer with non-causal knowledge of user transmission. This
user-jammer interaction is modeled as a zero sum game. A set of conditions on
the source and the channel is provided for the existence of a Nash equilibrium
for this game, where the user strategy is uncoded transmission and the jammer
strategy is i.i.d. jamming. This generalizes a well-known example of uncoded
communication of a Gaussian source over a Gaussian channel with additive
jamming. Another example, of a binary symmetric source over a binary symmetric
channel with jamming, is provided as a validation of this result.
|
1404.5060 | Writing on a Dirty Paper in the presence of Jamming | cs.IT math.IT | In this paper, the problem of writing on a dirty paper in the presence of
jamming is examined. We consider an AWGN channel with an additive white
Gaussian state and an additive adversarial jammer. The state is assumed to be
known non-causally to the encoder and the jammer but not to the decoder. The
capacity of the channel in the presence of a jammer is determined. We prove
the surprising result that this capacity is equal to the capacity of a relaxed
version of the problem, where the state is also known non-causally to the
decoder.
|
1404.5062 | Rapid prototyping for sling design optimization | cs.CE | This paper deals with the combination of two modern engineering methods in
order to optimise the shape of a representative casting product. The product
analysed is a sling, which is used to attach a pulling rope in timber
transportation. The first step was 3D modelling and static stress/strain
analysis using the CAD/CAE software NX4. The sling shape optimization was
performed using the traction method, by means of the software Optishape-TS.
The FEA software FEMAP was used to define constraints for the shape
optimization. The mould pattern with the optimized 3D shape was then prepared
using the Fused Deposition Modelling (FDM) rapid prototyping method. The sling
mass decreased by 20%, while a significantly better stress distribution was
achieved, with a maximum stress 3.5 times less than the initial value. Future
research should use 3D scanning technology to provide a more accurate 3D model
of the initial part. The results of this research can be used by toolmakers to
engage FEA/RP technology to design and manufacture lighter products with
acceptable stress distribution.
|
1404.5065 | Multi-Target Regression via Random Linear Target Combinations | cs.LG | Multi-target regression is concerned with the simultaneous prediction of
multiple continuous target variables based on the same set of input variables.
It arises in several interesting industrial and environmental application
domains, such as ecological modelling and energy forecasting. This paper
presents an ensemble method for multi-target regression that constructs new
target variables via random linear combinations of existing targets. We discuss
the connection of our approach with multi-label classification algorithms, in
particular RA$k$EL, which originally inspired this work, and a family of recent
multi-label classification algorithms that involve output coding. Experimental
results on 12 multi-target datasets show that it performs significantly better
than a strong baseline that learns a single model for each target using
gradient boosting, and compares favourably to the state-of-the-art
multi-objective random forest approach. The experiments further show
that our approach improves more when stronger unconditional dependencies exist
among the targets.
|
1404.5068 | Directional Cell Discovery in Millimeter Wave Cellular Networks | cs.IT math.IT | The acute disparity between increasing bandwidth demand and available
spectrum has brought millimeter wave (mmW) bands to the forefront of candidate
solutions for the next-generation cellular networks. Highly directional
transmissions are essential for cellular communication in these frequencies to
compensate for high isotropic path loss. This reliance on directional
beamforming, however, complicates initial cell search since the mobile and base
station must jointly search over a potentially large angular directional space
to locate a suitable path to initiate communication. To address this problem,
this paper proposes a directional cell discovery procedure where base stations
periodically transmit synchronization signals, potentially in time-varying
random directions, to scan the angular space. Detectors for these signals are
derived based on a Generalized Likelihood Ratio Test (GLRT) under various
signal and receiver assumptions. The detectors are then simulated under
realistic design parameters and channels based on actual experimental
measurements at 28~GHz in New York City. The study reveals two key findings:
(i) digital beamforming can significantly outperform analog beamforming even
when the digital beamforming uses very low quantization to compensate for the
additional power requirements; and (ii) omni-directional transmission of the
synchronization signals from the base station generally outperforms random
directional scanning.
|
1404.5078 | TurKPF: TurKontrol as a Particle Filter | cs.AI | TurKontrol, an algorithm presented in (Dai et al. 2010), uses a POMDP to
model and control an iterative workflow for crowdsourced work. Here, TurKontrol
is re-implemented as "TurKPF," which uses a particle filter to reduce
computation time and memory usage. Most importantly, in our experimental
environment with default parameter settings, the action is chosen nearly
instantaneously. Through a series of experiments we see that TurKPF and
TurKontrol perform similarly.
|
1404.5083 | Transmit Antenna Selection in Underlay Cognitive Radio Environment | cs.IT math.IT | Cognitive radio (CR) technology addresses the problem of spectrum
under-utilization. In underlay CR mode, the secondary users are allowed to
communicate provided that their transmission is not detrimental to primary user
communication. Transmit antenna selection is one of the low-complexity methods
to increase the capacity of wireless communication systems. In this article, we
propose and analyze the performance benefit of a transmit antenna selection
scheme for underlay secondary system that ensures the instantaneous
interference caused by the secondary transmitter to the primary receiver is
below a predetermined level. Closed-form expressions of the outage probability,
amount of fading, and ergodic capacity for the secondary network are derived.
Monte Carlo simulations are also carried out to confirm various mathematical
results presented in this article.
|
1404.5121 | SleepScale: Runtime Joint Speed Scaling and Sleep States Management for
Power Efficient Data Centers | cs.PF cs.SY | Power consumption in data centers has been growing significantly in recent
years. To reduce power, servers are being equipped with increasingly
sophisticated power management mechanisms. Different mechanisms offer
dramatically different trade-offs between power savings and performance
penalties. Considering the complexity, variety, and temporally varying nature
of the applications hosted in a typical data center, intelligently determining
which power management policy to use and when is a complicated task.
In this paper we analyze a system model featuring both performance scaling
and low-power states. We reveal the interplay between performance scaling and
low-power states via intensive simulation and analytic verification. Based on
the observations, we present SleepScale, a runtime power management tool
designed to efficiently exploit existing power control mechanisms. At run time,
SleepScale characterizes power consumption and quality-of-service (QoS) for
each low-power state and frequency setting, and selects the best policy for a
given QoS constraint. We evaluate SleepScale using workload traces from data
centers and achieve significant power savings relative to conventional power
management strategies.
|
1404.5122 | Spatiotemporal Sparse Bayesian Learning with Applications to Compressed
Sensing of Multichannel Physiological Signals | cs.IT cs.LG math.IT stat.ML | Energy consumption is an important issue in continuous wireless
telemonitoring of physiological signals. Compressed sensing (CS) is a promising
framework to address it, due to its energy-efficient data compression
procedure. However, most CS algorithms have difficulty in data recovery due to
the non-sparse characteristic of many physiological signals. Block sparse
Bayesian learning (BSBL) is an effective approach to recover such signals with
satisfactory recovery quality. However, it is time-consuming in recovering
multichannel signals, since its computational load almost linearly increases
with the number of channels.
This work proposes a spatiotemporal sparse Bayesian learning algorithm to
recover multichannel signals simultaneously. It not only exploits temporal
correlation within each channel signal, but also exploits inter-channel
correlation among different channel signals. Furthermore, its computational
load is not significantly affected by the number of channels. The proposed
algorithm was applied to brain computer interface (BCI) and EEG-based driver's
drowsiness estimation. Results showed that the algorithm had both better
recovery performance and much higher speed than BSBL. Particularly, the
proposed algorithm ensured that the BCI classification and the drowsiness
estimation had little degradation even when data were compressed by 80%, making
it very suitable for continuous wireless telemonitoring of multichannel
signals.
|
1404.5144 | Influence of the learning method in the performance of feedforward
neural networks when the activity of neurons is modified | cs.NE | A method that allows us to give a different treatment to any neuron inside
feedforward neural networks is presented. The algorithm has been implemented
with two very different learning methods: a standard back-propagation (BP)
procedure and an evolutionary algorithm (EA). First, we have demonstrated that
the EA training method converges faster and gives more accurate results than BP.
Then we have made a full analysis of the effects of turning off different
combinations of neurons after the training phase. We demonstrate that EA is
much more robust than BP for all the cases under study. Even in the case when
two hidden neurons are lost, EA training is still able to give good average
results. This difference implies that we must be very careful when pruning or
redundancy effects are being studied since the network performance when losing
neurons strongly depends on the training method. Moreover, the influence of the
individual inputs will also depend on the training algorithm. Since EA keeps a
good classification performance when units are lost, this method could be a
good way to simulate biological learning systems since they must be robust
against deficient neuron performance. Although biological systems are much more
complex than the simulations shown in this article, we propose that a smart
training strategy such as the one shown here could be considered a first
protection against the loss of a certain number of neurons.
|
1404.5165 | GP-Localize: Persistent Mobile Robot Localization using Online Sparse
Gaussian Process Observation Model | cs.RO cs.LG stat.ML | Central to robot exploration and mapping is the task of persistent
localization in environmental fields characterized by spatially correlated
measurements. This paper presents a Gaussian process localization (GP-Localize)
algorithm that, in contrast to existing works, can exploit the spatially
correlated field measurements taken during a robot's exploration (instead of
relying on prior training data) for efficiently and scalably learning the GP
observation model online through our proposed novel online sparse GP. As a
result, GP-Localize is capable of achieving constant time and memory (i.e.,
independent of the size of the data) per filtering step, which demonstrates the
practical feasibility of using GPs for persistent robot localization and
autonomy. Empirical evaluation via simulated experiments with real-world
datasets and a real robot experiment shows that GP-Localize outperforms
existing GP localization algorithms.
|
1404.5173 | Compression for Quadratic Similarity Queries: Finite Blocklength and
Practical Schemes | cs.IT math.IT | We study the problem of compression for the purpose of similarity
identification, where similarity is measured by the mean square Euclidean
distance between vectors. While the asymptotic fundamental limits of the
problem - the minimal compression rate and the error exponent - were found in a
previous work, in this paper we focus on the nonasymptotic domain and on
practical, implementable schemes. We first present a finite blocklength
achievability bound based on shape-gain quantization: The gain (amplitude) of
the vector is compressed via scalar quantization and the shape (the projection
on the unit sphere) is quantized using a spherical code. The results are
numerically evaluated and they converge to the asymptotic values as predicted
by the error exponent. We then give a nonasymptotic lower bound on the
performance of any compression scheme, and compare to the upper (achievability)
bound. For a practical implementation of such a scheme, we use wrapped
spherical codes, studied by Hamkins and Zeger, and use the Leech lattice as an
example for an underlying lattice. As a side result, we obtain a bound on the
covering angle of any wrapped spherical code, as a function of the covering
radius of the underlying lattice.
|
1404.5187 | Discrimination on the Grassmann Manifold: Fundamental Limits of Subspace
Classifiers | cs.IT math.IT | We present fundamental limits on the reliable classification of linear and
affine subspaces from noisy, linear features. Drawing an analogy between
discrimination among subspaces and communication over vector wireless channels,
we propose two Shannon-inspired measures to characterize asymptotic classifier
performance. First, we define the classification capacity, which characterizes
necessary and sufficient conditions for the misclassification probability to
vanish as the signal dimension, the number of features, and the number of
subspaces to be discerned all approach infinity. Second, we define the
diversity-discrimination tradeoff which, by analogy with the
diversity-multiplexing tradeoff of fading vector channels, characterizes
relationships between the number of discernible subspaces and the
misclassification probability as the noise power approaches zero. We derive
upper and lower bounds on these measures which are tight in many regimes.
Numerical results, including a face recognition application, validate the
results in practice.
|
1404.5190 | Sparse Approximation, List Decoding, and Uncertainty Principles | cs.IT math.IT | We consider list versions of sparse approximation problems, where unlike the
existing results in sparse approximation that consider situations with unique
solutions, we are interested in multiple solutions. We introduce these problems
and present the first combinatorial results on the output list size. These
generalize and enhance some of the existing results on threshold phenomenon and
uncertainty principles in sparse approximations. Our definitions and results
are inspired by similar results in list decoding. We also present lower bound
examples that bolster our results and show they are of the appropriate size.
|
1404.5214 | Graph Kernels via Functional Embedding | cs.LG cs.AI stat.ML | We propose a representation of a graph as a functional object derived from
the power iteration of the underlying adjacency matrix. The proposed functional
representation is a graph invariant, i.e., the functional remains unchanged
under any reordering of the vertices. This property eliminates the difficulty
of handling exponentially many isomorphic forms. The Bhattacharyya kernel
constructed between these functionals significantly outperforms the
state-of-the-art graph kernels on 3 out of the 4 standard benchmark graph
classification datasets, demonstrating the superiority of our approach. The
proposed methodology is simple and runs in time linear in the number of edges,
which makes our kernel more efficient and scalable compared to many widely
adopted graph kernels with running time cubic in the number of vertices.
|
1404.5236 | Sum-of-squares proofs and the quest toward optimal algorithms | cs.DS cs.CC cs.LG math.OC | In order to obtain the best-known guarantees, algorithms are traditionally
tailored to the particular problem we want to solve. Two recent developments,
the Unique Games Conjecture (UGC) and the Sum-of-Squares (SOS) method,
surprisingly suggest that this tailoring is not necessary and that a single
efficient algorithm could achieve best possible guarantees for a wide range of
different problems.
The Unique Games Conjecture (UGC) is a tantalizing conjecture in
computational complexity which, if true, would shed light on the complexity of
a great many problems. In particular this conjecture predicts that a single
concrete algorithm provides optimal guarantees among all efficient algorithms
for a large class of computational problems.
The Sum-of-Squares (SOS) method is a general approach for solving systems of
polynomial constraints. This approach is studied in several scientific
disciplines, including real algebraic geometry, proof complexity, control
theory, and mathematical programming, and has found applications in fields as
diverse as quantum information theory, formal verification, game theory and
many others.
We survey some connections that were recently uncovered between the Unique
Games Conjecture and the Sum-of-Squares method. In particular, we discuss new
tools to rigorously bound the running time of the SOS method for obtaining
approximate solutions to hard optimization problems, and how these tools give
the potential for the sum-of-squares method to provide new guarantees for many
problems of interest, and possibly to even refute the UGC.
|
1404.5239 | InfluenceTracker: Rating the impact of a Twitter account | cs.SI physics.soc-ph | We describe a methodology for rating the influence of a Twitter account in
this popular microblogging service. We then evaluate it over real accounts,
under the belief that influence is not only a matter of quantity (amount of
followers), but also a mixture of quality measures that reflect interaction,
awareness, and visibility in the social sphere. The authors of this paper have
created InfluenceTracker, a publicly available website where anyone can rate
and compare the recent activity of any Twitter account.
|
1404.5254 | Simultaneous Source for non-uniform data variance and missing data | cs.CE | The use of simultaneous sources in geophysical inverse problems has
revolutionized the ability to deal with large scale data sets that are obtained
from multiple-source experiments. However, the technique breaks down when the
data have non-uniform standard deviations or when some data are missing. In this
paper we develop, study, and compare a number of techniques that make it possible
to retain the advantages of the simultaneous source framework in these cases. We show that
the inverse problem can still be solved efficiently by using these new
techniques. We demonstrate our new approaches on the Direct Current Resistivity
inverse problem.
|
1404.5278 | The Frobenius anatomy of word meanings I: subject and object relative
pronouns | cs.CL | This paper develops a compositional vector-based semantics of subject and
object relative pronouns within a categorical framework. Frobenius algebras are
used to formalise the operations required to model the semantics of relative
pronouns, including passing information between the relative clause and the
modified noun phrase, as well as copying, combining, and discarding parts of
the relative clause. We develop two instantiations of the abstract semantics,
one based on a truth-theoretic approach and one based on corpus statistics.
|
1404.5287 | Quantification of entanglement entropy in helium by the Schmidt-Slater
decomposition method | quant-ph cs.IT math.IT physics.atom-ph physics.chem-ph | In this work, we present an investigation on the spatial entanglement
entropies in the helium atom by using highly correlated Hylleraas functions to
represent the S-wave states. Singlet-spin 1sns 1Se states (with n = 1 to 6) and
triplet-spin 1sns 3Se states (with n = 2 to 6) are investigated. As measures
of the spatial entanglement, the von Neumann entropy and linear entropy are
calculated. Furthermore, we apply the Schmidt-Slater decomposition method on
the two-electron wave functions, and obtain eigenvalues of the one-particle
reduced density matrix, from which the linear entropy and von Neumann entropy
can be determined.
|
1404.5322 | CitNetExplorer: A new software tool for analyzing and visualizing
citation networks | cs.DL cs.SI physics.soc-ph | We present CitNetExplorer, a new software tool for analyzing and visualizing
citation networks of scientific publications. CitNetExplorer can for instance
be used to study the development of a research field, to delineate the
literature on a research topic, and to support literature reviewing. We first
introduce the main concepts that need to be understood when working with
CitNetExplorer. We then demonstrate CitNetExplorer by using the tool to analyze
the scientometric literature and the literature on community detection in
networks. Finally, we discuss some technical details on the construction,
visualization, and analysis of citation networks in CitNetExplorer.
|
1404.5344 | A higher-order MRF based variational model for multiplicative noise
reduction | cs.CV | The Fields of Experts (FoE) image prior model, a filter-based higher-order
Markov Random Fields (MRF) model, has been shown to be effective for many image
restoration problems. Motivated by the successes of FoE-based approaches, in
this letter, we propose a novel variational model for multiplicative noise
reduction based on the FoE image prior model. The resulting model corresponds to
a non-convex minimization problem, which can be solved by a recently published
non-convex optimization algorithm. Experimental results based on synthetic
speckle noise and real synthetic aperture radar (SAR) images suggest that the
performance of our proposed method is on par with the best published
despeckling algorithm. Moreover, our proposed model has the additional
advantage that inference is extremely efficient: our GPU-based implementation
takes less than 1 s to produce state-of-the-art despeckling performance.
|
1404.5351 | Fast Approximate Matching of Cell-Phone Videos for Robust Background
Subtraction | cs.CV | We identify a novel instance of the background subtraction problem that
focuses on extracting near-field foreground objects captured using handheld
cameras. Given two user-generated videos of a scene, one with and the other
without the foreground object(s), our goal is to efficiently generate an output
video with only the foreground object(s) present in it. We cast this challenge
as a spatio-temporal frame matching problem, and propose an efficient solution
for it that exploits the temporal smoothness of the video sequences. We present
theoretical analyses for the error bounds of our approach, and validate our
findings using a detailed set of simulation experiments. Finally, we present
the results of our approach tested on multiple real videos captured using
handheld cameras, and compare them to several alternate foreground extraction
approaches.
|
1404.5356 | Finding safe strategies for competitive diffusion on trees | cs.DM cs.GR cs.SI | We study the two-player safe game of Competitive Diffusion, a game-theoretic
model for the diffusion of technologies or influence through a social network.
In game theory, safe strategies are mixed strategies that guarantee a minimal
expected gain against unknown strategies of the opponents. Safe strategies for
competitive diffusion lead to maximum spread of influence in the presence of
uncertainty about the other players. We study the safe game on two specific
classes of trees, spiders and complete trees, and give tight bounds on the
minimal expected gain. We then use these results to give an algorithm which
suggests a safe strategy for a player on any tree. We test this algorithm on
randomly generated trees, and show that it finds strategies that are close to
optimal.
|
1404.5357 | Morphological Analysis of the Bishnupriya Manipuri Language using Finite
State Transducers | cs.CL | In this work we present a morphological analysis of the Bishnupriya Manipuri
language, an Indo-Aryan language spoken in northeastern India. As of now,
there is no computational work available for the language. Finite-state
morphology is one of the most successful approaches, having been applied to a
wide variety of languages over the years. We therefore adopt the finite-state
approach to analyse the morphology of the Bishnupriya Manipuri language.
|
1404.5367 | Lexicon Infused Phrase Embeddings for Named Entity Resolution | cs.CL | Most state-of-the-art approaches for named-entity recognition (NER) use
semi-supervised information in the form of word clusters and lexicons. Recently,
neural network-based language models have been explored, as they generate, as a
byproduct, highly informative vector representations for words, known as word
embeddings. In this paper we present two contributions: a new form of learning
word embeddings that can leverage information from relevant lexicons to improve
the representations, and the first system to use neural word embeddings to
achieve state-of-the-art results on named-entity recognition in both CoNLL and
Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for
CoNLL 2003, significantly better than any previous system trained on public
data, and matching a system employing massive private industrial query-log
data.
|
1404.5372 | Linking Geographic Vocabularies through WordNet | cs.IR cs.CL | The linked open data (LOD) paradigm has emerged as a promising approach to
structuring and sharing geospatial information. One of the major obstacles to
this vision lies in the difficulties found in the automatic integration between
heterogeneous vocabularies and ontologies that provide the semantic backbone
of the growing constellation of open geo-knowledge bases. In this article, we
show how to utilize WordNet as a semantic hub to increase the integration of
LOD. With this purpose in mind, we devise Voc2WordNet, an unsupervised mapping
technique between a given vocabulary and WordNet, combining intensional and
extensional aspects of the geographic terms. Voc2WordNet is evaluated against a
sample of human-generated alignments with the OpenStreetMap (OSM) Semantic
Network, a crowdsourced geospatial resource, and the GeoNames ontology, the
vocabulary of a large digital gazetteer. These empirical results indicate that
the approach can obtain high precision and recall.
|
1404.5412 | Analytical Assessment of Coordinated Overlay D2D Communications | cs.IT math.IT | In this paper, analytical assessment of overlay-inband device-to-device (D2D)
communications is investigated, under cellular-network-assisted (coordinated)
scheduling. To this end, a simple scheduling scheme is assumed that takes into
account only local (per cell) topological information of the D2D links.
Stochastic geometry tools are utilized to obtain analytical
expressions for the density of interferers as well as for the distribution of
the D2D link signal-to-interference ratio. The accuracy of the analytical
results is validated by comparison with simulations. In addition, the analytical
expressions are employed for efficiently optimizing the parameters of a
cellular system with overlay D2D communications. It is shown that coordinated
scheduling of D2D transmissions enhances system performance in terms of both
the average user rate and the maximum allowable D2D link distance.
|
1404.5417 | Attractor Metadynamics in Adapting Neural Networks | q-bio.NC cond-mat.dis-nn cs.NE | Slow adaption processes, like synaptic and intrinsic plasticity, abound in
the brain and shape the landscape for the neural dynamics occurring on
substantially faster timescales. At any given time the network is characterized
by a set of internal parameters, which are adapting continuously, albeit
slowly. This set of parameters defines the number and the location of the
respective adiabatic attractors. The slow evolution of network parameters hence
induces an evolving attractor landscape, a process which we term attractor
metadynamics. We study the nature of the metadynamics of the attractor
landscape for several continuous-time autonomous model networks. We find both
first- and second-order changes in the location of adiabatic attractors and
argue that the study of the continuously evolving attractor landscape
constitutes a powerful tool for understanding the overall development of the
neural dynamics.
|
1404.5421 | Concurrent bandits and cognitive radio networks | cs.LG cs.MA | We consider the problem of multiple users targeting the arms of a single
multi-armed stochastic bandit. The motivation for this problem comes from
cognitive radio networks, where selfish users need to coexist without any side
communication between them, implicit cooperation or common control. Even the
number of users may be unknown and can vary as users join or leave the network.
We propose an algorithm that combines an $\epsilon$-greedy learning rule with a
collision avoidance mechanism. We analyze its regret with respect to the
system-wide optimum and show that sub-linear regret can be obtained in this
setting. Experiments show dramatic improvement compared to other algorithms for
this setting.
|
1404.5433 | Equilibrium Refinement through Negotiation in Binary Voting | cs.GT cs.MA | We study voting games on binary issues, where voters hold an objective over
the outcome of the collective decision and are allowed, before the vote takes
place, to negotiate their voting strategy with the other participants. We
analyse the voters' rational behaviour in the resulting two-phase game, showing
under what conditions undesirable equilibria can be removed and desirable ones
sustained as a consequence of the pre-vote phase.
|
1404.5454 | Stochastic Privacy | cs.AI | Online services such as web search and e-commerce applications typically rely
on the collection of data about users, including details of their activities on
the web. Such personal data is used to enhance the quality of service via
personalization of content and to maximize revenues via better targeting of
advertisements and deeper engagement of users on sites. To date, service
providers have largely followed the approach of either requiring or requesting
consent for opting-in to share their data. Users may be willing to share
private information in return for better quality of service or for incentives,
or in return for assurances about the nature and extent of the logging of data.
We introduce \emph{stochastic privacy}, a new approach to privacy centering on
a simple concept: A guarantee is provided to users about the upper-bound on the
probability that their personal data will be used. Such a probability, which we
refer to as \emph{privacy risk}, can be assessed by users as a preference or
communicated as a policy by a service provider. Service providers can work to
personalize and to optimize revenues in accordance with preferences about
privacy risk. We present procedures, proofs, and an overall system for
maximizing the quality of services, while respecting bounds on allowable or
communicated privacy risk. We demonstrate the methodology with a case study and
evaluation of the procedures applied to web search personalization. We show how
we can achieve near-optimal utility of accessing information with provable
guarantees on the probability of sharing data.
|