| id | title | categories | abstract |
|---|---|---|---|
1308.6300 | Computing Lexical Contrast | cs.CL | Knowing the degree of semantic contrast between words has widespread
application in natural language processing, including machine translation,
information retrieval, and dialogue systems. Manually-created lexicons focus on
opposites, such as {\rm hot} and {\rm cold}. Opposites are of many kinds, such
as antipodals, complementaries, and gradables. However, existing lexicons often
do not classify opposites into the different kinds. They also do not explicitly
list word pairs that are not opposites but yet have some degree of contrast in
meaning, such as {\rm warm} and {\rm cold} or {\rm tropical} and {\rm
freezing}. We propose an automatic method to identify contrasting word pairs
that is based on the hypothesis that if a pair of words, $A$ and $B$, are
contrasting, then there is a pair of opposites, $C$ and $D$, such that $A$ and
$C$ are strongly related and $B$ and $D$ are strongly related. (For example,
there exists the pair of opposites {\rm hot} and {\rm cold} such that {\rm
tropical} is related to {\rm hot,} and {\rm freezing} is related to {\rm
cold}.) We will call this the contrast hypothesis. We begin with a large
crowdsourcing experiment to determine the amount of human agreement on the
concept of oppositeness and its different kinds. In the process, we flesh out
key features of different kinds of opposites. We then present an automatic and
empirical measure of lexical contrast that relies on the contrast hypothesis,
corpus statistics, and the structure of a {\it Roget}-like thesaurus. We show
that the proposed measure of lexical contrast obtains high precision and large
coverage, outperforming existing methods.
|
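The scoring rule behind the contrast hypothesis above can be sketched in a few lines: a pair $(A, B)$ is scored by the best-supported seed pair of opposites $(C, D)$. All seed opposites and relatedness scores below are toy values invented for illustration, not the paper's crowdsourced or thesaurus-derived data.

```python
# A minimal sketch of the "contrast hypothesis": words A and B are scored as
# contrasting if some seed pair of opposites (C, D) exists with A strongly
# related to C and B strongly related to D. Seed pairs and relatedness scores
# here are toy illustrations, not the paper's thesaurus-based data.

OPPOSITES = [("hot", "cold"), ("big", "small")]

RELATEDNESS = {  # hypothetical corpus-derived relatedness scores in [0, 1]
    ("tropical", "hot"): 0.9, ("freezing", "cold"): 0.85,
    ("warm", "hot"): 0.8, ("huge", "big"): 0.9, ("tiny", "small"): 0.88,
}

def related(a, b):
    """Symmetric lookup with an identity shortcut."""
    if a == b:
        return 1.0
    return max(RELATEDNESS.get((a, b), 0.0), RELATEDNESS.get((b, a), 0.0))

def contrast_score(a, b):
    """Best support over all seed opposite pairs, in either orientation."""
    best = 0.0
    for c, d in OPPOSITES:
        best = max(best,
                   min(related(a, c), related(b, d)),
                   min(related(a, d), related(b, c)))
    return best

print(contrast_score("tropical", "freezing"))  # 0.85: supported by (hot, cold)
print(contrast_score("tropical", "huge"))      # 0.0: no supporting opposites
```

In the paper the relatedness signal comes from corpus statistics and the structure of a Roget-like thesaurus; the dictionary above merely stands in for that component.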
1308.6309 | Text recognition in both ancient and cartographic documents | cs.CV | This paper deals with the recognition and matching of text in both
cartographic maps and ancient documents. The purpose of this work is to find
similar text regions based on statistical and global features. A normalization
phase is performed first in order to properly categorize the same quantity of
information. A wordspotting phase follows, combining local and global features.
We run experiments combining the different feature-extraction techniques in
order to obtain better results in the recognition phase. We applied
fontspotting to both ancient and cartographic documents. We also applied
wordspotting, adopting a new technique that compares images of individual
characters rather than entire word images. We report the precision and recall
values obtained with three methods for this new character-level wordspotting
approach.
|
1308.6311 | Categorizing ancient documents | cs.CV | The analysis of historical documents is still a topical issue given the
importance of information that can be extracted and also the importance given
by the institutions to preserve their heritage. Our main idea for
characterizing the content of ancient document images is, after attempting to
clean each image, to segment text blocks from it and to search for similar
blocks either within the same image or across the entire image database. Most
approaches to offline handwriting recognition proceed by segmenting words into
smaller pieces (usually characters) which are recognized separately.
Recognition of a word then requires the recognition of all the characters (OCR)
that compose it. Our work focuses mainly on the characterization of classes in
images of old documents. We use the SOM toolbox for finding classes in
documents. We also applied fractal dimensions and points of interest to
categorize and match ancient documents.
|
1308.6316 | Retroactive Anti-Jamming for MISO Broadcast Channels | cs.IT math.IT | Jamming attacks can significantly impact the performance of wireless
communication systems. In addition to reducing the capacity, such attacks may
lead to insurmountable overhead in terms of re-transmissions and increased
power consumption. In this paper, we consider the multiple-input single-output
(MISO) broadcast channel (BC) in the presence of a jamming attack in which a
subset of the receivers can be jammed at any given time. Further,
countermeasures for mitigating the effects of such jamming attacks are
presented. The effectiveness of these anti-jamming countermeasures is
quantified in terms of the degrees-of-freedom (DoF) of the MISO BC under
various assumptions regarding the availability of the channel state information
(CSIT) and the jammer state information at the transmitter (JSIT). The main
contribution of this paper is the characterization of the DoF region of the two
user MISO BC under various assumptions on the availability of CSIT and JSIT.
Partial extensions to the multi-user broadcast channels are also presented.
|
1308.6319 | A proposition of a robust system for historical document images
indexation | cs.CV | Characterizing noisy or ancient documents remains a challenging problem.
Many techniques have been developed to perform feature extraction and image
indexation for such documents. Global approaches are in general less robust and
exact than local approaches. That is why we propose in this paper a hybrid
system based on a global approach (fractal dimension) and a local one based on
the SIFT descriptor. The Scale Invariant Feature Transform suits our
application well, since it is rotation invariant and relatively robust to
changing illumination. In the first step, the fractal dimension is computed for
each image in order to eliminate images whose features are distant from the
characteristics of the query image. Next, SIFT is applied to determine which
images best match the query. Moreover, the average matching time of the hybrid
approach is better than that of the fractal dimension or the SIFT descriptor
used alone.
|
1308.6324 | Prediction of breast cancer recurrence using Classification Restricted
Boltzmann Machine with Dropping | cs.LG | In this paper, we apply Classification Restricted Boltzmann Machine
(ClassRBM) to the problem of predicting breast cancer recurrence. According to
the Polish National Cancer Registry, in 2010 alone breast cancer accounted for
almost 25% of all diagnosed cases of cancer in Poland. We propose how to use
ClassRBM for predicting breast cancer recurrence and discovering relevant inputs
(symptoms) in illness reappearance. Next, we outline a general probabilistic
framework for learning Boltzmann machines with masks, which we refer to as
Dropping. The manner of generating the masks leads to different learning
methods, e.g., DropOut and DropConnect. We propose a new method called
DropPart, which is a generalization of DropConnect: in DropPart, the Beta
distribution is used instead of the Bernoulli distribution of DropConnect. At
the end, we carry out an
experiment using real-life dataset consisting of 949 cases, provided by the
Institute of Oncology Ljubljana.
|
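The mask-based view of Dropping described above can be illustrated with stdlib tools alone. The function names, shapes, and parameter values below are assumptions for illustration, not the paper's experimental settings.

```python
import random

# Illustrative mask generators for the "Dropping" framework described above.
# DropConnect draws a {0, 1} Bernoulli mask per weight; DropPart, as described,
# generalizes this by drawing a [0, 1]-valued mask from a Beta distribution.

def dropconnect_mask(shape, p=0.5, rng=random):
    """Bernoulli(p) mask: each weight is kept (1.0) with probability p."""
    return [[1.0 if rng.random() < p else 0.0 for _ in range(shape[1])]
            for _ in range(shape[0])]

def droppart_mask(shape, alpha=2.0, beta=2.0, rng=random):
    """Beta(alpha, beta) mask: each weight is scaled by a draw in [0, 1].
    As alpha, beta -> 0 the Beta distribution concentrates on {0, 1}, which is
    the sense in which this generalizes the Bernoulli mask of DropConnect."""
    return [[rng.betavariate(alpha, beta) for _ in range(shape[1])]
            for _ in range(shape[0])]

def apply_mask(weights, mask):
    """Element-wise product of a weight matrix and its mask."""
    return [[w * m for w, m in zip(wr, mr)] for wr, mr in zip(weights, mask)]
```

In training one would redraw the mask for every example or minibatch and apply it to the weight matrix before the forward pass.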
1308.6337 | A dual algorithm for a class of augmented convex models | math.OC cs.IT math.IT | Convex optimization models find interesting applications, especially in
signal/image processing and compressive sensing. We study some augmented convex
models, which are perturbed by strongly convex functions, and propose a dual
gradient algorithm. The proposed algorithm includes the linearized Bregman
algorithm and the singular value thresholding algorithm as special cases. Based
on fundamental properties of proximal operators, we present a concise approach
to establish the convergence of both primal and dual sequences, improving the
results in the existing literature.
|
1308.6339 | New bounds for circulant Johnson-Lindenstrauss embeddings | cs.IT math.FA math.IT | This paper analyzes circulant Johnson-Lindenstrauss (JL) embeddings which, as
an important class of structured random JL embeddings, are formed by
randomizing the column signs of a circulant matrix generated by a random
vector. With the help of recent decoupling techniques and matrix-valued
Bernstein inequalities, we obtain a new bound
$k=O(\epsilon^{-2}\log^{(1+\delta)} (n))$ for Gaussian circulant JL embeddings.
Moreover, by using the Laplace transform technique (also called Bernstein's
trick), we extend the result to the subgaussian case. The bounds in this paper
offer a small improvement over the current best bounds for Gaussian circulant
JL embeddings for certain parameter regimes and are derived using more direct
methods.
|
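The construction analyzed above, randomized column signs composed with a circulant matrix generated by a Gaussian vector, can be sketched directly. The toy dimensions below do not follow the stated $k$ bound, and a practical implementation would apply the circulant via the FFT rather than this $O(nk)$ direct multiplication.

```python
import math
import random

# Sketch of a Gaussian circulant JL embedding: flip the input's coordinate
# signs at random, multiply by a circulant matrix generated by a Gaussian
# vector, keep the first k coordinates, and scale by 1/sqrt(k).

def circulant_jl(n, k, rng):
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]        # circulant generator
    eps = [rng.choice((-1.0, 1.0)) for _ in range(n)]  # random column signs
    def embed(x):
        z = [xi * e for xi, e in zip(x, eps)]          # D_eps x
        scale = 1.0 / math.sqrt(k)
        # row i of the circulant C is g cyclically shifted by i; keep k rows
        return [scale * sum(g[(j - i) % n] * z[j] for j in range(n))
                for i in range(k)]
    return embed

rng = random.Random(1)
f = circulant_jl(n=32, k=16, rng=rng)
x = [1.0] + [0.0] * 31
y = [0.0, 1.0] + [0.0] * 30
fx, fy = f(x), f(y)
fxy = f([a + b for a, b in zip(x, y)])  # linearity: f(x+y) = f(x) + f(y)
```

The JL guarantee concerns approximate norm preservation over a point set; the snippet only exhibits the structured map itself (note it is exactly linear, so distortion of pairwise distances reduces to distortion of norms of differences).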
1308.6342 | Linear and Parallel Learning of Markov Random Fields | stat.ML cs.LG | We introduce a new embarrassingly parallel parameter learning algorithm for
Markov random fields with untied parameters which is efficient for a large
class of practical models. Our algorithm parallelizes naturally over cliques
and, for graphs of bounded degree, its complexity is linear in the number of
cliques. Unlike its competitors, our algorithm is fully parallel and for
log-linear models it is also data efficient, requiring only the local
sufficient statistics of the data to estimate parameters.
|
1308.6356 | Respondent-Driven Sampling in Online Social Networks | cs.SI stat.AP | Respondent-driven sampling (RDS) is a commonly used method for acquiring data
on hidden communities, i.e., those that lack unbiased sampling frames or face
social stigmas that make their members unwilling to identify themselves.
Obtaining accurate statistical data about such communities is important
because, for instance, they often have different health burdens from the
greater population, and without good statistics it is hard and expensive to
effectively reach them for prevention or treatment interventions. Online
social networks (OSNs) have the potential to transform RDS for the better. We
present a new RDS recruitment protocol for OSNs and show via simulation that
it outperforms the standard RDS protocol in terms of sampling accuracy and
approaches the accuracy of Markov chain Monte Carlo random walks.
|
1308.6373 | Special Bent and Near-bent Functions | cs.IT math.IT | Starting from special near-bent functions in dimension 2t-1 we construct bent
functions in dimension 2t having a specific derivative. We deduce new families
of bent functions.
|
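The abstract's 2t-1 to 2t construction is not reproduced here, but the defining property being targeted is easy to verify computationally: a Boolean function on $n = 2t$ variables is bent exactly when every Walsh-Hadamard coefficient has magnitude $2^{n/2}$. The classic quadratic $f(x) = x_1x_2 \oplus x_3x_4$ serves as a small check.

```python
# A Boolean function f on n = 2t bits is bent iff every Walsh-Hadamard
# coefficient W_f(a) = sum_x (-1)^(f(x) + a.x) has magnitude 2^(n/2).
# The classic example f(x) = x1*x2 XOR x3*x4 is bent for n = 4.

N = 4

def f(x):
    """x is an integer whose bits are (x1, x2, x3, x4)."""
    x1, x2, x3, x4 = (x >> 3) & 1, (x >> 2) & 1, (x >> 1) & 1, x & 1
    return (x1 & x2) ^ (x3 & x4)

def walsh(a):
    """W_f(a) = sum over all x of (-1)^(f(x) XOR a.x)."""
    total = 0
    for x in range(1 << N):
        dot = bin(a & x).count("1") & 1  # inner product a.x mod 2
        total += (-1) ** (f(x) ^ dot)
    return total

spectrum = [walsh(a) for a in range(1 << N)]
print(all(abs(w) == 2 ** (N // 2) for w in spectrum))  # True: f is bent
```

A near-bent function in odd dimension, by contrast, has Walsh spectrum $\{0, \pm 2^{t}\}$; the same `walsh` routine can check that case after adjusting `N` and `f`.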
1308.6384 | Collecting Coupons with Random Initial Stake | cs.DM cs.DS cs.NE | Motivated by a problem in the theory of randomized search heuristics, we give
a very precise analysis for the coupon collector problem where the collector
starts with a random set of coupons (chosen uniformly from all sets).
We show that the expected number of rounds until we have a coupon of each
type is $nH_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the $(n/2)$th
harmonic number when $n$ is even, and $H_{n/2}:= (1/2) H_{\lfloor n/2 \rfloor}
+ (1/2) H_{\lceil n/2 \rceil}$ when $n$ is odd. Consequently, the coupon
collector with random initial stake is by half a round faster than the one
starting with exactly $n/2$ coupons (apart from additive $o(1)$ terms).
This result implies that the classic simple heuristic called \emph{randomized
local search} needs an expected number of $nH_{n/2} - 1/2 \pm o(1)$ iterations
to find the optimum of any monotonic function defined on bit-strings of length
$n$.
|
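The statement above is easy to check numerically. The sketch below simulates the collector with a uniformly random initial set (equivalently, each coupon present independently with probability 1/2) and compares the average against $nH_{n/2} - 1/2$; the parameters are illustrative.

```python
import random

# Monte Carlo check of the claim: starting from a uniformly random initial set
# of coupons, the expected number of additional rounds to collect all n types
# is n*H_{n/2} - 1/2 + o(1). A uniform random subset of {0..n-1} is generated
# by including each coupon independently with probability 1/2.

def rounds_to_complete(n, rng):
    have = {i for i in range(n) if rng.random() < 0.5}  # uniform random subset
    rounds = 0
    while len(have) < n:
        have.add(rng.randrange(n))  # one uniformly random coupon per round
        rounds += 1
    return rounds

def harmonic(m):
    return sum(1.0 / i for i in range(1, m + 1))

n, trials = 20, 20000
rng = random.Random(42)
avg = sum(rounds_to_complete(n, rng) for _ in range(trials)) / trials
predicted = n * harmonic(n // 2) - 0.5  # n is even, so H_{n/2} is exact
print(avg, predicted)  # agree to within Monte Carlo error (~0.2 here)
```

For comparison, starting from exactly $n/2$ coupons the expectation is $nH_{n/2}$, so the random initial stake is indeed faster by half a round.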
1308.6388 | GNCGCP - Graduated NonConvexity and Graduated Concavity Procedure | cs.CV | In this paper we propose the Graduated NonConvexity and Graduated Concavity
Procedure (GNCGCP) as a general optimization framework to approximately solve
the combinatorial optimization problems on the set of partial permutation
matrices. GNCGCP comprises two sub-procedures, graduated nonconvexity (GNC)
which realizes a convex relaxation and graduated concavity (GC) which realizes
a concave relaxation. It is proved that GNCGCP realizes exactly a type of
convex-concave relaxation procedure (CCRP), but with a much simpler formulation
without needing convex or concave relaxation in an explicit way. Actually,
GNCGCP involves only the gradient of the objective function and is therefore
very easy to use in practical applications. Two typical NP-hard problems,
(sub)graph matching and quadratic assignment problem (QAP), are employed to
demonstrate its simplicity and state-of-the-art performance.
|
1308.6401 | A Synergistic Approach for Recovering Occlusion-Free Textured 3D Maps of
Urban Facades from Heterogeneous Cartographic Data | cs.CV | In this paper we present a practical approach for generating an
occlusion-free textured 3D map of urban facades by the synergistic use of
terrestrial images, 3D point clouds and area-based information. Particularly in
dense urban environments, the high presence of urban objects in front of the
facades causes significant difficulties for several stages in computational
building modeling. Major challenges lie on the one hand in extracting complete
3D facade quadrilateral delimitations and on the other hand in generating
occlusion-free facade textures. For these reasons, we describe a
straightforward approach for completing and recovering facade geometry and
textures by exploiting the data complementarity of terrestrial multi-source
imagery and area-based information.
|
1308.6415 | Learning-Based Procedural Content Generation | cs.AI cs.HC cs.LG cs.NE | Procedural content generation (PCG) has recently become one of the hottest
topics in computational intelligence and AI game research. Among a variety of
PCG techniques, search-based PCG (SBPCG) approaches overwhelmingly dominate PCG
development at present. While SBPCG leads to promising results and successful
applications, it poses a number of challenges ranging from representation to
evaluation of the content being generated. In this paper, we present an
alternative yet generic PCG framework, named learning-based procedural content
generation (LBPCG), to provide potential solutions to several challenging
problems in
existing PCG techniques. By exploring and exploiting information gained in game
development and public beta test via data-driven learning, our framework can
generate robust content adaptable to end-user or target players on-line with
minimal interruption to their experience. Furthermore, we develop enabling
techniques to implement the various models required in our framework. For a
proof of concept, we have developed a prototype based on the classic open
source first-person shooter game, Quake. Simulation results suggest that our
framework is promising in generating quality content.
|
1308.6432 | Robust L_infinity-induced deconvolution filtering for linear stochastic
systems and its application to fault reconstruction | cs.SY | The problem of stationary robust L_infinity-induced deconvolution filtering
for the uncertain continuous-time linear stochastic systems is addressed. The
state space model of the system contains state- and input-dependent noise and
deterministic parameter uncertainties residing in a given polytope. In the
presence of input-dependent noise, we extend the derived lemma in Berman and
Shaked (2010) characterizing the induced L_infinity norm by linear matrix
inequalities (LMIs), according to which we solve the deconvolution problem in
the quadratic framework. By decoupling product terms between the Lyapunov
matrix and system matrices, an improved version of the proposed
L_infinity-induced norm bound lemma for continuous-time stochastic systems is
obtained, which allows us to exploit the parameter-dependent stability idea in
the deconvolution filter design. The theories presented are utilized for
sensor fault reconstruction in uncertain linear stochastic systems. The
effectiveness and advantages of the proposed design methods are shown via two
numerical examples.
|
1308.6437 | Coding with Scrambling, Concatenation, and HARQ for the AWGN Wire-Tap
Channel: A Security Gap Analysis | cs.IT math.IT | This study examines the use of nonsystematic channel codes to obtain secure
transmissions over the additive white Gaussian noise (AWGN) wire-tap channel.
Unlike previous approaches, we propose to implement nonsystematic coded
transmission by scrambling the information bits, and characterize the bit error
rate of scrambled transmissions through theoretical arguments and numerical
simulations. We have focused on some examples of Bose-Chaudhuri-Hocquenghem
(BCH) and low-density parity-check (LDPC) codes to estimate the security gap,
which we have used as a measure of physical layer security, in addition to the
bit error rate. Based on a number of numerical examples, we found that such a
transmission technique can outperform alternative solutions. In fact, when an
eavesdropper (Eve) has a worse channel than the authorized user (Bob), the
security gap required to reach a given level of security is very small. The
amount of degradation of Eve's channel with respect to Bob's that is needed to
achieve sufficient security can be further reduced by implementing scrambling
and descrambling operations on blocks of frames, rather than on single frames.
Even when Eve's channel has a quality equal to or better than that of Bob's
channel, we have shown that the use of a hybrid automatic repeat-request (HARQ)
protocol with authentication still allows achieving a sufficient level of
security. Finally, the secrecy performance of some practical schemes has also
been measured in terms of the equivocation rate about the message at the
eavesdropper and compared with that of ideal codes.
|
1308.6481 | Nonparametric Decentralized Sequential Detection via Universal Source
Coding | cs.IT math.IT | We consider nonparametric or universal sequential hypothesis testing problem
when the distribution under the null hypothesis is fully known but the
alternate hypothesis corresponds to some other unknown distribution. These
algorithms are primarily motivated from spectrum sensing in Cognitive Radios
and intruder detection in wireless sensor networks. We use easily implementable
universal lossless source codes to propose simple algorithms for such a setup.
The algorithms are first proposed for discrete alphabets. Their performance and
asymptotic properties are studied theoretically. Later, these are extended to
continuous alphabets. Their performance with two well-known universal source
codes, Lempel-Ziv code and Krichevsky-Trofimov estimator with Arithmetic
Encoder are compared. These algorithms are also compared with the tests using
various other nonparametric estimators. Finally, a decentralized version
utilizing spatial diversity is also proposed. Its performance is analysed and
asymptotic properties are proved.
|
1308.6487 | A New Algorithm of Speckle Filtering using Stochastic Distances | cs.IT cs.CV cs.GR math.IT stat.AP stat.ML | This paper presents a new approach for filter design based on stochastic
distances and tests between distributions. A window is defined around each
pixel, overlapping samples are compared and only those which pass a
goodness-of-fit test are used to compute the filtered value. The technique is
applied to intensity SAR data with homogeneous regions using the Gamma model.
The proposal is compared with Lee's filter using a protocol based on Monte
Carlo simulation. Among the criteria used to quantify the quality of filters,
we employ the equivalent number of looks and line and edge preservation.
Moreover, we also assessed the filters by the Universal Image Quality Index and
Pearson's correlation on edge regions.
|
1308.6494 | Spectral community detection in sparse networks | physics.soc-ph cond-mat.stat-mech cs.SI | Spectral methods based on the eigenvectors of matrices are widely used in the
analysis of network data, particularly for community detection and graph
partitioning. Standard methods based on the adjacency matrix and related
matrices, however, break down for very sparse networks, which includes many
networks of practical interest. As a solution to this problem it has been
recently proposed that we focus instead on the spectrum of the non-backtracking
matrix, an alternative matrix representation of a network that shows better
behavior in the sparse limit. Inspired by this suggestion, we here make use of
a relaxation method to derive a spectral community detection algorithm that
works well even in the sparse regime where other methods break down.
Interestingly, however, the matrix at the heart of the method, it turns out, is
not exactly the non-backtracking matrix, but a variant of it with a somewhat
different definition. We study the behavior of this variant matrix for both
artificial and real-world networks and find it to have desirable properties,
especially in the common case of networks with broad degree distributions, for
which it appears to have a better behaved spectrum and eigenvectors than the
original non-backtracking matrix.
|
1308.6498 | Universal Approximation Using Shuffled Linear Models | math.DS cs.NE | This paper proposes a specific type of Local Linear Model, the Shuffled
Linear Model (SLM), that can be used as a universal approximator. Local
operating points are chosen randomly and linear models are used to approximate
a function or system around these points. The model can also be interpreted as
an extension to Extreme Learning Machines with Radial Basis Function nodes, or
as a specific way of using Takagi-Sugeno fuzzy models. Using the available
theory of Extreme Learning Machines, universal approximation of the SLM and an
upper bound on the number of models are proved mathematically, and an efficient
algorithm is proposed.
|
1308.6503 | Second-Order Asymptotics for the Classical Capacity of Image-Additive
Quantum Channels | quant-ph cs.IT math-ph math.IT math.MP | We study non-asymptotic fundamental limits for transmitting classical
information over memoryless quantum channels, i.e. we investigate the amount of
classical information that can be transmitted when a quantum channel is used a
finite number of times and a fixed, non-vanishing average error is permissible.
We consider the classical capacity of quantum channels that are image-additive,
including all classical to quantum channels, as well as the product state
capacity of arbitrary quantum channels. In both cases we show that the
non-asymptotic fundamental limit admits a second-order approximation that
illustrates the speed at which the rate of optimal codes converges to the
Holevo capacity as the blocklength tends to infinity. The behavior is governed
by a new channel parameter, called channel dispersion, for which we provide a
geometrical interpretation.
|
1308.6504 | On the Conditions of Sparse Parameter Estimation via Log-Sum Penalty
Regularization | cs.IT math.IT | For high-dimensional sparse parameter estimation problems, Log-Sum Penalty
(LSP) regularization effectively reduces the sampling sizes in practice.
However, it still lacks theoretical analysis to support the experience from
previous empirical study. The analysis of this article shows that, like
$\ell_0$-regularization, an $O(s)$ sampling size is enough for proper LSP,
where $s$ is the number of non-zero components of the true parameter. We also
propose an efficient algorithm to solve the LSP regularization problem. The
solutions given by
the proposed algorithm give consistent parameter estimations under less
restrictive conditions than $\ell_1$-regularization.
|
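The abstract's own algorithm is not specified here; the sketch below shows the standard iterative-reweighting treatment of the log-sum penalty $\sum_i \log(1 + |x_i|/\epsilon)$ on the simplest (orthogonal/denoising) instance, where each linearization step reduces to a weighted soft-threshold with weight $\lambda/(\epsilon + |x_i|)$. All parameter values are illustrative.

```python
# Iterative reweighting for the log-sum penalty on the denoising problem
# min_x (1/2)(x - y)^2 + lam * sum_i log(1 + |x_i| / eps): linearizing the
# penalty around the current iterate gives a weighted soft-threshold whose
# weight lam/(eps + |x_i|) is large for small entries and small for large ones.

def soft(v, t):
    """Soft-thresholding operator."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def lsp_denoise(y, lam=1.0, eps=0.1, iters=20):
    x = list(y)
    for _ in range(iters):
        x = [soft(yi, lam / (eps + abs(xi))) for yi, xi in zip(y, x)]
    return x

y = [5.0, -4.0, 0.3, -0.2, 0.05]  # two large (signal), three small (noise)
x = lsp_denoise(y)
print(x)  # large entries survive nearly unshrunk; small ones are set to 0
```

This adaptivity, heavy shrinkage of small coefficients and light shrinkage of large ones, is the intuition behind LSP needing fewer samples than plain $\ell_1$ in the regression setting the paper studies.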
1308.6537 | Percolation on random networks with arbitrary k-core structure | physics.soc-ph cond-mat.stat-mech cs.SI | The k-core decomposition of a network has thus far mainly served as a
powerful tool for the empirical study of complex networks. We now propose its
explicit integration in a theoretical model. We introduce a Hard-core Random
Network model that generates maximally random networks with arbitrary degree
distribution and arbitrary k-core structure. We then solve exactly the bond
percolation problem on the HRN model and produce fast and precise analytical
estimates for the corresponding real networks. Extensive comparison with
selected databases reveals that our approach performs better than existing
models, while requiring less input information.
|
1308.6543 | Resource Allocation in MIMO Radar With Multiple Targets for Non-Coherent
Localization | cs.IT math.IT | In a MIMO radar network the multiple transmit elements may emit waveforms
that differ in power and bandwidth. In this paper we ask, given that these two
resources are limited, what the optimal power, optimal bandwidth, and optimal
joint power-and-bandwidth allocations are for the best localization of multiple
targets. The well-known Cram\'er-Rao lower bound for target localization
accuracy is used as a figure of merit and approximate solutions
are found by minimizing a sequence of convex problems. Their quality is
assessed through extensive numerical simulations and with the help of a
lower-bound on the true solution. Simulation results reveal that bandwidth
allocation policies have a markedly stronger impact on performance than power
allocation.
|
1308.6552 | Integer-Forcing Source Coding | cs.IT math.IT | Integer-Forcing (IF) is a new framework, based on compute-and-forward, for
decoding multiple integer linear combinations from the output of a Gaussian
multiple-input multiple-output channel. This work applies the IF approach to
arrive at a new low-complexity scheme, IF source coding, for distributed lossy
compression of correlated Gaussian sources under a minimum mean squared error
distortion measure. All encoders use the same nested lattice codebook. Each
encoder quantizes its observation using the fine lattice as a quantizer and
reduces the result modulo the coarse lattice, which plays the role of binning.
Rather than directly recovering the individual quantized signals, the decoder
first recovers a full-rank set of judiciously chosen integer linear
combinations of the quantized signals, and then inverts it. In general, the
linear combinations have smaller average powers than the original signals. This
makes it possible to increase the density of the coarse lattice, which in turn
translates
to smaller compression rates. We also propose and analyze a one-shot version of
IF source coding, that is simple enough to potentially lead to a new design
principle for analog-to-digital converters that can exploit spatial
correlations between the sampled signals.
|
1308.6566 | Classification and construction of closed-form kernels for signal
representation on the 2-sphere | cs.IT math.IT | This paper considers the construction of Reproducing Kernel Hilbert Spaces
(RKHS) on the sphere as an alternative to the conventional Hilbert space using
the inner product that yields the L^2(S^2) function space of finite energy
signals. In comparison with wavelet representations, which have
multi-resolution properties on L^2(S^2), the representations that arise from
the RKHS approach, which uses different inner products, have an overall
smoothness constraint, which may offer advantages and simplifications in
certain contexts. The key contribution of this paper is to construct classes of
closed-form kernels, such as one based on the von Mises-Fisher distribution,
which permits efficient inner product computation using kernel evaluations.
Three classes of RKHS are defined: isotropic kernels and non-isotropic kernels
both with spherical harmonic eigenfunctions, and general anisotropic kernels.
|
1308.6604 | A smart local moving algorithm for large-scale modularity-based
community detection | physics.soc-ph cs.SI physics.data-an | We introduce a new algorithm for modularity-based community detection in
large networks. The algorithm, which we refer to as a smart local moving
algorithm, takes advantage of a well-known local moving heuristic that is also
used by other algorithms. Compared with these other algorithms, our proposed
algorithm uses the local moving heuristic in a more sophisticated way. Based on
an analysis of a diverse set of networks, we show that our smart local moving
algorithm identifies community structures with higher modularity values than
other algorithms for large-scale modularity optimization, among which the
popular 'Louvain algorithm' introduced by Blondel et al. (2008). The
computational efficiency of our algorithm makes it possible to perform
community detection in networks with tens of millions of nodes and hundreds of
millions of edges. Our smart local moving algorithm also performs well in small
and medium-sized networks. In short computing times, it identifies community
structures with modularity values equally high as, or almost as high as, the
highest values reported in the literature, and sometimes even higher than the
highest values found in the literature.
|
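The local moving heuristic that the smart algorithm above builds on can be sketched as follows. The "smart" refinements of the paper are not reproduced, and modularity is recomputed from scratch for clarity rather than via the usual incremental gain formula, so this is illustrative rather than efficient.

```python
# Plain local moving heuristic for modularity: starting from singleton
# communities, repeatedly move single nodes to the neighboring community that
# most increases modularity, until no single-node move helps.

def modularity(edges, comm):
    """Q = (intra-community edges)/m - sum over groups of (K_g / 2m)^2."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = sum(1.0 / m for u, v in edges if comm[u] == comm[v])
    for g in set(comm.values()):
        k = sum(deg[node] for node in comm if comm[node] == g)
        q -= (k / (2.0 * m)) ** 2
    return q

def local_moving(edges):
    nodes = sorted({n for e in edges for n in e})
    nbrs = {n: set() for n in nodes}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    comm = {n: n for n in nodes}  # start from singleton communities
    improved = True
    while improved:
        improved = False
        for n in nodes:
            best_q, best_c = modularity(edges, comm), comm[n]
            for c in {comm[x] for x in nbrs[n]}:
                trial = dict(comm)
                trial[n] = c
                q = modularity(edges, trial)
                if q > best_q + 1e-12:
                    best_q, best_c = q, c
            if best_c != comm[n]:
                comm[n] = best_c
                improved = True
    return comm

# two triangles joined by a bridge: communities {0,1,2} and {3,4,5} emerge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comm = local_moving(edges)
print(comm[0] == comm[1] == comm[2], comm[3] == comm[4] == comm[5])
```

Louvain-style algorithms wrap this routine in a hierarchy of graph aggregations; the smart local moving algorithm additionally reapplies it within subnetworks, which is the refinement omitted here.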
1308.6628 | Joint Video and Text Parsing for Understanding Events and Answering
Queries | cs.CV cs.CL cs.MM | We propose a framework for parsing video and text jointly for understanding
events and answering user queries. Our framework produces a parse graph that
represents the compositional structures of spatial information (objects and
scenes), temporal information (actions and events) and causal information
(causalities between events and fluents) in the video and text. The knowledge
representation of our framework is based on a spatial-temporal-causal And-Or
graph (S/T/C-AOG), which jointly models possible hierarchical compositions of
objects, scenes and events as well as their interactions and mutual contexts,
and specifies the prior probabilistic distribution of the parse graphs. We
present a probabilistic generative model for joint parsing that captures the
relations between the input video/text, their corresponding parse graphs and
the joint parse graph. Based on the probabilistic model, we propose a joint
parsing system consisting of three modules: video parsing, text parsing and
joint inference. Video parsing and text parsing produce two parse graphs from
the input video and text respectively. The joint inference module produces a
joint parse graph by performing matching, deduction and revision on the video
and text parse graphs. The proposed framework has the following objectives:
Firstly, we aim at deep semantic parsing of video and text that goes beyond the
traditional bag-of-words approaches; Secondly, we perform parsing and reasoning
across the spatial, temporal and causal dimensions based on the joint S/T/C-AOG
representation; Thirdly, we show that deep joint parsing facilitates subsequent
applications such as generating narrative text descriptions and answering
queries in the forms of who, what, when, where and why. We empirically
evaluated our system based on comparison against ground-truth as well as
accuracy of query answering and obtained satisfactory results.
|
1308.6633 | Cross-Correlation of Photovoltaic Output Fluctuation in Power System
Operation for Large-Scale Photovoltaic Integration | cs.SY | We analyzed the cross-correlation of Photovoltaic (PV) output fluctuation for
the actual PV output time series data in both the Tokyo area and the whole of
Japan using the principal component analysis with the random matrix theory.
Based on the obtained cross-correlation coefficients, the forecast error for PV
output was estimated with and without considering the cross-correlations. Then
the operation schedule of thermal plants was calculated to integrate PV output
using our unit commitment model with the estimated forecast error. The cost of
grid integration of the PV system was also estimated. Finally, the validity of
the concept of "local production for local consumption of renewable energy" and
alternative
policy implications were also discussed.
|
1308.6641 | Local Average Consensus in Distributed Measurement of Spatial-Temporal
Varying Parameters: 1D Case | cs.SY | We study a new variant of consensus problems, termed `local average
consensus', in networks of agents. We consider the task of using sensor
networks to perform distributed measurement of a parameter which has both
spatial (in this paper 1D) and temporal variations. Our idea is to maintain
potentially useful local information regarding spatial variation, as contrasted
with reaching a single, global consensus, as well as to mitigate the effect of
measurement errors. We employ two schemes for computation of local average
consensus: exponential weighting and uniform finite window. In both schemes, we
design local average consensus algorithms to address first the case where the
measured parameter has spatial variation but is constant in time, and then the
case where the measured parameter has both spatial and temporal variations. Our
designed algorithms are distributed, in that information is exchanged only
among neighbors. Moreover, we analyze both spatial and temporal frequency
responses and noise propagation associated with the algorithms. The tradeoffs
of using local consensus, as compared to standard global consensus, include
higher memory requirement and degraded noise performance. Arbitrary updating
weights and random spacing between sensors are analyzed in the proposed
algorithms.
|
1308.6646 | Point values and normalization of two-direction multiwavelets and their
derivatives | cs.IT math.IT | Two-direction multiscaling functions $\boldsymbol{\phi}$ and two-direction
multiwavelets $\boldsymbol{\psi}$ associated with $\boldsymbol{\phi}$ provide a
more general and more flexible setting than one-direction multiscaling
functions and multiwavelets. In this paper, we investigate how to find and
normalize point values and those of derivatives of the two-direction
multiscaling functions $\boldsymbol{\phi}$ and multiwavelets
$\boldsymbol{\psi}$. For finding point values, we investigate the eigenvalue
approach. For normalization, we investigate the normalizing conditions for them
by normalizing the zeroth continuous moment of $\boldsymbol{\phi}$. Examples
for illustrating the general theory are given.
|
1308.6659 | Spatio-spectral Formulation and Design of Spatially-Varying Filters for
Signal Estimation on the 2-Sphere | astro-ph.IM cs.IT math.IT | In this paper, we present an optimal filter for the enhancement or estimation
of signals on the 2-sphere corrupted by noise, when both the signal and noise
are realizations of anisotropic processes on the 2-sphere. The estimation of
such a signal in the spatial or spectral domain separately can be shown to be
inadequate. Therefore, we develop an optimal filter in the joint
spatio-spectral domain by using a framework recently presented in the
literature --- the spatially localized spherical harmonic transform ---
enabling such processing. Filtering of a signal in the spatio-spectral domain
facilitates taking into account anisotropic properties of both the signal and
noise processes. The proposed spatio-spectral filtering is optimal under the
mean-square error criterion. The capability of the proposed filtering framework
is demonstrated by an example of estimating a signal corrupted by an
anisotropic noise process.
|
1308.6682 | A Novel Query-Based Approach for Addressing Summarizability Issues in
XOLAP | cs.DB | The business intelligence and decision-support systems used in many
application domains casually rely on data warehouses, which are
decision-oriented data repositories modeled as multidimensional (MD)
structures. MD structures help navigate data through hierarchical levels of
detail. In many real-world situations, hierarchies in MD models are complex,
which causes data aggregation issues, collectively known as the summarizability
problem. This problem leads to incorrect analyses and critically affects
decision making. To enforce summarizability, existing approaches alter either
MD models or data, and must be applied a priori, on a case-by-case basis, by an
expert. To alter neither models nor data, a few query-time approaches have been
proposed recently, but they only detect summarizability issues without solving
them. Thus, we propose in this paper a novel approach that automatically
detects and processes summarizability issues at query time, without requiring
any particular expertise from the user. Moreover, while most existing
approaches are based on the relational model, our approach focuses on an XML MD
model, since XML data is customarily used to represent business data and its
format better copes with complex hierarchies than the relational model.
Finally, our experiments show that our method is likely to scale better than a
reference approach for addressing the summarizability problem in the MD
context.
|
1308.6683 | Benchmarking Summarizability Processing in XML Warehouses with Complex
Hierarchies | cs.DB | Business Intelligence plays an important role in decision making. Based on
data warehouses and Online Analytical Processing, a business intelligence tool
can be used to analyze complex data. Still, summarizability issues in data
warehouses cause ineffective analyses that may become critical problems to
businesses. To settle this issue, many researchers have studied and proposed
various solutions, both in relational and XML data warehouses. However, they
find difficulty in evaluating the performance of their proposals since the
available benchmarks lack complex hierarchies. In order to contribute to
summarizability analysis, this paper proposes an extension to the XML warehouse
benchmark (XWeB) with complex hierarchies. The benchmark enables us to generate
XML data warehouses with scalable complex hierarchies as well as
summarizability processing. We experimentally demonstrated that complex
hierarchies can definitely be included into a benchmark dataset, and that our
benchmark is able to compare two alternative approaches dealing with
summarizability issues.
|
1308.6687 | Image Set based Collaborative Representation for Face Recognition | cs.CV | With the rapid development of digital imaging and communication technologies,
image set based face recognition (ISFR) is becoming increasingly important. One
key issue of ISFR is how to effectively and efficiently represent the query
face image set by using the gallery face image sets. The set-to-set distance
based methods ignore the relationship between gallery sets, while representing
the query set images individually over the gallery sets ignores the correlation
between query set images. In this paper, we propose a novel image set based
collaborative representation and classification method for ISFR. By modeling
the query set as a convex or regularized hull, we represent this hull
collaboratively over all the gallery sets. With the resulting representation
coefficients, the distance between the query set and each gallery set can then
be calculated for classification. The proposed model naturally and effectively
extends the image based collaborative representation to an image set based one,
and our extensive experiments on benchmark ISFR databases show the superiority
of the proposed method to state-of-the-art ISFR methods under different set
sizes in terms of both recognition rate and efficiency.
|
1308.6697 | Detect adverse drug reactions for drug Atorvastatin | cs.CE | Adverse drug reactions (ADRs) are a major public health concern, and are one
of the most common causes of drug withdrawal from the market. The two major
methods currently used for detecting ADRs are the spontaneous reporting system
(SRS) and prescription event monitoring (PEM). The World Health Organization (WHO)
defines a signal in pharmacovigilance as "any reported information on a
possible causal relationship between an adverse event and a drug, the
relationship being unknown or incompletely documented previously". For
spontaneous reporting systems, many machine learning methods are used to detect
ADRs, such as Bayesian confidence propagation neural network (BCPNN), decision
support methods, genetic algorithms, knowledge-based approaches, etc. One
limitation is the reporting mechanism for submitting ADR reports, which suffers
from serious underreporting and cannot accurately quantify the corresponding
risk. Another limitation is that it is hard to detect ADRs with a small number
of occurrences of each drug-event association in the database. In this paper we
propose a feature selection approach to detect ADRs from The Health Improvement
Network (THIN)
database. First a feature matrix, which represents the medical events for the
patients before and after taking drugs, is created by linking patients'
prescriptions and corresponding medical events together. Then significant
features are selected based on feature selection methods, comparing the feature
matrix before patients take drugs with the one after patients take drugs. Finally
the significant ADRs can be detected from thousands of medical events based on
corresponding features. Experiments are carried out on the drug Atorvastatin.
Good performance is achieved.
|
1308.6701 | Enhanced Data Integration for LabVIEW Laboratory Systems | cs.DB | Integrating data is a basic concern in many accredited laboratories that
perform a large variety of measurements. However, the present working style in
engineering faculties does not focus much on this aspect. To deal with this
challenge, we developed an educational platform that allows characterization of
acquisition ensembles, generation of Web pages for lessons, as well as
transformation of measured data and storage in a common format. As generally we
had to develop individual parsers for each instrument, we also added the
possibility to integrate the LabVIEW workbench, often used for rapid
development of applications in electrical engineering and automatic control.
This paper describes how we configure the platform for specific equipment, i.e.
how we model it, how we create the learning material and how we integrate the
results in a central database. It also introduces a case study for collecting
data from a thermocouple-based acquisition system based on LabVIEW, used by
students for a laboratory of measurement technologies and transducers.
|
1308.6702 | Adversarial hypothesis testing and a quantum Stein's Lemma for
restricted measurements | cs.IT math.IT math.PR quant-ph | Recall the classical hypothesis testing setting with two convex sets of
probability distributions P and Q. One receives either n i.i.d. samples from a
distribution p in P or from a distribution q in Q and wants to decide from
which set the points were sampled. It is known that the optimal exponential
rate at which errors decrease can be achieved by a simple maximum-likelihood
ratio test which does not depend on p or q, but only on the sets P and Q.
We consider an adaptive generalization of this model where the choice of p in
P and q in Q can change in each sample in some way that depends arbitrarily on
the previous samples. In other words, in the k'th round, an adversary, having
observed all the previous samples in rounds 1,...,k-1, chooses p_k in P and q_k
in Q, with the goal of confusing the hypothesis test. We prove that even in
this case, the optimal exponential error rate can be achieved by a simple
maximum-likelihood test that depends only on P and Q.
We then show that the adversarial model has applications in hypothesis
testing for quantum states using restricted measurements. For example, it can
be used to study the problem of distinguishing entangled states from the set of
all separable states using only measurements that can be implemented with local
operations and classical communication (LOCC). The basic idea is that in our
setup, the deleterious effects of entanglement can be simulated by an adaptive
classical adversary.
We prove a quantum Stein's Lemma in this setting: In many circumstances, the
optimal hypothesis testing rate is equal to an appropriate notion of quantum
relative entropy between two states. In particular, our arguments yield an
alternate proof of Li and Winter's recent strengthening of strong subadditivity
for quantum relative entropy.
|
1308.6705 | Digital breadcrumbs: Detecting urban mobility patterns and transport
mode choices from cellphone networks | cs.SI physics.data-an physics.soc-ph | Many modern and growing cities are facing declines in public transport usage,
with few efficient methods to explain why. In this article, we show that urban
mobility patterns and transport mode choices can be derived from cellphone call
detail records coupled with public transport data recorded from smart cards.
Specifically, we present new data mining approaches to determine the spatial
and temporal variability of public and private transportation usage and
transport mode preferences across Singapore. Our results, which were validated
by Singapore's quadrennial Household Interview Travel Survey (HITS), revealed
that there are 3.5 million (HITS: 3.5 million) and 4.3 million (HITS: 4.4
million) inter-district passengers by public and private transport,
respectively. Along with classifying which transportation connections are weak
or underserved, the analysis shows that the mode share of public transport use
increases from 38 percent in the morning to 44 percent around mid-day and 52
percent in the evening.
|
1308.6709 | Distributed H-infinity Tracking Control for Discrete-Time Multi-Agent
Systems with a High-Dimensional Leader | cs.SY | This paper considers the distributed H-infinity leader-following tracking
problem for a class of discrete time multi-agent systems with a
high-dimensional dynamic leader. It is assumed that output information about
the leader is only available to designated followers, and the dynamics of the
followers are subject to perturbations. To achieve distributed H-infinity
leader-following tracking, a new class of control protocols is proposed which
is based on the feedback from the nearest neighbors as well as a distributed
state estimator. Under the assumptions that dynamics of the leader are
detectable and the communication topology contains a directed spanning tree,
sufficient conditions are obtained that enable all followers to track the
leader while achieving a desired H-infinity leader-following tracking
performance. Numerical simulations illustrate the effectiveness of the
theoretical analysis.
|
1308.6721 | Discriminative Parameter Estimation for Random Walks Segmentation | cs.CV cs.LG | The Random Walks (RW) algorithm is one of the most efficient and easy-to-use
probabilistic segmentation methods. By combining contrast terms with prior
terms, it provides accurate segmentations of medical images in a fully
automated manner. However, one of the main drawbacks of using the RW algorithm
is that its parameters have to be hand-tuned. In this paper, we propose a novel
discriminative learning framework that estimates the parameters using a
training dataset. The main challenge we face is that the training samples are
not fully supervised. Specifically, they provide a hard segmentation of the
images, instead of a probabilistic segmentation. We overcome this challenge by
treating the optimal probabilistic segmentation that is compatible with the
given hard segmentation as a latent variable. This allows us to employ the
latent support vector machine formulation for parameter estimation. We show
that our approach significantly outperforms the baseline methods on a
challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.
|
1308.6728 | Extension of "Model Parameter Adaptive Approach of Extended Object
Tracking Using Random Matrix" | cs.SY | This is a draft summary of a multi-model algorithm for extended object
tracking based on random matrices (RMF-MM).
|
1308.6732 | Strong converse for the classical capacity of the pure-loss bosonic
channel | quant-ph cs.IT math.IT | This paper strengthens the interpretation and understanding of the classical
capacity of the pure-loss bosonic channel, first established in [Giovannetti et
al., Physical Review Letters 92, 027902 (2004), arXiv:quant-ph/0308012]. In
particular, we first prove that there exists a trade-off between communication
rate and error probability if one imposes only a mean-photon number constraint
on the channel inputs. That is, if we demand that the mean number of photons at
the channel input cannot be any larger than some positive number N_S, then it
is possible to respect this constraint with a code that operates at a rate
g(\eta N_S / (1-p)) where p is the code's error probability, \eta\ is the
channel transmissivity, and g(x) is the entropy of a bosonic thermal state with
mean photon number x. We then prove that a strong converse theorem holds for
the classical capacity of this channel (that such a rate-error trade-off cannot
occur) if one instead demands a maximum photon number constraint, in such a
way that almost all of the "shadow" of the average density operator for a given
code is required to be on a subspace with photon number no larger than n N_S,
so that the shadow outside this subspace vanishes as the number n of channel
uses becomes large. Finally, we prove that a small modification of the
well-known coherent-state coding scheme meets this more demanding constraint.
|
1308.6736 | Wiretap Channel With Causal State Information and Secure Rate-Limited
Feedback | cs.IT math.IT | In this paper, we consider the secrecy capacity of a wiretap channel in the
presence of causal state information and secure rate-limited feedback. In this
scenario, the causal state information from the channel is available to both
the legitimate transmitter and legitimate receiver. In addition, the legitimate
receiver can send secure feedback to the transmitter at a limited rate Rf. We
show that the secrecy capacity is bounded. Moreover, when the channel to the
legitimate receiver is less noisy than the channel to the eavesdropper, the
bound is shown to be tight. The capacity achieving scheme is based on both the
Wyner wiretap coding and two steps of shared-key generation: one from the state
information and one via the noiseless feedback. Finally, we consider several
special cases. When state information is available only at the legitimate
receiver, the analysis suggests that unlike previous results involving
feedback, it is better to use the feedback to send the state information to the
transmitter (when possible), rather than send a random key.
|
1308.6744 | Preventing Disclosure of Sensitive Knowledge by Hiding Inference | cs.CR cs.DB cs.LG | Data mining is a way of extracting data or uncovering hidden patterns of
information from databases. There is therefore a need to prevent inference
rules from being disclosed, such that the more secure data sets cannot be
identified from non-sensitive attributes. This can be done by removing or
adding certain item sets in the transactions, a process known as sanitization.
The purpose is to hide the inference rules, so that users cannot discover any
valuable information from other non-sensitive data, and any organisation can
release all samples of its data without fear of knowledge discovery in
databases. This can be achieved by investigating frequently occurring item sets
and the rules that can be mined from them, with the objective of hiding them. Another way is
to release only limited samples in the new database so that there is no
information loss and it also satisfies the legitimate needs of the users. The
major problem is uncovering hidden patterns, which causes a threat to the
database security. Sensitive data are inferred from non-sensitive data based on
the semantics of the application the user has, commonly known as the inference
problem. Two fundamental approaches to protect sensitive rules from disclosure
are that, preventing rules from being generated by hiding the frequent sets of
data items and reducing the importance of the rules by setting their confidence
below a user-specified threshold.
|
1308.6750 | Robust Iterative Interference Alignment for Cellular Networks with
Limited Feedback | cs.IT math.IT | In theory coordinated multi-point transmission (CoMP) promises vast gains in
spectral efficiency. But industrial field trials show rather disappointing
throughput gains, whereby the major limiting factor is proper sharing of
channel state information. Many recent papers consider this so-called limited
feedback problem in the context of CoMP, usually under the assumptions of 1) an
infinite SNR regime, 2) no user selection, and 3) ideal link adaptation,
rendering the analysis too optimistic. In this paper we take a step forward
towards a more realistic assessment of the limited feedback problem by
introducing an improved metric for performance evaluation which better
captures the throughput degradation. We find the relevant scaling laws (lower
and upper bounds) and show how they differ from existing ones. Moreover,
we provide a robust iterative interference alignment algorithm and
corresponding feedback strategies achieving the obtained scaling laws. The main
idea is that instead of sending the complete channel matrix each user fixes a
receive filter and feeds back a quantized version of the effective channel.
Finally we underline our findings with simulations for the proposed system.
|
1308.6783 | Bipartite entanglement of quantum states in a pair basis | quant-ph cond-mat.quant-gas cs.IT math.IT | The unambiguous detection and quantification of entanglement is a hot topic
of scientific research, though it is limited to low dimensions or specific
classes of states. Here we identify an additional class of quantum states, for
which bipartite entanglement measures can be efficiently computed, providing
new rigorous results. Such states are written in arbitrary $d\times d$
dimensions, where each basis state in the subsystem A is paired with only one
state in B. This new class, which we refer to as pair basis states, is
remarkably relevant in many physical situations, including quantum optics. We
find that negativity is a necessary and sufficient measure of entanglement for
mixtures of states written in the same pair basis. We also provide analytical
expressions for a tight lower-bound estimation of the entanglement of
formation, a central quantity in quantum information.
|
1308.6797 | Online Ranking: Discrete Choice, Spearman Correlation and Other Feedback | cs.LG cs.GT stat.ML | Given a set $V$ of $n$ objects, an online ranking system outputs at each time
step a full ranking of the set, observes a feedback of some form and suffers a
loss. We study the setting in which the (adversarial) feedback is an element in
$V$, and the loss is the position (0th, 1st, 2nd...) of the item in the
outputted ranking. More generally, we study a setting in which the feedback is
a subset $U$ of at most $k$ elements in $V$, and the loss is the sum of the
positions of those elements.
We present an algorithm of expected regret $O(n^{3/2}\sqrt{Tk})$ over a time
horizon of $T$ steps with respect to the best single ranking in hindsight. This
improves on previous algorithms and analyses either by a factor of
$\Omega(\sqrt{k})$, by a factor of $\Omega(\sqrt{\log n})$, or by improving the
running time from quadratic to $O(n\log n)$ per round. We also prove a matching lower
bound. Our techniques also imply an improved regret bound for online rank
aggregation over the Spearman correlation measure, and for other more complex
ranking loss functions.
|
1308.6804 | A Low-Dimensional Representation for Robust Partial Isometric
Correspondences Computation | cs.CV cs.GR | Intrinsic isometric shape matching has become the standard approach for pose
invariant correspondence estimation among deformable shapes. Most existing
approaches assume global consistency, i.e., the metric structure of the whole
manifold must not change significantly. While global isometric matching is well
understood, only a few heuristic solutions are known for partial matching.
Partial matching is particularly important for robustness to topological noise
(incomplete data and contacts), which is a common problem in real-world 3D
scanner data. In this paper, we introduce a new approach to partial, intrinsic
isometric matching. Our method is based on the observation that isometries are
fully determined by purely local information: a map of a single point and its
tangent space fixes an isometry for both global and partial maps. From this
idea, we develop a new representation for partial isometric maps based on
equivalence classes of correspondences between pairs of points and their
tangent spaces. From this, we derive a local propagation algorithm that finds
such mappings efficiently. In contrast to previous heuristics based on RANSAC
or expectation maximization, our method is based on a simple and sound
theoretical model and is fully deterministic. We apply our approach to register
partial point clouds and compare it to the state-of-the-art methods, where we
obtain significant improvements over global methods for real-world data and
stronger guarantees than previous heuristic partial matching algorithms.
|
1308.6823 | A Hypergraph-Partitioned Vertex Programming Approach for Large-scale
Consensus Optimization | cs.AI cs.DC | In modern data science problems, techniques for extracting value from big
data require performing large-scale optimization over heterogenous, irregularly
structured data. Much of this data is best represented as multi-relational
graphs, making vertex programming abstractions such as those of Pregel and
GraphLab ideal fits for modern large-scale data analysis. In this paper, we
describe a vertex-programming implementation of a popular consensus
optimization technique known as the alternating direction method of multipliers
(ADMM). ADMM consensus optimization allows elegant solution of complex
objectives such as inference in rich probabilistic models. We also introduce a
novel hypergraph partitioning technique that improves over state-of-the-art
partitioning techniques for vertex programming and significantly reduces the
communication cost by reducing the number of replicated nodes up to an order of
magnitude. We implemented our algorithm in GraphLab and measure scaling
performance on a variety of realistic bipartite graph distributions and a large
synthetic voter-opinion analysis application. In our experiments, we are able
to achieve a 50% improvement in runtime over the current state-of-the-art
GraphLab partitioning scheme.
|
1308.6833 | Stability of Polynomial Differential Equations: Complexity and Converse
Lyapunov Questions | math.OC cs.CC cs.SY math.CA math.DS | We consider polynomial differential equations and make a number of
contributions to the questions of (i) complexity of deciding stability, (ii)
existence of polynomial Lyapunov functions, and (iii) existence of sum of
squares (sos) Lyapunov functions.
(i) We show that deciding local or global asymptotic stability of cubic
vector fields is strongly NP-hard. Simple variations of our proof are shown to
imply strong NP-hardness of several other decision problems: testing local
attractivity of an equilibrium point, stability of an equilibrium point in the
sense of Lyapunov, invariance of the unit ball, boundedness of trajectories,
convergence of all trajectories in a ball to a given equilibrium point,
existence of a quadratic Lyapunov function, local collision avoidance, and
existence of a stabilizing control law.
(ii) We present a simple, explicit example of a globally asymptotically
stable quadratic vector field on the plane which does not admit a polynomial
Lyapunov function (joint work with M. Krstic). For the subclass of homogeneous
vector fields, we conjecture that asymptotic stability implies existence of a
polynomial Lyapunov function, but show that the minimum degree of such a
Lyapunov function can be arbitrarily large even for vector fields in fixed
dimension and degree. For the same class of vector fields, we further establish
that there is no monotonicity in the degree of polynomial Lyapunov functions.
(iii) We show via an explicit counterexample that if the degree of the
polynomial Lyapunov function is fixed, then sos programming may fail to find a
valid Lyapunov function even though one exists. On the other hand, if the
degree is allowed to increase, we prove that existence of a polynomial Lyapunov
function for a planar or a homogeneous vector field implies existence of a
polynomial Lyapunov function that is sos and that the negative of its
derivative is also sos.
|
1309.0003 | Concentration Inequalities for Bounded Random Vectors | math.PR cs.LG math.ST stat.TH | We derive simple concentration inequalities for bounded random vectors, which
generalize Hoeffding's inequalities for bounded scalar random variables. As
applications, we apply the general results to multinomial and Dirichlet
distributions to obtain multivariate concentration inequalities.
|
1309.0040 | Enhanced Flow in Small-World Networks | cond-mat.dis-nn cs.SI physics.soc-ph | The small-world property is known to have a profound effect on the navigation
efficiency of complex networks [J. M. Kleinberg, Nature 406, 845 (2000)].
Accordingly, the proper addition of shortcuts to a regular substrate can lead
to the formation of a highly efficient structure for information propagation.
Here we show that enhanced flow properties can also be observed in these
complex topologies. Precisely, our model is a network built from an underlying
regular lattice over which long-range connections are randomly added according
to the probability, $P_{ij}\sim r_{ij}^{-\alpha}$, where $r_{ij}$ is the
Manhattan distance between nodes $i$ and $j$, and the exponent $\alpha$ is a
controlling parameter. The mean two-point global conductance of the system is
computed by considering that each link has a local conductance given by
$g_{ij}\propto r_{ij}^{-\delta}$, where $\delta$ determines the extent of the
geographical limitations (costs) on the long-range connections. Our results
show that the best flow conditions are obtained for $\delta=0$ with $\alpha=0$,
while for $\delta \gg 1$ the overall conductance always increases with
$\alpha$. For $\delta\approx 1$, $\alpha=d$ becomes the optimal exponent, where
$d$ is the topological dimension of the substrate. Interestingly, this exponent
is identical to the one obtained for optimal navigation in small-world networks
using decentralized algorithms.
|
1309.0052 | Accelerating a Cloud-Based Software GNSS Receiver | cs.PF cs.CE cs.DC | In this paper we discuss ways to reduce the execution time of a software
Global Navigation Satellite System (GNSS) receiver that is meant for offline
operation in a cloud environment. Client devices record satellite signals they
receive, and send them to the cloud, to be processed by this software. The goal
of this project is for each client request to be processed as fast as possible,
but also to increase total system throughput by making sure as many requests as
possible are processed within a unit of time. The characteristics of our
application provided both opportunities and challenges for increasing
performance. We describe the speedups we obtained by enabling the software to
exploit multi-core CPUs and GPGPUs. We mention which techniques worked for us
and which did not. To increase throughput, we describe how we control the
resources allocated to each invocation of the software to process a client
request, such that multiple copies of the application can run at the same time.
We use the notion of effective running time to measure the system's throughput
when running multiple instances at the same time, and show how we can determine
when the system's computing resources have been saturated.
|
1309.0085 | Artificial Intelligence Based Cognitive Routing for Cognitive Radio
Networks | cs.NI cs.AI | Cognitive radio networks (CRNs) are networks of nodes equipped with cognitive
radios that can optimize performance by adapting to network conditions. While
cognitive radio networks (CRNs) are envisioned as intelligent networks,
relatively little research has focused on the network level functionality of
CRNs. Although various routing protocols, incorporating varying degrees of
adaptiveness, have been proposed for CRNs, it is imperative for the long term
success of CRNs that the design of cognitive routing protocols be pursued by
the research community. Cognitive routing protocols are envisioned as routing
protocols that fully and seamlessly incorporate AI-based techniques into their
design. In this paper, we provide a self-contained tutorial on various AI and
machine-learning techniques that have been, or can be, used for developing
cognitive routing protocols. We also survey the application of various classes
of AI techniques to CRNs in general, and to the problem of routing in
particular. We discuss various decision making techniques and learning
techniques from AI and document their current and potential applications to the
problem of routing in CRNs. We also highlight the various inference, reasoning,
modeling, and learning subtasks that a cognitive routing protocol must solve.
Finally, open research issues and future directions of work are identified.
|
1309.0088 | Caching Gain in Wireless Networks with Fading: A Multi-User Diversity
Perspective | cs.IT math.IT | We consider the effect of caching in wireless networks where fading is the
dominant channel effect. First, we propose a one-hop transmission strategy for
cache-enabled wireless networks, which is based on exploiting multi-user
diversity gain. Then, we derive a closed-form result for throughput scaling of
the proposed scheme in large networks, which reveals the inherent trade-off
between cache memory size and network throughput. Our results show that
substantial throughput improvements are achievable in networks with sources
equipped with large cache size. We also verify our analytical result through
simulations.
|
1309.0111 | Turing Instability in Reaction-Diffusion Systems with a Single Diffuser:
Characterization Based on Root Locus | cs.SY nlin.PS q-bio.QM | Cooperative behaviors arising from bacterial cell-to-cell communication can
be modeled by reaction-diffusion equations having only a single diffusible
component. This paper presents the following three contributions for the
systematic analysis of Turing instability in such reaction-diffusion systems.
(i) We first introduce a unified framework to formulate the reaction-diffusion
system as an interconnected multi-agent dynamical system. (ii) Then, we
mathematically classify biologically plausible and implausible Turing
instabilities and characterize them by the root locus of each agent's dynamics,
or the local reaction dynamics. (iii) Using this characterization, we derive
analytic conditions for biologically plausible Turing instability, which
provide useful guidance for the design and the analysis of biological networks.
These results are demonstrated on an extended Gray-Scott model with a single
diffuser.
|
1309.0113 | Non-Asymptotic Convergence Analysis of Inexact Gradient Methods for
Machine Learning Without Strong Convexity | math.OC cs.LG | Many recent applications in machine learning and data fitting call for the
algorithmic solution of structured smooth convex optimization problems.
Although the gradient descent method is a natural choice for this task, it
requires exact gradient computations and hence can be inefficient when the
problem size is large or the gradient is difficult to evaluate. Therefore,
there has been much interest in inexact gradient methods (IGMs), in which an
efficiently computable approximate gradient is used to perform the update in
each iteration. Currently, non-asymptotic linear convergence results for IGMs
are typically established under the assumption that the objective function is
strongly convex, which is not satisfied in many applications of interest; while
linear convergence results that do not require the strong convexity assumption
are usually asymptotic in nature. In this paper, we combine the best of these
two types of results and establish---under the standard assumption that the
gradient approximation errors decrease linearly to zero---the non-asymptotic
linear convergence of IGMs when applied to a class of structured convex
optimization problems. Such a class covers settings where the objective
function is not necessarily strongly convex and includes the least squares and
logistic regression problems. We believe that our techniques will find further
applications in the non-asymptotic convergence analysis of other first-order
methods.
|
1309.0123 | A Robust Alternating Direction Method for Constrained Hybrid Variational
Deblurring Model | cs.CV | In this work, a new constrained hybrid variational deblurring model is
developed by combining the non-convex first- and second-order total variation
regularizers. Moreover, a box constraint is imposed on the proposed model to
guarantee high deblurring performance. The developed constrained hybrid
variational model could achieve a good balance between preserving image details
and alleviating ringing artifacts. In what follows, we present the
corresponding numerical solution by employing an iteratively reweighted
algorithm based on alternating direction method of multipliers. The
experimental results demonstrate the superior performance of the proposed
method in terms of quantitative and qualitative image quality assessments.
|
1309.0129 | Information filtering via hybridization of similarity preferential
diffusion processes | cs.IR cs.SI physics.soc-ph | The recommender system is one of the most promising ways to address the
information overload problem in online systems. Based on the personal
historical record, the recommender system can find interesting and relevant
objects for the user within a huge information space. Many physical processes
such as the mass diffusion and heat conduction have been applied to design the
recommendation algorithms. The hybridization of these two algorithms has been
shown to provide both accurate and diverse recommendation results. In this
paper, we propose two similarity-preferential diffusion processes. Extensive
experimental analyses on two benchmark data sets demonstrate that both
recommendation accuracy and diversity are improved due to the similarity
preference in the diffusion. The hybridization of the similarity-preferential
diffusion processes is shown to significantly outperform the state-of-the-art
recommendation algorithm. Finally, our analysis of network sparsity shows that
there is a significant difference between dense and sparse systems, indicating
that earlier conclusions on recommendation in the literature should be
reexamined in sparse systems.
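The mass-diffusion/heat-conduction hybrid that this work builds on can be sketched as follows. This is the classic ProbS/HeatS hybrid score, not the similarity-preferential variant proposed in the paper, and the toy rating matrix `A` is invented for illustration.

```python
def hybrid_scores(A, user, lam=0.5):
    """Hybrid of mass diffusion (ProbS, lam = 1) and heat conduction
    (HeatS, lam = 0) on a user-object bipartite network A (users x objects):
    W_ab = (1 / (k_a^(1-lam) * k_b^lam)) * sum_u A[u][a]*A[u][b] / k_u,
    summed over objects b already collected by `user`."""
    n_users, n_obj = len(A), len(A[0])
    k_obj = [sum(A[u][o] for u in range(n_users)) for o in range(n_obj)]
    k_usr = [sum(row) for row in A]
    scores = [0.0] * n_obj
    for a in range(n_obj):
        if A[user][a] or k_obj[a] == 0:
            continue  # score only objects the target user has not collected
        for b in range(n_obj):
            if not A[user][b]:
                continue
            overlap = sum(A[u][a] * A[u][b] / k_usr[u] for u in range(n_users))
            scores[a] += overlap / (k_obj[a] ** (1 - lam) * k_obj[b] ** lam)
    return scores

# Toy bipartite network: 3 users x 4 objects (invented for illustration).
A = [[1, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 1, 1]]
scores = hybrid_scores(A, user=0, lam=0.5)
print(scores[2] > scores[3] > 0)  # True: object 2 outranks object 3 for user 0
```

Setting `lam` closer to 1 emphasizes accuracy (mass diffusion); closer to 0 emphasizes diversity (heat conduction), which is the trade-off the hybridization exploits.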
|
1309.0136 | Near-optimal Frequency-weighted Interpolatory Model Reduction | cs.SY math.DS math.NA | This paper develops an interpolatory framework for weighted-$\mathcal{H}_2$
model reduction of MIMO dynamical systems. A new representation of the
weighted-$\mathcal{H}_2$ inner products in MIMO settings is introduced and used
to derive associated first-order necessary conditions satisfied by optimal
weighted-$\mathcal{H}_2$ reduced-order models. Equivalence of these new
interpolatory conditions with earlier Riccati-based conditions given by Halevi
is also shown. An examination of realizations for equivalent
weighted-$\mathcal{H}_2$ systems leads then to an algorithm that remains
tractable for large state-space dimension. Several numerical examples
illustrate the effectiveness of this approach and its competitiveness with
Frequency Weighted Balanced Truncation and an earlier interpolatory approach,
the Weighted Iterative Rational Krylov Algorithm.
|
1309.0141 | Empirical distribution of good channel codes with non-vanishing error
probability (extended version) | cs.IT math.IT math.PR | This paper studies several properties of channel codes that approach the
fundamental limits of a given (discrete or Gaussian) memoryless channel with a
non-vanishing probability of error. The output distribution induced by an
$\epsilon$-capacity-achieving code is shown to be close in a strong sense to
the capacity achieving output distribution. Relying on the concentration of
measure (isoperimetry) property enjoyed by the latter, it is shown that regular
(Lipschitz) functions of channel outputs can be precisely estimated and turn
out to be essentially non-random and independent of the actual code. It is also
shown that the output distribution of a good code and the capacity achieving
one cannot be distinguished with exponential reliability. The random process
produced at the output of the channel is shown to satisfy the asymptotic
equipartition property. Using related methods it is shown that quadratic forms
and sums of $q$-th powers when evaluated at codewords of good AWGN codes
approach the values obtained from a randomly generated Gaussian codeword.
|
1309.0145 | Delay Minimization for Instantly Decodable Network Coding in Persistent
Channels with Feedback Intermittence | cs.IT math.IT | In this paper, we consider the problem of minimizing the multicast decoding
delay of generalized instantly decodable network coding (G-IDNC) over
persistent forward and feedback erasure channels with feedback intermittence.
In such an environment, the sender does not always receive acknowledgement from
the receivers after each transmission. Moreover, both the forward and feedback
channels are subject to persistent erasures, which can be modelled by a two
state (good and bad states) Markov chain known as Gilbert-Elliott channel
(GEC). Due to such feedback imperfections, the sender is unable to determine
subsequent instantly decodable packet combinations for all receivers. Given
this harsh channel and feedback model, we first derive expressions for the
probability distributions of decoding delay increments and then employ these
expressions in formulating the decoding delay minimization problem in such an environment as
a maximum weight clique problem in the G-IDNC graph. We also show that the
problem formulations in simpler channel and feedback models are special cases
of our generalized formulation. Since this problem is NP-hard, we design a
greedy algorithm to solve it and compare it to blind approaches proposed in the
literature. Through extensive simulations, our adaptive algorithm is shown to
outperform the blind approaches in all situations and to achieve significant
improvement in the decoding delay, especially when the channel is highly
persistent.
|
1309.0157 | A complementary construction using mutually unbiased bases | cs.IT cs.DM math.IT | We present a construction for complementary pairs of arrays that exploits a
set of mutually-unbiased bases, and enumerate these arrays as well as the
corresponding set of complementary sequences obtained from the arrays by
projection. We also sketch an algorithm to uniquely generate these sequences.
The pairwise squared inner-product of members of the sequence set is shown to
be $\frac{1}{2}$. Moreover, a subset of the set can be viewed as a codebook
that asymptotically achieves $\sqrt{\frac{3}{2}}$ times the Welch bound.
|
1309.0158 | Robustness of large-scale stochastic matrices to localized perturbations | math.PR cs.DM cs.SI cs.SY | Upper bounds are derived on the total variation distance between the
invariant distributions of two stochastic matrices differing on a subset W of
rows. Such bounds depend on three parameters: the mixing time and the minimal
expected hitting time on W for the Markov chain associated to one of the
matrices; and the escape time from W for the Markov chain associated to the
other matrix. These results, obtained through coupling techniques, prove
particularly useful in scenarios where W is a small subset of the state space,
even if the difference between the two matrices is not small in any norm.
Several applications to large-scale network problems are discussed, including
robustness of Google's PageRank algorithm, distributed averaging and consensus
algorithms, and interacting particle systems.
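The quantity being bounded, the total variation distance between invariant distributions of two stochastic matrices differing only on a row subset W, can be computed directly in a toy example. The 4-state chains below are invented for illustration; this sketch shows the quantity itself, not the coupling-based bound.

```python
def stationary(P, iters=500):
    """Power iteration for the invariant distribution of a row-stochastic P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def tv_distance(p, q):
    """Total variation distance between two probability vectors."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# A fast-mixing 4-state chain, and a copy perturbed only on row W = {3}.
P = [[0.7 if i == j else 0.1 for j in range(4)] for i in range(4)]
Q = [row[:] for row in P]
Q[3] = [0.4, 0.3, 0.3, 0.0]       # large change, but confined to one row
gap = tv_distance(stationary(P), stationary(Q))
print(0 < gap < 0.25)  # True: the invariant distribution moves only modestly
```

Even though the perturbed row differs from the original in a norm of order one, the invariant distribution shifts far less, which is the robustness phenomenon the bounds formalize via mixing, hitting, and escape times.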
|
1309.0165 | On time-reversibility of linear stochastic models | cs.SY math.PR | Reversal of the time direction in stochastic systems driven by white noise
has been central throughout the development of stochastic realization theory,
filtering and smoothing. Similar ideas were developed in connection with
certain problems in the theory of moments, where a duality induced by time
reversal was introduced to parametrize solutions. In this latter work it was
shown that stochastic systems driven by arbitrary second-order stationary
processes can be similarly time-reversed. By combining these two sets of ideas
we present herein a generalization of time-reversal in stochastic realization
theory.
|
1309.0186 | A Solution to the Network Challenges of Data Recovery in Erasure-coded
Distributed Storage Systems: A Study on the Facebook Warehouse Cluster | cs.NI cs.DC cs.IT math.IT | Erasure codes, such as Reed-Solomon (RS) codes, are being increasingly
employed in data centers to combat the cost of reliably storing large amounts
of data. Although these codes provide optimal storage efficiency, they require
significantly high network and disk usage during recovery of missing data. In
this paper, we first present a study on the impact of recovery operations of
erasure-coded data on the data-center network, based on measurements from
Facebook's warehouse cluster in production. To the best of our knowledge, this
is the first study of its kind available in the literature. Our study reveals
that recovery of RS-coded data results in a significant increase in network
traffic, more than a hundred terabytes per day, in a cluster storing multiple
petabytes of RS-coded data.
To address this issue, we present a new storage code using our recently
proposed "Piggybacking" framework, that reduces the network and disk usage
during recovery by 30% in theory, while also being storage optimal and
supporting arbitrary design parameters. The implementation of the proposed code
in the Hadoop Distributed File System (HDFS) is underway. We use the
measurements from the warehouse cluster to show that the proposed code would
lead to a reduction of close to fifty terabytes of cross-rack traffic per day.
|
1309.0193 | Design of Minimum Correlated, Maximal Clique Sets of One-Dimensional
Uni-polar (Optical) Orthogonal Codes | cs.IT math.IT | This paper proposes an algorithm to search a family of multiple sets of
minimum correlated one dimensional uni-polar (optical) orthogonal codes
(1-DUOC) or optical orthogonal codes (OOC) with fixed as well as variable code
parameters. The cardinality of each set is equal to the upper bound. The codes
within a set can be searched for general values of code length, code weight,
auto-correlation constraint and cross-correlation constraint. Each set forms a
maximal clique of the codes within the given range of correlation properties.
These one-dimensional uni-polar orthogonal codes can find their application as
signature sequences for spectral spreading purpose in incoherent optical code
division multiple access (CDMA) systems.
|
1309.0213 | Learning to Rank for Blind Image Quality Assessment | cs.CV | Blind image quality assessment (BIQA) aims to predict perceptual image
quality scores without access to reference images. State-of-the-art BIQA
methods typically require subjects to score a large number of images to train a
robust model. However, subjective quality scores are imprecise, biased, and
inconsistent, and it is challenging to obtain a large scale database, or to
extend existing databases, because of the inconvenience of collecting images,
training the subjects, conducting subjective experiments, and realigning human
quality evaluations. To combat these limitations, this paper explores and
exploits preference image pairs (PIPs) such as "the quality of image $I_a$ is
better than that of image $I_b$" for training a robust BIQA model. The
preference label, representing the relative quality of two images, is generally
precise and consistent, and is not sensitive to image content, distortion type,
or subject identity; such PIPs can be generated at very low cost. The proposed
BIQA method is one of learning to rank. We first formulate the problem of
learning the mapping from the image features to the preference label as one of
classification. In particular, we investigate the utilization of a multiple
kernel learning algorithm based on group lasso (MKLGL) to provide a solution. A
simple but effective strategy to estimate perceptual image quality scores is
then presented. Experiments show that the proposed BIQA method is highly
effective and achieves comparable performance to state-of-the-art BIQA
algorithms. Moreover, the proposed method can be easily extended to new
distortion categories.
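The reduction from preference image pairs to classification can be illustrated with a deliberately simplified stand-in: a perceptron on feature differences in place of the MKLGL classifier used in the paper. The 2-d "image features" and preference pairs below are invented.

```python
def train_preference_perceptron(pips, dim, epochs=50):
    """Perceptron on feature differences: each PIP (fa, fb) says 'the image
    with features fa has better quality than the one with fb', so we seek
    a weight vector w with w.(fa - fb) > 0 for every pair."""
    w = [0.0] * dim
    for _ in range(epochs):
        for fa, fb in pips:
            diff = [x - y for x, y in zip(fa, fb)]
            if sum(wi * di for wi, di in zip(w, diff)) <= 0:
                w = [wi + di for wi, di in zip(w, diff)]
    return w

# Invented 2-d features; the (hidden) true quality happens to be 2*f0 - f1.
i1, i2, i3, i4 = (1, 0), (0, 1), (2, 3), (3, 1)
pips = [(i4, i1), (i1, i3), (i3, i2)]   # left member preferred in each pair
w = train_preference_perceptron(pips, dim=2)
score = lambda f: sum(wi * fi for wi, fi in zip(w, f))
print(score(i4) > score(i1) > score(i3) > score(i2))  # True
```

Note that only the three pairwise labels were given, yet the learned scorer recovers the full quality ranking, which is why preference labels can substitute for expensive absolute-quality scores.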
|
1309.0238 | API design for machine learning software: experiences from the
scikit-learn project | cs.LG cs.MS | Scikit-learn is an increasingly popular machine learning library. Written
in Python, it is designed to be simple and efficient, accessible to
non-experts, and reusable in various contexts. In this paper, we present and
discuss our design choices for the application programming interface (API) of
the project. In particular, we describe the simple and elegant interface shared
by all learning and processing units in the library and then discuss its
advantages in terms of composition and reusability. The paper also comments on
implementation details specific to the Python ecosystem and analyzes obstacles
faced by users and developers of the library.
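The shared interface discussed here — hyperparameters set in `__init__`, `fit` returning `self` to enable chaining, learned attributes marked with a trailing underscore — can be sketched with a toy estimator. This is pure Python with no scikit-learn dependency, and `MeanPredictor` is an invented example, not part of the library.

```python
class MeanPredictor:
    """Toy estimator illustrating the scikit-learn interface conventions."""

    def __init__(self, offset=0.0):
        # Hyperparameters are stored verbatim; no validation or learning here.
        self.offset = offset

    def get_params(self, deep=True):
        # Expose hyperparameters for inspection and model selection.
        return {"offset": self.offset}

    def set_params(self, **params):
        for key, value in params.items():
            setattr(self, key, value)
        return self

    def fit(self, X, y):
        # Learned attributes carry a trailing underscore by convention.
        self.mean_ = sum(y) / len(y)
        return self  # returning self enables fit(...).predict(...)

    def predict(self, X):
        return [self.mean_ + self.offset for _ in X]

model = MeanPredictor(offset=1.0)
preds = model.fit([[0], [1], [2]], [2.0, 4.0, 6.0]).predict([[5], [6]])
print(preds)  # [5.0, 5.0]
```

Because every estimator exposes the same `fit`/`predict`/`get_params` surface, generic tools such as pipelines and grid search can compose them without knowing their internals, which is the composability advantage the paper discusses.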
|
1309.0239 | Energy-Neutral Source-Channel Coding with Battery and Memory Size
Constraints | cs.IT math.IT | We study energy management policies for the compression and transmission of
source data collected by an energy-harvesting sensor node with a finite energy
buffer (e.g., rechargeable battery) and a finite data buffer (memory) between
source encoder and channel encoder. The sensor node can adapt the source and
channel coding rates depending on the observation and channel states. In such a
system, the absence of precise information about the amount of energy available
in the future is a key challenge. We provide analytical bounds and scaling laws
for the average distortion that depend on the size of the energy and data
buffers. We furthermore design a resource allocation policy that achieves
almost optimal distortion scaling. Our results demonstrate that the energy
leakage of state-of-the-art energy management policies can be avoided by jointly
controlling the source and channel coding rates.
|
1309.0242 | Ensemble approaches for improving community detection methods | physics.soc-ph cs.LG cs.SI stat.ML | Statistical estimates can often be improved by fusion of data from several
different sources. One example is so-called ensemble methods which have been
successfully applied in areas such as machine learning for classification and
clustering. In this paper, we present an ensemble method to improve community
detection by aggregating the information found in an ensemble of community
structures. This ensemble can be found by re-sampling methods, multiple runs of a
stochastic community detection method, or by several different community
detection algorithms applied to the same network. The proposed method is
evaluated using random networks with community structures and compared with two
commonly used community detection methods. The proposed method when applied on
a stochastic community detection algorithm performs well with low computational
complexity, thus offering both a new approach to community detection and an
additional community detection method.
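One way to aggregate an ensemble of community structures is a co-occurrence (consensus) partition, sketched below: two nodes end up together if they share a community in a majority of ensemble members. The majority threshold and the toy three-run ensemble are illustrative choices, not the paper's specific aggregation rule.

```python
from itertools import combinations

def consensus_communities(partitions, threshold=0.5):
    """Merge nodes that co-occur in the same community in more than
    `threshold` of the ensemble members (each member maps node -> label)."""
    nodes = sorted({n for p in partitions for n in p})
    k = len(partitions)
    # Count how often each node pair shares a community across the ensemble.
    together = {pair: 0 for pair in combinations(nodes, 2)}
    for p in partitions:
        for a, b in together:
            if p[a] == p[b]:
                together[(a, b)] += 1
    # Union-find merge of pairs above the agreement threshold.
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    for (a, b), c in together.items():
        if c / k > threshold:
            parent[find(a)] = find(b)
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), []).append(n)
    return sorted(sorted(g) for g in groups.values())

# Three runs of a stochastic detector: two clean, one with node 1 misassigned.
runs = [
    {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"},
    {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"},
    {0: "A", 1: "B", 2: "A", 3: "B", 4: "B", 5: "B"},
]
print(consensus_communities(runs))  # [[0, 1, 2], [3, 4, 5]]
```

The single noisy assignment is voted down by the other ensemble members, which is the error-correcting effect that makes the aggregated structure more reliable than any individual run.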
|
1309.0261 | Multi-Column Deep Neural Networks for Offline Handwritten Chinese
Character Classification | cs.CV | Our Multi-Column Deep Neural Networks achieve the best known recognition rates on
Chinese characters from the ICDAR 2011 and 2013 offline handwriting
competitions, approaching human performance.
|
1309.0270 | High-Accuracy Total Variation for Compressed Video Sensing | math.OC cs.CV | Numerous total variation (TV) regularizers, employed in image restoration
problems, encode the gradients by means of a simple $[-1,1]$ FIR filter. Despite
its low computational cost, this filter severely distorts the signal's
high-frequency components pertinent to edge/discontinuity information, causing
several deficiency issues known as texture and geometric loss. This paper
addresses this problem by proposing an alternative model to the TV
regularization problem via high order accuracy differential FIR filters to
preserve rapid transitions in signal recovery. A numerical encoding scheme is
designed to extend the TV model into multidimensional representation (tensorial
decomposition). We adopt this design to regulate the spatial and temporal
redundancy in compressed video sensing problem to jointly recover frames from
under-sampled measurements. We then seek the solution via alternating direction
methods of multipliers and find a unique solution to quadratic minimization
step with capability of handling different boundary conditions. The resulting
algorithm uses much lower sampling rate and highly outperforms alternative
state-of-the-art methods. This is evaluated both in terms of restoration
accuracy and visual quality of the recovered frames.
|
1309.0302 | Unmixing Incoherent Structures of Big Data by Randomized or Greedy
Decomposition | stat.ML cs.DS cs.LG | Learning big data by matrix decomposition always suffers from expensive
computation, mixing of complicated structures and noise. In this paper, we
study more adaptive models and efficient algorithms that decompose a data
matrix as the sum of semantic components with incoherent structures. We first
introduce "GO decomposition (GoDec)", an alternating projection method
estimating the low-rank part $L$ and the sparse part $S$ from data matrix
$X=L+S+G$ corrupted by noise $G$. Two acceleration strategies are proposed to
obtain scalable unmixing algorithm on big data: 1) Bilateral random projection
(BRP) is developed to speed up the update of $L$ in GoDec by a closed-form
built from left and right random projections of $X-S$ in lower dimensions; 2)
Greedy bilateral (GreB) paradigm updates the left and right factors of $L$ in a
mutually adaptive and greedy incremental manner, and achieve significant
improvement in both time and sample complexity. We then propose three
nontrivial variants of GoDec that generalize it to more general data types
and whose fast algorithms can be derived from the two strategies...
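The bilateral random projection (BRP) step can be illustrated in its simplest rank-1 form: with $Y_1 = X a_1$, $a_2 = Y_1$, $Y_2 = X^T a_2$, the closed form $L = Y_1 (a_2^T Y_1)^{-1} Y_2^T$ recovers an exactly rank-1 $X$ without an SVD. This sketch omits the rank-$r$ case, the sparse part $S$, and the alternating GoDec iteration; the toy matrix is invented.

```python
import random

def brp_rank1(X):
    """Rank-1 bilateral random projection (the BRP low-rank update, r = 1):
    L = Y1 (a2^T Y1)^{-1} Y2^T with Y1 = X a1, a2 = Y1, Y2 = X^T a2."""
    m, n = len(X), len(X[0])
    rng = random.Random(0)
    a1 = [rng.gauss(0, 1) for _ in range(n)]                          # random probe
    y1 = [sum(X[i][j] * a1[j] for j in range(n)) for i in range(m)]   # Y1 = X a1
    y2 = [sum(X[i][j] * y1[i] for i in range(m)) for j in range(n)]   # Y2 = X^T y1
    scale = sum(v * v for v in y1)                                    # a2^T Y1
    return [[y1[i] * y2[j] / scale for j in range(n)] for i in range(m)]

u, v = [1.0, 2.0, 3.0], [4.0, 5.0]
X = [[ui * vj for vj in v] for ui in u]       # exactly rank-1 data
L = brp_rank1(X)
err = max(abs(L[i][j] - X[i][j]) for i in range(3) for j in range(2))
print(err < 1e-9)  # True: the rank-1 structure is recovered exactly
```

The attraction is cost: only matrix-vector products with $X$ and $X^T$ are needed, which is what makes the update of $L$ scalable on big data.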
|
1309.0303 | Fundamental Limits of HRR Profiling and Velocity Compensation For
Stepped-Frequency Waveforms | cs.IT math.IT | The stepped-frequency (SF) waveform is an effective way to achieve high range
resolution (HRR) in modern radars. In this paper, we determine some fundamental
limits of SF waveforms on ambiguity, stability and accuracy of stable targets
profiling, and velocity compensation accuracy of moving targets. The
investigation shows that, by using the information contained in both the phase and
envelope of the echo signal, the radar can achieve HRR profiles without
ambiguity under a looser criterion, and can compensate the range shift caused
by targets' radial velocity. The results of this paper can help the SF waveform
design and the processing algorithm development for HRR profiling and velocity
compensation.
|
1309.0305 | Quantifying 'causality' in complex systems: Understanding Transfer
Entropy | cond-mat.stat-mech cs.IT math.IT | 'Causal' direction is of great importance when dealing with complex systems.
Often big volumes of data in the form of time series are available and it is
important to develop methods that can inform about possible causal connections
between the different observables. Here we investigate the ability of the
Transfer Entropy measure to identify causal relations embedded in emergent
coherent correlations. We do this by first applying Transfer Entropy to an
amended Ising model. In addition we use a simple Random Transition model to
test the reliability of Transfer Entropy as a measure of `causal' direction in
the presence of stochastic fluctuations. In particular we systematically study
the effect of the finite size of data sets.
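A minimal plug-in estimator of the quantity under study (history length 1) can be written directly from the definition $T_{Y\to X} = \sum p(x_{t+1}, x_t, y_t) \log_2 \frac{p(x_{t+1}\mid x_t, y_t)}{p(x_{t+1}\mid x_t)}$. The lagged-copy toy series below is invented to show the directional asymmetry.

```python
import math
import random
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in Transfer Entropy T_{source -> target} (history length 1, bits):
    how much source[t] reduces uncertainty about target[t+1] beyond target[t]."""
    triples = Counter(zip(target[1:], target[:-1], source[:-1]))
    pairs_xy = Counter(zip(target[:-1], source[:-1]))
    pairs_xx = Counter(zip(target[1:], target[:-1]))
    singles = Counter(target[:-1])
    n = len(target) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_xy[(x0, y0)]          # p(x1 | x0, y0)
        p_cond_self = pairs_xx[(x1, x0)] / singles[x0]  # p(x1 | x0)
        te += p_joint * math.log2(p_cond_full / p_cond_self)
    return te

rng = random.Random(7)
y = [rng.randint(0, 1) for _ in range(5000)]
x = [0] + y[:-1]                       # x copies y with a one-step lag
print(transfer_entropy(y, x) > 0.9)    # True: y drives x, near 1 bit
print(transfer_entropy(x, y) < 0.05)   # True: x adds no information about y
```

Shrinking the series length inflates the second estimate toward spurious positive values, which is the finite-size effect the paper studies systematically.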
|
1309.0309 | A Study on Unsupervised Dictionary Learning and Feature Encoding for
Action Classification | cs.CV | Many efforts have been devoted to develop alternative methods to traditional
vector quantization in image domain such as sparse coding and soft-assignment.
These approaches can be split into a dictionary learning phase and a feature
encoding phase which are often closely connected. In this paper, we investigate
the effects of these phases by separating them for video-based action
classification. We compare several dictionary learning methods and feature
encoding schemes through extensive experiments on KTH and HMDB51 datasets.
Experimental results indicate that sparse coding performs consistently better
than the other encoding methods on the large, complex dataset (i.e., HMDB51), and it
is robust to different dictionaries. For the small, simple dataset (i.e., KTH) with
less variation, however, all the encoding strategies perform competitively. In
addition, we note that the strength of sophisticated encoding approaches comes
not from their corresponding dictionaries but from the encoding mechanisms, and we
can just use randomly selected exemplars as dictionaries for video-based action
classification.
|
1309.0326 | Tagging Scientific Publications using Wikipedia and Natural Language
Processing Tools. Comparison on the ArXiv Dataset | cs.CL cs.DL | In this work, we compare two simple methods of tagging scientific
publications with labels reflecting their content. Wikipedia is employed as the
first source of labels; the second label set is constructed from the noun phrases
occurring in the analyzed corpus. We examine the statistical properties and the
effectiveness of both approaches on a dataset consisting of abstracts from
0.7 million scientific documents deposited in the ArXiv preprint collection.
We believe that obtained tags can be later on applied as useful document
features in various machine learning tasks (document similarity, clustering,
topic modelling, etc.).
|
1309.0337 | Scalable Probabilistic Entity-Topic Modeling | stat.ML cs.IR cs.LG | We present an LDA approach to entity disambiguation. Each topic is associated
with a Wikipedia article and topics generate either content words or entity
mentions. Training such models is challenging because of the topic and
vocabulary size, both in the millions. We tackle these problems using a novel
distributed inference and representation framework based on a parallel Gibbs
sampler guided by the Wikipedia link graph, and pipelines of MapReduce allowing
fast and memory-frugal processing of large datasets. We report state-of-the-art
performance on a public dataset.
|
1309.0363 | Sigma Point Belief Propagation | cs.AI cs.DC | The sigma point (SP) filter, also known as unscented Kalman filter, is an
attractive alternative to the extended Kalman filter and the particle filter.
Here, we extend the SP filter to nonsequential Bayesian inference corresponding
to loopy factor graphs. We propose sigma point belief propagation (SPBP) as a
low-complexity approximation of the belief propagation (BP) message passing
scheme. SPBP achieves approximate marginalizations of posterior distributions
corresponding to (generally) loopy factor graphs. It is well suited for
decentralized inference because of its low communication requirements. For a
decentralized, dynamic sensor localization problem, we demonstrate that SPBP
can outperform nonparametric (particle-based) BP while requiring significantly
less computation and communication.
|
1309.0365 | Guaranteed Cost Tracking for Uncertain Coupled Multi-agent Systems Using
Consensus over a Directed Graph | cs.SY | This paper considers the leader-follower control problem for a linear
multi-agent system with directed communication topology and linear nonidentical
uncertain coupling subject to integral quadratic constraints (IQCs). A
consensus-type control protocol is proposed based on each agent's states
relative to its neighbors and the leader's state relative to the agents that observe
the leader. A sufficient condition is obtained by overbounding the cost
function. Based on this sufficient condition, a computational algorithm is
introduced to minimize the proposed guaranteed bound on tracking performance,
which yields a suboptimal bound on the system consensus control and tracking
performance. The effectiveness of the proposed method is demonstrated using a
simulation example.
|
1309.0373 | ENFrame: A Platform for Processing Probabilistic Data | cs.DB | This paper introduces ENFrame, a unified data processing platform for
querying and mining probabilistic data. Using ENFrame, users can write programs
in a fragment of Python with constructs such as bounded-range loops, list
comprehension, aggregate operations on lists, and calls to external database
engines. The program is then interpreted probabilistically by ENFrame.
The realisation of ENFrame required novel contributions along several
directions. We propose an event language that is expressive enough to
succinctly encode arbitrary correlations, trace the computation of user
programs, and allow for computation of discrete probability distributions of
program variables. We exemplify ENFrame on three clustering algorithms:
k-means, k-medoids, and Markov Clustering. We introduce sequential and
distributed algorithms for computing the probability of interconnected events
exactly or approximately with error guarantees. Experiments with k-medoids
clustering of sensor readings from energy networks show orders-of-magnitude
improvements of exact clustering using ENFrame over na\"ive clustering in each
possible world, of approximate over exact, and of distributed over sequential
algorithms.
|
1309.0403 | On the Geometry of Balls in the Grassmannian and List Decoding of Lifted
Gabidulin Codes | cs.IT math.AG math.IT | The finite Grassmannian $\mathcal{G}_{q}(k,n)$ is defined as the set of all
$k$-dimensional subspaces of the ambient space $\mathbb{F}_{q}^{n}$. Subsets of
the finite Grassmannian are called constant dimension codes and have recently
found an application in random network coding. In this setting codewords from
$\mathcal{G}_{q}(k,n)$ are sent through a network channel and, since errors may
occur during transmission, the received words can possibly lie in
$\mathcal{G}_{q}(k',n)$, where $k'\neq k$. In this paper, we study the balls in
$\mathcal{G}_{q}(k,n)$ with a center that is not necessarily in
$\mathcal{G}_{q}(k,n)$. We describe the balls with respect to two different
metrics, namely the subspace and the injection metric. Moreover, we use two
different techniques for describing these balls, one is the Pl\"ucker embedding
of $\mathcal{G}_{q}(k,n)$, and the second one is a rational parametrization of
the matrix representation of the codewords.
With these results, we consider the problem of list decoding a certain family
of constant dimension codes, called lifted Gabidulin codes. We describe a way
of representing these codes by linear equations in either the matrix
representation or a subset of the Pl\"ucker coordinates. The union of these
equations and the equations which arise from the description of the ball of a
given radius in the Grassmannian describe the list of codewords with distance
less than or equal to the given radius from the received word.
|
1309.0442 | A Verifiable and Correct-by-Construction Controller for Robot Functional
Levels | cs.RO cs.SE | Autonomous robots are complex systems that require the interaction and
cooperation between numerous heterogeneous software components. In recent
times, robots are being increasingly used for complex and safety-critical
tasks, such as exploring Mars and assisting/replacing humans. Consequently,
robots are becoming critical systems that must meet safety properties, in
particular, logical, temporal and real-time constraints. To this end, we
present an evolution of the LAAS architecture for autonomous systems, in
particular its GenoM tool. This evolution relies on the BIP component-based
design framework, which has been successfully used in other domains such as
embedded systems. We show how we integrate BIP into our existing methodology
for developing the lowest (functional) level of robots. Particularly, we
discuss the componentization of the functional level, the synthesis of an
execution controller for it, and how we verify whether the resulting functional
level conforms to properties such as deadlock-freedom. We also show through
experimentation that the verification is feasible and usable for complex, real
world robotic systems, and that the BIP-based functional levels resulting from
our new methodology are, despite an overhead during execution, still practical
on real world robotic platforms. Our approach has been fully implemented in the
LAAS architecture, and the implementation has been used in several experiments
on a real robot.
|
1309.0448 | Distributed Sensing and Transmission of Sporadic Random Samples in a
Multiple-Access Channel | cs.IT math.IT | This work considers distributed sensing and transmission of sporadic random
samples. Lower bounds are derived for the reconstruction error of a single
normally or uniformly-distributed finite-dimensional vector imperfectly
measured by a network of sensors and transmitted with finite energy to a common
receiver via an additive white Gaussian noise asynchronous multiple-access
channel. Transmission makes use of a perfect causal feedback link to the
encoder connected to each sensor. A retransmission protocol inspired by the
classical scheme in [1] applied to the transmission of single and bi-variate
analog samples analyzed in [2] and [3] is extended to the more general network
scenario, for which asymptotic upper-bounds on the reconstruction error are
provided. Both the upper and lower-bounds show that collaboration can be
achieved through energy accumulation under certain circumstances. In order to
investigate the practical performance of the proposed retransmission protocol
we provide a numerical evaluation of the upper-bounds in the non-asymptotic
energy regime using low-order quantization in the sensors. The latter includes
a minor modification of the protocol to improve reconstruction fidelity.
Numerical results show that an increase in the size of the network brings
benefit in terms of performance, but that the gain in terms of energy
efficiency diminishes quickly at finite energies due to a non-coherent
combining loss.
|
1309.0458 | Capacity of Non-Malleable Codes | cs.IT cs.CC cs.CR math.IT | Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs (ICS
2010), encode messages $s$ in a manner so that tampering the codeword causes
the decoder to either output $s$ or a message that is independent of $s$. While
this is an impossible goal to achieve against unrestricted tampering functions,
rather surprisingly non-malleable coding becomes possible against every fixed
family $F$ of tampering functions that is not too large (for instance, when
$|F| \le \exp(2^{\alpha n})$ for some $\alpha \in [0, 1)$ where $n$ is the
number of bits in a codeword).
In this work, we study the "capacity of non-malleable coding", and establish
optimal bounds on the achievable rate as a function of the family size,
answering an open problem from Dziembowski et al. (ICS 2010). Specifically,
1. We prove that for every family $F$ with $|F| \le \exp(2^{\alpha n})$,
there exist non-malleable codes against $F$ with rate arbitrarily close to
$1-\alpha$ (this is achieved w.h.p. by a randomized construction).
2. We show the existence of families of size $\exp(n^{O(1)} 2^{\alpha n})$
against which there is no non-malleable code of rate $1-\alpha$ (in fact this
is the case w.h.p for a random family of this size).
3. We also show that $1-\alpha$ is the best achievable rate for the family of
functions which are only allowed to tamper the first $\alpha n$ bits of the
codeword, which is of special interest.
As a corollary, this implies that the capacity of non-malleable coding in the
split-state model (where the tampering function acts independently but
arbitrarily on the two halves of the codeword) equals 1/2.
We also give an efficient Monte Carlo construction of codes of rate close to
1 with polynomial time encoding and decoding that is non-malleable against any
fixed $c > 0$ and family $F$ of size $\exp(n^c)$, in particular tampering
functions with, say, cubic size circuits.
|
1309.0482 | Law of Log Determinant of Sample Covariance Matrix and Optimal
Estimation of Differential Entropy for High-Dimensional Gaussian
Distributions | math.ST cs.IT math.IT stat.TH | Differential entropy and log determinant of the covariance matrix of a
multivariate Gaussian distribution have many applications in coding,
communications, signal processing and statistical inference. In this paper we
consider in the high dimensional setting optimal estimation of the differential
entropy and the log-determinant of the covariance matrix. We first establish a
central limit theorem for the log determinant of the sample covariance matrix
in the high dimensional setting where the dimension $p(n)$ can grow with the
sample size $n$. An estimator of the differential entropy and the log
determinant is then considered. Optimal rate of convergence is obtained. It is
shown that in the case $p(n)/n \rightarrow 0$ the estimator is asymptotically
sharp minimax. The ultra-high dimensional setting where $p(n) > n$ is also
discussed.
|
1309.0489 | Relative Comparison Kernel Learning with Auxiliary Kernels | cs.LG | In this work we consider the problem of learning a positive semidefinite
kernel matrix from relative comparisons of the form: "object A is more similar
to object B than it is to C", where comparisons are given by humans. Existing
solutions to this problem assume many comparisons are provided to learn a high
quality kernel. However, this can be considered unrealistic for many real-world
tasks since relative assessments require human input, which is often costly or
difficult to obtain. Because of this, only a limited number of these
comparisons may be provided. In this work, we explore methods for aiding the
process of learning a kernel with the help of auxiliary kernels built from more
easily extractable information regarding the relationships among objects. We
propose a new kernel learning approach in which the target kernel is defined as
a conic combination of auxiliary kernels and a kernel whose elements are
learned directly. We formulate a convex optimization to solve for this target
kernel that adds only minor overhead to methods that use no auxiliary
information. Empirical results show that in the presence of few training
relative comparisons, our method can learn kernels that generalize to more
out-of-sample comparisons than methods that do not utilize auxiliary
information, as well as similar methods that learn metrics over objects.
|
1309.0535 | Decentralized Rigidity Maintenance Control with Range Measurements for
Multi-Robot Systems | cs.SY cs.MA cs.RO math.OC | This work proposes a fully decentralized strategy for maintaining the
formation rigidity of a multi-robot system using only range measurements, while
still allowing the graph topology to change freely over time. In this
direction, a first contribution of this work is an extension of rigidity theory
to weighted frameworks and the rigidity eigenvalue, which when positive ensures
the infinitesimal rigidity of the framework. We then propose a distributed
algorithm for estimating a common relative position reference frame amongst a
team of robots with only range measurements in addition to one agent endowed
with the capability of measuring the bearing to two other agents. This first
estimation step is embedded into a subsequent distributed algorithm for
estimating the rigidity eigenvalue associated with the weighted framework. The
estimate of the rigidity eigenvalue is finally used to generate a local control
action for each agent that both maintains the rigidity property and enforces
additional constraints such as collision avoidance and sensing/communication
range limits and occlusions. As an additional feature of our approach, the
communication and sensing links among the robots are also left free to change
over time while preserving rigidity of the whole framework. The proposed scheme
is then experimentally validated with a robotic testbed consisting of 6
quadrotor UAVs operating in a cluttered environment.
|
1309.0551 | Optimizing the performance of Lattice Gauge Theory simulations with
Streaming SIMD extensions | cs.CE cs.PF physics.comp-ph | Two factors, which affect simulation quality are the amount of computing
power and implementation. The Streaming SIMD (single instruction multiple data)
extensions (SSE) present a technique for influencing both by exploiting the
processor's parallel functionalism. In this paper, we show how SSE improves
performance of lattice gauge theory simulations. We identified two significant
trends through an analysis of data from various runs. The speed-ups were higher
for single precision than double precision floating point numbers. Notably,
though the use of SSE significantly improved simulation time, it did not
deliver the theoretical maximum. There are a number of reasons for this:
architectural constraints imposed by the FSB speed, the spatial and temporal
patterns of data retrieval, ratio of computational to non-computational
instructions, and the need to interleave miscellaneous instructions with
computational instructions. We present a model for analyzing the SSE
performance, which could help factor in the bottlenecks or weaknesses in the
implementation, the computing architecture, and the mapping of software to the
computing substrate while evaluating the improvement in efficiency. The model
or framework would be useful in evaluating the use of other computational
frameworks, and in predicting the benefits that can be derived from future
hardware or architectural improvements.
|
1309.0566 | Enhanced Precision Through Multiple Reads for LDPC Decoding in Flash
Memories | cs.IT math.IT | Multiple reads of the same Flash memory cell with distinct word-line voltages
provide enhanced precision for LDPC decoding. In this paper, the word-line
voltages are optimized by maximizing the mutual information (MI) of the
quantized channel. The enhanced precision from a few additional reads allows
FER performance to approach that of full-precision soft information and enables
an LDPC code to significantly outperform a BCH code. A constant-ratio
constraint provides a significant simplification in the optimization with no
noticeable loss in performance. For a well-designed LDPC code, the quantization
that maximizes the mutual information also minimizes the frame error rate in
our simulations. However, for an example LDPC code with a high error floor
caused by small absorbing sets, the MMI quantization does not provide the
lowest frame error rate. The best quantization in this case introduces more
erasures than would be optimal for the channel MI in order to mitigate the
absorbing sets of the poorly designed code. The paper also identifies a
trade-off in LDPC code design when decoding is performed with multiple
precision levels; the best code at one level of precision will typically not be
the best code at a different level of precision.
|
1309.0569 | Product-form solutions for integrated services packet networks and cloud
computing systems | math.PR cs.IT cs.NI cs.PF math.IT math.OC | We iteratively derive the product-form solutions of stationary distributions
of priority multiclass queueing networks with multi-server stations. The
networks are Markovian with exponential interarrival and service time
distributions. These solutions can be used to conduct performance analysis or
as comparison criteria for approximation and simulation studies of large scale
networks with multi-processor shared-memory switches and cloud computing
systems with parallel-server stations. Numerical comparisons with an existing
Brownian approximation model are provided to indicate the effectiveness of our
algorithm.
|
1309.0576 | Robust Stability of Quantum Systems with Nonlinear Dynamic Uncertainties | quant-ph cs.SY math.OC | This paper considers the problem of robust stability for a class of uncertain
nonlinear quantum systems subject to unknown perturbations in the system
Hamiltonian. The nominal system is a linear quantum system defined by a linear
vector of coupling operators and a quadratic Hamiltonian. This paper extends
previous results on the robust stability of nonlinear quantum systems to allow
for quantum systems with dynamic uncertainties. These dynamic uncertainties are
required to satisfy a certain quantum stochastic integral quadratic constraint.
The robust stability condition is given in terms of a strict bounded real
condition. This result is applied to the robust stability analysis of an
optical parametric amplifier.
|
1309.0578 | Coherent-Classical Estimation for Quantum Linear Systems | quant-ph cs.SY math.OC | This paper introduces a problem of coherent-classical estimation for a class
of linear quantum systems. In this problem, the estimator is a mixed
quantum-classical system which produces a classical estimate of a system
variable. The coherent-classical estimator may also involve coherent feedback.
An example involving optical squeezers is given to illustrate the efficacy of
this idea.
|
1309.0607 | On Throughput and Decoding Delay Performance of Instantly Decodable
Network Coding | cs.IT math.IT | In this paper, a comprehensive study of packet-based instantly decodable
network coding (IDNC) for single-hop wireless broadcast is presented. The
optimal IDNC solution in terms of throughput is proposed and its packet
decoding delay performance is investigated. Lower and upper bounds on the
achievable throughput and decoding delay performance of IDNC are derived and
assessed through extensive simulations. Furthermore, the impact of receivers'
feedback frequency on the performance of IDNC is studied and optimal IDNC
solutions are proposed for scenarios where receivers' feedback is only
available after an IDNC round, composed of several coded transmissions.
However, since finding these optimal IDNC solutions is computationally complex,
we further propose simple yet efficient heuristic IDNC algorithms. The impact
of system settings and parameters such as channel erasure probability, feedback
frequency, and the number of receivers is also investigated and simple
guidelines for practical implementations of IDNC are proposed.
|
1309.0634 | Skew Handling in Aggregate Streaming Queries on GPUs | cs.DB cs.DC | Nowadays, the data to be processed by database systems has grown so large
that any conventional, centralized technique is inadequate. At the same time,
general-purpose computation on GPUs (GPGPU) has recently drawn significant
attention from the data management community due to its ability to achieve
significant speed-ups at a small cost. Efficient skew handling is a well-known
problem in parallel queries, independently of the execution environment. In
this work, we investigate solutions to the problem of load imbalances in
parallel aggregate queries on GPUs that are caused by skewed data. We present a
generic load-balancing framework along with several instantiations, which we
experimentally evaluate. To the best of our knowledge, this is the first
attempt to present runtime load-balancing techniques for database operations on
GPUs.
|
1309.0659 | Majority Rule for Belief Evolution in Social Networks | cs.AI cs.SI physics.soc-ph | In this paper, we study how an agent's belief is affected by her neighbors in
a social network. We first introduce a general framework, where every agent has
an initial belief on a statement, and updates her belief according to her and
her neighbors' current beliefs under some belief evolution functions, which,
arguably, should satisfy some basic properties. Then, we focus on the majority
rule belief evolution function, that is, an agent will (dis)believe the
statement iff more than half of her neighbors (dis)believe it. We consider some
fundamental issues about majority rule belief evolution, for instance, whether
the belief evolution process will eventually converge. The answer is no in
general. However, for random asynchronous belief evolution, this is indeed the
case.
|
1309.0671 | BayesOpt: A Library for Bayesian optimization with Robotics Applications | cs.RO cs.AI cs.LG cs.MS | The purpose of this paper is twofold. On one side, we present a general
framework for Bayesian optimization and we compare it with some related fields
in active learning and Bayesian numerical analysis. On the other hand, Bayesian
optimization and related problems (bandits, sequential experimental design) are
highly dependent on the surrogate model that is selected. However, there is no
clear standard in the literature. Thus, we present a fast and flexible toolbox
that makes it possible to test and combine different models and criteria with little
effort. It includes most of the state-of-the-art contributions, algorithms and
models. Its speed also removes part of the stigma that Bayesian optimization
methods are only good for "expensive functions". The software is free and it
can be used in many operating systems and computer languages.