id | title | categories | abstract |
|---|---|---|---|
1310.4223 | Exact Learning of RNA Energy Parameters From Structure | q-bio.BM cs.LG | We consider the problem of exact learning of parameters of a linear RNA
energy model from secondary structure data. A necessary and sufficient
condition for learnability of parameters is derived, which is based on
computing the convex hull of the union of translated Newton polytopes of input
sequences. The set of learned energy parameters is characterized as the convex
cone generated by the normal vectors to those facets of the resulting polytope
that are incident to the origin. In practice, the sufficient condition may not
be satisfied by the entire training data set; hence, computing a maximal subset
of training data for which the sufficient condition is satisfied is often
desired. We show that this problem is NP-hard in general for an
arbitrary-dimensional feature space. Using a randomized greedy algorithm, we
select a subset of the RNA STRAND v2.0 database that satisfies the sufficient
condition for a model that counts A-U, C-G, and G-U base pairs separately. The
set of learned energy
parameters includes experimentally measured energies of A-U, C-G, and G-U
pairs; hence, our parameter set is in agreement with the Turner parameters.
|
1310.4227 | On Measure Concentration of Random Maximum A-Posteriori Perturbations | cs.LG math.PR | The maximum a-posteriori (MAP) perturbation framework has emerged as a useful
approach for inference and learning in high dimensional complex models. By
maximizing a randomly perturbed potential function, MAP perturbations generate
unbiased samples from the Gibbs distribution. Unfortunately, the computational
cost of generating so many high-dimensional random variables can be
prohibitive. More efficient algorithms use sequential sampling strategies based
on the expected value of low dimensional MAP perturbations. This paper develops
new measure concentration inequalities that bound the number of samples needed
to estimate such expected values. Applying the general result to MAP
perturbations can yield a more efficient algorithm to approximate sampling from
the Gibbs distribution. The measure concentration result is of general interest
and may be applicable to other areas involving expected estimations.
|
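The MAP-perturbation framework above reduces, for a small discrete model, to the Gumbel-max trick: maximizing potentials perturbed by i.i.d. Gumbel noise yields an unbiased sample from the Gibbs distribution. A minimal pure-Python sketch; the three-state potential vector is an invented example, not from the paper:

```python
import math
import random

def gibbs_probs(theta):
    """Exact Gibbs distribution p(x) proportional to exp(theta[x])."""
    z = sum(math.exp(t) for t in theta)
    return [math.exp(t) / z for t in theta]

def gumbel_max_sample(theta, rng):
    """Perturb each potential with i.i.d. Gumbel(0,1) noise and return the
    argmax, which is distributed exactly according to the Gibbs distribution."""
    perturbed = [t - math.log(-math.log(rng.random())) for t in theta]
    return max(range(len(theta)), key=perturbed.__getitem__)

rng = random.Random(0)
theta = [0.5, 1.5, -0.2]  # invented potentials
n_samples = 20000
counts = [0] * len(theta)
for _ in range(n_samples):
    counts[gumbel_max_sample(theta, rng)] += 1
empirical = [c / n_samples for c in counts]
```

The cost the abstract points to is visible here: each exact sample needs a fresh perturbation per configuration, which is what motivates bounding expectations of low-dimensional perturbations instead.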
1310.4249 | Mapping the stereotyped behaviour of freely-moving fruit flies | q-bio.QM cs.CV physics.bio-ph stat.ML | Most animals possess the ability to actuate a vast diversity of movements,
ostensibly constrained only by morphology and physics. In practice, however, a
frequent assumption in behavioral science is that most of an animal's
activities can be described in terms of a small set of stereotyped motifs. Here
we introduce a method for mapping the behavioral space of organisms, relying
only upon the underlying structure of postural movement data to organize and
classify behaviors. We find that six different drosophilid species each perform
a mix of non-stereotyped actions and over one hundred hierarchically-organized,
stereotyped behaviors. Moreover, we use this approach to compare these species'
behavioral spaces, systematically identifying subtle behavioral differences
between closely-related species.
|
1310.4252 | Multilabel Consensus Classification | stat.ML cs.LG | In the era of big data, a large amount of noisy and incomplete data can be
collected from multiple sources for prediction tasks. Combining multiple models
or data sources helps to counteract the effects of low data quality and the
bias of any single model or data source, and thus can improve the robustness
and the performance of predictive models. Out of privacy, storage and bandwidth
considerations, in certain circumstances one has to combine the predictions
from multiple models or data sources to obtain the final predictions without
accessing the raw data. Consensus-based prediction combination algorithms are
effective for such situations. However, current research on prediction
combination focuses on the single label setting, where an instance can have one
and only one label. Nonetheless, data nowadays are usually multilabeled, such
that more than one label has to be predicted at the same time. Direct
application of existing prediction combination methods to multilabel settings
can lead to degraded performance. In this paper, we address the challenges
of combining predictions from multiple multilabel classifiers and propose two
novel algorithms, MLCM-r (MultiLabel Consensus Maximization for ranking) and
MLCM-a (MLCM for microAUC). These algorithms can capture label correlations
that are common in multilabel classifications, and optimize corresponding
performance metrics. Experimental results on popular multilabel classification
tasks verify the theoretical analysis and effectiveness of the proposed
methods.
|
1310.4261 | An Online Algorithm for Separating Sparse and Low-dimensional Signal
Sequences from their Sum | cs.IT math.IT | This paper designs and evaluates a practical algorithm, called practical
recursive projected compressive sensing (Prac-ReProCS), for recovering a time
sequence of sparse vectors $S_t$ and a time sequence of dense vectors $L_t$
from their sum, $M_t:= S_t + L_t$, when any subsequence of the $L_t$'s lies in
a slowly changing low-dimensional subspace. A key application where this
problem occurs is in video layering where the goal is to separate a video
sequence into a slowly changing background sequence and a sparse foreground
sequence that consists of one or more moving regions/objects. Prac-ReProCS is a
practical modification of its theoretical counterpart which was analyzed in our
recent work. Experimental comparisons demonstrating the advantage of the
approach for both simulated and real videos are shown. Extension to the
undersampled case is also developed.
|
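The projection step at the heart of ReProCS-style recovery can be illustrated in a toy setting where the low-dimensional subspace is a single known direction u: projecting M onto the orthogonal complement of u exposes the sparse part, whose support is found by thresholding before re-fitting the subspace coefficient. This is a hedged caricature with invented numbers, not the Prac-ReProCS algorithm itself, which also tracks a slowly changing subspace:

```python
import math

def recover_sparse(m, u, tau):
    """Toy ReProCS-style step for m = s + a*u with s sparse and u a known
    unit vector: the residual r = m - (m . u) u reveals the support of s."""
    dot = sum(mi * ui for mi, ui in zip(m, u))
    resid = [mi - dot * ui for mi, ui in zip(m, u)]
    support = [i for i, r in enumerate(resid) if abs(r) > tau]
    # re-estimate the subspace coefficient using only off-support entries
    off = [i for i in range(len(m)) if i not in support]
    a = sum(m[i] * u[i] for i in off) / sum(u[i] ** 2 for i in off)
    s_hat = [m[i] - a * u[i] if i in support else 0.0 for i in range(len(m))]
    return s_hat, a

n = 6
u = [1 / math.sqrt(n)] * n                   # known 1-D subspace direction
s_true = [5.0, 0.0, 0.0, -3.0, 0.0, 0.0]     # invented sparse foreground
a_true = 2.0                                  # invented background coefficient
m = [s + a_true * ui for s, ui in zip(s_true, u)]
s_hat, a_hat = recover_sparse(m, u, tau=1.0)
```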
1310.4284 | Signal Reconstruction from Rechargeable Wireless Sensor Networks using
Sparse Random Projections | cs.NI cs.IT math.IT | Due to the non-homogeneous spread of sunlight, sensing nodes possess
non-uniform energy budgets in rechargeable Wireless Sensor Networks (WSNs). An
energy-aware workload distribution strategy is therefore necessary to achieve
good data accuracy subject to energy-neutral operation. Recently proposed
signal approximation strategies assume uniform sampling and fail to ensure
energy-neutral operation in rechargeable wireless sensor networks. We propose
EAST (Energy Aware Sparse approximation Technique), which approximates a
signal by adapting sensor-node sampling workload according to solar energy
availability. To the best of our knowledge, we are the first to propose sparse
approximation to model energy-aware workload distribution in rechargeable WSNs.
Experimental results using data from an outdoor WSN deployment suggest that
EAST significantly improves the approximation accuracy, offering approximately
50% higher sensor on-time. EAST requires the approximation error to be known
beforehand to determine the number of measurements. However, it is not always
possible to decide the accuracy a priori. We improve EAST and propose EAST+,
which, given only the energy budget of the nodes, computes the optimal number
of measurements subject to energy-neutral operation.
|
1310.4301 | Adaptive Mode Selection in Multiuser MISO Cognitive Networks with
Limited Cooperation and Feedback | cs.IT math.IT | In this paper, we consider a multiuser MISO downlink cognitive network
coexisting with a primary network. With the purpose of exploiting the spatial
degree of freedom to counteract the inter-network interference and
intra-network (inter-user) interference simultaneously, we propose to perform
zero-forcing beamforming (ZFBF) at the multi-antenna cognitive base station
(BS) based on the instantaneous channel state information (CSI). The challenge
of designing ZFBF in cognitive networks lies in how to obtain the interference
CSI. To solve it, we introduce a limited inter-network cooperation protocol,
namely the quantized CSI conveyance from the primary receiver to the cognitive
BS via purchase. Clearly, the more the feedback amount, the better the
performance, but the higher the feedback cost. In order to achieve a balance
between the performance and feedback cost, we take the maximization of feedback
utility function, defined as the difference of average sum rate and feedback
cost while satisfying the interference constraint, as the optimization
objective, and derive the transmission mode and feedback amount joint
optimization scheme. Moreover, we quantitatively investigate the impact of CSI
feedback delay and obtain the corresponding optimization scheme. Furthermore,
through asymptotic analysis, we present some simple schemes. Finally, numerical
results confirm the effectiveness of our theoretical claims.
|
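Zero-forcing beamforming as used at the cognitive BS above can be sketched in the two-user, two-antenna case: the beamformer for each user is the corresponding normalized column of the channel inverse, so the other user's effective channel is exactly nulled. A minimal pure-Python illustration with invented real-valued channels; practical CSI is complex-valued and, in the paper's protocol, quantized:

```python
import math

def zf_beamformers(h):
    """2x2 zero-forcing: columns of inv(H), normalized to unit power.
    Row i of h is user i's channel; beamformer j is nulled at every user i != j."""
    (a, b), (c, d) = h
    det = a * d - b * c
    cols = [(d / det, -c / det), (-b / det, a / det)]  # columns of inv(H)
    return [tuple(x / math.hypot(*col) for x in col) for col in cols]

h = [(1.0, 0.3), (0.2, 0.9)]  # invented channel rows for users 0 and 1
w = zf_beamformers(h)
# effective gain from beam j at user i: zero off-diagonal by construction
gains = [[sum(hi * wi for hi, wi in zip(h[i], w[j])) for j in range(2)]
         for i in range(2)]
```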
1310.4342 | An Extensive Report on Cellular Automata Based Artificial Immune System
for Strengthening Automated Protein Prediction | cs.AI cs.CE | Artificial Immune System (AIS-MACA), a novel computational intelligence
technique, can be used to strengthen an automated protein prediction system
with more adaptability and to incorporate more parallelism into the system.
Most existing approaches are sequential, classify the input into only four
major classes, and are designed for similar sequences. AIS-MACA is designed to
identify ten classes from sequences that share twilight-zone similarity and
identity with the training sequences, with mixed and hybrid variations. The
method also predicts three states (helix, strand, and coil) for the secondary
structure. Our comprehensive design considers 10 feature selection methods and
4 classifiers to develop MACA (Multiple Attractor Cellular Automata) based
classifiers, one built for each of the ten classes. Testing the proposed
classifier on twilight-zone and high-similarity benchmark datasets against
over three dozen modern competing predictors shows that AIS-MACA provides the
best overall accuracy, ranging between 80% and 89.8% depending on the dataset.
|
1310.4347 | M-ary Detection and q-ary Decoding in Large-Scale MIMO: A Non-Binary
Belief Propagation Approach | cs.IT math.IT | In this paper, we propose a non-binary belief propagation approach (NB-BP)
for detection of $M$-ary modulation symbols and decoding of $q$-ary LDPC codes
in large-scale multiuser MIMO systems. We first propose a message passing based
symbol detection algorithm which computes vector messages using a scalar
Gaussian approximation of interference, which results in a total complexity of
just $O(KN\sqrt{M})$, where $K$ is the number of uplink users and $N$ is the
number of base station (BS) antennas. The proposed NB-BP detector does not need
to do a matrix inversion, which gives a complexity advantage over MMSE
detection. We then design optimized $q$-ary LDPC codes by matching the EXIT
charts of the proposed detector and the LDPC decoder. Simulation results show
that the proposed NB-BP detection-decoding approach using the optimized LDPC
codes achieves significantly better performance (by about 1 dB to 7 dB at
$10^{-5}$ coded BER for various system loading factors with number of users
ranging from 16 to 128 and number of BS antennas fixed at 128) compared to
using linear detectors (e.g., MMSE detector) and off-the-shelf $q$-ary
irregular LDPC codes. Also, even with estimated channel knowledge (e.g., with
MMSE channel estimate), the performance of the proposed NB-BP detector is
better than that of the MMSE detector.
|
1310.4349 | An Improved Majority-Logic Decoder Offering Massively Parallel Decoding
for Real-Time Control in Embedded Systems | cs.IT cs.AR cs.DM cs.ET math.IT | We propose an easy-to-implement hard-decision majority-logic decoding
algorithm for Reed-Muller codes RM(r,m) with m >= 3, m/2 >= r >= 1. The
presented algorithm outperforms the best known majority-logic decoding
algorithms and offers highly parallel decoding. The result is of special
importance for safety- and time-critical applications in embedded systems. A
simple combinational circuit can perform the proposed decoding. In particular,
we show how our decoder for the three-error-correcting code RM(2,5) of
dimension 16 and length 32 can be realized at the hardware level.
|
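For context, the classic Reed majority-logic decoder that the abstract's algorithm improves upon can be shown for the single-error-correcting first-order code RM(1,3) (length 8, dimension 4): each linear coefficient is recovered by a majority vote over four parity sums, then the constant term by a majority over the residual. A pure-Python sketch of the textbook decoder, not the paper's improved algorithm:

```python
def rm13_encode(a):
    """Encode (a0, a1, a2, a3): c(y) = a0 + a1*y1 + a2*y2 + a3*y3 over GF(2),
    evaluated at all 8 points y = (y1, y2, y3)."""
    a0, a1, a2, a3 = a
    return [a0 ^ (a1 & (y & 1)) ^ (a2 & ((y >> 1) & 1)) ^ (a3 & ((y >> 2) & 1))
            for y in range(8)]

def rm13_decode(c):
    """Reed majority-logic decoding; corrects any single bit error."""
    coeffs = []
    for i in range(3):                       # recover a1, a2, a3
        # each pair {y, y + e_i} gives one estimate of the coefficient
        votes = [c[y] ^ c[y ^ (1 << i)] for y in range(8) if not (y >> i) & 1]
        coeffs.append(int(sum(votes) >= 3))  # majority of 4 votes
    a1, a2, a3 = coeffs
    resid = [c[y] ^ (a1 & (y & 1)) ^ (a2 & ((y >> 1) & 1)) ^ (a3 & ((y >> 2) & 1))
             for y in range(8)]
    a0 = int(sum(resid) >= 5)                # majority of 8 residual bits
    return (a0, a1, a2, a3)

msg = (1, 0, 1, 1)        # invented message bits (a0, a1, a2, a3)
cw = rm13_encode(msg)
corrupted = list(cw)
corrupted[5] ^= 1         # flip one bit
decoded = rm13_decode(corrupted)
```

Every step is a XOR tree plus a threshold, which is why such decoders map naturally onto the simple combinational circuits the abstract targets.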
1310.4362 | Bayesian Information Sharing Between Noise And Regression Models
Improves Prediction of Weak Effects | stat.ML cs.LG | We consider the prediction of weak effects in a multiple-output regression
setup, when covariates are expected to explain a small amount, less than
$\approx 1\%$, of the variance of the target variables. To facilitate the
prediction of the weak effects, we constrain our model structure by introducing
a novel Bayesian approach of sharing information between the regression model
and the noise model. Further reduction of the effective number of parameters is
achieved by introducing an infinite shrinkage prior and group sparsity in the
context of the Bayesian reduced rank regression, and using the Bayesian
infinite factor model as a flexible low-rank noise model. In our experiments
the model incorporating the novelties outperformed alternatives in genomic
prediction of rich phenotype data. In particular, the information sharing
between the noise and regression models led to significant improvement in
prediction accuracy.
|
1310.4366 | An FCA-based Boolean Matrix Factorisation for Collaborative Filtering | cs.IR cs.DS stat.ML | We propose a new approach for Collaborative Filtering which is based on
Boolean Matrix Factorisation (BMF) and Formal Concept Analysis. In a series of
experiments on real data (Movielens dataset) we compare the approach with the
SVD- and NMF-based algorithms in terms of Mean Average Error (MAE). One of the
experimental findings is that binary-scaled rating data are enough for BMF to
obtain almost the same MAE as the SVD-based algorithm achieves on non-scaled
data.
|
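For reference, the Boolean matrix product underlying BMF replaces the sum in ordinary matrix multiplication with a logical OR, so a user-item matrix is reconstructed from binary factors; MAE is then the mean absolute deviation of the reconstruction. A small self-contained sketch with invented binary ratings, not the Movielens experiment:

```python
def bool_product(a, f):
    """(A o F)[i][j] = OR over k of (A[i][k] AND F[k][j])."""
    k = len(f)
    return [[int(any(row[t] and f[t][j] for t in range(k)))
             for j in range(len(f[0]))] for row in a]

def mae(r, r_hat):
    cells = [(x, y) for ra, rb in zip(r, r_hat) for x, y in zip(ra, rb)]
    return sum(abs(x - y) for x, y in cells) / len(cells)

users = [[1, 0], [0, 1], [1, 1]]            # invented user-factor matrix
items = [[1, 1, 0], [0, 1, 1]]              # invented factor-item matrix
ratings = [[1, 1, 0], [0, 1, 1], [1, 1, 1]]  # invented binary ratings
r_hat = bool_product(users, items)
```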
1310.4377 | Hierarchical Block Structures and High-resolution Model Selection in
Large Networks | physics.data-an cond-mat.dis-nn cond-mat.stat-mech cs.SI physics.soc-ph stat.ML | Discovering and characterizing the large-scale topological features in
empirical networks are crucial steps in understanding how complex systems
function. However, most existing methods used to obtain the modular structure
of networks suffer from serious problems, such as being oblivious to the
statistical evidence supporting the discovered patterns, which results in the
inability to separate actual structure from noise. In addition to this, one
also observes a resolution limit on the size of communities, where smaller but
well-defined clusters are not detectable when the network becomes large. This
phenomenon occurs not only for the very popular approach of modularity
optimization, which lacks built-in statistical validation, but also for more
principled methods based on statistical inference and model selection, which do
incorporate statistical validation in a formally correct way. Here we construct
a nested generative model that, through a complete description of the entire
network hierarchy at multiple scales, is capable of avoiding this limitation,
and enables the detection of modular structure at levels far beyond those
possible with current approaches. Even with this increased resolution, the
method is based on the principle of parsimony, and is capable of separating
signal from noise, and thus will not lead to the identification of spurious
modules even on sparse networks. Furthermore, it fully generalizes other
approaches in that it is not restricted to purely assortative mixing patterns,
directed or undirected graphs, and ad hoc hierarchical structures such as
binary trees. Despite its general character, the approach is tractable, and can
be combined with advanced techniques of community detection to yield an
efficient algorithm that scales well for very large networks.
|
1310.4378 | Efficient Monte Carlo and greedy heuristic for the inference of
stochastic block models | physics.data-an cond-mat.stat-mech cs.SI physics.comp-ph stat.ML | We present an efficient algorithm for the inference of stochastic block
models in large networks. The algorithm can be used as an optimized Markov
chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced
susceptibility to getting trapped in metastable states, or as a greedy
agglomerative heuristic, with an almost linear $O(N\ln^2N)$ complexity, where
$N$ is the number of nodes in the network, independent of the number of blocks
being inferred. We show that the heuristic is capable of delivering results
which are indistinguishable from the more exact and numerically expensive MCMC
method in many artificial and empirical networks, despite being much faster.
The method is entirely unbiased towards any specific mixing pattern, and in
particular it does not favor assortative community structures.
|
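The objective that both the MCMC and the agglomerative heuristic above climb can be written down directly for the plain (non-degree-corrected) stochastic block model: profile out each block pair's edge probability at its maximum-likelihood value. A small pure-Python sketch of that profile log-likelihood on an invented six-node graph; the paper's contribution is efficient moves on top of such an objective, not the objective itself:

```python
import math
from itertools import combinations

def sbm_profile_loglik(n, edges, b):
    """Profile log-likelihood of a Bernoulli SBM: each block pair (r, s)
    uses its MLE edge probability e_rs / n_rs."""
    eset = {frozenset(e) for e in edges}
    e_cnt, n_cnt = {}, {}
    for i, j in combinations(range(n), 2):
        key = (min(b[i], b[j]), max(b[i], b[j]))
        n_cnt[key] = n_cnt.get(key, 0) + 1
        e_cnt[key] = e_cnt.get(key, 0) + (frozenset((i, j)) in eset)
    ll = 0.0
    for key, m in n_cnt.items():
        p = e_cnt[key] / m
        if 0 < p < 1:
            ll += e_cnt[key] * math.log(p) + (m - e_cnt[key]) * math.log(1 - p)
        # p in {0, 1} is perfectly explained and contributes 0
    return ll

# two disjoint triangles: the planted split explains every edge exactly
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]
planted = sbm_profile_loglik(6, edges, [0, 0, 0, 1, 1, 1])
mixed = sbm_profile_loglik(6, edges, [0, 1, 0, 1, 0, 1])
```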
1310.4389 | ImageSpirit: Verbal Guided Image Parsing | cs.GR cs.CV | Humans describe images in terms of nouns and adjectives while algorithms
operate on images represented as sets of pixels. Bridging this gap between how
humans would like to access images versus their typical representation is the
goal of image parsing, which involves assigning object and attribute labels to
pixels. In this paper we propose treating nouns as object labels and adjectives
as visual attribute labels. This allows us to formulate the image parsing
problem as one of jointly estimating per-pixel object and attribute labels from
a set of training images. We propose an efficient (interactive time) solution.
Using the extracted labels as handles, our system empowers a user to verbally
refine the results. This enables hands-free parsing of an image into pixel-wise
object/attribute labels that correspond to human semantics. Verbally selecting
objects of interest enables a novel and natural interaction modality that can
possibly be used to interact with new generation devices (e.g. smart phones,
Google Glass, living room devices). We demonstrate our system on a large number
of real-world images with varying complexity. To help understand the tradeoffs
compared to traditional mouse-based interactions, results are reported for both
a large-scale quantitative evaluation and a user study.
|
1310.4393 | An algorithm for variable density sampling with block-constrained
acquisition | cs.IT math.IT math.OC | Reducing acquisition time is of fundamental importance in various imaging
modalities. The concept of variable density sampling provides a nice framework
to achieve this. It was justified recently from a theoretical point of view in
the compressed sensing (CS) literature. Unfortunately, the sampling schemes
suggested by current CS theories may not be relevant since they do not take the
acquisition constraints into account (for example, continuity of the
acquisition trajectory in Magnetic Resonance Imaging - MRI). In this paper, we
propose a numerical method to perform variable density sampling with block
constraints. Our main contribution is to propose a new way to draw the blocks
in order to mimic CS strategies based on isolated measurements. The basic idea
is to minimize a tailored dissimilarity measure between a probability
distribution defined on the set of isolated measurements and a probability
distribution defined on a set of blocks of measurements. This problem turns out
to be convex and solvable in high dimension. Our second contribution is to
define an efficient minimization algorithm based on Nesterov's accelerated
gradient descent in metric spaces. We study carefully the choice of the metrics
and of the prox function. We show that the optimal choice may depend on the
type of blocks under consideration. Finally, we show that we can obtain better
MRI reconstruction results using our sampling schemes than standard strategies
such as equiangularly distributed radial lines.
|
1310.4399 | Analyzing User Behavior across Social Sharing Environments | cs.SI cs.CY physics.soc-ph | In this work we present an in-depth analysis of the user behaviors on
different Social Sharing systems. We consider three popular platforms, Flickr,
Delicious and StumbleUpon, and, by combining techniques from social network
analysis with techniques from semantic analysis, we characterize the tagging
behavior as well as the tendency to create friendship relationships of the
users of these platforms. The aim of our investigation is to see if (and how)
the features and goals of a given Social Sharing system reflect on the behavior
of its users and, moreover, if there exists a correlation between the social
and tagging behavior of the users. We report our findings in terms of the
characteristics of user profiles according to three different dimensions: (i)
intensity of user activities, (ii) tag-based characteristics of user profiles,
and (iii) semantic characteristics of user profiles.
|
1310.4412 | Delay on broadcast erasure channels under random linear combinations | cs.IT math.IT | We consider a transmitter broadcasting random linear combinations (over a
field of size $d$) formed from a block of $c$ packets to a collection of $n$
receivers, where the channels between the transmitter and each receiver are
independent erasure channels with reception probabilities $\mathbf{q} =
(q_1,\ldots,q_n)$. We establish several properties of the random delay until
all $n$ receivers have recovered all $c$ packets, denoted $Y_{n:n}^{(c)}$.
First, we provide lower and upper bounds, exact expressions, and a recurrence
for the moments of $Y_{n:n}^{(c)}$. Second, we study the delay per packet
$Y_{n:n}^{(c)}/c$ as a function of $c$, including the asymptotic delay (as $c
\to \infty$), and monotonicity (in $c$) properties of the delay per packet.
Third, we employ extreme value theory to investigate $Y_{n:n}^{(c)}$ as a
function of $n$ (as $n \to \infty$). Several results are new, some results are
extensions of existing results, and some results are proofs of known results
using new (probabilistic) proof techniques.
|
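The delay $Y_{n:n}^{(c)}$ studied above is easy to estimate by Monte Carlo in the large-field regime, where (ignoring the O(1/d) probability of a non-innovative combination) each receiver only needs c successful receptions. A hedged pure-Python sketch with invented parameters; for n = 1, c = 1 the delay is geometric with mean 1/q, which the simulation should reproduce:

```python
import random

def broadcast_delay(n, c, q, rng):
    """Slots until every one of n receivers, with independent erasure-channel
    reception probabilities q[i], has collected c (assumed innovative) packets."""
    need = [c] * n
    t = 0
    while any(need):
        t += 1
        for i in range(n):
            if need[i] and rng.random() < q[i]:
                need[i] -= 1
    return t

rng = random.Random(1)
trials = 20000
mean_delay = sum(broadcast_delay(1, 1, [0.5], rng) for _ in range(trials)) / trials
```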
1310.4456 | Inference, Sampling, and Learning in Copula Cumulative Distribution
Networks | stat.ML cs.LG | The cumulative distribution network (CDN) is a recently developed class of
probabilistic graphical models (PGMs) permitting a copula factorization, in
which the CDF, rather than the density, is factored. Despite there being much
recent interest within the machine learning community about copula
representations, there has been scarce research into the CDN and its
amalgamation with copula theory, and no evaluation of its performance.
Algorithms for inference, sampling, and learning in these models are
underdeveloped compared to those of other PGMs, hindering widespread use.
One advantage of the CDN is that it allows the factors to be parameterized as
copulae, combining the benefits of graphical models with those of copula
theory. In brief, the use of a copula parameterization enables greater
modelling flexibility by separating representation of the marginals from the
dependence structure, permitting more efficient and robust learning. Another
advantage is that the CDN permits the representation of implicit latent
variables, whose parameterization and connectivity are not required to be
specified. Unfortunately, that the model can encode only latent relationships
between variables severely limits its utility.
In this thesis, we present inference, learning, and sampling for CDNs, and
further the state-of-the-art. First, we explain the basics of copula theory and
the representation of copula CDNs. Then, we discuss inference in the models,
and develop the first sampling algorithm. We explain standard learning methods,
propose an algorithm for learning from data missing completely at random
(MCAR), and develop a novel algorithm for learning models of arbitrary
treewidth and size. Properties of the models and algorithms are investigated
through Monte Carlo simulations. We conclude with further discussion of the
advantages and limitations of CDNs, and suggest future work.
|
1310.4485 | The BeiHang Keystroke Dynamics Authentication System | cs.CR cs.LG | Keystroke Dynamics is an important biometric solution for person
authentication. Based upon keystroke dynamics, this paper designs an embedded
password protection device, develops an online system, collects two public
databases for promoting the research on keystroke authentication, exploits the
Gabor filter bank to characterize keystroke dynamics, and provides benchmark
results of three popular classification algorithms, one-class support vector
machine, Gaussian classifier, and nearest neighbour classifier.
|
1310.4495 | Multiple Attractor Cellular Automata (MACA) for Addressing Major
Problems in Bioinformatics | cs.CE cs.LG | Cellular Automata (CA) have grown into a potential classifier for addressing
major problems in bioinformatics. Many bioinformatics problems, such as
predicting protein coding regions, finding promoter regions, and predicting
protein structure, can be addressed through Cellular Automata. Even though some
prediction techniques address these problems, their accuracy is very low. An
automated procedure is proposed with MACA (Multiple Attractor Cellular
Automata) which can address all these problems. A genetic algorithm is also
used to find rules with good fitness values. Extensive experiments are
conducted to report the accuracy of the proposed tool. The average accuracy of
MACA when tested on the ENCODE, BG570, HMR195, Fickett and Tongue, and ASP67
datasets is 78%.
|
1310.4545 | Decentralized stochastic control | math.OC cs.SY | Decentralized stochastic control refers to the multi-stage optimization of a
dynamical system by multiple controllers that have access to different
information. Decentralization of information gives rise to new conceptual
challenges that require new solution approaches. In this expository paper, we
use the notion of an \emph{information-state} to explain the two commonly used
solution approaches to decentralized control: the person-by-person approach and
the common-information approach.
|
1310.4546 | Distributed Representations of Words and Phrases and their
Compositionality | cs.CL cs.LG stat.ML | The recently introduced continuous Skip-gram model is an efficient method for
learning high-quality distributed vector representations that capture a large
number of precise syntactic and semantic word relationships. In this paper we
present several extensions that improve both the quality of the vectors and the
training speed. By subsampling of the frequent words we obtain significant
speedup and also learn more regular word representations. We also describe a
simple alternative to the hierarchical softmax called negative sampling. An
inherent limitation of word representations is their indifference to word order
and their inability to represent idiomatic phrases. For example, the meanings
of "Canada" and "Air" cannot be easily combined to obtain "Air Canada".
Motivated by this example, we present a simple method for finding phrases in
text, and show that learning good vector representations for millions of
phrases is possible.
|
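Negative sampling, the simple alternative to the hierarchical softmax mentioned above, trains by logistic regression: pull the observed (center, context) pair together and push a few sampled "negative" words apart. A minimal pure-Python sketch of the update; the vocabulary size, dimensionality, and learning rate are invented, and real word2vec adds frequency-based negative sampling and subsampling of frequent words:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sgns_step(w_in, w_out, center, context, negatives, lr=0.1):
    """One skip-gram negative-sampling update: label the true context 1 and
    each sampled negative 0, then take a logistic-regression gradient step."""
    v = w_in[center]
    grad_v = [0.0] * len(v)
    for word, label in [(context, 1.0)] + [(neg, 0.0) for neg in negatives]:
        u = w_out[word]
        g = sigmoid(sum(a * b for a, b in zip(v, u))) - label
        for k in range(len(v)):
            grad_v[k] += g * u[k]
            u[k] -= lr * g * v[k]
    for k in range(len(v)):
        v[k] -= lr * grad_v[k]

def sgns_loss(w_in, w_out, center, context, negatives):
    v = w_in[center]
    dot = lambda u: sum(a * b for a, b in zip(v, u))
    loss = -math.log(sigmoid(dot(w_out[context])))
    return loss - sum(math.log(sigmoid(-dot(w_out[neg]))) for neg in negatives)

rng = random.Random(0)
w_in = [[rng.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(5)]
w_out = [[rng.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(5)]
before = sgns_loss(w_in, w_out, 0, 1, [2, 3])
for _ in range(50):
    sgns_step(w_in, w_out, 0, 1, [2, 3])
after = sgns_loss(w_in, w_out, 0, 1, [2, 3])
```

Each update touches only the center vector and a handful of output vectors, which is the source of the training-speed gains the abstract reports.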
1310.4579 | Discriminative Link Prediction using Local Links, Node Features and
Community Structure | cs.LG cs.SI physics.soc-ph | A link prediction (LP) algorithm is given a graph, and has to rank, for each
node, other nodes that are candidates for new linkage. LP is strongly motivated
by social search and recommendation applications. LP techniques often focus on
global properties (graph conductance, hitting or commute times, Katz score) or
local properties (Adamic-Adar and many variations, or node feature vectors),
but rarely combine these signals. Furthermore, neither of these extremes
exploit link densities at the intermediate level of communities. In this paper
we describe a discriminative LP algorithm that exploits two new signals. First,
a co-clustering algorithm provides community level link density estimates,
which are used to qualify observed links with a surprise value. Second, links
in the immediate neighborhood of the link to be predicted are not interpreted
at face value, but through a local model of node feature similarities. These
signals are combined into a discriminative link predictor. We evaluate the new
predictor using five diverse data sets that are standard in the literature. We
report on significant accuracy boosts compared to standard LP methods
(including Adamic-Adar and random walk). Apart from the new predictor, another
contribution is a rigorous protocol for benchmarking and reporting LP
algorithms, which reveals the regions of strengths and weaknesses of all the
predictors studied here, and establishes the new proposal as the most robust.
|
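Of the local signals the abstract mentions, Adamic-Adar is the standard baseline: common neighbors weighted inversely by the log of their degree. A minimal pure-Python sketch on an invented four-node graph:

```python
import math

def adamic_adar(adj, u, v):
    """AA(u, v) = sum over common neighbors w of 1 / log(deg(w)).
    Assumes every common neighbor has degree >= 2, so log(deg) > 0."""
    return sum(1.0 / math.log(len(adj[w])) for w in adj[u] & adj[v])

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}  # invented graph
score = adamic_adar(adj, 0, 3)  # common neighbors 1 and 2, degree 3 each
```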
1310.4581 | Quantum Side Information: Uncertainty Relations, Extractors, Channel
Simulations | quant-ph cs.IT math-ph math.IT math.MP | In the first part of this thesis, we discuss the algebraic approach to
classical and quantum physics and develop information theoretic concepts within
this setup.
In the second part, we discuss the uncertainty principle in quantum
mechanics. The principle states that even if we have full classical information
about the state of a quantum system, it is impossible to deterministically
predict the outcomes of all possible measurements. In comparison, the
perspective of a quantum observer allows one to have quantum information about the
state of a quantum system. This then leads to an interplay between uncertainty
and quantum correlations. We provide an information theoretic analysis by
discussing entropic uncertainty relations with quantum side information.
In the third part, we discuss the concept of randomness extractors. Classical
and quantum randomness are an essential resource in information theory,
cryptography, and computation. However, most sources of randomness exhibit only
weak forms of unpredictability, and the goal of randomness extraction is to
convert such weak randomness into (almost) perfect randomness. We discuss
various constructions for classical and quantum randomness extractors, and we
examine especially the performance of these constructions relative to an
observer with quantum side information.
In the fourth part, we discuss channel simulations. Shannon's noisy channel
theorem can be understood as the use of a noisy channel to simulate a noiseless
one. Channel simulations as we want to consider them here are about the reverse
problem: simulating noisy channels from noiseless ones. Starting from the
purely classical case (the classical reverse Shannon theorem), we develop
various kinds of quantum channel simulation results. We achieve this by using
classical and quantum randomness extractors that also work with respect to
quantum side information.
|
1310.4583 | Low complexity resource allocation for load minimization in OFDMA
wireless networks | cs.IT cs.NI math.IT | To cope with the ever-increasing demand for bandwidth, future wireless
networks will be designed with reuse distance equal to one. This scenario
requires the implementation of techniques able to manage the strong multiple
access interference each cell generates towards its neighbor cells. In
particular, low complexity and reduced feedback are important requirements for
practical algorithms. In this paper we study an allocation problem for OFDMA
networks formulated with the objective of minimizing the load of each cell in
the system subject to the constraint that each user meets its target rate. We
decompose resource allocation into two sub-problems: channel allocation under
deterministic power assignment and continuous power assignment optimization.
Channel allocation is formulated as the problem of finding the maximum weighted
independent set (MWIS) in graph theory. In addition, we propose a minimal
weighted-degree greedy (MWDG) algorithm of which the approximation factor is
analyzed. For power allocation, an iterative power reassignment algorithm
(DPRA) is proposed. The control information required to perform the allocation
is limited, and the computational burden is shared between the base station and
the user equipment. Simulations have been carried out under a constant bit rate
traffic model, and the results have been compared with other allocation schemes
of similar complexity. MWDG has excellent performance and outperforms all other
techniques.
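Since MWIS is NP-hard in general, greedy heuristics such as the proposed MWDG are the practical route. The paper's exact selection rule is not reproduced here; the following is a generic weight-to-weighted-degree greedy sketch (the scoring function is our assumption, not the authors' algorithm):

```python
def greedy_mwis(weights, adj):
    """Greedy heuristic for the maximum weighted independent set.

    weights: dict node -> weight; adj: dict node -> set of neighbours.
    Repeatedly selects the remaining node with the largest ratio of its
    own weight to its weighted degree (itself plus remaining neighbours),
    then removes the node and its neighbours from consideration.
    """
    remaining = set(weights)
    independent = []
    while remaining:
        def score(v):
            wdeg = sum(weights[u] for u in adj[v] if u in remaining)
            return weights[v] / (weights[v] + wdeg)
        v = max(remaining, key=score)
        independent.append(v)
        remaining.discard(v)
        remaining -= adj[v]
    return independent
```

In the channel-allocation setting, nodes would be cell-user pairs weighted by channel quality, with edges encoding interference conflicts.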
|
1310.4596 | Energy-Efficient Cooperative Protocols for Full-Duplex Relay Channels | cs.IT math.IT | In this work, energy-efficient cooperative protocols are studied for
full-duplex relaying (FDR) with loopback interference. In these protocols,
relay assistance is only sought under certain conditions on the different link
outages to ensure effective cooperation. Recently, an energy-efficient
selective decode-and-forward protocol was proposed for FDR, and was shown to
outperform existing schemes in terms of outage. Here, we propose an incremental
selective decode-and-forward protocol that offers additional power savings,
while keeping the same outage performance. We compare the performance of the
two protocols in terms of the end-to-end signal-to-noise ratio cumulative
distribution function via closed-form expressions. Finally, we corroborate our
theoretical results with simulation, and show the relative relay power savings
in comparison to non-selective cooperation in which the relay cooperates
regardless of channel conditions.
|
1310.4633 | Matching-centrality decomposition and the forecasting of new links in
networks | physics.soc-ph cs.SI q-bio.PE | Networks play a prominent role in the study of complex systems of interacting
entities in biology, sociology, and economics. Despite this diversity, we
demonstrate here that a statistical model decomposing networks into matching
and centrality components provides a comprehensive and unifying quantification
of their architecture. First we show, for a diverse set of networks, that this
decomposition provides an extremely tight fit to observed networks.
Consequently, the model allows very accurate prediction of missing links in
partially known networks. Second, when node characteristics are known, we show
how the matching-centrality decomposition can be related to this external
information. Consequently, it offers a simple and versatile tool to explore how
node characteristics explain network architecture. Finally, we demonstrate the
efficiency and flexibility of the model to forecast the links that a novel node
would create if it were to join an existing network.
|
1310.4638 | Using multiobjective optimization to map the entropy region of four
random variables | cs.IT math.IT math.OC | Presently the only available method of exploring the 15-dimensional entropy
region formed by the entropies of four random variables is the one of Zhang and
Yeung from 1998. It is argued that their method is equivalent to solving linear
multiobjective optimization problems. Benson's outer approximation algorithm is
a fundamental tool for solving these optimization problems. An improved version
of Benson's algorithm is described which requires solving one scalar linear
program in each iteration rather than two or three as in previous versions.
During the algorithm design, special care was taken to ensure numerical stability. The
implemented algorithm was used to check previous statements about the entropy
region, and to gain new information on that region. The experimental results
demonstrate the viability of the method for determining the extremal set of
medium size, numerically ill-posed optimization problems. With growing problem
size two limitations of Benson's algorithm have been observed: the inefficiency
of the scalar LP solver on one hand and the unexpectedly large number of
intermediate vertices on the other.
|
1310.4647 | Census Data Mining and Data Analysis using WEKA | cs.DB cs.CY | Data mining (also known as knowledge discovery from databases) is the process
of extraction of hidden, previously unknown and potentially useful information
from databases. The extracted information can then be analyzed from future
planning and development perspectives. In this paper, we attempt to demonstrate
how one can extract local (district) level census, socio-economic, population,
and other related data for knowledge discovery, and analyze it using the
powerful data mining tool Weka.
|
1310.4656 | Maximizing Barber's bipartite modularity is also hard | cs.SI cs.CC physics.soc-ph | Modularity introduced by Newman and Girvan [Phys. Rev. E 69, 026113 (2004)]
is a quality function for community detection. Numerous methods for modularity
maximization have been developed so far. In 2007, Barber [Phys. Rev. E 76,
066102 (2007)] introduced a variant of modularity called bipartite modularity
which is appropriate for bipartite networks. Although maximizing the standard
modularity is known to be NP-hard, the computational complexity of maximizing
bipartite modularity has yet to be revealed. In this study, we prove that
maximizing bipartite modularity is also NP-hard. More specifically, we show the
NP-completeness of its decision version by constructing a reduction from a
classical partitioning problem.
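For reference, Barber's bipartite modularity itself is straightforward to evaluate for a given community assignment; the hardness result concerns maximizing it over assignments. A minimal sketch of the evaluation:

```python
import numpy as np

def bipartite_modularity(B, left_labels, right_labels):
    """Barber's bipartite modularity Q_B = (1/m) * sum_ij
    (B_ij - k_i * d_j / m) * [community of left i == community of right j],
    where B is the biadjacency matrix and m the number of edges."""
    B = np.asarray(B, dtype=float)
    m = B.sum()                # total number of edges
    k = B.sum(axis=1)          # degrees of left nodes
    d = B.sum(axis=0)          # degrees of right nodes
    P = np.outer(k, d) / m     # null-model expected edge weights
    same = np.equal.outer(np.asarray(left_labels), np.asarray(right_labels))
    return float(((B - P) * same).sum() / m)
```

Maximizing this quantity over all label assignments is the problem shown to be NP-hard.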
|
1310.4661 | Minimax rates in permutation estimation for feature matching | math.ST cs.LG stat.TH | The problem of matching two sets of features appears in various tasks of
computer vision and can be often formalized as a problem of permutation
estimation. We address this problem from a statistical point of view and
provide a theoretical analysis of the accuracy of several natural estimators.
To this end, the minimax rate of separation is investigated and its expression
is obtained as a function of the sample size, noise level and dimension. We
consider the cases of homoscedastic and heteroscedastic noise and establish, in
each case, tight upper bounds on the separation distance of several estimators.
These upper bounds are shown to be unimprovable both in the homoscedastic and
heteroscedastic settings. Interestingly, these bounds demonstrate that a phase
transition occurs when the dimension $d$ of the features is of the order of the
logarithm of the number of features $n$. For $d=O(\log n)$, the rate is
dimension free and equals $\sigma (\log n)^{1/2}$, where $\sigma$ is the noise
level. In contrast, when $d$ is larger than $c\log n$ for some constant $c>0$,
the minimax rate increases with $d$ and is of the order $\sigma(d\log
n)^{1/4}$. We also discuss the computational aspects of the estimators and
provide empirical evidence of their consistency on synthetic data. Finally, we
show that our results extend to more general matching criteria.
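As a concrete illustration of the natural estimators considered, the least-squares permutation estimate can be computed exactly with the Hungarian algorithm; a sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def estimate_permutation(X, Y):
    """Least-squares permutation estimator: match each row of X to a row
    of Y by minimising the total squared distance, solved exactly with
    the Hungarian algorithm. Returns perm with Y[perm[i]] matched to X[i]."""
    cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    _, perm = linear_sum_assignment(cost)
    return perm
```

At low noise levels the true permutation is recovered exactly; the minimax analysis in the abstract quantifies how large the separation between features must be for this to remain possible.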
|
1310.4707 | Emergence of Blind Areas in Information Spreading | physics.soc-ph cs.SI | Recently, contagion-based (disease, information, etc.) spreading on social
networks has been extensively studied. In this paper, other than traditional
full interaction, we propose a partial interaction based spreading model,
considering that the informed individuals would transmit information to only a
certain fraction of their neighbors due to the transmission ability in
real-world social networks. Simulation results on three representative networks
(BA, ER, WS) indicate that the spreading efficiency is highly correlated with
the network heterogeneity. In addition, a special phenomenon, namely
\emph{Information Blind Areas} where the network is separated by several
information-unreachable clusters, will emerge from the spreading process.
Furthermore, we also find that the size distribution of such information blind
areas obeys a power-law-like distribution, whose exponent is very similar to
that of site percolation. Detailed analyses show that, for the spreading
process, the critical value decreases with growing network heterogeneity, which
is the complete opposite of the behavior under random selection. Moreover, the
critical value of the latter process is also larger than that of the former on
the same network. These findings may shed some light on an in-depth
understanding of the effect of network properties on information spreading.
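A minimal sketch of a partial-interaction spreading process of this kind, assuming NetworkX is available (the paper's exact transmission rule is not reproduced; forwarding to a uniformly random fraction of neighbours is our simplifying assumption):

```python
import random
import networkx as nx

def partial_spread(G, seed, fraction, rng=None):
    """Breadth-first spreading in which each newly informed node forwards
    the message to only a random `fraction` of its neighbours (a
    simplified partial-interaction model). Returns the informed set."""
    rng = rng or random.Random(0)
    informed = {seed}
    frontier = [seed]
    while frontier:
        nxt = []
        for v in frontier:
            nbrs = list(G.neighbors(v))
            k = int(round(fraction * len(nbrs)))
            for u in rng.sample(nbrs, k):
                if u not in informed:
                    informed.add(u)
                    nxt.append(u)
        frontier = nxt
    return informed
```

Nodes that remain uninformed when the process stops form the information blind areas; their connected components can then be sized to inspect the distribution.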
|
1310.4713 | Calibration of an Articulated Camera System with Scale Factor Estimation | cs.CV cs.CG | Multiple Camera Systems (MCS) have been widely used in many vision
applications and have attracted much attention recently. There are two
principal types of MCS: the Rigid Multiple Camera System (RMCS) and the
Articulated Camera System (ACS). In an RMCS, the relative poses (relative 3-D
position and orientation) between the cameras are invariant. In an ACS, by
contrast, the cameras are articulated through movable joints, so the relative
pose between them may change. Therefore, through calibration of an ACS we want to find not
only the relative poses between the cameras but also the positions of the
joints in the ACS.
In this paper, we developed calibration algorithms for the ACS using a simple
constraint: the joint is fixed relative to the cameras connected with it during
the transformations of the ACS. When the transformations of the cameras in an
ACS can be estimated relative to the same coordinate system, the positions of
the joints in the ACS can be calculated by solving linear equations. However,
in a non-overlapping view ACS, only the ego-transformations of the cameras
can be estimated. We proposed a two-step method to deal with this problem. In
both methods, the ACS is assumed to have performed general transformations in a
static environment. The efficiency and robustness of the proposed methods are
tested by simulation and real experiments. In the real experiment, the
intrinsic and extrinsic parameters of the ACS are obtained simultaneously by
our calibration procedure using the same image sequences, no extra data
capturing step is required. The corresponding trajectory is recovered and
illustrated using the calibration results of the ACS. Since the estimated
translations of different cameras in an ACS may be scaled by different scale
factors, a scale factor estimation algorithm is also proposed. To our
knowledge, we are the first to study the calibration of ACS.
|
1310.4716 | SOSTOOLS Version 4.00 Sum of Squares Optimization Toolbox for MATLAB | math.OC cs.MS cs.SY | The release of SOSTOOLS v4.00 comes as we approach the 20th anniversary of
the original release of SOSTOOLS v1.00 back in April, 2002. SOSTOOLS was
originally envisioned as a flexible tool for parsing and solving polynomial
optimization problems, using the SOS tightening of polynomial positivity
constraints, and capable of adapting to the ever-evolving fauna of applications
of SOS. There are now a variety of SOS programming parsers beyond SOSTOOLS,
including YALMIP, Gloptipoly, SumOfSquares, and others. We hope SOSTOOLS
remains the most intuitive, robust and adaptable toolbox for SOS programming.
Recent progress in Semidefinite programming has opened up new possibilities for
solving large Sum of Squares programming problems, and we hope the next decade
will be one where SOS methods will find wide application in different areas.
In SOSTOOLS v4.00, we implement a parsing approach that reduces the
computational and memory requirements of the parser below that of the SDP
solver itself. We have re-developed the internal structure of our polynomial
decision variables. Specifically, polynomial and SOS variable declarations made
using sossosvar, sospolyvar, sosmatrixvar, etc. now return a new polynomial
structure, dpvar. This new polynomial structure is documented in the enclosed
dpvar guide, and isolates the scalar SDP decision variables in the SOS program
from the independent variables used to construct the SOS program. As a result,
the complexity of the parser scales almost linearly in the number of decision
variables. As a result of these changes, almost all users will notice a
significant increase in speed, with large-scale problems experiencing the most
dramatic speedups. Parsing time is now always less than 10% of time spent in
the SDP solver. Finally, SOSTOOLS now provides support for the MOSEK solver
interface as well as the SeDuMi, SDPT3, CSDP, SDPNAL, SDPNAL+, and SDPA
solvers.
|
1310.4734 | On Robustness Analysis of Stochastic Biochemical Systems by
Probabilistic Model Checking | cs.NA cs.CE cs.SY | This report proposes a novel framework for a rigorous robustness analysis of
stochastic biochemical systems. The technique is based on probabilistic model
checking. We adapt the general definition of robustness introduced by Kitano to
the class of stochastic systems modelled as continuous time Markov Chains in
order to extensively analyse and compare robustness of biological models with
uncertain parameters. The framework utilises novel computational methods that
enable us to effectively evaluate the robustness of models with respect to
quantitative temporal properties and parameters such as reaction rate constants
and initial conditions.
The framework is applied to gene regulation as an example of a central
biological mechanism where intrinsic and extrinsic stochasticity plays a crucial
role due to low numbers of DNA and RNA molecules. Using our methods we have
obtained a comprehensive and precise analysis of stochastic dynamics under
parameter uncertainty. Furthermore, we apply our framework to compare several
variants of two-component signalling networks from the perspective of
robustness with respect to intrinsic noise caused by low populations of
signalling components. We succeeded in extending previous studies performed on
deterministic models (ODE) and show that stochasticity may significantly affect
obtained predictions. Our case studies demonstrate that the framework can
provide deeper insight into the role of key parameters in maintaining the
system functionality and thus it significantly contributes to formal methods in
computational systems biology.
|
1310.4753 | Society Functions Best with an Intermediate Level of Creativity | cs.MA q-bio.NC | In a society, a proportion of the individuals can benefit from creativity
without being creative themselves by copying the creators. This paper uses an
agent-based model of cultural evolution to investigate how society is affected
by different levels of individual creativity. We performed a time series
analysis of the mean fitness of ideas across the artificial society varying
both the percentage of creators, C, and how creative they are, p, using two
discounting methods. Both analyses revealed a valley in the adaptive landscape,
indicating a tradeoff between C and p. The results suggest that excess
creativity at the individual level can be detrimental at the level of the
society because creators invest in unproven ideas at the expense of propagating
proven ideas.
|
1310.4756 | Effectiveness of pre- and inprocessing for CDCL-based SAT solving | cs.LO cs.AI | Applying pre- and inprocessing techniques to simplify CNF formulas both
before and during search can considerably improve the performance of modern SAT
solvers. These algorithms mostly aim at reducing the number of clauses,
literals, and variables in the formula. However, to be worthwhile, it is
necessary that their additional runtime does not exceed the runtime saved
during the subsequent SAT solver execution. In this paper we investigate the
efficiency and the practicability of selected simplification algorithms for
CDCL-based SAT solving. We first analyze them by means of their expected impact
on the CNF formula and on SAT solving in general. While testing them on real-world and
combinatorial SAT instances, we show which techniques and combinations of them
yield a desirable speedup and which ones should be avoided.
|
1310.4759 | Fine-grained Categorization -- Short Summary of our Entry for the
ImageNet Challenge 2012 | cs.CV | In this paper, we tackle the problem of visual categorization of dog breeds,
which is a surprisingly challenging task due to simultaneously present low
inter-class distances and high intra-class variances. Our approach combines
several techniques well known in our community but often not utilized for
fine-grained recognition:
(1) automatic segmentation, (2) efficient part detection, and (3) combination
of multiple features. In particular, we demonstrate that a simple head detector
embedded in an off-the-shelf recognition pipeline can improve recognition
accuracy quite significantly, highlighting the importance of part features for
fine-grained recognition tasks. Using our approach, we achieved a 24.59% mean
average precision performance on the Stanford dog dataset.
|
1310.4761 | Towards Energy Neutrality in Energy Harvesting Wireless Sensor Networks:
A Case for Distributed Compressive Sensing? | cs.IT cs.NI math.IT | This paper advocates the use of the emerging distributed compressive sensing
(DCS) paradigm in order to deploy energy harvesting (EH) wireless sensor
networks (WSN) with practical network lifetime and data gathering rates that
are substantially higher than the state-of-the-art. In particular, we argue
that there are two fundamental mechanisms in an EH WSN: i) the energy diversity
associated with the EH process that entails that the harvested energy can vary
from sensor node to sensor node, and ii) the sensing diversity associated with
the DCS process that entails that the energy consumption can also vary across
the sensor nodes without compromising data recovery. We also argue that such
mechanisms offer the means to match closely the energy demand to the energy
supply in order to unlock the possibility for energy-neutral WSNs that leverage
EH capability. A number of analytic and simulation results are presented in
order to illustrate the potential of the approach.
|
1310.4774 | Intelligent Web Agent for Search Engines | cs.IR | In this paper we review studies of the growth of the Internet and
technologies that are useful for information search and retrieval on the Web.
Search engines retrieve information from the Web efficiently. We collected data
on the Internet from several different sources, e.g., current as well as
projected numbers of users, hosts, and Web sites. The trends cited by the
sources are consistent and point to exponential growth in the past and in the
coming decade. Hence it is not surprising that about 85% of Internet users
surveyed claim to use search engines and search services to find specific
information, yet users are not satisfied with the performance of the current
generation of search engines: retrieval is slow, communication delays occur,
and the quality of retrieved results is poor. Web agents, programs acting
autonomously on some task, are already present in the form of spiders,
crawlers, and robots. Agents offer substantial benefits and hazards, and
because of this, their development must involve attention to technical details.
This paper illustrates the different types of agents, crawlers, robots, etc.
for mining the contents of the Web in a methodical, automated manner, and also
discusses the use of crawlers to gather specific types of information from Web
pages, such as harvesting e-mail addresses.
|
1310.4802 | On Demand Memory Specialization for Distributed Graph Databases | cs.DB cs.DC | In this paper, we propose the DN-tree, a data structure that builds lossy
summaries of the frequent data access patterns of the queries in a distributed
graph data management system. These compact representations allow efficient
communication of the data structure in distributed systems.
exploit this data structure with a new \textit{Dynamic Data Partitioning}
strategy (DYDAP) that assigns the portions of the graph according to historical
data access patterns, and guarantees a small network communication and a
computational load balance in distributed graph queries. This method is able to
adapt dynamically to new workloads and evolve when the query distribution
changes. Our experiments show that DYDAP yields a throughput up to an order of
magnitude higher than previous methods based on cache specialization, in a
variety of scenarios, and the average response time of the system is halved.
|
1310.4822 | Principal motion components for gesture recognition using a
single-example | cs.CV | This paper introduces principal motion components (PMC), a new method for
one-shot gesture recognition. In the considered scenario a single
training-video is available for each gesture to be recognized, which limits the
application of traditional techniques (e.g., HMMs). In PMC, a 2D map of motion
energy is obtained for each pair of consecutive frames in a video. The motion
maps associated with a video are processed to obtain a PCA model, which is used
for recognition under a reconstruction-error approach. The main benefits of the
proposed approach are its simplicity, ease of implementation, competitive
performance and efficiency. We report experimental results in one-shot gesture
recognition using the ChaLearn Gesture Dataset; a benchmark comprising more
than 50,000 gestures, recorded as both RGB and depth video with a Kinect
camera. Results obtained with PMC are competitive with alternative methods
proposed for the same data set.
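The pipeline described above (frame-difference motion maps, one PCA model per gesture, nearest model by reconstruction error) can be sketched as follows; defining motion energy via absolute frame differences is our simplification of the paper's maps:

```python
import numpy as np

def motion_maps(frames):
    """Simple motion-energy maps: absolute differences between each pair
    of consecutive frames, flattened into one vector per frame pair."""
    d = np.abs(np.diff(frames.astype(float), axis=0))
    return d.reshape(d.shape[0], -1)

def pca_model(maps, n_components):
    """Mean and leading principal directions of a gesture's motion maps."""
    mean = maps.mean(axis=0)
    _, _, vt = np.linalg.svd(maps - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(maps, model):
    """Mean squared residual after projecting onto the PCA basis."""
    mean, basis = model
    centred = maps - mean
    recon = centred @ basis.T @ basis
    return float(np.mean((centred - recon) ** 2))

def recognize(frames, models):
    """Label of the PCA model with the smallest reconstruction error."""
    maps = motion_maps(frames)
    return min(models, key=lambda g: reconstruction_error(maps, models[g]))
```

Training one model per gesture from a single video is exactly what makes the approach suitable for the one-shot setting.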
|
1310.4849 | On the Bayes-optimality of F-measure maximizers | stat.ML cs.LG | The F-measure, which was originally introduced in information retrieval,
is nowadays routinely used as a performance metric for problems such as binary
classification, multi-label classification, and structured output prediction.
Optimizing this measure is a statistically and computationally challenging
problem, since no closed-form solution exists. Adopting a decision-theoretic
perspective, this article provides a formal and experimental analysis of
different approaches for maximizing the F-measure. We start with a Bayes-risk
analysis of related loss functions, such as Hamming loss and subset zero-one
loss, showing that optimizing such losses as a surrogate of the F-measure leads
to a high worst-case regret. Subsequently, we perform a similar type of
analysis for F-measure maximizing algorithms, showing that such algorithms are
approximate, while relying on additional assumptions regarding the statistical
distribution of the binary response variables. Furthermore, we present a new
algorithm which is not only computationally efficient but also Bayes-optimal,
regardless of the underlying distribution. To this end, the algorithm requires
only a quadratic (with respect to the number of binary responses) number of
parameters of the joint distribution. We illustrate the practical performance
of all analyzed methods by means of experiments with multi-label classification
problems.
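To make the decision-theoretic setting concrete, the brute-force sketch below computes the expected F-measure of a fixed prediction under independent Bernoulli labels and finds its maximizer by exhaustive search. This is only a toy: the paper's algorithm is efficient and does not assume label independence, whereas this sketch assumes it and scales exponentially.

```python
import itertools
import numpy as np

def f_measure(y_true, y_pred):
    """F1 = 2*TP / (|y_true| + |y_pred|); defined as 1 when both are empty."""
    tp = int(np.sum(y_true & y_pred))
    denom = int(y_true.sum() + y_pred.sum())
    return 2 * tp / denom if denom else 1.0

def expected_f(pred, p):
    """Expected F-measure of prediction `pred` when the n labels are
    independent Bernoulli(p_i), by brute force over all 2^n outcomes."""
    pred = np.asarray(pred, dtype=bool)
    total = 0.0
    for y in itertools.product([False, True], repeat=len(p)):
        y = np.array(y)
        prob = float(np.prod(np.where(y, p, 1 - p)))
        total += prob * f_measure(y, pred)
    return total

def best_prediction(p):
    """Exhaustive expected-F maximiser (exponential; illustration only)."""
    cands = [np.array(c, dtype=bool)
             for c in itertools.product([0, 1], repeat=len(p))]
    return max(cands, key=lambda c: expected_f(c, np.asarray(p)))
```

Note that the maximizer need not predict every label whose marginal exceeds 0.5, which is exactly why thresholding rules derived from Hamming loss can be poor surrogates.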
|
1310.4891 | Dictionary Learning and Sparse Coding on Grassmann Manifolds: An
Extrinsic Solution | cs.CV | Recent advances in computer vision and machine learning suggest that a wide
range of problems can be addressed more appropriately by considering
non-Euclidean geometry. In this paper we explore sparse dictionary learning
over the space of linear subspaces, which form Riemannian structures known as
Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into
the space of symmetric matrices by an isometric mapping, which enables us to
devise a closed-form solution for updating a Grassmann dictionary, atom by
atom. Furthermore, to handle non-linearity in data, we propose a kernelised
version of the dictionary learning algorithm. Experiments on several
classification tasks (face recognition, action recognition, dynamic texture
classification) show that the proposed approach achieves considerable
improvements in discrimination accuracy, in comparison to state-of-the-art
methods such as kernelised Affine Hull Method and graph-embedding Grassmann
discriminant analysis.
|
1310.4894 | Traffic Control for Network Protection Against Spreading Processes | cs.SY cs.SI math.OC | Epidemic outbreaks in human populations are facilitated by the underlying
transportation network. We consider strategies for containing a viral spreading
process by optimally allocating a limited budget to three types of protection
resources: (i) traffic control resources, (ii) preventative resources, and
(iii) corrective resources. Traffic control resources are employed to impose
restrictions on the traffic flowing across directed edges in the transportation
network. Preventative resources are allocated to nodes to reduce the
probability of infection at that node (e.g. vaccines), and corrective resources
are allocated to nodes to increase the recovery rate at that node (e.g.
antidotes). We assume these resources have monetary costs associated with them,
from which we formalize an optimal budget allocation problem which maximizes
containment of the infection. We present a polynomial time solution to the
optimal budget allocation problem using Geometric Programming (GP) for an
arbitrary weighted and directed contact network and a large class of resource
cost functions. We illustrate our approach by designing optimal traffic control
strategies to contain an epidemic outbreak that propagates through a real-world
air transportation network.
|
1310.4896 | A unified characterization of generalized information and certainty
measures | cs.IT math.IT | In this paper we consider the axiomatic characterization of information and
certainty measures in a unified way. We present the general axiomatic system
which captures the common properties of a large number of the measures
previously considered by numerous authors. We provide the corresponding
characterization theorems and define a new generalized measure called the
Inforcer, which is the quasi-linear mean of the function associated to the
event probability following the general composition law. In particular, we pay
attention to the polynomial composition and the corresponding polynomially
composable Inforcer measure. The most common measures appearing in the
literature can be obtained by specific choices of the parameters in our generic
measures; these are listed in tables.
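The quasi-linear mean underlying such generalized measures is easy to illustrate. The sketch below shows how an identity and an exponential generator function recover the Shannon and Rényi entropies, respectively; the Inforcer's general composition law itself is not reproduced here.

```python
import math

def quasi_linear_mean(p, x, phi, phi_inv):
    """Quasi-linear mean: phi^{-1}( sum_i p_i * phi(x_i) )."""
    return phi_inv(sum(pi * phi(xi) for pi, xi in zip(p, x)))

def shannon_entropy(p):
    # With the identity generator, the quasi-linear mean of the
    # information values -log2(p_i) is the ordinary Shannon entropy.
    info = [-math.log2(pi) for pi in p]
    return quasi_linear_mean(p, info, lambda t: t, lambda t: t)

def renyi_entropy(p, alpha):
    # An exponential generator recovers the Renyi entropy of order alpha:
    # phi(t) = 2^((1-alpha) t), phi^{-1}(y) = log2(y) / (1-alpha).
    phi = lambda t: 2.0 ** ((1 - alpha) * t)
    phi_inv = lambda y: math.log2(y) / (1 - alpha)
    info = [-math.log2(pi) for pi in p]
    return quasi_linear_mean(p, info, phi, phi_inv)
```

Other choices of generator and composition law give the further measures characterized in the paper.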
|
1310.4899 | Laplacian Spectral Properties of Graphs from Random Local Samples | cs.SI cs.DM math.OC | The Laplacian eigenvalues of a network play an important role in the analysis
of many structural and dynamical network problems. In this paper, we study the
relationship between the eigenvalue spectrum of the normalized Laplacian matrix
and the structure of `local' subgraphs of the network. We call a subgraph
\emph{local} when it is induced by the set of nodes obtained from a
breadth-first search (BFS) of radius $r$ around a node. In this paper, we
propose techniques to estimate spectral properties of the normalized Laplacian
matrix from a random collection of induced local subgraphs. In particular, we
provide an algorithm to estimate the spectral moments of the normalized
Laplacian matrix (the power-sums of its eigenvalues). Moreover, we propose a
technique, based on convex optimization, to compute upper and lower bounds on
the spectral radius of the normalized Laplacian matrix from local subgraphs. We
illustrate our results studying the normalized Laplacian spectrum of a
large-scale online social network.
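The key observation behind such estimators is that the diagonal entry (L^k)_ii counts weighted closed walks of length k from node i, which only visit nodes inside a ball of radius k around i. A sketch of a moment estimator built on that fact, assuming NetworkX is available and the graph has no isolated nodes (the authors' exact sampling scheme is not reproduced):

```python
import numpy as np
import networkx as nx

def local_moment_estimate(G, k, sample, rng=None):
    """Estimate the k-th spectral moment (1/n) tr(L^k) of the normalized
    Laplacian L from BFS balls of radius k around uniformly sampled
    nodes. (L^k)_ii is computed on the local subgraph only; closed walks
    of length k from i never leave the radius-k ball, so interior degrees
    match the full graph. Assumes G has no isolated nodes."""
    rng = np.random.default_rng(rng)
    nodes = list(G)
    picks = rng.choice(len(nodes), size=sample, replace=True)
    acc = 0.0
    for idx in picks:
        v = nodes[idx]
        ball = nx.ego_graph(G, v, radius=k)
        L = nx.normalized_laplacian_matrix(ball).toarray()
        i = list(ball).index(v)
        acc += np.linalg.matrix_power(L, k)[i, i]
    return acc / sample
```

On a vertex-transitive graph every sampled ball gives the same diagonal entry, so the estimate is exact; in general it is an unbiased sample average.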
|
1310.4909 | Text Classification For Authorship Attribution Analysis | cs.DL cs.CL cs.LG | Authorship attribution mainly deals with the undecided authorship of literary
texts. It is useful in resolving issues such as uncertain authorship,
recognizing the authorship of unknown texts, spotting plagiarism, and so on.
Each author has an inborn style of writing that is particular to that author,
and statistical quantitative techniques can be used to differentiate an
author's approach numerically. The basic methodologies used in computational
stylometry are word length, sentence length, vocabulary richness, frequencies,
etc. The problem can be broken down into three sub-problems: author
identification, author characterization, and similarity detection. The steps
involved are pre-processing, feature extraction, classification, and author
identification, for which different classifiers can be used. Here a fuzzy
learning classifier and an SVM are used. For author identification, the SVM
was found to be more accurate than the fuzzy classifier. The two classifiers
were then combined, yielding better accuracy than either the SVM or the fuzzy
classifier individually.
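The stylometric features mentioned (word length, sentence length, vocabulary richness) are simple to extract; a minimal sketch, with naive regex tokenization as our assumption:

```python
import re
from statistics import mean

def stylometric_features(text):
    """Basic stylometric features: average word length, average sentence
    length in words, and vocabulary richness as the type-token ratio."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    types = {w.lower() for w in words}
    return {
        "avg_word_len": mean(len(w) for w in words),
        "avg_sent_len": len(words) / len(sentences),
        "type_token_ratio": len(types) / len(words),
    }
```

Vectors of such features per document are what the SVM and fuzzy classifiers would consume.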
|
1310.4914 | Activity date estimation in timestamped interaction networks | math.ST cs.SI stat.TH | We propose in this paper a new generative model for graphs that uses a latent
space approach to explain timestamped interactions. The model is designed to
provide global estimates of activity dates in historical networks where only
the interaction dates between agents are known with reasonable precision.
Experimental results show that the model provides better results than local
averages in sufficiently dense networks.
|
1310.4938 | A Logic-based Approach for Recognizing Textual Entailment Supported by
Ontological Background Knowledge | cs.CL cs.AI cs.LO | We present the architecture and the evaluation of a new system for
recognizing textual entailment (RTE). In RTE we want to identify automatically
the type of a logical relation between two input texts. In particular, we are
interested in proving the existence of an entailment between them. We conceive
our system as a modular environment allowing for a high-coverage syntactic and
semantic text analysis combined with logical inference. For the syntactic and
semantic analysis we combine a deep semantic analysis with a shallow one
supported by statistical models in order to increase the quality and the
accuracy of the results. For RTE we use first-order logical inference employing
model-theoretic techniques and automated reasoning tools. The inference is
supported with problem-relevant background knowledge extracted automatically
and on demand from external sources like, e.g., WordNet, YAGO, and OpenCyc, or
other, more experimental sources with, e.g., manually defined presupposition
resolutions, or with axiomatized general and common sense knowledge. The
results show that fine-grained and consistent knowledge coming from diverse
sources is a necessary condition determining the correctness and traceability
of results.
|
1310.4939 | Asymptotically optimal decision rules for joint detection and source
coding | cs.IT math.IT | The problem of joint detection and lossless source coding is considered. We
derive asymptotically optimal decision rules for deciding whether or not a
sequence of observations has emerged from a desired information source, and to
compress it if has. In particular, our decision rules asymptotically minimize
the cost of compression in the case that the data has been classified as
`desirable', subject to given constraints on the two kinds of the probability
of error. In another version of this performance criterion, the constraint on
the false alarm probability is replaced by a constraint on the cost of
compression in the false alarm event. We then analyze the asymptotic
performance of these decision rules and demonstrate that they may exhibit
certain phase transitions. We also derive universal decision rules for the case
where the underlying sources (under either hypothesis or both) are unknown, and
training sequences from each source may or may not be available. Finally, we
discuss how our framework can be extended in several directions.
|
1310.4943 | N-continuous OFDM: System Optimization and Performance Analysis | cs.IT math.IT | N-continuous orthogonal frequency division multiplexing (NC-OFDM) is a
promising technique to achieve significant sidelobe suppression of baseband
OFDM signals. However, the high complexity limits its application. Based on
conventional NC-OFDM, in this paper, a new technique, called time-domain
N-continuous OFDM (TD-NC-OFDM), is proposed to transfer the original
frequency-domain processing to the time domain, by the linear combination of a
novel basis set to smooth the consecutive OFDM symbols and their high-order
derivatives. We prove that TD-NC-OFDM is equivalent to the conventional scheme
while incurring much lower complexity. Furthermore, via the time-domain
structure, a closed-form spectral expression of NC-OFDM signals is derived and
a compact upper bound on the sidelobe decay is obtained. This paper also
investigates the impact of the TD-NC-OFDM technique on received
signal-to-interference-plus-noise ratio (SINR) and provides a closed-form
analytical expression. Theoretical analyses and simulation results show that
TD-NC-OFDM can effectively suppress the sidelobes with much lower complexity.
|
1310.4945 | A novel sparsity and clustering regularization | cs.LG cs.CV stat.ML | We propose a novel SPARsity and Clustering (SPARC) regularizer, which is a
modified version of the previous octagonal shrinkage and clustering algorithm
for regression (OSCAR), in which the proposed regularizer consists of a
$K$-sparse constraint and a pair-wise $\ell_{\infty}$ norm restricted to the
$K$ largest components in magnitude. The proposed regularizer is able to
separably enforce $K$-sparsity and encourage the non-zeros to be equal in
magnitude. Moreover, it can accurately group the features without shrinking
their magnitude. In fact, SPARC is closely related to OSCAR, so that the
proximity operator of the former can be efficiently computed based on that of
the latter, allowing using proximal splitting algorithms to solve problems with
SPARC regularization. Experiments on synthetic data and with benchmark breast
cancer data show that SPARC is a competitive group-sparsity inducing
regularizer for regression and classification.
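A minimal sketch of the regularizer's value (not the proximity operator derived in the paper; the test vectors and the weight are illustrative): the $K$-sparse constraint is enforced as an indicator, and the pair-wise $\ell_{\infty}$ term sums the larger magnitude over all pairs of the $K$ largest components.

```python
def sparc_penalty(x, K, lam):
    """Illustrative SPARC value: +inf if x has more than K non-zeros,
    else lam times the pairwise l_inf sum over the K largest magnitudes."""
    mags = sorted((abs(v) for v in x), reverse=True)
    if any(m > 0 for m in mags[K:]):
        return float('inf')          # K-sparsity constraint violated
    top = mags[:K]
    pair = sum(max(top[i], top[j])
               for i in range(len(top)) for j in range(i + 1, len(top)))
    return lam * pair
```

Note that equalizing the magnitudes of the kept components does not increase the pairwise term, which is how the penalty encourages the non-zeros to be equal in magnitude without shrinking the larger one.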
|
1310.4954 | Compressed Vertical Partitioning for Full-In-Memory RDF Management | cs.DB cs.DS cs.IR | The Web of Data has been gaining momentum, and this leads to the
publication of increasingly many semi-structured datasets following the RDF
model, based on atomic
triple units of subject, predicate, and object. Although it is a simple model,
compression methods become necessary because datasets are increasingly larger
and various scalability issues arise around their organization and storage.
This requirement is more restrictive in RDF stores because efficient SPARQL
resolution on the compressed RDF datasets is also required.
This article introduces a novel RDF indexing technique (called k2-triples)
supporting efficient SPARQL resolution in compressed space. k2-triples uses
the predicate to vertically partition the dataset into disjoint subsets of
pairs (subject, object), one per predicate. These subsets are represented as
binary matrices in which 1-bits mean that the corresponding triple exists in
the dataset. This model results in very sparse matrices, which are efficiently
compressed using k2-trees. We enhance this model with two compact indexes
listing the predicates related to each different subject and object, in order
to address the specific weaknesses of vertically partitioned representations.
The resulting technique not only achieves by far the most compressed
representations, but also the best overall performance for RDF retrieval in our
experiments. Our approach uses up to 10 times less space than a state of the
art baseline, and outperforms it by several orders of magnitude on
the most basic query patterns. In addition, we optimize traditional join
algorithms on k2-triples and define a novel one leveraging its specific
features. Our experimental results show that our technique outperforms
traditional vertical partitioning for join resolution, reporting the best
numbers for joins in which the non-joined nodes are provided, and being
competitive in the majority of the cases.
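The vertical partitioning idea can be sketched with plain Python sets standing in for the compressed k2-trees (the toy triples are invented for illustration): one binary (subject, object) matrix per predicate, plus the two compact per-subject and per-object predicate indexes.

```python
from collections import defaultdict

triples = [
    ("alice", "knows", "bob"),
    ("alice", "likes", "carol"),
    ("bob",   "knows", "carol"),
]

matrix = defaultdict(set)            # predicate -> set of (subject, object)
preds_of_subject = defaultdict(set)  # index: predicates related to each subject
preds_of_object = defaultdict(set)   # index: predicates related to each object

for s, p, o in triples:
    matrix[p].add((s, o))
    preds_of_subject[s].add(p)
    preds_of_object[o].add(p)

def ask(s, p, o):
    """Does triple (s, p, o) exist? (a 1-bit in predicate p's matrix)"""
    return (s, o) in matrix[p]
```

The two auxiliary indexes answer "which predicates mention this subject/object?" without scanning every per-predicate matrix, which is the weakness of vertical partitioning that the abstract addresses.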
|
1310.4975 | Competitive dynamics of lexical innovations in multi-layer networks | physics.soc-ph cs.SI | We study the introduction of lexical innovations into a community of language
users. Lexical innovations, i.e., new terms added to people's vocabulary, play
an important role in the process of language evolution. Nowadays, information
is spread through a variety of networks, including, among others, online and
offline social networks and the World Wide Web. The entire system, comprising
networks of different nature, can be represented as a multi-layer network. In
this context, the diffusion of lexical innovations occurs in a peculiar fashion. In
particular, a lexical innovation can undergo three different processes: its
original meaning is accepted; its meaning can be changed or misunderstood
(e.g., when not properly explained), hence more than one meaning can emerge in
the population; lastly, in the case of a loan word, it can be translated into
the population language (i.e., defining a new lexical innovation or using a
synonym) or into a dialect spoken by part of the population. Therefore, lexical
innovations cannot be considered simply as information. We develop a model for
analyzing this scenario using a multi-layer network comprising a social network
and a media network. The latter represents the set of all information systems
of a society, e.g., television, the World Wide Web and radio. Furthermore, we
identify temporal directed edges between the nodes of these two networks. In
particular, at each time step, nodes of the media network can be connected to
randomly chosen nodes of the social network and vice versa. In so doing,
information spreads through the whole system and people can share a lexical
innovation with their neighbors or, in the event they work as reporters, by
using media nodes. Lastly, we use the concept of "linguistic sign" to model
lexical innovations, showing its fundamental role in the study of these
dynamics. We support the analysis with extensive numerical simulations.
|
1310.4977 | Learning Tensors in Reproducing Kernel Hilbert Spaces with Multilinear
Spectral Penalties | cs.LG | We present a general framework to learn functions in tensor product
reproducing kernel Hilbert spaces (TP-RKHSs). The methodology is based on a
novel representer theorem suitable for existing as well as new spectral
penalties for tensors. When the functions in the TP-RKHS are defined on the
Cartesian product of finite discrete sets, in particular, our main problem
formulation admits as a special case existing tensor completion problems. Other
special cases include transfer learning with multimodal side information and
multilinear multitask learning. For the latter case, our kernel-based view is
instrumental to derive nonlinear extensions of existing model classes. We give
a novel algorithm and show in experiments the usefulness of the proposed
extensions.
|
1310.4986 | Computing Preferred Extensions in Abstract Argumentation: a SAT-based
Approach | cs.AI | This paper presents a novel SAT-based approach for the computation of
extensions in abstract argumentation, with focus on preferred semantics, and an
empirical evaluation of its performances. The approach is based on the idea of
reducing the problem of computing complete extensions to a SAT problem and then
using a depth-first search method to derive preferred extensions. The proposed
approach has been tested using two distinct SAT solvers and compared with three
state-of-the-art systems for preferred extension computation. It turns out that
the proposed approach delivers significantly better performance in the large
majority of the considered cases.
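The underlying semantics can be illustrated by brute force on a toy argumentation framework (this enumerates admissible sets directly and is not the paper's SAT-based procedure): preferred extensions are the subset-maximal admissible sets.

```python
from itertools import combinations

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}  # a and b attack each other; b attacks c

def conflict_free(S):
    return all((x, y) not in attacks for x in S for y in S)

def defends(S, x):
    # every attacker of x is itself attacked by some member of S
    return all(any((z, y) in attacks for z in S)
               for (y, t) in attacks if t == x)

def admissible(S):
    return conflict_free(S) and all(defends(S, x) for x in S)

subsets = [frozenset(c) for r in range(len(args) + 1)
           for c in combinations(sorted(args), r)]
adm = [S for S in subsets if admissible(S)]
preferred = [S for S in adm if not any(S < T for T in adm)]
```

Here {a, c} and {b} are the preferred extensions: c cannot defend itself against b, but a can defend it.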
|
1310.4993 | Fractional Interference Alignment: An Interference Alignment Scheme for
Finite Alphabet Signals | cs.IT math.IT | Interference Alignment (IA) is a transmission scheme which achieves 1/2
Degrees-of-Freedom (DoF) per transmit-antenna per user. The constraints imposed
on the scheme are based on the linear receiver since conventional IA assumes
Gaussian signaling. However, when the transmitters employ Finite Alphabet (FA)
signaling, neither the conventional IA precoders nor the linear receiver is
an optimal structure. Therefore, a novel Fractional Interference Alignment (FIA)
scheme is introduced when FA signals are used, where the alignment constraints
are now based on the non-linear, minimum distance (MD) detector. Since DoF is
defined only as signal-to-noise ratio tends to infinity, we introduce a new
metric called SpAC (number of Symbols transmitted-per-transmit
Antenna-per-Channel use) for analyzing the FIA scheme. The maximum SpAC is one,
and the FIA achieves any value of SpAC in the range [0,1]. The key motivation
for this work is that numerical simulations with FA signals and the MD detector
for fixed SpAC (=1/2, as in IA), over a set of optimization problems such as
minimizing the bit error rate or maximizing the mutual information, achieve a
significantly better error-rate performance than the existing algorithms that
minimize the mean square error or maximize the signal-to-interference-plus-noise
ratio.
|
1310.5007 | Online Classification Using a Voted RDA Method | cs.LG stat.ML | We propose a voted dual averaging method for online classification problems
with explicit regularization. This method employs the update rule of the
regularized dual averaging (RDA) method, but only on the subsequence of
training examples where a classification error is made. We derive a bound on
the number of mistakes made by this method on the training set, as well as its
generalization error rate. We also introduce the concept of relative strength
of regularization, and show how it affects the mistake bound and generalization
performance. We experimented with the method using $\ell_1$ regularization on a
large-scale natural language processing task, and obtained state-of-the-art
classification performance with fairly sparse models.
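A minimal sketch of a mistake-driven l1-RDA update of the kind described above, assuming the standard soft-thresholding closed form for the l1-regularized dual average (the hyperparameters `lam`, `gamma` and the toy data are illustrative, not the paper's experimental setup). The dual average of subgradients is updated only on rounds where the current weights misclassify:

```python
import math

def voted_rda_train(data, lam=0.1, gamma=1.0, epochs=2):
    dim = len(data[0][0])
    gsum = [0.0] * dim      # running sum of subgradients (mistakes only)
    w = [0.0] * dim
    k = 0                   # number of mistakes so far
    for _ in range(epochs):
        for x, y in data:
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score <= 0:               # mistake: do an RDA step
                k += 1
                for j in range(dim):
                    gsum[j] += -y * x[j]     # hinge subgradient at the mistake
                for j in range(dim):
                    gbar = gsum[j] / k
                    if abs(gbar) <= lam:
                        w[j] = 0.0           # sparsity from the l1 term
                    else:
                        w[j] = -(math.sqrt(k) / gamma) * (
                            gbar - lam * math.copysign(1.0, gbar))
    return w

# Linearly separable toy data: the label is the sign of the first coordinate.
data = [([1.0, 0.0], 1), ([2.0, 0.1], 1),
        ([-1.0, 0.0], -1), ([-2.0, -0.1], -1)]
w = voted_rda_train(data)
```

The uninformative second coordinate stays exactly zero, illustrating how the l1 term yields sparse models.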
|
1310.5008 | Thompson Sampling in Dynamic Systems for Contextual Bandit Problems | cs.LG | We consider multi-armed bandit problems in time-varying dynamic systems
with rich structural features. For the nonlinear dynamic model, we propose
approximate inference for the posterior distributions based on Laplace
approximation. For contextual bandit problems, Thompson Sampling is adopted
based on the underlying posterior distributions of the parameters. More
specifically, we introduce discount decays on the impact of previous samples
and analyze different decay rates with respect to the underlying sample
dynamics. Consequently, exploration and exploitation are traded off adaptively
according to the dynamics in the system.
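A sketch of the discounting idea for a two-armed Bernoulli bandit (the decay-all-counts scheme, the discount factor, and the arm probabilities are illustrative assumptions, not the paper's exact model): the Beta posterior counts of every arm are decayed toward the prior each step, so old samples gradually lose influence and the sampler can track a changing system.

```python
import random

def discounted_thompson(probs, steps=2000, discount=0.99, seed=0):
    rng = random.Random(seed)
    n = len(probs)
    alpha = [1.0] * n        # Beta posterior successes (+1 prior)
    beta = [1.0] * n         # Beta posterior failures (+1 prior)
    pulls = [0] * n
    for _ in range(steps):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n)]
        arm = max(range(n), key=lambda i: samples[i])
        reward = 1 if rng.random() < probs[arm] else 0
        for i in range(n):   # decay all counts toward the prior
            alpha[i] = 1.0 + discount * (alpha[i] - 1.0)
            beta[i] = 1.0 + discount * (beta[i] - 1.0)
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = discounted_thompson([0.2, 0.8])
```

With `discount` close to 1 the effective sample window is roughly 1/(1-discount), which controls how quickly exploration resumes after the system changes.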
|
1310.5022 | Division of the Energy Market into Zones in Variable Weather Conditions
using Locational Marginal Prices | cs.CE cs.CY cs.SY | Adopting a zonal structure for an electricity market requires specifying the
borders of the zones. One of the approaches to identifying zones is based on
clustering of Locational Marginal Prices (LMP). The purpose of the paper is
twofold: (i) we extend the LMP methodology by taking into account variable
weather conditions and (ii) we point out some weaknesses of the method and
suggest potential solutions. The proposed extension comprises simulations
based on the Optimal Power Flow (OPF) algorithm and a two-stage clustering
method. First, LMP are calculated by OPF for each scenario representing
different weather conditions. Second, hierarchical clustering based on Ward's
criterion is applied to each realization of the prices separately. Then,
another clustering method, consensus clustering, is used to aggregate the
results from all simulations and to find the global division into zones. The
proposed aggregation method is not limited to the LMP methodology and is
universally applicable.
|
1310.5025 | The Optimal Division of the Energy Market into Zones: Comparison of Two
Methodologies under Variable Wind Conditions | cs.CE cs.CY cs.SY | We compare two competing methodologies of market zones identification under
the criterion of social welfare maximization: (i) consensus clustering of
Locational Marginal Prices over different wind scenarios and (ii) congestion
contribution identification with congested lines identified across variable
wind generation outputs. We test the division of the market into zones based
on each of the two methodologies using a welfare criterion, i.e., comparing
the cost of supplying energy on a uniform market (including readjustments made
on a balancing market to overcome congestion) with the cost on a k-zone
market. A division which maximizes the welfare is considered optimal.
|
1310.5034 | A Theoretical and Experimental Comparison of the EM and SEM Algorithm | cs.LG stat.ML | In this paper we provide a new analysis of the SEM algorithm. Unlike previous
work, we focus on the analysis of a single run of the algorithm. First, we
discuss the algorithm for general mixture distributions. Second, we consider
Gaussian mixture models and show that with high probability the update
equations of the EM algorithm and its stochastic variant are almost the same,
given that the input set is sufficiently large. Our experiments confirm that
this still holds for a large number of successive update steps. In particular,
for Gaussian mixture models, we show that the stochastic variant runs nearly
twice as fast.
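The comparison can be illustrated on a toy 1-D two-component mixture (all parameters and data are invented for illustration): one EM mean update uses soft responsibilities, while one SEM update replaces them with a sampled hard assignment. With enough data the two updates nearly coincide, as the abstract states.

```python
import math
import random

def responsibilities(x, mu, sigma=1.0):
    w = [math.exp(-0.5 * ((x - m) / sigma) ** 2) for m in mu]
    s = sum(w)
    return [v / s for v in w]

def em_step(data, mu):
    # soft E-step followed by the weighted-mean M-step
    R = [responsibilities(x, mu) for x in data]
    return [sum(r[k] * x for r, x in zip(R, data)) / sum(r[k] for r in R)
            for k in range(len(mu))]

def sem_step(data, mu, seed=0):
    # stochastic E-step: sample a hard assignment from the responsibilities
    rng = random.Random(seed)
    z = [0 if rng.random() < responsibilities(x, mu)[0] else 1 for x in data]
    return [sum(x for x, zi in zip(data, z) if zi == k) /
            max(1, sum(1 for zi in z if zi == k))
            for k in range(len(mu))]

rng = random.Random(1)
data = ([rng.gauss(-2.0, 1.0) for _ in range(2000)] +
        [rng.gauss(2.0, 1.0) for _ in range(2000)])
mu_em = em_step(data, [-1.0, 1.0])
mu_sem = sem_step(data, [-1.0, 1.0])
```

The SEM step avoids accumulating per-point responsibility weights, which is the source of its speed advantage in practice.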
|
1310.5035 | Linearized Alternating Direction Method with Parallel Splitting and
Adaptive Penalty for Separable Convex Programs in Machine Learning | cs.NA cs.LG math.OC stat.ML | Many problems in machine learning and other fields can be (re)formulated as
linearly constrained separable convex programs. In most of the cases, there are
multiple blocks of variables. However, the traditional alternating direction
method (ADM) and its linearized version (LADM, obtained by linearizing the
quadratic penalty term) are for the two-block case and cannot be naively
generalized to solve the multi-block case. There is thus great demand for
extending the ADM-based methods to the multi-block case. In this paper, we
propose LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve
multi-block separable convex programs efficiently. When all the component
objective functions have bounded subgradients, we obtain convergence results
that are stronger than those of ADM and LADM, e.g., allowing the penalty
parameter to be unbounded and proving the sufficient and necessary conditions
for global convergence. We further propose a simple optimality measure and
reveal the convergence rate of LADMPSAP in an ergodic sense. For programs with
extra convex set constraints, with refined parameter estimation we devise a
practical version of LADMPSAP for faster convergence. Finally, we generalize
LADMPSAP to handle programs with more difficult objective functions by
linearizing part of the objective function as well. LADMPSAP is particularly
suitable for sparse representation and low-rank recovery problems because its
subproblems have closed form solutions and the sparsity and low-rankness of the
iterates can be preserved during the iteration. It is also highly
parallelizable and hence fits for parallel or distributed computing. Numerical
experiments testify to the advantages of LADMPSAP in speed and numerical
accuracy.
|
1310.5042 | Distributional semantics beyond words: Supervised learning of analogy
and paraphrase | cs.LG cs.AI cs.CL cs.IR | There have been several efforts to extend distributional semantics beyond
individual words, to measure the similarity of word pairs, phrases, and
sentences (briefly, tuples; ordered sets of words, contiguous or
noncontiguous). One way to extend beyond words is to compare two tuples using a
function that combines pairwise similarities between the component words in the
tuples. A strength of this approach is that it works with both relational
similarity (analogy) and compositional similarity (paraphrase). However, past
work required hand-coding the combination function for different tasks. The
main contribution of this paper is that combination functions are generated by
supervised learning. We achieve state-of-the-art results in measuring
relational similarity between word pairs (SAT analogies and SemEval~2012 Task
2) and measuring compositional similarity between noun-modifier phrases and
unigrams (multiple-choice paraphrase questions).
|
1310.5047 | Higher-order structure and epidemic dynamics in clustered networks | physics.soc-ph cs.SI q-bio.PE | Clustering is typically measured by the ratio of triangles to all triples,
open or closed. Generating clustered networks, and how clustering affects
dynamics on networks, is reasonably well understood for certain classes of
networks \cite{vmclust, karrerclust2010}, e.g., networks composed of lines and
non-overlapping triangles. In this paper we show that it is possible to
generate networks which, despite having the same degree distribution and equal
clustering, exhibit different higher-order structure, specifically, overlapping
triangles and other order-four (a closed network motif composed of four nodes)
structures. To distinguish and quantify these additional structural features,
we develop a new network metric capable of measuring order-four structure
which, when used alongside traditional network metrics, allows us to more
accurately describe a network's topology. Three network generation algorithms
are considered: a modified configuration model and two rewiring algorithms. By
generating homogeneous networks with equal clustering we study and quantify
their structural differences, and using SIS (Susceptible-Infected-Susceptible)
and SIR (Susceptible-Infected-Recovered) dynamics we investigate
computationally how differences in higher-order structure impact on epidemic
threshold, final epidemic or prevalence levels and time evolution of epidemics.
Our results suggest that characterising and measuring higher-order network
structure is needed to advance our understanding of the impact of network
topology on dynamics unfolding on the networks.
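The baseline clustering measure mentioned in the first sentence, the ratio of closed triangles to all connected triples, can be computed directly on a toy graph (the edge list is invented for illustration):

```python
from itertools import combinations

edges = {(0, 1), (1, 2), (0, 2), (2, 3)}   # one triangle plus a pendant edge
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

# a triple is a path of length two centered at some vertex v
triples = sum(len(list(combinations(adj[v], 2))) for v in adj)
# a triple is closed when its two endpoints are also adjacent
closed = sum(1 for v in adj
             for a, b in combinations(adj[v], 2) if b in adj[a])
clustering = closed / triples
```

Each triangle is counted once per center, so `closed` equals three times the number of triangles; the graph above has one triangle and five triples, giving a clustering of 0.6.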
|
1310.5059 | Squashing model for detectors and applications to quantum key
distribution protocols | quant-ph cs.CR cs.IT math.IT | We develop a framework that allows a description of measurements in Hilbert
spaces that are smaller than their natural representation. This description,
which we call a "squashing model", consists of a squashing map that maps the
input states of the measurement from the original Hilbert space to the smaller
one, followed by a targeted prescribed measurement on the smaller Hilbert
space. This framework has applications in quantum key distribution, but also in
other cryptographic tasks, as it greatly simplifies the theoretical analysis
under adversarial conditions.
|
1310.5061 | On dual toric complete intersection codes | math.AG cs.IT math.IT | In this paper we study duality for evaluation codes on intersections of d
hypersurfaces with given d-dimensional Newton polytopes, so called toric
complete intersection codes. In particular, we give a condition for such a code
to be quasi-self-dual. In the case of d=2 it reduces to a combinatorial
condition on the Newton polygons. This allows us to give an explicit
construction of dual and quasi-self-dual toric complete intersection codes. We
provide a list of examples over the field of 16 elements.
|
1310.5062 | Aspects of randomness in neural graph structures | physics.soc-ph cs.SI q-bio.NC | In the past two decades, significant advances have been made in understanding
the structural and functional properties of biological networks, via
graph-theoretic analysis. In general, most graph-theoretic studies are
conducted in the presence of serious uncertainties, such as major undersampling
of the experimental data. In the specific case of neural systems, however, a
few moderately robust experimental reconstructions do exist, and these have
long served as fundamental prototypes for studying connectivity patterns in the
nervous system. In this paper, we provide a comparative analysis of these
"historical" graphs, both in (unmodified) directed and (often symmetrized)
undirected forms, and focus on simple structural characterizations of their
connectivity. We find that in most measures the networks studied are captured
by simple random graph models; in a few key measures, however, we observe a
marked departure from the random graph prediction. Our results suggest that the
mechanism of graph formation in the networks studied is not well-captured by
existing abstract graph models, such as the small-world or scale-free graph.
|
1310.5082 | On the Suitable Domain for SVM Training in Image Coding | cs.CV cs.LG stat.ML | Conventional SVM-based image coding methods are founded on independently
restricting the distortion in every image coefficient at some particular image
representation. Geometrically, this implies allowing arbitrary signal
distortions in an $n$-dimensional rectangle defined by the
$\varepsilon$-insensitivity zone in each dimension of the selected image
representation domain. Unfortunately, not every image representation domain is
well-suited for such a simple, scalar-wise, approach because statistical and/or
perceptual interactions between the coefficients may exist. These interactions
imply that scalar approaches may induce distortions that do not follow the
image statistics and/or are perceptually annoying. Taking into account these
relations would imply using non-rectangular $\varepsilon$-insensitivity regions
(allowing coupled distortions in different coefficients), which is beyond the
conventional SVM formulation.
In this paper, we report a condition on the suitable domain for developing
efficient SVM image coding schemes. We analytically demonstrate that no linear
domain fulfills this condition because of the statistical and perceptual
inter-coefficient relations that exist in these domains. This theoretical
result is experimentally confirmed by comparing SVM learning in previously
reported linear domains and in a recently proposed non-linear perceptual domain
that simultaneously reduces the statistical and perceptual relations (so it is
closer to fulfilling the proposed condition). These results highlight the
relevance of an appropriate choice of the image representation before SVM
learning.
|
1310.5089 | Kernel Multivariate Analysis Framework for Supervised Subspace Learning:
A Tutorial on Linear and Kernel Multivariate Methods | stat.ML cs.LG | Feature extraction and dimensionality reduction are important tasks in many
fields of science dealing with signal processing and analysis. The relevance of
these techniques is increasing as current sensory devices are developed with
ever higher resolution, and problems involving multimodal data sources become
more common. A plethora of feature extraction methods are available in the
literature collectively grouped under the field of Multivariate Analysis (MVA).
This paper provides a uniform treatment of several methods: Principal Component
Analysis (PCA), Partial Least Squares (PLS), Canonical Correlation Analysis
(CCA) and Orthonormalized PLS (OPLS), as well as their non-linear extensions
derived by means of the theory of reproducing kernel Hilbert spaces. We also
review their connections to other methods for classification and statistical
dependence estimation, and introduce some recent developments to deal with the
extreme cases of large-scale and low-sized problems. To illustrate the wide
applicability of these methods in both classification and regression problems,
we analyze their performance in a benchmark of publicly available data sets,
and pay special attention to specific real applications involving audio
processing for music genre prediction and hyperspectral satellite images for
Earth and climate monitoring.
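As a minimal sketch of the simplest method in this family, PCA, the leading principal component can be obtained by power iteration on the sample covariance of centered data (a pure-Python toy under invented data, not the paper's kernel machinery):

```python
import math
import random

def first_pc(X, iters=200):
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    # sample covariance matrix of the centered data
    C = [[sum(r[i] * r[j] for r in Xc) / (n - 1) for j in range(d)]
         for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):  # power iteration toward the top eigenvector
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

rng = random.Random(0)
# toy 2-D points spread along the direction (1, 1) with small noise
X = [[t + rng.gauss(0, 0.1), t + rng.gauss(0, 0.1)]
     for t in [rng.uniform(-1, 1) for _ in range(500)]]
pc = first_pc(X)
```

The kernel variants reviewed in the paper replace the covariance above with Gram-matrix analogues in a reproducing kernel Hilbert space.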
|
1310.5095 | Regularization in Relevance Learning Vector Quantization Using l one
Norms | stat.ML cs.LG | We propose in this contribution a method for $l_1$ regularization in
prototype based relevance learning vector quantization (LVQ) for sparse
relevance profiles. Sparse relevance profiles in hyperspectral data analysis
fade down those spectral bands which are not necessary for classification. In
particular, we consider the sparsity in the relevance profile enforced by LASSO
optimization. The latter is obtained by a gradient learning scheme using a
differentiable parametrized approximation of the $l_{1}$-norm, which has an
upper error bound. We extend this regularization idea also to the matrix
learning variant of LVQ as the natural generalization of relevance learning.
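A common differentiable surrogate of this kind replaces $|x|$ with $\sqrt{x^2 + \varepsilon}$; the sketch below (the exact parametrization in the paper may differ) illustrates the bounded approximation error and the everywhere-defined gradient that make gradient learning of a LASSO-style objective possible:

```python
import math

def smooth_l1(v, eps=1e-4):
    """Differentiable l1 surrogate: sum of sqrt(x^2 + eps) over components."""
    return sum(math.sqrt(x * x + eps) for x in v)

def smooth_l1_grad(v, eps=1e-4):
    """Gradient of the surrogate; defined even at x = 0."""
    return [x / math.sqrt(x * x + eps) for x in v]

v = [0.5, -0.2, 0.0]
exact = sum(abs(x) for x in v)
approx = smooth_l1(v)
```

Each term overshoots $|x|$ by at most $\sqrt{\varepsilon}$, so the total error is bounded by $d\sqrt{\varepsilon}$ for a $d$-dimensional relevance profile.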
|
1310.5096 | Opinion Dynamic with agents immigration | physics.soc-ph cs.SI | We propose a strategy for achieving maximum cooperation in evolutionary games
on complex networks. Each individual is assigned a weight that is proportional
to the power of its degree, where the exponent alpha is an adjustable parameter
that controls the level of diversity among individuals in the network. During
the evolution, every individual chooses one of its neighbors as a reference
with a probability proportional to the weight of the neighbor, and updates its
strategy depending on their payoff difference. It is found that there exists an
optimal value of alpha, for which the level of cooperation reaches its maximum.
This phenomenon indicates that, although high-degree individuals play a
prominent role in maintaining the cooperation, too strong influences from the
hubs may counterintuitively inhibit the diffusion of cooperation. We provide a
physical theory, aided by numerical computations, to explain the emergence of
the optimal cooperation. Other pertinent quantities such as the payoff, the
cooperator density as a function of the degree, and the payoff distribution,
are also investigated. Our results suggest that, in order to achieve strong
cooperation on a complex network, individuals should learn more frequently from
neighbors with higher degrees, but only to a certain extent.
|
1310.5107 | Advances in Hyperspectral Image Classification: Earth monitoring with
statistical learning methods | cs.CV | Hyperspectral images show similar statistical properties to natural grayscale
or color photographic images. However, the classification of hyperspectral
images is more challenging because of the very high dimensionality of the
pixels and the small number of labeled examples typically available for
learning. These peculiarities lead to particular signal processing problems,
mainly characterized by indetermination and complex manifolds. The framework of
statistical learning has gained popularity in the last decade. New methods have
been presented to account for the spatial homogeneity of images, to include
user's interaction via active learning, to take advantage of the manifold
structure with semisupervised learning, to extract and encode invariances, or
to adapt classifiers and image representations to unseen yet similar scenes.
This tutorial reviews the main advances in hyperspectral remote sensing image
classification through illustrative examples.
|
1310.5111 | Complexity of Word Collocation Networks: A Preliminary Structural
Analysis | cs.SI physics.soc-ph | In this paper, we explore complex network properties of word collocation
networks (Ferret, 2002) from four different genres. Each document of a
particular genre was converted into a network of words with word collocations
as edges. We analyzed graphically and statistically how the global properties
of these networks varied across different genres, and among different network
types within the same genre. Our results indicate that the distributions of
network properties are visually similar but statistically distinguishable across
different genres, and interesting variations emerge when we consider different
network types within a single genre. We further investigate how the global
properties change as we add more and more collocation edges to the graph of one
particular genre, and observe that except for the number of vertices and the
size of the largest connected component, network properties change in phases,
via jumps and drops.
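The network construction can be sketched as follows (the sliding-window definition of a collocation and the window size are illustrative assumptions; the paper follows Ferret, 2002): each word is a node, and an edge links two words that co-occur within the window.

```python
def collocation_network(tokens, window=2):
    """Build an undirected word collocation network from a token list."""
    edges = set()
    for i, w in enumerate(tokens):
        for u in tokens[i + 1:i + 1 + window]:  # words within the window
            if u != w:
                edges.add(frozenset((w, u)))    # undirected edge
    nodes = set(tokens)
    return nodes, edges

tokens = "the cat sat on the mat".split()
nodes, edges = collocation_network(tokens)
```

Widening the window adds collocation edges, which is exactly the knob varied in the phase-transition experiment described above.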
|
1310.5114 | Explore or exploit? A generic model and an exactly solvable case | cond-mat.dis-nn cs.LG physics.soc-ph q-fin.GN | Finding a good compromise between the exploitation of known resources and the
exploration of unknown, but potentially more profitable choices, is a general
problem, which arises in many different scientific disciplines. We propose a
stylized model for these exploration-exploitation situations, including
population or economic growth, portfolio optimisation, evolutionary dynamics,
or the problem of optimal pinning of vortices or dislocations in disordered
materials. We find the exact growth rate of this model for tree-like geometries
and prove the existence of an optimal migration rate in this case. Numerical
simulations in the one-dimensional case confirm the generic existence of an
optimum.
|
1310.5142 | Crowdsourced Task Routing via Matrix Factorization | cs.CY cs.IR | We describe methods to predict a crowd worker's accuracy on new tasks based
on his accuracy on past tasks. Such prediction provides a foundation for
identifying the best workers to route work to in order to maximize accuracy on
the new task. Our key insight is to model similarity of past tasks to the
target task such that past task accuracies can be optimally integrated to
predict target task accuracy. We describe two matrix factorization (MF)
approaches from collaborative filtering which not only exploit such task
similarity, but are known to be robust to sparse data. Experiments on synthetic
and real-world datasets provide feasibility assessment and comparative
evaluation of MF approaches vs. two baseline methods. Across a range of data
scales and task similarity conditions, we evaluate: 1) prediction error over
all workers; and 2) how well each method predicts the best workers to use for
each task. Results show the benefit of task routing over random assignment, the
strength of probabilistic MF over baseline methods, and the robustness of
methods under different conditions.
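A plain SGD matrix-factorization sketch of the kind described (not the specific probabilistic MF variants evaluated in the paper; the toy worker-task accuracy matrix, rank, and learning rates are invented): observed (worker, task, accuracy) entries are fit by low-rank factors, and a held-out worker-task accuracy is predicted from the learned vectors.

```python
import random

def mf_fit(obs, n_workers, n_tasks, rank=2, lr=0.05, reg=0.01,
           epochs=500, seed=0):
    rng = random.Random(seed)
    W = [[rng.uniform(0.1, 0.5) for _ in range(rank)] for _ in range(n_workers)]
    T = [[rng.uniform(0.1, 0.5) for _ in range(rank)] for _ in range(n_tasks)]
    for _ in range(epochs):
        for w, t, acc in obs:
            pred = sum(a * b for a, b in zip(W[w], T[t]))
            err = acc - pred
            for k in range(rank):  # regularized SGD step on both factors
                W[w][k] += lr * (err * T[t][k] - reg * W[w][k])
                T[t][k] += lr * (err * W[w][k] - reg * T[t][k])
    return W, T

def predict(W, T, w, t):
    return sum(a * b for a, b in zip(W[w], T[t]))

# 2 workers x 2 tasks, with worker 1's accuracy on task 1 held out
obs = [(0, 0, 0.9), (0, 1, 0.8), (1, 0, 0.4)]
W, T = mf_fit(obs, 2, 2)
missing = predict(W, T, 1, 1)
```

The prediction for the missing cell is what drives routing: the worker with the highest predicted accuracy on the new task is selected.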
|
1310.5163 | A New Notion of Effective Resistance for Directed Graphs-Part I:
Definition and Properties | math.OC cs.SY | The graphical notion of effective resistance has found wide-ranging
applications in many areas of pure mathematics, applied mathematics and control
theory. By the nature of its construction, effective resistance can only be
computed in undirected graphs and yet in several areas of its application,
directed graphs arise as naturally (or more naturally) than undirected ones. In
part I of this work, we propose a generalization of effective resistance to
directed graphs that preserves its control-theoretic properties in relation to
consensus-type dynamics. We proceed to analyze the dependence of our algebraic
definition on the structural properties of the graph and the relationship
between our construction and a graphical distance. The results make possible
the calculation of effective resistance between any two nodes in any directed
graph and provide a solid foundation for the application of effective
resistance to problems involving directed graphs.
|
1310.5168 | A New Notion of Effective Resistance for Directed Graphs-Part II:
Computing Resistances | math.OC cs.SY | In Part I of this work we defined a generalization of the concept of
effective resistance to directed graphs, and we explored some of the properties
of this new definition. Here, we use the theory developed in Part I to compute
effective resistances in some prototypical directed graphs. This exploration
highlights cases where our notion of effective resistance for directed graphs
behaves analogously to our experience from undirected graphs, as well as cases
where it behaves in unexpected ways.
|
1310.5187 | Distributed Reed-Solomon Codes for Simple Multiple Access Networks | cs.IT math.IT | We consider a simple multiple access network in which a destination node
receives information from multiple sources via a set of relay nodes. Each relay
node has access to a subset of the sources, and is connected to the destination
by a unit capacity link. We also assume that $z$ of the relay nodes are
adversarial. We propose a computationally efficient distributed coding scheme
and show that it achieves the full capacity region for up to three sources.
Specifically, the relay nodes encode in a distributed fashion such that the
overall codewords received at the destination are codewords from a single
Reed-Solomon code.
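The single-code structure the relays jointly emulate can be illustrated with a toy centralized Reed-Solomon encoder over a prime field, where a codeword is the evaluation of the message polynomial at distinct points; the modulus P = 257 and evaluation points are illustrative choices, and this is not the paper's distributed scheme:

```python
P = 257  # prime modulus; an illustrative choice, not from the paper

def rs_encode(msg, n):
    """Encode k message symbols as evaluations of the degree-(k-1)
    message polynomial at the points 1..n (mod P)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(1, n + 1)]

def lagrange_eval(xs, ys, x):
    """Evaluate the unique polynomial of degree < len(xs) through the
    points (xs[i], ys[i]) at x, via Lagrange interpolation mod P."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        # divide by den using Fermat's little theorem inverse
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# k = 3 message symbols, n = 6 codeword symbols: any 3 intact symbols
# determine the whole codeword, so the code tolerates n - k erasures.
msg = [5, 11, 3]
cw = rs_encode(msg, 6)
```

Here any three relay outputs suffice to reconstruct the rest of the codeword, which is the redundancy that lets the destination cope with adversarial relays.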
|
1310.5199 | A Notion of Robustness for Cyber-Physical Systems | cs.SY math.OC | Robustness as a system property describes the degree to which a system is
able to function correctly in the presence of disturbances, i.e., unforeseen or
erroneous inputs. In this paper, we introduce a notion of robustness termed
input-output dynamical stability for cyber-physical systems (CPS) which merges
existing notions of robustness for continuous systems and discrete systems. The
notion captures two intuitive aims of robustness: bounded disturbances have
bounded effects and the consequences of a sporadic disturbance disappear over
time. We present a design methodology for robust CPS which is based on an
abstraction and refinement process. We suggest several novel notions of
simulation relations to ensure the soundness of the approach. In addition, we
show how such simulation relations can be constructed compositionally. The
different concepts and results are illustrated throughout the paper with
examples.
|
1310.5202 | Discriminative Measures for Comparison of Phylogenetic Trees | q-bio.PE cs.CE cs.CG | In this paper we introduce and study three new measures for efficient
discriminative comparison of phylogenetic trees. The NNI navigation
dissimilarity $d_{nav}$ counts the steps along a "combing" of the Nearest
Neighbor Interchange (NNI) graph of binary hierarchies, providing an efficient
approximation to the (NP-hard) NNI distance in terms of "edit length". At the
same time, a closed form formula for $d_{nav}$ presents it as a weighted count
of pairwise incompatibilities between clusters, lending it the character of an
edge dissimilarity measure as well. A relaxation of this formula to a simple
count yields another measure on all trees --- the crossing dissimilarity
$d_{CM}$. Both dissimilarities are symmetric and positive definite (vanish only
between identical trees) on binary hierarchies but they fail to satisfy the
triangle inequality. Nevertheless, both are bounded below by the widely used
Robinson-Foulds metric and bounded above by a closely related true metric, the
cluster-cardinality metric $d_{CC}$. We show that each of the three proposed
new dissimilarities is computable in time $O(n^2)$ in the number of leaves $n$,
and conclude the paper with a brief numerical exploration of the distribution
over tree space of these dissimilarities in comparison with the Robinson-Foulds
metric and the more recently introduced matching-split distance.
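Since both new dissimilarities are bounded below by the Robinson-Foulds metric, that baseline is worth pinning down; a minimal sketch using the convention that RF counts the full symmetric difference of the trees' nontrivial cluster sets (some authors halve this count):

```python
def robinson_foulds(clusters_a, clusters_b):
    """Robinson-Foulds distance between two trees represented by their
    sets of nontrivial clusters (frozensets of leaf labels): the size
    of the symmetric difference of the two cluster sets."""
    return len(clusters_a ^ clusters_b)

# Binary trees on leaves {1,...,4}: ((1,2),(3,4)) versus ((1,3),(2,4)).
t1 = {frozenset({1, 2}), frozenset({3, 4})}
t2 = {frozenset({1, 3}), frozenset({2, 4})}
```

These two trees share no nontrivial clusters, so every cluster of each tree counts toward the distance.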
|
1310.5205 | Robustness of Network of Networks with Interdependent and Interconnected
links | physics.soc-ph cs.SI | Robustness of a network of networks (NON) has been studied only for dependency
coupling (J.X. Gao et al., Nature Physics, 2012) and only for connectivity
coupling (E.A. Leicht and R.M. D'Souza, arXiv:0907.0894). The case of a network
of n networks with both interdependent and interconnected links is more
complicated, and also closer to real-life coupled network systems. Here
we develop a framework to study analytically and numerically the robustness of
this system. For the case of a starlike network of n ER networks, we find that
the system undergoes a change from a second-order to a first-order phase
transition as the coupling strength q increases. We find that increasing
intra-connectivity links or inter-connectivity links increases the robustness
of the system, while interdependency links decrease it. In particular, when
q=1, we find
exact analytical solutions of the giant component and the first order
transition point. Understanding the robustness of network of networks with
interdependent and interconnected links is helpful to design resilient
infrastructures.
|
1310.5207 | A Radial Basis Function (RBF)-Finite Difference Method for the
Simulation of Reaction-Diffusion Equations on Stationary Platelets within the
Augmented Forcing Method | math.NA cs.CE cs.NA q-bio.QM | We present a computational method for solving the coupled problem of chemical
transport in a fluid (blood) with binding/unbinding of the chemical to/from
cellular (platelet) surfaces in contact with the fluid, and with transport of
the chemical on the cellular surfaces. The overall framework is the Augmented
Forcing Point Method (AFM) (\emph{L. Yao and A.L. Fogelson, Simulations of
chemical transport and reaction in a suspension of cells I: An augmented
forcing point method for the stationary case, IJNMF (2012) 69, 1736-52.}) for
solving fluid-phase transport in a region outside of a collection of cells
suspended in the fluid. We introduce a novel Radial Basis Function-Finite
Difference (RBF-FD) method to solve reaction-diffusion equations on the surface
of each of a collection of 2D stationary platelets suspended in blood.
Parametric RBFs are used to represent the geometry of the platelets and give
accurate geometric information needed for the RBF-FD method. Symmetric
Hermite-RBF interpolants are used for enforcing the boundary conditions on the
fluid-phase chemical concentration, and their use removes a significant
limitation of the original AFM. The efficacy of the new methods is shown
through a series of numerical experiments; in particular, second order
convergence for the coupled problem is demonstrated.
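The core of any RBF-FD method is solving a small dense linear system to obtain differentiation weights on a local stencil. A one-dimensional sketch with a Gaussian RBF (the paper works with parametric RBFs on 2D platelet surfaces, so this shows only the basic weight computation):

```python
import numpy as np

def rbf_fd_weights(xs, xc, eps=1.0):
    """First-derivative RBF-FD weights on a 1-D stencil xs, centered
    at xc, using the Gaussian RBF phi(r) = exp(-(eps*r)^2): solve
    A w = b with A[i, j] = phi(|x_i - x_j|) and
    b[i] = d/dx phi(|x - x_i|) evaluated at x = xc."""
    xs = np.asarray(xs, dtype=float)
    A = np.exp(-(eps * (xs[:, None] - xs[None, :])) ** 2)
    b = -2.0 * eps**2 * (xc - xs) * np.exp(-(eps * (xc - xs)) ** 2)
    return np.linalg.solve(A, b)

# Approximate d/dx sin(x) at x = 0 on a three-point stencil.
w = rbf_fd_weights([-0.1, 0.0, 0.1], 0.0)
approx = float(w @ np.sin([-0.1, 0.0, 0.1]))
```

As the shape parameter eps shrinks, these weights approach the classical central finite-difference weights; here the approximation of cos(0) = 1 is already accurate to about a percent.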
|
1310.5225 | Generalized Extended Hamming Codes over Galois Ring of Characteristic
$2^{n}$ | cs.IT math.IT | In this paper, we introduce generalized extended Hamming codes over Galois
rings $GR(2^n,m)$ of characteristic $2^n$ with extension degree $m$.
Furthermore we prove that the minimum Hamming weight of generalized extended
Hamming codes over $GR(2^n,m)$ is 4 and the minimum Lee weight of generalized
extended Hamming codes over $GR(8,m)$ is 6 for all $m \geq 3$.
|
1310.5230 | Prefix and plain Kolmogorov complexity characterizations of
2-randomness: simple proofs | cs.IT math.IT | Joseph Miller [16] and independently Andre Nies, Frank Stephan and Sebastiaan
Terwijn [18] gave a complexity characterization of 2-random sequences in terms
of plain Kolmogorov complexity C: they are sequences that have infinitely many
initial segments with O(1)-maximal plain complexity (among the strings of the
same length). Later Miller [17] showed that prefix complexity K can also be
used in a similar way: a sequence is 2-random if and only if it has infinitely
many initial segments with O(1)-maximal prefix complexity (which is n + K (n)
for strings of length n). The known proofs of these results are quite involved;
in this paper we provide simple direct proofs for both of them.
In [16] Miller also gave a quantitative version of the first result: the
0'-randomness deficiency of a sequence $\omega$ equals $\liminf\, [n -
C(\omega_1 \ldots \omega_n)] + O(1)$. (Our simplified proof can also be used to
prove this.) We show (and this seems to be a new result) that a similar
quantitative result is also true for prefix complexity: the 0'-randomness
deficiency equals $\liminf\, [n + K(n) - K(\omega_1 \ldots \omega_n)] + O(1)$.
|
1310.5249 | Graph-Based Approaches to Clustering Network-Constrained Trajectory Data | cs.LG | Clustering trajectory data has attracted considerable attention in the last
few years. Most prior work assumed that moving objects move freely in a
Euclidean space and did not consider the possible presence of an underlying
road network and its influence on evaluating the similarity between
trajectories. In this paper, we present an approach to clustering such
network-constrained trajectory data. More precisely we aim at discovering
groups of road segments that are often travelled by the same trajectories. To
this end, we model the interactions between segments w.r.t. their
similarity as a weighted graph to which we apply a community detection
algorithm to discover meaningful clusters. We showcase our proposition through
experimental results obtained on synthetic datasets.
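The grouping idea can be illustrated on a toy similarity graph; the sketch below substitutes a crude threshold-then-connected-components rule for a real community detection algorithm (which the paper applies to the full weighted graph), and all segment names are hypothetical:

```python
def segment_clusters(edges, threshold):
    """Group road segments: drop similarity edges at or below the
    threshold, then return the connected components of what remains,
    computed with a small union-find. A modularity-based community
    detection algorithm would need no threshold; this is only a
    stand-in for the clustering step."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b, w in edges:
        find(a), find(b)            # register both endpoints
        if w > threshold:
            parent[find(a)] = find(b)
    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return sorted(map(sorted, groups.values()))

# Segments s1..s3 are often co-travelled, as are s4 and s5; the cross
# link s3-s4 is weak and should not merge the two groups.
edges = [("s1", "s2", 9), ("s2", "s3", 8), ("s4", "s5", 7),
         ("s3", "s4", 1)]
```

With a threshold of 2 the weak cross link is discarded and the two groups of co-travelled segments emerge as separate clusters.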
|
1310.5251 | Sparsity-Promoting Sensor Selection for Non-linear Measurement Models | cs.IT eess.SP math.IT | Sensor selection is an important design problem in large-scale sensor
networks. Sensor selection can be interpreted as the problem of selecting the
best subset of sensors that guarantees a certain estimation performance. We
focus on observations that are related to a general non-linear model. The
proposed framework is valid as long as the observations are independent and
their likelihood satisfies the regularity conditions. We use several functions of
the Cram\'er-Rao bound (CRB) as a performance measure. We formulate the sensor
selection problem as the design of a selection vector, which in its original
form is a nonconvex l0-(quasi) norm optimization problem. We present relaxed
sensor selection solvers that can be efficiently solved in polynomial time. We
also propose a projected subgradient algorithm that is attractive for
large-scale problems and also show how the algorithm can be easily distributed.
The proposed framework is illustrated with a number of examples related to
sensor placement design for localization.
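For independent observations, the Fisher information matrix of a sensor subset is the sum of the per-sensor FIMs, which makes simple baselines easy to state. The sketch below uses a greedy log-det rule as an illustrative stand-in; the paper instead relaxes the l0-constrained selection-vector design to a convex problem, which is not shown here:

```python
import numpy as np

def greedy_sensor_selection(fims, m):
    """Greedy baseline: fims[i] is the FIM contributed by sensor i;
    repeatedly add the sensor that most increases log det of the
    accumulated FIM (a proxy for estimation performance, since the
    CRB is the inverse FIM)."""
    d = fims[0].shape[0]
    chosen, total = [], 1e-9 * np.eye(d)  # jitter keeps log det finite
    for _ in range(m):
        gains = [np.linalg.slogdet(total + F)[1] if i not in chosen
                 else -np.inf
                 for i, F in enumerate(fims)]
        best = int(np.argmax(gains))
        chosen.append(best)
        total = total + fims[best]
    return sorted(chosen)

# 2-D toy localization: two sensors informative along x, one along y;
# complementary directions maximize the information determinant.
fims = [np.outer(a, a) for a in ([1.0, 0.0], [1.0, 0.0], [0.0, 1.0])]
```

Selecting two sensors picks one x-direction sensor and the y-direction sensor rather than two redundant ones.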
|
1310.5254 | Real Time Data Warehouse | cs.DB | The Data Warehouse (DW) is an essential part of Business Intelligence. The DW
emerged as a fast-growing reporting and analysis technique in the early 1980s.
Today, it has almost replaced relational databases. However, with the passage
of time, the static and historic data of DWs could not support real-time
reporting and analysis, giving rise to the idea of the Real Time Data Warehouse
(RTDW). Although there are still problems with RTDWs, with advances in
technology and sustained research focus, RTDWs will be able to generate
real-time reports, analysis, and forecasts.
|
1310.5288 | GPatt: Fast Multidimensional Pattern Extrapolation with Gaussian
Processes | stat.ML cs.AI cs.LG stat.ME | Gaussian processes are typically used for smoothing and interpolation on
small datasets. We introduce a new Bayesian nonparametric framework -- GPatt --
enabling automatic pattern extrapolation with Gaussian processes on large
multidimensional datasets. GPatt unifies and extends highly expressive kernels
and fast exact inference techniques. Without human intervention -- no
hand-crafting of kernel features, and no sophisticated initialisation procedures --
we show that GPatt can solve large scale pattern extrapolation, inpainting, and
kernel discovery problems, including a problem with 383400 training points. We
find that GPatt significantly outperforms popular alternative scalable Gaussian
process methods in speed and accuracy. Moreover, we discover profound
differences between each of these methods, suggesting expressive kernels,
nonparametric representations, and exact inference are useful for modelling
large scale multidimensional patterns.
|
1310.5306 | Can social microblogging be used to forecast intraday exchange rates? | cs.SI cs.CE q-fin.ST | The Efficient Market Hypothesis (EMH) is widely accepted to hold true under
certain assumptions. One of its implications is that the prediction of stock
prices at least in the short run cannot outperform the random walk model. Yet,
recently many studies stressing the psychological and social dimension of
financial behavior have challenged the validity of the EMH. Towards this aim,
over the last few years, internet-based communication platforms and search
engines have been used to extract early indicators of social and economic
trends. Here, we used Twitter's social networking platform to model and
forecast the EUR/USD exchange rate on a high-frequency intraday trading
scale. Using time-series analysis and trading simulations, we provide some
evidence that the information provided by social microblogging platforms such
as Twitter can in certain cases enhance forecasting efficiency for the very
short (intraday) forex horizon.
|
1310.5347 | Bayesian Extensions of Kernel Least Mean Squares | stat.ML cs.LG | The kernel least mean squares (KLMS) algorithm is a computationally efficient
nonlinear adaptive filtering method that "kernelizes" the celebrated (linear)
least mean squares algorithm. We demonstrate that the least mean squares
algorithm is closely related to Kalman filtering, and thus the KLMS can be
interpreted as an approximate Bayesian filtering method. This allows us to
systematically develop extensions of the KLMS by modifying the underlying
state-space and observation models. The resulting extensions introduce many
desirable properties such as "forgetting", and the ability to learn from
discrete data, while retaining the computational simplicity and time complexity
of the original algorithm.
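A minimal sketch of the baseline KLMS update that the paper reinterprets (Gaussian kernel, no sparsification; the Bayesian extensions would modify the implied state-space and observation models, which are not shown here):

```python
import numpy as np

class KLMS:
    """Minimal kernel least mean squares: each training sample becomes
    a dictionary center, predictions are kernel expansions, and every
    prediction error adds a new expansion term scaled by the step size
    eta."""

    def __init__(self, eta=0.5, gamma=1.0):
        self.eta, self.gamma = eta, gamma
        self.centers, self.alphas = [], []

    def _kernel(self, x, c):
        # Gaussian kernel on scalar inputs
        return np.exp(-self.gamma * (x - c) ** 2)

    def predict(self, x):
        return sum(a * self._kernel(x, c)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, y):
        error = y - self.predict(x)
        self.centers.append(float(x))
        self.alphas.append(self.eta * error)
        return error

# Fit the nonlinear map y = x^2 on three points by cycling the data;
# a linear LMS filter could not represent this function.
model = KLMS()
for _ in range(50):
    for x in (-1.0, 0.0, 1.0):
        model.update(x, x ** 2)
```

Because the dictionary grows with every sample, practical variants add the "forgetting" and sparsification mechanisms that the paper's state-space view makes systematic.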
|
1310.5356 | Rewiring the network. What helps an innovation to diffuse? | physics.soc-ph cs.SI | A fundamental question related to innovation diffusion is how the social
network structure influences the process. Empirical evidence regarding
real-world influence networks is very limited. On the other hand, agent-based
modeling literature reports different and at times seemingly contradictory
results. In this paper we study innovation diffusion processes for a range of
Watts-Strogatz networks in an attempt to shed more light on this problem. Using
the so-called Sznajd model as the backbone of opinion dynamics, we find that
the published results are in fact consistent and allow one to predict the role of
network topology in various situations. In particular, the diffusion of
innovation is easier on more regular graphs, i.e. with a higher clustering
coefficient. Moreover, in the case of uncertainty - which is particularly high
for innovations connected to public health programs or ecological campaigns - a
more clustered network will help the diffusion. On the other hand, when social
influence is less important (i.e. in the case of perfect information), a
shorter path will help the innovation to spread in the society and - as a
result - the diffusion will be easiest on a random graph.
|
1310.5359 | Association schemes on general measure spaces and zero-dimensional
Abelian groups | math.FA cs.IT math.CO math.IT | Association schemes form one of the main objects of algebraic combinatorics,
classically defined on finite sets. In this paper we define association schemes
on arbitrary, possibly uncountable sets with a measure. We study operator
realizations of the adjacency algebras of schemes and derive simple properties
of these algebras. To develop a theory of general association schemes, we focus
on schemes on topological Abelian groups where we can employ duality theory and
the machinery of harmonic analysis. We construct translation association
schemes on such groups using the language of spectrally dual partitions. Such
partitions are shown to arise naturally on topological zero-dimensional Abelian
groups, for instance, Cantor-type groups or the groups of p-adic numbers. This
enables us to construct large classes of dual pairs of association schemes on
zero-dimensional groups with respect to their Haar measure, and to compute
their eigenvalues and intersection numbers. We also derive properties of
infinite metric schemes, connecting them with the properties of the
non-Archimedean metric on the group.
Pursuing the connection between schemes on zero-dimensional groups and
harmonic analysis, we show that the eigenvalues have a natural interpretation
in terms of Littlewood-Paley wavelet bases, and in the (equivalent) language of
martingale theory. For a class of nonmetric schemes constructed in the paper,
the eigenvalues coincide with values of orthogonal functions on
zero-dimensional groups. We observe that these functions, which we call
Haar-like bases, have the properties of wavelets on the group, including in
some special cases the self-similarity property. This establishes a seemingly
new link between algebraic combinatorics and harmonic analysis.
We conclude the paper by studying some analogs of problems of classical
coding theory related to the theory of association schemes.
|
1310.5376 | Hypermap-Homology Quantum Codes (Ph.D. thesis) | cs.IT math.IT quant-ph | We introduce a new type of sparse CSS quantum error correcting code based on
the homology of hypermaps. Sparse quantum error correcting codes are of
interest in the building of quantum computers due to their ease of
implementation and the possibility of developing fast decoders for them. Codes
based on the homology of embeddings of graphs, such as Kitaev's toric code,
have been discussed widely in the literature and our class of codes generalize
these. We use embedded hypergraphs, which are a generalization of graphs that
can have edges connected to more than two vertices. We develop theorems and
examples of our hypermap-homology codes, especially in the case that we choose
a special type of basis in our homology chain complex. In particular the most
straightforward generalization of the $m \times m$ toric code to
hypermap-homology codes gives us a $[(3/2)m^2,2,m]$ code as compared to the
toric code which is a $[2m^2,2,m]$ code. Thus we can protect the same amount of
quantum information, with the same error-correcting capability, using fewer
physical qubits.
|
1310.5393 | Multi-Task Regularization with Covariance Dictionary for Linear
Classifiers | cs.LG | In this paper we propose a multi-task linear classifier learning problem
called D-SVM (Dictionary SVM). D-SVM uses a dictionary of parameter covariance
shared by all tasks to do multi-task knowledge transfer among different tasks.
We formally define the learning problem of D-SVM and show two interpretations
of this problem, from both the probabilistic and kernel perspectives. From the
probabilistic perspective, we show that our learning formulation is actually a
MAP estimation on all optimization variables. We also show its equivalence to a
multiple kernel learning problem in which one is trying to find a re-weighting
kernel for features from a dictionary of bases (despite the fact that only
linear classifiers are learned). Finally, we describe an alternative
optimization scheme to minimize the objective function and present empirical
studies to validate our algorithm.
|
1310.5409 | Pulse-Doppler Signal Processing with Quadrature Compressive Sampling | cs.IT math.IT | Quadrature compressive sampling (QuadCS) is a newly introduced sub-Nyquist
sampling for acquiring inphase and quadrature (I/Q) components of
radio-frequency signals. For applications to pulse-Doppler radars, the QuadCS
outputs can be arranged in 2-dimensional data similar to that by Nyquist
sampling. This paper develops a compressive sampling pulse-Doppler (CoSaPD)
processing scheme from the sub-Nyquist samples. The CoSaPD scheme performs
Doppler estimation/detection followed by range estimation, and operates on the
sub-Nyquist samples without recovering the Nyquist samples. The Doppler
estimation is realized through spectral analysis, as in classic processing. The
detection is done on the Doppler bin data. The range estimation is performed
through sparse recovery algorithms on the detected targets and thus the
computational load is reduced. The detection threshold can be set at a low
value to improve the detection probability, and the false targets thereby
introduced are removed in the range estimation stage through the inherent
detection characteristics of the recovery algorithms. Simulation results
confirm our findings. With data at one eighth the Nyquist rate and for SNR
above -25 dB, the CoSaPD scheme can achieve the performance of classic
processing with Nyquist samples.
|
1310.5420 | On the Performance of Adaptive Packetized Wireless Communication Links
under Jamming | cs.IT math.IT | We employ a game theoretic approach to formulate communication between two
nodes over a wireless link in the presence of an adversary. We define a
constrained, two-player, zero-sum game between a transmitter/receiver pair with
adaptive transmission parameters and an adversary with average and maximum
power constraints. In this model, the transmitter's goal is to maximize the
achievable expected performance of the communication link, defined by a utility
function, while the jammer's goal is to minimize the same utility function.
Inspired by capacity/rate as a performance measure, we define a general utility
function and a payoff matrix which may be applied to a variety of jamming
problems. We show the existence of a threshold such that if the jammer's
average power exceeds this threshold, the expected payoff of the transmitter at
Nash Equilibrium (NE) is the same as the case when the jammer uses its maximum
allowable power all the time. We provide analytical and numerical results for
transmitter and jammer optimal strategies and a closed form expression for the
expected value of the game at the NE. As a special case, we investigate the
maximum achievable transmission rate of a rate-adaptive, packetized, wireless
AWGN communication link under different jamming scenarios and show that
randomization can significantly assist a smart jammer with limited average
power.
|
1310.5426 | MLI: An API for Distributed Machine Learning | cs.LG cs.DC stat.ML | MLI is an Application Programming Interface designed to address the
challenges of building Machine Learning algorithms in a distributed setting
based on data-centric computing. Its primary goal is to simplify the
development of high-performance, scalable, distributed algorithms. Our initial
results show that, relative to existing systems, this interface can be used to
build distributed implementations of a wide variety of common Machine Learning
algorithms with minimal complexity and highly competitive performance and
scalability.
|