| id | title | categories | abstract |
|---|---|---|---|
1304.7971 | Adaptive Mode Selection and Power Allocation in Bidirectional
Buffer-aided Relay Networks | cs.IT math.IT | In this paper, we consider the problem of sum rate maximization in a
bidirectional relay network with fading. Hereby, user 1 and user 2 communicate
with each other only through a relay, i.e., a direct link between user 1 and
user 2 is not present. In this network, there exist six possible transmission
modes: four point-to-point modes (user 1-to-relay, user 2-to-relay,
relay-to-user 1, relay-to-user 2), a multiple access mode (both users to the
relay), and a broadcast mode (the relay to both users). Most existing protocols
assume a fixed schedule of using a subset of the aforementioned transmission
modes; as a result, the sum rate is limited by the capacity of the weakest link
associated with the relay in each time slot. Motivated by this limitation, we
develop a protocol which is not restricted to adhere to a predefined schedule
for using the transmission modes. Therefore, all transmission modes of the
bidirectional relay network can be used adaptively based on the instantaneous
channel state information (CSI) of the involved links. To this end, the relay
has to be equipped with two buffers for the storage of the information received
from users 1 and 2, respectively. For the considered network, given a total
average power budget for all nodes, we jointly optimize the transmission mode
selection and power allocation based on the instantaneous CSI in each time slot
for sum rate maximization. Simulation results show that the proposed protocol
outperforms existing protocols for all signal-to-noise ratios (SNRs).
Specifically, we obtain a considerable gain at low SNRs due to the adaptive
power allocation and at high SNRs due to the adaptive mode selection.
|
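To make the adaptive principle concrete, here is a toy per-slot selector. It is an illustrative greedy sketch under hypothetical Shannon-rate scoring, not the paper's jointly optimal mode-selection and power-allocation policy; the function names and the simple MAC/broadcast rate proxies are assumptions for illustration.

```python
import math

def rate(snr):
    """Shannon rate log2(1 + SNR) in bits/s/Hz."""
    return math.log2(1.0 + snr)

def select_mode(snr_1r, snr_2r, snr_r1, snr_r2, buf1, buf2):
    """Greedy per-slot choice among the six transmission modes.

    buf1/buf2 hold the bits the relay has buffered from user 1/user 2;
    relay transmissions are disallowed when the corresponding buffer
    is empty. The MAC and broadcast scores are crude rate proxies."""
    modes = {
        "u1->r": rate(snr_1r),
        "u2->r": rate(snr_2r),
        "r->u1": rate(snr_r1) if buf2 > 0 else 0.0,  # relay forwards user 2's data
        "r->u2": rate(snr_r2) if buf1 > 0 else 0.0,  # relay forwards user 1's data
        "mac":   rate(snr_1r + snr_2r),              # MAC sum rate, both users to relay
        "bc":    min(rate(snr_r1), rate(snr_r2))     # relay broadcasts to both users
                 if (buf1 > 0 and buf2 > 0) else 0.0,
    }
    return max(modes, key=modes.get)
```

With empty relay buffers only the three reception modes compete; with full buffers and strong relay-to-user links, a relay transmission mode wins.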
1304.7984 | GeoDBLP: Geo-Tagging DBLP for Mining the Sociology of Computer Science | cs.SI cs.DL physics.soc-ph | Many collective human activities have been shown to exhibit universal
patterns. However, the possibility of universal patterns across timing events
of researcher migration has barely been explored at global scale. Here, we show
that timing events of migration within different countries exhibit remarkable
similarities. Specifically, we look at the distribution governing the data of
researcher migration inferred from the web. Compiling the data in itself
represents a significant advance in the field of quantitative analysis of
migration patterns. Official and commercial records are often access-restricted,
incompatible across countries, and, in particular, not linked at the level of
individual researchers. Instead, we introduce GeoDBLP, in which we propagate
geographical seed locations retrieved from the web across the DBLP database of
1,080,958 authors and 1,894,758 papers. More importantly, we
are able to find statistical patterns and create models that explain the
migration of researchers. For instance, we show that the science job market can
be treated as a Poisson process with individual propensities to migrate
following a log-normal distribution over the researcher's career stage. That
is, although jobs enter the market constantly, researchers are generally not
"memoryless" but have to care greatly about their next move. The propensity to
make k>1 migrations, however, follows a gamma distribution suggesting that
migration at later career stages is "memoryless". This aligns with, but goes
beyond, scientometric models typically postulated on the basis of small case
studies. On a very large, transnational scale, we establish the first general
regularities that should have major implications for strategies for education
and research worldwide.
|
1304.7992 | Fast Reconstruction of Compact Context-Specific Metabolic Network Models | q-bio.MN cs.CE math.OC | Systemic approaches to the study of a biological cell or tissue rely
increasingly on the use of context-specific metabolic network models. The
reconstruction of such a model from high-throughput data can routinely involve
large numbers of tests under different conditions and extensive parameter
tuning, which calls for fast algorithms. We present FASTCORE, a generic
algorithm for reconstructing context-specific metabolic network models from
global genome-wide metabolic network models such as Recon X. FASTCORE takes as
input a core set of reactions that are known to be active in the context of
interest (e.g., cell or tissue), and it searches for a flux consistent
subnetwork of the global network that contains all reactions from the core set
and a minimal set of additional reactions. Our key observation is that a
minimal consistent reconstruction can be defined via a set of sparse modes of
the global network, and FASTCORE iteratively computes such a set via a series
of linear programs. Experiments on liver data demonstrate speedups of several
orders of magnitude, and significantly more compact reconstructions, over a
chief rival method. Given its simplicity and its excellent performance,
FASTCORE can form the backbone of many future metabolic network reconstruction
algorithms.
|
1304.7993 | Digenes: genetic algorithms to discover conjectures about directed and
undirected graphs | cs.DM cs.NE | We present Digenes, a new discovery system that aims to help researchers in
graph theory. While its main task is to find extremal graphs for a given
(function of) invariants, it also provides some basic support in proof
conception. This approach has already proved very useful for finding new
conjectures, ever since the AutoGraphiX system of Caporossi and Hansen (Discrete
Math. 212, 2000). However, unlike existing systems, Digenes can be used with
both directed and undirected graphs. In this paper, we present the principles
and functionality of Digenes, describe the genetic algorithms that have been
designed to achieve them, and give some computational results and open
questions. This raises some interesting questions regarding genetic algorithm
design particular to this field, such as the definition of crossover.
|
1304.8016 | On Semantic Word Cloud Representation | cs.DS cs.CL | We study the problem of computing semantic-preserving word clouds in which
semantically related words are close to each other. While several heuristic
approaches have been described in the literature, we formalize the underlying
geometric algorithmic problem: Word Rectangle Adjacency Contact (WRAC). In this
model, each word is associated with a rectangle of fixed dimensions, and the
goal is to represent semantically related words by ensuring that the two
corresponding rectangles touch. We design and analyze efficient polynomial-time
algorithms for some variants of the WRAC problem, show that several general
variants are NP-hard, and describe a number of approximation algorithms.
Finally, we experimentally demonstrate that our theoretically-sound algorithms
outperform the earlier heuristics.
|
1304.8019 | Recursive Estimation of Orientation Based on the Bingham Distribution | cs.SY cs.RO | Directional estimation is a common problem in many tracking applications.
Traditional filters such as the Kalman filter perform poorly because they fail
to take the periodic nature of the problem into account. We present a recursive
filter for directional data based on the Bingham distribution in two
dimensions. The proposed filter can be applied to circular filtering problems
with 180 degree symmetry, i.e., rotations by 180 degrees cannot be
distinguished. It is easily implemented using standard numerical techniques and
suitable for real-time applications. The presented approach is extensible to
quaternions, which allow tracking arbitrary three-dimensional orientations. We
evaluate our filter in a challenging scenario and compare it to a traditional
Kalman filtering approach.
|
1304.8020 | Semi-Supervised Information-Maximization Clustering | cs.LG stat.ML | Semi-supervised clustering aims to introduce prior knowledge in the decision
process of a clustering algorithm. In this paper, we propose a novel
semi-supervised clustering algorithm based on the information-maximization
principle. The proposed method is an extension of a previous unsupervised
information-maximization clustering algorithm based on squared-loss mutual
information to effectively incorporate must-links and cannot-links. The
proposed method is computationally efficient because the clustering solution
can be obtained analytically via eigendecomposition. Furthermore, the proposed
method allows systematic optimization of tuning parameters such as the kernel
width, given the degree of belief in the must-links and cannot-links. The
usefulness of the proposed method is demonstrated through experiments.
|
1304.8029 | Cooperative Synchronization in Wireless Networks | cs.DC cs.IT math.IT | Synchronization is a key functionality in wireless networks, enabling a wide
variety of services. We consider a Bayesian inference framework whereby network
nodes can achieve phase and skew synchronization in a fully distributed way. In
particular, under the assumption of Gaussian measurement noise, we derive two
message passing methods (belief propagation and mean field), analyze their
convergence behavior, and perform a qualitative and quantitative comparison
with a number of competing algorithms. We also show that both methods can be
applied in networks with and without master nodes. Our performance results are
complemented by, and compared with, the relevant Bayesian Cram\'er-Rao bounds.
|
1304.8046 | Sophistication vs Logical Depth | cs.IT cs.CC math.IT | Sophistication and logical depth are two measures that express how
complicated the structure in a string is. Sophistication is defined as the
minimal complexity of a computable function that defines a two-part description
for the string that is shortest within some precision; logical depth is defined
as the minimal computation time of a program that is shortest within
some precision. We show that the Busy Beaver function of the sophistication of
a string exceeds its logical depth with logarithmically bigger precision, and
that logical depth exceeds the Busy Beaver function of sophistication with
logarithmically bigger precision. We also show that this is not true if the
precision is only increased by a constant (when the notions are defined with
plain Kolmogorov complexity). Finally we show that sophistication is unstable
in its precision: constant variations can change its value by a linear term in
the length of the string.
|
1304.8052 | Registration of Images with Outliers Using Joint Saliency Map | cs.CV | Mutual information (MI) is a popular similarity measure for image
registration, whereby good registration can be achieved by maximizing the
compactness of the clusters in the joint histogram. However, MI is sensitive to
the "outlier" objects that appear in one image but not the other, and also
suffers from local and biased maxima. We propose a novel joint saliency map
(JSM) to highlight the corresponding salient structures in the two images, and
emphatically group those salient structures into the smoothed compact clusters
in the weighted joint histogram. This strategy could solve both the outlier and
the local maxima problems. Experimental results show that the JSM-MI based
algorithm is not only accurate but also robust for registration of challenging
image pairs with outliers.
|
1304.8083 | Adaptive Video Streaming for Wireless Networks with Multiple Users and
Helpers | cs.NI cs.IT math.IT math.OC | We consider the optimal design of a scheduling policy for adaptive video
streaming in a wireless network formed by several users and helpers. A feature
of such networks is that any user is typically in the range of multiple
helpers. Hence, in order to cope with user-helper association, load balancing
and inter-cell interference, an efficient streaming policy should allow the
users to dynamically select the helper node to download from, and determine
adaptively the video quality level of the download. In order to obtain a
tractable formulation, we follow a "divide and conquer" approach: i) Assuming
that each video packet (chunk) is delivered within its playback delay ("smooth
streaming regime"), the problem is formulated as a network utility maximization
(NUM), subject to queue stability, where the network utility function is a
concave and componentwise non-decreasing function of the users' video quality
measure. ii) We solve the NUM problem by using a Lyapunov Drift Plus Penalty
approach, obtaining a scheme that naturally decomposes into two sub-policies
referred to as "congestion control" (adaptive video quality and helper station
selection) and "transmission scheduling" (dynamic allocation of the helper-user
physical layer transmission rates). Our solution is provably optimal with
respect to the proposed NUM problem, in a strong per-sample path sense. iii)
Finally, we propose a method to adaptively estimate the maximum queuing delays,
such that each user can calculate its pre-buffering and re-buffering time in
order to cope with the fluctuations of the queuing delays. Through simulations,
we evaluate the performance of the proposed algorithm under realistic
assumptions of a network with densely deployed helper nodes, and demonstrate
the per-sample path optimality of the proposed solution by considering a
non-stationary non-ergodic scenario with user mobility and VBR video coding.
|
1304.8087 | Uniqueness of Tensor Decompositions with Applications to Polynomial
Identifiability | cs.DS cs.LG math.ST stat.TH | We give a robust version of the celebrated result of Kruskal on the
uniqueness of tensor decompositions: we prove that given a tensor whose
decomposition satisfies a robust form of Kruskal's rank condition, it is
possible to approximately recover the decomposition if the tensor is known up
to a sufficiently small (inverse polynomial) error.
Kruskal's theorem has found many applications in proving the identifiability
of parameters for various latent variable models and mixture models such as
Hidden Markov models, topic models etc. Our robust version immediately implies
identifiability using only polynomially many samples in many of these settings.
This polynomial identifiability is an essential first step towards efficient
learning algorithms for these models.
Recently, algorithms based on tensor decompositions have been used to
estimate the parameters of various hidden variable models efficiently in
special cases as long as they satisfy certain "non-degeneracy" properties. Our
methods give a way to go beyond this non-degeneracy barrier, and establish
polynomial identifiability of the parameters under much milder conditions.
Given the importance of Kruskal's theorem in the tensor literature, we expect
that this robust version will have several applications beyond the settings we
explore in this work.
|
1304.8092 | Fractal-Based Detection of Microcalcification Clusters in Digital
Mammograms | cs.CV | In this paper, a novel method for edge detection of microcalcification
clusters in mammogram images is presented using the concepts of fractal
dimension and the Hurst coefficient, which enable locating the
microcalcifications in the mammograms. This technique detects edges more
accurately than the conventional Sobel method. The Sobel method typically
detects the edges of regions/objects in an image using a fudge factor whose
default value is 0.5. In the proposed technique, the fudge factor is replaced
with the Hurst coefficient, computed as the difference between the fractal
dimension and the topological dimension of a given input image. These
dimensions are image-dependent, so the resulting Hurst coefficient varies from
image to image. Consequently, the image-dependent, Hurst-coefficient-based
Sobel method produces better results than the fudge-factor-based one, and the
experimental results substantiate the merit of the proposed technique.
|
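A minimal sketch of the thresholding step this abstract describes: a Sobel gradient map whose binarization threshold is a tunable factor, so the default 0.5 fudge factor can be swapped for a per-image Hurst coefficient. The box-counting computation of the fractal dimension is elided; the function name and the mean-gradient threshold rule are illustrative assumptions.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edge_map(img, factor=0.5):
    """Binary edge map: Sobel gradient magnitude thresholded at
    factor * mean(magnitude). factor=0.5 mimics the default fudge
    factor; pass an image-dependent Hurst coefficient instead to
    obtain the adaptive variant sketched in the abstract."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):            # cross-correlate with the 3x3 kernels
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += SOBEL_X[i, j] * patch
            gy += SOBEL_Y[i, j] * patch
    mag = np.hypot(gx, gy)
    return mag > factor * mag.mean()
```

On a synthetic step image, the map fires only along the intensity step, wherever the factor comes from.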
1304.8102 | On Convexity of Error Rates in Digital Communications | cs.IT math.IT | Convexity properties of error rates of a class of decoders, including the
ML/min-distance one as a special case, are studied for arbitrary
constellations, bit mapping and coding. Earlier results obtained for the AWGN
channel are extended to a wide class of noise densities, including unimodal and
spherically-invariant noise. Under these broad conditions, symbol and bit error
rates are shown to be convex functions of the SNR in the high-SNR regime with
an explicitly-determined threshold, which depends only on the constellation
dimensionality and minimum distance, thus enabling an application of the
powerful tools of convex optimization to such digital communication systems in
a rigorous way. It is the decreasing nature of the noise power density around
the decision region boundaries that insures the convexity of symbol error rates
in the general case. The known high/low SNR bounds of the convexity/concavity
regions are tightened and no further improvement is shown to be possible in
general. The high SNR bound fits closely into the channel coding theorem: all
codes, including capacity-achieving ones, whose decision regions include the
hardened noise spheres (from the noise sphere hardening argument in the channel
coding theorem) satisfy this high SNR requirement and thus have convex error
rates in both SNR and noise power. We conjecture that all capacity-achieving
codes have convex error rates. Convexity properties in signal amplitude and
noise power are also investigated. Some applications of the results are
discussed. In particular, it is shown that fading is convexity-preserving and
is never good in low dimensions under spherically-invariant noise, which may
also include any linear diversity combining.
|
1304.8125 | On Discrete Preferences and Coordination | cs.GT cs.SI physics.soc-ph | An active line of research has considered games played on networks in which
payoffs depend on both a player's individual decision and also the decisions of
her neighbors. Such games have been used to model issues including the
formation of opinions and the adoption of technology. A basic question that has
remained largely open in this area is to consider games where the strategies
available to the players come from a fixed, discrete set, and where players may
have different intrinsic preferences among the possible strategies. It is
natural to model the tension among these different preferences by positing a
distance function on the strategy set that determines a notion of "similarity"
among strategies; a player's payoff is determined by the distance from her
chosen strategy to her preferred strategy and to the strategies chosen by her
network neighbors. Even when there are only two strategies available, this
framework already leads to natural open questions about a version of the
classical Battle of the Sexes problem played on a graph.
We develop a set of techniques for analyzing this class of games, which we
refer to as discrete preference games. We parametrize the games by the relative
extent to which a player takes into account the effect of her preferred
strategy and the effect of her neighbors' strategies, allowing us to
interpolate between network coordination games and unilateral decision-making.
When these two effects are balanced, we show that the price of stability is
equal to 1 for any discrete preference game in which the distance function on
the strategies is a tree metric; as a special case, this includes the Battle of
the Sexes on a graph. We also show that trees form the maximal family of
metrics for which the price of stability is 1, and produce a collection of
metrics on which the price of stability converges to a tight bound of 2.
|
1304.8126 | Robust Spectral Compressed Sensing via Structured Matrix Completion | cs.IT cs.SY math.IT math.NA stat.ML | The paper explores the problem of \emph{spectral compressed sensing}, which
aims to recover a spectrally sparse signal from a small random subset of its
$n$ time domain samples. The signal of interest is assumed to be a
superposition of $r$ multi-dimensional complex sinusoids, while the underlying
frequencies can assume any \emph{continuous} values in the normalized frequency
domain. Conventional compressed sensing paradigms suffer from the basis
mismatch issue when imposing a discrete dictionary on the Fourier
representation. To address this issue, we develop a novel algorithm, called
\emph{Enhanced Matrix Completion (EMaC)}, based on structured matrix completion
that does not require prior knowledge of the model order. The algorithm starts
by arranging the data into a low-rank enhanced form exhibiting multi-fold
Hankel structure, and then attempts recovery via nuclear norm minimization.
Under mild incoherence conditions, EMaC allows perfect recovery as soon as the
number of samples exceeds the order of $r\log^{4}n$, and is stable against
bounded noise. Even if a constant portion of samples are corrupted with
arbitrary magnitude, EMaC still allows exact recovery, provided that the sample
complexity exceeds the order of $r^{2}\log^{3}n$. Along the way, our results
demonstrate the power of convex relaxation in completing a low-rank multi-fold
Hankel or Toeplitz matrix from minimal observed entries. The performance of our
algorithm and its applicability to super resolution are further validated by
numerical experiments.
|
1304.8129 | Local Correctability of Expander Codes | cs.IT math.IT | In this work, we present the first local-decoding algorithm for expander
codes. This yields a new family of constant-rate codes that can recover from a
constant fraction of errors in the codeword symbols, and where any symbol of
the codeword can be recovered with high probability by reading $N^\epsilon$
symbols from the corrupted codeword, where $N$ is the block-length of the code.
Expander codes, introduced by Sipser and Spielman, are formed from an
expander graph $G = (V,E)$ of degree $d$, and an inner code of block-length $d$
over an alphabet $\Sigma$. Each edge of the expander graph is associated with a
symbol in $\Sigma$. A string in $\Sigma^{E}$ will be a codeword if for each
vertex in $V$, the symbols on the adjacent edges form a codeword in the inner
code.
We show that if the inner code has a smooth reconstruction algorithm in the
noiseless setting, then the corresponding expander code has an efficient
local-correction algorithm in the noisy setting. Instantiating our construction
with inner codes based on finite geometries, we obtain novel locally decodable
codes with rate approaching one. This provides an alternative to the
multiplicity codes of Kopparty, Saraf and Yekhanin (STOC '11) and the lifted
codes of Guo, Kopparty and Sudan (ITCS '13).
|
1304.8132 | Local Graph Clustering Beyond Cheeger's Inequality | cs.DS cs.LG stat.ML | Motivated by applications of large-scale graph clustering, we study
random-walk-based LOCAL algorithms whose running times depend only on the size
of the output cluster, rather than the entire graph. All previously known such
algorithms guarantee an output conductance of $\tilde{O}(\sqrt{\phi(A)})$ when
the target set $A$ has conductance $\phi(A)\in[0,1]$. In this paper, we improve
it to $$\tilde{O}\bigg( \min\Big\{\sqrt{\phi(A)},
\frac{\phi(A)}{\sqrt{\mathsf{Conn}(A)}} \Big\} \bigg)\enspace, $$ where the
internal connectivity parameter $\mathsf{Conn}(A) \in [0,1]$ is defined as the
reciprocal of the mixing time of the random walk over the induced subgraph on
$A$.
For instance, using $\mathsf{Conn}(A) = \Omega(\lambda(A) / \log n)$ where
$\lambda$ is the second eigenvalue of the Laplacian of the induced subgraph on
$A$, our conductance guarantee can be as good as
$\tilde{O}(\phi(A)/\sqrt{\lambda(A)})$. This builds an interesting connection
to the recent advance of the so-called improved Cheeger's Inequality [KKL+13],
which says that global spectral algorithms can provide a conductance guarantee
of $O(\phi_{\mathsf{opt}}/\sqrt{\lambda_3})$ instead of
$O(\sqrt{\phi_{\mathsf{opt}}})$.
In addition, we provide theoretical guarantee on the clustering accuracy (in
terms of precision and recall) of the output set. We also prove that our
analysis is tight, and perform empirical evaluation to support our theory on
both synthetic and real data.
It is worth noting that our analysis outperforms prior work when the cluster
is well-connected. In fact, the better connected the cluster is internally,
the more significant the improvement (in both conductance and accuracy) we can
obtain. Our results shed light on why, in practice, some random-walk-based
algorithms perform better than their previous theory predicts, and help guide
future research on local clustering.
|
1305.0015 | Inferring ground truth from multi-annotator ordinal data: a
probabilistic approach | stat.ML cs.LG | A popular approach for large scale data annotation tasks is crowdsourcing,
wherein each data point is labeled by multiple noisy annotators. We consider
the problem of inferring ground truth from noisy ordinal labels obtained from
multiple annotators of varying and unknown expertise levels. Annotation models
for ordinal data have been proposed mostly as extensions of their
binary/categorical counterparts and have received little attention in the
crowdsourcing literature. We propose a new model for crowdsourced ordinal data
that accounts for instance difficulty as well as annotator expertise, and
derive a variational Bayesian inference algorithm for parameter estimation. We
analyze the ordinal extensions of several state-of-the-art annotator models for
binary/categorical labels and evaluate the performance of all the models on two
real world datasets containing ordinal query-URL relevance scores, collected
through Amazon's Mechanical Turk. Our results indicate that the proposed model
performs better or as well as existing state-of-the-art methods and is more
resistant to `spammy' annotators (i.e., annotators who assign labels randomly
without actually looking at the instance) than popular baselines such as mean,
median, and majority vote which do not account for annotator expertise.
|
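For contrast with the proposed model, the naive baselines named above can be sketched in a few lines; none of them weight annotators by expertise, which is exactly the gap the paper's variational model addresses (the function name is illustrative).

```python
from collections import Counter
from statistics import mean, median

def aggregate(labels, method="majority"):
    """Fuse one item's ordinal labels from multiple annotators using
    a naive baseline: mean (rounded), median, or majority vote.
    All annotators are implicitly weighted equally."""
    if method == "mean":
        return round(mean(labels))
    if method == "median":
        return median(labels)
    if method == "majority":
        return Counter(labels).most_common(1)[0][0]
    raise ValueError(f"unknown method: {method}")
```

A single careless annotator shifts the mean but not the median or majority vote, which is one reason the latter two are the stronger simple baselines.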
1305.0020 | Image Compression By Embedding Five Modulus Method Into JPEG | cs.CV cs.MM | The standard JPEG format is almost the optimum format in image compression.
The compression ratio in JPEG sometimes reaches 30:1. The compression ratio of
JPEG could be increased by embedding the Five Modulus Method (FMM) into the
JPEG algorithm. The novel algorithm, called FJPEG (Five-JPEG), requires about
twice the time of the standard JPEG algorithm or more. The quality of the
reconstructed image after compression closely approaches that of standard
JPEG. Standard test images have been used to support and
implement the suggested idea in this paper and the error metrics have been
computed and compared with JPEG.
|
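A sketch of the Five Modulus Method's core pixel transform as commonly described: each 8-bit value is snapped to the nearest multiple of 5, shrinking the symbol alphabet before the JPEG stages. This is an illustrative assumption about the preprocessing step, not the full FJPEG codec.

```python
def fmm(pixel):
    """Five Modulus Method pixel transform (illustrative sketch):
    snap an 8-bit value to the nearest multiple of 5. Since
    255 = 5 * 51, the result always stays within the 8-bit range,
    and each pixel is then representable by a symbol in 0..51,
    which is where the extra compression headroom comes from."""
    return 5 * round(pixel / 5)
```

Applying `fmm` to every pixel before the JPEG transform/quantization stages trades a small, bounded rounding error (at most 2 gray levels) for a smaller symbol alphabet.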
1305.0032 | Construction of PMDS and SD Codes extending RAID 5 | cs.IT math.IT | A construction of Partial Maximum Distance Separable (PMDS) and Sector-Disk
(SD) codes extending RAID 5 with two extra parities is given, solving an open
problem. Previous constructions relied on computer searches, while our
constructions provide a theoretical solution to the problem.
|
1305.0034 | Regret Minimization in Non-Zero-Sum Games with Applications to Building
Champion Multiplayer Computer Poker Agents | cs.GT cs.MA | In two-player zero-sum games, if both players minimize their average external
regret, then the average of the strategy profiles converges to a Nash
equilibrium. For n-player general-sum games, however, theoretical guarantees
for regret minimization are less understood. Nonetheless, Counterfactual Regret
Minimization (CFR), a popular regret minimization algorithm for extensive-form
games, has generated winning three-player Texas Hold'em agents in the Annual
Computer Poker Competition (ACPC). In this paper, we provide the first set of
theoretical properties for regret minimization algorithms in non-zero-sum games
by proving that solutions eliminate iterative strict domination. We formally
define \emph{dominated actions} in extensive-form games, show that CFR avoids
iteratively strictly dominated actions and strategies, and demonstrate that
removing iteratively dominated actions is enough to win a mock tournament in a
small poker game. In addition, for two-player non-zero-sum games, we bound the
worst case performance and show that in practice, regret minimization can yield
strategies very close to equilibrium. Our theoretical advancements lead us to a
new modification of CFR for games with more than two players that is more
efficient and may be used to generate stronger strategies than previously
possible. Furthermore, we present a new three-player Texas Hold'em poker agent
that was built using CFR and a novel game decomposition method. Our new agent
wins the three-player events of the 2012 ACPC and defeats the winning
three-player programs from previous competitions while requiring less resources
to generate than the 2011 winner. Finally, we show that our CFR modification
computes a strategy of equal quality to our new agent in a quarter of the time
of standard CFR using half the memory.
|
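The regret-minimization machinery referenced here can be illustrated by regret matching, the per-information-set update at the heart of CFR (a textbook sketch, not the authors' modified CFR):

```python
def regret_matching(cum_regret):
    """Regret matching: play each action with probability proportional
    to its positive cumulative regret, falling back to uniform when no
    regret is positive. CFR applies this rule at every information set
    of the extensive-form game and averages the resulting strategies."""
    pos = [max(r, 0.0) for r in cum_regret]
    total = sum(pos)
    n = len(cum_regret)
    if total <= 0.0:
        return [1.0 / n] * n          # no positive regret: play uniformly
    return [p / total for p in pos]   # normalize positive regrets
```

Actions whose cumulative regret stays negative, such as iteratively strictly dominated ones, receive probability zero, which is the mechanism behind the elimination result stated above.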
1305.0051 | Revealing social networks of spammers through spectral clustering | cs.SI cs.LG physics.soc-ph stat.ML | To date, most studies on spam have focused only on the spamming phase of the
spam cycle and have ignored the harvesting phase, which consists of the mass
acquisition of email addresses. It has been observed that spammers conceal
their identity to a lesser degree in the harvesting phase, so it may be
possible to gain new insights into spammers' behavior by studying the behavior
of harvesters, which are individuals or bots that collect email addresses. In
this paper, we reveal social networks of spammers by identifying communities of
harvesters with high behavioral similarity using spectral clustering. The data
analyzed was collected through Project Honey Pot, a distributed system for
monitoring harvesting and spamming. Our main findings are (1) that most
spammers either send only phishing emails or no phishing emails at all, (2)
that most communities of spammers also send only phishing emails or no phishing
emails at all, and (3) that several groups of spammers within communities
exhibit coherent temporal behavior and have similar IP addresses. Our findings
reveal some previously unknown behavior of spammers and suggest that there is
indeed social structure among spammers to be discovered.
|
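The clustering step can be sketched in its simplest two-community form: a sign split of the Fiedler vector of the graph Laplacian built from a behavioral-similarity matrix. The paper's actual pipeline and similarity features are richer; this is an illustrative special case.

```python
import numpy as np

def spectral_bipartition(A):
    """Split a weighted similarity graph (symmetric matrix A) into two
    communities by the sign of the Fiedler vector, i.e. the eigenvector
    of the graph Laplacian L = D - A with the second-smallest
    eigenvalue. np.linalg.eigh returns eigenvalues in ascending order,
    so column 1 is the Fiedler vector."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return fiedler >= 0   # boolean community labels
```

On two dense groups joined by a weak link, the sign split recovers the groups regardless of the eigenvector's overall sign.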
1305.0060 | Complexity penalized hydraulic fracture localization and moment tensor
estimation under limited model information | physics.geo-ph cs.IT math.IT stat.AP | In this paper we present a novel technique for micro-seismic localization
using a group sparse penalization that is robust to the focal mechanism of the
source and requires only a velocity model of the stratigraphy rather than a
full Green's function model of the earth's response. In this technique we
construct a set of perfect delta detector responses, one for each detector in
the array, to a seismic event at a given location and impose a group sparsity
across the array. This scheme is independent of the moment tensor and exploits
the time compactness of the incident seismic signal. Furthermore we present a
method for improving the inversion of the moment tensor and Green's function
when the geometry of the seismic array is limited. In particular we demonstrate
that both Tikhonov regularization and truncated SVD can improve the recovery of
the moment tensor and be robust to noise. We evaluate our algorithm on
synthetic data and present error bounds for both estimation of the moment
tensor as well as localization. Furthermore we discuss the estimated moment
tensor accuracy as a function of both array geometry and fault orientation.
|
1305.0061 | Optimal Ternary Cyclic Codes from Monomials | cs.IT math.IT | Cyclic codes are a subclass of linear codes and have applications in consumer
electronics, data storage systems, and communication systems as they have
efficient encoding and decoding algorithms. Perfect nonlinear monomials were
employed to construct optimal ternary cyclic codes with parameters $[3^m-1,
3^m-1-2m, 4]$ by Carlet, Ding and Yuan in 2005. In this paper, almost perfect
nonlinear monomials, and a number of other monomials over $\mathrm{GF}(3^m)$ are used
to construct optimal ternary cyclic codes with the same parameters. Nine open
problems on such codes are also presented.
|
1305.0062 | Distilled Single Cell Genome Sequencing and De Novo Assembly for Sparse
Microbial Communities | q-bio.GN cs.CE | Identification of every single genome present in a microbial sample is an
important and challenging task with crucial applications. It is challenging
because there are typically millions of cells in a microbial sample, the vast
majority of which elude cultivation. The most accurate method to date is
exhaustive single cell sequencing using multiple displacement amplification,
which is simply intractable for a large number of cells. However, there is hope
for breaking this barrier as the number of different cell types with distinct
genome sequences is usually much smaller than the number of cells.
Here, we present a novel divide and conquer method to sequence and de novo
assemble all distinct genomes present in a microbial sample with a sequencing
cost and computational complexity proportional to the number of genome types,
rather than the number of cells. The method is implemented in a tool called
Squeezambler. We evaluated Squeezambler on simulated data. The proposed divide
and conquer method successfully reduces the cost of sequencing in comparison
with the naive exhaustive approach.
Availability: Squeezambler and datasets are available under
http://compbio.cs.wayne.edu/software/squeezambler/.
|
1305.0103 | Clustering Unclustered Data: Unsupervised Binary Labeling of Two
Datasets Having Different Class Balances | cs.LG | We consider the unsupervised learning problem of assigning labels to
unlabeled data. A naive approach is to use clustering methods, but this works
well only when data is properly clustered and each cluster corresponds to an
underlying class. In this paper, we first show that this unsupervised labeling
problem in balanced binary cases can be solved if two unlabeled datasets having
different class balances are available. More specifically, estimation of the
sign of the difference between probability densities of two unlabeled datasets
gives the solution. We then introduce a new method to directly estimate the
sign of the density difference without density estimation. Finally, we
demonstrate the usefulness of the proposed method against several clustering
methods on various toy problems and real-world datasets.
|
1305.0153 | Convergence Analysis of Mixed Timescale Cross-Layer Stochastic
Optimization | cs.SY cs.IT math.IT | This paper considers a cross-layer optimization problem driven by
multi-timescale stochastic exogenous processes in wireless communication
networks. Due to the hierarchical information structure in a wireless network,
a mixed timescale stochastic iterative algorithm is proposed to track the
time-varying optimal solution of the cross-layer optimization problem, where
the variables are partitioned into short-term controls updated in a faster
timescale, and long-term controls updated in a slower timescale. We focus on
establishing a convergence analysis framework for such multi-timescale
algorithms, which is difficult due to the timescale separation of the algorithm
and the time-varying nature of the exogenous processes. To cope with this
challenge, we model the algorithm dynamics using stochastic differential
equations (SDEs) and show that the study of the algorithm convergence is
equivalent to the study of the stochastic stability of a virtual stochastic
dynamic system (VSDS). Leveraging the techniques of Lyapunov stability, we
derive a sufficient condition for the algorithm stability and a tracking error
bound in terms of the parameters of the multi-timescale exogenous processes.
Based on these results, an adaptive compensation algorithm is proposed to
enhance the tracking performance. Finally, we illustrate the framework by an
application example in a wireless heterogeneous network.
|
1305.0185 | A 2.0 Gb/s Throughput Decoder for QC-LDPC Convolutional Codes | cs.IT cs.AR math.IT | This paper proposes a decoder architecture for low-density parity-check
convolutional code (LDPCCC). Specifically, the LDPCCC is derived from a
quasi-cyclic (QC) LDPC block code. By making use of the quasi-cyclic structure,
the proposed LDPCCC decoder adopts a dynamic message storage in the memory and
uses a simple address controller. The decoder efficiently combines the memories
in the pipelining processors into a large memory block so as to take advantage
of the data-width of the embedded memory in a modern field-programmable gate
array (FPGA). A rate-5/6 QC-LDPCCC has been implemented on an Altera Stratix
FPGA. It achieves up to 2.0 Gb/s throughput with a clock frequency of 100 MHz.
Moreover, the decoder achieves an excellent error performance, with an error rate lower than
$10^{-13}$ at a bit-energy-to-noise-power-spectral-density ratio ($E_b/N_0$) of
3.55 dB.
|
1305.0187 | A Community Based Algorithm for Large Scale Web Service Composition | cs.AI cs.SE | Web service composition is the process of synthesizing a new composite
service using a set of available Web services in order to satisfy a client
request that cannot be satisfied by any single available Web service. The Web services
space is a dynamic environment characterized by a huge number of elements.
Furthermore, many Web services are offering similar functionalities. In this
paper we propose a model for Web service composition designed to address the
scale effect and the redundancy issue. The Web services space is represented by
a two-layered network architecture. A concrete similarity network layer
organizes the Web services operations into communities of functionally similar
operations. An abstract interaction network layer represents the composition
relationships between the sets of communities. Composition synthesis is
performed by a two-phased graph search algorithm. First, the interaction
network is mined in order to discover abstract solutions to the request goal.
Then, the abstract compositions are instantiated with concrete operations
selected from the similarity network. This strategy allows an efficient
exploration of the Web services space. Furthermore, operations grouped in a
community can be easily substituted for one another, if necessary, during the
composition synthesis process.
|
1305.0191 | Benefits of Semantics on Web Service Composition from a Complex Network
Perspective | cs.SI cs.AI cs.SE | The number of publicly available Web services (WS) is continuously growing,
and in parallel, we are witnessing a rapid development in semantic-related web
technologies. The intersection of the semantic web and WS allows the
development of semantic WS. In this work, we adopt a complex network
perspective to perform a comparative analysis of the syntactic and semantic
approaches used to describe WS. From a collection of publicly available WS
descriptions, we extract syntactic and semantic WS interaction networks. We
take advantage of tools from the complex network field to analyze them and
determine their properties. We show that WS interaction networks exhibit some
of the typical characteristics observed in real-world networks, such as short
average distance between nodes and community structure. By comparing syntactic
and semantic networks through their properties, we show that the introduction
of semantics in WS descriptions should improve the composition process.
|
1305.0194 | MATAWS: A Multimodal Approach for Automatic WS Semantic Annotation | cs.SE cs.CL cs.IR | Many recent works aim at developing methods and tools for the processing of
semantic Web services. In order to be properly tested, these tools must be
applied to an appropriate benchmark, taking the form of a collection of
semantic WS descriptions. However, all of the existing publicly available
collections are limited by their size or their realism (use of randomly
generated or resampled descriptions). Larger and realistic syntactic (WSDL)
collections exist, but their semantic annotation requires a certain level of
automation, due to the number of operations to be processed. In this article,
we propose a fully automatic method to semantically annotate such large WS
collections. Our approach is multimodal, in the sense that it takes advantage of the
latent semantics present not only in the parameter names, but also in the type
names and structures. Concept-to-word association is performed by using Sigma,
a mapping of WordNet to the SUMO ontology. After describing our annotation
method in detail, we apply it to the largest collection of real-world syntactic
WS descriptions we could find, and assess its efficiency.
|
1305.0196 | Topological Properties of Web Services Similarity Networks | cs.IR cs.SI | The number of publicly available Web services (WS) is continuously growing.
To perform efficient WS discovery, it is desirable to organize the WS space.
Works in this direction propose to group WS according to certain shared
properties. Such groups commonly called communities are based either on
similarity or on interaction between WS. In this paper we focus on the former,
and propose a new network-based approach to extract communities from a WS
collection. This process is three-stepped: first we define several similarity
functions able to compare WS operations, second we use them to build so-called
similarity networks, and third we identify communities under the form of
specific structures in these networks. We apply our method to a collection of
real-world WS and comment on the resulting communities. Finally, we provide an
analysis and an interpretation of our similarity networks from a complex
networks perspective.
|
1305.0205 | The effect of the initial network configuration on preferential
attachment | physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an | The classical preferential attachment model is sensitive to the choice of the
initial configuration of the network. As the number of initial nodes and their
degree grow, so does the time needed for an equilibrium degree distribution to
be established. We study this phenomenon, provide estimates of the
equilibration time, and characterize the degree distribution cutoff observed at
finite times. When the initial network is dense and exceeds a certain small
size, there is no equilibration and a suitable statistical test can always
discern the produced degree distribution from the equilibrium one. As a
by-product, the weighted Kolmogorov-Smirnov statistic is demonstrated to be
more suitable for statistical analysis of power-law distributions with cutoff
when the data is ample.
|
1305.0208 | Perceptron Mistake Bounds | cs.LG | We present a brief survey of existing mistake bounds and introduce novel
bounds for the Perceptron or the kernel Perceptron algorithm. Our novel bounds
generalize beyond standard margin-loss type bounds, allow for any convex and
Lipschitz loss function, and admit a very simple proof.
|
1305.0213 | Recovering Graph-Structured Activations using Adaptive Compressive
Measurements | stat.ML cs.IT math.IT | We study the localization of a cluster of activated vertices in a graph, from
adaptively designed compressive measurements. We propose a hierarchical
partitioning of the graph that groups the activated vertices into few
partitions, so that a top-down sensing procedure can identify these partitions,
and hence the activations, using few measurements. By exploiting the cluster
structure, we are able to provide localization guarantees at weaker signal to
noise ratios than in the unstructured setting. We complement this performance
guarantee with an information theoretic lower bound, providing a necessary
signal-to-noise ratio for any algorithm to successfully localize the cluster.
We verify our analysis with some simulations, demonstrating the practicality of
our algorithm.
|
1305.0218 | Video Segmentation via Diffusion Bases | cs.CV cs.MM | Identifying moving objects in a video sequence, which is produced by a static
camera, is a fundamental and critical task in many computer-vision
applications. A common approach performs background subtraction, which
identifies moving objects as the portion of a video frame that differs
significantly from a background model. A good background subtraction algorithm
has to be robust to changes in the illumination and it should avoid detecting
non-stationary background objects such as moving leaves, rain, snow, and
shadows. In addition, the internal background model should quickly respond to
changes in background such as objects that start to move or stop. We present a
new algorithm for video segmentation that processes the input video sequence as
a 3D matrix where the third axis is the time domain. Our approach identifies
the background by reducing the input dimension using the \emph{diffusion bases}
methodology. Furthermore, we describe an iterative method for extracting and
deleting the background. The algorithm has two versions and thus covers the
complete range of backgrounds: one for scenes with static backgrounds and the
other for scenes with dynamic (moving) backgrounds.
|
1305.0261 | Web Services Dependency Networks Analysis | cs.IR cs.SI physics.soc-ph | Along with a continuously growing number of publicly available Web services
(WS), we are witnessing a rapid development in semantic-related web
technologies, which has led to the appearance of semantically described WS. In
this work, we perform a comparative analysis of the syntactic and semantic
approaches used to describe WS, from a complex network perspective. First, we
extract syntactic and semantic WS dependency networks from a collection of
publicly available WS descriptions. Then, we take advantage of tools from the
complex network field to analyze them and determine their topological
properties. We show that WS dependency networks exhibit some of the typical
characteristics observed in real-world networks, such as small world and scale
free properties, as well as community structure. By comparing syntactic and
semantic networks through their topological properties, we show that the
introduction of semantics in WS descriptions allows the dependencies between
parameters to be modeled more accurately, which in turn could lead to improved
composition mining methods.
|
1305.0297 | The operad of wiring diagrams: formalizing a graphical language for
databases, recursion, and plug-and-play circuits | cs.DB math.CT math.LO | Wiring diagrams, as seen in digital circuits, can be nested hierarchically
and thus have an aspect of self-similarity. We show that wiring diagrams form
the morphisms of an operad $\mathcal{T}$, capturing this self-similarity. We
discuss the algebra $\mathrm{Rel}$ of mathematical relations on $\mathcal{T}$,
and in so doing use
wiring diagrams as a graphical language with which to structure queries on
relational databases. We give the example of circuit diagrams as a special
case. We move on to show how plug-and-play devices and also recursion can be
formulated in the operadic framework as well. Throughout we include many
examples and figures.
|
1305.0311 | An Adaptive Descriptor Design for Object Recognition in the Wild | cs.CV | Digital images nowadays have various styles of appearance, in the aspects of
color tones, contrast, vignetting, etc. These 'picture styles' are directly
related to the scene radiance, image pipeline of the camera, and post
processing functions. Due to the complexity and nonlinearity of these causes,
popular gradient-based image descriptors are not invariant to different
picture styles, which degrades the performance of object recognition. Given
that images shared online or created by individual users are taken with a wide
range of devices and may be processed by various post processing functions, to
find a robust object recognition system is useful and challenging. In this
paper, we present the first study on the influence of picture styles for object
recognition, and propose an adaptive approach based on the kernel view of
gradient descriptors and multiple kernel learning, without estimating or
specifying the styles of images used in training and testing. We conduct
experiments on Domain Adaptation data set and Oxford Flower data set. The
experiments also include several variants of the flower data set by processing
the images with popular photo effects. The results demonstrate that our
proposed method improves on standard descriptors in all cases.
|
1305.0321 | Hidden Markov Model Identifiability via Tensors | cs.IT math.IT | The prevalence of hidden Markov models (HMMs) in various applications of
statistical signal processing and communications is a testament to the power
and flexibility of the model. In this paper, we link the identifiability
problem with tensor decomposition, in particular, the Canonical Polyadic
decomposition. Using recent results in deriving uniqueness conditions for
tensor decomposition, we are able to provide a necessary and sufficient
condition for the identification of the parameters of discrete time finite
alphabet HMMs. This result resolves a long-standing open problem regarding the
derivation of a necessary and sufficient condition for uniquely identifying an
HMM. We then further extend recent preliminary work on the identification of
HMMs with multiple observers by deriving necessary and sufficient conditions
for identifiability in this setting.
|
1305.0355 | Model Selection for High-Dimensional Regression under the Generalized
Irrepresentability Condition | math.ST cs.IT cs.LG math.IT stat.ME stat.ML stat.TH | In the high-dimensional regression model a response variable is linearly
related to $p$ covariates, but the sample size $n$ is smaller than $p$. We
assume that only a small subset of covariates is `active' (i.e., the
corresponding coefficients are non-zero), and consider the model-selection
problem of identifying the active covariates. A popular approach is to estimate
the regression coefficients through the Lasso ($\ell_1$-regularized least
squares). This is known to correctly identify the active set only if the
irrelevant covariates are roughly orthogonal to the relevant ones, as
quantified through the so called `irrepresentability' condition. In this paper
we study the `Gauss-Lasso' selector, a simple two-stage method that first
solves the Lasso, and then performs ordinary least squares restricted to the
Lasso active set. We formulate `generalized irrepresentability condition'
(GIC), an assumption that is substantially weaker than irrepresentability. We
prove that, under GIC, the Gauss-Lasso correctly recovers the active set.
|
1305.0357 | Relevance distributions across Bradford Zones: Can Bradfordizing improve
search? | cs.IR cs.DL | The purpose of this paper is to describe the evaluation of the effectiveness
of the bibliometric technique Bradfordizing in an information retrieval (IR)
scenario. Bradfordizing is used to re-rank topical document sets from
conventional abstracting & indexing (A&I) databases into core and more
peripheral document zones. Bradfordized lists of journal articles and
monographs will be tested in a controlled scenario consisting of different A&I
databases from social and political sciences, economics, psychology and medical
science, 164 standardized IR topics and intellectual assessments of the listed
documents. Does Bradfordizing improve the ratio of relevant documents in the
first third (core) compared to the second and last third (zone 2 and zone 3,
respectively)? The IR tests show that relevance distributions after re-ranking
improve at a significant level if documents in the core are compared with
documents in the succeeding zones. After Bradfordizing of document pools, the
core has a significantly better average precision than zone 2, zone 3 and
baseline. This paper should be seen as an argument in favour of alternative
non-textual (bibliometric) re-ranking methods which can be simply applied in
text-based retrieval systems and in particular in A&I databases.
|
1305.0361 | Braess's Paradox in Epidemic Game: Better Condition Results in Less
Payoff | physics.soc-ph cs.SI q-bio.PE | Facing the threats of infectious diseases, we take various actions to protect
ourselves, but few studies have considered an evolving system with competing
strategies. In view of that, we propose an evolutionary epidemic model coupled
with human behaviors, where individuals have three strategies: vaccination,
self-protection and laissez faire, and could adjust their strategies according
to their neighbors' strategies and payoffs at the beginning of each new season
of epidemic spreading. We found a counter-intuitive phenomenon analogous to the
well-known \emph{Braess's Paradox}, namely a better condition may lead to worse
performance. Specifically, increasing the success rate of
self-protection does not necessarily reduce the epidemic size or improve the
system payoff. This phenomenon is insensitive to the network topologies, and
can be well explained by a mean-field approximation. Our study demonstrates an
important fact that a better condition for individuals may yield a worse
outcome for the society.
|
1305.0384 | Optimal Distributed Scheduling in Wireless Networks under the SINR
interference model | cs.IT math.IT | Radio resource sharing mechanisms are key to ensuring good performance in
wireless networks. In their seminal paper \cite{tassiulas1}, Tassiulas and
Ephremides introduced the Maximum Weighted Scheduling algorithm, and proved its
throughput-optimality. Since then, there have been extensive research efforts
to devise distributed implementations of this algorithm. Recently, distributed
adaptive CSMA scheduling schemes \cite{jiang08} have been proposed and shown to
be optimal, without the need for message passing among transmitters. However,
their analysis relies on the assumption that interference can be accurately
modelled by a simple interference graph. In this paper, we consider the more
realistic and challenging SINR interference model. We present {\it the first
distributed scheduling algorithms that (i) are optimal under the SINR
interference model, and (ii) that do not require any message passing}. They are
based on a combination of a simple and efficient power allocation strategy
referred to as {\it Power Packing} and randomization techniques. We first
devise algorithms that are rate-optimal in the sense that they perform as well
as the best centralized scheduling schemes in scenarios where each transmitter
is aware of the rate at which it should send packets to the corresponding
receiver. We then extend these algorithms so that they reach
throughput-optimality.
|
1305.0395 | Tensor Decompositions: A New Concept in Brain Data Analysis? | cs.NA cs.LG q-bio.NC stat.ML | Matrix factorizations and their extensions to tensor factorizations and
decompositions have become prominent techniques for linear and multilinear
blind source separation (BSS), especially multiway Independent Component
Analysis (ICA), Nonnegative Matrix and Tensor Factorization (NMF/NTF), Smooth
Component Analysis (SmoCA) and Sparse Component Analysis (SCA). Moreover,
tensor decompositions have many other potential applications beyond multilinear
BSS, especially feature extraction, classification, dimensionality reduction
and multiway clustering. In this paper, we briefly overview new and emerging
models and approaches for tensor decompositions in applications to group and
linked multiway BSS/ICA, feature extraction, classification and Multiway Partial
Least Squares (MPLS) regression problems. Keywords: Multilinear BSS, linked
multiway BSS/ICA, tensor factorizations and decompositions, constrained Tucker
and CP models, Penalized Tensor Decompositions (PTD), feature extraction,
classification, multiway PLS and CCA.
|
1305.0412 | Filter Design with Secrecy Constraints: The MIMO Gaussian Wiretap
Channel | cs.IT math.IT | This paper considers the problem of filter design with secrecy constraints,
where two legitimate parties (Alice and Bob) communicate in the presence of an
eavesdropper (Eve), over a Gaussian multiple-input-multiple-output (MIMO)
wiretap channel. This problem involves designing, subject to a power
constraint, the transmit and the receive filters which minimize the
mean-squared error (MSE) between the legitimate parties whilst assuring that
the eavesdropper MSE remains above a certain threshold. We consider a general
MIMO Gaussian wiretap scenario, where the legitimate receiver uses a linear
Zero-Forcing (ZF) filter and the eavesdropper receiver uses either a ZF or an
optimal linear Wiener filter. We provide a characterization of the optimal
filter designs by demonstrating the convexity of the optimization problems. We
also provide generalizations of the filter designs from the scenario where the
channel state is known to all the parties to the scenario where there is
uncertainty in the channel state. A set of numerical results illustrates the
performance of the novel filter designs, including the robustness to channel
modeling errors. In particular, we assess the efficacy of the designs in
guaranteeing not only a certain MSE level at the eavesdropper, but also in
limiting the error probability at the eavesdropper. We also assess the impact
of the filter designs on the achievable secrecy rates. The penalty induced by
the fact that the eavesdropper may use the optimal non-linear receive filter
rather than the optimal linear one is also explored in the paper.
|
1305.0423 | Testing Hypotheses by Regularized Maximum Mean Discrepancy | cs.LG cs.AI stat.ML | Do two data samples come from different distributions? Recent studies of this
fundamental problem focused on embedding probability distributions into
sufficiently rich characteristic Reproducing Kernel Hilbert Spaces (RKHSs), to
compare distributions by the distance between their embeddings. We show that
Regularized Maximum Mean Discrepancy (RMMD), our novel measure for kernel-based
hypothesis testing, yields substantial improvements even when sample sizes are
small, and excels at hypothesis tests involving multiple comparisons with power
control. We derive asymptotic distributions under the null and alternative
hypotheses, and assess power control. Outstanding results are obtained on:
challenging EEG data, MNIST, the Berkeley Covertype, and the Flare-Solar
dataset.
|
1305.0445 | Deep Learning of Representations: Looking Forward | cs.LG | Deep learning research aims at discovering learning algorithms that discover
multiple levels of distributed representations, with higher levels representing
more abstract concepts. Although the study of deep learning has already led to
impressive theoretical results, learning algorithms and breakthrough
experiments, several challenges lie ahead. This paper proposes to examine some
of these challenges, centering on the questions of scaling deep learning
algorithms to much larger models and datasets, reducing optimization
difficulties due to ill-conditioning or local minima, designing more efficient
and powerful inference and sampling procedures, and learning to disentangle the
factors of variation underlying the observed data. It also proposes a few
forward-looking research directions aimed at overcoming these challenges.
|
1305.0458 | From the Grid to the Smart Grid, Topologically | physics.soc-ph cs.CE cs.CY cs.SI | The Smart Grid is not just about the digitalization of the Power Grid. In its
more visionary conception, it is a model of energy management in which the
users are engaged in producing energy as well as consuming it, while having
information systems fully aware of the energy demand-response of the network
and of dynamically varying prices. A natural question is then: to make the
Smart Grid a reality will the Distribution Grid have to be updated? We assume a
positive answer to the question and we consider the lower layers of Medium and
Low Voltage to be the most affected by the change. In our previous work, we
have analyzed samples of the Dutch Distribution Grid in our previous work and
we have considered possible evolutions of these using synthetic topologies
modeled after studies of complex systems in other technological domains in
another previous work. In this paper, we take an extra important further step
by defining a methodology for evolving any existing physical Power Grid to a
good Smart Grid model thus laying the foundations for a decision support system
for utilities and governmental organizations. In doing so, we consider several
possible evolution strategies and apply them to the Dutch Distribution Grid. We
show how more connectivity is beneficial in realizing more efficient and
reliable networks. Our proposal is topological in nature, and enhanced with
economic considerations of the costs of such evolutions in terms of cabling
expenses and economic benefits of evolving the Grid.
|
1305.0471 | Community Structure in Interaction Web Service Networks | cs.SI cs.NI physics.soc-ph | Many real-world complex systems such as social, biological, information as
well as technological systems result from a decentralized and unplanned
evolution that leads to a common structure. Irrespective of their origin,
these so-called complex networks typically exhibit small-world and scale-free
properties. Another common feature is their organisation into communities. In
this paper, we introduce models of interaction networks based on the
composition process of syntactic and semantic Web services. An extensive
experimental study conducted on a benchmark of real Web services shows that
these networks possess the typical properties of complex networks (small-world,
scale-free). Unlike most social networks, they are not transitive. Using a
representative sample of community detection algorithms, a community
structure is revealed. The comparative evaluation of the discovered
community structures shows that they are very similar in terms of content.
Furthermore, the analysis performed on the community structures and on the
communities themselves, leads us to conclude that their topological properties
are consistent.
|
1305.0502 | Simple, Fast, and Scalable Reachability Oracle | cs.DB | A reachability oracle (or hop labeling) assigns each vertex v two sets of
vertices: Lout(v) and Lin(v), such that u reaches v iff Lout(u) \cap Lin(v)
\neq \emptyset. Despite their simplicity and elegance, reachability oracles
have failed to achieve efficiency in more than ten years since their
introduction: the main problem is high construction cost, which stems from a
set-cover framework and the need to materialize transitive closure. In this
paper, we present two simple and efficient labeling algorithms,
Hierarchical-Labeling and Distribution-Labeling, which can work on massive
real-world graphs: their construction time is an order of magnitude faster than
the set-cover-based labeling approach, and transitive closure materialization is
not needed. On large graphs, their index sizes and their query performance can
now beat the state-of-the-art transitive closure compression and online search
approaches.
|
1305.0503 | Graph-Theoretic Characterization of The Feasibility of The
Precoding-Based 3-Unicast Interference Alignment Scheme | cs.IT math.IT | A new precoding-based intersession network coding (NC) scheme has recently
been proposed, which applies the interference alignment technique, originally
devised for wireless interference channels, to the 3-unicast problem of
directed acyclic networks. The main result of this work is a graph-theoretic
characterization of the feasibility of the 3-unicast interference alignment
scheme. To that end, we first investigate several key relationships between the
point-to-point network channel gains and the underlying graph structure. Such
relationships turn out to be critical when characterizing graph-theoretically
the feasibility of precoding-based solutions.
|
1305.0507 | Hub-Accelerator: Fast and Exact Shortest Path Computation in Large
Social Networks | cs.SI cs.DB physics.soc-ph | Shortest path computation is one of the most fundamental operations for
managing and analyzing large social networks. Though existing techniques are
quite effective for finding the shortest path on large but sparse road
networks, social graphs have quite different characteristics: they are
generally non-spatial, non-weighted, scale-free, and they exhibit small-world
properties in addition to their massive size. In particular, the existence of
hubs, those vertices with a large number of connections, explodes the search
space, making the shortest path computation surprisingly challenging. In this
paper, we introduce a set of novel techniques centered around hubs,
collectively referred to as the Hub-Accelerator framework, to compute the
k-degree shortest path (finding the shortest path between two vertices if their
distance is within k). These techniques enable us to significantly reduce the
search space by either greatly limiting the expansion scope of hubs (using the
novel distance-preserving Hub-Network concept) or completely pruning away the
hubs in the online search (using the Hub2-Labeling approach). The
Hub-Accelerator approaches are more than two orders of magnitude faster than
BFS and the state-of-the-art approximate shortest path method Sketch for the
shortest path computation. The Hub-Network approach introduces no additional
index cost and only light pre-computation cost; the index size and index
construction cost of Hub2-Labeling are also moderate and better than or
comparable to the approximation indexing Sketch method.
|
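The BFS baseline that the abstract above compares against can be sketched in a few lines for the k-degree variant of the problem; hubs blow up exactly the frontier this search expands, which is what the Hub-Accelerator framework limits. This is an illustrative baseline, not the Hub-Network or Hub2-Labeling technique, and the function name is ours.

```python
from collections import deque

def k_degree_shortest_path(adj, s, t, k):
    """Plain BFS baseline for the k-degree shortest path problem: return
    dist(s, t) if it is at most k, else None. adj maps node -> neighbors."""
    if s == t:
        return 0
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if dist[u] >= k:      # never expand beyond distance k
            continue
        for w in adj.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                if w == t:
                    return dist[w]
                queue.append(w)
    return None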
1305.0513 | Limiting the Neighborhood: De-Small-World Network for Outbreak
Prevention | cs.SI physics.soc-ph | In this work, we study a basic and practically important strategy to help
prevent and/or delay an outbreak in the context of networks: limiting the
contact between individuals. In this paper, we introduce the average
neighborhood size as a new measure for the degree of being small-world and
utilize it to formally define the de-small-world network problem. We also prove
the NP-hardness of the general reachable pair cut problem and propose a greedy
edge betweenness based approach as the benchmark in selecting the candidate
edges for solving our problem. Furthermore, we transform the de-small-world
network problem into an OR-AND Boolean function maximization problem, which is
also NP-hard. In addition, we develop a numerical relaxation
approach to solve the Boolean function maximization and the de-small-world
problem. Also, we introduce the short-betweenness, which measures the edge
importance in terms of all short paths with distance no greater than a certain
threshold, and utilize it to speed up our numerical relaxation approach. The
experimental evaluation demonstrates the effectiveness and efficiency of our
approaches.
|
1305.0540 | Privacy Preserving Recommendation System Based on Groups | cs.IR | Recommendation systems have received considerable attention in recent
decades. Yet with the development of information technology and social media,
the risk in revealing private data to service providers has been a growing
concern to more and more users. Trade-offs between quality and privacy in
recommendation systems naturally arise. In this paper, we present a privacy
preserving recommendation framework based on groups. The main idea is to use
groups as a natural middleware to preserve users' privacy. A distributed
preference exchange algorithm is proposed to ensure the anonymity of data,
wherein the effective size of the anonymity set asymptotically approaches the
group size with time. We construct a hybrid collaborative filtering model based
on Markov random walks to provide recommendations and predictions to group
members. Experimental results on the MovieLens and Epinions datasets show that
our proposed methods outperform the baseline methods, L+ and ItemRank, two
state-of-the-art personalized recommendation algorithms, for both
recommendation precision and hit rate despite the absence of personal
preference information.
|
1305.0543 | Burstiness and spreading on temporal networks | physics.soc-ph cs.SI q-bio.PE | We discuss how spreading processes on temporal networks are impacted by the
shape of their inter-event time distributions. Through simple mathematical
arguments and toy examples, we find that the key factor is the ordering in
which events take place, a property that tends to be affected by the bulk of
the distributions and not only by their tail, as usually considered in the
literature. We show that a detailed modeling of the temporal patterns observed
in complex networks can dramatically change the properties of a spreading
process, such as the ergodicity of a random walk process or the persistence of
an epidemic.
|
1305.0547 | On Achievable Rates for Channels with Mismatched Decoding | cs.IT math.IT | The problem of mismatched decoding for discrete memoryless channels is
addressed. A mismatched cognitive multiple-access channel is introduced, and an
inner bound on its capacity region is derived using two alternative encoding
methods: superposition coding and random binning. The inner bounds are derived
by analyzing the average error probability of the code ensemble for both
methods and by a tight characterization of the resulting error exponents.
Random coding converse theorems are also derived. A comparison of the
achievable regions shows that in the matched case, random binning performs as
well as superposition coding, i.e., the region achievable by random binning is
equal to the capacity region. The achievability results are further specialized
to obtain a lower bound on the mismatch capacity of the single-user channel by
investigating a cognitive multiple access channel whose achievable sum-rate
serves as a lower bound on the single-user channel's capacity. In certain
cases, for given auxiliary random variables this bound strictly improves on the
achievable rate derived by Lapidoth.
|
1305.0552 | Self-organization of progress across the century of physics | physics.soc-ph cs.DL cs.SI physics.data-an | We make use of information provided in the titles and abstracts of over half
a million publications that were published by the American Physical Society
during the past 119 years. By identifying all unique words and phrases and
determining their monthly usage patterns, we obtain quantifiable insights into
the trends of physics discovery from the end of the 19th century to today. We
show that the magnitudes of upward and downward trends yield heavy-tailed
distributions, and that their emergence is due to the Matthew effect. This
indicates that both the rise and fall of scientific paradigms is driven by
robust principles of self-organization. Data also confirm that periods of war
decelerate scientific progress, and that the latter is very much subject to
globalization.
|
1305.0556 | A quantum teleportation inspired algorithm produces sentence meaning
from word meaning and grammatical structure | cs.CL quant-ph | We discuss an algorithm which produces the meaning of a sentence given the
meanings of its words, and its resemblance to quantum teleportation. In fact,
this protocol was the main source of inspiration for the algorithm, which has
many applications in the area of Natural Language Processing.
|
1305.0574 | Extending Modern SAT Solvers for Enumerating All Models | cs.AI | In this paper, we address the problem of enumerating all models of a Boolean
formula in conjunctive normal form (CNF). We propose an extension of CDCL-based
SAT solvers to deal with this fundamental problem. Then, we provide an
experimental evaluation of our proposed SAT model enumeration algorithms both
on satisfiable instances taken from the last SAT challenge and on
instances from the SAT-based encoding of sequence mining problems.
|
1305.0585 | Design and Stability of Load-Side Primary Frequency Control in Power
Systems | cs.SY math.OC | We present a systematic method to design ubiquitous continuous fast-acting
distributed load control for primary frequency regulation in power networks, by
formulating an optimal load control (OLC) problem where the objective is to
minimize the aggregate cost of tracking an operating point subject to power
balance over the network. We prove that the swing dynamics and the branch power
flows, coupled with frequency-based load control, serve as a distributed
primal-dual algorithm to solve OLC. We establish the global asymptotic
stability of a multimachine network under this type of load-side primary
frequency control. These results imply that the local frequency deviations at
each bus convey exactly the right information about the global power imbalance
for the loads to make individual decisions that turn out to be globally
optimal. Simulations confirm that the proposed algorithm can rebalance power
and resynchronize bus frequencies after a disturbance with significantly
improved transient performance.
|
1305.0596 | An Empirical Investigation of V-I Trajectory based Load Signatures for
Non-Intrusive Load Monitoring | cs.CE | Choice of load signature or feature space is one of the most fundamental
design choices for non-intrusive load monitoring or energy disaggregation
problem. Electrical power quantities, harmonic load characteristics, canonical
transient and steady-state waveforms are some of the typical choices of load
signature or load signature basis for current research addressing appliance
classification and prediction. This paper expands and evaluates appliance load
signatures based on V-I trajectory - the mutual locus of instantaneous voltage
and current waveforms - for precision and robustness of prediction in
classification algorithms used to disaggregate residential overall energy use
and predict constituent appliance profiles. We also demonstrate the use of
variants of differential evolution as a novel strategy for selection of optimal
load models in the context of energy disaggregation. A publicly available benchmark
dataset REDD is employed for evaluation purposes. Our experimental evaluations
indicate that these load signatures, in conjunction with a number of popular
classification algorithms, offer better or generally comparable overall
precision of prediction, robustness and reliability against dynamic, noisy and
highly similar load signatures with reference to electrical power quantities
and harmonic content. Herein, wave-shape features are found to be an effective
new basis of classification and prediction for semi-automated energy
disaggregation and monitoring.
|
1305.0606 | Results from a Practical Deployment of the MyZone Decentralized P2P
Social Network | cs.CR cs.DC cs.SI | This paper presents MyZone, a private online social network for relatively
small, closely-knit communities. MyZone has three important distinguishing
features. First, users keep the ownership of their data and have complete
control over maintaining their privacy. Second, MyZone is free from any
possibility of content censorship and is highly resilient to any single point
of disconnection. Finally, MyZone minimizes deployment cost by minimizing its
computation, storage and network bandwidth requirements. It incorporates both a
P2P architecture and a centralized architecture in its design ensuring high
availability, security and privacy. A prototype of MyZone was deployed over a
period of 40 days with a membership of more than one hundred users. The paper
provides a detailed evaluation of the results obtained from this deployment.
|
1305.0619 | Resource Allocation for Downlink Channel Transmission Based on
Superposition Coding | cs.IT math.IT | We analyze the problem of transmitting information to multiple users over a
shared wireless channel. The problem of resource allocation (RA) for the users
with the knowledge of their channel state information has been treated
extensively in the literature where various approaches trading off the users'
throughput and fairness were proposed. The emphasis was mostly on the
time-sharing (TS) approach, where the resource allocated to the user is
equivalent to its time share of the channel access. In this work, we propose to
take advantage of the broadcast nature of the channel and we adopt
superposition coding (SC), known to outperform TS in multi-user broadcast
scenarios. In SC, users' messages are simultaneously transmitted by superposing
their codewords with different power fractions under a total power constraint.
The main challenge is to find a simple way to allocate these power fractions to
all users taking into account the fairness/throughput tradeoff. We present an
algorithm with this purpose and we apply it in the case of popular proportional
fairness (PF). The obtained results using SC are illustrated with various
numerical examples where, compared to TS, a rate increase of between 20% and 300%
is observed.
|
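The power-fraction mechanism described above can be illustrated with the textbook two-user degraded Gaussian broadcast channel. This is a standard-formula sketch, not the paper's PF allocation algorithm; the function name, parameters, and decoding assumptions are ours.

```python
import math

def sc_rates(P, a, g1, g2, N0=1.0):
    """Achievable rates (bits/channel use) for two-user superposition coding
    on a degraded Gaussian broadcast channel; user 1 has the stronger channel
    gain (g1 >= g2). 'a' is the power fraction given to user 1; the stronger
    user decodes and cancels user 2's signal, while the weaker user treats
    user 1's signal as noise."""
    assert g1 >= g2 and 0.0 <= a <= 1.0
    r1 = math.log2(1 + a * P * g1 / N0)                       # user 2 cancelled
    r2 = math.log2(1 + (1 - a) * P * g2 / (a * P * g2 + N0))  # user 1 as noise
    return r1, r2
```

Sweeping `a` from 0 to 1 traces the fairness/throughput trade-off: larger `a` raises the strong user's rate and lowers the weak user's.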
1305.0625 | CONATION: English Command Input/Output System for Computers | cs.HC cs.CL | In this information technology age, a convenient and user-friendly interface
is required to operate computer systems at a very fast rate. For human beings,
speech is a natural mode of communication and has the potential to be a fast
and convenient mode of interaction with computers. Speech recognition will
play an important role in bringing this technology to users. It is a need of
this era to access information within seconds. This paper describes the design
and development of a speaker-independent, English-command interpretation
system for computers. An HMM model is used to represent the phoneme-like
speech commands. Experiments have been performed on real-world data, and the
system has been trained under normal conditions with real-world subjects.
|
1305.0626 | An Improved EM algorithm | cs.LG cs.AI stat.ML | In this paper, we first give a brief introduction to the expectation
maximization (EM) algorithm, and then discuss the initial-value sensitivity of
the expectation maximization algorithm. Subsequently, we give a short proof of
EM's convergence. Then, we carry out experiments with the expectation
maximization algorithm (all experiments are implemented on the Gaussian
mixture model (GMM)). Our experiments with expectation maximization cover the
following three cases: random initialization; initialization with the result
of K-means; and initialization with the result of K-medoids. The experimental
results show that the expectation maximization algorithm depends on its
initial state or parameters. We also found that EM initialized with K-medoids
performed better than both the one initialized with K-means and the one
initialized randomly.
|
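The setup described above, EM on a GMM seeded by K-means, can be sketched in a few lines. This is an illustrative 1-D toy with two components (function names like `em_gmm_1d` are ours), not the paper's implementation, and K-medoids is omitted for brevity.

```python
import math
import random

def kmeans_1d(xs, iters=20):
    """Tiny 1-D K-means with k = 2, initialized at the data extremes."""
    centers = [min(xs), max(xs)]
    for _ in range(iters):
        clusters = [[], []]
        for x in xs:
            j = 0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
            clusters[j].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def em_gmm_1d(xs, init_means, iters=50):
    """EM for a 1-D Gaussian mixture; one component per initial mean."""
    k = len(init_means)
    mu, var, pi = list(init_means), [1.0] * k, [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            dens = [pi[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                    / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2
                         for r, x in zip(resp, xs)) / nj + 1e-6
            pi[j] = nj / len(xs)
    return mu, var, pi
```

Seeding `init_means` with the K-means centers, rather than random draws, is exactly the initialization comparison the abstract studies.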
1305.0638 | Feature Selection Based on Term Frequency and T-Test for Text
Categorization | cs.LG cs.IR stat.ML | Much work has been done on feature selection. Existing methods are based on
document frequency, such as Chi-Square Statistic, Information Gain etc.
However, these methods have two shortcomings: one is that they are not reliable
for low-frequency terms, and the other is that they only count whether one term
occurs in a document and ignore the term frequency. Actually, high-frequency
terms within a specific category are often regarded as discriminators.
This paper focuses on how to construct the feature selection function based
on term frequency, and proposes a new approach based on $t$-test, which is used
to measure the diversity of the distributions of a term between the specific
category and the entire corpus. Extensive comparative experiments on two text
corpora using three classifiers show that our new approach is comparable to
or slightly better than the state-of-the-art feature selection methods (i.e.,
$\chi^2$ and IG) in terms of macro-$F_1$ and micro-$F_1$.
|
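The t-test criterion sketched above amounts to a two-sample t statistic on a term's per-document frequencies in the category versus the corpus. The following is a hedged illustration of that general idea, not the paper's exact scoring function; the function name and inputs are ours.

```python
import math

def t_score(tf_in_category, tf_in_corpus):
    """Two-sample t statistic comparing a term's per-document frequencies in
    one category against the whole corpus. A large |t| marks the term as a
    candidate discriminator for the category."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        return m, v
    m1, v1 = mean_var(tf_in_category)
    m2, v2 = mean_var(tf_in_corpus)
    se = math.sqrt(v1 / len(tf_in_category) + v2 / len(tf_in_corpus))
    return (m1 - m2) / se if se > 0 else 0.0
```

Unlike document-frequency criteria such as $\chi^2$ or IG, this score uses the raw term frequencies, which is the point the abstract makes.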
1305.0664 | Practical Implementation of Spatial Modulation | cs.IT math.IT | In this work we seek to characterise the performance of spatial modulation
(SM) and spatial multiplexing (SMX) with an experimental test bed. Two National
Instruments (NI)-PXIe devices are used for the system testing, one for the
transmitter and one for the receiver. The digital signal processing that
formats the information data in preparation of transmission is described along
with the digital signal processing that recovers the information data. In
addition, the hardware limitations of the system are also analysed. The average
bit error ratio (ABER) of the system is validated through both theoretical
analysis and simulation results for SM and SMX under line of sight (LoS)
channel conditions.
|
1305.0665 | Spectral Classification Using Restricted Boltzmann Machine | cs.LG | In this study, a novel machine learning algorithm, restricted Boltzmann
machine (RBM), is introduced. The algorithm is applied for the spectral
classification in astronomy. RBM is a bipartite generative graphical model with
two separate layers (one visible layer and one hidden layer), which can extract
higher-level features to represent the original data. Despite being
generative, an RBM can be used for classification when modified with a free
energy and a soft-max function. Before spectral classification, the original
data is binarized
according to some rule. Then we resort to the binary RBM to classify
cataclysmic variables (CVs) and non-CVs (one half of all the given data for
training and the other half for testing). The experimental results show a
state-of-the-art accuracy of 100%, which indicates the efficiency of the binary
RBM algorithm.
|
1305.0688 | On Flexible Web Services Composition Networks | cs.SE cs.IR | The semantic Web service community has devoted considerable effort to bringing
semantics to Web service descriptions and allowing automatic discovery and
composition. However,
there is no widespread adoption of such descriptions yet, because semantically
defining Web services is highly complicated and costly. As a result, production
Web services still rely on syntactic descriptions, key-word based discovery and
predefined compositions. Hence, more advanced research on syntactic Web
services is still ongoing. In this work we build syntactic composition Web
services networks with three well known similarity metrics, namely Levenshtein,
Jaro and Jaro-Winkler. We perform a comparative study on the metrics
performance by studying the topological properties of networks built from a
test collection of real-world descriptions. It appears Jaro-Winkler finds more
appropriate similarities and can be used at higher thresholds. For lower
thresholds, the Jaro metric would be preferable because it detects fewer
irrelevant relationships.
|
1305.0698 | Learning from Imprecise and Fuzzy Observations: Data Disambiguation
through Generalized Loss Minimization | cs.LG | Methods for analyzing or learning from "fuzzy data" have attracted increasing
attention in recent years. In many cases, however, existing methods (for
precise, non-fuzzy data) are extended to the fuzzy case in an ad-hoc manner,
and without carefully considering the interpretation of a fuzzy set when being
used for modeling data. Distinguishing between an ontic and an epistemic
interpretation of fuzzy set-valued data, and focusing on the latter, we argue
that a "fuzzification" of learning algorithms based on an application of the
generic extension principle is not appropriate. In fact, the extension
principle fails to properly exploit the inductive bias underlying statistical
and machine learning methods, although this bias, at least in principle, offers
a means for "disambiguating" the fuzzy data. Alternatively, we therefore
propose a method which is based on the generalization of loss functions in
empirical risk minimization, and which performs model identification and data
disambiguation simultaneously. Elaborating on the fuzzification of specific
types of losses, we establish connections to well-known loss functions in
regression and classification. We compare our approach with related methods and
illustrate its use in logistic regression for binary classification.
|
1305.0699 | Fast, Incremental Inverted Indexing in Main Memory for Web-Scale
Collections | cs.IR cs.DB | For text retrieval systems, the assumption that all data structures reside in
main memory is increasingly common. In this context, we present a novel
incremental inverted indexing algorithm for web-scale collections that directly
constructs compressed postings lists in memory. Designing efficient in-memory
algorithms requires understanding modern processor architectures and memory
hierarchies: in this paper, we explore the issue of postings lists contiguity.
Naturally, postings lists that occupy contiguous memory regions are preferred
for retrieval, but maintaining contiguity increases complexity and slows
indexing. On the other hand, allowing discontiguous index segments simplifies
index construction but decreases retrieval performance. Understanding this
tradeoff is our main contribution: We find that co-locating small groups of
inverted list segments yields query evaluation performance that is
statistically indistinguishable from fully-contiguous postings lists. In other
words, it is not necessary to lay out in-memory data structures such that all
postings for a term are contiguous; we can achieve ideal performance with a
relatively small amount of effort.
|
1305.0735 | Increasing Smart Meter Privacy Through Energy Harvesting and Storage
Devices | cs.IT math.IT | Smart meters are key elements for the operation of smart grids. By providing
near realtime information on the energy consumption of individual users, smart
meters increase the efficiency in generation, distribution and storage of
energy in a smart grid. The ability of the utility provider to track users'
energy consumption inevitably leads to serious threats to privacy. In this
paper, privacy in a smart metering system is studied from an information
theoretic perspective in the presence of energy harvesting and storage units.
It is shown that energy harvesting provides increased privacy by diversifying
the energy source, while a storage device can be used to increase both the
energy efficiency and the privacy of the user. For given input load and energy
harvesting rates, it is shown that there exists a trade-off between the
information leakage rate, which is used to measure the privacy of the user, and
the wasted energy rate, which is a measure of the energy-efficiency. The impact
of the energy harvesting rate and the size of the storage device on this
trade-off is also studied.
|
1305.0751 | Marginal AMP Chain Graphs | stat.ML cs.AI | We present a new family of models that is based on graphs that may have
undirected, directed and bidirected edges. We name these new models marginal
AMP (MAMP) chain graphs because each of them is Markov equivalent to some AMP
chain graph under marginalization of some of its nodes. However, MAMP chain
graphs subsume not only AMP chain graphs but also multivariate regression
chain graphs. We describe global and pairwise Markov properties for MAMP chain
graphs and prove their equivalence for compositional graphoids. We also
characterize when two MAMP chain graphs are Markov equivalent.
For Gaussian probability distributions, we also show that every MAMP chain
graph is Markov equivalent to some directed and acyclic graph with
deterministic nodes under marginalization and conditioning on some of its
nodes. This is important because it implies that the independence model
represented by a MAMP chain graph can be accounted for by some data generating
process that is partially observed and has selection bias. Finally, we modify
MAMP chain graphs so that they are closed under marginalization for Gaussian
probability distributions. This is a desirable feature because it guarantees
parsimonious models under marginalization.
|
1305.0757 | Hierarchies of Predominantly Connected Communities | cs.DS cs.SI physics.soc-ph | We consider communities whose vertices are predominantly connected, i.e., the
vertices in each community are more strongly connected to other members of
the same community than to vertices outside the community. Flake et al.
introduced a hierarchical clustering algorithm that finds such predominantly
connected communities of different coarseness depending on an input parameter.
We present a simple and efficient method for constructing a clustering
hierarchy according to Flake et al. that supersedes the necessity of choosing
feasible parameter values and guarantees the completeness of the resulting
hierarchy, i.e., the hierarchy contains all clusterings that can be constructed
by the original algorithm for any parameter value. However, predominantly
connected communities are not organized in a single hierarchy. Thus, we develop
a framework that, after precomputing at most $2(n-1)$ maximum flows, admits a
linear time construction of a clustering $\C(S)$ of predominantly connected
communities that contains a given community $S$ and is maximum in the sense
that any further clustering of predominantly connected communities that also
contains $S$ is hierarchically nested in $\C(S)$. We further generalize this
construction yielding a clustering with similar properties for $k$ given
communities in $O(kn)$ time. This admits the analysis of a network's structure
with respect to various communities in different hierarchies.
|
1305.0763 | Quantifying the Impact of Parameter Tuning on Nature-Inspired Algorithms | cs.NE | The problem of parameterization is often central to the effective deployment
of nature-inspired algorithms. However, finding the optimal set of parameter
values for a combination of problem instance and solution method is highly
challenging, and few concrete guidelines exist on how and when such tuning may
be performed. Previous work tends to either focus on a specific algorithm or
use benchmark problems, and both of these restrictions limit the applicability
of any findings. Here, we examine a number of different algorithms, and study
them in a "problem agnostic" fashion (i.e., one that is not tied to specific
instances) by considering their performance on fitness landscapes with varying
characteristics. Using this approach, we make a number of observations on which
algorithms may (or may not) benefit from tuning, and in which specific
circumstances.
|
1305.0817 | Optimal Relay Selection for Physical-Layer Security in Cooperative
Wireless Networks | cs.IT math.IT | In this paper, we explore the physical-layer security in cooperative wireless
networks with multiple relays where both amplify-and-forward (AF) and
decode-and-forward (DF) protocols are considered. We propose the AF and DF
based optimal relay selection (i.e., AFbORS and DFbORS) schemes to improve the
wireless security against eavesdropping attack. For the purpose of comparison,
we examine the traditional AFbORS and DFbORS schemes, denoted by T-AFbORS and
T-DFbORS, respectively. We also investigate a so-called multiple relay combining
(MRC) framework and present the traditional AF and DF based MRC schemes, called
T-AFbMRC and T-DFbMRC, where multiple relays participate in forwarding the
source signal to the destination, which then combines its received signals from the
multiple relays. We derive closed-form intercept probability expressions of the
proposed AFbORS and DFbORS (i.e., P-AFbORS and P-DFbORS) as well as the
T-AFbORS, T-DFbORS, T-AFbMRC and T-DFbMRC schemes in the presence of
eavesdropping attack. We further conduct an asymptotic intercept probability
analysis to evaluate the diversity order performance of relay selection schemes
and show that no matter which relaying protocol is considered (i.e., AF and
DF), the traditional and proposed optimal relay selection approaches both
achieve the diversity order M where M represents the number of relays. In
addition, numerical results show that for both AF and DF protocols, the
intercept probability performance of proposed optimal relay selection is
strictly better than that of the traditional relay selection and multiple relay
combining methods.
|
1305.0842 | Time Invariant Error Bounds for Modified-CS based Sparse Signal Sequence
Recovery | cs.IT math.IT | In this work, we obtain performance guarantees for modified-CS and for its
improved version, modified-CS-Add-LS-Del, for recursive reconstruction of a
time sequence of sparse signals from a reduced set of noisy measurements
available at each time. Under mild assumptions, we show that the support
recovery error of both algorithms is bounded by a time-invariant and small
value at all times. The same is also true for the reconstruction error. Under a
slow support change assumption, (i) the support recovery error bound is small
compared to the support size; and (ii) our results hold under weaker
assumptions on the number of measurements than what $\ell_1$ minimization for
noisy data needs. We first give a general result that only assumes a bound on
support size, number of support changes and number of small magnitude nonzero
entries at each time. Later, we specialize the main idea of these results for
two sets of signal change assumptions that model the class of problems in which
a new element that is added to the support either gets added at a large initial
magnitude or its magnitude slowly increases to a large enough value within a
finite delay. Simulation experiments are shown to back up our claims.
|
1305.0848 | Bound entangled states with a private key and their classical
counterpart | quant-ph cs.IT math.IT | Entanglement is a fundamental resource for quantum information processing. In
its pure form, it allows quantum teleportation and sharing classical secrets.
Realistic quantum states are noisy and their usefulness is only partially
understood. Bound-entangled states are central to this question---they have no
distillable entanglement, yet sometimes still have a private classical key. We
present a construction of bound-entangled states with private key based on
classical probability distributions. From this emerge states possessing a new
classical analogue of bound entanglement, distinct from the long-sought bound
information. We also find states of smaller dimensions and higher key rates
than previously known. Our construction has implications for classical
cryptography: we show that existing protocols are insufficient for extracting
private key from our distributions due to their "bound-entangled" nature. We
propose a simple extension of existing protocols that can extract key from
them.
|
1305.0860 | Nonlinearity Computation for Sparse Boolean Functions | cs.IT math.IT | An algorithm for computing the nonlinearity of a Boolean function from its
algebraic normal form (ANF) is proposed. By generalizing the expression of the
weight of a Boolean function in terms of its ANF coefficients, a formulation of
the distances to linear functions is obtained. The special structure of these
distances can be exploited to reduce the task of nonlinearity computation to
solving an associated binary integer programming problem. The proposed
algorithm can be used in cases where applying the Fast Walsh transform is
infeasible, typically when the number of input variables exceeds 40.
|
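For context on the abstract above, the Fast Walsh transform baseline it mentions (feasible up to roughly 40 input variables) is short to write down: the nonlinearity of an n-variable Boolean function is $2^{n-1} - \frac{1}{2}\max_a |W_f(a)|$ over the Walsh spectrum. This sketch implements that baseline, not the paper's ANF-based binary-integer-programming method.

```python
def walsh_spectrum(truth_table):
    """In-place fast Walsh transform of (-1)^f, where truth_table is a list
    of 0/1 values of length 2^n."""
    w = [1 - 2 * b for b in truth_table]  # map 0/1 -> +1/-1
    n, h = len(w), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = w[j], w[j + h]
                w[j], w[j + h] = a + b, a - b  # butterfly step
        h *= 2
    return w

def nonlinearity(truth_table):
    """Minimum Hamming distance from f to the set of affine functions."""
    w = walsh_spectrum(truth_table)
    return len(truth_table) // 2 - max(abs(c) for c in w) // 2
```

For example, the 2-variable AND function has nonlinearity 1, while XOR, being affine, has nonlinearity 0.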
1305.0868 | Precoding-Based Network Alignment For Three Unicast Sessions | cs.IT math.IT | We consider the problem of network coding across three unicast sessions over
a directed acyclic graph, where each sender and the receiver is connected to
the network via a single edge of unit capacity. We consider a network model in
which the middle of the network only performs random linear network coding, and
restrict our approaches to precoding-based linear schemes, where the senders
use precoding matrices to encode source symbols. We adapt a precoding-based
interference alignment technique, originally developed for the wireless
interference channel, to construct a precoding-based linear scheme, which we
refer to as a {\em precoding-based network alignment scheme (PBNA)}. A
primary difference between this setting and the wireless interference channel
is that the network topology can introduce dependencies between elements of the
transfer matrix, which we refer to as coupling relations, and can potentially
affect the achievable rate of PBNA. We identify all possible such coupling
relations, interpret them in terms of network topology, and present
polynomial-time algorithms to check for their presence. Finally, we show
that, depending on the coupling relations present
in the network, the optimal symmetric rate achieved by precoding-based linear
scheme can take only three possible values, all of which can be achieved by
PBNA.
|
1305.0870 | Computing a k-sparse n-length Discrete Fourier Transform using at most
4k samples and O(k log k) complexity | cs.DS cs.IT cs.MM math.IT | Given an $n$-length input signal $\mbf{x}$, it is well known that its
Discrete Fourier Transform (DFT), $\mbf{X}$, can be computed in $O(n \log n)$
complexity using a Fast Fourier Transform (FFT). If the spectrum $\mbf{X}$ is
exactly $k$-sparse (where $k \ll n$), can we do better? We show that
asymptotically in $k$ and $n$, when $k$ is sub-linear in $n$ (precisely, $k
\propto n^{\delta}$ where $0 < \delta <1$), and the support of the non-zero DFT
coefficients is uniformly random, we can exploit this sparsity in two
fundamental ways (i) {\bf {sample complexity}}: we need only $M=rk$
deterministically chosen samples of the input signal $\mbf{x}$ (where $r < 4$
when $0 < \delta < 0.99$); and (ii) {\bf {computational complexity}}: we can
reliably compute the DFT $\mbf{X}$ using $O(k \log k)$ operations, where the
constants in the big Oh are small and are related to the constants involved in
computing a small number of DFTs of length approximately equal to the sparsity
parameter $k$. Our algorithm succeeds with high probability, with the
probability of failure vanishing to zero asymptotically in the number of
samples acquired, $M$.
|
1305.0871 | Dictionary learning based image enhancement for rarity detection | cs.CV | Image enhancement is an important image processing technique that processes
images suitably for a specific application, e.g. image editing. Conventional
image enhancement solutions fall into two categories, spatial-domain and
transform-domain processing methods, such as contrast manipulation, histogram
equalization, and homomorphic filtering. This paper proposes a new image
enhancement method based on dictionary learning.
Particularly, the proposed method adjusts the image by manipulating the rarity
of dictionary atoms. First, the dictionary is learned through sparse coding
algorithms on divided sub-image blocks. Second, the rarity of dictionary atoms
is computed from statistics of the corresponding sparse coefficients. Third,
the rarity is adjusted according to the specific application to form a new
dictionary. Finally, the image is reconstructed using the updated dictionary
and sparse coefficients. Compared with traditional techniques, the proposed
method enhances image based on the image content not on distribution of pixel
grey value or frequency. The advantages of the proposed method lie in that it
is in better correspondence with the response of the human visual system and
more suitable for salient objects extraction. The experimental results
demonstrate the effectiveness of the proposed image enhancement method.
|
1305.0900 | Mathematical practice, crowdsourcing, and social machines | cs.SI cs.DL math.HO physics.soc-ph | The highest level of mathematics has traditionally been seen as a solitary
endeavour, to produce a proof for review and acceptance by research peers.
Mathematics is now at a remarkable inflexion point, with new technology
radically extending the power and limits of individuals. Crowdsourcing pulls
together diverse experts to solve problems; symbolic computation tackles huge
routine calculations; and computers check proofs too long and complicated for
humans to comprehend.
Mathematical practice is an emerging interdisciplinary field which draws on
philosophy and social science to understand how mathematics is produced. Online
mathematical activity provides a novel and rich source of data for empirical
investigation of mathematical practice - for example the community question
answering system {\it mathoverflow} contains around 40,000 mathematical
conversations, and {\it polymath} collaborations provide transcripts of the
process of discovering proofs. Our preliminary investigations have demonstrated
the importance of "soft" aspects such as analogy and creativity, alongside
deduction and proof, in the production of mathematics, and have given us new
ways to think about the roles of people and machines in creating new
mathematical knowledge. We discuss further investigation of these resources and
what it might reveal.
Crowdsourced mathematical activity is an example of a "social machine", a new
paradigm, identified by Berners-Lee, for viewing a combination of people and
computers as a single problem-solving entity, and the subject of major
international research endeavours. We outline a future research agenda for
mathematics social machines, a combination of people, computers, and
mathematical archives to create and apply mathematics, with the potential to
change the way people do mathematics, and to transform the reach, pace, and
impact of mathematics research.
|
1305.0904 | What does mathoverflow tell us about the production of mathematics? | cs.SI cs.DL math.HO physics.soc-ph | The highest level of mathematics research is traditionally seen as a solitary
activity. Yet new innovations by mathematicians themselves are starting to
harness the power of social computation to create new modes of mathematical
production. We study the effectiveness of one such system, and make proposals
for enhancement, drawing on AI and computer based mathematics. We analyse the
content of a sample of questions and responses in the community question
answering system for research mathematicians, mathoverflow. We find that
mathoverflow is very effective, with 90% of our sample of questions answered
completely or in part. A typical response is an informal dialogue, allowing
error and speculation, rather than rigorous mathematical argument: 37% of our
sample discussions acknowledged error. Responses typically present information
known to the respondent, and readily checked by other users: thus the
effectiveness of mathoverflow comes from information sharing. We conclude that
extending the power and reach of mathoverflow through a combination of
people and machines raises new challenges for artificial intelligence and
computational mathematics, in particular how to handle error, analogy and
informal reasoning.
|
1305.0909 | An Asymptotically Efficient Backlog Estimate for Dynamic Frame Aloha | cs.IT math.IT | In this paper we investigate backlog estimation procedures for Dynamic Frame
Aloha (DFA) in a Radio Frequency Identification (RFID) environment. In
particular, we address the tag identification efficiency with any tag number
$N$, including $N\rightarrow\infty$. Although in the latter case efficiency
$e^{-1}$ is possible, none of the solutions proposed in the literature has been
shown to reach that value. We analyze Schoute's backlog estimate, which is very
attractive for its simplicity, and formally show that its asymptotic efficiency
is 0.311. Leveraging the analysis, we propose the Asymptotic Efficient backlog
Estimate (AE$^2$), an improvement of Schoute's backlog estimate, whose
efficiency reaches $e^{-1}$ asymptotically. We further show that AE$^2$ can be
optimized in order to present an efficiency very close to $e^{-1}$ for
practically any value of the population size. We also evaluate the loss of
efficiency when the frame size is constrained to be a power of two, as required
by RFID standards for DFA, and theoretically show that the asymptotic
efficiency becomes 0.356.
|
1305.0918 | Primer and Recent Developments on Fountain Codes | cs.IT cs.NI math.IT | In this paper we survey the various erasure codes that have been proposed and
patented recently, and along the way provide an introductory tutorial on many
of the essential concepts and readings in erasure and Fountain codes. Packet
erasure is a fundamental characteristic inherent in data storage and data
transmission systems. Traditionally, replication/retransmission-based
techniques have been employed to deal with packet erasures in such systems.
While Reed-Solomon (RS) erasure codes have long been known to improve system
reliability and reduce data redundancy, their high decoding computation cost
has hindered wider adoption. However, the recent exponential growth in data
traffic and demand for larger data storage capacity has rekindled interest in
erasure codes. Recent results show promise in addressing the decoding
computation complexity and redundancy tradeoff inherent in erasure codes.
|
1305.0922 | On Comparison between Evolutionary Programming Network-based Learning
and Novel Evolution Strategy Algorithm-based Learning | cs.NE cs.LG | This paper presents two different evolutionary systems - Evolutionary
Programming Network (EPNet) and the Novel Evolution Strategy (NES) algorithm.
EPNet performs training and architecture evolution simultaneously, whereas NES
uses a fixed network architecture and only trains the network. Five mutation
operators are proposed in EPNet to reflect the emphasis on evolving ANN
behaviors. Close
behavioral links between parents and their offspring are maintained by various
mutations, such as partial training and node splitting. On the other hand, NES
uses two new genetic operators - subpopulation-based max-mean arithmetical
crossover and time-variant mutation. The above-mentioned two algorithms have
been tested on a number of benchmark problems, such as the medical diagnosis
problems (breast cancer, diabetes, and heart disease). The results and the
comparison between them are also presented in this paper.
|
1305.0935 | The physics of custody | physics.soc-ph cond-mat.stat-mech cs.SI | Divorced individuals face complex situations when they have children with
different ex-partners, or even more, when their new partners have children of
their own. In such cases, and when kids spend every other weekend with each
parent, a practical problem emerges: Is it possible to have such a custody
arrangement that every couple has either all of the kids together or no kids at
all? We show that in general it is not possible, but that the number of
couples for which it holds can be maximized. The problem turns out to be equivalent to
finding the ground state of a spin glass system, which is known to be
equivalent to what is called a weighted max-cut problem in graph theory, and
hence it is NP-Complete.
|
1305.0939 | Intelligent Agent Based Semantic Web in Cloud Computing Environment | cs.IR | Considering today's web scenario, there is a need for effective and meaningful
search over the web, which is provided by the Semantic Web. Existing search
engines are keyword based. They are poor at answering intelligent queries from
the user because their results depend on the information available in web
pages. Semantic search engines, in contrast, provide efficient and relevant
results, as the semantic web is an extension of the current web in which
information is given well-defined meaning. MetaCrawler is a search tool that uses several
existing search engines and provides combined results by using their own page
ranking algorithm. This paper proposes the development of a meta-semantic-search
engine called SemanTelli that works in the cloud. SemanTelli fetches results
from different semantic search engines such as Hakia, DuckDuckGo, SenseBot with
the help of intelligent agents that eliminate the limitations of existing
search engines.
|
1305.0943 | Weighted Electoral Control | cs.GT cs.CC cs.MA | Although manipulation and bribery have been extensively studied under
weighted voting, there has been almost no work done on election control under
weighted voting. This is unfortunate, since weighted voting appears in many
important natural settings. In this paper, we study the complexity of
controlling the outcome of weighted elections through adding and deleting
voters. We obtain polynomial-time algorithms, NP-completeness results, and for
many NP-complete cases, approximation algorithms. In particular, for scoring
rules we completely characterize the complexity of weighted voter control. Our
work shows that for quite a few important cases, either polynomial-time exact
algorithms or polynomial-time approximation algorithms exist.
|
1305.0947 | A Versatile Dependent Model for Heterogeneous Cellular Networks | cs.NI cs.IT math.IT | We propose a new model for heterogeneous cellular networks that incorporates
dependencies between the layers. In particular, it places lower-tier base
stations at locations that are poorly covered by the macrocells, and it
includes a small-cell model for the case where the goal is to enhance network
capacity.
|
1305.0978 | Optimization Approach to Parametric Tuning of Power System Stabilizer
Based on Trajectory Sensitivity Analysis | cs.SY | This paper proposes a transient-based optimal parametric tuning method for
the power system stabilizer (PSS) based on trajectory sensitivity (TS)
analysis of hybrid systems, such as the hybrid power system (HPS). The main
objective is to explore a systematic optimization approach for the PSS under
large disturbances of the HPS, where nonlinear features cannot be ignored; the
traditional eigenvalue-based small-signal optimizations, by contrast, neglect
the higher-order terms of the Taylor series of the system state equations. In contrast to
previous work, the proposed TS optimal method focuses on the gradient
information of objective function with respect to decision variables by means
of the trajectory sensitivity of HPS to the PSS parameters, and optimizes the
PSS parameters in terms of the conjugate gradient method. Firstly, the
traditional parametric tuning methods of PSS are introduced. Then, the
systematic mathematical models and transient trajectory simulation are
presented by introducing switching/reset events in terms of triggering
hypersurfaces so as to formulate the optimization problem using TS analysis.
Finally, a case study of IEEE three-machine-nine-bus standard test system is
discussed in detail to exemplify the practicality and effectiveness of the
proposed optimal method.
|
1305.0983 | Real-Time Welfare-Maximizing Regulation Allocation in Dynamic
Aggregator-EVs System | cs.SY cs.PF math.OC | The concept of vehicle-to-grid (V2G) has gained recent interest as more and
more electric vehicles (EVs) are put to use. In this paper, we consider a
dynamic aggregator-EVs system, where an aggregator centrally coordinates a
large number of dynamic EVs to perform regulation service. We propose a
Welfare-Maximizing Regulation Allocation (WMRA) algorithm for the aggregator to
fairly allocate the regulation amount among its EVs. Compared to previous
works, WMRA accommodates a wide spectrum of vital system characteristics,
including EV dynamics, limited EV battery size, EV battery degradation cost,
and the cost of using external energy sources for the aggregator. The algorithm
operates in real time and does not require any prior knowledge of the
statistical information of the system. Theoretically, we demonstrate that WMRA
is within O(1/V) of the optimum, where V is a controlling parameter
depending on EV's battery size. In addition, our simulation results indicate
that WMRA can substantially outperform a suboptimal greedy algorithm.
|
1305.1002 | Efficient Estimation of the number of neighbours in Probabilistic K
Nearest Neighbour Classification | cs.LG stat.ML | Probabilistic k-nearest neighbour (PKNN) classification has been introduced
to improve the performance of the original k-nearest neighbour (KNN)
classification algorithm by explicitly modelling uncertainty in the
classification of each feature vector. However, an issue common to both KNN
and PKNN is the selection of the
optimal number of neighbours, $k$. The contribution of this paper is to
incorporate the uncertainty in $k$ into the decision making, and in so doing
use Bayesian model averaging to provide improved classification. Indeed the
problem of assessing the uncertainty in $k$ can be viewed as one of statistical
model selection which is one of the most important technical issues in the
statistics and machine learning domain. In this paper, a new functional
approximation algorithm is proposed to reconstruct the density of the model
(order) without relying on time consuming Monte Carlo simulations. In addition,
this algorithm avoids cross-validation by adopting a Bayesian framework. The
algorithm yielded very good performance on several real experimental datasets.
|
1305.1012 | Low Complexity Delay-Constrained Beamforming for Multi-User MIMO Systems
with Imperfect CSIT | cs.IT math.IT | In this paper, we consider the delay-constrained beamforming control for
downlink multi-user MIMO (MU-MIMO) systems with imperfect channel state
information at the transmitter (CSIT). The delay-constrained control problem is
formulated as an infinite horizon average cost partially observed Markov
decision process. To deal with the curse of dimensionality, we introduce a
virtual continuous time system and derive a closed-form approximate value
function using perturbation analysis w.r.t. the CSIT errors. To deal with the
challenge of the conditional packet error rate (PER), we build a tractable
closed-form approximation using a Bernstein-type inequality. Based on the
closed-form approximations of the relative value function and the conditional
PER, we propose a conservative formulation of the original beamforming control
problem. The conservative problem is non-convex and we transform it into a
convex problem using the semidefinite relaxation (SDR) technique. We then
propose an alternating iterative algorithm to solve the SDR problem. Finally,
the proposed scheme is compared with various baselines through simulations and
it is shown that significant performance gain can be achieved.
|
1305.1019 | Simple Deep Random Model Ensemble | cs.LG | Representation learning and unsupervised learning are two central topics of
machine learning and signal processing. Deep learning is one of the most
effective unsupervised representation learning approaches. The main contributions
of this paper to the topics are as follows. (i) We propose to view the
representative deep learning approaches as special cases of the knowledge reuse
framework of clustering ensemble. (ii) We propose to view sparse coding when
used as a feature encoder as the consensus function of clustering ensemble, and
view dictionary learning as the training process of the base clusterings of
clustering ensemble. (iii) Based on the above two views, we propose a very
simple deep learning algorithm, named deep random model ensemble (DRME). It is
a stack of random model ensembles. Each random model ensemble is a special
k-means ensemble that discards the expectation-maximization optimization of
each base k-means but only preserves the default initialization method of the
base k-means. (iv) We propose to select the most powerful representation among
the layers by applying DRME to clustering where the single-linkage is used as
the clustering algorithm. Moreover, the DRME based clustering can also detect
the number of the natural clusters accurately. Extensive experimental
comparisons with 5 representation learning methods on 19 benchmark data sets
demonstrate the effectiveness of DRME.
|
1305.1027 | Regret Bounds for Reinforcement Learning with Policy Advice | stat.ML cs.LG | In some reinforcement learning problems an agent may be provided with a set
of input policies, perhaps learned from prior experience or provided by
advisors. We present a reinforcement learning with policy advice (RLPA)
algorithm which leverages this input set and learns to use the best policy in
the set for the reinforcement learning task at hand. We prove that RLPA has a
sub-linear regret of \tilde O(\sqrt{T}) relative to the best input policy, and
that both this regret and its computational complexity are independent of the
size of the state and action space. Our empirical simulations support our
theoretical analysis. This suggests RLPA may offer significant advantages in
large domains where some prior good policies are provided.
|
1305.1040 | On the Convergence and Consistency of the Blurring Mean-Shift Process | stat.ML cs.LG | The mean-shift algorithm is a popular algorithm in computer vision and image
processing. It can also be cast as a minimum gamma-divergence estimation. In
this paper we focus on the "blurring" mean shift algorithm, which is one
version of the mean-shift process that successively blurs the dataset. The
analysis of the blurring mean-shift is more complicated than that of the
nonblurring version, yet its convergence and estimation
consistency have not been well studied in the literature. In this paper we
prove both the convergence and the consistency of the blurring mean-shift. We
also perform simulation studies to compare the efficiency of the blurring and
the nonblurring versions of the mean-shift algorithms. Our results show that
the blurring mean-shift is more efficient.
|