| id | title | categories | abstract |
|---|---|---|---|
1302.5453 | The Quantum Entropy Cone of Stabiliser States | quant-ph cs.IT math-ph math.IT math.MP | We investigate the universal linear inequalities that hold for the von
Neumann entropies in a multi-party system, prepared in a stabiliser state. We
demonstrate here that entropy vectors for stabiliser states satisfy, in
addition to the classic inequalities, a class of linear rank inequalities
associated with the combinatorial structure of normal subgroups of certain
matrix groups.
In the 4-party case, there is only one such inequality, the so-called
Ingleton inequality. For these systems we show that strong subadditivity, weak
monotonicity and the Ingleton inequality exactly characterize the entropy cone
for stabiliser states.
|
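For reference, the four-party Ingleton inequality mentioned above can be written in entropic form, with $I(A;B) = S(A) + S(B) - S(AB)$ denoting the mutual information defined via the von Neumann entropy $S$:

```latex
I(A;B) \;\le\; I(A;B\,|\,C) + I(A;B\,|\,D) + I(C;D)
```

This is the same inequality that constrains ranks of subspaces (and hence linearly representable matroids), which is the sense in which it is a "linear rank inequality".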
1302.5455 | Seeding Influential Nodes in Non-Submodular Models of Information
Diffusion | cs.SI physics.soc-ph | We consider the model of information diffusion in social networks from
\cite{Hui2010a} which incorporates trust (weighted links) between actors, and
allows actors to actively participate in the spreading process, specifically
through the ability to query friends for additional information. This model
captures how social agents transmit and act upon information more realistically
as compared to the simpler threshold and cascade models. However, it is more
difficult to analyze, in particular with respect to seeding strategies. We
present efficient, scalable algorithms for determining good seed sets --
initial nodes to inject with the information. Our general approach is to reduce
our model to a class of simpler models for which provably good sets can be
constructed. By tuning this class of simpler models, we obtain a good seed set
for the original more complex model. We call this the \emph{projected greedy
approach} because one `projects' the model onto a class of simpler models where
a greedy seed set selection is near-optimal. We demonstrate the effectiveness
of our seeding strategy on synthetic graphs as well as a realistic San Diego
evacuation network constructed during the 2007 fires.
|
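As a rough illustration of greedy seed selection on a "simpler model" (this is not the paper's algorithm; the graph, propagation probability `p`, and trial count are all hypothetical), here is Monte Carlo greedy seeding on a plain independent-cascade model:

```python
import random

def simulate_ic(graph, seeds, p=0.3, rng=None):
    """One independent-cascade simulation; returns the number of activated nodes."""
    rng = rng or random
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(graph, k, p=0.3, trials=200, seed=0):
    """Greedily add the node with the largest estimated marginal spread."""
    rng = random.Random(seed)
    chosen = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        best, best_gain = None, -1.0
        for cand in nodes - set(chosen):
            est = sum(simulate_ic(graph, chosen + [cand], p, rng)
                      for _ in range(trials)) / trials
            if est > best_gain:
                best, best_gain = cand, est
        chosen.append(best)
    return chosen

graph = {0: [1, 2, 3], 1: [4], 2: [4], 3: [5], 4: [6], 5: [6]}
print(greedy_seeds(graph, 2))
```

For submodular spread functions such as this one, the greedy choice is provably near-optimal, which is what makes it a useful projection target for the more complex model.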
1302.5474 | On the performance of a hybrid genetic algorithm in dynamic environments | cs.NE math.OC | The ability to track the optimum of dynamic environments is important in many
practical applications. In this paper, the capability of a hybrid genetic
algorithm (HGA) to track the optimum in some dynamic environments is
investigated for different functional dimensions, update frequencies, and
displacement strengths in different types of dynamic environments. Experimental
results are reported by using the HGA and some other existing evolutionary
algorithms in the literature. The results show that the HGA has better
capability to track the dynamic optimum than some other existing algorithms.
|
1302.5518 | Locally Repairable Codes with Multiple Repair Alternatives | cs.IT cs.DC math.IT | Distributed storage systems need to store data redundantly in order to
provide some fault-tolerance and guarantee system reliability. Different coding
techniques have been proposed to provide the required redundancy more
efficiently than traditional replication schemes. However, compared to
replication, coding techniques are less efficient for repairing lost
redundancy, as they require retrieval of larger amounts of data from larger
subsets of storage nodes. To mitigate these problems, several recent works have
presented locally repairable codes designed to minimize the repair traffic and
the number of nodes involved per repair. Unfortunately, existing methods often
lead to codes where there is only one subset of nodes able to repair a piece of
lost data, limiting the local repairability to the availability of the nodes in
this subset. In this paper, we present a new family of locally repairable codes
that allows different trade-offs between the number of contacted nodes per
repair, and the number of different subsets of nodes that enable this repair.
We show that slightly increasing the number of contacted nodes per repair
allows for multiple repair alternatives, which in turn increases the probability of
being able to perform efficient repairs. Finally, we present pg-BLRC, an
explicit construction of locally repairable codes with multiple repair
alternatives, constructed from partial geometries, in particular from
Generalized Quadrangles. We show how these codes can achieve practical lengths
and high rates, while requiring a small number of nodes per repair, and
providing multiple repair alternatives.
|
1302.5521 | Towards Python-based Domain-specific Languages for Self-reconfigurable
Modular Robotics Research | cs.RO cs.OS cs.PL cs.SE | This paper explores the role of operating system and high-level languages in
the development of software and domain-specific languages (DSLs) for
self-reconfigurable robotics. We review some of the current trends in
self-reconfigurable robotics and describe the development of a software system
for ATRON II which utilizes Linux and Python to significantly improve software
abstraction and portability while providing some basic features which could
prove useful when using Python, either stand-alone or via a DSL, on a
self-reconfigurable robot system. These features include transparent socket
communication, module identification, easy software transfer and reliable
module-to-module communication. The end result is a software platform for
modular robots that, where appropriate, builds on existing work in operating
systems, virtual machines, middleware and high-level languages.
|
1302.5526 | Stochastic dynamics of lexicon learning in an uncertain and nonuniform
world | physics.soc-ph cond-mat.stat-mech cs.CL q-bio.NC | We study the time taken by a language learner to correctly identify the
meaning of all words in a lexicon under conditions where many plausible
meanings can be inferred whenever a word is uttered. We show that the most
basic form of cross-situational learning - whereby information from multiple
episodes is combined to eliminate incorrect meanings - can perform badly when
words are learned independently and meanings are drawn from a nonuniform
distribution. If learners further assume that no two words share a common
meaning, we find a phase transition between a maximally-efficient learning
regime, where the learning time is reduced to the shortest it can possibly be,
and a partially-efficient regime where incorrect candidate meanings for words
persist at late times. We obtain exact results for the word-learning process
through an equivalence to a statistical mechanical problem of enumerating loops
in the space of word-meaning mappings.
|
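The most basic form of cross-situational learning described above, combining information from multiple episodes to eliminate incorrect meanings, can be sketched as set intersection (a toy illustration; the words and candidate meanings are made up):

```python
def cross_situational_learn(episodes):
    """episodes: iterable of (word, candidate_meanings) pairs.
    Intersect the candidate meanings across episodes, so that only
    meanings consistent with every utterance of a word survive."""
    candidates = {}
    for word, meanings in episodes:
        if word not in candidates:
            candidates[word] = set(meanings)
        else:
            candidates[word] &= set(meanings)
    return candidates

episodes = [
    ("dax", {"dog", "ball", "tree"}),
    ("dax", {"dog", "cup"}),        # only "dog" survives both episodes
    ("wug", {"cup", "tree"}),
    ("wug", {"cup", "dog"}),
]
print(cross_situational_learn(episodes))
# {'dax': {'dog'}, 'wug': {'cup'}}
```

The paper's point is that this elimination scheme can perform badly when words are learned independently and meaning frequencies are nonuniform; adding the mutual-exclusivity assumption (no two words share a meaning) changes the learning dynamics qualitatively.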
1302.5549 | On Graph Deltas for Historical Queries | cs.DB cs.SI | In this paper, we address the problem of evaluating historical queries on
graphs. To this end, we investigate the use of graph deltas, i.e., a log of
time-annotated graph operations. Our storage model maintains the current graph
snapshot and the delta. We reconstruct past snapshots by applying appropriate
parts of the graph delta on the current snapshot. Query evaluation proceeds on
the reconstructed snapshots but we also propose algorithms based mostly on
deltas for efficiency. We introduce various techniques for improving
performance, including materializing intermediate snapshots, partial
reconstruction and indexing deltas.
|
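A minimal sketch of the snapshot-reconstruction step described above, assuming a delta of timestamped edge additions and deletions (the `Op` record and operation names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    t: int          # timestamp of the operation
    kind: str       # "add_edge" or "del_edge"
    edge: tuple

def snapshot_at(current_edges, delta, t_query):
    """Reconstruct the edge set at time t_query by undoing, in reverse
    chronological order, every delta operation later than t_query."""
    edges = set(current_edges)
    for op in sorted(delta, key=lambda o: o.t, reverse=True):
        if op.t <= t_query:
            break
        if op.kind == "add_edge":
            edges.discard(op.edge)   # undo an addition
        else:
            edges.add(op.edge)       # undo a deletion
    return edges

delta = [Op(1, "add_edge", ("a", "b")),
         Op(2, "add_edge", ("b", "c")),
         Op(3, "del_edge", ("a", "b"))]
current = {("b", "c")}               # graph state after all three operations
# the edge set at t=2 contains both ('a','b') and ('b','c')
print(snapshot_at(current, delta, 2))
```

Materializing intermediate snapshots, as the paper proposes, amounts to caching the result of this loop at chosen timestamps so later queries undo fewer operations.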
1302.5554 | Self-similar prior and wavelet bases for hidden incompressible turbulent
motion | stat.AP cs.CV cs.NA physics.flu-dyn | This work is concerned with the ill-posed inverse problem of estimating
turbulent flows from the observation of an image sequence. From a Bayesian
perspective, a divergence-free isotropic fractional Brownian motion (fBm) is
chosen as a prior model for instantaneous turbulent velocity fields. This
self-similar prior characterizes accurately second-order statistics of velocity
fields in incompressible isotropic turbulence. Nevertheless, the associated
maximum a posteriori involves a fractional Laplacian operator which is delicate
to implement in practice. To deal with this issue, we propose to decompose the
divergence-free fBm on well-chosen wavelet bases. As a first alternative, we
propose to design wavelets as whitening filters. We show that these filters are
fractional Laplacian wavelets composed with the Leray projector. As a second
alternative, we use a divergence-free wavelet basis, which takes implicitly
into account the incompressibility constraint arising from physics. Although
the latter decomposition involves correlated wavelet coefficients, we are able
to handle this dependence in practice. Based on these two wavelet
decompositions, we finally provide effective and efficient algorithms to
approach the maximum a posteriori. An intensive numerical evaluation proves the
relevance of the proposed wavelet-based self-similar priors.
|
1302.5565 | The Importance of Clipping in Neurocontrol by Direct Gradient Descent on
the Cost-to-Go Function and in Adaptive Dynamic Programming | cs.LG | In adaptive dynamic programming, neurocontrol and reinforcement learning, the
objective is for an agent to learn to choose actions so as to minimise a total
cost function. In this paper we show that when discretized time is used to
model the motion of the agent, it can be very important to do "clipping" on the
motion of the agent in the final time step of the trajectory. By clipping we
mean that the final time step of the trajectory is to be truncated such that
the agent stops exactly at the first terminal state reached, and no distance
further. We demonstrate that when clipping is omitted, learning performance can
fail to reach the optimum; and when clipping is done properly, learning
performance can improve significantly.
The clipping problem we describe affects algorithms which use explicit
derivatives of the model functions of the environment to calculate a learning
gradient. These include Backpropagation Through Time for Control, and methods
based on Dual Heuristic Dynamic Programming. However the clipping problem does
not significantly affect methods based on Heuristic Dynamic Programming,
Temporal Differences or Policy Gradient Learning algorithms. Similarly, the
clipping problem does not affect fixed-length finite-horizon problems.
|
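The clipping idea can be illustrated with a toy one-dimensional agent whose cost is elapsed time (the dynamics and step size here are hypothetical, not the paper's experiments): truncating the final discretized step removes the overshoot past the terminal state that otherwise biases the cost.

```python
def rollout(x0, action, dt, terminal, clip=True):
    """Integrate x' = action with time step dt until x crosses `terminal`.
    With clipping, the final step is truncated so the trajectory stops
    exactly at the terminal state; the cost here is elapsed time."""
    x, t = x0, 0.0
    while x < terminal:
        if clip and x + action * dt >= terminal:
            t += (terminal - x) / action     # fractional final step
            x = terminal
        else:
            x += action * dt
            t += dt
    return x, t

x_clip, t_clip = rollout(0.0, 1.0, 0.4, 1.0, clip=True)
x_raw,  t_raw  = rollout(0.0, 1.0, 0.4, 1.0, clip=False)
print(x_clip, t_clip)   # stops exactly at the terminal state, cost ~1.0
print(x_raw,  t_raw)    # overshoots the terminal, inflating the cost
```

In gradient-based methods the issue compounds: derivatives taken through the unclipped final step point in a systematically wrong direction, which is why the paper reports learning failing to reach the optimum without clipping.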
1302.5592 | A tournament of order 24 with two disjoint TEQ-retentive sets | cs.MA math.CO | Brandt et al. (2013) have recently disproved a conjecture by Schwartz (1990)
by non-constructively showing the existence of a counterexample with about
10^136 alternatives. We provide a concrete counterexample for Schwartz's
conjecture with only 24 alternatives.
|
1302.5607 | Distributed Community Detection in Dynamic Graphs | cs.SI cs.DC math.PR | Inspired by the increasing interest in self-organizing social opportunistic
networks, we investigate the problem of distributed detection of unknown
communities in dynamic random graphs. As a formal framework, we consider the
dynamic version of the well-studied \emph{Planted Bisection Model}
$\sdG(n,p,q)$ where the node set $[n]$ of the network is partitioned into two
unknown communities and, at every time step, each possible edge $(u,v)$ is
active with probability $p$ if both nodes belong to the same community, while
it is active with probability $q$ (with $q \ll p$) otherwise. We also consider a
time-Markovian generalization of this model.
We propose a distributed protocol based on the popular \emph{Label
Propagation Algorithm} and prove that, when the ratio $p/q$ is larger than
$n^{b}$ (for an arbitrarily small constant $b>0$), the protocol finds the right
"planted" partition in $O(\log n)$ time even when the snapshots of the dynamic
graph are sparse and disconnected (i.e. in the case $p=\Theta(1/n)$).
|
1302.5608 | Accelerated Linear SVM Training with Adaptive Variable Selection
Frequencies | stat.ML cs.LG | Support vector machine (SVM) training has been an active research area since
the dawn of the method. In recent years there has been increasing interest in
specialized solvers for the important case of linear models. The algorithm
presented by Hsieh et al., probably best known under the name of the
"liblinear" implementation, marks a major breakthrough. The method is analogous
to established dual decomposition algorithms for training non-linear SVMs, but
with greatly reduced computational complexity per update step. This comes at
the cost of not keeping track of the gradient of the objective any more, which
excludes the application of highly developed working set selection algorithms.
We present an algorithmic improvement to this method. We replace uniform
working set selection with an online adaptation of selection frequencies. The
adaptation criterion is inspired by modern second order working set selection
methods. The same mechanism replaces the shrinking heuristic. This novel
technique speeds up training in some cases by more than an order of magnitude.
|
1302.5645 | Role of temporal inference in the recognition of textual inference | cs.CL | This project is part of natural language processing, and its aim is to
develop a textual-inference recognition system named TIMINF. Given two portions
of text, this type of system can detect whether one text is semantically
deducible from the other. We focus on handling temporal inference in this type
of system. To that end, we built and analyzed a corpus of questions collected
through the web. This study enabled us to classify different types of temporal
inferences and to design the architecture of TIMINF, which integrates a
temporal-inference module into a textual-inference detection system. We also
assess the performance of the TIMINF system's outputs on a test corpus,
following the same strategy adopted in the RTE challenge.
|
1302.5657 | A realistic distributed storage system: the rack model | cs.IT cs.DC math.IT | In a realistic distributed storage environment, storage nodes are usually
placed in racks, metallic supports designed to accommodate electronic
equipment. It is known that the communication (bandwidth) cost between nodes
which are in the same rack is much lower than between nodes which are in
different racks.
In this paper, a new model, where the storage nodes are placed in two racks,
is proposed and analyzed. Moreover, the two-rack model is generalized to any
number of racks. In this model, the storage nodes have different repair costs
depending on the rack where they are placed. A threshold function, which
minimizes the amount of stored data per node and the bandwidth needed to
regenerate a failed node, is shown. This threshold function generalizes the
ones given for previous distributed storage models. The tradeoff curve obtained
from this threshold function is compared with the ones obtained from the
previous models, and it is shown that this new model outperforms the previous
ones in terms of repair cost.
|
1302.5662 | Low Latency Communications | cs.IT math.IT | Numerous applications demand communication schemes that minimize the
transmission delay while achieving a given level of reliability. An extreme
case is high-frequency trading whereby saving a fraction of millisecond over a
route between Chicago and New York can be a game-changer. While such
communications are often carried by fiber, microwave links can reduce
transmission delays over large distances due to more direct routes and faster
wave propagation. In order to bridge large distances, information is sent over
a multihop relay network.
Motivated by these applications, this paper presents an information-theoretic
approach to the design of optimal multihop microwave networks that minimizes
end-to-end transmission delay. To characterize the delay introduced by coding,
we derive error exponents achievable in multihop networks. We formulate and
solve an optimization problem that determines optimal selection of
amplify-and-forward and decode-and-forward relays. We present the optimal
solution for several examples of networks. We prove that in high SNR the
optimum transmission scheme is for all relays to perform amplify-and-forward.
We then analyze the impact of deploying noisy feedback.
|
1302.5669 | Asymmetric Quantum Codes: New Codes from Old | quant-ph cs.IT math.IT | In this paper we extend to asymmetric quantum error-correcting codes (AQECC)
the construction methods, namely: puncturing, extending, expanding, direct sum
and the (u|u + v) construction. By applying these methods, several families of
asymmetric quantum codes can be constructed. Consequently, as an example of
application of quantum code expansion developed here, new families of
asymmetric quantum codes derived from generalized Reed-Muller (GRM) codes,
quadratic residue (QR), Bose-Chaudhuri-Hocquenghem (BCH), character codes and
affine-invariant codes are constructed.
|
1302.5675 | Development of Yes/No Arabic Question Answering System | cs.CL cs.IR | Developing question answering systems has been one of the important research
issues because it requires insights from a variety of disciplines, including
Artificial Intelligence, Information Retrieval, Information Extraction, Natural
Language Processing, and Psychology. In this paper we present a formal model
for a lightweight, semantic-based, open-domain yes/no Arabic question answering
system based on paragraph retrieval with variable length. We propose a
constrained semantic representation using an explicit unification framework
based on semantic similarities and query-expansion synonyms and antonyms, which
frequently improves the precision of the system. Employing the passage
retrieval system achieves better precision by retrieving more paragraphs that
contain relevant answers to the question; it also significantly reduces the
amount of text to be processed by the system.
|
1302.5681 | Weighted Sets of Probabilities and Minimax Weighted Expected Regret: New
Approaches for Representing Uncertainty and Making Decisions | cs.GT cs.AI | We consider a setting where an agent's uncertainty is represented by a set of
probability measures, rather than a single measure. Measure-by-measure updating
of such a set of measures upon acquiring new information is well-known to
suffer from problems; agents are not always able to learn appropriately. To
deal with these problems, we propose using weighted sets of probabilities: a
representation where each measure is associated with a weight, which denotes
its significance. We describe a natural approach to updating in such a
situation and a natural approach to determining the weights. We then show how
this representation can be used in decision-making, by modifying a standard
approach to decision making -- minimizing expected regret -- to obtain minimax
weighted expected regret (MWER). We provide an axiomatization that
characterizes preferences induced by MWER both in the static and dynamic case.
|
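A small numerical sketch of the MWER rule, assuming the regret of an act under a measure is the shortfall from the best act's expected utility, scaled by that measure's weight (the acts, states, utilities, and weights below are invented for illustration):

```python
def mwer_choice(acts, states, utility, weighted_measures):
    """Pick the act minimizing the maximum weighted expected regret.
    utility[a][s] is the payoff of act a in state s;
    weighted_measures is a list of (weight, {state: probability})."""
    def exp_u(a, pr):
        return sum(pr[s] * utility[a][s] for s in states)
    def weighted_regret(a):
        return max(w * (max(exp_u(b, pr) for b in acts) - exp_u(a, pr))
                   for w, pr in weighted_measures)
    return min(acts, key=weighted_regret)

utility = {"safe":  {"good": 2.0, "bad": 2.0},
           "risky": {"good": 3.0, "bad": 0.0}}
optimist  = {"good": 0.9, "bad": 0.1}
pessimist = {"good": 0.1, "bad": 0.9}

# with equal weights the pessimistic measure dominates and "safe" wins;
# down-weighting it to 0.2 flips the decision to "risky"
print(mwer_choice(["safe", "risky"], ["good", "bad"], utility,
                  [(1.0, optimist), (1.0, pessimist)]))  # safe
print(mwer_choice(["safe", "risky"], ["good", "bad"], utility,
                  [(1.0, optimist), (0.2, pessimist)]))  # risky
```

The example shows the role of the weights: a measure deemed less significant contributes less to the worst-case regret, so the decision can change even though the set of measures is the same.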
1302.5696 | Capacity Bounds for Wireless Ergodic Fading Broadcast Channels with
Partial CSIT | cs.IT math.IT | The two-user wireless ergodic fading Broadcast Channel (BC) with partial
Channel State Information at the Transmitter (CSIT) is considered. The CSIT is
given by an arbitrary deterministic function of the channel state. This
characteristic yields a full control over how much state information is
available, from perfect to no information. In literature, capacity derivations
for wireless ergodic fading channels, specifically for fading BCs, mostly rely
on the analysis of channels comprising parallel sub-channels. This technique
is usually suitable for the cases where perfect state information is available
at the transmitters. In this paper, new arguments are proposed to directly
derive (without resorting to the analysis of parallel channels) capacity bounds
for the two-user fading BC with both common and private messages based on the
existing bounds for the discrete channel. Specifically, a novel approach is
developed to adapt and evaluate the well-known UV-outer bound for the Gaussian
fading channel using the entropy power inequality. Our approach indeed sheds
light on the role of broadcast auxiliaries in the fading channel. It is shown
that the derived outer bound is optimal for the channel with perfect CSIT as
well as for some special cases with partial CSIT. Our outer bound is also
directly applicable to the case without CSIT which has been recently considered
in several papers. Next, the approach is developed to analyze for the fading BC
with secrecy. In the case of perfect CSIT, a full characterization of the
secrecy capacity region is derived for the channel with common and confidential
messages. This result completes a gap in a previous work by Ekrem and Ulukus.
For the channel without common message, the secrecy capacity region is also
derived when the transmitter has access only to the degradedness ordering of
the channel.
|
1302.5729 | Sparse Signal Estimation by Maximally Sparse Convex Optimization | cs.LG stat.ML | This paper addresses the problem of sparsity penalized least squares for
applications in sparse signal processing, e.g. sparse deconvolution. This paper
aims to induce sparsity more strongly than L1 norm regularization, while
avoiding non-convex optimization. For this purpose, this paper describes the
design and use of non-convex penalty functions (regularizers) constrained so as
to ensure the convexity of the total cost function, F, to be minimized. The
method is based on parametric penalty functions, the parameters of which are
constrained to ensure convexity of F. It is shown that optimal parameters can
be obtained by semidefinite programming (SDP). This maximally sparse convex
(MSC) approach yields maximally non-convex sparsity-inducing penalty functions
constrained such that the total cost function, F, is convex. It is demonstrated
that iterative MSC (IMSC) can yield solutions substantially more sparse than
the standard convex sparsity-inducing approach, i.e., L1 norm minimization.
|
1302.5734 | Invisible Flow Watermarks for Channels with Dependent Substitution,
Deletion, and Bursty Insertion Errors | cs.CR cs.IT cs.NI math.IT | Flow watermarks efficiently link packet flows in a network in order to thwart
various attacks such as stepping stones. We study the problem of designing good
flow watermarks. Earlier flow watermarking schemes mostly considered
substitution errors, neglecting the effects of packet insertions and deletions
that commonly happen within a network. More recent schemes consider packet
deletions but often at the expense of the watermark visibility. We present an
invisible flow watermarking scheme capable of enduring a large number of packet
losses and insertions. To maintain invisibility, our scheme uses quantization
index modulation (QIM) to embed the watermark into inter-packet delays, as
opposed to time intervals including many packets. As the watermark is injected
within individual packets, packet losses and insertions may lead to watermark
desynchronization and substitution errors. To address this issue, we add a
layer of error-correction coding to our scheme. Experimental results on both
synthetic and real network traces demonstrate that our scheme is robust to
network jitter, packet drops and splits, while remaining invisible to an
attacker.
|
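The QIM embedding into inter-packet delays can be sketched as follows, assuming scalar quantizers with step `step` and omitting the error-correction layer (the delay values and parameters are illustrative, not from the paper):

```python
def qim_embed(delays, bits, step=0.01):
    """Embed one bit per inter-packet delay by snapping the delay to the
    nearest point of quantizer b: multiples of `step`, offset by step/2
    when the bit is 1 (basic quantization index modulation)."""
    out = []
    for d, b in zip(delays, bits):
        q = round((d - b * step / 2) / step) * step + b * step / 2
        out.append(q)
    return out

def qim_extract(delays, step=0.01):
    """Recover each bit by testing which quantizer reconstructs the
    observed delay with the smaller error."""
    bits = []
    for d in delays:
        errs = []
        for b in (0, 1):
            q = round((d - b * step / 2) / step) * step + b * step / 2
            errs.append(abs(d - q))
        bits.append(0 if errs[0] <= errs[1] else 1)
    return bits

delays = [0.034, 0.012, 0.027]          # seconds between packets
bits = [1, 0, 1]
marked = qim_embed(delays, bits)
print(qim_extract(marked))              # [1, 0, 1]
# small jitter (< step/4) does not destroy the watermark:
jittered = [d + 0.002 for d in marked]
print(qim_extract(jittered))            # [1, 0, 1]
```

Because each bit rides on an individual inter-packet delay, a packet loss or insertion shifts all subsequent delays, which is exactly the desynchronization the paper's error-correction layer is added to absorb.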
1302.5762 | Probabilistic Non-Local Means | cs.CV stat.AP stat.CO | In this paper, we propose a so-called probabilistic non-local means (PNLM)
method for image denoising. Our main contributions are: 1) we point out defects
of the weight function used in the classic NLM; 2) we successfully derive all
theoretical statistics of patch-wise differences for Gaussian noise; and 3) we
employ this prior information and formulate the probabilistic weights truly
reflecting the similarity between two noisy patches. The probabilistic nature
of the new weight function also provides a theoretical basis to choose
thresholds rejecting dissimilar patches for fast computations. Our simulation
results indicate that the PNLM outperforms the classic NLM and many recent NLM
variants in terms of peak signal-to-noise ratio (PSNR) and structural similarity
(SSIM) index. Encouraging improvements are also found when we replace the NLM
weights with the probabilistic weights in tested NLM variants.
|
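For context, the classic NLM weight between patches $P_i$ and $P_j$ whose defects the paper points out is commonly written as

```latex
w(i,j) \;=\; \frac{1}{Z(i)} \exp\!\left(-\frac{\lVert P_i - P_j \rVert_2^2}{h^2}\right),
```

where $Z(i)$ normalizes the weights and $h$ is a filtering parameter chosen heuristically; the PNLM replaces this heuristic kernel with weights derived from the exact statistics of patch-wise differences under Gaussian noise.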
1302.5794 | Constant Communities in Complex Networks | physics.soc-ph cs.SI | Identifying community structure is a fundamental problem in network analysis.
Most community detection algorithms are based on optimizing a combinatorial
parameter, for example modularity. This optimization is generally NP-hard, so
merely changing the vertex order can alter the vertices' community assignments.
However, there has been very little study of how vertex ordering influences the
results of community detection algorithms. Here we identify and study the
properties of invariant groups of vertices (constant communities) whose
assignment to communities are, quite remarkably, not affected by vertex
ordering. The percentage of constant communities can vary across different
applications and based on empirical results we propose metrics to evaluate
these communities. Using constant communities as a pre-processing step, one can
significantly reduce the variation of the results. Finally, we present a case
study on a phoneme network and illustrate that constant communities, quite
strikingly, form the core functional units of the larger communities.
|
1302.5797 | Prediction by Random-Walk Perturbation | cs.LG | We propose a version of the follow-the-perturbed-leader online prediction
algorithm in which the cumulative losses are perturbed by independent symmetric
random walks. The forecaster is shown to achieve an expected regret of the
optimal order O(sqrt(n log N)) where n is the time horizon and N is the number
of experts. More importantly, it is shown that the forecaster changes its
prediction at most O(sqrt(n log N)) times, in expectation. We also extend the
analysis to online combinatorial optimization and show that even in this more
general setting, the forecaster rarely switches between experts while having a
regret of near-optimal order.
|
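A minimal sketch of the forecaster described above, perturbing the cumulative losses with independent symmetric $\pm 1$ random walks before each leader selection (the two-expert loss sequence is a made-up example):

```python
import random

def rw_perturbed_leader(loss_rounds, n_experts, seed=0):
    """Follow-the-perturbed-leader: before each round, pick the expert whose
    cumulative loss plus an independent symmetric +-1 random walk is
    smallest; then observe the round's losses."""
    rng = random.Random(seed)
    cum = [0.0] * n_experts
    walk = [0.0] * n_experts
    picks = []
    for losses in loss_rounds:
        for i in range(n_experts):          # advance each perturbation walk
            walk[i] += rng.choice((-1.0, 1.0))
        picks.append(min(range(n_experts), key=lambda i: cum[i] + walk[i]))
        for i in range(n_experts):
            cum[i] += losses[i]
    return picks

# expert 0 loses 1 per round, expert 1 never loses:
picks = rw_perturbed_leader([(1.0, 0.0)] * 50, n_experts=2)
print(picks.count(1))   # expert 1 is selected in the vast majority of rounds
```

Because the perturbation is a random walk rather than fresh i.i.d. noise each round, consecutive perturbed leaders tend to coincide, which is the mechanism behind the paper's O(sqrt(n log N)) bound on the number of prediction switches.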
1302.5824 | Measuring Visual Complexity of Cluster-Based Visualizations | cs.AI | Handling visual complexity is a challenging problem in visualization owing to
the subjectiveness of its definition and the difficulty in devising
generalizable quantitative metrics. In this paper we address this challenge by
measuring the visual complexity of two common forms of cluster-based
visualizations: scatter plots and parallel coordinates. We conceptualize
visual complexity as a form of visual uncertainty, which is a measure of the
degree of difficulty for humans to interpret a visual representation correctly.
We propose an algorithm for estimating visual complexity for the aforementioned
visualizations using Allen's interval algebra. We first establish a set of
primitive 2-cluster cases in scatter plots and another set for parallel
coordinates based on symmetric isomorphism. We confirm that both are the
minimal sets and verify the correctness of their members computationally. We
score the uncertainty of each primitive case based on its topological
properties, including the existence of overlapping regions, splitting regions
and meeting points or edges. We compare a few optional scoring schemes against
a set of subjective scores by humans, and identify the one that is the most
consistent with the subjective scores. Finally, we extend the 2-cluster measure
to k-cluster measure as a general purpose estimator of visual complexity for
these two forms of cluster-based visualization.
|
1302.5860 | A universal, operational theory of unicast multi-user communication with
fidelity criteria | cs.IT math.IT | This is a three part paper.
Optimality of source-channel separation for communication with a fidelity
criterion when the channel is compound as defined by Csiszar and Korner in
their book and general as defined by Verdu and Han, is proved in Part I. It is
assumed that random codes are permitted. The word "universal" in the title of
this paper refers to the fact that the channel model is compound. The proof
uses a layered black-box or a layered input-output view-point. In particular,
only the end-to-end description of the channel as being capable of
communicating a source to within a certain distortion level is used when
proving separation. This implies that the channel model does not play any role
for separation to hold as long as there is a source model. Further implications
of the layered black-box view-point are discussed.
Optimality of source-medium separation for multi-user communication with
fidelity criteria over a general, compound medium in the unicast setting is
proved in Part II, thus generalizing Part I to the unicast, multi-user setting.
Part III gets to an understanding of the question, "Why is a channel which is
capable of communicating a source to within a certain distortion level, also
capable of communicating bits at any rate less than the infimum of the rates
needed to code the source to within the distortion level": this lies at the
heart of why optimality of separation for communication with a fidelity
criterion holds. The perspective taken to get to this understanding is a
randomized covering-packing perspective, and the proof is operational.
|
1302.5867 | Design of Nonlinear State Observers for One-Sided Lipschitz Systems | cs.SY math.DS math.OC | Control and state estimation of nonlinear systems satisfying a Lipschitz
continuity condition have been important topics in nonlinear system theory for
over three decades, resulting in a substantial amount of literature. The main
criticism behind this approach, however, has been the restrictive nature of the
Lipschitz continuity condition and the conservativeness of the related results.
This work deals with an extension to this problem by introducing a more general
family of nonlinear functions, namely one-sided Lipschitz functions. The
corresponding class of systems is a superset of its well-known Lipschitz
counterpart and possesses inherent advantages with respect to conservativeness.
In this paper, first the problem of state observer design for this class of
systems is established, the challenges are discussed and some analysis-oriented
tools are provided. Then, a solution to the observer design problem is proposed
in terms of nonlinear matrix inequalities which in turn are converted into
numerically efficiently solvable linear matrix inequalities.
|
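For reference, where the classical condition bounds $\lVert f(x)-f(y)\rVert \le \gamma \lVert x-y\rVert$, the one-sided Lipschitz condition defining this system class requires only

```latex
\langle f(x) - f(y),\, x - y \rangle \;\le\; \rho \,\lVert x - y \rVert^2 ,
```

where the one-sided constant $\rho \in \mathbb{R}$ may be zero or even negative when the classical constant $\gamma$ is large, which is the source of the reduced conservativeness mentioned above.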
1302.5872 | A Piggybacking Design Framework for Read-and Download-efficient
Distributed Storage Codes | cs.IT cs.DC cs.NI math.IT | We present a new 'piggybacking' framework for designing distributed storage
codes that are efficient in data-read and download required during node-repair.
We illustrate the power of this framework by constructing classes of explicit
codes that entail the smallest data-read and download for repair among all
existing solutions for three important settings: (a) codes meeting the
constraints of being Maximum-Distance-Separable (MDS), high-rate and having a
small number of substripes, arising out of practical considerations for
implementation in data centers, (b) binary MDS codes for all parameters where
binary MDS codes exist, (c) MDS codes with the smallest repair-locality. In
addition, we employ this framework to enable efficient repair of parity nodes
in existing codes that were originally constructed to address the repair of
only the systematic nodes. The basic idea behind our framework is to take
multiple instances of existing codes and add carefully designed functions of
the data of one instance to the other. Typical savings in data-read during
repair is 25% to 50% depending on the choice of the code parameters.
|
1302.5878 | Robustness of Link-prediction Algorithm Based on Similarity and
Application to Biological Networks | physics.soc-ph cs.SI | Many algorithms have been proposed to predict missing links in a variety of
real networks. These studies mainly focus on the accuracy and efficiency of
these algorithms. However, little attention has been paid to their robustness
against the noisy or spurious links that exist in almost all real
networks. In this paper, we investigate the robustness of several typical
node-similarity-based algorithms and find that these algorithms are sensitive
to the strength of the noise. Moreover, we find that the robustness also
depends on networks' structural properties, especially network efficiency,
clustering coefficient, and average degree. In addition, we attempt to enhance
the robustness by using a link-weighting method to transform an unweighted
network into a weighted one, and then use the weights of links to characterize
their reliability. The results show that a proper link-weighting scheme can
significantly enhance both the robustness and accuracy of these algorithms in
biological networks, while incurring little computational cost.
|
1302.5894 | Four Side Distance: A New Fourier Shape Signature | cs.CV | Shape is one of the main features in content based image retrieval (CBIR).
This paper proposes a new shape signature. In this technique, features of each
shape are extracted based on four sides of the rectangle that covers the shape.
The proposed technique is Fourier based and it is invariant to translation,
scaling and rotation. The retrieval performance between some commonly used
Fourier based signatures and the proposed four sides distance (FSD) signature
has been tested using the MPEG-7 database. Experimental results show that the
FSD signature performs better than those signatures.
|
1302.5906 | Achieving AWGN Channel Capacity With Lattice Gaussian Coding | cs.IT math.IT | We propose a new coding scheme using only one lattice that achieves the
$\frac{1}{2}\log(1+\SNR)$ capacity of the additive white Gaussian noise (AWGN)
channel with lattice decoding, when the signal-to-noise ratio $\SNR>e-1$. The
scheme applies a discrete Gaussian distribution over an AWGN-good lattice, but
otherwise does not require a shaping lattice or dither. Thus, it significantly
simplifies the default lattice coding scheme of Erez and Zamir which involves a
quantization-good lattice as well as an AWGN-good lattice. Using the flatness
factor, we show that the error probability of the proposed scheme under minimum
mean-square error (MMSE) lattice decoding is almost the same as that of Erez
and Zamir, for any rate up to the AWGN channel capacity. We introduce the
notion of good constellations, which carry almost the same mutual information
as that of continuous Gaussian inputs. We also address the implementation of
Gaussian shaping for the proposed lattice Gaussian coding scheme.
|
1302.5909 | Studying complex tourism systems: a novel approach based on networks
derived from a time series | physics.soc-ph cs.SI | A tourism destination is a complex dynamic system. As such, it requires
specific methods and tools to be analyzed and understood in order to better
tailor governance and policy measures for steering the destination along an
evolutionary growth path. Many proposals have been put forward for the
investigation of complex systems and some have been successfully applied to
tourism destinations. This paper uses a recent suggestion, that of transforming
a time series into a network and analyzes it with the objective of uncovering
the structural and dynamic features of a tourism destination. The algorithm,
called visibility graph, is simple and its implementation straightforward, yet
it is able to provide a number of interesting insights. An example is worked
out using data from two destinations: Italy as a country and the island of
Elba, one of its best-known areas.
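The visibility-graph transformation used above can be sketched in a few lines. The following is a minimal O(n^2) rendering of the standard natural-visibility criterion, with `visibility_graph` as an illustrative name rather than code from the paper:

```python
def visibility_graph(series):
    """Natural visibility graph of a time series: samples a and b are
    linked iff every intermediate sample c lies strictly below the
    straight line joining (a, series[a]) and (b, series[b])."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            # Adjacent samples are always mutually visible (empty range).
            if all(
                series[c] < series[b]
                + (series[a] - series[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            ):
                edges.add((a, b))
    return edges
```

The structural properties of the resulting graph (e.g., its degree distribution) then serve as the fingerprint of the destination's time series.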
|
1302.5910 | Polar Lattices: Where Ar{\i}kan Meets Forney | cs.IT math.IT | In this paper, we propose the explicit construction of a new class of
lattices based on polar codes, which are provably good for the additive white
Gaussian noise (AWGN) channel. We follow the multilevel construction of Forney
\textit{et al.} (i.e., Construction D), where the code on each level is a
capacity-achieving polar code for that level. The proposed polar lattices are
efficiently decodable by using multistage decoding. Computable performance
bounds are derived to measure the gap to the generalized capacity at given
error probability. A design example is presented to demonstrate the performance
of polar lattices.
|
1302.5936 | Compressed Sensing with Sparse Binary Matrices: Instance Optimal Error
Guarantees in Near-Optimal Time | cs.IT math.IT math.NA | A compressed sensing method consists of a rectangular measurement matrix, $M
\in \mathbbm{R}^{m \times N}$ with $m \ll N$, together with an associated
recovery algorithm, $\mathcal{A}: \mathbbm{R}^m \rightarrow \mathbbm{R}^N$.
Compressed sensing methods aim to construct a high quality approximation to any
given input vector ${\bf x} \in \mathbbm{R}^N$ using only $M {\bf x} \in
\mathbbm{R}^m$ as input. In particular, we focus herein on instance optimal
nonlinear approximation error bounds for $M$ and $\mathcal{A}$ of the form $ \|
{\bf x} - \mathcal{A} (M {\bf x}) \|_p \leq \| {\bf x} - {\bf x}^{\rm opt}_k
\|_p + C k^{1/p - 1/q} \| {\bf x} - {\bf x}^{\rm opt}_k \|_q$ for ${\bf x} \in
\mathbbm{R}^N$, where ${\bf x}^{\rm opt}_k$ is the best possible $k$-term
approximation to ${\bf x}$.
In this paper we develop a compressed sensing method whose associated
recovery algorithm, $\mathcal{A}$, runs in $O((k \log k) \log N)$-time,
matching a lower bound up to a $O(\log k)$ factor. This runtime is obtained by
using a new class of sparse binary compressed sensing matrices of near optimal
size in combination with sublinear-time recovery techniques motivated by
sketching algorithms for high-volume data streams. The new class of matrices is
constructed by randomly subsampling rows from well-chosen incoherent matrix
constructions which already have a sub-linear number of rows. As a consequence,
fewer random bits than previously required are needed in order to select the
rows utilized by the fast reconstruction algorithms considered herein.
|
1302.5941 | Optimization of thermal comfort in building through envelope design | cs.CE | Due to the current environmental situation, energy saving has become the
leading drive in modern research. Residential houses in tropical climates
often forgo air conditioning in order to avoid electricity use; thermal comfort
is instead maintained by an adequate envelope composition and natural
ventilation. This paper shows that it is possible to determine the thickness of
the envelope layers for which the best thermal comfort is obtained. The
building is modeled in the EnergyPlus software, and the Hooke-Jeeves
optimization method is used. The investigated house is a typical one-storey
residential house with five thermal zones, located on Reunion Island, France.
Three optimizations are performed: of the thickness of the concrete block
layer, of the wood layer, and of the thermal insulation layer. The results give
the optimal thicknesses of the thermal envelope layers that yield the maximum
thermal comfort according to Fanger's predicted mean vote.
|
1302.5942 | Performances of Low Temperature Radiant Heating Systems | cs.CE | Low temperature heating panel systems offer distinctive advantages in terms
of thermal comfort and energy consumption, allowing work with low exergy
sources. The purpose of this paper is to compare floor, wall, ceiling, and
floor-ceiling panel heating systems in terms of energy, exergy and CO2
emissions. Simulation results for each analyzed panel system are given by its
energy consumption (gas for heating, electricity for pumps, and primary
energy), its exergy consumption, the price of heating, and its carbon dioxide
emission. Then, the air temperatures of the rooms and of the surrounding walls
and floors are investigated. It is found that the floor-ceiling heating system
has the lowest energy and exergy consumption, CO2 emissions, and operating
costs, and uses a boiler of the lowest power. The worst system by all these
parameters is the classical ceiling heating system.
|
1302.5945 | Queue-Based Random-Access Algorithms: Fluid Limits and Stability Issues | cs.NI cs.IT math.IT math.PR | We use fluid limits to explore the (in)stability properties of wireless
networks with queue-based random-access algorithms. Queue-based random-access
schemes are simple and inherently distributed in nature, yet provide the
capability to match the optimal throughput performance of centralized
scheduling mechanisms in a wide range of scenarios. Unfortunately, the type of
activation rules for which throughput optimality has been established, may
result in excessive queue lengths and delays. The use of more
aggressive/persistent access schemes can improve the delay performance, but
does not offer any universal maximum-stability guarantees. In order to gain
qualitative insight and investigate the (in)stability properties of more
aggressive/persistent activation rules, we examine fluid limits where the
dynamics are scaled in space and time. In some situations, the fluid limits
have smooth deterministic features and maximum stability is maintained, while
in other scenarios they exhibit random oscillatory characteristics, giving rise
to major technical challenges. In the latter regime, more aggressive access
schemes continue to provide maximum stability in some networks, but may cause
instability in others. Simulation experiments are conducted to illustrate and
validate the analytical results.
|
1302.5955 | Multi-Feedback Successive Interference Cancellation for Multiuser MIMO
Systems | cs.IT math.IT | In this paper, a low-complexity multiple feedback successive interference
cancellation (MF-SIC) strategy is proposed for the uplink of multiuser
multiple-input multiple-output (MU-MIMO) systems. In the proposed MF-SIC
algorithm with shadow area constraints (SAC), an enhanced interference
cancellation is achieved by introducing constellation points as the candidates
to combat the error propagation in decision feedback loops. We also combine the
MF-SIC with multi-branch (MB) processing, which achieves a higher detection
diversity order. For coded systems, a low-complexity soft-input soft-output
(SISO) iterative (turbo) detector is proposed based on the MF and the MB-MF
interference suppression techniques. The computational complexity of the
MF-SIC is comparable to that of the conventional SIC algorithm, since very
little additional complexity is required. Simulation results show that the
algorithms significantly outperform the conventional SIC scheme and approach
the optimal detector.
|
1302.5957 | Shape Characterization via Boundary Distortion | cs.CV | In this paper, we derive new shape descriptors based on a directional
characterization. The main idea is to study the behavior of the shape
neighborhood under family of transformations. We obtain a description invariant
with respect to rotation, reflection, translation and scaling. A well-defined
metric is then proposed on the associated feature space. We show the continuity
of this metric. Some results on shape retrieval are provided on two databases
to show the accuracy of the proposed shape metric.
|
1302.5958 | Adaptive Decision Feedback Detection with Parallel Interference
Cancellation and Constellation Constraints for Multi-Antenna Systems | cs.IT math.IT | In this paper, a novel low-complexity adaptive decision feedback detection
with parallel decision feedback and constellation constraints (P-DFCC) is
proposed for multiuser MIMO systems. We propose a constrained constellation
map, which introduces a number of selected points serving as feedback
candidates for interference cancellation. By introducing a reliability check, a
higher degree of freedom is available to refine the unreliable estimates. The P-DFCC
is followed by an adaptive receive filter to estimate the transmitted symbol.
In order to reduce the complexity of computing the filters with time-varying
MIMO channels, an adaptive recursive least squares (RLS) algorithm is employed
in the proposed P-DFCC scheme. An iterative detection and decoding (Turbo)
scheme is considered with the proposed P-DFCC algorithm. Simulations show that
the proposed technique has a complexity comparable to the conventional parallel
decision feedback detector while it obtains a performance close to the maximum
likelihood detector at a low to medium SNR range.
|
1302.5960 | Low-Complexity Variable Forgetting Factor Techniques for RLS Algorithms
in Interference Rejection Applications | cs.IT math.IT | We propose a low-complexity variable forgetting factor (VFF) mechanism for
recursive least square (RLS) algorithms in interference suppression
applications. The proposed VFF mechanism employs an updated component related
to the time average of the error correlation to automatically adjust the
forgetting factor in order to ensure fast convergence and good tracking of the
interference and the channel. Convergence and tracking analyses are carried out
and analytical expressions for predicting the mean squared error of the
proposed adaptation technique are obtained. Simulation results for a
direct-sequence code-division multiple access (DS-CDMA) system are presented in
nonstationary environments and show that the proposed VFF mechanism achieves
superior performance to previously reported methods at a reduced complexity.
|
1302.5973 | Sample Approximation-Based Deflation Approaches for Chance SINR
Constrained Joint Power and Admission Control | cs.IT math.IT | Consider the joint power and admission control (JPAC) problem for a
multi-user single-input single-output (SISO) interference channel. Most
existing works on JPAC assume perfect instantaneous channel state information
(CSI). In this paper, we consider the JPAC problem with imperfect CSI; that is,
we assume that only the channel distribution
information (CDI) is available. We formulate the JPAC problem into a chance
(probabilistic) constrained program, where each link's SINR outage probability
is enforced to be less than or equal to a specified tolerance. To circumvent
the computational difficulty of the chance SINR constraints, we propose to use
the sample (scenario) approximation scheme to convert them into finitely many
simple linear constraints. Furthermore, we reformulate the sample approximation
of the chance SINR constrained JPAC problem as a composite group sparse
minimization problem and then approximate it by a second-order cone program
(SOCP). The solution of the SOCP approximation can be used to check the
simultaneous supportability of all links in the network and to guide an
iterative link removal procedure (the deflation approach). We exploit the
special structure of the SOCP approximation and custom-design an efficient
algorithm for solving it. Finally, we illustrate the effectiveness and
efficiency of the proposed sample approximation-based deflation approaches by
simulations.
|
1302.5975 | Low-Complexity Algorithm for Worst-Case Utility Maximization in
Multiuser MISO Downlink | cs.IT math.IT | This work considers the worst-case utility maximization (WCUM) problem for a
downlink wireless system where a multiantenna base station communicates with
multiple single-antenna users. Specifically, we jointly design transmit
covariance matrices for each user to robustly maximize the worst-case (i.e.,
minimum) system utility function under channel estimation errors bounded within
a spherical region. This problem has been shown to be NP-hard, and so any
algorithms for finding the optimal solution may suffer from prohibitively high
complexity. In view of this, we seek an efficient and more accurate suboptimal
solution for the WCUM problem. A low-complexity iterative WCUM algorithm is
proposed for this nonconvex problem by solving two convex problems
alternatively. We also show the convergence of the proposed algorithm, and
prove its Pareto optimality to the WCUM problem. Some simulation results are
presented to demonstrate its substantial performance gain and higher
computational efficiency over existing algorithms.
|
1302.5978 | Limited Feedback Design for Interference Alignment on MIMO Interference
Networks with Heterogeneous Path Loss and Spatial Correlations | cs.IT math.IT | Interference alignment is degree-of-freedom optimal in K-user MIMO
interference channels, and many previous works have studied transceiver
designs. However, these works predominantly focus on networks with perfect
channel state information at the transmitters and symmetrical interference
topology. In this paper, we consider a limited feedback system with
heterogeneous path loss and spatial correlations, and investigate how the
dynamics of the interference topology can be exploited to improve the feedback
efficiency. We propose a novel spatial codebook design, and perform dynamic
quantization via bit allocations to adapt to the asymmetry of the interference
topology. We bound the system throughput under the proposed dynamic scheme in
terms of the transmit SNR, feedback bits and the interference topology
parameters. It is shown that when the number of feedback bits scales with SNR
as C_{s}\cdot\log\textrm{SNR}, the sum degrees of freedom of the network are
preserved. Moreover, the value of scaling coefficient C_{s} can be
significantly reduced in networks with asymmetric interference topology.
|
1302.5979 | Vaccination intervention on epidemic dynamics in networks | physics.soc-ph cond-mat.stat-mech cs.SI | Vaccination is an important measure available for preventing or reducing the
spread of infectious diseases. In this paper, an epidemic model including
susceptible, infected, and imperfectly vaccinated compartments is studied on
Watts-Strogatz small-world, Barab\'asi-Albert scale-free, and random scale-free
networks. The epidemic threshold and prevalence are analyzed. For small-world
networks, the effective vaccination intervention is suggested and its influence
on the threshold and prevalence is analyzed. For scale-free networks, the
threshold is found to be strongly dependent both on the effective vaccination
rate and on the connectivity distribution. Moreover, so long as vaccination is
effective, it can linearly decrease the epidemic prevalence in small-world
networks, whereas for scale-free networks it acts exponentially. These results
can help in adopting pragmatic treatments for diseases in structured
populations.
|
1302.5985 | A Meta-Theory of Boundary Detection Benchmarks | cs.CV | Human labeled datasets, along with their corresponding evaluation algorithms,
play an important role in boundary detection. Here we present a psychophysical
experiment that addresses the reliability of such benchmarks. To better
evaluate the performance of boundary detection algorithms, we propose a
computational framework to remove inappropriate human labels and estimate the
intrinsic properties of boundaries.
|
1302.5990 | A Modified Riccati Transformation for Decentralized Computation of the
Viability Kernel Under LTI Dynamics | cs.SY math.OC | Computing the viability kernel is key in providing guarantees of safety and
proving existence of safety-preserving controllers for constrained dynamical
systems. Current numerical techniques that approximate this construct suffer
from a complexity that is exponential in the dimension of the state. We study
conditions under which a linear time-invariant (LTI) system can be suitably
decomposed into lower-dimensional subsystems so as to admit a conservative
computation of the viability kernel in a decentralized fashion in subspaces. We
then present an isomorphism that imposes these desired conditions, particularly
on two-time-scale systems. Decentralized computations are performed in the
transformed coordinates, yielding a conservative approximation of the viability
kernel in the original state space. Significant reduction of complexity can be
achieved, allowing the previously inapplicable tools to be employed for
treatment of higher-dimensional systems. We show the results on two examples
including a 6D system.
|
1302.6009 | On learning parametric-output HMMs | cs.LG math.ST stat.ML stat.TH | We present a novel approach for learning an HMM whose outputs are distributed
according to a parametric family. This is done by {\em decoupling} the learning
task into two steps: first estimating the output parameters, and then
estimating the hidden states transition probabilities. The first step is
accomplished by fitting a mixture model to the output stationary distribution.
Given the parameters of this mixture model, the second step is formulated as
the solution of an easily solvable convex quadratic program. We provide an
error analysis for the estimated transition probabilities and show they are
robust to small perturbations in the estimates of the mixture parameters.
Finally, we support our analysis with some encouraging empirical results.
|
1302.6030 | A Fast Template Based Heuristic For Global Multiple Sequence Alignment | cs.CE q-bio.QM | Advances in bio-technology have made available massive amounts of functional,
structural and genomic data for many biological sequences. This increased
availability of heterogeneous biological data has resulted in biological
applications where a multiple sequence alignment (msa) is required for aligning
similar features, where a feature is described in structural, functional or
evolutionary terms. In these applications, for a given set of sequences,
depending on the feature of interest, the optimal msa is likely to be different,
and sequence similarity can only be used as a rough initial estimate of
accuracy of an msa. This has motivated the growth in template based heuristics
that supplement the sequence information with evolutionary, structural and
functional data and exploit feature similarity instead of sequence similarity
to construct multiple sequence alignments that are biologically more accurate.
However, current frameworks for designing template based heuristics do not
allow the user to explicitly specify information that can help to classify
features into types and associate weights signifying the relative importance of
a feature with respect to other features. In this paper, we first provide a
mechanism whereby, as part of the template information, the user can explicitly
specify the type and weight of each feature. The type classifies the features
into different categories based on their characteristics, and the weight
signifies the relative importance of a feature with respect to the other
features in that sequence. Second, we exploit this information to define
scoring models for pair-wise sequence alignment that assume segment
conservation as opposed to single character (residue) conservation. Finally, we
present a fast progressive alignment based heuristic framework that helps in
constructing a global msa efficiently.
|
1302.6031 | Phoneme discrimination using KS algebra I | cs.SD cs.AI cs.NE | In our work we define a new algebra of operators as a substitute for fuzzy
logic. Its primary purpose is for construction of binary discriminators for
phonemes based on spectral content. It is optimized for design of
non-parametric computational circuits, and makes use of four operations:
$\min$, $\max$, the difference, and generalized additively homogeneous means.
|
1302.6093 | On the analogue of the concavity of entropy power in the Brunn-Minkowski
theory | math.FA cs.IT math.IT math.MG | Elaborating on the similarity between the entropy power inequality and the
Brunn-Minkowski inequality, Costa and Cover conjectured in {\it On the
similarity of the entropy power inequality and the Brunn-Minkowski inequality}
(IEEE Trans. Inform. Theory 30 (1984), no. 6, 837-839) the
$\frac{1}{n}$-concavity of the outer parallel volume of measurable sets as an
analogue of the concavity of entropy power. We investigate this conjecture and
study its relationship with geometric inequalities.
|
1302.6105 | Image restoration using sparse approximations of spatially varying blur
operators in the wavelet domain | math.OC cs.CV math.NA | Restoration of images degraded by spatially varying blurs is an issue of
increasing importance in the context of photography, satellite or microscopy
imaging. One of the main difficulties in solving this problem comes from the
huge dimensions of the blur matrix, which prevent the use of naive approaches for
performing matrix-vector multiplications. In this paper, we propose to
approximate the blur operator by a matrix sparse in the wavelet domain. We
justify this approach from a mathematical point of view and investigate the
approximation quality numerically. We finish by showing that the sparsity
pattern of the matrix can be pre-defined, which is central in tasks such as
blind deconvolution.
|
1302.6109 | Social Resilience in Online Communities: The Autopsy of Friendster | cs.SI physics.soc-ph | We empirically analyze five online communities: Friendster, Livejournal,
Facebook, Orkut, and Myspace, to identify causes for the decline of social
networks. We define social resilience as the ability of a community to
withstand changes. We do not argue about the cause of such changes, but
concentrate on their impact. Changes may cause users to leave, which may
trigger further leaves of others who lost connection to their friends. This may
lead to cascades of users leaving. A social network is said to be resilient if
the size of such cascades can be limited. To quantify resilience, we use
k-core analysis to identify subsets of the network in which all users have at
least k friends. These connections generate benefits (b) for each user, which
have to outweigh the costs (c) of being a member of the network. If this
difference is not positive, users leave. After all cascades, the remaining
network is the k-core of the original network determined by the cost-to-benefit
c/b ratio. By analysing the cumulative distribution of k-cores we are able to
calculate the number of users remaining in each community. This allows us to
infer the impact of the c/b ratio on the resilience of these online
communities. We find that the different online communities have different
k-core distributions. Consequently, similar changes in the c/b ratio have a
different impact on the amount of active users. As a case study, we focus on
the evolution of Friendster. We identify time periods when new users entering
the network observed an insufficient c/b ratio. This measure can be seen as a
precursor of the later collapse of the community. Our analysis can be applied
to estimate the impact of changes in the user interface, which may temporarily
increase the c/b ratio, thus posing a threat for the community to shrink, or
even to collapse.
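The leave-cascade described above can be simulated directly. Below is a minimal sketch (an assumed toy implementation, not the authors' code) that prunes users with fewer than k friends until the k-core stabilizes, with k playing the role of the cost-to-benefit threshold:

```python
def k_core(adj, k):
    """Nodes surviving the leave-cascade: users with fewer than k friends
    leave, possibly triggering further departures. adj maps each node to
    the set of its neighbours (assumed symmetric)."""
    alive = {v: set(nb) for v, nb in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        # Snapshot the currently under-connected users, then remove them.
        for v in [v for v, nb in alive.items() if len(nb) < k]:
            for u in alive[v]:
                alive[u].discard(v)
            del alive[v]
            changed = True
    return set(alive)
```

Comparing `len(k_core(adj, k))` across increasing k reproduces the cumulative k-core distribution used in the analysis.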
|
1302.6149 | Work in Progress: Enabling robot device discovery through robot device
descriptions | cs.RO | There is no dearth of new robots that provide both generalized and customized
platforms for learning and research. Unfortunately as we attempt to adapt
existing software components, we are faced with an explosion of device drivers
that interface each hardware platform with existing frameworks. We certainly
gain the efficiencies of reusing algorithms and tools developed across
platforms, but only once the device driver is created.
We propose a domain specific language that describes the development and
runtime interface of a robot and defines its link to existing frameworks. The
Robot Device Interface Specification (RDIS) takes advantage of the internal
firmware present on many existing devices by defining the communication
mechanism, syntax and semantics in such a way to enable the generation of
automatic interface links and resource discovery. We present the current domain
model as it relates to differential drive robots as a mechanism to use the RDIS
to link described robots to HTML5 via web sockets and ROS (Robot Operating
System).
|
1302.6154 | Upper Bounds on the Size of Grain-Correcting Codes | cs.DM cs.IT math.IT | In this paper, we revisit the combinatorial error model of Mazumdar et al.
that models errors in high-density magnetic recording caused by lack of
knowledge of grain boundaries in the recording medium. We present new upper
bounds on the cardinality/rate of binary block codes that correct errors within
this model.
|
1302.6173 | Robust Capon Beamforming via Shaping Beam Pattern | cs.IT math.IT | High sidelobe level and direction of arrival (DOA) estimation sensitivity are
two major disadvantages of the Capon beamforming. To deal with these problems,
this paper gives an overview of a series of robust Capon beamforming methods
via shaping the beam pattern, including sparse Capon beamforming, weighted
sparse Capon beamforming, mixed norm based Capon beamforming, total variation
minimization based Capon beamforming, and mainlobe-to-sidelobe power ratio
maximization based Capon beamforming. With these additional structure-inducing
constraints, the sidelobe level is suppressed, and the robustness against DOA
mismatch is also improved. Simulations show that the obtained beamformers
outperform the standard Capon beamformer.
|
1302.6194 | Phoneme discrimination using $KS$-algebra II | cs.SD cs.LG stat.ML | The $KS$-algebra consists of expressions constructed with four kinds of
operations: the minimum, maximum, difference, and additively homogeneous
generalized means. Five families of $Z$-classifiers are investigated on binary
classification tasks between English phonemes. It is shown that the classifiers
are able to reflect well-known formant characteristics of vowels, while having
very small Kolmogorov complexity.
|
1302.6210 | A Homogeneous Ensemble of Artificial Neural Networks for Time Series
Forecasting | cs.NE cs.LG | Enhancing the robustness and accuracy of time series forecasting models is an
active area of research. Recently, Artificial Neural Networks (ANNs) have found
extensive applications in many practical forecasting problems. However, the
standard backpropagation ANN training algorithm has some critical issues, e.g.
it has a slow convergence rate and often converges to a local minimum, the
complex pattern of error surfaces, lack of proper training parameters selection
methods, etc. To overcome these drawbacks, various improved training methods
have been developed in literature; but, still none of them can be guaranteed as
the best for all problems. In this paper, we propose a novel weighted ensemble
scheme which intelligently combines multiple training algorithms to increase
the ANN forecast accuracies. The weight for each training algorithm is
determined from the performance of the corresponding ANN model on the
validation dataset. Experimental results on four important time series show
that our proposed technique reduces the mentioned shortcomings of individual
ANN training algorithms to a great extent. It also achieves significantly
better forecast accuracies than two other popular statistical models.
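One simple way to realize such a validation-weighted combination is sketched below. The inverse-error weighting rule is a hypothetical illustration; the paper's exact weighting scheme may differ:

```python
def ensemble_forecast(predictions, val_errors):
    """Combine forecasts from several trained ANNs: each model's weight
    is inversely proportional to its validation error, so better models
    contribute more.

    predictions: list of forecast lists (one per model, equal length)
    val_errors:  list of positive validation errors (one per model)
    """
    inv = [1.0 / e for e in val_errors]
    total = sum(inv)
    weights = [w / total for w in inv]  # weights sum to 1
    horizon = len(predictions[0])
    return [
        sum(w * p[i] for w, p in zip(weights, predictions))
        for i in range(horizon)
    ]
```

With equal validation errors this reduces to a plain average; as one model's error shrinks, the ensemble forecast converges to that model's output.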
|
1302.6214 | Modification of conceptual clustering algorithm Cobweb for numerical
data using fuzzy membership function | cs.AI | A modification of the conceptual clustering algorithm Cobweb for application
to numerical data is offered. Keywords: clustering, algorithm
Cobweb, numerical data, fuzzy membership function.
|
1302.6220 | Directed closure measures for networks with reciprocity | cs.SI cs.DS physics.soc-ph | The study of triangles in graphs is a standard tool in network analysis,
leading to measures such as the \emph{transitivity}, i.e., the fraction of
paths of length $2$ that participate in triangles. Real-world networks are
often directed, and it can be difficult to "measure" this network structure
meaningfully. We propose a collection of \emph{directed closure values} for
measuring triangles in directed graphs in a way that is analogous to
transitivity in an undirected graph. Our study of these values reveals much
information about directed triadic closure. For instance, we immediately see
that reciprocal edges have a high propensity to participate in triangles. We
also observe striking similarities between the triadic closure patterns of
different web and social networks. We perform mathematical and empirical
analysis showing that directed configuration models that preserve reciprocity
cannot capture the triadic closure patterns of real networks.
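The undirected transitivity that the abstract generalizes can be computed directly from an adjacency structure; the directed closure values of the paper are analogous ratios per directed edge type. A minimal sketch of the undirected case:

```python
from itertools import combinations

def transitivity(adj):
    """Fraction of length-2 paths (wedges) that close into triangles.

    `adj` maps each node to the set of its neighbours (undirected graph).
    Each triangle closes three wedges, so this equals
    3 * (#triangles) / (#wedges).
    """
    wedges = 0
    closed = 0
    for v, nbrs in adj.items():
        for a, b in combinations(nbrs, 2):  # every wedge a-v-b
            wedges += 1
            if b in adj[a]:                 # wedge is closed by edge a-b
                closed += 1
    return closed / wedges if wedges else 0.0

# Triangle {0,1,2} plus a pendant edge 2-3: 5 wedges, 3 closed
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(transitivity(g))  # 0.6
```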
|
1302.6256 | Parallel Maximum Clique Algorithms with Applications to Network Analysis
and Storage | cs.SI cs.DC cs.DM cs.DS physics.soc-ph | We propose a fast, parallel maximum clique algorithm for large sparse graphs
that is designed to exploit characteristics of social and information networks.
The method exhibits a roughly linear runtime scaling over real-world networks
ranging from 1000 to 100 million nodes. In a test on a social network with 1.8
billion edges, the algorithm finds the largest clique in about 20 minutes. Our
method employs a branch and bound strategy with novel and aggressive pruning
techniques. For instance, we use the core number of a vertex in combination
with a good heuristic clique finder to efficiently remove the vast majority of
the search space. In addition, we parallelize the exploration of the search
tree. During the search, processes immediately communicate changes to upper and
lower bounds on the size of the maximum clique, which occasionally results in a
super-linear speedup because vertices with large search spaces can be pruned by
other processes. We apply the algorithm to two problems: computing temporal
strong components and compressing graphs.
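The branch-and-bound core of such a search can be sketched compactly. This simplified version prunes only with the incumbent-size bound; the paper's method additionally uses core numbers and a heuristic clique to shrink the search space, which is omitted here for brevity.

```python
def max_clique(adj):
    """Branch-and-bound maximum clique search (simplified sketch).

    A branch is pruned when the current clique plus all remaining
    candidates cannot beat the best clique found so far.
    """
    best = []

    def expand(clique, candidates):
        nonlocal best
        if len(clique) + len(candidates) <= len(best):
            return  # bound: cannot improve on the incumbent
        if not candidates:
            best = clique[:]  # new incumbent
            return
        for v in list(candidates):
            candidates.remove(v)  # v will never be reconsidered here
            expand(clique + [v], candidates & adj[v])

    expand([], set(adj))
    return best

# 4-clique {0,1,2,3} plus an extra vertex 4 attached only to 0
g = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {0}}
print(sorted(max_clique(g)))  # [0, 1, 2, 3]
```

Parallelizing amounts to exploring independent subtrees concurrently while sharing the incumbent bound, which is where the super-linear speedups mentioned above can arise.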
|
1302.6259 | A Treatise on Stability of Autonomous and Non-autonomous Systems: Theory
and Illustrative Practical Applications | cs.IT math.IT | Stability is a very important property of any physical system. By a stable
system, we broadly mean that small disturbances either in the system inputs or
in the initial conditions do not lead to large changes in the overall behavior
of the system. To be of practical use, a system must be stable. The
theory of stability is a vast, rapidly growing subject with prolific and
innovative contributions from numerous researchers. As such, an introductory
book that covers the basic concepts and minute details about this theory is
essential. The primary aim of this book is to make the readers familiar with
the various terminologies and methods related to the stability analysis of
time-invariant (autonomous) and time-varying (non-autonomous) systems. A
special treatment is given to the celebrated direct method of Lyapunov, which
is so far the most widely used and perhaps the best method for determining the
stability nature of both autonomous as well as non-autonomous systems. After
discussing autonomous systems at considerable length, the book concentrates
on non-autonomous systems. From a stability point of view, these systems are
often quite difficult to handle. Also, unlike their autonomous counterparts,
non-autonomous systems often behave in peculiar ways which can lead
analysts to misleading conclusions. Due to these issues, this book
attempts to present a careful and systematic study about the stability
properties of non-autonomous systems.
|
1302.6262 | The depolarising channel and Horn's problem | quant-ph cs.IT math.IT math.RT | We investigate the action of the depolarising (qubit) channel on permutation
invariant input states. More specifically, we ask on which
invariant subspaces the output of the depolarising channel, given such special
input, is supported. An answer is given for equidistributed states on
isotypical subspaces, also called symmetric Werner states. Horn's problem and
two of the corresponding inequalities are invoked as a method of proof.
|
1302.6276 | The Role of Information Diffusion in the Evolution of Social Networks | cs.SI cs.CY physics.soc-ph | Every day millions of users are connected through online social networks,
generating a rich trove of data that allows us to study the mechanisms behind
human interactions. Triadic closure has been treated as the major mechanism for
creating social links: if Alice follows Bob and Bob follows Charlie, Alice will
follow Charlie. Here we present an analysis of longitudinal micro-blogging
data, revealing a more nuanced view of the strategies employed by users when
expanding their social circles. While the network structure affects the spread
of information among users, the network is in turn shaped by this communication
activity. This suggests a link creation mechanism whereby Alice is more likely
to follow Charlie after seeing many messages by Charlie. We characterize users
with a set of parameters associated with different link creation strategies,
estimated by a Maximum-Likelihood approach. Triadic closure does have a strong
effect on link formation, but shortcuts based on traffic are another key factor
in interpreting network evolution. However, individual strategies for following
other users are highly heterogeneous. Link creation behaviors can be summarized
by classifying users in different categories with distinct structural and
behavioral characteristics. Users who are popular, active, and influential tend
to create traffic-based shortcuts, making the information diffusion process
more efficient in the network.
|
1302.6288 | Super-resolution via superset selection and pruning | cs.IT math.IT math.NA | We present a pursuit-like algorithm that we call the "superset method" for
recovery of sparse vectors from consecutive Fourier measurements in the
super-resolution regime. The algorithm has a subspace identification step that
hinges on the translation invariance of the Fourier transform, followed by a
removal step to estimate the solution's support. The superset method is always
successful in the noiseless regime (unlike L1-minimization) and generalizes to
higher dimensions (unlike the matrix pencil method). Relative robustness to
noise is demonstrated numerically.
|
1302.6292 | An Iterative Noncoherent Relay Receiver for the Two-way Relay Channel | cs.IT math.IT | Physical-layer network coding improves the throughput of the two-way relay
channel by allowing multiple source terminals to transmit simultaneously to the
relay. However, it is generally not feasible to align the phases of the
multiple received signals at the relay, which motivates the exploration of
noncoherent solutions. In this paper, turbo-coded orthogonal multi-tone
frequency-shift keying (FSK) is considered for the two-way relay channel. In
contrast with analog network coding, the system considered is an instance of
digital network coding; i.e., the relay decodes the network codeword and
forwards a re-encoded version. Crucial to noncoherent digital network coding is
the implementation of the relay receiver, which is the primary focus of the
paper. The relay receiver derived in this paper supports any modulation order
that is a power of two, and features the iterative feedback of a priori
information from the turbo channel decoder to the demodulator; i.e., it uses
bit interleaved coded modulation with iterative decoding (BICM-ID). The
performance of the receiver is investigated in Rayleigh fading channels through
error-rate simulations and a capacity analysis. Results show that the BICM-ID
receiver improves energy efficiency by 0.5-0.9 dB compared to a non-iterative
receiver implementation.
|
1302.6309 | Reciprocal versus Parasocial Relationships in Online Social Networks | cs.SI physics.soc-ph | Many online social networks are fundamentally directed, i.e., they consist of
both reciprocal edges (i.e., edges that have already been linked back) and
parasocial edges (i.e., edges that have not been linked back). Thus,
understanding the structures and evolutions of reciprocal edges and parasocial
ones, exploring the factors that influence parasocial edges to become
reciprocal ones, and predicting whether a parasocial edge will turn into a
reciprocal one are basic research problems.
However, there have been few systematic studies about such problems. In this
paper, we bridge this gap using a novel large-scale Google+ dataset crawled by
ourselves as well as one publicly available social network dataset. First, we
compare the structures and evolutions of reciprocal edges and those of
parasocial edges. For instance, we find that reciprocal edges are more likely
to connect users with similar degrees while parasocial edges are more likely to
link ordinary users (e.g., users with low degrees) and popular users (e.g.,
celebrities). However, the impacts of reciprocal edges linking ordinary and
popular users on the network structures increase slowly as the social networks
evolve. Second, we observe that factors including user behaviors, node
attributes, and edge attributes all have significant impacts on the formation
of reciprocal edges. Third, in contrast to previous studies that treat
reciprocal edge prediction as either a supervised or a semi-supervised learning
problem, we identify that reciprocal edge prediction is better modeled as an
outlier detection problem. Finally, we perform extensive evaluations with the
two datasets, and we show that our proposal outperforms previous reciprocal
edge prediction approaches.
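The basic quantity underlying the reciprocal/parasocial split is the network's reciprocity, which is straightforward to compute; this is only an illustrative measure, not the paper's prediction method.

```python
def reciprocity(edges):
    """Fraction of directed edges that are reciprocated.

    Edges with a reverse counterpart are 'reciprocal' in the paper's
    terminology; the rest are 'parasocial'.
    """
    edge_set = set(edges)  # deduplicate directed edges
    recip = sum(1 for (u, v) in edge_set if (v, u) in edge_set)
    return recip / len(edge_set) if edge_set else 0.0

# One mutual pair (a<->b) and one one-way follow (a->c)
print(reciprocity([("a", "b"), ("b", "a"), ("a", "c")]))
```

Here two of the three directed edges are reciprocated, giving a reciprocity of 2/3.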
|
1302.6310 | Estimating Sectoral Pollution Load in Lagos, Nigeria Using Data Mining
Techniques | cs.NE | Industrial pollution is often considered to be one of the prime factors
contributing to air, water and soil pollution. Sectoral pollution loads
(ton/yr) into different media (i.e. air, water and land) in Lagos were
estimated using the Industrial Pollution Projection System (IPPS). These were
further studied using Artificial Neural Networks (ANNs), a data mining
technique with the ability to detect and describe patterns in large
data sets with variables that are non-linearly related. The Time Lagged
Recurrent Network (TLRN) emerged as the best model among all the neural
networks considered, which include the Multilayer Perceptron (MLP) Network,
Generalized Feed Forward Neural Network (GFNN), Radial Basis Function (RBF)
Network and Recurrent Network (RN). TLRN modelled the data sets better than the
others in terms of mean absolute error (MAE) (0.14), time (39 s) and linear
correlation coefficient (0.84). The results showed that Artificial Neural
Networks (ANNs) technique (i.e., Time Lagged Recurrent Network) is also
applicable and effective in environmental assessment study. Keywords:
Artificial Neural Networks (ANNs), Data Mining Techniques, Industrial Pollution
Projection System (IPPS), Pollution load, Pollution Intensity.
|
1302.6315 | Rate-Distortion Bounds for an Epsilon-Insensitive Distortion Measure | cs.IT cs.LG math.IT | Direct evaluation of the rate-distortion function has rarely been achieved
when it is strictly greater than its Shannon lower bound. In this paper, we
consider the rate-distortion function for the distortion measure defined by an
epsilon-insensitive loss function. We first present the Shannon lower bound
applicable to any source distribution with finite differential entropy. Then,
focusing on the Laplacian and Gaussian sources, we prove that the
rate-distortion functions of these sources are strictly greater than their
Shannon lower bounds and obtain analytically evaluable upper bounds for the
rate-distortion functions. The small-distortion limit and numerical evaluation
of the bounds suggest that the Shannon lower bound provides a good approximation
to the rate-distortion function for the epsilon-insensitive distortion measure.
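The distortion measure in question is presumably the standard epsilon-insensitive loss familiar from support vector regression; under that assumption, the measure and the generic Shannon lower bound for a difference distortion take the form:

```latex
% epsilon-insensitive loss (standard SVR form, assumed to match the paper's):
d_\epsilon(x, \hat{x}) = \max\{0,\; |x - \hat{x}| - \epsilon\}
% Shannon lower bound for a difference distortion measure d_\epsilon:
R(D) \;\ge\; h(X) \;-\; \max_{p_Z:\; \mathbb{E}[d_\epsilon(Z, 0)] \le D} h(Z)
```

The maximization is over noise densities whose expected distortion is at most D; the paper's contribution is to show the inequality is strict for Laplacian and Gaussian sources yet nearly tight.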
|
1302.6330 | An event-based model for contracts | cs.LO cs.MA | We introduce a basic model for contracts. Our model extends event structures
with a new relation, which faithfully captures the circular dependencies among
contract clauses. We establish whether an agreement exists which respects all
the contracts at hand (i.e. all the dependencies can be resolved), and we
detect the obligations of each participant. The main technical contribution is
a correspondence between our model and a fragment of the contract logic PCL.
More precisely, we show that the reachable events are exactly those which
correspond to provable atoms in the logic. Despite this strong
correspondence, our model improves previous work on PCL by exhibiting a
finer-grained notion of culpability, which takes into account the legitimate
orderings of events.
|
1302.6334 | Non-simplifying Graph Rewriting Termination | cs.CL cs.CC cs.LO | So far, a very large amount of work in Natural Language Processing (NLP) relies
on trees as the core mathematical structure for representing linguistic
information (e.g., in Chomsky's work). However, some linguistic phenomena do
not fit properly into trees. In a former paper, we showed the benefit of
encoding linguistic structures by graphs and of using graph rewriting rules to
compute on those structures. Justified by some linguistic considerations, graph
rewriting is characterized by two features: first, there is no node creation
along computations and second, there are non-local edge modifications. Under
these hypotheses, we show that uniform termination is undecidable and that
non-uniform termination is decidable. We describe two termination techniques
based on weights and we give complexity bounds on the derivation length for
these rewriting systems.
|
1302.6340 | A Fuzzy Logic based Method for Efficient Retrieval of Vague and
Uncertain Spatial Expressions in Text Exploiting the Granulation of the
Spatial Event Queries | cs.IR | The arrangement of things in n-dimensional space is referred to as spatial.
Spatial data consist of values that denote the location and shape of objects
and areas on the earth's surface. Spatial information includes facts such as
location of features, the relationship of geographic features and measurements
of geographic features. Spatial cognition is a primal area of study in
various other fields such as Robotics, Psychology, Geosciences, Geography,
Political Science, Geographic Economy, Environmental, Mining and Petroleum
Engineering, Natural Resources, Epidemiology, Demography, etc. Any text
document which contains physical location specifications such as place names,
geographic coordinates, landmarks, country names, etc. is assumed to contain
spatial information. Such information may also be represented using
vague or fuzzy descriptions involving linguistic terms such as near to, far
from, to the east of, very close. Given a query involving events, the aim of
this ongoing research work is to extract the relevant information from multiple
text documents, resolve the uncertainty and vagueness, and translate them into
locations in a map. The input to the system would be a text Corpus and a
Spatial Query event. The output of the system is a map showing the most
possible, disambiguated location of the event queried. The author proposes
Fuzzy Logic Techniques for resolving the uncertainty in the spatial
expressions.
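A standard way to model a vague term such as "near to" is a trapezoidal fuzzy membership function. The sketch below is illustrative only: the breakpoints (1 km and 5 km) are hypothetical choices, not values from the work described above.

```python
def near_membership(distance_km, full_until=1.0, zero_after=5.0):
    """Fuzzy membership degree for the vague spatial term 'near to'.

    Illustrative trapezoidal shoulder: degree 1 up to `full_until` km,
    falling linearly to 0 at `zero_after` km. The breakpoints are
    hypothetical, not taken from the paper.
    """
    if distance_km <= full_until:
        return 1.0
    if distance_km >= zero_after:
        return 0.0
    return (zero_after - distance_km) / (zero_after - full_until)

print(near_membership(0.5))  # 1.0  (definitely near)
print(near_membership(3.0))  # 0.5  (somewhat near)
print(near_membership(6.0))  # 0.0  (not near)
```

Candidate map locations can then be ranked by their membership degree with respect to the fuzzy terms extracted from the text.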
|
1302.6352 | URDP: General Framework for Direct CCA2 Security from any Lattice-Based
PKE Scheme | cs.CR cs.IT math.IT | Designing an efficient lattice-based cryptosystem secure against adaptive chosen
ciphertext attack (IND-CCA2) is a challenging problem. To date, full
CCA2 security of all proposed lattice-based PKE schemes has been achieved by using
generic transformations such as strongly unforgeable one-time signature
schemes (SU-OT-SS), or a message authentication code (MAC) and a weak form of
commitment. The drawback of these schemes is that encryption requires "separate
encryption". Therefore, the resulting encryption scheme is not sufficiently
efficient to be used in practice and it is inappropriate for many applications
such as small ubiquitous computing devices with limited resources such as smart
cards, active RFID tags, wireless sensor networks and other embedded devices.
In this work, for the first time, we introduce an efficient universal random
data padding (URDP) scheme, and show how it can be used to construct a "direct"
CCA2-secure encryption scheme from "any" worst-case hardness problem in
(ideal) lattices in the standard model, resolving a problem that had remained
open until now. This novel approach is a "black-box" construction and leads to
the elimination of separate encryption, as it avoids using a general
transformation from a CPA-secure scheme to a CCA2-secure one. The IND-CCA2
security of this scheme can be tightly reduced in the standard model to the
assumption that the underlying primitive is a one-way trapdoor function.
|
1302.6363 | Realtime market microstructure analysis: online Transaction Cost
Analysis | q-fin.TR cs.IT math.IT math.ST stat.TH | Motivated by the practical challenge in monitoring the performance of a large
number of algorithmic trading orders, this paper provides a methodology that
leads to automatic discovery of the causes that lie behind a poor trading
performance. It also gives theoretical foundations to a generic framework for
real-time trading analysis. Academic literature provides different ways to
formalize these algorithms and show how optimal they can be from a
mean-variance, a stochastic control, an impulse control or a statistical
learning viewpoint. This paper is agnostic about the way the algorithm has been
built and provides a theoretical formalism to identify in real-time the market
conditions that influenced its efficiency or inefficiency. For a given set of
characteristics describing the market context, selected by a practitioner, we
first show how a set of additional derived explanatory factors, called anomaly
detectors, can be created for each market order. We then present an online
methodology to quantify how this extended set of factors, at any given time,
predicts which of the orders are underperforming while calculating the
predictive power of this explanatory factor set. Armed with this information,
which we call influence analysis, we intend to empower the order monitoring
user to take appropriate action on any affected orders by re-calibrating the
trading algorithms working the order through new parameters, pausing their
execution or taking over more direct trading control. We also intend for this
method to be used in the post-trade analysis of algorithms to automatically
adjust their trading actions.
|
1302.6379 | Image-based Face Detection and Recognition: "State of the Art" | cs.CV | Face recognition from image or video is a popular topic in biometrics
research. Many public places usually have surveillance cameras for video
capture, and these cameras have significant value for security purposes. It
is widely acknowledged that face recognition has played an important role
in surveillance systems, as it does not need the subject's cooperation. The
actual advantages of face-based identification over other biometrics are
uniqueness and acceptance. As the human face is a dynamic object with a high
degree of variability in its appearance, face detection is a difficult problem
in computer vision. In this field, accuracy and speed of identification are
the main issues.
The goal of this paper is to evaluate various face detection and recognition
methods and to provide a complete solution for image-based face detection and
recognition with higher accuracy and a better response rate, as an initial step
for video surveillance. The solution is proposed based on tests performed on
various face-rich databases varying in subjects, pose, emotions, race and light.
|
1302.6390 | The adaptive Gril estimator with a diverging number of parameters | stat.ME cs.LG | We consider the problem of variables selection and estimation in linear
regression model in situations where the number of parameters diverges with the
sample size. We propose the adaptive Generalized Ridge-Lasso (\mbox{AdaGril}),
which is an extension of the adaptive Elastic Net. AdaGril incorporates
information redundancy among correlated variables for model selection and
estimation. It combines the strengths of the quadratic regularization and the
adaptively weighted Lasso shrinkage. In this paper, we highlight the grouped
selection property for AdaCnet method (one type of AdaGril) in the equal
correlation case. Under weak conditions, we establish the oracle property of
AdaGril, which ensures optimal large-sample performance in high dimensions.
Consequently, it achieves both goals of handling the problem of collinearity in
high dimension and enjoys the oracle property. Moreover, we show that AdaGril
estimator achieves a Sparsity Inequality, i.e., a bound in terms of the number
of non-zero components of the 'true' regression coefficient. This bound is
obtained under a similar weak Restricted Eigenvalue (RE) condition used for
Lasso. Simulation studies show that some particular cases of AdaGril
outperform its competitors.
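The abstract describes AdaGril as combining quadratic regularization with adaptively weighted Lasso shrinkage. A plausible form of the criterion (an assumption, since the abstract gives no formula) is the adaptive elastic-net objective:

```latex
\hat{\beta} \;=\; \arg\min_{\beta}\;
\|y - X\beta\|_2^2
\;+\; \lambda_2 \|\beta\|_2^2
\;+\; \lambda_1 \sum_{j=1}^{p} \hat{w}_j\, |\beta_j| ,
\qquad
\hat{w}_j = \bigl|\hat{\beta}_j^{\,\text{init}}\bigr|^{-\gamma}
```

The ridge term $\lambda_2\|\beta\|_2^2$ handles collinearity among correlated variables, while the adaptive weights $\hat{w}_j$, built from an initial estimate, drive the oracle property.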
|
1302.6421 | ML4PG in Computer Algebra verification | cs.LO cs.LG | ML4PG is a machine-learning extension that provides statistical proof hints
during the process of Coq/SSReflect proof development. In this paper, we use
ML4PG to find proof patterns in the CoqEAL library -- a library that was
devised to verify the correctness of Computer Algebra algorithms. In
particular, we use ML4PG to help us in the formalisation of an efficient
algorithm to compute the inverse of triangular matrices.
|
1302.6426 | Segmentation of Alzheimer's Disease in PET scan datasets using MATLAB | cs.NE | Positron Emission Tomography (PET) scan images are one of the biomedical
imaging techniques, similar to MRI scan images, but PET scan images are
helpful in tracking the development of tumors. The segmentation of such
images requires expertise, and clustering plays an important role in
automating the process. Clustering is commonly known as an unsupervised
learning process in which n-dimensional data sets are partitioned into k
groups so as to maximize intra-cluster similarity and minimize
inter-cluster similarity. This paper implements the commonly used K-Means
and Fuzzy C-Means (FCM) clustering algorithms. The work is implemented
using MATrix LABoratory (MATLAB) and tested with a sample PET scan image.
The sample data is collected from the Alzheimer's Disease Neuroimaging
Initiative (ADNI). The Medical Image Processing and Visualization Tool
(MIPAV) is used to compare the resultant images.
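As a minimal illustration of the clustering step described above (a Python sketch, not the MATLAB implementation the abstract describes), plain K-Means on pixel intensities:

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Plain K-Means on scalar pixel intensities.

    Alternates assignment (nearest center) and update (cluster mean);
    a minimal stand-in for intensity-based PET image segmentation.
    """
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    centers = rng.choice(values, size=k, replace=False)  # init from data
    for _ in range(iters):
        # assign each value to its nearest center
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        # recompute each center as the mean of its assigned values
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Two well-separated intensity groups cluster apart
pixels = [0.1, 0.2, 0.15, 0.9, 0.95, 0.85]
labels, centers = kmeans_1d(pixels, k=2)
```

Fuzzy C-Means replaces the hard assignment with per-cluster membership degrees, which suits the soft tissue boundaries in PET images.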
|
1302.6436 | A Domain-Specific Language for Rich Motor Skill Architectures | cs.RO cs.SE | Model-driven software development is a promising way to cope with the
complexity of system integration in advanced robotics, as it has already
demonstrated its benefits in domains with comparably challenging system
integration requirements. This paper reports on work in progress in this area
which aims to improve the research and experimentation process in a
collaborative research project developing motor skill architectures for
compliant robots. Our goal is to establish a model-driven development process
throughout the project around a domain-specific language (DSL) facilitating the
compact description of adaptive modular architectures for rich motor skills.
Incorporating further languages for other aspects (e.g. mapping to a technical
component architecture) the approach allows not only the formal description of
motor skill architectures but also automated code-generation for
experimentation on technical robot platforms. This paper reports on a first
case study exemplifying how the developed AMARSi DSL helps to conceptualize
different architectural approaches and to identify their similarities and
differences.
|
1302.6442 | A Modelling Approach Based on Fuzzy Agents | cs.AI | Modelling of complex systems is mainly based on the decomposition of these
systems into autonomous elements, and the identification and definition of
possible interactions between these elements. For this, the agent-based
approach is a modelling solution often proposed. Complexity can also stem from
events external or internal to systems, whose main characteristics are
uncertainty and imprecision, or whose perception is subjective (i.e.
interpreted). Insofar as fuzzy logic provides a solution for modelling
uncertainty, the concept of a fuzzy agent can model both complexity and
uncertainty. This paper focuses on introducing the concept of a fuzzy agent: a
classical agent architecture is redefined from a fuzzy perspective. A
pedagogical illustration of fuzzy agentification of a smart watering system is
then proposed.
|
1302.6452 | A Conformal Prediction Approach to Explore Functional Data | stat.ML cs.LG | This paper applies conformal prediction techniques to compute simultaneous
prediction bands and clustering trees for functional data. These tools can be
used to detect outliers and clusters. Both our prediction bands and clustering
trees provide prediction sets for the underlying stochastic process with a
guaranteed finite sample behavior, under no distributional assumptions. The
prediction sets are also informative in that they correspond to the high
density region of the underlying process. While ordinary conformal prediction
has high computational cost for functional data, we use the inductive conformal
predictor, together with several novel choices of conformity scores, to
simplify the computation. Our methods are illustrated on some real data
examples.
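The inductive (split) conformal idea mentioned above can be shown in a toy scalar setting: calibration residuals are converted into a band half-width with finite-sample coverage guarantees and no distributional assumptions. This is only a sketch of the general principle, not the paper's functional-data construction.

```python
import numpy as np

def split_conformal_width(cal_residuals, alpha=0.25):
    """Split (inductive) conformal prediction for a scalar response.

    Returns a half-width w such that yhat +/- w covers a new response
    with probability at least 1 - alpha, given exchangeable data.
    """
    r = np.sort(np.abs(np.asarray(cal_residuals, dtype=float)))
    n = len(r)
    # conformal quantile: the ceil((n+1)(1-alpha))-th smallest residual
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return float(r[min(k, n) - 1])

# Seven calibration residuals; the band is y_hat +/- width
width = split_conformal_width([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7], alpha=0.25)
print(width)  # 0.6
```

For functional data, the same rank-based calibration is applied to a conformity score of each curve, which is where the choice of score drives the informativeness of the bands.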
|
1302.6521 | Imperfect and Unmatched CSIT is Still Useful for the Frequency
Correlated MISO Broadcast Channel | cs.IT math.IT | Since Maddah-Ali and Tse showed that the completely stale transmitter-side
channel state information (CSIT) still benefits the Degrees of Freedom (DoF) of
the Multiple-Input-Multiple-Output (MISO) Broadcast Channel (BC), there has
been much interest in the academic literature in investigating the impact of
imperfect CSIT on the \emph{DoF} region of time-correlated broadcast channels.
Even though the research focus has been on time-correlated channels so far, a
similar but different problem concerns frequency-correlated channels.
Indeed, imperfect CSIT also impacts the DoF region of frequency-correlated
channels, as exemplified by current multi-carrier wireless systems.
This contribution, for the first time in the literature, investigates a
general frequency correlated setting where a two-antenna transmitter has
imperfect knowledge of CSI of two single-antenna users on two adjacent
subbands. A new scheme is derived as an integration of Zero-Forcing Beamforming
(ZFBF) and the scheme proposed by Maddah-Ali and Tse. The achievable DoF region
resulting from this scheme is expressed as a function of the qualities of CSIT.
|
1302.6523 | Sparse Frequency Analysis with Sparse-Derivative Instantaneous Amplitude
and Phase Functions | cs.LG | This paper addresses the problem of expressing a signal as a sum of frequency
components (sinusoids) wherein each sinusoid may exhibit abrupt changes in its
amplitude and/or phase. The Fourier transform of a narrow-band signal, with a
discontinuous amplitude and/or phase function, exhibits spectral and temporal
spreading. The proposed method aims to avoid such spreading by explicitly
modeling the signal of interest as a sum of sinusoids with time-varying
amplitudes. So as to accommodate abrupt changes, it is further assumed that the
amplitude/phase functions are approximately piecewise constant (i.e., their
time-derivatives are sparse). The proposed method is based on a convex
variational (optimization) approach wherein the total variation (TV) of the
amplitude functions is regularized subject to a perfect (or approximate)
reconstruction constraint. A computationally efficient algorithm is derived
based on convex optimization techniques. The proposed technique can be used to
perform band-pass filtering that is relatively insensitive to narrow-band
amplitude/phase jumps present in data, which normally pose a challenge (due to
transients, leakage, etc.). The method is illustrated using both synthetic
signals and human EEG data for the purpose of band-pass filtering and the
estimation of phase synchrony indexes.
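A plausible form of the convex program described above (an assumed formulation, since the abstract gives no equations): sinusoidal components with time-varying amplitudes $a_k(t)$, whose total variation is penalized under a reconstruction constraint:

```latex
\min_{\{a_k\}} \;\sum_{k} \mathrm{TV}(a_k)
\quad \text{s.t.} \quad
\Big\| \, y(t) - \sum_{k} a_k(t)\, e^{j \omega_k t} \, \Big\|_2 \;\le\; \varepsilon
```

Penalizing $\mathrm{TV}(a_k)$, the $\ell_1$ norm of the amplitude's time-derivative, favors piecewise-constant amplitudes and thus accommodates the abrupt jumps without spectral spreading.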
|
1302.6556 | On Sharing Private Data with Multiple Non-Colluding Adversaries | cs.DB | We present SPARSI, a theoretical framework for partitioning sensitive data
across multiple non-colluding adversaries. Most work in privacy-aware data
sharing has considered disclosing summaries where the aggregate information
about the data is preserved, but sensitive user information is protected.
Nonetheless, there are applications, including online advertising, cloud
computing and crowdsourcing markets, where detailed and fine-grained user-data
must be disclosed. We consider a new data sharing paradigm and introduce the
problem of privacy-aware data partitioning, where a sensitive dataset must be
partitioned among k untrusted parties (adversaries). The goal is to maximize
the utility derived by partitioning and distributing the dataset, while
minimizing the amount of sensitive information disclosed. The data should be
distributed so that an adversary, without colluding with other adversaries,
cannot draw additional inferences about the private information, by linking
together multiple pieces of information released to her. The assumption of no
collusion is both reasonable and necessary in the above application domains
that require release of private user information. SPARSI enables us to formally
define privacy-aware data partitioning using the notion of sensitive properties
for modeling private information and a hypergraph representation for describing
the interdependencies between data entries and private information. We show
that solving privacy-aware partitioning is, in general, NP-hard, but for
specific information disclosure functions, good approximate solutions can be
found using relaxation techniques. Finally, we present a local search algorithm
applicable to generic information disclosure functions. We apply SPARSI
together with the proposed algorithms on data from a real advertising scenario
and show that we can partition data with no disclosure to any single
advertiser.
|
1302.6557 | Geodesic-based Salient Object Detection | cs.CV cs.AI | Saliency detection has been an intuitive way to provide useful cues for
object detection and segmentation, as desired for many vision and graphics
applications. In this paper, we provide a robust method for salient object
detection and segmentation. Rather than using various pixel-level contrast
definitions, we exploit global image structures and propose a new geodesic
method dedicated to salient object detection. In the proposed approach, a new
geodesic scheme, namely geodesic tunneling, is proposed to tackle textures
and local chaotic structures. With our new geodesic approach, a geodesic
saliency map is estimated in correspondence to spatial structures in an image.
Experimental evaluation on a salient object benchmark dataset validated that
our algorithm consistently outperformed a number of state-of-the-art saliency
methods, yielding higher precision and better recall rates. With the robust
saliency estimation, we also present an unsupervised hierarchical salient
object cut scheme simply using adaptive saliency thresholding, which attained
the highest score in our F-measure test. We also applied our geodesic cut
scheme to a number of image editing tasks as demonstrated in additional
experiments.
|
1302.6562 | An Improvement to Levenshtein's Upper Bound on the Cardinality of
Deletion Correcting Codes | cs.IT cs.DM math.IT | We consider deletion correcting codes over a q-ary alphabet. It is well known
that any code capable of correcting s deletions can also correct any
combination of s total insertions and deletions. To obtain asymptotic upper
bounds on code size, we apply a packing argument to channels that perform
different mixtures of insertions and deletions. Even though the set of codes is
identical for all of these channels, the bounds that we obtain vary. Prior to
this work, only the bounds corresponding to the all insertion case and the all
deletion case were known. We recover these as special cases. The bound from the
all deletion case, due to Levenshtein, has been the best known for more than
forty-five years. Our generalized bound is better than Levenshtein's bound
whenever the number of deletions to be corrected is larger than the alphabet
size.
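As a concrete toy for the deletion-correction setting (an illustration only, unrelated to the bound derived above): the binary Varshamov-Tenengolts code VT_0(n) corrects a single deletion, which can be checked exhaustively for small n by verifying that the single-deletion balls of distinct codewords are pairwise disjoint.

```python
from itertools import product

def vt_codewords(n, a=0):
    """Binary Varshamov-Tenengolts code VT_a(n): words x_1..x_n with
    sum(i * x_i) congruent to a mod (n + 1)."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(x, 1)) % (n + 1) == a]

def deletion_ball(x):
    """All words obtainable from x by deleting exactly one symbol."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

code = vt_codewords(5)
balls = [deletion_ball(c) for c in code]
# A code corrects one deletion iff these balls are pairwise disjoint.
disjoint = all(balls[i].isdisjoint(balls[j])
               for i in range(len(code)) for j in range(i + 1, len(code)))
```

For n = 5 this yields 6 codewords, close to the 2^n/(n+1) size that deletion-ball packing arguments suggest.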
|
1302.6567 | Teach Network Science to Teenagers | physics.ed-ph cs.SI math.CO physics.pop-ph physics.soc-ph | We discuss our outreach efforts to introduce school students to network
science and explain why networks researchers should be involved in such
outreach activities. We provide overviews of modules that we have designed for
these efforts, comment on our successes and failures, and illustrate the
potentially enormous impact of such outreach efforts.
|
1302.6569 | Characterizing scientific production and consumption in Physics | physics.soc-ph cs.DL cs.SI | We analyze the entire publication database of the American Physical Society
generating longitudinal (50 years) citation networks geolocalized at the level
of single urban areas. We define a knowledge diffusion proxy and scientific
production ranking algorithms to capture the spatio-temporal dynamics of
Physics knowledge worldwide. By using the knowledge diffusion proxy we identify
the key cities in the production and consumption of knowledge in Physics as a
function of time. The results from the scientific production ranking algorithm
allow us to characterize the top cities for scholarly research in Physics.
Although we focus on a single dataset concerning a specific field, the
methodology presented here opens the path to comparative studies of the
dynamics of knowledge across disciplines and research areas.
|
1302.6570 | Secure Degrees of Freedom of the Gaussian Wiretap Channel with Helpers
and No Eavesdropper CSI: Blind Cooperative Jamming | cs.IT cs.CR math.IT | We consider the Gaussian wiretap channel with M helpers, where no
eavesdropper channel state information (CSI) is available at the legitimate
entities. The exact secure d.o.f. of the Gaussian wiretap channel with M
helpers with perfect CSI at the transmitters was found in [1], [2] to be
M/(M+1). One of the key ingredients of the optimal achievable scheme in [1],
[2] is to align cooperative jamming signals with the information symbols at the
eavesdropper to limit the information leakage rate. This required perfect
eavesdropper CSI at the transmitters. Motivated by the recent result in [3], we
propose a new achievable scheme in which cooperative jamming signals span the
entire space of the eavesdropper, but are not exactly aligned with the
information symbols. We show that this scheme achieves the same secure d.o.f.
of M/(M+1) in [1], [2] but does not require any eavesdropper CSI; the
transmitters blindly and cooperatively jam the eavesdropper.
|
1302.6574 | Energy and Sampling Constrained Asynchronous Communication | cs.IT math.IT | The minimum energy, and, more generally, the minimum cost, to transmit one
bit of information has been recently derived for bursty communication when
information is available infrequently at random times at the transmitter. This
result assumes that the receiver is always in the listening mode and samples
all channel outputs until it makes a decision. If the receiver is constrained
to sample only a fraction f>0 of the channel outputs, what is the cost penalty
due to sparse output sampling?
Remarkably, there is no penalty: regardless of f>0 the asynchronous capacity
per unit cost is the same as under full sampling, i.e., when f=1. There is not
even a penalty in terms of decoding delay---the elapsed time between when
information is available until when it is decoded. This latter result relies on
the ability to sample adaptively: the next sample can be chosen as a
function of past samples. Under non-adaptive sampling, it is possible to
achieve the full sampling asynchronous capacity per unit cost, but the decoding
delay gets multiplied by 1/f. Therefore adaptive sampling strategies are of
particular interest in the very sparse sampling regime.
|
1302.6580 | Finding the Right Set of Users: Generalized Constraints for Group
Recommendations | cs.IR | Recently, group recommendations have attracted considerable attention. Rather
than recommending items to individual users, group recommenders recommend items
to groups of users. In this position paper, we introduce the problem of forming
an appropriate group of users to recommend an item when constraints apply to
the members of the group. We present a formal model of the problem and an
algorithm for its solution. Finally, we identify several directions for future
work.
|
1302.6584 | Variational Algorithms for Marginal MAP | stat.ML cs.AI cs.IT cs.LG math.IT | The marginal maximum a posteriori probability (MAP) estimation problem, which
calculates the mode of the marginal posterior distribution of a subset of
variables with the remaining variables marginalized, is an important inference
problem in many models, such as those with hidden variables or uncertain
parameters. Unfortunately, marginal MAP can be NP-hard even on trees, and has
attracted less attention in the literature compared to the joint MAP
(maximization) and marginalization problems. We derive a general dual
representation for marginal MAP that naturally integrates the marginalization
and maximization operations into a joint variational optimization problem,
making it possible to easily extend most or all variational algorithms to
marginal MAP. In particular, we derive a set of "mixed-product" message passing
algorithms for marginal MAP, whose form is a hybrid of max-product, sum-product
and a novel "argmax-product" message updates. We also derive a class of
convergent algorithms based on proximal point methods, including one that
transforms the marginal MAP problem into a sequence of standard marginalization
problems. Theoretically, we provide guarantees under which our algorithms give
globally or locally optimal solutions, and provide novel upper bounds on the
optimal objectives. Empirically, we demonstrate that our algorithms
significantly outperform the existing approaches, including a state-of-the-art
algorithm based on local search methods.
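On a tiny example the distinction between marginal MAP and joint MAP is easy to see by brute force (this is only an illustration of the problem itself, not of the variational algorithms above):

```python
# Unnormalized toy joint p(a, b): a = 0 has one sharply peaked row while
# a = 1 is flat but heavier in total, so marginal MAP and joint MAP disagree.
p = {
    (0, 0): 0.30, (0, 1): 0.01, (0, 2): 0.01,
    (1, 0): 0.12, (1, 1): 0.12, (1, 2): 0.12,
}

# Marginal MAP over a: marginalize b first, then maximize.
marg = {a: sum(v for (ai, b), v in p.items() if ai == a) for a in (0, 1)}
a_mmap = max(marg, key=marg.get)      # -> 1, since 0.36 > 0.32

# Joint MAP: maximize over (a, b) jointly, then read off a.
a_jmap = max(p, key=p.get)[0]         # -> 0, from the peak entry 0.30
```

The two answers differ, which is precisely why marginal MAP cannot in general be reduced to joint maximization.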
|
1302.6595 | Combining Multiple Time Series Models Through A Robust Weighted
Mechanism | cs.AI stat.AP | Improvement of time series forecasting accuracy through combining multiple
models is an important and dynamic area of research. As a result, various
forecast combination methods have been developed in the literature. However,
most of them are based on simple linear ensemble strategies and hence ignore
the possible relationships between two or more participating models. In
this paper, we propose a robust weighted nonlinear ensemble technique which
considers the individual forecasts from different models as well as the
correlations among them while combining. The proposed ensemble is constructed
using three well-known forecasting models and is tested for three real-world
time series. A comparison is made among the proposed scheme and three other
widely used linear combination methods, in terms of the obtained forecast
errors. This comparison shows that our ensemble scheme provides significantly
lower forecast errors than each individual model as well as each of the three
linear combination methods.
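For contrast with the nonlinear scheme above, the classical minimum-variance (Bates-Granger) combination is a linear rule that does use error correlations; a sketch with synthetic forecast errors (all data here is illustrative):

```python
import numpy as np

def min_variance_weights(errors):
    """Bates-Granger weights w = S^-1 1 / (1' S^-1 1), where S is the
    covariance matrix of past forecast errors (rows: time, cols: models)."""
    S = np.cov(errors, rowvar=False)
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
e_common = rng.normal(size=200)
# Three synthetic models: the first two have strongly correlated errors.
errors = np.column_stack([
    e_common + 0.3 * rng.normal(size=200),
    e_common + 0.3 * rng.normal(size=200),
    rng.normal(size=200),
])
w = min_variance_weights(errors)
S = np.cov(errors, rowvar=False)
# By construction, w minimizes w' S w subject to sum(w) == 1, so the
# combined error variance is no larger than any single model's.
```

The correlated pair shares weight while the independent model is upweighted, which a correlation-blind linear rule would miss.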
|
1302.6602 | Using Modified Partitioning Around Medoids Clustering Technique in
Mobile Network Planning | cs.AI cs.NI | Every cellular network deployment requires planning and optimization in order
to provide adequate coverage, capacity, and quality of service (QoS).
Optimizing mobile radio network planning is a very complex task, as many
aspects must be taken into account. With the rapid development of mobile
networks, effective network planning tools are needed to satisfy the needs of
customers. However, deciding upon the optimum placement of the base stations
(BSs) to achieve the best service while reducing the cost is a complex task
requiring vast computational resources. This paper introduces spatial
clustering to solve the mobile network planning problem. It addresses the
antenna placement problem, or cell planning problem, which involves locating
and configuring infrastructure for mobile networks, by modifying the original
Partitioning Around Medoids (PAM) algorithm. M-PAM (Modified Partitioning
Around Medoids) is proposed to satisfy the requirements and constraints. PAM
requires the number of clusters (k) to be specified before starting to search
for the best locations of base stations. The M-PAM algorithm instead uses
radio network planning to determine k: for each cluster we calculate its
coverage and capacity and determine whether they satisfy the mobile
requirements; if not, we increase k and reapply the algorithm, using one of
two clustering methods. An application of this algorithm to a real case study
is presented. Experimental results and analysis indicate that the M-PAM
algorithm, when applying the second method, is effective in the case of heavy
load distribution and leads to a minimum number of base stations, which
directly reduces the cost of planning the network.
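A rough sketch of the outer loop described above (grow k until coverage and capacity constraints hold), built on a plain k-medoids routine; the Euclidean distances, radius, and capacity values are illustrative assumptions, not the paper's:

```python
import math
import random

def assign(points, medoids):
    """Assign each point to its nearest medoid."""
    clusters = {m: [] for m in medoids}
    for p in points:
        clusters[min(medoids, key=lambda m: math.dist(p, m))].append(p)
    return clusters

def kmedoids(points, k, iters=50, seed=0):
    """Plain PAM-style k-medoids: alternate assignment and medoid update.
    With distinct points every cluster stays nonempty (a medoid keeps itself)."""
    rng = random.Random(seed)
    medoids = rng.sample(points, k)
    for _ in range(iters):
        clusters = assign(points, medoids)
        new = [min(c, key=lambda cand: sum(math.dist(cand, q) for q in c))
               for c in clusters.values() if c]
        if set(new) == set(medoids):
            break
        medoids = new
    return medoids

def plan(points, radius, capacity):
    """M-PAM-style outer loop (a sketch, not the paper's exact algorithm):
    grow k until every demand point lies within `radius` of its base
    station and no station serves more than `capacity` points."""
    for k in range(1, len(points) + 1):
        medoids = kmedoids(points, k)
        clusters = assign(points, medoids)
        covered = all(math.dist(p, m) <= radius
                      for m, c in clusters.items() for p in c)
        within_cap = all(len(c) <= capacity for c in clusters.values())
        if covered and within_cap:
            return k, medoids

# Two well-separated demand clusters: one base station per cluster suffices.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
k, medoids = plan(pts, radius=2.0, capacity=5)
```

The loop settles on k = 2 here: a single station cannot cover both clusters within the radius.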
|
1302.6613 | An Introductory Study on Time Series Modeling and Forecasting | cs.LG stat.ML | Time series modeling and forecasting has fundamental importance in various
practical domains, and a lot of active research has been devoted to this
subject over the years. Many important models have been proposed in the
literature for improving the accuracy and effectiveness of time series
forecasting. The aim of this dissertation work is to present a concise
description of some popular time series forecasting models used in practice,
with their salient features. In this thesis, we have described three important
classes of time series models, viz. the stochastic, neural network, and SVM
based models, together with their inherent forecasting strengths and
weaknesses. We have also discussed the basic issues related to time series
modeling, such as stationarity, parsimony, overfitting, etc. Our discussion of
the different time series models is supported by experimental forecast results
on six real time series datasets. While fitting a model to a dataset, special
care is taken to select the most parsimonious one. To evaluate forecast
accuracy as well as to compare different models fitted to a time series, we
have used five performance measures, viz. MSE, MAD, RMSE, MAPE and Theil's
U-statistic. For each of the six datasets, we have shown the obtained forecast
diagram, which graphically depicts the closeness between the original and
forecasted observations. For authenticity as well as clarity in our discussion
of time series modeling and forecasting, we have drawn upon various published
research works from reputed journals and some standard books.
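The five performance measures named above can be computed directly; Theil's U is taken here in its U1 form, which is one of several conventions:

```python
import math

def forecast_measures(actual, pred):
    """MSE, MAD, RMSE, MAPE, and Theil's U (U1 form; other forms exist)."""
    n = len(actual)
    err = [a - p for a, p in zip(actual, pred)]
    mse = sum(e * e for e in err) / n
    mad = sum(abs(e) for e in err) / n
    rmse = math.sqrt(mse)
    mape = 100 * sum(abs(e / a) for e, a in zip(err, actual)) / n
    u1 = rmse / (math.sqrt(sum(a * a for a in actual) / n)
                 + math.sqrt(sum(p * p for p in pred) / n))
    return {"MSE": mse, "MAD": mad, "RMSE": rmse, "MAPE": mape, "TheilU1": u1}

m = forecast_measures([10, 12, 11, 13], [11, 12, 10, 15])
```

With errors (-1, 0, 1, -2) this gives MSE = 1.5 and MAD = 1.0; U1 always lies in [0, 1] by the triangle inequality.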
|
1302.6615 | PSO based Neural Networks vs. Traditional Statistical Models for
Seasonal Time Series Forecasting | cs.NE | Seasonality is a distinctive characteristic which is often observed in many
practical time series. Artificial Neural Networks (ANNs) are a class of
promising models for efficiently recognizing and forecasting seasonal patterns.
In this paper, the Particle Swarm Optimization (PSO) approach is used to
enhance the forecasting strengths of feedforward ANN (FANN) as well as Elman
ANN (EANN) models for seasonal data. Three widely popular versions of the basic
PSO algorithm, viz. Trelea-I, Trelea-II and Clerc-Type1 are considered here.
The empirical analysis is conducted on three real-world seasonal time series.
Results clearly show that each version of the PSO algorithm achieves notably
better forecasting accuracies than the standard Backpropagation (BP) training
method for both FANN and EANN models. The neural network forecasting results
are also compared with those from the three traditional statistical models,
viz. Seasonal Autoregressive Integrated Moving Average (SARIMA), Holt-Winters
(HW) and Support Vector Machine (SVM). The comparison demonstrates that both
PSO and BP based neural networks outperform SARIMA, HW and SVM models for all
three time series datasets. The forecasting performances of ANNs are further
improved through combining the outputs from the three PSO based models.
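A generic global-best PSO loop, minimizing a toy objective (a plain sketch; the Trelea-I/II and Clerc-Type1 variants mentioned above use specific coefficient settings not reproduced here):

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Global-best PSO with standard inertia/acceleration coefficients."""
    rng = random.Random(seed)
    X = [[rng.uniform(-bound, bound) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    g = pbest[min(range(n_particles), key=pbest_f.__getitem__)][:]
    g_f = min(pbest_f)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < g_f:
                    g, g_f = X[i][:], fx
    return g, g_f

sphere = lambda x: sum(v * v for v in x)
best, val = pso(sphere, dim=3)   # converges near the origin
```

For neural network training, `f` would be the network's forecast error as a function of its flattened weight vector.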
|
1302.6617 | Arriving on time: estimating travel time distributions on large-scale
road networks | cs.LG cs.AI | Most optimal routing problems focus on minimizing travel time or distance
traveled. Oftentimes, a more useful objective is to maximize the probability of
on-time arrival, which requires statistical distributions of travel times,
rather than just mean values. We propose a method to estimate travel time
distributions on large-scale road networks, using probe vehicle data collected
from GPS. We present a framework that handles large volumes of data and
scales linearly with the size of the network. Leveraging the planar topology of
the graph, the method efficiently computes the time correlations between
neighboring streets. First, raw probe vehicle traces are compressed into pairs
of travel times and numbers of stops for each traversed road segment using a
`stop-and-go' algorithm developed for this work. The compressed data is then
used as input for training a path travel time model, which couples a Markov
model with a Gaussian Markov random field. Finally, scalable inference
algorithms are developed for obtaining path travel time distributions from the
composite MM-GMRF model. We illustrate the accuracy and scalability of our
model on a 505,000 road link network spanning the San Francisco Bay Area.
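The motivation in the opening sentences, that on-time arrival probability needs full travel time distributions rather than means, can be seen with a tiny Monte Carlo sketch; segments are sampled independently here, unlike the correlated MM-GMRF model:

```python
import random

def on_time_prob(segment_samples, deadline, n=10000, seed=0):
    """Monte Carlo estimate of P(path travel time <= deadline), drawing one
    empirical sample per segment independently (the paper models
    correlations between segments; this sketch ignores them)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        total = sum(rng.choice(s) for s in segment_samples)
        hits += total <= deadline
    return hits / n

seg1 = [2, 2, 3, 8]   # a segment with an occasional heavy delay
seg2 = [4, 5, 5, 6]
p = on_time_prob([seg1, seg2], deadline=10)
```

The exact probability here is 0.75: the mean path time (~6.75) is well under the deadline, yet the delay-prone segment still causes a 25% chance of arriving late, which a mean-based route score would hide.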
|
1302.6634 | A Matrix-Field Weighted Mean-Square-Error Model for MIMO Transceiver
Designs | cs.IT math.IT | In this letter, we investigate an important and well-known issue, namely weighted
mean-square-error (MSE) minimization transceiver designs. In our work, for
transceiver designs a novel weighted MSE model is proposed, which is defined as
a linear matrix function with respect to the traditional data detection MSE
matrix. The new model can be interpreted an extension of weighting operation
from vector field to matrix field. Based on the proposed weighting operation a
general transceiver design is proposed, which aims at minimizing an increasing
matrix-monotone function of the output of the previous linear matrix function.
The structure of the optimal solutions is also derived. Furthermore, two
important special cases of the matrix-monotone functions are discussed in
detail. It is also revealed that these two problems are exactly equivalent to
the transceiver designs of sum MSE minimization and capacity maximization for
dual-hop amplify-and-forward (AF) MIMO relaying systems, respectively. Finally,
it is concluded that AF relaying is, in effect, exactly this kind of weighting
operation.
|
1302.6636 | A Scalable Generative Graph Model with Community Structure | cs.SI physics.soc-ph | Network data is ubiquitous and growing, yet we lack realistic generative
network models that can be calibrated to match real-world data. The recently
proposed Block Two-Level Erdős–Rényi (BTER) model can be tuned to capture two
fundamental properties: degree distribution and clustering coefficients. The
latter is particularly important for reproducing graphs with community
structure, such as social networks. In this paper, we compare BTER to other
scalable models and show that it gives a better fit to real data. We provide a
scalable implementation that requires only O(d_max) storage where d_max is the
maximum number of neighbors for a single node. The generator is trivially
parallelizable, and we show results for a Hadoop MapReduce implementation
modeling a real-world web graph with over 4.6 billion edges. We propose that
the BTER model can be used as a graph generator for benchmarking purposes and
provide idealized degree distributions and clustering coefficient profiles that
can be tuned for user specifications.
|
1302.6660 | Optimal rate algebraic list decoding using narrow ray class fields | math.NT cs.CC cs.IT math.IT | We use class field theory, specifically Drinfeld modules of rank 1, to
construct a family of asymptotically good algebraic-geometric (AG) codes over
fixed alphabets. Over a field of size $\ell^2$, these codes are within
$2/(\sqrt{\ell}-1)$ of the Singleton bound. The function fields underlying
these codes are subfields with a cyclic Galois group of the narrow ray class
field of certain function fields. The resulting codes are "folded" using a
generator of the Galois group. This generalizes earlier work by the first
author on folded AG codes based on cyclotomic function fields. Using the
Chebotarev density theorem, we argue the abundance of inert places of large
degree in our cyclic extension, and use this to devise a linear-algebraic
algorithm to list decode these folded codes up to an error fraction approaching
$1-R$ where $R$ is the rate. The list decoding can be performed in polynomial
time given polynomial amount of pre-processed information about the function
field.
Our construction yields algebraic codes over constant-sized alphabets that
can be list decoded up to the Singleton bound --- specifically, for any desired
rate $R \in (0,1)$ and constant $\eps > 0$, we get codes over an alphabet size
$(1/\eps)^{O(1/\eps^2)}$ that can be list decoded up to error fraction
$1-R-\eps$ confining close-by messages to a subspace with $N^{O(1/\eps^2)}$
elements. Previous results for list decoding up to error-fraction $1-R-\eps$
over constant-sized alphabets were either based on concatenation or involved
taking a carefully sampled subcode of algebraic-geometric codes. In contrast,
our result shows that these folded algebraic-geometric codes {\em themselves}
have the claimed list decoding property.
|