id | title | categories | abstract
|---|---|---|---|
1011.0450
|
From Sparse Signals to Sparse Residuals for Robust Sensing
|
stat.ML cs.IT cs.LG math.IT
|
One of the key challenges in sensor networks is the extraction of information
by fusing data from a multitude of distinct, but possibly unreliable sensors.
Recovering information from the maximum number of dependable sensors while
identifying the unreliable ones is critical for robust sensing. This sensing
task is formulated here as that of finding the maximum number of feasible
subsystems of linear equations, and proved to be NP-hard. Useful links are
established with compressive sampling, which aims at recovering vectors that
are sparse. In contrast, the signals here are not sparse, but give rise to
sparse residuals. Capitalizing on this form of sparsity, four sensing schemes
with complementary strengths are developed. The first scheme is a convex
relaxation of the original problem expressed as a second-order cone program
(SOCP). It is shown that when the involved sensing matrices are Gaussian and
the reliable measurements are sufficiently many, the SOCP can recover the
optimal solution with overwhelming probability. The second scheme is obtained
by replacing the initial objective function with a concave one. The third and
fourth schemes are tailored for noisy sensor data. The noisy case is cast as a
combinatorial problem that is subsequently surrogated by a (weighted) SOCP.
Interestingly, the derived cost functions fall into the framework of robust
multivariate linear regression, while an efficient block-coordinate descent
algorithm is developed for their minimization. The robust sensing capabilities
of all schemes are verified by simulated tests.
|
1011.0468
|
Efficient Triangle Counting in Large Graphs via Degree-based Vertex
Partitioning
|
cs.DS cs.SI physics.soc-ph
|
The number of triangles is a computationally expensive graph statistic which
is frequently used in complex network analysis (e.g., transitivity ratio), in
various random graph models (e.g., exponential random graph model) and in
important real world applications such as spam detection, uncovering of the
hidden thematic structure of the Web and link recommendation. Counting
triangles in graphs with millions and billions of edges requires algorithms
which run fast, use a small amount of space, provide accurate estimates of the
number of triangles, and preferably are parallelizable.
In this paper we present an efficient triangle counting algorithm which can
be adapted to the semistreaming model. The key idea of our algorithm is to
combine the sampling algorithm of Tsourakakis et al. with the partitioning of
the vertex set into a high-degree and a low-degree subset, as in the work of
Alon, Yuster and Zwick, treating each subset appropriately. We obtain a
running time $O \left(m + \frac{m^{3/2} \Delta \log{n}}{t \epsilon^2} \right)$
and an $\epsilon$ approximation (multiplicative error), where $n$ is the number
of vertices, $m$ the number of edges and $\Delta$ the maximum number of
triangles an edge is contained in.
Furthermore, we show how this algorithm can be adapted to the semistreaming
model with space usage $O\left(m^{1/2}\log{n} + \frac{m^{3/2} \Delta \log{n}}{t
\epsilon^2} \right)$ and a constant number of passes (three) over the graph
stream. We apply our methods in various networks with several millions of edges
and we obtain excellent results. Finally, we propose a random projection based
method for triangle counting and provide a sufficient condition to obtain an
estimate with low variance.
|
1011.0472
|
Regularized Risk Minimization by Nesterov's Accelerated Gradient
Methods: Algorithmic Extensions and Empirical Studies
|
cs.LG
|
Nesterov's accelerated gradient methods (AGM) have been successfully applied
in many machine learning areas. However, their empirical performance on
training max-margin models has been inferior to existing specialized solvers.
In this paper, we first extend AGM to strongly convex and composite objective
functions with Bregman style prox-functions. Our unifying framework covers both
the $\infty$-memory and 1-memory styles of AGM, tunes the Lipschitz constant
adaptively, and bounds the duality gap. Then we demonstrate various ways to
apply this framework of methods to a wide range of machine learning problems.
Emphasis is placed on their rate of convergence and on how to efficiently
compute the gradient and optimize the models. The experimental results show
that with our extensions AGM outperforms state-of-the-art solvers on max-margin
models.
|
1011.0474
|
Construction of New Delay-Tolerant Space-Time Codes
|
cs.IT math.IT
|
Perfect Space-Time Codes (STC) are optimal codes in their original
construction for Multiple Input Multiple Output (MIMO) systems. Based on Cyclic
Division Algebras (CDA), they are full-rate, full-diversity codes, have
Non-Vanishing Determinants (NVD) and hence achieve the Diversity-Multiplexing
Tradeoff (DMT). In addition, these codes have led to optimal distributed
space-time codes when applied in cooperative networks under the assumption of
perfect synchronization between relays. However, they lose their diversity
when delays are introduced and thus are not delay-tolerant. In this paper,
using the cyclic division algebras of perfect codes, we construct new codes
that maintain the same properties as perfect codes in the synchronous case.
Moreover, these codes preserve their full-diversity in asynchronous
transmission.
|
1011.0487
|
Stochastic Simulation of Process Calculi for Biology
|
cs.PL cs.CE q-bio.QM
|
Biological systems typically involve large numbers of components with
complex, highly parallel interactions and intrinsic stochasticity. To model
this complexity, numerous programming languages based on process calculi have
been developed, many of which are expressive enough to generate unbounded
numbers of molecular species and reactions. As a result of this expressiveness,
such calculi cannot rely on standard reaction-based simulation methods, which
require fixed numbers of species and reactions. Rather than implementing custom
stochastic simulation algorithms for each process calculus, we propose to use a
generic abstract machine that can be instantiated to a range of process calculi
and a range of reaction-based simulation algorithms. The abstract machine
functions as a just-in-time compiler, which dynamically updates the set of
possible reactions and chooses the next reaction in an iterative cycle. In this
short paper we give a brief summary of the generic abstract machine, and show
how it can be instantiated with the stochastic simulation algorithm known as
Gillespie's Direct Method. We also discuss the wider implications of such an
abstract machine, and outline how it can be used to simulate multiple calculi
simultaneously within a common framework.
|
1011.0488
|
Measurable Stochastics for Brane Calculus
|
cs.CE
|
We give a stochastic extension of the Brane Calculus, along the lines of
recent work by Cardelli and Mardare. In this presentation, the semantics of a
Brane process is a measure of the stochastic distribution of possible
derivations. To this end, we first introduce a labelled transition system for
Brane Calculus, proving its adequacy w.r.t. the usual reduction semantics.
Then, brane systems are presented as Markov processes over the measurable space
generated by terms up-to syntactic congruence, and where the measures are
indexed by the actions of this new LTS. Finally, we provide a SOS presentation
of this stochastic semantics, which is compositional and syntax-driven.
|
1011.0489
|
An Abstraction Theory for Qualitative Models of Biological Systems
|
cs.CE cs.DM
|
Multi-valued network models are an important qualitative modelling approach
used widely by the biological community. In this paper we consider developing
an abstraction theory for multi-valued network models that allows the state
space of a model to be reduced while preserving key properties of the model.
This is important as it aids the analysis and comparison of multi-valued
networks and in particular, helps address the well-known problem of state space
explosion associated with such analysis. We also consider developing techniques
for efficiently identifying abstractions and so provide a basis for the
automation of this task. We illustrate the theory and techniques developed by
investigating the identification of abstractions for two published MVN models
of the lysis-lysogeny switch in the bacteriophage lambda.
|
1011.0490
|
Computational Modeling for the Activation Cycle of G-proteins by
G-protein-coupled Receptors
|
cs.CE q-bio.QM
|
In this paper, we survey five different computational modeling methods. For
comparison, we use the activation cycle of G-proteins that regulate cellular
signaling events downstream of G-protein-coupled receptors (GPCRs) as a driving
example. Starting from an existing Ordinary Differential Equations (ODEs)
model, we implement the G-protein cycle in the stochastic Pi-calculus using
SPiM, as Petri-nets using Cell Illustrator, in the Kappa Language using
Cellucidate, and in Bio-PEPA using the Bio-PEPA Eclipse plug-in. We also
provide a high-level notation to abstract away from communication primitives
that may be unfamiliar to the average biologist, and we show how to translate
high-level programs into stochastic Pi-calculus processes and chemical
reactions.
|
1011.0491
|
Aspects of multiscale modelling in a process algebra for biological
systems
|
cs.LO cs.CE cs.FL
|
We propose a variant of the CCS process algebra with new features aiming at
allowing multiscale modelling of biological systems. In the usual semantics of
process algebras for modelling biological systems actions are instantaneous.
When different scale levels of biological systems are considered in a single
model, one should take into account that actions at a level may take much more
time than actions at a lower level. Moreover, it might happen that while a
component is involved in one long-lasting high-level action, it is also
involved in several faster lower-level actions. Hence, we propose a process algebra
with operations and with a semantics aimed at dealing with these aspects of
multiscale modelling. We study behavioural equivalences for such an algebra and
give some examples.
|
1011.0492
|
Multiscale Bone Remodelling with Spatial P Systems
|
cs.CE q-bio.QM
|
Many biological phenomena are inherently multiscale, i.e. they are
characterized by interactions involving different spatial and temporal scales
simultaneously. Though several approaches have been proposed to provide
"multilayer" models, only Complex Automata, derived from Cellular Automata,
naturally embed spatial information and realize multiscaling with
well-established inter-scale integration schemas. Spatial P systems, a variant
of P systems in which a more geometric concept of space has been added, have
several characteristics in common with Cellular Automata. We propose such a
formalism as a basis to rephrase the Complex Automata multiscaling approach
and, in this perspective, provide a 2-scale Spatial P system describing bone
remodelling. The proposed model not only proves to be highly faithful and
expressive in a multiscale scenario, but also highlights the need for a deep and
formal expressiveness study involving Complex Automata, Spatial P systems and
other promising multiscale approaches, such as our shape-based one, which has
already proved to be highly faithful.
|
1011.0493
|
Modeling biological systems with delays in Bio-PEPA
|
cs.CE q-bio.QM
|
Delays in biological systems may be used to model events for which the
underlying dynamics cannot be precisely observed, or to provide an abstraction
of some behaviour of the system, resulting in more compact models. In this paper
we enrich the stochastic process algebra Bio-PEPA with the possibility of
assigning delays to actions, yielding a new non-Markovian process algebra:
Bio-PEPAd. This is a conservative extension, meaning that the original syntax of
Bio-PEPA is retained and the delay specifications which can now be associated
with actions may be added to existing Bio-PEPA models. The semantics of the
firing of the actions with delays is the delay-as-duration approach, earlier
presented in papers on the stochastic simulation of biological systems with
delays. The semantics of the algebra is given in the Starting-Terminating
style, meaning that the start and the completion of an action are observed as
two separate events, as required by delays. Furthermore we outline how to
perform stochastic simulation of Bio-PEPAd systems and how to automatically
translate a Bio-PEPAd system into a set of Delay Differential Equations, the
deterministic framework for modeling of biological systems with delays. We end
the paper with two example models of biological systems with delays to
illustrate the approach.
|
1011.0494
|
Hybrid Calculus of Wrapped Compartments
|
cs.PL cs.CE q-bio.QM
|
The modelling and analysis of biological systems has deep roots in
Mathematics, specifically in the field of ordinary differential equations
(ODEs). Alternative approaches based on formal calculi, often derived from
process algebras or term rewriting systems, provide a quite complementary way
to analyze the behaviour of biological systems. These calculi make it possible
to cope in a natural way with notions like compartments and membranes, which are
not easy (sometimes impossible) to handle with purely numerical approaches, and
are often based on stochastic simulation methods. Recently, it has also become
evident that stochastic effects in regulatory networks play a crucial role in
the analysis of such systems. Indeed, in many situations it is necessary to use
stochastic models: for example, when the system to be described is based on the
interaction of few molecules, when a chemical instability is present, or when we
want to simulate the functioning of a pool of entities whose compartmentalised
structure evolves dynamically. In contrast, stable
metabolic networks, involving a large number of reagents, for which the
computational cost of a stochastic simulation becomes an insurmountable
obstacle, are efficiently modelled with ODEs. In this paper we define a hybrid
simulation method, combining the stochastic approach with ODEs, for systems
described in CWC, a calculus on which we can express the compartmentalisation
of a biological system whose evolution is defined by a set of rewrite rules.
|
1011.0496
|
Lumpability Abstractions of Rule-based Systems
|
cs.CE
|
The induction of a signaling pathway is characterized by transient complex
formation and mutual posttranslational modification of proteins. To faithfully
capture this combinatorial process in a mathematical model is an important
challenge in systems biology. Exploiting the limited context on which most
binding and modification events are conditioned, attempts have been made to
reduce the combinatorial complexity by quotienting the reachable set of
molecular species into species aggregates while preserving the deterministic
semantics of the thermodynamic limit. Recently we proposed a quotienting that
also preserves the stochastic semantics and that is complete in the sense that
the semantics of individual species can be recovered from the aggregate
semantics. In this paper we prove that this quotienting yields a sufficient
condition for weak lumpability and that it gives rise to a backward Markov
bisimulation between the original and aggregated transition system. We
illustrate the framework on a case study of the EGF/insulin receptor crosstalk.
|
1011.0498
|
Qualitative modelling and analysis of regulations in multi-cellular
systems using Petri nets and topological collections
|
cs.CE
|
In this paper, we aim at modelling and analyzing the regulation processes in
multi-cellular biological systems, in particular tissues.
The modelling framework is based on interconnected logical regulatory
networks à la René Thomas, equipped with information about their spatial
relationships. The semantics of such models is expressed through colored Petri
nets to implement regulation rules, combined with topological collections to
implement the spatial information.
Some constraints are put on the representation of spatial information in
order to preserve the possibility of an enumerative and exhaustive state space
exploration.
This paper presents the modelling framework, its semantics, as well as a
prototype implementation that allowed preliminary experimentation on some
applications.
|
1011.0502
|
A New Email Retrieval Ranking Approach
|
cs.IR
|
The email retrieval task has recently attracted much attention, as it helps the
user retrieve the email(s) related to a submitted query. To the best of our
knowledge, existing email retrieval ranking approaches sort the retrieved
emails based on some heuristic rules, which are either search clues or some
predefined user criteria rooted in email fields. Unfortunately, the user
usually does not know the effective rule that yields the best ranking for his
query. This paper presents a new email retrieval ranking approach to tackle
this problem. It ranks the retrieved emails based on a scoring function that
depends on crucial email fields, namely subject, content, and sender. The paper
also proposes an architecture that allows every user in a network/group of
users, if permissible, to know the most important network senders who are
interested in his submitted query words. The experimental evaluation on the
Enron corpus proves that our approach outperforms known email retrieval ranking
approaches.
|
1011.0506
|
A Very Fast Algorithm for Matrix Factorization
|
stat.CO cs.IR physics.data-an stat.ML
|
We present a very fast algorithm for general matrix factorization of a data
matrix for use in the statistical analysis of high-dimensional data via latent
factors. Such data are prevalent across many application areas and generate an
ever-increasing demand for methods of dimension reduction in order to undertake
the statistical analysis of interest. Our algorithm uses a gradient-based
approach which can be used with an arbitrary loss function provided the latter
is differentiable. The speed and effectiveness of our algorithm for dimension
reduction is demonstrated in the context of supervised classification of some
real high-dimensional data sets from the bioinformatics literature.
|
1011.0519
|
Stabilizing knowledge through standards - A perspective for the
humanities
|
cs.CL
|
It is usual to consider that standards generate mixed feelings among
scientists. They are often seen as not really reflecting the state of the art
in a given domain and a hindrance to scientific creativity. Still, scientists
should in theory be best placed to bring their expertise into standard
development, being even more neutral on issues that may typically be related to
competing industrial interests. Even if developing standards in the humanities
may seem still more complex, we will show how this can be made feasible through
the experience gained both within the Text Encoding Initiative consortium and
the International Organisation for Standardisation. By taking the specific case
of lexical
resources, we will try to show how this brings about new ideas for designing
future research infrastructures in the human and social sciences.
|
1011.0520
|
Adaptive Algorithms for Coverage Control and Space Partitioning in
Mobile Robotic Networks
|
math.OC cs.RO
|
This paper considers deployment problems where a mobile robotic network must
optimize its configuration in a distributed way in order to minimize a
steady-state cost function that depends on the spatial distribution of certain
probabilistic events of interest. Moreover, it is assumed that the event
location distribution is a priori unknown, and can only be progressively
inferred from the observation of the actual event occurrences. Three classes of
problems are discussed in detail: coverage control problems, spatial
partitioning problems, and dynamic vehicle routing problems. In each case,
distributed stochastic gradient algorithms optimizing the performance objective
are presented. The stochastic gradient view simplifies and generalizes
previously proposed solutions, and is applicable to new complex scenarios, such
as adaptive coverage involving heterogeneous agents. Remarkably, these
algorithms often take the form of simple distributed rules that could be
implemented on resource-limited platforms.
|
1011.0596
|
Multiple View Reconstruction of Calibrated Images using Singular Value
Decomposition
|
cs.CV
|
Calibration in a multi-camera network has been widely studied since the early
days of photogrammetry. Many authors have presented calibration algorithms with
their relative advantages and disadvantages. In a stereovision system, multiple
view reconstruction is a challenging task; however, the complete computational
procedure has not been presented in detail before. In this work, we deal with
the problem that, when a world-coordinate point is fixed in space, the image
coordinates of that 3D point vary with camera position and orientation. From a
computer vision perspective, this is undesirable: the system has to be designed
so that the image coordinates of the world-coordinate point remain fixed
irrespective of the position and orientation of the cameras. We achieve this as
follows. First, the parameters of each camera are calculated in its local
coordinate system. Then, we use global coordinate data to transform all local
coordinate data of the stereo cameras into the same global coordinate system,
so that everything is registered in this global coordinate system. After these
transformations, the image coordinates computed for the world-coordinate point
have the same value for all camera positions and orientations; that is, the
whole system is calibrated.
|
1011.0628
|
Significance of Classification Techniques in Prediction of Learning
Disabilities
|
cs.AI
|
The aim of this study is to show the importance of two classification
techniques, viz. decision tree and clustering, in prediction of learning
disabilities (LD) of school-age children. LDs affect about 10 percent of all
children enrolled in schools. The problems of children with specific learning
disabilities have been a cause of concern to parents and teachers for some
time. Decision trees and clustering are powerful and popular tools used for
classification and prediction in data mining. Different rules extracted from
the decision tree are used for prediction of learning disabilities. Clustering
is the assignment of a set of observations into subsets, called clusters, which
are useful in finding the different signs and symptoms (attributes) present in
the LD-affected child. In this paper, the J48 algorithm is used for constructing
the decision tree and the K-means algorithm is used for creating the clusters. By
applying these classification techniques, LD in any child can be identified.
|
1011.0630
|
Inter-arrival times of message propagation on directed networks
|
cond-mat.dis-nn cs.NI cs.SI physics.soc-ph
|
One of the challenges in fighting cybercrime is to understand the dynamics of
message propagation on botnets, networks of infected computers used to send
viruses, unsolicited commercial emails (SPAM) or denial of service attacks. We
map this problem to the propagation of multiple random walkers on directed
networks and we evaluate the inter-arrival time distribution between successive
walkers arriving at a target. We show that the temporal organization of this
process, which models information propagation on unstructured peer to peer
networks, has the same features as SPAM arriving at a single user. We study the
behavior of the message inter-arrival time distribution on three different
network topologies using two different rules for sending messages. In all
networks the propagation is not a pure Poisson process. It shows universal
features on Poissonian networks and a more complex behavior on scale free
networks. These results open up the possibility of indirectly learning about
the process of sending messages on networks with unknown topology, by studying
inter-arrival times at any node of the network.
|
1011.0640
|
Lesion Border Detection in Dermoscopy Images
|
cs.CV
|
Background: Dermoscopy is one of the major imaging modalities used in the
diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty
and subjectivity of human interpretation, computerized analysis of dermoscopy
images has become an important research area. One of the most important steps
in dermoscopy image analysis is the automated detection of lesion borders.
Methods: In this article, we present a systematic overview of the recent border
detection methods in the literature paying particular attention to
computational issues and evaluation aspects. Conclusion: Common problems with
the existing approaches include the acquisition, size, and diagnostic
distribution of the test image set, the evaluation of the results, and the
inadequate description of the employed methods. Border determination by
dermatologists appears to depend upon higher-level knowledge, therefore it is
likely that the incorporation of domain knowledge in automated methods will
enable them to perform better, especially in sets of images with a variety of
diagnoses.
|
1011.0673
|
Modeling the structure and evolution of discussion cascades
|
physics.data-an cs.SI physics.soc-ph
|
We analyze the structure and evolution of discussion cascades in four popular
websites: Slashdot, Barrapunto, Meneame and Wikipedia. Despite the big
heterogeneities between these sites, a preferential attachment (PA) model with
bias to the root can capture the temporal evolution of the observed trees and
many of their statistical properties, namely, probability distributions of the
branching factors (degrees), subtree sizes and certain correlations. The
parameters of the model are learned efficiently using a novel maximum
likelihood estimation scheme for PA and provide a figurative interpretation
about the communication habits and the resulting discussion cascades on the
four different websites.
|
1011.0686
|
A Reduction of Imitation Learning and Structured Prediction to No-Regret
Online Learning
|
cs.LG cs.AI stat.ML
|
Sequential prediction problems such as imitation learning, where future
observations depend on previous predictions (actions), violate the common
i.i.d. assumptions made in statistical learning. This leads to poor performance
in theory and often in practice. Some recent approaches provide stronger
guarantees in this setting, but remain somewhat unsatisfactory as they train
either non-stationary or stochastic policies and require a large number of
iterations. In this paper, we propose a new iterative algorithm, which trains a
stationary deterministic policy, that can be seen as a no-regret algorithm in
an online learning setting. We show that any such no-regret algorithm, combined
with additional reduction assumptions, must find a policy with good performance
under the distribution of observations it induces in such sequential settings.
We demonstrate that this new approach outperforms previous approaches on two
challenging imitation learning problems and a benchmark sequence labeling
problem.
|
1011.0774
|
Leaders, Followers, and Community Detection
|
stat.ML cs.SI physics.soc-ph
|
Communities in social networks or graphs are sets of well-connected,
overlapping vertices. The effectiveness of a community detection algorithm is
determined by accuracy in finding the ground-truth communities and ability to
scale with the size of the data. In this work, we provide three contributions.
First, we show that a popular measure of accuracy known as the F1 score, which
lies between 0 and 1, with 1 being perfect detection, has a lower bound of 0.5:
we provide a trivial algorithm that produces communities with an F1 score of
0.5 for any graph. Somewhat surprisingly, we find that popular
algorithms such as modularity optimization, BigClam and CESNA have F1 scores
less than 0.5 for the popular IMDB graph. To rectify this, as the second
contribution we propose a generative model for community formation, the
sequential community graph, which is motivated by the formation of social
networks. Third, motivated by our generative model, we propose the
leader-follower algorithm (LFA). We prove that it recovers all communities for
sequential community graphs by establishing a structural result that sequential
community graphs are chordal. For a large number of popular social networks, it
recovers communities with a much higher F1 score than other popular algorithms.
For the IMDB graph, it obtains an F1 score of 0.81. We also propose a
modification to the LFA called the fast leader-follower algorithm (FLFA) which
in addition to being highly accurate, is also fast, with a scaling that is
almost linear in the network size.
|
1011.0786
|
Gaussian Process Techniques for Wireless Communications
|
cs.IT math.IT
|
Bayesian filtering is a general framework for recursively estimating the
state of a dynamical system. Classical solutions such as the Kalman filter and
the particle filter are introduced in this report. Gaussian processes have been
introduced as a non-parametric technique for system estimation from supervised
learning. For the thesis project, we intend to propose a new, general
methodology for inference and learning in non-linear state-space models,
probabilistically incorporating Gaussian process model estimation.
|
1011.0800
|
Soil Classification Using GATree
|
cs.NE
|
This paper details the application of a genetic programming framework to the
construction of a decision tree for classifying soil texture from soil data.
The database contains measurements of soil profile data. We have applied GATree
to generate the classification decision tree. GATree is a decision tree builder
based on Genetic Algorithms (GAs). The idea behind it is rather simple but
powerful: instead of using statistical metrics that are biased towards specific
trees, we use a more flexible, global metric of tree quality that tries to
optimize both accuracy and size. GATree offers some unique features not found
in other tree inducers, while at the same time it can produce better results
for many difficult problems. Experimental results are presented which
illustrate its performance in generating the best decision tree for classifying
soil texture on the soil data set.
|
1011.0835
|
A PDTB-Styled End-to-End Discourse Parser
|
cs.CL
|
We have developed a full discourse parser in the Penn Discourse Treebank
(PDTB) style. Our trained parser first identifies all discourse and
non-discourse relations, locates and labels their arguments, and then
classifies their relation types. When appropriate, the attribution spans to
these relations are also determined. We present a comprehensive evaluation from
both component-wise and error-cascading perspectives.
|
1011.0851
|
Tracking control with adaption of kites
|
math.OC cs.SY
|
A novel tracking paradigm for flying geometric trajectories using tethered
kites is presented. It is shown how the differential-geometric notion of
turning angle can be used as a one-dimensional representation of the kite
trajectory, and how this leads to a single-input single-output (SISO) tracking
problem. Based on this principle a Lyapunov-based nonlinear adaptive controller
is developed that only needs control derivatives of the kite aerodynamic model.
The resulting controller is validated using simulations with a point-mass kite
model.
|
1011.0935
|
Probabilistic Inferences in Bayesian Networks
|
cs.AI cs.NI
|
A Bayesian network is a complete model of a set of variables and their
relationships; it can be used to answer probabilistic queries about them. A
Bayesian network can thus be considered a mechanism for automatically applying
Bayes' theorem to complex problems. In applications of Bayesian networks, most
of the work concerns probabilistic inference. Updating any variable in any node
of a Bayesian network may cause evidence to propagate across the entire
network. This paper surveys inference techniques in Bayesian networks and
provides guidance for the algorithmic calculations involved in probabilistic
inference.
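As an illustration of the kind of inference such surveys cover, exact inference by enumeration on a toy rain/sprinkler network (our own example, not taken from the paper) can be sketched as:

```python
from itertools import product

# Toy network Rain -> WetGrass <- Sprinkler with illustrative CPTs
p_rain = {1: 0.2, 0: 0.8}
p_sprk = {1: 0.1, 0: 0.9}
p_wet = {(0, 0): 0.0, (1, 0): 0.8, (0, 1): 0.9, (1, 1): 0.99}  # P(W=1 | R, S)

def joint(r, s, w):
    """Joint probability P(R=r, S=s, W=w) via the chain rule."""
    pw = p_wet[(r, s)]
    return p_rain[r] * p_sprk[s] * (pw if w == 1 else 1.0 - pw)

def posterior_rain_given_wet():
    """Bayes' theorem applied by brute-force enumeration over the hidden S."""
    num = sum(joint(1, s, 1) for s in (0, 1))
    den = sum(joint(r, s, 1) for r, s in product((0, 1), repeat=2))
    return num / den
```

Enumeration is exponential in the number of hidden variables; the algorithms the paper surveys (e.g. message passing) exploit the network structure to avoid exactly this blow-up.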
|
1011.0950
|
Detecting Ontological Conflicts in Protocols between Semantic Web
Services
|
cs.AI
|
The task of verifying the compatibility between interacting web services has
traditionally been limited to checking the compatibility of the interaction
protocol in terms of message sequences and the type of data being exchanged.
Since web services are developed largely in an uncoordinated way, different
services often use independently developed ontologies for the same domain
instead of adhering to a single standard ontology. In this work we investigate
the approaches a server can take to verify whether a state with semantically
inconsistent results can be reached during the execution of a protocol with a
client, given that the client ontology is published. Often a database is used
to store the actual data alongside the ontologies, instead of storing the data
as part of the ontology description itself. It is important to observe that,
for the current state of the database, the semantic conflict state may not be
reachable even if the server's verification indicates the possibility of
reaching one. A relational-algebra-based decision procedure is therefore
developed to incorporate the current state of the client and server databases
into the overall verification procedure.
|
1011.0953
|
Overcoming Problems in the Measurement of Biological Complexity
|
cs.CE cs.NE nlin.AO q-bio.PE
|
In a genetic algorithm, fluctuations of the entropy of a genome over time are
interpreted as fluctuations of the information that the genome's organism is
storing about its environment, which is reflected in more complex organisms.
The computation of this entropy presents technical problems due to the small
population sizes used in practice. In this work we propose and test an
alternative way of measuring the entropy variation in a population by means of
algorithmic information theory, where the entropy variation between two
generational steps is the Kolmogorov complexity of the first step conditioned
to the second one. As an example application of this technique, we report
experimental differences in entropy evolution between systems in which sexual
reproduction is present or absent.
|
1011.0997
|
Performance Analysis of Spectral Clustering on Compressed, Incomplete
and Inaccurate Measurements
|
math.NA cs.CV math.FA stat.ML
|
Spectral clustering is one of the most widely used techniques for extracting
the underlying global structure of a data set. Compressed sensing and matrix
completion have emerged as prevailing methods for efficiently recovering sparse
and partially observed signals respectively. We combine the distance preserving
measurements of compressed sensing and matrix completion with the power of
robust spectral clustering. Our analysis provides rigorous bounds on how small
errors in the affinity matrix can affect the spectral coordinates and
clusterability. This work generalizes the current perturbation results of
two-class spectral clustering to incorporate multi-class clustering with k
eigenvectors. We thoroughly track how small perturbations from using compressed
sensing and matrix completion affect the affinity matrix and, in turn, the
spectral coordinates. These perturbation results for multi-class clustering
require an eigengap between the kth and (k+1)th eigenvalues of the affinity
matrix, which naturally occurs in data with k well-defined clusters. Our
theoretical guarantees are complemented with numerical results along with a
number of examples of the unsupervised organization and clustering of image
data.
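A minimal sketch of the spectral clustering pipeline such an analysis builds on, using a synthetic two-blob data set and the sign of the second Laplacian eigenvector in place of k-means (illustrative assumptions, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
# two well-separated 2-D blobs (synthetic stand-in for real data)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(5.0, 0.3, (20, 2))])

# Gaussian (RBF) affinity matrix with zeroed diagonal
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)
np.fill_diagonal(W, 0.0)

# symmetric normalized Laplacian  L = I - D^{-1/2} W D^{-1/2}
dinv = 1.0 / np.sqrt(W.sum(axis=1))
L = np.eye(len(X)) - dinv[:, None] * W * dinv[None, :]

# spectral coordinates come from the eigenvectors of the k smallest
# eigenvalues; for k = 2 the sign pattern of the second one separates clusters
eigvals, eigvecs = np.linalg.eigh(L)
labels = (eigvecs[:, 1] > 0).astype(int)
```

The eigengap condition in the abstract corresponds here to `eigvals[2] - eigvals[1]` being bounded away from zero, which is what makes `eigvecs[:, 1]` stable under small perturbations of `W`.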
|
1011.1035
|
Featureless 2D-3D Pose Estimation by Minimising an
Illumination-Invariant Loss
|
cs.CV
|
The problem of identifying the 3D pose of a known object from a given 2D
image has important applications in Computer Vision ranging from robotic vision
to image analysis. Our proposed method of registering a 3D model of a known
object on a given 2D photo of the object has numerous advantages over existing
methods: it requires neither prior training nor learning, nor knowledge of
the camera parameters, nor explicit point correspondences or matching features
between image and model. Unlike techniques that estimate a partial 3D pose (as
in an overhead view of traffic or machine parts on a conveyor belt), our method
estimates the complete 3D pose of the object, and works on a single static
image from a given view, and under varying and unknown lighting conditions. For
this purpose we derive a novel illumination-invariant distance measure between
2D photo and projected 3D model, which is then minimised to find the best pose
parameters. Results for vehicle pose detection are presented.
|
1011.1040
|
A parametric approach to list decoding of Reed-Solomon codes using
interpolation
|
cs.IT math.IT
|
In this paper we present a minimal list decoding algorithm for Reed-Solomon
(RS) codes. Minimal list decoding for a code $C$ refers to list decoding with
radius $L$, where $L$ is the minimum of the distances between the received word
$\mathbf{r}$ and any codeword in $C$. We consider the problem of determining
the value of $L$ as well as determining all the codewords at distance $L$. Our
approach involves a parametrization of interpolating polynomials of a minimal
Gr\"obner basis $G$. We present two efficient ways to compute $G$. We also show
that so-called re-encoding can be used to further reduce the complexity. We
then demonstrate how our parametric approach can be realized by a
computationally feasible rational curve fitting solution from a recent paper by
Wu. In addition, we present an algorithm to compute the minimum multiplicity,
as well as the optimal values of the parameters associated with this
multiplicity, which results in overall savings in both memory and computation.
|
1011.1043
|
Detecting Communities in Tripartite Hypergraphs
|
cs.SI physics.soc-ph
|
In social tagging systems, also known as folksonomies, users collaboratively
manage tags to annotate resources. Naturally, social tagging systems can be
modeled as a tripartite hypergraph, where there are three different types of
nodes, namely users, resources and tags, and each hyperedge has three end
nodes, connecting a user, a resource and a tag that the user employs to
annotate the resource. Then, how can we automatically detect user, resource and
tag communities from the tripartite hypergraph? In this paper, by turning the
problem into a problem of finding an efficient compression of the hypergraph's
structure, we propose a quality function for measuring the goodness of
partitions of a tripartite hypergraph into communities. We then develop a fast
community detection algorithm based on minimizing the quality function. We
explain the advantages of our method and validate it by comparing with various
state-of-the-art techniques on a set of synthetic datasets.
|
1011.1081
|
Evolution of Coordination in Social Networks: A Numerical Study
|
physics.soc-ph cs.SI
|
Coordination games are important to explain efficient and desirable social
behavior. Here we study these games by extensive numerical simulation on
networked social structures using an evolutionary approach. We show that local
network effects may promote selection of efficient equilibria in both pure and
general coordination games and may explain social polarization. These results
are put into perspective with respect to known theoretical results. The main
insight we obtain is that clustering, and especially community structure in
social networks has a positive role in promoting socially efficient outcomes.
|
1011.1124
|
Opinion formation and cyclic dominance in adaptive networks
|
nlin.AO cond-mat.dis-nn cs.SI physics.soc-ph
|
The Rock-Paper-Scissors(RPS) game is a paradigmatic model for cyclic
dominance in biological systems. Here we consider this game in the social
context of competition between opinions in a networked society. In our model,
every agent has an opinion which is drawn from the three choices: rock, paper
or scissors. In every timestep a link is selected randomly and the game is
played between the nodes connected by the link. The loser either adopts the
opinion of the winner or rewires the link. These rules define an adaptive
network on which the agents' opinions coevolve with the network topology of
social contacts. We show analytically and numerically that nonequilibrium phase
transitions occur as a function of the rewiring strength. The transitions
separate four distinct phases which differ in the observed dynamics of opinions
and topology. In particular, there is one phase in which the population settles
to an arbitrary consensus opinion. We present a detailed analysis of the
corresponding transitions, revealing an apparently paradoxical behavior: the
system approaches consensus states where they are unstable, whereas other
dynamics prevail when the consensus states are stable.
|
1011.1161
|
Multiarmed Bandit Problems with Delayed Feedback
|
cs.DS cs.LG
|
In this paper we initiate the study of optimization of bandit type problems
in scenarios where the feedback of a play is not immediately known. This arises
naturally in allocation problems which have been studied extensively in the
literature, albeit in the absence of delays in the feedback. We study this
problem in the Bayesian setting. In the presence of delays, no solution with
provable guarantees is known to exist with sub-exponential running time.
We show that bandit problems with delayed feedback that arise in allocation
settings can be forced to have significant structure, with a slight loss in
optimality. This structure gives us the ability to reason about the
relationship of single arm policies to the entangled optimum policy, and
eventually leads to an O(1) approximation for a significantly general class of
priors. The structural insights we develop are of key interest and carry over
to the setting where the feedback of an action is available instantaneously,
and we improve all previous results in this setting as well.
|
1011.1212
|
CplexA: a Mathematica package to study macromolecular-assembly control
of gene expression
|
q-bio.QM cond-mat.stat-mech cs.CE physics.bio-ph q-bio.MN
|
Summary: Macromolecular assembly coordinates essential cellular processes,
such as gene regulation and signal transduction. A major challenge for
conventional computational methods to study these processes is tackling the
exponential increase of the number of configurational states with the number of
components. CplexA is a Mathematica package that uses functional programming to
efficiently compute probabilities and average properties over such an
exponentially large number of states from the energetics of the interactions.
The package is particularly suited to study gene expression at complex
promoters controlled by multiple, local and distal, DNA binding sites for
transcription factors. Availability: CplexA is freely available together with
documentation at http://sourceforge.net/projects/cplexa/.
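A minimal sketch of the underlying thermodynamic computation (a toy two-site promoter with hypothetical energies, not CplexA's actual API):

```python
from math import exp

# Illustrative model: a promoter with two TF binding sites; free energies are
# in units of kT and the values are hypothetical.
dG1, dG2, dG12 = -2.0, -1.0, -1.5   # site energies and cooperative coupling

def occupancy_both():
    """Probability that both sites are bound, from a partition function over
    all 2^2 configurational states."""
    states = {            # each configurational state and its total energy
        (0, 0): 0.0,
        (1, 0): dG1,
        (0, 1): dG2,
        (1, 1): dG1 + dG2 + dG12,   # cooperativity only when both are bound
    }
    Z = sum(exp(-e) for e in states.values())   # partition function
    return exp(-states[(1, 1)]) / Z
```

With n sites the state space has 2^n configurations; the abstract's point is that a functional-programming enumeration of exactly such sums is what CplexA automates.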
|
1011.1225
|
On the Capacity of Multiple-Access-Z-Interference Channels
|
cs.IT math.IT
|
The capacity of a network in which a multiple access channel (MAC) generates
interference to a single-user channel is studied. An achievable rate region
based on superposition coding and joint decoding is established for the
discrete case. If the interference is very strong, the capacity region is
obtained for both the discrete memoryless channel and the Gaussian channel. For
the strong interference case, the capacity region is established for the
discrete memoryless channel; for the Gaussian case, we attain a line segment on
the boundary of the capacity region. Moreover, the capacity region for the
Gaussian channel is identified for the case where one interference link is
strong and the other is very strong. For a subclass of Gaussian channels
with mixed interference, a boundary point of the capacity region is determined.
Finally, for the Gaussian channel with weak interference, sum capacities are
obtained under various channel coefficient and power constraint conditions.
|
1011.1261
|
On the Saddle-point Solution and the Large-Coalition Asymptotics of
Fingerprinting Games
|
cs.IT cs.CR math.IT
|
We study a fingerprinting game in which the number of colluders and the
collusion channel are unknown. The encoder embeds fingerprints into a host
sequence and provides the decoder with the capability to trace back pirated
copies to the colluders.
Fingerprinting capacity has recently been derived as the limit value of a
sequence of maximin games with mutual information as their payoff functions.
However, these games generally do not admit saddle-point solutions and are very
hard to solve numerically. Here under the so-called Boneh-Shaw marking
assumption, we reformulate the capacity as the value of a single two-person
zero-sum game, and show that it is achieved by a saddle-point solution.
If the maximal coalition size is k and the fingerprinting alphabet is binary,
we show that capacity decays quadratically with k. Furthermore, we prove
rigorously that the asymptotic capacity is 1/(k^2 2ln2) and we confirm our
earlier conjecture that Tardos' choice of the arcsine distribution
asymptotically maximizes the mutual information payoff function while the
interleaving attack minimizes it. Along with the asymptotic behavior, numerical
solutions to the game for small k are also presented.
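The asymptotic statements in the abstract can be written compactly as:

```latex
% asymptotic capacity for coalitions of size k, binary fingerprinting alphabet
C_k \;\sim\; \frac{1}{k^{2}\, 2\ln 2} \qquad (k \to \infty),
% Tardos' arcsine bias density, the asymptotically optimal embedder choice
f_P(p) \;=\; \frac{1}{\pi\sqrt{p\,(1-p)}}, \qquad 0 < p < 1 .
```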
|
1011.1264
|
Equivalence of the Random Oracle Model and the Ideal Cipher Model,
Revisited
|
cs.CR cs.CC cs.IT math.IT
|
We consider the cryptographic problem of constructing an invertible random
permutation from a public random function (i.e., which can be accessed by the
adversary). This goal is formalized by the notion of indifferentiability of
Maurer et al. (TCC 2004). This is the natural extension to the public setting
of the well-studied problem of building random permutations from random
functions, which was first solved by Luby and Rackoff (Siam J. Comput., '88)
using the so-called Feistel construction.
The most important implication of such a construction is the equivalence of
the random oracle model (Bellare and Rogaway, CCS '93) and the ideal cipher
model, which is typically used in the analysis of several constructions in
symmetric cryptography.
Coron et al. (CRYPTO 2008) gave a rather involved proof that the six-round
Feistel construction with independent random round functions is
indifferentiable from an invertible random permutation. Also, it is known that
fewer than six rounds do not suffice for indifferentiability. The first
contribution (and starting point) of our paper is a concrete distinguishing
attack which shows that the indifferentiability proof of Coron et al. is not
correct. In addition, we provide supporting evidence that an
indifferentiability proof for the six-round Feistel construction may be very
hard to find.
To overcome this gap, our main contribution is a proof that the Feistel
construction with eighteen rounds is indifferentiable from an invertible
random permutation. The approach of our proof relies on assigning to each of
the rounds in the construction a unique and specific role needed in the proof.
This avoids many of the problems that appear in the six-round case.
|
1011.1293
|
Evolutionary Games defined at the Network Mesoscale: The Public Goods
game
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
The evolutionary dynamics of the Public Goods game addresses the emergence of
cooperation within groups of individuals. However, the Public Goods game on
large populations of interconnected individuals has been usually modeled
without any knowledge about their group structure. In this paper, by focusing
on collaboration networks, we show that it is possible to include the
mesoscopic information about the structure of the real groups by means of a
bipartite graph. We compare the results on the projected (coauthor) graph and
the original bipartite graph, and show that cooperation is enhanced by the
mesoscopic structure the latter contains. We conclude by analyzing the
influence of the
size of the groups in the evolutionary success of cooperation.
|
1011.1295
|
A Markovian Model for Joint Observations, Bell's Inequality and Hidden
States
|
cs.IT math.IT quant-ph
|
While the standard approach to quantum systems studies length preserving
linear transformations of wave functions, the Markov picture focuses on trace
preserving operators on the space of Hermitian (self-adjoint) matrices. The
Markov approach extends the standard one and provides a refined analysis of
measurements and quantum Markov chains. In particular, Bell's inequality
becomes structurally clear. It turns out that hidden state models are natural
in the Markov context. In particular, a violation of Bell's inequality is seen
to be compatible with the existence of hidden states. The Markov model moreover
clarifies the role of the "negative probabilities" in Feynman's analysis of the
EPR paradox.
|
1011.1296
|
Privately Releasing Conjunctions and the Statistical Query Barrier
|
cs.DS cs.CR cs.LG
|
Suppose we would like to know all answers to a set of statistical queries C
on a data set up to small error, but we can only access the data itself using
statistical queries. A trivial solution is to exhaustively ask all queries in
C. Can we do any better?
+ We show that the number of statistical queries necessary and sufficient for
this task is---up to polynomial factors---equal to the agnostic learning
complexity of C in Kearns' statistical query (SQ) model. This gives a complete
answer to the question when running time is not a concern.
+ We then show that the problem can be solved efficiently (allowing arbitrary
error on a small fraction of queries) whenever the answers to C can be
described by a submodular function. This includes many natural concept classes,
such as graph cuts and Boolean disjunctions and conjunctions.
While interesting from a learning theoretic point of view, our main
applications are in privacy-preserving data analysis:
Here, our second result leads to the first algorithm that efficiently
releases differentially private answers to all Boolean conjunctions with 1%
average error. This represents significant progress on a key open problem in
privacy-preserving data analysis.
Our first result on the other hand gives unconditional lower bounds on any
differentially private algorithm that admits a (potentially
non-privacy-preserving) implementation using only statistical queries. Not only
our algorithms, but also most known private algorithms can be implemented using
only statistical queries, and hence are constrained by these lower bounds. Our
result therefore isolates the complexity of agnostic learning in the SQ-model
as a new barrier in the design of differentially private algorithms.
|
1011.1348
|
Probabilistic Sinr Constrained Robust Transmit Beamforming: A
Bernstein-Type Inequality Based Conservative Approach
|
cs.IT math.IT
|
Recently, robust transmit beamforming has drawn considerable attention
because it can provide guaranteed receiver performance in the presence of
channel state information (CSI) errors. Assuming complex Gaussian distributed
CSI errors, this paper investigates the robust beamforming design problem that
minimizes the transmission power subject to probabilistic
signal-to-interference-plus-noise ratio (SINR) constraints. The probabilistic
SINR constraints in general have no closed-form expression and are difficult to
handle. Based on a Bernstein-type inequality of complex Gaussian random
variables, we propose a conservative formulation of the robust beamforming
design problem. The semidefinite relaxation technique can be applied to
efficiently handle the proposed conservative formulation. Simulation results
show that, in comparison with the existing methods, the proposed method is more
power efficient and is able to support higher target SINR values for receivers.
|
1011.1352
|
Average Sum-Rate of Distributed Alamouti Space--Time Scheme in Two-Way
Amplify-and-Forward Relay Networks
|
cs.IT math.IT
|
In this paper, we propose a distributed Alamouti space-time code (DASTC) for
two-way relay networks employing a single amplify-and-forward (AF) relay. We
first derive closed-form expressions for the approximate average sum-rate of
the proposed DASTC scheme. Our analysis is validated by a comparison against
the results of Monte-Carlo simulations. Numerical results verify the
effectiveness of our proposed scheme over the conventional DASTC with one-way
communication.
|
1011.1368
|
Transformation of Wiktionary entry structure into tables and relations
in a relational database schema
|
cs.IR
|
This paper addresses the question of automatic data extraction from the
Wiktionary, which is a multilingual and multifunctional dictionary. Wiktionary
is a collaborative project working on the same principles as Wikipedia. From a
text-processing point of view, a Wiktionary entry is plain text.
Wiktionary guidelines prescribe the entry layout and rules, which should be
followed by editors of the dictionary. The presence of this article structure
and of formatting rules makes it possible to transform the Wiktionary entry
structure into tables and relations in a relational database schema,
which is a part of a machine-readable dictionary (MRD). The paper describes how
the flat text of the Wiktionary entry was extracted, converted, and stored in
the specially designed relational database. The MRD contains the definitions,
semantic relations, and translations extracted from the English and Russian
Wiktionaries. The parser software is released under an open-source license
(GPL) to facilitate its dissemination, modification, and upgrading, and to draw
researchers and programmers into parsing other Wiktionaries beyond the Russian
and English ones.
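A deliberately simplified sketch of what such a relational schema for an MRD might look like (hypothetical table and column names; the actual schema is far richer), using SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE page        (id INTEGER PRIMARY KEY, page_title TEXT UNIQUE);
CREATE TABLE meaning     (id INTEGER PRIMARY KEY,
                          page_id INTEGER REFERENCES page(id),
                          definition TEXT);
CREATE TABLE relation    (id INTEGER PRIMARY KEY,
                          meaning_id INTEGER REFERENCES meaning(id),
                          type TEXT, target_title TEXT);  -- e.g. synonym
CREATE TABLE translation (id INTEGER PRIMARY KEY,
                          meaning_id INTEGER REFERENCES meaning(id),
                          lang TEXT, text TEXT);
""")
conn.execute("INSERT INTO page (page_title) VALUES ('cat')")
conn.execute("INSERT INTO meaning (page_id, definition) "
             "VALUES (1, 'a small domesticated felid')")
conn.execute("INSERT INTO translation (meaning_id, lang, text) "
             "VALUES (1, 'ru', 'кошка')")
# a query joining an entry to its translations, as an MRD consumer would
row = conn.execute("""SELECT p.page_title, t.lang, t.text
                      FROM page p
                      JOIN meaning m ON m.page_id = p.id
                      JOIN translation t ON t.meaning_id = m.id""").fetchone()
```

The normalization shown (page → meaning → relation/translation) mirrors the entry layout the guidelines prescribe, which is what makes the flat-text-to-schema transformation feasible.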
|
1011.1377
|
Construction of Network Error Correction Codes in Packet Networks
|
cs.IT math.IT
|
Recently, network error correction coding (NEC) has been studied extensively.
Several bounds in classical coding theory have been extended to network error
correction coding, especially the Singleton bound. In this paper, following the
research line using the extended global encoding kernels proposed in
\cite{zhang-correction}, the refined Singleton bound of NEC can be proved more
explicitly. Moreover, we give a constructive proof of the attainability of this
bound and indicate that the required field size for the existence of network
maximum distance separable (MDS) codes can be further reduced. Based on this
proof, an algorithm is proposed to construct general linear network error
correction codes including the linear network error correction MDS codes.
Finally, we study the error correction capability of random linear network
error correction coding. Motivated partly by the performance analysis of random
linear network coding \cite{Ho-etc-random}, we evaluate the different failure
probabilities defined in this paper in order to analyze the performance of
random linear network error correction coding. Several upper bounds on these
probabilities are obtained, and they show that these probabilities approach
zero as the size of the base field goes to infinity. Using these upper
bounds, we slightly improve on the probability mass function of the minimum
distance of random linear network error correction codes in
\cite{zhang-random}, as well as the upper bound on the field size required for
the existence of linear network error correction codes with degradation at most
$d$.
|
1011.1432
|
Modeling micro-macro pedestrian counterflow in heterogeneous domains
|
math-ph cs.SI math.MP physics.soc-ph
|
We present a micro-macro strategy able to describe the dynamics of crowds in
heterogeneous media. Herein we focus on the example of pedestrian counterflow.
The main working tools include the use of mass and porosity measures together
with their transport as well as suitable application of a version of
Radon-Nikodym Theorem formulated for finite measures. Finally, we illustrate
numerically our microscopic model and emphasize the effects produced by an
implicitly defined social velocity.
Keywords: Crowd dynamics; mass measures; porosity measure; social networks
|
1011.1478
|
Gradient Computation In Linear-Chain Conditional Random Fields Using The
Entropy Message Passing Algorithm
|
cs.AI
|
The paper proposes a numerically stable recursive algorithm for the exact
computation of the linear-chain conditional random field gradient. It operates
as a forward algorithm over the log-domain expectation semiring and has the
purpose of enhancing memory efficiency when applied to long observation
sequences. Unlike the traditional algorithm based on the forward-backward
recursions, the memory complexity of our algorithm does not depend on the
sequence length. Experiments on real data show that it can be useful for
problems that deal with long sequences.
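A minimal sketch of a log-domain forward recursion of the kind described, computing the log-partition function of a toy linear-chain model with O(K) memory (hypothetical scores; the paper's algorithm additionally propagates expectation-semiring values for the gradient):

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x))) -- the log-domain 'sum'."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# toy linear-chain scores: K = 2 labels, T = 4 positions (illustrative values)
T, K = 4, 2
emit = [[0.5, -0.3], [0.1, 0.2], [-0.4, 0.6], [0.3, 0.0]]   # emit[t][y]
trans = [[0.2, -0.1], [0.0, 0.4]]                            # trans[y_prev][y]

def log_partition():
    """Forward recursion entirely in the log domain; memory is O(K),
    independent of the sequence length T."""
    alpha = [emit[0][y] for y in range(K)]
    for t in range(1, T):
        alpha = [logsumexp([alpha[yp] + trans[yp][y] for yp in range(K)])
                 + emit[t][y] for y in range(K)]
    return logsumexp(alpha)
```

Because only the current `alpha` vector is kept, arbitrarily long sequences fit in constant memory, which is exactly the property the abstract highlights over forward-backward.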
|
1011.1503
|
Quantization using Compressive Sensing
|
cs.IT math.IT
|
The problem of compressing a real-valued sparse source using compressive
sensing techniques is studied. The rate distortion optimality of a coding
scheme in which compressively sensed signals are quantized and then
reconstructed is established when the reconstruction is also required to be
sparse. The result holds in general when the distortion constraint is on the
expected $p$-norm of the error between the source and the reconstruction. A new
restricted-isometry-like property is introduced for this purpose, and the
existence of matrices that satisfy this property is shown.
|
1011.1508
|
Forecast Bias Correction: A Second Order Method
|
cs.CE math.DS math.OC
|
The difference between a model forecast and actual observations is called
forecast bias. This bias is due to incomplete model assumptions and/or
poorly known parameter values and initial/boundary conditions. In this paper we
discuss a method for estimating corrections to parameters and initial
conditions that would account for the forecast bias. A set of simple
experiments with the logistic ordinary differential equation is performed,
using an iterated first-order variant of our method for comparison with its
second-order version.
|
1011.1518
|
Robust Matrix Decomposition with Outliers
|
stat.ML cs.LG math.NA
|
Suppose a given observation matrix can be decomposed as the sum of a low-rank
matrix and a sparse matrix (outliers), and the goal is to recover these
individual components from the observed sum. Such additive decompositions have
applications in a variety of numerical problems including system
identification, latent variable graphical modeling, and principal components
analysis. We study conditions under which recovering such a decomposition is
possible via a combination of $\ell_1$ norm and trace norm minimization. We are
specifically interested in the question of how many outliers are allowed so
that convex programming can still achieve accurate recovery, and we obtain
stronger recovery guarantees than previous studies. Moreover, we do not assume
that the spatial pattern of outliers is random, which stands in contrast to
related analyses under such assumptions via matrix completion.
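The decomposition program referred to can be written in a standard convex form (with $M$ the observation, $L$ the low-rank part, $S$ the sparse outlier part, and $\lambda$ a trade-off weight; the paper's exact formulation may differ):

```latex
\min_{L,\,S}\ \|L\|_{*} + \lambda\,\|S\|_{1}
\quad \text{subject to} \quad L + S = M ,
```

where $\|\cdot\|_{*}$ is the trace (nuclear) norm, the convex surrogate for rank, and $\|\cdot\|_{1}$ is the entrywise $\ell_1$ norm, the convex surrogate for sparsity.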
|
1011.1519
|
Fuzzy Controller for Matrix Converter System to Improve its Quality of
Output
|
cs.SY
|
In this paper, a fuzzy logic controller is developed for an AC/AC matrix
converter, and the total harmonic distortion is reduced significantly. The
space vector algorithm is one method to improve the power quality of the
converter output, but its quality is limited to 86.7%. We introduce a
cross-coupled DQ-axis controller to improve power quality. The matrix converter
is an attractive topology for a high voltage transformation ratio. A
Matlab/Simulink simulation analysis of the matrix converter system is provided,
and the design and implementation of the fuzzy-controlled matrix converter is
described. This AC-AC system is proposed as an effective replacement for the
conventional AC-DC-AC system, which employs a two-step power conversion.
|
1011.1539
|
Cycle structure of permutation functions over finite fields and their
applications
|
cs.IT math.IT
|
In this work we establish some new interleavers based on permutation
functions. The inverses of these interleavers are known over a finite field
$\mathbb{F}_q$. For the first time M\"{o}bius and R\'edei functions are used to
give new deterministic interleavers. Furthermore we employ Skolem sequences in
order to find new interleavers with known cycle structure. In the case of
R\'edei functions an exact formula for the inverse function is derived. The
cycle structure of R\'edei functions is also investigated. The self-inverse and
non-self-inverse versions of these permutation functions can be used to
construct new interleavers.
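A small sketch of computing the cycle structure of a permutation of a finite field, using the monomial map x ↦ x³ over F₁₁ as a stand-in (Rédei and Möbius functions need more machinery; this is our own illustration of the underlying notion):

```python
def cycle_structure(perm):
    """Sorted cycle lengths of a permutation given as a dict i -> perm(i)."""
    seen, lengths = set(), []
    for start in perm:
        if start in seen:
            continue
        n, x = 0, start
        while x not in seen:   # walk the cycle through 'start'
            seen.add(x)
            x = perm[x]
            n += 1
        lengths.append(n)
    return sorted(lengths)

# x -> x^3 is a permutation of F_11 because gcd(3, 11 - 1) = 1
p = 11
perm = {x: pow(x, 3, p) for x in range(p)}
```

For interleaver design the cycle lengths matter directly: fixed points and short cycles of the permutation determine, for instance, whether it is self-inverse.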
|
1011.1547
|
Being Rational or Aggressive? A Revisit to Dunbar's Number in Online
Social Networks
|
cs.SI physics.soc-ph
|
Recent years have witnessed the explosion of online social networks (OSNs).
They provide powerful IT innovations for online social activities such as
organizing contacts, publishing content, and sharing interests between friends
who may never have met before. As more and more people become active users of
online social networks, one may ponder questions such as: (1) Do OSNs indeed
improve our sociability? (2) To what extent can we expand our offline social
spectrum in OSNs? (3) Can we identify some interesting user behaviors in OSNs?
Our work in this paper aims to answer these questions. To this end, we revisit
the well-known Dunbar's number in online social networks. Our main research
contributions are as follows. First, to the best of our knowledge, our work is
the first to systematically validate the existence of an online Dunbar's number
in the range of [200,300]. To establish this, we combine local-structure
analysis with user-interaction analysis on extensive real-world OSNs. Second,
we divide OSN users into two categories, rational and aggressive, and find that
rational users tend to
develop close and reciprocated relationships, whereas aggressive users have no
consistent behaviors. Third, we build a simple model to capture the constraints
of time and cognition that affect the evolution of online social networks.
Finally, we show the potential use of our findings in viral marketing and
privacy management in online social networks.
|
1011.1549
|
Multivariate vector sampling expansions in shift invariant subspaces
|
cs.IT math.IT
|
In this paper, we study multivariate vector sampling expansions on general
finitely generated shift-invariant subspaces. Necessary and sufficient
conditions for a multivariate vector sampling theorem to hold are given.
|
1011.1566
|
Robust Rate-Maximization Game Under Bounded Channel Uncertainty
|
cs.IT math.IT
|
We consider the problem of decentralized power allocation for competitive
rate-maximization in a frequency-selective Gaussian interference channel under
bounded channel uncertainty. We formulate a distribution-free robust framework
for the rate-maximization game. We present the robust-optimization equilibrium
for this game and derive sufficient conditions for its existence and
uniqueness. We show that an iterative waterfilling algorithm converges to this
equilibrium under certain sufficient conditions. We analyse the social
properties of the equilibrium under varying channel uncertainty bounds for the
two-user case. We also observe an interesting phenomenon that the equilibrium
moves towards a frequency-division multiple access solution for any set of
channel coefficients under increasing channel uncertainty bounds. We further
prove that increasing channel uncertainty can lead to a more efficient
equilibrium, and hence, a better sum rate in certain two-user communication
systems. Finally, we confirm through simulations that this improvement in
equilibrium efficiency is also observed in systems with a higher number of
users.
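The best response in such rate-maximization games is classically computed by
water-filling over the subchannels. A minimal single-user sketch (the function
name and the bisection scheme are our own illustration, not the paper's exact
robust algorithm):

```python
import numpy as np

def waterfill(inv_gains, power_budget, tol=1e-12):
    """Classical water-filling power allocation (illustrative helper).

    inv_gains[k] is the noise-to-gain ratio of subchannel k; the powers
    are p_k = max(mu - inv_gains[k], 0), with the water level mu chosen
    by bisection so that sum(p_k) == power_budget.
    """
    inv_gains = np.asarray(inv_gains, dtype=float)
    lo, hi = inv_gains.min(), inv_gains.max() + power_budget
    while hi - lo > tol:  # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv_gains, 0.0).sum() > power_budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - inv_gains, 0.0)

# Three subchannels: the worst one (ratio 2.0) receives no power here.
p = waterfill([0.1, 0.5, 2.0], power_budget=1.0)
```

In the game-theoretic setting of the paper, each user iteratively applies such
a step against the interference produced by the others.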
|
1011.1576
|
Online Importance Weight Aware Updates
|
cs.LG
|
An importance weight quantifies the relative importance of one example over
another, coming up in applications of boosting, asymmetric classification
costs, reductions, and active learning. The standard approach for dealing with
importance weights in gradient descent is via multiplication of the gradient.
We first demonstrate the problems of this approach when importance weights are
large, and argue in favor of more sophisticated ways for dealing with them. We
then develop an approach which enjoys an invariance property: that updating
twice with importance weight $h$ is equivalent to updating once with importance
weight $2h$. For many important losses this has a closed form update which
satisfies standard regret guarantees when all examples have $h=1$. We also
briefly discuss two other reasonable approaches for handling large importance
weights. Empirically, these approaches yield substantially superior prediction
with similar computational performance while reducing the sensitivity of the
algorithm to the exact setting of the learning rate. We apply these to online
active learning yielding an extraordinarily fast active learning algorithm that
works even in the presence of adversarial noise.
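For squared loss, the invariance property described above has a simple closed
form: following the gradient flow for "time" $h\eta$ moves the prediction as
$p' = y + (p-y)e^{-h\eta\,x^\top x}$. A sketch under that assumption (names
and the default learning rate are ours, not necessarily the paper's exact
formulation):

```python
import numpy as np

def iw_update(w, x, y, h, eta=0.1):
    """Importance-aware update for squared loss: instead of scaling the
    gradient by h (which overshoots for large h), integrate the gradient
    flow, giving p_new = y + (p - y) * exp(-h * eta * ||x||^2)."""
    p = w @ x
    scale = (np.exp(-h * eta * (x @ x)) - 1.0) * (p - y) / (x @ x)
    return w + scale * x

# Invariance check: two updates with weight 3 equal one update with weight 6.
w = np.zeros(3)
x = np.array([1.0, 2.0, 0.0])
twice = iw_update(iw_update(w, x, 1.0, h=3.0), x, 1.0, h=3.0)
once = iw_update(w, x, 1.0, h=6.0)
```

Note that the prediction approaches the label but never crosses it, however
large h is, which is the safety property the multiplicative update lacks.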
|
1011.1581
|
Asymptotic Synchronization for Finite-State Sources
|
nlin.CD cs.IT math.DS math.IT stat.ML
|
We extend a recent synchronization analysis of exact finite-state sources to
nonexact sources for which synchronization occurs only asymptotically. Although
the proof methods are quite different, the primary results remain the same. We
find that an observer's average uncertainty in the source state vanishes
exponentially fast and, as a consequence, an observer's average uncertainty in
predicting future output converges exponentially fast to the source entropy
rate.
|
1011.1607
|
To Feed or Not to Feed Back
|
cs.IT math.IT
|
We study the communication over Finite State Channels (FSCs), where the
encoder and the decoder can control the availability or the quality of the
noise-free feedback. Specifically, the instantaneous feedback is a function of
an action taken by the encoder, an action taken by the decoder, and the channel
output. Encoder and decoder actions take values in finite alphabets, and may be
subject to average cost constraints. We prove capacity results for such a
setting by constructing a sequence of achievable rates, using a simple scheme
based on 'code tree' generation, that generates channel input symbols along
with encoder and decoder actions. We prove that the limit of this sequence
exists. For a given block length and probability of error, we give an upper
bound on the maximum achievable rate. Our upper and lower bounds coincide and
hence yield the capacity for the case where the probability of initial state is
positive for all states. Further, for stationary indecomposable channels
without intersymbol interference (ISI), the capacity is given as the limit of
normalized directed information between the input and output sequence,
maximized over an appropriate set of causally conditioned distributions. As an
important special case, we consider the framework of 'to feed or not to feed
back' where either the encoder or the decoder takes binary actions, which
determine whether current channel output will be fed back to the encoder, with
a constraint on the fraction of channel outputs that are fed back. As another
special case of our framework, we characterize the capacity of 'coding on the
backward link' in FSCs, i.e. when the decoder sends limited-rate instantaneous
coded noise-free feedback on the backward link. Finally, we propose an
extension of the Blahut-Arimoto algorithm for evaluating the capacity when
actions can be cost constrained, and demonstrate its application on a few
examples.
|
1011.1660
|
Reinforcement Learning Based on Active Learning Method
|
cs.AI
|
In this paper, a new reinforcement learning approach is proposed which is
based on a powerful concept named Active Learning Method (ALM) in modeling. ALM
expresses any multi-input-single-output system as a fuzzy combination of some
single-input-single-output systems. The proposed method is an actor-critic
system similar to Generalized Approximate Reasoning based Intelligent Control
(GARIC) structure to adapt the ALM by delayed reinforcement signals. Our system
uses Temporal Difference (TD) learning to model the behavior of useful actions
of a control system. The goodness of an action is modeled on a reward-penalty
plane, and the IDS planes are updated according to this plane. It is shown
that the system can learn with a predefined fuzzy system or without it (through
random actions).
|
1011.1662
|
A New Sufficient Condition for 1-Coverage to Imply Connectivity
|
cs.AI
|
An effective approach for energy conservation in wireless sensor networks is
scheduling sleep intervals for extraneous nodes while the remaining nodes stay
active to provide continuous service. For the sensor network to operate
successfully, the active nodes must maintain both sensing coverage and network
connectivity. It was previously proved that if the communication range of the
nodes is at least twice the sensing range, complete coverage of a convex area
implies connectivity among the working set of nodes. In this paper we consider
a rectangular region $A = a \times b$ with $R_s \le a$ and $R_s \le b$, where
$R_s$ is the sensing range of the nodes, and place a constraint on the minimum
allowed distance between nodes. Under this constraint we present a new, smaller
lower bound on the communication range relative to the sensing range $R_s$
such that complete coverage of the considered area still implies connectivity
among the working set of nodes. We also present a new distribution method that
satisfies our constraint.
|
1011.1677
|
Convergence Rate Analysis of Distributed Gossip (Linear Parameter)
Estimation: Fundamental Limits and Tradeoffs
|
cs.IT math.IT math.OC math.PR
|
The paper considers gossip distributed estimation of a (static) distributed
random field (a.k.a., large scale unknown parameter vector) observed by
sparsely interconnected sensors, each of which only observes a small fraction
of the field. We consider linear distributed estimators whose structure
combines the information \emph{flow} among sensors (the \emph{consensus} term
resulting from the local gossiping exchange among sensors when they are able to
communicate) and the information \emph{gathering} measured by the sensors (the
\emph{sensing} or \emph{innovations} term.) This leads to mixed time scale
algorithms--one time scale associated with the consensus and the other with the
innovations. The paper establishes a distributed observability condition
(global observability plus mean connectedness) under which the distributed
estimates are consistent and asymptotically normal. We introduce the
distributed notion equivalent to the (centralized) Fisher information rate,
which is a bound on the mean square error reduction rate of any distributed
estimator; we show that under the appropriate modeling and structural network
communication conditions (gossip protocol) the distributed gossip estimator
attains this distributed Fisher information rate, asymptotically achieving the
performance of the optimal centralized estimator. Finally, we study the
behavior of the distributed gossip estimator when the measurements fade (the
noise variance grows) with time; in particular, we characterize the maximum
rate at which the noise variance can grow while the distributed estimator
remains consistent, showing that, as long as the centralized estimator is
consistent, the distributed estimator is consistent as well.
|
1011.1701
|
Analytical Solution of Covariance Evolution for Irregular LDPC Codes
|
cs.IT math.IT
|
A scaling law developed by Amraoui et al. is a powerful technique to estimate
the block error probability of finite length low-density parity-check (LDPC)
codes. Solving a system of differential equations called covariance evolution
is a method to obtain the scaling parameter. However, the covariance evolution
has not been analytically solved. In this paper, we present the analytical
solution of the covariance evolution for irregular LDPC code ensembles.
|
1011.1703
|
Point process modeling for directed interaction networks
|
stat.ME cs.SI math.ST stat.TH
|
Network data often take the form of repeated interactions between senders and
receivers tabulated over time. A primary question to ask of such data is which
traits and behaviors are predictive of interaction. To answer this question, a
model is introduced for treating directed interactions as a multivariate point
process: a Cox multiplicative intensity model using covariates that depend on
the history of the process. Consistency and asymptotic normality are proved for
the resulting partial-likelihood-based estimators under suitable regularity
conditions, and an efficient fitting procedure is described. Multicast
interactions--those involving a single sender but multiple receivers--are
treated explicitly. The resulting inferential framework is then employed to
model message sending behavior in a corporate e-mail network. The analysis
gives a precise quantification of which static shared traits and dynamic
network effects are predictive of message recipient selection.
|
1011.1716
|
Least Squares Ranking on Graphs
|
cs.NA cs.LG math.NA
|
Given a set of alternatives to be ranked, and some pairwise comparison data,
ranking is a least squares computation on a graph. The vertices are the
alternatives, and the edge values comprise the comparison data. The basic idea
is very simple and old: come up with values on vertices such that their
differences match the given edge data. Since an exact match will usually be
impossible, one settles for matching in a least squares sense. This formulation
was first described by Leake in 1976 for ranking football teams and appears as
an example in Professor Gilbert Strang's classic linear algebra textbook. If
one is willing to look into the residual a little further, then the problem
really comes alive, as shown effectively by the remarkable recent paper of
Jiang et al. With or without this twist, the humble least squares problem on
graphs has far-reaching connections with many current areas of research. These
connections are to theoretical computer science (spectral graph theory, and
multilevel methods for graph Laplacian systems); numerical analysis (algebraic
multigrid, and finite element exterior calculus); other mathematics (Hodge
decomposition, and random clique complexes); and applications (arbitrage, and
ranking of sports teams). Not all of these connections are explored in this
paper, but many are. The underlying ideas are easy to explain, requiring only
the four fundamental subspaces from elementary linear algebra. One of our aims
is to explain these basic ideas and connections, to get researchers in many
fields interested in this topic. Another aim is to use our numerical
experiments for guidance on selecting methods and exposing the need for further
development.
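The basic least-squares formulation described above can be stated in a few
lines: build the edge-vertex incidence matrix of the comparison graph and
solve for vertex values whose differences best match the edge data (function
and variable names are our own illustration):

```python
import numpy as np

def ls_rank(n, comparisons):
    """Least-squares ratings on a comparison graph.

    comparisons: list of (i, j, d) meaning "alternative i beats j by d",
    i.e. we ask r[i] - r[j] ~= d in the least-squares sense.  Ratings are
    determined only up to an additive constant, so we pin the mean to 0.
    """
    rows, rhs = [], []
    for i, j, d in comparisons:
        row = np.zeros(n)          # one incidence row per comparison
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
        rhs.append(d)
    r, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return r - r.mean()

# Consistent data: r[0]-r[1]=1, r[1]-r[2]=1, r[0]-r[2]=2.
r = ls_rank(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)])
```

With inconsistent data the residual of this system is exactly the object
whose further decomposition (Hodge theory) the paper discusses.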
|
1011.1738
|
Regulating Response Time in an Autonomic Computing System: A Comparison
of Proportional Control and Fuzzy Control Approaches
|
cs.SY
|
E-commerce is an area where an Autonomic Computing system could be deployed
very effectively. E-commerce has created demand for high-quality information
technology services and businesses are seeking quality of service guarantees
from their service providers. These guarantees are expressed as part of service
level agreements. Properly adjusting tuning parameters for enforcement of the
service level agreement is time-consuming and skills-intensive. Moreover, in
case of changes to the workload, the setting of the parameters may no longer be
optimum. In an e-commerce system, where the workload changes frequently, there
is a need to update the parameters at regular intervals. This paper describes
two approaches, one using a proportional controller and the other using a fuzzy
controller, to automate the tuning of the MaxClients parameter of the Apache
web server based on the required response time and the current workload. This
is an
illustration of the self-optimizing characteristic of an autonomic computing
system.
|
1011.1841
|
Fundamentals of Mathematical Theory of Emotional Robots
|
cs.RO cs.AI
|
In this book we introduce a mathematically formalized concept of emotion,
robot's education and other psychological parameters of intelligent robots. We
also introduce unitless coefficients characterizing an emotional memory of a
robot. Besides, the effect of a robot's memory upon its emotional behavior is
studied, and theorems defining fellowship and conflicts in groups of robots are
proved. Also unitless parameters describing emotional states of those groups
are introduced, and a rule of making alternative (binary) decisions based on
emotional selection is given. We introduce a concept of equivalent educational
process for robots and a concept of efficiency coefficient of an educational
process, and suggest an algorithm of emotional contacts within a group of
robots. More generally, we present and describe a model of a virtual reality
with emotional robots. The book is meant for mathematical modeling specialists
and emotional robot software developers.
|
1011.1868
|
Asymptotically Optimal Randomized Rumor Spreading
|
cs.DS cs.SI
|
We propose a new protocol solving the fundamental problem of disseminating a
piece of information to all members of a group of n players. It builds upon the
classical randomized rumor spreading protocol and several extensions. The main
achievements are the following:
Our protocol spreads the rumor to all other nodes in the asymptotically
optimal time of (1 + o(1)) \log_2 n. The whole process can be implemented in a
way such that only O(n f(n)) calls are made, where f(n)= \omega(1) can be
arbitrary.
In contrast to other protocols suggested in the literature, our algorithm
only uses push operations, i.e., only informed nodes take active actions in the
network. To the best of our knowledge, this is the first randomized push
algorithm that achieves an asymptotically optimal running time.
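The push operation underlying such protocols is easy to simulate on the
complete graph: in each round every informed node calls one node chosen
uniformly at random and pushes the rumor. A minimal sketch (this is the
classical baseline the paper builds on, not the paper's optimized protocol):

```python
import random

def push_rounds(n, seed=0):
    """Simulate classical push rumor spreading on the complete graph
    with n nodes and return the number of rounds until everyone is
    informed.  Growth is at most a doubling per round, so at least
    log2(n) rounds are always needed."""
    rng = random.Random(seed)
    informed = {0}          # node 0 starts with the rumor
    rounds = 0
    while len(informed) < n:
        # every informed node pushes to one uniformly random node
        informed |= {rng.randrange(n) for _ in range(len(informed))}
        rounds += 1
    return rounds

rounds = push_rounds(1000)
```

The paper's contribution is to keep essentially the doubling-phase speed of
this process while reducing the total number of calls to O(n f(n)).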
|
1011.1876
|
Statistical mechanics of digital halftoning
|
cond-mat.dis-nn cs.CV physics.comp-ph
|
We consider the problem of digital halftoning from the viewpoint of
statistical mechanics. Digital halftoning is a form of image processing that
represents each grayscale in terms of black and white binary dots.
Halftoning is achieved by means of a threshold mask: for each pixel, the
halftoned binary pixel is set to black if the original grayscale pixel is
greater than or equal to the mask value, and to white otherwise. To determine
the optimal value of the mask at each pixel for a given original grayscale
image, we first assume that human eyes recognize the black and white binary
halftoned image as the corresponding grayscale one through linear filters. The
Hamiltonian is constructed
as a distance between the original and the recognized images which is written
in terms of the threshold mask. We confirm that the system described by
the Hamiltonian is regarded as a kind of antiferromagnetic Ising model with
quenched disorders. By searching the ground state of the Hamiltonian, we obtain
the optimal threshold mask and the resulting halftoned binary dots
simultaneously. From the power-spectrum analysis, we find that the binary dot
image is physiologically plausible from the viewpoint of human-eye modulation
properties. We also propose a theoretical framework to investigate the
statistical
performance of inverse digital halftoning, that is, the inverse process of
halftoning. From the Bayesian-inference viewpoint, we rigorously show that the
Bayes-optimal inverse-halftoning is achieved on a specific condition which is
very similar to the so-called Nishimori line in the research field of spin
glasses.
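The threshold-mask rule described above (black where grayscale meets or
exceeds the tiled mask value) is a one-liner; here it is with a simple
2x2 Bayer-style ordered-dither mask standing in for the optimized mask the
paper derives:

```python
import numpy as np

def halftone(gray, mask):
    """Threshold-mask halftoning: tile the mask over the image and set a
    pixel to black (1) where gray >= mask, white (0) otherwise."""
    h, w = gray.shape
    mh, mw = mask.shape
    tiled = np.tile(mask, (h // mh + 1, w // mw + 1))[:h, :w]
    return (gray >= tiled).astype(np.uint8)

# 2x2 ordered-dither mask with thresholds 0.125, 0.625, 0.875, 0.375;
# a flat 50% gray then halftones to a checkerboard-like pattern.
bayer = (np.array([[0, 2], [3, 1]]) + 0.5) / 4.0
dots = halftone(np.full((4, 4), 0.5), bayer)
```

The paper's contribution is precisely to replace such a fixed mask by one
obtained as the ground state of the constructed Hamiltonian.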
|
1011.1933
|
Shortened Hamming Codes Maximizing Double Error Detection
|
cs.DM cs.IT math.IT
|
Given $r\geq 3$ and $2^{r-1}+1\leq n< 2^{r}-1$, an $[n,n-r,3]$ shortened
Hamming code that can detect a maximal number of double errors is constructed.
The optimality of the construction is proven.
|
1011.1936
|
Blackwell Approachability and Low-Regret Learning are Equivalent
|
cs.LG cs.GT
|
We consider the celebrated Blackwell Approachability Theorem for two-player
games with vector payoffs. We show that Blackwell's result is equivalent, via
efficient reductions, to the existence of "no-regret" algorithms for Online
Linear Optimization. Indeed, we show that any algorithm for one such problem
can be efficiently converted into an algorithm for the other. We provide a
useful application of this reduction: the first efficient algorithm for
calibrated forecasting.
|
1011.1939
|
Discrete Partitioning and Coverage Control for Gossiping Robots
|
cs.RO cs.SY math.OC
|
We propose distributed algorithms to automatically deploy a team of mobile
robots to partition and provide coverage of a non-convex environment. To handle
arbitrary non-convex environments, we represent them as graphs. Our
partitioning and coverage algorithm requires only short-range, unreliable
pairwise "gossip" communication. The algorithm has two components: (1) a motion
protocol to ensure that neighboring robots communicate at least sporadically,
and (2) a pairwise partitioning rule to update territory ownership when two
robots communicate. By studying an appropriate dynamical system on the space of
partitions of the graph vertices, we prove that territory ownership converges
to a pairwise-optimal partition in finite time. This new equilibrium set
represents improved performance over common Lloyd-type algorithms.
Additionally, we detail how our algorithm scales well for large teams in large
environments and how the computation can run as an anytime algorithm with
limited resources.
Finally, we report on large-scale simulations in complex environments and
hardware experiments using the Player/Stage robot control system.
|
1011.1970
|
Using Model-based Overlapping Seed Expansion to detect highly
overlapping community structure
|
physics.soc-ph cs.SI stat.ML
|
As research into community finding in social networks progresses, there is a
need for algorithms capable of detecting overlapping community structure. Many
algorithms have been proposed in recent years that are capable of assigning
each node to more than a single community. The performance of these algorithms
tends to degrade when the ground-truth contains a more highly overlapping
community structure, with nodes assigned to more than two communities. Such
highly overlapping structure is likely to exist in many social networks, such
as Facebook friendship networks. In this paper we present a scalable algorithm,
MOSES, based on a statistical model of community structure, which is capable of
detecting highly overlapping community structure, especially when there is
variance in the number of communities each node is in. In evaluation on
synthetic data MOSES is found to be superior to existing algorithms, especially
at high levels of overlap. We demonstrate MOSES on real social network data by
analyzing the networks of friendship links between students of five US
universities.
|
1011.1972
|
Assisted Entanglement Distillation
|
quant-ph cs.IT math.IT
|
Motivated by the problem of designing quantum repeaters, we study
entanglement distillation between two parties, Alice and Bob, starting from a
mixed state and with the help of "repeater" stations. To treat the case of a
single repeater, we extend the notion of entanglement of assistance to
arbitrary mixed tripartite states and exhibit a protocol, based on a random
coding strategy, for extracting pure entanglement. The rates achievable by this
protocol formally resemble those achievable if the repeater station could merge
its state to one of Alice and Bob even when such merging is impossible. This
rate is provably better than the hashing bound for sufficiently pure tripartite
states. We also compare our assisted distillation protocol to a hierarchical
strategy consisting of entanglement distillation followed by entanglement
swapping. We demonstrate by the use of a simple example that our random
measurement strategy outperforms hierarchical distillation strategies when the
individual helper stations' states fail to individually factorize into portions
associated specifically with Alice and Bob. Finally, we use these results to
find achievable rates for the more general scenario, where many spatially
separated repeaters help two recipients distill entanglement.
|
1011.1974
|
One-shot Multiparty State Merging
|
quant-ph cs.IT math.IT
|
We present a protocol for performing state merging when multiple parties
share a single copy of a mixed state, and analyze the entanglement cost in
terms of min- and max-entropies. Our protocol allows for interpolation between
corner points of the rate region without the need for time-sharing, a primitive
which is not available in the one-shot setting. We also compare our protocol to
the more naive strategy of repeatedly applying a single-party merging protocol
one party at a time, by performing a detailed analysis of the rates required to
merge variants of the embezzling states. Finally, we analyze a variation of
multiparty merging, which we call split-transfer, by considering two receivers
and many additional helpers sharing a mixed state. We give a protocol for
performing a split-transfer and apply it to the problem of assisted
entanglement distillation.
|
1011.2009
|
Comparison of Spearman's rho and Kendall's tau in Normal and
Contaminated Normal Models
|
cs.IT math.IT
|
This paper analyzes the performances of the Spearman's rho (SR) and Kendall's
tau (KT) with respect to samples drawn from bivariate normal and bivariate
contaminated normal populations. The exact analytical formulae of the variance
of SR and the covariance between SR and KT are obtained based on the Childs's
reduction formula for the quadrivariate normal positive orthant probabilities.
Close form expressions with respect to the expectations of SR and KT are
established under the bivariate contaminated normal models. The bias, mean
square error (MSE) and asymptotic relative efficiency (ARE) of the three
estimators based on SR and KT to the Pearson's product moment correlation
coefficient (PPMCC) are investigated in both the normal and contaminated normal
models. Theoretical and simulation results suggest that, contrary to the
presumed equivalence between SR and KT in some of the literature, the
behaviors of
SR and KT are strikingly different in the aspects of bias effect, variance,
mean square error, and asymptotic relative efficiency. The new findings
revealed in this work provide not only deeper insights into the two most widely
used rank based correlation coefficients, but also a guidance for choosing
which one to use under the circumstances where the PPMCC fails to apply.
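The two rank correlation coefficients compared in the paper are
straightforward to compute from a tie-free sample; a self-contained sketch:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / C(n, 2)."""
    conc = sum((xi - xj) * (yi - yj) > 0
               for (xi, yi), (xj, yj) in combinations(zip(x, y), 2))
    disc = sum((xi - xj) * (yi - yj) < 0
               for (xi, yi), (xj, yj) in combinations(zip(x, y), 2))
    n = len(x)
    return (conc - disc) / (n * (n - 1) / 2)

def spearman_rho(x, y):
    """Spearman's rho for samples without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Swapping one adjacent pair in a monotone sequence of length 5, for example,
gives tau = 0.8 but rho = 0.9, a small illustration of why the two statistics
can behave differently.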
|
1011.2078
|
Design and Analysis of LT Codes with Decreasing Ripple Size
|
cs.IT cs.NI math.IT
|
In this paper we propose a new design of LT codes, which decreases the amount
of necessary overhead in comparison to existing designs. The design focuses on
a parameter of the LT decoding process called the ripple size. This parameter
was also a key element in the design proposed in the original work by Luby.
Specifically, Luby argued that an LT code should provide a constant ripple size
during decoding. In this work we show that the ripple size should decrease
during decoding, in order to reduce the necessary overhead. Initially we
motivate this claim by analytical results related to the redundancy within an
LT code. We then propose a new design procedure, which can provide any desired
achievable decreasing ripple size. The new design procedure is evaluated and
compared to the current state of the art through simulations. This reveals a
significant increase in performance with respect to both average overhead and
error probability at any fixed overhead.
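For context, the baseline the paper modifies is Luby's robust soliton degree
distribution, which is designed to keep the expected ripple size roughly
constant during decoding. A sketch of that classic construction (parameter
defaults are ours):

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Luby's robust soliton distribution over degrees 1..k: the ideal
    soliton rho(d) plus a correction tau(d) with a spike near k/R,
    normalized to sum to one."""
    R = c * math.log(k / delta) * math.sqrt(k)
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    spike = int(round(k / R))
    tau = [0.0] * k
    for d in range(1, k + 1):
        if d < spike:
            tau[d - 1] = R / (d * k)
        elif d == spike:
            tau[d - 1] = R * math.log(R / delta) / k
    z = sum(rho) + sum(tau)
    return [(r + t) / z for r, t in zip(rho, tau)]

p = robust_soliton(100)
```

The design proposed in the paper instead shapes the degree distribution so
that the ripple shrinks gradually as decoding progresses.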
|
1011.2109
|
On Secure Transmission over Parallel Relay Eavesdropper Channel
|
cs.IT math.IT
|
We study a four terminal parallel relay-eavesdropper channel which consists
of multiple independent relay-eavesdropper channels as subchannels. For the
discrete memoryless case, we establish inner and outer bounds on the
rate-equivocation region. For each subchannel, secure transmission is obtained
through one of the two coding schemes at the relay: decoding-and-forwarding the
source message or confusing the eavesdropper through noise injection. The inner
bound allows relay mode selection. For the Gaussian model we establish lower
and upper bounds on the perfect secrecy rate. We show that the bounds meet in
some special cases, including when the relay does not hear the source. We
illustrate the analytical results through some numerical examples.
|
1011.2113
|
Complexity Adjusted Soft-Output Sphere Decoding by Adaptive LLR Clipping
|
cs.IT math.IT
|
A-posteriori probability (APP) receivers operating over multiple-input,
multiple-output channels provide enhanced bit error rate (BER) performance at
the cost of increased complexity. However, employing full APP processing over
favorable transmission environments, where less efficient approaches may
already provide the required performance at a reduced complexity, results in
unnecessary processing. For slowly varying channel statistics substantial
complexity savings can be achieved by simple adaptive schemes. Such schemes
track the BER performance and adjust the complexity of the soft output sphere
decoder by adaptively setting the related log-likelihood ratio (LLR) clipping
value.
|
1011.2115
|
Secure Communication over Parallel Relay Channel
|
cs.IT math.IT
|
We investigate the problem of secure communication over parallel relay
channel in the presence of a passive eavesdropper. We consider a four terminal
relay-eavesdropper channel which consists of multiple relay-eavesdropper
channels as subchannels. For the discrete memoryless model, we establish outer
and inner bounds on the rate-equivocation region. The inner bound allows mode
selection at the relay. For each subchannel, secure transmission is obtained
through one of two coding schemes at the relay: decoding-and-forwarding the
source message or confusing the eavesdropper through noise injection. For the
Gaussian memoryless channel, we establish lower and upper bounds on the perfect
secrecy rate. Furthermore, we study a special case in which the relay does not
hear the source and show that under certain conditions the lower and upper
bounds coincide. The results established for the parallel Gaussian
relay-eavesdropper channel are then applied to study the fading
relay-eavesdropper channel. Analytical results are illustrated through some
numerical examples.
|
1011.2173
|
Photometric Catalogue of Quasars and Other Point Sources in the Sloan
Digital Sky Survey
|
astro-ph.IM cs.AI
|
We present a catalogue of about 6 million unresolved photometric detections
in the Sloan Digital Sky Survey Seventh Data Release classifying them into
stars, galaxies and quasars. We use a machine learning classifier trained on a
subset of spectroscopically confirmed objects from 14th to 22nd magnitude in
the SDSS {\it i}-band. Our catalogue consists of 2,430,625 quasars, 3,544,036
stars and 63,586 unresolved galaxies from 14th to 24th magnitude in the SDSS
{\it i}-band. Our algorithm recovers 99.96% of spectroscopically confirmed
quasars and 99.51% of stars to i $\sim$21.3 in the colour window that we study.
The level of contamination due to data artefacts for objects beyond $i=21.3$ is
highly uncertain, and all statements about completeness and contamination in
the paper are valid only for objects brighter than this magnitude. However, a
comparison
of the predicted number of quasars with the theoretical number counts shows
reasonable agreement.
|
1011.2180
|
On Reliability Function of BSC with Noisy Feedback
|
cs.IT math.IT
|
For information transmission a binary symmetric channel is used. There is
also another noisy binary symmetric channel (feedback channel), and the
transmitter observes without delay all the outputs of the forward channel via
that feedback channel. The transmission of an exponential number of messages
(i.e. the transmission rate is positive) is considered. The achievable decoding
error exponent for such a combination of channels is investigated. It is shown
that if the crossover probability of the feedback channel is less than a
certain positive value, then the achievable error exponent is better than the
decoding error exponent of the channel without feedback.
|
1011.2196
|
Degrees of Freedom Regions of Two-User MIMO Z and Full Interference
Channels: The Benefit of Reconfigurable Antennas
|
cs.IT math.IT
|
We study the degrees of freedom (DoF) regions of two-user multiple-input
multiple-output (MIMO) Z and full interference channels in this paper. We
assume that the receivers always have perfect channel state information. We
first derive the DoF region of Z interference channel with channel state
information at transmitter (CSIT). For full interference channel without CSIT,
the DoF region has been fully characterized recently and it is shown that the
previously known outer bound is not achievable. In this work, we investigate
the no-CSIT case further by assuming that the transmitter has the ability of
antenna mode switching. We obtain the DoF region as a function of the number of
available antenna modes and reveal the incremental gain in DoF that each extra
antenna mode can bring. It is shown that in certain cases the reconfigurable
antennas can bring extra DoF gains. In these cases, the DoF region is maximized
when the number of modes is at least equal to the number of receive antennas at
the corresponding receiver, in which case the previously known outer bound is
achieved. In all cases, we propose systematic constructions of the beamforming
and nulling matrices for achieving the DoF region. The constructions bear an
interesting space-frequency interpretation.
|
1011.2222
|
Static and dynamic characteristics of protein contact networks
|
cs.CE cs.SI physics.bio-ph q-bio.BM
|
The principles underlying protein folding remain one of Nature's puzzles,
with important practical consequences for Life. An approach that has gathered
momentum since the late 1990s looks at protein hetero-polymers and their
folding process through the lens of complex network analysis. Consequently,
there is now a body of empirical studies describing topological characteristics
of protein macro-molecules through their contact networks and linking these
topological characteristics to protein folding. The present paper is primarily
a review of this rich area. But it delves deeper into certain aspects by
emphasizing short-range and long-range links, and suggests unconventional
places where "power-laws" may be lurking within protein contact networks.
Further, it considers the dynamical view of protein contact networks. This
closer scrutiny of protein contact networks raises new questions for further
research, and identifies new regularities which may be useful to parameterize a
network approach to protein folding. Preliminary experiments with such a model
confirm that the regularities we identified cannot be easily reproduced through
random effects. Indeed, the grand challenge of protein folding is to elucidate
the process(es) which not only generates the specific and diverse linkage
patterns of protein contact networks, but also reproduces the dynamic behavior
of proteins as they fold. Keywords: network analysis, protein contact networks,
protein folding
|
1011.2245
|
A Distributed Method for Trust-Aware Recommendation in Social Networks
|
cs.SI
|
  This paper presents the details of a distributed trust-aware recommendation
system. Trust-based recommenders have received a lot of attention recently.
The main aim of trust-based recommendation is to address the problems of
traditional Collaborative Filtering recommenders, such as cold-start users
and vulnerability to attacks. Our proposed method is a distributed approach
and can be easily deployed on social networks or real-life networks such as
sensor networks or peer-to-peer networks.
|
1011.2272
|
Single Frame Image super Resolution using Learned Directionlets
|
cs.CV
|
  In this paper, a new directionally adaptive, learning-based, single-image
super-resolution method using a multi-direction wavelet transform, called
Directionlets, is presented. The method uses directionlets to effectively
capture directional features and to extract edge information along different
directions from a set of available high-resolution images. This information
is used as the training set for super-resolving a low-resolution input image:
the Directionlet coefficients at finer scales of its high-resolution
counterpart are learned locally from this training set, and the inverse
Directionlet transform recovers the super-resolved high-resolution image.
Simulation results show that the proposed approach outperforms standard
interpolation techniques such as cubic-spline interpolation as well as
standard wavelet-based learning, both visually and in terms of mean squared
error (MSE). The method also gives good results for aliased images.
|
1011.2292
|
Image Segmentation with Multidimensional Refinement Indicators
|
math.NA cs.CV
|
We transpose an optimal control technique to the image segmentation problem.
The idea is to consider image segmentation as a parameter estimation problem.
The parameter to estimate is the color of the pixels of the image. We use the
adaptive parameterization technique, which iteratively builds an optimal
representation of the parameter into uniform regions that form a partition of
the domain, hence corresponding to a segmentation of the image. We minimize an
error function during the iterations, and the partition of the image into
regions is optimally driven by the gradient of this error. The resulting
segmentation algorithm inherits desirable properties from its optimal control
origin: soundness, robustness, and flexibility.
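  A greedy, rectangular-split caricature of this refinement loop may help fix
ideas (the paper's refinement indicators and region shapes are more general;
the splitting rule below is an illustrative assumption): repeatedly split the
region with the largest squared error around its mean intensity.

```python
import numpy as np

def segment(img, n_regions=4):
    """Greedy sketch of adaptive parameterization: repeatedly split the
    rectangular region whose constant-color approximation error is largest."""
    h, w = img.shape
    regions = [(0, h, 0, w)]  # (row_start, row_end, col_start, col_end)

    def err(r):
        r0, r1, c0, c1 = r
        block = img[r0:r1, c0:c1]
        return ((block - block.mean()) ** 2).sum()

    while len(regions) < n_regions:
        worst = max(regions, key=err)   # region driving the error the most
        regions.remove(worst)
        r0, r1, c0, c1 = worst
        if r1 - r0 >= c1 - c0:          # split along the longer side
            mid = (r0 + r1) // 2
            regions += [(r0, mid, c0, c1), (mid, r1, c0, c1)]
        else:
            mid = (c0 + c1) // 2
            regions += [(r0, r1, c0, mid), (r0, r1, mid, c1)]
    return regions

# Toy image: two constant halves; one split recovers the partition exactly.
img = np.zeros((4, 4))
img[2:, :] = 1.0
regs = segment(img, n_regions=2)
```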
|
1011.2304
|
Target tracking in the recommender space: Toward a new recommender
system based on Kalman filtering
|
cs.AI
|
In this paper, we propose a new approach for recommender systems based on
target tracking by Kalman filtering. We assume that users and their seen
resources are vectors in the multidimensional space of the categories of the
resources. Knowing this space, we propose an algorithm based on a Kalman filter
to track users and to predict their future position in the
the recommendation space.
|
1011.2313
|
Weighted Centroid Algorithm for Estimating Primary User Location:
Theoretical Analysis and Distributed Implementation
|
cs.PF cs.IT cs.NI math.IT
|
Information about primary transmitter location is crucial in enabling several
key capabilities in cognitive radio networks, including improved
spatio-temporal sensing, intelligent location-aware routing, as well as aiding
spectrum policy enforcement. Compared to other proposed non-interactive
localization algorithms, the weighted centroid localization (WCL) scheme uses
only the received signal strength information, which makes it simple to
implement and robust to variations in the propagation environment. In this
paper we present the first theoretical framework for WCL performance analysis
in terms of its localization error distribution parameterized by node density,
node placement, shadowing variance, correlation distance and inaccuracy of
sensor node positioning. Using this analysis, we quantify the robustness of WCL
to various physical conditions and provide design guidelines, such as node
placement and spacing, for the practical deployment of WCL. We also propose a
power-efficient method for implementing WCL through a distributed cluster-based
algorithm that achieves accuracy comparable to its centralized counterpart.
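  For concreteness, the WCL estimate itself (the analysis above concerns its
error distribution) is just the centroid of sensor positions weighted by
received signal strength; the dBm-to-linear-power conversion and the example
readings below are illustrative assumptions:

```python
import numpy as np

def weighted_centroid(positions, rss_dbm):
    """WCL estimate: centroid of sensor positions weighted by received
    signal strength (converted from dBm to linear power, so that nearer,
    stronger-reading sensors dominate)."""
    w = 10 ** (np.asarray(rss_dbm, dtype=float) / 10.0)
    return (w[:, None] * positions).sum(axis=0) / w.sum()

# Four sensors at the corners of a unit square; the strongest reading comes
# from the corner nearest the (hypothetical) transmitter at roughly (0, 0).
pos = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
est = weighted_centroid(pos, [-40, -60, -60, -80])
```

With these readings the estimate is pulled close to the (0, 0) corner, since
the linear weights differ by orders of magnitude.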
|
1011.2336
|
A network model with structured nodes
|
cs.SI cs.CE physics.soc-ph q-bio.MN
|
We present a network model in which words over a specific alphabet, called
{\it structures}, are associated to each node and undirected edges are added
depending on some distance between different structures. It is shown that this
model can generate, without the use of preferential attachment or any other
heuristic, networks with topological features similar to biological networks:
power law degree distribution, clustering coefficient independent from the
network size, etc. Specific biological networks ({\it C. Elegans} neural
network and {\it E. Coli} protein-protein interaction network) are replicated
using this model.
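  One way to realize such a model, as a minimal sketch: assign each node a
random word over a binary alphabet and connect nodes whose Hamming distance
falls below a threshold. The alphabet, word length, and distance function
below are illustrative choices, not the paper's exact parameters:

```python
import itertools
import random

def structured_network(n, word_len=8, alphabet="01", max_dist=2, seed=0):
    """Build an undirected graph: each node carries a random word (its
    'structure'); an edge joins two nodes whose Hamming distance between
    structures is at most max_dist."""
    rng = random.Random(seed)
    words = ["".join(rng.choice(alphabet) for _ in range(word_len))
             for _ in range(n)]
    edges = set()
    for i, j in itertools.combinations(range(n), 2):
        if sum(a != b for a, b in zip(words[i], words[j])) <= max_dist:
            edges.add((i, j))
    return words, edges

words, edges = structured_network(30)
```

No preferential attachment appears anywhere: the topology is induced entirely
by distances between node structures, which is the model's point.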
|
1011.2348
|
Ergodic Control and Polyhedral approaches to PageRank Optimization
|
math.OC cs.DS cs.SY
|
We study a general class of PageRank optimization problems which consist in
finding an optimal outlink strategy for a web site subject to design
constraints. We consider both a continuous problem, in which one can choose the
intensity of a link, and a discrete one, in which in each page, there are
obligatory links, facultative links and forbidden links. We show that the
continuous problem, as well as its discrete variant when there are no
constraints coupling different pages, can both be modeled by constrained Markov
decision processes with ergodic reward, in which the webmaster determines the
transition probabilities of websurfers. Although the number of actions turns
out to be exponential, we show that an associated polytope of transition
measures has a concise representation, from which we deduce that the continuous
problem is solvable in polynomial time, and that the same is true for the
discrete problem when there are no coupling constraints. We also provide
efficient algorithms, adapted to very large networks. Then, we investigate the
qualitative features of optimal outlink strategies, and identify in particular
assumptions under which there exists a "master" page to which all controlled
pages should point. We report numerical results on fragments of the real web
graph.
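  For reference, the quantity being optimized is the stationary PageRank
vector. A baseline power-iteration computation (not the paper's MDP-based
optimization; the 3-page graph is a made-up example) shows how adding a
facultative outlink changes a page's rank:

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10):
    """Standard PageRank by power iteration: random surfer follows a
    uniform outlink with probability alpha, else teleports uniformly."""
    n = adj.shape[0]
    P = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic walk matrix
    G = alpha * P + (1 - alpha) / n           # Google matrix
    pi = np.full(n, 1.0 / n)
    while True:
        new = pi @ G
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new

# Page 2 initially links only to page 1.
adj = np.array([[0, 1, 1],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
pr_before = pagerank(adj)
adj[2, 0] = 1.0  # webmaster adds a facultative outlink 2 -> 0
pr_after = pagerank(adj)
```

Adding the outlink from page 2 raises page 0's rank, illustrating the kind of
outlink decision the ergodic-control formulation optimizes over.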
|
1011.2361
|
Distributed Storage Codes with Repair-by-Transfer and Non-achievability
of Interior Points on the Storage-Bandwidth Tradeoff
|
cs.IT cs.DC cs.NI math.IT
|
Regenerating codes are a class of recently developed codes for distributed
storage that, like Reed-Solomon codes, permit data recovery from any subset of
k nodes within the n-node network. However, regenerating codes possess, in
addition, the ability to repair a failed node by connecting to an arbitrary
subset of d nodes. It has been shown that for the case of functional-repair,
there is a tradeoff between the amount of data stored per node and the
bandwidth required to repair a failed node. A special case of functional-repair
is exact-repair where the replacement node is required to store data identical
to that in the failed node. Exact-repair is of interest as it greatly
simplifies system implementation. The first result of the paper is an explicit,
exact-repair code for the point on the storage-bandwidth tradeoff corresponding
to the minimum possible repair bandwidth, for the case when d=n-1. This code
has a particularly simple graphical description and most interestingly, has the
ability to carry out exact-repair through mere transfer of data and without any
need to perform arithmetic operations. Hence the term `repair-by-transfer'. The
second result of this paper shows that the interior points on the
storage-bandwidth tradeoff cannot be achieved under exact-repair, thus pointing
to the existence of a separate tradeoff under exact-repair. Specifically, we
identify a set of scenarios, termed `helper node pooling', and show that it is
the necessity to satisfy such scenarios that over-constrains the system.
|
1011.2488
|
Shape Calculus: Timed Operational Semantics and Well-formedness
|
cs.PL cs.CE cs.CG
|
The Shape Calculus is a bio-inspired calculus for describing 3D shapes moving
in a space. A shape forms a 3D process when combined with a behaviour.
Behaviours are specified with a timed CCS-like process algebra using a notion
of channel that naturally models binding sites on the surface of shapes.
Processes can represent molecules or other mobile objects and can be part of
networks of processes that move simultaneously and interact in a given
geometrical space. The calculus embeds collision detection and response,
binding of compatible 3D processes and splitting of previously established
bonds. In this work the full formal timed operational semantics of the calculus
is provided, together with examples that illustrate the use of the calculus in
a well-known biological scenario. Moreover, a result of well-formedness about
the evolution of a given network of well-formed 3D processes is proved.
|
1011.2511
|
Individual Privacy vs Population Privacy: Learning to Attack
Anonymization
|
cs.DB
|
Over the last decade there have been great strides made in developing
techniques to compute functions privately. In particular, Differential Privacy
gives strong promises about conclusions that can be drawn about an individual.
In contrast, various syntactic methods for providing privacy (criteria such as
k-anonymity and l-diversity) have been criticized for still allowing private
information of an individual to be inferred. In this report, we consider the
ability of an attacker to use data meeting privacy definitions to build an
accurate classifier. We demonstrate that even under Differential Privacy, such
classifiers can be used to accurately infer "private" attributes in realistic
data. We compare this to similar approaches for inference-based attacks on
other
forms of anonymized data. We place these attacks on the same scale, and observe
that the accuracy of inference of private attributes for Differentially Private
data and l-diverse data can be quite similar.
|
1011.2512
|
Extended Active Learning Method
|
cs.AI cs.LG
|
  Active Learning Method (ALM) is a soft computing method based on fuzzy logic
that is used for modeling and control. Although ALM has been shown to act
well in dynamic environments, its operators cannot support it very well in
complex situations because they lose data. ALM could therefore find better
membership functions if more appropriate operators were chosen for it. This
paper substitutes two new operators for the original ones, which improves
the extraction of membership functions over conventional ALM. The new method
is called Extended Active Learning Method (EALM).
|
1011.2515
|
Existence of Stable Exclusive Bilateral Exchanges in Networks
|
cs.GT cs.SI
|
In this paper we show that when individuals in a bipartite network
exclusively choose partners and exchange valued goods with their partners, then
there exists a set of exchanges that are pair-wise stable. Pair-wise stability
implies that no individual breaks her partnership, and no two neighbors in
the network can form a new partnership (breaking other partnerships, if any)
such that at least one of them improves her payoff and the other does at
least as well. We consider a general class of continuous, strictly convex and
strongly monotone preferences over bundles of goods for individuals. Thus, this
work extends the general equilibrium framework from markets to networks with
exclusive exchanges. We present the complete existence proof using the
existence of a generalized stable matching in
\cite{Generalized-Stable-Matching}. The existence proof can be extended to
problems in social games as in \cite{Matching-Equilibrium} and
\cite{Social-Games}.
|