| id | title | categories | abstract |
|---|---|---|---|
1111.4639
|
Cancer gene prioritization by integrative analysis of mRNA expression
and DNA copy number data: a comparative review
|
cs.CE q-bio.GN stat.AP stat.ME
|
A variety of genome-wide profiling techniques are available to probe
complementary aspects of genome structure and function. Integrative analysis of
heterogeneous data sources can reveal higher-level interactions that cannot be
detected based on individual observations. A standard integration task in
cancer studies is to identify altered genomic regions that induce changes in
the expression of the associated genes based on joint analysis of genome-wide
gene expression and copy number profiling measurements. In this review, we
provide a comparison among various modeling procedures for integrating
genome-wide profiling data of gene copy number and transcriptional alterations
and highlight common approaches to genomic data integration. A transparent
benchmarking procedure is introduced to quantitatively compare the cancer gene
prioritization performance of the alternative methods. The benchmarking
algorithms and data sets are available at http://intcomp.r-forge.r-project.org
|
1111.4645
|
Incremental Learning with Accuracy Prediction of Social and Individual
Properties from Mobile-Phone Data
|
cs.SI
|
Mobile phones are quickly becoming the primary source for social, behavioral,
and environmental sensing and data collection. Today's smartphones are equipped
with increasingly more sensors and accessible data types that enable the
collection of literally dozens of signals related to the phone, its user, and
its environment. A great deal of research effort in academia and industry is
put into mining this raw data for higher level sense-making, such as
understanding user context, inferring social networks, learning individual
features, predicting outcomes, and so on. In this work we investigate the
properties of learning and inference of real world data collected via mobile
phones over time. In particular, we look at the dynamic learning process over
time, and how the ability to predict individual parameters and social links is
incrementally enhanced with the accumulation of additional data. To do this, we
use the Friends and Family dataset, which contains rich data signals gathered
from the smartphones of 140 adult members of a young-family residential
community for over a year, and is one of the most comprehensive mobile phone
datasets gathered in academia to date. We develop several models that predict
social and individual properties from sensed mobile phone data, including
detection of life-partners, ethnicity, and whether a person is a student or
not. Then, for this set of diverse learning tasks, we investigate how the
prediction accuracy evolves over time, as new data is collected. Finally, based
on gained insights, we propose a method for advance prediction of the maximal
learning accuracy possible for the learning task at hand, based on an initial
set of measurements. This has practical implications, like informing the design
of mobile data collection campaigns, or evaluating analysis strategies.
|
1111.4646
|
On the Fundamental Limits of Adaptive Sensing
|
math.ST cs.IT math.IT stat.TH
|
Suppose we can sequentially acquire arbitrary linear measurements of an
n-dimensional vector x resulting in the linear model y = Ax + z, where z
represents measurement noise. If the signal is known to be sparse, one would
expect the following folk theorem to be true: choosing an adaptive strategy
which cleverly selects the next row of A based on what has been previously
observed should do far better than a nonadaptive strategy which sets the rows
of A ahead of time, thus not trying to learn anything about the signal in
between observations. This paper shows that the folk theorem is false. We prove
that the advantages offered by clever adaptive strategies and sophisticated
estimation procedures---no matter how intractable---over classical compressed
acquisition/recovery schemes are, in general, minimal.
|
1111.4650
|
Trends Prediction Using Social Diffusion Models
|
cs.SI physics.soc-ph
|
The importance of the ability to predict trends in social media has grown
rapidly in the past few years with the growing dominance of social media in our
everyday lives. Whereas many works focus on the detection of anomalies in
networks, there exists little theoretical work on predicting the likelihood
that an anomalous network pattern will spread globally and become a "trend". In
this work we present an analytic model of the social diffusion dynamics of
spreading network patterns. Our proposed method is based on information
diffusion models, and is capable of predicting future trends based on the
analysis of past social interactions between the community's members. We
present an analytic lower bound for the probability that emerging trends will
successfully spread through the network. We demonstrate our model using two
comprehensive social datasets - the "Friends and Family" experiment, held at
MIT for over a year, in which the complete activity of 140 users was analyzed,
and a financial dataset containing the complete activities of over 1.5 million
members of the "eToro" social trading community.
|
1111.4654
|
A self-portrait of young Leonardo
|
cs.CV
|
One of the most famous drawings by Leonardo da Vinci is a self-portrait in
red chalk, in which he looks quite old. In fact, there is a sketch in one of
his notebooks, partially covered by written notes, that may be a self-portrait
of the artist when he was young. The use of image processing, to remove the
handwritten text and improve the image, allows a comparison of the two
portraits.
|
1111.4676
|
Facial Asymmetry and Emotional Expression
|
cs.CV
|
This report is about facial asymmetry, its connection to emotional
expression, and methods of measuring facial asymmetry in videos of faces. The
research was motivated by two factors: firstly, there was a real opportunity to
develop a novel measure of asymmetry that required minimal human involvement
and that improved on earlier measures in the literature; and secondly, the
study of the relationship between facial asymmetry and emotional expression is
both interesting in its own right, and important because it can inform
neuropsychological theory and answer open questions concerning emotional
processing in the brain. The two aims of the research were: first, to develop
an automatic frame-by-frame measure of facial asymmetry in videos of faces that
improved on previous measures; and second, to use the measure to analyse the
relationship between facial asymmetry and emotional expression, and connect our
findings with previous research of the relationship.
|
1111.4729
|
Influence Diffusion Dynamics and Influence Maximization in Social
Networks with Friend and Foe Relationships
|
cs.SI cs.DM physics.soc-ph
|
Influence diffusion and influence maximization in large-scale online social
networks (OSNs) have been extensively studied, because of their impacts on
enabling effective online viral marketing. Existing studies focus on social
networks with only friendship relations, whereas the foe or enemy relations
that commonly exist in many OSNs, e.g., Epinions and Slashdot, are completely
ignored. In this paper, we make the first attempt to investigate the influence
diffusion and influence maximization in OSNs with both friend and foe
relations, which are modeled using positive and negative edges on signed
networks. In particular, we extend the classic voter model to signed networks
and analyze the dynamics of influence diffusion of two opposite opinions. We
first provide systematic characterization of both short-term and long-term
dynamics of influence diffusion in this model, and illustrate that the steady
state behaviors of the dynamics depend on three types of graph structures,
which we refer to as balanced graphs, anti-balanced graphs, and strictly
unbalanced graphs. We then apply our results to solve the influence
maximization problem and develop efficient algorithms to select initial seeds
of one opinion that maximize either its short-term influence coverage or
long-term steady state influence coverage. Extensive simulation results on both
synthetic and real-world networks, such as Epinions and Slashdot, confirm our
theoretical analysis on influence diffusion dynamics, and demonstrate the
efficacy of our influence maximization algorithm over other heuristic
algorithms.
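The voter-model extension described above can be sketched as a simple simulation: a node adopts a random neighbor's opinion across a positive (friend) edge and the opposite opinion across a negative (foe) edge. The graph, parameters, and function names below are illustrative assumptions, not the paper's experimental setup.

```python
import random

def signed_voter_step(opinions, edges, rng):
    """One asynchronous voter-model update on a signed network.

    opinions: dict node -> +1/-1
    edges: dict node -> list of (neighbor, sign) pairs, sign in {+1, -1}
    """
    node = rng.choice(sorted(opinions))
    nbrs = edges[node]
    if not nbrs:
        return
    neighbor, sign = rng.choice(nbrs)
    # Across a friend (+1) edge the node copies the neighbor's opinion;
    # across a foe (-1) edge it adopts the opposite opinion.
    opinions[node] = sign * opinions[neighbor]

def simulate(opinions, edges, steps, seed=0):
    rng = random.Random(seed)
    opinions = dict(opinions)
    for _ in range(steps):
        signed_voter_step(opinions, edges, rng)
    return opinions

# A small "balanced" signed graph: two factions with positive edges inside
# each faction and negative edges across factions.
edges = {
    0: [(1, +1), (2, -1), (3, -1)],
    1: [(0, +1), (2, -1), (3, -1)],
    2: [(3, +1), (0, -1), (1, -1)],
    3: [(2, +1), (0, -1), (1, -1)],
}
init = {0: +1, 1: -1, 2: +1, 3: -1}
final = simulate(init, edges, steps=200)
```

On a balanced graph such as this one, the dynamics tends toward a polarized state in which the two factions hold opposite opinions, consistent with the steady-state characterization in the abstract, though any single finite run need not reach it.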
|
1111.4768
|
Capacity of Multiple Unicast in Wireless Networks: A Polymatroidal
Approach
|
cs.IT cs.NI math.IT
|
A classical result in undirected wireline networks is the near optimality of
routing (flow) for multiple-unicast traffic (multiple sources communicating
independent messages to multiple destinations): the min-cut upper bound is
within a factor logarithmic in the number of sources of the max flow. In this
paper we "extend" the wireline result to the wireless context.
Our main result is the approximate optimality of a simple layering principle:
{\em local physical-layer schemes combined with global routing}. We use the
{\em reciprocity} of the wireless channel critically in this result. Our formal
result is in the context of channel models for which "good" local schemes, that
achieve the cut-set bound, exist (such as Gaussian MAC and broadcast channels,
broadcast erasure networks, fast fading Gaussian networks).
Layered architectures, common in the engineering-design of wireless networks,
can have near-optimal performance if the {\em locality} over which
physical-layer schemes should operate is carefully designed. Feedback is shown
to play a critical role in enabling the separation between the physical and the
network layers. The key technical idea is the modeling of a wireless network by
an undirected "polymatroidal" network, for which we establish a max-flow
min-cut approximation theorem.
|
1111.4785
|
Global parameter identification of stochastic reaction networks from
single trajectories
|
q-bio.MN cs.CE
|
We consider the problem of inferring the unknown parameters of a stochastic
biochemical network model from a single measured time-course of the
concentration of some of the involved species. Such measurements are available,
e.g., from live-cell fluorescence microscopy in image-based systems biology. In
addition, fluctuation time-courses from, e.g., fluorescence correlation
spectroscopy provide additional information about the system dynamics that can
be used to more robustly infer parameters than when considering only mean
concentrations. Estimating model parameters from a single experimental
trajectory enables single-cell measurements and quantification of cell--cell
variability. We propose a novel combination of an adaptive Monte Carlo sampler,
called Gaussian Adaptation, and efficient exact stochastic simulation
algorithms that allows parameter identification from single stochastic
trajectories. We benchmark the proposed method on a linear and a non-linear
reaction network at steady state and during transient phases. In addition, we
demonstrate that the present method also provides an ellipsoidal volume
estimate of the viable part of parameter space and is able to estimate the
physical volume of the compartment in which the observed reactions take place.
|
1111.4795
|
IRIE: Scalable and Robust Influence Maximization in Social Networks
|
cs.SI physics.soc-ph
|
Influence maximization is the problem of selecting top $k$ seed nodes in a
social network to maximize their influence coverage under certain influence
diffusion models. In this paper, we propose a novel algorithm, IRIE, that
integrates a new message-passing-based influence ranking (IR) method with an
influence estimation (IE) method for influence maximization in both the
independent cascade (IC) model and its extension IC-N, which incorporates
negative opinion propagation. Through extensive experiments, we demonstrate
that IRIE matches the influence coverage of other algorithms while scaling much
better than all of them. Moreover, IRIE is more robust and stable than other
algorithms in both running time and memory usage across various network
densities and cascade sizes. It runs up to two orders of magnitude faster than
state-of-the-art algorithms such as PMIA for large networks with tens of
millions of nodes and edges, while using only a fraction of the memory.
|
1111.4800
|
Enhancement of Image Resolution by Binarization
|
cs.CV cs.MM
|
Image segmentation is one of the principal approaches of image processing.
The choice of the most appropriate binarization algorithm for each case proved
to be an interesting problem in itself. In this paper, we present a comparative
study of various binarization algorithms and propose a methodology for their
validation. We have developed two novel algorithms to determine threshold
values for the pixel values of a grayscale image. The performance of the
algorithms is estimated on test images using evaluation metrics for the
binarization of textual and synthetic images. We achieve better image
resolution by using binarization with optimal thresholding techniques.
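The abstract does not specify its two threshold-selection algorithms, but Otsu's method is the classic member of the "optimal thresholding" family it refers to; a minimal sketch on a synthetic bimodal histogram (the pixel data are made up for illustration):

```python
def otsu_threshold(pixels):
    """Return the threshold maximizing between-class variance (Otsu's method).

    pixels: iterable of integer gray levels in [0, 255].
    """
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = sum(hist)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = sum(hist[:t])          # background pixel count
        w1 = total - w0             # foreground pixel count
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, 256)) / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

# Synthetic bimodal "image": a dark cluster around 50, a bright one around 200.
pixels = [45, 50, 55, 48, 52] * 20 + [195, 200, 205, 198, 202] * 20
t = otsu_threshold(pixels)
```

The chosen threshold falls in the gap between the two clusters, cleanly binarizing the synthetic image.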
|
1111.4802
|
Bayesian optimization using sequential Monte Carlo
|
math.OC cs.LG stat.CO
|
We consider the problem of optimizing a real-valued continuous function $f$
using a Bayesian approach, where the evaluations of $f$ are chosen sequentially
by combining prior information about $f$, which is described by a random
process model, and past evaluation results. The main difficulty with this
approach is to be able to compute the posterior distributions of quantities of
interest which are used to choose evaluation points. In this article, we decide
to use a Sequential Monte Carlo (SMC) approach.
|
1111.4825
|
Chebyshev Polynomials in Distributed Consensus Applications
|
cs.SY cs.DC cs.MA
|
In this paper we analyze the use of Chebyshev polynomials in distributed
consensus applications. We study the properties of these polynomials to propose
a distributed algorithm that reaches consensus in a fast way. The algorithm
is expressed in the form of a linear iteration and, at each step, the agents
only need to transmit their current state to their neighbors. The difference
with respect to previous approaches is that the update rule used by the network
is based on the second-order difference equation that describes the Chebyshev
polynomials of the first kind. As a consequence, we show that our algorithm
achieves consensus using far fewer iterations than other approaches. We
characterize the main properties of the algorithm for both fixed and switching
communication topologies. The main contribution of the paper is the study of
the properties of the Chebyshev polynomials in distributed consensus
applications, proposing an algorithm that increases the convergence rate with
respect to existing approaches. Theoretical results, as well as experiments
with synthetic data, show the benefits of using our algorithm.
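To illustrate why the second-order Chebyshev recurrence speeds up consensus, the sketch below runs a Chebyshev semi-iterative scheme on a 4-node ring with fixed averaging weights. The weight matrix, spectral bounds, and function names are assumptions of this example, not the paper's algorithm verbatim.

```python
def apply_W(x):
    """One local averaging step on a 4-node ring: W = I/2 + A/4.
    The eigenvalues of W are {1, 1/2, 1/2, 0}, so the non-unit
    spectrum lies in [lmin, lmax] = [0, 0.5]."""
    n = len(x)
    return [0.5 * x[i] + 0.25 * (x[(i - 1) % n] + x[(i + 1) % n])
            for i in range(n)]

def chebyshev_consensus(x0, iters, lmin=0.0, lmax=0.5):
    """Chebyshev semi-iteration: x_k = T_k(s(W)) x0 / T_k(a), where the
    affine map s sends [lmin, lmax] onto [-1, 1] and a = s(1) > 1.
    Each step only needs one neighbor exchange (one apply_W call)."""
    a = (2.0 - lmax - lmin) / (lmax - lmin)   # s evaluated at eigenvalue 1

    def s_of_W(x):  # s(W) x = (2 W x - (lmax + lmin) x) / (lmax - lmin)
        Wx = apply_W(x)
        return [(2.0 * Wx[i] - (lmax + lmin) * x[i]) / (lmax - lmin)
                for i in range(len(x))]

    y_prev, y = list(x0), s_of_W(x0)          # T_0 and T_1 applied to x0
    w_prev, w = 1.0, a                        # T_0(a) and T_1(a)
    for _ in range(iters - 1):
        sy = s_of_W(y)
        # Three-term recurrence T_{k+1}(z) = 2 z T_k(z) - T_{k-1}(z)
        y, y_prev = [2.0 * sy[i] - y_prev[i] for i in range(len(y))], y
        w, w_prev = 2.0 * a * w - w_prev, w
    return [yi / w for yi in y]

x0 = [1.0, 2.0, 3.0, 10.0]
x = chebyshev_consensus(x0, iters=8)
# Every entry approaches the average 4.0, with the disagreement shrinking
# like 1/T_k(a) -- far faster than the plain iteration's (1/2)^k.
```

Since the scaled polynomial equals 1 at the eigenvalue 1, the network average is preserved exactly at every iteration, while all disagreement components are damped by the factor 1/T_k(a).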
|
1111.4831
|
Analytical calculation of optimal POVM for unambiguous discrimination of
quantum states using KKT method
|
cs.IT math.IT quant-ph
|
In the present paper, an exact analytic solution for the optimal unambiguous
state discrimination (OPUSD) problem involving an arbitrary number of pure
linearly independent quantum states with real and complex inner product is
presented. Using semidefinite programming and Karush-Kuhn-Tucker convex
optimization method, we derive an analytical formula which shows the relation
between optimal solution of unambiguous state discrimination problem and an
arbitrary number of pure linearly independent quantum states.
|
1111.4840
|
Distributed Multi-view Matching in Networks with Limited Communications
|
cs.CV cs.MA cs.RO
|
We address the problem of distributed matching of features in networks with
vision systems. Every camera in the network has limited communication
capabilities and can only exchange local matches with its neighbors. We propose
a distributed algorithm that takes these local matches and computes global
correspondences by a proper propagation in the network. When the algorithm
finishes, each camera knows the global correspondences between its features and
the features of all the cameras in the network. Spurious matches introduced by
the local matcher may produce inconsistent global correspondences, i.e.,
association paths between different features of the same camera. The
contributions of this work are the propagation of the local matches
and the detection and resolution of these inconsistencies by deleting local
matches. Our resolution algorithm considers the quality of each local match,
when this information is provided by the local matcher. We formally prove that
after executing the algorithm, the network finishes with a global data
association free of inconsistencies. We provide a fully decentralized solution
to the problem which does not rely on any particular communication topology.
Simulations and experimental results with real images show the performance of
the method considering different features, matching functions and scenarios.
|
1111.4852
|
Biased diffusion on Japanese inter-firm trading network: Estimation of
sales from network structure
|
q-fin.GN cs.SI physics.soc-ph
|
To investigate the actual phenomena of transport on a complex network, we
analysed empirical data for an inter-firm trading network, which consists of
about one million Japanese firms and the sales of these firms (a sale
corresponds to the total in-flow into a node). First, we analysed the
relationships between sales and sales of nearest neighbourhoods from which we
obtain a simple linear relationship between sales and the weighted sum of sales
of nearest neighbourhoods (i.e., customers). In addition, we introduce a simple
money transport model that is coherent with this empirical observation. In this
model, a firm (i.e., customer) distributes money to its out-edges (suppliers)
proportionally to the in-degree of destinations. From intensive numerical
simulations, we find that the steady flows derived from these models can
approximately reproduce the distribution of sales of actual firms. The sales of
individual firms deduced from the money transport model are shown to be
proportional, on average, to the real sales.
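The transport rule, in which each firm splits its money among its suppliers in proportion to the suppliers' in-degrees, can be sketched as a fixed-point iteration. The toy directed graph below is made up for illustration; "sales" of a firm is its steady-state in-flow.

```python
def transport_step(money, out_edges, in_degree):
    """One round of the money transport model: firm i sends money[i] to its
    suppliers (out-neighbors), split in proportion to each supplier's
    in-degree."""
    new = {v: 0.0 for v in money}
    for i, suppliers in out_edges.items():
        total_w = sum(in_degree[j] for j in suppliers)
        for j in suppliers:
            new[j] += money[i] * in_degree[j] / total_w
    return new

# Toy trading network: node -> list of suppliers it pays.
out_edges = {0: [1], 1: [2, 3], 2: [0, 3], 3: [0]}
in_degree = {v: 0 for v in out_edges}
for suppliers in out_edges.values():
    for j in suppliers:
        in_degree[j] += 1

money = {v: 1.0 for v in out_edges}   # start with one unit per firm
for _ in range(400):
    money = transport_step(money, out_edges, in_degree)
# money[v] now approximates the steady-state in-flow ("sales") of firm v;
# the total amount of money in the network is conserved at every step.
```

Because each round redistributes every firm's full balance, the total is conserved exactly, and on a strongly connected, aperiodic graph the iteration converges to a unique steady flow.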
|
1111.4886
|
Prediction Of Arrival Of Nodes In A Scale Free Network
|
cs.SI cs.NI physics.soc-ph q-bio.PE
|
Most of the networks observed in real life obey power-law degree
distribution. It is hypothesized that the emergence of such a degree
distribution is due to preferential attachment of the nodes. Barabasi-Albert
model is a generative procedure that uses preferential attachment based on
degree and one can use this model to generate networks with power-law degree
distribution. In this model, the network is assumed to grow by one node at
every time step. After the evolution of such a network, it is impossible to
predict the exact order of node arrivals. In this article, we present a novel
strategy to partially predict the order of node arrivals in such an evolved
network. We show that our proposed method outperforms other
centrality-measure-based approaches. We bin the nodes and predict the order of
node arrivals between the bins with an accuracy above 80%.
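A minimal illustration of the setting: grow a Barabási-Albert network by degree-preferential attachment, then use degree rank as a crude predictor of arrival order. The network size, seed, and two-bin accuracy measure are illustrative choices, not the paper's method or its 80% result.

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a BA network: each new node attaches to m distinct existing nodes
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small clique on m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    targets = [v for e in edges for v in e]  # each node repeated per degree
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:               # preferential attachment
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return n, edges

def early_late_accuracy(n, edges):
    """Bin nodes into 'early' and 'late' halves by arrival time and predict
    the bin from degree rank (high degree -> early arrival)."""
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    by_degree = sorted(range(n), key=lambda v: -degree[v])
    predicted_early = set(by_degree[: n // 2])
    actual_early = set(range(n // 2))
    return len(predicted_early & actual_early) / (n // 2)

n, edges = barabasi_albert(300, 2)
acc = early_late_accuracy(n, edges)
```

Degree rank recovers the early/late split well above chance because early nodes accumulate degree for longer, which is the intuition the paper's finer-grained binning strategy builds on.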
|
1111.4898
|
A Navigation Algorithm Inspired by Human Navigation
|
cs.SI physics.soc-ph
|
Human navigation has been a topic of interest in spatial cognition for the
past few decades. It has been experimentally observed that humans accomplish
the task of way-finding to a destination in an unknown environment by
recognizing landmarks. Investigations using network-analytic techniques reveal
that humans, when asked to way-find to their destination, learn the top-ranked
nodes of a network. In this paper we report a study simulating the strategy
used by humans to recognize the centers of a network. We show that the paths
obtained from our simulation have the same properties as the paths obtained in
human-based experiments. The simulation thus performed leads to a novel way of
path-finding in a network. We discuss the performance of our method and compare
it with existing techniques for finding a path between a pair of nodes in a
network.
|
1111.4930
|
Comparative study of Financial Time Series Prediction by Artificial
Neural Network with Gradient Descent Learning
|
cs.NE cs.AI
|
Financial forecasting is an example of a signal processing problem that is
challenging due to small sample sizes, high noise, non-stationarity, and
non-linearity, but fast forecasting of stock market prices is very important
for strategic business planning. The present study aims to develop a
comparative predictive model with a feedforward multilayer artificial neural
network and a recurrent time-delay neural network for financial time series
prediction. The study uses a historical stock price dataset made available by
Google Finance. To develop the prediction model, the backpropagation method
with gradient descent learning has been implemented. Finally, the neural
network trained with the said algorithm is found to be a skillful predictor for
non-stationary, noisy financial time series.
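A minimal sketch of the feedforward-network-plus-gradient-descent setup, trained here on a synthetic sine series rather than stock prices; the architecture sizes, learning rate, and lag count are illustrative assumptions, not the paper's configuration.

```python
import math
import random

def make_series(n=200):
    return [math.sin(0.3 * t) for t in range(n)]

def train(series, lags=4, hidden=8, lr=0.01, epochs=300, seed=0):
    """One-hidden-layer tanh network trained with plain stochastic gradient
    descent to predict the next value from the previous `lags` values."""
    rng = random.Random(seed)
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(lags)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    data = [(series[t - lags:t], series[t]) for t in range(lags, len(series))]
    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, target in data:
            # Forward pass.
            a1 = [math.tanh(sum(W1[h][i] * x[i] for i in range(lags)) + b1[h])
                  for h in range(hidden)]
            y = sum(W2[h] * a1[h] for h in range(hidden)) + b2
            err = y - target
            total += err * err
            # Backward pass: gradients of the squared error.
            dy = 2.0 * err
            for h in range(hidden):
                da1 = W2[h] * dy              # uses W2[h] before its update
                dz1 = da1 * (1.0 - a1[h] ** 2)
                W2[h] -= lr * dy * a1[h]
                for i in range(lags):
                    W1[h][i] -= lr * dz1 * x[i]
                b1[h] -= lr * dz1
            b2 -= lr * dy
        losses.append(total / len(data))
    return losses

losses = train(make_series())
# The mean squared error typically decreases over the epochs as the
# network learns the structure of the series.
```

The same training loop applies unchanged to a lagged window of price returns; only the data-preparation step differs.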
|
1111.5003
|
Construction of Almost Disjunct Matrices for Group Testing
|
cs.IT cs.DM math.IT
|
In a \emph{group testing} scheme, a set of tests is designed to identify a
small number $t$ of defective items among a large set (of size $N$) of items.
In the non-adaptive scenario the set of tests has to be designed in one-shot.
In this setting, designing a testing scheme is equivalent to the construction
of a \emph{disjunct matrix}, an $M \times N$ matrix where the union of supports
of any $t$ columns does not contain the support of any other column. In
principle, one wants to have such a matrix with minimum possible number $M$ of
rows (tests). One of the main ways of constructing disjunct matrices relies on
\emph{constant weight error-correcting codes} and their \emph{minimum
distance}. In this paper, we consider a relaxed definition of a disjunct matrix
known as \emph{almost disjunct matrix}. This concept is also studied under the
name of \emph{weakly separated design} in the literature. The relaxed
definition allows one to come up with group testing schemes where a
close-to-one fraction of all possible sets of defective items are identifiable.
Our main contribution is twofold. First, we go beyond the minimum distance
analysis and connect the \emph{average distance} of a constant weight code to
the parameters of an almost disjunct matrix constructed from it. Our second
contribution is to explicitly construct almost disjunct matrices based on our
average distance analysis, that have much smaller number of rows than any
previous explicit construction of disjunct matrices. The parameters of our
construction can be varied to cover a large range of relations for $t$ and $N$.
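To make the definitions concrete, the sketch below draws a random constant-weight binary matrix and measures the fraction of t-sets of columns whose union covers some other column, which is exactly the quantity an almost disjunct matrix keeps close to zero. The sizes and weight are arbitrary illustrative choices, not the paper's construction.

```python
import random
from itertools import combinations

def random_constant_weight_matrix(m, n, w, seed=0):
    """n columns of length m, each with exactly w ones, stored as supports."""
    rng = random.Random(seed)
    return [frozenset(rng.sample(range(m), w)) for _ in range(n)]

def covered_fraction(columns, t):
    """Fraction of (column, t-set) pairs where the union of the t supports
    contains the extra column's support. A t-disjunct matrix has fraction 0;
    an almost disjunct matrix keeps it close to 0, which suffices to identify
    a close-to-one fraction of all possible defective sets."""
    n = len(columns)
    bad = total = 0
    for j in range(n):
        others = [c for i, c in enumerate(columns) if i != j]
        for group in combinations(others, t):
            union = frozenset().union(*group)
            total += 1
            if columns[j] <= union:
                bad += 1
    return bad / total

cols = random_constant_weight_matrix(m=20, n=15, w=5)
frac = covered_fraction(cols, t=2)
```

The brute-force check is exponential in t and only feasible at toy scale; the point of the paper's average-distance analysis is to certify small covered fractions for explicit constructions without such enumeration.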
|
1111.5046
|
Cooperative Sequential Spectrum Sensing Based on Level-triggered
Sampling
|
stat.AP cs.IT math.IT
|
We propose a new framework for cooperative spectrum sensing in cognitive
radio networks, that is based on a novel class of non-uniform samplers, called
the event-triggered samplers, and sequential detection. In the proposed scheme,
each secondary user computes its local sensing decision statistic based on its
own channel output; and whenever such decision statistic crosses certain
predefined threshold values, the secondary user will send one (or several) bit
of information to the fusion center. The fusion center asynchronously receives
the bits from different secondary users and updates the global sensing decision
statistic to perform a sequential probability ratio test (SPRT), to reach a
sensing decision. We provide an asymptotic analysis for the above scheme, and
under different conditions, we compare it against the cooperative sensing
scheme that is based on traditional uniform sampling and sequential detection.
Simulation results show that the proposed scheme, even using a single bit, can
outperform its uniform-sampling counterpart that uses an infinite number of
bits, under changing target error probabilities, SNR values, and numbers of
secondary users.
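The level-triggered scheme can be sketched as follows: each secondary user accumulates a local log-likelihood ratio and, on each crossing of a level +/-delta, sends a one-bit message; the fusion center adds +/-delta per received bit and stops by an SPRT-style rule. The Gaussian observation model, thresholds, and function names are illustrative assumptions, not the paper's exact setup.

```python
import random

def run_sensing(num_users=4, theta=1.0, delta=1.0, B=8.0, h1=True, seed=1):
    """Cooperative sequential sensing sketch.

    Each user observes N(theta, 1) under H1 (primary present) or N(0, 1)
    under H0, accumulates the LLR increment theta*x - theta^2/2, and emits
    one bit per level crossing of +/-delta. The fusion center stops when
    its statistic leaves (-B, B); returns (decision, time steps used).
    """
    rng = random.Random(seed)
    local = [0.0] * num_users
    fusion = 0.0
    steps = 0
    while -B < fusion < B:
        steps += 1
        for u in range(num_users):
            x = rng.gauss(theta if h1 else 0.0, 1.0)
            local[u] += theta * x - theta * theta / 2.0  # LLR increment
            while abs(local[u]) >= delta:    # level crossing: send bit(s)
                bit = 1 if local[u] > 0 else -1
                local[u] -= bit * delta
                fusion += bit * delta        # fusion center adds +/-delta
    return (1 if fusion >= B else 0), steps

decision, steps = run_sensing(h1=True)
```

The inner `while` mirrors the abstract's "one (or several) bits": a large LLR jump may cross several levels at once. The quantization to multiples of delta is what makes the single-bit messages sufficient for the fusion-center SPRT.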
|
1111.5092
|
Coset Sum: an alternative to the tensor product in wavelet construction
|
math.NA cs.IT math.IT
|
A multivariate biorthogonal wavelet system can be obtained from a pair of
multivariate biorthogonal refinement masks in Multiresolution Analysis setup.
Some multivariate refinement masks may be decomposed into lower dimensional
refinement masks. Tensor product is a popular way to construct a decomposable
multivariate refinement mask from lower dimensional refinement masks.
We present an alternative method, which we call coset sum, for constructing
multivariate refinement masks from univariate refinement masks. The coset sum
shares many essential features of the tensor product that make it attractive in
practice: (1) it preserves the biorthogonality of univariate refinement masks,
(2) it preserves the accuracy number of the univariate refinement mask, and (3)
the wavelet system associated with it has fast algorithms for computing and
inverting the wavelet coefficients. The coset sum can even provide a wavelet
system with faster algorithms in certain cases than the tensor product. These
features of the coset sum suggest that it is worthwhile to develop and practice
alternative methods to the tensor product for constructing multivariate wavelet
systems. Some experimental results using 2-D images are presented to illustrate
our findings.
|
1111.5108
|
A Theory for Optical flow-based Transport on Image Manifolds
|
cs.CV
|
An image articulation manifold (IAM) is the collection of images formed when
an object is articulated in front of a camera. IAMs arise in a variety of image
processing and computer vision applications, where they provide a natural
low-dimensional embedding of the collection of high-dimensional images. To date
IAMs have been studied as embedded submanifolds of Euclidean spaces.
Unfortunately, their promise has not been realized in practice, because real
world imagery typically contains sharp edges that render an IAM
non-differentiable and hence non-isometric to the low-dimensional parameter
space under the Euclidean metric. As a result, the standard tools from
differential geometry, in particular using linear tangent spaces to transport
along the IAM, have limited utility. In this paper, we explore a nonlinear
transport operator for IAMs based on the optical flow between images and
develop new analytical tools reminiscent of those from differential geometry
using the idea of optical flow manifolds (OFMs). We define a new metric for
IAMs that satisfies certain local isometry conditions, and we show how to use
this metric to develop new tools such as flow fields on IAMs, parallel flow
fields, and parallel transport, as well as an intuitive notion of curvature. The
space of optical flow fields along a path of constant curvature has a natural
multi-scale structure via a monoid structure on the space of all flow fields
along a path. We also derive lower bounds on the error incurred when
approximating non-parallel flow fields by parallel flow fields.
|
1111.5123
|
Pretty Private Group Management
|
cs.DC cs.SI
|
Group management is a fundamental building block of today's Internet
applications. Mailing lists, chat systems, and collaborative document editing
tools, as well as online social networks such as Facebook and Twitter, use
group management systems. In many cases, group security is required in the
sense that access to
data is restricted to group members only. Some applications also require
privacy by keeping group members anonymous and unlinkable. Group management
systems routinely rely on a central authority that manages and controls the
infrastructure and data of the system. Personal user data related to groups
then becomes de facto accessible to the central authority. In this paper, we
propose a completely distributed approach for group management based on
distributed hash tables. As there is no enrollment to a central authority, the
created groups can be leveraged by various applications. Following this
paradigm we describe a protocol for such a system. We consider security and
privacy issues inherently introduced by removing the central authority and
provide a formal validation of security properties of the system using AVISPA.
We demonstrate the feasibility of this protocol by implementing a prototype
running on top of Vuze's DHT.
|
1111.5135
|
A New IRIS Normalization Process For Recognition System With
Cryptographic Techniques
|
cs.CV cs.CR
|
Biometric technologies are the foundation of personal identification systems.
They provide identification based on a unique feature possessed by the
individual. This paper provides a walkthrough of image acquisition,
segmentation, normalization, feature extraction, and matching based on human
iris imaging. A Canny edge detection scheme and a circular Hough transform are
used to detect the iris boundaries in the digital image of the eye. The
extracted iris region is normalized using an image registration technique; a
phase-correlation-based method is used for this purpose. The features of the
iris region are encoded by convolving the normalized iris region with a 2D
Gabor filter. Hamming distance measurement is used to compare the quantized
vectors and authenticate users. To improve security, the Reed-Solomon technique
is employed directly to encrypt and decrypt the data. Experimental results show
that our system is quite effective and provides encouraging performance.
Keywords: Biometric, Iris Recognition, Phase correlation, Cryptography,
Reed-Solomon
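The matching step, comparing quantized iris codes by Hamming distance over the bit positions both noise masks mark as valid, can be sketched as follows; the bit-level layout and the tiny example codes are assumptions for illustration.

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes, counting
    only bit positions that both noise masks mark as valid (1)."""
    valid = disagree = 0
    for a, b, ma, mb in zip(code_a, code_b, mask_a, mask_b):
        if ma and mb:
            valid += 1
            if a != b:
                disagree += 1
    if valid == 0:
        raise ValueError("no valid bits in common")
    return disagree / valid

code_a = [1, 0, 1, 1, 0, 0, 1, 0]
code_b = [1, 0, 0, 1, 0, 1, 1, 0]
mask_a = [1, 1, 1, 1, 1, 1, 0, 1]
mask_b = [1, 1, 1, 1, 0, 1, 1, 1]
hd = hamming_distance(code_a, code_b, mask_a, mask_b)
# Both masks are 1 at indices 0,1,2,3,5,7 -> 6 valid bits, of which
# indices 2 and 5 disagree, so hd = 2/6.
```

A system would accept a match when the distance falls below a tuned threshold; the threshold value is system-specific and not given in the abstract.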
|
1111.5207
|
Robot Companions: Technology for Humans
|
cs.RO
|
The creation of devices and mechanisms that help people has a long history.
Their inventors always targeted practical goals such as irrigation, harvesting,
devices for construction sites, measurement, and, last but not least, military
tasks, for different mechanical and later mechatronic systems. The development
of such assisting mechanisms dates back to Greek engineering, continued through
the Middle Ages, and finally led, in the 19th and 20th centuries, to the
autonomous devices we today call "robots". This chapter provides an overview of
several robotic technologies, introduces bio-/chemo-hybrid and collective
systems, and discusses their applications in service areas.
|
1111.5219
|
Awareness and Self-Awareness for Multi-Robot Organisms
|
cs.RO
|
Awareness and self-awareness are two different notions, related respectively to
knowing the environment and knowing oneself. In a general context, the
mechanism of self-awareness belongs to a class of so-called "self-issues"
(self-* or self-star):
self-adaptation, self-repairing, self-replication, self-development or
self-recovery. The self-* issues are connected in many ways to adaptability and
evolvability, to the emergence of behavior and to the controllability of
long-term developmental processes. Self-* are either natural properties of
several systems, such as self-assembling of molecular networks, or may emerge
as a result of homeostatic regulation. Different computational processes that
lead to global optimization and increase the scalability and reliability of
collective systems create such homeostatic regulation. Moreover, conditions
of ecological survival, imposed on such systems, lead to a discrimination
between "self" and "non-self" as well as to the emergence of different
self-phenomena. There are many profound challenges, such as understanding these
mechanisms, or long-term predictability, which have a considerable impact on
research in the area of artificial intelligence and intelligent systems.
|
1111.5228
|
Privacy-Preserving Methods for Sharing Financial Risk Exposures
|
q-fin.RM cs.CE cs.CR q-fin.CP
|
Unlike other industries in which intellectual property is patentable, the
financial industry relies on trade secrecy to protect its business processes
and methods, which can obscure critical financial risk exposures from
regulators and the public. We develop methods for sharing and aggregating such
risk exposures that protect the privacy of all parties involved and without the
need for a trusted third party. Our approach employs secure multi-party
computation techniques from cryptography in which multiple parties are able to
compute joint functions without revealing their individual inputs. In our
framework, individual financial institutions evaluate a protocol on their
proprietary data which cannot be inverted, leading to secure computations of
real-valued statistics such as concentration indexes, pairwise correlations, and
other single- and multi-point statistics. The proposed protocols are
computationally tractable on realistic sample sizes. Potential financial
applications include: the construction of privacy-preserving real-time indexes
of bank capital and leverage ratios; the monitoring of delegated portfolio
investments; financial audits; and the publication of new indexes of
proprietary trading strategies.
|
1111.5241
|
Refinement of Gini-Means Inequalities and Connections with Divergence
Measures
|
cs.IT math.IT
|
In 1938, Gini studied a mean having two parameters; many authors have since
studied its properties. It contains as particular cases famous means such as
the harmonic, geometric, and arithmetic means, as well as the power mean of
order r and the Lehmer mean. In this paper we consider inequalities arising
from the Gini mean and Heron's mean, and improve them based on results recently
obtained by the author (Taneja, 2011).
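For concreteness, the two-parameter Gini mean (for r ≠ s) and the particular cases named above can be sketched as follows; this is a minimal illustration, not the refined inequalities of the paper:

```python
def gini_mean(a, b, r, s):
    """Two-parameter Gini mean G_{r,s}(a, b) for r != s:
    ((a^r + b^r) / (a^s + b^s))^(1/(r - s)).
    The r == s case needs a separate limiting (logarithmic) form,
    omitted in this sketch."""
    if r == s:
        raise ValueError("limiting form for r == s not implemented")
    return ((a ** r + b ** r) / (a ** s + b ** s)) ** (1.0 / (r - s))

# Particular cases contained in the Gini mean:
print(gini_mean(4, 9, 1, 0))    # arithmetic mean: (4 + 9) / 2 = 6.5
print(gini_mean(4, 9, 0, -1))   # harmonic mean: 2 / (1/4 + 1/9)
print(gini_mean(4, 9, 2, 0))    # power mean of order 2
print(gini_mean(4, 9, 2, 1))    # Lehmer mean of order 2: 97 / 13
```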
|
1111.5272
|
Efficient High-Dimensional Inference in the Multiple Measurement Vector
Problem
|
cs.IT math.IT
|
In this work, a Bayesian approximate message passing algorithm is proposed
for solving the multiple measurement vector (MMV) problem in compressive
sensing, in which a collection of sparse signal vectors that share a common
support are recovered from undersampled noisy measurements. The algorithm,
AMP-MMV, is capable of exploiting temporal correlations in the amplitudes of
non-zero coefficients, and provides soft estimates of the signal vectors as
well as the underlying support. Central to the proposed approach is an
extension of recently developed approximate message passing techniques to the
amplitude-correlated MMV setting. Aided by these techniques, AMP-MMV offers a
computational complexity that is linear in all problem dimensions. In order to
allow for automatic parameter tuning, an expectation-maximization algorithm
that complements AMP-MMV is described. Finally, a detailed numerical study
demonstrates the power of the proposed approach and its particular suitability
for application to high-dimensional problems.
|
1111.5280
|
Stochastic gradient descent on Riemannian manifolds
|
math.OC cs.LG stat.ML
|
Stochastic gradient descent is a simple approach to find the local minima of
a cost function whose evaluations are corrupted by noise. In this paper, we
develop a procedure extending stochastic gradient descent algorithms to the
case where the function is defined on a Riemannian manifold. We prove that, as
in the Euclidean case, the gradient descent algorithm converges to a critical
point of the cost function. The algorithm has numerous potential applications,
and is illustrated here by four examples. In particular a novel gossip
algorithm on the set of covariance matrices is derived and tested numerically.
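As a minimal illustration of the idea, and not the paper's algorithm or its gossip application, here is stochastic gradient descent on the unit sphere: the noisy Euclidean gradient is projected onto the tangent space at x, and each step is followed by a retraction (renormalization). The toy cost x'Ax is minimized over the sphere by the eigenvector of the smallest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([3.0, 2.0, 1.0])      # toy cost f(x) = x'Ax on the unit sphere

x = rng.normal(size=3)
x /= np.linalg.norm(x)            # start on the manifold

for t in range(1, 2001):
    g = 2 * A @ x + 0.01 * rng.normal(size=3)   # noisy Euclidean gradient
    g_tan = g - (x @ g) * x                     # project onto tangent space at x
    x = x - (1.0 / t) * g_tan                   # Euclidean step with rate 1/t ...
    x /= np.linalg.norm(x)                      # ... then retract to the sphere

# x converges (up to sign) to e_3, the eigenvector of the smallest eigenvalue
print(abs(x[2]))
```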
|
1111.5287
|
On the Gaussian Z-Interference Channel with Processing Energy Cost
|
cs.IT math.IT
|
This work considers a Gaussian interference channel with processing energy
cost, which explicitly takes into account the energy expended for processing
when each transmitter is on. With processing overhead, bursty transmission at
each transmitter generally becomes more advantageous. Assuming on-off states do
not carry information, for a two-user Z-interference channel, the new regime of
very strong interference is identified and shown to be enlarged compared with
the conventional one. With the interfered receiver listening when its own
transmitter is silent, for a wide range of cross-link power gains, one can
either achieve or get close to the interference-free upper bound on sum rate.
|
1111.5293
|
Rule based Part of speech Tagger for Homoeopathy Clinical realm
|
cs.CL
|
A tagger is a mandatory segment of most text scrutiny systems, as it
assigns a syntax class (e.g., noun, verb, adjective, adverb) to every word in a
sentence. In this paper, we present a simple part-of-speech tagger for
homoeopathy clinical language and report on its design. It exploits standard
patterns for evaluating sentences; an untagged clinical corpus of 20,085 words
is used, from which we selected 125 sentences (2,322 tokens). The problem of
tagging in natural language processing is to find a way to tag every word in a
text as a particular part of speech. The basic idea is to apply a set of rules
to clinical sentences and to each word. Accuracy is the leading factor in
evaluating any POS tagger, so the accuracy of the proposed tagger is also
discussed.
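The paper's actual rule set and clinical lexicon are not reproduced here, but a rule-based tagger of this kind can be sketched as follows, with a hypothetical mini-lexicon and suffix rules (all entries invented for illustration):

```python
# Hypothetical mini-lexicon and suffix rules -- not the paper's rule set.
LEXICON = {"patient": "NOUN", "feels": "VERB", "severe": "ADJ",
           "headache": "NOUN", "daily": "ADV"}
SUFFIX_RULES = [("ness", "NOUN"), ("ing", "VERB"), ("ly", "ADV"), ("ous", "ADJ")]

def tag(sentence):
    tagged = []
    for word in sentence.lower().split():
        if word in LEXICON:                       # rule 1: lexicon lookup
            tagged.append((word, LEXICON[word]))
            continue
        for suffix, pos in SUFFIX_RULES:          # rule 2: suffix heuristics
            if word.endswith(suffix):
                tagged.append((word, pos))
                break
        else:
            tagged.append((word, "NOUN"))         # default class: noun
    return tagged

print(tag("patient feels severe dizziness daily"))
```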
|
1111.5296
|
Analytical and Learning-Based Spectrum Sensing Time Optimization in
Cognitive Radio Systems
|
cs.NI cs.AI
|
Powerful spectrum sensing schemes enable cognitive radios (CRs) to find
transmission opportunities in spectral resources allocated exclusively to the
primary users. In this paper, maximizing the average throughput of a secondary
user by optimizing its spectrum sensing time is formulated, assuming that prior
knowledge of the presence and absence probabilities of the primary users
is available. The energy consumed for finding a transmission opportunity is
evaluated and a discussion on the impact of the number of the primary users on
the secondary user throughput and consumed energy is presented. In order to
avoid the challenges associated with the analytical method, as a second
solution, a systematic neural network-based sensing time optimization approach
is also proposed in this paper. The proposed adaptive scheme is able to find
the optimum value of the channel sensing time without any prior knowledge or
assumption about the wireless environment. The structure, performance, and
cooperation of the artificial neural networks used in the proposed method are
disclosed in detail and a set of illustrative simulation results is presented
to validate the analytical results as well as the performance of the proposed
learning-based optimization scheme.
|
1111.5312
|
Representations and Ensemble Methods for Dynamic Relational
Classification
|
cs.AI cs.SI physics.soc-ph stat.ML
|
Temporal networks are ubiquitous and evolve over time by the addition,
deletion, and changing of links, nodes, and attributes. Although many
relational datasets contain temporal information, the majority of existing
techniques in relational learning focus on static snapshots and ignore the
temporal dynamics. We propose a framework for discovering temporal
representations of relational data to increase the accuracy of statistical
relational learning algorithms. The temporal relational representations serve
as a basis for classification, ensembles, and pattern mining in evolving
domains. The framework includes (1) selecting the time-varying relational
components (links, attributes, nodes), (2) selecting the temporal granularity,
(3) predicting the temporal influence of each time-varying relational
component, and (4) choosing the weighted relational classifier. Additionally,
we propose temporal ensemble methods that exploit the temporal-dimension of
relational data. These ensembles outperform traditional and more sophisticated
relational ensembles while avoiding the issue of learning the optimal
representation. Finally, the space of temporal-relational models is evaluated
using a sample of classifiers. In all cases, the proposed temporal-relational
classifiers outperform competing models that ignore the temporal information.
The results demonstrate the capability and necessity of the temporal-relational
representations for classification, ensembles, and for mining temporal
datasets.
|
1111.5358
|
Contextually Guided Semantic Labeling and Search for 3D Point Clouds
|
cs.RO cs.AI cs.CV
|
RGB-D cameras, which give an RGB image together with depths, are becoming
increasingly popular for robotic perception. In this paper, we address the task
of detecting commonly found objects in the 3D point cloud of indoor scenes
obtained from such cameras. Our method uses a graphical model that captures
various features and contextual relations, including the local visual
appearance and shape cues, object co-occurrence relationships and geometric
relationships. With a large number of object classes and relations, the model's
parsimony becomes important and we address that by using multiple types of edge
potentials. We train the model using a maximum-margin learning approach. In our
experiments over a total of 52 3D scenes of homes and offices (composed from
about 550 views), we get a performance of 84.06% and 73.38% in labeling office
and home scenes respectively for 17 object classes each. We also present a
method for a robot to search for an object using the learned model and the
contextual information available from the current labelings of the scene. We
applied this algorithm successfully on a mobile robot for the task of finding
12 object classes in 10 different offices and achieved a precision of 97.56%
with 78.43% recall.
|
1111.5377
|
DECENT: A Decentralized Architecture for Enforcing Privacy in Online
Social Networks
|
cs.CR cs.NI cs.SI
|
A multitude of privacy breaches, both accidental and malicious, have prompted
users to distrust centralized providers of online social networks (OSNs) and
investigate decentralized solutions. We examine the design of a fully
decentralized (peer-to-peer) OSN, with a special focus on privacy and security.
In particular, we wish to protect the confidentiality, integrity, and
availability of user content and the privacy of user relationships. We propose
DECENT, an architecture for OSNs that uses a distributed hash table to store
user data, and features cryptographic protections for confidentiality and
integrity, as well as support for flexible attribute policies and fast
revocation. DECENT ensures that neither data nor social relationships are
visible to unauthorized users and provides availability through replication and
authentication of updates. We evaluate DECENT through simulation and
experiments on the PlanetLab network and show that DECENT is able to replicate
the main functionality of current centralized OSNs with manageable overhead.
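To illustrate the storage substrate, here is a generic consistent-hashing lookup (not DECENT's actual protocol; node names and keys are made up): each data object is stored at the node whose position follows the object's key on the hash ring.

```python
import hashlib
from bisect import bisect_right

def h(s):
    """Position of a node name or object key on the hash ring."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

# eight hypothetical peers placed on the ring
nodes = sorted(h(f"node-{i}") for i in range(8))

def responsible_node(key):
    """The node whose ring position follows the key (wrapping around)."""
    return nodes[bisect_right(nodes, h(key)) % len(nodes)]

# the same key is always routed to the same peer
print(responsible_node("alice/profile") == responsible_node("alice/profile"))
```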
|
1111.5382
|
Range-limited Centrality Measures in Complex Networks
|
physics.soc-ph cond-mat.stat-mech cs.DS cs.SI
|
Here we present a range-limited approach to centrality measures in both
non-weighted and weighted directed complex networks. We introduce an efficient
method that generates for every node and every edge its betweenness centrality
based on shortest paths of lengths not longer than $\ell = 1,...,L$ in case of
non-weighted networks, and for weighted networks the corresponding quantities
based on minimum weight paths with path weights not larger than $w_{\ell}=\ell
\Delta$, $\ell=1,2...,L=R/\Delta$. These measures provide a systematic
description on the positioning importance of a node (edge) with respect to its
network neighborhoods 1-step out, 2-steps out, etc. up to including the whole
network. We show that range-limited centralities obey universal scaling laws
for large non-weighted networks. As the computation of traditional centrality
measures is costly, this scaling behavior can be exploited to efficiently
estimate centralities of nodes and edges for all ranges, including the
traditional ones. The scaling behavior can also be exploited to show that the
ranking top-list of nodes (edges) based on their range-limited centralities
quickly freezes as function of the range, and hence the diameter-range top-list
can be efficiently predicted. We also show how to estimate the typical largest
node-to-node distance for a network of $N$ nodes, exploiting the aforementioned
scaling behavior. These observations are illustrated on model networks and on a
large social network inferred from cell-phone trace logs ($\sim 5.5\times 10^6$
nodes and $\sim 2.7\times 10^7$ edges). Finally, we apply these concepts to
efficiently detect the vulnerability backbone of a network (defined as the
smallest percolating cluster of the highest betweenness nodes and edges) and
illustrate the importance of weight-based centrality measures in weighted
networks in detecting such backbones.
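A simplified version of the range-limited node betweenness for unweighted graphs might look like the following (a depth-truncated Brandes pass counting ordered source-target pairs; the authors' method also covers edge betweenness and weighted paths):

```python
from collections import deque, defaultdict

def range_limited_betweenness(adj, L):
    """Node betweenness counting only shortest paths of length <= L in an
    unweighted graph (Brandes' algorithm with BFS truncated at depth L;
    ordered source-target pairs are counted)."""
    bc = defaultdict(float)
    for s in adj:
        dist, sigma, preds, order = {s: 0}, {s: 1.0}, defaultdict(list), []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            if dist[v] == L:          # range limit: stop expanding here
                continue
            for w in adj[v]:
                if w not in dist:
                    dist[w], sigma[w] = dist[v] + 1, 0.0
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = defaultdict(float)    # back-propagate path dependencies
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return dict(bc)

path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}  # path graph 1-2-3-4-5
print(range_limited_betweenness(path, L=1))  # length-1 paths have no interior node
print(range_limited_betweenness(path, L=4))  # recovers full betweenness
```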
|
1111.5417
|
How people make friends in social networking sites - A microscopic
perspective
|
physics.soc-ph cs.SI
|
We study the detailed growth of a social networking site with full temporal
information by examining the creation process of each friendship relation that
can collectively lead to the macroscopic properties of the network. We first
study the reciprocal behavior of users, and find that link requests are quickly
responded to and that the distribution of reciprocation intervals decays in an
exponential form. The degrees of inviters/accepters are slightly negatively
correlated with reciprocation time. In addition, the temporal feature of the
online community shows that the distributions of intervals of user behaviors,
such as sending or accepting link requests, follow a power law with a universal
exponent, and peaks emerge for intervals of an integral day. We finally study
the preferential selection and linking phenomena of the social networking site
and find that, for the former, a linear preference holds for preferential
sending and reception, and for the latter, a linear preference also holds for
preferential acceptance, creation, and attachment. Based on the linearly
preferential linking, we put forward an analyzable network model which can
reproduce the degree distribution of the network. The research framework
presented in the paper could provide a potential insight into how the
micro-motives of users lead to the global structure of online social networks.
|
1111.5425
|
Are problems in Quantum Information Theory (un)decidable?
|
quant-ph cs.IT math-ph math.IT math.MP
|
This note is intended to foster a discussion about the extent to which
typical problems arising in quantum information theory are algorithmically
decidable (in principle rather than in practice). Various problems in the
context of entanglement theory and quantum channels turn out to be decidable
via quantifier elimination as long as they admit a compact formulation without
quantification over integers. For many asymptotically defined properties which
have to hold for all or for one integer N, however, effective procedures seem
to be difficult if not impossible to find. We review some of the main tools for
(dis)proving decidability and apply them to problems in quantum information
theory. We find that questions like "can we overcome fidelity 1/2 w.r.t. a
two-qubit singlet state?" easily become undecidable. A closer look at such
questions might rule out some of the "single-letter" formulas sought in quantum
information theory.
|
1111.5454
|
The Management and Use of Social Network Sites in a Government
Department
|
cs.SI cs.SE
|
In this paper we report findings from a study of social network site use in a
UK Government department. We have investigated this from a managerial,
organisational perspective. We found at the study site that there are already
several social network technologies in use, and that these: misalign with and
problematize organisational boundaries; blur boundaries between working and
social lives; present differing opportunities for control; have different
visibilities; have overlapping functionality with each other and with other
information technologies; that they evolve and change over time; and that their
uptake is conditioned by existing infrastructure and availability. We find the
organisational complexity that social technologies are often hoped to cut
across is, in reality, something that shapes their uptake and use. We argue the
idea of a single, central social network site for supporting cooperative work
within an organisation will hit the same problems as any effort of
centralisation in organisations. We argue that while there is still plenty of
scope for design and innovation in this area, an important challenge now is in
supporting organisations in managing what can best be referred to as a social
network site 'ecosystem'.
|
1111.5479
|
The Graphical Lasso: New Insights and Alternatives
|
stat.ML cs.LG
|
The graphical lasso \citep{FHT2007a} is an algorithm for learning the
structure in an undirected Gaussian graphical model, using $\ell_1$
regularization to control the number of zeros in the precision matrix
${\B\Theta}={\B\Sigma}^{-1}$ \citep{BGA2008,yuan_lin_07}. The {\texttt R}
package \GL\ \citep{FHT2007a} is popular, fast, and allows one to efficiently
build a path of models for different values of the tuning parameter.
Convergence of \GL\ can be tricky; the converged precision matrix might not be
the inverse of the estimated covariance, and occasionally it fails to converge
with warm starts. In this paper we explain this behavior, and propose new
algorithms that appear to outperform \GL.
By studying the "normal equations" we see that \GL\ is solving the {\em
dual} of the graphical lasso penalized likelihood, by block coordinate ascent;
a result which can also be found in \cite{BGA2008}.
In this dual, the target of estimation is $\B\Sigma$, the covariance matrix,
rather than the precision matrix $\B\Theta$. We propose similar primal
algorithms \PGL\ and \DPGL, that also operate by block-coordinate descent,
where $\B\Theta$ is the optimization target. We study all of these algorithms,
and in particular different approaches to solving their coordinate
sub-problems. We conclude that \DPGL\ is superior from several points of view.
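As a small hands-on illustration of the estimation problem itself (using scikit-learn's GraphicalLasso rather than the R glasso package or the P-GLASSO/DP-GLASSO variants discussed above; the data and penalty value are made up):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# sparse ground-truth precision matrix with a chain dependence structure
theta = np.eye(4) + 0.4 * (np.eye(4, k=1) + np.eye(4, k=-1))
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(theta), size=2000)

# the l1 penalty (alpha) controls the number of zeros in the precision matrix
model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))
```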
|
1111.5483
|
The diminishing role of hubs in dynamical processes on complex networks
|
cs.IT math.IT nlin.AO physics.soc-ph
|
It is notoriously difficult to predict the behaviour of a complex
self-organizing system, where the interactions among dynamical units form a
heterogeneous topology. Even if the dynamics of each microscopic unit is known,
a real understanding of their contributions to the macroscopic system behaviour
is still lacking. Here we develop information-theoretical methods to
distinguish the contribution of each individual unit to the collective
out-of-equilibrium dynamics. We show that for a system of units connected by a
network of interaction potentials with an arbitrary degree distribution, highly
connected units have less impact on the system dynamics as compared to
intermediately connected units. In an equilibrium setting, the hubs are often
found to dictate the long-term behaviour. However, we find both analytically
and experimentally that the instantaneous states of these units have a
short-lasting effect on the state trajectory of the entire system. We present
qualitative evidence of this phenomenon from empirical findings about a social
network of product recommendations, a protein-protein interaction network, and
a neural network, suggesting that it might indeed be a widespread property in
nature.
|
1111.5484
|
A class of punctured simplex codes which are proper for error detection
|
cs.IT math.IT
|
Binary linear [n,k] codes that are proper for error detection are known for
many combinations of n and k. For the remaining combinations, existence of
proper codes is conjectured. In this paper, a particular class of [n,k] codes
is studied in detail. In particular, it is shown that these codes are proper
for many combinations of n and k which were previously unsettled.
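In this setting, a code with weight distribution {A_i} is proper if its probability of undetected error P_ue(p) = sum_{i>=1} A_i p^i (1-p)^{n-i} is monotonically increasing on [0, 1/2]. A numerical sketch of this check, using generic examples rather than the punctured codes of the paper:

```python
def p_ue(weights, n, p):
    """Probability of undetected error over a BSC with crossover p,
    from the weight distribution {A_i} of a binary linear [n,k] code."""
    return sum(a * p ** i * (1 - p) ** (n - i)
               for i, a in weights.items() if i > 0)

def is_proper(weights, n, grid=1000):
    """Numerically check monotonicity of P_ue on [0, 1/2]."""
    ps = [0.5 * j / (grid - 1) for j in range(grid)]
    vals = [p_ue(weights, n, p) for p in ps]
    return all(b >= a - 1e-12 for a, b in zip(vals, vals[1:]))

print(is_proper({0: 1, 3: 1}, n=3))  # [3,1] repetition code: P_ue = p^3, proper
print(is_proper({0: 1, 4: 7}, n=7))  # [7,3] simplex code, proper
print(is_proper({0: 1, 1: 5}, n=5))  # hypothetical distribution: not proper
```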
|
1111.5485
|
Membership(s) and compliance(s) with class-based graphs
|
cs.SI physics.soc-ph
|
Besides the need for a better understanding of networks, there is a need for
prescriptive models and tools to specify requirements concerning networks and
their associated graph representations. We propose class-based graphs as a
means to specify requirements concerning object-based graphs. Various variants
of membership are proposed as special relations between class-based and
object-based graphs at the local level, while various variants of compliance
are proposed at the global level.
|
1111.5493
|
A Formalization of Social Requirements for Human Interactions with
Service Protocols
|
cs.SI cs.SE
|
Collaboration models and tools aim at improving the efficiency and
effectiveness of human interactions. Although social relations among
collaborators have been identified as having a strong influence on
collaboration, they are still insufficiently taken into account in current
collaboration models and tools. In this paper, the concept of service protocols
is proposed as a model for human interactions supporting social requirements,
i.e., sets of constraints on the relations among interacting humans. Service
protocols have been proposed as an answer to the need for models for human
interactions in which not only the potential sequences of activities are
specified, as in process models, but also the constraints on the relations among
collaborators. Service protocols are based on two main ideas: first, service
protocols are rooted in the service-oriented architecture (SOA): each service
protocol contains a service-oriented summary which provides a representation of
the activities of an associated process model in SOA terms. Second, a
class-based graph, referred to as a service network schema, restricts the set of
potential service elements that may participate in the service protocol by
defining constraints on nodes and constraints on arcs, i.e., social
requirements. Another major contribution to the modelling of human interactions
is a unified approach organized around the concept of service, understood in a
broad sense with services being not only Web services, but also provided by
humans.
|
1111.5502
|
Modelling Competences for Partner Selection in Service-Oriented Virtual
Organization Breeding Environments
|
cs.SE cs.SI
|
In the context of globalization and dynamic markets, collaboration among
organizations is a condition sine qua non for organizations, especially small
and medium enterprises, to remain competitive. Virtual organizations have been
proposed as an organizational structure adapted to collaboration among
organizations. The concept of Virtual Organization Breeding Environment (VOBE)
has been proposed as a means to support the creation and operation of virtual
organizations. With the rise of the service-oriented architecture (SOA), the
concept of service-oriented VOBE (SOVOBE) has been proposed as a VOBE
systematically organized around the concept of services. In the context of
SOVOBEs, novel competence models supporting both service orientation and
collaboration among organizations have to be developed to support efficiently
partner selection, a key aspect of VO creation. In this paper, such a
competence model is presented. Our competence model consists of a competence
description model, a competence verification method, and a competence search
method. The competence description model is an information model to describe
organizations, their competences, and the services they provide. The competence
verification method enables the verification of the reliability and relevance of
competence descriptions. The competence search method allows a VO planner to
select appropriate partners based on VO specifications, encompassing competence
requirements. Finally, implementation concerns based on the development of the
prototype ErGo system are presented.
|
1111.5518
|
Efficient Super-Peer-Based Queries Routing: Simulation and Evaluation
|
cs.DB cs.NI
|
Peer-to-peer (P2P) data-sharing systems now generate a significant portion of
internet traffic. P2P systems have emerged as a popular way to share huge
volumes of data. Requirements for widely distributed information systems
supporting virtual organizations have given rise to a new category of P2P
systems called schema-based. In such systems each peer is a database
management system in itself, exposing its own schema. A fundamental problem
that confronts peer-to-peer applications is the efficient location of the node
that stores a desired data item. In such settings, the main objective is the
efficient search across peer databases by processing each incoming query
without overly consuming bandwidth. The usability of these systems depends on
effective techniques to find and retrieve data; however, efficient and
effective routing of content-based queries is an emerging problem in P2P
networks. In this paper, we propose an architecture based on super-peers, and
we focus on query routing. In our approach, super-peers with similar interests
are grouped together to enable an efficient query routing method. In such
groups, called Knowledge-Super-Peers (KSP), super-peers submit queries that are
often processed by members of the group.
|
1111.5534
|
How people interact in evolving online affiliation networks
|
physics.soc-ph cs.SI
|
The study of human interactions is of central importance for understanding
the behavior of individuals, groups and societies. Here, we observe the
formation and evolution of networks by monitoring the addition of all new links
and we analyze quantitatively the tendencies used to create ties in these
evolving online affiliation networks. We first show that an accurate estimation
of these probabilistic tendencies can only be achieved by following the time
evolution of the network. For example, actions that are attributed to the usual
friend of a friend mechanism through a static snapshot of the network are
overestimated by a factor of two. A detailed analysis of the dynamic network
evolution shows that half of those triangles were generated through other
mechanisms, in spite of the characteristic static pattern. We start by
characterizing every single link when the tie was established in the network.
This allows us to describe the probabilistic tendencies of tie formation and
extract sociological conclusions as follows. The tendencies to add new links
differ significantly from what we would expect if they were not affected by the
individuals' structural position in the network, i.e., from random link
formation. We also find significant differences in behavioral traits among
individuals according to their degree of activity, gender, age, popularity and
other attributes. For instance, in the particular datasets analyzed here, we
find that women reciprocate connections three times as much as men and this
difference increases with age. Men tend to connect with the most popular people
more often than women across all ages. On the other hand, triangular ties
tendencies are similar and independent of gender. Our findings can be useful to
build models of realistic social network structures and discover the underlying
laws that govern establishment of ties in evolving social networks.
|
1111.5548
|
Computation of generalized inverses using Php/MySql environment
|
cs.DB cs.DS math.NA
|
The main aim of this paper is to develop a client/server-based model for
computing the weighted Moore-Penrose inverse using the partitioning method as
well as for storage of generated results. The web application is developed in
the PHP/MySQL environment. The source code is open and free for testing by
using a web browser. Influence of different matrix representations and storage
systems on the computational time is investigated. The CPU time for searching
the previously stored pseudo-inverses is compared with the CPU time spent for
new computation of the same inverses.
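The weighted version and the PHP/MySQL front end are not reproduced here, but the unweighted core of the partitioning approach, Greville's column-recursive computation of the Moore-Penrose inverse, can be sketched and checked against numpy.linalg.pinv:

```python
import numpy as np

def greville_pinv(A):
    """Moore-Penrose inverse by Greville's column-partitioning recursion
    (the unweighted special case of the partitioning method)."""
    A = np.asarray(A, dtype=float)
    a1 = A[:, :1]
    G = a1.T / (a1.T @ a1) if np.any(a1) else a1.T   # pinv of first column
    for k in range(1, A.shape[1]):
        Ak_1, ak = A[:, :k], A[:, k:k + 1]
        d = G @ ak                          # projection coefficients
        c = ak - Ak_1 @ d                   # part of ak outside range(A_{k-1})
        if np.linalg.norm(c) > 1e-12:
            b = c.T / (c.T @ c)
        else:                               # ak linearly dependent on A_{k-1}
            b = (d.T @ G) / (1.0 + d.T @ d)
        G = np.vstack([G - d @ b, b])       # pinv of [A_{k-1} ak]
    return G

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(np.allclose(greville_pinv(A), np.linalg.pinv(A)))   # True
```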
|
1111.5595
|
The Dynamics of Protest Recruitment through an Online Network
|
physics.soc-ph cs.SI
|
The recent wave of mobilizations in the Arab world and across Western
countries has generated much discussion on how digital media is connected to
the diffusion of protests. We examine that connection using data from the surge
of mobilizations that took place in Spain in May 2011. We study recruitment
patterns in the Twitter network and find evidence of social influence and
complex contagion. We identify the network position of early participants (i.e.
the leaders of the recruitment process) and of the users who acted as seeds of
message cascades (i.e. the spreaders of information). We find that early
participants cannot be characterized by a typical topological position but
spreaders tend to be more central to the network. These findings shed light on
the connection between online networks, social contagion, and collective
dynamics, and offer an empirical test to the recruitment mechanisms theorized
in formal models of collective action.
|
1111.5612
|
Distributed Representation of Geometrically Correlated Images with
Compressed Linear Measurements
|
cs.CV cs.MM
|
This paper addresses the problem of distributed coding of images whose
correlation is driven by the motion of objects or positioning of the vision
sensors. It concentrates on the problem where images are encoded with
compressed linear measurements. We propose a geometry-based correlation model
in order to describe the common information in pairs of images. We assume that
the constitutive components of natural images can be captured by visual
features that undergo local transformations (e.g., translation) in different
images. We first identify prominent visual features by computing a sparse
approximation of a reference image with a dictionary of geometric basis
functions. We then pose a regularized optimization problem to estimate the
corresponding features in correlated images given by quantized linear
measurements. The estimated features have to comply with the compressed
information and to represent consistent transformation between images. The
correlation model is given by the relative geometric transformations between
corresponding features. We then propose an efficient joint decoding algorithm
that estimates the compressed images such that they stay consistent with both
the quantized measurements and the correlation model. Experimental results show
that the proposed algorithm effectively estimates the correlation between
images in multi-view datasets. In addition, the proposed algorithm provides
effective decoding performance that compares advantageously to independent
coding solutions as well as state-of-the-art distributed coding schemes based
on disparity learning.
|
1111.5639
|
A New Technique to Backup and Restore DBMS using XML and .NET
Technologies
|
cs.DB
|
In this paper, we propose a new technique for backing up and restoring
different Database Management Systems (DBMS). The technique enables backing up
and restoring part of or the whole database through a unified interface built
with ASP.NET and XML technologies. It presents a Web solution allowing
administrators to do their jobs from everywhere, locally or remotely. To show
the importance of our solution, we consider two case studies: Oracle 11g and
SQL Server 2008.
|
1111.5648
|
Falsification and future performance
|
stat.ML cs.IT cs.LG math.IT
|
We information-theoretically reformulate two measures of capacity from
statistical learning theory: empirical VC-entropy and empirical Rademacher
complexity. We show these capacity measures count the number of hypotheses
about a dataset that a learning algorithm falsifies when it finds the
classifier in its repertoire minimizing empirical risk. It then follows
that the future performance of predictors on unseen data is controlled in part
by how many hypotheses the learner falsifies. As a corollary we show that
empirical VC-entropy quantifies the message length of the true hypothesis in
the optimal code of a particular probability distribution, the so-called actual
repertoire.
|
1111.5654
|
Serf and Turf: Crowdturfing for Fun and Profit
|
cs.SI cs.CR
|
Popular Internet services in recent years have shown that remarkable things
can be achieved by harnessing the power of the masses using crowd-sourcing
systems. However, crowd-sourcing systems can also pose a real challenge to
existing security mechanisms deployed to protect Internet services. Many of
these techniques make the assumption that malicious activity is generated
automatically by machines, and perform poorly or fail if users can be organized
to perform malicious tasks using crowd-sourcing systems. Through measurements,
we have found surprising evidence showing that not only do malicious
crowd-sourcing systems exist, but they are rapidly growing in both user base
and total revenue. In this paper, we describe a significant effort to study and
understand these "crowdturfing" systems in today's Internet. We use detailed
crawls to extract data about the size and operational structure of these
crowdturfing systems. We analyze details of campaigns offered and performed in
these sites, and evaluate their end-to-end effectiveness by running active,
non-malicious campaigns of our own. Finally, we study and compare the source of
workers on crowdturfing sites in different countries. Our results suggest that
campaigns on these systems are highly effective at reaching users, and their
continuing growth poses a concrete threat to online communities such as social
networks, both in the US and elsewhere.
|
1111.5668
|
Connecting Spatially Coupled LDPC Code Chains
|
cs.IT cs.DM math.IT
|
Codes constructed from connected spatially coupled low-density parity-check
code (SC-LDPCC) chains are proposed and analyzed. It is demonstrated that
connecting coupled chains results in improved iterative decoding performance.
The constructed protograph ensembles have better iterative decoding thresholds
compared to an individual SC-LDPCC chain and require less computational
complexity per bit when operating in the near-threshold region. In addition, it
is shown that the proposed constructions are asymptotically good in terms of
minimum distance.
|
1111.5679
|
Fisher information as a performance metric for locally optimum
processing
|
physics.data-an cs.IT math.IT
|
For a known weak signal in additive white noise, the asymptotic performance
of a locally optimum processor (LOP) is shown to be given by the Fisher
information (FI) of a standardized even probability density function (PDF) of
noise in three cases: (i) the maximum signal-to-noise ratio (SNR) gain for a
periodic signal; (ii) the optimal asymptotic relative efficiency (ARE) for
signal detection; (iii) the best cross-correlation gain (CG) for signal
transmission. The minimal FI is unity, corresponding to a Gaussian PDF, whereas
the FI is strictly larger than unity for any non-Gaussian PDF. In the sense
of a realizable LOP, it is found that the dichotomous noise PDF possesses an
infinite FI for known weak signals perfectly processed by the corresponding
LOP. The significance of the FI is that it provides an upper bound for the
performance of locally optimum processing.
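The claim that the Gaussian attains the minimal FI of unity is easy to verify numerically. A small illustrative sketch (not from the paper): compute I(f) = ∫ f'(x)²/f(x) dx for the unit-variance Gaussian and the unit-variance Laplace density, whose FI is 2.

```python
import numpy as np

def fisher_information(pdf, lo=-20.0, hi=20.0, n=400_001):
    """Numerically estimate I(f) = integral of f'(x)^2 / f(x) dx."""
    x = np.linspace(lo, hi, n)
    f = pdf(x)
    df = np.gradient(f, x)
    mask = f > 1e-12                     # skip the far tails to avoid 0/0
    return float(np.sum(df[mask] ** 2 / f[mask]) * (x[1] - x[0]))

gauss = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # unit variance
b = 1 / np.sqrt(2)                       # Laplace scale giving unit variance
laplace = lambda x: np.exp(-np.abs(x) / b) / (2 * b)

fi_gauss = fisher_information(gauss)     # minimal FI, attained by the Gaussian
fi_laplace = fisher_information(laplace) # > 1 for a non-Gaussian PDF
print(round(fi_gauss, 2), round(fi_laplace, 2))  # 1.0 2.0
```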
|
1111.5682
|
Flip-OFDM for Optical Wireless Communications
|
cs.IT math.IT
|
We consider two unipolar OFDM techniques for optical wireless communications:
asymmetric clipped optical OFDM (ACO-OFDM) and Flip-OFDM. Both techniques can
be used to compensate for multipath distortion effects in optical wireless
channels. However, ACO-OFDM has been widely studied in the literature, while
the performance of Flip-OFDM has never been investigated. In this paper, we
conduct a performance analysis of Flip-OFDM and propose additional
modifications to the original scheme in order to compare the performance of both
techniques. Finally, it is shown by simulation that both techniques have the
same performance but different hardware complexities. In particular, for slow
fading channels, Flip-OFDM offers 50% saving in hardware complexity over
ACO-OFDM at the receiver.
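Flip-OFDM's unipolarity trick can be shown in isolation (an illustrative sketch, not the paper's full transceiver): a real bipolar OFDM frame is split into its positive part and its flipped negative part, both nonnegative and hence suitable for intensity modulation, and the receiver recovers the frame by subtraction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)       # real bipolar time-domain OFDM frame
pos = np.maximum(x, 0)            # subframe 1: positive part
neg = np.maximum(-x, 0)           # subframe 2: flipped negative part
assert (pos >= 0).all() and (neg >= 0).all()   # both are unipolar
x_rec = pos - neg                 # receiver-side recombination
print(np.allclose(x_rec, x))      # True
```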
|
1111.5687
|
Coron: A Platform for Knowledge Extraction from Databases
|
cs.DB
|
Coron is a domain- and platform-independent, multi-purpose data mining
toolkit, which incorporates not only a rich collection of data mining
algorithms, but also allows a number of auxiliary operations. To the best of
our knowledge, a data mining toolkit designed specifically for itemset
extraction and association rule generation like Coron does not exist elsewhere.
Coron also provides support for preparing and filtering data, and for
interpreting the extracted units of knowledge.
|
1111.5689
|
Revisiting Numerical Pattern Mining with Formal Concept Analysis
|
cs.AI
|
In this paper, we investigate the problem of mining numerical data in the
framework of Formal Concept Analysis. The usual way is to use a scaling
procedure --transforming numerical attributes into binary ones-- leading either
to a loss of information or of efficiency, in particular w.r.t. the volume of
extracted patterns. By contrast, we propose to work directly on numerical data
in a more precise and efficient way, and we prove it. For that, the notions of
closed patterns, generators and equivalence classes are revisited in the
numerical context. Moreover, two original algorithms are proposed and used in
an evaluation involving real-world data, showing the predominance of the
present approach.
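The numerical analogue of a closed itemset in the FCA setting can be illustrated in a few lines (a sketch of the idea of closed interval patterns, not the paper's algorithms; the data are made up): the closure of a set of objects is the tightest vector of intervals covering their attribute values.

```python
import itertools

data = {                          # objects x numerical attributes (toy data)
    'g1': (5, 7), 'g2': (6, 8), 'g3': (4, 8), 'g4': (9, 2),
}

def closure(objects):
    """Tightest interval vector covering the given objects' values."""
    cols = list(zip(*(data[o] for o in objects)))
    return tuple((min(c), max(c)) for c in cols)

def extent(pattern):
    """All objects whose values fall inside every interval of the pattern."""
    return {o for o, v in data.items()
            if all(lo <= x <= hi for x, (lo, hi) in zip(v, pattern))}

p = closure({'g1', 'g2'})
print(p, sorted(extent(p)))       # ((5, 6), (7, 8)) ['g1', 'g2']
```

The pattern is closed: its extent is exactly the object set it was built from, so no other object can be added without widening an interval.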
|
1111.5690
|
The Coron System
|
cs.DB
|
Coron is a domain- and platform-independent, multi-purpose data mining
toolkit, which incorporates not only a rich collection of data mining
algorithms, but also allows a number of auxiliary operations. To the best of
our knowledge, a data mining toolkit designed specifically for itemset
extraction and association rule generation like Coron does not exist elsewhere.
Coron also provides support for preparing and filtering data, and for
interpreting the extracted units of knowledge.
|
1111.5710
|
On Mean Field Convergence and Stationary Regime
|
math.DS cs.PF cs.SY
|
Assume that a family of stochastic processes on some Polish space $E$
converges to a deterministic process; the convergence is in distribution (hence
in probability) at every fixed point in time. This assumption holds for a large
family of processes, including many mean field interaction models, and is
weaker than previously assumed. We show that any limit point of an invariant
probability of the stochastic process is an invariant probability of the
deterministic process. The results are valid in discrete and in continuous
time.
|
1111.5720
|
A GP-MOEA/D Approach for Modelling Total Electron Content over Cyprus
|
cs.AI cs.NE
|
Vertical Total Electron Content (vTEC) is an ionospheric characteristic used
to derive the signal delay imposed by the ionosphere on near-vertical
trans-ionospheric links. The major aim of this paper is to design a prediction
model based on the main factors that influence the variability of this
parameter on a diurnal, seasonal and long-term time-scale. The model should be
accurate and general (comprehensive) enough for efficiently approximating the
high variations of vTEC. However, good approximation and generalization are
conflicting objectives. For this reason a Genetic Programming (GP) with
Multi-objective Evolutionary Algorithm based on Decomposition characteristics
(GP-MOEA/D) is designed and proposed for modeling vTEC over Cyprus.
Experimental results show that the Multi-Objective GP-model, considering real
vTEC measurements obtained over a period of 11 years, has produced a good
approximation of the modeled parameter and can be implemented as a local model
to account for the ionosphere-imposed error in positioning. In particular, the
GP-MOEA/D approach performs better than a Single Objective Optimization GP, a
GP with Non-dominated Sorting Genetic Algorithm-II (NSGA-II) characteristics
and the previously proposed Neural Network-based approach in most cases.
|
1111.5733
|
Social Service Brokerage based on UDDI and Social Requirements
|
cs.SE cs.SI
|
The choice of a suitable service provider is an important issue often
overlooked in existing architectures. Current systems focus mostly on the
service itself, paying little (if any) attention to the service provider. In
the Service Oriented Architecture (SOA), Universal Description, Discovery and
Integration (UDDI) registries have been proposed as a way to publish and find
information about available services. These registries have been criticized for
not being completely trustworthy. In this paper, an enhancement of existing
mechanisms for finding services is proposed. The concept of a Social Service
Broker addressing both service and social requirements is introduced. While UDDI
registries still provide information about available services, methods from
Social Network Analysis are proposed as a way to evaluate and rank the services
proposed by a UDDI registry in social terms.
|
1111.5735
|
Efficient Joint Network-Source Coding for Multiple Terminals with Side
Information
|
cs.IT math.IT
|
Consider the problem of source coding in networks with multiple receiving
terminals, each having access to some kind of side information. In this case,
standard coding techniques are either prohibitively complex to decode, or
require network-source coding separation, resulting in sub-optimal transmission
rates. To alleviate this problem, we offer a joint network-source coding scheme
based on matrix sparsification at the code design phase, which allows the
terminals to use an efficient decoding procedure (syndrome decoding using
LDPC), despite the network coding throughout the network. Via a novel relation
between matrix sparsification and rate-distortion theory, we give lower and
upper bounds on the best achievable sparsification performance. These bounds
allow us to analyze our scheme, and, in particular, show that in the limit
where all receivers have comparable side information (in terms of conditional
entropy), or, equivalently, have weak side information, a vanishing density can
be achieved. As a result, efficient decoding is possible at all terminals
simultaneously. Simulation results motivate the use of this scheme at
non-limiting rates as well.
|
1111.5750
|
Effects of mass media on opinion spreading in the Sznajd sociophysics
model
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
In this work we consider the influence of mass media in the dynamics of the
two-dimensional Sznajd model. This influence acts as an external field, and it
is introduced in the model by means of a probability $p$ of the agents to
follow the media opinion. We performed Monte Carlo simulations on square
lattices with different sizes, and our numerical results suggest a change on
the critical behavior of the model, with the absence of the usual phase
transition for $p \gtrsim 0.18$. Another effect of the probability $p$ is to
decrease the average relaxation times $\tau$, that are log-normally
distributed, as in the standard model. In addition, the $\tau$ values depend on
the lattice size $L$ in a power-law form, $\tau\sim L^{\alpha}$, where the
power-law exponent depends on the probability $p$.
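The media field enters the dynamics as a simple probabilistic override. A minimal sketch of one common variant of the 2D Sznajd rule with media (the exact plaquette rule of the paper may differ; here an agreeing horizontal pair persuades its six surrounding neighbours, and with probability p the chosen agent simply adopts the media opinion +1):

```python
import random

def sznajd_media_step(lattice, L, p, media=+1, rng=random):
    """One toy update: media override with prob. p, else the pair rule."""
    i, j = rng.randrange(L), rng.randrange(L)
    if rng.random() < p:
        lattice[i][j] = media            # agent follows the media opinion
        return
    k = (j + 1) % L                      # right neighbour (periodic lattice)
    if lattice[i][j] == lattice[i][k]:   # an agreeing pair persuades its
        s = lattice[i][j]                # six surrounding neighbours
        for di, dj in [(-1, 0), (1, 0), (0, -1)]:
            lattice[(i + di) % L][(j + dj) % L] = s
        for di, dj in [(-1, 0), (1, 0), (0, 1)]:
            lattice[(i + di) % L][(k + dj) % L] = s

rng = random.Random(0)
L, p = 20, 0.3
lattice = [[rng.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
for _ in range(200_000):
    sznajd_media_step(lattice, L, p, rng=rng)
m = sum(sum(row) for row in lattice) / L ** 2
print(m)   # magnetization is strongly biased toward the media opinion +1
```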
|
1111.5773
|
Social Requirements for Virtual Organization Breeding Environments
|
cs.SI
|
The creation of Virtual Breeding Environments (VBE) is a topic which has
received too little attention: in most former works, the existence of the VBE
is either assumed, or is considered as the result of the voluntary,
participatory gathering of a set of candidate companies. In this paper, the
creation of a VBE by a third authority is considered: chambers of commerce, as
organizations whose goal is to promote and facilitate business interests and
activity in the community, could be good candidates for exogenous VBE creators.
During VBE planning, there is a need to specify social requirements for the
VBE. In this paper, SNA metrics are proposed as a way for a VBE planner to
express social requirements for a VBE to be created. Additionally, a set of
social requirements for VO planners, VO brokers, and VBE members is proposed.
|
1111.5799
|
Spatial Throughput of Mobile Ad Hoc Networks Powered by Energy
Harvesting
|
cs.IT math.IT
|
Designing mobiles to harvest ambient energy, such as kinetic motion or
electromagnetic radiation, will enable wireless networks to be self-sustaining,
besides alleviating global warming. In this paper, the spatial throughput of a
mobile ad hoc network powered by energy harvesting is analyzed using a
stochastic-geometry model. In this model, transmitters are distributed as a
Poisson point process and energy arrives at each transmitter randomly with a
uniform average rate called the energy arrival rate; upon harvesting sufficient
energy, each transmitter transmits with fixed power to an intended receiver
under an outage-probability constraint for a target
signal-to-interference-and-noise ratio. It is assumed that transmitters store
energy in batteries with infinite capacity. By applying the random-walk theory,
the probability that a transmitter transmits, called the transmission
probability, is proved to be equal to one if the energy-arrival rate exceeds
transmission power or otherwise is equal to their ratio. This result and tools
from stochastic geometry are applied to maximize the network throughput for a
given energy-arrival rate by optimizing transmission power. The maximum network
throughput is shown to be proportional to the optimal transmission probability,
which is equal to one if the transmitter density is below a derived function of
the energy-arrival rate or otherwise is smaller than one and solves a given
polynomial equation. Last, the limits of the maximum network throughput are
obtained for the extreme cases of high energy-arrival rates and dense networks.
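The closed-form transmission probability, min(1, energy-arrival rate / transmission power), can be checked with a toy slot-level simulation. An illustrative sketch (the exponential arrival distribution and the parameter values are assumptions; only the mean arrival rate matters for the result):

```python
import random

def transmission_probability(energy_rate, tx_power, slots=200_000, seed=1):
    """Fraction of slots in which a harvesting transmitter can fire:
    energy arrives i.i.d. each slot with mean `energy_rate`, the battery
    has infinite capacity, and a transmission drains `tx_power` units."""
    rng = random.Random(seed)
    battery, fired = 0.0, 0
    for _ in range(slots):
        battery += rng.expovariate(1.0 / energy_rate)  # harvested this slot
        if battery >= tx_power:
            battery -= tx_power
            fired += 1
    return fired / slots

# Claimed result: p_tx = min(1, energy_rate / tx_power)
print(round(transmission_probability(0.5, 1.0), 2))  # ≈ 0.5
print(round(transmission_probability(2.0, 1.0), 2))  # ≈ 1.0
```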
|
1111.5848
|
Receiver Architectures for MIMO-OFDM Based on a Combined VMP-SP
Algorithm
|
stat.ML cs.IT math.IT
|
Iterative information processing, either based on heuristics or analytical
frameworks, has been shown to be a very powerful tool for the design of
efficient, yet feasible, wireless receiver architectures. Within this context,
algorithms performing message-passing on a probabilistic graph, such as the
sum-product (SP) and variational message passing (VMP) algorithms, have become
increasingly popular.
In this contribution, we apply a combined VMP-SP message-passing technique to
the design of receivers for MIMO-OFDM systems. The message-passing equations of
the combined scheme can be obtained from the equations of the stationary points
of a constrained region-based free energy approximation. When applied to a
MIMO-OFDM probabilistic model, we obtain a generic receiver architecture
performing iterative channel weight and noise precision estimation,
equalization and data decoding. We show that this generic scheme can be
particularized to a variety of different receiver structures, ranging from
high-performance iterative structures to low complexity receivers. This allows
for a flexible design of the signal processing specially tailored for the
requirements of each specific application. The numerical assessment of our
solutions, based on Monte Carlo simulations, corroborates the high performance
of the proposed algorithms and their superiority to heuristic approaches.
|
1111.5867
|
Suboptimality of Nonlocal Means for Images with Sharp Edges
|
math.ST cs.CV cs.IT math.IT stat.TH
|
We conduct an asymptotic risk analysis of the nonlocal means image denoising
algorithm for the Horizon class of images that are piecewise constant with a
sharp edge discontinuity. We prove that the mean square risk of an optimally
tuned nonlocal means algorithm decays according to $n^{-1}\log^{1/2+\epsilon}
n$, for an $n$-pixel image with $\epsilon>0$. This decay rate is an improvement
over some of the predecessors of this algorithm, including the linear
convolution filter, median filter, and the SUSAN filter, each of which provides
a rate of only $n^{-2/3}$. It is also within a logarithmic factor from
optimally tuned wavelet thresholding. However, it is still substantially lower
than the optimal minimax rate of $n^{-4/3}$.
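A toy 1-D version of nonlocal means on a sharp-edge signal illustrates why patch-similarity weights preserve the discontinuity while averaging out noise (an illustrative sketch with ad hoc parameters, not the paper's experimental setup):

```python
import numpy as np

def nlm_denoise_1d(y, patch=3, h=0.5):
    """Toy 1-D nonlocal means: each sample becomes a weighted average of
    all samples, with weights driven by patch similarity."""
    n = len(y)
    pad = np.pad(y, patch, mode='edge')
    patches = np.array([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)
        w = np.exp(-d2 / h ** 2)        # similar patches get large weights
        out[i] = w @ y / w.sum()
    return out

rng = np.random.default_rng(0)
clean = np.where(np.arange(200) < 100, 0.0, 1.0)   # Horizon-like sharp edge
noisy = clean + 0.2 * rng.standard_normal(200)
denoised = nlm_denoise_1d(noisy)
mse_noisy = ((noisy - clean) ** 2).mean()
mse_nlm = ((denoised - clean) ** 2).mean()
print(mse_nlm < mse_noisy)   # True: NLM reduces the mean square risk
```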
|
1111.5880
|
Robustness Analysis for Battery Supported Cyber-Physical Systems
|
cs.ET cs.OS cs.SY
|
This paper establishes a novel analytical approach to quantify robustness of
scheduling and battery management for battery supported cyber-physical systems.
A dynamic schedulability test is introduced to determine whether tasks are
schedulable within a finite time window. The test is used to measure robustness
of a real-time scheduling algorithm by evaluating the strength of computing
time perturbations that break schedulability at runtime. Robustness of battery
management is quantified analytically by an adaptive threshold on the state of
charge. The adaptive threshold significantly reduces the false alarm rate for
battery management algorithms to decide when a battery needs to be replaced.
|
1111.5892
|
Evolving Chart Pattern Sensitive Neural Network Based Forex Trading
Agents
|
cs.NE cs.CE
|
Though machine learning has been applied to the foreign exchange market for
algorithmic trading for quite some time now, and neural networks (NNs) have been
shown to yield positive results, in most modern approaches the NN systems are
optimized through traditional methods such as the backpropagation algorithm,
and their input signals are price lists and lists composed of other
technical indicator elements. The aim of this paper is twofold: the
presentation and testing of the application of topology and weight evolving
artificial neural network (TWEANN) systems to automated currency trading, and
to demonstrate the performance when using Forex chart images as input to
geometrical-regularity-aware, indirectly encoded neural network systems,
enabling them to use the patterns and trends within when trading. This paper
presents the benchmark results of NN based automated currency trading systems
evolved using TWEANNs, and compares the performance and generalization
capabilities of these direct encoded NNs which use the standard sliding-window
based price vector inputs, and the indirect (substrate) encoded NNs which use
charts as input. The TWEANN algorithm I use in this paper to evolve these
currency trading agents is the memetic-algorithm-based TWEANN system called the
Deus Ex Neural Network (DXNN) platform.
|
1111.5897
|
Variational Splines and Paley--Wiener Spaces on Combinatorial Graphs
|
cs.IT math.IT
|
Notions of interpolating variational splines and Paley-Wiener spaces are
introduced on a combinatorial graph G. Both of these definitions rely on the
existence of a combinatorial Laplace operator on G. The existence and uniqueness
of interpolating variational splines on a graph are shown. As an application of
variational splines, the paper presents a reconstruction algorithm of
Paley-Wiener functions on graphs from their uniqueness sets.
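The variational idea can be sketched on a path graph (an illustrative sketch, not the paper's exact construction): among all signals matching the known samples on a uniqueness set, pick the one minimizing the Laplacian quadratic form fᵀL²f, a discrete smoothness penalty, by eliminating the fixed entries.

```python
import numpy as np

n = 12
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1                  # combinatorial Laplacian of a path

known = {0: 0.0, 5: 1.0, 11: 0.0}        # samples on the "uniqueness set"
idx = sorted(known)
free = [i for i in range(n) if i not in known]

Q = L @ L                                # smoothness penalty matrix L^2
# Minimize f^T Q f subject to f[idx] fixed, by eliminating the fixed block.
b = -Q[np.ix_(free, idx)] @ np.array([known[i] for i in idx])
f = np.zeros(n)
f[idx] = [known[i] for i in idx]
f[free] = np.linalg.solve(Q[np.ix_(free, free)], b)
print(np.round(f, 2))                    # a smooth interpolant of the samples
```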
|
1111.5899
|
Sampling, Filtering and Sparse Approximations on Combinatorial Graphs
|
cs.IT math.FA math.IT
|
In this paper we address sampling and approximation of functions on
combinatorial graphs. We develop filtering on graphs by using the Schr\"odinger
group of operators generated by the combinatorial Laplace operator. Then we
construct a sampling theory by proving Poincare and Plancherel-Polya-type
inequalities for functions on graphs. These results lead to a theory of sparse
approximations on graphs and have potential applications to filtering,
denoising, data dimension reduction, image processing, image compression,
computer graphics, visualization and learning theory.
|
1111.5900
|
Cubature formulas and discrete fourier transform on compact manifolds
|
math.FA cs.IT math.IT
|
The goal of the paper is to describe essentially optimal cubature formulas on
compact Riemannian manifolds which are exact on spaces of band-limited
functions.
|
1111.5930
|
Agent Development Toolkits
|
cs.MA
|
The development of agents, as well as their wide usage, requires good underlying
infrastructure. The literature indicates a scarcity of agent development tools
in the initial years of research, which limited the exploitation of this
beneficial technology. Today, however, a wide variety of tools is available for
developing robust infrastructure. This technical note provides a thorough
overview of such tools and contrasts the features they provide.
|
1111.5950
|
Non-Linear Transformations of Gaussians and Gaussian-Mixtures with
implications on Estimation and Information Theory
|
cs.IT math.IT math.PR math.ST stat.TH
|
This paper investigates the statistical properties of non-linear
transformations (NLT) of random variables, in order to establish useful tools
for estimation and information theory. Specifically, the paper focuses on
linear regression analysis of the NLT output and derives sufficient general
conditions to establish when the input-output regression coefficient is equal
to the \emph{partial} regression coefficient of the output with respect to a
(additive) part of the input. A special case is represented by zero-mean
Gaussian inputs, obtained as the sum of other zero-mean Gaussian random
variables. The paper shows how this property can be generalized to the
regression coefficient of non-linear transformations of Gaussian-mixtures. Due
to its generality, and the wide use of Gaussians and Gaussian-mixtures to
statistically model several phenomena, this theoretical framework can find
applications in multiple disciplines, such as communication, estimation, and
information theory, when part of the nonlinear transformation input is the
quantity of interest and the other part is the noise. In particular, the paper
shows how the said properties can be exploited to simplify closed-form
computation of the signal-to-noise ratio (SNR), the estimation mean-squared
error (MSE), and bounds on the mutual information in additive non-Gaussian
(possibly non-linear) channels, also establishing relationships among them.
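The Gaussian special case can be checked numerically in a few lines (an illustrative sketch with assumed parameters): with X = S + W, where S and W are independent zero-mean Gaussians, the regression coefficient of Y = g(X) on the full input X equals the partial regression coefficient of Y on the component S alone, a Stein's-lemma/Bussgang-type identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
S = rng.standard_normal(n) * 1.5        # signal part of the input
W = rng.standard_normal(n) * 0.8        # additive Gaussian noise part
Y = np.tanh(S + W)                      # a non-linear transformation

full_coeff = np.mean(Y * (S + W)) / np.mean((S + W) ** 2)
part_coeff = np.mean(Y * S) / np.mean(S ** 2)
print(round(full_coeff, 3), round(part_coeff, 3))  # the two coefficients agree
```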
|
1111.6030
|
An image processing of a Raphael's portrait of Leonardo
|
cs.CV
|
In one of his paintings, The School of Athens, Raphael depicted Leonardo
da Vinci as the philosopher Plato. Some image processing tools can help us
compare this portrait with two of Leonardo's portraits, which are considered
self-portraits.
|
1111.6074
|
Flavor network and the principles of food pairing
|
physics.soc-ph cs.SI
|
The cultural diversity of culinary practice, as illustrated by the variety of
regional cuisines, raises the question of whether there are any general
patterns that determine the ingredient combinations used in food today or
principles that transcend individual tastes and recipes. We introduce a flavor
network that captures the flavor compounds shared by culinary ingredients.
Western cuisines show a tendency to use ingredient pairs that share many flavor
compounds, supporting the so-called food pairing hypothesis. By contrast, East
Asian cuisines tend to avoid compound-sharing ingredients. Given the increasing
availability of information on food preparation, our data-driven investigation
opens new avenues towards a systematic understanding of culinary practice.
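The food-pairing statistic reduces to averaging shared compounds over ingredient pairs. A toy sketch (the compound lists below are invented for illustration; the paper uses real flavour-chemistry data):

```python
import itertools

compounds = {                       # hypothetical ingredient -> compound sets
    'beef':      {'c1', 'c2', 'c3'},
    'onion':     {'c2', 'c3', 'c4'},
    'cheese':    {'c1', 'c3', 'c5'},
    'soy_sauce': {'c6', 'c7'},
    'ginger':    {'c7', 'c8'},
    'sesame':    {'c6', 'c8', 'c9'},
}

def mean_shared(recipe):
    """Average number of flavour compounds shared by ingredient pairs."""
    pairs = list(itertools.combinations(recipe, 2))
    return sum(len(compounds[a] & compounds[b]) for a, b in pairs) / len(pairs)

western = ['beef', 'onion', 'cheese']        # compound-sharing combination
eastern = ['soy_sauce', 'ginger', 'sesame']  # little pairwise sharing
print(round(mean_shared(western), 2), round(mean_shared(eastern), 2))
```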
|
1111.6082
|
Trading Regret for Efficiency: Online Convex Optimization with Long Term
Constraints
|
cs.LG
|
In this paper we propose a framework for solving the constrained online convex
optimization problem. Our motivation stems from the observation that most
algorithms proposed for online convex optimization require a projection onto
the convex set $\mathcal{K}$ from which the decisions are made. While for
simple shapes (e.g. Euclidean ball) the projection is straightforward, for
arbitrary complex sets this is the main computational challenge and may be
inefficient in practice. In this paper, we consider an alternative online
convex optimization problem. Instead of requiring that decisions belong to
$\mathcal{K}$ for all rounds, we only require that the constraints which define
the set $\mathcal{K}$ be satisfied in the long run. We show that our framework
can be utilized to solve a relaxed version of online learning with side
constraints addressed in \cite{DBLP:conf/colt/MannorT06} and
\cite{DBLP:conf/aaai/KvetonYTM08}. By turning the problem into an online
convex-concave optimization problem, we propose an efficient algorithm which
achieves an $\tilde{\mathcal{O}}(\sqrt{T})$ regret bound and an
$\tilde{\mathcal{O}}(T^{3/4})$ bound for the violation of constraints. Then we
modify the algorithm in order to guarantee that the constraints are satisfied
in the long run. This gain is achieved at the price of getting an
$\tilde{\mathcal{O}}(T^{3/4})$ regret bound. Our second algorithm is based on
the Mirror Prox method \citep{nemirovski-2005-prox} to solve variational
inequalities which achieves an $\tilde{\mathcal{O}}(T^{2/3})$ bound for
both regret and the violation of constraints when the domain $\mathcal{K}$ can be
described by a finite number of linear constraints. Finally, we extend the
result to the setting where we only have partial access to the convex set
$\mathcal{K}$ and propose a multipoint bandit feedback algorithm with the same
bounds in expectation as our first algorithm.
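The convex-concave reformulation can be sketched with a primal-dual online gradient loop: instead of projecting onto K = {x : g(x) ≤ 0} each round, run gradient steps on the Lagrangian f_t(x) + λ·g(x) and let the dual variable accumulate the violation. An illustrative sketch with ad hoc step sizes, not the tuned rates achieving the paper's bounds:

```python
import random

rng = random.Random(0)
T, eta = 20_000, 0.05
x, lam = 0.0, 0.0
g = lambda x: x - 0.5                 # long-term constraint: x <= 0.5
total_violation = 0.0
for t in range(T):
    z = rng.uniform(0.5, 1.5)         # adversary's target pulls x past 0.5
    grad_f = 2 * (x - z)              # f_t(x) = (x - z)^2
    x -= eta * (grad_f + lam)         # primal step (d/dx of lam*g(x) is lam)
    lam = max(0.0, lam + eta * g(x))  # dual ascent on the violation
    total_violation += max(0.0, g(x))
print(total_violation / T)            # average per-round violation is small
```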
|
1111.6084
|
Semantic Query Reformulation in Social PDMS
|
cs.DB cs.SI
|
We consider social peer-to-peer data management systems (PDMS), where each
peer maintains both semantic mappings between its schema and some
acquaintances, and social links with peer friends. In this context,
reformulating a query from a peer's schema into other peers' schemas is a hard
problem, as it may generate as many rewritings as there are mappings from that
peer to the outside, and so on transitively, eventually traversing the entire
network. However, not all the obtained rewritings are relevant to a given
query. In this paper, we address this problem by inspecting semantic mappings
and social links to find only relevant rewritings. We propose a new notion of
'relevance' of a query with respect to a mapping, and, based on this notion, a
new semantic query reformulation approach for social PDMS, which achieves great
accuracy and flexibility. To find rapidly the most interesting mappings, we
combine several techniques: (i) social links are expressed as FOAF (Friend of a
Friend) links to characterize peers' friendships, and compact mapping summaries
are used to obtain mapping descriptions; (ii) local semantic views are special
views that contain information about external mappings; and (iii) gossiping
techniques improve the search of relevant mappings. Our experimental
evaluation, based on a prototype on top of PeerSim and a simulated network,
demonstrates that our solution yields greater recall, compared to traditional
query translation approaches proposed in the literature.
|
1111.6087
|
Fast Distributed Computation of Distances in Networks
|
cs.DC cs.NI cs.SI
|
This paper presents a distributed algorithm to simultaneously compute the
diameter, radius and node eccentricity in all nodes of a synchronous network.
Such topological information may be useful as input to configure other
algorithms. Previous approaches have been modular, progressing in sequential
phases using building blocks such as BFS tree construction, thus incurring
longer executions than strictly required. We present an algorithm that, by
timely propagation of available estimations, achieves a faster convergence to
the correct values. We show local criteria for detecting convergence in each
node. The algorithm avoids the creation of BFS trees and simply manipulates
sets of node ids and hop counts. For the worst scenario of variable start
times, each node i with eccentricity ecc(i) can compute: the node eccentricity
in diam(G)+ecc(i)+2 rounds; the diameter in 2*diam(G)+ecc(i)+2 rounds; and the
radius in diam(G)+ecc(i)+2*radius(G) rounds.
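The flooding idea behind the algorithm can be sketched in synchronous rounds (an illustrative sketch; the paper's algorithm additionally detects convergence locally, while this sketch simply runs enough rounds on a known graph): each node repeatedly merges its neighbours' tables of (node id, best-known hop count), after which eccentricity, diameter and radius follow.

```python
def distance_tables(adj, rounds):
    """Synchronous flooding of (id -> hop count) tables over the graph."""
    tables = {v: {v: 0} for v in adj}
    for _ in range(rounds):
        new = {v: dict(t) for v, t in tables.items()}
        for v in adj:
            for u in adj[v]:                      # receive neighbour tables
                for w, d in tables[u].items():
                    if d + 1 < new[v].get(w, float('inf')):
                        new[v][w] = d + 1
        tables = new
    return tables

# A 6-node path graph: eccentricities 5,4,3,3,4,5 -> diameter 5, radius 3.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
tables = distance_tables(adj, rounds=6)
ecc = {v: max(t.values()) for v, t in tables.items()}
print(max(ecc.values()), min(ecc.values()))   # 5 3  (diameter, radius)
```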
|
1111.6115
|
Discovering Network Structure Beyond Communities
|
physics.soc-ph cond-mat.dis-nn cs.SI nlin.AO
|
To understand the formation, evolution, and function of complex systems, it
is crucial to understand the internal organization of their interaction
networks. Partly due to the impossibility of visualizing large complex
networks, resolving network structure remains a challenging problem. Here we
overcome this difficulty by combining the visual pattern recognition ability of
humans with the high processing speed of computers to develop an exploratory
method for discovering groups of nodes characterized by common network
properties, including but not limited to communities of densely connected
nodes. Without any prior information about the nature of the groups, the method
simultaneously identifies the number of groups, the group assignment, and the
properties that define these groups. The results of applying our method to real
networks suggest the possibility that most group structures lurk undiscovered
in the fast-growing inventory of social, biological, and technological networks
of scientific interest.
|
1111.6116
|
AstroDAbis: Annotations and Cross-Matches for Remote Catalogues
|
astro-ph.IM cs.DL cs.IR
|
Astronomers are good at sharing data, but poorer at sharing knowledge.
Almost all astronomical data ends up in open archives, and access to these is
being simplified by the development of the global Virtual Observatory (VO).
This is a great advance, but the fundamental problem remains that these
archives contain only basic observational data, whereas all the astrophysical
interpretation of that data -- which source is a quasar, which a low-mass star,
and which an image artefact -- is contained in journal papers, with very little
linkage back from the literature to the original data archives. It is therefore
currently impossible for an astronomer to pose a query like "give me all
sources in this data archive that have been identified as quasars" and this
limits the effective exploitation of these archives, as the user of an archive
has no direct means of taking advantage of the knowledge derived by its
previous users.
The AstroDAbis service aims to address this, in a prototype service enabling
astronomers to record annotations and cross-identifications in the AstroDAbis
service, annotating objects in other catalogues. We have deployed two
interfaces to the annotations: an astronomy-specific one using the TAP
protocol, and a second exploiting generic Linked Open Data (LOD) and RDF
techniques.
|
1111.6117
|
Principles of Solomonoff Induction and AIXI
|
cs.AI
|
We identify principles characterizing Solomonoff Induction by demands on an
agent's external behaviour. Key concepts are rationality, computability,
indifference and time consistency. Furthermore, we discuss extensions to the
full AI case to derive AIXI.
|
1111.6174
|
Resolving conflicts between statistical methods by probability
combination: Application to empirical Bayes analyses of genomic data
|
stat.ME cs.IT math.IT math.ST q-bio.QM stat.TH
|
In the typical analysis of a data set, a single method is selected for
statistical reporting even when equally applicable methods yield very different
results. Examples of equally applicable methods can correspond to those of
different ancillary statistics in frequentist inference and of different prior
distributions in Bayesian inference. More broadly, choices are made between
parametric and nonparametric methods and between frequentist and Bayesian
methods.
Rather than choosing a single method, it can be safer, in a game-theoretic
sense, to combine those that are equally appropriate in light of the available
information. Since methods of combining subjectively assessed probability
distributions are not objective enough for that purpose, this paper introduces
a method of distribution combination that does not require any assignment of
distribution weights. It does so by formalizing a hedging strategy in terms of
a game between three players: nature, a statistician combining distributions,
and a statistician refusing to combine distributions. The optimal move of the
first statistician reduces to the solution of a simpler problem of selecting an
estimating distribution that minimizes the Kullback-Leibler loss maximized over
the plausible distributions to be combined. The resulting combined distribution
is a linear combination of the most extreme of the distributions to be combined
that are scientifically plausible. The optimal weights are close enough to each
other that no extreme distribution dominates the others.
The new methodology is illustrated by combining conflicting empirical Bayes
methodologies in the context of gene expression data analysis.
|
1111.6188
|
Design of Optimal Sparse Feedback Gains via the Alternating Direction
Method of Multipliers
|
math.OC cs.SY
|
We design sparse and block sparse feedback gains that minimize the variance
amplification (i.e., the $H_2$ norm) of distributed systems. Our approach
consists of two steps. First, we identify sparsity patterns of feedback gains
by incorporating sparsity-promoting penalty functions into the optimal control
problem, where the added terms penalize the number of communication links in
the distributed controller. Second, we optimize feedback gains subject to
structural constraints determined by the identified sparsity patterns. In the
first step, the sparsity structure of feedback gains is identified using the
alternating direction method of multipliers, which is a powerful algorithm
well-suited to large optimization problems. This method alternates between
promoting the sparsity of the controller and optimizing the closed-loop
performance, which allows us to exploit the structure of the corresponding
objective functions. In particular, we take advantage of the separability of
the sparsity-promoting penalty functions to decompose the minimization problem
into sub-problems that can be solved analytically. Several examples are
provided to illustrate the effectiveness of the developed approach.
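The two-step ADMM scheme described above can be illustrated on a lasso-style stand-in problem, where the smooth "performance" term is an ordinary least-squares fit and the separable l1 sub-problem is solved analytically by soft thresholding. The matrix `A`, penalty weight `gamma`, and ADMM parameter `rho` below are illustrative assumptions, not the paper's distributed-control setup.

```python
import numpy as np

def soft_threshold(v, kappa):
    # Elementwise shrinkage: the analytic solution of the separable
    # l1 sub-problem (the "sparsity-promoting" ADMM step).
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_sparse(A, b, gamma, rho=1.0, iters=200):
    """ADMM for min 0.5*||A x - b||^2 + gamma*||x||_1, a lasso stand-in
    for the sparsity-promoting optimal-control problem in the abstract."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A; Atb = A.T @ b
    # Factor the quadratic step once; it is reused every iteration.
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(iters):
        # Performance step: smooth quadratic minimization.
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # Sparsity step: separable, solved analytically in closed form.
        z = soft_threshold(x + u, gamma / rho)
        # Dual update.
        u = u + x - z
    return z
```

The alternation mirrors the abstract: one step optimizes closed-loop performance, the other promotes sparsity, and separability of the penalty makes the second step trivial.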
|
1111.6191
|
Pattern-Based Classification: A Unifying Perspective
|
cs.AI
|
The use of patterns in predictive models is a topic that has received a lot
of attention in recent years. Pattern mining can help to obtain models for
structured domains, such as graphs and sequences, and has been proposed as a
means to obtain more accurate and more interpretable models. Despite the large
number of publications devoted to this topic, we believe that an
overview of what has been accomplished in this area is missing. This paper
presents our perspective on this evolving area. We identify the principles of
pattern mining that are important when mining patterns for models and provide
an overview of pattern-based classification methods. We categorize these
methods along the following dimensions: (1) whether they post-process a
pre-computed set of patterns or iteratively execute pattern mining algorithms;
(2) whether they select patterns model-independently or whether the pattern
selection is guided by a model. We summarize the results that have been
obtained for each of these methods.
|
1111.6201
|
Learning a Factor Model via Regularized PCA
|
cs.LG stat.ML
|
We consider the problem of learning a linear factor model. We propose a
regularized form of principal component analysis (PCA) and demonstrate through
experiments with synthetic and real data the superiority of resulting estimates
to those produced by pre-existing factor analysis approaches. We also establish
theoretical results that explain how our algorithm corrects the biases induced
by conventional approaches. An important feature of our algorithm is that its
computational requirements are similar to those of PCA, which enjoys wide use
in large part due to its efficiency.
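As a rough illustration only (the paper's exact estimator is not reproduced here), a generic "regularized PCA" can be sketched as a truncated SVD with singular-value shrinkage; the shrinkage rule and the parameter `tau` are assumptions for the sketch, which retains PCA's computational profile as the abstract emphasizes.

```python
import numpy as np

def regularized_pca(X, k, tau):
    """Illustrative sketch: PCA via SVD with singular-value shrinkage,
    a generic regularization in the spirit of (but not necessarily
    identical to) the paper's factor-model estimator."""
    Xc = X - X.mean(axis=0)                  # center, as in standard PCA
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    s_shrunk = np.maximum(s[:k] - tau, 0.0)  # shrink the top-k singular values
    F = U[:, :k] * s_shrunk                  # factor scores
    W = Vt[:k]                               # factor loadings
    return F, W
```

With `tau = 0` and full rank `k`, this reduces exactly to ordinary PCA, so the regularization cost over PCA is negligible.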
|
1111.6214
|
Robust Max-Product Belief Propagation
|
cs.CE cs.LG math.OC
|
We study the problem of optimizing a graph-structured objective function
under \emph{adversarial} uncertainty. This problem can be modeled as a
two-persons zero-sum game between an Engineer and Nature. The Engineer controls
a subset of the variables (nodes in the graph), and tries to assign their
values to maximize an objective function. Nature controls the complementary
subset of variables and tries to minimize the same objective. This setting
encompasses estimation and optimization problems under model uncertainty, and
strategic problems with a graph structure. Von Neumann's minimax theorem
guarantees the existence of a (minimax) pair of randomized strategies that
provide optimal robustness for each player against its adversary.
We prove several structural properties of this strategy pair in the case of
graph-structured payoff functions. In particular, the randomized minimax
strategies (distributions over variable assignments) can be chosen in such a
way to satisfy the Markov property with respect to the graph. This
significantly reduces the problem dimensionality. Finally we introduce a
message passing algorithm to solve this minimax problem. The algorithm
generalizes max-product belief propagation to this new domain.
|
1111.6223
|
Lower Bounds Optimization for Coordinated Linear Transmission Beamformer
Design in Multicell Network Downlink
|
cs.IT math.IT
|
We consider the coordinated downlink beamforming problem in a cellular
network with the base stations (BSs) equipped with multiple antennas, and with
each user equipped with a single antenna. The BSs cooperate in sharing their
local interference information, and they aim at maximizing the sum rate of the
users in the network. A set of new lower bounds (one bound for each BS) of the
non-convex sum rate is identified. These bounds facilitate the development of a
set of algorithms that allow the BSs to update their beams by optimizing their
respective lower bounds. We show that when there is a single user per-BS, the
lower bound maximization problem can be solved exactly with rank-1 solutions.
In this case, the overall sum rate maximization problem can be solved to a KKT
point. Numerical results show that the proposed algorithms achieve high system
throughput with reduced backhaul information exchange among the BSs.
|
1111.6237
|
Fast Algorithms for Sparse Recovery with Perturbed Dictionary
|
cs.IT math.IT
|
In this paper, we consider approaches to sparse recovery from large
underdetermined linear models with perturbations present in both the
measurements and the dictionary matrix. Existing methods suffer from high
computational cost and low efficiency. The total least-squares (TLS) criterion
has well-documented merits in solving linear regression problems, while the
FOCal Underdetermined System Solver (FOCUSS) has low computational complexity
in sparse recovery. Building on the TLS and FOCUSS methods, the present paper
develops faster and more robust algorithms, TLS-FOCUSS and SD-FOCUSS. The
TLS-FOCUSS algorithm is not only near-optimal but also fast in solving TLS
optimization problems under sparsity constraints, and is thus fit for
large-scale computation. To reduce the complexity of the
algorithm further, another suboptimal algorithm named SD-FOCUSS is devised.
SD-FOCUSS can be applied in MMV (multiple-measurement-vectors) TLS model, which
fills the gap of solving linear regression problems under sparsity constraints.
The convergence of TLS-FOCUSS algorithm and SD-FOCUSS algorithm is established
with mathematical proof. The simulations illustrate the advantage of TLS-FOCUSS
and SD-FOCUSS in accuracy and stability, compared with other algorithms.
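For context, the basic FOCUSS re-weighting iteration that TLS-FOCUSS and SD-FOCUSS build on can be sketched as follows; the initialization, iteration count, and damping constant `eps` are illustrative choices, and the paper's TLS handling of dictionary perturbations is not included.

```python
import numpy as np

def focuss(A, b, iters=30, eps=1e-8):
    """Basic FOCUSS iteration (affine re-weighted minimum-norm solves)
    for sparse solutions of the underdetermined system A x = b."""
    n = A.shape[1]
    x = np.ones(n)                       # uninformative starting point
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)     # re-weight by the current estimate
        AW = A @ W
        # Minimum-norm exact solve of the re-weighted system.
        q, *_ = np.linalg.lstsq(AW, b, rcond=None)
        x = W @ q                        # map back; small entries shrink
    return x
```

Each pass keeps `A x = b` satisfied while the re-weighting drives small coordinates toward zero, which is the source of the low per-iteration cost the abstract mentions.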
|
1111.6244
|
Efficient and Universal Corruption Resilient Fountain Codes
|
cs.IT cs.CR math.IT
|
In this paper, we present a new family of fountain codes which overcome
adversarial errors. That is, we consider the possibility that some portion of
the arriving packets of a rateless erasure code are corrupted in an
undetectable fashion. In practice, the corrupted packets may be attributed to a
portion of the communication paths which are controlled by an adversary or to a
portion of the sources that are malicious.
The presented codes resemble and extend LT and Raptor codes. Yet, their
benefits over existing coding schemes are manifold. First, to overcome the
corrupted packets, our codes use information theoretic techniques, rather than
cryptographic primitives. Thus, no secret channel between the senders and the
receivers is required. Second, the encoders in the suggested scheme are
oblivious to the strength of the adversary, yet perform as if its strength was
known in advance. Third, the sparse structure of the codes facilitates
efficient decoding. Finally, the codes easily fit a decentralized scenario with
several sources, when no communication between the sources is allowed.
We present both exhaustive as well as efficient decoding rules. Beyond the
obvious use as rateless codes, our codes have important applications in
distributed computing.
|
1111.6276
|
Compressed sensing of astronomical images: orthogonal wavelets domains
|
cs.CV astro-ph.IM physics.data-an
|
A simple approach for orthogonal wavelets in compressed sensing (CS)
applications is presented. We compare an efficient algorithm across different
orthogonal wavelet measurement matrices in CS for image processing of scanned
photographic plates (SPP). Some important characteristics were obtained for
astronomical image processing of SPP. The best orthogonal wavelet choice for
measurement matrix construction in CS for image compression of images of SPP is
given. The image quality measure for linear and nonlinear image compression
method is defined.
|
1111.6278
|
Vanishing ideals over graphs and even cycles
|
math.AC cs.IT math.AG math.CO math.IT
|
Let X be an algebraic toric set in a projective space over a finite field. We
study the vanishing ideal, I(X), of X and show some useful degree bounds for a
minimal set of generators of I(X). We give an explicit description of a set of
generators of I(X), when X is the algebraic toric set associated to an even
cycle or to a connected bipartite graph with pairwise disjoint even cycles. In
this case, a formula for the regularity of I(X) is given. We show an upper bound
for this invariant, when X is associated to a (not necessarily connected)
bipartite graph. The upper bound is sharp if the graph is connected. We are
able to show a formula for the length of the parameterized linear code
associated with any graph, in terms of the number of bipartite and
non-bipartite components.
|
1111.6285
|
Ward's Hierarchical Clustering Method: Clustering Criterion and
Agglomerative Algorithm
|
stat.ML cs.CV stat.AP
|
The Ward error sum of squares hierarchical clustering method has been very
widely used since its first description by Ward in a 1963 publication. It has
also been generalized in various ways. However, there are different
interpretations in the literature and different implementations of
the Ward agglomerative algorithm in commonly used software systems, including
differing expressions of the agglomerative criterion. Our survey work and case
studies will be useful for all those involved in developing software for data
analysis using Ward's hierarchical clustering method.
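To make the agglomerative criterion discussed above explicit, a naive Ward agglomeration can be sketched directly from the error-sum-of-squares merge cost. This O(n^3) version is for illustration only; library implementations (e.g. `scipy.cluster.hierarchy.linkage` with `method='ward'`, one of the variants the survey compares) use faster Lance-Williams updates.

```python
import numpy as np

def ward_merge_cost(c1, c2):
    # Ward's criterion: the increase in the within-cluster error sum of
    # squares (ESS) caused by merging clusters c1 and c2.
    n1, n2 = len(c1), len(c2)
    d = c1.mean(axis=0) - c2.mean(axis=0)
    return (n1 * n2) / (n1 + n2) * np.dot(d, d)

def ward_cluster(X, k):
    """Naive Ward agglomeration down to k clusters: repeatedly merge the
    pair of clusters whose union increases the ESS the least."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                cost = ward_merge_cost(X[clusters[i]], X[clusters[j]])
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters.pop(j)  # j > i, so i stays valid
    return clusters
```

Whether this cost is computed from raw or squared Euclidean distances is exactly the kind of implementation discrepancy the survey documents.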
|
1111.6289
|
Inverse Determinant Sums and Connections Between Fading Channel
Information Theory and Algebra
|
cs.IT math.IT math.NT math.RA
|
This work concentrates on the study of inverse determinant sums, which arise
from the union bound on the error probability, as a tool for designing and
analyzing algebraic space-time block codes.
A general framework to study these sums is established, and the connection
between asymptotic growth of inverse determinant sums and the
diversity-multiplexing gain trade-off is investigated. It is proven that the
growth of the inverse determinant sum of a division algebra-based space-time
code is completely determined by the growth of the unit group. This reduces the
inverse determinant sum analysis to studying certain asymptotic integrals in
Lie groups.
Using recent methods from ergodic theory, a complete classification of the
inverse determinant sums of the most well known algebraic space-time codes is
provided. The approach reveals an interesting and tight relation between
diversity-multiplexing gain trade-off and point counting in Lie groups.
|
1111.6334
|
On the error performance of the $A_n$ lattices
|
cs.IT math.IT
|
We consider the root lattice $A_n$ and derive explicit formulae for the
moments of its Voronoi cell. We then show that these formulae enable accurate
prediction of the error probability of lattice codes constructed from $A_n$.
|
1111.6337
|
Regret Bound by Variation for Online Convex Optimization
|
cs.LG
|
In \citep{Hazan-2008-extract}, the authors showed that the regret of online
linear optimization can be bounded by the total variation of the cost vectors.
In this paper, we extend this result to general online convex optimization. We
first analyze the limitations of the algorithm in \citep{Hazan-2008-extract}
when applied to online convex optimization. We then present two algorithms
for online convex optimization whose regrets are bounded by the variation of
cost functions. We finally consider the bandit setting, and present a
randomized algorithm for online bandit convex optimization with a
variation-based regret bound. We show that the regret bound for online bandit
convex optimization is optimal when the variation of cost functions is
independent of the number of trials.
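As a baseline for the regret notion used above, plain online gradient descent on one-dimensional quadratic losses can be sketched as follows. This is the generic online convex optimization setup, not the variation-based algorithms of the paper; the loss family and step size `eta` are illustrative.

```python
import numpy as np

def ogd_regret(centers, eta=0.1):
    """Online gradient descent on the loss sequence f_t(x) = 0.5*(x - c_t)^2;
    returns the regret against the best fixed point in hindsight."""
    x, loss = 0.0, 0.0
    for c in centers:
        loss += 0.5 * (x - c) ** 2   # suffer the loss at the current play
        x -= eta * (x - c)           # gradient step: f_t'(x) = x - c
    best = np.mean(centers)          # minimizer of the cumulative loss
    best_loss = sum(0.5 * (best - c) ** 2 for c in centers)
    return loss - best_loss
```

On a zero-variation sequence (all `c_t` equal) the learner quickly locks onto the comparator, so the regret stays small regardless of the number of trials, which is the flavor of guarantee the abstract describes.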
|
1111.6349
|
XML Information Retrieval Systems: A Survey
|
cs.IR
|
The continuous growth of XML information repositories has been matched by
increasing efforts in the development of XML retrieval systems, in large part
aiming at supporting content-oriented XML retrieval. These systems exploit the
available structural information, as marked up in XML documents, in order to
return document components - the so-called XML elements - instead of
complete documents in response to the user query. In this paper, we provide an
overview of the different XML information retrieval systems and classify them
according to their storage and query evaluation strategies.
|