id
|
title
|
categories
|
abstract
|
|---|---|---|---|
1108.6290
|
Compression and Quantitative Analysis of Buffer Map Message in P2P
Streaming System
|
cs.MM cs.IT math.IT
|
BM compression is a straightforward and practical way to reduce buffer-map
message length as well as to improve system performance. In this paper, we
thoroughly discuss the principles and protocol processes of different
compression schemes and, for the first time, present an original compression
scheme that can remove nearly all redundant information from the buffer-map
message. The theoretical limit of the compression rate is derived using
information theory. Through an analysis of information content and simulations
with our measured BM trace from UUSee, the validity and superiority of our
compression scheme are validated in terms of compression ratio.
|
1108.6293
|
Buffer Map Message Compression Based on Relevant Window in P2P Streaming
Media System
|
cs.MM cs.IT math.IT
|
Popular peer-to-peer streaming media systems such as PPLive and UUSee rely on
periodic buffer-map exchange between peers for proper operation. The buffer-map
exchange contains redundant information which causes non-negligible overhead.
In this paper we present a theoretical framework to study how the overhead can
be lowered. In contrast to the traditional data compression approach, we do
not treat each buffer-map as an isolated data block, but instead exploit the
correlations between the sequentially exchanged buffer-maps. Under this
framework, two buffer-map compression schemes are proposed and the correctness
of the schemes is proved mathematically. Moreover, we derive the theoretical
limit of compression gain based on probability theory and information theory.
Based on the system parameters of UUSee (a popular P2P streaming platform), our
simulations show that the buffer-map sizes are reduced by 86% and 90% (from 456
bits down to only 66 bits and 46 bits) respectively after applying our schemes.
Furthermore, by combining with the traditional compression methods (on
individual blocks), the sizes are decreased by 91% and 95% (to 42 bits and 24
bits) respectively. Our study provides a guideline for developing practical
compression algorithms.
|
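The correlation idea in the abstract above can be illustrated with a toy sketch (my own illustration, not the paper's actual schemes or parameters): successive buffer-maps differ in only a few bits, so sending the XOR difference, encoded as the list of flipped-bit positions, is far cheaper than resending the whole map.

```python
def xor_diff(prev, curr):
    """XOR two equal-length bit lists; few bits change between rounds."""
    return [a ^ b for a, b in zip(prev, curr)]

def compress(prev, curr):
    """Encode only the positions of flipped bits instead of the full map."""
    return [i for i, b in enumerate(xor_diff(prev, curr)) if b]

def decompress(prev, positions):
    """Recover the current buffer-map from the previous one plus the diff."""
    curr = list(prev)
    for i in positions:
        curr[i] ^= 1
    return curr
```

If a 456-bit map gains one new chunk per round, the diff is a single position, which is where the order-of-magnitude reductions reported above come from.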
1108.6294
|
Biometric Authorization System using Gait Biometry
|
cs.CV
|
Human gait, a new biometric aimed at recognizing individuals by the way they
walk, has come to play an increasingly important role in visual surveillance
applications. In this paper, a novel hybrid holistic approach is proposed to
show how behavioural walking characteristics can be used to recognize
unauthorized and suspicious persons when they enter a surveillance area.
Initially, the background is modelled from the input video captured by cameras
deployed for security, and the foreground moving objects in the individual
frames are segmented using a background subtraction algorithm. Then spatial,
temporal and wavelet components representing gait are extracted and fused for
training and testing multi-class support vector machine (SVM) models. The
proposed system is evaluated using side-view videos from the NLPR database.
The experimental results demonstrate that the proposed system achieves a
satisfactory recognition rate, and also indicate that the classification
ability of SVM with a Radial Basis Function (RBF) kernel is better than with
other kernel functions.
|
1108.6296
|
Infinite Tucker Decomposition: Nonparametric Bayesian Models for
Multiway Data Analysis
|
cs.LG cs.NA
|
Tensor decomposition is a powerful computational tool for multiway data
analysis. Many popular tensor decomposition approaches---such as the Tucker
decomposition and CANDECOMP/PARAFAC (CP)---amount to multi-linear
factorization. They are insufficient to model (i) complex interactions between
data entities, (ii) various data types (e.g. missing data and binary data), and
(iii) noisy observations and outliers. To address these issues, we propose
tensor-variate latent nonparametric Bayesian models, coupled with efficient
inference methods, for multiway data analysis. We name these models InfTucker.
Using InfTucker, we conduct Tucker decomposition in an infinite feature
space. Unlike classical tensor decomposition models, our new approaches handle
both continuous and binary data in a probabilistic framework. Unlike previous
Bayesian models on matrices and tensors, our models are based on latent
Gaussian or $t$ processes with nonlinear covariance functions. To efficiently
learn InfTucker from data, we develop a variational inference technique on
tensors. Compared with a classical implementation, the new technique reduces
both time and space complexities by several orders of magnitude. Our
experimental results on chemometrics and social network datasets demonstrate
that our new models achieve significantly higher prediction accuracy than
state-of-the-art tensor decomposition methods.
|
1108.6304
|
Anisotropic k-Nearest Neighbor Search Using Covariance Quadtree
|
cs.CV cs.CG cs.DS
|
We present a variant of the hyper-quadtree that divides a multidimensional
space according to the hyperplanes associated to the principal components of
the data in each hyperquadrant. Each of the $2^\lambda$ hyper-quadrants is a
data partition in a $\lambda$-dimension subspace, whose intrinsic
dimensionality $\lambda\leq d$ is reduced from the root dimensionality $d$ by
the principal components analysis, which discards the irrelevant eigenvalues of
the local covariance matrix. In the present method a component is irrelevant if
its length is smaller than, or comparable to, the local inter-data spacing.
Thus, the covariance hyper-quadtree is fully adaptive to the local
dimensionality. The proposed data-structure is used to compute the anisotropic
K nearest neighbors (kNN), supported by the Mahalanobis metric. As an
application, we used the present kNN method to perform density estimation over
a noisy data distribution. Such an estimation method can be further
incorporated into smoothed particle hydrodynamics, allowing computer
simulations of anisotropic fluid flows.
|
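The Mahalanobis-metric kNN that the covariance quadtree accelerates can be sketched in brute force (a hypothetical illustration only; the paper's quadtree data structure and local PCA are not reproduced here):

```python
import numpy as np

def mahalanobis_knn(data, query, k):
    """Brute-force anisotropic kNN under the Mahalanobis metric
    induced by the global data covariance (no quadtree acceleration)."""
    cov = np.cov(data, rowvar=False)
    inv_cov = np.linalg.inv(cov)        # precision matrix defining the metric
    diffs = data - query
    # squared Mahalanobis distance for every point, vectorized
    d2 = np.einsum('ij,jk,ik->i', diffs, inv_cov, diffs)
    return np.argsort(d2)[:k]           # indices of the k nearest points
```

The quadtree replaces the O(n) scan per query with a spatial search, but the metric itself is the same.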
1108.6312
|
Computation Alignment: Capacity Approximation without Noise Accumulation
|
cs.IT math.IT
|
Consider several source nodes communicating across a wireless network to a
destination node with the help of several layers of relay nodes. Recent work by
Avestimehr et al. has approximated the capacity of this network up to an
additive gap. The communication scheme achieving this capacity approximation is
based on compress-and-forward, resulting in noise accumulation as the messages
traverse the network. As a consequence, the approximation gap increases
linearly with the network depth.
This paper develops a computation alignment strategy that can approach the
capacity of a class of layered, time-varying wireless relay networks up to an
approximation gap that is independent of the network depth. This strategy is
based on the compute-and-forward framework, which enables relays to decode
deterministic functions of the transmitted messages. Alone, compute-and-forward
is insufficient to approach the capacity as it incurs a penalty for
approximating the wireless channel with complex-valued coefficients by a
channel with integer coefficients. Here, this penalty is circumvented by
carefully matching channel realizations across time slots to create
integer-valued effective channels that are well-suited to compute-and-forward.
Unlike prior constant gap results, the approximation gap obtained in this paper
also depends closely on the fading statistics, which are assumed to be i.i.d.
Rayleigh.
|
1108.6328
|
Foundations of Traversal Based Query Execution over Linked Data
(Extended Version)
|
cs.DB
|
Query execution over the Web of Linked Data has attracted much attention
recently. A particularly interesting approach is link traversal based query
execution which proposes to integrate the traversal of data links into the
construction of query results. Hence -in contrast to traditional query
execution paradigms- this approach does not assume a fixed set of relevant data
sources beforehand; instead, it discovers data on the fly and, thus, enables
applications to tap the full potential of the Web.
While several authors study possibilities to implement the idea of link
traversal based query execution and to optimize query execution in this
context, no work exists that discusses the theoretical foundations of the
approach in general. Our paper fills this gap.
We introduce a well-defined semantics for queries that may be executed using
the link traversal based approach. Based on this semantics we formally analyze
properties of such queries. In particular, we study the computability of
queries as well as the implications of querying a potentially infinite Web of
Linked Data. Our results show that query computation in general is not
guaranteed to terminate and that for any given query it is undecidable whether
the execution terminates. Furthermore, we define an abstract execution model
that captures the integration of link traversal into the query execution
process. Based on this model we prove the soundness and completeness of link
traversal based query execution and analyze an existing implementation
approach.
|
1109.0003
|
The MultiDark Database: Release of the Bolshoi and MultiDark
Cosmological Simulations
|
astro-ph.CO astro-ph.IM cs.DB
|
We present the online MultiDark Database -- a Virtual Observatory-oriented,
relational database for hosting various cosmological simulations. The data is
accessible via an SQL (Structured Query Language) query interface, which also
allows users to directly pose scientific questions, as shown in a number of
examples in this paper. Further examples for the usage of the database are
given in its extensive online documentation (www.multidark.org). The database
is based on the same technology as the Millennium Database, a fact that will
greatly facilitate the usage of both suites of cosmological simulations. The
first release of the MultiDark Database hosts two 8.6 billion particle
cosmological N-body simulations: the Bolshoi (250/h Mpc simulation box, 1/h kpc
resolution) and MultiDark Run1 simulation (MDR1, or BigBolshoi, 1000/h Mpc
simulation box, 7/h kpc resolution). The extraction methods for halos/subhalos
from the raw simulation data, and how this data is structured in the database
are explained in this paper. With the first data release, users get full access
to halo/subhalo catalogs, various profiles of the halos at redshifts z=0-15,
and raw dark matter data for one time-step of the Bolshoi and four time-steps
of the MultiDark simulation. Later releases will also include galaxy mock
catalogs and additional merging trees for both simulations as well as new large
volume simulations with high resolution. This project is further proof of the
viability of storing and presenting complex data using relational database
technology. We encourage other simulators to publish their results in a similar
manner.
|
1109.0035
|
Statistical Model of Downlink Power Consumption in Cellular CDMA
Networks
|
cs.SY
|
The present work proposes a theoretical statistical model of the downlink power
consumption in cellular CDMA networks. The proposed model employs a simple but
popular propagation model, which breaks down path losses into a distance
dependent and a log-normal shadowing loss term. Based on the aforementioned
path loss formalism, closed-form expressions for the first and the second
moment of power consumption are obtained taking into account conditions placed
by cell selection and handoff algorithms. Numerical results for various radio
propagation environments and cell selection as well as handoff schemes are
provided and discussed.
|
1109.0059
|
Cluster size entropy in the Axelrod model of social influence:
small-world networks and mass media
|
physics.soc-ph cs.SI
|
We study Axelrod's cultural adaptation model using the concept of cluster
size entropy, $S_{c}$, which gives information on the variability of the
cultural cluster sizes present in the system. Using networks of different topologies,
from regular to random, we find that the critical point of the well-known
nonequilibrium monocultural-multicultural (order-disorder) transition of the
Axelrod model is unambiguously given by the maximum of the $S_{c}(q)$
distributions. The width of the cluster entropy distributions can be used to
qualitatively determine whether the transition is first- or second-order. By
scaling the cluster entropy distributions we were able to obtain a relationship
between the critical cultural trait $q_c$ and the number $F$ of cultural
features in regular networks. We also analyze the effect of the mass media
(external field) on social systems within the Axelrod model in a square
network. We find a new partially ordered phase whose largest cultural cluster
is not aligned with the external field, in contrast with a recent suggestion
that this type of phase cannot be formed in regular networks. We draw a new
$q-B$ phase diagram for the Axelrod model in regular networks.
|
1109.0069
|
Inter-rater Agreement on Sentence Formality
|
cs.CL
|
Formality is one of the most important dimensions of writing style variation.
In this study we conducted an inter-rater reliability experiment for assessing
sentence formality on a five-point Likert scale, and obtained good agreement
results as well as different rating distributions for different sentence
categories. We also performed a difficulty analysis to identify the bottlenecks
of our rating procedure. Our main objective is to design an automatic scoring
mechanism for sentence-level formality, and this study is important for that
purpose.
|
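As a hedged aside, the simplest inter-rater agreement statistic for two raters is Cohen's kappa; the study above may well use a different multi-rater coefficient, so this is only a generic illustration of chance-corrected agreement:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2          # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative when raters systematically disagree.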
1109.0077
|
A Radio Based Intelligent Railway Grade Crossing System to Avoid
Collision
|
cs.SY
|
Railway grade crossings have become a major problem for transportation
systems. This paper describes an intelligent railway crossing control system
for multiple tracks that features a controller which receives messages from
incoming and outgoing trains via sensors. These messages contain detailed
information, including the direction and identity of a train. Based on those
messages, the controller decides whether the railroad crossing gate should
close or open.
|
1109.0085
|
Self-Adaptation Mechanism to Control the Diversity of the Population in
Genetic Algorithm
|
cs.NE
|
One of the problems in applying a Genetic Algorithm is that there are
situations where the evolutionary process converges too quickly to a solution,
causing it to be trapped in a local optimum. To overcome this problem, a proper
level of diversity in the candidate solutions must be maintained. Most existing
diversity-maintenance mechanisms require a problem specific knowledge to setup
parameters properly. This work proposes a method to control diversity of the
population without explicit parameter setting. A self-adaptation mechanism is
proposed based on the competition of preference characteristic in mating. It
can adapt the population toward proper diversity for the problems. The
experiments are carried out to measure the effectiveness of the proposed method
based on nine well-known test problems. The performance of the adaptive method
is comparable to traditional Genetic Algorithm with the best parameter setting.
|
1109.0086
|
Comments on "Stack-based Algorithms for Pattern Matching on DAGs"
|
cs.DB
|
The paper "Stack-based Algorithms for Pattern Matching on DAGs" generalizes
the classical holistic twig join algorithms and proposes PathStackD, TwigStackD
and DagStackD to respectively evaluate path, twig and DAG pattern queries on
directed acyclic graphs. In this paper, we investigate the major results of
that paper, pointing out several discrepancies and proposing solutions to
resolve them. We show that the original algorithms do not find particular
types of query solutions that are common in practice. We also analyze the
effect of an underlying assumption on the correctness of the algorithms and
discuss the pre-filtering process that the original work proposes to prune
redundant nodes. Our experimental study on both real and synthetic data
substantiates our conclusions.
|
1109.0090
|
An Efficient Codebook Initialization Approach for LBG Algorithm
|
cs.CV
|
A VQ-based image compression technique has three major steps, namely (i)
codebook design, (ii) VQ encoding and (iii) VQ decoding. The performance of a
VQ-based image compression technique depends upon the constructed codebook. A
widely used technique for VQ codebook design is the Linde-Buzo-Gray (LBG)
algorithm. However, the performance of the standard LBG algorithm is highly
dependent on the choice of the initial codebook. In this paper, we propose a
simple and very effective approach to codebook initialization for the LBG
algorithm. The simulation results show that the proposed scheme is
computationally efficient and gives the expected performance compared to the
standard LBG algorithm.
|
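The standard LBG algorithm that the abstract refers to can be sketched as follows; a generic codeword-splitting initialization is shown (the paper's proposed initialization is not reproduced, and codebook sizes are assumed to be powers of two):

```python
import numpy as np

def lbg(vectors, codebook_size, n_iter=20, eps=0.01):
    """Classic LBG codebook design: start from the global mean, repeatedly
    split each codeword into a perturbed pair, then refine with Lloyd steps."""
    codebook = np.mean(vectors, axis=0, keepdims=True)
    while len(codebook) < codebook_size:
        # split each codeword into two slightly perturbed copies
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):  # Lloyd refinement: assign, then re-center
            d = np.linalg.norm(vectors[:, None] - codebook[None], axis=2)
            assign = d.argmin(axis=1)
            for j in range(len(codebook)):
                members = vectors[assign == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
    return codebook
```

The sensitivity to initialization that the paper addresses shows up in the split step: a poor starting codebook can leave empty cells that Lloyd refinement never recovers from.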
1109.0093
|
Local Component Analysis
|
cs.LG
|
Kernel density estimation, a.k.a. Parzen windows, is a popular density
estimation method, which can be used for outlier detection or clustering. With
multivariate data, its performance is heavily reliant on the metric used within
the kernel. Most earlier work has focused on learning only the bandwidth of the
kernel (i.e., a scalar multiplicative factor). In this paper, we propose to
learn a full Euclidean metric through an expectation-minimization (EM)
procedure, which can be seen as an unsupervised counterpart to neighbourhood
component analysis (NCA). In order to avoid overfitting with a fully
nonparametric density estimator in high dimensions, we also consider a
semi-parametric Gaussian-Parzen density model, where some of the variables are
modelled through a jointly Gaussian density, while others are modelled through
Parzen windows. For these two models, EM leads to simple closed-form updates
based on matrix inversions and eigenvalue decompositions. We show empirically
that our method leads to density estimators with higher test-likelihoods than
natural competing methods, and that the metrics may be used within most
unsupervised learning techniques that rely on such metrics, such as spectral
clustering or manifold learning methods. Finally, we present a stochastic
approximation scheme which allows for the use of this method in a large-scale
setting.
|
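A minimal sketch of a Parzen-window density with a full metric, as discussed in the abstract above (the matrix M is assumed given here; learning it by the paper's EM procedure is not shown):

```python
import numpy as np

def parzen_density(x, data, M):
    """Parzen-window density estimate at x using a Gaussian kernel whose
    shape is set by a full (positive-definite) precision matrix M."""
    d = data.shape[1]
    norm = np.sqrt(np.linalg.det(M)) / (2 * np.pi) ** (d / 2)
    diffs = data - x
    # squared Mahalanobis distance of x to every training point
    q = np.einsum('ij,jk,ik->i', diffs, M, diffs)
    return norm * np.exp(-0.5 * q).mean()
```

With M restricted to a scalar multiple of the identity this reduces to ordinary bandwidth-only Parzen windows, which is exactly the limitation the paper moves beyond.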
1109.0094
|
DNA Lossless Differential Compression Algorithm based on Similarity of
Genomic Sequence Database
|
cs.DS cs.CE cs.SE
|
Modern biological science produces vast amounts of genomic sequence data.
This is fuelling the need for efficient algorithms for sequence compression and
analysis. Data compression and the associated techniques coming from
information theory are often perceived as being of interest for data
communication and storage. In recent years, a substantial effort has been made
for the application of textual data compression techniques to various
computational biology tasks, ranging from storage and indexing of large
datasets to the comparison of genomic databases. This paper presents a
differential compression algorithm based on the production of difference
sequences according to an op-code table, in order to optimize the compression
of homologous sequences in a dataset. The stored data are thus composed of a
reference sequence, the set of differences, and the difference locations,
instead of each sequence stored individually. This algorithm does not require
a priori knowledge about the statistics of the sequence set. The algorithm was
applied to three different datasets of genomic sequences and achieved up to a
195-fold compression ratio, corresponding to a 99.4% space saving.
|
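The reference-plus-differences idea can be sketched for the substitution-only case (a toy illustration of my own; the paper's op-code table also covers operations such as insertions and deletions):

```python
def diff_encode(reference, seq):
    """Store only (position, base) substitutions relative to a reference;
    assumes equal-length homologous sequences."""
    return [(i, b) for i, (a, b) in enumerate(zip(reference, seq)) if a != b]

def diff_decode(reference, diffs):
    """Rebuild the original sequence from the reference and its diff list."""
    seq = list(reference)
    for i, b in diffs:
        seq[i] = b
    return ''.join(seq)
```

For highly similar sequences the diff list is tiny relative to the sequence, which is the source of the large compression ratios reported above.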
1109.0105
|
Differentially Private Online Learning
|
cs.LG cs.CR stat.ML
|
In this paper, we consider the problem of preserving privacy in the online
learning setting. We study the problem in the online convex programming (OCP)
framework---a popular online learning setting with several interesting
theoretical and practical implications---while using differential privacy as
the formal privacy measure. For this problem, we distill two critical
attributes that a private OCP algorithm should have in order to provide
reasonable privacy as well as utility guarantees: 1) linearly decreasing
sensitivity, i.e., as new data points arrive their effect on the learning model
decreases, 2) sub-linear regret bound---regret bound is a popular
goodness/utility measure of an online learning algorithm.
Given an OCP algorithm that satisfies these two conditions, we provide a
general framework to convert the given algorithm into a privacy preserving OCP
algorithm with good (sub-linear) regret. We then illustrate our approach by
converting two popular online learning algorithms into their differentially
private variants while guaranteeing sub-linear regret ($O(\sqrt{T})$). Next, we
consider the special case of online linear regression problems, a practically
important class of online learning problems, for which we generalize an
approach by Dwork et al. to provide a differentially private algorithm with
just $O(\log^{1.5} T)$ regret. Finally, we show that our online learning
framework can be used to provide differentially private algorithms for offline
learning as well. For the offline learning problem, our approach obtains better
error bounds and can handle a larger class of problems than the existing
state-of-the-art methods of Chaudhuri et al.
|
1109.0113
|
aspcud: A Linux Package Configuration Tool Based on Answer Set
Programming
|
cs.AI cs.LO
|
We present the Linux package configuration tool aspcud based on Answer Set
Programming. In particular, we detail aspcud's preprocessor turning a CUDF
specification into a set of logical facts.
|
1109.0114
|
(Re)configuration based on model generation
|
cs.AI cs.LO
|
Reconfiguration is an important activity for companies selling configurable
products or services that have a long lifetime. However, identification of a
set of required changes in a legacy configuration is a hard problem, since even
small changes in the requirements might imply significant modifications. In
this paper we show a solution based on answer set programming, which is a
logic-based knowledge representation formalism well suited for a compact
description of (re)configuration problems. Its applicability is demonstrated on
simple abstractions of several real-world scenarios. The evaluation of our
solution on a set of benchmark instances derived from commercial
(re)configuration problems shows its practical applicability.
|
1109.0137
|
Architectural solutions of conformal network-centric staring-sensor
systems with spherical field of view
|
cs.SY math.PR physics.optics
|
The article presents a concept for constructing network-centric conformal
electro-optical systems with a spherical field of view. It discusses abstract
passive distributed electro-optical systems with focal-array detectors based on
a group of moving objects distributed in space. The system performs conformal
processing of information from the sensor matrix in a single event
coordinate-time field. The construction of systems that satisfy different
criteria of optimality is clearly very complicated and requires special
approaches to their development and design. The paper briefly touches upon the
key questions (in the authors' opinion) in the synthesis of such systems, which
the authors discuss using systematic and synergetic approaches.
|
1109.0138
|
Automatic Application Level Set Approach in Detection Calcifications in
Mammographic Image
|
cs.CV
|
Breast cancer is considered one of the major health problems and the strongest
cause of mortality among women in the world. In this decade, breast cancer is
the second most common type of cancer in terms of frequency of appearance, and
the fifth most common cause of cancer-related death. In order to reduce the
workload on radiologists, a variety of CAD systems, Computer-Aided Diagnosis
(CADi) and Computer-Aided Detection (CADe), have been proposed. In this paper,
we focus on a CADe tool to help radiologists detect cancer. The proposed CADe
is based on a three-step workflow, namely detection, analysis and
classification. This paper deals with the problem of automatic detection of
Regions Of Interest (ROI) based on a level-set approach that depends on edge
and region criteria. This approach gives the radiologist good visual
information. After that, feature extraction using texture characteristics and
vector classification using a Multilayer Perceptron (MLP) and k-Nearest
Neighbours (KNN) are adopted to distinguish between different ACR (American
College of Radiology) classes. Moreover, we use the Digital Database for
Screening Mammography (DDSM) for experiments; the resulting accuracy, which
varied between 60% and 70%, is acceptable but must be improved to better aid
radiologists.
|
1109.0166
|
Discovering the Impact of Knowledge in Recommender Systems: A
Comparative Study
|
cs.IR
|
Recommender systems engage user profiles and appropriate filtering techniques
to assist users in finding more relevant information over the large volume of
information. User profiles play an important role in the success of
recommendation process since they model and represent the actual user needs.
However, a comprehensive literature review of recommender systems has
revealed no concrete study on the role and impact of knowledge in user
profiling and filtering approaches. In this paper, we review the most prominent
recommender systems in the literature and examine the impact of knowledge
extracted from different sources. We find that semantic information from the
user context has a substantial impact on the performance of knowledge-based
recommender systems. Finally, some new directions for improving knowledge-based
profiles are proposed.
|
1109.0172
|
Effect of diffusion of elements on network topology and
self-organization
|
physics.comp-ph cs.SI physics.soc-ph
|
We study the influence of elements diffusing into and out of a network on the
topological changes of the network, and characterize it by investigating the
behavior of the degree-distribution probability ($\Gamma(k)$) with degree $k$.
The local memory of the incoming element and its interaction with the elements
already present in the network during the growing process significantly affect
the network stability, which in turn reorganizes the network properties. We
find that the properties of $\Gamma(k)$ for this network deviate from the
scale-free type: the power-law behavior contains an exponentially decaying
factor, supporting earlier reported results of Amaral et al. \cite{ama} and
Newman \cite{new1} as well as recent statistical analyses of
degree-distribution data of some scale-free networks [11]. Our numerical
results also support this behavior of $\Gamma(k)$. However, we find
numerically that the contribution from the exponential factor to $\Gamma(k)$
is very weak compared to the scale-free factor, showing that the network as a
whole approximately carries scale-free properties.
|
1109.0181
|
Improving the recall of decentralised linked data querying through
implicit knowledge
|
cs.DB
|
Aside from crawling, indexing, and querying RDF data centrally, Linked Data
principles allow for processing SPARQL queries on-the-fly by dereferencing
URIs. Proposed link-traversal query approaches for Linked Data have the
benefits of up-to-date results and decentralised (i.e., client-side) execution,
but operate on incomplete knowledge available in dereferenced documents, thus
affecting recall. In this paper, we investigate how implicit knowledge -
specifically that found through owl:sameAs and RDFS reasoning - can improve the
recall in this setting. We start with an empirical analysis of a large crawl
featuring 4 million Linked Data sources and 1.1 billion quadruples: we (1) measure expected
recall by only considering dereferenceable information, (2) measure the
improvement in recall given by considering rdfs:seeAlso links as previous
proposals did. We further propose and measure the impact of additionally
considering (3) owl:sameAs links, and (4) applying lightweight RDFS reasoning
(specifically {\rho}DF) for finding more results, relying on static schema
information. We evaluate our methods for live queries over our crawl.
|
1109.0216
|
Evaluation of Huffman and Arithmetic Algorithms for Multimedia
Compression Standards
|
cs.IT cs.MM math.IT
|
Compression is a technique to reduce the quantity of data without excessively
reducing the quality of the multimedia data. The transmission and storage of
compressed multimedia data are much faster and more efficient than those of the
original uncompressed data. There are various techniques and standards for
multimedia data compression, especially for image compression, such as the JPEG
and JPEG2000 standards. These standards consist of different functions such as
color space conversion and entropy coding. Arithmetic and Huffman coding are
normally used in the entropy coding phase. In this paper we try to answer the
following question: which entropy coding method, arithmetic or Huffman, is
more suitable in terms of compression ratio, performance, and ease of
implementation? We have implemented and tested the Huffman and arithmetic
algorithms. Our results show that the compression ratio of arithmetic coding
is better than that of Huffman coding, while the performance (speed) of
Huffman coding is higher than that of arithmetic coding. In addition, the
implementation of Huffman coding is much easier than that of arithmetic coding.
|
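A minimal sketch of the Huffman side of the entropy-coding phase discussed above (standard textbook Huffman coding, not the paper's specific implementation):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table from symbol frequencies."""
    counts = Counter(text)
    if len(counts) == 1:  # degenerate single-symbol input
        return {next(iter(counts)): '0'}
    # heap entries: [weight, tie-breaker, partial code table]
    heap = [[w, i, {s: ''}] for i, (s, w) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        # prefix '0' onto the lighter subtree's codes, '1' onto the heavier
        table = {s: '0' + c for s, c in lo[2].items()}
        table.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, table])
        tie += 1
    return heap[0][2]

def encode(text, codes):
    return ''.join(codes[s] for s in text)
```

Frequent symbols get shorter codewords, so the encoded length approaches the source entropy, whereas arithmetic coding can get closer still at the cost of a more complex implementation.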
1109.0217
|
Vessel Segmentation in Medical Imaging Using a Tight-Frame Based
Algorithm
|
math.NA cs.CV
|
Tight-frame, a generalization of orthogonal wavelets, has been used
successfully in various problems in image processing, including inpainting,
impulse noise removal, super-resolution image restoration, etc. Segmentation is
the process of identifying object outlines within images. There are quite a few
efficient algorithms for segmentation that depend on the variational approach
and the partial differential equation (PDE) modeling.
In this paper, we propose to apply the tight-frame approach to automatically
identify tube-like structures such as blood vessels in Magnetic Resonance
Angiography (MRA) images. Our method iteratively refines a region that encloses
the possible boundary or surface of the vessels. In each iteration, we apply
the tight-frame algorithm to denoise and smooth the possible boundary and
sharpen the region. We prove the convergence of our algorithm. Numerical
experiments on real 2D/3D MRA images demonstrate that our method is very
efficient with convergence usually within a few iterations, and it outperforms
existing PDE and variational methods as it can extract more tubular objects and
fine details in the images.
|
1109.0262
|
Estimating within-school contact networks to understand influenza
transmission
|
stat.ME cs.SI physics.soc-ph stat.AP
|
Many epidemic models approximate social contact behavior by assuming random
mixing within mixing groups (e.g., homes, schools and workplaces). The effect
of more realistic social network structure on estimates of epidemic parameters
is an open area of exploration. We develop a detailed statistical model to
estimate the social contact network within a high school using friendship
network data and a survey of contact behavior. Our contact network model
includes classroom structure, longer durations of contacts to friends than
nonfriends and more frequent contacts with friends, based on reports in the
contact survey. We performed simulation studies to explore which network
structures are relevant to influenza transmission. These studies yield two key
findings. First, we found that the friendship network structure important to
the transmission process can be adequately represented by a dyad-independent
exponential random graph model (ERGM). This means that individual-level sampled
data is sufficient to characterize the entire friendship network. Second, we
found that contact behavior was adequately represented by a static rather than
dynamic contact network.
|
1109.0264
|
Simple Regenerating Codes: Network Coding for Cloud Storage
|
cs.IT cs.DC cs.NI math.IT
|
Network codes designed specifically for distributed storage systems have the
potential to provide dramatically higher storage efficiency for the same
availability. One main challenge in the design of such codes is the exact
repair problem: if a node storing encoded information fails, in order to
maintain the same level of reliability we need to create encoded information at
a new node. One of the main open problems in this emerging area has been the
design of simple coding schemes that allow exact and low cost repair of failed
nodes and have high data rates. In particular, all prior known explicit
constructions have data rates bounded by 1/2.
In this paper we introduce the first family of distributed storage codes that
have simple look-up repair and can achieve arbitrarily high rates. Our
constructions are very simple to implement and perform exact repair by simple
XORing of packets. We experimentally evaluate the proposed codes in a realistic
cloud storage simulator and show significant benefits in both performance and
reliability compared to replication and standard Reed-Solomon codes.
|
1109.0318
|
Compressive Matched-Field Processing
|
cs.IT math.IT
|
Source localization by matched-field processing (MFP) generally involves
solving a number of computationally intensive partial differential equations.
This paper introduces a technique that mitigates this computational workload by
"compressing" these computations. Drawing on key concepts from the recently
developed field of compressed sensing, it shows how a low-dimensional proxy for
the Green's function can be constructed by backpropagating a small set of
random receiver vectors. Then, the source can be located by performing a number
of "short" correlations between this proxy and the projection of the recorded
acoustic data in the compressed space. Numerical experiments in a Pekeris ocean
waveguide are presented which demonstrate that this compressed version of MFP
is as effective as traditional MFP even when the compression is significant.
The results are particularly promising in the broadband regime where using as
few as two random backpropagations per frequency performs almost as well as the
traditional broadband MFP, but with the added benefit of generic applicability.
That is, the computationally intensive backpropagations may be computed offline
independently from the received signals, and may be reused to locate any source
within the search grid area.
|
1109.0325
|
Quantum adiabatic machine learning
|
quant-ph cs.LG
|
We develop an approach to machine learning and anomaly detection via quantum
adiabatic evolution. In the training phase we identify an optimal set of weak
classifiers, to form a single strong classifier. In the testing phase we
adiabatically evolve one or more strong classifiers on a superposition of
inputs in order to find certain anomalous elements in the classification space.
Both the training and testing phases are executed via quantum adiabatic
evolution. We apply and illustrate this approach in detail to the problem of
software verification and validation.
|
1109.0333
|
A KIF Formalization for the IFF Category Theory Ontology
|
cs.LO cs.AI math.CT
|
This paper begins the discussion of how the Information Flow Framework can be
used to provide a principled foundation for the metalevel (or structural level)
of the Standard Upper Ontology (SUO). This SUO structural level can be used as
a logical framework for manipulating collections of ontologies in the object
level of the SUO or other middle level or domain ontologies. From the
Information Flow perspective, the SUO structural level resolves into several
metalevel ontologies. This paper discusses a KIF formalization for one of those
metalevel categories, the Category Theory Ontology. In particular, it discusses
its category and colimit sub-namespaces.
|
1109.0337
|
On discrete cosine transform
|
cs.IT math.IT
|
The discrete cosine transform (DCT), introduced by Ahmed, Natarajan and Rao,
has been used in many applications of digital signal processing, data
compression and information hiding. There are four types of the discrete cosine
transform. In simulating the discrete cosine transform, we propose a
generalized discrete cosine transform with three parameters, and prove its
orthogonality for some new cases. A new type of discrete cosine transform is
proposed and its orthogonality is proved. Finally, we propose a generalized
discrete W transform with three parameters, and prove its orthogonality for
some new cases.
|
1109.0351
|
Directed Information, Causal Estimation, and Communication in Continuous
Time
|
cs.IT math.IT
|
A notion of directed information between two continuous-time processes is
proposed. A key component in the definition is taking an infimum over all
possible partitions of the time interval, which plays a role no less
significant than the supremum over "space" partitions inherent in the
definition of mutual information. Properties and operational interpretations in
estimation and communication are then established for the proposed notion of
directed information. For the continuous-time additive white Gaussian noise
channel, it is shown that Duncan's classical relationship between causal
estimation and information continues to hold in the presence of feedback upon
replacing mutual information by directed information. A parallel result is
established for the Poisson channel. The utility of this relationship is then
demonstrated in computing the directed information rate between the input and
output processes of a continuous-time Poisson channel with feedback, where the
channel input process is constrained to be constant between events at the
channel output. Finally, the capacity of a wide class of continuous-time
channels with feedback is established via directed information, characterizing
the fundamental limit on reliable communication.
|
1109.0392
|
Context Tree Estimation in Variable Length Hidden Markov Models
|
cs.IT math.IT math.ST stat.TH
|
We address the issue of context tree estimation in variable length hidden
Markov models. We propose an estimator of the context tree of the hidden Markov
process which needs no prior upper bound on the depth of the context tree. We
prove that the estimator is strongly consistent. This uses
information-theoretic mixture inequalities in the spirit of L. Finesso
(Consistent estimation of the order for Markov and hidden Markov chains, 1990)
and E. Gassiat and S. Boucheron (Optimal error exponents in hidden
Markov model order estimation, 2003). We propose an algorithm to efficiently
compute the estimator and provide simulation studies to support our result.
|
1109.0414
|
Anti-Structure Problems
|
cs.IT math.IT
|
The recent success of structured solutions for a class of
information-theoretic network problems calls for exploring their limits. We
show that sum-product channels resist a solution by structured (as well as
random) codes. We conclude that the structured approach fails whenever the
channel operations do not commute (or, for general functional channels, when
the channel function is non-decomposable).
|
1109.0418
|
Tropical Algebraic approach to Consensus over Networks
|
math.OC cs.DM cs.SY
|
In this paper we study the convergence of the max-consensus protocol.
Tropical algebra is used to formulate the problem. Necessary and sufficient
conditions for convergence of the max-consensus protocol over fixed as well as
switching topology networks are given.
|
1109.0420
|
Meta-song evaluation for chord recognition
|
cs.IR
|
We present a new approach to evaluating chord recognition systems on songs
that lack full annotations. The principle is to use online chord
databases to generate highly accurate "pseudo annotations" for these songs and
to compute "pseudo accuracies" of test systems. Statistical models of the
relationship between "pseudo accuracy" and real performance are then applied to
estimate the test systems' performance. The approach goes beyond existing
evaluation metrics, allowing us to carry out extensive analysis of chord
recognition systems, such as their generalization to different genres. In our
experiments we applied this method to evaluate three state-of-the-art chord
recognition systems, and the results verified its reliability.
|
1109.0428
|
A survey of fuzzy control for stabilized platforms
|
cs.SY
|
This paper focuses on the application of fuzzy control techniques (type-1
and type-2 fuzzy control) and their hybrid forms (hybrid adaptive fuzzy
controllers and fuzzy-PID controllers) in the area of stabilized platforms. It
represents an attempt to cover the basic principles and concepts of fuzzy
control in stabilization and position control, with an outline of a number of
recent applications in the advanced control of stabilized platforms.
Throughout the survey we draw comparisons with classical control techniques
such as PID control to demonstrate the advantages and disadvantages of
applying fuzzy control techniques.
|
1109.0455
|
Gradient-based kernel dimension reduction for supervised learning
|
stat.ML cs.LG
|
This paper proposes a novel kernel approach to linear dimension reduction for
supervised learning. The purpose of the dimension reduction is to find
directions in the input space to explain the output as effectively as possible.
The proposed method uses an estimator for the gradient of the regression function,
based on the covariance operators on reproducing kernel Hilbert spaces. In
comparison with other existing methods, the proposed one has wide applicability
without strong assumptions on the distributions or the type of variables, and
uses computationally simple eigendecomposition. Experimental results show that
the proposed method successfully finds the effective directions with efficient
computation.
|
1109.0486
|
The Variational Garrote
|
stat.ME cs.LG
|
In this paper, we present a new variational method for sparse regression
using $L_0$ regularization. The variational parameters appear in the
approximate model in a way that is similar to Breiman's Garrote model. We refer
to this method as the variational Garrote (VG). We show that the combination of
the variational approximation and $L_0$ regularization has the effect of making
the problem effectively of maximal rank even when the number of samples is
small compared to the number of variables. The VG is compared numerically with
the Lasso method, ridge regression and the recently introduced paired mean
field method (PMF) (M. Titsias & M. L\'azaro-Gredilla, NIPS 2012). Numerical
results show that the VG and PMF yield more accurate predictions and more
accurately reconstruct the true model than the other methods. It is shown that
the VG finds correct solutions when the Lasso solution is inconsistent due to
large input correlations. Globally, VG is significantly faster than PMF and
tends to perform better as the problems become denser and in problems with
strongly correlated inputs. The naive implementation of the VG scales cubically
with the number of features. By introducing Lagrange multipliers we obtain a
dual formulation of the problem that scales cubically in the number of samples,
but close to linearly in the number of features.
|
1109.0507
|
How Open Should Open Source Be?
|
cs.CR cs.LG
|
Many open-source projects land security fixes in public repositories before
shipping these patches to users. This paper presents attacks on such projects -
taking Firefox as a case-study - that exploit patch metadata to efficiently
search for security patches prior to shipping. Using access-restricted bug
reports linked from patch descriptions, security patches can be immediately
identified for 260 out of 300 days of Firefox 3 development. In response to
Mozilla obfuscating descriptions, we show that machine learning can exploit
metadata such as patch author to search for security patches, extending the
total window of vulnerability by 5 months in an 8 month period when examining
up to two patches daily. Finally we present strong evidence that further
metadata obfuscation is unlikely to prevent information leaks, and we argue
that open-source projects instead ought to keep security patches secret until
they are ready to be released.
|
1109.0530
|
Orthogonal Query Expansion
|
cs.IR
|
Over the last fifteen years, web searching has seen tremendous improvements.
Starting from a nearly random collection of matching pages in 1995, today,
search engines tend to satisfy the user's informational need on well-formulated
queries. One of the main remaining challenges is to satisfy the users' needs
when they provide a poorly formulated query. When the pages matching the user's
original keywords are judged to be unsatisfactory, query expansion techniques
are used to alter the result set. These techniques find keywords that are
similar to the keywords given by the user, which are then appended to the
original query leading to a perturbation of the result set. However, when the
original query is sufficiently ill-posed, the user's informational need is best
met using entirely different keywords, and a small perturbation of the original
result set is bound to fail.
We propose a novel approach that is not based on the keywords of the original
query. We intentionally seek out orthogonal queries, which are related queries
that have low similarity to the user's query. The result sets of orthogonal
queries intersect with the result set of the original query on a small number
of pages. An orthogonal query can access the user's informational need while
consisting of entirely different terms than the original query. We illustrate
the effectiveness of our approach by proposing a query expansion method derived
from these observations that improves upon results obtained using the Yahoo
BOSS infrastructure.
|
1109.0556
|
Effects of long-range links on metastable states in a dynamic
interaction network
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
We introduce a model for random-walking nodes on a periodic lattice, where
the dynamic interaction network is defined from local interactions and E
randomly-added long-range links. With periodic states for nodes and an
interaction rule of repeated averaging, we numerically find two types of
metastable states in the low- and high-E limits, respectively, along with
consensus states. If we apply this model to opinion dynamics, the metastable
states can be interpreted as sustainable diversity in a society, and our
result then implies that, while diversity decreases and eventually disappears
as more long-range connections are added, another type of diverse state can
appear when networks are almost fully connected.
|
1109.0573
|
Phase Retrieval via Matrix Completion
|
cs.IT math.IT math.NA
|
This paper develops a novel framework for phase retrieval, a problem which
arises in X-ray crystallography, diffraction imaging, astronomical imaging and
many other applications. Our approach combines multiple structured
illuminations together with ideas from convex programming to recover the phase
from intensity measurements, typically from the modulus of the diffracted wave.
We demonstrate empirically that any complex-valued object can be recovered from
the knowledge of the magnitude of just a few diffracted patterns by solving a
simple convex optimization problem inspired by the recent literature on matrix
completion. More importantly, we also demonstrate that our noise-aware
algorithms are stable in the sense that the reconstruction degrades gracefully
as the signal-to-noise ratio decreases. Finally, we introduce some theory
showing that one can design very simple structured illumination patterns such
that three diffracted figures uniquely determine the phase of the object we
wish to recover.
|
1109.0596
|
Discrete Wigner Function Reconstruction and Compressed Sensing
|
quant-ph cond-mat.other cs.IT math.IT
|
A new reconstruction method for Wigner function is reported for quantum
tomography based on compressed sensing. By analogy with computed tomography,
Wigner functions for some quantum states can be reconstructed with fewer
measurements using this compressed-sensing-based method.
|
1109.0601
|
Application of distributed constraint satisfaction problem to the
agent-based planning in manufacturing systems
|
cs.MA
|
Nowadays, the globalization of national markets requires developing flexible
and demand-driven production systems. Agent-based technology, being
distributed, flexible and autonomous, is expected to provide a short-time
reaction to disturbances and sudden changes in the environment, and thus
to satisfy these requirements. The distributed constraint satisfaction
approach underlying the suggested method is described by a modified Petri
net providing both the conceptual notions and the main details of the
implementation.
|
1109.0616
|
ATP and Presentation Service for Mizar Formalizations
|
cs.DL cs.AI
|
This paper describes the Automated Reasoning for Mizar (MizAR) service, which
integrates several automated reasoning, artificial intelligence, and
presentation tools with Mizar and its authoring environment. The service
provides ATP assistance to Mizar authors in finding and explaining proofs, and
offers generation of Mizar problems as challenges to ATP systems. The service
is based on a sound translation from the Mizar language to that of first-order
ATP systems, and relies on the recent progress in application of ATP systems in
large theories containing tens of thousands of available facts. We present the
main features of MizAR services, followed by an account of initial experiments
in finding proofs with the ATP assistance. Our initial experience indicates
that the tool offers substantial help in exploring the Mizar library and in
preparing new Mizar articles.
|
1109.0617
|
Metadata Challenge for Query Processing Over Heterogeneous Wireless
Sensor Network
|
cs.DB
|
Wireless sensor networks (WSNs) have become an integral part of our lives.
Owing to their flexibility and functionality, these networks can be used for
monitoring data in various domains. Query processing and optimization in a
WSN is a very challenging task because of the networks' energy and memory
constraints. In this paper, we first review the different approaches that
have had a significant impact on the development of query processing
techniques for WSNs. We then illustrate the existing approaches in popular
query processing engines, along with future research challenges in query
optimization.
|
1109.0621
|
Visual Inference Specification Methods for Modularized Rulebases.
Overview and Integration Proposal
|
cs.AI cs.SE
|
The paper concerns selected rule modularization techniques. Three visual
methods for inference specification for modularized rulebases are described:
Drools Flow, BPMN and XTT2. Drools Flow is a popular technology for workflow
or process modeling, BPMN is an OMG standard for modeling business processes,
and XTT2 is a hierarchical tabular system specification method. Because of
some limitations of these solutions, several proposals for their integration
are given.
|
1109.0624
|
Building Ontologies to Understand Spoken Tunisian Dialect
|
cs.CL
|
This paper presents a method for understanding the spoken Tunisian dialect
based on lexical semantics. The method takes into account the specificity of
the Tunisian dialect, which has no linguistic processing tools. It is
ontology-based, which allows exploiting ontological concepts for semantic
annotation and ontological relations for speech interpretation. This
combination increases the comprehension rate and limits the dependence on
linguistic resources. The paper also details the process of building the
ontology used for annotation and interpretation of the Tunisian dialect in
the context of speech understanding in restricted-domain dialogue systems.
|
1109.0628
|
The Weight Distributions of Cyclic Codes and Elliptic Curves
|
cs.IT math.IT
|
Cyclic codes with two zeros and their dual codes, as a practically and
theoretically interesting class of linear codes, have been studied for many
years. However, the weight distributions of cyclic codes are difficult to
determine. Using elliptic curves, this paper determines the weight
distributions of the dual codes of cyclic codes with two zeros in a few more
cases.
|
1109.0631
|
LWE-based Identification Schemes
|
cs.CR cs.IT math.IT
|
Some hard problems from lattices, like LWE (Learning with Errors), are
particularly suitable for application in Cryptography due to the possibility of
using worst-case to average-case reductions as evidence of strong security
properties. In this work, we show two LWE-based constructions of zero-knowledge
identification schemes and discuss their performance and security. We also
highlight the design choices that make our solution of both theoretical and
practical interest.
|
1109.0633
|
Eliciting implicit assumptions of proofs in the MIZAR Mathematical
Library by property omission
|
cs.LO cs.AI math.LO
|
When formalizing proofs with interactive theorem provers, it often happens
that extra background knowledge (declarative or procedural) about mathematical
concepts is employed without the formalizer explicitly invoking it, to help the
formalizer focus on the relevant details of the proof. In the contexts of
producing and studying a formalized mathematical argument, such mechanisms are
clearly valuable. But we may not always wish to suppress background knowledge.
For certain purposes, it is important to know, as far as possible, precisely
what background knowledge was implicitly employed in a formal proof. In this
note we describe an experiment conducted on the MIZAR Mathematical Library of
formal mathematical proofs to elicit one such class of implicitly employed
background knowledge: properties of functions and relations (e.g.,
commutativity, asymmetry, etc.).
|
1109.0651
|
Mathematical Analysis of the BIBEE Approximation for Molecular
Solvation: Exact Results for Spherical Inclusions
|
cs.CE physics.chem-ph physics.comp-ph
|
We analyze the mathematically rigorous BIBEE (boundary-integral based
electrostatics estimation) approximation of the mixed-dielectric continuum
model of molecular electrostatics, using the analytically solvable case of a
spherical solute containing an arbitrary charge distribution. Our analysis,
which builds on Kirkwood's solution using spherical harmonics, clarifies
important aspects of the approximation and its relationship to Generalized Born
models. First, our results suggest a new perspective for analyzing fast
electrostatic models: the separation of variables between material properties
(the dielectric constants) and geometry (the solute dielectric boundary and
charge distribution). Second, we find that the eigenfunctions of the
reaction-potential operator are exactly preserved in the BIBEE model for the
sphere, which supports the use of this approximation for analyzing
charge-charge interactions in molecular binding. Third, a comparison of BIBEE
to the recent GB$\epsilon$ theory suggests a modified BIBEE model capable of
predicting electrostatic solvation free energies to within 4% of a full
numerical Poisson calculation. This modified model leads to a
projection-framework understanding of BIBEE and suggests opportunities for
future improvements.
|
1109.0660
|
Mismatch and resolution in compressive imaging
|
cs.IT math.IT math.NA
|
Highly coherent sensing matrices arise in discretization of continuum
problems such as radar and medical imaging when the grid spacing is below the
Rayleigh threshold as well as in using highly coherent, redundant dictionaries
as sparsifying operators. Algorithms (BOMP, BLOOMP) based on techniques of band
exclusion and local optimization are proposed to enhance Orthogonal Matching
Pursuit (OMP) and deal with such coherent sensing matrices. BOMP and BLOOMP
have a provable performance guarantee for reconstructing sparse, widely separated
objects {\em independent} of the redundancy, and have a sparsity constraint and
computational cost similar to OMP's. Numerical study demonstrates the
effectiveness of BLOOMP for compressed sensing with highly coherent, redundant
sensing matrices.
|
1109.0681
|
Generic Optimization of Linear Precoding in Multibeam Satellite Systems
|
cs.IT math.IT
|
Multibeam satellite systems have been employed to provide interactive
broadband services to geographical areas under-served by terrestrial
infrastructure. In this context, this paper studies joint multiuser linear
precoding design in the forward link of fixed multibeam satellite systems. We
provide a generic optimization framework for linear precoding design to handle
any objective functions of data rate with general linear and nonlinear power
constraints. To achieve this, an iterative algorithm which alternately
optimizes the precoding vectors and the power allocation is proposed and,
most importantly, is proved to always converge. The proposed
optimization algorithm is also applicable to nonlinear dirty paper coding. In
addition, the aforementioned problems and algorithms are extended to the case
that each terminal has multiple co-polarization or dual-polarization antennas.
Simulation results demonstrate substantial performance improvement of the
proposed schemes over conventional multibeam satellite systems, zero-forcing
and regularized zero-forcing precoding schemes in terms of meeting the traffic
demand. The performance of the proposed linear precoding scheme is also shown
to be very close to that of dirty paper coding.
|
1109.0687
|
Performance of distributed mechanisms for flow admission in wireless
adhoc networks
|
cs.IT cs.DC math.IT
|
Given a wireless network where some pairs of communication links interfere
with each other, we study sufficient conditions for determining whether a given
set of minimum bandwidth quality-of-service (QoS) requirements can be
satisfied. We are especially interested in algorithms which have low
communication overhead and low processing complexity. The interference in the
network is modeled using a conflict graph whose vertices correspond to the
communication links in the network. Two links are adjacent in this graph if and
only if they interfere with each other due to being in the same vicinity and
hence cannot be simultaneously active. The problem of scheduling the
transmission of the various links is then essentially a fractional, weighted
vertex coloring problem, for which upper bounds on the fractional chromatic
number are sought using only localized information. We recall some distributed
algorithms for this problem, and then assess their worst-case performance. Our
results on this fundamental problem imply that for some well known classes of
networks and interference models, the performance of these distributed
algorithms is within a bounded factor away from that of an optimal, centralized
algorithm. The performance bounds are simple expressions in terms of graph
invariants. It is seen that the induced star number of a network plays an
important role in the design and performance of such networks.
|
1109.0693
|
Transportation dynamics on networks of mobile agents
|
physics.soc-ph cs.SI
|
Most existing works on transportation dynamics focus on networks of a fixed
structure, but networks whose nodes are mobile have become widespread, such as
cell-phone networks. We introduce a model to explore the basic physics of
transportation on mobile networks. Of particular interest are the dependence of
the throughput on the speed of agent movement and communication range. Our
computations reveal a hierarchical dependence for the former while, for the
latter, we find an algebraic power law between the throughput and the
communication range with an exponent determined by the speed. We develop a
physical theory based on the Fokker-Planck equation to explain these phenomena.
Our findings provide insights into complex transportation dynamics arising
commonly in natural and engineering systems.
|
1109.0696
|
Hybrid Digital/Analog Schemes for Secure Transmission with Side
Information
|
cs.IT math.IT
|
Recent results on source-channel coding for secure transmission show that
separation holds in several cases under some less-noisy conditions. However, it
has also been proved through a simple counterexample that pure analog schemes
can be optimal and hence outperform digital ones. According to these
observations and assuming matched-bandwidth, we present a novel hybrid
digital/analog scheme that aims to gather the advantages of both digital and
analog ones. In the quadratic Gaussian setup when side information is only
present at the eavesdropper, this strategy is proved to be optimal.
Furthermore, it outperforms both digital and analog schemes and cannot be
achieved via time-sharing. An application example to binary symmetric sources
with side information is also investigated.
|
1109.0724
|
Throughput Maximization for the Gaussian Relay Channel with Energy
Harvesting Constraints
|
cs.IT math.IT
|
This paper considers the use of energy harvesters, instead of conventional
time-invariant energy sources, in wireless cooperative communication. For the
purpose of exposition, we study the classic three-node Gaussian relay channel
with decode-and-forward (DF) relaying, in which the source and relay nodes
transmit with power drawn from energy-harvesting (EH) sources. Assuming a
deterministic EH model under which the energy arrival time and the harvested
amount are known prior to transmission, the throughput maximization problem
over a finite horizon of $N$ transmission blocks is investigated. In
particular, two types of data traffic with different delay constraints are
considered: delay-constrained (DC) traffic (for which only one-block decoding
delay is allowed at the destination) and no-delay-constrained (NDC) traffic
(for which arbitrary decoding delay up to $N$ blocks is allowed). For the DC
case, we show that the joint source and relay power allocation over time is
necessary to achieve the maximum throughput, and propose an efficient algorithm
to compute the optimal power profiles. For the NDC case, although the
throughput maximization problem is non-convex, we prove the optimality of a
separation principle for the source and relay power allocation problems, based
upon which a two-stage power allocation algorithm is developed to obtain the
optimal source and relay power profiles separately. Furthermore, we compare the
DC and NDC cases, and obtain the sufficient and necessary conditions under
which the NDC case performs strictly better than the DC case. It is shown that
NDC transmission is able to exploit a new form of diversity arising from the
independent source and relay energy availability over time in cooperative
communication, termed "energy diversity", even with time-invariant channels.
|
1109.0732
|
Multilingual ontology matching based on Wiktionary data accessible via
SPARQL endpoint
|
cs.IR
|
Interoperability is a feature required by the Semantic Web. It is provided by
the ontology matching methods and algorithms. But now ontologies are presented
not only in English, but in other languages as well. It is important to use an
automatic translation for obtaining correct matching pairs in multilingual
ontology matching. The translation into many languages could be based on the
Google Translate API, the Wiktionary database, etc. In terms of its balance of
language coverage, manually crafted translations, and sheer dictionary size,
the most promising resource is Wiktionary, a collaborative project working on
the same principles as Wikipedia. A parser of Wiktionary was developed and a
machine-readable dictionary was
designed. The data of the machine-readable Wiktionary are stored in a
relational database, but with the help of a D2R server the database is
presented as an RDF store. Thus, it is possible to get lexicographic
information (definitions, translations, synonyms) from the web service using
SPARQL requests.
The case study addresses the task of multilingual ontology matching based on
Wiktionary data accessible via a SPARQL endpoint. Ontology
matching results obtained using Wiktionary were compared with results based on
Google Translate API.
|
1109.0736
|
Compression Aware Physical Database Design
|
cs.DB
|
Modern RDBMSs support the ability to compress data using methods such as null
suppression and dictionary encoding. Data compression offers the promise of
significantly reducing storage requirements and improving I/O performance for
decision support queries. However, compression can also slow down update and
query performance due to the CPU costs of compression and decompression. In
this paper, we study how data compression affects choice of appropriate
physical database design, such as indexes, for a given workload. We observe
that approaches that decouple the decision of whether or not to choose an index
from whether or not to compress the index can result in poor solutions. Thus,
we focus on the novel problem of integrating compression into physical database
design in a scalable manner. We have implemented our techniques by modifying
Microsoft SQL Server and the Database Engine Tuning Advisor (DTA) physical
design tool. Our techniques are general and are potentially applicable to DBMSs
that support other compression methods. Our experimental results on real world
as well as TPC-H benchmark workloads demonstrate the effectiveness of our
techniques.
|
1109.0758
|
Exploring Social Influence for Recommendation - A Probabilistic
Generative Model Approach
|
cs.SI cs.IR physics.soc-ph
|
In this paper, we propose a probabilistic generative model, called unified
model, which naturally unifies the ideas of social influence, collaborative
filtering and content-based methods for item recommendation. To address the
issue of hidden social influence, we devise new algorithms to learn the model
parameters of our proposal based on expectation maximization (EM). In addition
to a single-machine version of our EM algorithm, we further devise a
parallelized implementation on the Map-Reduce framework to process two
large-scale datasets we collect. Moreover, we show that the social influence
obtained from our generative models can be used for group recommendation.
Finally, we conduct comprehensive experiments using the datasets crawled from
last.fm and whrrl.com to validate our ideas. Experimental results show that the
generative models with social influence significantly outperform those without
incorporating social influence. The unified generative model proposed in this
paper obtains the best performance. Moreover, our study on social influence
finds that users in whrrl.com are more likely to get influenced by friends than
those in last.fm. The experimental results also confirm that our social
influence based group recommendation algorithm outperforms the state-of-the-art
algorithms for group recommendation.
|
1109.0762
|
Tunable Dual-band IFA Antenna using LC Resonators
|
cs.IT math.IT
|
A tunable dual-band inverted F antenna (IFA) is presented in this paper. By
placing an LC resonator on the radiating arm, a dual-band characteristic is
achieved. In particular, the capacitor in the resonator is a tunable thin-film
BST capacitor with a 3.3:1 tuning ratio. The capacitance of the BST
capacitors can be tuned by an external DC bias voltage. By varying the
capacitance, both the lower band and the upper band of the IFA antenna can be
tuned, and the total bandwidth can cover six systems: GSM-850, GSM-900,
GPS, DCS, PCS, and UMTS.
|
1109.0766
|
Cooperative Secret Key Generation from Phase Estimation in Narrowband
Fading Channels
|
cs.CR cs.IT math.IT
|
By exploiting multipath fading channels as a source of common randomness,
physical layer (PHY) based key generation protocols allow two terminals with
correlated observations to generate secret keys with information-theoretical
security. The state of the art, however, still suffers from major limitations,
e.g., low key generation rates, low entropy of the key bits, and a high reliance on
node mobility. In this paper, a novel cooperative key generation protocol is
developed to facilitate high-rate key generation in narrowband fading channels,
where two keying nodes extract the phase randomness of the fading channel with
the aid of relay node(s). For the first time, we explicitly consider the effect
of estimation methods on the extraction of secret key bits from the underlying
fading channels and focus on a popular statistical method--maximum likelihood
estimation (MLE). The performance of the cooperative key generation scheme is
extensively evaluated theoretically. We successfully establish both a
theoretical upper bound on the maximum secret key rate from mutual information
of correlated random sources and a more practical upper bound from Cramer-Rao
bound (CRB) in estimation theory. Numerical examples and simulation studies are
also presented to demonstrate the performance of the cooperative key generation
system. The results show that the key rate can be improved by a couple of
orders of magnitude compared to the existing approaches.
|
1109.0800
|
Quantized Compute and Forward: A Low-Complexity Architecture for
Distributed Antenna Systems
|
cs.IT math.IT
|
We consider a low-complexity version of the Compute and Forward scheme that
involves only scaling, offset (dithering removal) and scalar quantization at
the relays. The proposed scheme is suited for the uplink of a distributed
antenna system where the antenna elements must be very simple and are connected
to a joint processor via orthogonal perfect links of given rate R0. We consider
the design of non-binary LDPC codes naturally matched to the proposed scheme.
Each antenna element performs individual (decentralized) Belief Propagation
decoding of its own quantized signal, and sends a linear combination of the
users' information messages via the noiseless link to the joint processor,
which retrieves the users' messages by Gaussian elimination. The complexity of
this scheme is linear in the coding block length and polynomial in the system
size (number of relays).
|
1109.0802
|
Achieving the Han-Kobayashi inner bound for the quantum interference
channel by sequential decoding
|
quant-ph cs.IT math.IT
|
In this paper, we study the power of sequential decoding strategies for
several channels with classical input and quantum output. In our sequential
decoding strategies, the receiver loops through all candidate messages trying
to project the received state onto a `typical' subspace for the candidate
message under consideration, stopping if the projection succeeds for a message,
which is then declared as the guess of the receiver for the sent message. We
show that even such a conceptually simple strategy can be used to achieve rates
up to the mutual information for a single sender single receiver channel called
cq-channel henceforth, as well as the standard inner bound for a two sender
single receiver multiple access channel, called ccq-MAC in this paper. Our
decoding scheme for the ccq-MAC uses a new kind of conditionally typical
projector which is constructed using a geometric result about how two subspaces
interact structurally.
As the main application of our methods, we construct an encoding and decoding
scheme achieving the Chong-Motani-Garg inner bound for a two sender two
receiver interference channel with classical input and quantum output, called
ccqq-IC henceforth. This matches the best known inner bound for the
interference channel in the classical setting. Achieving the Chong-Motani-Garg
inner bound, which is known to be equivalent to the Han-Kobayashi inner bound,
answers an open question raised recently by Fawzi et al. (arxiv:1102.2624). Our
encoding scheme is the same as that of Chong-Motani-Garg, and our decoding
scheme is sequential.
|
1109.0807
|
Harmonic Analysis of Boolean Networks: Determinative Power and
Perturbations
|
cs.IT cond-mat.dis-nn math.IT q-bio.MN
|
Consider a large Boolean network with a feed forward structure. Given a
probability distribution on the inputs, can one find, possibly small,
collections of input nodes that determine the states of most other nodes in the
network? To answer this question, a notion that quantifies the determinative
power of an input over the states of the nodes in the network is needed. We
argue that the mutual information (MI) between a given subset of the inputs X =
{X_1, ..., X_n} of some node i and its associated function f_i(X) quantifies
the determinative power of this set of inputs over node i. We compare the
determinative power of a set of inputs to the sensitivity to perturbations to
these inputs, and find that, perhaps surprisingly, an input that has large
sensitivity to perturbations does not necessarily have large determinative
power. However, for unate functions, which play an important role in genetic
regulatory networks, we find a direct relation between MI and sensitivity to
perturbations. As an application of our results, we analyze the large-scale
regulatory network of Escherichia coli. We identify the most determinative
nodes and show that a small subset of those reduces the overall uncertainty of
the network state significantly. Furthermore, the network is found to be
tolerant to perturbations of its inputs.
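The contrast between sensitivity and determinative power can be made concrete with a small sketch (uniform i.i.d. inputs assumed; the names `mutual_information`, `f`, `idx` are illustrative, not from the paper). XOR is the classic witness: each single input is maximally sensitive to perturbation, yet carries zero MI about the output.

```python
from itertools import product
from math import log2

def mutual_information(f, idx, n):
    """MI between the input subset `idx` and f(X) for uniform i.i.d. inputs."""
    # Joint counts over (subset assignment, function output).
    joint = {}
    for x in product([0, 1], repeat=n):
        key = (tuple(x[i] for i in idx), f(x))
        joint[key] = joint.get(key, 0) + 1
    total = 2 ** n
    # Marginals of the subset assignment and of the output.
    px, py = {}, {}
    for (xs, y), c in joint.items():
        px[xs] = px.get(xs, 0) + c
        py[y] = py.get(y, 0) + c
    mi = 0.0
    for (xs, y), c in joint.items():
        p = c / total
        mi += p * log2(p / ((px[xs] / total) * (py[y] / total)))
    return mi

# XOR: a single input has zero determinative power despite full sensitivity.
xor = lambda x: x[0] ^ x[1]
print(mutual_information(xor, [0], 2))     # 0.0
print(mutual_information(xor, [0, 1], 2))  # 1.0
```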
|
1109.0820
|
ShareBoost: Efficient Multiclass Learning with Feature Sharing
|
cs.LG cs.AI cs.CV stat.ML
|
Multiclass prediction is the problem of classifying an object into a relevant
target class. We consider the problem of learning a multiclass predictor that
uses only a few features; in particular, the number of used features should
increase sub-linearly with the number of possible classes. This implies that
features should be shared by several classes. We describe and analyze the
ShareBoost algorithm for learning a multiclass predictor that uses few shared
features. We prove that ShareBoost efficiently finds a predictor that uses few
shared features (if such a predictor exists) and that it has a small
generalization error. We also describe how to use ShareBoost for learning a
non-linear predictor that has a fast evaluation time. In a series of
experiments with natural data sets we demonstrate the benefits of ShareBoost
and evaluate its success relative to other state-of-the-art approaches.
|
1109.0827
|
A Trellis Coded Modulation Scheme for the Fading Relay Channel
|
cs.IT math.IT
|
A decode and forward protocol based Trellis Coded Modulation (TCM) scheme for
the half-duplex relay channel, in a Rayleigh fading environment, is presented.
The proposed scheme can achieve any spectral efficiency greater than or equal
to one bit per channel use (bpcu). A near-ML decoder for the suggested TCM
scheme is proposed. It is shown that the high-SNR performance of this near-ML
decoder approaches that of the optimal ML decoder; moreover, it is independent
of the strength of the Source-Relay link and approaches the performance of the
optimal ML decoder with an ideal Source-Relay link. Based on the derived
Pairwise Error Probability
(PEP) bounds, design criteria to maximize the diversity and coding gains are
obtained. Simulation results show a large gain in SNR for the proposed TCM
scheme over uncoded communication as well as the direct transmission without
the relay. Also, it is shown that even for the uncoded transmission scheme, the
choice of the labelling scheme (mapping from bits to complex symbols) used at
the source and the relay significantly impacts the BER vs SNR performance. We
provide a good labelling scheme for $2^l$-PSK signal set, where $l\geq 2$ is an
integer.
|
1109.0839
|
Percolation on correlated random networks
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
We consider a class of random, weighted networks, obtained through a
redefinition of patterns in a Hopfield-like model and, by performing
percolation processes, we get information about topology and resilience
properties of the networks themselves. Given the weighted nature of the graphs,
different kinds of bond percolation can be studied: stochastic (deleting links
randomly) and deterministic (deleting links based on rank weights), each
mimicking a different physical process. The evolution of the network is
accordingly different, as evidenced by the behavior of the largest component
size and of the distribution of cluster sizes. In particular, we find that
weak ties are crucial for keeping the graph connected and that, when they are
the most prone to failure, the giant component typically shrinks
without abruptly breaking apart; these results have been recently evidenced in
several kinds of social networks.
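The stochastic bond-percolation experiment described above can be sketched minimally as follows (unweighted here for brevity; the paper's deterministic variant would instead delete links in rank order of their weights, and the graph and function names are illustrative):

```python
import random
from collections import Counter

def giant_component_fraction(n, edges, keep_prob, seed=0):
    """Stochastic bond percolation: keep each edge independently with
    probability `keep_prob`, then return the fraction of nodes in the
    largest connected component (union-find with path halving)."""
    rng = random.Random(seed)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        if rng.random() < keep_prob:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    sizes = Counter(find(x) for x in range(n))
    return max(sizes.values()) / n

# A ring of 200 nodes plus 200 random chords: the giant component
# survives random deletion of 10% of the links.
n = 200
edges = [(i, (i + 1) % n) for i in range(n)]
random.seed(1)
edges += [(random.randrange(n), random.randrange(n)) for _ in range(200)]
print(giant_component_fraction(n, edges, 0.9) > 0.8)  # True
```

Sweeping `keep_prob` downward traces the shrinking of the largest component discussed in the abstract.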
|
1109.0847
|
Robust Transceiver with Tomlinson-Harashima Precoding for
Amplify-and-Forward MIMO Relaying Systems
|
cs.IT math.IT
|
In this paper, robust transceiver design with Tomlinson-Harashima precoding
(THP) for multi-hop amplify-and-forward (AF) multiple-input multiple-output
(MIMO) relaying systems is investigated. At the source node, THP is adopted to
mitigate the spatial intersymbol interference. However, due to its nonlinear
nature, THP is very sensitive to channel estimation errors. In order to reduce
the effects of channel estimation errors, a joint Bayesian robust design of THP
at source, linear forwarding matrices at relays and linear equalizer at
destination is proposed. By exploiting properties of multiplicative convexity
and matrix-monotone functions, the optimal structure
of the nonlinear transceiver is first derived. Based on the derived structure,
the transceiver design problem reduces to a much simpler one with only scalar
variables which can be efficiently solved. Finally, the performance advantage
of the proposed robust design over non-robust design is demonstrated by
simulation results.
|
1109.0882
|
Moving Object Detection by Detecting Contiguous Outliers in the Low-Rank
Representation
|
cs.CV
|
Object detection is a fundamental step for automated video analysis in many
vision applications. Object detection in a video is usually performed by object
detectors or background subtraction techniques. Often, an object detector
requires manually labeled examples to train a binary classifier, while
background subtraction needs a training sequence that contains no objects to
build a background model. To automate the analysis, object detection without a
separate training phase becomes a critical task. Previous work has tried to
tackle this task by using motion information, but existing motion-based
methods are usually limited when coping with complex scenarios such as
nonrigid motion and dynamic background. In this paper, we show that the above
challenges can be
addressed in a unified framework named DEtecting Contiguous Outliers in the
LOw-rank Representation (DECOLOR). This formulation integrates object detection
and background learning into a single process of optimization, which can be
solved by an alternating algorithm efficiently. We explain the relations
between DECOLOR and other sparsity-based methods. Experiments on both simulated
data and real sequences demonstrate that DECOLOR outperforms the
state-of-the-art approaches and it can work effectively on a wide range of
complex scenarios.
|
1109.0895
|
Nonlinear Channel Estimation for OFDM System by Complex LS-SVM under
High Mobility Conditions
|
cs.LG stat.ML
|
A nonlinear channel estimator using complex Least Square Support Vector
Machines (LS-SVM) is proposed for pilot-aided OFDM system and applied to Long
Term Evolution (LTE) downlink under high mobility conditions. The estimation
algorithm makes use of the reference signals to estimate the total frequency
response of the highly selective multipath channel in the presence of
non-Gaussian impulse noise interfering with the pilot signals. The algorithm
maps the training data into a high-dimensional feature space and uses the structural
risk minimization (SRM) principle to carry out the regression estimation for
the frequency response function of the highly selective channel. The
simulations show the effectiveness of the proposed method, which tracks the
variations of the fading channels with higher precision than the conventional
LS method and remains robust under high-speed mobility.
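For intuition, LS-SVM regression replaces the SVM quadratic program with a single linear (KKT) system. A real-valued sketch with an RBF kernel follows; the paper's estimator operates on complex pilot observations, and all parameter values here are illustrative:

```python
import numpy as np

def lssvm_fit(X, y, gamma=1e3, sigma=0.2):
    """Least-Squares SVM regression: solve one linear system instead of a QP.
    (Real-valued sketch; the paper works with complex-valued LS-SVM.)"""
    n = len(X)
    # RBF kernel matrix
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    # KKT system of the LS-SVM dual: [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    def predict(Xq):
        d2q = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2q / (2 * sigma ** 2)) @ alpha + b
    return predict

# Fit a smooth "frequency response" curve from 40 noiseless samples.
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
f = lssvm_fit(X, y)
print(np.max(np.abs(f(X) - y)) < 0.1)
```

The single linear solve is what makes LS-SVM attractive for per-symbol channel estimation compared with a full SVM training loop.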
|
1109.0908
|
Increasing Physical Layer Security through Scrambled Codes and ARQ
|
cs.IT cs.CR math.IT
|
We develop the proposal of non-systematic channel codes on the AWGN wire-tap
channel. This coding technique, based on scrambling, achieves high transmission
security with a small degradation of the eavesdropper's channel with respect to
the legitimate receiver's channel. In this paper, we show that, by implementing
scrambling and descrambling on blocks of concatenated frames, rather than on
single frames, the required channel degradation is further reduced. The use of
concatenated scrambling makes it possible to achieve security even when both receivers
experience the same channel quality. However, in this case, the introduction of
an ARQ protocol with authentication is needed.
|
1109.0916
|
Ranking of Wikipedia articles in search engines revisited: Fair ranking
for reasonable quality?
|
cs.IR
|
This paper aims to review the fiercely discussed question of whether the
ranking of Wikipedia articles in search engines is justified by the quality of
the articles. After an overview of current research on information quality in
Wikipedia, a summary of the extended discussion on the quality of encyclopedic
entries in general is given. On this basis, a heuristic method for evaluating
Wikipedia entries is developed and applied to Wikipedia articles that scored
highly in a search engine retrieval effectiveness test, and the results are
compared with the relevance judgments of jurors. In all search engines tested, Wikipedia results
are unanimously judged better by the jurors than other results on the
corresponding results position. Relevance judgments often roughly correspond
with the results from the heuristic evaluation. Cases in which high relevance
judgments are not in accordance with the comparatively low score from the
heuristic evaluation are interpreted as an indicator of a high degree of trust
in Wikipedia. One of the systemic shortcomings of Wikipedia lies in its
necessarily incoherent user model. Further tuning of the suggested criteria
catalogue, for instance a different weighting of the supplied criteria, could
serve as a starting point for a user-model-differentiated evaluation of
Wikipedia articles. Approved methods of quality evaluation of reference works
are applied to Wikipedia articles and integrated with the question of search
engine evaluation.
|
1109.0923
|
Reliability in Source Coding with Side Information
|
cs.IT math.IT
|
We study error exponents for source coding with side information. Both
achievable exponents and converse bounds are obtained for the following two
cases: lossless source coding with coded information (SCCSI) and lossy source
coding with full side information (Wyner-Ziv). These results recover and extend
several existing results on source-coding error exponents and are tight in some
circumstances. Our bounds have a natural interpretation as a two-player game
between nature and the code designer, with nature seeking to minimize the
exponent and the code designer seeking to maximize it. In the Wyner-Ziv problem
our analysis exposes a tension in the choice of test channel with the optimal
test channel balancing two competing error events. The Gaussian and
binary-erasure cases are examined in detail.
|
1109.1015
|
High-resolution measurements of face-to-face contact patterns in a
primary school
|
physics.soc-ph cs.SI q-bio.QM
|
Little quantitative information is available on the mixing patterns of
children in school environments. Describing and understanding contacts between
children at school would help quantify the transmission opportunities of
respiratory infections and identify situations within schools where the risk of
transmission is higher. We report on measurements carried out in a French
school (children aged 6-12 years), where we collected data on the time-resolved
face-to-face proximity of children and teachers using a proximity-sensing
infrastructure based on radio frequency identification devices.
Data on face-to-face interactions were collected on October 1st and 2nd,
2009. We recorded 77,602 contact events between 242 individuals. Each child has
on average 323 contacts per day with 47 other children, leading to an average
daily interaction time of 176 minutes. Most contacts are brief, but long
contacts are also observed. Contacts occur mostly within each class, and each
child spends on average three times more time in contact with classmates than
with children of other classes. We describe the temporal evolution of the
contact network and the trajectories followed by the children in the school,
which constrain the contact patterns. We determine an exposure matrix aimed at
informing mathematical models. This matrix exhibits a class and age structure
which is very different from the homogeneous mixing hypothesis.
The observed properties of the contact patterns between school children are
relevant for modeling the propagation of diseases and for evaluating control
measures. We discuss public health implications related to the management of
schools in case of epidemics and pandemics. Our results can help define a
prioritization of control measures based on preventive measures, case
isolation, classes and school closures, that could reduce the disruption to
education during epidemics.
|
1109.1032
|
Tech Report A Variational HEM Algorithm for Clustering Hidden Markov
Models
|
cs.AI stat.ML
|
The hidden Markov model (HMM) is a generative model that treats sequential
data under the assumption that each observation is conditioned on the state of
a discrete hidden variable that evolves in time as a Markov chain. In this
paper, we derive a novel algorithm to cluster HMMs through their probability
distributions. We propose a hierarchical EM algorithm that i) clusters a given
collection of HMMs into groups of HMMs that are similar, in terms of the
distributions they represent, and ii) characterizes each group by a "cluster
center", i.e., a novel HMM that is representative for the group. We present
several empirical studies that illustrate the benefits of the proposed
algorithm.
|
1109.1041
|
Alternative Awaiting and Broadcast for Two-Way Relay Fading Channels
|
cs.IT math.IT
|
We investigate a two-way relay (TWR) fading channel where two source nodes
wish to exchange information with the help of a relay node. Given traditional
TWR protocols, transmission rates in both directions are known to be limited by
the hop with lower capacity, i.e., the min operations between uplink and
downlink. In this paper, we propose a new transmission protocol, named
alternative awaiting and broadcast (AAB), which eliminates the min operations in the
TWR fading channels. The operational principles, new upper bound on ergodic
sum-capacity (ESC) and convergence behavior of average delay of signal
transmission (ST) (in relay buffer) for the proposed AAB protocol are analyzed.
Moreover, we propose a suboptimal encoding/decoding solution for the AAB
protocol and derive an achievable ergodic sum-rate (ESR) with corresponding
average delay of ST. Numerical results show that 1) the proposed AAB protocol
significantly improves the achievable ESR compared to the traditional TWR
protocols, 2) considering the average delay of system service (SS) (in source
buffer), the average delay of ST induced by the proposed AAB protocol is
negligible.
|
1109.1044
|
Proceedings Third International Workshop on Computational Models for
Cell Processes
|
cs.CE q-bio.CB
|
This volume contains the final versions of the papers presented at the 3rd
International Workshop on Computational Models for Cell Processes (CompMod
2011). The workshop took place on September 10, 2011 at the University of
Aachen, Germany, in conjunction with CONCUR 2011. The first edition of the
workshop (2008) took place in Turku, Finland, in conjunction with Formal
Methods 2008 and the second edition (2009) took place in Eindhoven, the
Netherlands, as well in conjunction with Formal Methods 2009. The goal of the
CompMod workshop series is to bring together researchers in Computer Science
(especially in Formal Methods) and Mathematics (both discrete and continuous),
interested in the opportunities and the challenges of Systems Biology.
|
1109.1045
|
On the Linear Precoder Design for MIMO Channels with Finite-Alphabet
Inputs and Statistical CSI
|
cs.IT math.IT
|
This paper investigates the linear precoder design that maximizes the average
mutual information of multiple-input multiple-output channels with
finite-alphabet inputs and statistical channel state information known at the
transmitter. This linear precoder design is an important open problem and is
extremely difficult to solve: first, the average mutual information lacks a
closed-form expression and is costly to compute; second, the optimization
problem over the precoder is nonconcave. This study explores the
solution to this problem and provides the following contributions: 1) A
closed-form lower bound of average mutual information is derived. It achieves
asymptotic optimality at low and high signal-to-noise ratio regions and, with a
constant shift, offers an accurate approximation to the average mutual
information; 2) The optimal structure of the precoder is revealed, and a
unified two-step iterative algorithm is proposed to solve this problem.
Numerical examples show the convergence and the efficacy of the proposed
algorithm. Compared to its conventional counterparts, the proposed linear
precoding method provides a significant performance gain.
|
1109.1057
|
Toward Designing Intelligent PDEs for Computer Vision: An Optimal
Control Approach
|
cs.CV
|
Many computer vision and image processing problems can be posed as solving
partial differential equations (PDEs). However, designing a PDE system usually
requires high mathematical skills and good insight into the problems. In this
paper, we consider designing PDEs for various problems arising in computer
vision and image processing in a lazy manner: \emph{learning PDEs from real
data via data-based optimal control}. We first propose a general intelligent
PDE system which holds the basic translational and rotational invariance rule
for most vision problems. By introducing a PDE-constrained optimal control
framework, it is possible to use the training data resulting from multiple ways
(ground truth, results from other methods, and manual results from humans) to
learn PDEs for different computer vision tasks. The proposed optimal control
based training framework aims at learning a PDE-based regressor to approximate
the unknown (and usually nonlinear) mapping of different vision tasks. The
experimental results show that the learnt PDEs can solve different vision
problems reasonably well. In particular, we can obtain PDEs not only for
problems where traditional PDEs work well but also for problems to which
PDE-based methods have never been applied, due to the difficulty of describing those
problems in a mathematical way.
|
1109.1059
|
C-Rank: A Link-based Similarity Measure for Scientific Literature
Databases
|
cs.DL cs.IR physics.soc-ph
|
As the number of people who use scientific literature databases grows, the
demand for literature retrieval services has steadily increased. One of
the most popular retrieval services is to find a set of papers similar to the
paper under consideration, which requires a measure that computes similarities
between papers. Scientific literature databases exhibit two interesting
characteristics that are different from general databases. First, the papers
cited by old papers are often not included in the database due to technical and
economic reasons. Second, since a paper references the papers published before
it, few papers cite recently-published papers. These two characteristics cause
all existing similarity measures to fail in at least one of the following
cases: (1) measuring the similarity between old, but similar papers, (2)
measuring the similarity between recent, but similar papers, and (3) measuring
the similarity between two similar papers: one old, the other recent. In this
paper, we propose a new link-based similarity measure called C-Rank, which uses
both in-link and out-link by disregarding the direction of references. In
addition, we discuss the most suitable normalization method for scientific
literature databases and propose an evaluation method for measuring the
accuracy of similarity measures. We have used a database with real-world papers
from DBLP and their reference information crawled from Libra for experiments
and compared the performance of C-Rank with those of existing similarity
measures. Experimental results show that C-Rank achieves a higher accuracy than
existing similarity measures.
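The core idea of disregarding reference direction, so that both in-links and out-links contribute, can be illustrated with a SimRank-style recursion over the undirected citation graph. This is only a sketch of the principle; C-Rank's exact recursion and normalization follow the paper, and the toy graph is invented:

```python
def undirected_simrank(edges, iters=10, c=0.8):
    """SimRank-style similarity computed over the undirected citation graph,
    i.e., a paper's neighbourhood mixes the papers it cites and the papers
    citing it (the key idea behind C-Rank; details differ in the paper)."""
    nbr = {}
    for u, v in edges:
        nbr.setdefault(u, set()).add(v)
        nbr.setdefault(v, set()).add(u)
    nodes = sorted(nbr)
    sim = {(u, v): 1.0 if u == v else 0.0 for u in nodes for v in nodes}
    for _ in range(iters):
        new = {}
        for u in nodes:
            for v in nodes:
                if u == v:
                    new[(u, v)] = 1.0
                    continue
                s = sum(sim[(a, b)] for a in nbr[u] for b in nbr[v])
                new[(u, v)] = c * s / (len(nbr[u]) * len(nbr[v]))
        sim = new
    return sim

# Papers a and b never cite each other, but they share references x and y,
# and the old paper is only reachable through its citation of a.
edges = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y"), ("old", "a")]
sim = undirected_simrank(edges)
print(sim[("a", "b")] > 0.5)  # True
```

In a purely directed recursion, the old paper (with no recorded out-links in the database) and the recent papers (with no in-links yet) would get near-zero scores, which is exactly the failure mode the abstract describes.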
|
1109.1062
|
Review on Feature Selection Techniques and the Impact of SVM for Cancer
Classification using Gene Expression Profile
|
cs.CE cs.ET cs.LG q-bio.QM
|
The DNA microarray technology has modernized the approach of biology research
in such a way that scientists can now measure the expression levels of
thousands of genes simultaneously in a single experiment. Gene expression
profiles, which represent the state of a cell at a molecular level, have great
potential as a medical diagnosis tool. But compared to the number of genes
involved, available training data sets generally have a fairly small sample
size for classification. These training data limitations constitute a challenge
to certain classification methodologies. Feature selection techniques can be
used to extract the marker genes that most influence classification accuracy
by eliminating unwanted, noisy, and redundant genes. This paper presents a
review of feature selection techniques that have been employed in
microarray-data-based cancer classification, and of the predominant role of
SVM for cancer classification.
|
1109.1063
|
A Community-Based Sampling Method Using DPL for Online Social Network
|
cs.SI physics.soc-ph
|
In this paper, we propose a new graph sampling method for online social
networks that achieves the following. First, a sample graph should reflect the
ratio between the number of nodes and the number of edges of the original
graph. Second, a sample graph should reflect the topology of the original
graph. Third, sample graphs should be consistent with each other when they are
sampled from the same original graph. The proposed method employs two
techniques: hierarchical community extraction and densification power law. The
proposed method partitions the original graph into a set of communities to
preserve the topology of the original graph. It also uses the densification
power law which captures the ratio between the number of nodes and the number
of edges in online social networks. In experiments, we use several real-world
online social networks, create sample graphs using the existing methods and
ours, and analyze the differences between the sample graph by each sampling
method and the original graph.
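The densification power law constraint, |E| proportional to |N|^a, directly yields an edge budget for a sample graph once the exponent a is fitted on a log-log scale. A minimal sketch (function names and the exponent 1.2 are illustrative):

```python
from math import log

def dpl_exponent(snapshots):
    """Least-squares slope of log|E| versus log|N| over growth snapshots
    (n_nodes, n_edges): the densification power law exponent."""
    pts = [(log(n), log(e)) for n, e in snapshots]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

# Synthetic snapshots obeying |E| = |N|^1.2 exactly.
snaps = [(n, n ** 1.2) for n in (100, 1000, 10000)]
a = dpl_exponent(snaps)
print(round(a, 3))                # 1.2
target_edges = round(500 ** a)    # edge budget for a 500-node sample
```

A sampler can then keep deleting or adding edges in the extracted communities until the sample meets this budget, preserving the node-to-edge ratio the abstract requires.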
|
1109.1067
|
Automatic Diagnosis of Abnormal Tumor Region from Brain Computed
Tomography Images Using Wavelet Based Statistical Texture Features
|
cs.CV
|
The research work presented in this paper aims to achieve tissue
classification and to automatically diagnose the abnormal tumor region present
in Computed Tomography (CT) images using a wavelet-based statistical texture
analysis method. Comparative studies are performed for the proposed
wavelet-based texture analysis method and the Spatial Gray Level Dependence
Method (SGLDM). Our proposed system consists of four phases: (i) discrete
wavelet decomposition, (ii) feature extraction, (iii) feature selection, and
(iv) analysis of the extracted texture features by a classifier. A wavelet-based
statistical texture feature set is derived from normal and tumor regions.
Genetic Algorithm (GA) is used to select the optimal texture features from the
set of extracted texture features. We construct the Support Vector Machine
(SVM) based classifier and evaluate its performance by comparing its
classification results with those of the Back Propagation Neural Network (BPN)
classifier. The results of the SVM and BPN classifiers for the texture
analysis methods are evaluated using Receiver Operating Characteristic (ROC)
analysis. Experimental results show that the classification accuracy of SVM is
96% under 10-fold cross-validation. The system has been tested on a number of
real Computed
Tomography brain images and has achieved satisfactory results.
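The first two phases, wavelet decomposition and statistical feature extraction, can be sketched with a one-level unnormalized 2D Haar transform; this is a generic illustration of the idea, not the paper's exact implementation:

```python
import math

def haar2d(img):
    """One-level 2D Haar decomposition of an even-sized grayscale image
    (list of row lists) into LL, LH, HL, HH subbands (averaging convention)."""
    rows, cols = len(img), len(img[0])
    LL, LH, HL, HH = [], [], [], []
    for i in range(0, rows, 2):
        ll_r, lh_r, hl_r, hh_r = [], [], [], []
        for j in range(0, cols, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll_r.append((a + b + c + d) / 4)   # approximation
            lh_r.append((a - b + c - d) / 4)   # horizontal detail
            hl_r.append((a + b - c - d) / 4)   # vertical detail
            hh_r.append((a - b - c + d) / 4)   # diagonal detail
        LL.append(ll_r)
        LH.append(lh_r)
        HL.append(hl_r)
        HH.append(hh_r)
    return LL, LH, HL, HH

def subband_features(band):
    """Statistical texture features of one subband: mean, std dev, energy."""
    vals = [v for row in band for v in row]
    n = len(vals)
    mean = sum(vals) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in vals) / n)
    energy = sum(v * v for v in vals) / n
    return mean, std, energy
```

On a perfectly uniform region all detail subbands are zero, so their features vanish; textured regions leave energy in LH, HL, and HH, which is what the classifier exploits.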
|
1109.1068
|
An Automatic Clustering Technique for Optimal Clusters
|
cs.CV
|
This paper proposes a simple, automatic and efficient clustering algorithm,
namely, Automatic Merging for Optimal Clusters (AMOC) which aims to generate
nearly optimal clusters for the given datasets automatically. AMOC extends
standard k-means with a two-phase iterative procedure that combines certain
validation techniques in order to find optimal clusters by automatically
merging clusters. Experiments on both synthetic and real data have shown
that the proposed algorithm finds nearly optimal clustering structures in terms
of number of clusters, compactness and separation.
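A rough sketch of the idea: over-cluster with k-means, then repeatedly merge the two closest clusters while a simple separation-versus-compactness test allows it. The validity test below is an assumption for illustration; AMOC's actual validation techniques may differ.

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def centroid(cluster):
    d = len(cluster[0])
    return tuple(sum(p[i] for p in cluster) / len(cluster) for i in range(d))

def kmeans(points, k, iters=100):
    """Plain k-means with deterministic seeding spread over sorted points."""
    pts = sorted(points)
    cents = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist(p, cents[c]))].append(p)
        cents = [centroid(c) if c else cents[i] for i, c in enumerate(clusters)]
    return [c for c in clusters if c]

def merge_for_optimal_clusters(points, k_max, ratio=1.5):
    """Start from an over-estimated k_max, then merge the two clusters with
    the closest centroids while their separation stays small relative to the
    compactness (max radius) of the would-be merged cluster."""
    clusters = kmeans(points, k_max)
    while len(clusters) > 1:
        pairs = [(dist(centroid(a), centroid(b)), i, j)
                 for i, a in enumerate(clusters)
                 for j, b in enumerate(clusters) if i < j]
        sep, i, j = min(pairs)
        merged = clusters[i] + clusters[j]
        compact = max(dist(p, centroid(merged)) for p in merged)
        if sep > ratio * compact:        # well separated: stop merging
            break
        clusters = [c for idx, c in enumerate(clusters) if idx not in (i, j)]
        clusters.append(merged)
    return clusters
```

On two well-separated point groups, starting from k_max = 4 the procedure merges back down to the two natural clusters and then stops.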
|
1109.1074
|
A Framework for Predicting Phishing Websites using Neural Networks
|
cs.NE
|
In India many people now depend on online banking. This raises security
concerns, as banking websites can be forged and fraud committed through
identity theft. These forged websites, called phishing websites, are created
by malicious people to mimic the web pages of real websites and attempt to
defraud people of their personal information. Detecting and identifying
phishing websites is a complex and dynamic problem involving many factors and
criteria. This paper discusses the prediction of phishing websites using
neural networks. A neural network is a multilayer system which reduces the
error and increases the performance. This paper describes a framework to
better classify and predict phishing sites using neural networks.
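A tiny supervised classifier over binary website indicators conveys the flavor of such a framework. The paper uses a multilayer network; the single-layer perceptron below is only a minimal stand-in, and the feature names are illustrative assumptions, not the paper's feature set.

```python
def predict(w, x):
    """Threshold unit: 1 (phishing) if the weighted sum clears the bias."""
    return 1 if w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) >= 0 else 0

def train_perceptron(data, epochs=100, lr=1.0):
    """Classic perceptron learning rule; converges on separable data."""
    n_features = len(data[0][0])
    w = [0.0] * (n_features + 1)          # w[0] is the bias term
    for _ in range(epochs):
        for x, y in data:
            err = y - predict(w, x)       # 0 when correct, otherwise +/-1
            if err:
                w[0] += lr * err
                for i, xi in enumerate(x):
                    w[1 + i] += lr * err * xi
    return w

# Hypothetical indicators: [IP address in URL, URL longer than 75 chars,
# no HTTPS]; label 1 (phishing) when at least two indicators are present.
data = [([0, 0, 0], 0), ([1, 0, 0], 0), ([0, 1, 0], 0), ([0, 0, 1], 0),
        ([1, 1, 0], 1), ([1, 0, 1], 1), ([0, 1, 1], 1), ([1, 1, 1], 1)]
```

A multilayer network with backpropagation, as the paper describes, would replace the single threshold unit with hidden layers so that non-linearly-separable feature combinations can also be learned.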
|
1109.1077
|
Nonparametric Link Prediction in Large Scale Dynamic Networks
|
stat.ML cs.SI physics.soc-ph
|
We propose a nonparametric approach to link prediction in large-scale dynamic
networks. Our model uses graph-based features of pairs of nodes as well as
those of their local neighborhoods to predict whether those nodes will be
linked at each time step. The model allows for different types of evolution in
different parts of the graph (e.g., growing or shrinking communities). We focus
on large-scale graphs and present an implementation of our model that makes use
of locality-sensitive hashing to allow it to be scaled to large problems.
Experiments with simulated data as well as five real-world dynamic graphs show
that we outperform the state of the art, especially when sharp fluctuations or
nonlinearities are present. We also establish theoretical properties of our
estimator, in particular consistency and weak convergence, the latter making
use of an elaboration of Stein's method for dependency graphs.
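Random-hyperplane LSH for cosine similarity is one common way to bucket candidate node pairs at scale: vectors with similar direction tend to land in the same bucket. The sketch below illustrates the general technique, not necessarily the authors' exact scheme.

```python
import random

def make_hyperplanes(dim, n_bits, seed=0):
    """Draw n_bits random Gaussian hyperplanes in a dim-dimensional space."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_signature(vec, planes):
    """One bit per hyperplane, set by which side of the plane vec falls on.
    Vectors at small angle share most bits, so bucketing signatures yields
    candidate similar pairs without all-pairs comparison."""
    return tuple(1 if sum(w * v for w, v in zip(p, vec)) >= 0 else 0
                 for p in planes)
```

Note that positive scaling never changes a signature, since it preserves the sign of every dot product; only the direction of the neighborhood-feature vector matters.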
|
1109.1087
|
A Business Intelligence Model to Predict Bankruptcy using Financial
Domain Ontology with Association Rule Mining Algorithm
|
cs.DB
|
Today, in every organization, financial analysis provides the basis for
understanding and evaluating the results of business operations and for
showing how well a business is doing. This means that organizations can
control the operational activities primarily related to corporate finance. One
way of doing this is through bankruptcy prediction analysis. This paper
develops an ontological model from the financial information of an
organization by analyzing the semantics of the financial statements of a
business. One of the best-known bankruptcy prediction models is the Altman
Z-score model, which uses financial ratios to predict bankruptcy. From the
financial ontological model, the relations between financial data are
discovered using a data mining algorithm. By combining the financial domain
ontological model with an association rule mining algorithm and the Z-score
model, a new business intelligence model is developed to predict bankruptcy.
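The classic Altman Z-score for public manufacturing firms combines five financial ratios with fixed weights; a direct transcription (the figures in the example are illustrative, not from the paper):

```python
def altman_z_score(working_capital, retained_earnings, ebit,
                   market_value_equity, sales,
                   total_assets, total_liabilities):
    """Original Altman Z-score:
    Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5, where the X's are
    financial ratios over total assets (X4 over total liabilities)."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def z_score_zone(z):
    """Altman's original cut-offs: distress below 1.81, safe above 2.99."""
    if z < 1.81:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey"
```

Association rules mined from the ontological model can then be combined with the zone assignments to refine the prediction, as the paper proposes.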
|
1109.1088
|
A Framework for Business Intelligence Application using Ontological
Classification
|
cs.IR
|
Every business needs knowledge about its competitors to survive. The web is
one such information repository, but retrieving specific information from the
web is challenging. An ontological model is developed to capture specific
information by using web semantics. From the ontology model, the relations
between the data are mined using a decision tree. From these components a new
framework for Business Intelligence is developed.
|
1109.1093
|
Multi Agent Communication System for Online Auction with Decision
Support System by JADE and TRACE
|
cs.MA
|
The success of online auctions has given buyers access to greater product
diversity with potentially lower prices. It has provided sellers with access to
large numbers of potential buyers and reduced transaction costs by enabling
auctions to take place without regard to time or place. However, it is
difficult for a participant to spend long periods with the system and closely
monitor an auction until the bid is won or the auction closes. Determining
which items to bid on, what the recommended bid may be, and when to place it
are difficult questions for online auction participants to answer. The multi
agent auction advisor system, built with JADE and TRACE and connected to a
decision support system, gives buyers a recommended bid for online auctions.
The auction
advisor system relies on intelligent agents both for the retrieval of relevant
auction data and for the processing of that data to enable meaningful
recommendations, statistical reports and market prediction report to be made to
auction participants.
|
1109.1102
|
Stability of time-varying nonlinear switching systems under
perturbations
|
cs.SY math.OC
|
Using a Liao-type exponent, we study the stability of a time-varying
nonlinear switching system.
|
1109.1105
|
Embedding Constructions of Tail-Biting Trellises for Linear Block Codes
|
cs.IT math.IT
|
In this paper, embedding construction of tail-biting trellises for linear
block codes is presented. With the new approach of constructing tail-biting
trellises, most of the study of tail-biting trellises can be converted into the
study of conventional trellises. It is proved that any minimal tail-biting
trellis can be constructed by the recursive process of embedding constructions
from the well-known Bahl-Cocke-Jelinek-Raviv (BCJR) constructed conventional
trellises. Furthermore, several properties of embedding constructions of
tail-biting trellises are discussed. Finally, we give four sufficient
conditions to reduce the maximum state-complexity of a trellis with one peak.
|
1109.1133
|
Color Texture Classification Approach Based on Combination of Primitive
Pattern Units and Statistical Features
|
cs.CV cs.AI
|
Texture classification has been one of the problems receiving much attention
from image processing scientists since the late 80s, and many different
methods have since been proposed to solve it. In most of these methods the
researchers attempted to describe and discriminate textures based on linear
and non-linear patterns. The linear and non-linear patterns in any window are
based on the formation of grain components in a particular order. A grain
component is a primitive unit of morphology, and the most meaningful
information often appears in the form of its occurrences. The approach
proposed in this paper analyzes a texture based on its grain components, then
builds a grain component histogram and extracts statistical features from it
to classify the texture. Finally, to increase the accuracy of classification,
the proposed approach is extended to color images so as to exploit its ability
to analyze each RGB channel individually. Although the approach is a general
one that could be used in different applications, the method has been tested
on stone textures, and the results demonstrate the quality of the approach.
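Counting occurrences of local primitive patterns per channel can be sketched with a standard 8-bit local binary pattern histogram. This is a generic stand-in for the paper's grain-component histogram (the paper's exact primitive units are not specified here), applied to one grayscale channel at a time.

```python
def lbp_histogram(img):
    """256-bin histogram of 8-neighbor local binary patterns over one
    channel: each interior pixel gets a code with one bit per neighbor that
    is >= the center value; statistical features are then read off the
    histogram. A stand-in for the paper's grain-component histogram."""
    rows, cols = len(img), len(img[0])
    hist = [0] * 256
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di][j + dj] >= img[i][j]:
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

For a color image the histogram is computed on each of the R, G, and B channels and the three feature sets are concatenated, mirroring the per-channel analysis the abstract describes.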
|
1109.1144
|
Event Centric Modeling Approach in Colocation Pattern Analysis from
Spatial Data
|
cs.DB
|
Spatial co-location patterns are the subsets of Boolean spatial features
whose instances are often located in close geographic proximity. Co-location
rules can be identified by spatial statistics or data mining approaches. In
data mining method, Association rule-based approaches can be used which are
further divided into transaction-based approaches and distance-based
approaches. Transaction-based approaches focus on defining transactions over
space so that an Apriori algorithm can be used. The natural notion of
transactions is absent in spatial data sets which are embedded in continuous
geographic space. A new distance-based approach is developed to mine
co-location patterns from spatial data by using the concept of proximity
neighborhood. A new interest measure, a participation index, is used for
spatial co-location patterns as it possesses an anti-monotone property. An
algorithm to discover co-location patterns is designed which generates
candidate locations and their table instances. Finally the co-location rules
are generated to identify the patterns.
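The participation index can be computed directly from the row instances of a pattern: take, for each feature, the fraction of its instances that appear in at least one row instance, and then the minimum over the features. A minimal sketch (the instance identifiers below are illustrative):

```python
def participation_index(pattern, instances, feature_counts):
    """Participation index of a co-location pattern: the minimum, over the
    pattern's features, of the fraction of that feature's instances that
    take part in at least one row instance of the pattern."""
    participating = {f: set() for f in pattern}
    for row in instances:                 # row maps feature -> instance id
        for f in pattern:
            participating[f].add(row[f])
    return min(len(participating[f]) / feature_counts[f] for f in pattern)

# Pattern {A, B}: two row instances, sharing B's single participating
# instance b1; A has 4 instances in total, B has 3.
rows = [{"A": "a1", "B": "b1"}, {"A": "a2", "B": "b1"}]
```

Adding a feature to a pattern can only shrink each feature's participating set, so the index is anti-monotone, which is exactly what makes Apriori-style candidate pruning sound.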
|
1109.1149
|
On Partial Optimality by Auxiliary Submodular Problems
|
cs.DM cs.CV math.OC
|
In this work, we prove several relations between three different energy
minimization techniques: a recently proposed method for determining a provably
optimal partial assignment of variables by Ivan Kovtun (IK), the linear
programming relaxation approach (LP), and the popular expansion move algorithm
by Yuri Boykov. We propose a novel sufficient condition for optimal partial
assignment, which is based on LP relaxation and called LP-autarky. We show that
methods of Kovtun, which build auxiliary submodular problems, fulfill this
sufficient condition. The following link is thus established: LP relaxation
cannot be tightened by IK. For non-submodular problems this is a non-trivial
result. In the case of two labels, LP relaxation provides optimal partial
assignment, known as persistency, which, as we show, dominates IK. Relating IK
with expansion move, we show that the sets of fixed points of expansion move,
with any "truncation" rule, for the initial problem and for the problem
restricted by the one-vs-all method of IK coincide -- i.e., expansion move
cannot be improved by this method. In the case of two labels, expansion move
with a particular truncation rule coincides with the one-vs-all method.
|
1109.1151
|
An Achievable Rate Region for a Two-Relay Network with
Receiver-Transmitter Feedback
|
cs.IT math.IT
|
We consider a relay network with two relays and a feedback link from the
receiver to the sender. To obtain the achievability result, we use
compress-and-forward and random binning techniques combined with deterministic
binning and restricted decoding. Moreover, we use a joint decoding technique
to decode the relays' compressed information in order to achieve a higher rate
at the receiver.
|