| id | title | categories | abstract |
|---|---|---|---|
1208.1926
|
Role of Ranking Algorithms for Information Retrieval
|
cs.IR
|
As use of the web increases day by day, users easily get lost in its rich
hyperlink structure. The main aim of a website's owner is to give users
information relevant to their needs. We explain how Web mining is used to
categorize users and pages by analyzing user behavior and page content, and
then describe Web structure mining. This paper surveys different page-ranking
algorithms and compares those used for information retrieval. PageRank-based
algorithms such as PageRank (PR), Weighted PageRank (WPR), HITS
(Hyperlink-Induced Topic Search), DistanceRank and EigenRumor are discussed
and compared. A simulation interface has been designed for the PageRank and
Weighted PageRank algorithms; of these, PageRank is the ranking algorithm on
which the Google search engine is based.
|
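Since the abstract centers on PageRank, a minimal power-iteration sketch may help; the damping factor and toy graph below are illustrative, not taken from the paper or its simulation interface.

```python
# A minimal power-iteration sketch of the classic PageRank algorithm
# discussed in the abstract (illustrative only, not the paper's code).

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
```

Here "C", which is linked from both other pages, ends up with the highest rank.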
1208.1927
|
CrowdER: Crowdsourcing Entity Resolution
|
cs.DB
|
Entity resolution is central to data integration and data cleaning.
Algorithmic approaches have been improving in quality, but remain far from
perfect. Crowdsourcing platforms offer a more accurate but expensive (and slow)
way to bring human insight into the process. Previous work has proposed
batching verification tasks for presentation to human workers, but even with
batching, a human-only approach is infeasible for even moderately sized data
sets, due to the large number of matches to be tested. Instead, we propose a
hybrid human-machine approach in which machines are used to do an initial,
coarse pass over all the data, and people are used to verify only the most
likely matching pairs. We show that for such a hybrid system, generating the
minimum number of verification tasks of a given size is NP-Hard, but we develop
a novel two-tiered heuristic approach for creating batched tasks. We describe
this method, and present the results of extensive experiments on real data sets
using a popular crowdsourcing platform. The experiments show that our hybrid
approach achieves both good efficiency and high accuracy compared to
machine-only or human-only alternatives.
|
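The coarse machine pass the abstract describes can be sketched as a cheap similarity filter over all record pairs; the Jaccard measure, the 0.4 threshold, and the sample records are illustrative assumptions, not CrowdER's actual pipeline.

```python
# Sketch of the hybrid idea: a cheap machine pass prunes the O(n^2) pair
# space, and only the surviving likely matches would go to crowd workers.
# Similarity function, threshold and records are illustrative assumptions.

def jaccard(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def candidate_pairs(records, threshold=0.4):
    """Machine pass: keep only pairs similar enough to be worth verifying."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if jaccard(records[i], records[j]) >= threshold:
                pairs.append((i, j))
    return pairs

products = ["apple ipad 2 16gb white",
            "apple ipad 2 white 16gb tablet",
            "kindle fire 8gb"]
to_verify = candidate_pairs(products)   # only the two iPad listings survive
```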
1208.1931
|
Uncertain Time-Series Similarity: Return to the Basics
|
cs.DB
|
In recent years there has been a considerable increase in the availability
of continuous sensor measurements in a wide range of application domains, such
as Location-Based Services (LBS), medical monitoring systems, manufacturing
plants and engineering facilities to ensure efficiency, product quality and
safety, hydrologic and geologic observing systems, pollution management, and
others. Due to the inherent imprecision of sensor observations, many
investigations have recently turned to querying, mining and storing uncertain
data. Uncertainty can also be due to data aggregation, privacy-preserving
transforms, and error-prone mining algorithms. In this study, we survey the
techniques that have been proposed specifically for modeling and processing
uncertain time series, an important model for temporal data. We provide an
analytical evaluation of the alternatives that have been proposed in the
literature, highlighting the advantages and disadvantages of each approach, and
further compare these alternatives with two additional techniques that were
carefully studied before. We conduct an extensive experimental evaluation with
17 real datasets, and discuss some surprising results, which suggest that a
fruitful research direction is to take into account the temporal correlations
in the time series. Based on our evaluations, we also provide guidelines useful
for the practitioners in the field.
|
1208.1932
|
Statistical Distortion: Consequences of Data Cleaning
|
cs.DB
|
We introduce the notion of statistical distortion as an essential metric for
measuring the effectiveness of data cleaning strategies. We use this metric to
propose a widely applicable yet scalable experimental framework for evaluating
data cleaning strategies along three dimensions: glitch improvement,
statistical distortion and cost-related criteria. Existing metrics focus on
glitch improvement and cost, but not on the statistical impact of data cleaning
strategies. We illustrate our framework on real world data, with a
comprehensive suite of experiments and analyses.
|
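One way to make the statistical-distortion idea concrete is to compare a column's empirical distribution before and after cleaning; the two-sample Kolmogorov-Smirnov statistic below is our illustrative choice of distance, not necessarily the paper's metric.

```python
# Illustrative "statistical distortion" measure: distance between the
# empirical distributions of a column before and after cleaning. The KS
# statistic is our own choice here, not necessarily the paper's metric.

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic for 1-D samples."""
    grid = sorted(set(xs) | set(ys))
    def ecdf(sample, t):
        return sum(1 for v in sample if v <= t) / len(sample)
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in grid)

raw     = [1, 2, 2, 3, 100, 4, 3]   # contains a glitch (100)
cleaned = [1, 2, 2, 3, 3, 4, 3]     # glitch replaced by a typical value
distortion = ks_statistic(raw, cleaned)
```

A cleaning strategy that removes glitches while keeping this distance small would score well on both dimensions.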
1208.1933
|
Towards Energy-Efficient Database Cluster Design
|
cs.DB
|
Energy is a growing component of the operational cost for many "big data"
deployments, and hence has become increasingly important for practitioners of
large-scale data analysis who require scale-out clusters or parallel DBMS
appliances. Although a number of recent studies have investigated the energy
efficiency of DBMSs, none of these studies have looked at the architectural
design space of energy-efficient parallel DBMS clusters. There are many
challenges to increasing the energy efficiency of a DBMS cluster, including
dealing with the inherent scaling inefficiency of parallel data processing, and
choosing the appropriate energy-efficient hardware. In this paper, we
experimentally examine and analyze a number of key parameters related to these
challenges for designing energy-efficient database clusters. We explore the
cluster design space using empirical results and propose a model that considers
the key bottlenecks to energy efficiency in a parallel DBMS. This paper
represents a key first step in designing energy-efficient database clusters,
which is increasingly important given the trend toward parallel database
appliances.
|
1208.1934
|
Technical report: CSVM dictionaries
|
cs.CE q-bio.QM
|
CSVM (CSV with Metadata) is a simple file format for tabular data. Its
application domain is the same as that of typical spreadsheet files, but CSVM
is well suited to long-term storage and the inter-conversion of raw data. CSVM
embeds different levels for data, metadata and annotations in human-readable,
flat ASCII files. As a proof of concept, Perl and Python toolkits were
designed to handle CSVM data and objects in workflows. These parsers can
process CSVM files independently of data types, so the same data format and
parser can be used for many scientific purposes. CSVM-1 is the first version
of the CSVM specification; this paper presents an extension of CSVM-1 for
implementing a translation system between CSVM files. The data needed to
perform the translation are themselves coded in another CSVM file. This
particular kind of CSVM file is called a CSVM dictionary; it is readable by
the current CSVM parser and fully supported by the Python toolkit. This report
presents a proposal for CSVM dictionaries, a working example in chemistry, and
some elements of the Python toolkit that can be used to handle these files.
|
1208.1940
|
Experiments with Game Tree Search in Real-Time Strategy Games
|
cs.AI cs.GT
|
Game tree search algorithms such as minimax have been used with enormous
success in turn-based adversarial games such as Chess or Checkers. However,
such algorithms cannot be directly applied to real-time strategy (RTS) games
for a number of reasons. For example, minimax assumes turn-taking game
mechanics, which are not present in RTS games. In this paper we present RTMM, a real-time
variant of the standard minimax algorithm, and discuss its applicability in the
context of RTS games. We discuss its strengths and weaknesses, and evaluate it
in two real-time games.
|
1208.1955
|
Comparison of different T-norm operators in classification problems
|
cs.AI
|
Fuzzy rule based classification systems are one of the most popular fuzzy
modeling systems used in pattern classification problems. This paper
investigates the effect of applying nine different T-norms in fuzzy rule based
classification systems. In recent research, fuzzy versions of the confidence
and support measures from the field of data mining have been widely used for
both rule selection and rule weighting in the construction of fuzzy rule based
classification systems. For calculating these measures, the product has
usually been used as the T-norm. In this paper, different T-norms are used for
calculating the confidence and support measures. The calculations in the rule
selection and rule weighting steps (in the process of constructing the fuzzy
rule based classification systems) are therefore modified by employing these
T-norms, which in turn alters the overall accuracy of the resulting
classification systems. Experimental results
obtained on some well-known data sets show that the best performance is
produced by employing the Aczel-Alsina operator in terms of the classification
accuracy, the second best operator is Dubois-Prade and the third best operator
is Dombi. In experiments, we have used 12 data sets with numerical attributes
from the University of California, Irvine machine learning repository (UCI).
|
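Several of the compared T-norms have standard closed forms, sketched below; the lambda parameter values are illustrative assumptions, not the settings used in the paper.

```python
# Standard closed forms of some T-norms compared in the paper. The lambda
# values are illustrative; the paper's parameter choices are not given here.
import math

def t_product(a, b):
    return a * b

def t_aczel_alsina(a, b, lam=2.0):
    if a == 0 or b == 0:
        return 0.0
    return math.exp(-(((-math.log(a)) ** lam
                       + (-math.log(b)) ** lam) ** (1 / lam)))

def t_dombi(a, b, lam=2.0):
    if a == 0 or b == 0:
        return 0.0
    return 1.0 / (1.0 + (((1 - a) / a) ** lam
                         + ((1 - b) / b) ** lam) ** (1 / lam))

def t_dubois_prade(a, b, lam=0.5):
    return a * b / max(a, b, lam)

# Every T-norm satisfies T(a, b) <= min(a, b) and T(a, 1) = a:
vals = [t(0.8, 0.6) for t in (t_product, t_aczel_alsina,
                              t_dombi, t_dubois_prade)]
```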
1208.1963
|
Degree-doubling graph families
|
math.CO cs.IT math.IT
|
Let G be a family of n-vertex graphs of uniform degree 2 with the property
that the union of any two member graphs has degree four. We determine the
leading term in the asymptotics of the largest cardinality of such a family.
Several analogous problems are discussed.
|
1208.1977
|
Offloading in Heterogeneous Networks: Modeling, Analysis, and Design
Insights
|
cs.IT math.IT
|
Pushing data traffic from cellular to WiFi is an example of inter-radio-access-technology
(RAT) offloading. While this clearly alleviates congestion on
the overloaded cellular network, the ultimate potential of such offloading and
its effect on overall system performance is not well understood. To address
this, we develop a general and tractable model that consists of $M$ different
RATs, each deploying up to $K$ different tiers of access points (APs), where
each tier differs in transmit power, path loss exponent, deployment density and
bandwidth. Each class of APs is modeled as an independent Poisson point process
(PPP), with mobile user locations modeled as another independent PPP, all
channels further consisting of i.i.d. Rayleigh fading. The distribution of rate
over the entire network is then derived for a weighted association strategy,
where such weights can be tuned to optimize a particular objective. We show
that the optimum fraction of traffic offloaded to maximize $\SINR$ coverage is
not in general the same as the one that maximizes rate coverage, defined as the
fraction of users achieving a given rate.
|
1208.2013
|
Inferring SQL Queries Using Program Synthesis
|
cs.PL cs.DB
|
Developing high-performance applications that interact with databases is a
difficult task, as developers need to understand both the details of the
language in which their applications are written and the intricacies of the
relational model. One popular solution to this problem is the use of
object-relational mapping (ORM) libraries that provide transparent access to
the database using the same language that the application is written in.
Unfortunately, using such frameworks can easily lead to applications with poor
performance because developers often end up implementing relational operations
in application code, and doing so usually does not take advantage of the
optimized implementations of relational operations, efficient query plans, or
push down of predicates that database systems provide. In this paper we present
QBS, an algorithm that automatically identifies fragments of application logic
that can be pushed into SQL queries. The QBS algorithm works by automatically
synthesizing invariants and postconditions for the original code fragment. The
postconditions and invariants are expressed using a theory of ordered relations
that allows us to reason precisely about the contents and order of the records
produced even by complex code fragments that compute joins and aggregates. The
theory is close in expressiveness to SQL, so the synthesized postconditions can
be readily translated to SQL queries. Using 40 code fragments extracted from
over 120k lines of open-source code written using the Java Hibernate ORM, we
demonstrate that our approach can convert a variety of imperative constructs
into relational specifications.
|
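The pattern QBS targets can be illustrated with a toy example, here in Python with sqlite3 rather than the Java/Hibernate setting of the paper: the same predicate evaluated in application code versus pushed down into SQL.

```python
# Illustration (not QBS itself) of the pattern QBS targets: relational work
# done in application code versus the equivalent pushed-down SQL query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, 17), (2, 25), (3, 31)])

# ORM-style anti-pattern: fetch everything, filter in the application.
adults_app = [row for row in conn.execute("SELECT id, age FROM users")
              if row[1] >= 18]

# What a tool like QBS infers: push the predicate into the SQL query,
# letting the database use its optimized plans and indexes.
adults_sql = list(conn.execute("SELECT id, age FROM users WHERE age >= 18"))

assert adults_app == adults_sql
```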
1208.2015
|
Sharp analysis of low-rank kernel matrix approximations
|
cs.LG math.ST stat.TH
|
We consider supervised learning problems within the positive-definite kernel
framework, such as kernel ridge regression, kernel logistic regression or the
support vector machine. With kernels leading to infinite-dimensional feature
spaces, a common practical limiting difficulty is the necessity of computing
the kernel matrix, which most frequently leads to algorithms with running time
at least quadratic in the number of observations n, i.e., O(n^2). Low-rank
approximations of the kernel matrix are often considered as they allow the
reduction of running time complexities to O(p^2 n), where p is the rank of the
approximation. The practicality of such methods thus depends on the required
rank p. In this paper, we show that in the context of kernel ridge regression,
for approximations based on a random subset of columns of the original kernel
matrix, the rank p may be chosen to be linear in the degrees of freedom
associated with the problem, a quantity which is classically used in the
statistical analysis of such methods, and is often seen as the implicit number
of parameters of non-parametric estimators. This result enables simple
algorithms that have sub-quadratic running time complexity, but provably
exhibit the same predictive performance as existing algorithms, for any given
problem instance, and not only for worst-case situations.
|
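The column-sampling approximation analyzed in the abstract can be sketched as follows; the RBF kernel, its bandwidth, and the choice p = 50 are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch of the column-sampling (Nystrom-type) approximation the paper
# analyzes: take p random columns of the n x n kernel matrix and build a
# rank-p surrogate at O(p^2 n) cost instead of O(n^2). Kernel, bandwidth
# and p are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

def rbf_kernel(A, B, gamma=0.05):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

p = 50
idx = rng.choice(len(X), size=p, replace=False)
C = rbf_kernel(X, X[idx])            # n x p block of sampled columns
W = C[idx]                           # p x p intersection block
K_approx = C @ np.linalg.pinv(W) @ C.T

K_exact = rbf_kernel(X, X)
rel_err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
```

The paper's result says how large p must be (on the order of the degrees of freedom) for such a surrogate to preserve predictive performance in kernel ridge regression.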
1208.2043
|
High-Dimensional Screening Using Multiple Grouping of Variables
|
stat.ML cs.IT math.IT
|
Screening is the problem of finding a superset of the set of non-zero entries
in an unknown p-dimensional vector \beta* given n noisy observations.
Naturally, we want this superset to be as small as possible. We propose a novel
framework for screening, which we refer to as Multiple Grouping (MuG), that
groups variables, performs variable selection over the groups, and repeats this
process multiple times to estimate a sequence of sets that contains
the non-zero entries in \beta*. Screening is done by taking an intersection of
all these estimated sets. The MuG framework can be used in conjunction with any
group based variable selection algorithm. In the high-dimensional setting,
where p >> n, we show that when MuG is used with the group Lasso estimator,
screening can be consistently performed without using any tuning parameter. Our
numerical simulations clearly show the merits of using the MuG framework in
practice.
|
1208.2076
|
Upper Bounds on the Number of Codewords of Some Separating Codes
|
cs.IT cs.CR math.IT
|
Separating codes have applications in collusion-secure fingerprinting for
generic digital data, and they are also related to other structures, including
hash families, intersection codes and group testing. In this paper we study
upper bounds for separating codes. First, a new upper bound for restricted
separating codes is proposed. Then we show that the Upper
Bound Conjecture for separating Reed-Solomon codes inherited from Silverberg's
question holds true for almost all Reed-Solomon codes.
|
1208.2078
|
Non-homogeneous distributed storage systems
|
cs.IT math.IT
|
This paper describes a non-homogeneous distributed storage system (DSS), in
which one super node has a larger storage size and higher reliability and
availability than the other storage nodes. We propose three distributed
storage schemes based on (k+2, k) maximum distance separable (MDS) codes and
non-MDS codes to show the efficiency of such a non-homogeneous DSS in terms of
repair efficiency and data availability. Our schemes achieve the optimal
bandwidth (k+1/2)(M/k) when repairing a 1-node failure, but require only one
fourth of the minimum required file size and can operate with a smaller field
size, leading to a significant complexity reduction compared to a traditional
homogeneous DSS. Moreover, with non-MDS codes, our scheme can achieve an even
smaller repair bandwidth of M/2k. Finally, we show that our schemes can
increase data availability by 10% over the traditional homogeneous DSS scheme.
|
1208.2092
|
A study on non-destructive method for detecting Toxin in pepper using
Neural networks
|
cs.NE cs.CV
|
Mycotoxin contamination in certain agricultural systems has been a serious
concern for human and animal health. Mycotoxins are toxic substances produced
mostly as secondary metabolites by fungi that grow on seeds and feed in the
field or in storage. The food-borne mycotoxins likely to be of greatest
significance for human health in tropical developing countries are aflatoxins
and fumonisins. Chili pepper is also prone to aflatoxin contamination during
the harvesting, production and storage periods. The various methods used for
detection of mycotoxins give accurate results, but they are slow, expensive
and destructive. A destructive method degrades the sample under investigation,
whereas non-destructive testing allows the part to be used for its intended
purpose afterwards. Ultrasonic methods, multispectral image processing,
terahertz methods, X-ray imaging and thermography are popular in
non-destructive testing, characterization of materials and health monitoring.
Image processing methods are used to improve the visual quality of the
pictures and to extract useful information from them. In the proposed work,
chili pepper samples will be collected, and X-ray and multispectral images of
the samples will be processed using image processing methods. The term
"computational intelligence" refers to the simulation of human intelligence on
computers; it is also called the "artificial intelligence" (AI) approach. The
techniques used in the AI approach are neural networks, fuzzy logic and
evolutionary computation. Finally, computational intelligence methods will be
used in addition to image processing to provide high-performance, accurate
detection of the mycotoxin level in the collected samples.
|
1208.2102
|
A Novel Fuzzy Logic Based Adaptive Supertwisting Sliding Mode Control
Algorithm for Dynamic Uncertain Systems
|
cs.AI
|
This paper presents a novel fuzzy logic based Adaptive Super-twisting Sliding
Mode Controller for the control of dynamic uncertain systems. The proposed
controller combines the advantages of Second order Sliding Mode Control, Fuzzy
Logic Control and Adaptive Control. The reaching conditions, stability and
robustness of the system with the proposed controller are guaranteed. In
addition, the proposed controller is well suited for simple design and
implementation. The effectiveness of the proposed controller over the first
order Sliding Mode Fuzzy Logic controller is illustrated by Matlab based
simulations performed on a DC-DC Buck converter. Based on this comparison, the
proposed controller is shown to obtain the desired transient response without
causing chattering and error under steady-state conditions. The proposed
controller is able to give robust performance in terms of rejection of input
voltage variations and load variations.
|
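A generic super-twisting loop (the second-order sliding mode the proposed controller builds on) can be sketched on a disturbed integrator; the gains, disturbance, and plant below are illustrative assumptions, not the paper's fuzzy-adaptive design or its buck-converter model.

```python
# Generic super-twisting sliding-mode sketch: drive s = x of a disturbed
# integrator x' = u + d to zero without knowing d. Gains and plant are
# illustrative, not the paper's fuzzy-adaptive design.
import math

k1, k2, dt = 1.5, 1.1, 1e-3
x, v = 1.0, 0.0                        # state and integral control term
for step in range(20000):              # 20 s of simulated time
    t = step * dt
    s = x                              # sliding variable
    u = -k1 * math.sqrt(abs(s)) * math.copysign(1.0, s) + v
    v += -k2 * math.copysign(1.0, s) * dt   # integral (second-order) part
    d = 0.3 * math.sin(2.0 * t)        # bounded matched disturbance
    x += (u + d) * dt                  # explicit-Euler plant update
final_error = abs(x)
```

The integral term `v` is what distinguishes super-twisting from first-order sliding mode: the discontinuity acts on the derivative of the control, which is why chattering in `u` is attenuated.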
1208.2112
|
Inverse Reinforcement Learning with Gaussian Process
|
cs.LG
|
We present new algorithms for inverse reinforcement learning (IRL, or inverse
optimal control) in convex optimization settings. We argue that finite-space
IRL can be posed as a convex quadratic program under a Bayesian inference
framework with the objective of maximum a posteriori estimation. To deal with
problems in large or even infinite state space, we propose a Gaussian process
model and use preference graphs to represent observations of decision
trajectories. Our method is distinguished from other approaches to IRL in that
it makes no assumptions about the form of the reward function and yet it
retains the promise of computationally manageable implementations for potential
real-world applications. In comparison with an established algorithm on
small-scale numerical problems, our method demonstrated better accuracy in
apprenticeship learning and a more robust dependence on the number of
observations.
|
1208.2116
|
Outer Bounds for the Capacity Region of a Gaussian Two-way Relay Channel
|
cs.IT math.IT
|
We consider a three-node half-duplex Gaussian relay network where two nodes
(say $a$, $b$) want to communicate with each other and the third node acts as a
relay for this two-way communication. Outer bounds and achievable rate regions
for the possible rate pairs $(R_a, R_b)$ for two-way communication are
investigated. The modes (transmit or receive) of the half-duplex nodes together
specify the state of the network. A relaying protocol uses a specific sequence
of states and a coding scheme for each state. In this paper, we first obtain an
outer bound for the rate region of all achievable $(R_a,R_b)$ based on the
half-duplex cut-set bound. This outer bound can be numerically computed by
solving a linear program. It is proved that at any point on the boundary of the
outer bound only four of the six states of the network are used. We then
compare it with achievable rate regions of various known protocols. We consider
two kinds of protocols: (1) protocols in which all messages transmitted in a
state are decoded with the received signal in the same state, and (2) protocols
where information received in one state can also be stored and used as side
information to decode messages in future states. Various conclusions are drawn
on the importance of using all states, use of side information, and the choice
of processing at the relay. Then, two analytical outer bounds (as opposed to an
optimization problem formulation) are derived. Using an analytical outer bound,
we obtain the symmetric capacity within 0.5 bits for some channel conditions
where the direct link between nodes $a$ and $b$ is weak.
|
1208.2121
|
On the Sum Rate of a 2 x 2 Interference Network
|
cs.IT math.IT
|
In an M x N interference network, there are M transmitters and N receivers
with each transmitter having independent messages for each of the 2^N -1
possible non-empty subsets of the receivers. We consider the 2 x 2 interference
network with 6 possible messages, of which the 2 x 2 interference channel and X
channel are special cases obtained by using only 2 and 4 messages respectively.
Starting from an achievable rate region similar to the Han-Kobayashi region, we
obtain an achievable sum rate. For the Gaussian interference network, we
determine which of the 6 messages are sufficient for maximizing the sum rate
within this rate region for the low, mixed, and strong interference conditions.
It is observed that 2 messages are sufficient in several cases.
|
1208.2128
|
Brain tumor MRI image classification with feature selection and
extraction using linear discriminant analysis
|
cs.CV cs.LG
|
Feature extraction is a method of capturing visual content of an image. The
feature extraction is the process to represent raw image in its reduced form to
facilitate decision making such as pattern classification. We have tried to
address the problem of classifying MRI brain images by creating a robust and
more accurate classifier which can act as an expert assistant to medical
practitioners. The objective of this paper is to present a novel method of
feature selection and extraction. This approach combines intensity-, texture-
and shape-based features and classifies the tumor as white matter, gray
matter, CSF, abnormal or normal area. The experiment is performed on 140
tumor-containing brain MR images from the Internet Brain Segmentation
Repository. The proposed technique has been carried out over a larger database
than any previous work and is more robust and effective. PCA and Linear
Discriminant Analysis (LDA) were applied on the training sets, and the Support
Vector Machine (SVM) classifier served as a comparison of nonlinear versus
linear techniques. PCA and LDA are used to reduce the number of features. The
feature selection using the proposed technique is more beneficial as it
analyses the data according to the grouping class variable and gives a reduced
feature set with high classification accuracy.
|
1208.2175
|
An approach to describing and analysing bulk biological annotation
quality: a case study using UniProtKB
|
cs.CE cs.IR q-bio.GN
|
Motivation: Annotations are a key feature of many biological databases, used
to convey our knowledge of a sequence to the reader. Ideally, annotations are
curated manually; however, manual curation is costly, time-consuming and
requires expert knowledge and training. Given these issues and the exponential
increase of data, many databases implement automated annotation pipelines in an
attempt to avoid un-annotated entries. Both manual and automated annotations
vary in quality between databases and annotators, making assessment of
annotation reliability problematic for users. The community lacks a generic
measure for determining annotation quality and correctness, which we address
in this article. Specifically, we investigate word reuse within
bulk textual annotations and relate this to Zipf's Principle of Least Effort.
We use UniProt Knowledge Base (UniProtKB) as a case study to demonstrate this
approach since it allows us to compare annotation change, both over time and
between automated and manually curated annotations.
Results: By applying power-law distributions to word reuse in annotation, we
show clear trends in UniProtKB over time, which are consistent with existing
studies of quality on free text English. Further, we show a clear distinction
between manual and automated analysis and investigate cohorts of protein
records as they mature. These results suggest that this approach holds distinct
promise as a mechanism for judging annotation quality.
Availability: Source code is available at the authors website:
http://homepages.cs.ncl.ac.uk/m.j.bell1/annotation.
Contact: phillip.lord@newcastle.ac.uk
|
1208.2199
|
Elimination of ISI Using Improved LMS Based Decision Feedback Equalizer
|
cs.AI
|
This paper deals with the implementation of Least Mean Square (LMS) algorithm
in Decision Feedback Equalizer (DFE) for removal of Inter Symbol Interference
(ISI) at the receiver. The channel disrupts the transmitted signal by spreading
it in time. Although the LMS algorithm is robust and reliable, it is slow to
converge. In order to increase the speed of convergence, modifications have
been made in the algorithm where the weights get updated depending on the
severity of disturbance.
|
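The baseline LMS update that the paper modifies can be sketched as follows; the two-tap channel, tap count, and step size are illustrative assumptions.

```python
# Baseline LMS adaptive equalizer of the kind the paper modifies: the taps
# w are driven by the error between the known training symbol and the
# equalizer output. Channel, tap count and step size are illustrative.
import numpy as np

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=4000)            # training sequence
received = np.convolve(symbols, [1.0, 0.6],
                       mode="full")[:len(symbols)]      # ISI channel

taps, mu = 5, 0.01
w = np.zeros(taps)
sq_errors = []
for n in range(taps, len(symbols)):
    x = received[n - taps + 1:n + 1][::-1]   # most recent sample first
    y = w @ x                                # equalizer output
    e = symbols[n] - y                       # error vs training symbol
    w += mu * e * x                          # LMS weight update
    sq_errors.append(e * e)
mse_tail = float(np.mean(sq_errors[-500:]))  # residual MSE after convergence
```

The paper's modification makes the effective step size depend on the severity of the disturbance rather than keeping `mu` fixed.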
1208.2205
|
Blind Channel Equalization
|
cs.IT math.IT
|
Future services demand high data rates and quality. Thus, it is necessary to
define new and robust algorithms to equalize channels and reduce noise in
communications. Nowadays, new equalization algorithms are being developed to
optimize the channel bandwidth and reduce noise, namely, Blind Channel
Equalization. Conventional equalizations minimizing mean-square error generally
require a training sequence accompanying the data sequence. In this study, the
result of Least Mean Square (LMS) algorithm applied on two given communication
channels is analyzed. Considering the fact that blind equalizers do not require
pilot signals to recover the transmitted data, implementation of four types of
Constant Modulus Algorithm (CMA) for blind equalization of the channels are
shown. Finally, a comparison of the simulation results of LMS and CMA for the
test channels is provided.
|
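The basic CMA (2-2) update used for blind equalization can be sketched for real, BPSK-like symbols: only the known constant modulus is exploited, no pilot sequence. The channel, tap count, and step size are illustrative assumptions.

```python
# Sketch of the basic constant-modulus (CMA 2-2) update: blind adaptation
# using only the known unit modulus of the symbols, no training sequence.
# Channel, tap count and step size are illustrative.
import numpy as np

rng = np.random.default_rng(2)
symbols = rng.choice([-1.0, 1.0], size=20000)
received = np.convolve(symbols, [1.0, 0.5],
                       mode="full")[:len(symbols)]

taps, mu, R = 7, 1e-3, 1.0
w = np.zeros(taps); w[0] = 1.0             # spike initialization
dispersion = []
for n in range(taps, len(symbols)):
    x = received[n - taps + 1:n + 1][::-1]
    y = w @ x
    w += mu * y * (R - y * y) * x          # CMA 2-2 stochastic gradient
    dispersion.append((y * y - R) ** 2)
dispersion_tail = float(np.mean(dispersion[-2000:]))
```

After convergence the output modulus |y| hovers near 1, so the tail dispersion is small even though the transmitted symbols were never revealed to the adaptation.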
1208.2214
|
Curved Space Optimization: A Random Search based on General Relativity
Theory
|
cs.NE
|
Designing a fast and efficient optimization method with local optima
avoidance capability on a variety of optimization problems is still an open
problem for many researchers. In this work, the concept of a new global
optimization method with an open implementation area is introduced as a Curved
Space Optimization (CSO) method, which is a simple probabilistic optimization
method enhanced by concepts of general relativity theory. To address global
optimization challenges such as performance and convergence, this new method is
designed based on transformation of a random search space into a new search
space based on concepts of space-time curvature in general relativity theory.
In order to evaluate the performance of our proposed method, an implementation
of CSO is deployed and its results are compared on benchmark functions with
state-of-the-art optimization methods. The results show that the performance of
CSO is promising on unimodal and multimodal benchmark functions with different
search space dimension sizes.
|
1208.2239
|
Stochastic Kronecker Graph on Vertex-Centric BSP
|
cs.SI physics.soc-ph
|
Recently Stochastic Kronecker Graph (SKG), a network generation model, and
vertex-centric BSP, a graph processing framework like Pregel, have attracted
much attention in the network analysis community. Unfortunately the two are not
very well-suited for each other and thus an implementation of SKG on
vertex-centric BSP must either be done serially or in an unnatural manner.
In this paper, we present a new network generation model, which we call
Poisson Stochastic Kronecker Graph (PSKG), which generates edges according to the
Poisson distribution. The advantage of PSKG is that it is easily parallelizable
on vertex-centric BSP, requires no communication between computational nodes,
and yet retains all the desired properties of SKG.
|
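The parallelization argument rests on Poisson additivity: independent Poisson edge counts drawn by separate workers sum to a Poisson total, so no coordination is needed. A hedged sketch follows; the initiator probabilities and scale are illustrative, and this is not the paper's construction.

```python
# Hedged sketch of the Poisson idea: because a Poisson total of edges
# splits into independent Poisson counts, each worker can draw and place
# its own edges with no communication. Initiator matrix and scale are
# illustrative; this is not the paper's construction.
import numpy as np

def rmat_edge(scale, probs, rng):
    """Place one edge by recursive descent over the 2x2 initiator matrix."""
    src = dst = 0
    a, b, c, _ = probs                    # quadrant probabilities, row-major
    for _ in range(scale):
        r = rng.random()
        if r < a:            sb, db = 0, 0
        elif r < a + b:      sb, db = 0, 1
        elif r < a + b + c:  sb, db = 1, 0
        else:                sb, db = 1, 1
        src, dst = (src << 1) | sb, (dst << 1) | db
    return src, dst

def poisson_skg(scale, expected_edges,
                probs=(0.57, 0.19, 0.19, 0.05), seed=0):
    rng = np.random.default_rng(seed)
    m = rng.poisson(expected_edges)       # Poisson edge count for this worker
    return [rmat_edge(scale, probs, rng) for _ in range(m)]

edges = poisson_skg(scale=8, expected_edges=500)   # graph on 2^8 vertices
```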
1208.2261
|
Analysis of Statistical Hypothesis based Learning Mechanism for Faster
Crawling
|
cs.IR cs.AI
|
The growth of the World Wide Web (WWW), from an intangible quantity of web
pages to a gigantic hub of web information, gradually increases the complexity
of the crawling process in a search engine. A search engine handles a large
number of queries from various parts of the world, and its answers depend
solely on the knowledge it gathers by means of crawling. Information sharing
has become one of the most common habits of society, carried out by publishing
structured, semi-structured and unstructured resources on the web. This social
practice leads to an exponential growth of web resources, and hence it has
become essential to crawl continuously in order to update web knowledge and
track modifications of existing resources. In this paper, a statistical
hypothesis based learning mechanism is incorporated to learn the behavior of
crawling speed in different network environments, and to intelligently control
the speed of the crawler. A scaling technique is used to compare the
performance of the proposed method with a standard crawler. High-speed
performance is observed after scaling, and the retrieval of relevant web
resources at such high speed is analyzed.
|
1208.2294
|
Learning pseudo-Boolean k-DNF and Submodular Functions
|
cs.LG cs.DM cs.DS
|
We prove that any submodular function f: {0,1}^n -> {0,1,...,k} can be
represented as a pseudo-Boolean 2k-DNF formula. Pseudo-Boolean DNFs are a
natural generalization of DNF representation for functions with integer range.
Each term in such a formula has an associated integral constant. We show that
an analog of Hastad's switching lemma holds for pseudo-Boolean k-DNFs if all
constants associated with the terms of the formula are bounded.
This allows us to generalize Mansour's PAC-learning algorithm for k-DNFs to
pseudo-Boolean k-DNFs, and hence gives a PAC-learning algorithm with membership
queries under the uniform distribution for submodular functions of the form
f:{0,1}^n -> {0,1,...,k}. Our algorithm runs in time polynomial in n, k^{O(k
\log k / \epsilon)}, 1/\epsilon and log(1/\delta) and works even in the
agnostic setting. The line of previous work on learning submodular functions
[Balcan, Harvey (STOC '11), Gupta, Hardt, Roth, Ullman (STOC '11), Cheraghchi,
Klivans, Kothari, Lee (SODA '12)] implies only n^{O(k)} query complexity for
learning submodular functions in this setting, for fixed epsilon and delta.
Our learning algorithm implies a property tester for submodularity of
functions f:{0,1}^n -> {0, ..., k} with query complexity polynomial in n for
k=O((\log n/ \log\log n)^{1/2}) and constant proximity parameter \epsilon.
|
1208.2311
|
Compressed Hypothesis Testing: to Mix or Not to Mix?
|
cs.IT math.IT
|
In this paper, we study the hypothesis testing problem of determining, among
$n$ random variables, the $k$ random variables whose probability distributions
differ from those of the remaining $(n-k)$ random variables. Instead of using
separate measurements of each individual random variable, we propose to use
mixed measurements that are functions of multiple random variables. It is
demonstrated that $O({\displaystyle \frac{k \log(n)}{\min_{P_i, P_j} C(P_i,
P_j)}})$ observations are sufficient for correctly identifying the $k$
anomalous random variables with high probability, where $C(P_i, P_j)$ is the
Chernoff information between two possible distributions $P_i$ and $P_j$ for the
proposed mixed observations. We characterize the Chernoff information under
fixed time-invariant mixed observations, random time-varying mixed
observations, and deterministic time-varying mixed observations, respectively;
in our derivations, we introduce the \emph{inner and outer conditional Chernoff
information} for time-varying measurements. It is demonstrated that mixed
observations can strictly improve the error exponent of hypothesis testing
over separate observations of individual random variables. We also
characterize the optimal mixed observations maximizing the error exponent, and
derive an explicit construction of the optimal mixed observations for the case
of Gaussian random variables. These results imply that mixed observations of
random variables can reduce the number of required samples in hypothesis
testing applications. Compared with compressed sensing problems, this paper
considers random variables which are allowed to dramatically change values in
different measurements.
|
1208.2322
|
Adaptive Control Design under Structured Model Information Limitation: A
Cost-Biased Maximum-Likelihood Approach
|
math.OC cs.SY
|
Networked control strategies based on limited information about the plant
model usually result in worse closed-loop performance than optimal centralized
control with full plant model information. Recently, this fact has been
established by utilizing the concept of competitive ratio, which is defined as
the worst case ratio of the cost of a control design with limited model
information to the cost of the optimal control design with full model
information. We show that an adaptive controller, inspired by a controller
proposed by Campi and Kumar, with limited plant model information,
asymptotically achieves the closed-loop performance of the optimal centralized
controller with full model information for almost any plant. Therefore, there
exists, at least, one adaptive control design strategy with limited plant model
information that can achieve a competitive ratio equal to one. The plant model
considered in the paper belongs to a compact set of stochastic linear
time-invariant systems and the closed loop performance measure is the ergodic
mean of a quadratic function of the state and control input. We illustrate the
applicability of the results numerically on a vehicle platooning problem.
|
1208.2330
|
Sparsity Averaging for Compressive Imaging
|
cs.IT astro-ph.IM math.IT
|
We discuss a novel sparsity prior for compressive imaging in the context of
the theory of compressed sensing with coherent redundant dictionaries, based on
the observation that natural images exhibit strong average sparsity over
multiple coherent frames. We test our prior and the associated algorithm, based
on an analysis reweighted $\ell_1$ formulation, through extensive numerical
simulations on natural images for spread spectrum and random Gaussian
acquisition schemes. Our results show that average sparsity outperforms
state-of-the-art priors that promote sparsity in a single orthonormal basis or
redundant frame, or that promote gradient sparsity. Code and test data are
available at https://github.com/basp-group/sopt.
|
1208.2332
|
Modeling Propagation Characteristics for Arm-Motion in Wireless Body
Area Sensor Networks
|
cs.SY cs.NI
|
Monitoring health information using wireless sensors on the body is a
promising new application. The human body acts as a transmission channel in
wearable wireless devices, so electromagnetic propagation modeling is
considered for the transmission channel in a Wireless Body Area Sensor Network
(WBASN). In this paper we present wave propagation in a WBASN, modeled as a
point source (antenna) close to the arm of the human body. Four possible cases
are presented, in which the transmitter and receiver are inside or outside the
body. The dyadic Green's function is used to propose a channel model for arm
motion of the human body model. This function is expanded in terms of vector
wave functions and the scattering superposition principle. The paper describes
the analytical derivation of the spherical electric field distribution model
and the simulation of those derivations.
|
1208.2333
|
Energy Efficient Wireless Communication using Genetic Algorithm Guided
Faster Light Weight Digital Signature Algorithm (GADSA)
|
cs.CR cs.NE
|
In this paper, a genetic algorithm (GA) based, lightweight, faster version of
the Digital Signature Algorithm (GADSA) for wireless communication is
proposed. Genetic operators such as crossover and mutation are used to
optimize the number of modular multiplications. A Roulette Wheel selection
mechanism helps select the best chromosome, which in turn enables faster
computation and minimizes the time requirements of DSA. Minimizing the number
of modular multiplications is itself an NP-hard problem, meaning that no
polynomial-time deterministic algorithm exists for this purpose. This paper
addresses the problem using a GA-based optimization algorithm to minimize the
number of modular multiplications. The proposed GADSA starts with an initial
population comprising a set of valid and complete individuals. Operators are
then used to generate feasible, valid offspring from the existing ones. Among
several exponents, the best solution reached by GADSA is compared with some
existing techniques. Extensive simulations show competitive results for the
proposed GADSA.
|
1208.2345
|
A Large Population Size Can Be Unhelpful in Evolutionary Algorithms
|
cs.NE
|
The utilization of populations is one of the most important features of
evolutionary algorithms (EAs). There have been many studies analyzing the
impact of different population sizes on the performance of EAs. However, most
such studies are based on computational experiments, except for a few cases.
The common wisdom so far appears to be that a large population would increase
the population diversity and thus help an EA. Indeed, increasing the population
size has been a commonly used strategy in tuning an EA when it did not perform
as well as expected for a given problem. He and Yao (2002) showed theoretically
that for some problem instance classes, a population can help to reduce the
runtime of an EA from exponential to polynomial time. This paper analyzes the
role of population further in EAs and shows rigorously that large populations
may not always be useful. Conditions, under which large populations can be
harmful, are discussed in this paper. Although the theoretical analysis was
carried out on one multi-modal problem using a specific type of EA, it has
much wider implications. The analysis has revealed certain problem
characteristics, present in both the problem considered here and other
problems, that lead to the disadvantages of large population sizes. The
analytical approach developed in this paper can also be applied to analyzing
EAs on other problems.
|
1208.2346
|
On existence of Budaghyan-Carlet APN hexanomials
|
math.CO cs.DM cs.IT math.IT
|
Budaghyan and Carlet constructed a family of almost perfect nonlinear (APN)
hexanomials over a field with r^2 elements, and with terms of degrees r+1, s+1,
rs+1, rs+r, rs+s, and r+s, where r = 2^m and s = 2^n with GCD(m,n)=1. The
construction requires a technical condition, which was verified empirically in
a finite number of examples. Bracken, Tan, and Tan (arXiv:1110.3177 [cs.it])
proved the condition holds when m = 2 or 4 (mod 6). In this article, we prove
that the construction of Budaghyan and Carlet produces APN polynomials for all
m and n.
In the case where GCD(m,n) = k >= 1, Budaghyan and Carlet showed that the
nonzero derivatives of the hexanomials are 2^k-to-one maps from F_{r^2} to
F_{r^2}, provided the same technical condition holds. We prove their
construction produces hexanomials with this differential property for all m and
n.
|
1208.2355
|
Empirical Validation of the Buckley--Osthus Model for the Web Host
Graph: Degree and Edge Distributions
|
cs.SI cs.DM cs.IR physics.soc-ph
|
There has been a lot of research on random graph models for large real-world
networks such as those formed by hyperlinks between web pages in the world wide
web. Though largely successful qualitatively in capturing their key properties,
such models may lack important quantitative characteristics of Internet graphs.
While preferential attachment random graph models were shown to be capable of
reflecting the degree distribution of the webgraph, their ability to reflect
certain aspects of the edge distribution was not yet well studied.
In this paper, we consider the Buckley--Osthus implementation of preferential
attachment and its ability to model the web host graph in two aspects. One is
the degree distribution, which we observe to follow a power law, as is often
the case for real-world graphs. Another one is the two-dimensional edge
distribution, the number of edges between vertices of given degrees. We fit a
single "initial attractiveness" parameter $a$ of the model, first with respect
to the degree distribution of the web host graph, and then, absolutely
independently, with respect to the edge distribution. Surprisingly, the values
of $a$ we obtain turn out to be nearly the same. Therefore the same model with
the same value of the parameter $a$ fits very well the two independent and
basic aspects of the web host graph. In addition, we demonstrate that other
models completely lack the asymptotic behavior of the edge distribution of the
web host graph, even when accurately capturing the degree distribution.
To the best of our knowledge, this is the first attempt, for a real Internet
graph, to describe the distribution of edges between vertices with respect to
their degrees.
|
1208.2361
|
Lexicodes over Rings
|
cs.IT math.IT
|
In this paper, we consider the construction of linear lexicodes over finite
chain rings by using a $B$-ordering over these rings and a selection
criterion. As examples we give lexicodes over $\mathbb{Z}_4$ and
$\mathbb{F}_2+u\mathbb{F}_2$. It is shown that this construction produces
many optimal codes over rings and also good binary codes. Some of these codes
meet the Gilbert bound. We also obtain optimal self-dual codes, in particular
the octacode.
|
1208.2362
|
The Guppy Effect as Interference
|
cs.AI quant-ph
|
People use conjunctions and disjunctions of concepts in ways that violate the
rules of classical logic, such as the law of compositionality. Specifically,
they overextend conjunctions of concepts, a phenomenon referred to as the Guppy
Effect. We build on previous efforts to develop a quantum model that explains
the Guppy Effect in terms of interference. Using a well-studied data set with
16 exemplars that exhibit the Guppy Effect, we developed a 17-dimensional
complex Hilbert space H that models the data and demonstrates the relationship
between overextension and interference. We view the interference effect not as
a logical fallacy of the conjunction, but as a signal that a new concept has
emerged out of the two constituent concepts.
|
1208.2376
|
Analytical Survey of Wearable Sensors
|
cs.SY cs.NI
|
Wearable sensors in Wireless Body Area Networks (WBANs) provide health and
physical activity monitoring. Modern communication systems have extended this
monitoring to remote settings. In this survey, various types of wearable
sensors are discussed, along with their medical applications, such as ECG,
EEG, blood pressure, blood glucose level detection, pulse rate and respiration
rate, and non-medical applications, such as daily exercise monitoring and
motion detection of different body parts. Different types of noise-removing
filters that help remove noise from ECG signals are also discussed at the end.
The main purpose of this survey is to provide a platform for researchers
working on wearable sensors for WBANs.
|
1208.2387
|
Instantly Decodable versus Random Linear Network Coding: A Comparative
Framework for Throughput and Decoding Delay Performance
|
cs.IT math.IT
|
This paper studies the tension between throughput and decoding delay
performance of two widely-used network coding schemes: random linear network
coding (RLNC) and instantly decodable network coding (IDNC). A single-hop
broadcasting system model is considered that aims to deliver a block of packets
to all receivers in the presence of packet erasures. For a fair and
analytically tractable comparison between the two coding schemes, the
transmission comprises two phases: a systematic transmission phase and a
network coded transmission phase which is further divided into rounds. After
the systematic transmission phase and given the same packet reception state,
three quantitative metrics are proposed and derived in each scheme: 1) the
absolute minimum number of transmissions in the first coded transmission round
(assuming no erasures), 2) probability distribution of extra coded
transmissions in a subsequent round (due to erasures), and 3) average packet
decoding delay. This comparative study enables application-aware adaptive
selection between IDNC and RLNC after the systematic transmission phase.
One contribution of this paper is to provide a deep and systematic
understanding of the IDNC scheme, proposing the notion of packet diversity and
an optimal IDNC encoding scheme for minimizing metric 1. This optimization is
generally NP-hard, but is nevertheless required for characterizing and
deriving all three metrics. Analytical and numerical results show that there
is no clear
winner between RLNC and IDNC if one is concerned with both throughput and
decoding delay performance. IDNC is preferable to RLNC when the number of
receivers is smaller than the packet block size, and the case reverses when
the number of receivers is much greater than the packet block size. In the
middle
regime, the choice can depend on the application and a specific instance of the
problem.
|
1208.2394
|
Performance Analysis of Protograph-based LDPC Codes with Spatial
Diversity
|
cs.IT math.IT
|
In wireless communications, spatial diversity techniques, such as space-time
block code (STBC) and single-input multiple-output (SIMO), are employed to
strengthen the robustness of the transmitted signal against channel fading.
This paper studies the performance of protograph-based low-density parity-check
(LDPC) codes with receive antenna diversity. We first propose a modified
version of the protograph extrinsic information transfer (PEXIT) algorithm and
use it for deriving the threshold of the protograph codes in a single-input
multiple-output (SIMO) system. We then calculate the decoding threshold and
simulate the bit error rate (BER) of two protograph codes
(accumulate-repeat-by-3-accumulate (AR3A) code and
accumulate-repeat-by-4-jagged-accumulate (AR4JA) code), a regular (3, 6) LDPC
code and two optimized irregular LDPC codes. The results reveal that the
irregular codes achieve the best error performance in the low
signal-to-noise-ratio (SNR) region and the AR3A code outperforms all other
codes in the high-SNR region. Utilizing the theoretical analyses and the
simulated results, we further discuss the effect of the diversity order on the
performance of the protograph codes. Accordingly, the AR3A code stands out as a
good candidate for wireless communication systems with multiple receive
antennas.
|
1208.2417
|
How to sample if you must: on optimal functional sampling
|
stat.ML cs.LG
|
We examine a fundamental problem that models various active sampling setups,
such as network tomography. We analyze sampling of a multivariate normal
distribution with an unknown expectation that needs to be estimated: in our
setup it is possible to sample the distribution from a given set of linear
functionals, and the difficulty addressed is how to optimally select the
combinations to achieve low estimation error. Although this problem is at the
heart of the field of optimal design, no efficient solutions for the case with
many functionals exist. We present some bounds and an efficient sub-optimal
solution for this problem for more structured sets such as binary functionals
that are induced by graph walks.
|
1208.2429
|
Linear model predictive control based on polyhedral control Lyapunov
functions: theory and applications
|
cs.SY math.OC
|
Polyhedral control Lyapunov functions (PCLFs) are exploited in finite-horizon
linear model predictive control formulations in order to guarantee the maximal
domain of attraction (DoA), in contrast to traditional formulations based on
quadratic control Lyapunov functions. In particular, the terminal region is
chosen as the largest DoA, namely the entire controllable set, which is
parametrized by a level set of a suitable PCLF. Closed-loop stability of the
origin is guaranteed either by using an "inflated" PCLF as terminal cost or by
adding a contraction constraint for the PCLF evaluated at the current state.
Two variants of the formulation based on the inflated PCLF terminal cost are
also presented. In all proposed formulations, the guaranteed DoA is always the
entire controllable set, independently of the chosen finite horizon.
Closed-loop inherent robustness with respect to arbitrary, sufficiently small
perturbations is also established. Moreover, all proposed schemes can be
formulated as Quadratic Programming problems. Numerical examples show the main
benefits and achievements of the proposed formulations.
|
1208.2434
|
Distributed Multi-objective Multidisciplinary Design Optimization
Algorithms
|
math.OC cs.SY
|
This work proposes a multi-agent systems setting for concurrent engineering
system design optimization and gradually paves the way towards examining
graph-theoretic constructs in the context of multidisciplinary design
optimization problems. The flow of the algorithm can be described as follows:
generated estimates of the optimal (shared design) variables are exchanged
locally with neighboring subspaces and then updated by computing a weighted
sum of the local and received estimates. To comply with the consistency
requirement, the resulting values are projected onto local constraint sets. By
employing existing rules and results from the field, it is shown that the dual
task of reaching consensus and asymptotic convergence of the algorithms to
locally and globally optimal and consistent designs can be achieved. Finally,
simulations
are provided to illustrate the effectiveness and capability of the presented
framework.
|
1208.2437
|
An Efficient Genetic Programming System with Geometric Semantic
Operators and its Application to Human Oral Bioavailability Prediction
|
cs.NE
|
Very recently, new genetic operators, called geometric semantic operators,
have been defined for genetic programming. Contrary to standard genetic
operators, which are based purely on the syntax of the individuals, these new
operators are based on their semantics, that is, the set of input-output
pairs on training data. Furthermore, these operators present the interesting
property of inducing a unimodal fitness landscape for every problem that
consists of finding a match between given input and output data (for instance,
regression and classification). Nevertheless, the current definition of these
operators has a serious limitation: they impose an exponential growth in the
size of the individuals in the population, so their use is impossible in
practice. This paper is intended to overcome this limitation, presenting a new
genetic programming system that implements geometric semantic operators in an
extremely efficient way. To demonstrate the power of the proposed system, we
use it to solve a complex real-life application in the field of
pharmacokinetics: the prediction of the human oral bioavailability of potential
new drugs. Besides the excellent performances on training data, which were
expected because the fitness landscape is unimodal, we also report an excellent
generalization ability of the proposed system, at least for the studied
application. In fact, it outperforms standard genetic programming and a wide
set of other well-known machine learning methods.
|
1208.2448
|
Breaking Out The XML MisMatch Trap
|
cs.DB
|
In keyword search, when a user cannot get what she wants, query refinement is
needed, and the reasons can be various. We first give a thorough
categorization of these reasons, then focus on solving one category of query
refinement problem in the context of XML keyword search, where what the user
searches for does not exist in the data. We refer to this as the MisMatch
problem in this paper. Then we
propose a practical way to detect the MisMatch problem and generate helpful
suggestions to users. Our approach can be viewed as a post-processing job of
query evaluation, and has three main features: (1) it adopts both the
suggested queries and their sample results as the output to the user, helping
the user judge whether the MisMatch problem is solved without consuming all
query results; (2)
it is portable in the sense that it can work with any LCA-based matching
semantics and orthogonal to the choice of result retrieval method adopted; (3)
it is lightweight in the way that it occupies a very small proportion of the
whole query evaluation time. Extensive experiments on three real datasets
verify the effectiveness, efficiency and scalability of our approach. An online
XML keyword search engine called XClear that embeds the MisMatch problem
detector and suggester has been built.
|
1208.2456
|
Wolfram's Classification and Computation in Cellular Automata Classes
III and IV
|
nlin.CG cs.CC cs.IT math.DS math.IT
|
We conduct a brief survey on Wolfram's classification, in particular related
to the computing capabilities of Cellular Automata (CA) in Wolfram's classes
III and IV. We formulate and shed light on the question of whether Class III
systems are capable of Turing universality or may turn out to be "too hot" in
practice to be controlled and programmed. We show that systems in Class III
are indeed capable of computation and that there is no reason to believe that
they are unable, in principle, to reach Turing-completeness.
|
1208.2478
|
Structured Query Reformulations in Commerce Search
|
cs.IR cs.DB
|
Recent work in commerce search has shown that understanding the semantics in
user queries enables more effective query analysis and retrieval of relevant
products. However, due to lack of sufficient domain knowledge, user queries
often include terms that cannot be mapped directly to any product attribute.
For example, a user looking for {\tt designer handbags} might start with such a
query because she is not familiar with the manufacturers, the price ranges,
and/or the material that gives a handbag designer appeal. Current commerce
search engines treat terms such as {\tt designer} as keywords and attempt to
match them to contents such as product reviews and product descriptions, often
resulting in poor user experience.
In this study, we propose to address this problem by reformulating queries
involving terms such as {\tt designer}, which we call \emph{modifiers}, to
queries that specify precise product attributes. We learn to rewrite the
modifiers to attribute values by analyzing user behavior and leveraging
structured data sources such as the product catalog that serves the queries. We
first produce a probabilistic mapping between the modifiers and attribute
values based on user behavioral data. These initial associations are then used
to retrieve products from the catalog, over which we infer sets of attribute
values that best describe the semantics of the modifiers. We evaluate the
effectiveness of our approach based on a comprehensive Mechanical Turk study.
We find that users agree with the attribute values selected by our approach in
about 95% of the cases, and they prefer the results surfaced for our
reformulated queries to those for the original queries 87% of the time.
|
1208.2488
|
Period Distribution of Inversive Pseudorandom Number Generators Over
Galois Rings
|
cs.IT math.IT
|
In 2009, Sol\'{e} and Zinoviev (\emph{Eur. J. Combin.}, vol. 30, no. 2, pp.
458-467, 2009) proposed an open problem of arithmetic interest: to study the
period of inversive pseudorandom number generators (IPRNGs) and to give
conditions on $a, b$ for achieving the maximal period. We focus on resolving
this open problem. In this paper, the period distribution of the IPRNGs over
the Galois ring $({\rm Z}_{p^{e}},+,\times)$ is considered, where $p>3$ is a
prime and $e\geq 2$ is an integer. The IPRNGs are transformed into
2-dimensional linear feedback shift registers (LFSRs), so that the analysis of
the period distribution of the IPRNGs is reduced to the analysis of the period
distribution of the LFSRs. Then, by employing some analytical approaches, full
information on the period distribution of the IPRNGs is obtained: exact
statistics about the period of the IPRNGs are made, and the number of IPRNGs
with a specific period is counted as $a$, $b$ and $x_{0}$ traverse all
elements in ${\rm Z}_{p^{e}}$. The analysis also indicates how to choose the
parameters and the initial values such that the IPRNGs attain specific
periods.
|
1208.2503
|
Distributed Pareto Optimization via Diffusion Strategies
|
cs.MA math.OC
|
We consider solving multi-objective optimization problems in a distributed
manner by a network of cooperating and learning agents. The problem is
equivalent to optimizing a global cost that is the sum of individual
components. The optimizers of the individual components do not necessarily
coincide and the network therefore needs to seek Pareto optimal solutions. We
develop a distributed solution that relies on a general class of adaptive
diffusion strategies. We show how the diffusion process can be represented as
the cascade composition of three operators: two combination operators and a
gradient descent operator. Using the Banach fixed-point theorem, we establish
the existence of a unique fixed point for the composite cascade. We then study
how close each agent converges towards this fixed point, and also examine how
close the Pareto solution is to the fixed point. We perform a detailed
mean-square error analysis and establish that all agents are able to converge
to the same Pareto optimal solution within a sufficiently small
mean-square-error (MSE) bound even for constant step-sizes. We illustrate one
application of the theory to collaborative decision making in finance by a
network of agents.
|
1208.2507
|
Error Probability of OSTB Codes and Capacity Analysis with Antenna
Selection over Single-Antenna AF Relay Channels
|
cs.IT cs.NI math.IT
|
In this paper, the symbol error rate (SER) and bit error rate (BER) of
orthogonal space-time block codes (OSTBCs), and their achievable capacity,
over an amplify-and-forward (AF) relay channel with multiple antennas at the
source and destination and a single antenna at the relay node are
investigated. We consider receive antenna selection, transmit antenna
selection, and joint antenna selection at both the transmitter and the
receiver. The exact SERs of OSTBCs
for M-PSK and square M-QAM constellations are obtained using the moment
generating functions (MGFs). Also, we analyze the achievable capacity over such
channels assuming antenna selection is done at the source and relay nodes. We
show that a small number of selected antennas can achieve the capacity of the
system in which no channel state information (CSI) is available at the source
and relay nodes.
|
1208.2515
|
A Sub-Nyquist Radar Prototype: Hardware and Algorithms
|
cs.IT math.IT
|
Traditional radar sensing typically involves matched filtering between the
received signal and the shape of the transmitted pulse. Under the constraints
of the classical sampling theorem, this requires that the received signal
first be sampled at twice the baseband bandwidth in order to avoid aliasing.
The
growing demands for target distinction capability and spatial resolution imply
significant growth in the bandwidth of the transmitted pulse. Thus, correlation
based radar systems require high sampling rates, and with the large amounts of
data sampled also necessitate vast memory capacity. In addition, real-time
processing of the data typically results in high power consumption. Recently,
new approaches for radar sensing and detection were introduced, based on the
Finite Rate of Innovation and Xampling frameworks. These techniques allow
significant reduction in sampling rate, implying potential power savings, while
maintaining the system's detection capabilities at high enough SNR. Here we
present for the first time a design and implementation of a Xampling-based
hardware prototype that allows sampling of radar signals at rates much lower
than Nyquist. We demonstrate through real-time analog experiments that our
system is able to maintain reasonable detection capabilities while sampling
radar signals, which would require a sampling rate of about 30 MHz, at a total
rate of 1 MHz.
|
1208.2518
|
Software systems through complex networks science: Review, analysis and
applications
|
cs.SI cs.SE physics.soc-ph
|
Complex software systems are among the most sophisticated human-made systems,
yet little is known about the actual structure of 'good' software. Here we
study different software systems developed in Java from the perspective of
network science. The study reveals that network theory can provide a prominent
set of techniques for the exploratory analysis of large complex software
systems. We further identify several applications in software engineering, and
propose different network-based quality indicators that address software
design, efficiency, reusability, vulnerability, controllability and others. We
also highlight various interesting findings; e.g., software systems are highly
vulnerable to processes like bug propagation, yet they are not easily
controllable.
|
1208.2523
|
Path Integral Control by Reproducing Kernel Hilbert Space Embedding
|
cs.LG stat.ML
|
We present an embedding of stochastic optimal control problems, of the so
called path integral form, into reproducing kernel Hilbert spaces. Using
consistent, sample based estimates of the embedding leads to a model free,
non-parametric approach for calculation of an approximate solution to the
control problem. This formulation admits a decomposition of the problem into an
invariant and task dependent component. Consequently, we make much more
efficient use of the sample data compared to previous sample based approaches
in this domain, e.g., by allowing sample re-use across tasks. Numerical
examples on test problems, which illustrate the sample efficiency, are
provided.
|
1208.2534
|
Locating the Source of Diffusion in Large-Scale Networks
|
cs.SI cs.IR physics.soc-ph
|
How can we localize the source of diffusion in a complex network? Due to the
tremendous size of many real networks--such as the Internet or the human social
graph--it is usually infeasible to observe the state of all nodes in a network.
We show that it is fundamentally possible to estimate the location of the
source from measurements collected by sparsely-placed observers. We present a
strategy that is optimal for arbitrary trees, achieving maximum probability of
correct localization. We describe efficient implementations with complexity
O(N^{\alpha}), where \alpha=1 for arbitrary trees, and \alpha=3 for arbitrary
graphs. In the context of several case studies, we determine how localization
accuracy is affected by various system parameters, including the structure of
the network, the density of observers, and the number of observed cascades.
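As an illustration (ours, not the paper's exact estimator): with deterministic unit per-hop delays, the source is the node whose distances to the sparsely placed observers best explain the observed arrival times, i.e. the node minimizing the variance of the residuals `t_o - d(s, o)`. The graph, delays, and observer placement below are toy assumptions.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distance from src to every node of an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def estimate_source(adj, arrival):
    """Pick the node whose distances to the observers best explain the
    arrival times (smallest variance of the residuals t_o - d(s, o))."""
    best, best_var = None, float("inf")
    for s in adj:
        d = bfs_dist(adj, s)
        resid = [arrival[o] - d[o] for o in arrival]
        mean = sum(resid) / len(resid)
        var = sum((r - mean) ** 2 for r in resid) / len(resid)
        if var < best_var:
            best, best_var = s, var
    return best

# Toy tree 0-1, 1-2, 1-3, 3-4; true source is node 3, spreading starts
# at t=5 with unit per-hop delay; observers sit at nodes 0, 2 and 4.
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
true_dist = bfs_dist(adj, 3)
arrival = {o: 5 + true_dist[o] for o in (0, 2, 4)}
print(estimate_source(adj, arrival))  # -> 3
```

With noiseless delays the residual variance is exactly zero at the true source, so the estimator recovers it; the paper's setting additionally handles random propagation delays.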
|
1208.2547
|
Social Event Detection with Interaction Graph Modeling
|
cs.SI cs.IR cs.MM physics.soc-ph
|
This paper focuses on detecting social, physical-world events from photos
posted on social media sites. The problem is important: cheap media capture
devices have significantly increased the number of photos shared on these
sites. The main contribution of this paper is to incorporate online social
interaction features into the detection of physical events. We believe that
online social interactions reflect important signals among the participants
about the "social affinity" of two photos, thereby helping event detection. We
compute social affinity via a random walk on a social interaction graph to
determine the similarity between two photos. We train a support vector
machine classifier to combine the social affinity between photos with
photo-centric metadata, including time, location, tags, and description.
Incremental clustering is then used to group photos into event clusters. We
obtain very good results on two large-scale real-world datasets, Upcoming and
MediaEval, showing an improvement of 0.06-0.10 in F1 on these datasets.
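A minimal sketch of random-walk affinity on an interaction graph (our simplification; the graph below, linking hypothetical photos P1-P3 through users who interacted with them, is invented for illustration):

```python
def rwr(adj, seed, restart=0.15, iters=100):
    """Random walk with restart on an undirected interaction graph:
    visit probabilities of every node as seen from `seed`."""
    nodes = list(adj)
    p = {n: 0.0 for n in nodes}
    p[seed] = 1.0
    for _ in range(iters):
        nxt = {n: restart * (n == seed) for n in nodes}
        for u in nodes:
            share = (1 - restart) * p[u] / len(adj[u])
            for v in adj[u]:
                nxt[v] += share
        p = nxt
    return p

# Hypothetical graph: photos P1 and P2 share two commenters, P3 does not.
g = {"P1": ["u1", "u2"], "P2": ["u1", "u2"], "P3": ["u3"],
     "u1": ["P1", "P2"], "u2": ["P1", "P2"], "u3": ["P3"]}
scores = rwr(g, "P1")
print(scores["P2"] > scores["P3"])  # True: shared interactions -> affinity
```

The walk's visit probability of one photo seeded at another serves as the "social affinity" feature that is then combined with photo metadata by the classifier.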
|
1208.2566
|
The Complexity of Planning Revisited - A Parameterized Analysis
|
cs.AI
|
The early classifications of the computational complexity of planning under
various restrictions in STRIPS (Bylander) and SAS+ (Baeckstroem and Nebel) have
influenced subsequent research in planning in many ways. We go back and
reanalyse their subclasses, but this time using the more modern tool of
parameterized complexity analysis. This provides new results that together with
the old results give a more detailed picture of the complexity landscape. We
demonstrate separation results not possible with standard complexity theory,
which contributes to explaining why certain cases of planning have seemed
simpler in practice than theory has predicted. In particular, we show that
certain restrictions of practical interest are tractable in the parameterized
sense of the term, and that a simple heuristic is sufficient to make a
well-known partial-order planner exploit this fact.
|
1208.2572
|
Nonparametric sparsity and regularization
|
stat.ML cs.LG math.OC
|
In this work we are interested in the problems of supervised learning and
variable selection when the input-output dependence is described by a nonlinear
function depending on a few variables. Our goal is to consider a sparse
nonparametric model, hence avoiding linear or additive models. The key idea is
to measure the importance of each variable in the model by making use of
partial derivatives. Based on this intuition we propose a new notion of
nonparametric sparsity and a corresponding least squares regularization scheme.
Using concepts and results from the theory of reproducing kernel Hilbert spaces
and proximal methods, we show that the proposed learning algorithm corresponds
to a minimization problem which can be provably solved by an iterative
procedure. The consistency properties of the obtained estimator are studied
both in terms of prediction and selection performance. An extensive empirical
analysis shows that the proposed method performs favorably with respect to the
state-of-the-art methods.
|
1208.2609
|
Epidemics scenarios in the "Romantic network"
|
physics.soc-ph cs.SI nlin.AO
|
The structure of sexual contacts, their contact network, and their temporal
interactions play an important role in the spread of sexually transmitted
infections (STIs). Unfortunately, that kind of data is very hard to obtain. One
of the few exceptions is the "Romantic network", the complete structure of a
real sexual network of a high school. In terms of topology, it is unlike other
sexual networks, which are classified as scale-free networks. Regarding the
temporal structure, several studies indicate that relationship timing can
affect diffusion through networks, as relationship order determines
transmission routes. To check whether the particular structure, static and
dynamic, of the Romantic network is determinant for the propagation of an STI
in it, we perform simulations in two scenarios: the static network, where all
contacts are available, and the dynamic case, where contacts evolve in time. In
the static case, we compare the epidemic results on the Romantic network with
those on some paradigmatic topologies. We further study the behavior of the
epidemic on the Romantic network in response to any individual in the network
having contact with an external infected subject, the influence of the degree
of the initially infected node, and the effect of the variability of contacts
per unit time. We also consider the dynamics of pair formation and study the
propagation of the disease in this dynamic scenario. Our results suggest that
while the Romantic network cannot be labeled as a Watts-Strogatz network, it
is, regarding the propagation of an STI, very close to one with high disorder.
Our simulations confirm that relationship timing affects the final outbreak
size, strongly lowering it, and they show a clear correlation between the
average degree and the outbreak size over time.
|
1208.2618
|
The role of noise and initial conditions in the asymptotic solution of a
bounded confidence, continuous-opinion model
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We study a model for continuous-opinion dynamics under bounded confidence. In
particular, we analyze the importance of the initial distribution of opinions
in determining the asymptotic configuration. Thus, we sketch the structure of
attractors of the dynamical system by means of the numerical computation of
the time evolution of the agent density. We show that, for a given bound of
confidence, a consensus can be encouraged or prevented by certain initial
conditions. Furthermore, a noisy perturbation is added to the system with the
purpose of modeling the free will of the agents. As a consequence, the
importance of the initial condition is partially replaced by that of the
statistical distribution of the noise. Nevertheless, we still find evidence of
the influence of the initial state upon the final configuration for a short
range of the bound of confidence parameter.
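A standard realization of such bounded-confidence, continuous-opinion dynamics is the pairwise (Deffuant-style) update, sketched here (our choice of update rule and parameters, for illustration):

```python
import random

def bounded_confidence(opinions, eps, mu=0.5, noise=0.0, steps=50000, rng=None):
    """Pairwise bounded-confidence dynamics: a random pair compromises
    only if their opinions differ by less than eps; `noise` adds a small
    random kick modeling the agents' free will."""
    rng = rng or random.Random(0)
    x = list(opinions)
    n = len(x)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and abs(x[i] - x[j]) < eps:
            shift = mu * (x[j] - x[i])
            x[i] += shift
            x[j] -= shift
        if noise:
            k = rng.randrange(n)
            x[k] = min(1.0, max(0.0, x[k] + rng.uniform(-noise, noise)))
    return x

seed_rng = random.Random(42)
init = [seed_rng.random() for _ in range(100)]   # uniform initial opinions
final = bounded_confidence(init, eps=0.6, rng=random.Random(1))
spread = max(final) - min(final)
print(spread < 0.1)  # a wide confidence bound drives consensus
```

Changing the initial distribution `init` (e.g. concentrating mass near the extremes) or raising `noise` is how one probes the dependence on initial conditions and free will discussed above.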
|
1208.2651
|
A Plea for Neutral Comparison Studies in Computational Sciences
|
stat.CO cs.CV stat.ME stat.ML
|
In a context where most published articles are devoted to the development of
"new methods", comparison studies are generally appreciated by readers but
surprisingly given poor consideration by many scientific journals. In
connection with recent articles on over-optimism and epistemology published in
Bioinformatics, this letter stresses the importance of neutral comparison
studies for the objective evaluation of existing methods and the establishment
of standards by drawing parallels with clinical research.
|
1208.2655
|
Stable Segmentation of Digital Image
|
cs.CV
|
In the paper the optimal segmentation of an image by means of piecewise
constant approximations is considered. Optimality is defined by a minimum
value of the total squared error, or equivalently by the standard deviation of
the approximation from the image. The optimal approximations are defined
independently of the method of obtaining them and may be generated by
different algorithms. We investigate the computation of the optimal
approximation on the grounds of stability with respect to a given set of
modifications. To obtain the optimal approximation, the Mumford-Shah model is
generalized and developed; in the computational part it is combined with the
Otsu method in its multi-thresholding version. The proposed solution is
validated analytically and experimentally on a standard test image.
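For the single-threshold case, Otsu's criterion coincides with minimizing the total squared error of a two-level piecewise-constant approximation; a brute-force sketch (toy 1-D "image", not the paper's test image):

```python
def otsu_threshold(pixels):
    """Exhaustive single-threshold Otsu: minimizes the total within-class
    squared error of a two-level piecewise-constant approximation."""
    best_t, best_sse = None, float("inf")
    levels = sorted(set(pixels))
    for t in levels[1:]:                     # both classes stay non-empty
        lo = [p for p in pixels if p < t]
        hi = [p for p in pixels if p >= t]
        sse = 0.0
        for grp in (lo, hi):
            m = sum(grp) / len(grp)          # optimal constant = class mean
            sse += sum((p - m) ** 2 for p in grp)
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t, best_sse

# Bimodal toy "image": dark pixels near 10, bright pixels near 200.
img = [8, 10, 12, 11, 9, 198, 200, 202, 199, 201]
t, sse = otsu_threshold(img)
print(t)  # -> 198, splitting the two modes
```

The multi-threshold version used in the paper generalizes this search to several thresholds, each class again approximated by its mean.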
|
1208.2712
|
Topological measures for the analysis of wireless sensor networks
|
cs.NI cs.SI
|
Concepts such as energy dependence, random deployment, dynamic topological
updates, self-organization, and a varying, large number of nodes are among the
many factors that make WSNs a type of complex system. However, when analyzing
WSN properties using complex network tools, classical topological measures must be
considered with care as they might not be applicable in their original form. In
this work, we focus on the topological measures frequently used in the related
field of Internet topological analysis. We illustrate their applicability to
the WSNs domain through simulation experiments. In the cases when the classic
metrics turn out to be incompatible, we propose some alternative measures and
discuss them based on the WSNs characteristics.
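One concrete instance (our toy setup, not the paper's simulations): a randomly deployed WSN is often modeled as a random geometric graph, on which a classic Internet-analysis metric such as the mean clustering coefficient can be computed directly:

```python
import math
import random

def geometric_graph(n, radius, rng):
    """WSN-like deployment: n nodes in the unit square, links between
    nodes closer than the radio range."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < radius:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def avg_clustering(adj):
    """Mean local clustering coefficient over all nodes."""
    total = 0.0
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

g = geometric_graph(100, 0.2, random.Random(7))
print(round(avg_clustering(g), 3))  # geometric graphs are highly clustered
```

The high clustering of such deployments is one reason measures calibrated on Internet topologies need care before being applied to WSNs.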
|
1208.2719
|
Unified Analysis of Transmit Antenna Selection/Space-Time Block Coding
with Receive Selection and Combining over Nakagami-m Fading Channels in the
Presence of Feedback Errors
|
cs.IT math.IT stat.OT
|
Examining the effect of imperfect transmit antenna selection (TAS) caused by
the feedback link errors on the performance of hybrid TAS/space-time block
coding (STBC) with selection combining (SC) (i.e., joint transmit and receive
antenna selection (TRAS)/STBC) and TAS/STBC (with receive maximal-ratio
combining (MRC)-like combining structure) over Nakagami-m fading channels is
the main objective of this paper. Under ideal channel estimation and delay-free
feedback assumptions, statistical expressions and several performance metrics
related to the post-processing signal-to-noise ratio (SNR) are derived for a
unified system model concerning both joint TRAS/STBC and TAS/STBC schemes.
Exact analytical expressions for outage probability and bit/symbol error rates
(BER/SER) of binary and M-ary modulations are presented in order to provide an
extensive examination on the capacity and error performance of the unified
system that experiences feedback errors. Also, the asymptotic diversity order
analysis, which shows that the diversity order of the investigated schemes is
lower bounded by the diversity order provided by STBC transmission itself, is
included in the paper. Moreover, all theoretical results are validated by
performing Monte Carlo simulations.
|
1208.2737
|
Shannon Information Theory Without Shedding Tears Over Delta \& Epsilon
Proofs or Typical Sequences
|
cs.IT math.IT quant-ph
|
This paper begins with a discussion of integration over probability types
(p-types). After doing that, the paper revisits three mainstay problems of
classical (non-quantum) Shannon Information Theory (SIT): source coding without
distortion, channel coding, and source coding with distortion. The paper proves
well-known, conventional results for each of these three problems. However, the
integration techniques (approximations obtained by applying the method of
steepest descent to p-type integrals) instead of the usual delta & epsilon and
typical sequences arguments. Another unconventional feature of this paper is
that we make ample use of classical Bayesian networks (CB nets). This paper
showcases some of the benefits of using CB nets to do classical SIT.
|
1208.2773
|
Privacy Preserving Record Linkage via grams Projections
|
cs.DB
|
Record linkage has been extensively used in various data mining applications
involving data sharing. While the amount of available data is growing, the
concern of disclosing sensitive information poses the problem of utility
versus privacy. In this paper, we study the problem of private record linkage
via secure data transformations. In contrast to existing techniques in this
area, we propose a novel approach that provides strong privacy guarantees under
the formal framework of differential privacy. We develop an embedding strategy
based on frequent variable-length grams mined in a private way from the
original data. We also introduce a personalized threshold for matching
individual records in the embedded space, which achieves better linkage
accuracy than the existing global-threshold approach. Compared with the
state-of-the-art secure matching scheme, our approach provides formal, provable
privacy guarantees and achieves better scalability while providing comparable
utility.
|
1208.2777
|
A Method for Selecting Noun Sense using Co-occurrence Relation in
English-Korean Translation
|
cs.CL
|
Word sense analysis remains a critical problem in machine translation,
especially in English-Korean translation, where the syntactic differences
between the source and target languages are great. We suggest a method for
selecting the noun sense using contextual features in English-Korean
translation.
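The co-occurrence idea can be sketched in a few lines (ours; the senses and counts below are hypothetical, and real systems would use smoothed statistics from a corpus):

```python
def select_sense(senses, context, cooc):
    """Pick the sense whose summed co-occurrence count with the
    surrounding context words is largest."""
    return max(senses, key=lambda s: sum(cooc.get((s, w), 0) for w in context))

# Hypothetical co-occurrence counts for two senses of "bank".
cooc = {("bank/finance", "money"): 8, ("bank/finance", "loan"): 5,
        ("bank/river", "water"): 7, ("bank/river", "fish"): 3}
print(select_sense(["bank/finance", "bank/river"],
                   ["money", "deposit"], cooc))  # -> bank/finance
```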
|
1208.2782
|
Multidimensional Web Page Evaluation Model Using Segmentation And
Annotations
|
cs.IR
|
The evaluation of web pages against a query is the pivot around which the
Information Retrieval domain revolves. The context-sensitive, semantic
evaluation of web pages is a non-trivial problem that needs to be addressed
immediately. This research work proposes a model to evaluate web pages by
cumulating segment scores computed through a multidimensional evaluation
methodology. The proposed model is hybrid, since it utilizes both structural
semantics and content semantics in the evaluation process. The score of a web
page is computed in a bottom-up process by evaluating each segment's score
through a multi-dimensional approach. The model incorporates an approach for
segment-level annotation. The proposed model is prototyped for evaluation;
experiments conducted on the prototype confirm the model's efficiency in the
semantic evaluation of pages.
|
1208.2786
|
On Reliability Function of Gaussian Channel with Noisy Feedback: Zero
Transmission Rate
|
cs.IT math.IT
|
For information transmission, a discrete-time channel with independent
additive Gaussian noise is used. There is also a feedback channel with
independent additive Gaussian noise, through which the transmitter observes
without delay all outputs of the forward channel. Transmission of a
nonexponential number of messages is considered, and the achievable decoding
error exponent for such a combination of channels is investigated. It is shown
that for any finite noise in the feedback channel, the achievable error
exponent is better than the similar error exponent of the no-feedback channel.
The transmission/decoding method used in the paper strengthens the earlier
method used by the authors for the BSC. In particular, for small feedback
noise, it yields a gain of 23.6% (instead of the earlier 14.3% for the BSC).
|
1208.2787
|
Analysis and Construction of Functional Regenerating Codes with Uncoded
Repair for Distributed Storage Systems
|
cs.IT math.IT
|
Modern distributed storage systems apply redundancy coding techniques to
stored data. One form of redundancy is based on regenerating codes, which can
minimize the repair bandwidth, i.e., the amount of data transferred when
repairing a failed storage node. Existing regenerating codes mainly require
surviving storage nodes to encode data during repair. In this paper, we study
functional minimum storage regenerating (FMSR) codes, which enable uncoded
repair without the encoding requirement in surviving nodes, while preserving
the minimum repair bandwidth guarantees and also minimizing disk reads. Under
double-fault tolerance settings, we formally prove the existence of FMSR codes,
and provide a deterministic FMSR code construction that can significantly speed
up the repair process. We further implement and evaluate our deterministic FMSR
codes to show the benefits. Our work is built atop a practical cloud storage
system that implements FMSR codes, and we provide theoretical validation to
justify the practicality of FMSR codes.
|
1208.2808
|
Analysis of a Statistical Hypothesis Based Learning Mechanism for Faster
crawling
|
cs.LG cs.IR
|
The growth of the world-wide web (WWW) has taken it from an intangible
quantity of web pages to a gigantic hub of web information, which gradually
increases the complexity of the crawling process in a search engine. A search
engine handles a large number of queries from various parts of the world, and
its answers depend solely on the knowledge that it gathers by means of
crawling. Information sharing has become a common habit of society, carried
out by publishing structured, semi-structured, and unstructured resources on
the web. This social practice leads to an exponential growth of web resources,
and hence it has become essential to crawl continuously to keep web knowledge
up to date and to track modifications of existing resources. In this paper, a
statistical-hypothesis-based learning mechanism is incorporated to learn the
behavior of crawling speed in different network environments and to
intelligently control the speed of the crawler. A scaling technique is used to
compare the performance of the proposed method with that of a standard
crawler. High-speed performance is observed after scaling, and the retrieval
of relevant web resources at such high speed is analyzed.
|
1208.2852
|
Ordered {AND, OR}-Decomposition and Binary-Decision Diagram
|
cs.AI cs.LO
|
In the context of knowledge compilation (KC), we study the effect of
augmenting Ordered Binary Decision Diagrams (OBDD) with two kinds of
decomposition nodes, i.e., AND-vertices and OR-vertices which denote
conjunctive and disjunctive decomposition of propositional knowledge bases,
respectively. The resulting knowledge compilation language is called Ordered
{AND, OR}-decomposition and binary-Decision Diagram (OAODD). Roughly speaking,
several previous languages can be seen as special types of OAODD, including
OBDD, AND/OR Binary Decision Diagram (AOBDD), OBDD with implied Literals
(OBDD-L), and Multi-Level Decomposition Diagrams (MLDD). On the one hand, we
propose some families of algorithms which can convert some fragments of OAODD
into others; on the other hand, we present a rich set of polynomial-time
algorithms that perform logical operations. Based on these algorithms, as
well as theoretical analysis, we characterize the space efficiency and
tractability of OAODD and some of its fragments with respect to the evaluation
criteria in the KC map. Finally, we present a compilation algorithm which
converts formulas in negation normal form into OAODD.
|
1208.2873
|
Detecting Events and Patterns in Large-Scale User Generated Textual
Streams with Statistical Learning Methods
|
cs.LG cs.CL cs.IR cs.SI stat.AP stat.ML
|
A vast amount of textual web streams is influenced by events or phenomena
emerging in the real world. The social web forms an excellent modern paradigm,
where unstructured user-generated content is published on a regular basis and
on most occasions is freely distributed. The present Ph.D. thesis deals with
the problem of inferring information - or patterns in general - about events
emerging in real life based on the contents of this textual stream. We show
that it is possible to extract valuable information about social phenomena,
such as an epidemic or even rainfall rates, by automatic analysis of the
content published in Social Media, and in particular Twitter, using Statistical
Machine Learning methods. An important intermediate task regards the formation
and identification of features which characterise a target event; we select and
use those textual features in several linear, non-linear and hybrid inference
approaches achieving a significantly good performance in terms of the applied
loss function. By examining further this rich data set, we also propose methods
for extracting various types of mood signals revealing how affective norms - at
least within the social web's population - evolve during the day and how
significant events emerging in the real world are influencing them. Lastly, we
present some preliminary findings showing several spatiotemporal
characteristics of this textual information as well as the potential of using
it to tackle tasks such as the prediction of voting intentions.
|
1208.2900
|
On Achievable Degrees of Freedom for MIMO X Channels
|
cs.IT math.IT
|
In this paper, the achievable degrees of freedom (DoF) of MIMO X channels
with constant channel coefficients, $M_t$ antennas at transmitter $t$, and
$N_r$ antennas at receiver $r$ ($t,r=1,2$), are studied. A spatial
interference alignment and
cancelation scheme is proposed to achieve the maximum DoF of the MIMO X
channels. The scenario of $M_1\geq M_2\geq N_1\geq N_2$ is first considered and
divided into 3 cases, $3N_2<M_1+M_2<2N_1+N_2$ (Case $A$), $M_1+M_2\geq2N_1+N_2$
(Case $B$), and $M_1+M_2\leq3N_2$ (Case $C$). With the proposed scheme, it is
shown that in Case $A$, the outer-bound $\frac{M_1+M_2+N_2}{2}$ is achievable;
in Case $B$, the achievable DoF equals the outer-bound $N_1+N_2$ if $M_2>N_1$,
otherwise it is 1/2 or 1 less than the outer-bound; in Case $C$, the achievable
DoF is equal to the outer-bound $2/3(M_1+M_2)$ if $(3N_2-M_1-M_2)\bmod 3=0$, and
it is 1/3 or 1/6 less than the outer-bound if $(3N_2-M_1-M_2)\bmod 3=1$ or
$2$. In the scenario of $M_t\leq N_r$, exactly symmetrical DoF results can be
obtained.
|
1208.2925
|
Using Program Synthesis for Social Recommendations
|
cs.LG cs.DB cs.PL cs.SI physics.soc-ph
|
This paper presents a new approach to select events of interest to a user in
a social media setting where events are generated by the activities of the
user's friends through their mobile devices. We argue that given the unique
requirements of the social media setting, the problem is best viewed as an
inductive learning problem, where the goal is to first generalize from the
users' expressed "likes" and "dislikes" of specific events, then to produce a
program that can be manipulated by the system and distributed to the collection
devices to collect only data of interest. The key contribution of this paper is
a new algorithm that combines existing machine learning techniques with new
program synthesis technology to learn users' preferences. We show that when
compared with the more standard approaches, our new algorithm provides up to
order-of-magnitude reductions in model training time, and significantly higher
prediction accuracies for our target application. The approach also improves on
standard machine learning techniques in that it produces clear programs that
can be manipulated to optimize data collection and filtering.
|
1208.2943
|
A differential Lyapunov framework for contraction analysis
|
cs.SY math.DG math.DS
|
Lyapunov's second theorem is an essential tool for stability analysis of
differential equations. The paper provides an analog theorem for incremental
stability analysis by lifting the Lyapunov function to the tangent bundle. The
Lyapunov function endows the state-space with a Finsler structure. Incremental
stability is inferred from infinitesimal contraction of the Finsler metric
through integration along solution curves.
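In symbols (our paraphrase of the construction described above; notation and constants are ours): a candidate differential Lyapunov function $V(x,\delta x)$ on the tangent bundle satisfies, for some $p \ge 1$ and $c_2 \ge c_1 > 0$,

```latex
c_1\,\|\delta x\|^p \;\le\; V(x,\delta x) \;\le\; c_2\,\|\delta x\|^p,
\qquad
\frac{\partial V}{\partial x}\,f(x)
  + \frac{\partial V}{\partial \delta x}\,
    \frac{\partial f}{\partial x}(x)\,\delta x
\;\le\; -\lambda\, V(x,\delta x),
```

for the dynamics $\dot x = f(x)$. Integrating $V^{1/p}$ along a curve joining two solutions then bounds the induced (Finsler) distance, giving incremental exponential stability of the form $d(x_1(t),x_2(t)) \le K e^{-\lambda t/p}\, d(x_1(0),x_2(0))$ for some $K$ depending on $c_1, c_2$.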
|
1208.2972
|
Wireless Network Design Under Service Constraints
|
cs.SY cs.NI
|
In this paper we consider the design of wireless queueing network control
policies with special focus on application-dependent service constraints. In
particular, we consider streaming-traffic-induced requirements, such as
avoiding buffer underflows, which significantly complicate the control problem
compared to guaranteeing throughput optimality only. Since state-of-the-art
approaches
for enforcing minimum buffer constraints in broadcast networks are not suitable
for application in general networks we argue for a cost function based
approach, which combines throughput optimality with flexibility regarding
service constraints. New theoretical stability results are presented and
various candidate cost functions are investigated concerning their suitability
for use in wireless networks with streaming media traffic. Furthermore, we show
how the cost-function-based approach can be used to aid wireless network design
with respect to important system parameters. The performance is demonstrated
using numerical simulations.
|
1208.2976
|
Discriminating different classes of biological networks by analyzing the
graphs spectra distribution
|
stat.ME cs.SI physics.soc-ph q-bio.QM
|
The brain's structural and functional systems, protein-protein interaction,
and gene networks are examples of biological systems that share some features
of complex networks, such as highly connected nodes, modularity, and
small-world topology. Recent studies indicate that some pathologies present
topological network alterations relative to norms seen in the general
population. Therefore, methods to discriminate the processes that generate the
different classes of networks (e.g., normal and disease) might be crucial for
the diagnosis, prognosis, and treatment of the disease. It is known that
several topological properties of a network (graph) can be described by the
distribution of the spectrum of its adjacency matrix. Moreover, large networks
generated by the same random process have the same spectrum distribution,
allowing us to use it as a "fingerprint". Based on this relationship, we
introduce and propose the entropy of a graph spectrum to measure the
"uncertainty" of a random graph and the Kullback-Leibler and Jensen-Shannon
divergences between graph spectra to compare networks. We also introduce
general methods for model selection and network model parameter estimation, as
well as a statistical procedure to test the nullity of divergence between two
classes of complex networks. Finally, we demonstrate the usefulness of the
proposed methods by applying them to (1) protein-protein interaction networks
of different species and (2) networks derived from children diagnosed with
Attention Deficit Hyperactivity Disorder (ADHD) and typically developing
children. We conclude that scale-free networks best describe all the
protein-protein interactions. Also, we show that our proposed measures
succeeded in the identification of topological changes in the network while
other commonly used measures (number of edges, clustering coefficient, average
path length) failed.
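A rough sketch of the spectral-comparison idea (ours, using NumPy; the graph generators, normalization, and histogram smoothing are simplified choices): estimate each graph's spectral distribution from a histogram of adjacency eigenvalues, then compare the distributions with the Jensen-Shannon divergence.

```python
import numpy as np

def spectral_density(adj, bins=30, lo=-3.0, hi=3.0):
    """Smoothed histogram of the scaled adjacency spectrum, a discrete
    stand-in for the graph's spectral distribution."""
    n = adj.shape[0]
    eig = np.linalg.eigvalsh(adj / np.sqrt(n))
    h, _ = np.histogram(eig, bins=bins, range=(lo, hi))
    h = h + 1e-12                     # avoid zeros in the divergences
    return h / h.sum()

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete densities."""
    m = (p + q) / 2
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return (kl(p, m) + kl(q, m)) / 2

rng = np.random.default_rng(0)
n = 60
er = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)   # ER-like graph
er += er.T
ring = np.zeros((n, n))
for i in range(n):                                          # ring lattice
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0

same = jsd(spectral_density(er), spectral_density(er))
diff = jsd(spectral_density(er), spectral_density(ring))
print(same < diff)  # different generators -> larger spectral divergence
```

Graphs drawn from the same random process yield (near-)zero divergence, while different generating processes separate clearly, which is the "fingerprint" property exploited for model selection.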
|
1208.3001
|
More than Word Frequencies: Authorship Attribution via Natural Frequency
Zoned Word Distribution Analysis
|
cs.CL
|
With the increasing popularity and availability of digital text data, the
authorship of digital texts cannot be taken for granted, due to the ease of
copying and pasting. This paper presents a new text style analysis called
natural frequency zoned word distribution analysis (NFZ-WDA), and then a basic
authorship attribution scheme and an open authorship attribution scheme for
digital texts based on this analysis. NFZ-WDA is based on the observation that
all authors leave distinct intrinsic word usage traces on the texts they
write, and that these intrinsic styles can be identified and employed to
analyze authorship. The intrinsic word usage styles can be estimated through
the analysis of word distribution within a text, which goes beyond normal word
frequency analysis and can be expressed as: which groups of words are used in
the text; how frequently each group of words occurs; and how the occurrences
of each group of words are distributed in the text. Next, the basic
authorship attribution scheme and the open authorship attribution scheme
provide solutions for both closed and open authorship attribution problems.
Through analysis and extensive experimental studies, this paper demonstrates
the efficiency of the proposed method for authorship attribution.
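A toy sketch of the three questions above (ours; the rank-based zoning and segment counts are simplified assumptions, not the paper's exact zoning scheme): split the vocabulary into frequency zones, then record how each zone's occurrences spread across equal-length segments of the text.

```python
from collections import Counter

def zone_features(text, n_zones=3, n_segments=4):
    """Per-zone distribution of word occurrences over text segments:
    which word groups occur, how often, and where in the text."""
    words = text.lower().split()
    ranked = [w for w, _ in Counter(words).most_common()]
    zone = {w: min(i * n_zones // len(ranked), n_zones - 1)
            for i, w in enumerate(ranked)}          # rank -> frequency zone
    seg_len = max(1, len(words) // n_segments)
    feats = [[0] * n_segments for _ in range(n_zones)]
    for pos, w in enumerate(words):
        feats[zone[w]][min(pos // seg_len, n_segments - 1)] += 1
    # Normalize each zone's row into a distribution over segments.
    return [[c / max(1, sum(row)) for c in row] for row in feats]

sample = ("the cat sat on the mat the dog sat on the log "
          "a cat and a dog met on a mat")
for row in zone_features(sample):
    print([round(v, 2) for v in row])
```

Comparing such per-zone positional distributions between a disputed text and known-author texts is the kind of signal an NFZ-WDA-style attribution scheme can exploit.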
|
1208.3015
|
Explaining Time-Table-Edge-Finding Propagation for the Cumulative
Resource Constraint
|
cs.AI
|
Cumulative resource constraints can model scarce resources in scheduling
problems or a dimension in packing and cutting problems. In order to
efficiently solve such problems with a constraint programming solver, it is
important to have strong and fast propagators for cumulative resource
constraints. One such propagator is the recently developed
time-table-edge-finding propagator, which considers the current resource
profile during the edge-finding propagation. Recently, lazy clause generation
solvers, i.e. constraint programming solvers incorporating nogood learning,
have proved to be excellent at solving scheduling and cutting problems. For
such solvers, concise and accurate explanations of the reasons for propagation
are essential for strong nogood learning. In this paper, we develop the first
explaining version of time-table-edge-finding propagation and show preliminary
results on resource-constrained project scheduling problems from various
standard benchmark suites. On the standard benchmark suite PSPLib, we were able
to close one open instance and to improve the lower bounds of about 60% of the
remaining open instances; moreover, 6 of those instances were closed as a
result.
|
1208.3017
|
Expurgation Exponent of Leaked Information in Privacy Amplification for
Binary Sources
|
cs.IT math.IT
|
We investigate the privacy amplification problem in which Eve can observe the
uniform binary source through a binary erasure channel (BEC) or a binary
symmetric channel (BSC). For this problem, we derive the so-called expurgation
exponent of the information leaked to Eve. The exponent is derived by relating
the leaked information to the error probability of the linear code that is
generated by the linear hash function used in the privacy amplification, which
is also interesting in its own right. The derived exponent is larger at low
rates than the state-of-the-art exponent recently derived by Hayashi.
|
1208.3024
|
Uplink Multicell Processing with Limited Backhaul via Per-Base-Station
Successive Interference Cancellation
|
cs.IT math.IT
|
This paper studies an uplink multicell joint processing model in which the
base-stations are connected to a centralized processing server via rate-limited
digital backhaul links. Unlike previous studies where the centralized processor
jointly decodes all the source messages from all base-stations, this paper
proposes a suboptimal achievability scheme in which the Wyner-Ziv
compress-and-forward relaying technique is employed on a per-base-station
basis, but successive interference cancellation (SIC) is used at the central
processor to mitigate multicell interference. This results in an achievable
rate region that is easily computable, in contrast to joint processing
schemes, in which the rate regions can only be characterized by an exponential
number of rate constraints. Under the per-base-station SIC framework, this
paper further studies the impact of the limited-capacity backhaul links on the
achievable rates and establishes that, in order to achieve within a constant
number of bits of the maximal SIC rate with infinite-capacity backhaul, the
backhaul capacity must scale logarithmically with the
signal-to-interference-and-noise ratio (SINR) at each base-station. Finally,
this paper studies the optimal backhaul rate allocation problem for an uplink
multicell joint processing model with a total backhaul capacity constraint. The
analysis reveals that the optimal strategy that maximizes the overall sum rate
should also scale with the log of the SINR at each base-station.
|
1208.3029
|
Fast Adaptive S-ALOHA Scheme for Event-driven M2M Communications
(Journal version)
|
cs.IT math.IT
|
Supporting massive device transmission is challenging in Machine-to-Machine
(M2M) communications. Particularly, in event-driven M2M communications, a large
number of devices activate within a short period of time, which in turn causes
high radio congestions and severe access delay. To address this issue, we
propose a Fast Adaptive S-ALOHA (FASA) scheme for random access control of M2M
communication systems with bursty traffic. Instead of the observation in a
single slot, the statistics of consecutive idle and collision slots are used in
FASA to accelerate the tracking process of network status which is critical for
optimizing S-ALOHA systems. Using drift analysis, we design the FASA scheme
such that the estimate of the backlogged devices converges fast to the true
value. Furthermore, by examining the $T$-slot drifts, we prove that the
proposed FASA scheme is stable as long as the average arrival rate is smaller
than $e^{-1}$, in the sense that the Markov Chain derived from the scheme is
geometrically ergodic. Simulation results demonstrate that the proposed FASA
scheme outperforms traditional additive schemes such as PB-ALOHA and achieves
near-optimal performance in reducing access delay. Moreover, compared to
multiplicative schemes, FASA shows its robustness under heavy traffic load in
addition to better delay performance.
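The abstract does not reproduce FASA's drift-based update rule, but the additive baseline it is compared against, a pseudo-Bayesian (PB-ALOHA-style) backlog estimator, can be sketched as follows. This is a minimal illustration of that classical scheme, not the paper's algorithm; the update constants follow Rivest's pseudo-Bayesian ALOHA:

```python
import math

def pseudo_bayesian_update(n_hat, outcome, arrival_rate):
    """Pseudo-Bayesian backlog estimate: one additive update per slot.

    outcome is "idle", "success", or "collision"; arrival_rate is the
    expected number of new arrivals per slot.
    """
    if outcome in ("idle", "success"):
        # One fewer backlogged device (or none), plus fresh arrivals,
        # clamped from below at the arrival rate.
        return max(arrival_rate, n_hat + arrival_rate - 1.0)
    # Collision: at least two devices transmitted, so revise upward.
    return n_hat + arrival_rate + 1.0 / (math.e - 2.0)

def transmit_prob(n_hat):
    """Each backlogged device transmits with probability 1/n_hat."""
    return min(1.0, 1.0 / n_hat)
```

FASA replaces this single-slot update with statistics over consecutive idle and collision slots, which is what speeds up tracking under bursty arrivals.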
|
1208.3030
|
Asymptotic Generalization Bound of Fisher's Linear Discriminant Analysis
|
stat.ML cs.LG
|
Fisher's linear discriminant analysis (FLDA) is an important dimension
reduction method in statistical pattern recognition. It has been shown that
FLDA is asymptotically Bayes optimal under the homoscedastic Gaussian
assumption. However, this classical result has the following two major
limitations: 1) it holds only for a fixed dimensionality $D$, and thus does not
apply when $D$ and the training sample size $N$ are proportionally large; 2) it
does not provide a quantitative description on how the generalization ability
of FLDA is affected by $D$ and $N$. In this paper, we present an asymptotic
generalization analysis of FLDA based on random matrix theory, in a setting
where both $D$ and $N$ increase and $D/N\longrightarrow\gamma\in[0,1)$. The
obtained lower bound of the generalization discrimination power overcomes both
limitations of the classical result, i.e., it is applicable when $D$ and $N$
are proportionally large and provides a quantitative description of the
generalization ability of FLDA in terms of the ratio $\gamma=D/N$ and the
population discrimination power. Besides, the discrimination power bound also
leads to an upper bound on the generalization error of binary-classification
with FLDA.
|
1208.3047
|
Parallelization of Maximum Entropy POS Tagging for Bahasa Indonesia with
MapReduce
|
cs.DC cs.CL
|
In this paper, the MapReduce programming model is used to parallelize the
training and tagging processes of Maximum Entropy part-of-speech tagging for
Bahasa Indonesia. In the training process, MapReduce is applied to dictionary,
tag-token, and feature creation. In the tagging process, MapReduce is used to
tag the lines of a document in parallel. The training experiments showed that
the total training time using MapReduce is shorter, but the time spent reading
intermediate results slows down the overall training. The tagging experiments,
using different numbers of map and reduce processes, showed that the MapReduce
implementation can speed up tagging. The fastest tagging result was obtained
with a 1,000,000-word corpus and 30 map processes.
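As a hypothetical single-process illustration of the map/reduce counting pattern that underlies dictionary and tag-token creation (the function names are ours, not the paper's, and a real deployment would shard the map and reduce phases across workers):

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Emit a (token, 1) pair for every token on the line.
    for token in line.split():
        yield token, 1

def reduce_phase(pairs):
    # Sum the counts emitted for each distinct key.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

def mapreduce(lines):
    # Map every line, then reduce the combined stream of pairs.
    return reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
```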
|
1208.3056
|
Polar Codes for Nonasymmetric Slepian-Wolf Coding
|
cs.IT math.IT
|
A method is presented to construct a nonasymmetric distributed source coding
(DSC) scheme using polar codes that can achieve any point on the dominant face
of the Slepian-Wolf (SW) rate region for sources with uniform marginals. In
addition to the nonasymmetric case, we also show explicitly how asymmetric and
single-source compression are performed using the successive cancellation (SC)
polar decoder. We then present simulation results that exhibit the performance
of the considered methods.
|
1208.3091
|
An Adaptive Successive Cancellation List Decoder for Polar Codes with
Cyclic Redundancy Check
|
cs.IT math.IT
|
In this letter, we propose an adaptive SC (Successive Cancellation)-List
decoder for polar codes with CRC. This adaptive SC-List decoder iteratively
increases the list size until the decoder output contains at least one
surviving path that passes the CRC. Simulations show that the adaptive SC-List decoder
provides significant complexity reduction. We also demonstrate that polar code
(2048, 1024) with 24-bit CRC decoded by our proposed adaptive SC-List decoder
with very large list size can achieve a frame error rate FER=0.001 at
Eb/No=1.1dB, which is about 0.2dB from the information theoretic limit at this
block length.
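The adaptive loop itself can be sketched as follows. Here `decode_with_list` stands in for a real SC-List polar decoder and is a hypothetical callable; the doubling schedule is our choice of how to "iteratively increase" the list size:

```python
import zlib

def crc_ok(payload: bytes, crc: int) -> bool:
    # Stand-in CRC check (the paper uses a 24-bit CRC; zlib's CRC-32
    # serves only to illustrate the pass/fail test).
    return zlib.crc32(payload) == crc

def adaptive_sc_list_decode(decode_with_list, crc, l_max=32):
    """Grow the list size until some candidate path passes the CRC.

    decode_with_list(L) returns the L most likely decoding paths.
    Returns (codeword, list_size_used), or (None, final_size) on failure.
    """
    size = 1
    while size <= l_max:
        for candidate in decode_with_list(size):
            if crc_ok(candidate, crc):
                return candidate, size
        size *= 2  # No survivor passed the CRC: retry with a larger list.
    return None, size
```

Because most frames succeed at small list sizes, the average complexity stays close to plain SC decoding while retaining the large-list error performance.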
|
1208.3101
|
Statistical Common Author Networks (SCAN)
|
cs.DL cs.SI physics.soc-ph
|
A new method for visualizing the relatedness of scientific areas is developed
that is based on measuring the overlap of researchers between areas. It is
found that closely related areas have a high propensity to share a larger
number of common authors. A methodology for comparing areas of vastly different
sizes and for handling name homonymy is constructed, allowing for the robust
deployment of this method on real data sets. A statistical analysis of the
probability distributions of the common-author overlap that accounts for noise
is carried out, along with the production of network maps with weighted links
proportional to the overlap strength. This is demonstrated on two case studies,
complexity science and neutrino physics, where the level of relatedness of
the sub-areas within each field is expected to vary greatly. It is found that the
results returned by this method closely match the intuitive expectation that
the broad, multidisciplinary area of complexity science possesses areas that
are weakly related to each other while the much narrower area of neutrino
physics shows very strongly related areas.
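The abstract's statistic additionally models noise and name homonymy; as a minimal sketch of the underlying idea, a common-author overlap normalised by the smaller area (so that fields of very different sizes remain comparable — our choice of normalisation, not necessarily the paper's):

```python
def author_overlap(area_a, area_b):
    """Fraction of shared authors, normalised by the smaller area."""
    a, b = set(area_a), set(area_b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))
```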
|
1208.3122
|
Defect Diagnosis in Rotors Systems by Vibrations Data Collectors Using
Trending Software
|
cs.CE
|
Vibration measurements have been used to reliably diagnose performance
problems in machinery and related mechanical products. A vibration data
collector can be used effectively to measure and analyze the machinery
vibration content in gearboxes, engines, turbines, fans, compressors, pumps and
bearings. Ideally, a machine will have little or no vibration, indicating that
the rotating components are appropriately balanced, aligned, and well
maintained. Quick analysis and assessment of the vibration content can lead to
fault diagnosis and prognosis of a machine's ability to continue running. This
research uses vibration measurements to pinpoint mechanical defects such as
unbalance, misalignment, resonance, and part loosening, and consequently
describes the diagnostic process for engineers and technicians who wish to
understand the vibration that exists in structures and machines.
Keywords- vibration data collectors; analysis software; rotating components.
|
1208.3133
|
Color Image Compression Algorithm Based on the DCT Blocks
|
cs.CV
|
This paper presents the performance of different block-based discrete cosine
transform (DCT) algorithms for compressing color images. In this scheme, the
RGB components of a color image are converted to YCbCr before the DCT is
applied; Y is the luminance component, while Cb and Cr are the chrominance
components of the image. The image data are modified based on a classification
of image blocks into edge blocks and non-edge blocks: the edge blocks are
compressed with low compression and the non-edge blocks with high compression.
The analysis indicates that the performance of the suggested method is much
better, in that the reconstructed images are less distorted and compressed at
a higher factor.
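For reference, the orthonormal 2-D DCT-II of a single block — the transform underlying all block-based variants discussed here — can be written naively as follows (illustrative only; production codecs use fast separable implementations, and the edge/non-edge classification is not shown):

```python
import math

def dct2(block):
    """Naive orthonormal 2-D DCT-II of an n-by-n block."""
    n = len(block)

    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for i in range(n):
                for j in range(n):
                    s += (block[i][j]
                          * math.cos((2 * i + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * j + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out
```

On a smooth (non-edge) block, almost all energy lands in the low-frequency coefficients, which is what allows the aggressive compression of non-edge blocks.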
|
1208.3145
|
Metric distances derived from cosine similarity and Pearson and Spearman
correlations
|
stat.ME cs.LG
|
We investigate two classes of transformations of cosine similarity and
Pearson and Spearman correlations into metric distances, utilising the simple
tool of metric-preserving functions. The first class puts anti-correlated
objects maximally far apart. Previously known transforms fall within this
class. The second class collates correlated and anti-correlated objects. An
example of such a transformation that yields a metric distance is the sine
function when applied to centered data.
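A minimal sketch of both classes for a Pearson correlation r, under our reading of the abstract: the first class behaves like sqrt(2(1 - r)), which equals the Euclidean distance between unit-normalised centered vectors, and the sine-based member of the second class reduces to sqrt(1 - r^2):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cx = [a - mx for a in x]
    cy = [b - my for b in y]
    num = sum(a * b for a, b in zip(cx, cy))
    den = math.sqrt(sum(a * a for a in cx) * sum(b * b for b in cy))
    return num / den

def d_first_class(r):
    # Anti-correlated objects maximally far apart: the Euclidean
    # distance between the unit-normalised centered vectors.
    return math.sqrt(2.0 * (1.0 - r))

def d_second_class(r):
    # Collates correlated and anti-correlated objects: the sine of
    # the angle between the centered vectors, sqrt(1 - r^2).
    return math.sqrt(1.0 - r * r)
```

Note how perfectly anti-correlated data (r = -1) sits at the maximum distance 2 under the first class but at distance 0 under the second.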
|
1208.3148
|
Evaluating Ontology Matching Systems on Large, Multilingual and
Real-world Test Cases
|
cs.AI
|
In the field of ontology matching, the most systematic evaluation of matching
systems is established by the Ontology Alignment Evaluation Initiative (OAEI),
which is an annual campaign for evaluating ontology matching systems organized
by different groups of researchers. In this paper, we report on the results of
an intermediary OAEI campaign called OAEI 2011.5. The evaluations of this
campaign are divided into five tracks. Three of these tracks are new or have been
improved compared to previous OAEI campaigns. Overall, we evaluated 18 matching
systems. We discuss lessons learned in terms of scalability, multilingual
issues, and the ability to deal with real-world cases from different domains.
|
1208.3150
|
Low Complexity Space-Frequency MIMO OFDM System for Double-Selective
Fading Channels
|
cs.IT math.IT
|
This paper presents a highly robust space-frequency block coded (SFBC)
multiple-input multiple-output (MIMO) orthogonal frequency division
multiplexing (OFDM) system. The proposed system is based on applying a short
block length Walsh Hadamard transform (WHT) after the SFBC encoder. The main
advantage of the proposed system is that the channel frequency responses over
every two adjacent subcarriers become equal. This interesting result provides
exceptional operating conditions for SFBC-OFDM systems transmitting over
time- and frequency-selective fading channels. Monte Carlo simulation is used to
evaluate the bit error rate (BER) performance of the proposed system using
various wireless channels with different degrees of frequency selectivity and
Doppler spreads. The simulation results demonstrated that the proposed scheme
substantially outperforms the standard SFBC-OFDM and the space-time block coded
(STBC) OFDM systems in severe time-varying frequency-selective fading channels.
Moreover, the proposed system has very low complexity because it is based on
short block length WHT.
|
1208.3151
|
Proceedings First International Workshop on Hybrid Systems and Biology
|
cs.CE cs.LO cs.SY
|
This volume contains the proceedings of the First International Workshop on
Hybrid Systems and Biology (HSB 2012), that will be held in Newcastle upon
Tyne, UK, on the 3rd September, 2012. HSB 2012 is a satellite event of the 23rd
International Conference on Concurrency Theory (CONCUR 2012).
This workshop aims to bring together scientists working in the area of hybrid
modeling applied to systems biology, in order to discuss goals achieved so
far, current challenges, and possible future developments.
|
1208.3153
|
Inferring Chemical Reaction Patterns Using Rule Composition in Graph
Grammars
|
cs.DM cs.CE q-bio.MN
|
Modeling molecules as undirected graphs and chemical reactions as graph
rewriting operations is a natural and convenient approach to modeling
chemistry. Graph grammar rules are most naturally employed to model elementary
reactions like merging, splitting, and isomerisation of molecules. It is often
convenient, in particular in the analysis of larger systems, to summarize
several subsequent reactions into a single composite chemical reaction. We use
a generic approach for composing graph grammar rules to define chemically
useful rule compositions. We iteratively apply these rule compositions to
elementary transformations in order to automatically infer complex
transformation patterns. This is useful for instance to understand the net
effect of complex catalytic cycles such as the Formose reaction. The
automatically inferred graph grammar rule is a generic representative that also
covers the overall reaction pattern of the Formose cycle, namely two carbonyl
groups that can react with a bound glycolaldehyde to form a second glycolaldehyde.
Rule composition also can be used to study polymerization reactions as well as
more complicated iterative reaction schemes. Terpenes and the polyketides, for
instance, form two naturally occurring classes of compounds of utmost
pharmaceutical interest that can be understood as "generalized polymers"
consisting of five-carbon (isoprene) and two-carbon units, respectively.
|
1208.3213
|
Ergodicity, Decisions, and Partial Information
|
math.PR cs.IT math.IT math.OC
|
In the simplest sequential decision problem for an ergodic stochastic process
X, at each time n a decision u_n is made as a function of past observations
X_0,...,X_{n-1}, and a loss l(u_n,X_n) is incurred. In this setting, it is
known that one may choose (under a mild integrability assumption) a decision
strategy whose pathwise time-average loss is asymptotically smaller than that
of any other strategy. The corresponding problem in the case of partial
information proves to be much more delicate, however: if the process X is not
observable, but decisions must be based on the observation of a different
process Y, the existence of pathwise optimal strategies is not guaranteed.
The aim of this paper is to exhibit connections between pathwise optimal
strategies and notions from ergodic theory. The sequential decision problem is
developed in the general setting of an ergodic dynamical system (\Omega,B,P,T)
with partial information Y\subseteq B. The existence of pathwise optimal
strategies is grounded in two basic properties: the conditional ergodic theory
of the dynamical system and the complexity of the loss function. When the loss
function is not too complex, a general sufficient condition for the existence
of pathwise optimal strategies is that the dynamical system is a conditional
K-automorphism relative to the past observations \bigvee_n T^n Y. If the
conditional ergodicity assumption is strengthened, the complexity assumption
can be weakened. Several examples demonstrate the interplay between complexity
and ergodicity, which does not arise in the case of full information. Our
results also yield a decision-theoretic characterization of weak mixing in
ergodic theory, and establish pathwise optimality of ergodic nonlinear filters.
|
1208.3235
|
First-Passage Time and Large-Deviation Analysis for Erasure Channels
with Memory
|
cs.IT math.IT
|
This article considers the performance of digital communication systems
transmitting messages over finite-state erasure channels with memory.
Information bits are protected from channel erasures using error-correcting
codes; successful receptions of codewords are acknowledged at the source
through instantaneous feedback. The primary focus of this research is on
delay-sensitive applications, codes with finite block lengths and, necessarily,
non-vanishing probabilities of decoding failure. The contribution of this
article is twofold. A methodology to compute the distribution of the time
required to empty a buffer is introduced. Based on this distribution, the mean
hitting time to an empty queue and delay-violation probabilities for specific
thresholds can be computed explicitly. The proposed techniques apply to
situations where the transmit buffer contains a predetermined number of
information bits at the onset of the data transfer. Furthermore, as additional
performance criteria, large deviation principles are obtained for the empirical
mean service time and the average packet-transmission time associated with the
communication process. This rigorous framework yields a pragmatic methodology
to select code rate and block length for the communication unit as functions of
the service requirements. Examples motivated by practical systems are provided
to further illustrate the applicability of these techniques.
|
1208.3241
|
Hidden information and regularities of information dynamics III
|
nlin.AO cs.IT math.IT
|
Part 3 of this presentation studies the evolutionary information processes and
regularities of evolution dynamics, evaluated by an entropy functional (EF) of
a random field (modeled by a diffusion information process) and an
informational path functional (IPF) on trajectories of the related dynamic
process (Lerner 2012). The integral information measure on the process'
trajectories accumulates and encodes inner connections and dependencies between
the information states, and contains more information than a sum of Shannon's
entropies, which measures and encodes each process's states separately. Cutting
off the process' measured information under action of impulse controls (Lerner
2012a), extracts and reveals hidden information, covering the states'
correlations in a multi-dimensional random process, and implements the EF-IPF
minimax variation principle (VP). The approach models an information observer
(Lerner 2012b)-as an extractor of such information, which is able to convert
the collected information of the random process in the information dynamic
process and organize it in the hierarchical information network (IN), Part2
(Lerner, 2012c). The IN's highest level of the structural hierarchy, measured
by a maximal quantity and quality of the accumulated cooperative information,
evaluates the observer's intelligence level, associated with its ability to
recognize and build such structure of a meaningful hidden information. The
considered evolution of optimal extraction, assembling, cooperation, and
organization of this information in the IN, satisfying the VP, creates the
phenomena of an evolving observer's intelligence. The requirements of
preserving the evolutionary hierarchy impose the restrictions that limit the
observer's intelligence level in the IN. The cooperative information geometry,
evolving under observations, limits the size and volumes of a particular
observer.
|
1208.3251
|
Toward Resource-Optimal Consensus over the Wireless Medium
|
cs.IT math.IT
|
We carry out a comprehensive study of the resource cost of averaging
consensus in wireless networks. Most previous approaches suppose a graphical
network, which abstracts away crucial features of the wireless medium, and
measure resource consumption only in terms of the total number of transmissions
required to achieve consensus. Under a path-loss dominated model, we study the
resource requirements of consensus with respect to three wireless-appropriate
metrics: total transmit energy, elapsed time, and time-bandwidth product. First
we characterize the performance of several popular gossip algorithms, showing
that they may be order-optimal with respect to transmit energy but are strictly
suboptimal with respect to elapsed time and time-bandwidth product. Further, we
propose a new consensus scheme, termed hierarchical averaging, and show that it
is nearly order-optimal with respect to all three metrics. Finally, we examine
the effects of quantization, showing that hierarchical averaging provides a
nearly order-optimal tradeoff between resource consumption and quantization
error.
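A minimal sketch of the standard randomized pairwise gossip baseline discussed above. This is the graphical-network abstraction the paper argues against, so it deliberately ignores transmit energy, elapsed time, and bandwidth; edge activation order is our simplification:

```python
import random

def gossip_average(values, edges, rounds=2000, seed=0):
    """Randomized pairwise gossip on a graph.

    Each round a uniformly random edge (i, j) is activated and both
    endpoints replace their values with the pairwise average, so the
    global average is conserved at every step.
    """
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x
```

The spread between node values contracts geometrically with the number of activations, which is why gossip can be transmission-efficient while still being slow in elapsed time over a shared wireless medium.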
|
1208.3252
|
The Effect of Exogenous Inputs and Defiant Agents on Opinion Dynamics
with Local and Global Interactions
|
physics.soc-ph cs.SI nlin.AO
|
Most of the conventional models for opinion dynamics mainly account for a
fully local influence, where myopic agents decide their actions after they
interact with other agents that are adjacent to them. For example, in the case
of social interactions, this includes family, friends, and other strong social
ties. The model proposed in this contribution, embodies a global influence as
well where, by global, we mean that each node also observes a sample of the
average behavior of the entire population (in the social example, people
observe other people on the streets, subway, and other social venues). We
consider a case where nodes have dichotomous states (examples include elections
with two major parties, whether or not to adopt a new technology or product,
and any yes/no opinion such as in voting on a referendum). The dynamics of
states on a network with arbitrary degree distribution are studied. For a given
initial condition, we find the probability of reaching consensus on each state
and the expected time to reach consensus. The effect of an exogenous bias on the
average orientation of the system is investigated, to model mass media. To do
so, we add an external field to the model that favors one of the states over
the other. This field interferes with the regular decision process of each node
and creates a constant probability to lean towards one of the states. We solve
for the average state of the system as a function of time for given initial
conditions. Then anti-conformists (stubborn nodes who never revise their
states) are added to the network, in an effort to circumvent the external bias.
We find necessary conditions on the number of these defiant nodes required to
cancel the effect of the external bias. Our analysis is based on a mean field
approximation of the agent opinions.
|