| id | title | categories | abstract |
|---|---|---|---|
1205.6018
|
Optimal Strategies for Communication and Remote Estimation with an
Energy Harvesting Sensor
|
cs.SY math.OC
|
We consider a remote estimation problem with an energy harvesting sensor and
a remote estimator. The sensor observes the state of a discrete-time source
which may be a finite state Markov chain or a multi-dimensional linear Gaussian
system. It harvests energy from its environment (say, for example, through a
solar cell) and uses this energy for the purpose of communicating with the
estimator. Due to the randomness of energy available for communication, the
sensor may not be able to communicate all the time. The sensor may also want to
save its energy for future communications. The estimator relies on messages
communicated by the sensor to produce real-time estimates of the source state.
We consider the problem of finding a communication scheduling strategy for the
sensor and an estimation strategy for the estimator that jointly minimize an
expected sum of communication and distortion costs over a finite time horizon.
Our goal of joint optimization leads to a decentralized decision-making
problem. By viewing the problem from the estimator's perspective, we obtain a
dynamic programming characterization for the decentralized decision-making
problem that involves optimization over functions. Under some symmetry
assumptions on the source statistics and the distortion metric, we show that an
optimal communication strategy is described by easily computable thresholds and
that the optimal estimate is a simple function of the most recently received
sensor observation.
|
1205.6024
|
A Social Influence Model Based On Circuit Theory
|
cs.SI physics.soc-ph
|
Understanding the behaviors of information propagation is essential for the
effective exploitation of social influence in social networks. However, few
existing influence models are tractable and efficient for describing the
information propagation process, especially when dealing with the difficulty of
incorporating the effects of combined influences from multiple nodes. To this
end, in this paper, we provide a social influence model that alleviates this
obstacle based on electrical circuit theory. This model vastly improves the
efficiency of measuring the influence strength between any pair of nodes, and
can be used to interpret the real-world influence propagation process in a
coherent way. In addition, this circuit theory model provides a natural
solution to the social influence maximization problem. When applied to
real-world data, the circuit theory model consistently outperforms the
state-of-the-art methods and can greatly alleviate the computation burden of
the influence maximization problem.
|
1205.6031
|
Towards a Mathematical Foundation of Immunology and Amino Acid Chains
|
stat.ML cs.LG q-bio.GN
|
We attempt to set a mathematical foundation of immunology and amino acid
chains. To measure the similarities of these chains, a kernel on strings is
defined using only the sequence of the chains and a good amino acid
substitution matrix (e.g. BLOSUM62). The kernel is used in learning machines to
predict binding affinities of peptides to human leukocyte antigens DR (HLA-DR)
molecules. On both fixed-allele (Nielsen and Lund 2009) and pan-allele (Nielsen
et al. 2010) benchmark databases, our algorithm achieves state-of-the-art
performance. The kernel is also used to define a distance on an HLA-DR allele
set, based on which a clustering analysis precisely recovers the serotype
classifications assigned by the WHO (Nielsen and Lund 2009; Marsh et al. 2010).
These results suggest that our kernel relates well the chain structure of both
peptides and HLA-DR molecules to their biological functions, and that it offers
a simple, powerful and promising methodology to immunology and amino acid chain
studies.
|
1205.6033
|
A "well-balanced" finite volume scheme for blood flow simulation
|
math.NA cs.CE cs.NA
|
We are interested in simulating blood flow in arteries with a one-dimensional
model. Thanks to recent developments in the analysis of hyperbolic systems of
conservation laws (in the Saint-Venant/shallow water equations context), we
construct a simple finite volume scheme. We focus on conservation properties of
this scheme, which were not previously considered. To emphasize the necessity
of this scheme, we show how an overly simple numerical scheme may induce
spurious flows when the basic static shape of the radius changes. On the
contrary, the proposed scheme is "well-balanced": it preserves the Q = 0
equilibria. Examples of analytical or linearized solutions, with and without
viscous damping, are then presented to validate the calculations. The influence
of an abrupt change of basic radius is emphasized in the case of an aneurysm.
|
1205.6114
|
Quantitative Methods for Comparing Different HVAC Control Schemes
|
cs.SY math.OC
|
Experimentally comparing the energy usage and comfort characteristics of
different controllers in heating, ventilation, and air-conditioning (HVAC)
systems is difficult because variations in weather and occupancy conditions
preclude the possibility of establishing equivalent experimental conditions
over timescales of hours, days, and weeks. This paper is concerned with
defining quantitative metrics of energy usage and occupant comfort, which can
be computed and compared in a rigorous manner that is capable of determining
whether differences between controllers are statistically significant in the
presence of such environmental fluctuations. Experimental case studies are
presented that compare two alternative controllers (a schedule controller and a
hybrid system learning-based model predictive controller) to the default
controller in a building-wide HVAC system. Lastly, we discuss how our proposed
methodology may also be able to quantify the efficiency of other building
automation systems.
|
1205.6152
|
Robust frequency offset estimator for OFDM over fast varying multipath
channel
|
cs.IT math.IT
|
This paper presents a robust carrier frequency offset (CFO) estimation
algorithm suitable for fast varying multipath channels. The proposed algorithm
estimates CFO both in time-domain and frequency-domain using two carefully
designed sequences. This novel technique possesses high accuracy as well as
large estimation range and works well in fast varying channels.
|
1205.6154
|
Potentials and Limits of Super-Resolution Algorithms and Signal
Reconstruction from Sparse Data
|
physics.optics cs.CV math-ph math.MP
|
A common distortion in videos is image instability in the form of chaotic
global and local displacements. These instabilities can be used to enhance
image resolution by subpixel elastic registration. In this work, we
investigate the performance of such methods in terms of their ability to
improve the resolution by accumulating several frames. The second part of this work deals
with reconstruction of discrete signals from a subset of samples under
different basis functions such as DFT, Haar, Walsh, Daubechies wavelets and CT
(Radon) projections.
|
1205.6179
|
A Mixed Integer Programming Model Formulation for Solving the Lot-Sizing
Problem
|
math.OC cs.AI
|
This paper addresses a mixed integer programming (MIP) formulation for the
multi-item uncapacitated lot-sizing problem, inspired by a trailer
manufacturer. The proposed MIP model is used to determine the optimum order
quantity, the optimum order time, and the minimum total cost of purchasing,
ordering, and holding over a predefined planning horizon. This problem is
known to be NP-hard. The model was implemented and solved using LINGO 13.0.
|
1205.6184
|
On the duals of geometric Goppa codes from norm-trace curves
|
math.AG cs.IT math.IT
|
In this paper we study the dual codes of a wide family of evaluation codes on
norm-trace curves. We explicitly determine their minimum distance and give a
lower bound for the number of their minimum-weight codewords. A general
geometric approach is performed and applied to study in particular the dual
codes of one-point and two-point codes arising from norm-trace curves through
Goppa's construction, providing in many cases their minimum distance and some
bounds on the number of their minimum-weight codewords. The results are
obtained by showing that the supports of the minimum-weight codewords of the
studied codes obey some precise geometric laws as zero-dimensional subschemes
of the projective plane. Finally, the dimension of some classical two-point
Goppa codes on norm-trace curves is explicitly computed.
|
1205.6185
|
Power Consumption in Spatial Cognition
|
cs.IT math.IT
|
Multiple Input Multiple Output (MIMO) adds a new dimension to be exploited in
Cognitive Radio (CR) by simultaneously serving several users. The spatial
domain that is added through MIMO is another system resource that has to be
optimized, and shared when possible. In this paper, we present a spatial
sharing scheme that is carried out through zero-forcing beamforming (ZFB).
Power consumption in such a scenario is discussed and compared to the
single-user case, to evaluate the feasibility of employing spatial cognition
from the power perspective. Closed-form expressions are derived for the
consumed power and data rate under different transmission schemes. Finally, a
joint power-rate metric is deduced to provide a comprehensive measure of the
expediency of the spatial cognitive scenario.
|
1205.6186
|
Diamond Networks with Bursty Traffic: Bounds on the Minimum
Energy-Per-Bit
|
cs.IT math.IT
|
When data traffic in a wireless network is bursty, small amounts of data
sporadically become available for transmission, at times that are unknown at
the receivers, and an extra amount of energy must be spent at the transmitters
to overcome this lack of synchronization between the network nodes. In
practice, pre-defined header sequences are used with the purpose of
synchronizing the different network nodes. However, in networks where relays
must be used for communication, the overhead required for synchronizing the
entire network may be very significant.
In this work, we study the fundamental limits of energy-efficient
communication in an asynchronous diamond network with two relays. We formalize
the notion of relay synchronization by saying that a relay is synchronized if
the conditional entropy of the arrival time of the source message given the
received signals at the relay is small. We show that the minimum energy-per-bit
for bursty traffic in diamond networks is achieved with a coding scheme where
each relay is either synchronized or not used at all. A consequence of this
result is the derivation of a lower bound to the minimum energy-per-bit for
bursty communication in diamond networks. This bound allows us to show that
schemes that perform the tasks of synchronization and communication separately
(i.e., with synchronization signals preceding the communication block) can
achieve the minimum energy-per-bit to within a constant fraction that ranges
from 2 in the synchronous case to 1 in the highly asynchronous regime.
|
1205.6210
|
Learning Dictionaries with Bounded Self-Coherence
|
stat.ML cs.LG
|
Sparse coding in learned dictionaries has been established as a successful
approach for signal denoising, source separation and solving inverse problems
in general. A dictionary learning method adapts an initial dictionary to a
particular signal class by iteratively computing an approximate factorization
of a training data matrix into a dictionary and a sparse coding matrix. The
learned dictionary is characterized by two properties: the coherence of the
dictionary to observations of the signal class, and the self-coherence of the
dictionary atoms. A high coherence to the signal class enables the sparse
coding of signal observations with a small approximation error, while a low
self-coherence of the atoms guarantees atom recovery and a more rapid residual
error decay rate for the sparse coding algorithm. The two goals of high signal
coherence and low self-coherence are typically in conflict; therefore, one
seeks a trade-off between them, depending on the application. We present a dictionary
learning method with an effective control over the self-coherence of the
trained dictionary, enabling a trade-off between maximizing the sparsity of
codings and approximating an equiangular tight frame.
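The self-coherence that such a method controls can be computed directly from the Gram matrix of the normalized atoms. The helper below is a generic illustration of that quantity (the function name and interface are my own, not from the paper):

```python
import numpy as np

def self_coherence(D: np.ndarray) -> float:
    """Maximum absolute inner product between distinct unit-norm
    dictionary atoms (columns of D): the quantity being bounded."""
    Dn = D / np.linalg.norm(D, axis=0)  # normalize each atom
    G = np.abs(Dn.T @ Dn)               # Gram matrix of atom correlations
    np.fill_diagonal(G, 0.0)            # ignore an atom paired with itself
    return float(G.max())
```

An orthonormal dictionary has self-coherence 0, duplicated atoms give 1, and an equiangular tight frame attains the minimum possible value for its dimensions.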
|
1205.6228
|
Structure and Overlaps of Communities in Networks
|
cs.SI physics.soc-ph
|
One of the main organizing principles in real-world social, information and
technological networks is that of network communities, where sets of nodes
organize into densely linked clusters. Even though detection of such
communities is of great interest, understanding of the structure of communities
in large networks remains relatively limited. Due to the unavailability of
labeled ground-truth data it is practically impossible to evaluate and compare
different models and notions of communities on a large scale.
In this paper we identify 6 large social, collaboration, and information
networks where nodes explicitly state their community memberships. We define
ground-truth communities by using these explicit memberships. We then
empirically study how such ground-truth communities emerge in networks and how
they overlap. We observe some surprising phenomena. First, ground-truth
communities contain high-degree hub nodes that reside in community overlaps and
link to most of the members of the community. Second, the overlaps of
communities are more densely connected than the non-overlapping parts of
communities, in contrast to the conventional wisdom that community overlaps are
more sparsely connected than the communities themselves.
Existing models of network communities do not capture dense community
overlaps. We present the Community-Affiliation Graph Model (AGM), a conceptual
model of network community structure, which reliably captures the overall
structure of networks as well as the overlapping nature of network communities.
|
1205.6233
|
Defining and Evaluating Network Communities based on Ground-truth
|
cs.SI physics.soc-ph
|
Nodes in real-world networks organize into densely linked communities where
edges appear with high concentration among the members of the community.
Identifying such communities of nodes has proven to be a challenging task
mainly due to a plethora of definitions of a community, intractability of
algorithms, issues with evaluation and the lack of a reliable gold-standard
ground-truth.
In this paper we study a set of 230 large real-world social, collaboration
and information networks where nodes explicitly state their group memberships.
For example, in social networks nodes explicitly join various interest based
social groups. We use such groups to define a reliable and robust notion of
ground-truth communities. We then propose a methodology which allows us to
compare and quantitatively evaluate how different structural definitions of
network communities correspond to ground-truth communities. We choose 13
commonly used structural definitions of network communities and examine their
sensitivity, robustness and performance in identifying the ground-truth. We
show that the 13 structural definitions are heavily correlated and naturally
group into four classes. We find that two of these definitions, Conductance and
Triad-participation-ratio, consistently give the best performance in
identifying ground-truth communities. We also investigate a task of detecting
communities given a single seed node. We extend the local spectral clustering
algorithm into a heuristic, parameter-free community detection method that
easily scales to networks with more than a hundred million nodes. The proposed
method achieves 30% relative improvement over current local clustering methods.
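Of the two best-performing definitions named above, conductance is straightforward to compute from an adjacency structure. The sketch below uses the standard definition (cut edges divided by the smaller of the two volumes) as a generic illustration, not code from the paper:

```python
def conductance(adj: dict, community: set) -> float:
    """Conductance of a node set: number of edges leaving the set,
    divided by the smaller of the set's volume (sum of degrees) and
    the volume of the rest of the graph. `adj` maps each node to the
    set of its neighbors. Lower conductance = better community."""
    cut = sum(1 for u in community for v in adj[u] if v not in community)
    vol_s = sum(len(adj[u]) for u in community)
    vol_rest = sum(len(adj[u]) for u in adj) - vol_s
    denom = min(vol_s, vol_rest)
    return cut / denom if denom > 0 else 0.0
```

For example, two triangles joined by a single edge give each triangle a conductance of 1/7: one cut edge over a volume of seven degree endpoints.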
|
1205.6278
|
Agent-based simulations of emotion spreading in online social networks
|
physics.soc-ph cs.SI
|
Quantitative analysis of empirical data from online social networks reveals
group dynamics in which emotions are involved (\v{S}uvakov et al.). A full
understanding of the underlying mechanisms, however, remains a challenging
task. Using agent-based computer simulations, in this paper we study the
dynamics of emotional communications in online social networks. The rules that
guide how the agents interact are motivated, and the realistic network
structure and some important parameters are inferred, from an empirical dataset
of the \texttt{MySpace} social network. An agent's emotional state is
characterized by two variables representing psychological arousal (reactivity
to stimuli) and valence (attractiveness or aversiveness), by which common
emotions can be defined. An agent's action is triggered by increased arousal.
High-resolution dynamics is implemented in which each message carrying an
agent's emotion along a network link is identified and its effect on the
recipient agent is treated as continuously aging in time. Our results
demonstrate that (i) aggregated group behaviors may arise from individual
emotional actions of agents; (ii) collective states characterized by temporal
correlations and dominant positive emotions emerge, similar to the empirical
system; and (iii) the nature of the driving signal, i.e., the rate at which
users step into the online world, has profound effects on building the coherent
behaviors observed for users in online social networks. Further, our
simulations suggest that spreading patterns differ for emotions with entirely
different emotional content, e.g., "enthusiastic" and "ashamed". {\bf All data
used in this study are fully anonymized.}
|
1205.6309
|
Improper Signaling on the Two-user SISO Interference Channel
|
cs.IT math.IT
|
On a single-input single-output (SISO) interference channel (IC), conventional
non-cooperative strategies encourage players to selfishly maximize their
transmit data rates, neglecting the performance loss caused to and by other
players. In the case of proper complex Gaussian noise, the maximum entropy
theorem shows that the best-response strategy is to transmit with proper
signals (symmetric complex Gaussian symbols). However, such an equilibrium
leads to zero degrees of freedom due to the saturation of interference.
With improper signals (asymmetric complex Gaussian symbols), an extra freedom
of optimization is available. In this paper, we study the impact of improper
signaling on the 2-user SISO IC. We explore the achievable rate region with
non-cooperative strategies by computing a Nash equilibrium of a non-cooperative
game with improper signaling. Then, assuming cooperation between players, we
study the achievable rate region of improper signals. We propose using improper
rank-one signals for their simplicity and ease of implementation. Despite their
simplicity, rank-one signals achieve a sum rate close to that of full-rank
improper signals. We characterize the Pareto boundary, i.e., the outer boundary
of the achievable rate region, of improper rank-one signals with a single
real-valued parameter; we compute the closed-form solution of the Pareto
boundary with non-zero-forcing strategies, the maximum sum rate point, and the
max-min fairness solution with zero-forcing strategies. Analysis of the extreme
SNR regimes shows that proper signals maximize the wide-band slope of spectral
efficiency, whereas improper signals optimize the high-SNR power offset.
|
1205.6326
|
A Framework for Evaluating Approximation Methods for Gaussian Process
Regression
|
stat.ML cs.LG stat.CO
|
Gaussian process (GP) predictors are an important component of many Bayesian
approaches to machine learning. However, even a straightforward implementation
of Gaussian process regression (GPR) requires O(n^2) space and O(n^3) time for
a dataset of n examples. Several approximation methods have been proposed, but
there is a lack of understanding of the relative merits of the different
approximations, and in what situations they are most useful. We recommend
assessing the quality of the predictions obtained as a function of the compute
time taken, and comparing to standard baselines (e.g., Subset of Data and
FITC). We empirically investigate four different approximation algorithms on
four different prediction problems, and make our code available to encourage
future comparisons.
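For context on why approximations matter, exact GPR is only a few lines of code but scales cubically. The sketch below is generic textbook GPR with a squared-exponential kernel (not code from the paper's framework); the comments mark where the O(n^2) space and O(n^3) time arise:

```python
import numpy as np

def gpr_mean(X, y, Xs, lengthscale=0.2, noise=0.05):
    """Posterior mean of exact GP regression with a squared-exponential
    kernel. Factorizing the n x n kernel matrix is the O(n^3) step;
    storing it is the O(n^2) step."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)
    K = k(X, X) + noise ** 2 * np.eye(len(X))   # O(n^2) space
    L = np.linalg.cholesky(K)                   # O(n^3) time
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return k(Xs, X) @ alpha                     # mean at test points
```

The Subset of Data baseline mentioned above amounts to calling this with a random subsample of (X, y), trading accuracy for an m^3 cost with m much smaller than n.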
|
1205.6343
|
PageRank of integers
|
cs.IR cond-mat.stat-mech math.NT nlin.CD
|
We build up a directed network tracing links from a given integer to its
divisors and analyze the properties of the Google matrix of this network. The
PageRank vector of this matrix is computed numerically, and it is shown that
its probability is inversely proportional to the PageRank index, thus being
similar to Zipf's law and to the dependence established for the World Wide Web.
The spectrum of the Google matrix of integers is characterized by a large gap
and a relatively small number of nonzero eigenvalues. A simple semi-analytical
expression for the PageRank of integers is derived that allows one to find this
vector for matrices of size in the billions. This network provides a new
PageRank order of the integers.
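The divisor network described above is easy to reproduce at small scale. The sketch below is a brute-force illustration (not the paper's semi-analytical method): it builds a column-stochastic Google matrix linking each integer to its proper divisors and runs power iteration with the usual damping factor:

```python
import numpy as np

def divisor_pagerank(N: int, alpha: float = 0.85, iters: int = 200) -> np.ndarray:
    """PageRank of the integers 1..N in the network that links each
    integer to its proper divisors (illustrative brute-force sketch)."""
    G = np.zeros((N, N))
    for n in range(2, N + 1):
        divisors = [d for d in range(1, n) if n % d == 0]
        for d in divisors:
            G[d - 1, n - 1] = 1.0 / len(divisors)  # column-stochastic links
    G[:, 0] = 1.0 / N  # integer 1 has no proper divisors: dangling node
    p = np.full(N, 1.0 / N)
    for _ in range(iters):  # power iteration on the Google matrix
        p = alpha * G @ p + (1.0 - alpha) / N
    return p
```

In this toy network the integer 1, which every other integer links to, collects the largest PageRank, and the probabilities then decay with the PageRank index in the Zipf-like fashion described above.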
|
1205.6352
|
Generalized sequential tree-reweighted message passing
|
cs.CV
|
This paper addresses the problem of approximate MAP-MRF inference in general
graphical models. Following [36], we consider a family of linear programming
relaxations of the problem where each relaxation is specified by a set of
nested pairs of factors for which the marginalization constraint needs to be
enforced. We develop a generalization of the TRW-S algorithm [9] for this
problem, where we use a decomposition into junction chains, monotonic w.r.t.
some ordering on the nodes. This generalizes the monotonic chains in [9] in a
natural way. We also show how to deal with nested factors in an efficient way.
Experiments show an improvement over min-sum diffusion, MPLP and subgradient
ascent algorithms on a number of computer vision and natural language
processing problems.
|
1205.6373
|
Publication Induced Research Analysis (PIRA) - Experiments on Real Data
|
cs.DL cs.SI physics.soc-ph
|
This paper describes the first results obtained by implementing a novel
approach to rank vertices in a heterogeneous graph, based on the PageRank
family of algorithms and applied here to the bipartite graph of papers and
authors as a first evaluation of its relevance on real data samples. With this
approach to evaluating research activities, the ranking of a paper/author
depends on that of the papers/authors citing it or them. We compare the results
against existing ranking methods (including methods that simply apply PageRank
to the graph of papers or the graph of authors) through the analysis of simple
scenarios based on a real dataset built from DBLP and CiteSeerX. The results
show that in all examined cases our method yields the most pertinent ranking,
which leads us to orient our future work toward optimizing the execution of
this algorithm.
|
1205.6376
|
Analysis and study on text representation to improve the accuracy of the
Normalized Compression Distance
|
cs.IT math.IT
|
The huge amount of information stored in text form makes methods that deal
with texts really interesting. This thesis focuses on dealing with texts using
compression distances. More specifically, the thesis takes a small step towards
understanding both the nature of texts and the nature of compression distances.
Broadly speaking, this is done by exploring the effects that several distortion
techniques have on one of the most successful distances in the family of
compression distances, the Normalized Compression Distance (NCD).
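As a reference point for what is being distorted, the NCD itself is simple to compute. The sketch below uses zlib as the compressor C, with NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)); it is a generic illustration, not the thesis's exact experimental setup:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance with zlib as the compressor:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Similar texts compress well together, giving a distance near 0; unrelated texts yield a distance near 1.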
|
1205.6391
|
A Brief Summary of Dictionary Learning Based Approach for Classification
|
cs.CV
|
This note presents some representative methods for classification that are
based on dictionary learning (DL). We do not review sophisticated methods or
frameworks that involve DL for classification, such as online DL and spatial
pyramid matching (SPM); rather, we concentrate on direct DL-based
classification methods. Here, a "direct DL-based method" is an approach that
deals directly with the DL framework by adding meaningful penalty terms. By
listing some representative methods, we can roughly divide them into two
categories: (1) directly making the dictionary discriminative, and (2) forcing
the sparse coefficients to be discriminative in order to push the
discrimination power of the dictionary. From this taxonomy, we can expect some
extensions of these methods as future research.
|
1205.6396
|
Effective Listings of Function Stop words for Twitter
|
cs.IR cs.CL
|
Many words in documents recur very frequently but are essentially meaningless
as they are used to join words together in a sentence. It is commonly
understood that stop words do not contribute to the context or content of
textual documents. Due to their high frequency of occurrence, their presence in
text mining presents an obstacle to the understanding of the content in the
documents. To eliminate these bias effects, most text mining software or
approaches make use of stop word lists to identify and remove those words.
However, the development of such stop word lists is difficult and inconsistent
between textual sources. This problem is further aggravated by sources such as
Twitter, which are highly repetitive or similar in nature. In this paper, we
examine the original work using term frequency, inverse document frequency,
and term adjacency for developing a stop word list for the Twitter data
source. We propose a new technique using combinatorial values as an
alternative measure to effectively list stop words.
|
1205.6406
|
Bounds for projective codes from semidefinite programming
|
cs.IT math.IT
|
We apply the semidefinite programming method to derive bounds for projective
codes over a finite field.
|
1205.6412
|
An Evolutionary Approach to Drug-Design Using a Novel Neighbourhood
Based Genetic Algorithm
|
cs.NE cs.CE
|
The present work provides a new approach to evolving ligand structures, which
represent possible drugs to be docked to the active site of the target
protein. The structure is represented as a tree, where each non-empty node
represents a functional group. It is assumed that the active site
configuration of the target protein is known, along with the positions of the
essential residues. In this paper the interaction energy of the ligands with
the protein target is minimized. Moreover, the appropriate size of the tree is
difficult to determine and will differ between active sites. To overcome this
difficulty, a variable tree size configuration is used for designing ligands.
The optimization is done using a novel Neighbourhood-Based Genetic Algorithm
(NBGA), which uses a dynamic neighbourhood topology. To obtain variable tree
sizes, a variable-length version of the above algorithm is devised. To judge
the merit of the algorithm, it is initially applied to the well-known
Travelling Salesman Problem (TSP).
|
1205.6432
|
Multiclass Learning Approaches: A Theoretical Comparison with
Implications
|
cs.LG
|
We theoretically analyze and compare the following five popular multiclass
classification methods: One vs. All, All Pairs, Tree-based classifiers, Error
Correcting Output Codes (ECOC) with randomly generated code matrices, and
Multiclass SVM. In the first four methods, the classification is based on a
reduction to binary classification. We consider the case where the binary
classifier comes from a class of VC dimension $d$, and in particular from the
class of halfspaces over $\mathbb{R}^d$. We analyze both the estimation error and
the approximation error of these methods. Our analysis reveals interesting
conclusions of practical relevance, regarding the success of the different
approaches under various conditions. Our proof technique employs tools from VC
theory to analyze the \emph{approximation error} of hypothesis classes. This is
in sharp contrast to most, if not all, previous uses of VC theory, which only
deal with estimation error.
|
1205.6433
|
Algebraic symmetries of generic $(m+1)$ dimensional periodic Costas
arrays
|
cs.IT math.IT
|
In this work we present two generators for the group of symmetries of the
generic $(m+1)$ dimensional periodic Costas arrays over elementary abelian
$(\mathbb{Z}_p)^m$ groups: one that is defined by multiplication on $m$
dimensions and the other by shear (addition) on $m$ dimensions. Through
exhaustive search we observe that these two generators characterize the group
of symmetries for the examples we were able to compute. Following the results,
we conjecture that these generators characterize the group of symmetries of the
generic $(m+1)$ dimensional periodic Costas arrays over elementary abelian
$(\mathbb{Z}_p)^m$ groups.
|
1205.6445
|
An Extended Network Coding Opportunity Discovery Scheme in Wireless
Networks
|
cs.NI cs.IT math.IT
|
Network coding is known as a promising approach to improving wireless network
performance. Discovering coding opportunities at relay nodes is crucial: the
more coding chances there are, the more often network throughput can be
improved by network coding operations. In this paper, an extended network
coding opportunity discovery scheme (ExCODE) is proposed, which is realized by
appending the current node's ID and the IDs of all its 1-hop neighbors to the
packet. ExCODE enables the next-hop relay node to know which other nodes have
already overheard the packet, so it can discover as many potential coding
opportunities as possible. ExCODE expands the region for discovering coding
chances to n hops, giving each relay node more opportunities to perform
network coding operations. Finally, we implement ExCODE over the AODV
protocol, and the efficiency of the proposed mechanism is demonstrated with
NS2 simulations, compared to an existing coding opportunity discovery scheme.
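The overhearing information that ExCODE propagates feeds the classic XOR coding test at a relay. The sketch below shows that test in its COPE-style form, with hypothetical field names of my own; it is an illustration of the condition being discovered, not the ExCODE implementation:

```python
def can_xor_code(pkt_a: dict, pkt_b: dict, overheard: dict) -> bool:
    """A relay may XOR two packets together only if each packet's next
    hop has already overheard the other packet, so both receivers can
    decode their own packet from the single coded transmission.

    `overheard` maps a node ID to the set of packet IDs it has
    overheard (the information ExCODE carries in packet headers)."""
    return (pkt_a["id"] in overheard.get(pkt_b["next_hop"], set()) and
            pkt_b["id"] in overheard.get(pkt_a["next_hop"], set()))
```

The wider the overhearing information travels (1 hop in earlier schemes, n hops in ExCODE), the more packet pairs pass this test at each relay.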
|
1205.6523
|
Finding Important Genes from High-Dimensional Data: An Appraisal of
Statistical Tests and Machine-Learning Approaches
|
stat.ML cs.LG q-bio.QM
|
Over the past decades, statisticians and machine-learning researchers have
developed literally thousands of new tools for the reduction of
high-dimensional data in order to identify the variables most responsible for a
particular trait. These tools have applications in a plethora of settings,
including data analysis in the fields of business, education, forensics, and
biology (such as microarray, proteomics, brain imaging), to name a few.
In the present work, we focus our investigation on the limitations and
potential misuses of certain tools in the analysis of the benchmark colon
cancer data (2,000 variables; Alon et al., 1999) and the prostate cancer data
(6,033 variables; Efron, 2010, 2008). Our analysis demonstrates that models
that produce 100% accuracy measures often select different sets of genes and
cannot stand the scrutiny of parameter estimates and model stability.
Furthermore, we created a host of simulation datasets and "artificial
diseases" to evaluate the reliability of commonly used statistical and data
mining tools. We found that certain widely used models can classify the data
with 100% accuracy without using any of the variables responsible for the
disease. With moderate sample sizes and suitable pre-screening, stochastic
gradient boosting is shown to be a superior model for gene selection and
variable screening in high-dimensional datasets.
|
1205.6544
|
A Brief Summary of Dictionary Learning Based Approach for Classification
(revised)
|
cs.CV cs.LG
|
This note presents some representative methods for classification that are
based on dictionary learning (DL). We do not review sophisticated methods or
frameworks that involve DL for classification, such as online DL and spatial
pyramid matching (SPM); rather, we concentrate on direct DL-based
classification methods. Here, a "direct DL-based method" is an approach that
deals directly with the DL framework by adding meaningful penalty terms. By
listing some representative methods, we can roughly divide them into two
categories: (1) directly making the dictionary discriminative, and (2) forcing
the sparse coefficients to be discriminative in order to push the
discrimination power of the dictionary. From this taxonomy, we can expect some
extensions of these methods as future research.
|
1205.6548
|
State Transition Algorithm
|
math.OC cs.NE
|
In terms of the concepts of state and state transition, a new heuristic
random search algorithm named state transition algorithm is proposed. For
continuous function optimization problems, four special transformation
operators called rotation, translation, expansion and axesion are designed.
Adjustment strategies for the transformations are studied to keep the balance between exploration and exploitation. Convergence of the algorithm is also analyzed based on random search theory. Meanwhile, to strengthen the search ability in high-dimensional spaces, a communication strategy is introduced into the basic algorithm and an intermittent exchange scheme is presented to prevent premature convergence. Finally, experiments are carried out on 10 common benchmark unconstrained continuous functions; the results show that state transition algorithms are promising due to their good global search capability and convergence properties when compared with some popular algorithms.
|
1205.6567
|
Clustering of tag-induced sub-graphs in complex networks
|
physics.soc-ph cs.SI
|
We study the behavior of the clustering coefficient in tagged networks. The
rich variety of tags associated with the nodes in the studied systems provide
additional information about the entities represented by the nodes which can be
important for practical applications like searching in the networks. Here we
examine how the clustering coefficient changes when narrowing the network to a sub-graph marked by a given tag, and how it correlates with various other properties of the sub-graph. Another interesting question addressed in the paper is how the clustering coefficient of an individual node is affected by the tags on that node. We believe this sort of analysis helps acquire a more complete description of the structure of large complex systems.
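The measurement described above fits in a few lines of code; the toy tagged network below is invented for illustration.

```python
from itertools import combinations

def clustering(adj, node):
    """Local clustering coefficient: fraction of neighbour pairs that are linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return 2.0 * links / (k * (k - 1))

def tag_subgraph(adj, tags, tag):
    """Restrict the network to nodes carrying a given tag."""
    keep = {n for n in adj if tag in tags[n]}
    return {n: {m for m in adj[n] if m in keep} for n in keep}

# toy tagged network: a triangle {a,b,c} plus a node d linked to a and b
adj = {"a": {"b", "c", "d"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"a", "b"}}
tags = {"a": {"x"}, "b": {"x"}, "c": {"x"}, "d": {"y"}}

cc_full_a = clustering(adj, "a")        # 2 of a's 3 neighbour pairs are linked
sub = tag_subgraph(adj, tags, "x")
cc_sub_a = clustering(sub, "a")         # inside tag "x", a's neighbours b, c are linked
```

Narrowing to the tag-"x" sub-graph raises a's clustering coefficient from 2/3 to 1, the kind of shift the analysis above tracks across tags.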
|
1205.6568
|
Characterization of Negabent Functions and Construction of Bent-Negabent
Functions with Maximum Algebraic Degree
|
cs.IT math.IT
|
We present necessary and sufficient conditions for a Boolean function to be a
negabent function for both even and odd number of variables, which demonstrate
the relationship between negabent functions and bent functions. By using these
necessary and sufficient conditions for Boolean functions to be negabent, we
show that the nega spectrum of a negabent function takes at most four values. We
determine the nega spectrum distribution of negabent functions. Further, we
provide a method to construct bent-negabent functions in $n$ variables ($n$
even) of algebraic degree ranging from 2 to $\frac{n}{2}$, which implies that
the maximum algebraic degree of an $n$-variable bent-negabent function is equal
to $\frac{n}{2}$. Thus, we answer two open problems proposed by Parker and Pott
and by St\v{a}nic\v{a} \textit{et al.} respectively.
|
1205.6572
|
An Unsupervised Dynamic Image Segmentation using Fuzzy Hopfield Neural
Network based Genetic Algorithm
|
cs.CV
|
This paper proposes a Genetic Algorithm based segmentation method that can
automatically segment gray-scale images. The proposed method mainly consists of
spatial unsupervised grayscale image segmentation that divides an image into
regions. The aim of this algorithm is to produce precise segmentation of images
using intensity information along with neighborhood relationships. In this
paper, Fuzzy Hopfield Neural Network (FHNN) clustering helps generate the population of the Genetic Algorithm, thereby automatically segmenting the image. This technique is a powerful method for image segmentation and works for both single- and multiple-feature data with spatial information. A validity index is utilized to provide a robust technique for finding the optimum number of components in an image. Experimental results show that the algorithm generates good-quality segmented images.
|
1205.6593
|
New Deep Holes of Generalized Reed-Solomon Codes
|
cs.IT math.IT math.NT
|
Deep holes play an important role in the decoding of generalized Reed-Solomon
codes. Recently, Wu and Hong \cite{WH} found a new class of deep holes for
standard Reed-Solomon codes. In the present paper, we give a concise method to
obtain a new class of deep holes for generalized Reed-Solomon codes. In
particular, for standard Reed-Solomon codes, we get the new class of deep holes
given in \cite{WH}.
Li and Wan \cite{L.W1} studied deep holes of generalized Reed-Solomon codes
$GRS_{k}(\f,D)$ and characterized deep holes defined by polynomials of degree
$k+1$. They showed that this problem reduces to a subset sum problem in finite fields. Using the method of Li and Wan, we obtain some new deep holes for special Reed-Solomon codes over finite fields of even characteristic. Furthermore, we study deep holes of the extended Reed-Solomon code, i.e., $D=\f$, and show that polynomials of degree $k+2$ cannot define deep holes.
|
1205.6602
|
Analytical Bounds between Entropy and Error Probability in Binary
Classifications
|
cs.IT math.IT
|
The existing upper and lower bounds between entropy and error probability are
mostly derived from the inequality of the entropy relations, which could
introduce approximations into the analysis. We derive analytical bounds based
on the closed-form solutions of conditional entropy without involving any
approximation. Two basic types of classification errors are investigated in the
context of binary classification problems, namely, Bayesian and non-Bayesian
errors. We theoretically confirm that Fano's lower bound is an exact lower
bound for any type of classifier in a relation diagram of "error probability vs. conditional entropy". The analytical upper bounds are achieved with respect to the minimum prior probability and are tighter than Kovalevskij's upper bound.
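For intuition, the Fano relation in the binary case can be checked numerically: the binary entropy of the Bayes error lower-bounds nothing and upper-bounds the conditional entropy, i.e. h(Pe) >= H(X|Y). The channel parameters below are arbitrary.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conditional_entropy_and_bayes_error(prior, flip):
    """H(X|Y) and Bayes error for a binary input X observed through a BSC."""
    # joint p(x, y) for x, y in {0, 1}
    joint = {(x, y): (prior if x == 1 else 1 - prior) * (flip if x != y else 1 - flip)
             for x in (0, 1) for y in (0, 1)}
    h, pe = 0.0, 0.0
    for y in (0, 1):
        py = joint[(0, y)] + joint[(1, y)]
        post = joint[(1, y)] / py
        h += py * h2(post)
        pe += py * min(post, 1 - post)   # the Bayes rule picks the larger posterior
    return h, pe

H, Pe = conditional_entropy_and_bayes_error(prior=0.3, flip=0.1)
```

Here the Bayes error equals the crossover probability of the channel, and h2(Pe) indeed dominates H(X|Y), consistent with Fano's lower bound in the binary setting.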
|
1205.6605
|
Template-Cut: A Pattern-Based Segmentation Paradigm
|
cs.CV
|
We present a scale-invariant, template-based segmentation paradigm that sets
up a graph and performs a graph cut to separate an object from the background.
Typically graph-based schemes distribute the nodes of the graph uniformly and
equidistantly on the image, and use a regularizer to bias the cut towards a
particular shape. The strategy of uniform and equidistant nodes does not allow
the cut to prefer more complex structures, especially when areas of the object
are indistinguishable from the background. We propose a solution by introducing
the concept of a "template shape" of the target object in which the nodes are
sampled non-uniformly and non-equidistantly on the image. We evaluate it on
2D-images where the object's textures and backgrounds are similar, and large
areas of the object have the same gray level appearance as the background. We
also evaluate it in 3D on 60 brain tumor datasets for neurosurgical planning
purposes.
|
1205.6691
|
Efficient Subgraph Matching on Billion Node Graphs
|
cs.DB
|
The ability to handle large scale graph data is crucial to an increasing
number of applications. Much work has been dedicated to supporting basic graph
operations such as subgraph matching, reachability, regular expression
matching, etc. In many cases, graph indices are employed to speed up query
processing. Typically, most indices require either super-linear indexing time
or super-linear indexing space. Unfortunately, for very large graphs,
super-linear approaches are almost always infeasible. In this paper, we study
the problem of subgraph matching on billion-node graphs. We present a novel
algorithm that supports efficient subgraph matching for graphs deployed on a
distributed memory store. Instead of relying on super-linear indices, we use
efficient graph exploration and massive parallel computing for query
processing. Our experimental results demonstrate the feasibility of performing
subgraph matching on web-scale graph data.
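A bare-bones, index-free version of subgraph matching by graph exploration can be written as a backtracking search over candidate vertices. This toy sketch has nothing of the distributed, billion-node machinery of the paper; it only illustrates exploration without indices.

```python
def subgraph_matches(pattern, graph):
    """Count injective matches of `pattern` in `graph` by backtracking exploration."""
    pnodes = sorted(pattern)
    results = []

    def extend(mapping):
        if len(mapping) == len(pnodes):
            results.append(dict(mapping))
            return
        u = pnodes[len(mapping)]
        for v in graph:
            if v in mapping.values():
                continue
            # every already-mapped neighbour of u must map to a neighbour of v
            if all(mapping[w] in graph[v] for w in pattern[u] if w in mapping):
                mapping[u] = v
                extend(mapping)
                del mapping[u]

    extend({})
    return len(results)

# triangle pattern inside the complete graph K4: 4 * 3 * 2 = 24 ordered embeddings
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
n = subgraph_matches(triangle, k4)
```

The pruning condition inside the loop is the minimal consistency check; real systems order pattern vertices and batch candidate sets to keep the exploration frontier small.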
|
1205.6692
|
Efficient Subgraph Similarity Search on Large Probabilistic Graph
Databases
|
cs.DB
|
Many studies have been conducted on seeking the efficient solution for
subgraph similarity search over certain (deterministic) graphs due to its wide
application in many fields, including bioinformatics, social network analysis,
and Resource Description Framework (RDF) data management. All these works
assume that the underlying data are certain. However, in reality, graphs are
often noisy and uncertain due to various factors, such as errors in data
extraction, inconsistencies in data integration, and privacy preserving
purposes. Therefore, in this paper, we study subgraph similarity search on
large probabilistic graph databases. Different from previous works assuming
that edges in an uncertain graph are independent of each other, we study the
uncertain graphs where edges' occurrences are correlated. We formally prove
that subgraph similarity search over probabilistic graphs is #P-complete, thus,
we employ a filter-and-verify framework to speed up the search. In the
filtering phase, we develop tight lower and upper bounds of subgraph similarity
probability based on a probabilistic matrix index, PMI. PMI is composed of
discriminative subgraph features associated with tight lower and upper bounds
of subgraph isomorphism probability. Based on PMI, we can prune a large number of probabilistic graphs and maximize the pruning capability. During the
verification phase, we develop an efficient sampling algorithm to validate the
remaining candidates. The efficiency of our proposed solutions has been
verified through extensive experiments.
|
1205.6693
|
Truss Decomposition in Massive Networks
|
cs.DB
|
The k-truss is a type of cohesive subgraph proposed recently for the study of networks. While computing most cohesive subgraphs is NP-hard, there exists a polynomial-time algorithm for computing the k-truss. Compared with the k-core, which is also efficient to compute, the k-truss represents the "core" of a k-core: it keeps the key information of the k-core while filtering out its less important parts. However, existing algorithms for computing
k-truss are inefficient for handling today's massive networks. We first improve
the existing in-memory algorithm for computing k-truss in networks of moderate
size. Then, we propose two I/O-efficient algorithms to handle massive networks
that cannot fit in main memory. Our experiments on real datasets verify the
efficiency of our algorithms and the value of k-truss.
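The peeling idea behind in-memory k-truss computation fits in a few lines: repeatedly drop edges supported by too few triangles. This is only the simple in-memory variant; the I/O-efficient algorithms in the paper are far more involved.

```python
def k_truss(edges, k):
    """Return the edges of the k-truss: repeatedly drop edges that lie in
    fewer than k-2 triangles of the remaining graph."""
    es = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        adj = {}
        for e in es:
            u, v = tuple(e)
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        for e in list(es):
            u, v = tuple(e)
            support = len(adj[u] & adj[v])   # common neighbours = triangles on e
            if support < k - 2:
                es.discard(e)
                changed = True
    return es

# K4 plus a pendant edge: the 4-truss is exactly the K4
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
truss = k_truss(edges, 4)
```

The pendant edge sits in no triangle and is peeled immediately, while every K4 edge lies in two triangles and survives the k = 4 threshold.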
|
1205.6694
|
SEAL: Spatio-Textual Similarity Search
|
cs.DB
|
Location-based services (LBS) have become more and more ubiquitous recently.
Existing methods focus on finding relevant points-of-interest (POIs) based on
users' locations and query keywords. Nowadays, modern LBS applications generate
a new kind of spatio-textual data, regions-of-interest (ROIs), containing
region-based spatial information and textual description, e.g., mobile user
profiles with active regions and interest tags. To satisfy search requirements
on ROIs, we study a new research problem, called spatio-textual similarity
search: Given a set of ROIs and a query ROI, we find the similar ROIs by
considering spatial overlap and textual similarity. Spatio-textual similarity
search has many important applications, e.g., social marketing in
location-aware social networks. It calls for an efficient search method to
support large-scale spatio-textual data in LBS systems. To this end, we
introduce a filter-and-verification framework to compute the answers. In the
filter step, we generate signatures for the ROIs and the query, and utilize the
signatures to generate candidates whose signatures are similar to that of the
query. In the verification step, we verify the candidates and identify the
final answers. To achieve high performance, we generate effective high-quality
signatures, and devise efficient filtering algorithms as well as pruning
techniques. Experimental results on real and synthetic datasets show that our
method achieves high performance.
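A toy version of the similarity being searched for can blend spatial overlap with textual Jaccard similarity. The rectangular ROIs, tag sets, and the blending weight alpha below are invented for illustration and are not the paper's signature-based method.

```python
def rect_overlap(r1, r2):
    """Intersection area over union area for axis-aligned rectangles (x1, y1, x2, y2)."""
    ix = max(0.0, min(r1[2], r2[2]) - max(r1[0], r2[0]))
    iy = max(0.0, min(r1[3], r2[3]) - max(r1[1], r2[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(r1) + area(r2) - inter
    return inter / union if union > 0 else 0.0

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def st_similarity(roi1, roi2, alpha=0.5):
    """Blend of spatial overlap and textual Jaccard similarity."""
    return alpha * rect_overlap(roi1[0], roi2[0]) + (1 - alpha) * jaccard(roi1[1], roi2[1])

query = ((0, 0, 2, 2), {"coffee", "wifi"})
roi_a = ((1, 1, 3, 3), {"coffee", "wifi", "cake"})   # overlapping region, shared tags
roi_b = ((10, 10, 12, 12), {"parking"})              # disjoint region, no shared tags
sa = st_similarity(query, roi_a)
sb = st_similarity(query, roi_b)
```

A filter-and-verification engine would avoid evaluating this score for every ROI by first pruning candidates whose signatures cannot be similar to the query's.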
|
1205.6695
|
On The Spatiotemporal Burstiness of Terms
|
cs.DB
|
Thousands of documents are made available to the users via the web on a daily
basis. One of the most extensively studied problems in the context of such
document streams is burst identification. Given a term t, a burst is generally
exhibited when an unusually high frequency is observed for t. While spatial and
temporal burstiness have been studied individually in the past, our work is the
first to simultaneously track and measure spatiotemporal term burstiness. In
addition, we use the mined burstiness information toward an efficient
document-search engine: given a user's query of terms, our engine returns a
ranked list of documents discussing influential events with a strong
spatiotemporal impact. We demonstrate the efficiency of our methods with an
extensive experimental evaluation on real and synthetic datasets.
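A crude sketch of spatiotemporal burst detection flags (region, time) cells where a term's count deviates strongly from its mean across cells. The planted data and z-score threshold are made up; the paper's burstiness measures are more refined.

```python
import math
from collections import Counter

def bursty_cells(events, term, threshold=2.0):
    """Flag (region, hour) cells where `term`'s count exceeds its cross-cell
    mean by `threshold` standard deviations."""
    counts = Counter((r, t) for r, t, w in events if w == term)
    cells = sorted({(r, t) for r, t, _ in events})
    xs = [counts.get(c, 0) for c in cells]
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    sd = math.sqrt(var) or 1.0
    return [c for c in cells if (counts.get(c, 0) - mean) / sd > threshold]

# quiet background plus one planted bursty cell for "storm" in region R1 at hour 5
events = [(r, t, "storm") for r in ("R1", "R2") for t in range(10)]
events += [("R1", 5, "storm")] * 30
burst = bursty_cells(events, "storm")
```

Tracking the spatial and temporal dimensions jointly, as here with (region, hour) cells, is what distinguishes spatiotemporal burstiness from the purely temporal variant.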
|
1205.6696
|
Efficient Reachability Query Evaluation in Large Spatiotemporal Contact
Datasets
|
cs.DB
|
With the advent of reliable positioning technologies and prevalence of
location-based services, it is now feasible to accurately study the propagation
of items such as infectious viruses, sensitive information pieces, and malwares
through a population of moving objects, e.g., individuals, mobile devices, and
vehicles. In such application scenarios, an item passes between two objects
when the objects are sufficiently close (i.e., when they are, so-called, in
contact), and hence once an item is initiated, it can penetrate the object
population through the evolving network of contacts among objects, termed
contact network. In this paper, for the first time we define and study
reachability queries in large (i.e., disk-resident) contact datasets which
record the movement of a (potentially large) set of objects moving in a spatial
environment over an extended time period. A reachability query verifies whether
two objects are "reachable" through the evolving contact network represented by
such contact datasets. We propose two contact-dataset indexes that enable
efficient evaluation of such queries despite the potentially humongous size of
the contact datasets. With the first index, termed ReachGrid, at the query time
only a small necessary portion of the contact network which is required for
reachability evaluation is constructed and traversed. With the second approach,
termed ReachGraph, we precompute reachability at different scales and leverage
these precalculations at the query time for efficient query processing. We
optimize the placement of both indexes on disk to enable efficient index
traversal during query processing. We study the pros and cons of our proposed
approaches by performing extensive experiments with both real and synthetic
data. Based on our experimental results, our proposed approaches outperform
existing reachability query processing techniques in contact n...[truncated].
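The core of a reachability query over a contact dataset is a time-respecting traversal: an item can only pass through contacts that happen after it arrived. A single pass over time-sorted contacts suffices for this toy, index-free version; the paper's ReachGrid/ReachGraph indexes exist precisely to avoid such full scans on disk-resident data.

```python
def reachable(contacts, src, dst, t0=0):
    """Check whether `dst` is reachable from `src` through time-respecting
    contacts, each given as (time, a, b)."""
    infected = {src: t0}
    for t, a, b in sorted(contacts):            # process contacts in time order
        if a in infected and infected[a] <= t and b not in infected:
            infected[b] = t
        if b in infected and infected[b] <= t and a not in infected:
            infected[a] = t
    return dst in infected

# a meets b at t=1, b meets c at t=2, so c is reachable from a;
# c meets d at t=0, before anything reached c, so d is not
contacts = [(1, "a", "b"), (2, "b", "c"), (0, "c", "d")]
```

Note the asymmetry of temporal reachability: the contact (0, c, d) is useless for spreading from a because it precedes a's contacts.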
|
1205.6697
|
Boosting Moving Object Indexing through Velocity Partitioning
|
cs.DB
|
There has been intense research interest in moving object indexing in the past decade. However, existing work has not exploited the important property of skewed velocity distributions. In many real-world scenarios, objects travel
predominantly along only a few directions. Examples include vehicles on road
networks, flights, people walking on the streets, etc. The search space for a
query is heavily dependent on the velocity distribution of the objects grouped
in the nodes of an index tree. Motivated by this observation, we propose the
velocity partitioning (VP) technique, which exploits the skew in velocity
distribution to speed up query processing using moving object indexes. The VP
technique first identifies the "dominant velocity axes (DVAs)" using a
combination of principal components analysis (PCA) and k-means clustering.
Then, a moving object index (e.g., a TPR-tree) is created based on each DVA,
using the DVA as an axis of the underlying coordinate system. An object is
maintained in the index whose DVA is closest to the object's current moving
direction. Thus, all the objects in an index are moving in a near 1-dimensional
space instead of a 2-dimensional space. As a result, the expansion of the
search space with time is greatly reduced, from a quadratic function of the
maximum speed (of the objects in the search range) to a near linear function of
the maximum speed. The VP technique can be applied to a wide range of moving
object index structures. We have implemented the VP technique on two
representative ones, the TPR*-tree and the Bx-tree. Extensive experiments
validate that the VP technique consistently improves the performance of those
index structures.
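The DVA-finding step can be sketched with the closed-form principal axis of the 2-D velocity covariance matrix. The synthetic velocities below are invented; the paper combines PCA with k-means to find several dominant axes, whereas this sketch finds a single one.

```python
import math

def dominant_axis(velocities):
    """Angle of the first principal component of 2-D velocity vectors
    (closed-form eigenvector angle of the 2x2 covariance matrix)."""
    n = len(velocities)
    mx = sum(v[0] for v in velocities) / n
    my = sum(v[1] for v in velocities) / n
    sxx = sum((v[0] - mx) ** 2 for v in velocities) / n
    syy = sum((v[1] - my) ** 2 for v in velocities) / n
    sxy = sum((v[0] - mx) * (v[1] - my) for v in velocities) / n
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

# objects travelling predominantly east-west with small north-south jitter
vels = [(10, 1), (-9, -1), (11, 0.5), (-10, -0.5), (9, 1.5), (-11, -1.5)]
angle = dominant_axis(vels)
```

With a DVA in hand, each index uses it as a coordinate axis, so objects assigned to that index move in a nearly 1-dimensional space and the query search region grows much more slowly over time.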
|
1205.6698
|
Type-Based Detection of XML Query-Update Independence
|
cs.DB
|
This paper presents a novel static analysis technique to detect XML
query-update independence, in the presence of a schema. Rather than types, our
system infers chains of types. Each chain represents a path that can be
traversed on a valid document during query/update evaluation. The resulting
independence analysis is precise, although it raises a challenging issue: recursive schemas may lead to inferring infinitely many chains. A sound and complete approximation technique ensuring a finite analysis in all cases is
presented, together with an efficient implementation performing the chain-based
analysis in polynomial space and time.
|
1205.6699
|
Minuet: A Scalable Distributed Multiversion B-Tree
|
cs.DB
|
Data management systems have traditionally been designed to support either
long-running analytics queries or short-lived transactions, but an increasing
number of applications need both. For example, online games, socio-mobile apps,
and e-commerce sites need to not only maintain operational state, but also
analyze that data quickly to make predictions and recommendations that improve
user experience. In this paper, we present Minuet, a distributed, main-memory
B-tree that supports both transactions and copy-on-write snapshots for in-situ
analytics. Minuet uses main-memory storage to enable low-latency transactional
operations as well as analytics queries without compromising transaction
performance. In addition to supporting read-only analytics queries on
snapshots, Minuet supports writable clones, so that users can create branching
versions of the data. This feature can be quite useful, e.g. to support complex
"what-if" analysis or to facilitate wide-area replication. Our experiments show
that Minuet outperforms a commercial main-memory database in many ways. It
scales to hundreds of cores and TBs of memory, and can process hundreds of
thousands of B-tree operations per second while executing long-running scans.
|
1205.6700
|
Challenging the Long Tail Recommendation
|
cs.DB
|
The success of "infinite-inventory" retailers such as Amazon.com and Netflix
has been largely attributed to a "long tail" phenomenon. Although the majority
of their inventory is not in high demand, these niche products, unavailable at
limited-inventory competitors, generate a significant fraction of total revenue
in aggregate. In addition, tail product availability can boost head sales by
offering consumers the convenience of "one-stop shopping" for both their
mainstream and niche tastes. However, most existing recommender systems, especially collaborative-filtering-based methods, cannot recommend tail products due to data sparsity. It is widely acknowledged that recommending popular products is easier yet more trivial, while recommending long-tail products adds more novelty yet is also more challenging. In this
paper, we propose a novel suite of graph-based algorithms for the long tail
recommendation. We first represent user-item information with an undirected edge-weighted graph and investigate the theoretical foundation of applying the Hitting Time algorithm for long-tail item recommendation. To improve recommendation diversity and accuracy, we extend Hitting Time and propose an efficient Absorbing Time algorithm to help users find their favorite long tail
items. Finally, we refine the Absorbing Time algorithm and propose two
entropy-biased Absorbing Cost algorithms to distinguish the variation on
different user-item rating pairs, which further enhances the effectiveness of
long tail recommendation. Empirical experiments on two real life datasets show
that our proposed algorithms are effective to recommend long tail items and
outperform state-of-the-art recommendation techniques.
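The Hitting Time primitive on which these algorithms build can be sketched by value iteration: the expected time for a random walk to first reach a target node satisfies h(target) = 0 and h(u) = 1 + mean of h over u's neighbours. The tiny path graph below stands in for a real user-item graph.

```python
def hitting_times(adj, target, iters=2000):
    """Expected hitting time to `target` for a simple random walk, computed by
    value iteration on h(u) = 1 + mean_{v in N(u)} h(v), with h(target) = 0."""
    h = {u: 0.0 for u in adj}
    for _ in range(iters):
        h = {u: 0.0 if u == target else
             1.0 + sum(h[v] for v in adj[u]) / len(adj[u]) for u in adj}
    return h

# path 0 - 1 - 2: the known hitting times to node 2 are h(0) = 4, h(1) = 3
adj = {0: [1], 1: [0, 2], 2: [1]}
h = hitting_times(adj, 2)
```

On a user-item graph, items with large hitting time from a user's node tend to be the less-popular, long-tail candidates, which is why the raw primitive favors tail items before the absorbing-time and entropy-biased refinements are applied.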
|
1205.6745
|
Fingerprint Gender Classification using Wavelet Transform and Singular
Value Decomposition
|
cs.CV
|
A novel method of gender Classification from fingerprint is proposed based on
discrete wavelet transform (DWT) and singular value decomposition (SVD). The
classification is achieved by extracting the energy computed from all the
sub-bands of DWT combined with the spatial features of non-zero singular values
obtained from the SVD of fingerprint images. K-nearest neighbor (KNN) is used as the classifier. The method is evaluated on an internal database of 3570 fingerprints, of which 1980 are male and 1590 are female. Finger-wise gender classification achieves 94.32% for the left-hand little fingers of female subjects and 95.46% for the left-hand index fingers of male subjects. Gender classification over any finger attains 91.67% for male subjects and 84.69% for female subjects, respectively. An overall classification rate of 88.28% has been achieved.
|
1205.6752
|
Modeling and Analysis of Abnormality Detection in Biomolecular
Nano-Networks
|
cs.IT math.IT q-bio.BM q-bio.MN
|
A scheme for detection of abnormality in molecular nano-networks is proposed.
This is motivated by the fact that early diagnosis, classification and
detection of diseases such as cancer play a crucial role in their successful
treatment. The proposed nano-abnormality detection scheme (NADS) comprises a
two-tier network of sensor nano-machines (SNMs) in the first tier and a data
gathering node (DGN) at the sink. The SNMs detect the presence of competitor
cells as abnormality that is captured by variations in parameters of a
nano-communications channel. In the second step, the SNMs transmit micro-scale
messages over a noisy micro communications channel (MCC) to the DGN, where a
decision is made upon fusing the received signals. The detection performance of
each SNM is analyzed by setting up a Neyman-Pearson test. Next, taking into
account the effect of the MCC, the overall performance of the proposed NADS is
quantified in terms of probabilities of misdetection and false alarm. A design problem is then formulated in which the optimal concentration of SNMs in a sample is obtained so as to achieve a high probability of detection under a limited probability of false alarm.
|
1205.6791
|
Repeated games of incomplete information with large sets of states
|
cs.GT cs.IT math.IT math.OC math.PR
|
The famous theorem of R.Aumann and M.Maschler states that the sequence of
values of an N-stage zero-sum game G_N with incomplete information on one side
converges as N tends to infinity, and the error term is bounded by a constant
divided by square root of N if the set of states K is finite. The paper deals
with the case of infinite K. It turns out that for a countably-supported prior distribution p with heavy tails the error term can decrease arbitrarily slowly. The slowest possible speed of decrease for a given p is determined in terms of an entropy-like family of functionals. Our approach is based on the
well-known connection between the behavior of the maximal variation of
measure-valued martingales and asymptotic properties of repeated games with
incomplete information.
|
1205.6822
|
Friendship networks and social status
|
cs.SI physics.soc-ph
|
In empirical studies of friendship networks participants are typically asked,
in interviews or questionnaires, to identify some or all of their close
friends, resulting in a directed network in which friendships can, and often
do, run in only one direction between a pair of individuals. Here we analyze a
large collection of such networks representing friendships among students at US
high and junior-high schools and show that the pattern of unreciprocated
friendships is far from random. In every network, without exception, we find
that there exists a ranking of participants, from low to high, such that almost
all unreciprocated friendships consist of a lower-ranked individual claiming
friendship with a higher-ranked one. We present a maximum-likelihood method for
deducing such rankings from observed network data and conjecture that the
rankings produced reflect a measure of social status. We note in particular
that reciprocated and unreciprocated friendships obey different statistics,
suggesting different formation processes, and that rankings are correlated with
other characteristics of the participants that are traditionally associated
with status, such as age and overall popularity as measured by total number of
friends.
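The ranking idea can be illustrated by brute force on a toy network: search for the ordering of participants under which the fewest unreciprocated claims point from a higher-ranked to a lower-ranked individual. The paper uses a maximum-likelihood method, not the exhaustive search sketched here.

```python
from itertools import permutations

def best_ranking(nodes, claims):
    """Brute-force the ranking that minimises 'downward' unreciprocated
    friendships (each claim should go from lower to higher rank)."""
    best, violations = None, None
    for perm in permutations(nodes):
        rank = {n: i for i, n in enumerate(perm)}
        bad = sum(1 for lo, hi in claims if rank[lo] >= rank[hi])
        if violations is None or bad < violations:
            best, violations = perm, bad
    return best, violations

# (u, v) means u claims friendship with v but v does not reciprocate
claims = [("a", "b"), ("b", "c"), ("a", "c"), ("d", "c")]
order, bad = best_ranking(["a", "b", "c", "d"], claims)
```

Here a perfect ranking exists (every claim runs upward), mirroring the paper's empirical finding that almost all unreciprocated friendships are consistent with a single status ordering.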
|
1205.6832
|
A Lexical Access Aid System: Finding the Word on the Tip of Your Tongue
|
cs.CL
|
The study of the Tip of the Tongue phenomenon (TOT) provides valuable clues
and insights concerning the organisation of the mental lexicon (meaning, number
of syllables, relation with other words, etc.). This paper describes a tool
based on psycho-linguistic observations concerning the TOT phenomenon. We built it to enable a speaker/writer to find the word he is looking for, a word he may know but is unable to access in time. We try to simulate the TOT phenomenon by creating a situation where the system knows the target word, yet is unable to access it. In order to find the target word we make use of the paradigmatic and syntagmatic associations stored in linguistic databases. Our experiment supports the following conclusion: a tool like SVETLAN, capable of automatically structuring a dictionary by domains, can be used successfully to help the speaker/writer find the word he is looking for, provided it is combined with a database rich in paradigmatic links, such as EuroWordNet.
|
1205.6845
|
Weighted-{$\ell_1$} minimization with multiple weighting sets
|
cs.IT math.IT
|
In this paper, we study the support recovery conditions of weighted $\ell_1$
minimization for signal reconstruction from compressed sensing measurements
when multiple support estimate sets with different accuracy are available. We
identify a class of signals for which the recovered vector from $\ell_1$
minimization provides an accurate support estimate. We then derive stability
and robustness guarantees for the weighted $\ell_1$ minimization problem with
more than one support estimate. We show that applying smaller weights to support estimates that enjoy higher accuracy improves the recovery conditions
compared with the case of a single support estimate and the case with standard,
i.e., non-weighted, $\ell_1$ minimization. Our theoretical results are
supported by numerical simulations on synthetic signals and real audio signals.
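The mechanism, namely that smaller weights on a support estimate shrink those coefficients less, is visible already in the proximal operator of the weighted l1 norm. The toy numbers below are invented; this is the shrinkage step only, not the full weighted-l1 recovery problem.

```python
def weighted_soft_threshold(y, lam, weights):
    """Proximal operator of the weighted l1 norm: coefficient i is shrunk by
    lam * w_i, so a smaller weight on the support estimate shrinks less."""
    out = []
    for yi, wi in zip(y, weights):
        t = lam * wi
        out.append(max(abs(yi) - t, 0.0) * (1 if yi >= 0 else -1))
    return out

y = [3.0, 0.4, -2.5, 0.3]            # noisy observation of a sparse signal
support_estimate = {0, 2}            # indices believed to carry the signal
w = [0.1 if i in support_estimate else 1.0 for i in range(len(y))]
x_weighted = weighted_soft_threshold(y, lam=0.5, weights=w)
x_plain = weighted_soft_threshold(y, lam=0.5, weights=[1.0] * len(y))
```

With an accurate support estimate, the on-support coefficients keep almost all of their magnitude while the off-support noise is still thresholded to zero, which is the intuition behind the improved recovery conditions.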
|
1205.6846
|
Support driven reweighted $\ell_1$ minimization
|
cs.IT math.IT
|
In this paper, we propose a support driven reweighted $\ell_1$ minimization
algorithm (SDRL1) that solves a sequence of weighted $\ell_1$ problems and
relies on the support estimate accuracy. Our SDRL1 algorithm is related to the
IRL1 algorithm proposed by Cand{\`e}s, Wakin, and Boyd. We demonstrate that it
is sufficient to find support estimates with \emph{good} accuracy and apply
constant weights instead of using the inverse coefficient magnitudes to achieve
gains similar to those of IRL1. We then prove that given a support estimate
with sufficient accuracy, if the signal decays according to a specific rate,
the solution to the weighted $\ell_1$ minimization problem results in a support
estimate with higher accuracy than the initial estimate. We also show that
under certain conditions, it is possible to achieve higher estimate accuracy
when the intersection of support estimates is considered. We demonstrate the
performance of SDRL1 through numerical simulations and compare it with that of
IRL1 and standard $\ell_1$ minimization.
|
1205.6849
|
Beyond $\ell_1$-norm minimization for sparse signal recovery
|
cs.IT cs.LG math.IT
|
Sparse signal recovery has been dominated by the basis pursuit denoise (BPDN)
problem formulation for over a decade. In this paper, we propose an algorithm
that outperforms BPDN in finding sparse solutions to underdetermined linear
systems of equations at no additional computational cost. Our algorithm, called
WSPGL1, is a modification of the spectral projected gradient for $\ell_1$
minimization (SPGL1) algorithm in which the sequence of LASSO subproblems is replaced by a sequence of weighted LASSO subproblems with constant weights
applied to a support estimate. The support estimate is derived from the data
and is updated at every iteration. The algorithm also modifies the Pareto curve
at every iteration to reflect the new weighted $\ell_1$ minimization problem
that is being solved. We demonstrate through extensive simulations that the
sparse recovery performance of our algorithm is superior to that of $\ell_1$
minimization and approaches the recovery performance of iterative re-weighted
$\ell_1$ (IRWL1) minimization of Cand{\`e}s, Wakin, and Boyd, although it does
not match it in general. Moreover, our algorithm has the computational cost of
a single BPDN problem.
|
1205.6852
|
Multiaccess Channel with Partially Cooperating Encoders and Security
Constraints
|
cs.IT math.IT
|
We study a special case of Willems's two-user multi-access channel with
partially cooperating encoders from a security perspective. This model differs
from Willems's setup in that only one encoder, Encoder 1, is allowed to
conference; Encoder 2 does not transmit any message, and there is an additional
passive eavesdropper from whom the communication should be kept secret. For the
discrete memoryless (DM) case, we establish inner and outer bounds on the
capacity-equivocation region. The inner bound is based on a combination of
Willems's coding scheme, noise injection and additional binning that provides
randomization for security. For the memoryless Gaussian model, we establish
lower and upper bounds on the secrecy capacity. We also show that, under
certain conditions, these bounds agree in some extreme cases of cooperation
between the encoders. We illustrate our results through some numerical
examples.
|
1205.6855
|
A Study of "Churn" in Tweets and Real-Time Search Queries (Extended
Version)
|
cs.IR cs.SI
|
The real-time nature of Twitter means that term distributions in tweets and
in search queries change rapidly: the most frequent terms in one hour may look
very different from those in the next. Informally, we call this phenomenon
"churn". Our interest in analyzing churn stems from the perspective of
real-time search. Nearly all ranking functions, machine-learned or otherwise,
depend on term statistics such as term frequency, document frequency, as well
as query frequencies. In the real-time context, how do we compute these
statistics, considering that the underlying distributions change rapidly? In
this paper, we present an analysis of tweet and query churn on Twitter, as a
first step to answering this question. Analyses reveal interesting insights on
the temporal dynamics of term distributions on Twitter and hold implications
for the design of search systems.
|
1205.6903
|
Cram\'er-Rao Bounds for Polynomial Signal Estimation using Sensors with
AR(1) Drift
|
cs.IT math.IT
|
We seek to characterize the estimation performance of a sensor network where
the individual sensors exhibit the phenomenon of drift, i.e., a gradual change
of the bias. Though estimation in the presence of random errors has been
extensively studied in the literature, the loss of estimation performance due
to systematic errors like drift has rarely been investigated. In this paper, we
derive the closed-form Fisher information matrix and subsequently Cram\'er-Rao
bounds (up to a reasonable approximation) for the estimation accuracy of
drift-corrupted signals. We assume a polynomial time-series as the
representative signal and an autoregressive process model for the drift. When
the Markov parameter for drift \rho<1, we show that the first-order effect of
drift is asymptotically equivalent to scaling the measurement noise by an
appropriate factor. For \rho=1, i.e., when the drift is non-stationary, we show
that the constant part of a signal can only be estimated inconsistently
(non-zero asymptotic variance). Practical usage of the results is demonstrated
through the analysis of 1) networks with multiple sensors and 2)
bandwidth-limited networks communicating only quantized observations.
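As a minimal illustration of the signal model in this abstract, the sketch below (our own illustrative code, with hypothetical parameter names) generates measurements of a polynomial signal corrupted by AR(1) drift and white measurement noise:

```python
import random

def drifted_measurements(coeffs, rho, sigma_w, sigma_v, T, seed=0):
    """Polynomial signal s_t = sum_j coeffs[j] * t**j, corrupted by
    AR(1) drift d_t = rho * d_{t-1} + w_t and white noise v_t."""
    rng = random.Random(seed)
    drift, out = 0.0, []
    for t in range(T):
        signal = sum(a * t ** j for j, a in enumerate(coeffs))
        drift = rho * drift + rng.gauss(0.0, sigma_w)
        out.append(signal + drift + rng.gauss(0.0, sigma_v))
    return out
```

For rho < 1 the drift is a stationary AR(1) process; for rho = 1 it becomes a random walk, which is the non-stationary regime in which the constant part of the signal can only be estimated inconsistently.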
|
1205.6907
|
Optimal Identical Binary Quantizer Design for Distributed Estimation
|
cs.IT math.IT
|
We consider the design of identical one-bit probabilistic quantizers for
distributed estimation in sensor networks. We assume the parameter-range to be
finite and known and use the maximum Cram\'er-Rao Lower Bound (CRB) over the
parameter-range as our performance metric. We restrict our theoretical analysis
to the class of antisymmetric quantizers and determine a set of conditions for
which the probabilistic quantizer function is greatly simplified. We identify a
broad class of noise distributions, which includes Gaussian noise in the
low-SNR regime, for which the often used threshold-quantizer is found to be
minimax-optimal. Aided with theoretical results, we formulate an optimization
problem to obtain the optimum minimax-CRB quantizer. For a wide range of noise
distributions, we demonstrate the superior performance of the new quantizer -
particularly in the moderate to high-SNR regime.
|
1205.6917
|
Robust self-triggered coordination with ternary controllers
|
cs.SY math.OC
|
This paper concerns the coordination of networked systems, studied in the
framework of hybrid dynamical systems. We design a coordination scheme which
combines the use of ternary controllers with a self-triggered communication
policy. The communication policy requires the agents to collect, at each
sampling time, relative measurements of their neighbors' states: the collected
information is then used to update the control and determine the following
sampling time. We prove that the proposed scheme ensures finite-time
convergence to a neighborhood of a consensus state. We then study the
robustness of the proposed self-triggered coordination system with respect to
skews in the agents' local clocks, to delays, and to limited precision in
communication. Furthermore, we present two significant variations of our
scheme. First, we design a time-varying controller which asymptotically drives
the system to consensus. Second, we adapt our framework to a communication
model in which an agent does not poll all its neighbors simultaneously, but
single neighbors instead. This communication policy actually leads to a
self-triggered "gossip" coordination system.
|
1205.6919
|
Accurate Estimation of Gaseous Strength using Transient Data
|
cs.SY
|
Information about the strength of gas sources in buildings has a number of
applications in the area of building automation and control, including
temperature and ventilation control, fire detection and security systems. Here,
we consider the problem of estimating the strength of a gas source in an
enclosure when some of the parameters of the gas transport process are unknown.
Traditionally, these problems are either solved by the Maximum-Likelihood (ML)
method which is accurate but computationally intense, or by Recursive Least
Squares (RLS, also Kalman) filtering which is simpler but less accurate. In
this paper, we suggest a different statistical estimation procedure based on
the concept of Method of Moments. We outline techniques that make this
procedure computationally efficient and amenable for recursive implementation.
We provide a comparative analysis of our proposed method based on experimental
results as well as Monte-Carlo simulations. When used with the building control
systems, these algorithms can estimate the gaseous strength in a room both
quickly and accurately, and can potentially provide improved indoor air quality
in an efficient manner.
|
1205.6925
|
Spatial Whitening Framework for Distributed Estimation
|
cs.IT math.IT
|
Designing resource allocation strategies for power-constrained sensor networks
in the presence of correlated data often gives rise to intractable problem
formulations. In such situations, applying well-known strategies derived from
the conditional-independence assumption may turn out to be fairly suboptimal.
In this paper, we address this issue by proposing an adjacency-based spatial
whitening scheme, where each sensor exchanges its observation with its
neighbors prior to encoding its own private information and transmitting it
to the fusion center. We comment on the computational limitations of obtaining
the optimal whitening transformation, and propose an iterative optimization
scheme to achieve the same for large networks. We demonstrate the efficacy of
the whitening framework by considering the example of bit-allocation for
distributed estimation.
|
1205.6935
|
Signal Enhancement as Minimization of Relevant Information Loss
|
cs.IT math.IT
|
We introduce the notion of relevant information loss for the purpose of
casting the signal enhancement problem in information-theoretic terms. We show
that many algorithms from machine learning can be reformulated using relevant
information loss, which allows their application to the aforementioned problem.
As a particular example we analyze principal component analysis for
dimensionality reduction, discuss its optimality, and show that the relevant
information loss can indeed vanish if the relevant information is concentrated
on a lower-dimensional subspace of the input space.
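A small numerical check of the final claim (our own sketch, not the paper's method): when the data is concentrated on a lower-dimensional subspace, projecting onto that many principal components loses nothing.

```python
import numpy as np

def pca_reduce(X, d):
    """Project rows of X onto the top-d principal components."""
    Xc = X - X.mean(axis=0)
    # eigenvectors of the sample covariance, largest eigenvalues first
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:d]]
    return Xc @ top, top

rng = np.random.default_rng(0)
# 3-D data that actually lives on a 2-D linear subspace
basis = rng.normal(size=(2, 3))
X = rng.normal(size=(200, 2)) @ basis
Z, top = pca_reduce(X, 2)
reconstruction = Z @ top.T + X.mean(axis=0)
print(np.allclose(reconstruction, X))  # True: the projection is lossless
```

Since the centered data lies exactly in the span of the top two eigenvectors, projecting and re-embedding reproduces it, mirroring the vanishing relevant information loss discussed above.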
|
1205.6961
|
Tighter Worst-Case Bounds on Algebraic Gossip
|
cs.DS cs.DC cs.IT math.IT
|
Gossip and in particular network coded algebraic gossip have recently
attracted attention as a fast, bandwidth-efficient, reliable and distributed
way to broadcast or multicast multiple messages. While the algorithms are
simple, involved queuing approaches are used to study their performance. The
most recent result in this direction shows that uniform algebraic gossip
disseminates k messages in O({\Delta}(D + k + log n)) rounds where D is the
diameter, n the size of the network and {\Delta} the maximum degree.
In this paper we give a simpler, short and self-contained proof for this
worst-case guarantee. Our approach also allows us to reduce the quadratic
{\Delta}D term to min{3n, {\Delta}D}. We furthermore show that a simple round
robin routing scheme also achieves min{3n, {\Delta}D} + {\Delta}k rounds,
eliminating both randomization and coding. Lastly, we combine a recent
non-uniform gossip algorithm with a simple routing scheme to get an O(D + k +
log^{O(1)} n) gossip information dissemination algorithm. This is order optimal
as long as D and k are not both polylogarithmically small.
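For intuition, the dissemination process can be simulated with plain uniform push gossip on a complete graph (a toy sketch of our own; it is uncoded, so it illustrates the gossip dynamics rather than algebraic gossip itself):

```python
import random

def uniform_gossip_rounds(n, k, seed=0):
    """Each round, every node pushes one uniformly chosen message it holds
    to a uniformly random node; count rounds until all n nodes hold all
    k messages (message m starts at node m % n)."""
    rng = random.Random(seed)
    have = [set() for _ in range(n)]
    for m in range(k):
        have[m % n].add(m)
    rounds = 0
    while any(len(h) < k for h in have):
        rounds += 1
        pushes = [(rng.randrange(n), rng.choice(sorted(h)))
                  for h in have if h]
        for target, msg in pushes:
            have[target].add(msg)
    return rounds
```

On the complete graph this terminates quickly; the bounds above concern the much harder worst case over arbitrary topologies with diameter D and maximum degree {\Delta}.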
|
1205.6974
|
The Porosity of Additive Noise Sequences
|
cs.IT math.IT
|
Consider a binary additive noise channel with noiseless feedback. When the
noise is a stationary and ergodic process $\mathbf{Z}$, the capacity is
$1-\mathbb{H}(\mathbf{Z})$ ($\mathbb{H}(\cdot)$ denoting the entropy rate). It
is shown analogously that when the noise is a deterministic sequence
$z^\infty$, the capacity under finite-state encoding and decoding is
$1-\bar{\rho}(z^\infty)$, where $\bar{\rho}(\cdot)$ is Lempel and Ziv's
finite-state compressibility. This quantity is termed the \emph{porosity}
$\underline{\sigma}(\cdot)$ of an individual noise sequence. A sequence of
schemes is presented that universally achieves porosity for any noise sequence.
These converse and achievability results may be interpreted both as a
channel-coding counterpart to Ziv and Lempel's work in universal source coding,
as well as an extension to the work by Lomnitz and Feder and Shayevitz and
Feder on communication across modulo-additive channels. Additionally, a
slightly more practical architecture is suggested that draws a connection with
finite-state predictability, as introduced by Feder, Gutman, and Merhav.
|
1205.7009
|
Oriented and Degree-generated Block Models: Generating and Inferring
Communities with Inhomogeneous Degree Distributions
|
cs.SI cond-mat.stat-mech physics.soc-ph stat.ML
|
The stochastic block model is a powerful tool for inferring community
structure from network topology. However, it predicts a Poisson degree
distribution within each community, while most real-world networks have a
heavy-tailed degree distribution. The degree-corrected block model can
accommodate arbitrary degree distributions within communities. But since it
takes the vertex degrees as parameters rather than generating them, it cannot
use them to help it classify the vertices, and its natural generalization to
directed graphs cannot even use the orientations of the edges. In this paper,
we present variants of the block model with the best of both worlds: they can
use vertex degrees and edge orientations in the classification process, while
tolerating heavy-tailed degree distributions within communities. We show that
for some networks, including synthetic networks and networks of word
adjacencies in English text, these new block models achieve a higher accuracy
than either standard or degree-corrected block models.
|
1205.7016
|
On deep holes of generalized Reed-Solomon codes
|
math.NT cs.IT math.IT
|
Determining deep holes is an important topic in decoding Reed-Solomon codes.
In a previous paper [8], we showed that the received word $u$ is a deep hole of
the standard Reed-Solomon code $[q-1, k]_q$ if its Lagrange interpolation
polynomial is the sum of a monomial of degree $q-2$ and a polynomial of degree
at most $k-1$. In this paper, we extend this result by giving a new class of deep
holes of the generalized Reed-Solomon codes.
|
1205.7025
|
Engineering hierarchical complex systems: an agent-based approach. The
case of flexible manufacturing systems
|
cs.MA
|
This article introduces a formal model to specify, model and validate
hierarchical complex systems described at different levels of analysis. It
relies on concepts that have been developed in the multi-agent-based simulation
(MABS) literature: level, influence and reaction. One application of such a model
is the specification of hierarchical complex systems, in which decisional
capacities are dynamically adapted at each level with respect to the
emergences/constraints paradigm. In the conclusion, we discuss the main
perspective of this work: the definition of a generic meta-model for holonic
multi-agent systems (HMAS).
|
1205.7031
|
Nonlinear Trellis Description for Convolutionally Encoded Transmission
Over ISI-channels with Applications for CPM
|
cs.IT math.IT
|
In this paper we propose a matched decoding scheme for convolutionally
encoded transmission over intersymbol interference (ISI) channels and devise a
nonlinear trellis description. As an application we show that for coded
continuous phase modulation (CPM) using a non-coherent receiver the number of
states of the super trellis can be significantly reduced by means of a matched
nonlinear trellis encoder.
|
1205.7036
|
Upper Bounds on the Rate of Low Density Stabilizer Codes for the Quantum
Erasure Channel
|
quant-ph cs.IT math.CO math.IT
|
Using combinatorial arguments, we determine an upper bound on achievable
rates of stabilizer codes used over the quantum erasure channel. This allows us
to recover the no-cloning bound on the capacity of the quantum erasure channel,
R < 1-2p, for stabilizer codes; we also derive an improved upper bound
of the form R < 1-2p-D(p), with a function D(p) that stays positive for
0 < p < 1/2 and for any family of stabilizer codes whose generators have
weights bounded from above by a constant (low-density stabilizer codes).
We obtain an application to percolation theory for a family of self-dual
tilings of the hyperbolic plane. We associate a family of low density
stabilizer codes with appropriate finite quotients of these tilings. We then
relate the probability of percolation to the probability of a decoding error
for these codes on the quantum erasure channel. The application of our upper
bound on achievable rates of low density stabilizer codes gives rise to an
upper bound on the critical probability for these tilings.
|
1205.7044
|
Wireless Device-to-Device Communications with Distributed Caching
|
cs.IT cs.NI math.IT
|
We introduce a novel wireless device-to-device (D2D) collaboration
architecture that exploits distributed storage of popular content to enable
frequency reuse. We identify a fundamental conflict between collaboration
distance and interference and show how to optimize the transmission power to
maximize frequency reuse. Our analysis depends on the user content request
statistics which are modeled by a Zipf distribution. Our main result is a
closed form expression of the optimal collaboration distance as a function of
the content reuse distribution parameters. We show that if the Zipf exponent of
the content reuse distribution is greater than 1, it is possible to have a
number of D2D interference-free collaboration pairs that scales linearly in the
number of nodes. If the Zipf exponent is smaller than 1, we identify the best
possible scaling in the number of D2D collaborating links. Surprisingly, a very
simple distributed caching policy achieves the optimal scaling behavior and
therefore there is no need to centrally coordinate what each node is caching.
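The Zipf request model used above can be made concrete with a short sketch (our own illustrative code): the probability that a request is served by a cache holding the C most popular files is simply the head mass of the distribution.

```python
def zipf_pmf(n_files, gamma):
    """Zipf popularity: P(request = file r) proportional to r**(-gamma)."""
    weights = [r ** -gamma for r in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_probability(pmf, cache_size):
    """Probability a request is served locally when the cache holds the
    cache_size most popular files."""
    return sum(pmf[:cache_size])
```

For Zipf exponents greater than 1 the head mass stays bounded away from zero as the library grows, which is the regime where the abstract shows that the number of interference-free D2D pairs scales linearly in the number of nodes.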
|
1206.0021
|
Clinical Productivity System - A Decision Support Model
|
cs.DB
|
Purpose: The goal of this study was to evaluate the effects of a data-driven
clinical productivity system that leverages Electronic Health Record (EHR) data
to provide productivity decision support functionality in a real-world clinical
setting. The system was implemented for a large behavioral health care provider
seeing over 75,000 distinct clients a year. Design/methodology/approach: The
key metric in this system is a "VPU", which simultaneously optimizes multiple
aspects of clinical care. The resulting mathematical value of clinical
productivity was hypothesized to tightly link the organization's performance to
its expectations and, through transparency and decision support tools at the
clinician level, effect significant changes in productivity, quality, and
consistency relative to traditional models of clinical productivity. Findings:
In only 3 months, every single variable integrated into the VPU system showed
significant improvement, including a 30% rise in revenue, 10% rise in clinical
percentage, a 25% rise in treatment plan completion, a 20% rise in case rate
eligibility, along with similar improvements in compliance/audit issues,
outcomes collection, access, etc. Practical implications: A data-driven
clinical productivity system employing decision support functionality is
effective because of the impact on clinician behavior relative to traditional
clinical productivity systems. Critically, the model is also extensible to
integration with outcomes-based productivity. Originality/Value: EHRs are only
a first step; the problem is turning that data into useful information.
Technology can leverage the data in order to produce actionable information
that can inform clinical practice and decision-making. Without additional
technology, EHRs are essentially just copies of paper-based records stored in
electronic form.
|
1206.0038
|
Robust Model Predictive Control via Scenario Optimization
|
cs.SY math.OC
|
This paper discusses a novel probabilistic approach for the design of robust
model predictive control (MPC) laws for discrete-time linear systems affected
by parametric uncertainty and additive disturbances. The proposed technique is
based on the iterated solution, at each step, of a finite-horizon optimal
control problem (FHOCP) that takes into account a suitable number of randomly
extracted scenarios of uncertainty and disturbances, followed by a specific
command selection rule implemented in a receding horizon fashion. The scenario
FHOCP is always convex, even when the uncertain parameters and disturbances
belong to non-convex sets, and irrespective of how the model uncertainty
influences the system's matrices. Moreover, the computational complexity of the
proposed approach does not depend on the uncertainty/disturbance dimensions,
and scales quadratically with the control horizon. The main result in this
paper is related to the analysis of the closed loop system under
receding-horizon implementation of the scenario FHOCP, and essentially states
that the devised control law guarantees constraint satisfaction at each step
with some a-priori assigned probability p, while the system's state reaches the
target set either asymptotically, or in finite time with probability at least
p. The proposed method may be a valid alternative when other existing
techniques, either deterministic or stochastic, are not directly usable due to
excessive conservatism or to numerical intractability caused by lack of
convexity of the robust or chance-constrained optimization problem.
|
1206.0042
|
Language Acquisition in Computers
|
cs.CL
|
This project explores the nature of language acquisition in computers, guided
by techniques similar to those used in children. While existing natural
language processing methods are limited in scope and understanding, our system
aims to gain an understanding of language from first principles and hence
minimal initial input. The first portion of our system was implemented in Java
and is focused on understanding the morphology of language using bigrams. We
use frequency distributions and differences between them to define and
distinguish languages. English and French texts were analyzed to determine a
difference threshold of 55 before the texts are considered to be in different
languages, and this threshold was verified using Spanish texts. The second
portion of our system focuses on gaining an understanding of the syntax of a
language using a recursive method. The program uses one of two possible methods
to analyze given sentences based on either sentence patterns or surrounding
words. Both methods have been implemented in C++. The program is able to
understand the structure of simple sentences and learn new words. In addition,
we have provided some suggestions regarding future work and potential
extensions of the existing program.
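The bigram-frequency comparison described above can be sketched as follows (in Python rather than the authors' Java, and with our own normalization and distance function, so the reported threshold of 55 does not carry over to this scoring):

```python
from collections import Counter

def bigram_dist(text):
    """Relative frequency of character bigrams in a text."""
    text = "".join(ch for ch in text.lower() if ch.isalpha() or ch == " ")
    bigrams = [text[i:i + 2] for i in range(len(text) - 1)]
    total = len(bigrams)
    return {bg: c / total for bg, c in Counter(bigrams).items()}

def dist_difference(d1, d2):
    """L1 distance between two bigram frequency distributions."""
    keys = set(d1) | set(d2)
    return sum(abs(d1.get(k, 0.0) - d2.get(k, 0.0)) for k in keys)
```

Texts in the same language yield small distances while texts in different languages yield large ones, so a single threshold on this distance can separate languages, as the English/French/Spanish experiments describe.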
|
1206.0050
|
List Decoding of Polar Codes
|
cs.IT math.IT
|
We describe a successive-cancellation \emph{list} decoder for polar codes,
which is a generalization of the classic successive-cancellation decoder of
Ar{\i}kan. In the proposed list decoder, up to $L$ decoding paths are
considered concurrently at each decoding stage. Then, a single codeword is
selected from the list as output. If the most likely codeword is selected,
simulation results show that the resulting performance is very close to that of
a maximum-likelihood decoder, even for moderate values of $L$. Alternatively,
if a "genie" is allowed to pick the codeword from the list, the results are
comparable to those of current state-of-the-art LDPC codes. Luckily,
implementing such a helpful genie is easy.
Our list decoder doubles the number of decoding paths at each decoding step,
and then uses a pruning procedure to discard all but the $L$ "best" paths. In
order to implement this algorithm, we introduce a natural pruning criterion
that can be easily evaluated. Nevertheless, a straightforward implementation
still requires $\Omega(L \cdot n^2)$ time, which is in stark contrast with the
$O(n \log n)$ complexity of the original successive-cancellation decoder. We
utilize the structure of polar codes to overcome this problem. Specifically, we
devise an efficient, numerically stable, implementation taking only $O(L \cdot
n \log n)$ time and $O(L \cdot n)$ space.
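The double-and-prune bookkeeping at the heart of such a list decoder can be sketched as follows (our own simplified illustration: a real SC list decoder recomputes per-path LLRs through the polar transform, which we replace here with a single given LLR and an illustrative path metric):

```python
import heapq

def extend_and_prune(paths, L, llr):
    """One list-decoding stage: each path (bits, metric) is extended by bit 0
    and bit 1, the metric is updated, and only the L best (lowest-metric)
    paths are kept."""
    candidates = []
    for bits, metric in paths:
        for b in (0, 1):
            # penalty: pay |llr| when the chosen bit disagrees with the LLR sign
            penalty = abs(llr) if (llr >= 0) != (b == 0) else 0.0
            candidates.append((bits + (b,), metric + penalty))
    return heapq.nsmallest(L, candidates, key=lambda p: p[1])
```

Starting from the single empty path and applying this stage once per information bit doubles the path count each step while the pruning keeps the list at size $L$, exactly the growth pattern whose naive cost is $\Omega(L \cdot n^2)$.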
|
1206.0051
|
PF-OLA: A High-Performance Framework for Parallel On-Line Aggregation
|
cs.DB cs.DC
|
Online aggregation provides estimates to the final result of a computation
during the actual processing. The user can stop the computation as soon as the
estimate is accurate enough, typically early in the execution. This allows for
the interactive data exploration of the largest datasets. In this paper we
introduce the first framework for parallel online aggregation in which the
estimation virtually does not incur any overhead on top of the actual
execution. We define a generic interface to express any estimation model that
abstracts completely the execution details. We design a novel estimator
specifically targeted at parallel online aggregation. When executed by the
framework over a massive $8\text{TB}$ TPC-H instance, the estimator provides
accurate confidence bounds early in the execution even when the cardinality of
the final result is seven orders of magnitude smaller than the dataset size and
without incurring overhead.
|
1206.0068
|
Posterior contraction of the population polytope in finite admixture
models
|
math.ST cs.LG stat.TH
|
We study the posterior contraction behavior of the latent population
structure that arises in admixture models as the amount of data increases. We
adopt the geometric view of admixture models - alternatively known as topic
models - as a data generating mechanism for points randomly sampled from the
interior of a (convex) population polytope, whose extreme points correspond to
the population structure variables of interest. Rates of posterior contraction
are established with respect to Hausdorff metric and a minimum matching
Euclidean metric defined on polytopes. Tools developed include posterior
asymptotics of hierarchical models and arguments from convex geometry.
|
1206.0104
|
The Use of Self Organizing Map Method and Feature Selection in Image
Database Classification System
|
cs.IR cs.DB
|
This paper presents a technique for classifying images into a desired number
of classes or clusters by means of the Self-Organizing Map (SOM) artificial
neural network method. A set of 250 color images is first preprocessed (RGB
to grayscale color conversion, color histogram computation, and feature
vector selection) and then classified by the SOM. Feature vector selection
uses two methods, PCA (Principal Component Analysis) and LSA (Latent Semantic
Analysis), each of which reduces the 256 initial feature vectors produced by
the color histogram to 50, 100, or 150 characteristic vectors. The selected
vectors are then processed by the SOM network and classified into five
classes using a learning rate of 0.5, and the accuracy is calculated. The
test results show that the highest accuracy, 88%, is obtained when using PCA
with a selection of 100 feature vectors, compared to only 74% when using LSA
selection. We therefore conclude that PCA feature selection, applied in
conjunction with SOM, achieves a better accuracy rate than LSA feature
selection. Keywords: Color Histogram, Feature Selection, LSA, PCA, SOM.
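A minimal SOM of the kind described can be sketched as follows (our own simplified version: one prototype vector per class, winner-take-all updates, and a decaying learning rate starting from 0.5):

```python
import random

def train_som(vectors, n_classes, lr=0.5, epochs=50, seed=0):
    """Train a 1-D self-organizing map with one weight vector per class."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_classes)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)  # decaying learning rate
        for v in vectors:
            # best matching unit: the closest weight vector
            bmu = min(range(n_classes),
                      key=lambda k: sum((w - x) ** 2
                                        for w, x in zip(weights[k], v)))
            weights[bmu] = [w + rate * (x - w)
                            for w, x in zip(weights[bmu], v)]
    return weights

def classify(weights, v):
    """Assign v to the class of its nearest prototype."""
    return min(range(len(weights)),
               key=lambda k: sum((w - x) ** 2 for w, x in zip(weights[k], v)))
```

In the paper's pipeline, the input vectors would be the PCA- or LSA-selected histogram features and n_classes would be 5.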
|
1206.0108
|
The evolution of interdisciplinarity in physics research
|
physics.soc-ph cs.SI physics.data-an
|
Science, being a social enterprise, is subject to fragmentation into groups
that focus on specialized areas or topics. Often new advances occur through
cross-fertilization of ideas between sub-fields that otherwise have little
overlap as they study dissimilar phenomena using different techniques. Thus to
explore the nature and dynamics of scientific progress one needs to consider
the large-scale organization and interactions between different subject areas.
Here, we study the relationships between the sub-fields of Physics using the
Physics and Astronomy Classification Scheme (PACS) codes employed for
self-categorization of articles published over the past 25 years (1985-2009).
We observe a clear trend towards increasing interactions between the different
sub-fields. The network of sub-fields also exhibits core-periphery
organization, the nucleus being dominated by Condensed Matter and General
Physics. However, over time Interdisciplinary Physics is steadily increasing
its share in the network core, reflecting a shift in the overall trend of
Physics research.
|
1206.0111
|
OpenGM: A C++ Library for Discrete Graphical Models
|
cs.AI cs.MS stat.ML
|
OpenGM is a C++ template library for defining discrete graphical models and
performing inference on these models, using a wide range of state-of-the-art
algorithms. No restrictions are imposed on the factor graph to allow for
higher-order factors and arbitrary neighborhood structures. Large models with
repetitive structure are handled efficiently because (i) functions that occur
repeatedly need to be stored only once, and (ii) distinct functions can be
implemented differently, using different encodings alongside each other in the
same model. Several parametric functions (e.g. metrics), sparse and dense value
tables are provided and so is an interface for custom C++ code. Algorithms are
separated by design from the representation of graphical models and are easily
exchangeable. OpenGM, its algorithms, HDF5 file format and command line tools
are modular and extendible.
|
1206.0197
|
The Approximate Sum Capacity of the Symmetric Gaussian K-User
Interference Channel
|
cs.IT math.IT
|
Interference alignment has emerged as a powerful tool in the analysis of
multi-user networks. Despite considerable recent progress, the capacity region
of the Gaussian K-user interference channel is still unknown in general, in
part due to the challenges associated with alignment on the signal scale using
lattice codes. This paper develops a new framework for lattice interference
alignment, based on the compute-and-forward approach. Within this framework,
each receiver decodes by first recovering two or more linear combinations of
the transmitted codewords with integer-valued coefficients and then solving
these equations for its desired codeword. For the special case of symmetric
channel gains, this framework is used to derive the approximate sum capacity of
the Gaussian interference channel, up to an explicitly defined outage set of
the channel gains. The key contributions are the capacity lower bounds for the
weak through strong interference regimes, where each receiver should jointly
decode its own codeword along with part of the interfering codewords. As part
of the analysis, it is shown that decoding K linear combinations of the
codewords can approach the sum capacity of the K-user Gaussian multiple-access
channel up to a gap of no more than K log(K)/2 bits.
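The final decoding step described above, solving recovered integer-coefficient combinations for the desired codeword, reduces to linear algebra over the code's finite field. A toy sketch for two users and symbol-wise messages over a prime field (illustrative only; it shows the equation-solving step, not the lattice construction):

```python
def solve_two_combinations(A, b, q):
    """Given b = A @ (w1, w2) mod q for a 2x2 integer matrix A that is
    invertible mod prime q, recover (w1, w2) via the adjugate formula."""
    det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % q
    det_inv = pow(det, -1, q)          # modular inverse (Python 3.8+)
    w1 = det_inv * (A[1][1] * b[0] - A[0][1] * b[1]) % q
    w2 = det_inv * (A[0][0] * b[1] - A[1][0] * b[0]) % q
    return w1, w2
```

For example, with messages (3, 4) over GF(7) and coefficient matrix [[1, 1], [1, 2]], the receiver observes the combinations (0, 4) and solves back to (3, 4).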
|
1206.0217
|
Efficient techniques for mining spatial databases
|
cs.DB
|
Clustering is one of the major tasks in data mining, and clustering of spatial
data has received much research attention in recent years. Spatial databases
are components of many advanced information systems, such as geographic
information systems and VLSI design systems. In this thesis, we introduce
several efficient algorithms for clustering spatial data. First, we present a
grid-based clustering algorithm with several advantages and performance
comparable to the most efficient well-known clustering algorithms. The
algorithm requires only three input parameters: the number of points in the
data space, the number of cells in the grid, and a percentage. The number of
cells in the grid reflects the accuracy to be achieved by the algorithm. The
algorithm is capable of discovering clusters of arbitrary shapes, and its
computational complexity is comparable to that of the most efficient
clustering algorithms. The algorithm has been implemented and tested against
different ranges of database sizes. The performance results show that its
running time is superior to that of well-known algorithms (CLARANS [23]), and
that its performance does not degrade as the number of data points increases.
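A grid-based clustering algorithm of the general flavor described can be sketched as follows (our own simplified version, using the same style of inputs: the points, a grid resolution, and a density percentage; the thesis's exact rules may differ):

```python
from collections import defaultdict

def grid_cluster(points, cells_per_axis, density_pct):
    """Bin 2-D points into grid cells, keep cells whose count exceeds a
    density threshold, and merge 4-adjacent dense cells into clusters."""
    xs, ys = zip(*points)
    x0, y0 = min(xs), min(ys)
    dx = (max(xs) - x0) / cells_per_axis or 1.0
    dy = (max(ys) - y0) / cells_per_axis or 1.0
    cells = defaultdict(list)
    for p in points:
        i = min(int((p[0] - x0) / dx), cells_per_axis - 1)
        j = min(int((p[1] - y0) / dy), cells_per_axis - 1)
        cells[(i, j)].append(p)
    threshold = density_pct * len(points) / 100.0
    dense = {c for c, pts in cells.items() if len(pts) >= threshold}
    # flood-fill over adjacent dense cells: arbitrary-shape clusters
    clusters, seen = [], set()
    for c in dense:
        if c in seen:
            continue
        stack, comp = [c], []
        seen.add(c)
        while stack:
            i, j = stack.pop()
            comp.extend(cells[(i, j)])
            for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if nb in dense and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        clusters.append(comp)
    return clusters
```

Because clusters are unions of adjacent dense cells, the method finds arbitrary shapes, and one pass over the points dominates the running time.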
|
1206.0224
|
Cascading Failures in Interdependent Lattice Networks: The Critical Role
of the Length of Dependency Links
|
physics.data-an cs.SI physics.soc-ph
|
We study the cascading failures in a system composed of two interdependent
square lattice networks A and B placed on the same Cartesian plane, where each
node in network A depends on a node in network B randomly chosen within a
certain distance $r$ from the corresponding node in network A and vice versa.
Our results suggest that percolation for small $r$ below $r_{\rm max}\approx 8$
(lattice units) is a second-order transition, and for larger $r$ is a
first-order transition. For $r<r_{\rm max}$, the critical threshold increases
linearly with $r$ from 0.593 at $r=0$ and reaches a maximum, 0.738 for
$r=r_{\rm max}$ and then gradually decreases to 0.683 for $r=\infty$. Our
analytical considerations are in good agreement with simulations. Our study
suggests that interdependent infrastructures embedded in Euclidean space become
most vulnerable when the distance between interdependent nodes is in the
intermediate range, which is much smaller than the size of the system.
|
1206.0238
|
Rapid Feature Extraction for Optical Character Recognition
|
cs.CV
|
Feature extraction is one of the fundamental problems of character
recognition. The performance of a character recognition system depends on
proper feature extraction and correct classifier selection. In this article, a
rapid feature extraction method named Celled Projection (CP) is proposed,
which computes the projection of each section formed by partitioning an
image. The recognition performance of the proposed method is compared with
other widely used feature extraction methods that have been intensively
studied for many different scripts in the literature. The experiments have
been conducted on Bangla handwritten numerals with three different well-known
classifiers, demonstrating comparable results, including 94.12% recognition
accuracy using celled projection.
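A plausible reading of the celled-projection idea, sketched in Python (the cell layout and projection direction are our assumptions, not the paper's exact specification):

```python
def celled_projection(image, rows, cols):
    """Partition a binary image (list of 0/1 rows) into rows x cols cells and
    concatenate each cell's horizontal projection (pixel sum per cell row)."""
    h, w = len(image), len(image[0])
    ch, cw = h // rows, w // cols
    features = []
    for r in range(rows):
        for c in range(cols):
            for row in image[r * ch:(r + 1) * ch]:
                features.append(sum(row[c * cw:(c + 1) * cw]))
    return features
```

Each cell contributes one number per cell row, giving a fixed-length feature vector that any of the standard classifiers mentioned above could consume.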
|
1206.0244
|
Detection Performance in Balanced Binary Relay Trees with Node and Link
Failures
|
cs.IT math.IT
|
We study the distributed detection problem in the context of a balanced
binary relay tree, where the leaves of the tree correspond to $N$ identical and
independent sensors generating binary messages. The root of the tree is a
fusion center making an overall decision. Every other node is a relay node that
aggregates the messages received from its child nodes into a new message and
sends it up toward the fusion center. We derive upper and lower bounds for the
total error probability $P_N$ as explicit functions of $N$ in the case where
nodes and links fail with certain probabilities. These characterize the
asymptotic decay rate of the total error probability as $N$ goes to infinity.
Naturally, this decay rate is not larger than that in the non-failure case,
which is $\sqrt N$. However, we derive an explicit necessary and sufficient
condition on the decay rate of the local failure probabilities $p_k$
(combination of node and link failure probabilities at each level) such that
the decay rate of the total error probability in the failure case is the same
as that of the non-failure case. More precisely, we show that $\log
P_N^{-1}=\Theta(\sqrt N)$ if and only if $\log p_k^{-1}=\Omega(2^{k/2})$.
|
1206.0259
|
The Causal Topography of Cognition
|
cs.AI
|
The causal structure of cognition can be simulated but not implemented
computationally, just as the causal structure of a comet can be simulated but
not implemented computationally. The only thing that allows us even to imagine
otherwise is that cognition, unlike a comet, is invisible (to all but the
cognizer).
|
1206.0260
|
Block synchronization for quantum information
|
quant-ph cs.IT math.IT
|
Locating the boundaries of consecutive blocks of quantum information is a
fundamental building block for advanced quantum computation and quantum
communication systems. We develop a coding theoretic method for properly
locating boundaries of quantum information without relying on external
synchronization when block synchronization is lost. The method also protects
qubits from decoherence in a manner similar to conventional quantum
error-correcting codes, seamlessly achieving synchronization recovery and error
correction. A family of quantum codes that are simultaneously synchronizable
and error-correcting is given through this approach.
|
1206.0277
|
Sensing with Optimal Matrices
|
cs.IT cs.DM math.IT
|
We consider the problem of designing optimal $M \times N$ ($M \leq N$)
sensing matrices which minimize the maximum condition number of all the
submatrices of $K$ columns. Such matrices minimize the worst-case estimation
errors when only $K$ sensors out of $N$ sensors are available for sensing at a
given time. For $M=2$ and matrices with unit-normed columns, this problem is
equivalent to the problem of maximizing the minimum singular value among all
the submatrices of $K$ columns. For $M=2$, we are able to give a closed-form
formula for the condition number of the submatrices. When $M=2$ and $K=3$, for an
arbitrary $N\geq3$, we derive the optimal matrices which minimize the maximum
condition number of all the submatrices of $K$ columns. Surprisingly, a
uniformly distributed design is often \emph{not} the optimal design minimizing
the maximum condition number.
|
1206.0285
|
Image Filtering using All Neighbor Directional Weighted Pixels:
Optimization using Particle Swarm Optimization
|
cs.CV cs.NE
|
In this paper, a novel approach for denoising images corrupted by
random-valued impulses is proposed. Noise suppression is done in two steps.
The detection of noisy pixels is done using all neighbor directional weighted
pixels (ANDWP) in a 5 x 5 window. The filtering scheme is based on the minimum
variance of the four directional pixels. In this approach, a relatively recent
stochastic global optimization technique, particle swarm optimization (PSO),
is also used to search for the parameters of the detection and filtering
operators required for optimal performance. The results obtained show better
denoising and preservation of fine details for highly corrupted images.
|
1206.0333
|
Sparse Trace Norm Regularization
|
cs.LG stat.ML
|
We study the problem of estimating multiple predictive functions from a
dictionary of basis functions in the nonparametric regression setting. Our
estimation scheme assumes that each predictive function can be estimated in the
form of a linear combination of the basis functions. By assuming that the
coefficient matrix admits a sparse low-rank structure, we formulate the
function estimation problem as a convex program regularized by the trace norm
and the $\ell_1$-norm simultaneously. We propose to solve the convex program
using the accelerated gradient (AG) method and the alternating direction method
of multipliers (ADMM) respectively; we also develop efficient algorithms to
solve the key components in both AG and ADMM. In addition, we conduct
theoretical analysis on the proposed function estimation scheme: we derive a
key property of the optimal solution to the convex program; based on an
assumption on the basis functions, we establish a performance bound of the
proposed function estimation scheme (via the composite regularization).
Simulation studies demonstrate the effectiveness and efficiency of the proposed
algorithms.
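As a minimal sketch (the variable names and this demonstration are assumptions, not the paper's algorithms), the two regularizers in the convex program have standard closed-form proximal operators: elementwise soft-thresholding for the $\ell_1$-norm and singular-value soft-thresholding for the trace norm. Steps of this form typically appear as subproblems inside proximal-gradient and ADMM schemes:

```python
import numpy as np

def prox_l1(W, t):
    """Elementwise soft-thresholding: proximal operator of t * ||W||_1."""
    return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)

def prox_trace(W, t):
    """Singular-value soft-thresholding: proximal operator of t * ||W||_*."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt  # scale columns of U by shrunk s

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 4))          # coefficient matrix
W_sparse = prox_l1(W, 0.5)               # shrinks small entries to exact zero
W_lowrank = prox_trace(W, 1.0)           # shrinks small singular values to zero
print(np.count_nonzero(W_sparse), np.linalg.matrix_rank(W_lowrank))
```

Applying both operators encourages the simultaneously sparse and low-rank structure that the combined trace-norm plus $\ell_1$ penalty is designed to recover.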
|
1206.0335
|
A Route Confidence Evaluation Method for Reliable Hierarchical Text
Categorization
|
cs.IR cs.LG
|
Hierarchical Text Categorization (HTC) is becoming increasingly important
with the rapidly growing amount of text data available in the World Wide Web.
Among the different strategies proposed to cope with HTC, the Local Classifier
per Node (LCN) approach attains good performance by mirroring the underlying
class hierarchy while enforcing a top-down strategy in the testing step.
However, the problem of embedding hierarchical information (parent-child
relationship) to improve the performance of HTC systems still remains open. A
confidence evaluation method for a selected route in the hierarchy is proposed
to evaluate the reliability of the final candidate labels in an HTC system. In
order to exploit the information embedded in the hierarchy, weight factors are
used to account for the importance of each level. An acceptance/rejection
strategy in the top-down decision-making process is proposed, which improves
the overall categorization accuracy by rejecting a small percentage of
samples, i.e., those with a low reliability score. Experimental results on the
Reuters benchmark dataset (RCV1-v2) confirm the effectiveness of the proposed
method, compared to other state-of-the-art HTC methods.
|
1206.0338
|
Poisson noise reduction with non-local PCA
|
cs.CV cs.LG stat.CO
|
Photon-limited imaging arises when the number of photons collected by a
sensor array is small relative to the number of detector elements. Photon
limitations are an important concern for many applications such as spectral
imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson
distribution is used to model these observations, and the inherent
heteroscedasticity of the data combined with standard noise removal methods
yields significant artifacts. This paper introduces a novel denoising algorithm
for photon-limited images which combines elements of dictionary learning and
sparse patch-based representations of images. The method employs both an
adaptation of Principal Component Analysis (PCA) for Poisson noise and recently
developed sparsity-regularized convex optimization algorithms for
photon-limited images. A comprehensive empirical evaluation of the proposed
method helps characterize the performance of this approach relative to other
state-of-the-art denoising methods. The results reveal that, despite its
conceptual simplicity, Poisson PCA-based denoising appears to be highly
competitive in very low light regimes.
|
1206.0375
|
Some Computational Aspects of Essential Properties of Evolution and Life
|
cs.CC cs.IT math.IT nlin.AO nlin.PS
|
While evolution has inspired algorithmic methods of heuristic optimisation,
little has been done in the way of using concepts of computation to advance our
understanding of salient aspects of biological phenomena. We argue that under
reasonable assumptions, interesting conclusions can be drawn that are of
relevance to behavioural evolution. We will focus on two important features of
life--robustness and fitness--which, we will argue, are related to algorithmic
probability and to the thermodynamics of computation, disciplines that may be
capable of modelling key features of living organisms, and which can be used in
formulating new algorithms of evolutionary computation.
|
1206.0376
|
Introducing the Computable Universe
|
cs.IT cs.CC math.IT nlin.CG physics.hist-ph
|
Some contemporary views of the universe assume information and computation to
be key in understanding and explaining the basic structure underpinning
physical reality. We introduce the Computable Universe, exploring some of the
basic arguments that give foundation to these views. We will focus on the
algorithmic and quantum aspects, and how these may fit and support the
computable universe hypothesis.
|
1206.0377
|
Automated Word Puzzle Generation via Topic Dictionaries
|
cs.CL math.CO
|
We propose a general method for automated word puzzle generation. Contrary to
previous approaches in this novel field, the presented method does not rely on
highly structured datasets obtained with serious human annotation effort: it
only needs an unstructured and unannotated corpus (i.e., document collection)
as input. The method builds upon two additional pillars: (i) a topic model,
which induces a topic dictionary from the input corpus (examples include e.g.,
latent semantic analysis, group-structured dictionaries or latent Dirichlet
allocation), and (ii) a semantic similarity measure of word pairs. Our method
can (i) automatically generate a large number of proper word puzzles of
different types, including odd-one-out, choose-the-related-word, and
separate-the-topics puzzles. (ii) It can easily create domain-specific puzzles
by replacing the corpus component. (iii) It is also capable of automatically
generating puzzles with parameterizable levels of difficulty suitable for,
e.g., beginners or intermediate learners.
|
1206.0379
|
Low prevalence, quasi-stationarity and power-law distribution in a model
of spreading
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Understanding how contagions (information, infections, etc.) spread on complex
networks is important from both a practical and a theoretical point of view.
Considerable work has been done in this regard in the past decade or
so. However, most models are limited in their scope and as a result only
capture general features of spreading phenomena. Here, we propose and study a
model of spreading which takes into account the strength or quality of
contagions as well as the local (probabilistic) dynamics occurring at various
nodes. Transmission occurs only after the quality-based fitness of the
contagion has been evaluated by the local agent. The model exhibits
quality-dependent exponential time scales at early times leading to a slowly
evolving quasi-stationary state. Low prevalence is seen for a wide range of
contagion quality for arbitrarily large networks. We also investigate the
activity of nodes and find a power-law distribution with a robust exponent
independent of network topology. Our results are consistent with recent
empirical observations.
|
1206.0381
|
UNL Based Bangla Natural Text Conversion - Predicate Preserving Parser
Approach
|
cs.CL
|
Universal Networking Language (UNL) is a declarative formal language that is
used to represent semantic data extracted from natural language texts. This
paper presents a novel approach to converting Bangla natural language text into
UNL using a method known as the Predicate Preserving Parser (PPP) technique.
PPP performs morphological, syntactic, semantic, and lexical analysis of text
synchronously. This analysis produces a semantic-net-like structure represented
using UNL. We demonstrate how Bangla texts are analyzed following the PPP
technique to produce UNL documents, which can then be translated into any
other suitable natural language, thereby facilitating the development of a
universal language translation method via UNL.
|
1206.0399
|
On the Computation of the Higher-Order Statistics of the Channel
Capacity for Amplify-and-Forward Multihop Transmission
|
cs.IT math.IT math.PR math.ST stat.TH
|
Higher-order statistics (HOS) of the channel capacity provide useful
information regarding the level of reliability of the signal transmission at a
particular rate. We propose in this letter a novel and unified analysis, which
is based on the moment-generating function (MGF) approach, to efficiently and
accurately compute the HOS of the channel capacity for amplify-and-forward
multihop transmission over generalized fading channels. More precisely, our
mathematical formalism is easy to use and tractable, requiring only
the reciprocal MGFs of the instantaneous signal-to-noise ratio distributions of
the transmission hops. Numerical and simulation results, performed to exemplify
the usefulness of the proposed MGF-based analysis, are shown to be in perfect
agreement.
|
1206.0418
|
De-randomizing Shannon: The Design and Analysis of a Capacity-Achieving
Rateless Code
|
cs.IT cs.NI math.IT
|
This paper presents an analysis of spinal codes, a class of rateless codes
proposed recently. We prove that spinal codes achieve Shannon capacity for the
binary symmetric channel (BSC) and the additive white Gaussian noise (AWGN)
channel with an efficient polynomial-time encoder and decoder. They are the
first rateless codes with proofs of these properties for BSC and AWGN. The key
idea in the spinal code is the sequential application of a hash function over
the message bits. The sequential structure of the code turns out to be crucial
for efficient decoding. Moreover, counter to the wisdom of having an expander
structure in good codes, we show that the spinal code, despite its sequential
structure, achieves capacity. The pseudo-randomness provided by a hash function
suffices for this purpose. Our proof introduces a variant of Gallager's result
characterizing the error exponent of random codes for any memoryless channel.
We present a novel application of these error-exponent results within the
framework of an efficient sequential code. The application of a hash function
over the message bits provides a methodical and effective way to de-randomize
Shannon's random codebook construction.
|
1206.0448
|
The contraction rate in Thompson metric of order-preserving flows on a
cone - application to generalized Riccati equations
|
math.MG cs.SY math.OC
|
We give a formula for the Lipschitz constant in Thompson's part metric of any
order-preserving flow on the interior of a (possibly infinite dimensional)
closed convex pointed cone. This provides an explicit form of a
characterization of Nussbaum concerning non order-preserving flows. As an
application of this formula, we show that the flow of the generalized Riccati
equation arising in stochastic linear quadratic control is a local contraction
on the cone of positive definite matrices and characterize its Lipschitz
constant by a matrix inequality. We also show that the same flow is no longer a
contraction in other natural Finsler metrics on this cone, including the
standard invariant Riemannian metric. This is motivated by a series of
contraction properties concerning the standard Riccati equation, established by
Bougerol, Liverani, Wojtowski, Lawson, Lee and Lim: we show that some of these
properties do, and that some others do not, carry over to the generalized
Riccati equation.
|