| id | title | categories | abstract |
|---|---|---|---|
| 1012.4404 | Multicolored Dynamos on Toroidal Meshes | cs.DC cs.CC cs.DS cs.SI | Detecting on a graph the minimum number of nodes (target set) able to "activate" a prescribed number of vertices in the graph is called the target set selection (TSS) problem, proposed by Kempe, Kleinberg, and Tardos. In the TSS setting, nodes have two possible states (active or non-active) and the threshold triggering the activation of a node is given by the number of its active neighbors. When dealing with fault tolerance in a majority-based system, the two possible states denote faulty or non-faulty nodes, and the threshold is given by the state of the majority of neighbors. Here, the major effort was in determining the distribution of initial faults leading the entire system to a faulty behavior. Such an activation pattern, also known as a dynamic monopoly (or, shortly, dynamo), was introduced by Peleg in 1996. In this paper we extend the TSS problem's setting by representing nodes' states with a "multicolored" set. The extended version of the problem can be described as follows: let G be a simple connected graph where every node is assigned a color from a finite ordered set C = {1, ..., k} of colors. At each local time step, each node can recolor itself, depending on the local configuration, with the color held by the majority of its neighbors. Given G, we study the initial distributions of colors leading the system to a k-monochromatic configuration in toroidal meshes, focusing on the minimum number of initial k-colored nodes. We find upper and lower bounds on the size of a dynamo, and then characterize special classes of dynamos by means of a new approach based on recoloring patterns. |
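
The majority-recoloring dynamics described in this abstract can be sketched in a few lines. The sketch below is an illustration only: it uses a 4-neighbor torus and breaks ties toward the larger color (a simplification of the strict-majority rule studied in the paper), and the seed configuration is a hypothetical example, not one of the paper's characterized dynamos.

```python
from collections import Counter

def recolor_step(grid, k):
    """One synchronous step on a toroidal mesh: each node takes the
    plurality color of its 4 wrap-around neighbors (ties broken toward
    the larger color -- a simplification used only for this demo)."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(m):
            neigh = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                     grid[i][(j - 1) % m], grid[i][(j + 1) % m]]
            counts = Counter(neigh)
            new[i][j] = max(range(k), key=lambda c: (counts[c], c))
    return new

# Seed: three of four rows already hold color 1 on a 4x4 torus.
grid = [[0] * 4] + [[1] * 4 for _ in range(3)]
grid = recolor_step(grid, k=2)
print(all(c == 1 for row in grid for c in row))  # True: the seed is a dynamo
```

Under this rule the all-zero row sees two 1-neighbors and two 0-neighbors, so the tie-break makes the configuration monochromatic after a single step.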
| 1012.4485 | An Experimental Approach for Optimising Mobile Agent Migrations | cs.NI cs.MA | The field of mobile agent (MA) technology has been intensively researched during the past few years, resulting in the phenomenal proliferation of available MA platforms, all sharing several common design characteristics. Research projects have mainly focused on identifying applications where the employment of MAs is preferable to centralised or alternative distributed computing models. Very little work has been done on examining how MA platform design can be optimised so that the network traffic and latency associated with MA transfers are minimised. The work presented in this paper addresses these issues by investigating the effect of several optimisation ideas applied to our MA platform prototype. Furthermore, we discuss the results of a set of timing experiments that offer a better understanding of the agent migration process, and recommend new techniques for reducing MA transfer delay. |
| 1012.4519 | Belief-propagation-based joint channel estimation and decoding for spectrally efficient communication over unknown sparse channels | cs.IT math.IT | We consider spectrally-efficient communication over a Rayleigh N-block-fading channel with a K-sparse L-length discrete-time impulse response (for 0<K<L<N), where neither the transmitter nor the receiver knows the channel's coefficients or its support. Since the high-SNR ergodic capacity of this channel has been shown to obey C(SNR) = (1-K/N)log2(SNR)+O(1), any pilot-aided scheme that sacrifices more than K dimensions per fading block to pilots will be spectrally inefficient. This causes concern about the conventional "compressed channel sensing" approach, which uses O(K polylog L) pilots. In this paper, we demonstrate that practical spectrally-efficient communication is indeed possible. For this, we propose a novel belief-propagation-based reception scheme to use with a standard bit-interleaved coded orthogonal frequency division multiplexing (OFDM) transmitter. In particular, we leverage the "relaxed belief propagation" methodology, which allows us to perform joint sparse-channel estimation and data decoding with only O(LN) complexity. Empirical results show that our receiver achieves the desired capacity pre-log factor of 1 - K/N and performs near genie-aided bounds at both low and high SNR. |
| 1012.4521 | Characterizing Structure Through Shape Matching and Applications to Self Assembly | cond-mat.soft cs.CV | Structural quantities such as order parameters and correlation functions are often employed to gain insight into the physical behavior and properties of condensed matter systems. While standard quantities for characterizing structure exist, often they are insufficient for treating problems in the emerging field of nano and microscale self-assembly, where the structures encountered may be complex and unusual. The computer science field of "shape matching" offers a robust solution to this problem by defining diverse methods for quantifying the similarity between arbitrarily complex shapes. Most order parameters and correlation functions used in condensed matter apply a specific measure of structural similarity within the context of a broader scheme. By substituting shape matching quantities for traditional quantities, we retain the essence of the broader scheme, but extend its applicability to more complex structures. Here we review some standard shape matching techniques and discuss how they might be used to create highly flexible structural metrics for diverse systems such as self-assembled matter. We provide three proof-of-concept example problems applying shape matching methods to identifying local and global structures, and tracking structural transitions in complex assembled systems. The shape matching methods reviewed here are applicable to a wide range of condensed matter systems, both simulated and experimental, provided particle positions are known or can be accurately imaged. |
| 1012.4524 | The interplay of microscopic and mesoscopic structure in complex networks | cond-mat.stat-mech cs.SI physics.soc-ph q-bio.MN | Not all nodes in a network are created equal. Differences and similarities exist at both individual node and group levels. Disentangling single node from group properties is crucial for network modeling and structural inference. Based on unbiased generative probabilistic exponential random graph models and employing distributive message passing techniques, we present an efficient algorithm that allows one to separate the contributions of individual nodes and groups of nodes to the network structure. This leads to improved detection accuracy of latent class structure in real world data sets compared to models that focus on group structure alone. Furthermore, the inclusion of hitherto neglected group specific effects in models used to assess the statistical significance of small subgraph (motif) distributions in networks may be sufficient to explain most of the observed statistics. We show the predictive power of such generative models in forecasting putative gene-disease associations in the Online Mendelian Inheritance in Man (OMIM) database. The approach is suitable for both directed and undirected uni-partite as well as for bipartite networks. |
| 1012.4527 | Harmonic Order Parameters for Characterizing Complex Particle Morphologies | cond-mat.soft cs.CV physics.chem-ph | Order parameters based on spherical harmonics and Fourier coefficients already play a significant role in condensed matter research in the context of systems of spherical or point particles. Here, we extend these types of order parameter to more complex shapes, such as those encountered in nanoscale self-assembly applications. To do so, we build on a powerful set of techniques that originate in the computer science field of "shape matching." We demonstrate how shape matching techniques can be applied to identify unknown structures and create highly-specialized \textit{ad hoc} order parameters. Additionally, we investigate the special symmetry properties of harmonic descriptors, and demonstrate how they can be exploited to provide optimal solutions to certain classes of problems. Our techniques can be applied to particle systems in general, both simulated and experimental, provided the particle positions are known. |
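
A standard 2D relative of the harmonic descriptors this abstract discusses is the k-fold bond-orientational order parameter, the k-th Fourier coefficient of the angles of a particle's bonds to its neighbors. The sketch below is a minimal illustration of that idea (the function `psi_k` is a hypothetical helper, not code from the paper):

```python
import cmath
import math

def psi_k(center, neighbors, k=6):
    """k-fold bond-orientational order parameter |psi_k| of a 2D particle:
    magnitude of the k-th Fourier coefficient of the bond angles.
    |psi_k| = 1 for a perfectly k-fold-symmetric neighborhood, ~0 for a
    disordered one."""
    cx, cy = center
    total = 0j
    for x, y in neighbors:
        theta = math.atan2(y - cy, x - cx)
        total += cmath.exp(1j * k * theta)
    return abs(total / len(neighbors))

# A perfect hexagonal shell of neighbors gives |psi_6| = 1.
hexagon = [(math.cos(2 * math.pi * i / 6), math.sin(2 * math.pi * i / 6))
           for i in range(6)]
print(round(psi_k((0.0, 0.0), hexagon, k=6), 6))  # 1.0
```

Comparing such descriptors between a reference structure and an observed one is the essence of the shape-matching viewpoint: similarity of descriptor vectors stands in for similarity of shapes.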
| 1012.4542 | Impact of Mistiming on the Achievable Information Rate of Rake Receivers in DS-UWB Systems | cs.IT math.IT | In this paper, we investigate the impact of mistiming on the performance of Rake receivers in direct-sequence ultra-wideband (DS-UWB) systems from the perspective of the achievable information rate. A generalized expression for the performance degradation due to mistiming is derived. Monte Carlo simulations based on this expression are then conducted, which demonstrate that the performance loss has little relationship with the target achievable information rate, but varies significantly with the system bandwidth and the multipath diversity order, which reflects design trade-offs among the system timing requirement, the bandwidth and the implementation complexity. In addition, the performance degradations of Rake receivers with different multipath component selection schemes and combining techniques are compared. Among these receivers, the widely used maximal ratio combining (MRC) selective-Rake (S-Rake) suffers the largest performance loss in the presence of mistiming. |
| 1012.4552 | On the Throughput Cost of Physical Layer Security in Decentralized Wireless Networks | cs.IT math.IT | This paper studies the throughput of large-scale decentralized wireless networks with physical layer security constraints. In particular, we are interested in the question of how much throughput needs to be sacrificed for achieving a certain level of security. We consider random networks where the legitimate nodes and the eavesdroppers are distributed according to independent two-dimensional Poisson point processes. The transmission capacity framework is used to characterize the area spectral efficiency of secure transmissions with constraints on both the quality of service (QoS) and the level of security. This framework illustrates the dependence of the network throughput on key system parameters, such as the densities of legitimate nodes and eavesdroppers, as well as the QoS and security constraints. One important finding is that the throughput cost of achieving a moderate level of security is quite low, while throughput must be significantly sacrificed to realize a highly secure network. We also study the use of a secrecy guard zone, which is shown to give a significant improvement on the throughput of networks with high security requirements. |
| 1012.4571 | How I won the "Chess Ratings - Elo vs the Rest of the World" Competition | cs.LG | This article discusses in detail the rating system that won the kaggle competition "Chess Ratings: Elo vs the rest of the world". The competition provided a historical dataset of outcomes for chess games, and aimed to discover whether novel approaches can predict the outcomes of future games more accurately than the well-known Elo rating system. The winning rating system, called Elo++ in the rest of the article, builds upon the Elo rating system. Like Elo, Elo++ uses a single rating per player and predicts the outcome of a game by using a logistic curve over the difference in ratings of the players. The major component of Elo++ is a regularization technique that avoids overfitting these ratings. The dataset of chess games and outcomes is relatively small, and one has to be careful not to draw "too many conclusions" from the limited data. Many approaches tested in the competition showed signs of such overfitting. The leaderboard was dominated by attempts that did a very good job on a small test dataset, but couldn't generalize well on the private hold-out dataset. The Elo++ regularization takes into account the number of games per player, the recency of these games and the ratings of the opponents. Finally, Elo++ employs a stochastic gradient descent scheme for training the ratings, and uses only two global parameters (white's advantage and the regularization constant) that are optimized using cross-validation. |
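
The two core ingredients this abstract names, a logistic curve over the rating difference and SGD training with regularization, can be sketched as below. This is an illustration under simplifying assumptions: plain L2 shrinkage stands in for Elo++'s regularizer (which also weighs the number of games, their recency, and the opponents' ratings), and the toy game list and hyperparameters are invented.

```python
import math
import random

def predict(r_white, r_black, white_adv=0.1):
    """P(white wins) via a logistic curve over the rating difference."""
    return 1.0 / (1.0 + math.exp(-(r_white + white_adv - r_black)))

def fit(games, n_players, lam=0.1, lr=0.05, epochs=200, seed=0):
    """SGD on regularized log-loss. `games` holds (white, black, score)
    triples with score in {0, 0.5, 1}. Plain L2 shrinkage is a stand-in
    for Elo++'s opponent/recency-aware regularizer."""
    rng = random.Random(seed)
    r = [0.0] * n_players
    for _ in range(epochs):
        rng.shuffle(games)
        for w, b, s in games:
            g = predict(r[w], r[b]) - s   # log-loss gradient wrt (r_w - r_b)
            r[w] -= lr * (g + lam * r[w])
            r[b] -= lr * (-g + lam * r[b])
    return r

games = [(0, 1, 1), (0, 2, 1), (1, 2, 1), (0, 1, 1)]  # toy results
r = fit(games[:], 3)
print(r[0] > r[1] > r[2])  # ratings recover the implied ordering
```

The shrinkage term is what keeps ratings of rarely observed players from drifting to extreme values, which is exactly the overfitting failure the abstract attributes to other leaderboard entries.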
| 1012.4583 | Constructing Quantum Network Coding Schemes from Classical Nonlinear Protocols | quant-ph cs.IT math.IT | The k-pair problem in network coding theory asks to send k messages simultaneously between k source-target pairs over a directed acyclic graph. In a previous paper [ICALP 2009, Part I, pages 622-633] the present authors showed that if a classical k-pair problem is solvable by means of a linear coding scheme, then the quantum k-pair problem over the same graph is also solvable, provided that classical communication can be sent for free between any pair of nodes of the graph. Here we address the main case that remained open in our previous work, namely whether nonlinear classical network coding schemes can also give rise to quantum network coding schemes. This question is motivated by the fact that there are networks for which there are no linear solutions to the k-pair problem, whereas nonlinear solutions exist. In the present paper we overcome the limitation to linear protocols and describe a new communication protocol for perfect quantum network coding that improves over the previous one as follows: (i) the new protocol does not put any condition on the underlying classical coding scheme, that is, it can simulate nonlinear communication protocols as well, and (ii) the amount of classical communication sent in the protocol is significantly reduced. |
| 1012.4621 | Self-organized Emergence of Navigability on Small-World Networks | cs.SI physics.soc-ph | This paper mainly investigates why small-world networks are navigable and how to navigate small-world networks. We find that the navigability can naturally emerge from self-organization in the absence of prior knowledge about the underlying reference frames of networks. Through a process of information exchange and accumulation on networks, a hidden metric space for navigation on networks is constructed. Navigation based on distances between vertices in the hidden metric space can efficiently deliver messages on small-world networks, in which long-range connections play an important role. Numerical simulations further suggest that a high clustering coefficient and a low diameter are both necessary for navigability. These interesting results provide profound insights into scalable routing on the Internet, given its distributed and localized requirements. |
| 1012.4623 | Fitness-driven deactivation in network evolution | physics.soc-ph cond-mat.dis-nn cs.SI | Individual nodes in evolving real-world networks typically experience growth and decay, that is, the popularity and influence of individuals peaks and then fades. In this paper, we study this phenomenon via an intrinsic nodal fitness function and an intuitive aging mechanism. Each node of the network is endowed with a fitness which represents its activity. All the nodes have two discrete stages: active and inactive. The evolution of the network combines the addition of new active nodes randomly connected to existing active ones and the deactivation of old active nodes with probability inversely proportional to their fitnesses. We obtain a structured exponential network when the fitness distribution of the individuals is homogeneous and a structured scale-free network with heterogeneous fitness distributions. Furthermore, we recover two universal scaling laws of the clustering coefficient for both cases, $C(k) \sim k^{-1}$ and $C \sim n^{-1}$, where $k$ and $n$ refer to the node degree and the number of active individuals, respectively. These results offer a new simple description of the growth and aging of networks where intrinsic features of individual nodes drive their popularity, and hence degree. |
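
The growth rule this abstract describes (add an active node connected to random active nodes, then deactivate an old active node with probability inversely proportional to its fitness) is easy to simulate. The sketch below is a toy instantiation under stated assumptions: the active core is held at a fixed small size and the fitness law is an arbitrary uniform choice, both illustrative rather than the paper's exact model.

```python
import random

def evolve(T, m=2, seed=1):
    """Grow a network for T steps: each new active node attaches to m
    random active nodes; once the active core exceeds m+1 nodes, one
    active node is deactivated with probability proportional to the
    inverse of its fitness (low fitness -> deactivated sooner)."""
    rng = random.Random(seed)
    fit, active, edges = {}, [], []
    for new in range(T):
        fit[new] = rng.random() + 0.1          # illustrative fitness law
        targets = rng.sample(active, min(m, len(active)))
        edges += [(new, t) for t in targets]
        active.append(new)
        if len(active) > m + 1:                # fixed-size active core
            inv = [1.0 / fit[v] for v in active]
            victim = rng.choices(active, weights=inv)[0]
            active.remove(victim)
    return edges, active

edges, active = evolve(200)
print(len(active))  # 3: the active core stays small while the network grows
```

Because each new node only sees the current active core, old nodes stop gaining links once deactivated, which is the aging mechanism behind the degree distributions the abstract reports.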
| 1012.4668 | Distributed Detection over Random Networks: Large Deviations Performance Analysis | cs.IT math.IT | We study the large deviations performance, i.e., the exponential decay rate of the error probability, of distributed detection algorithms over random networks. At each time step $k$ each sensor: 1) averages its decision variable with the neighbors' decision variables; and 2) accounts on-the-fly for its new observation. We show that distributed detection exhibits a "phase change" behavior. When the rate of network information flow (the speed of averaging) is above a threshold, then distributed detection is asymptotically equivalent to the optimal centralized detection, i.e., the exponential decay rate of the error probability for distributed detection equals the Chernoff information. When the rate of information flow is below a threshold, distributed detection achieves only a fraction of the Chernoff information rate; we quantify this achievable rate as a function of the network rate of information flow. Simulation examples demonstrate our theoretical findings on the behavior of distributed detection over random networks. |
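
The two-step recursion in this abstract (average with neighbors, then absorb the newest observation) can be sketched directly. The sketch below is an illustration under simplifying assumptions: a fixed, fully connected weight matrix rather than a random network, and Gaussian log-likelihood-ratio observations with an invented mean.

```python
import random

def running_consensus(W, llr_streams, steps):
    """Consensus+innovations detection: each sensor 1) averages its
    decision variable with its neighbors' via weight matrix W, then
    2) adds its newest log-likelihood-ratio (LLR) observation."""
    n = len(W)
    x = [0.0] * n
    for k in range(steps):
        x = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + llr_streams[i][k] for i in range(n)]
    return x

random.seed(0)
n, steps = 4, 400
# Under H1, each sensor's per-sample LLR has positive mean (illustrative).
llr = [[random.gauss(0.5, 1.0) for _ in range(steps)] for _ in range(n)]
# Doubly stochastic averaging weights: keep half, share the rest equally.
W = [[0.5 if i == j else 0.5 / (n - 1) for j in range(n)] for i in range(n)]
x = running_consensus(W, llr, steps)
print(all(xi > 0 for xi in x))  # True: every sensor decides H1
```

The "speed of averaging" the abstract refers to corresponds to the spectral gap of W; shrinking the off-diagonal weights slows information flow and is what pushes the system below the phase-change threshold.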
| 1012.4715 | Joint Unitary Triangularization for MIMO Networks | cs.IT math.IT | This work considers communication networks where individual links can be described as MIMO channels. Unlike orthogonal modulation methods (such as the singular-value decomposition), we allow interference between sub-channels, which can be removed by the receivers via successive cancellation. The degrees of freedom earned by this relaxation are used for obtaining a basis which is simultaneously good for more than one link. Specifically, we derive necessary and sufficient conditions for shaping the ratio vector of sub-channel gains of two broadcast-channel receivers. We then apply this to two scenarios. First, in digital multicasting we present a practical capacity-achieving scheme which only uses scalar codes and linear processing. Then, we consider the joint source-channel problem of transmitting a Gaussian source over a two-user MIMO channel, where we show the existence of non-trivial cases in which the optimal distortion pair (which for high signal-to-noise ratios equals the optimal point-to-point distortions of the individual users) may be achieved by employing a hybrid digital-analog scheme over the induced equivalent channel. These scenarios demonstrate the advantage of choosing a modulation basis based upon multiple links in the network; thus we coin the approach "network modulation". |
| 1012.4752 | Semantic Web: Who is who in the field - A bibliometric analysis | cs.DL cs.IR | The Semantic Web is one of the main efforts aiming to enhance human and machine interaction by representing data in a machine-understandable way, so that machines can mediate data and services. It is a fast-moving and multidisciplinary field. This study conducts a thorough bibliometric analysis of the field by collecting data from Web of Science (WOS) and Scopus for the period 1960-2009. It utilizes a total of 44,157 papers with 651,673 citations from Scopus, and 22,951 papers with 571,911 citations from WOS. Based on these papers and citations, it evaluates the research performance of the Semantic Web (SW) by identifying the most productive players, major scholarly communication media, highly cited authors, influential papers and emerging stars. |
| 1012.4755 | Mutual information, matroids and extremal dependencies | cs.IT math.IT | In this paper, it is shown that the rank function of a matroid can be represented by a "mutual information function" if and only if the matroid is binary. The mutual information function considered is the one measuring the amount of information between the inputs (binary uniform) and the output of a multiple access channel (MAC). Moreover, it is shown that a MAC whose mutual information function is integer valued is "equivalent" to a linear deterministic MAC, in the sense that it essentially contains at the output no more information than some linear forms of the inputs. These notes put emphasis on the connection between mutual information functionals and rank functions in matroid theory, without assuming prior knowledge on these two subjects. The first section introduces mutual information functionals, the second section introduces basic notions of matroid theory, and the third section connects these two subjects. It is also shown that entropic matroids studied in the literature correspond to specific cases of MAC matroids. |
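
A binary matroid is one representable over GF(2), so its rank function is simply the GF(2) rank of subsets of its representing vectors. As background for the abstract's result, the sketch below computes that rank function; `gf2_rank` is an illustrative helper using the standard XOR-basis technique, not code from the paper.

```python
def gf2_rank(rows):
    """Rank over GF(2) of 0/1 vectors given as integer bitmasks -- the
    rank function of the binary matroid the vectors represent."""
    basis = []
    for r in rows:
        for b in basis:
            r = min(r, r ^ b)  # clear b's leading bit from r if present
        if r:                  # r is independent of the basis so far
            basis.append(r)
    return len(basis)

# Three vectors over GF(2): e1 = 0b01, e2 = 0b10, e1 + e2 = 0b11.
print(gf2_rank([0b01, 0b10, 0b11]))  # 2: any two span the third
print(gf2_rank([0b01, 0b11]))        # 2: {e1, e1+e2} is independent
```

The paper's theorem says exactly these GF(2)-rank functions, and no other matroid rank functions, arise as mutual information functions of a MAC with binary uniform inputs.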
| 1012.4759 | Chem2Bio2RDF: A Linked Open Data Portal for Chemical Biology | cs.IR q-bio.OT | The Chem2Bio2RDF portal is a Linked Open Data (LOD) portal for systems chemical biology aimed at facilitating drug discovery. It converts around 25 different datasets on genes, compounds, drugs, pathways, side effects, diseases, and MEDLINE/PubMed documents into RDF triples and links them to other LOD bubbles, such as Bio2RDF, LODD and DBPedia. The portal is based on the D2R server and provides a SPARQL endpoint, but adds a few unique features such as an RDF faceted browser, a user-friendly SPARQL query generator, a MEDLINE/PubMed cross-validation service, and a Cytoscape visualization plugin. Three use cases demonstrate the functionality and usability of this portal. |
| 1012.4776 | Automatic Estimation of the Exposure to Lateral Collision in Signalized Intersections using Video Sensors | cs.AI | Intersections constitute one of the most dangerous elements in road systems. Traffic signals remain the most common way to control traffic at high-volume intersections and offer many opportunities to apply intelligent transportation systems to make traffic more efficient and safe. This paper describes an automated method to estimate the temporal exposure of road users crossing the conflict zone to lateral collision with road users originating from a different approach. This component is part of a larger system relying on video sensors to provide queue lengths and spatial occupancy that are used for real time traffic control and monitoring. The method is evaluated on data collected during a real world experiment. |
| 1012.4795 | On the Equivalence of the General Covariance Union (GCU) and Minimum Enclosing Ellipsoid (MEE) Problems | math.OC cs.SY | In this paper we describe General Covariance Union (GCU) and show that solutions to the GCU and Minimum Enclosing Ellipsoid (MEE) problems are equivalent. This is a surprising result because GCU is defined over positive semidefinite (PSD) matrices with statistical interpretations while MEE involves PSD matrices with geometric interpretations. Their equivalence establishes an intersection between the seemingly disparate methodologies of covariance-based (e.g., Kalman) filtering and bounded region approaches to data fusion. |
| 1012.4814 | Noisy channel coding via privacy amplification and information reconciliation | quant-ph cs.IT math.IT | We show that optimal protocols for noisy channel coding of public or private information over either classical or quantum channels can be directly constructed from two more primitive information-theoretic tools: privacy amplification and information reconciliation, also known as data compression with side information. We do this in the one-shot scenario of structureless resources, and formulate our results in terms of the smooth min- and max-entropy. In the context of classical information theory, this shows that essentially all two-terminal protocols can be reduced to these two primitives, which are in turn governed by the smooth min- and max-entropies, respectively. In the context of quantum information theory, the recently-established duality of these two protocols means essentially all two-terminal protocols can be constructed using just a single primitive. |
| 1012.4824 | Input Parameters Optimization in Swarm DS-CDMA Multiuser Detectors | cs.AI math.CO stat.CO | In this paper, the uplink direct sequence code division multiple access (DS-CDMA) multiuser detection problem (MuD) is studied from a heuristic perspective based on particle swarm optimization (PSO). Considering different system improvements for future technologies, such as high-order modulation and diversity exploitation, a complete parameter optimization procedure for the PSO applied to the MuD problem is provided, which represents the major contribution of this paper. Furthermore, the performance of the PSO-MuD is briefly analyzed via Monte Carlo simulations. Simulation results show that, after convergence, the performance reached by the PSO-MuD is much better than that of the conventional detector, and somewhat close to the single user bound (SuB). A flat Rayleigh channel is initially considered, but the results are then extended to diversity (time and spatial) channels. |
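
For readers unfamiliar with PSO, the sketch below shows the algorithm and the input parameters whose tuning the abstract is about: inertia `w` and the cognitive/social coefficients `c1`/`c2`. It is a generic textbook PSO minimizing a toy sphere function, not the paper's MuD objective; all hyperparameter values are illustrative.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal PSO minimizing f over [-5, 5]^dim. w (inertia), c1
    (cognitive) and c2 (social) are the input parameters to be tuned."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda x: sum(xi * xi for xi in x), dim=4)
print(val < 1e-2)  # True: converges near the global optimum at the origin
```

In a PSO-MuD the candidate positions would be bit (or symbol) vectors scored by the detection likelihood; the parameter-tuning question is the same.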
| 1012.4845 | Directed factor graph based fault diagnosis model construction for mode switching satellite power system | cs.SY | A satellite power system is a complex, highly interconnected hybrid system that exhibits nonlinear and mode-switching behaviors. The directed factor graph is an inference model for fault diagnosis using probabilistic reasoning techniques. A novel approach for constructing the directed factor graph structure based on a hybrid bond graph model is proposed. The system components' statuses and their fault symptoms are treated as hypotheses and evidence, respectively. The cause-effect relations between hypotheses and evidence are identified and concluded through qualitative equations and causal path analysis on the hybrid bond graph model. A power supply module of a satellite power system is provided as a case study to show the feasibility and validity of the proposed method. |
| 1012.4855 | Target-driven merging of Taxonomies | cs.DB | The proliferation of ontologies and taxonomies in many domains increasingly demands the integration of multiple such ontologies. The goal of ontology integration is to merge two or more given ontologies in order to provide a unified view on the input ontologies while maintaining all information coming from them. We propose a new taxonomy merging algorithm that, given as input two taxonomies and an equivalence matching between them, can generate an integrated taxonomy in a fully automatic manner. The approach is target-driven, i.e. we merge a source taxonomy into the target taxonomy and preserve the structure of the target ontology as much as possible. We also discuss how to extend the merge algorithm by providing auxiliary information, such as additional relationships between source and target concepts, in order to semantically improve the final result. The algorithm was implemented in a working prototype and evaluated using synthetic and real-world scenarios. |
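
The target-driven idea (merge the source into the target, preserving the target's structure) can be illustrated on tiny parent-map taxonomies. The sketch below is my own toy formulation, not the paper's algorithm: unmatched source concepts are simply re-attached under the target image of their nearest matched ancestor, and all concept names are invented.

```python
def merge_taxonomies(target, source, mapping):
    """Target-driven merge. `target` and `source` map each concept to
    its parent (root maps to None); `mapping` sends equivalent source
    concepts to target concepts. Unmatched source concepts are attached
    under the target image of their closest matched ancestor, so the
    target structure is fully preserved."""
    merged = dict(target)
    for concept, parent in source.items():
        if concept in mapping:
            continue                     # equivalent concept already in target
        anc = parent
        while anc is not None and anc not in mapping:
            anc = source[anc]            # climb toward a matched ancestor
        merged[concept] = mapping.get(anc)  # no matched ancestor -> new root
    return merged

target = {"thing": None, "vehicle": "thing", "car": "vehicle"}
source = {"root": None, "auto": "root", "suv": "auto", "bike": "root"}
mapping = {"root": "thing", "auto": "car"}
merged = merge_taxonomies(target, source, mapping)
print(merged["suv"], merged["bike"])  # car thing
```

A real merge algorithm would additionally preserve the relative structure among consecutive unmatched source concepts; collapsing them onto the nearest matched ancestor is the simplification made here.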
| 1012.4875 | Upper Tag Ontology (UTO) For Integrating Social Tagging Data | cs.IR cs.SI | Data integration and mediation have become central concerns of information technology over the past few decades. With the advent of the Web and the rapid increases in the amount of data and the number of Web documents and users, researchers have focused on enhancing the interoperability of data through the development of metadata schemes. Other researchers have looked to the wealth of metadata generated by bookmarking sites on the Social Web. While several existing ontologies capitalize on the semantics of metadata created by tagging activities, the Upper Tag Ontology (UTO) emphasizes the structure of tagging activities to facilitate modeling of tagging data and the integration of data from different bookmarking sites as well as the alignment of tagging ontologies. UTO is described and its utility in harvesting, modeling, integrating, searching and analyzing data is demonstrated with metadata harvested from three major social tagging systems (Delicious, Flickr and YouTube). |
| 1012.4889 | Tight Bounds for Lp Samplers, Finding Duplicates in Streams, and Related Problems | cs.DS cs.CC cs.DB | In this paper, we present near-optimal space bounds for Lp-samplers. Given a stream of updates (additions and subtractions) to the coordinates of an underlying vector x \in R^n, a perfect Lp sampler outputs the i-th coordinate with probability \|x_i\|^p/\|\|x\|\|_p^p. In SODA 2010, Monemizadeh and Woodruff showed polylog space upper bounds for approximate Lp-samplers and demonstrated various applications of them. Very recently, Andoni, Krauthgamer and Onak improved the upper bounds and gave an O(\epsilon^{-p} log^3 n) space, \epsilon relative error, constant failure rate Lp-sampler for p \in [1,2]. In this work, we give another such algorithm requiring only O(\epsilon^{-p} log^2 n) space for p \in (1,2). For p \in (0,1), our space bound is O(\epsilon^{-1} log^2 n), while for the $p=1$ case we have an O(log(1/\epsilon)\epsilon^{-1} log^2 n) space algorithm. We also give an O(log^2 n) bits zero relative error L0-sampler, improving the O(log^3 n) bits algorithm due to Frahling, Indyk and Sohler. As an application of our samplers, we give better upper bounds for the problem of finding duplicates in data streams. In case the length of the stream is longer than the alphabet size, L1 sampling gives us an O(log^2 n) space algorithm, thus improving the previous O(log^3 n) bound due to Gopalan and Radhakrishnan. In the second part of our work, we prove an Omega(log^2 n) lower bound for sampling from 0, \pm 1 vectors (in this special case, the parameter p is not relevant for Lp sampling). This matches the space of our sampling algorithms for constant \epsilon > 0. We also prove tight space lower bounds for the finding duplicates and heavy hitters problems. We obtain these lower bounds using reductions from the communication complexity problem augmented indexing. |
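
To make the target distribution concrete: a perfect Lp sampler returns index i with probability |x_i|^p / ||x||_p^p. The sketch below draws from exactly that distribution, but offline with the whole vector in hand; the paper's entire contribution is achieving this in polylogarithmic space over a stream of updates, which this illustration does not attempt.

```python
import random

def lp_sample(x, p, rng):
    """Perfect offline Lp sampler: return index i with probability
    |x_i|^p / ||x||_p^p (inverse-CDF sampling over the weights)."""
    weights = [abs(v) ** p for v in x]
    total = sum(weights)
    r = rng.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r < 0:
            return i
    return len(x) - 1  # guard against floating-point round-off

rng = random.Random(42)
x = [3, -1, 0, 2]           # L1 weights: 3/6, 1/6, 0, 2/6
counts = [0] * len(x)
for _ in range(20000):
    counts[lp_sample(x, p=1, rng=rng)] += 1
print(counts[2] == 0 and counts[0] > counts[3] > counts[1])  # True
```

Note that a zero coordinate is never sampled, and the sign of a coordinate is irrelevant; both properties carry over to the streaming samplers the paper constructs.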
| 1012.4905 | Convolutional Goppa codes defined on fibrations | cs.IT math.IT | We define a new class of Convolutional Codes in terms of fibrations of algebraic varieties, generalizing our previous constructions of Convolutional Goppa Codes. Using this general construction we can give several examples of Maximum Distance Separable (MDS) Convolutional Codes. |
| 1012.4924 | Information-Theoretic Capacity and Error Exponents of Stationary Point Processes under Random Additive Displacements | cs.IT math.IT math.PR | This paper studies the Shannon regime for the random displacement of stationary point processes. Let each point of some initial stationary point process in $\mathbb{R}^n$ give rise to one daughter point, the location of which is obtained by adding a random vector to the coordinates of the mother point, with all displacement vectors independently and identically distributed for all points. The decoding problem is then the following: the whole mother point process is known, as well as the coordinates of some daughter point; the displacements are only known through their law; can one find the mother of this daughter point? The Shannon regime is that where the dimension $n$ tends to infinity and where the logarithm of the intensity of the point process is proportional to $n$. We show that this problem exhibits a sharp threshold: if the sum of the proportionality factor and of the differential entropy rate of the noise is positive, then the probability of finding the right mother point tends to 0 with $n$ for all point processes and decoding strategies. If this sum is negative, there exist mother point processes, for instance Poisson, and decoding strategies, for instance maximum likelihood, for which the probability of finding the right mother tends to 1 with $n$. We then use large deviations theory to show that in the latter case, if the entropy spectrum of the noise satisfies a large deviation principle, then the error probability goes exponentially fast to 0 with an exponent that is given in closed form in terms of the rate function of the noise entropy spectrum. This is done for two classes of mother point processes: Poisson and Mat\'ern. The practical interest for information theory comes from the explicit connection that we also establish between this problem and the estimation of error exponents in Shannon's additive noise channel with power constraints on the codewords. |
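
The decoding question in this abstract has a very concrete form when the displacement law is i.i.d. Gaussian: maximum-likelihood decoding of the mother is nearest-neighbor search. The sketch below illustrates this in a finite toy setting (a uniform sample of mothers instead of a stationary point process, invented dimensions and noise level); it is not the paper's asymptotic analysis.

```python
import random

def decode_mother(mothers, daughter):
    """ML decoding under i.i.d. Gaussian displacements: the most likely
    mother of a daughter point is the nearest mother in Euclidean
    distance (the log-likelihood is a decreasing function of it)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(mothers)), key=lambda i: dist2(mothers[i], daughter))

rng = random.Random(7)
n = 40                       # dimension; the paper's regime is n -> infinity
mothers = [[rng.uniform(0, 30) for _ in range(n)] for _ in range(50)]
true = 17
daughter = [m + rng.gauss(0, 0.5) for m in mothers[true]]
print(decode_mother(mothers, daughter) == true)  # True
```

With small noise relative to the typical inter-mother distance (the "negative sum" side of the paper's threshold), the nearest mother is the true one with probability approaching 1 as the dimension grows.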
1012.4928
|
Calibration Using Matrix Completion with Application to Ultrasound
Tomography
|
cs.LG cs.IT math.IT
|
We study the calibration process in circular ultrasound tomography devices
where the sensor positions deviate from the circumference of a perfect circle.
This problem arises in a variety of applications in signal processing ranging
from breast imaging to sensor network localization. We introduce a novel method
of calibration/localization based on the time-of-flight (ToF) measurements
between sensors when the enclosed medium is homogeneous. In the presence of all
the pairwise ToFs, one can easily estimate the sensor positions using the
multi-dimensional scaling (MDS) method. In practice, however, due to the
transitional behaviour of the sensors and the beam form of the transducers, the
ToF measurements for close-by sensors are unavailable. Further, random
malfunctioning of the sensors leads to random missing ToF measurements. On top
of the missing entries, in practice an unknown time delay is also added to the
measurements. In this work, we incorporate the fact that a matrix defined from
all the ToF measurements is of rank at most four. In order to estimate the
missing ToFs, we apply a state-of-the-art low-rank matrix completion algorithm,
OptSpace. To find the correct positions of the sensors (our ultimate goal), we
then apply MDS. We show analytic bounds on the overall error of the whole
process in the presence of noise and hence deduce its robustness. Finally, we
confirm the functionality of our method in practice by simulations mimicking
the measurements of a circular ultrasound tomography device.
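For a complete and noiseless ToF-derived distance matrix, the final localization step reduces to classical multidimensional scaling. The sketch below is illustrative only (the paper's contribution lies in completing the missing ToFs with OptSpace before this step; the function name and toy geometry are our own):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover point coordinates (up to rigid motion) from a matrix of
    pairwise Euclidean distances D via classical multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# toy example: sensors slightly perturbed from a unit circle
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
X = np.c_[np.cos(angles), np.sin(angles)] + 0.01 * rng.standard_normal((8, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)

Xhat = classical_mds(D)
# Xhat matches X up to rotation/reflection/translation,
# so all pairwise distances agree
Dhat = np.linalg.norm(Xhat[:, None] - Xhat[None, :], axis=-1)
assert np.allclose(D, Dhat, atol=1e-8)
```

The recovered configuration is only determined up to a rigid motion, which is why calibration pipelines typically fix a reference frame afterwards.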
|
1012.4981
|
Local Minima of a Quadratic Binary Functional with a Quasi-Hebbian
Connection Matrix
|
cond-mat.dis-nn cs.NE
|
The local minima of a quadratic functional depending on binary variables are
discussed. An arbitrary connection matrix can be presented in the form of a
quasi-Hebbian expansion in which each pattern is supplied with its own
individual weight. For such matrices, statistical physics methods allow one to
derive an
equation describing the local minima of the functional. A model where only one
weight differs from the others is discussed in detail. In this case the
equation can be solved analytically. The critical values of the weight, at
which the energy landscape is reconstructed, are obtained. These results are
confirmed by computer simulations.
|
1012.5041
|
Jensen divergence based on Fisher's information
|
cs.IT math.IT physics.data-an
|
The measure of Jensen-Fisher divergence between probability distributions is
introduced and its theoretical grounds set up. This quantity, in contrast to
the remaining Jensen divergences, is very sensitive to the fluctuations of the
probability distributions because it is controlled by the (local) Fisher
information, which is a gradient functional of the distribution. So, it is
appropriate and informative when studying the similarity of distributions,
mainly for those having oscillatory character. The new Jensen-Fisher divergence
shares with the Jensen-Shannon divergence the following properties:
non-negativity, additivity when applied to an arbitrary number of probability
densities, symmetry under exchange of these densities, vanishing if and only if
all the densities are equal, and definiteness even when these densities present
non-common zeros. Moreover, the Jensen-Fisher divergence is shown to be
expressible in terms of the relative Fisher information, just as the
Jensen-Shannon divergence is expressible in terms of the Kullback-Leibler or
relative Shannon entropy.
Finally the Jensen-Shannon and Jensen-Fisher divergences are compared for the
following three large, non-trivial and qualitatively different families of
probability distributions: the sinusoidal, generalized gamma-like and
Rakhmanov-Hermite distributions.
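For reference, the Jensen-Shannon divergence against which the new measure is compared has a standard closed form for discrete distributions, JS(p, q) = H((p+q)/2) - (H(p)+H(q))/2. A minimal sketch (this is the textbook JSD, not the paper's Jensen-Fisher measure):

```python
import numpy as np

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (in bits) between two discrete
    distributions: JS(p, q) = H(m) - (H(p) + H(q)) / 2, m = (p + q) / 2."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)

    def H(r):
        r = r[r > 0]                     # 0 * log 0 := 0
        return -np.sum(r * np.log2(r))

    return H(m) - 0.5 * (H(p) + H(q))

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
js = jensen_shannon(p, q)               # well-defined despite non-common zeros
assert abs(js - 0.5) < 1e-12
assert jensen_shannon(p, p) == 0.0      # vanishes iff the densities are equal
```

Note that, as the abstract stresses, the divergence stays finite even when the two densities have non-common zeros, unlike the Kullback-Leibler divergence.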
|
1012.5071
|
Extension of the Blahut-Arimoto algorithm for maximizing directed
information
|
cs.IT math.IT
|
We extend the Blahut-Arimoto algorithm for maximizing Massey's directed
information. The algorithm can be used for estimating the capacity of channels
with delayed feedback, where the feedback is a deterministic function of the
output. In order to do so, we apply the ideas of the regular Blahut-Arimoto
algorithm, i.e., the alternating maximization procedure, to our new problem.
We provide both upper and lower bound sequences that converge to the optimum
value. Our main insight in this paper is that in order to find the maximum of
the directed information over causal conditioning probability mass function
(PMF), one can use a backward index time maximization combined with the
alternating maximization procedure. We give a detailed description of the
algorithm, its complexity, the memory needed, and several numerical examples.
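The regular Blahut-Arimoto algorithm that serves as the starting point can be sketched as follows, for a memoryless channel without feedback; the paper's extension (backward index-time maximization over causal conditioning PMFs) is not shown. The function name and the strictly-positive-channel assumption are ours:

```python
import numpy as np

def blahut_arimoto(P, tol=1e-12, max_iter=1000):
    """Capacity (bits/use) of a DMC with transition matrix P[x, y] via the
    classical alternating-maximization iteration. Assumes P > 0 entrywise
    to keep the logs finite (a simplification for this sketch)."""
    nx, _ = P.shape
    r = np.full(nx, 1.0 / nx)                        # input distribution
    for _ in range(max_iter):
        joint = r[:, None] * P                       # p(x, y)
        post = joint / joint.sum(axis=0, keepdims=True)   # q(x | y)
        # maximization step: r(x) ∝ exp( sum_y P(y|x) log q(x|y) )
        logr = np.sum(P * np.log(post), axis=1)
        r_new = np.exp(logr - logr.max())
        r_new /= r_new.sum()
        done = np.max(np.abs(r_new - r)) < tol
        r = r_new
        if done:
            break
    joint = r[:, None] * P
    py = joint.sum(axis=0)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / (r[:, None] * py[None, :])[mask]))

# binary symmetric channel with crossover 0.1: capacity = 1 - H2(0.1)
eps = 0.1
P = np.array([[1 - eps, eps], [eps, 1 - eps]])
C = blahut_arimoto(P)
H2 = -eps * np.log2(eps) - (1 - eps) * np.log2(1 - eps)
assert abs(C - (1 - H2)) < 1e-9
```

Each pass alternates between the posterior update and the input-distribution update, which is exactly the alternating maximization the abstract refers to.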
|
1012.5074
|
Power-Rate Allocation in DS/CDMA Based on Discretized Verhulst
Equilibrium
|
cs.CE
|
This paper proposes to extend the discrete Verhulst power equilibrium
approach, previously suggested in [1], to the power-rate optimal allocation
problem. Multirate users associated with different types of traffic are
aggregated into distinct user classes, with minimum rate allocation and QoS
assured per user. Herein, the Verhulst power allocation algorithm is adapted to
the joint power-rate control problem in single-input-single-output DS/CDMA. The
analysis takes into account the convergence time, the quality of the solution
in terms of the normalized squared error (NSE) with respect to the analytical
solution based on the interference-matrix inverse, and the computational
complexity. Numerical results demonstrate the validity of the
proposed resource allocation methodology.
|
1012.5113
|
Timed Game Abstraction of Control Systems
|
cs.SY
|
This paper proposes a method for abstracting control systems by timed game
automata, and is aimed at obtaining automatic controller synthesis.
The proposed abstraction is based on partitioning the state space of a
control system using positive and negative invariant sets, generated by
Lyapunov functions. This partitioning ensures that the vector field of the
control system is transversal to the facets of the cells, which induces some
desirable properties of the abstraction. To allow a rich class of control
systems to be abstracted, the update maps of the timed game automaton are
extended.
Conditions on the partitioning of the state space and the control are set up
to obtain sound abstractions. Finally, an example is provided to demonstrate
the method applied to a control problem related to navigation.
|
1012.5174
|
SNEED: Enhancing Network Security Services Using Network Coding and
Joint Capacity
|
cs.NI cs.CR cs.IT math.IT
|
Traditional network security protocols depend mainly on developing
cryptographic schemes and on using biometric methods. These have led to several
network security protocols whose strength rests on the difficulty of solving
intractable mathematical problems such as factoring large integers.
In this paper, Security of Networks Employing Encoding and Decoding (SNEED)
is developed to mitigate single and multiple link attacks. Network coding and
shared capacity among the working paths are used to provide data protection and
data integrity against network attackers and eavesdroppers.
SNEED can be incorporated into various applications in on-demand TV,
satellite communications, and multimedia security. Finally, it is shown that
SNEED can be implemented easily wherever there are k edge-disjoint paths between
two core nodes (routers or switches) in an enterprise network.
|
1012.5197
|
Accessible Capacity of Secondary Users
|
cs.IT math.IT
|
A new problem formulation is presented for the Gaussian interference channels
(GIFC) with two pairs of users, which are distinguished as primary users and
secondary users, respectively. The primary users employ a pair of encoder and
decoder that were originally designed to satisfy a given error performance
requirement under the assumption that no interference exists from other users.
In the scenario when the secondary users attempt to access the same medium, we
are interested in the maximum transmission rate (defined as {\em accessible
capacity}) at which the secondary users can communicate reliably without
violating the error performance requirement of the primary users, under the
constraint that the primary encoder (not the decoder) is kept unchanged. By
modeling the
primary encoder as a generalized trellis code (GTC), we are then able to treat
the secondary link and the cross link from the secondary transmitter to the
primary receiver as finite state channels (FSCs). Based on this, upper and
lower bounds on the accessible capacity are derived. The impact of the error
performance requirement by the primary users on the accessible capacity is
analyzed by using the concept of interference margin. In the case of
non-trivial interference margin, the secondary message is split into common and
private parts and then encoded by superposition coding, which delivers a lower
bound on the accessible capacity. For some special cases, these bounds can be
computed numerically by using the BCJR algorithm. Numerical results are also
provided to gain insight into the impacts of the GTC and the error performance
requirement on the accessible capacity.
|
1012.5208
|
Texture feature extraction in the spatial-frequency domain for
content-based image retrieval
|
cs.CV cs.IR cs.MM
|
The advent of large scale multimedia databases has led to great challenges in
content-based image retrieval (CBIR). Even though CBIR is considered an
emerging field of research, it constitutes a strong background for new
methodologies and system implementations. Therefore, many research
contributions are focusing on techniques enabling higher image retrieval
accuracy while preserving low level of computational complexity. Image
retrieval based on texture features is receiving special attention because of
the omnipresence of this visual feature in most real-world images. This paper
highlights the state-of-the-art and current progress relevant to texture-based
image retrieval and spatial-frequency image representations. In particular, it
gives an overview of statistical methodologies and techniques employed for
texture feature extraction using most popular spatial-frequency image
transforms, namely discrete wavelets, Gabor wavelets, dual-tree complex wavelet
and contourlets. Indications are also given about used similarity measurement
functions and most important achieved results.
|
1012.5224
|
Max-Flow Min-Cut Theorems for Multi-User Communication Networks
|
cs.IT cs.LO math.CO math.IT
|
The paper presents four distinct new ideas and results for communication
networks:
1) We show that relay-networks (i.e. communication networks where different
nodes use the same coding functions) can be used to model dynamic networks.
2) We introduce {\em the term model}, which is a simple, graph-free symbolic
approach to communication networks.
3) We state and prove variants of a theorem concerning the dispersion of
information in single-receiver communications.
4) We show that the solvability of an abstract multi-user communication
problem is equivalent to the solvability of a single-target communication in a
suitable relay network.
In the paper, we develop a number of technical ramifications of these ideas
and results. One technical result is a max-flow min-cut theorem for the R\'enyi
entropy with order less than one, given that the sources are equiprobably
distributed; conversely, we show that the max-flow min-cut theorem fails for
the R\'enyi entropy with order greater than one. We leave the status of the
theorem with regards to the ordinary Shannon Entropy measure (R\'enyi entropy
of order one and the limit case between validity or failure of the theorem) as
an open question. In static communication networks with a single receiver, a
simple application of Menger's theorem shows that the optimal throughput can be
achieved without network coding, i.e., just by ordinary packet switching. This
fails dramatically in relay networks with
a single receiver. We show that even a powerful method like linear network
coding fails miserably for relay networks. With that in mind, it is noticeable
that our rather weak form of network coding (routing with dynamic headers) is
asymptotically sufficient to reach capacity.
|
1012.5240
|
Exploring Grid Polygons Online
|
cs.CG cs.RO
|
We investigate the exploration problem of a short-sighted mobile robot moving
in an unknown cellular room. To explore a cell, the robot must enter it. Once
inside, the robot knows which of the 4 adjacent cells exist and which are
boundary edges. The robot starts from a specified cell adjacent to the room's
outer wall; it visits each cell, and returns to the start. Our interest is in a
short exploration tour; that is, in keeping the number of multiple cell visits
small. For arbitrary environments containing no obstacles we provide a strategy
producing tours of length S <= C + 1/2 E - 3, and for environments containing
obstacles we provide a strategy bounded by S <= C + 1/2 E + 3H + WCW - 2, where
C denotes the number of cells (the area), E the number of boundary edges (the
perimeter), H the number of obstacles, and WCW a measure of the sinuosity of
the given environment.
|
1012.5248
|
Matrix Insertion-Deletion Systems
|
cs.FL cs.CC cs.CL cs.DM
|
In this article, we consider for the first time the operations of insertion
and deletion working in a matrix controlled manner. We show that, similarly as
in the case of context-free productions, the computational power is strictly
increased when using a matrix control: computational completeness can be
obtained by systems with insertion or deletion rules involving at most two
symbols in a contextual or in a context-free manner and using only binary
matrices.
|
1012.5253
|
Exploring Simple Triangular and Hexagonal Grid Polygons Online
|
cs.CG cs.RO
|
We investigate the online exploration problem (aka covering) of a
short-sighted mobile robot moving in an unknown cellular environment with
hexagons and triangles as types of cells. To explore a cell, the robot must
enter it. Once inside, the robot knows which of the 3 or 6 adjacent cells exist
and which are boundary edges. The robot's task is to visit every cell in the
given environment and to return to the start. Our interest is in a short
exploration tour; that is, in keeping the number of multiple cell visits small.
For arbitrary environments containing no obstacles we provide a strategy
producing tours of length S <= C + 1/4 E - 2.5 for hexagonal grids, and S <= C
+ E - 4 for triangular grids. C denotes the number of cells (the area), and E
the number of boundary edges (the perimeter) of the given environment.
Further, we show that our strategy is 4/3-competitive in both types of grids,
and we provide lower bounds of 14/13 for hexagonal grids and 7/6 for triangular
grids.
|
1012.5306
|
An optimization strategy on prion AGAAAAGA amyloid fibril molecular
modeling
|
cs.CE physics.bio-ph q-bio.BM q-bio.QM
|
X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy are
two powerful tools to determine the protein 3D structure. However, not all
proteins can be successfully crystallized, particularly for membrane proteins.
Although NMR spectroscopy is indeed very powerful in determining the 3D
structures of membrane proteins, same as X-ray crystallography, it is still
very time-consuming and expensive. Under many circumstances, due to the
noncrystalline and insoluble nature of some proteins, X-ray and NMR cannot be
used at all. Computational approaches, however, allow us to obtain a
description of the protein 3D structure at a submicroscopic level.
To the best of the author's knowledge, there is little structural data
available to date on the AGAAAAGA palindrome in the hydrophobic region
(113--120) of prion proteins, which falls just within the N-terminal
unstructured region (1--123) of prion proteins. Many experimental studies have
shown that the AGAAAAGA region has amyloid fibril forming properties and plays
an important role in prion diseases. However, due to the noncrystalline and
insoluble nature of the amyloid fibril, little structural data on the AGAAAAGA
is available. This paper introduces a simple optimization strategy to derive
the 3D atomic-resolution structure of prion AGAAAAGA amyloid fibrils. The
atomic-resolution structures of prion AGAAAAGA amyloid fibrils obtained in this
paper should be useful in the search for treatments for prion diseases in the
field of medicinal chemistry.
|
1012.5318
|
Condensation into ground state in binary string models
|
cs.IT math.IT
|
The ensemble of binary strings defined via a strong-interaction model exhibits
enhanced condensation (collapse) into the ground state below a certain
temperature. The non-interacting model shows gradual accumulation into the
ground state as the temperature approaches zero.
|
1012.5327
|
Computationally Efficient Modulation Level Classification Based on
Probability Distribution Distance Functions
|
cs.IT cs.PF math.IT stat.ML
|
We present a novel modulation level classification (MLC) method based on
probability distribution distance functions. The proposed method uses modified
Kuiper and Kolmogorov-Smirnov distances to achieve low computational complexity
and outperforms the state of the art methods based on cumulants and
goodness-of-fit tests. We derive the theoretical performance of the proposed
MLC method and verify it via simulations. The best classification accuracy,
under AWGN with SNR mismatch and phase jitter, is achieved with the proposed
MLC method using Kuiper distances.
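The two distance functions underlying the proposed MLC method are standard goodness-of-fit statistics between an empirical and a theoretical CDF. A sketch of their computation (the classification step, which compares these distances across candidate constellations, is not shown, and the function name is ours):

```python
import numpy as np

def ks_and_kuiper(samples, theoretical_cdf):
    """Kolmogorov-Smirnov and Kuiper distances between the empirical CDF of
    `samples` and a theoretical CDF: KS = max(D+, D-), Kuiper = D+ + D-."""
    x = np.sort(samples)
    n = len(x)
    F = theoretical_cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - F)   # ECDF above the theory
    d_minus = np.max(F - np.arange(0, n) / n)      # ECDF below the theory
    return max(d_plus, d_minus), d_plus + d_minus

rng = np.random.default_rng(1)
u = rng.random(10_000)                             # samples from Uniform(0, 1)
ks, kuiper = ks_and_kuiper(u, lambda t: t)         # tested against their own law
assert kuiper >= ks > 0
assert kuiper < 0.05                               # small when the law matches
```

The Kuiper distance sums the one-sided deviations instead of taking their maximum, which makes it equally sensitive over the whole support; the paper modifies both statistics to reduce complexity.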
|
1012.5339
|
Efficient Generation of Random Bits from Finite State Markov Chains
|
cs.IT math.IT
|
The problem of random number generation from an uncorrelated random source
(of unknown probability distribution) dates back to von Neumann's 1951 work.
Elias (1972) generalized von Neumann's scheme and showed how to achieve optimal
efficiency in unbiased random bits generation. Hence, a natural question is
what if the sources are correlated? Both Elias and Samuelson proposed methods
for generating unbiased random bits in the case of correlated sources (of
unknown probability distribution), specifically, they considered finite Markov
chains. However, their proposed methods are either inefficient or present
implementation difficulties. Blum (1986) devised an algorithm for efficiently
generating random bits from degree-2 finite Markov chains in expected linear
time; however, his elegant method is still far from optimal in information
efficiency. In this paper, we generalize Blum's algorithm to
arbitrary degree finite Markov chains and combine it with Elias's method for
efficient generation of unbiased bits. As a result, we provide the first known
algorithm that generates unbiased random bits from an arbitrary finite Markov
chain, operates in expected linear time and achieves the information-theoretic
upper bound on efficiency.
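The von Neumann (1951) scheme referenced above can be sketched in a few lines; the paper's contribution, generalizing Blum's algorithm to arbitrary-degree Markov chains and combining it with Elias's method, is substantially more involved and is not reproduced here:

```python
import random

def von_neumann_extract(bits):
    """von Neumann's 1951 scheme: from i.i.d. biased coin flips, emit one
    unbiased bit per unequal pair (01 -> 0, 10 -> 1); discard 00 and 11."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

random.seed(0)
# heavily biased source: P(1) = 0.8
biased = [1 if random.random() < 0.8 else 0 for _ in range(100_000)]
fair = von_neumann_extract(biased)
# pairs 01 and 10 are equally likely, so the output mean is near 1/2
assert abs(sum(fair) / len(fair) - 0.5) < 0.02
```

The scheme is simple but wasteful: it keeps only a 2p(1-p) fraction of the input pairs, which is exactly the inefficiency that Elias's generalization addresses.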
|
1012.5340
|
Relations between $\beta$ and $\delta$ for QP and LP in Compressed
Sensing Computations
|
cs.IT math.IT
|
In many compressed sensing applications, linear programming (LP) has been
used to reconstruct a sparse signal. When observation is noisy, the LP
formulation is extended to allow an inequality constraint and the solution is
dependent on a parameter $\delta$, related to the observation noise level.
Recently, some researchers also considered quadratic programming (QP) for
compressed sensing signal reconstruction and the solution in this case is
dependent on a Lagrange multiplier $\beta$. In this work, we investigate the
relation between $\delta$ and $\beta$ and derive an upper and a lower bound on
$\beta$ in terms of $\delta$. For a given $\delta$, these bounds can be used to
approximate $\beta$. Since $\delta$ is a physically related quantity and easy
to determine for an application while there is no easy way in general to
determine $\beta$, our results can be used to set $\beta$ when the QP is used
for compressed sensing. Our results and experimental verification also provide
some insight into the solutions generated by compressed sensing.
|
1012.5357
|
Quasirandom Rumor Spreading: An Experimental Analysis
|
cs.DS cs.SI
|
We empirically analyze two versions of the well-known "randomized rumor
spreading" protocol to disseminate a piece of information in networks. In the
classical model, in each round each informed node informs a random neighbor. In
the recently proposed quasirandom variant, each node has a (cyclic) list of its
neighbors. Once informed, it starts at a random position of the list, but from
then on informs its neighbors in the order of the list. While for sparse random
graphs a better performance of the quasirandom model could be proven, all other
results show that, independent of the structure of the lists, the same
asymptotic performance guarantees hold as for the classical model. In this
work, we compare the two models experimentally. This shows not only that the
quasirandom model is generally faster, but also that the runtime is more
concentrated around the mean. This is surprising given that much fewer random
bits are used in the quasirandom process. These advantages are also observed in
a lossy communication model, where each transmission does not reach its target
with a certain probability, and in an asynchronous model, where nodes send at
random times drawn from an exponential distribution. We also show that
typically the particular structure of the lists has little influence on the
efficiency.
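The two push protocols can be compared in a small simulation. The sketch below runs both on a cycle graph (illustrative only, not the experimental setup of the paper; function names are ours):

```python
import random

def rumor_rounds(adj, quasirandom, rng):
    """Rounds until all nodes are informed under push rumor spreading.
    Classical: each informed node pushes to a uniform random neighbor.
    Quasirandom: each node walks its cyclic neighbor list from a random start."""
    n = len(adj)
    informed = {0}
    pos = {0: rng.randrange(len(adj[0]))}     # list position per informed node
    rounds = 0
    while len(informed) < n:
        rounds += 1
        for v in list(informed):              # synchronous round
            if quasirandom:
                target = adj[v][pos[v] % len(adj[v])]
                pos[v] += 1
            else:
                target = rng.choice(adj[v])
            if target not in informed:
                informed.add(target)
                pos[target] = rng.randrange(len(adj[target]))
    return rounds

rng = random.Random(42)
n = 128
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]   # cycle graph
r_classical = rumor_rounds(adj, False, rng)
r_quasi = rumor_rounds(adj, True, rng)
# information travels at most one hop per round, so the diameter is a lower bound
assert r_classical >= n // 2 and r_quasi >= n // 2
```

On the cycle, the quasirandom walker never pushes twice in a row to the same already-informed neighbor, which hints at the reduced variance the experiments report.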
|
1012.5430
|
Trajectory Codes for Flash Memory
|
cs.IT math.IT
|
Flash memory is well-known for its inherent asymmetry: the flash-cell charge
levels are easy to increase but are hard to decrease. In a general rewriting
model, the stored data changes its value with certain patterns. The patterns of
data updates are determined by the data structure and the application, and are
independent of the constraints imposed by the storage medium. Thus, an
appropriate coding scheme is needed so that the data changes can be updated and
stored efficiently under the storage-medium's constraints.
In this paper, we define the general rewriting problem using a graph model.
It extends many known rewriting models such as floating codes, WOM codes,
buffer codes, etc. We present a new rewriting scheme for flash memories, called
the trajectory code, for rewriting the stored data as many times as possible
without block erasures. We prove that the trajectory code is asymptotically
optimal in a wide range of scenarios.
We also present randomized rewriting codes optimized for expected performance
(given arbitrary rewriting sequences). Our rewriting codes are shown to be
asymptotically optimal.
|
1012.5454
|
Compressed Sensing for Feedback Reduction in MIMO Broadcast Channels
|
cs.IT math.IT
|
We propose a generalized feedback model and compressive sensing based
opportunistic feedback schemes for feedback resource reduction in MIMO
Broadcast Channels under the assumption that both uplink and downlink channels
undergo block Rayleigh fading. Feedback resources are shared and are
opportunistically accessed by users who are strong, i.e. users whose channel
quality information is above a certain fixed threshold. Strong users send the
same feedback information on all shared channels. They are identified by the
base station via compressive sensing. Both analog and digital feedbacks are
considered. The proposed analog and digital opportunistic feedback schemes are
shown to achieve the same sum-rate throughput as that achieved by dedicated
feedback schemes, but with feedback channels growing only logarithmically with
number of users. Moreover, there is also a reduction in the feedback load. In
the analog feedback case, we show that the proposed scheme reduces the feedback
noise which eventually results in better throughput, whereas in the digital
feedback case the proposed scheme in a noisy scenario achieves almost the
throughput obtained in a noiseless dedicated feedback scenario. We also show
that for a given fixed budget of feedback bits, there exists a trade-off
between the number of shared channels and the accuracy of the fed-back SNR
thresholds.
|
1012.5464
|
Classification of self-dual codes of length 36
|
math.CO cs.IT math.IT
|
A complete classification of binary self-dual codes of length 36 is given.
|
1012.5498
|
Checkable Codes from Group Rings
|
cs.IT math.AC math.IT
|
We study codes with a single check element derived from group rings, namely,
checkable codes. The notion of a code-checkable group ring is introduced.
Necessary and sufficient conditions for a group ring to be code-checkable are
given in the case where the group is a finite abelian group and the ring is a
finite field. This characterization leads to many good examples, among which
two checkable codes and two shortened codes have minimum distance better than
the lower bound given in Grassl's online table. Furthermore, when a group ring
is code-checkable, it is shown that every code in such a group ring admits a
generator, and that its dual is also generated by an element which may be
deduced directly from a check element of the original code. These are analogous
to the generator and parity-check polynomials of cyclic codes. In addition, the
structures of reversible and complementary dual checkable codes are established
as generalizations of reversible and complementary dual cyclic codes.
|
1012.5499
|
Integrating neighborhoods in the evaluation of fitness promotes
cooperation in the spatial prisoner's dilemma game
|
physics.soc-ph cs.SI
|
A fundamental question of human society is the evolution of cooperation. Many
previous studies explored this question in a spatial setting, where players
obtain their payoffs by playing the game with their nearest neighbors. It is
also undoubted that the environment plays an important role in individual
development. Inspired by these observations, we reconsider the
definition of individual fitness which integrates the environment, denoted by
the average payoff of all individual neighbors, with the traditional individual
payoffs by introducing a selection parameter $u$. Tuning $u$ equal to zero
returns the traditional version, while increasing $u$ bears the influence of
environment. We find that considering the environment, i.e. integrating
neighborhoods in the evaluation of fitness, promotes cooperation. As $u$
increases, the invasion of defection is resisted more effectively.
provide quantitative explanations and complete phase diagrams presenting the
influence of the environment on the evolution of cooperation. Finally, the
universality of this mechanism is verified for different neighborhood sizes,
different topologies, and different game models. Our work may shed light on the
emergence and persistence of cooperation in everyday life.
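The abstract does not spell out the exact mixing rule between own payoff and the neighborhood average; one natural reading, used here purely as an illustration (the convex-combination form and all names are our assumption), is:

```python
import numpy as np

def mixed_fitness(payoff, adj, u):
    """Fitness of each player as (1 - u) * own payoff + u * average payoff
    of its neighbors; u = 0 recovers the traditional payoff-only fitness."""
    env = np.array([payoff[nbrs].mean() for nbrs in adj])
    return (1 - u) * payoff + u * env

# 4 players on a square lattice cell (each adjacent to two others)
payoff = np.array([3.0, 0.0, 1.0, 1.0])
adj = [np.array([1, 2]), np.array([0, 3]), np.array([0, 3]), np.array([1, 2])]

assert np.allclose(mixed_fitness(payoff, adj, 0.0), payoff)   # traditional limit
f = mixed_fitness(payoff, adj, 0.5)
# → [1.75, 1.0, 1.5, 0.75]: high-payoff players in poor neighborhoods lose fitness
assert np.allclose(f, [1.75, 1.0, 1.5, 0.75])
```

Under any such rule, a defector surrounded by exploited (low-payoff) cooperators sees its fitness pulled down by its own neighborhood, which is one intuition for why the mechanism promotes cooperation.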
|
1012.5506
|
Ontology-based Queries over Cancer Data
|
cs.AI cs.DB cs.IR
|
The ever-increasing amount of data in biomedical research, and in cancer
research in particular, needs to be managed to support efficient data access,
exchange and integration. Existing software infrastructures, such as caGrid,
support access to distributed information annotated with a domain ontology.
However, caGrid's current querying functionality depends on the structure of
individual data resources without exploiting the semantic annotations. In this
paper, we present the design and development of an ontology-based querying
functionality that consists of the generation of OWL2 ontologies from the
metadata of the underlying data resources, and a query rewriting and translation
process based on reasoning, which converts a query at the domain ontology level
into queries at the software infrastructure level. We present a detailed
analysis of our approach as well as an extensive performance evaluation. While
the implementation and evaluation was performed for the caGrid infrastructure,
the approach could be applicable to other model and metadata-driven
environments for data sharing.
|
1012.5546
|
Mining Multi-Level Frequent Itemsets under Constraints
|
cs.DB cs.AI cs.DS
|
Mining association rules is a task of data mining, which extracts knowledge
in the form of significant implication relation of useful items (objects) from
a database. Mining multilevel association rules uses concept hierarchies, also
called taxonomies and defined as relations of type 'is-a' between objects, to
extract rules whose items belong to different levels of abstraction. These rules
are more useful, more refined and more interpretable by the user. Several
algorithms have been proposed in the literature to discover the multilevel
association rules. In this article, we are interested in the problem of
discovering multi-level frequent itemsets under constraints, involving the user
in the discovery process. We propose a technique for modeling and interpreting
constraints in the context of concept hierarchies. Three approaches for
discovering multi-level frequent itemsets under constraints are proposed and
discussed: a basic approach, a "test and generate" approach, and a
pruning-based approach.
|
1012.5553
|
Cyclic-Coded Integer-Forcing Equalization
|
cs.IT math.IT
|
A discrete-time intersymbol interference channel with additive Gaussian noise
is considered, where only the receiver has knowledge of the channel impulse
response. An approach for combining decision-feedback equalization with channel
coding is proposed, where decoding precedes the removal of intersymbol
interference. This is accomplished by combining the recently proposed
integer-forcing equalization approach with cyclic block codes. The channel
impulse response is linearly equalized to an integer-valued response. This is
then utilized by leveraging the property that a cyclic code is closed under
(cyclic) integer-valued convolution. Explicit bounds on the performance of the
proposed scheme are also derived.
|
1012.5585
|
Symmetry Breaking with Polynomial Delay
|
cs.AI
|
A conservative class of constraint satisfaction problems (CSPs) is a class for
which membership is preserved under arbitrary domain reductions. Many
well-known tractable classes of CSPs are conservative. It is well known that
lexleader constraints may significantly reduce the number of solutions by
excluding symmetric solutions of CSPs. We show that adding certain lexleader
constraints to any instance of any conservative class of CSPs still allows us
to find all solutions with polynomial delay between successive solutions. The
delay is polynomial in the total size of the instance and the
additional lexleader constraints. It is well known that for complete symmetry
breaking one may need an exponential number of lexleader constraints. However,
in practice, the number of additional lexleader constraints is typically
polynomial in the size of the instance. For polynomially many lexleader
constraints, we may in general not have complete symmetry breaking but
polynomially many lexleader constraints may provide practically useful symmetry
breaking -- and they sometimes exclude super-exponentially many solutions. We
prove that for any instance from a conservative class, the time between finding
successive solutions of the instance with polynomially many additional
lexleader constraints is polynomial even in the size of the instance without
lexleader constraints.
|
1012.5594
|
The Ethics of Robotics
|
cs.AI cs.RO
|
The three laws of Robotics first appeared together in Isaac Asimov's story
'Runaround' after being mentioned in some form or the other in previous works
by Asimov. These laws are among the earliest depictions of the need for ethics
in robotics. In simple language, Isaac Asimov explains what rules a robot must
confine itself to in order to preserve societal order. However, even though
they are outdated they still represent some of our innate fears which are
beginning to resurface in present day 21st Century. Our society is on the
advent of a new revolution; a revolution led by advances in Computer Science,
Artificial Intelligence & Nanotechnology. Some of our advances have been so
phenomenal that we surpassed what was predicted by the Moore's law. With these
advancements comes the fear that our future may be at the mercy of these
androids. Humans today are scared that we, ourselves, might create something
which we cannot control. We may end up creating something which can not only
learn much faster than anyone of us can, but also evolve faster than what the
theory of evolution has allowed us to. The greatest fear is not only that we
might lose our jobs to these intelligent beings, but that these beings might
end up replacing us at the top of the cycle. The public hysteria has been
heightened more so by a number of cultural works which depict annihilation of
the human race by robots. Right from Frankenstein to I, Robot mass media has
also depicted such issues. This paper is an effort to understand the need for
ethics in Robotics or simply termed as Roboethics. This is achieved by the
study of artificial beings and the thought being put behind them. By the end of
the paper, however, it is concluded that there isn't a need for ethical robots
but more so ever a need for ethical roboticists.
|
1012.5625
|
Free and Open-Source Software is not an Emerging Property but Rather the
Result of Studied Design
|
cs.CY cs.SI
|
Free and open source software (FOSS) is considered by many, along with
Wikipedia, to be proof of an ongoing paradigm shift from hierarchically-managed
and market-driven production of knowledge to heterarchical, collaborative and
commons-based production styles. In this perspective, it has become
commonplace to refer to FOSS as a manifestation of collective intelligence where
deliverables and artefacts emerge by virtue of mere cooperation, with no need
for supervising leadership. The paper argues that this assumption is based on
limited understanding of the software development process, and may lead to
wrong conclusions as to the potential of peer production. The development of a
less than trivial piece of software, irrespective of whether it be FOSS or
proprietary, is a complex cooperative effort requiring the participation of
many (often thousands of) individuals. A subset of the participants always play
the role of leading system and subsystem designers, determining architecture
and functionality; the rest of the people work "underneath" them in a logical,
functional sense. While new and powerful forces, including FOSS, are clearly at
work in the post-industrial, networked economy, the currently ingenuous stage
of research in the field of collective intelligence and networked cooperation
must give way to a deeper level of consciousness, which requires an
understanding of the software development process.
|
1012.5693
|
On the Asymptotic Connectivity of Random Networks under the Random
Connection Model
|
cs.NI cs.IT math.IT
|
Consider a network where all nodes are distributed on a unit square following
a Poisson distribution with known density $\rho$ and a pair of nodes separated
by a Euclidean distance $x$ are directly connected with probability
$g(\frac{x}{r_{\rho}})$, where $g:[0,\infty)\rightarrow[0,1]$ satisfies three
conditions: rotational invariance, non-increasing monotonicity and integral
boundedness, $r_{\rho}=\sqrt{\frac{\log\rho+b}{C\rho}}$,
$C=\int_{\Re^{2}}g(\Vert \boldsymbol{x}\Vert)d\boldsymbol{x}$ and $b$ is a
constant, independent of the event that another pair of nodes are directly
connected. In this paper, we analyze the asymptotic distribution of the number
of isolated nodes in the above network using the Chen-Stein technique and the
impact of the boundary effect on the number of isolated nodes as
$\rho\rightarrow\infty$. On that basis we derive a necessary condition for the
above network to be asymptotically almost surely connected. These results form
an important link in expanding recent results on the connectivity of the random
geometric graphs from the commonly used unit disk model to the more generic and
more practical random connection model.
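A minimal Monte-Carlo sketch of the abstract's random connection model (our own illustration: g is taken to be the unit-disk function, so C = pi; the Poisson node count is replaced by its mean; and the boundary effect the paper analyzes is left in, not corrected for):

```python
import math
import random

def isolated_nodes(rho, b, g=lambda t: 1.0 if t <= 1.0 else 0.0,
                   C=math.pi, seed=0):
    """Count isolated nodes in one simulated network on the unit square.

    `g` is the connection function of the random connection model
    (default: unit disk, for which C = pi); `r = sqrt((log rho + b) /
    (C rho))` follows the abstract's scaling.  For simplicity the
    Poisson point count is replaced by its mean, round(rho).
    """
    rng = random.Random(seed)
    n = round(rho)
    r = math.sqrt((math.log(rho) + b) / (C * rho))
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pts[i], pts[j])
            # Each pair connects independently with probability g(d/r).
            if rng.random() < g(d / r):
                degree[i] += 1
                degree[j] += 1
    return sum(1 for deg in degree if deg == 0)
```

Averaging this count over many seeds and increasing rho gives an empirical handle on the asymptotic distribution the paper derives analytically.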
|
1012.5696
|
Fast and Tiny Structural Self-Indexes for XML
|
cs.DB
|
XML document markup is highly repetitive and therefore well compressible
using dictionary-based methods such as DAGs or grammars. In the context of
selectivity estimation, grammar-compressed trees were used before as synopsis
for structural XPath queries. Here a fully-fledged index over such grammars is
presented. The index allows executing arbitrary tree algorithms with a
slow-down that is comparable to the space improvement. More interestingly,
certain algorithms execute much faster over the index (because no decompression
occurs). E.g., for structural XPath count queries, evaluating over the index is
faster than previous XPath implementations, often by two orders of magnitude.
The index also allows serializing XML results (including texts) faster than
previous systems, by a factor of ca. 2-3. This is due to efficient copy
handling of grammar repetitions, and because materialization is totally
avoided. In order to compare with twig join implementations, we implemented a
materializer which writes out pre-order numbers of result nodes, and show its
competitiveness.
|
1012.5705
|
Looking for plausibility
|
cs.AI
|
In the interpretation of experimental data, one is actually looking for
plausible explanations. We look for a measure of plausibility, with which we
can compare different possible explanations, and which can be combined when
there are different sets of data. This is contrasted to the conventional
measure for probabilities as well as to the proposed measure of possibilities.
We define what characteristics this measure of plausibility should have.
In getting to the conception of this measure, we explore the relation of
plausibility to abductive reasoning, and to Bayesian probabilities. We also
compare with the Dempster-Shafer theory of evidence, which also has its own
definition for plausibility. Abduction can be associated with biconditionality
in inference rules, and this provides a platform to relate to the
Collins-Michalski theory of plausibility. Finally, using a formalism for wiring
logic onto Hopfield neural networks, we ask if this is relevant in obtaining
this measure.
|
1012.5723
|
Towards a Better Understanding of Large Scale Network Models
|
cs.NI cs.IT math.IT
|
Connectivity and capacity are two fundamental properties of wireless
multi-hop networks. The scalability of these properties has been a primary
concern for which asymptotic analysis is a useful tool. Three related but
logically distinct network models are often considered in asymptotic analyses,
viz. the dense network model, the extended network model and the infinite
network model, which consider respectively a network deployed in a fixed finite
area with a sufficiently large node density, a network deployed in a
sufficiently large area with a fixed node density, and a network deployed in
$\Re^{2}$ with a sufficiently large node density. The infinite network model
originated from continuum percolation theory and asymptotic results obtained
from the infinite network model have often been applied to the dense and
extended networks. In this paper, through two case studies related to network
connectivity on the expected number of isolated nodes and on the vanishing of
components of finite order k>1 respectively, we demonstrate some subtle but
important differences between the infinite network model and the dense and
extended network models. Therefore extra scrutiny has to be used in order for
the results obtained from the infinite network model to be applicable to the
dense and extended network models. Asymptotic results are also obtained on the
expected number of isolated nodes, the vanishingly small impact of the boundary
effect on the number of isolated nodes and the vanishing of components of
finite order k>1 in the dense and extended network models using a generic
random connection model.
|
1012.5752
|
Increasing risk behavior can outweigh the benefits of anti-retroviral
drug treatment on the HIV incidence among men-having-sex-with-men in
Amsterdam
|
cs.SI physics.med-ph q-bio.PE
|
The transmission through contacts among MSM (men who have sex with men) is
one of the dominating contributors to HIV prevalence in industrialized
countries. In Amsterdam, the capital of the Netherlands, the MSM risk group has
been traced for decades. This has motivated studies which provide detailed
information about MSM's risk behavior statistically, psychologically and
sociologically. Despite the era of potent antiretroviral therapy, the incidence
of HIV among MSM increases. In the long term the contradictory effects of risk
behavior and effective therapy are still poorly understood. Using a previously
presented Complex Agent Network model, we describe steady and casual
partnerships to predict the HIV spreading among MSM. Behavior-related
parameters and values, inferred from studies on Amsterdam MSM, are fed into the
model; we validate the model using historical yearly incidence data.
Subsequently, we study scenarios to assess the contradictory effects of risk
behavior and effective therapy, by varying corresponding values of parameters.
Finally, we conduct quantitative analysis based on the resulting incidence
data. The simulated incidence reproduces the ACS historical incidence well and
helps to predict the HIV epidemic among MSM in Amsterdam. Our results show that
in the long run the positive influence of effective therapy can be outweighed
by an increase in risk behavior of at least 30% for MSM. Conclusion: We
recommend, based on the model predictions, that lowering risk behavior is the
prominent control mechanism of HIV incidence even in the presence of effective
therapy.
|
1012.5754
|
Software Effort Estimation with Ridge Regression and Evolutionary
Attribute Selection
|
cs.SE cs.AI cs.LG
|
Software cost estimation is one of the prerequisite managerial activities
carried out at the software development initiation stages and also repeated
throughout the whole software life-cycle so that amendments to the total cost
are made. Typically, in software cost estimation, a selection of project
attributes is employed to produce estimates of the human effort expected
to deliver a software product. However, choosing the appropriate
project cost drivers in each case requires a lot of experience and knowledge on
behalf of the project manager which can only be obtained through years of
software engineering practice. A number of studies indicate that popular
methods applied in the literature for software cost estimation, such as linear
regression, are not robust enough and do not yield accurate predictions.
Recently the dual variables Ridge Regression (RR) technique has been used for
effort estimation yielding promising results. In this work we show that results
may be further improved if an AI method is used to automatically select
appropriate project cost drivers (inputs) for the technique. We propose a
hybrid approach combining RR with a Genetic Algorithm, the latter evolving the
subset of attributes for approximating effort more accurately. The proposed
hybrid cost model has been applied on a widely known high-dimensional dataset
of software project samples and the results obtained show that accuracy may be
increased if redundant attributes are eliminated.
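A rough sketch of the idea (not the authors' implementation): closed-form ridge regression scored on a validation split, with a plain random search over attribute subsets standing in for the genetic algorithm (selection, crossover and mutation are omitted).

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def subset_search(X_tr, y_tr, X_val, y_val, n_iter=200, seed=0):
    """Random search over attribute subsets, scored by validation MSE
    of a ridge model -- a simple stand-in for the paper's GA."""
    rng = np.random.default_rng(seed)
    d = X_tr.shape[1]
    best_mask, best_mse = None, np.inf
    for _ in range(n_iter):
        mask = rng.random(d) < 0.5          # random attribute subset
        if not mask.any():
            continue
        w = ridge_fit(X_tr[:, mask], y_tr)
        mse = float(np.mean((X_val[:, mask] @ w - y_val) ** 2))
        if mse < best_mse:
            best_mask, best_mse = mask.copy(), mse
    return best_mask, best_mse
```

Replacing the random-search loop with a GA population that evolves the boolean masks recovers the hybrid scheme the abstract describes.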
|
1012.5755
|
DD-EbA: An algorithm for determining the number of neighbors in cost
estimation by analogy using distance distributions
|
cs.SE cs.AI
|
Case Based Reasoning and particularly Estimation by Analogy, has been used in
a number of problem-solving areas, such as cost estimation. Conventional
methods, despite the lack of a sound criterion for choosing nearest projects,
were based on estimation using a fixed and predetermined number of neighbors
from the entire set of historical instances. This approach limits the
estimation ability of such algorithms, for they do not take into
consideration that every project under estimation is unique and requires
different handling. The notion of distance distributions, together with a
distance metric for distributions, helps us adapt the proposed method (we call
it DD-EbA) to each specific case under estimation without losing
prediction power or computational efficiency. The results of this paper show that
the proposed technique achieves the above idea in a very efficient way.
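For contrast, the conventional fixed-k baseline criticized above can be sketched as follows (illustrative code; DD-EbA's adaptive, distribution-based choice of neighbourhood is not reproduced here):

```python
import math

def estimate_by_analogy(target, history, k=3):
    """Classic estimation by analogy with a fixed, predetermined k.

    `history` is a list of (feature_tuple, effort) pairs for past
    projects; the prediction is the mean effort of the k projects
    nearest to `target` in Euclidean distance.
    """
    ranked = sorted(history, key=lambda proj: math.dist(proj[0], target))
    neighbours = ranked[:k]
    return sum(effort for _, effort in neighbours) / len(neighbours)
```

DD-EbA, per the abstract, would instead pick the neighbourhood per target case from the distribution of its distances to all historical projects rather than using a fixed k.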
|
1012.5774
|
Towards the Capacity Region of Multiplicative Linear Operator Broadcast
Channels
|
cs.IT math.AG math.IT
|
Recent research indicates that packet transmission employing random linear
network coding can be regarded as transmitting subspaces over a linear operator
channel (LOC). In this paper we propose the framework of linear operator
broadcast channels (LOBCs) to model packet broadcasting over LOCs, and we do
initial work on the capacity region of constant-dimension multiplicative
LOBCs (CMLOBCs), a generalization of broadcast erasure channels. Two fundamental
problems regarding CMLOBCs are addressed: finding necessary and sufficient
conditions for degradation and deciding whether time sharing suffices to
achieve the boundary of the capacity region in the degraded case.
|
1012.5813
|
Neural Network Influence in Group Technology: A Chronological Survey and
Critical Analysis
|
cs.AI nlin.AO
|
This article portrays a chronological review of the influence of Artificial
Neural Network in group technology applications in the vicinity of Cellular
Manufacturing Systems. The research trend is identified and its evolution is
captured through a critical analysis of the accessible literature, from the very
beginning of its practice in the early 90's until 2010. Analysis of the
diverse ANN approaches, spotted research pattern, comparison of the clustering
efficiencies, the solutions obtained and the tools used make this study
exclusive in its class.
|
1012.5815
|
SAPFOCS: a metaheuristic based approach to part family formation
problems in group technology
|
cs.AI
|
This article deals with the part family formation problem, which is believed to
be moderately hard to solve in polynomial time, within Group
Technology (GT). Past literature shows that part family formation
techniques are principally based on production flow analysis
(PFA), which usually considers operational requirements, sequences and time.
Part Coding Analysis (PCA), although believed to be a proficient method to
identify part families, is rarely considered in GT.
PCA classifies parts by allotting them to different families based on their
resemblances in: (1) design characteristics such as shape and size, and/or (2)
manufacturing characteristics (machining requirements). A novel approach based
on simulated annealing, namely SAPFOCS, is adopted in this study to develop
effective part families exploiting the PCA technique. Thereafter, Taguchi's
orthogonal design method is employed to address the critical issue of parameter
selection for the proposed metaheuristic algorithm. The adopted technique is
then tested on 5 different datasets of size 5 {\times} 9 to 27 {\times} 9
and the obtained results are compared with the C-Linkage clustering technique. The
experimental results show that the proposed metaheuristic algorithm is
extremely effective in terms of the quality of the solution obtained and
outperforms the C-Linkage algorithm in most instances.
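A generic simulated-annealing skeleton of the kind SAPFOCS builds on can be sketched as follows. The objective here (total within-family Hamming dissimilarity of binary part codes) is our own toy stand-in for the paper's PCA-based objective, and all parameter values are illustrative.

```python
import math
import random

def within_family_dissimilarity(parts, assign):
    """Toy objective: sum of pairwise Hamming distances within families."""
    total = 0
    for f in set(assign):
        members = [p for p, a in zip(parts, assign) if a == f]
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                total += sum(x != y for x, y in zip(members[i], members[j]))
    return total

def anneal(parts, n_families, cost, t0=1.0, cooling=0.95, steps=2000, seed=0):
    """Simulated annealing over part-to-family assignments.

    A random single-part move is accepted if it improves the cost, or
    with probability exp(-delta/T) otherwise; T decays geometrically.
    """
    rng = random.Random(seed)
    assign = [rng.randrange(n_families) for _ in parts]
    best, best_cost = assign[:], cost(parts, assign)
    cur_cost, t = best_cost, t0
    for _ in range(steps):
        i = rng.randrange(len(parts))
        old = assign[i]
        assign[i] = rng.randrange(n_families)
        new_cost = cost(parts, assign)
        delta = new_cost - cur_cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur_cost = new_cost
            if cur_cost < best_cost:
                best, best_cost = assign[:], cur_cost
        else:
            assign[i] = old  # reject the move
        t *= cooling
    return best, best_cost
```

Taguchi's orthogonal design, as used in the paper, would tune t0, the cooling rate and the step budget systematically rather than by trial and error.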
|
1012.5846
|
Improvement of the Han-Kobayashi Rate Region for General Interference
Channel-v2
|
cs.IT math.IT
|
Allowing the input auxiliary random variables to be correlated and using the
binning scheme, the Han-Kobayashi (HK) rate region for general interference
channel is partially improved. The obtained partially new achievable rate
region (i) is compared to the HK region and its simplified description, i.e.,
Chong-Motani-Garg (CMG) region, in a detailed and favorable manner, by
considering different versions of the regions, and (ii) has an interesting and
easy interpretation: as expected, any rate in our region has generally two
additional terms in comparison with the HK region (one due to the input
correlation and the other as a result of the binning scheme).
Keywords: interference channel, input correlation, binning scheme.
|
1012.5847
|
On Elementary Loops of Logic Programs
|
cs.AI
|
Using the notion of an elementary loop, Gebser and Schaub refined the theorem
on loop formulas due to Lin and Zhao by considering loop formulas of elementary
loops only. In this article, we reformulate their definition of an elementary
loop, extend it to disjunctive programs, and study several properties of
elementary loops, including how maximal elementary loops are related to minimal
unfounded sets. The results provide useful insights into the stable model
semantics in terms of elementary loops. For a nondisjunctive program, using a
graph-theoretic characterization of an elementary loop, we show that the
problem of recognizing an elementary loop is tractable. On the other hand, we
show that the corresponding problem is {\sf coNP}-complete for a disjunctive
program. Based on the notion of an elementary loop, we present the class of
Head-Elementary-loop-Free (HEF) programs, which strictly generalizes the class
of Head-Cycle-Free (HCF) programs due to Ben-Eliyahu and Dechter. Like an HCF
program, an HEF program can be turned into an equivalent nondisjunctive program
in polynomial time by shifting head atoms into the body.
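The shifting transformation mentioned in the last sentence can be sketched directly (the rule representation below is our own: a rule is a pair of head atoms and body literals, with `('not', x)` standing for default negation):

```python
def shift(rules):
    """Shift head atoms into the body.

    A disjunctive rule  a1 | ... | an :- body  becomes the n
    nondisjunctive rules  ai :- body, not a1, ..., not a(i-1),
    not a(i+1), ..., not an.  For HEF (and hence HCF) programs this
    yields an equivalent nondisjunctive program.
    """
    out = []
    for head, body in rules:
        if len(head) <= 1:
            out.append((head, body))  # already nondisjunctive
            continue
        for a in head:
            extra = [('not', h) for h in head if h != a]
            out.append(([a], body + extra))
    return out
```

The transformation is clearly polynomial: each disjunctive rule with n head atoms becomes n rules, each at most the size of the original.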
|
1012.5883
|
On sub-ideal causal smoothing filters
|
math.OC cs.SY math.CA math.SP
|
Smoothing causal linear time-invariant filters are studied for continuous
time processes. The paper suggests a family of causal filters with almost
exponential damping of the energy on the higher frequencies. These filters are
sub-ideal, meaning that a faster decay of the frequency response would lead to
the loss of causality.
|
1012.5913
|
All liaisons are dangerous when all your friends are known to us
|
cs.SI cs.CY cs.DM
|
Online Social Networks (OSNs) are used by millions of users worldwide.
Academically speaking, there is little doubt about the usefulness of
demographic studies conducted on OSNs and, hence, methods to label unknown
users from small labeled samples are very useful. However, from the general
public point of view, this can be a serious privacy concern. Thus, both topics
are tackled in this paper: First, a new algorithm to perform user profiling in
social networks is described, and its performance is reported and discussed.
Secondly, the experiments --conducted on information usually considered
sensitive-- reveal that by just publicizing one's contacts, privacy is at risk
and, thus, measures to minimize privacy leaks due to social graph data mining
are outlined.
|
1012.5933
|
Affine-invariant diffusion geometry for the analysis of deformable 3D
shapes
|
cs.CV
|
We introduce an (equi-)affine invariant diffusion geometry by which surfaces
that go through squeeze and shear transformations can still be properly
analyzed. The definition of an affine invariant metric enables us to construct
an invariant Laplacian from which local and global geometric structures are
extracted. Applications of the proposed framework demonstrate its power in
generalizing and enriching the existing set of tools for shape analysis.
|
1012.5936
|
Affine-invariant geodesic geometry of deformable 3D shapes
|
cs.CV
|
Natural objects can be subject to various transformations yet still preserve
properties that we refer to as invariants. Here, we use definitions of affine
invariant arclength for surfaces in R^3 in order to extend the set of existing
non-rigid shape analysis tools. In fact, we show that by re-defining the
surface metric as its equi-affine version, the surface with its modified metric
tensor can be treated as a canonical Euclidean object on which most classical
Euclidean processing and analysis tools can be applied. The new definition of a
metric is used to extend the fast marching method technique for computing
geodesic distances on surfaces, where now, the distances are defined with
respect to an affine invariant arclength. Applications of the proposed
framework demonstrate its invariance, efficiency, and accuracy in shape
analysis.
|
1012.5947
|
Orthogonal symmetric Toeplitz matrices for compressed sensing:
Statistical isometry property
|
cs.IT math.IT
|
Recently, the statistical restricted isometry property (RIP) has been
formulated to analyze the performance of deterministic sampling matrices for
compressed sensing. In this paper, we propose the usage of orthogonal symmetric
Toeplitz matrices (OSTM) for compressed sensing and study their statistical RIP
by taking advantage of Stein's method. In particular, we derive the statistical
RIP performance bound in terms of the largest value of the sampling matrix and
the sparsity level of the input signal. Based on such connections, we show that
OSTM can satisfy the statistical RIP for an overwhelming majority of signals
with a given sparsity level, if a Golay sequence is used to generate the OSTM. Such
sensing matrices are deterministic, Toeplitz, and efficient to implement.
Simulation results show that OSTM can offer reconstruction performance similar
to that of random matrices.
|
1012.5956
|
A New Noncoherent Decoder for Wireless Network Coding
|
cs.IT math.IT
|
This work deals with the decoding aspect of wireless network coding in the
canonical two-way relay channel where two senders exchange messages via a
common relay and they receive the mixture of two messages. One of the recent
works on wireless network coding was well explained by Katti \textit{et al.} in
SIGCOMM'07. In this work, we analyze the issue with one of their decoders when
minimum-shift keying (MSK) is employed as the modulation format, and propose a
new noncoherent decoder in the presence of two interfering signals.
|
1012.5960
|
Extending Binary Qualitative Direction Calculi with a Granular Distance
Concept: Hidden Feature Attachment
|
cs.AI
|
In this paper we introduce a method for extending binary qualitative
direction calculi with adjustable granularity like OPRAm or the star calculus
with a granular distance concept. This method is similar to the concept of
extending points with an internal reference direction to get oriented points
which are the basic entities in the OPRAm calculus. Even if the spatial objects
are from a geometrical point of view infinitesimal small points locally
available reference measures are attached. In the case of OPRAm, a reference
direction is attached. The same principle works also with local reference
distances which are called elevations. The principle of attaching references
features to a point is called hidden feature attachment.
|
1012.5961
|
Vulnerability of Networks Against Critical Link Failures
|
physics.soc-ph cond-mat.other cs.SI
|
Networks are known to be prone to link failures. In this paper we set out to
investigate how networks of varying connectivity patterns respond to different
link failure schemes in terms of connectivity, clustering coefficient and
shortest path lengths. We then propose a measure, which we call the
vulnerability of a network, for evaluating the extent of the damage these
failures can cause. Accepting the disconnections of node pairs as a damage
indicator, vulnerability simply represents how quickly the failure of the
critical links causes the network to undergo a specified extent of damage.
Analyzing the vulnerabilities under varying damage specifications shows that
scale-free networks are relatively more vulnerable to small failures, but more
efficient; whereas Erd\"os-R\'enyi networks are the least vulnerable despite
lacking any clustered structure.
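The damage indicator described above, counting node pairs disconnected after link removals, can be sketched as follows (illustrative code, not the authors' implementation):

```python
from collections import deque

def disconnected_pairs(n, edges):
    """Number of node pairs with no connecting path: the damage indicator."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, sizes = set(), []
    for s in range(n):            # BFS over each connected component
        if s in seen:
            continue
        comp, queue = 0, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            comp += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        sizes.append(comp)
    total = n * (n - 1) // 2
    connected = sum(c * (c - 1) // 2 for c in sizes)
    return total - connected

def damage_after_removal(n, edges, removed):
    """Damage indicator after a set of (undirected) link failures."""
    keep = [e for e in edges
            if e not in removed and (e[1], e[0]) not in removed]
    return disconnected_pairs(n, keep)
```

Sweeping `removed` over candidate critical links and recording how quickly the damage reaches a specified extent gives a vulnerability curve in the spirit of the abstract.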
|
1012.5962
|
Annotated English
|
cs.CL
|
This document presents Annotated English, a system of diacritical symbols
which turns English pronunciation into a precise and unambiguous process. The
annotations are defined and located in such a way that the original English
text is not altered (not even a letter), thus allowing for a consistent reading
and learning of the English language with and without annotations. The
annotations are based on a set of general rules that keep the frequency of
annotations from becoming dramatically high. This lets the reader easily associate
annotations with exceptions, and makes it possible to shape, internalise and
consolidate rules for the English language which are otherwise weakened by
the enormous number of exceptions in English pronunciation. The advantages of
this annotation system are manifold. Any existing text can be annotated without
a significant increase in size. This means that we can get an annotated version
of any document or book with the same number of pages and font size. Since no
letter is affected, the text can be perfectly read by a person who does not
know the annotation rules, since annotations can be simply ignored. The
annotations are based on a set of rules which can be progressively learned and
recognised, even in cases where the reader has no access or time to read the
rules. This means that a reader can understand most of the annotations after
reading a few pages of Annotated English, and can take advantage of that
knowledge for any other annotated document she may read in the future.
|
1012.5994
|
Toward Emerging Topic Detection for Business Intelligence: Predictive
Analysis of `Meme' Dynamics
|
cs.SI
|
Detecting and characterizing emerging topics of discussion and consumer
trends through analysis of Internet data is of great interest to businesses.
This paper considers the problem of monitoring the Web to spot emerging memes -
distinctive phrases which act as "tracers" for topics - as a means of early
detection of new topics and trends. We present a novel methodology for
predicting which memes will propagate widely, appearing in hundreds or
thousands of blog posts, and which will not, thereby enabling discovery of
significant topics. We begin by identifying measurables which should be
predictive of meme success. Interestingly, these metrics are not those
traditionally used for such prediction but instead are subtle measures of meme
dynamics. These metrics form the basis for learning a classifier which
predicts, for a given meme, whether or not it will propagate widely. The
utility of the prediction methodology is demonstrated through analysis of memes
that emerged online during the second half of 2008.
|
1012.5997
|
Protection Over Asymmetric Channels, S-MATE: Secure Multipath Adaptive
Traffic Engineering
|
cs.IT cs.CR cs.NI math.IT
|
Several approaches have been proposed to the problem of provisioning traffic
engineering between core network nodes in Internet Service Provider (ISP)
networks. Such approaches aim to minimize network delay, increase capacity, and
enhance security services between two core (relay) network nodes, an ingress
node and an egress node. MATE (Multipath Adaptive Traffic Engineering) has been
proposed for multipath adaptive traffic engineering between an ingress node
(source) and an egress node (destination) to distribute the network flow among
multiple disjoint paths. Its novel idea is to avoid network congestion and
attacks that might exist in edge and node disjoint paths between two core
network nodes.
This paper proposes protection schemes over asymmetric channels. Precisely,
the paper aims to develop an adaptive, robust, and reliable traffic engineering
scheme to improve performance and reliability of communication networks. This
scheme will also provision Quality of Server (QoS) and protection of traffic
engineering to maximize network efficiency. Specifically, S-MATE (secure MATE)
is proposed to protect the network traffic between two core nodes (routers,
switches, etc.) in a cloud network. S-MATE secures against a single link
attack/failure by adding redundancy in one of the operational redundant paths
between the sender and receiver nodes. It is also extended to secure against
multiple attacked links. The proposed scheme can be applied to secure core
networks such as optical and IP networks.
|
1012.6009
|
Cluster Evaluation of Density Based Subspace Clustering
|
cs.DB
|
Clustering real-world data is often confronted with the curse of dimensionality,
since real-world data often consist of many dimensions. Multidimensional data
clustering can be evaluated through a density-based approach. Density
approaches are based on the paradigm introduced by DBSCAN clustering: the
density of each object's neighbourhood, with respect to MinPoints, is calculated,
and cluster membership changes in accordance with changes in the density of each
object's neighbourhood. The neighbours of each object are typically determined
using a distance function, for example the Euclidean distance. In this paper the
SUBCLU, FIRES and INSCY methods are applied to clustering 6x1595-dimension
synthetic datasets. IO entropy, F1 measure, coverage, accuracy and time
consumption are used as performance evaluation parameters. The evaluation
results show that the SUBCLU method requires considerable time for subspace
clustering, but achieves better coverage. Meanwhile, the INSCY method achieves
better accuracy than the other two methods, although at the cost of a longer
computation time.
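The DBSCAN-style density test underlying these methods can be sketched as follows (a minimal illustration in full-dimensional space; the MinPoints parameter follows the abstract, and `eps` is the usual DBSCAN radius):

```python
import math

def core_points(points, eps, min_points):
    """Return indices of core points in the DBSCAN sense.

    A point is a core point if at least `min_points` points (itself
    included) lie within Euclidean distance `eps` of it.  Subspace
    methods such as SUBCLU apply this test within axis-parallel
    projections rather than the full space.
    """
    cores = []
    for i, p in enumerate(points):
        n_near = sum(1 for q in points if math.dist(p, q) <= eps)
        if n_near >= min_points:
            cores.append(i)
    return cores
```

Clusters then grow from core points through density-connected neighbours, which is why changing an object's neighbourhood density changes cluster membership.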
|
1012.6012
|
On the Capacity of the Discrete Memoryless Broadcast Channel with
Feedback
|
cs.IT math.IT
|
A coding scheme for the discrete memoryless broadcast channel with
{noiseless, noisy, generalized} feedback is proposed, and the associated
achievable region derived. The scheme is based on a block-Markov strategy
combining the Marton scheme and a lossy version of the Gray-Wyner scheme with
side-information. In each block the transmitter sends fresh data and update
information that allows the receivers to improve the channel outputs observed
in the previous block. For a generalization of Dueck's broadcast channel our
scheme achieves the noiseless-feedback capacity, which is strictly larger than
the no-feedback capacity. For a generalization of Blackwell's channel and when
the feedback is noiseless our new scheme achieves rate points that are outside
the no-feedback capacity region. It follows by a simple continuity argument
that for both these channels and when the feedback noise is sufficiently low,
our scheme improves on the no-feedback capacity even when the feedback is
noisy.
|
1012.6018
|
Learning a Representation of a Believable Virtual Character's
Environment with an Imitation Algorithm
|
cs.AI
|
In video games, virtual characters' decision systems often use a simplified
representation of the world. To increase both their autonomy and believability
we want those characters to be able to learn this representation from human
players. We propose to use a model called growing neural gas to learn by
imitation the topology of the environment. The implementation of the model, the
modifications and the parameters we used are detailed. Then, the quality of the
learned representations and their evolution during the learning are studied
using different measures. Improvements for the growing neural gas to give more
information to the character's model are given in the conclusion.
|
1101.0011
|
Packet Scheduling in Switches with Target Outflow Profiles
|
cs.NI cs.MM cs.SY
|
The problem of packet scheduling for traffic streams with target outflow
profiles traversing input queued switches is formulated in this paper. Target
outflow profiles specify the desirable inter-departure times of packets leaving
the switch from each traffic stream. The goal of the switch scheduler is to
dynamically select service configurations of the switch, so that actual outflow
streams ("pulled" through the switch) adhere to their desired target profiles
as accurately as possible. Dynamic service controls (schedules) are developed
to minimize deviation of actual outflow streams from their targets and suppress
stream "distortion". Using appropriately selected subsets of service
configurations of the switch, efficient schedules are designed, which deliver
high performance at relatively low complexity. Some of these schedules are
proven to achieve 100% pull-throughput. Moreover, simulations
demonstrate that for even substantial contention of streams through the switch,
due to stringent/intense target outflow profiles, the proposed schedules
achieve closely their target profiles and suppress stream distortion. The
switch model investigated here deviates from the classical switching paradigm.
In the latter, the goal of packet scheduling is primarily to "push" as much
traffic load through the switch as possible, while controlling delay to
traverse the switch and keeping congestion/backlogs from exploding. In the
model presented here, however, the goal of packet scheduling is to "pull"
traffic streams through the switch, maintaining desirable (target) outflow
profiles.
|
1101.0064
|
Dual universality of hash functions and its applications to quantum
cryptography
|
quant-ph cs.IT math.IT
|
In this paper, we introduce the concept of dual universality of hash
functions and present its applications to quantum cryptography. We begin by
establishing the one-to-one correspondence between a linear function family
{\cal F} and a code family {\cal C}, and thereby defining \varepsilon-almost
dual universal_2 hash functions, as a generalization of the conventional
universal_2 hash functions. Then we show that this generalized (and thus
broader) class of hash functions is in fact sufficient for the security of
quantum cryptography. This result can be explained in two different formalisms.
First, by noting its relation to the \delta-biased family introduced by Dodis
and Smith, we demonstrate that Renner's two-universal hashing lemma is
generalized to our class of hash functions. Next, we prove that the proof
technique by Shor and Preskill can be applied to quantum key distribution (QKD)
systems that use our generalized class of hash functions for privacy
amplification. While the Shor-Preskill formalism requires an implementer of a
QKD system to explicitly construct a linear code of the
Calderbank-Shor-Steane (CSS) type, this result removes that difficulty by
replacing the CSS code with the combination of an ordinary classical
error-correcting code and our proposed hash function. We also show that a
similar result applies to the quantum wire-tap channel. Finally we compare our
results in the two formalisms and show that, in typical QKD scenarios, the
Shor-Preskill--type argument gives better security bounds in terms of the trace
distance and Holevo information, than the method based on the \delta-biased
family.
|
1101.0085
|
Linear Codes, Target Function Classes, and Network Computing Capacity
|
cs.IT cs.DC math.CO math.IT
|
We study the use of linear codes for network computing in single-receiver
networks with various classes of target functions of the source messages. Such
classes include reducible, injective, semi-injective, and linear target
functions over finite fields. Computing capacity bounds and achievability are
given with respect to these target function classes for network codes that use
routing, linear coding, or nonlinear coding.
|
1101.0133
|
Enabling Node Repair in Any Erasure Code for Distributed Storage
|
cs.IT cs.DC cs.NI math.IT
|
Erasure codes are an efficient means of storing data across a network in
comparison to data replication, as they tend to reduce the amount of data
stored in the network and offer increased resilience in the presence of node
failures. These codes perform poorly, though, when repair of a failed node is
called for, as they typically require the entire file to be downloaded to
repair a failed node. A new class of erasure codes, termed regenerating
codes, was recently introduced that does much better in this respect. However,
given the variety of efficient erasure codes available in the literature, there
is considerable interest in the construction of coding schemes that would
enable traditional erasure codes to be used, while retaining the feature that
only a fraction of the data need be downloaded for node repair. In this paper,
we present a simple, yet powerful, framework that does precisely this. Under
this framework, the nodes are partitioned into two 'types' and encoded using
two codes in a manner that reduces the problem of node-repair to that of
erasure-decoding of the constituent codes. Depending upon the choice of the two
codes, the framework can be used to avail one or more of the following
advantages: simultaneous minimization of storage space and repair-bandwidth,
low complexity of operation, fewer disk reads at helper nodes during repair,
and error detection and correction.
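As a toy illustration of the reduction "node repair → erasure decoding" described above — using a single XOR parity as the constituent code, which is far simpler than the paper's two-type framework and stands in only to show the mechanism:

```python
# Toy illustration: repairing a failed storage node by erasure decoding.
# A single XOR parity node (an [n, n-1] code) plays the constituent code.

def xor_bytes(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, v in enumerate(b):
            out[i] ^= v
    return bytes(out)

def encode(data_blocks):
    """Store k data blocks plus one parity block (n = k + 1 nodes)."""
    return list(data_blocks) + [xor_bytes(data_blocks)]

def repair(nodes, failed):
    """Rebuild the block at index `failed` from the surviving nodes:
    since all n blocks XOR to zero, the missing one is the XOR of the rest."""
    survivors = [b for i, b in enumerate(nodes) if i != failed]
    return xor_bytes(survivors)
```

Here repairing any one node — data or parity — is exactly an erasure-decoding step of the parity-check code; the framework in the paper generalizes this pattern to two constituent codes and two node types.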
|
1101.0139
|
A Fast Statistical Method for Multilevel Thresholding in Wavelet Domain
|
nlin.CD cs.CV
|
An algorithm is proposed for segmenting an image into multiple levels using
the mean and standard deviation in the wavelet domain. The procedure provides
variable-size segmentation, with bigger blocks around the mean and smaller
blocks at the ends of the histogram of each horizontal, vertical and diagonal
component, while for the approximation component it provides finer blocks
around the mean and larger blocks at the ends of the coefficient histogram. It
is found that the proposed algorithm has significantly lower time complexity
and achieves superior PSNR and Structural Similarity index values compared to
similar space-domain algorithms [1]. In the process it highlights finer image
structures not perceptible in the original image. It is worth emphasizing that
after the segmentation only 16 wavelet coefficients (at threshold level 3)
capture the significant variation of the image.
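The detail-component scheme above (bigger bins around the mean, smaller bins toward the histogram tails) can be sketched in a few lines; the offset rule below is a loose reading of the idea, not the paper's exact thresholding rule:

```python
import statistics

def multilevel_thresholds(coeffs, level=3):
    """Mean/std-based cut points: sparse near the mean, dense at the tails,
    so the central bin is widest (an assumed offset rule, for illustration)."""
    mu = statistics.mean(coeffs)
    sigma = statistics.pstdev(coeffs)
    # offsets sigma*(2 - 2**-j): 1.0, 1.5, 1.75 sigma for level = 3
    cuts = sorted({mu + s * sigma * (2 - 2 ** -j)
                   for j in range(level) for s in (-1, 1)})
    return cuts

def quantize(coeffs, cuts):
    """Map each coefficient to the index of the bin it falls in."""
    def bin_index(v):
        i = 0
        while i < len(cuts) and v > cuts[i]:
            i += 1
        return i
    return [bin_index(v) for v in coeffs]
```

Applied to the detail subbands of a wavelet transform, this replaces each coefficient with a small bin label, which is where the compression noted in the abstract comes from.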
|
1101.0198
|
Link Spam Detection based on DBSpamClust with Fuzzy C-means Clustering
|
cs.IR cs.IT cs.SI math.IT
|
Search engines have become the omnipresent means of entry to the web. Search
engine spamming is the technique of deceiving a search engine's ranking in
order to inflate it. Web spammers have taken advantage of the vulnerability of
link-based ranking algorithms by creating many artificial references or links
in order to acquire higher-than-deserved rankings in search engines' results.
Link-based algorithms such as PageRank and HITS utilize the structural details
of the hyperlinks for ranking content on the web. In this paper an algorithm,
DBSpamClust, is proposed for link spam detection. As shown through
experiments, such a method can filter out web spam effectively.
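The abstract names fuzzy C-means but gives no details; a minimal pure-Python fuzzy c-means on one-dimensional feature values — an illustrative sketch with assumed parameters, not DBSpamClust itself — looks like this:

```python
import random

def fuzzy_cmeans(points, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means for 1-D feature values.

    Returns (centers, memberships); memberships[k][i] is the degree to
    which point k belongs to cluster i (each row sums to 1).
    """
    rng = random.Random(seed)
    n = len(points)
    # random initial membership matrix, rows normalised to sum to 1
    u = []
    for _ in range(n):
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        u.append([v / s for v in row])
    centers = [0.0] * c
    for _ in range(iters):
        # update centers as membership-weighted means
        for i in range(c):
            num = sum((u[k][i] ** m) * points[k] for k in range(n))
            den = sum((u[k][i] ** m) for k in range(n))
            centers[i] = num / den
        # update memberships from distances to the new centers
        for k in range(n):
            d = [abs(points[k] - centers[i]) or 1e-12 for i in range(c)]
            for i in range(c):
                u[k][i] = 1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                    for j in range(c))
    return centers, u
```

In a link-spam setting the points would be per-page link features, with the soft memberships separating spam-like from normal pages.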
|
1101.0211
|
Spectral Properties of Directed Random Networks with Modular Structure
|
cond-mat.dis-nn cs.SI physics.soc-ph q-bio.MN
|
We study spectra of directed networks with inhibitory and excitatory
couplings. We investigate in particular eigenvector localization properties of
various model networks for different values of correlation among their
entries. Spectra of random networks with completely uncorrelated entries show
a circular distribution with delocalized eigenvectors, whereas networks with
correlated entries have localized eigenvectors. In order to understand the
origin of localization we track the spectra as a function of connection
probability and directionality. As connections are made directed, eigenstates
start occurring in complex conjugate pairs and the eigenvalue distribution
combined with the localization measure shows a rich pattern. Moreover, for a
very well distinguished community structure, the whole spectrum is localized
except for a few eigenstates at the boundary of the circular distribution. As the network
deviates from the community structure there is a sudden change in the
localization property for a very small value of deformation from the perfect
community structure. We search for this effect for the whole range of
correlation strengths and for different community configurations. Furthermore,
we investigate spectral properties of a metabolic network of zebrafish, and
compare them with those of the model networks.
|
1101.0237
|
A Framework for Real-Time Face and Facial Feature Tracking using Optical
Flow Pre-estimation and Template Tracking
|
cs.CV
|
This work presents a framework for tracking head movements and capturing the
movements of the mouth and both the eyebrows in real-time. We present a head
tracker which is a combination of an optical flow and a template-based tracker.
The estimation of the optical flow head tracker is used as starting point for
the template tracker which fine-tunes the head estimation. This approach
together with re-updating the optical flow points prevents the head tracker
from drifting. This combination together with our switching scheme, makes our
tracker very robust against fast movement and motion-blur. We also propose a
way to reduce the influence of partial occlusion of the head. In both the
optical flow and the template based tracker we identify and exclude occluded
points.
|
1101.0242
|
Binary and nonbinary description of hypointensity in human brain MR
images
|
cs.CV
|
Accumulating evidence has shown that iron is involved in the mechanism
underlying many neurodegenerative diseases, such as Alzheimer's disease,
Parkinson's disease and Huntington's disease. Abnormal (higher) iron
accumulation has been detected in the brains of most neurodegenerative
patients, especially in the basal ganglia region. Presence of iron leads to
changes in MR signal in both magnitude and phase. Accordingly, tissues with
high iron concentration appear hypo-intense (darker than usual) in MR
contrasts. In this report, we proposed an improved binary hypointensity
description and a novel nonbinary hypointensity description based on principal
component analysis. Moreover, Kendall's rank correlation coefficient was used
to compare the complementary and redundant information provided by the two
methods in order to better understand the individual descriptions of iron
accumulation in the brain.
|
1101.0245
|
Use of Python and Phoenix-M Interface in Robotics
|
cs.RO cs.AI
|
In this paper I will show how to use Python programming with a computer
interface such as Phoenix-M 1 to drive simple robots. In my quest towards
Artificial Intelligence (AI) I am experimenting with a lot of different
possibilities in Robotics. This one will try to mimic the working of a simple
insect's nervous system using hard wiring and some minimal software usage. This
is the precursor to my advanced robotics and AI integration where I plan to use
a new paradigm of AI based on Machine Learning and Self Consciousness via
Knowledge Feedback and Update Process.
|
1101.0255
|
Conditional information and definition of neighbor in categorical random
fields
|
math.ST cs.LG stat.TH
|
We show that the notion of neighbor in Markov random fields, as defined by
Besag (1974), is not well-defined when the joint distribution of the sites is
not positive. In a random field with a finite number of sites we study the
conditions under which giving the value at extra sites will change the belief
of an agent about one site. We also study the conditions under which the
information from some sites is equivalent to giving the values at all other
sites. These concepts provide an alternative to the concept of neighbor for
the general case where the positivity condition of the joint does not hold.
|
1101.0270
|
"On the engineers' new toolbox" or Analog Circuit Design, using Symbolic
Analysis, Computer Algebra, and Elementary Network Transformations
|
cs.SC cs.CE cs.DM
|
In this paper, by way of three examples - a fourth-order low-pass active RC
filter, a rudimentary BJT amplifier, and an LC ladder - we show how the
algebraic capabilities of modern computer algebra systems can, or in the last
example might, be brought to bear on the task of designing analog circuits.
|
1101.0272
|
Social Norms for Online Communities
|
cs.SI cs.NI physics.soc-ph
|
Sustaining cooperation among self-interested agents is critical for the
proliferation of emerging online social communities, such as online communities
formed through social networking services. Providing incentives for cooperation
in social communities is particularly challenging because of their unique
features: a large population of anonymous agents interacting infrequently,
having asymmetric interests, and dynamically joining and leaving the community;
operation errors; and low-cost reputation whitewashing. In this paper, taking
these features into consideration, we propose a framework for the design and
analysis of a class of incentive schemes based on a social norm, which consists
of a reputation scheme and a social strategy. We first define the concept of a
sustainable social norm under which every agent has an incentive to follow the
social strategy given the reputation scheme. We then formulate the problem of
designing an optimal social norm, which selects a social norm that maximizes
overall social welfare among all sustainable social norms. Using the proposed
framework, we study the structure of optimal social norms and the impacts of
punishment lengths and whitewashing on optimal social norms. Our results show
that optimal social norms are capable of sustaining cooperation, with the
amount of cooperation varying depending on the community characteristics.
|
1101.0275
|
Asynchronous Interference Alignment
|
cs.IT math.IT
|
A constant K-user interference channel in which the users are not
symbol-synchronous is considered. It is shown that the asynchronism among the
users facilitates aligning interfering signals at each receiver node while it
does not affect the total number of degrees of freedom (DoF) of the channel. To
achieve the total K/2 DoF of the channel when single antenna nodes are used, a
novel practical interference alignment scheme is proposed wherein the alignment
task is performed with the help of asynchronous delays which inherently exist
among the received signals at each receiver node. When each node is equipped
with M > 1 antennas, it is argued that the same alignment scheme is sufficient
to achieve the total MK/2 DoF of the medium when all links between collocated
antennas experience the same asynchronous delay.
|
1101.0287
|
On the Capacity of the Heat Channel, Waterfilling in the Time-Frequency
Plane, and a C-NODE Relationship
|
cs.IT math.IT
|
The heat channel is defined by a linear time-varying (LTV) filter with
additive white Gaussian noise (AWGN) at the filter output. The continuous-time
LTV filter is related to the heat kernel of the quantum mechanical harmonic
oscillator, hence the name of the channel. The channel's capacity is given in
closed form by means of the Lambert W function. Also a waterfilling theorem in
the time-frequency plane for the capacity is derived. It relies on a specific
Szego theorem for which an essentially self-contained proof is provided.
Similarly, the rate distortion function for a related nonstationary source is
given in closed form and a (reverse) waterfilling theorem in the time-frequency
plane is derived. Finally, a second closed-form expression for the capacity of
the heat channel based on the detected perturbed filter output signals is
presented. In this context, a precise differential connection between channel
capacity and the normalized optimal detection error (NODE) is revealed. This
C-NODE relationship is compared with the well-known I-MMSE relationship
connecting mutual information with the minimum mean-square error (MMSE) of
estimation theory.
|
1101.0294
|
Virtual Full Duplex Wireless Broadcasting via Compressed Sensing
|
cs.IT cs.NI math.IT
|
A novel solution is proposed to undertake a frequent task in wireless
networks, which is to let all nodes broadcast information to and receive
information from their respective one-hop neighboring nodes. The contribution
is two-fold. First, as each neighbor selects one message-bearing codeword from
its unique codebook for transmission, it is shown that decoding their messages
based on a superposition of those codewords through the multiaccess channel is
fundamentally a problem of compressed sensing. In the case where each message
consists of a small number of bits, an iterative algorithm based on belief
propagation is developed for efficient decoding. Second, to satisfy the
half-duplex constraint, each codeword consists of randomly distributed on-slots
and off-slots. A node transmits during its on-slots, and listens to its
neighbors only through its own off-slots. Over one frame interval, each node
broadcasts a message to neighbors and simultaneously decodes neighbors'
messages based on the superposed signals received through its own off-slots.
Thus the solution fully exploits the multiaccess nature of the wireless medium
and addresses the half-duplex constraint at the fundamental level. In a network
consisting of Poisson distributed nodes, numerical results demonstrate that the
proposed scheme often achieves several times the rate of slotted ALOHA and CSMA
with the same packet error rate.
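The on/off-slot half-duplex mechanism described above can be sketched in a few lines — random codewords and the listener-side observable slots only; the codebooks, signal superposition, and belief-propagation decoding are beyond a toy example, and the slot counts and on-probability here are assumptions:

```python
import random

def make_codeword(frame_len, on_prob, rng):
    """Random half-duplex codeword: True = transmit slot, False = listen."""
    return [rng.random() < on_prob for _ in range(frame_len)]

def observable_slots(listener, talker):
    """Slots where `listener` is off while `talker` is on: the only
    slots through which the listener can hear this neighbour."""
    return [t for t, (l, k) in enumerate(zip(listener, talker))
            if (not l) and k]
```

With independent on-probability q at both nodes, roughly a q(1 - q) fraction of the frame is observable per neighbour, which is the raw material the decoder works with.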
|
1101.0302
|
Mutual Information, Relative Entropy, and Estimation in the Poisson
Channel
|
cs.IT math.IT
|
Let $X$ be a non-negative random variable and let the conditional
distribution of a random variable $Y$, given $X$, be ${Poisson}(\gamma \cdot
X)$, for a parameter $\gamma \geq 0$. We identify a natural loss function such
that: 1) The derivative of the mutual information between $X$ and $Y$ with
respect to $\gamma$ is equal to the \emph{minimum} mean loss in estimating $X$
based on $Y$, regardless of the distribution of $X$. 2) When $X \sim P$ is
estimated based on $Y$ by a mismatched estimator that would have minimized the
expected loss had $X \sim Q$, the integral over all values of $\gamma$ of the
excess mean loss is equal to the relative entropy between $P$ and $Q$.
For a continuous time setting where $X^T = \{X_t, 0 \leq t \leq T \}$ is a
non-negative stochastic process and the conditional law of $Y^T=\{Y_t, 0\le
t\le T\}$, given $X^T$, is that of a non-homogeneous Poisson process with
intensity function $\gamma \cdot X^T$, under the same loss function: 1) The
minimum mean loss in \emph{causal} filtering when $\gamma = \gamma_0$ is equal
to the expected value of the minimum mean loss in \emph{non-causal} filtering
(smoothing) achieved with a channel whose parameter $\gamma$ is uniformly
distributed between 0 and $\gamma_0$. Bridging the two quantities is the mutual
information between $X^T$ and $Y^T$. 2) This relationship between the mean
losses in causal and non-causal filtering holds also in the case where the
filters employed are mismatched, i.e., optimized assuming a law on $X^T$ which
is not the true one. Bridging the two quantities in this case is the sum of the
mutual information and the relative entropy between the true and the mismatched
distribution of $Y^T$. Thus, relative entropy quantifies the excess estimation
loss due to mismatch in this setting.
These results parallel those recently found for the Gaussian channel.
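For concreteness, the "natural loss function" this abstract refers to can be written down; the following is our best reading of the loss identified in this line of work and should be treated as an assumption rather than a quotation from the paper:

```latex
% Assumed form of the natural loss (a Bregman-type divergence,
% nonnegative and zero only at \hat{x} = x):
\ell(x,\hat{x}) \;=\; x\log\frac{x}{\hat{x}} \;-\; x \;+\; \hat{x},
\qquad x \ge 0,\ \hat{x} > 0,
% under which the abstract's first claim reads
\frac{\mathrm{d}}{\mathrm{d}\gamma}\, I(X;Y)
  \;=\; \min_{\hat{X}(\cdot)} \mathbb{E}\bigl[\ell\bigl(X,\hat{X}(Y)\bigr)\bigr],
% with the minimum attained by the conditional mean \hat{X}(Y)=E[X\,|\,Y].
```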
|