| id | title | categories | abstract |
|---|---|---|---|
1104.1227
|
Intervention in Power Control Games With Selfish Users
|
cs.IT cs.GT cs.NI math.IT
|
We study the power control problem in wireless ad hoc networks with selfish
users. Without incentive schemes, selfish users tend to transmit at their
maximum power levels, causing significant interference to each other. In this
paper, we study a class of incentive schemes based on intervention to induce
selfish users to transmit at desired power levels. An intervention scheme can
be implemented by introducing an intervention device that can monitor the power
levels of users and then transmit power to cause interference to users. We
mainly consider first-order intervention rules based on individual transmit
powers. We derive conditions on design parameters and the intervention
capability to achieve a desired outcome as a (unique) Nash equilibrium and
propose a dynamic adjustment process that the designer can use to guide users
and the intervention device to the desired outcome. The effect of using
intervention rules based on aggregate receive power is also analyzed. Our
results show that with perfect monitoring intervention schemes can be designed
to achieve any positive power profile while using interference from the
intervention device only as a threat. We also analyze the case of imperfect
monitoring and show that a performance loss can occur. Lastly, simulation
results are presented to illustrate the performance improvement from using
intervention rules and compare the performances of different intervention
rules.
|
1104.1237
|
A Statistical Nonparametric Approach of Face Recognition: Combination of
Eigenface & Modified k-Means Clustering
|
cs.CV
|
Facial expressions convey non-verbal cues, which play an important role in
interpersonal relations. Automatic recognition of human face based on facial
expression can be an important component of natural human-machine interface. It
may also be used in behavioural science. Although humans can recognize faces
practically without effort, reliable face recognition by machine remains a
challenge. This paper presents a new approach for recognizing the face of a
person, considering the expressions of the same human face at different
instances of time. The methodology combines the Eigenface method for feature
extraction with modified k-Means clustering for identification of the human
face. This method enables face recognition without using conventional
distance-measure classifiers. Simulation results show that the proposed face
recognition method based on k-Means clustering is useful for face images with
different facial expressions.
|
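The Eigenface-plus-clustering pipeline sketched in this abstract can be illustrated on synthetic data. This is a rough, hypothetical sketch, not the authors' method: the "faces" are random vectors, and the paper's modified k-Means is replaced here by a plain Lloyd iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "faces": two subjects, ten noisy expressions each (64-dim vectors).
base = rng.normal(size=(2, 64))
faces = np.vstack([base[i] + 0.05 * rng.normal(size=(10, 64)) for i in range(2)])

# Eigenface step: PCA via SVD of the mean-centered data matrix.
mean_face = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
proj = (faces - mean_face) @ Vt[:2].T      # project onto the top-2 eigenfaces

# Plain Lloyd k-means on the projected coordinates (k = 2 subjects).
centers = proj[[0, 10]]                    # one seed drawn from each subject
for _ in range(20):
    labels = np.argmin(((proj[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([proj[labels == c].mean(axis=0) for c in (0, 1)])

print(labels)   # first 10 faces fall in one cluster, last 10 in the other
```

With well-separated subjects the clustering recovers the identities from the eigenface coordinates alone, without a distance-measure classifier over the raw images.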
1104.1249
|
The Design of a Novel Prismatic Drive for a Three-DOF
Parallel-Kinematics Machine
|
cs.RO
|
The design of a novel prismatic drive is reported in this paper. This
transmission is based on Slide-o-Cam, a cam mechanism with multiple rollers
mounted on a common translating follower. The design of Slide-o-Cam was
reported elsewhere. This drive thus provides pure-rolling motion, thereby
reducing the friction of rack-and-pinion and linear drives. Such properties
can be used to design new transmissions for parallel-kinematics machines. In
this paper, this transmission is intended to replace the ball-screws in
Orthoglide, a three-dof parallel robot intended for machining applications.
|
1104.1279
|
Context Aware Multisensor Image Fusion for Military Sensor Networks
using Multi Agent System
|
cs.MA
|
This paper proposes a Context Aware Agent based Military Sensor Network
(CAMSN) to form an improved infrastructure for multi-sensor image fusion. It
considers contexts driven by a node and a sink. Contexts such as general and
critical object detection are node driven, whereas sensing time (such as day or
night) is sink driven. The agencies used in the scheme are categorized as node
and sink agencies. Each agency employs a set of static and mobile agents to
perform dedicated tasks. The node agency performs context sensing and context
interpretation based on the sensed image and sensing time. It comprises a node
manager agent, a context agent and a node blackboard (NBB). The context agent
gathers the context from the target and updates the NBB; the node manager agent
interprets the context and passes the context information to the sink node
using a flooding mechanism. The sink agency comprises a sink manager agent, a
fusing agent, and a sink blackboard. A context at the sensor node
triggers the fusion process at the sink. Based on the context, the sink manager
agent triggers the fusing agent, which roams the network, visits active sensor
nodes, fuses the relevant images and sends the fused image to the sink. The
fusing agent uses a wavelet transform for fusion. The scheme is
simulated for testing its operation effectiveness in terms of fusion time, mean
square error, throughput, dropping rate, bandwidth requirement, node battery
usage and agent overhead.
|
1104.1311
|
Latent table discovery by semantic relationship extraction between
unrelated sets of entity sets of structured data sources
|
cs.DB
|
Querying is one of the basic functionalities expected of a database system.
Query efficiency is adversely affected by an increase in the number of
participating tables. Moreover, querying based on syntax largely limits the
gamut of queries a database system can process. Syntactic queries rely on the
database table structure, which is a cause of concern for large organisations
due to incompatibility between heterogeneous systems that store data
distributed across geographic locations. These problems can be addressed to
some extent by moving towards semantic technology, making the data and the
database meaningful. In doing so, relationships between sets of entity sets are
no longer limited to syntactic constraints but can also include semantic
connections, even though such relationships may be tacit, intangible and
invisible. The goal of this work is to extract such hidden relationships
between unrelated sets of entity sets and store them in a tangible form. A few
sample cases are provided to demonstrate that the proposed work improves
querying significantly.
|
1104.1317
|
Algorithm for Sensor Network Attitude Problem
|
math.OC cs.SY
|
The sensor network attitude problem consists in retrieving the attitude of each
sensor of a network from some relative orientations known between pairs of
sensors. The attitude of a sensor is its orientation in an absolute axis
system. We present in this paper a method for solving the sensor network
attitude problem using the quaternion formalism, which allows linear algebra
tools to be applied. The proposed algorithm solves the problem when all of the
relative attitudes are known. A complete characterisation of the algorithm is
established: spatial complexity, time complexity and robustness. Our algorithm
is validated in simulations and with real experiments.
|
1104.1320
|
On the geometry of small weight codewords of dual algebraic geometric
codes
|
math.AG cs.IT math.IT
|
We investigate the geometry of the support of small weight codewords of dual
algebraic geometric codes on smooth complete intersections by applying the
powerful tools recently developed by Alain Couvreur. In particular, by
restricting ourselves to the case of Hermitian codes, we recover and extend
previous results obtained by the second named author jointly with Marco
Pellegrini and Massimiliano Sala.
|
1104.1389
|
Generalizing the Markov and covariance interpolation problem using
input-to-state filters
|
math.OC cs.SY
|
In the Markov and covariance interpolation problem a transfer function $W$ is
sought that matches the first coefficients in the expansion of $W$ around zero
and the first coefficients of the Laurent expansion of the corresponding
spectral density $WW^\star$. Here we solve an interpolation problem where the
matched parameters are the coefficients of expansions of $W$ and $WW^\star$
around various points in the disc. The solution is derived using input-to-state
filters and is determined by simple calculations such as solving Lyapunov
equations and generalized eigenvalue problems.
|
1104.1408
|
Coding Bounds for Multiple Phased-Burst Correction and Single Burst
Correction Codes
|
cs.IT math.IT
|
In this paper, two upper bounds on the achievable code rate of linear block
codes for multiple phased-burst correction (MPBC) are presented. One bound is
constrained to a maximum correctable cyclic burst length within every subblock,
or equivalently to a minimum error-free length, or gap, within every
phased burst. This bound, when reduced to the special case of a bound for
single burst correction (SBC), is shown to be the Abramson bound when the
cyclic burst length is less than half the block length. The second MPBC bound
is developed without the minimum error-free gap constraint and is used as a
comparison to the first bound.
|
1104.1436
|
Efficient First Order Methods for Linear Composite Regularizers
|
cs.LG math.OC stat.ME stat.ML
|
A wide class of regularization problems in machine learning and statistics
employ a regularization term which is obtained by composing a simple convex
function \omega with a linear transformation. This setting includes Group Lasso
methods, the Fused Lasso and other total variation methods, multi-task learning
methods and many more. In this paper, we present a general approach for
computing the proximity operator of this class of regularizers, under the
assumption that the proximity operator of the function \omega is known in
advance. Our approach builds on a recent line of research on optimal first
order optimization methods and uses fixed point iterations for numerically
computing the proximity operator. It is more general than current approaches
and, as we show with numerical simulations, computationally more efficient than
available first order methods which do not achieve the optimal rate. In
particular, our method outperforms state-of-the-art O(1/T) methods for
overlapping Group Lasso and matches optimal O(1/T^2) methods for the Fused
Lasso and tree structured Group Lasso.
|
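The core idea above, computing the proximity operator of a composite penalty \omega(Bx) by iteration when only the prox of \omega is known, can be sketched on the Fused Lasso case. This is a minimal illustration, not the authors' optimal-rate algorithm: hypothetically, \omega = lam*||.||_1 and B is the 1D difference matrix, and the prox is computed by projected gradient on the dual problem.

```python
import numpy as np

def prox_fused_lasso(y, lam, n_iter=5000):
    """prox of lam * sum_i |x[i+1] - x[i]| at y, via the dual fixed point
    v <- clip(v + tau * D @ (y - D.T @ v), -lam, lam), then x = y - D.T @ v."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)               # (n-1) x n difference matrix
    tau = 1.0 / np.linalg.norm(D @ D.T, 2)       # step size 1 / ||D||^2
    v = np.zeros(n - 1)
    for _ in range(n_iter):
        v = np.clip(v + tau * (D @ (y - D.T @ v)), -lam, lam)
    return y - D.T @ v

y = np.array([0.0, 0.0, 1.0, 1.0])
print(prox_fused_lasso(y, lam=0.1))   # ~ [0.05, 0.05, 0.95, 0.95]
```

For this single-jump signal the exact prox shrinks the two plateaus toward each other by lam/2 each, which the fixed-point iteration recovers numerically.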
1104.1448
|
A Basic Unified Context for Evaluating the Beam Forming and MIMO Options
in a Wireless Link
|
cs.IT math.IT
|
For one isolated wireless link we take a unified look at simple beamforming
(BF) as contrasted with MIMO to see how both emerge and under which conditions
advantage goes to one or the other. Communication is from a high base array to
a user in clutter. The channel propagation model is derived from fundamentals.
The base knows the power angular spectrum, but not the channel instantiation.
Eigenstates of the field spatial autocorrelation are the preferred apodizations
(APODs), which drive the natural modes for exciting electric fields.
Preference for MIMO or BF depends on the APOD spectra, which are surveyed,
pointing to various asymptotic effects including the maximum BF gain.
Performance is
studied under varying eigenmode power settings at 10% outage. We focus on (1,4)
driving the strongest mode for BF and (4,4) driving the 4 strongest for MIMO.
Results are obtained under representative parameter settings, e.g. an angular
spread of 8 deg, 2 GHz carrier, 0 dB SNR and an array aperture of 1.68m (4
field decorrelation lengths) with antenna elements spaced as close as lambda/2.
We find MIMO excelling for array apertures much larger than the decorrelation
length; BF does almost as well for smaller apertures.
|
1104.1450
|
Plug-in Approach to Active Learning
|
math.ST cs.LG stat.TH
|
We present a new active learning algorithm based on nonparametric estimators
of the regression function. Our investigation provides probabilistic bounds for
the rates of convergence of the generalization error achievable by the
proposed method over a broad class of underlying distributions. We also prove
minimax
lower bounds which show that the obtained rates are almost tight.
|
1104.1457
|
High-Rate Short-Block LDPC Codes for Iterative Decoding with
Applications to High-Density Magnetic Recording Channels
|
cs.IT math.IT
|
This paper investigates the Triangle Single Parity Check (T/SPC) code, a
novel class of high-rate low-complexity LDPC codes. T/SPC is a regular, soft
decodable, linear-time encodable/decodable code. Compared to previous high-rate
and low-complexity LDPC codes, such as the well-known Turbo Product Code /
Single Parity Check (TPC/SPC), T/SPC provides higher code rates, shorter
codewords, and lower complexity. This makes T/SPC very attractive for practical
implementation on integrated circuits.
In addition, we analyze the performance of iterative decoders based on a
soft-input soft-output (SISO) equalizer using T/SPC over high-density
perpendicular magnetic recording channels. Computer simulations show that the
proposed scheme is able to achieve a gain of up to 0.3 dB over TPC/SPC codes
with a significant reduction of implementation complexity.
|
1104.1471
|
New Techniques for Upper-Bounding the ML Decoding Performance of Binary
Linear Codes
|
cs.IT math.IT
|
In this paper, new techniques are presented to either simplify or improve
most existing upper bounds on the maximum-likelihood (ML) decoding performance
of the binary linear codes over additive white Gaussian noise (AWGN) channels.
Firstly, the recently proposed union bound using truncated weight spectra by
Ma {\em et al.} is re-derived in a detailed way based on Gallager's first
bounding technique (GFBT), where the "good region" is specified by a
sub-optimal list decoding algorithm. The error probability caused by the bad
region can be upper-bounded by the tail-probability of a binomial distribution,
while the error probability caused by the good region can be upper-bounded by
most existing techniques. Secondly, we propose two techniques to tighten the
union bound on the error probability caused by the good region. The first
technique is based on pair-wise error probabilities, which can be further
tightened by employing the independence between the error events and certain
components of the received random vectors. The second technique is based on
triplet-wise error probabilities, which can be upper-bounded by proving that
any three bipolar vectors form a non-obtuse triangle. The proposed bounds
improve the conventional union bounds but have a similar complexity since they
involve only the $Q$-function. The proposed bounds can also be adapted to
bit-error probabilities.
|
1104.1472
|
Gaussian Affine Feature Detector
|
cs.CV
|
A new method is proposed to extract the geometric information of image
features. Using a Gaussian as the input signal, a theoretically optimal
solution for calculating a feature's affine shape is derived. Because it is
based on the analytic result of a feature model, the method differs from
conventional iterative approaches. From the model, a feature's parameters such
as position, orientation, background luminance, contrast, area and aspect
ratio can be extracted. Tested on synthesized and benchmark data, the method
matches or outperforms existing approaches in terms of accuracy, speed and
stability. The method can detect
small, long or thin objects precisely, and works well under general conditions,
such as for low contrast, blurred or noisy images.
|
1104.1477
|
An Agent-based Architecture for a Knowledge-work Support System
|
cs.HC cs.AI cs.MA
|
Enhancement of technology-based system support for knowledge workers is an
issue of great importance. The "Knowledge work Support System (KwSS)" framework
analyzes this issue from a holistic perspective. KwSS proposes a set of design
principles for building a comprehensive IT-based support system, which enhances
the capability of a human agent for performing a set of complex and
interrelated knowledge-works relevant to one or more target task-types within a
domain of professional activities. In this paper, we propose a high-level,
software-agent based architecture for realizing a KwSS system that incorporates
these design principles. Here we focus on developing a number of crucial
enabling components of the architecture, including (1) an Activity Theory-based
novel modeling technique for knowledge-intensive activities; (2) a graph
theoretic formalism for representing these models in a knowledge base in
conjunction with relevant entity taxonomies/ontologies; and (3) an algorithm
for reasoning, using the knowledge base, about various aspects of possible
supports for activities at performance-time.
|
1104.1485
|
Fuzzy Rules and Evidence Theory for Satellite Image Analysis
|
cs.CV
|
The design of a fuzzy rule-based classifier is proposed. The performance of
the classifier for multispectral satellite image classification is improved
using the Dempster-Shafer theory of evidence, which exploits information from
the neighboring pixels. The classifiers are tested rigorously on two known
images and their performance is found to be better than the results available
in the literature. We also demonstrate the improvement in performance obtained
by using D-S theory along with fuzzy rule-based classifiers over the basic
fuzzy rule-based classifiers for all the test cases.
|
1104.1506
|
Prosper: image and robot-guided prostate brachytherapy
|
cs.RO physics.med-ph
|
Brachytherapy for localized prostate cancer consists in destroying the cancer
by introducing radioactive iodine seeds into the gland through hollow needles.
The
planning of the position of the seeds and their introduction into the prostate
is based on intra-operative ultrasound (US) imaging. We propose to optimize the
global quality of the procedure by: i) using 3D US; ii) enhancing US data with
MRI registration; iii) using a specially designed needle-insertion robot,
connected to the imaging data. The imaging methods have been successfully
tested on patient data while the robot accuracy has been evaluated on a
realistic deformable phantom.
|
1104.1528
|
Coded Modulation for Power Line Communications
|
cs.IT math.IT
|
We discuss the application of coded modulation for power-line communications.
We combine M-ary FSK with diversity and coding to make the transmission robust
against permanent frequency disturbances and impulse noise. We give a
particular example of the coding/modulation scheme that is in agreement with
the existing CENELEC norms. The scheme can be considered as a form of coded
Frequency Hopping and is thus extendable to any frequency range.
|
1104.1546
|
Physical Simulation of Inarticulate Robots
|
cs.RO
|
In this note we study the structure and the behavior of inarticulate robots.
We introduce a robot that moves by successive revolvings. The robot's structure
is analyzed, simulated and discussed in detail.
|
1104.1550
|
A bio-inspired image coder with temporal scalability
|
cs.CV cs.IT cs.NE math.IT
|
We present a novel bio-inspired and dynamic coding scheme for static images.
Our coder aims at reproducing the main steps of the visual stimulus processing
in the mammalian retina taking into account its time behavior. The main novelty
of this work is to show how to exploit the time behavior of the retina cells to
ensure, in a simple way, scalability and bit allocation. To do so, our main
source of inspiration will be the biologically plausible retina model called
Virtual Retina. Following a similar structure, our model has two stages. The
first stage is an image transform which is performed by the outer layers in the
retina. Here it is modelled by filtering the image with a bank of difference of
Gaussians with time-delays. The second stage is a time-dependent
analog-to-digital conversion which is performed by the inner layers in the
retina. By design, our coder enables scalability and bit allocation across
time. Also, our decoded images do not show annoying artefacts
such as ringing and block effects. As a whole, this article shows how to
capture the main properties of a biological system, here the retina, in order
to design a new efficient coder.
|
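The first (outer-layer) stage described above is, in essence, a bank of difference-of-Gaussians filters. A minimal sketch with SciPy follows; the retina model's time-delay machinery is omitted, and the scales below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((32, 32))

# Bank of difference-of-Gaussians: narrow "center" sigma minus wider "surround".
scales = [(0.5, 1.0), (1.0, 2.0), (2.0, 4.0)]
bank = [gaussian_filter(image, s_c) - gaussian_filter(image, s_s)
        for s_c, s_s in scales]

# A DoG is a band-pass filter: it cancels entirely on a constant (flat) image.
flat = np.ones((32, 32))
dog_flat = gaussian_filter(flat, 1.0) - gaussian_filter(flat, 2.0)
print(np.abs(dog_flat).max())   # ~ 0
```

Each element of `bank` is one band of the transform; in the coder these bands would additionally be staggered in time to obtain the scalability the abstract describes.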
1104.1556
|
Benchmarking the Quality of Diffusion-Weighted Images
|
cs.CV
|
We present a novel method that allows for measuring the quality of
diffusion-weighted MR images dependent on the image resolution and the image
noise. For this purpose, we introduce a new thresholding technique so that
noise and the signal can automatically be estimated from a single data set.
Thus, neither user interaction nor a double-acquisition technique, which would
require a time-consuming geometrical registration, is needed. As a
coarser image resolution or slice thickness leads to a higher signal-to-noise
ratio (SNR), our benchmark determines a resolution-independent quality measure
so that images with different resolutions can be adequately compared. To
evaluate our method, a set of diffusion-weighted images from different vendors
is used. It is shown that the quality can efficiently be determined and that
the automatically computed SNR is comparable to the SNR measured in a manually
selected region of interest.
|
1104.1582
|
A Fuzzy Control Algorithm for the Electronic Stability Program optimized
for tyre burst control
|
cs.SY math.OC
|
This paper introduces an improved Electronic Stability Program for cars that
can deal with the sudden burst of a tyre. The Improved Electronic Stability
Program (IESP) is based on a fuzzy logic algorithm. The IESP collects data from
the same sensors of a standard ESP and acts on brakes/throttle with the same
actuators. The IESP reads the driver's steering angle and the dynamic
condition of the car and selectively acts on throttle and brakes in order to
keep the car on the required course even during a tyre burst.
|
1104.1605
|
Efficient Top-K Retrieval in Online Social Tagging Networks
|
cs.IR cs.DB cs.SI
|
We consider in this paper top-k query answering in social tagging systems,
also known as folksonomies. This problem requires a significant departure from
existing, socially agnostic techniques. In a network-aware context, one can
(and should) exploit the social links, which can indicate how users relate to
the seeker and how much weight their tagging actions should have in the result
build-up. We propose an algorithm that has the potential to scale to current
applications. While the problem has already been considered in previous
literature, this was done either under strong simplifying assumptions or under
choices that cannot scale to even moderate-size real world applications. We
first consider a key aspect of the problem, which is accessing the closest or
most relevant users for a given seeker. We describe how this can be done on the
fly (without any pre-computations) for several possible choices - arguably the
most natural ones - of proximity computation in a user network. Based on this,
our top-k algorithm is sound and complete, while addressing the scalability
issues of the existing ones. Importantly, our technique is instance optimal in
the case when the search relies exclusively on the social weight of tagging
actions. To further reduce response times, we then consider directions for
efficiency by approximation. Extensive experiments on real world data show that
our techniques can drastically improve the response time, without sacrificing
precision.
|
1104.1672
|
Dimension-free tail inequalities for sums of random matrices
|
math.PR cs.LG stat.ML
|
We derive exponential tail inequalities for sums of random matrices with no
dependence on the explicit matrix dimensions. These are similar to the matrix
versions of the Chernoff bound and Bernstein inequality except with the
explicit matrix dimensions replaced by a trace quantity that can be small even
when the dimension is large or infinite. Some applications to principal
component analysis and approximate matrix multiplication are given to
illustrate the utility of the new bounds.
|
1104.1677
|
Automatic Vehicle Checking Agent (VCA)
|
cs.AI
|
A definition of intelligence is given in terms of performance that can be
quantitatively measured. In this study, we have presented a conceptual model of
Intelligent Agent System for Automatic Vehicle Checking Agent (VCA). To achieve
this goal, we have introduced several kinds of agents that exhibit intelligent
features. These are the management agent, internal agent, external agent,
watcher agent and report agent. Metrics and measurements are suggested for
evaluating the performance of the Automatic Vehicle Checking Agent (VCA).
Calibration data and test facilities are suggested to facilitate the
development of intelligent systems.
|
1104.1678
|
A Proposed Decision Support System/Expert System for Guiding Fresh
Students in Selecting a Faculty in Gomal University, Pakistan
|
cs.AI
|
This paper presents the design and development of a proposed rule-based
Decision Support System that will help students select the most suitable
faculty/major when taking admission to Gomal University, Dera Ismail Khan,
Pakistan. The basic idea of our approach is to design a model for testing and
measuring student capabilities such as intelligence, understanding,
comprehension and mathematical concepts, together with the student's past
academic record, and applying the module results to a rule-based decision
support system to determine the compatibility of those capabilities with the
available faculties/majors at Gomal University. The result is shown as a list
of suggested faculties/majors along with the student's capabilities and
abilities.
|
1104.1717
|
Continuous and Discrete Adjoints to the Euler Equations for Fluids
|
cs.CE math.NA physics.flu-dyn
|
Adjoints are used in optimization to speed-up computations, simplify
optimality conditions or compute sensitivities. Because time is reversed in
adjoint equations with first order time derivatives, boundary conditions and
transmission conditions through shocks can be difficult to understand. In this
article we analyze the adjoint equations that arise in the context of
compressible flows governed by the Euler equations of fluid dynamics. We show
that the continuous adjoints and the discrete adjoints computed by automatic
differentiation agree numerically; in particular the adjoint is found to be
continuous at the shocks and usually discontinuous at contact discontinuities
by both.
|
1104.1742
|
Asymptotic Capacity Analysis for Adaptive Transmission Schemes under
General Fading Distributions
|
cs.IT math.IT
|
Asymptotic comparisons of ergodic channel capacity at high and low
signal-to-noise ratios (SNRs) are provided for several adaptive transmission
schemes over fading channels with general distributions, including optimal
power and rate adaptation, rate adaptation only, channel inversion and its
variants. Analysis of the high-SNR pre-log constants of the ergodic capacity
reveals the existence of constant capacity difference gaps among the schemes
with a pre-log constant of 1. Closed-form expressions for these high-SNR
capacity difference gaps are derived, which are proportional to the SNR loss
between these schemes in dB scale. The largest one of these gaps is found to be
between the optimal power and rate adaptation scheme and the channel inversion
scheme. Based on these expressions it is shown that the presence of space
diversity or multi-user diversity makes channel inversion arbitrarily close to
achieving optimal capacity at high SNR with sufficiently large number of
antennas or users. A low-SNR analysis also reveals that the presence of fading
provably always improves capacity at sufficiently low SNR, compared to the
additive white Gaussian noise (AWGN) case. Numerical results are shown to
corroborate our analytical results.
|
1104.1745
|
Multi-User Diversity with Random Number of Users
|
cs.IT math.IT
|
Multi-user diversity is considered when the number of users in the system is
random. The complete monotonicity of the error rate as a function of the
(deterministic) number of users is established and it is proved that
randomization of the number of users always leads to deterioration of average
system performance at any average SNR. Further, using stochastic ordering
theory, a framework for comparison of system performance for different user
distributions is provided. For Poisson distributed users, the difference in
error rate of the random and deterministic number of users cases is shown to
asymptotically approach zero as the average number of users goes to infinity
for any fixed average SNR. In contrast, for a finite average number of users
and high SNR, it is found that randomization of the number of users
deteriorates performance significantly, and the diversity order under fading is
dominated by the smallest possible number of users. For Poisson distributed
users communicating over Rayleigh faded channels, further closed-form results
are provided for average error rate, and the asymptotic scaling law for ergodic
capacity is also provided. Simulation results are provided to corroborate our
analytical findings.
|
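The randomization claim above can be checked numerically with a toy model. The error curve P(n) = 0.2**n below is a hypothetical completely monotone stand-in for a high-SNR selection-diversity error rate, not the paper's expression; since a completely monotone P is convex, Jensen's inequality says averaging it over a random user count N with the same mean can only increase the error rate.

```python
import math

def P(n):
    """Hypothetical error rate, completely monotone (hence convex) in n."""
    return 0.2 ** n

mean_users = 3.0
# Average error with N ~ Poisson(mean_users); terms vanish fast, so truncate.
avg_random = sum(math.exp(-mean_users) * mean_users**n / math.factorial(n) * P(n)
                 for n in range(60))
avg_fixed = P(mean_users)   # deterministic number of users with the same mean

print(avg_random, avg_fixed)
assert avg_random >= avg_fixed   # randomization deteriorates performance
```

For a Poisson N the sum is just the probability generating function E[s**N] = exp(mean*(s-1)) at s = 0.2, so the gap here can be verified in closed form as well.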
1104.1789
|
Zipf's law unzipped
|
physics.soc-ph cs.SI
|
Why does Zipf's law give a good description of data from seemingly completely
unrelated phenomena? Here it is argued that the reason is that they can all be
described as outcomes of a ubiquitous random group division: the elements can
be citizens of a country and the groups family names, or the elements can be
all the words making up a novel and the groups the unique words, or the
elements could be inhabitants and the groups the cities in a country, and so
on. A Random Group Formation (RGF) is presented from which a Bayesian estimate
is obtained based on minimal information: it provides the best prediction for
the number of groups with $k$ elements, given the total number of elements,
groups, and the number of elements in the largest group. For each specification
of these three values, the RGF predicts a unique group distribution
$N(k)\propto \exp(-bk)/k^{\gamma}$, where the power-law index $\gamma$ is a
unique function of the same three values. The universality of the result is
made possible by the fact that no system specific assumptions are made about
the mechanism responsible for the group division. The direct relation between
$\gamma$ and the total number of elements, groups, and the number of elements
in the largest group, is calculated. The predictive power of the RGF model is
demonstrated by direct comparison with data from a variety of systems. It is
shown that $\gamma$ usually takes values in the interval $1\leq\gamma\leq 2$
and that the value for a given phenomenon depends in a systematic way on the
total size of the data set. The results are put in the context of earlier
discussions on Zipf's and Gibrat's laws, $N(k)\propto k^{-2}$ and the
connection between growth models and RGF is elucidated.
|
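The RGF prediction N(k) ∝ exp(-bk)/k^γ above is straightforward to evaluate numerically. A sketch follows; the parameter values are hypothetical, and in practice b, γ and the normalization would be fixed by the three observed quantities (total elements, total groups, and largest-group size).

```python
import numpy as np

def rgf_distribution(k_max, b, gamma, n_groups):
    """RGF prediction N(k) = A * exp(-b*k) / k**gamma for k = 1..k_max,
    with A chosen so the group counts sum to the total number of groups."""
    k = np.arange(1, k_max + 1)
    shape = np.exp(-b * k) / k ** gamma
    return n_groups * shape / shape.sum()

N_k = rgf_distribution(k_max=1000, b=0.001, gamma=1.5, n_groups=10_000)
print(N_k[:3])          # predicted counts for group sizes k = 1, 2, 3
print(N_k.sum())        # ~ 10000, by construction
```

With b small and γ between 1 and 2 the curve is an almost pure power law over most of its range, with the exponential cutoff only affecting the largest groups, matching the regime the abstract reports for real data.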
1104.1823
|
Which weighted circulant networks have perfect state transfer?
|
cs.DM cs.IT math.IT quant-ph
|
The question of perfect state transfer existence in quantum spin networks
based on weighted graphs has been recently presented by many authors. We give a
simple condition for characterizing weighted circulant graphs allowing perfect
state transfer in terms of their eigenvalues. This is done by extending the
results about quantum periodicity existence in the networks obtained by Saxena,
Severini and Shparlinski and characterizing integral graphs among weighted
circulant graphs. Finally, classes of weighted circulant graphs supporting
perfect state transfer are found. These classes completely cover the class of
circulant graphs having perfect state transfer in the unweighted case. In fact,
we show that there exists a weighted integral circulant graph with $n$
vertices having perfect state transfer if and only if $n$ is even. Moreover we
prove the non-existence of perfect state transfer for several other classes of
weighted integral circulant graphs of even order.
|
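The PST condition can be checked directly from the spectral decomposition F(t) = exp(iAt). As a small sanity check (an unweighted example, not one of the paper's weighted classes): the 4-cycle C4, a circulant graph on an even number of vertices, admits perfect state transfer between antipodal vertices at t = π/2.

```python
import numpy as np

# Circulant adjacency matrix of the cycle C4 (connection set {1, 3}).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

def F(t, A):
    """Unitary evolution F(t) = exp(i A t), via the eigendecomposition of A."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(1j * w * t)) @ V.conj().T

tau = np.pi / 2
print(abs(F(tau, A)[0, 2]))   # ~ 1: perfect state transfer from vertex 0 to 2
```

The same routine applied to a weighted circulant adjacency matrix tests the eigenvalue condition the abstract refers to, since |F(τ)_{ab}| = 1 is exactly the PST criterion.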
1104.1824
|
Simulating Spiking Neural P systems without delays using GPUs
|
cs.DC cs.ET cs.FL cs.NE q-bio.NC
|
We present in this paper our work regarding simulating a type of P system
known as a spiking neural P system (SNP system) using graphics processing units
(GPUs). GPUs, because of their architectural optimization for parallel
computations, are well-suited for highly parallelizable problems. Due to the
advent of general-purpose GPU computing in recent years, the use of GPUs is no
longer limited to graphics and video processing alone, but extends to
computationally intensive scientific and mathematical applications as well.
Moreover, P systems, including SNP systems, are inherently and maximally
parallel computing models whose inspiration is taken from the functioning and
dynamics of a living cell. In particular, SNP systems try to give a modest but
formal representation of a special type of cell known as the neuron, and of
the interactions of neurons with one
another. The nature of SNP systems allowed their representation as matrices,
which is a crucial step in simulating them on highly parallel devices such as
GPUs. The highly parallel nature of SNP systems necessitates the use of hardware
intended for parallel computations. The simulation algorithms, design
considerations, and implementation are presented. Finally, simulation results,
observations, and analyses using an SNP system that generates all numbers in
$\mathbb{N} \setminus \{1\}$ are discussed, as well as recommendations for future work.
|
1104.1825
|
Characterization of circulant graphs having perfect state transfer
|
cs.DM cs.IT math.IT quant-ph
|
In this paper we answer the question of when circulant quantum spin networks
with nearest-neighbor couplings can give perfect state transfer. The network is
described by a circulant graph $G$, which is characterized by its circulant
adjacency matrix $A$. Formally, we say that there exists a {\it perfect state
transfer} (PST) between vertices $a,b\in V(G)$ if $|F(\tau)_{ab}|=1$, for some
positive real number $\tau$, where $F(t)=\exp(\mathrm{i} At)$. Saxena, Severini and
Shparlinski ({\it International Journal of Quantum Information} 5 (2007),
417--430) proved that $|F(\tau)_{aa}|=1$ for some $a\in V(G)$ and $\tau\in
\R^+$ if and only if all eigenvalues of $G$ are integer (that is, the graph is
integral). The integral circulant graph $\ICG_n (D)$ has the vertex set $Z_n =
\{0, 1, 2, ..., n - 1\}$ and vertices $a$ and $b$ are adjacent if
$\gcd(a-b,n)\in D$, where $D \subseteq \{d : d \mid n,\ 1\leq d<n\}$. These
graphs are highly symmetric and have important applications in chemical graph
theory. We show that $\mathrm{ICG}_n(D)$ has PST if and only if $n\in 4\mathbb{N}$
and $D=\widetilde{D_3}\cup D_2\cup 2D_2\cup 4D_2\cup \{n/2^a\}$, where
$\widetilde{D_3}=\{d\in D\ |\ n/d\in 8\mathbb{N}\}$,
$D_2= \{d\in D\ |\ n/d\in 8\mathbb{N}+4\}\setminus \{n/4\}$ and $a\in\{1,2\}$.
We have thus answered the question
of complete characterization of perfect state transfer in integral circulant
graphs raised in {\it Quantum Information and Computation}, Vol. 10, No. 3&4
(2010) 0325--0342 by Angeles-Canul {\it et al.} Furthermore, we also calculate
perfect quantum communication distance (distance between vertices where PST
occurs) and describe the spectra of integral circulant graphs having PST. We
conclude by giving a closed form expression calculating the number of integral
circulant graphs of a given order having PST.
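The gcd-based definition of the integral circulant graph and the PST condition $|F(\tau)_{ab}|=1$ can be checked numerically via the circulant eigendecomposition. The sketch below is illustrative only (a direct computation, not the paper's characterization); it verifies the well-known PST of the 4-cycle $\mathrm{ICG}_4(\{1\})$ between antipodal vertices at $\tau=\pi/2$:

```python
import cmath
from math import gcd, pi

def icg_first_row(n, D):
    # row 0 of the adjacency matrix of ICG_n(D): 0 ~ k iff gcd(k, n) in D
    return [1 if gcd(k, n) in D else 0 for k in range(n)]

def transfer_amplitude(n, D, a, b, t):
    # |F(t)_{ab}| for F(t) = exp(i*A*t), via the circulant eigendecomposition:
    # lambda_j = sum_k c_k w^{jk}, with w = exp(2*pi*i/n)
    c = icg_first_row(n, D)
    w = cmath.exp(2j * pi / n)
    lam = [sum(c[k] * w ** (j * k) for k in range(n)).real for j in range(n)]
    amp = sum(cmath.exp(1j * lam[j] * t) * w ** (j * (a - b)) for j in range(n)) / n
    return abs(amp)

# ICG_4({1}) is the 4-cycle: perfect state transfer 0 -> 2 at t = pi/2
print(round(transfer_amplitude(4, {1}, 0, 2, pi / 2), 6))  # -> 1.0
```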
|
1104.1872
|
Convex and Network Flow Optimization for Structured Sparsity
|
math.OC cs.LG stat.ML
|
We consider a class of learning problems regularized by a structured
sparsity-inducing norm defined as the sum of l_2- or l_infinity-norms over
groups of variables. Whereas much effort has been put into developing fast
optimization techniques when the groups are disjoint or embedded in a
hierarchy, we address here the case of general overlapping groups. To this end,
we present two different strategies: On the one hand, we show that the proximal
operator associated with a sum of l_infinity-norms can be computed exactly in
polynomial time by solving a quadratic min-cost flow problem, allowing the use
of accelerated proximal gradient methods. On the other hand, we use proximal
splitting techniques, and address an equivalent formulation with
non-overlapping groups, but in higher dimension and with additional
constraints. We propose efficient and scalable algorithms exploiting these two
strategies, which are significantly faster than alternative approaches. We
illustrate these methods with several problems such as CUR matrix
factorization, multi-task learning of tree-structured dictionaries, background
subtraction in video sequences, image denoising with wavelets, and topographic
dictionary learning of natural image patches.
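As background for the proximal methods discussed above: for disjoint groups with l_2 norms, the proximal operator is plain block soft-thresholding. The paper's contribution is the much harder overlapping and l_infinity case (solved via min-cost flow); the easy disjoint case can be sketched as:

```python
from math import sqrt

def prox_group_l2(v, groups, lam):
    # block soft-thresholding: prox of lam * sum_g ||v_g||_2 over disjoint groups
    out = list(v)
    for g in groups:
        norm = sqrt(sum(v[i] ** 2 for i in g))
        scale = 0.0 if norm <= lam else 1.0 - lam / norm
        for i in g:
            out[i] = scale * v[i]
    return out

# first group survives (norm 5 > lam), second is zeroed out (norm ~0.14 <= lam)
res = prox_group_l2([3.0, 4.0, 0.1, -0.1], [[0, 1], [2, 3]], lam=1.0)
print(res)
```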
|
1104.1880
|
Approximative Covariance Interpolation
|
math.OC cs.SY
|
When methods of moments are used for identification of power spectral
densities, a model is matched to estimated second-order statistics such as
covariance estimates. If the estimates are good, there is an infinite
family of power spectra consistent with such an estimate, and in applications
such as identification we want to single out the most representative spectrum.
We choose a prior spectral density to represent a priori information, and the
spectrum closest to it in a given quasi-distance is determined. However, if the
estimates are based on few data, or the model class considered is not
consistent with the process considered, it may be necessary to use an
approximative covariance interpolation. Two different types of regularization
are considered in this paper, which can be applied to many covariance
interpolation based estimation methods.
|
1104.1892
|
"Improved FCM algorithm for Clustering on Web Usage Mining"
|
cs.IR cs.CV
|
In this paper we present a clustering method. The standard clustering method
is very sensitive to the initial center values, places high requirements on
the data set, and cannot handle noisy data. The proposed method uses
information entropy to initialize the cluster centers and introduces weighting
parameters to adjust the location of the cluster centers and to handle the
noise problem. The navigation datasets are sequential in nature; clustering
web data means finding the groups which share common interests and behavior by
analyzing the data collected in the web servers. This improves clustering on
web data efficiently using improved fuzzy c-means (FCM) clustering. Web usage
mining is the application of data mining techniques to web log data
repositories. It is used to find user access patterns from web access logs.
Web data clusters are formed using the MSNBC web navigation dataset.
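For reference, the standard FCM updates that the improved algorithm builds on can be sketched as follows (plain Bezdek updates on toy 1-D data; the entropy-based initialization and weighting of the proposed method are not reproduced):

```python
from math import dist  # Python 3.8+

def fcm(points, centers, m=2.0, iters=50, eps=1e-12):
    # plain fuzzy c-means (Bezdek) updates; `centers` is the initial guess,
    # which is where an entropy-based initialization would plug in
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in points:
            d = [max(dist(x, v), eps) for v in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # center update: v_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = []
        for i in range(len(u[0])):
            wts = [row[i] ** m for row in u]
            s = sum(wts)
            centers.append(tuple(sum(wt * x[dim] for wt, x in zip(wts, points)) / s
                                 for dim in range(len(points[0]))))
    return centers, u

pts = [(0.0,), (0.5,), (1.0,), (9.0,), (9.5,), (10.0,)]
centers, u = fcm(pts, [(2.0,), (8.0,)])
print([round(c[0], 1) for c in centers])
```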
|
1104.1905
|
A simulation of the Neolithic transition in Western Eurasia
|
cs.MA q-bio.PE
|
Farming and herding were introduced to Europe from the Near East and
Anatolia; there are, however, considerable arguments about the mechanisms of
this transition. Was it people who moved and displaced the indigenous
hunter-gatherer groups or admixed with them? Or was it just material and
information that moved (the "Neolithic Package", consisting of domesticated
plants and animals and the knowledge of their use)? The latter process is
commonly referred to as
cultural diffusion and the former as demic diffusion. Despite continuous and
partly combined efforts by archaeologists, anthropologists, linguists,
paleontologists and geneticists a final resolution of the debate has not yet
been reached. In the present contribution we interpret results from the Global
Land Use and technological Evolution Simulator (GLUES), a mathematical model
for regional sociocultural development embedded in the western Eurasian
geoenvironmental context during the Holocene. We demonstrate that the model is
able to realistically hindcast the expansion speed and the inhomogeneous
space-time evolution of the transition to agropastoralism in Europe. GLUES, in
contrast to models that do not resolve endogenous sociocultural dynamics, also
describes and explains how and why the Neolithic advanced in stages. In the
model analysis, we uncouple the mechanisms of migration and information
exchange. We find that (1) an indigenous form of agropastoralism could well
have arisen in certain Mediterranean landscapes, but not in Northern and
Central Europe, where it depended on imported technology and material, (2) both
demic diffusion by migration or cultural diffusion by trade may explain the
western European transition equally well, (3) [...]
|
1104.1910
|
Tails of Random Matrix Diagonal Elements: The Case of the Wishart
Inverse
|
cs.IT cond-mat.stat-mech math.IT
|
We analytically compute the large-deviation probability of a diagonal matrix
element of two cases of random matrices, namely $\beta=[\vec H^\dagger\vec
H]^{-1}_{11}$ and $\gamma=[\vec I_N+\rho\vec H^\dagger\vec H]^{-1}_{11}$, where
$\vec H$ is a $M\times N$ complex Gaussian matrix with independent entries and
$M\geq N$. These diagonal entries are related to the "signal to interference
and noise ratio" (SINR) in multi-antenna communications. They depend not only
on the eigenvalues but also on the corresponding eigenfunction weights, which
we are able to evaluate on average constrained on the value of the SINR. We
also show that beyond a lower and upper critical value of $\beta$, $\gamma$,
the maximum and minimum eigenvalues, respectively, detach from the bulk.
Responsible for this detachment is the fact that the corresponding eigenvalue
weight becomes macroscopic (i.e. O(1)), and hence exerts a strong repulsion on
the eigenvalue.
|
1104.1924
|
Rational Deployment of CSP Heuristics
|
cs.AI
|
Heuristics are crucial tools in decreasing search effort in varied fields of
AI. In order to be effective, a heuristic must be efficient to compute, as well
as provide useful information to the search algorithm. However, some well-known
heuristics which do well in reducing backtracking are so heavy that the gain of
deploying them in a search algorithm might be outweighed by their overhead.
We propose a rational metareasoning approach to decide when to deploy
heuristics, using CSP backtracking search as a case study. In particular, a
value of information approach is taken to adaptive deployment of solution-count
estimation heuristics for value ordering. Empirical results show that indeed
the proposed mechanism successfully balances the tradeoff between decreasing
backtracking and heuristic computational overhead, resulting in a significant
overall search time reduction.
|
1104.1945
|
Off-Line Handwritten Signature Retrieval using Curvelet Transforms
|
cs.CV
|
In this paper, a new method for offline handwritten signature retrieval based
on the curvelet transform is proposed. Many applications in image processing
require similarity retrieval of an image from a large collection of images. In
such cases, image indexing becomes important for efficient organization and
retrieval of images. This paper addresses this issue in the context of a
database of handwritten signature images and describes a system for similarity
retrieval. The proposed system uses a curvelet based texture features
extraction. The performance of the system has been tested with an image
database of 180 signatures. The results obtained indicate that the proposed
system is able to identify signatures with great accuracy even when a part
of a signature is missing.
|
1104.1970
|
Wet paper codes and the dual distance in steganography
|
cs.CR cs.IT math.IT
|
In 1998 Crandall introduced a method based on coding theory to secretly embed
a message in a digital support such as an image. Later Fridrich et al. improved
this method to minimize the distortion introduced by the embedding; a process
called wet paper. However, as previously emphasized in the literature, this
method can fail during the embedding step. Here we find sufficient and
necessary conditions to guarantee a successful embedding by studying the dual
distance of a linear code. Since these results are essentially of a combinatorial
nature, they can be generalized to systematic codes, a large family containing
all linear codes. We also compute the exact number of solutions and point out
the relationship between wet paper codes and orthogonal arrays.
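Crandall's coding-theoretic embedding can be illustrated with syndrome coding over the [7,4] Hamming code: three message bits are hidden in seven cover bits by changing at most one of them. The sketch below ignores wet (locked) positions, which are the paper's actual concern:

```python
# syndrome ("matrix") embedding: choose the stego vector so that H * stego = m.
# H has the binary representations of 1..7 as columns, so any nonzero target
# syndrome is reachable by flipping exactly one bit.
H = [[(col >> bit) & 1 for col in range(1, 8)] for bit in range(3)]

def syndrome(x):
    return [sum(h * xi for h, xi in zip(row, x)) % 2 for row in H]

def embed(cover, message):
    s = [a ^ b for a, b in zip(syndrome(cover), message)]
    pos = s[0] + 2 * s[1] + 4 * s[2]   # index of the column equal to s (0: nothing to do)
    stego = list(cover)
    if pos:
        stego[pos - 1] ^= 1
    return stego

cover = [1, 0, 1, 1, 0, 0, 1]
message = [1, 1, 0]
stego = embed(cover, message)
print(syndrome(stego), sum(a != b for a, b in zip(cover, stego)))
```

The receiver recovers the message simply by computing the syndrome of the stego vector.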
|
1104.1971
|
A unified framework for Schelling's model of segregation
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Schelling's model of segregation is one of the first and most influential
models in the field of social simulation. There are many variations of the
model which have been proposed and simulated over the last forty years, though
the present state of the literature on the subject is somewhat fragmented and
lacking comprehensive analytical treatments. In this article a unified
mathematical framework for Schelling's model and its many variants is
developed. This methodology is useful in two regards: firstly, it provides a
tool with which to understand the differences observed between models;
secondly, phenomena which appear in several model variations may be understood
in more depth through analytic studies of simpler versions.
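For concreteness, the basic dynamics common to all these variants can be simulated in a few lines (a toy random-relocation variant with illustrative parameters, not any specific model from the literature):

```python
import random

def like_frac(grid, n, r, c):
    # fraction of occupied Moore neighbors sharing this cell's type
    same = tot = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr or dc) and 0 <= rr < n and 0 <= cc < n and grid[rr][cc]:
                tot += 1
                same += grid[rr][cc] == grid[r][c]
    return same / tot if tot else 1.0

def mean_similarity(grid, n):
    vals = [like_frac(grid, n, r, c)
            for r in range(n) for c in range(n) if grid[r][c]]
    return sum(vals) / len(vals)

def run(n=20, vacancy=0.1, threshold=0.5, moves=10_000, seed=1):
    random.seed(seed)
    agents = [1, 2] * int(n * n * (1 - vacancy) / 2)
    cells = agents + [0] * (n * n - len(agents))
    random.shuffle(cells)
    grid = [cells[i * n:(i + 1) * n] for i in range(n)]
    before = mean_similarity(grid, n)
    empties = [(r, c) for r in range(n) for c in range(n) if not grid[r][c]]
    for _ in range(moves):
        r, c = random.randrange(n), random.randrange(n)
        # an unhappy agent relocates to a random empty cell
        if grid[r][c] and like_frac(grid, n, r, c) < threshold:
            i = random.randrange(len(empties))
            er, ec = empties[i]
            grid[er][ec], grid[r][c] = grid[r][c], 0
            empties[i] = (r, c)
    return before, mean_similarity(grid, n)

before, after = run()
print(round(before, 2), round(after, 2))
```

Even with mild individual preferences, the mean like-neighbor fraction rises well above its initial value, which is the segregation effect the model is known for.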
|
1104.1990
|
Adaptive Evolutionary Clustering
|
cs.LG stat.ML
|
In many practical applications of clustering, the objects to be clustered
evolve over time, and a clustering result is desired at each time step. In such
applications, evolutionary clustering typically outperforms traditional static
clustering by producing clustering results that reflect long-term trends while
being robust to short-term variations. Several evolutionary clustering
algorithms have recently been proposed, often by adding a temporal smoothness
penalty to the cost function of a static clustering method. In this paper, we
introduce a different approach to evolutionary clustering by accurately
tracking the time-varying proximities between objects followed by static
clustering. We present an evolutionary clustering framework that adaptively
estimates the optimal smoothing parameter using shrinkage estimation, a
statistical approach that improves a naive estimate using additional
information. The proposed framework can be used to extend a variety of static
clustering algorithms, including hierarchical, k-means, and spectral
clustering, into evolutionary clustering algorithms. Experiments on synthetic
and real data sets indicate that the proposed framework outperforms static
clustering and existing evolutionary clustering algorithms in many scenarios.
|
1104.2018
|
Efficient Learning of Generalized Linear and Single Index Models with
Isotonic Regression
|
cs.AI cs.LG stat.ML
|
Generalized Linear Models (GLMs) and Single Index Models (SIMs) provide
powerful generalizations of linear regression, where the target variable is
assumed to be a (possibly unknown) 1-dimensional function of a linear
predictor. In general, these problems entail non-convex estimation procedures,
and, in practice, iterative local search heuristics are often used. Kalai and
Sastry (2009) recently provided the first provably efficient method for
learning SIMs and GLMs, under the assumptions that the data are in fact
generated under a GLM and under certain monotonicity and Lipschitz constraints.
However, to obtain provable performance, the method requires a fresh sample
every iteration. In this paper, we provide algorithms for learning GLMs and
SIMs, which are both computationally and statistically efficient. We also
provide an empirical study, demonstrating their feasibility in practice.
|
1104.2026
|
Collaboration in Social Networks
|
physics.soc-ph cs.GT cs.SI
|
The very notion of social network implies that linked individuals interact
repeatedly with each other. This allows them not only to learn successful
strategies and adapt to them, but also to condition their own behavior on the
behavior of others, in a strategic forward looking manner. Game theory of
repeated games shows that these circumstances are conducive to the emergence of
collaboration in simple games of two players. We investigate the extension of
this concept to the case where players are engaged in a local contribution game
and show that rationality and credibility of threats identify a class of Nash
equilibria -- that we call "collaborative equilibria" -- that have a precise
interpretation in terms of sub-graphs of the social network. For large network
games, the number of such equilibria is exponentially large in the number of
players. When incentives to defect are small, equilibria are supported by local
structures whereas when incentives exceed a threshold they acquire a non-local
nature, which requires a "critical mass" of more than a given fraction of the
players to collaborate. Therefore, when incentives are high, an individual
deviation typically causes the collapse of collaboration across the whole
system. At the same time, higher incentives to defect typically support
equilibria with a higher density of collaborators. The resulting picture
conforms with several results in sociology and in the experimental literature
on game theory, such as the prevalence of collaboration in denser groups and in
the structural hubs of sparse networks.
|
1104.2034
|
Materials to the Russian-Bulgarian Comparative Dictionary "EAD"
|
cs.CL
|
This article presents a fragment of a new comparative dictionary, "A
comparative dictionary of names of expansive action in the Russian and
Bulgarian languages". The main features of the new web-based comparative
dictionary are presented, the principles of its formation are shown, and the
primary links between word matches are classified. The principal difference
between translation dictionaries and the model of double comparison is also
shown. A classification scheme for the pages is proposed, and new concepts and
keywords are introduced. A working prototype of the dictionary with a few key
pages is published, and a broad debate is opened on whether this prototype can
become a version of a new-generation Russian-Bulgarian comparative dictionary.
|
1104.2049
|
Optimal Channel Training in Uplink Network MIMO Systems
|
cs.IT math.IT
|
We consider a multi-cell frequency-selective fading uplink channel (network
MIMO) from K single-antenna user terminals (UTs) to B cooperative base stations
(BSs) with M antennas each. The BSs, assumed to be oblivious of the applied
codebooks, forward compressed versions of their observations to a central
station (CS) via capacity limited backhaul links. The CS jointly decodes the
messages from all UTs. Since the BSs and the CS are assumed to have no prior
channel state information (CSI), the channel needs to be estimated during its
coherence time. Based on a lower bound of the ergodic mutual information, we
determine the optimal fraction of the coherence time used for channel training,
taking different path losses between the UTs and the BSs into account. We then
study how the optimal training length is impacted by the backhaul capacity.
Although our analytical results are based on a large system limit, we show by
simulations that they provide very accurate approximations for even small
system dimensions.
|
1104.2059
|
Template-based matching using weight maps
|
cs.CV
|
Template matching is one of the most prevalent pattern recognition methods
worldwide. It has found uses in most visual concept detection fields. In this
work, we investigate methods for improving template matching by adjusting the
weights of different regions of the template. We compare several weight maps
and test the methods using the FERET face test set in the context of human eye
detection.
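The core idea reduces to scoring each candidate offset with a weighted sum of squared differences; a toy sketch (the weight map and data below are illustrative, not the FERET setup):

```python
def weighted_match(image, template, weights):
    # score every offset with a weighted sum of squared differences (SSD)
    # and return the offset with the smallest score
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = sum(weights[i][j] * (image[r + i][c + j] - template[i][j]) ** 2
                        for i in range(h) for j in range(w))
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

# synthetic image with the template planted at row 2, column 3
template = [[9, 1], [1, 9]]
weights = [[1.0, 0.5], [0.5, 1.0]]   # toy weight map emphasizing the diagonal
image = [[0] * 8 for _ in range(6)]
for i in range(2):
    for j in range(2):
        image[2 + i][3 + j] = template[i][j]
pos = weighted_match(image, template, weights)
print(pos)  # -> (2, 3)
```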
|
1104.2069
|
GEOMIR2K9 - A Similar Scene Finder
|
cs.CV
|
The main goal of the GEOMIR2K9 project is to create a software program that
is able to find similar scenic images clustered by geographical location and
sorted by similarity based only on their visual content. The user should be
able to input a query image; based on this query image, the program should
find relevant visual content and present it to the user in a meaningful way.
Technically the goal for the GEOMIR2K9 project is twofold. The first of these
two goals is to create a basic low level visual information retrieval system.
This includes feature extraction, post processing of the feature data and
classification/clustering based on similarity with a strong focus on scenic
images. The second goal of this project is to provide the user with a novel and
suitable interface and visualization method so that the user may interact with
the retrieved images in a natural and meaningful way.
|
1104.2079
|
Optimizing XML querying using type-based document projection
|
cs.DB
|
XML data projection (or pruning) is a natural optimization for main memory
query engines: given a query Q over a document D, the subtrees of D that are
not necessary to evaluate Q are pruned, thus producing a smaller document D';
the query Q is then executed on D', thus avoiding allocating and processing
nodes that will never be reached by Q. In this article, we propose a new
approach, based on types, that greatly improves current solutions. Besides
providing comparable or greater precision and far lower pruning overhead, our
solution, unlike current approaches, takes into account backward axes and
predicates, and can be applied to multiple queries rather than just to single
ones. A side contribution is a new type system for XPath able to handle
backward axes. The soundness of our approach is formally proved. Furthermore,
we prove that the approach is also complete (i.e., yields the best possible
type-driven pruning) for a relevant class of queries and schemas. We further
validate our approach using the XMark and XPathMark benchmarks and show that
pruning not only improves the main memory query engine's performances (as
expected) but also those of state of the art native XML databases.
|
1104.2086
|
A Universal Part-of-Speech Tagset
|
cs.CL
|
To facilitate future research in unsupervised induction of syntactic
structure and to standardize best-practices, we propose a tagset that consists
of twelve universal part-of-speech categories. In addition to the tagset, we
develop a mapping from 25 different treebank tagsets to this universal set. As
a result, when combined with the original treebank data, this universal tagset
and mapping produce a dataset consisting of common parts-of-speech for 22
different languages. We highlight the use of this resource via two experiments,
including one that reports competitive accuracies for unsupervised grammar
induction without gold standard part-of-speech tags.
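The twelve universal categories are NOUN, VERB, ADJ, ADV, PRON, DET, ADP, NUM, CONJ, PRT, '.' (punctuation), and X. Applying a mapping is then a token-wise lookup; the fragment below covers only a handful of Penn Treebank tags and is an illustrative toy, not the released mapping:

```python
# illustrative fragment of a fine-grained-to-universal mapping for the
# Penn Treebank tagset (the released resource maps 25 full treebank tagsets)
PTB_TO_UNIVERSAL = {
    "NN": "NOUN", "NNS": "NOUN", "NNP": "NOUN", "NNPS": "NOUN",
    "VB": "VERB", "VBD": "VERB", "VBG": "VERB", "VBN": "VERB",
    "VBP": "VERB", "VBZ": "VERB",
    "JJ": "ADJ", "RB": "ADV", "PRP": "PRON", "DT": "DET",
    "IN": "ADP", "CD": "NUM", "CC": "CONJ", "RP": "PRT",
    ".": ".", ",": ".",
}

def to_universal(tagged):
    # unknown fine-grained tags fall back to the catch-all category X
    return [(w, PTB_TO_UNIVERSAL.get(t, "X")) for w, t in tagged]

print(to_universal([("dogs", "NNS"), ("bark", "VBP"), ("loudly", "RB")]))
```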
|
1104.2097
|
PAC learnability versus VC dimension: a footnote to a basic result of
statistical learning
|
cs.LG
|
A fundamental result of statistical learning theory states that a concept
class is PAC learnable if and only if it is a uniform Glivenko-Cantelli class
if and only if the VC dimension of the class is finite. However, the theorem is
only valid under special assumptions of measurability of the class, in which
case the PAC learnability even becomes consistent. Otherwise, there is a
classical example, constructed under the Continuum Hypothesis by Dudley and
Durst and further adapted by Blumer, Ehrenfeucht, Haussler, and Warmuth, of a
concept class of VC dimension one which is neither uniform Glivenko-Cantelli
nor consistently PAC learnable. We show that, rather surprisingly, under an
additional set-theoretic hypothesis which is much milder than the Continuum
Hypothesis (Martin's Axiom), PAC learnability is equivalent to finite VC
dimension for every concept class.
|
1104.2108
|
Stability of Modified-CS and LS-CS for Recursive Reconstruction of
Sparse Signal Sequences
|
cs.IT math.IT
|
In this work, we obtain sufficient conditions for the "stability" of our
recently proposed algorithms, Least Squares Compressive Sensing residual
(LS-CS) and modified-CS, for recursively reconstructing sparse signal sequences
from noisy measurements. By "stability" we mean that the number of misses from
the current support estimate and the number of extras in it remain bounded by a
time-invariant value at all times. We show that, for a signal model with fixed
signal power and support set size; support set changes allowed at every time;
and gradual coefficient magnitude increase/decrease, "stability" holds under
mild assumptions -- bounded noise, high enough minimum nonzero coefficient
magnitude increase rate, and large enough number of measurements at every time.
A direct corollary is that the reconstruction error is also bounded by a
time-invariant value at all times. If the support set of the sparse signal
sequence changes slowly over time, our results hold under weaker assumptions
than what simple compressive sensing (CS) needs for the same error bound. Also,
our support error bounds are small compared to the support size. Our discussion
is backed up by Monte Carlo simulation based comparisons.
|
1104.2110
|
Deterministic Real-time Thread Scheduling
|
cs.OS cs.SY
|
Race condition is a timing sensitive problem. A significant source of timing
variation comes from nondeterministic hardware interactions such as cache
misses. While data race detectors and model checkers can check races, the
enormous state space of complex software makes it difficult to identify all of
the races and those residual implementation errors still remain a big
challenge. In this paper, we propose deterministic real-time scheduling methods
to address scheduling nondeterminism in uniprocessor systems. The main idea is
to use timing-insensitive deterministic events, e.g., an instruction counter, in
conjunction with a real-time clock to schedule threads. By introducing the
concept of Worst Case Executable Instructions (WCEI), we guarantee both
determinism and real-time performance.
|
1104.2112
|
Optimal Asymptotic Entrainment of Phase-Reduced Oscillators
|
nlin.CD cs.SY math.DS math.OC
|
We derive optimal periodic controls for entrainment of a self-driven
oscillator to a desired frequency. The alternative objectives of minimizing
power and maximizing frequency range of entrainment are considered. A state
space representation of the oscillator is reduced to a linearized phase model,
and the optimal periodic control is computed from the phase response curve
using formal averaging and the calculus of variations. Computational methods
are used to calculate the periodic orbit and the phase response curve, and a
numerical method for approximating the optimal controls is introduced. Our
method is applied to asymptotically control the period of spiking neural
oscillators modeled using the Hodgkin-Huxley equations. This example
illustrates the optimality of entrainment controls derived using phase models
when applied to the original state space system.
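The flavor of the phase-model reduction can be conveyed by the averaged Adler equation $\dot\varphi = \Delta\omega - \varepsilon\sin\varphi$, whose phase difference locks whenever $|\Delta\omega|\le\varepsilon$ (a standard textbook reduction, not the paper's optimal control computation):

```python
from math import sin

def adler(dw, eps, phi0=0.0, dt=1e-3, steps=200_000):
    # Euler integration of d(phi)/dt = dw - eps*sin(phi)
    phi = phi0
    for _ in range(steps):
        phi += dt * (dw - eps * sin(phi))
    return phi

# detuning 0.3 < forcing strength 1.0, so the phase locks at sin(phi*) = 0.3
phi = adler(dw=0.3, eps=1.0)
print(round(sin(phi), 3))  # -> 0.3
```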
|
1104.2116
|
Statistical Beamforming on the Grassmann Manifold for the Two-User
Broadcast Channel
|
cs.IT math.IT math.OC
|
A Rayleigh fading spatially correlated broadcast setting with M = 2 antennas
at the transmitter and two users (each with a single antenna) is considered. It
is assumed that the users have perfect channel information about their links
whereas the transmitter has only statistical information of each user's link
(covariance matrix of the vector channel). A low-complexity linear beamforming
strategy that allocates equal power and one spatial eigen-mode to each user is
employed at the transmitter. Beamforming vectors on the Grassmann manifold that
depend only on statistical information are to be designed at the transmitter to
maximize the ergodic sum-rate delivered to the two users. Towards this goal,
the beamforming vectors are first fixed and a closed-form expression is
obtained for the ergodic sum-rate in terms of the covariance matrices of the
links. This expression is non-convex in the beamforming vectors ensuring that
the classical Lagrange multiplier technique is not applicable. Despite this
difficulty, the optimal solution to this problem is shown to be the solution to
the maximization of an appropriately-defined average signal-to-interference and
noise ratio (SINR) metric for each user. This solution is the dominant
generalized eigenvector of a pair of positive-definite matrices where the first
matrix is the covariance matrix of the forward link and the second is an
appropriately-designed "effective" interference covariance matrix. In this
sense, our work is a generalization of optimal signalling along the dominant
eigen-mode of the transmit covariance matrix in the single-user case. Finally,
the ergodic sum-rate for the general broadcast setting with M antennas at the
transmitter and M-users (each with a single antenna) is obtained in terms of
the covariance matrices of the links and the beamforming vectors.
|
1104.2124
|
Is a probabilistic modeling really useful in financial engineering? -
A-t-on vraiment besoin d'un mod\`ele probabiliste en ing\'enierie
financi\`ere ?
|
q-fin.CP cs.CE q-fin.PM q-fin.RM
|
A new standpoint on financial time series, without the use of any
mathematical model and of probabilistic tools, yields not only a rigorous
approach of trends and volatility, but also efficient calculations which were
already successfully applied in automatic control and in signal processing. It
is based on a theorem due to P. Cartier and Y. Perrin, which was published in
1995. The above results are employed for sketching a dynamical portfolio and
strategy management, without any global optimization technique. Numerous
computer simulations are presented.
|
1104.2156
|
Structural Analysis of Network Traffic Matrix via Relaxed Principal
Component Pursuit
|
cs.NI cs.IT cs.PF math.IT
|
The network traffic matrix is widely used in network operation and
management. It is therefore of crucial importance to analyze the components and
the structure of the network traffic matrix, for which several mathematical
approaches such as Principal Component Analysis (PCA) were proposed. In this
paper, we first argue that PCA performs poorly for analyzing a traffic matrix
that is polluted by large-volume anomalies, and then propose a new
decomposition model for the network traffic matrix. According to this model, we
carry out the structural analysis by decomposing the network traffic matrix
into three sub-matrices, namely, the deterministic traffic, the anomaly traffic
and the noise traffic matrix, which is similar to the Robust Principal
Component Analysis (RPCA) problem previously studied in [13]. Based on the
Relaxed Principal Component Pursuit (Relaxed PCP) method and the Accelerated
Proximal Gradient (APG) algorithm, we present an iterative approach for
decomposing a traffic matrix, and demonstrate its efficiency and flexibility by
experimental results. Finally, we further discuss several features of the
deterministic and noise traffic. Our study develops a novel method for the
problem of structural analysis of the traffic matrix, which is robust against
pollution of large volume anomalies.
|
1104.2171
|
From a Modified Ambrosio-Tortorelli to a Randomized Part Hierarchy Tree
|
cs.CV
|
We demonstrate the possibility of coding parts, features that are higher
level than boundaries, using a modified AT field after augmenting the
interaction term of the AT energy with a non-local term and weakening the
separation into boundary/not-boundary phases. The iteratively extracted parts
using the level curves with double point singularities are organized as a
proper binary tree. Inconsistencies due to non-generic configurations for level
curves as well as due to visual changes such as occlusion are successfully
handled once the tree is endowed with a probabilistic structure. The work is a
step in establishing the AT function as a bridge between low and high level
visual processing.
|
1104.2175
|
Extracting Parts of 2D Shapes Using Local and Global Interactions
Simultaneously
|
cs.CV
|
Perception research provides strong evidence in favor of part based
representation of shapes in human visual system. Despite considerable
differences among different theories in terms of how part boundaries are found,
there is substantial agreement that the process depends on many local and
global geometric factors. This poses an important challenge from the
computational point of view. In the first part of the chapter, I present a
novel decomposition method by taking both local and global interactions within
the shape domain into account. At the top of the partitioning hierarchy, the
shape gets split into two parts capturing, respectively, the gross structure
and the peripheral structure. The gross structure may be conceived as the least
deformable part of the shape which remains stable under visual transformations.
The peripheral structure includes limbs, protrusions, and boundary texture.
Such a separation is in accord with the behavior of the artists who start with
a gross shape and enrich it with details. The method is particularly
interesting from the computational point of view as it does not resort to any
geometric notions (e.g. curvature, convexity) explicitly. In the second part of
the chapter, I relate the new method to PDE based shape representation schemes.
|
1104.2187
|
A Generalized Continuous Model for Random Markets
|
q-fin.GN cs.MA nlin.AO
|
A generalized continuous economic model is proposed for random markets. In
this model, agents interact by pairs and exchange their money in a random way.
A parameter controls the effectiveness of the transactions between the agents.
We show in a rigorous way that this type of market reaches its asymptotic
equilibrium at the exponential wealth distribution.
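A minimal simulation of such a pairwise-exchange market can be sketched as follows (an illustrative sketch with assumed conventions: the saving parameter `lam` stands in for the paper's transaction-effectiveness parameter, and `lam = 0` corresponds to fully random redistribution of the pooled money):

```python
import random
import statistics

def simulate_market(n_agents=2000, n_exchanges=100_000, lam=0.0, seed=42):
    """Pairwise random-exchange market: each trade pools the two agents'
    money and splits the non-saved fraction (1 - lam) uniformly at random.
    lam = 0 is the fully random case that equilibrates to an exponential
    wealth distribution."""
    rng = random.Random(seed)
    wealth = [1.0] * n_agents
    for _ in range(n_exchanges):
        i, j = rng.sample(range(n_agents), 2)
        pool = (1.0 - lam) * (wealth[i] + wealth[j])
        eps = rng.random()
        wealth[i] = lam * wealth[i] + eps * pool
        wealth[j] = lam * wealth[j] + (1.0 - eps) * pool
    return wealth

w = simulate_market()
# Every trade conserves the pair's total money, so total wealth is constant.
print(abs(sum(w) - 2000.0) < 1e-6)
# The equilibrium is right-skewed, as an exponential law predicts
# (its median is ln(2) times its mean, which is below the mean).
print(statistics.median(w) < statistics.mean(w))
```

Each trade conserves money by construction, so only the shape of the distribution evolves; the skewness check is a cheap signature of the exponential equilibrium.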
|
1104.2196
|
Space and Time as a Primary Classification Criterion for Information
Retrieval in Distributed Social Networking
|
cs.IR cs.SI physics.soc-ph
|
We discuss in a compact way how the implicit relations between spatiotemporal
relatedness of information items, spatiotemporal relatedness of users, social
relatedness of users and semantic relatedness of information items may be
exploited for an information retrieval architecture that operates along the
lines of human ways of searching. The decentralized and agent oriented
architecture mirrors emerging trends such as upcoming mobile and decentralized
social networking as a new paradigm in social computing and is targeted at
satisfying broader and more subtly interlinked information demands beyond
immediate information needs, which can be readily satisfied with current IR
services. We briefly discuss why using spatio-temporal references as the
primary information criterion implicitly conserves the other relations and is
thus suitable for such an architecture. Finally, we point to results from a
large evaluation study using Wikipedia articles.
|
1104.2215
|
Sparse Representation of White Gaussian Noise with Application to
L0-Norm Decoding in Noisy Compressed Sensing
|
cs.IT math.IT
|
The achievable and converse regions for sparse representation of white
Gaussian noise based on an overcomplete dictionary are derived in the limit of
large systems. Furthermore, the marginal distribution of such sparse
representations is also inferred. The results are obtained via the Replica
method which stems from statistical mechanics. A direct outcome of these
results is the introduction of a sharp threshold for $\ell_{0}$-norm decoding in
noisy compressed sensing, and its mean-square error for underdetermined
Gaussian vector channels.
|
1104.2239
|
Experimental Investigation of Forecasting Methods Based on Universal
Measures
|
cs.IT math.IT physics.data-an
|
We describe and experimentally investigate a method to construct forecasting
algorithms for stationary and ergodic processes based on universal measures (or
so-called universal data compressors). Using some geophysical and economic
time series as examples, we show that the precision of the predictions thus
obtained is higher than that of known methods.
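The compressor-as-predictor idea can be sketched with `zlib` standing in for a universal data compressor (an illustrative stand-in, not the measures or time series used in the experiments): score each candidate continuation by the compressed length of the history extended with it, and prefer the shorter one.

```python
import zlib

def continuation_score(history: bytes, candidate: bytes) -> int:
    """Compressed length of the history extended by a candidate continuation.
    Under a compression-based (universal-measure) predictor, a shorter
    compressed length means the candidate is judged more probable."""
    return len(zlib.compress(history + candidate, 9))

history = b"abc" * 300          # a strongly regular "time series"
consistent = b"abc" * 10        # continues the established pattern
inconsistent = b"xyz" * 10      # breaks the pattern with new symbols

s_good = continuation_score(history, consistent)
s_bad = continuation_score(history, inconsistent)
print(s_good < s_bad)  # the compressor favors the pattern-consistent future
```

The consistent continuation merely extends existing back-references, while the inconsistent one forces the compressor to encode new literals, so it compresses worse.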
|
1104.2284
|
Preprocessing: A Prerequisite for Discovering Patterns in WUM Process
|
cs.DB
|
Web log data is usually diverse and voluminous. This data must be assembled
into a consistent, integrated and comprehensive view, in order to be used for
pattern discovery. Without properly cleaning, transforming and structuring the
data prior to the analysis, one cannot expect to find meaningful patterns. As
in most data mining applications, data preprocessing involves removing and
filtering redundant and irrelevant data, removing noise, transforming and
resolving any inconsistencies. In this paper, a complete preprocessing
methodology is proposed, comprising merging, data cleaning, user/session
identification, and data formatting and summarization activities, which
improves the quality of the data by reducing its quantity. To validate the
efficiency of the proposed methodology, several experiments are conducted; the
results show that it reduces the size of Web access log files down to 73-82%
of the initial size and offers richer, structured logs for the further stages
of Web Usage Mining (WUM). Preprocessing of raw data in the WUM process is
thus the central theme of this paper.
|
1104.2285
|
Elimination of Specular reflection and Identification of ROI: The First
Step in Automated Detection of Cervical Cancer using Digital Colposcopy
|
cs.CV physics.med-ph
|
Cervical Cancer is one of the most common forms of cancer in women worldwide.
Most cases of cervical cancer can be prevented through screening programs aimed
at detecting precancerous lesions. During Digital Colposcopy, Specular
Reflections (SR) appear as bright spots heavily saturated with white light.
These occur due to the presence of moisture on the uneven cervix surface,
which acts like a mirror, reflecting light from the illumination source. Apart from
camouflaging the actual features, the SR also affects subsequent segmentation
routines and hence must be removed. Our novel technique eliminates the SR and
makes the colposcopic images (cervigram) ready for segmentation algorithms. The
cervix region occupies about half of the cervigram image. Other parts of the
image contain irrelevant information, such as equipment, frames, text and
non-cervix tissues. This irrelevant information can confuse automatic
identification of the tissues within the cervix. The first step is, therefore,
focusing on the cervical borders, so that we have a geometric boundary on the
relevant image area. We have proposed a modified k-means clustering
algorithm to evaluate the region of interest.
|
1104.2355
|
Cooperative Spectrum Sensing for Amplify-and-Forward Cognitive Networks
|
cs.IT math.IT
|
We develop a framework for spectrum sensing in cooperative
amplify-and-forward cognitive radio networks. We consider a stochastic model
where relays are assigned in cognitive radio networks to transmit the primary
user's signal to a cognitive Secondary Base Station (SBS). We develop the
Bayesian optimal decision rule under various scenarios of Channel State
Information (CSI) varying from perfect to imperfect CSI. In order to obtain the
optimal decision rule based on a Likelihood Ratio Test (LRT), the marginal
likelihood under each hypothesis relating to presence or absence of
transmission needs to be evaluated pointwise. However, in some cases the
evaluation of the LRT cannot be performed analytically due to the
intractability of the multi-dimensional integrals involved. In other cases, the
distribution of the test statistic cannot be obtained exactly. To circumvent
these difficulties we design two algorithms to approximate the marginal
likelihood, and obtain the decision rule. The first is based on Gaussian
Approximation where we quantify the accuracy of the approximation via a
multivariate version of the Berry-Esseen bound. The second algorithm is based
on Laplace approximation for the marginal likelihood, which results in a
non-convex optimisation problem that is solved efficiently via the Bayesian
Expectation-Maximisation method. We also utilise a Laguerre series expansion to
approximate the distribution of the test statistic in cases where its
distribution cannot be derived exactly. Performance is evaluated via analytic
bounds and compared to numerical simulations.
|
1104.2364
|
Epidemic spreading with immunization rate on complex networks
|
physics.soc-ph cs.SI
|
We investigate the spread of diseases, computer viruses or information on
complex networks and also immunization strategies to prevent or control the
spread. When an entire population cannot be immunized and the effect of
immunization is not perfect, targeted immunization with a partial
immunization rate is needed. Under such circumstances we calculate epidemic thresholds
for the SIR and SIS epidemic models. It is shown that, in scale-free networks,
the targeted immunization is effective only if the immunization rate is equal
to one. We analyze here epidemic spreading on directed complex networks, but
similar results are also valid for undirected ones.
|
1104.2373
|
Hybrid Deterministic-Stochastic Methods for Data Fitting
|
cs.NA cs.SY math.OC stat.ML
|
Many structured data-fitting applications require the solution of an
optimization problem involving a sum over a potentially large number of
measurements. Incremental gradient algorithms offer inexpensive iterations by
sampling a subset of the terms in the sum. These methods can make great
progress initially, but often slow as they approach a solution. In contrast,
full-gradient methods achieve steady convergence at the expense of evaluating
the full objective and gradient on each iteration. We explore hybrid methods
that exhibit the benefits of both approaches. Rate-of-convergence analysis
shows that by controlling the sample size in an incremental gradient algorithm,
it is possible to maintain the steady convergence rates of full-gradient
methods. We detail a practical quasi-Newton implementation based on this
approach. Numerical experiments illustrate its potential benefits.
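A toy version of the controlled-sample-size idea can be sketched on a one-dimensional least-squares problem (a hand-rolled illustration with an assumed geometric growth schedule and step size, not the authors' quasi-Newton implementation): early iterations use cheap small-sample gradients, and the sample grows toward the full batch to recover steady convergence.

```python
import random

random.seed(0)
# Toy data: b is roughly 3*a plus noise; the least-squares slope is closed form.
a = [random.random() for _ in range(1000)]
b = [3.0 * ai + 0.1 * (random.random() - 0.5) for ai in a]
x_star = sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)

def hybrid_gradient(a, b, iters=400, step=0.5):
    """Incremental gradient with a growing sample size: early iterations
    average the gradient over a small random subset of the terms, later
    ones approach the full sum, trading cheap initial progress for the
    steady convergence of full-gradient descent."""
    n, x = len(a), 0.0
    sample = 10
    for _ in range(iters):
        idx = random.sample(range(n), sample)
        grad = sum(2.0 * a[i] * (a[i] * x - b[i]) for i in idx) / sample
        x -= step * grad
        sample = min(n, int(sample * 1.1) + 1)  # geometric growth of the sample
    return x

x = hybrid_gradient(a, b)
print(abs(x - x_star) < 0.05)  # matches the closed-form least-squares slope
```

Once the sample reaches the full data set, the iteration becomes plain full-gradient descent, so the final iterate contracts deterministically toward the exact minimizer.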
|
1104.2444
|
A Simplified and Improved Free-Variable Framework for Hilbert's epsilon
as an Operator of Indefinite Committed Choice
|
cs.AI math.LO
|
Free variables occur frequently in mathematics and computer science with ad
hoc and altering semantics. We present the most recent version of our
free-variable framework for two-valued logics with properly improved
functionality, but only two kinds of free variables left (instead of three):
implicitly universally and implicitly existentially quantified ones, now simply
called "free atoms" and "free variables", respectively. The quantificational
expressiveness and the problem-solving facilities of our framework exceed
standard first-order and even higher-order modal logics, and directly support
Fermat's descente infinie. With the improved version of our framework, we can
now also model Henkin quantification, using neither quantifiers (binders) nor
raising (Skolemization). We propose a new semantics for Hilbert's epsilon as a
choice operator with the following features: We avoid overspecification (such
as right-uniqueness), but admit indefinite choice, committed choice, and
classical logics. Moreover, our semantics for the epsilon supports reductive
proof search optimally.
|
1104.2541
|
Kernels for Global Constraints
|
cs.AI cs.DS
|
Bessiere et al. (AAAI'08) showed that several intractable global constraints
can be efficiently propagated when certain natural problem parameters are
small. In particular, the complete propagation of a global constraint is
fixed-parameter tractable in k - the number of holes in domains - whenever
bound consistency can be enforced in polynomial time; this applies to the
global constraints AtMost-NValue and Extended Global Cardinality (EGC).
In this paper we extend this line of research and introduce the concept of
reduction to a problem kernel, a key concept of parameterized complexity, to
the field of global constraints. In particular, we show that the consistency
problem for AtMost-NValue constraints admits a linear time reduction to an
equivalent instance on O(k^2) variables and domain values. This small kernel
can be used to speed up the complete propagation of NValue constraints. We
contrast this result by showing that the consistency problem for EGC
constraints does not admit a reduction to a polynomial problem kernel unless
the polynomial hierarchy collapses.
|
1104.2547
|
C-Codes: Cyclic Lowest-Density MDS Array Codes Constructed Using
Starters for RAID 6
|
cs.IT cs.DM math.CO math.IT
|
The distance-3 cyclic lowest-density MDS array code (called the C-Code) is a
good candidate for RAID 6 because of its optimal storage efficiency, optimal
update complexity, optimal length, and cyclic symmetry. In this paper, the
underlying connections between C-Codes (or quasi-C-Codes) and starters in group
theory are revealed. It is shown that each C-Code (or quasi-C-Code) of length
$2n$ can be constructed using an even starter (or even multi-starter) in
$(Z_{2n},+)$. It is also shown that each C-Code (or quasi-C-Code) has a twin
C-Code (or quasi-C-Code). Then, four infinite families (three of which are new)
of C-Codes of length $p-1$ are constructed, where $p$ is a prime. Besides the
family of length $p-1$, C-Codes for some sporadic even lengths are also
presented. Even so, there are still some even lengths (such as 8) for which
C-Codes do not exist. To address this limitation, two infinite families (one of
which is new) of quasi-C-Codes of length $2(p-1)$ are constructed for these
even lengths.
|
1104.2580
|
Hypothesize and Bound: A Computational Focus of Attention Mechanism for
Simultaneous N-D Segmentation, Pose Estimation and Classification Using Shape
Priors
|
cs.CV cs.CG cs.GR cs.LG
|
Given the ever increasing bandwidth of the visual information available to
many intelligent systems, it is becoming essential to endow them with a sense
of what is worthy of their attention and what can be safely disregarded. This
article presents a general mathematical framework to efficiently allocate the
available computational resources to process the parts of the input that are
relevant to solve a given perceptual problem. By this we mean to find the
hypothesis H (i.e., the state of the world) that maximizes a function L(H),
representing how well each hypothesis "explains" the input. Given the large
bandwidth of the sensory input, fully evaluating L(H) for each hypothesis H is
computationally infeasible (e.g., because it would imply checking a large
number of pixels). To address this problem we propose a mathematical framework
with two key ingredients. The first one is a Bounding Mechanism (BM) to compute
lower and upper bounds of L(H), for a given computational budget. These bounds
are much cheaper to compute than L(H) itself, can be refined at any time by
increasing the budget allocated to a hypothesis, and are frequently sufficient to
discard a hypothesis. To compute these bounds, we develop a novel theory of
shapes and shape priors. The second ingredient is a Focus of Attention
Mechanism (FoAM) to select which hypothesis' bounds should be refined next,
with the goal of discarding non-optimal hypotheses with the least amount of
computation. The proposed framework: 1) is very efficient since most hypotheses
are discarded with minimal computation; 2) is parallelizable; 3) is guaranteed
to find the globally optimal hypothesis; and 4) has a running time that depends on the
problem at hand, not on the bandwidth of the input. We instantiate the proposed
framework for the problem of simultaneously estimating the class, pose, and a
noiseless version of a 2D shape in a 2D image.
|
1104.2581
|
Approximate MIMO Iterative Processing with Adjustable Complexity
Requirements
|
cs.IT math.IT
|
Targeting always the best achievable bit error rate (BER) performance in
iterative receivers operating over multiple-input multiple-output (MIMO)
channels may result in significant waste of resources, especially when the
achievable BER is orders of magnitude better than the target performance (e.g.,
under good channel conditions and at high signal-to-noise ratio (SNR)). In
contrast to typical iterative schemes, a practical iterative decoding
framework that approximates the soft-information exchange is proposed, which
allows reduced-complexity sphere and channel decoding, adjustable to the
transmission conditions and the required bit error rate. With the proposed
approximate soft-information exchange, the performance of exact
soft-information exchange can still be reached with significant complexity gains.
|
1104.2599
|
Streaming Tree Transducers
|
cs.FL cs.DB
|
Theory of tree transducers provides a foundation for understanding
expressiveness and complexity of analysis problems for specification languages
for transforming hierarchically structured data such as XML documents. We
introduce streaming tree transducers as an analyzable, executable, and
expressive model for transforming unranked ordered trees in a single pass.
Given a linear encoding of the input tree, the transducer makes a single
left-to-right pass through the input, and computes the output in linear time
using a finite-state control, a visibly pushdown stack, and a finite number of
variables that store output chunks that can be combined using the operations of
string-concatenation and tree-insertion. We prove that the expressiveness of
the model coincides with transductions definable using monadic second-order
logic (MSO). Existing models of tree transducers either cannot implement all
MSO-definable transformations, or require regular look-ahead that prohibits a
single-pass implementation. We show that a variety of analysis problems, such
as type-checking and checking functional equivalence, are solvable for our model.
|
1104.2606
|
Statistical mechanics of the international trade network
|
q-fin.GN cs.SI physics.data-an physics.soc-ph
|
Analyzing real data on international trade covering the time interval
1950-2000, we show that in each year over the analyzed period the network is a
typical representative of the ensemble of maximally random weighted networks,
whose directed connections (bilateral trade volumes) are only characterized by
the product of the trading countries' GDPs. This means that the time evolution
of this network may be considered a continuous sequence of equilibrium states,
i.e. a quasi-static process. This, in turn, allows one to apply linear
response theory to make (and also verify) simple predictions about the network.
In particular, we show that bilateral trade fulfills a fluctuation-response
theorem, which states that the average relative change in import (export)
between two countries is the sum of the relative changes in their GDPs. Yearly
changes in trade volumes confirm that the theorem is valid.
|
1104.2644
|
Idealized Dynamic Population Sizing for Uniformly Scaled Problems
|
cs.NE
|
This paper explores an idealized dynamic population sizing strategy for
solving additive decomposable problems of uniform scale. The method is designed
on top of the foundations of existing population sizing theory for this class
of problems, and is carefully compared with an optimal fixed population sized
genetic algorithm. The resulting strategy should be close to a lower bound in
terms of what can be achieved, performance-wise, by self-adjusting population
sizing algorithms for this class of problems.
|
1104.2679
|
Convex inner approximations of nonconvex semialgebraic sets applied to
fixed-order controller design
|
math.OC cs.SY
|
We describe an elementary algorithm to build convex inner approximations of
nonconvex sets. Both input and output sets are basic semialgebraic sets given
as lists of defining multivariate polynomials. Even though no optimality
guarantees can be given (e.g. in terms of volume maximization for bounded
sets), the algorithm is designed to preserve convex boundaries as much as
possible, while removing regions with concave boundaries. In particular, the
algorithm leaves invariant a given convex set. The algorithm is based on
Gloptipoly 3, a public-domain Matlab package solving nonconvex polynomial
optimization problems with the help of convex semidefinite programming
(optimization over linear matrix inequalities, or LMIs). We illustrate how the
algorithm can be used to design fixed-order controllers for linear systems,
following a polynomial approach.
|
1104.2689
|
Viscosity solutions of systems of PDEs with interconnected obstacles and
Multi modes switching problems
|
math.OC cs.SY
|
This paper deals with existence and uniqueness, in viscosity sense, of a
solution for a system of m variational partial differential inequalities with
inter-connected obstacles. A particular case of this system is the
deterministic version of the Verification Theorem of the Markovian optimal
m-states switching problem. The switching cost functions are arbitrary. This
problem is connected with the valuation of a power plant in the energy market.
The main tool is the notion of systems of reflected BSDEs with oblique
reflection.
|
1104.2721
|
Optimal Cell Towers Distribution by using Spatial Mining and Geographic
Information System
|
cs.DB
|
The appearance of wireless communication is dramatically changing our life.
Mobile telecommunications emerged as a technological marvel allowing for access
to personal and other services, devices, computation and communication, in any
place and at any time through effortless plug and play. Setting up wireless
mobile networks often requires frequency assignment, communication protocol
selection, routing scheme selection, and cell tower distribution. This
research aims to optimize the cell tower distribution by using spatial mining
with a Geographic Information System (GIS) as a tool. The distribution is
optimized by applying a Digital Elevation Model (DEM) to the image of the
area to be covered, with two levels of hierarchy. The research applies the
spatial association rules technique at the second level to select the best
square in each cell for placing the antenna. In this way, the proposal aims
to minimize the number of installed towers, make tower locations feasible,
and provide full area coverage.
|
1104.2745
|
An Axis-Based Representation for Recognition
|
cs.CV
|
This paper presents a new axis-based shape representation scheme along with a
matching framework to address the problem of generic shape recognition. The
main idea is to define the relative spatial arrangement of local symmetry axes
and their metric properties in a shape centered coordinate frame. The resulting
descriptions are invariant to scale, rotation, small changes in viewpoint and
articulations. Symmetry points are extracted from a surface whose level curves
roughly mimic the motion by curvature. By increasing the amount of smoothing on
the evolving curve, only those symmetry axes that correspond to the most
prominent parts of a shape are extracted. The representation does not suffer
from the common instability problems of the traditional connected skeletons. It
captures the perceptual qualities of shapes well. Therefore finding the
similarities and the differences among shapes becomes easier. The matching
process gives highly successful results on a diverse database of 2D shapes.
|
1104.2751
|
Disconnected Skeleton: Shape at its Absolute Scale
|
cs.CV
|
We present a new skeletal representation along with a matching framework to
address the deformable shape recognition problem. The disconnectedness arises
as a result of excessive regularization that we use to describe a shape at an
attainably coarse scale. Our motivation is to rely on the stable properties of
the shape instead of inaccurately measured secondary details. The new
representation does not suffer from the common instability problems of
traditional connected skeletons, and the matching process gives quite
successful results on a diverse database of 2D shapes. An important difference
of our approach from the conventional use of the skeleton is that we replace
the local coordinate frame with a global Euclidean frame supported by
additional mechanisms to handle articulations and local boundary deformations.
As a result, we can produce descriptions that are sensitive to any combination
of changes in scale, position, orientation and articulation, as well as
invariant ones.
|
1104.2756
|
Privacy Preserving Moving KNN Queries
|
cs.DB
|
We present a novel approach that protects trajectory privacy of users who
access location-based services through a moving k nearest neighbor (MkNN)
query. An MkNN query continuously returns the k nearest data objects for a
moving user (query point). Simply updating a user's imprecise location such as
a region instead of the exact position to a location-based service provider
(LSP) cannot ensure privacy of the user for an MkNN query: continuous
disclosure of regions enables the LSP to follow a user's trajectory. We
identify the problem of trajectory privacy that arises from the overlap of
consecutive regions while requesting an MkNN query and provide the first
solution to this problem. Our approach allows a user to specify the confidence
level that represents a bound of how much more the user may need to travel than
the actual kth nearest data object. By hiding a user's required confidence
level and the required number of nearest data objects from an LSP, we develop a
technique to prevent the LSP from tracking the user's trajectory for MkNN
queries. We propose an efficient algorithm for the LSP to find k nearest data
objects for a region with a user's specified confidence level, which is an
essential component to evaluate an MkNN query in a privacy preserving manner;
this algorithm is at least two times faster than the state-of-the-art
algorithm. Extensive experimental studies validate the effectiveness of our
trajectory privacy protection technique and the efficiency of our algorithm.
|
1104.2773
|
Distributed Stochastic Approximation for Constrained and Unconstrained
Optimization
|
cs.IT math.IT
|
In this paper, we analyze the convergence of a distributed Robbins-Monro
algorithm for both constrained and unconstrained optimization in multi-agent
systems. The algorithm searches for local minima of a (nonconvex) objective
function which is supposed to coincide with a sum of local utility functions of
the agents. The algorithm under study consists of two steps: a local stochastic
gradient descent at each agent and a gossip step that drives the network of
agents to a consensus. It is proved that i) an agreement is achieved between
agents on the value of the estimate, ii) the algorithm converges to the set of
Kuhn-Tucker points of the optimization problem. The proof relies on recent
results about differential inclusions. In the context of unconstrained
optimization, intelligible sufficient conditions are provided in order to
ensure the stability of the algorithm. In the latter case, we also provide a
central limit theorem which governs the asymptotic fluctuations of the
estimate. We illustrate our results in the case of distributed power allocation
for ad-hoc wireless networks.
|
1104.2784
|
Diversity Analysis of Symbol-by-Symbol Linear Equalizers
|
cs.IT math.IT
|
In frequency-selective channels linear receivers enjoy significantly-reduced
complexity compared with maximum likelihood receivers at the cost of
performance degradation which can be in the form of a loss of the inherent
frequency diversity order or reduced coding gain. This paper demonstrates that
the minimum mean-square error symbol-by-symbol linear equalizer incurs no
diversity loss compared to the maximum likelihood receivers. In particular, for
a channel with memory $\nu$, it achieves the full diversity order of ($\nu+1$)
while the zero-forcing symbol-by-symbol linear equalizer always achieves a
diversity order of one.
|
1104.2788
|
Backdoors to Tractable Answer-Set Programming
|
cs.CC cs.AI
|
Answer Set Programming (ASP) is an increasingly popular framework for
declarative programming that admits the description of problems by means of
rules and constraints that form a disjunctive logic program. In particular,
many AI problems such as reasoning in a nonmonotonic setting can be directly
formulated in ASP. Although the main problems of ASP are of high computational
complexity, located at the second level of the Polynomial Hierarchy, several
restrictions of ASP have been identified in the literature, under which ASP
problems become tractable.
In this paper we use the concept of backdoors to identify new restrictions
that make ASP problems tractable. Small backdoors are sets of atoms that
represent "clever reasoning shortcuts" through the search space and reflect a
hidden structure in the problem input. The concept of backdoors is widely used
in the areas of propositional satisfiability and constraint satisfaction. We
show that it can be fruitfully adapted to ASP. We demonstrate how backdoors can
serve as a unifying framework that accommodates several tractable restrictions
of ASP known from the literature. Furthermore, we show how backdoors allow us
to deploy recent algorithmic results from parameterized complexity theory to
the domain of answer set programming.
|
1104.2824
|
Pattern discovery for semi-structured web pages using bar-tree
representation
|
cs.IR cs.DS
|
Many websites with an underlying database containing structured data provide
the richest and most dense source of information relevant for topical data
integration. The real data integration requires sustainable and reliable
pattern discovery to enable accurate content retrieval and to recognize pattern
changes from time to time; yet, extracting structured data from web
documents still lacks accuracy. This paper proposes the bar-tree
representation to describe the whole pattern of web pages in an efficient way
based on the reverse algorithm. While previous algorithms always trace the
pattern and extract the region of interest from the top root, the reverse
algorithm recognizes the pattern from the region of interest toward both the
top and bottom roots simultaneously. The attributes are then extracted and
labeled in reverse from the region of interest of the targeted contents. Since
using conventional representations for the algorithm would require more
computational power, the bar-tree method is developed to represent the
generated patterns using bar graphs characterized by the depths and widths from
the document roots. We show that this representation is suitable for extracting
the data from the semi-structured web sources, and for detecting the template
changes of targeted pages. The experimental results show perfect recognition
rate for template changes in several web targets.
|
1104.2825
|
Foundations for Uniform Interpolation and Forgetting in Expressive
Description Logics
|
cs.LO cs.AI
|
We study uniform interpolation and forgetting in the description logic ALC.
Our main results are model-theoretic characterizations of uniform
interpolants and of their existence in terms of bisimulations, tight complexity
bounds for deciding the existence of uniform interpolants, an approach to
computing interpolants when they exist, and tight bounds on their size. We use
a mix of model-theoretic and automata-theoretic methods that, as a by-product,
also provides characterizations of and decision procedures for conservative
extensions.
|
1104.2829
|
Self-organizing traffic lights at multiple-street intersections
|
nlin.AO cs.AI nlin.CG
|
Summary: Traffic light coordination is a complex problem. In this paper, we
extend previous work on an abstract model of city traffic to allow for multiple
street intersections. We test a self-organizing method in our model, showing
that it is close to theoretical optima and superior to a traditional method of
traffic light coordination.
Abstract: The elementary cellular automaton following rule 184 can mimic
particles flowing in one direction at a constant speed. This automaton can
therefore model highway traffic. In a recent paper, we have incorporated
intersections regulated by traffic lights to this model using exclusively
elementary cellular automata. In such a paper, however, we only explored a
rectangular grid. We now extend our model to more complex scenarios employing
a hexagonal grid. This extension shows first that our model can readily
incorporate multiple-way intersections and hence simulate complex scenarios. In
addition, the current extension allows us to study and evaluate the behavior of
two different kinds of traffic light controller for a grid of six-way streets
allowing for either two or three street intersections: a traffic light that
tries to adapt to the amount of traffic (which results in self-organizing
traffic lights) and a system of synchronized traffic lights with coordinated
rigid periods (sometimes called the "green wave" method). We observe a tradeoff
between system capacity and topological complexity. The green wave method is
unable to cope with the complexity of a higher-capacity scenario, while the
self-organizing method is scalable, adapting to the complexity of a scenario
and exploiting its maximum capacity. Additionally, in this paper we propose a
benchmark, independent of methods and models, to measure the performance of a
traffic light controller comparing it against a theoretical optimum.
|
1104.2842
|
Augmenting Tractable Fragments of Abstract Argumentation
|
cs.AI cs.CC
|
We present a new and compelling approach to the efficient solution of
important computational problems that arise in the context of abstract
argumentation. Our approach makes known algorithms defined for restricted
fragments generally applicable, at a computational cost that scales with the
distance from the fragment. Thus, in a certain sense, we gradually augment
tractable fragments. Surprisingly, it turns out that some tractable fragments
admit such an augmentation and that others do not.
More specifically, we show that the problems of credulous and skeptical
acceptance are fixed-parameter tractable when parameterized by the distance
from the fragment of acyclic argumentation frameworks. Other tractable
fragments such as the fragments of symmetrical and bipartite frameworks seem to
prohibit an augmentation: the acceptance problems are already intractable for
frameworks at distance 1 from the fragments.
For our study we use a broad setting and consider several different
semantics. For the algorithmic results we utilize recent advances in
fixed-parameter tractability.
|
1104.2861
|
Using Channel Output Feedback to Increase Throughput in Hybrid-ARQ
|
cs.IT math.IT
|
Hybrid-ARQ protocols have become common in many packet transmission systems
due to their incorporation in various standards. Hybrid-ARQ combines the normal
automatic repeat request (ARQ) method with error correction codes to increase
reliability and throughput. In this paper, we look at improving upon this
performance using feedback information from the receiver, in particular, using
a powerful forward error correction (FEC) code in conjunction with a proposed
linear feedback code for the Rayleigh block fading channels. The new hybrid-ARQ
scheme is initially developed for full received packet feedback in a
point-to-point link. It is then extended to various different multiple-antenna
scenarios (MISO/MIMO) with varying amounts of packet feedback information.
Simulations illustrate gains in throughput.
|
1104.2930
|
Cluster Forests
|
stat.ME cs.LG stat.ML
|
With inspiration from Random Forests (RF) in the context of classification, a
new clustering ensemble method---Cluster Forests (CF) is proposed.
Geometrically, CF randomly probes a high-dimensional data cloud to obtain "good
local clusterings" and then aggregates via spectral clustering to obtain
cluster assignments for the whole dataset. The search for good local
clusterings is guided by a cluster quality measure kappa. CF progressively
improves each local clustering in a fashion that resembles the tree growth in
RF. Empirical studies on several real-world datasets under two different
performance metrics show that CF compares favorably to its competitors.
Theoretical analysis reveals that the kappa measure makes it possible to grow
the local clustering in a desirable way---it is "noise-resistant". A
closed-form expression is obtained for the mis-clustering rate of spectral
clustering under a perturbation model, which yields new insights into some
aspects of spectral clustering.
|
1104.2939
|
Subexponential convergence for information aggregation on regular trees
|
cs.MA cs.IT math.IT math.ST stat.TH
|
We consider the decentralized binary hypothesis testing problem on trees of
bounded degree and increasing depth. For a regular tree of depth t and
branching factor k>=2, we assume that the leaves have access to independent and
identically distributed noisy observations of the 'state of the world' s.
Starting with the leaves, each node makes a decision in a finite alphabet M,
that it sends to its parent in the tree. Finally, the root decides between the
two possible states of the world based on the information it receives.
We prove that the error probability vanishes only subexponentially in the
number of available observations, under quite general hypotheses. More
precisely, in the case of binary messages, decay is subexponential for any
decision rule. For general (finite) message alphabet M, decay is subexponential for
'node-oblivious' decision rules, that satisfy a mild irreducibility condition.
In the latter case, we propose a family of decision rules with close-to-optimal
asymptotic behavior.
|
1104.2944
|
Global Computation in a Poorly Connected World: Fast Rumor Spreading
with No Dependence on Conductance
|
cs.DM cs.DC cs.SI physics.soc-ph
|
In this paper, we study the question of how efficiently a collection of
interconnected nodes can perform a global computation in the widely studied
GOSSIP model of communication. In this model, nodes do not know the global
topology of the network, and they may only initiate contact with a single
neighbor in each round. This model contrasts with the much less restrictive
LOCAL model, where a node may simultaneously communicate with all of its
neighbors in a single round. A basic question in this setting is how many
rounds of communication are required for the information dissemination problem,
in which each node has some piece of information and is required to collect all
others. In this paper, we give an algorithm that solves the information
dissemination problem in at most $O(D+\text{polylog}{(n)})$ rounds in a network
of diameter $D$, with no dependence on the conductance. This is at most an
additive polylogarithmic factor from the trivial lower bound of $D$, which
applies even in the LOCAL model. In fact, we prove that something stronger is
true: any algorithm that requires $T$ rounds in the LOCAL model can be
simulated in $O(T +\mathrm{polylog}(n))$ rounds in the GOSSIP model. We thus
prove that these two models of distributed computation are essentially
equivalent.
|
1104.2982
|
Multi-representation d'une ontologie : OWL, bases de donnees, syst\`emes
de types et d'objets
|
cs.IR
|
Due to the emergence of the semantic Web and the increasing need to formalize
human knowledge, ontology engineering is now an important activity. But is
this activity really so different from others, such as software engineering?
In this paper, we investigate analogies between ontologies on the one hand and
types, objects, and databases on the other, taking into account the notion of
the evolution of an ontology. We represent a single ontology using different
paradigms, and observe that the distance between these different concepts is
small. From this observation we conclude that ontologies, and more
specifically ontology description languages, can benefit from
cross-fertilization with other areas of computer science and inherit important
characteristics such as modularity.
|
1104.2998
|
On the exponential decay of the Euler-Bernoulli beam with boundary
energy dissipation
|
math-ph cs.SY math.MP math.OC
|
We study the asymptotic behavior of the Euler-Bernoulli beam which is clamped
at one end and free at the other end. We apply a boundary control with memory
at the free end of the beam and prove that the "exponential decay" of the
memory kernel is a necessary and sufficient condition for the exponential decay
of the energy.
|
1104.3069
|
Efficient Maximum Likelihood Estimation of a 2-D Complex Sinusoidal
Based on Barycentric Interpolation
|
cs.IT math.IT
|
This paper presents an efficient method to compute the maximum likelihood
(ML) estimation of the parameters of a complex 2-D sinusoidal, with the
complexity order of the FFT. The method is based on an accurate barycentric
formula for interpolating band-limited signals, and on the fact that the ML
cost function can be viewed as a signal of this type, if the time and frequency
variables are switched. The method consists in first computing the DFT of the
data samples, and then locating the maximum of the cost function by means of
Newton's algorithm. The fact is that the complexity of the latter step is small
and independent of the data size, since it makes use of the barycentric formula
for obtaining the values of the cost function and its derivatives. Thus, the
total complexity order is that of the FFT. The method is validated in a
numerical example.
|
1104.3083
|
Narrow scope for resolution-limit-free community detection
|
physics.soc-ph cs.SI
|
Detecting communities in large networks has drawn much attention over the
years. While modularity remains one of the more popular methods of community
detection, the so-called resolution limit remains a significant drawback. To
overcome this issue, it was recently suggested that instead of comparing the
network to a random null model, as is done in modularity, it should be compared
to a constant factor. However, it is unclear what is meant exactly by
"resolution-limit-free", that is, not suffering from the resolution limit.
Furthermore, the question remains what other methods could be classified as
resolution-limit-free. In this paper we suggest a rigorous definition and
derive some basic properties of resolution-limit-free methods. More
importantly, we are able to prove exactly which class of community detection
methods are resolution-limit-free. Furthermore, we analyze which methods are
not resolution-limit-free, suggesting there is only a limited scope for
resolution-limit-free community detection methods. Finally, we provide such a
natural formulation, and show it performs superbly.
|
1104.3084
|
I/O-Efficient Data Structures for Colored Range and Prefix Reporting
|
cs.DS cs.IR
|
Motivated by information retrieval applications, we consider the
one-dimensional colored range reporting problem in rank space. The goal is to
build a static data structure for sets C_1,...,C_m \subseteq {1,...,sigma} that
supports queries of the kind: Given indices a,b, report the set Union_{a <= i
<= b} C_i.
We study the problem in the I/O model, and show that there exists an optimal
linear-space data structure that answers queries in O(1+k/B) I/Os, where k
denotes the output size and B the disk block size in words. In fact, we obtain
the same bound for the harder problem of three-sided orthogonal range
reporting. In this problem, we are to preprocess a set of n two-dimensional
points in rank space, such that all points inside a query rectangle of the form
[x_1,x_2] x (-infinity,y] can be reported. The best previous bounds for this
problem are either O(n lg^2_B n) space and O(1+k/B) query I/Os, or O(n) space
and O(lg^(h)_B n +k/B) query I/Os, where lg^(h)_B n is the base B logarithm
iterated h times, for any constant integer h. The previous bounds are both
achieved under the indivisibility assumption, while our solution exploits the
full capabilities of the underlying machine. Breaking the indivisibility
assumption thus provides us with cleaner and optimal bounds.
Our results also imply an optimal solution to the following colored prefix
reporting problem. Given a set S of strings, each O(1) disk blocks in length,
and a function c: S -> 2^{1,...,sigma}, support queries of the kind: Given a
string p, report the set Union_{x in S intersection p*} c(x), where p* denotes
the set of strings with prefix p. Finally, we consider the possibility of top-k
extensions of this result, and present a simple solution in a model that allows
non-blocked I/O.
|