id | title | categories | abstract
|---|---|---|---|
1204.4541
|
Automatic Sampling of Geographic objects
|
cs.AI
|
Today, one has at one's disposal large datasets composed of thousands of geographic
objects. However, for many processes, which require the appraisal of an expert
or a great deal of computational time, only a small part of these objects can be taken
into account. In this context, robust sampling methods become necessary. In
this paper, we propose a sampling method based on clustering techniques. Our
method consists in dividing the objects into clusters and then selecting, in each
cluster, the most representative objects. A case study in the context of a
process dedicated to knowledge revision for geographic data generalisation is
presented. This case study shows that our method allows relevant samples of
objects to be selected.
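A minimal sketch of the clustering-then-selection idea described above, under illustrative assumptions (numeric feature vectors, k-means clustering, closest-to-centroid selection); it is not the paper's actual method:

```python
# Minimal sketch (not the paper's actual method): cluster numeric feature
# vectors of the geographic objects with k-means, then keep the object
# closest to each cluster centre as the cluster's representative sample.
import numpy as np
from sklearn.cluster import KMeans

def sample_representatives(features: np.ndarray, n_samples: int) -> np.ndarray:
    """Return indices of one representative object per cluster."""
    km = KMeans(n_clusters=n_samples, n_init=10, random_state=0).fit(features)
    reps = []
    for c in range(n_samples):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[np.argmin(dists)])   # closest to the centroid
    return np.array(reps)

# Toy usage: 1000 objects described by 5 numeric attributes, 20 samples kept.
objects = np.random.rand(1000, 5)
print(sample_representatives(objects, 20))
```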
|
1204.4560
|
A Fast and Effective Local Search Algorithm for Optimizing the Placement
of Wind Turbines
|
cs.NE
|
The placement of wind turbines on a given area of land such that the wind
farm produces a maximum amount of energy is a challenging optimization problem.
In this article, we tackle this problem, taking into account wake effects that
are produced by the different turbines on the wind farm. We significantly
improve upon existing results for the minimization of wake effects by
developing a new problem-specific local search algorithm. One key step in the
speed-up of our algorithm is the reduction in computation time needed to assess
a given wind farm layout compared to previous approaches. Our new method allows
the optimization of large real-world scenarios within a single night on a
standard computer, whereas weeks on specialized computing servers were required
for previous approaches.
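A generic local-search skeleton in the spirit of the approach described above; the toy wake penalty and the random single-turbine moves below are illustrative assumptions, not the paper's problem-specific wake model or moves:

```python
# Illustrative local-search skeleton only: the wake penalty below is a crude
# toy (decaying with inter-turbine distance), not the paper's wake model or
# its problem-specific moves.
import math
import random

def farm_energy(layout, wake_strength=0.5):
    """Toy objective: per-turbine yield reduced by wake from nearby turbines."""
    total = 0.0
    for i, (xi, yi) in enumerate(layout):
        wake = sum(wake_strength / (1.0 + math.hypot(xi - xj, yi - yj))
                   for j, (xj, yj) in enumerate(layout) if j != i)
        total += max(0.0, 1.0 - wake)
    return total

def local_search(layout, side=1000.0, step=50.0, iters=5000, seed=0):
    """Perturb one turbine at a time; keep the move only if energy improves."""
    rng = random.Random(seed)
    layout, best = list(layout), farm_energy(layout)
    for _ in range(iters):
        i = rng.randrange(len(layout))
        old = layout[i]
        layout[i] = (min(side, max(0.0, old[0] + rng.uniform(-step, step))),
                     min(side, max(0.0, old[1] + rng.uniform(-step, step))))
        candidate = farm_energy(layout)
        if candidate >= best:
            best = candidate
        else:
            layout[i] = old                      # revert a non-improving move
    return layout, best

# Toy usage: improve a random layout of 10 turbines on a 1 km x 1 km square.
start = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(10)]
print(local_search(start)[1])
```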
|
1204.4563
|
Describing A Cyclic Code by Another Cyclic Code
|
cs.IT math.IT
|
A new approach to bound the minimum distance of $q$-ary cyclic codes is
presented. The connection to the BCH bound and the Hartmann--Tzeng bound is
formulated, and it is shown that an improvement is achieved in several cases.
We associate a second cyclic code to the original one and bound its minimum
distance in terms of parameters of the associated code.
|
1204.4584
|
Jerarca: Efficient Analysis of Complex Networks Using Hierarchical
Clustering
|
q-bio.MN cs.SI physics.soc-ph
|
Background: How to extract useful information from complex biological
networks is a major goal in many fields, especially in genomics and proteomics.
We have shown in several works that iterative hierarchical clustering, as
implemented in the UVCluster program, is a powerful tool to analyze many of
those networks. However, the amount of computation time required to perform
UVCluster analyses imposed significant limitations on its use.
Methodology/Principal Findings: We describe the suite Jerarca, designed to
efficiently convert networks of interacting units into dendrograms by means of
iterative hierarchical clustering. Jerarca is divided into three main sections.
First, weighted distances among units are computed using up to three different
approaches: a more efficient version of UVCluster and two new, related
algorithms called RCluster and SCluster. Second, Jerarca builds dendrograms
based on those distances, using well-known phylogenetic algorithms, such as
UPGMA or Neighbor-Joining. Finally, Jerarca provides optimal partitions of the
trees using statistical criteria based on the distribution of intra- and
intercluster connections. Outputs compatible with the phylogenetic software
MEGA and the Cytoscape package are generated, allowing the results to be easily
visualized.
Conclusions/Significance: The four main advantages of Jerarca with respect to
UVCluster are: 1) Improved speed of a novel UVCluster algorithm; 2) Additional,
alternative strategies to perform iterative hierarchical clustering; 3)
Automatic evaluation of the hierarchical trees to obtain optimal partitions;
and, 4) Outputs compatible with popular software such as MEGA and Cytoscape.
|
1204.4585
|
An Information Theoretic Location Verification System for Wireless
Networks
|
cs.IT math.IT
|
As location-based applications become ubiquitous in emerging wireless
networks, Location Verification Systems (LVS) are of growing importance. In
this paper we propose, for the first time, a rigorous information-theoretic
framework for an LVS. The theoretical framework we develop illustrates how the
threshold used in the detection of a spoofed location can be optimized in terms
of the mutual information between the input and output data of the LVS. In
order to verify the legitimacy of our analytical framework we have carried out
detailed numerical simulations. Our simulations mimic the practical scenario
where a system deployed using our framework must make a binary Yes/No
"malicious decision" to each snapshot of the signal strength values obtained by
base stations. The comparison between simulation and analysis shows excellent
agreement. Our optimized LVS framework provides a defence against location
spoofing attacks in emerging wireless networks such as those envisioned for
Intelligent Transport Systems, where verification of location information is of
paramount importance.
|
1204.4626
|
Fast and Robust Parametric Estimation of Jointly Sparse Channels
|
cs.SY
|
We consider the joint estimation of multipath channels obtained with a set of
receiving antennas and uniformly probed in the frequency domain. This scenario
fits most of the modern outdoor communication protocols for mobile access or
digital broadcasting among others.
Such channels satisfy a Sparse Common Support (SCS) property, which was used in
a previous paper to propose a Finite Rate of Innovation (FRI) based sampling
and estimation algorithm. In this contribution we improve the robustness and
computational complexity aspects of this algorithm. The method is based on
projection in Krylov subspaces to improve complexity and a new criterion called
the Partial Effective Rank (PER) to estimate the level of sparsity to gain
robustness.
If P antennas measure a K-multipath channel with N uniformly sampled
measurements per channel, the algorithm possesses an O(KPNlogN) complexity and
an O(KPN) memory footprint instead of O(PN^3) and O(PN^2) for the direct
implementation, making it suitable for K << N. The sparsity is estimated online
based on the PER, and the algorithm therefore has a sense of introspection
being able to relinquish sparsity if it is lacking. The estimation performances
are tested on field measurements with synthetic AWGN, and the proposed
algorithm outperforms non-sparse reconstruction in the medium to low SNR range
(< 0dB), increasing the rate of successful symbol decodings by 1/10th on
average, and 1/3rd in the best case. The experiments also show that the
algorithm does not perform worse than a non-sparse estimation algorithm in
non-sparse operating conditions, since it may fall back to it if the PER
criterion does not detect a sufficient level of sparsity.
The algorithm is also tested against a method assuming a "discrete" sparsity
model as in Compressed Sensing (CS). The conducted test indicates a trade-off
between speed and accuracy.
|
1204.4656
|
Fusion of Greedy Pursuits for Compressed Sensing Signal Reconstruction
|
stat.AP cs.IT math.IT
|
Greedy Pursuits are very popular in Compressed Sensing for sparse signal
recovery. Though many of the Greedy Pursuits possess elegant theoretical
guarantees for performance, it is well known that their performance depends on
the statistical distribution of the non-zero elements in the sparse signal. In
practice, the distribution of the sparse signal may not be known a priori. It
is also observed that the performance of Greedy Pursuits degrades as the number
of available measurements decreases below a method-dependent threshold value.
To improve the performance in these situations, we introduce a novel
fusion framework for Greedy Pursuits and also propose two algorithms for sparse
recovery. Through Monte Carlo simulations we show that the proposed schemes
improve sparse signal recovery in clean as well as noisy measurement cases.
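One plausible way to instantiate such a fusion, sketched below under illustrative assumptions (run the same pursuit with different settings, take the union of the estimated supports, and refit by least squares); the paper's two proposed algorithms may differ in detail:

```python
# One plausible fusion of greedy pursuit estimates (an illustration, not
# necessarily either of the paper's two algorithms): take the union of the
# supports found by several pursuits, refit by least squares on that union,
# and keep the K largest coefficients.
import numpy as np

def ls_on_support(A, y, support):
    """Least-squares fit of y on the columns of A indexed by `support`."""
    x = np.zeros(A.shape[1])
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x[support] = sol
    return x

def omp(A, y, K):
    """Plain Orthogonal Matching Pursuit, used here as one of the pursuits."""
    support, residual = [], y.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x = ls_on_support(A, y, support)
        residual = y - A @ x
    return x

def fuse(estimates, A, y, K):
    """Union of the estimated supports, LS refit, then prune to K entries."""
    union = sorted(set(np.concatenate([np.flatnonzero(x) for x in estimates])))
    x = ls_on_support(A, y, union)
    top = sorted(np.argsort(np.abs(x))[-K:])
    return ls_on_support(A, y, top)

# Toy usage: fuse two OMP runs made with different assumed sparsity levels.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.flatnonzero(fuse([omp(A, y, 5), omp(A, y, 8)], A, y, K=5)))
```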
|
1204.4685
|
A Query Language for Formal Mathematical Libraries
|
cs.LO cs.DB
|
One of the most promising applications of mathematical knowledge management
is search: Even if we restrict attention to the tiny fragment of mathematics
that has been formalized, the amount exceeds the comprehension of an individual
human.
Based on the generic representation language MMT, we introduce the
mathematical query language QMT: It combines simplicity, expressivity, and
scalability while avoiding a commitment to a particular logical formalism. QMT
can integrate various search paradigms such as unification, semantic web, or
XQuery style queries, and QMT queries can span different mathematical
libraries.
We have implemented QMT as a part of the MMT API. This combination provides a
scalable indexing and query engine that can be readily applied to any library
of mathematical knowledge. While our focus here is on libraries that are
available in a content markup language, QMT naturally extends to presentation
and narration markup languages.
|
1204.4686
|
Analysis of LT Codes with Unequal Recovery Time
|
cs.IT math.IT
|
In this paper we analyze a specific class of rateless codes, called LT codes
with unequal recovery time. These codes provide the option of prioritizing
different segments of the transmitted data over others. The result is that
segments are decoded in stages during the rateless transmission, where higher
prioritized segments are decoded at lower overhead. Our analysis focuses on
quantifying the expected number of received symbols that are already redundant
upon arrival, i.e. all input symbols contained in the received symbols
have already been decoded. This analysis gives novel insights into the
probabilistic mechanisms of LT codes with unequal recovery time, which have not
yet been available in the literature. We show that while these rateless codes
successfully provide the unequal recovery time, they do so at a significant
price in terms of redundancy in the lower prioritized segments. We propose and
analyze a modification where a single intermediate feedback is transmitted,
when the first segment is decoded in a code with two segments. Our analysis
shows that this modification provides a dramatic improvement on the decoding
performance of the lower prioritized segment.
|
1204.4710
|
Regret in Online Combinatorial Optimization
|
cs.LG stat.ML
|
We address online linear optimization problems when the possible actions of
the decision maker are represented by binary vectors. The regret of the
decision maker is the difference between her realized loss and the best loss
she would have achieved by picking, in hindsight, the best possible action. Our
goal is to understand the magnitude of the best possible (minimax) regret. We
study the problem under three different assumptions for the feedback the
decision maker receives: full information, and the partial information models
of the so-called "semi-bandit" and "bandit" problems. Combining the Mirror
Descent algorithm and the INF (Implicitly Normalized Forecaster) strategy, we
are able to prove optimal bounds for the semi-bandit case. We also recover the
optimal bounds for the full information setting. In the bandit case we discuss
existing results in light of a new lower bound, and suggest a conjecture on the
optimal regret in that case. Finally, we prove that the standard
exponentially weighted average forecaster is suboptimal in the setting
of online combinatorial optimization.
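Written out with notation assumed here for concreteness (binary actions $a_t \in \mathcal{A} \subseteq \{0,1\}^d$ chosen by the decision maker, loss vectors $\ell_t$ chosen by the environment), the regret discussed above is
$$ R_n \;=\; \mathbb{E}\Big[\sum_{t=1}^{n} \ell_t^{\top} a_t\Big] \;-\; \min_{a \in \mathcal{A}} \sum_{t=1}^{n} \ell_t^{\top} a . $$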
|
1204.4717
|
Energy-Efficient Building HVAC Control Using Hybrid System LBMPC
|
math.OC cs.LG cs.SY
|
Improving the energy-efficiency of heating, ventilation, and air-conditioning
(HVAC) systems has the potential to realize large economic and societal
benefits. This paper concerns the system identification of a hybrid system
model of a building-wide HVAC system and its subsequent control using a hybrid
system formulation of learning-based model predictive control (LBMPC). Here,
the learning refers to model updates to the hybrid system model that
incorporate the heating effects due to occupancy, solar effects, outside air
temperature (OAT), and equipment, in addition to integrator dynamics inherently
present in low-level control. Though we make significant modeling
simplifications, our corresponding controller that uses this model is able to
experimentally achieve a large reduction in energy usage without any
degradations in occupant comfort. It is in this way that we justify the
modeling simplifications that we have made. We conclude by presenting results
from experiments on our building HVAC testbed, which show an average of 1.5MWh
of energy savings per day (p = 0.002) with a 95% confidence interval of 1.0MWh
to 2.1MWh of energy savings.
|
1204.4758
|
Morphological Filtering in Shape Spaces: Applications using Tree-Based
Image Representations
|
cs.CV math.OA
|
Connected operators are filtering tools that act by merging elementary
regions of an image. A popular strategy is based on tree-based image
representations: for example, one can compute an attribute on each node of the
tree and keep only the nodes for which the attribute is sufficiently strong.
This operation can be seen as a thresholding of the tree, seen as a graph whose
nodes are weighted by the attribute. Rather than being satisfied with a mere
thresholding, we propose to expand on this idea, and to apply connected filters
on this latter graph. Consequently, the filtering is done not in the space of
the image, but in the space of shapes built from the image. Such processing
is a generalization of the existing tree-based connected operators. Indeed, the
framework includes classical existing connected operators by attributes. It
also allows us to propose a class of novel connected operators from the
leveling family, based on shape attributes. Finally, we also propose a novel
class of self-dual connected operators that we call morphological shapings.
|
1204.4779
|
Paraiso : An Automated Tuning Framework for Explicit Solvers of Partial
Differential Equations
|
astro-ph.IM cs.DC cs.NE
|
We propose Paraiso, a domain specific language embedded in the functional
programming language Haskell, for automated tuning of explicit solvers of
partial differential equations (PDEs) on GPUs as well as multicore CPUs. In
Paraiso, one can describe PDE-solving algorithms succinctly using tensor
equation notation. Hydrodynamic properties, interpolation methods and other
building blocks are described in abstract, modular, re-usable and combinable
forms, which lets us generate versatile solvers from a small set of Paraiso
source codes.
We demonstrate Paraiso by implementing a compressible hydrodynamics solver. A
single source file of fewer than 500 lines can be used to generate solvers of
arbitrary dimensions, for both multicore CPUs and GPUs. We demonstrate both
manual annotation based tuning and evolutionary computing based automated
tuning of the program.
|
1204.4805
|
What's in an `is about' link? Chemical diagrams and the Information
Artifact Ontology
|
cs.AI cs.LO
|
The Information Artifact Ontology is an ontology in the domain of information
entities. Core to the definition of what it is to be an information entity is
the claim that an information entity must be `about' something, which is
encoded in an axiom expressing that all information entities are about some
entity. This axiom comes into conflict with ontological realism, since many
information entities seem to be about non-existing entities, such as
hypothetical molecules. We discuss this problem in the context of diagrams of
molecules, a kind of information entity pervasively used throughout
computational chemistry. We then propose a solution that recognizes that
information entities such as diagrams are expressions of diagrammatic
languages. In so doing, we not only address the problem of classifying diagrams
that seem to be about non-existing entities but also allow a more sophisticated
categorisation of information entities.
|
1204.4840
|
Energy-Delay Tradeoff and Dynamic Sleep Switching for Bluetooth-Like
Body-Area Sensor Networks
|
cs.IT cs.NI math.IT
|
Wireless technology enables novel approaches to healthcare, in particular the
remote monitoring of vital signs and other parameters indicative of people's
health. This paper considers a system scenario relevant to such applications,
where a smart-phone acts as a data-collecting hub, gathering data from a number
of wireless-capable body sensors, and relaying them to a healthcare provider
host through standard existing cellular networks. Delay of critical data and
sensors' energy efficiency are both relevant and conflicting issues. Therefore,
it is important to operate the wireless body-area sensor network at some
desired point close to the optimal energy-delay tradeoff curve. This tradeoff
curve is a function of the employed physical-layer protocol: in particular, it
depends on the multiple-access scheme and on the coding and modulation schemes
available. In this work, we consider a protocol closely inspired by the
widely-used Bluetooth standard. First, we consider the calculation of the
minimum energy function, i.e., the minimum sum energy per symbol that
guarantees the stability of all transmission queues in the network. Then, we
apply the general theory developed by Neely to develop a dynamic scheduling
policy that approaches the optimal energy-delay tradeoff for the network at
hand. Finally, we examine the queue dynamics and propose a novel policy that
adaptively switches between connected and disconnected (sleeping) modes. We
demonstrate that the proposed policy can achieve significant gains in the
realistic case where the control "NULL" packets necessary to keep the
connection alive have a non-zero energy cost, and the data arrival statistics
corresponding to the sensed physical process are bursty.
|
1204.4864
|
Tight lower bound of consecutive lengths for QC-LDPC codes with girth
twelve
|
cs.IT math.IT
|
For an arbitrary (3,L) QC-LDPC code with a girth of twelve, a tight lower
bound on the consecutive lengths is proposed. For an arbitrary length above the
bound the resultant code necessarily has a girth of twelve, and for the length
meeting the bound, the corresponding code inevitably has a girth smaller than
twelve. The conclusion can play an important role in the proofs of the
existence of large-girth QC-LDPC codes, the construction of large-girth QC-LDPC
codes based on the Chinese remainder theorem, and the construction of LDPC
codes with the guaranteed error correction capability.
|
1204.4865
|
Branch Flow Model: Relaxations and Convexification (Parts I, II)
|
cs.SY math.OC
|
We propose a branch flow model for the analysis and optimization of mesh as
well as radial networks. The model leads to a new approach to solving optimal
power flow (OPF) that consists of two relaxation steps. The first step
eliminates the voltage and current angles and the second step approximates the
resulting problem by a conic program that can be solved efficiently. For radial
networks, we prove that both relaxation steps are always exact, provided there
are no upper bounds on loads. For mesh networks, the conic relaxation is always
exact but the angle relaxation may not be exact, and we provide a simple way to
determine if a relaxed solution is globally optimal. We propose convexification
of mesh networks using phase shifters so that OPF for the convexified network
can always be solved efficiently for an optimal solution. We prove that
convexification requires phase shifters only outside a spanning tree of the
network and their placement depends only on network topology, not on power
flows, generation, loads, or operating constraints. Part I introduces our
branch flow model, explains the two relaxation steps, and proves the conditions
for exact relaxation. Part II describes convexification of mesh networks, and
presents simulation results.
|
1204.4867
|
A Unified Multiscale Framework for Discrete Energy Minimization
|
cs.CV cs.DM
|
Discrete energy minimization is a ubiquitous task in computer vision, yet is
NP-hard in most cases. In this work we propose a multiscale framework for
coping with the NP-hardness of discrete optimization. Our approach utilizes
algebraic multiscale principles to efficiently explore the discrete solution
space, yielding improved results on challenging, non-submodular energies for
which current methods provide unsatisfactory approximations. In contrast to
popular multiscale methods in computer vision, which build an image pyramid,
our framework acts directly on the energy to construct an energy pyramid.
Deriving a multiscale scheme from the energy itself makes our framework
application independent and widely applicable. Our framework gives rise to two
complementary energy coarsening strategies: one in which coarser scales involve
fewer variables, and a more revolutionary one in which the coarser scales
involve fewer discrete labels. We empirically evaluated our unified framework
on a variety of both non-submodular and submodular energies, including energies
from the Middlebury benchmark.
|
1204.4874
|
On the existence, uniqueness and nature of Caratheodory and Filippov
solutions for bimodal piecewise affine dynamical systems
|
cs.SY math.CA
|
In this paper, we deal with the well-posedness (in the sense of existence and
uniqueness of solutions) and nature of solutions for discontinuous bimodal
piecewise affine systems in a differential inclusion setting. First, we show
that the conditions guaranteeing uniqueness of Filippov solutions in the
context of general differential inclusions are quite restrictive when applied
to bimodal piecewise affine systems. Later, we present a set of necessary and
sufficient conditions for uniqueness of Filippov solutions for bimodal
piecewise affine systems. We also study the so-called Zeno behavior
(possibility of infinitely many switchings within a finite time interval) for
Filippov solutions.
|
1204.4905
|
Capacity of Gaussian MAC Powered by Energy Harvesters without Storage
Buffer
|
cs.IT math.IT
|
We consider a Gaussian multiple access channel (GMAC) where the users are
sensor nodes powered by energy harvesters. The energy harvester has no buffer
to store the harvested energy and hence the energy needs to be expended
immediately. We assume that the decoder has perfect knowledge of the energy
harvesting process. We characterize the capacity region of such a GMAC. We also
provide the capacity region when one of the users has an infinite buffer to
store the harvested energy. Next, we find the achievable rates when the energy
harvesting information is not available at the decoder.
|
1204.4914
|
Quantum Interference in Cognition: Structural Aspects of the Brain
|
cs.AI cs.CL quant-ph
|
We identify the presence of typically quantum effects, namely 'superposition'
and 'interference', in what happens when human concepts are combined, and
provide a quantum model in complex Hilbert space that represents faithfully
experimental data measuring the situation of combining concepts. Our model
shows how 'interference of concepts' explains the effects of underextension and
overextension when two concepts combine into the disjunction of these two
concepts. This result supports our earlier hypothesis that human thought has a
superposed two-layered structure, one layer consisting of 'classical logical
thought' and a superposed layer consisting of 'quantum conceptual thought'.
Possible connections with recent findings of a 'grid-structure' for the brain
are analyzed, and influences on the mind/brain relation, and consequences on
applied disciplines, such as artificial intelligence and quantum computation,
are considered.
|
1204.4927
|
EHRs Connect Research and Practice: Where Predictive Modeling,
Artificial Intelligence, and Clinical Decision Support Intersect
|
cs.AI cs.DB stat.ML
|
Objectives: Electronic health records (EHRs) are only a first step in
capturing and utilizing health-related data - the challenge is turning that
data into useful information. Furthermore, EHRs are increasingly likely to
include data relating to patient outcomes, functionality such as clinical
decision support, and genetic information as well, and, as such, can be seen as
repositories of increasingly valuable information about patients' health
conditions and responses to treatment over time. Methods: We describe a case
study of 423 patients treated by Centerstone within Tennessee and Indiana in
which we utilized electronic health record data to generate predictive
algorithms of individual patient treatment response. Multiple models were
constructed using predictor variables derived from clinical, financial and
geographic data. Results: For the 423 patients, 101 deteriorated, 223 improved
and in 99 there was no change in clinical condition. Based on modeling of
various clinical indicators at baseline, the highest accuracy in predicting
individual patient response ranged from 70-72% within the models tested. In
terms of individual predictors, the Centerstone Assessment of Recovery Level -
Adult (CARLA) baseline score was most significant in predicting outcome over
time (odds ratio 4.1 ± 2.27). Other variables with consistently significant
impact on outcome included payer, diagnostic category, location and provision
of case management services. Conclusions: This approach represents a promising
avenue toward reducing the current gap between research and practice across
healthcare, developing data-driven clinical decision support based on
real-world populations, and serving as a component of embedded clinical
artificial intelligences that "learn" over time.
|
1204.4928
|
Challenges in Complex Systems Science
|
nlin.AO cs.SI physics.soc-ph
|
FuturICT foundations are social science, complex systems science, and ICT.
The main concerns and challenges in the science of complex systems in the
context of FuturICT are laid out in this paper with special emphasis on the
Complex Systems route to Social Sciences. This includes complex systems having:
many heterogeneous interacting parts; multiple scales; complicated transition
laws; unexpected or unpredicted emergence; sensitive dependence on initial
conditions; path-dependent dynamics; networked hierarchical connectivities;
interaction of autonomous agents; self-organisation; non-equilibrium dynamics;
combinatorial explosion; adaptivity to changing environments; co-evolving
subsystems; ill-defined boundaries; and multilevel dynamics. In this context,
science is seen as the process of abstracting the dynamics of systems from
data. This presents many challenges including: data gathering by large-scale
experiment, participatory sensing and social computation, managing huge
distributed dynamic and heterogeneous databases; moving from data to dynamical
models, going beyond correlations to cause-effect relationships, understanding
the relationship between simple and comprehensive models with appropriate
choices of variables, ensemble modeling and data assimilation, modeling systems
of systems of systems with many levels between micro and macro; and formulating
new approaches to prediction, forecasting, and risk, especially in systems that
can reflect on and change their behaviour in response to predictions, and
systems whose apparently predictable behaviour is disrupted by apparently
unpredictable rare or extreme events. These challenges are part of the FuturICT
agenda.
|
1204.4948
|
On Injective Embeddings of Tree Patterns
|
cs.DB
|
We study three different kinds of embeddings of tree patterns:
weakly-injective, ancestor-preserving, and lca-preserving. While each of them
is often referred to as injective embedding, they form a proper hierarchy and
their computational properties vary (from P to NP-complete). We present a
thorough study of the complexity of the model checking problem, i.e., whether
there is an embedding of a given tree pattern in a given tree, and we investigate the
impact of various restrictions imposed on the tree pattern: bound on the degree
of a node, bound on the height, and type of allowed labels and edges.
|
1204.4989
|
Using Belief Theory to Diagnose Control Knowledge Quality. Application
to cartographic generalisation
|
cs.AI
|
Both humans and artificial systems frequently use trial and error methods for
problem solving. In order to be effective, this type of strategy implies having
high quality control knowledge to guide the quest for the optimal solution.
Unfortunately, this control knowledge is rarely perfect. Moreover, in
artificial systems, as in humans, self-evaluation of one's own knowledge is often
difficult. Yet, this self-evaluation can be very useful to manage knowledge and
to determine when to revise it. The objective of our work is to propose an
automated approach to evaluate the quality of control knowledge in artificial
systems based on a specific trial and error strategy, namely the informed tree
search strategy. Our revision approach consists in analysing the system's
execution logs, and in using belief theory to evaluate the global quality
of the knowledge. We present a real-world industrial application in the form of
an experiment using this approach in the domain of cartographic generalisation.
Thus far, the results of using our approach have been encouraging.
|
1204.4990
|
Objective Function Designing Led by User Preferences Acquisition
|
cs.LG cs.AI cs.HC
|
Many real-world problems can be defined as optimisation problems in which the
aim is to maximise an objective function. The quality of the obtained solution is
directly linked to the pertinence of the objective function used. However,
designing such a function, which has to translate the user's needs, is usually
tedious. In this paper, a method to help users design objective functions
is proposed. Our approach, which is highly interactive, is based on man-machine
dialogue and more particularly on the comparison of problem-instance solutions
by the user. We present an experiment in the domain of cartographic
generalisation that shows promising results.
|
1204.4991
|
Knowledge revision in systems based on an informed tree search strategy
: application to cartographic generalisation
|
cs.AI cs.LG
|
Many real-world problems can be expressed as optimisation problems. Solving
this kind of problem means finding, among all possible solutions, the one that
maximises an evaluation function. One approach to solving this kind of problem is
to use an informed search strategy. The principle of such a strategy is
to use problem-specific knowledge, beyond the definition of the problem itself,
to find solutions more efficiently than with an uninformed strategy. Such a
strategy requires defining problem-specific knowledge (heuristics). The
efficiency and the effectiveness of systems based on it directly depend on the
quality of the knowledge used. Unfortunately, acquiring and maintaining such knowledge
can be tedious. The objective of the work presented in this paper is to
propose an automatic knowledge revision approach for systems based on an
informed tree search strategy. Our approach consists in analysing the system's
execution logs and revising the knowledge based on these logs, by modelling the
revision problem as a knowledge space exploration problem. We present an
experiment we carried out in an application domain where informed search
strategies are often used: cartographic generalisation.
|
1204.5001
|
Improving the Entropy Estimate of Neuronal Firings of Modeled Cochlear
Nucleus Neurons
|
q-bio.NC cs.IT math.IT
|
In this correspondence, information-theoretic tools are used to investigate
the statistical properties of modeled cochlear nucleus globular bushy cell
spike trains. The firing patterns are obtained from a simulation software that
generates sample spike trains from any auditory input. Here we analyze for the
first time the responses of globular bushy cells to voiced and unvoiced speech
sounds. Classical entropy estimates, such as the direct method, are improved
upon by considering a time-varying and time-dependent entropy estimate. With
this method we investigated the relationship between the predictability of the
neuronal response and the frequency content in the auditory signals. The
analysis quantifies the temporal precision of the neuronal coding and the
memory in the neuronal response.
|
1204.5028
|
Regenerating Codes: A System Perspective
|
cs.DC cs.IT cs.NI math.IT
|
The explosion of the amount of data stored in cloud systems calls for more
efficient paradigms for redundancy. While replication is widely used to ensure
data availability, erasure correcting codes provide a much better trade-off
between storage and availability. Regenerating codes are good candidates as
they also offer low repair costs in terms of network bandwidth. While they have
been proven optimal, they are difficult to understand and parameterize. In this
paper we provide an analysis of regenerating codes for practitioners to grasp
the various trade-offs. More specifically we make two contributions: (i) we
study the impact of the parameters by conducting an analysis at the level of
the system, rather than at the level of a single device; (ii) we compare the
computational costs of various implementations of codes and highlight the most
efficient ones. Our goal is to provide system designers with concrete
information to help them choose the best parameters and design for regenerating
codes.
|
1204.5043
|
Sparse Prediction with the $k$-Support Norm
|
stat.ML cs.LG
|
We derive a novel norm that corresponds to the tightest convex relaxation of
sparsity combined with an $\ell_2$ penalty. We show that this new {\em
$k$-support norm} provides a tighter relaxation than the elastic net and is
thus a good replacement for the Lasso or the elastic net in sparse prediction
problems. Through the study of the $k$-support norm, we also bound the
looseness of the elastic net, thus shedding new light on it and providing
justification for its use.
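For orientation, one standard variational definition of the $k$-support norm from the literature (assumed here, as the abstract does not restate it) is
$$ \|w\|_{k}^{sp} \;=\; \min\Big\{ \sum_{I} \|v_I\|_2 \;:\; \operatorname{supp}(v_I) \subseteq I,\ |I| \le k,\ \sum_{I} v_I = w \Big\}, $$
whose dual norm is the $\ell_2$ norm of the $k$ largest-magnitude entries of its argument.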
|
1204.5046
|
Instantaneous Relaying: Optimal Strategies and Interference
Neutralization
|
cs.IT math.IT math.OC
|
In a multi-user wireless network equipped with multiple relay nodes, some
relays are more intelligent than other relay nodes. The intelligent relays are
able to gather channel state information, perform linear processing and forward
signals whereas the dumb relays are only able to serve as amplifiers. As the
dumb relays are oblivious to the source and destination nodes, the wireless
network can be modeled as a relay network with *smart instantaneous relay*
only: signals along the source-destination path arrive at the same time as
those along the source-relay-destination path. Recently, instantaneous relaying
has been shown to improve the degrees-of-freedom of the network as compared to
the classical cut-set bound.
In this paper, we study an achievable rate region and its boundary of the
instantaneous interference relay channel in the scenario of (a) uninformed
non-cooperative source-destination nodes (source and destination nodes are not
aware of the existence of the relay and are non-cooperative) and (b) informed
and cooperative source-destination nodes. Further, we examine the performance
of interference neutralization: a relay strategy which is able to cancel
interference signals at each destination node in the air. We observe that
interference neutralization, although promising to achieve the desired
degrees-of-freedom, may not be feasible if the relay has limited power. Simulation
results show that the optimal relay strategies improve the achievable rate
region and provide better user-fairness in both uninformed non-cooperative and
informed cooperative scenarios.
|
1204.5059
|
Computation over Mismatched Channels
|
cs.IT math.IT
|
We consider the problem of distributed computation of a target function over
a multiple-access channel. If the target and channel functions are matched
(i.e., compute the same function), significant performance gains can be
obtained by jointly designing the computation and communication tasks. However,
in most situations there is mismatch between these two functions. In this work,
we analyze the impact of this mismatch on the performance gains achievable with
joint computation and communication designs over separation-based designs. We
show that for most pairs of target and channel functions there is no such gain,
and separation of computation and communication is optimal.
|
1204.5136
|
Analysis and Design of Irregular Graphs for Node-Based
Verification-Based Recovery Algorithms in Compressed Sensing
|
cs.IT math.IT
|
In this paper, we present a probabilistic analysis of iterative node-based
verification-based (NB-VB) recovery algorithms over irregular graphs in the
context of compressed sensing. Verification-based algorithms are particularly
interesting due to their low complexity (linear in the signal dimension $n$).
The analysis predicts the average fraction of unverified signal elements at
each iteration $\ell$ where the average is taken over the ensembles of input
signals and sensing matrices. The analysis is asymptotic ($n \rightarrow
\infty$) and is similar in nature to the well-known density evolution technique
commonly used to analyze iterative decoding algorithms. Compared to the
existing technique for the analysis of NB-VB algorithms, which is based on
numerically solving a large system of coupled differential equations, the
proposed method is much simpler and more accurate. This allows us to design
irregular sensing graphs for such recovery algorithms. The designed irregular
graphs outperform the corresponding regular graphs substantially. For example,
for the same recovery complexity per iteration, we design irregular graphs that
can recover up to about 40% more non-zero signal elements compared to the
regular graphs. Simulation results are also provided which demonstrate that the
proposed asymptotic analysis matches the performance of recovery algorithms for
large but finite values of $n$.
|
1204.5174
|
Christhin: Quantitative Analysis of Thin Layer Chromatography
|
cs.CE
|
Manual for Christhin 0.1.36. Christhin (Chromatography Riser Thin) is software
developed for the quantitative analysis of data obtained from thin-layer
chromatographic (TLC) techniques. Once installed on your computer, the program
is very easy to use, and provides data quickly and accurately. This manual
describes the program, and reading it should be enough to use the program properly.
|
1204.5213
|
Solving Weighted Voting Game Design Problems Optimally: Representations,
Synthesis, and Enumeration
|
cs.GT cs.AI cs.MA
|
We study the inverse power index problem for weighted voting games: the
problem of finding a weighted voting game in which the power of the players is
as close as possible to a certain target distribution. Our goal is to find
algorithms that solve this problem exactly. To this end, we study various
subclasses of simple games, and their associated representation methods. We
survey algorithms and impossibility results for the synthesis problem, i.e.,
converting a representation of a simple game into another representation.
We contribute to the synthesis problem by showing that it is impossible to
compute in polynomial time the list of ceiling coalitions (also known as
shift-maximal losing coalitions) of a game from its list of roof coalitions
(also known as shift-minimal winning coalitions), and vice versa.
Then, we proceed by studying the problem of enumerating the set of weighted
voting games. We present first a naive algorithm for this, running in doubly
exponential time. Using our knowledge of the synthesis problem, we then improve
on this naive algorithm, and we obtain an enumeration algorithm that runs in
quadratic exponential time (that is, O(2^(n^2) p(n)) for a polynomial p).
Moreover, we show that this algorithm runs in output-polynomial time, making it
the best possible enumeration algorithm up to a polynomial factor.
Finally, we propose an exact anytime algorithm for the inverse power index
problem that runs in exponential time. This algorithm is straightforward and
general: it computes the error for each game enumerated, and outputs the game
that minimizes this error. By the genericity of our approach, our algorithm can
be used to find a weighted voting game that optimizes any exponential time
computable function. We implement our algorithm for the case of the normalized
Banzhaf index, and we perform experiments in order to study performance and
error convergence.
|
1204.5226
|
An Optimal and Distributed Method for Voltage Regulation in Power
Distribution Systems
|
math.OC cs.IT cs.SY math.IT
|
This paper addresses the problem of voltage regulation in power distribution
networks with deep-penetration of distributed energy resources, e.g.,
renewable-based generation, and storage-capable loads such as plug-in hybrid
electric vehicles. We cast the problem as an optimization program, where the
objective is to minimize the losses in the network subject to constraints on
bus voltage magnitudes, limits on active and reactive power injections,
transmission line thermal limits and losses. We provide sufficient conditions
under which the optimization problem can be solved via its convex relaxation.
Using data from existing networks, we show that these sufficient conditions are
expected to be satisfied by most networks. We also provide an efficient
distributed algorithm to solve the problem. The algorithm adheres to a
communication topology described by a graph that is the same as the graph that
describes the electrical network topology. We illustrate the operation of the
algorithm, including its robustness against communication link failures,
through several case studies involving 5-, 34-, and 123-bus power distribution
systems.
|
1204.5253
|
An Algebraic Framework for Concatenated Linear Block Codes in Side
Information Based Problems
|
cs.IT math.IT
|
This work provides an algebraic framework for source coding with decoder side
information and its dual problem, channel coding with encoder side information,
showing that nested concatenated codes can achieve the corresponding
rate-distortion and capacity-noise bounds. We show that code concatenation
preserves the nested properties of codes and that only one of the concatenated
codes needs to be nested, which opens up a wide range of possible new code
combinations for these side information based problems. In particular, the
practically important binary version of these problems can be addressed by
concatenating binary inner and non-binary outer linear codes. By observing that
list decoding with folded Reed-Solomon codes is asymptotically optimal for
encoding IID q-ary sources and that in concatenation with inner binary codes it
can asymptotically achieve the rate-distortion bound for a Bernoulli symmetric
source, we illustrate our findings with a new algebraic construction which
comprises concatenated nested cyclic codes and binary linear block codes.
|
1204.5281
|
Stochastic Analysis of Mean Interference for RTS/CTS Mechanism
|
cs.NI cs.IT cs.PF math.IT
|
The RTS/CTS handshake mechanism in WLAN is studied using stochastic geometry.
The effect of RTS/CTS is treated as a thinning procedure for a spatially
distributed point process that models the potential transceivers in a WLAN, and
the resulting concurrent transmission processes are described. Exact formulas
for the intensity of the concurrent transmission processes and the mean
interference experienced by a typical receiver are established. The analysis
yields useful results for understanding how the design parameters of RTS/CTS
affect the network interference.
|
1204.5309
|
Analysis Operator Learning and Its Application to Image Reconstruction
|
cs.LG cs.CV
|
Exploiting a priori known structural information lies at the core of many
image reconstruction methods that can be stated as inverse problems. The
synthesis model, which assumes that images can be decomposed into a linear
combination of very few atoms of some dictionary, is now a well established
tool for the design of image reconstruction algorithms. An interesting
alternative is the analysis model, where the signal is multiplied by an
analysis operator and the outcome is assumed to be sparse. This approach
has only recently gained increasing interest. The quality of reconstruction
methods based on an analysis model depends heavily on the choice of a
suitable operator.
In this work, we present an algorithm for learning an analysis operator from
training images. Our method is based on an $\ell_p$-norm minimization on the
set of full rank matrices with normalized columns. We carefully introduce the
employed conjugate gradient method on manifolds, and explain the underlying
geometry of the constraints. Moreover, we compare our approach to
state-of-the-art methods for image denoising, inpainting, and single image
super-resolution. Our numerical results show competitive performance of our
general approach in all presented applications compared to the specialized
state-of-the-art techniques.
|
1204.5314
|
A Tunable Mechanism for Identifying Trusted Nodes in Large Scale
Distributed Networks
|
cs.SI physics.soc-ph
|
In this paper, we propose a simple randomized protocol for identifying
trusted nodes based on personalized trust in large scale distributed networks.
The problem of identifying trusted nodes, based on personalized trust, in a
large network setting stems from the huge computation and message overhead
involved in exhaustively calculating and propagating the trust estimates by the
remote nodes. However, in any practical scenario, nodes generally communicate
with a small subset of nodes and thus exhaustively estimating the trust of all
the nodes can lead to huge resource consumption. In contrast, our mechanism can
be tuned to locate a desired subset of trusted nodes, based on the allowable
overhead, with respect to a particular user. The mechanism is based on a simple
exchange of random walk messages and nodes counting the number of times they
are being hit by random walkers of nodes in their neighborhood. Simulation
results to analyze the effectiveness of the algorithm show that using the
proposed algorithm, nodes identify the top trusted nodes in the network with a
very high probability by exploring only around 45% of the total nodes, and in
turn generate nearly 90% less overhead as compared to an exhaustive trust
estimation mechanism, named TrustWebRank. Finally, we provide a measure of the
global trustworthiness of a node; simulation results indicate that the measures
generated using our mechanism differ by only around 0.6% as compared to
TrustWebRank.
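A toy sketch of the hit-counting idea described above, under illustrative assumptions (trust-weighted random walks of fixed length launched from the querying node); the paper's actual message exchange and tuning parameters may differ:

```python
# Toy sketch of the hit-counting idea only: the querying node launches short
# trust-weighted random walks and every node counts how often it is hit; the
# highest counters are reported as candidate trusted nodes. Graph model, walk
# length and walk count are illustrative assumptions, not the paper's tuning.
import random
from collections import Counter

def top_trusted(adj, trust, source, n_walks=200, walk_len=6, k=5, seed=0):
    """adj: node -> list of neighbours; trust: (u, v) -> weight in [0, 1]."""
    rng = random.Random(seed)
    hits = Counter()
    for _ in range(n_walks):
        node = source
        for _ in range(walk_len):
            nbrs = adj.get(node, [])
            weights = [trust.get((node, v), 0.0) for v in nbrs]
            if not nbrs or sum(weights) == 0:
                break
            node = rng.choices(nbrs, weights=weights, k=1)[0]
            hits[node] += 1
    hits.pop(source, None)                    # a node does not rank itself
    return [n for n, _ in hits.most_common(k)]
```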
|
1204.5316
|
ILexicOn: toward an ECD-compliant interlingual lexical ontology
described with semantic web formalisms
|
cs.CL cs.AI
|
We are interested in bridging the world of natural language and the world of
the semantic web in particular to support natural multilingual access to the
web of data. In this paper we introduce a new type of lexical ontology called
interlingual lexical ontology (ILexicOn), which uses semantic web formalisms to
make each interlingual lexical unit class (ILUc) support the projection of its
semantic decomposition on itself. After a short overview of existing lexical
ontologies, we briefly introduce the semantic web formalisms we use. We then
present the three layered architecture of our approach: i) the interlingual
lexical meta-ontology (ILexiMOn); ii) the ILexicOn where ILUcs are formally
defined; iii) the data layer. We illustrate our approach with a standalone
ILexicOn, and introduce and explain a concise human-readable notation to
represent ILexicOns. Finally, we show how semantic web formalisms enable the
projection of a semantic decomposition on the decomposed ILUc.
|
1204.5317
|
Correction Trees as an Alternative to Turbo Codes and Low Density Parity
Check Codes
|
cs.IT math.IT
|
The rapidly improving performance of modern hardware renders convolutional
codes obsolete, and allows for the practical implementation of more
sophisticated correction codes such as low density parity check (LDPC) and
turbo codes (TC). Both are decoded by iterative algorithms, which require a
disproportionate computational effort for low channel noise. They are also
unable to correct higher noise levels that are still below the Shannon theoretical
limit. In this paper, we discuss an enhanced version of a convolutional-like
decoding paradigm which adopts very large spaces of possible system states, of
the order of $2^{64}$. Under such conditions, the traditional convolution
operation is rendered useless and needs to be replaced by a carefully designed
state transition procedure. The size of the system state space completely
changes the correction philosophy, as state collisions are virtually impossible
and the decoding procedure becomes a correction tree. The proposed decoding
algorithm is practically cost-free for low channel noise. As the channel noise
approaches the Shannon limit, it is still possible to perform correction,
although its cost increases to infinity. In many applications, the implemented
decoder can essentially outperform both LDPC and TC. This paper describes the
proposed correction paradigm and theoretically analyzes the asymptotic
correction performance. The considered encoder and decoder were verified
experimentally for the binary symmetric channel. The correction process remains
practically cost-free for channel error rates below 0.05 and 0.13 for the 1/2
and 1/4 rate codes, respectively. For the considered resource limit, the output
bit error rates reach the order of $10^{-3}$ for channel error rates 0.08 and
0.18. The proposed correction paradigm can be easily extended to other
communication channels; the appropriate generalizations are also discussed in
this study.
|
1204.5320
|
Robust Estimates of Covariance Matrices in the Large Dimensional Regime
|
cs.IT math.IT
|
This article studies the limiting behavior of a class of robust population
covariance matrix estimators, originally due to Maronna in 1976, in the regime
where both the number of available samples and the population size grow large.
Using tools from random matrix theory, we prove that, for sample vectors made
of independent entries having some moment conditions, the difference between
the sample covariance matrix and (a scaled version of) such robust estimator
tends to zero in spectral norm, almost surely. This result can be applied to
various statistical methods arising from random matrix theory that can be made
robust without altering their first order behavior.
|
1204.5345
|
Time-dependent wave selection for information processing in excitable
media
|
nlin.PS cs.CL
|
We demonstrate an improved technique for implementing logic circuits in
light-sensitive chemical excitable media. The technique makes use of the
constant-speed propagation of waves along defined channels in an excitable
medium based on the Belousov-Zhabotinsky reaction, along with the mutual
annihilation of colliding waves. What distinguishes this work from previous
work in this area is that regions where channels meet at a junction can
periodically alternate between permitting the propagation of waves and blocking
them. These valve-like areas are used to select waves based on the length of
time that it takes waves to propagate from one valve to another. In an
experimental implementation, the channels which make up the circuit layout are
projected by a digital projector connected to a computer. Excitable channels
are projected as dark areas, unexcitable regions as light areas. Valves
alternate between dark and light: every valve has the same period and phase,
with a 50% duty cycle. This scheme can be used to make logic gates based on
combinations of OR and AND-NOT operations, with few geometrical constraints.
Because there are few geometrical constraints, compact circuits can be
implemented. Experimental results from an implementation of a 4-bit input,
2-bit output integer square root circuit are given. This is the most complex
logic circuit that has been implemented in BZ excitable media to date.
|
1204.5347
|
Analysis-based sparse reconstruction with synthesis-based solvers
|
cs.IT math.IT
|
Analysis-based reconstruction has recently been introduced as an alternative
to the well-known synthesis sparsity model used in a variety of signal
processing areas. In this paper we convert the analysis exact-sparse
reconstruction problem to an equivalent synthesis recovery problem with a set
of additional constraints. We are therefore able to use existing
synthesis-based algorithms for analysis-based exact-sparse recovery. We call
this the Analysis-By-Synthesis (ABS) approach. We evaluate our proposed
approach by comparing it against the recent Greedy Analysis Pursuit (GAP)
analysis-based recovery algorithm. The results show that our approach is a
viable option for analysis-based reconstruction, while at the same time
allowing many algorithms that have been developed for synthesis reconstruction
to be directly applied for analysis reconstruction as well.
|
1204.5357
|
Learning AMP Chain Graphs under Faithfulness
|
stat.ML cs.AI math.ST stat.TH
|
This paper deals with chain graphs under the alternative
Andersson-Madigan-Perlman (AMP) interpretation. In particular, we present a
constraint-based algorithm for learning an AMP chain graph that a given probability
distribution is faithful to. We also show that the extension of Meek's
conjecture to AMP chain graphs does not hold, which compromises the development
of efficient and correct score+search learning algorithms under assumptions
weaker than faithfulness.
|
1204.5369
|
Ecological Evaluation of Persuasive Messages Using Google AdWords
|
cs.CL cs.SI
|
In recent years there has been a growing interest in crowdsourcing
methodologies to be used in experimental research for NLP tasks. In particular,
evaluation of systems and theories about persuasion is difficult to accommodate
within existing frameworks. In this paper we present a new cheap and fast
methodology that allows fast experiment building and evaluation with
fully-automated analysis at a low cost. The central idea is exploiting existing
commercial tools for advertising on the web, such as Google AdWords, to measure
message impact in an ecological setting. The paper includes a description of
the approach, tips for how to use AdWords for scientific research, and results
of pilot experiments on the impact of affective text variations which confirm
the effectiveness of the approach.
|
1204.5373
|
TopSig: Topology Preserving Document Signatures
|
cs.IR
|
Performance comparisons between File Signatures and Inverted Files for text
retrieval have previously shown several significant shortcomings of file
signatures relative to inverted files. The inverted file approach underpins
most state-of-the-art search engine algorithms, such as Language and
Probabilistic models. It has been widely accepted that traditional file
signatures are inferior alternatives to inverted files. This paper describes
TopSig, a new approach to the construction of file signatures. Many advances in
semantic hashing and dimensionality reduction have been made in recent times,
but these were not so far linked to general purpose, signature file based,
search engines. This paper introduces a different signature file approach that
builds upon and extends these recent advances. We are able to demonstrate
significant improvements in the performance of signature file based indexing
and retrieval, performance that is comparable to that of state of the art
inverted file based systems, including Language models and BM25. These findings
suggest that file signatures offer a viable alternative to inverted files in
suitable settings and from the theoretical perspective it positions the file
signatures model in the class of Vector Space retrieval models.
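For illustration only, a minimal random-projection binary signature index in the spirit of the semantic-hashing ideas this abstract builds on; it is not TopSig's actual construction:

```python
# For illustration only: a minimal random-projection binary signature index
# in the spirit of the semantic-hashing ideas the abstract builds on; this is
# NOT TopSig's actual construction.
import numpy as np

def make_signatures(term_doc, n_bits=64, seed=0):
    """term_doc: (n_docs, vocab) term-weight matrix -> (n_docs, n_bits) bits."""
    rng = np.random.default_rng(seed)
    projection = rng.standard_normal((term_doc.shape[1], n_bits))
    return (term_doc @ projection > 0).astype(np.uint8)

def hamming_rank(query_sig, doc_sigs):
    """Rank documents by Hamming distance between signatures (smallest first)."""
    return np.argsort((doc_sigs != query_sig).sum(axis=1))

# Toy usage: 5 documents over a 1000-term vocabulary, query = document 0.
docs = np.random.rand(5, 1000)
sigs = make_signatures(docs)
print(hamming_rank(sigs[0], sigs))
```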
|
1204.5383
|
Climbing on Pyramids
|
math.OC cs.SY
|
A new approach is proposed for finding the "best cut" in a hierarchy of
partitions by energy minimization. Said energy must be "climbing", i.e., it must
be hierarchically and scale increasing. It encompasses separable energies and
those composed under supremum.
|
1204.5388
|
Track estimation with binary derivative observations
|
cs.IT math.IT
|
We focus in this paper on the estimation of a target trajectory defined either
by a time-constant parameter in a simple stochastic process or by a random
walk, with binary observations. The binary observations come from binary
derivative sensors, that is, sensors indicating whether the target is getting
closer or moving away. Such a binary observation has a temporal property that
will be used to ensure the quality of a maximum-likelihood estimation, through
a single-index model or classification for the constant-velocity movement. In
the second part of this paper we present a new algorithm for target tracking
within a binary sensor network when the target trajectory is assumed to be
modelled by a random walk. For a given target, this algorithm provides an
estimation of its velocity and its position. The greatest improvements are made
through a position correction and velocity analysis.
|
1204.5399
|
A Consensual Linear Opinion Pool
|
cs.MA math.ST stat.TH
|
An important question when eliciting opinions from experts is how to
aggregate the reported opinions. In this paper, we propose a pooling method to
aggregate expert opinions. Intuitively, it works as if the experts were
continuously updating their opinions in order to accommodate the expertise of
others. Each updated opinion takes the form of a linear opinion pool, where the
weight that an expert assigns to a peer's opinion is inversely related to the
distance between their opinions. In other words, experts are assumed to prefer
opinions that are close to their own opinions. We prove that such an updating
process leads to consensus, \textit{i.e.}, the experts all converge towards the
same opinion. Further, we show that if rational experts are rewarded using the
quadratic scoring rule, then the assumption that they prefer opinions that are
close to their own opinions follows naturally. We empirically demonstrate the
efficacy of the proposed method using real-world data.
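As an illustration of the updating scheme described above, the following is a minimal Python sketch. The particular inverse-distance weight form 1/(1 + d) is an assumption (the abstract only states that weights are inversely related to distance), and the function name and parameters are illustrative.

```python
import numpy as np

def consensual_pool(opinions, n_iter=500):
    """Repeatedly replace each expert's opinion by a linear pool of all
    opinions, weighting a peer's opinion inversely to its distance from
    the expert's own opinion. The weight form 1/(1 + d) is an assumption."""
    x = np.asarray(opinions, dtype=float)
    for _ in range(n_iter):
        new_x = np.empty_like(x)
        for i in range(len(x)):
            d = np.linalg.norm(x - x[i], axis=1)   # distance to each peer
            w = 1.0 / (1.0 + d)                    # closer opinions weigh more
            new_x[i] = (w / w.sum()) @ x           # normalised linear pool
        x = new_x
    return x

# Three experts reporting a probability; all rows converge to one value.
print(consensual_pool([[0.2], [0.5], [0.9]]))
```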
|
1204.5416
|
A New Approach of Improving CFA Image for Digital Camera's
|
cs.CV
|
This paper works directly towards improving image quality for digital cameras
and other visual capture products. The authors clearly define the problems that
occur in the CFA image. A methodology for removing noise is discussed, covering
color correction and color balancing of the image. The authors also propose a
new methodology that applies the denoising process before demosaicking to
improve the quality of the CFA image, which is more efficient than previously
defined approaches. The demosaicking process for best reproducing the colors in
the image is also discussed.
|
1204.5431
|
Robust Head Pose Estimation Using Contourlet Transform
|
cs.CV
|
Estimating pose of the head is an important preprocessing step in many
pattern recognition and computer vision systems such as face recognition. Since
the performance of the face recognition systems is greatly affected by the
poses of the face, how to estimate the accurate pose of the face in human face
image is still a challenging problem. In this paper, we present a novel
method for head pose estimation. To enhance the efficiency of the estimation we
use the contourlet transform for feature extraction. The contourlet transform is
a multi-resolution, multi-directional transform. In order to reduce the feature
space dimension and obtain appropriate features we use LDA (Linear Discriminant
Analysis) and PCA (Principal Component Analysis) to remove inefficient features.
Then, we apply different classifiers such as k-nearest neighbor (kNN) and
minimum distance. We use the publicly available FERET database to evaluate the
performance of the proposed method. Simulation results indicate the superior
robustness of the proposed method.
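A minimal sketch of the classification stage only, assuming contourlet features have already been extracted into a feature matrix; the contourlet step itself is not shown, the data below are synthetic stand-ins, and the dimensions and classifier settings are illustrative choices rather than the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))      # stand-in for contourlet coefficients
y = rng.integers(0, 5, size=200)     # stand-in pose classes (e.g. 5 yaw bins)

# PCA removes low-variance directions, LDA keeps class-discriminative ones,
# then a k-nearest-neighbour classifier assigns the pose.
model = make_pipeline(PCA(n_components=50),
                      LinearDiscriminantAnalysis(n_components=4),
                      KNeighborsClassifier(n_neighbors=3))
print(cross_val_score(model, X, y, cv=5).mean())
```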
|
1204.5467
|
A new upper bound on the query complexity for testing generalized
Reed-Muller codes
|
cs.IT math.IT
|
Over a finite field $\F_q$ the $(n,d,q)$-Reed-Muller code is the code given
by evaluations of $n$-variate polynomials of total degree at most $d$ on all
points (of $\F_q^n$). The task of testing if a function $f:\F_q^n \to \F_q$ is
close to a codeword of an $(n,d,q)$-Reed-Muller code has been of central
interest in complexity theory and property testing. The query complexity of
this task is the minimal number of queries that a tester can make (minimum over
all testers of the maximum number of queries over all random choices) while
accepting all Reed-Muller codewords and rejecting words that are $\delta$-far
from the code with probability $\Omega(\delta)$. (In this work we allow the
constant in the $\Omega$ to depend on $d$.) In this work we give a new upper
bound of $(c q)^{(d+1)/q}$ on the query complexity, where $c$ is a universal
constant. In the process we also give new upper bounds on the "spanning weight"
of the dual of the Reed-Muller code (which is also a Reed-Muller code). The
spanning weight of a code is the smallest integer $w$ such that codewords of
Hamming weight at most $w$ span the code.
|
1204.5513
|
The impulse cutoff an entropy functional measure on trajectories of
Markov diffusion process integrating in information path functional
|
nlin.AO cs.IT math.IT math.OC math.PR
|
The impulses cutting the entropy functional (EF) measure on trajectories of a
Markov diffusion process integrate an information path functional (IPF)
composed of discrete information Bits extracted from an observed random
process. Each cut retains the memory of the cut entropy, which provides both a
reduction of the process entropy and a discrete unit of the cut entropy, a Bit.
Consequently, information is memorized entropy cut from random observations of
the interacting process. The origin of information is associated with the
anatomy of impulse creation, which both cuts entropy and lets the random
process generate information under the cut. Memory of the impulse cutting time
interval freezes the dynamics of the observed events in information processes.
The additive functional of the diffusion process defines the EF, reducing it to
a regular integral functional. Compared to the Shannon entropy measure of a
random state, the cutting process on separated states decreases the quantity of
information concealed in the states' correlation, which holds hidden process
information. The cutoffs of the infinite-dimensional process integrate finite
information in the IPF, whose information approaches the EF, which restricts
the process's maximal information. Within the reversible microprocess of the
impulse, conjugated entropy increments are entangled up to the cutoff,
converting entropy into irreversible information. Extracting the maximum of the
minimal impulse information and transferring minimal entropy between impulses
implement a maxmin-minimax principle of optimal conversion of process entropy
into information. Macroprocess extremals integrate the entropy of the
microprocess and the cutoff information of the impulses in the IPF of the
informational physical process. The IPF measures the Feller kernel information.
Estimation of the extracted information confirms the nonadditivity of the
EF-measured process increments.
|
1204.5547
|
Automorphism groups of Grassmann codes
|
cs.IT math.AG math.IT
|
We use a theorem of Chow (1949) on line-preserving bijections of
Grassmannians to determine the automorphism group of Grassmann codes. Further,
we analyze the automorphisms of the big cell of a Grassmannian and then use it
to settle an open question of Beelen et al. (2010) concerning the permutation
automorphism groups of affine Grassmann codes. Finally, we prove an analogue of
Chow's theorem for the case of Schubert divisors in Grassmannians and then use
it to determine the automorphism group of linear codes associated to such
Schubert divisors. In the course of this work, we also give an alternative
short proof of MacWilliams theorem concerning the equivalence of linear codes
and a characterization of maximal linear subspaces of Schubert divisors in
Grassmannians.
|
1204.5580
|
Cusp Points in the Parameter Space of Degenerate 3-RPR Planar Parallel
Manipulators
|
cs.RO
|
This paper investigates the conditions in the design parameter space for the
existence and distribution of the cusp locus for planar parallel manipulators.
Cusp points make possible non-singular assembly-mode changing motion, which
increases the maximum singularity-free workspace. An accurate algorithm for their
determination is proposed, correcting some imprecisions of previously existing
algorithms. This is combined with methods of Cylindrical Algebraic Decomposition,
Gr\"obner bases and Discriminant Varieties in order to partition the parameter
space into cells with constant number of cusp points. These algorithms will
allow us to classify a family of degenerate 3-RPR manipulators.
|
1204.5602
|
The persistence of social signatures in human communication
|
physics.soc-ph cs.SI
|
The social network maintained by a focal individual, or ego, is intrinsically
dynamic and typically exhibits some turnover in membership over time as
personal circumstances change. However, the consequences of such changes on the
distribution of an ego's network ties are not well understood. Here we use a
unique 18-month data set that combines mobile phone calls and survey data to
track changes in the ego networks and communication patterns of students making
the transition from school to university or work. Our analysis reveals that
individuals display a distinctive and robust social signature, captured by how
interactions are distributed across different alters. Notably, for a given ego,
these social signatures tend to persist over time, despite considerable
turnover in the identity of alters in the ego network. Thus as new network
members are added, some old network members are either replaced or receive
fewer calls, preserving the overall distribution of calls across network
members. This is likely to reflect the consequences of finite resources such as
the time available for communication, the cognitive and emotional effort
required to sustain close relationships, and the ability to make emotional
investments.
|
1204.5635
|
Locally Most Powerful Invariant Tests for Correlation and Sphericity of
Gaussian Vectors
|
cs.IT math.IT stat.OT
|
In this paper we study the existence of locally most powerful invariant tests
(LMPIT) for the problem of testing the covariance structure of a set of
Gaussian random vectors. The LMPIT is the optimal test for the case of close
hypotheses, among those satisfying the invariances of the problem, and in
practical scenarios can provide better performance than the typically used
generalized likelihood ratio test (GLRT). The derivation of the LMPIT usually
requires one to find the maximal invariant statistic for the detection problem
and then derive its distribution under both hypotheses, which in general is a
rather involved procedure. As an alternative, Wijsman's theorem provides the
ratio of the maximal invariant densities without even finding an explicit
expression for the maximal invariant. We first consider the problem of testing
whether a set of $N$-dimensional Gaussian random vectors are uncorrelated or
not, and show that the LMPIT is given by the Frobenius norm of the sample
coherence matrix. Second, we study the case in which the vectors under the null
hypothesis are uncorrelated and identically distributed, that is, the
sphericity test for Gaussian vectors, for which we show that the LMPIT is given
by the Frobenius norm of a normalized version of the sample covariance matrix.
Finally, some numerical examples illustrate the performance of the proposed
tests, which provide better results than their GLRT counterparts.
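A minimal sketch of the uncorrelatedness test statistic described above, the Frobenius norm of the sample coherence (normalised covariance) matrix; the threshold calibration is omitted, the data are synthetic, and using the squared norm is an equivalent monotone choice made for convenience.

```python
import numpy as np

def lmpit_statistic(X):
    """X: (M, N) array of M observations of an N-dimensional Gaussian vector."""
    R = np.cov(X, rowvar=False)              # sample covariance (N x N)
    d = np.sqrt(np.diag(R))
    C = R / np.outer(d, d)                   # sample coherence matrix
    return np.linalg.norm(C, 'fro') ** 2     # (squared) Frobenius-norm statistic

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))               # H0: uncorrelated components
print(lmpit_statistic(X))                    # close to N (= 4) under H0
```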
|
1204.5636
|
Social Networks with Competing Products
|
cs.SI physics.soc-ph
|
We introduce a new threshold model of social networks, in which the nodes
influenced by their neighbours can adopt one out of several alternatives. We
characterize social networks for which adoption of a product by the whole
network is possible (respectively necessary) and the ones for which a unique
outcome is guaranteed. These characterizations directly yield polynomial time
algorithms that allow us to determine whether a given social network satisfies
one of the above properties.
We also study algorithmic questions for networks without unique outcomes. We
show that the problem of determining whether a final network exists in which
all nodes adopted some product is NP-complete. In turn, the problems of
determining whether a given node adopts some (respectively, a given) product in
some (respectively, all) network(s) are either co-NP complete or can be solved
in polynomial time.
Further, we show that the problem of computing the minimum possible spread of
a product is NP-hard to approximate with an approximation ratio better than
$\Omega(n)$, in contrast to the maximum spread, which is efficiently
computable. Finally, we clarify that some of the above problems can be solved
in polynomial time when there are only two products.
|
1204.5648
|
Taxonomy and synthesis of Web services querying languages
|
cs.DB
|
Most work on Web services has focused on the discovery, composition and
selection processes of these kinds of services. A few other works have
addressed how to represent Web services search queries. However, these queries
cannot be processed with a high level of performance without being adequately
represented first. To this end, different query languages have been designed.
Even so, in the absence of a standard, these languages are quite diverse. Their
diversity makes it difficult to choose the most suitable language. In fact,
such a language should be able to cover all types of client preferences or
requirements, such as functional, non-functional, temporal or even specific
constraints, as is the case for geographical or spatial constraints, and meet
clients' needs and preferences so as to provide them with the best answer. It
must also be simple, impose no restrictions, or at least not too many, in terms
of the prior knowledge required to use it, and provide a formal or semi-formal
query representation to support automatic post-processing. A comparative study
is finally carried out to reveal the advantages and limitations of various
existing languages in this context. It is a synthesis of this category of
languages, discussing their performance level and their capability to respond
to various needs related to Web services search and discovery. The criteria
identified at this stage may, in our opinion, constitute the main prerequisites
that a language should satisfy to be considered ideal or to become a future
standard.
|
1204.5652
|
ML Decoding Complexity Reduction in STBCs Using Time-Orthogonal Pulse
Shaping
|
cs.IT math.IT
|
Motivated by the recent developments in the Space Shift Keying (SSK) and
Spatial Modulation (SM) systems which employ Time-Orthogonal Pulse Shaping
(TOPS) filters to achieve transmit diversity gains, we propose TOPS for
Space-Time Block Codes (STBC). We show that any STBC whose set of weight
matrices partitions into P subsets under the equivalence relation termed the
Common Support Relation can be made P-group decodable by properly employing
TOPS waveforms across space and time. Furthermore, by considering some of the
well known STBCs in the literature we show that the order of their Maximum
Likelihood decoding complexity can be greatly reduced by the application of
TOPS.
|
1204.5661
|
Transmission of distress in a bank credit network
|
q-fin.RM cs.AI
|
The European sovereign debt crisis has impaired many European banks. The
distress on the European banks may transmit worldwide, and result in a
large-scale knock-on default of financial institutions. This study presents a
computer simulation model to analyze the risk of insolvency of banks and
defaults in a bank credit network. Simulation experiments reproduce the
knock-on default, and quantify the impact which is imposed on the number of
bank defaults by heterogeneity of the bank credit network, the equity capital
ratio of banks, and the capital surcharge on big banks.
|
1204.5663
|
Cognitive Interference Channels with Confidential Messages under
Randomness Constraint
|
cs.IT math.IT
|
The cognitive interference channel with confidential messages (CICC) proposed
by Liang et al. is investigated. When security is considered in coding
systems, it is well known that the sender needs to use stochastic encoding to
prevent information about the transmitted confidential message from being leaked
to an eavesdropper. For the CICC, the trade-off between the rate of the random
number to realize the stochastic encoding and the communication rates is
investigated, and the optimal trade-off is completely characterized.
|
1204.5677
|
Automatic Generation of C-code or PLD Circuits under SFC Graphical
Environment
|
cs.SY
|
This paper proposes a framework for automatic development of control systems
from a high-level specification based on the Grafcet formalism. Grafcet, or
Sequential Function Charts (SFC), is a special class of Petri Nets and is
becoming the standard representation for sequential control systems. The
proposed framework accepts a graphical (through ISaGRAPH) or textual
behavioural specification of the control system to be implemented. It follows
the usual procedure in software specification: the first step is to formally
validate the initial specification. Then the initial specification is
translated through automated processes into an implementation. At the moment
there are two possible output languages: C and Palasm [1]. The target processors
for the C code are microcontroller-based systems that require extended
time constraints and access to external peripherals. The goal of including PLDs
is the possibility of automatically designing mixed hardware and software systems.
|
1204.5703
|
A Simple Proof of Threshold Saturation for Coupled Scalar Recursions
|
cs.IT math.IT
|
Low-density parity-check (LDPC) convolutional codes (or spatially-coupled
codes) have been shown to approach capacity on the binary erasure channel (BEC)
and binary-input memoryless symmetric channels. The mechanism behind this
spectacular performance is the threshold saturation phenomenon, which is
characterized by the belief-propagation threshold of the spatially-coupled
ensemble increasing to an intrinsic noise threshold defined by the uncoupled
system.
In this paper, we present a simple proof of threshold saturation that applies
to a broad class of coupled scalar recursions. The conditions of the theorem
are verified for the density-evolution (DE) equations of irregular LDPC codes
on the BEC, a class of generalized LDPC codes, and the joint iterative decoding
of LDPC codes on intersymbol-interference channels with erasure noise. Our
approach is based on potential functions and was motivated mainly by the ideas
of Takeuchi et al. The resulting proof is surprisingly simple when compared to
previous methods.
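The uncoupled scalar recursion referred to above can be illustrated with the standard density-evolution equation for a regular (dv, dc) LDPC ensemble on the BEC; this sketch finds the BP threshold by bisection. The ensemble choice, iteration counts and tolerances are illustrative, and spatial coupling itself is not modelled here.

```python
# Density evolution for a regular (dv, dc) LDPC ensemble on the BEC:
#   x_{l+1} = eps * (1 - (1 - x_l)**(dc - 1))**(dv - 1)
def de_converges(eps, dv=3, dc=6, iters=2000, tol=1e-12):
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def bp_threshold(dv=3, dc=6):
    lo, hi = 0.0, 1.0
    for _ in range(40):                      # bisection on the erasure rate
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if de_converges(mid, dv, dc) else (lo, mid)
    return lo

print(bp_threshold())   # about 0.4294 for the (3,6) ensemble
```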
|
1204.5707
|
Analysis of MMSE Estimation for Compressive Sensing of Block Sparse
Signals
|
cs.IT math.IT
|
Minimum mean square error (MMSE) estimation of block sparse signals from
noisy linear measurements is considered. Unlike in the standard compressive
sensing setup where the non-zero entries of the signal are independently and
uniformly distributed across the vector of interest, the information bearing
components appear here in large mutually dependent clusters. Using the replica
method from statistical physics, we derive a simple closed-form solution for
the MMSE obtained by the optimum estimator. We show that the MMSE is a version
of the Tse-Hanly formula with system load and MSE scaled by parameters that
depend on the sparsity pattern of the source. It turns out that this is equal
to the MSE obtained by a genie-aided MMSE estimator which is informed in
advance about the exact locations of the non-zero blocks. The asymptotic
results obtained by the non-rigorous replica method are found to have an
excellent agreement with finite sized numerical simulations.
|
1204.5710
|
Information Masking and Amplification: The Source Coding Setting
|
cs.IT math.IT
|
The complementary problems of masking and amplifying channel state
information in the Gel'fand-Pinsker channel have recently been solved by Merhav
and Shamai, and Kim et al., respectively. In this paper, we study a related
source coding problem. Specifically, we consider the two-encoder source coding
setting where one source is to be amplified, while the other source is to be
masked. In general, there is a tension between these two objectives which is
characterized by the amplification-masking tradeoff. In this paper, we give a
single-letter description of this tradeoff.
We apply this result, together with a recent theorem by Courtade and Weissman
on multiterminal source coding, to solve a fundamental entropy characterization
problem.
|
1204.5717
|
Multi-agent Path Planning and Network Flow
|
cs.DS cs.RO cs.SY
|
This paper connects multi-agent path planning on graphs (roadmaps) to network
flow problems, showing that the former can be reduced to the latter, therefore
enabling the application of combinatorial network flow algorithms, as well as
general linear program techniques, to multi-agent path planning problems on
graphs. Exploiting this connection, we show that when the goals are permutation
invariant, the problem always has a feasible solution path set with a longest
finish time of no more than $n + V - 1$ steps, in which $n$ is the number of
agents and $V$ is the number of vertices of the underlying graph. We then give
a complete algorithm that finds such a solution in $O(nVE)$ time, with $E$
being the number of edges of the graph. Taking a further step, we study time
and distance optimality of the feasible solutions, show that they have a
pairwise Pareto optimal structure, and again provide efficient algorithms for
optimizing two of these practical objectives.
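A rough sketch of the flow formulation for the permutation-invariant case, assuming networkx is available: feasibility within T steps is checked by max flow on a time-expanded graph. Vertex capacities are enforced by node splitting, but the gadget that forbids two agents swapping along one edge is omitted, so this is an illustration of the reduction idea rather than the paper's full algorithm.

```python
import networkx as nx

def feasible_within(graph_edges, vertices, starts, goals, T):
    G = nx.DiGraph()
    for t in range(T + 1):
        for v in vertices:
            G.add_edge(('in', v, t), ('out', v, t), capacity=1)      # vertex cap
    for t in range(T):
        for v in vertices:
            G.add_edge(('out', v, t), ('in', v, t + 1), capacity=1)  # wait move
        for u, v in graph_edges:
            G.add_edge(('out', u, t), ('in', v, t + 1), capacity=1)
            G.add_edge(('out', v, t), ('in', u, t + 1), capacity=1)
    for s in starts:
        G.add_edge('src', ('in', s, 0), capacity=1)
    for g in goals:
        G.add_edge(('out', g, T), 'sink', capacity=1)
    flow, _ = nx.maximum_flow(G, 'src', 'sink')
    return flow == len(starts)

# Path graph 1-2-3-4: two agents starting at 1,2 must occupy {3,4}.
print(feasible_within([(1, 2), (2, 3), (3, 4)], [1, 2, 3, 4],
                      starts=[1, 2], goals=[3, 4], T=3))
```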
|
1204.5721
|
Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit
Problems
|
cs.LG stat.ML
|
Multi-armed bandit problems are the most basic examples of sequential
decision problems with an exploration-exploitation trade-off. This is the
balance between staying with the option that gave highest payoffs in the past
and exploring new options that might give higher payoffs in the future.
Although the study of bandit problems dates back to the Thirties,
exploration-exploitation trade-offs arise in several modern applications, such
as ad placement, website optimization, and packet routing. Mathematically, a
multi-armed bandit is defined by the payoff process associated with each
option. In this survey, we focus on two extreme cases in which the analysis of
regret is particularly simple and elegant: i.i.d. payoffs and adversarial
payoffs. Besides the basic setting of finitely many actions, we also analyze
some of the most important variants and extensions, such as the contextual
bandit model.
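For the i.i.d. (stochastic) setting surveyed above, here is a compact sketch of the classical UCB1 index policy on Bernoulli arms; the arm means, horizon and the simple regret accounting against the best arm's expected reward are illustrative choices, not taken from the survey.

```python
import math, random

def ucb1(arm_means, horizon, seed=0):
    random.seed(seed)
    K = len(arm_means)
    counts, sums = [0] * K, [0.0] * K
    reward = 0.0
    for t in range(1, horizon + 1):
        if t <= K:
            a = t - 1                                   # play each arm once
        else:
            a = max(range(K), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        r = 1.0 if random.random() < arm_means[a] else 0.0
        counts[a] += 1; sums[a] += r; reward += r
    # realised shortfall against always playing the best arm in expectation
    return max(arm_means) * horizon - reward

print(ucb1([0.9, 0.8, 0.5], horizon=10000))
```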
|
1204.5780
|
Friendship, Altruism, and Reward Sharing in Stable Matching and
Contribution Games
|
cs.GT cs.MA
|
We study stable matching problems in networks where players are embedded in a
social context, and may incorporate friendship relations or altruism into their
decisions. Each player is a node in a social network and strives to form a good
match with a neighboring player. We consider the existence, computation, and
inefficiency of stable matchings from which no pair of players wants to
deviate. When the benefits from a match are the same for both players, we show
that incorporating the well-being of other players into their matching
decisions significantly decreases the price of stability, while the price of
anarchy remains unaffected. Furthermore, a good stable matching achieving the
price of stability bound always exists and can be reached in polynomial time.
We extend these results to more general matching rewards, when players matched
to each other may receive different utilities from the match. For this more
general case, we show that incorporating social context (i.e., "caring about
your friends") can make an even larger difference, and greatly reduce the price
of anarchy. We show a variety of existence results, and present upper and lower
bounds on the prices of anarchy and stability for various matching utility
structures. Finally, we extend most of our results to network contribution
games, in which players can decide how much effort to contribute to each
incident edge, instead of simply choosing a single node to match with.
|
1204.5796
|
Proceedings Third Workshop on Formal Aspects of Virtual Organisations
|
cs.MA
|
This volume contains the proceedings of the 3rd International Workshop on
Formal Aspects of Virtual Organisations (FAVO 2011). The workshop was held in
Sao Paulo, Brazil on October 18th, 2011 as a satellite event to the 12th IFIP
Working Conference on Virtual Enterprises (PRO-VE'11). The FAVO workshop aims
to provide a forum for researchers interested in the application of formal
techniques in the design and analysis of Virtual Organisations.
|
1204.5802
|
Quantitative Concept Analysis
|
cs.LG math.CT
|
Formal Concept Analysis (FCA) begins from a context, given as a binary
relation between some objects and some attributes, and derives a lattice of
concepts, where each concept is given as a set of objects and a set of
attributes, such that the first set consists of all objects that satisfy all
attributes in the second, and vice versa. Many applications, though, provide
contexts with quantitative information, telling not just whether an object
satisfies an attribute, but also quantifying this satisfaction. Contexts in
this form arise as rating matrices in recommender systems, as occurrence
matrices in text analysis, as pixel intensity matrices in digital image
processing, etc. Such applications have attracted a lot of attention, and
several numeric extensions of FCA have been proposed. We propose the framework
of proximity sets (proxets), which subsume partially ordered sets (posets) as
well as metric spaces. One feature of this approach is that it extracts from
quantified contexts quantified concepts, and thus allows full use of the
available information. Another feature is that the categorical approach allows
analyzing any universal properties that the classical FCA and the new versions
may have, and thus provides structural guidance for aligning and combining the
approaches.
|
1204.5805
|
Intelligent Automated Diagnosis of Client Device Bottlenecks in Private
Clouds
|
cs.NI cs.AI
|
We present an automated solution for rapid diagnosis of client device
problems in private cloud environments: the Intelligent Automated Client
Diagnostic (IACD) system. Clients are diagnosed with the aid of Transmission
Control Protocol (TCP) packet traces, by (i) observation of anomalous artifacts
occurring as a result of each fault and (ii) subsequent use of the inference
capabilities of soft-margin Support Vector Machine (SVM) classifiers. The IACD
system features a modular design and is extendible to new faults, with
detection capability unaffected by the TCP variant used at the client.
Experimental evaluation of the IACD system in a controlled environment
demonstrated an overall diagnostic accuracy of 98%.
|
1204.5810
|
Geometry of Online Packing Linear Programs
|
cs.DS cs.LG
|
We consider packing LP's with $m$ rows where all constraint coefficients are
normalized to be in the unit interval. The $n$ columns arrive in random order and
the goal is to set the corresponding decision variables irrevocably when they
arrive so as to obtain a feasible solution maximizing the expected reward.
Previous $(1 - \epsilon)$-competitive algorithms require the right-hand side of
the LP to be $\Omega((m/\epsilon^2) \log (n/\epsilon))$, a bound that worsens with
the number of columns and rows. However, the dependence on the number of
columns is not required in the single-row case and known lower bounds for the
general case are also independent of $n$.
Our goal is to understand whether the dependence on $n$ is required in the
multi-row case, making it fundamentally harder than the single-row version. We
refute this by exhibiting an algorithm which is $(1 - \epsilon)$-competitive as
long as the right-hand sides are $\Omega((m^2/\epsilon^2) \log (m/\epsilon))$. Our
techniques refine previous PAC-learning based approaches which interpret the
online decisions as linear classifications of the columns based on sampled dual
prices. The key ingredient of our improvement comes from a non-standard
covering argument together with the realization that only when the columns of
the LP belong to few 1-d subspaces can we obtain small such covers; bounding
the size of the cover constructed also relies on the geometry of linear
classifiers. General packing LP's are handled by perturbing the input columns,
which can be seen as making the learning problem more robust.
|
1204.5852
|
Context-sensitive Spelling Correction Using Google Web 1T 5-Gram
Information
|
cs.CL
|
In computing, spell checking is the process of detecting and sometimes
providing spelling suggestions for incorrectly spelled words in a text.
Basically, a spell checker is a computer program that uses a dictionary of
words to perform spell checking. The bigger the dictionary is, the higher is
the error detection rate. Because spell checkers are based on regular
dictionaries, they suffer from the data sparseness problem as they cannot capture
large vocabulary of words including proper names, domain-specific terms,
technical jargons, special acronyms, and terminologies. As a result, they
exhibit low error detection rate and often fail to catch major errors in the
text. This paper proposes a new context-sensitive spelling correction method
for detecting and correcting non-word and real-word errors in digital text
documents. The approach hinges on data statistics from the Google Web 1T 5-gram
data set which consists of a big volume of n-gram word sequences, extracted
from the World Wide Web. Fundamentally, the proposed method comprises an error
detector that detects misspellings, a candidate spellings generator based on a
character 2-gram model that generates correction suggestions, and an error
corrector that performs contextual error correction. Experiments conducted on a
set of text documents from different domains and containing misspellings,
showed an outstanding spelling error correction rate and a drastic reduction of
both non-word and real-word errors. In a further study, the proposed algorithm
is to be parallelized so as to lower the computational cost of the error
detection and correction processes.
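A toy sketch of the pipeline described above: detect an out-of-vocabulary word, generate candidate spellings by character 2-gram overlap with a lexicon, and rank candidates by the frequency of the surrounding word n-gram. The tiny lexicon and trigram counts stand in for the Google Web 1T 5-gram data, only non-word errors are handled here, and all names are hypothetical.

```python
def char_bigrams(w):
    w = f"#{w}#"
    return {w[i:i + 2] for i in range(len(w) - 1)}

LEXICON = {"form", "from", "farm", "fort"}
TRIGRAM_COUNTS = {("went", "from", "home"): 930, ("went", "form", "home"): 4}

def correct(prev_word, word, next_word):
    if word in LEXICON:
        return word                                   # no non-word error
    bg = char_bigrams(word)
    candidates = sorted(LEXICON,
                        key=lambda c: len(bg & char_bigrams(c)), reverse=True)
    # keep the best-overlapping candidates, then rank by contextual counts
    return max(candidates[:3],
               key=lambda c: TRIGRAM_COUNTS.get((prev_word, c, next_word), 0))

print(correct("went", "frmo", "home"))   # -> "from"
```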
|
1204.5859
|
On the Complexity of Finding Second-Best Abductive Explanations
|
cs.LO cs.AI
|
While looking for abductive explanations of a given set of manifestations, an
ordering between possible solutions is often assumed. The complexity of
finding/verifying optimal solutions is already known. In this paper we consider
the computational complexity of finding second-best solutions. We consider
different orderings, and consider also different possible definitions of what a
second-best solution is.
|
1204.5920
|
Quantified Conditional Logics are Fragments of HOL
|
cs.AI cs.LO
|
A semantic embedding of (constant domain) quantified conditional logic in
classical higher-order logic is presented.
|
1204.5958
|
Sparse Signal Processing with Frame Theory
|
math.FA cs.IT math.IT
|
Many emerging applications involve sparse signals, and their processing is a
subject of active research. We desire a large class of sensing matrices which
allow the user to discern important properties of the measured sparse signal.
Of particular interest are matrices with the restricted isometry property
(RIP). RIP matrices are known to enable efficient and stable reconstruction of
sufficiently sparse signals, but the deterministic construction of such
matrices has proven very difficult. In this thesis, we discuss this matrix
design problem in the context of a growing field of study known as frame
theory. In the first two chapters, we build large families of equiangular tight
frames and full spark frames, and we discuss their relationship to RIP matrices
as well as their utility in other aspects of sparse signal processing. In
Chapter 3, we pave the road to deterministic RIP matrices, evaluating various
techniques to demonstrate RIP, and making interesting connections with graph
theory and number theory. We conclude in Chapter 4 with a coherence-based
alternative to RIP, which provides near-optimal probabilistic guarantees for
various aspects of sparse signal processing while at the same time admitting a
whole host of deterministic constructions.
|
1204.5981
|
Containment, Equivalence and Coreness from CSP to QCSP and beyond
|
cs.LO cs.AI
|
The constraint satisfaction problem (CSP) and its quantified extensions,
whether without (QCSP) or with disjunction (QCSP_or), correspond naturally to
the model checking problem for three increasingly stronger fragments of
positive first-order logic. Their complexity is often studied when
parameterised by a fixed model, the so-called template.
It is a natural question to ask when two templates are equivalent, or more
generally when one "contains" another, in the sense that a satisfied instance of
the first will be necessarily satisfied in the second. One can also ask for a
smallest possible equivalent template: this is known as the core for CSP.
We recall and extend previous results on containment, equivalence and
"coreness" for QCSP_or before initiating a preliminary study of cores for QCSP
which we characterise for certain structures and which turns out to be more
elusive.
|
1204.6049
|
Channel Capacity under Sub-Nyquist Nonuniform Sampling
|
cs.IT math.IT
|
This paper investigates the effect of sub-Nyquist sampling upon the capacity
of an analog channel. The channel is assumed to be a linear time-invariant
Gaussian channel, where perfect channel knowledge is available at both the
transmitter and the receiver. We consider a general class of right-invertible
time-preserving sampling methods which include irregular nonuniform sampling,
and characterize in closed form the channel capacity achievable by this class
of sampling methods, under a sampling rate and power constraint. Our results
indicate that the optimal sampling structures extract out the set of
frequencies that exhibits the highest signal-to-noise ratio among all spectral
sets of measure equal to the sampling rate. This can be attained through
filterbank sampling with uniform sampling at each branch with possibly
different rates, or through a single branch of modulation and filtering
followed by uniform sampling. These results reveal that for a large class of
channels, employing irregular nonuniform sampling sets, while typically
complicated to realize, does not provide capacity gain over uniform sampling
sets with appropriate preprocessing. Our findings demonstrate that aliasing or
scrambling of spectral components does not provide capacity gain, which is in
contrast to the benefits obtained from random mixing in spectrum-blind
compressive sampling schemes.
|
1204.6076
|
Shortest Path Computation with No Information Leakage
|
cs.DB
|
Shortest path computation is one of the most common queries in location-based
services (LBSs). Although particularly useful, such queries raise serious
privacy concerns. Exposing to a (potentially untrusted) LBS the client's
position and her destination may reveal personal information, such as social
habits, health condition, shopping preferences, lifestyle choices, etc. The
only existing method for privacy-preserving shortest path computation follows
the obfuscation paradigm; it prevents the LBS from inferring the source and
destination of the query with a probability higher than a threshold. This
implies, however, that the LBS still deduces some information (albeit not
exact) about the client's location and her destination. In this paper we aim at
strong privacy, where the adversary learns nothing about the shortest path
query. We achieve this via established private information retrieval
techniques, which we treat as black-box building blocks. Experiments on real,
large-scale road networks assess the practicality of our schemes.
|
1204.6077
|
V-SMART-Join: A Scalable MapReduce Framework for All-Pair Similarity
Joins of Multisets and Vectors
|
cs.DB
|
This work proposes V-SMART-Join, a scalable MapReduce-based framework for
discovering all pairs of similar entities. The V-SMART-Join framework is
applicable to sets, multisets, and vectors. V-SMART-Join is motivated by the
observed skew in the underlying distributions of Internet traffic, and is a
family of 2-stage algorithms, where the first stage computes and joins the
partial results, and the second stage computes the similarity exactly for all
candidate pairs. The V-SMART-Join algorithms are very efficient and scalable in
the number of entities, as well as their cardinalities. They were up to 30
times faster than the state-of-the-art algorithm, VCL, when compared on a small
real dataset. We also established the scalability of the proposed
algorithms by running them on a dataset of realistic size, on which VCL never
managed to finish. Experiments were run using real datasets of IPs and
cookies, where each IP is represented as a multiset of cookies, and the goal is
to discover similar IPs to identify Internet proxies.
|
1204.6078
|
Distributed GraphLab: A Framework for Machine Learning in the Cloud
|
cs.DB cs.LG
|
While high-level data parallel frameworks, like MapReduce, simplify the
design and implementation of large-scale data processing systems, they do not
naturally or efficiently support many important data mining and machine
learning algorithms and can lead to inefficient learning systems. To help fill
this critical void, we introduced the GraphLab abstraction which naturally
expresses asynchronous, dynamic, graph-parallel computation while ensuring data
consistency and achieving a high degree of parallel performance in the
shared-memory setting. In this paper, we extend the GraphLab framework to the
substantially more challenging distributed setting while preserving strong data
consistency guarantees. We develop graph based extensions to pipelined locking
and data versioning to reduce network congestion and mitigate the effect of
network latency. We also introduce fault tolerance to the GraphLab abstraction
using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can
be easily implemented by exploiting the GraphLab abstraction itself. Finally,
we evaluate our distributed implementation of the GraphLab abstraction on a
large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains
over Hadoop-based implementations.
|
1204.6079
|
Learning Semantic String Transformations from Examples
|
cs.DB
|
We address the problem of performing semantic transformations on strings,
which may represent a variety of data types (or their combination) such as a
column in a relational table, time, date, currency, etc. Unlike syntactic
transformations, which are based on regular expressions and which interpret a
string as a sequence of characters, semantic transformations additionally
require exploiting the semantics of the data type represented by the string,
which may be encoded as a database of relational tables. Manually performing
such transformations on a large collection of strings is error prone and
cumbersome, while programmatic solutions are beyond the skill-set of end-users.
We present a programming by example technology that allows end-users to
automate such repetitive tasks. We describe an expressive transformation
language for semantic manipulation that combines table lookup operations and
syntactic manipulations. We then present a synthesis algorithm that can learn
all transformations in the language that are consistent with the user-provided
set of input-output examples. We have implemented this technology as an add-in
for the Microsoft Excel Spreadsheet system and have evaluated it successfully
over several benchmarks picked from various Excel help-forums.
|
1204.6080
|
Cologne: A Declarative Distributed Constraint Optimization Platform
|
cs.DB
|
This paper presents Cologne, a declarative optimization platform that enables
constraint optimization problems (COPs) to be declaratively specified and
incrementally executed in distributed systems. Cologne integrates a declarative
networking engine with an off-the-shelf constraint solver. We have developed
the Colog language that combines distributed Datalog used in declarative
networking with language constructs for specifying goals and constraints used
in COPs. Cologne uses novel query processing strategies for processing Colog
programs, by combining the use of bottom-up distributed Datalog evaluation with
top-down goal-oriented constraint solving. Using case studies based on cloud
and wireless network optimizations, we demonstrate that Cologne (1) can
flexibly support a wide range of policy-based optimizations in distributed
systems, (2) results in orders of magnitude less code compared to imperative
implementations, and (3) is highly efficient with low overhead and fast
convergence times.
|
1204.6081
|
Optimizing I/O for Big Array Analytics
|
cs.DB
|
Big array analytics is becoming indispensable in answering important
scientific and business questions. Most analysis tasks consist of multiple
steps, each making one or multiple passes over the arrays to be analyzed and
generating intermediate results. In the big data setting, I/O optimization is a
key to efficient analytics. In this paper, we develop a framework and
techniques for capturing a broad range of analysis tasks expressible in
nested-loop forms, representing them in a declarative way, and optimizing their
I/O by identifying sharing opportunities. Experiment results show that our
optimizer is capable of finding execution plans that exploit nontrivial I/O
sharing opportunities with significant savings.
|
1204.6082
|
Probabilistically Bounded Staleness for Practical Partial Quorums
|
cs.DB cs.DC
|
Data store replication results in a fundamental trade-off between operation
latency and data consistency. In this paper, we examine this trade-off in the
context of quorum-replicated data stores. Under partial, or non-strict quorum
replication, a data store waits for responses from a subset of replicas before
answering a query, without guaranteeing that read and write replica sets
intersect. As deployed in practice, these configurations provide only basic
eventual consistency guarantees, with no limit to the recency of data returned.
However, anecdotally, partial quorums are often "good enough" for practitioners
given their latency benefits. In this work, we explain why partial quorums are
regularly acceptable in practice, analyzing both the staleness of data they
return and the latency benefits they offer. We introduce Probabilistically
Bounded Staleness (PBS) consistency, which provides expected bounds on
staleness with respect to both versions and wall clock time. We derive a
closed-form solution for versioned staleness as well as model real-time
staleness for representative Dynamo-style systems under internet-scale
production workloads. Using PBS, we measure the latency-consistency trade-off
for partial quorum systems. We quantitatively demonstrate how eventually
consistent systems frequently return consistent data within tens of
milliseconds while offering significant latency benefits.
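A back-of-the-envelope sketch related to the versioned-staleness question above, assuming the write and a later read land on replica sets drawn independently and uniformly at random; the PBS model in the paper additionally accounts for write propagation delays and anti-entropy, which this toy estimate ignores.

```python
import random

def prob_stale(N, R, W, trials=100_000, seed=0):
    """Fraction of trials in which a read of R replicas misses the newest
    version written to W of the N replicas."""
    rng = random.Random(seed)
    replicas = range(N)
    stale = 0
    for _ in range(trials):
        written = set(rng.sample(replicas, W))
        read = set(rng.sample(replicas, R))
        stale += not (written & read)
    return stale / trials

# Dynamo-style N=3: a partial quorum (R=W=1) versus a strict quorum (R=W=2)
print(prob_stale(3, 1, 1))   # roughly 2/3 chance of missing the newest write
print(prob_stale(3, 2, 2))   # 0: read and write quorums always intersect
```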
|
1204.6089
|
Multi-model-based Access Control in Construction Projects
|
cs.MA cs.CE cs.CY cs.SE
|
During the execution of large scale construction projects performed by
Virtual Organizations (VO), relatively complex technical models have to be
exchanged between the VO members. For linking the trade and transfer of these
models, a so-called multi-model container format was developed. Considering the
different skills and tasks of the involved partners, it is not necessary for
them to know all the models in every technical detail. Furthermore, the
model size can lead to a delay in communication. In this paper an approach is
presented for defining model cut-outs according to the current project context.
Dynamic dependencies to the project context as well as static dependencies on
the organizational structure are mapped in a context-sensitive rule. As a
result, an approach for dynamic filtering of multi-models is obtained which
ensures, together with a filtering service, that the involved VO members get a
simplified view of complex multi-models as well as sufficient permissions
depending on their tasks.
|
1204.6090
|
Towards a Formal Model of Privacy-Sensitive Dynamic Coalitions
|
cs.MA cs.DC
|
The concept of dynamic coalitions (also virtual organizations) describes the
temporary interconnection of autonomous agents, who share information or
resources in order to achieve a common goal. Through modern technologies these
coalitions may form across company, organization and system borders. Therefore,
questions of access control and security are of vital significance for the
architectures supporting these coalitions.
In this paper, we present our first steps to reach a formal framework for
modeling and verifying the design of privacy-sensitive dynamic coalition
infrastructures and their processes. In order to do so, we extend existing
dynamic coalition modeling approaches with an access-control concept, which
manages access to information through policies. Furthermore, we consider the
processes underlying these coalitions and present first steps towards formalizing
them. As a result of the present paper, we illustrate the usefulness
of the Abstract State Machine (ASM) method for this task. We demonstrate a
formal treatment of privacy-sensitive dynamic coalitions by two example ASMs
which model certain access control situations. A logical consideration of these
ASMs can lead to a better understanding and a verification of the ASMs
according to the aspired specification.
|
1204.6091
|
A structured approach to VO reconfigurations through Policies
|
cs.MA cs.SE
|
One of the strengths of Virtual Organisations is their ability to dynamically
and rapidly adapt in response to changing environmental conditions. Dynamic
adaptability has been studied in other system areas as well and system
management through policies has crystallized itself as a very prominent
solution in system and network administration. However, these areas are often
concerned with very low-level technical aspects. Previous work on the APPEL
policy language has been aimed at dynamically adapting system behaviour to
satisfy end-user demands and - as part of STPOWLA - APPEL was used to adapt
workflow instances at runtime. In this paper we explore how the ideas of APPEL
and STPOWLA can be extended from workflows to the wider scope of Virtual
Organisations. We will use a Travel Booking VO as example.
|
1204.6093
|
Linear Consensus Algorithms Based on Balanced Asymmetric Chains
|
math.OC cs.SY eess.SY math.DS
|
Multi-agent consensus algorithms with update steps based on so-called
balanced asymmetric chains are analyzed. For such algorithms it is shown that
(i) the set of accumulation points of states is finite, (ii) the asymptotic
unconditional occurrence of single consensus or multiple consensuses is
directly related to the property of absolute infinite flow for the underlying
update chain. The results are applied to well known consensus models.
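A minimal numerical illustration of the update step x(k+1) = A(k) x(k) with a row-stochastic matrix; the fixed matrix below is an arbitrary example chosen for the sketch and is not claimed to form a balanced asymmetric chain.

```python
import numpy as np

A = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.4, 0.6]])           # row-stochastic update matrix

x = np.array([1.0, 0.0, -1.0])            # initial agent states
for _ in range(100):
    x = A @ x                             # consensus update x(k+1) = A x(k)
print(x)                                  # all agents close to a common value
```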
|
1204.6098
|
On Locality in Distributed Storage Systems
|
cs.IT math.IT
|
This paper studies the design of codes for distributed storage systems (DSS)
that enable local repair in the event of node failure. This paper presents
locally repairable codes based on low degree multivariate polynomials. Its code
construction mechanism extends work on Noisy Interpolating Set by Dvir et al.
\cite{dvir2011}. The paper presents two classes of codes that allow node repair
to be performed by contacting 2 and 3 surviving nodes respectively. It further
shows that both classes are good in terms of their rate and minimum distance,
and allow their rate to be bartered for greater flexibility in the repair
process.
|
1204.6100
|
On the Overhead of Interference Alignment: Training, Feedback, and
Cooperation
|
cs.IT math.IT
|
Interference alignment (IA) is a cooperative transmission strategy that,
under some conditions, achieves the interference channel's maximum number of
degrees of freedom. Realizing IA gains, however, is contingent upon providing
transmitters with sufficiently accurate channel knowledge. In this paper, we
study the performance of IA in multiple-input multiple-output systems where
channel knowledge is acquired through training and analog feedback. We design
the training and feedback system to maximize IA's effective sum-rate: a
non-asymptotic performance metric that accounts for estimation error, training
and feedback overhead, and channel selectivity. We characterize effective
sum-rate with overhead in relation to various parameters such as
signal-to-noise ratio, Doppler spread, and feedback channel quality. A main
insight from our analysis is that, by properly designing the CSI acquisition
process, IA can provide good sum-rate performance in a very wide range of
fading scenarios. Another observation from our work is that such overhead-aware
analysis can help solve a number of practical network design problems. To
demonstrate the concept of overhead-aware network design, we consider the
example problem of finding the optimal number of cooperative IA users based on
signal power and mobility.
|
1204.6105
|
Mechanism Design for Base Station Association and Resource Allocation in
Downlink OFDMA Network
|
cs.IT cs.GT math.IT
|
We consider a resource management problem in a multi-cell downlink OFDMA
network, whereby the goal is to find the optimal per base station resource
allocation and user-base station assignment. The users are assumed to be
strategic/selfish who have private information on downlink channel states and
noise levels. To induce truthfulness among the users as well as to enhance the
spectrum efficiency, the resource management strategy needs to be both
incentive compatible and efficient. However, due to the mixed (discrete and
continuous) nature of resource management in this context, the implementation
of any incentive compatible mechanism that maximizes the system throughput is
NP-hard. We consider the dominant strategy implementation of an approximately
optimal resource management scheme via a computationally tractable mechanism.
The proposed mechanism is decentralized and dynamic. More importantly, it
ensures the truthfulness of the users and it implements a resource allocation
solution that yields at least 1/2 of the optimal throughput. Simulations are
provided to illustrate the effectiveness of the performance of the proposed
mechanism.
|
1204.6106
|
Performance of Polar Codes on wireless communications Channel
|
cs.IT math.IT
|
In this paper we discuss the performance of polar codes, the capacity-achieving
channel codes, on wireless communication channels. By generalizing the
definition of the Bhattacharyya parameter for discrete memoryless channels, we
present specific expressions of the parameter for the Gaussian and Rayleigh
fading channels, two continuous channels, including the recursive formulas and
the initial values. We analyze the application of polar codes with the defined
parameter over the Rayleigh fading channel by transmitting image and speech
data. By comparing with low-density parity-check (LDPC) codes in the same
settings, our simulation results show that polar codes perform better than LDPC
codes. Polar codes are thus a good candidate for wireless communication
channels.
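For a concrete feel of the recursive construction mentioned above, here is a sketch using the exact BEC recursion Z- = 2Z - Z^2, Z+ = Z^2; the paper's generalized expressions and initial values for the Gaussian and Rayleigh fading channels are not reproduced here, and the design parameters below are arbitrary.

```python
def bhattacharyya(n_levels, z0):
    """Bhattacharyya parameters of the 2**n_levels synthetic channels,
    using the exact BEC polarization recursion as an illustration."""
    z = [z0]
    for _ in range(n_levels):
        z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]
    return z

def choose_info_set(n_levels, z0, k):
    """Pick the k most reliable synthetic channels (smallest Z)."""
    z = bhattacharyya(n_levels, z0)
    return sorted(sorted(range(len(z)), key=lambda i: z[i])[:k])

# Block length 8 (= 2**3), design erasure probability 0.5, rate 1/2
print(choose_info_set(3, 0.5, 4))
```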
|
1204.6120
|
Geometric Separation by Single-Pass Alternating Thresholding
|
math.FA cs.IT math.IT math.NA
|
Modern data is customarily of multimodal nature, and analysis tasks typically
require separation into the single components. Although a highly ill-posed
problem, the morphological difference of these components sometimes allow a
very precise separation such as, for instance, in neurobiological imaging a
separation into spines (pointlike structures) and dendrites (curvilinear
structures). Recently, applied harmonic analysis introduced powerful
methodologies to achieve this task, exploiting specifically designed
representation systems in which the components are sparsely representable,
combined with either performing $\ell_1$ minimization or thresholding on the
combined dictionary.
In this paper we provide a thorough theoretical study of the separation of a
distributional model situation of point- and curvilinear singularities
exploiting a surprisingly simple single-pass alternating thresholding method
applied to the two complementary frames: wavelets and curvelets. Utilizing the
fact that the coefficients are clustered geometrically, thereby exhibiting
clustered/geometric sparsity in the chosen frames, we prove that at
sufficiently fine scales arbitrarily precise separation is possible. Even more
surprising, it turns out that the thresholding index sets converge to the
wavefront sets of the point- and curvilinear singularities in phase space and
that those wavefront sets are perfectly separated by the thresholding
procedure. Main ingredients of our analysis are the novel notion of cluster
coherence and clustered/geometric sparsity as well as a microlocal analysis
viewpoint.
|
1204.6123
|
Clustered Sparsity and Separation of Cartoon and Texture
|
math.FA cs.IT math.IT math.NA
|
Natural images are typically a composition of cartoon and texture structures.
A medical image might, for instance, show a mixture of gray matter and the
skull cap. One common task is to separate such an image into two single images,
one containing the cartoon part and the other containing the texture part.
Recently, a powerful class of algorithms using sparse approximation and
$\ell_1$ minimization has been introduced to resolve this problem, and numerous
inspiring empirical results have already been obtained.
In this paper we provide the first thorough theoretical study of the
separation of a combination of cartoon and texture structures in a model
situation using this class of algorithms. The methodology we consider expands
the image in a combined dictionary consisting of a curvelet tight frame and a
Gabor tight frame and minimizes the $\ell_1$ norm on the analysis side. Sparse
approximation properties then force the cartoon components into the curvelet
coefficients and the texture components into the Gabor coefficients, thereby
separating the image. Utilizing the fact that the coefficients are clustered
geometrically, we prove that at sufficiently fine scales arbitrarily precise
separation is possible. Main ingredients of our analysis are the novel notion
of cluster coherence and clustered/geometric sparsity. Our analysis also
provides a deep understanding on when separation is still possible.
|
1204.6174
|
Efficient Computations of a Security Index for False Data Attacks in
Power Networks
|
math.OC cs.SY
|
The resilience of Supervisory Control and Data Acquisition (SCADA) systems
for electric power networks to certain cyber-attacks is considered. We analyze
the vulnerability of the measurement system to false data attack on
communicated measurements. The vulnerability analysis problem is shown to be
NP-hard, meaning that unless $P = NP$ there is no polynomial time algorithm to
analyze the vulnerability of the system. Nevertheless, we identify situations,
such as the full measurement case, where it can be solved efficiently. In such
cases, we show indeed that the problem can be cast as a generalization of the
minimum cut problem involving costly nodes. We further show that it can be
reformulated as a standard minimum cut problem (without costly nodes) on a
modified graph of proportional size. An important consequence of this result is
that our approach provides the first exact efficient algorithm for the
vulnerability analysis problem under the full measurement assumption.
Furthermore, our approach also provides an efficient heuristic algorithm for
the general NP-hard problem. Our results are illustrated by numerical studies
on benchmark systems including the IEEE 118-bus system.
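The node-splitting reformulation mentioned above (costly nodes turned into unit edges of a standard min-cut instance) can be sketched as follows, assuming networkx; the example graph and costs are arbitrary and unrelated to any specific power network or measurement model.

```python
import networkx as nx

def min_cut_with_node_costs(edges, node_costs, s, t):
    """Minimum cut where cutting node v costs node_costs[v]: split v into
    (v,'in') -> (v,'out') with capacity equal to that cost."""
    G = nx.DiGraph()
    for v, c in node_costs.items():
        G.add_edge((v, 'in'), (v, 'out'), capacity=c)
    for u, v, c in edges:                       # undirected edges, both ways
        G.add_edge((u, 'out'), (v, 'in'), capacity=c)
        G.add_edge((v, 'out'), (u, 'in'), capacity=c)
    cut_value, _ = nx.minimum_cut(G, (s, 'out'), (t, 'in'))
    return cut_value

edges = [('s', 'a', 3), ('s', 'b', 2), ('a', 't', 2), ('b', 't', 3), ('a', 'b', 1)]
costs = {'s': float('inf'), 'a': 1, 'b': 4, 't': float('inf')}
print(min_cut_with_node_costs(edges, costs, 's', 't'))
```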
|
1204.6178
|
Distributed Output-Feedback LQG Control with Delayed Information Sharing
|
cs.SY
|
This paper develops a controller synthesis method for distributed LQG control
problems under output-feedback. We consider a system consisting of three
interconnected linear subsystems with a delayed information sharing structure.
While the state-feedback case has previously been solved, the extension to
output-feedback is nontrivial as the classical separation principle fails. To
find the optimal solution, the controller is decomposed into two independent
components: a centralized LQG-optimal controller under delayed state
observations, and a sum of correction terms based on additional local
information available to decision makers. Explicit discrete-time equations are
derived whose solutions are the gains of the optimal controller.
|