id
stringlengths 9
16
| title
stringlengths 4
278
| categories
stringlengths 5
104
| abstract
stringlengths 6
4.09k
|
|---|---|---|---|
1107.5841
|
Sequential Convex Programming Methods for Solving Nonlinear Optimization
Problems with DC constraints
|
math.OC cs.SY
|
This paper investigates the relation between sequential convex programming
(SCP) as, e.g., defined in [24] and DC (difference of two convex functions)
programming. We first present an SCP algorithm for solving nonlinear
optimization problems with DC constraints and prove its convergence. Then we
combine the proposed algorithm with a relaxation technique to handle
inconsistent linearizations. Numerical tests are performed to investigate the
behaviour of the class of algorithms.
|
1107.5850
|
Confidence-Based Dynamic Classifier Combination For Mean-Shift Tracking
|
cs.CV
|
We introduce a novel tracking technique which uses dynamic confidence-based
fusion of two different information sources for robust and efficient tracking
of visual objects. Mean-shift tracking is a popular and well known method used
in object tracking problems. Originally, the algorithm uses a similarity
measure which is optimized by shifting a search area to the center of a
generated weight image to track objects. Recent improvements on the original
mean-shift algorithm involves using a classifier that differentiates the object
from its surroundings. We adopt this classifier-based approach and propose an
application of a classifier fusion technique within this classifier-based
context in this work. We use two different classifiers, where one comes from a
background modeling method, to generate the weight image and we calculate
contributions of the classifiers dynamically using their confidences to
generate a final weight image to be used in tracking. The contributions of the
classifiers are calculated by using correlations between histograms of their
weight images and histogram of a defined ideal weight image in the previous
frame. We show with experiments that our dynamic combination scheme selects
good contributions for classifiers for different cases and improves tracking
accuracy significantly.
|
1107.5851
|
Co-evolution of Content Popularity and Delivery in Mobile P2P Networks
|
cs.SI cs.MA
|
Mobile P2P technology provides a scalable approach to content delivery to a
large number of users on their mobile devices. In this work, we study the
dissemination of a \emph{single} content (e.g., an item of news, a song or a
video clip) among a population of mobile nodes. Each node in the population is
either a \emph{destination} (interested in the content) or a potential
\emph{relay} (not yet interested in the content). There is an interest
evolution process by which nodes not yet interested in the content (i.e.,
relays) can become interested (i.e., become destinations) on learning about the
popularity of the content (i.e., the number of already interested nodes). In
our work, the interest in the content evolves under the \emph{linear threshold
model}. The content is copied between nodes when they make random contact. For
this we employ a controlled epidemic spread model. We model the joint evolution
of the copying process and the interest evolution process, and derive the joint
fluid limit ordinary differential equations. We then study the selection of the
parameters under the content provider's control, for the optimization of
various objective functions that aim at maximizing content popularity and
efficient content delivery.
|
1107.5869
|
Piecewise linear car-following modeling
|
math.OC cs.SY
|
We present a traffic model that extends the linear car-following model as
well as the min-plus traffic model (a model based on the min-plus algebra). A
discrete-time car-dynamics describing the traffic on a 1-lane road without
passing is interpreted as a dynamic programming equation of a stochastic
optimal control problem of a Markov chain. This variational formulation permits
to characterize the stability of the car-dynamics and to calculte the
stationary regimes when they exist. The model is based on a piecewise linear
approximation of the fundamental traffic diagram.
|
1107.5870
|
Evolutionary Dynamics of Scientific Collaboration Networks: Multi-Levels
and Cross-time Analysis
|
cs.SI cs.DL physics.soc-ph
|
Several studies exist which use scientific literature for comparing
scientific activities (e.g., productivity, and collaboration). In this study,
using co-authorship data over the last 40 years, we present the evolutionary
dynamics of multi level (i.e., individual, institutional and national)
collaboration networks for exploring the emergence of collaborations in the
research field of "steel structures". The collaboration network of scientists
in the field has been analyzed using author affiliations extracted from Scopus
between 1970 and 2009. We have studied collaboration distribution networks at
the micro-, meso- and macro-levels for the 40 years. We compared and analyzed a
number of properties of these networks (i.e., density, centrality measures, the
giant component and clustering coefficient) for presenting a longitudinal
analysis and statistical validation of the evolutionary dynamics of "steel
structures" collaboration networks. At all levels, the scientific
collaborations network structures were central considering the closeness
centralization while betweenness and degree centralization were much lower. In
general networks density, connectedness, centralization and clustering
coefficient were highest in marco-level and decreasing as the network size grow
to the lowest in micro-level. We also find that the average distance between
countries about two and institutes five and for authors eight meaning that only
about eight steps are necessary to get from one randomly chosen author to
another.
|
1107.5924
|
Reachability in Biochemical Dynamical Systems by Quantitative Discrete
Approximation
|
cs.SY math.OC q-bio.QM
|
In this paper a novel computational technique for finite discrete
approximation of continuous dynamical systems suitable for a significant class
of biochemical dynamical systems is introduced. The method is parameterized in
order to affect the imposed level of approximation provided that with
increasing parameter value the approximation converges to the original
continuous system. By employing this approximation technique, we present
algorithms solving the reachability problem for biochemical dynamical systems.
The presented method and algorithms are evaluated on several exemplary
biological models and on a real case study. This is a full version of the paper
published in the proceedings of CompMod 2011.
|
1107.5928
|
Extension of the $\nu$-metric for stabilizable plants over $H^\infty$
|
math.OC cs.SY math.AP math.FA math.RA
|
An abstract $\nu$-metric was introduced by Ball and Sasane, with a view
towards extending the classical $\nu$-metric of Vinnicombe from the case of
rational transfer functions to more general nonrational transfer function
classes of infinite-dimensional linear control systems. In this short note, we
give an important concrete special instance of the abstract $\nu$-metric, by
verifying that all the assumptions demanded in the abstract set-up are
satisfied when the ring of stable transfer functions is the Hardy algebra
$H^\infty$. This settles the open question implicit in \cite{BalSas2}.
|
1107.5930
|
Technical Note: Towards ROC Curves in Cost Space
|
cs.AI
|
ROC curves and cost curves are two popular ways of visualising classifier
performance, finding appropriate thresholds according to the operating
condition, and deriving useful aggregated measures such as the area under the
ROC curve (AUC) or the area under the optimal cost curve. In this note we
present some new findings and connections between ROC space and cost space, by
using the expected loss over a range of operating conditions. In particular, we
show that ROC curves can be transferred to cost space by means of a very
natural way of understanding how thresholds should be chosen, by selecting the
threshold such that the proportion of positive predictions equals the operating
condition (either in the form of cost proportion or skew). We call these new
curves {ROC Cost Curves}, and we demonstrate that the expected loss as measured
by the area under these curves is linearly related to AUC. This opens up a
series of new possibilities and clarifies the notion of cost curve and its
relation to ROC analysis. In addition, we show that for a classifier that
assigns the scores in an evenly-spaced way, these curves are equal to the Brier
Curves. As a result, this establishes the first clear connection between AUC
and the Brier score.
|
1107.5951
|
Optimal, scalable forward models for computing gravity anomalies
|
cs.CE cs.DC physics.geo-ph
|
We describe three approaches for computing a gravity signal from a density
anomaly. The first approach consists of the classical "summation" technique,
whilst the remaining two methods solve the Poisson problem for the
gravitational potential using either a Finite Element (FE) discretization
employing a multilevel preconditioner, or a Green's function evaluated with the
Fast Multipole Method (FMM). The methods utilizing the PDE formulation
described here differ from previously published approaches used in gravity
modeling in that they are optimal, implying that both the memory and
computational time required scale linearly with respect to the number of
unknowns in the potential field. Additionally, all of the implementations
presented here are developed such that the computations can be performed in a
massively parallel, distributed memory computing environment. Through numerical
experiments, we compare the methods on the basis of their discretization error,
CPU time and parallel scalability. We demonstrate the parallel scalability of
all these techniques by running forward models with up to $10^8$ voxels on
1000's of cores.
|
1107.5953
|
Multivalued Subsets Under Information Theory
|
cs.IT math.IT
|
In the fields of finance, engineering and sciences data mining/ machine
learning has held an eminent position in predictive analysis. Complex
algorithms and adaptive decision models have contributed towards streamlining
research as well as improve forecasting. Extensive study in areas surrounding
computation and mathematical sciences has primarily been responsible for the
field's development. Classification based modeling, which holds a prominent
position amongst the different rule-based algorithms, is one of the most widely
used decision making tool. The decision tree has a place of profound
significance in classification modeling. A number of heuristics have been
developed over the years to refine its decision making process. Most heuristics
applied to such tree-based learning algorithms derive their roots from
Shannon's 'Information Theory'. The current application of this theory is
directed towards individual assessment of the attribute-values. The proposed
study takes a look at the effects of combining these values with the aim to
improve the 'Information Gain'. A search-based heuristic tool is applied for
identifying the subsets sharing a better gain value than the ones presented in
the GID3 approach. An application towards the feature selection stage of the
mining process has been tested and presented with statistical analysis.
|
1107.5968
|
Input-Output Finite-Time Stability
|
cs.SY math.OC
|
This paper introduces the extension of Finite-Time Stability (FTS) to the
input-output case, namely the Input-Output FTS (IO-FTS). The main differences
between classic IO stability and IO-FTS are that the latter involves signals
defined over a finite time interval, does not necessarily require the inputs
and outputs to belong to the same class of signals, and that quantitative
bounds on both inputs and outputs must be specified. This paper revises some
recent results on IO-FTS, both in the context of linear systems and in the
context of switching systems. In the final example the proposed methodology is
used to minimize the maximum displacement and velocity of a building subject to
an earthquake of given magnitude.
|
1107.6004
|
Explicit Bounds for Entropy Concentration under Linear Constraints
|
cs.IT math.IT physics.data-an
|
Consider the set of all sequences of $n$ outcomes, each taking one of $m$
values, that satisfy a number of linear constraints. If $m$ is fixed while $n$
increases, most sequences that satisfy the constraints result in frequency
vectors whose entropy approaches that of the maximum entropy vector satisfying
the constraints. This well-known "entropy concentration" phenomenon underlies
the maximum entropy method.
Existing proofs of the concentration phenomenon are based on limits or
asymptotics and unrealistically assume that constraints hold precisely,
supporting maximum entropy inference more in principle than in practice. We
present, for the first time, non-asymptotic, explicit lower bounds on $n$ for a
number of variants of the concentration result to hold to any prescribed
accuracies, with the constraints holding up to any specified tolerance, taking
into account the fact that allocations of discrete units can satisfy
constraints only approximately. Again unlike earlier results, we measure
concentration not by deviation from the maximum entropy value, but by the
$\ell_1$ and $\ell_2$ distances from the maximum entropy-achieving frequency
vector. One of our results holds independently of the alphabet size $m$ and is
based on a novel proof technique using the multi-dimensional Berry-Esseen
theorem. We illustrate and compare our results using various detailed examples.
|
1107.6027
|
Minimax-Optimal Bounds for Detectors Based on Estimated Prior
Probabilities
|
cs.IT math.IT stat.ML
|
In many signal detection and classification problems, we have knowledge of
the distribution under each hypothesis, but not the prior probabilities. This
paper is aimed at providing theory to quantify the performance of detection via
estimating prior probabilities from either labeled or unlabeled training data.
The error or {\em risk} is considered as a function of the prior probabilities.
We show that the risk function is locally Lipschitz in the vicinity of the true
prior probabilities, and the error of detectors based on estimated prior
probabilities depends on the behavior of the risk function in this locality. In
general, we show that the error of detectors based on the Maximum Likelihood
Estimate (MLE) of the prior probabilities converges to the Bayes error at a
rate of $n^{-1/2}$, where $n$ is the number of training data. If the behavior
of the risk function is more favorable, then detectors based on the MLE have
errors converging to the corresponding Bayes errors at optimal rates of the
form $n^{-(1+\alpha)/2}$, where $\alpha>0$ is a parameter governing the
behavior of the risk function with a typical value $\alpha = 1$. The limit
$\alpha \rightarrow \infty$ corresponds to a situation where the risk function
is flat near the true probabilities, and thus insensitive to small errors in
the MLE; in this case the error of the detector based on the MLE converges to
the Bayes error exponentially fast with $n$. We show the bounds are achievable
no matter given labeled or unlabeled training data and are minimax-optimal in
labeled case.
|
1108.0007
|
A Invertible Dimension Reduction of Curves on a Manifold
|
cs.CV math.DG
|
In this paper, we propose a novel lower dimensional representation of a shape
sequence. The proposed dimension reduction is invertible and computationally
more efficient in comparison to other related works. Theoretically, the
differential geometry tools such as moving frame and parallel transportation
are successfully adapted into the dimension reduction problem of high
dimensional curves. Intuitively, instead of searching for a global flat
subspace for curve embedding, we deployed a sequence of local flat subspaces
adaptive to the geometry of both of the curve and the manifold it lies on. In
practice, the experimental results of the dimension reduction and
reconstruction algorithms well illustrate the advantages of the proposed
theoretical innovation.
|
1108.0017
|
Generating a Diverse Set of High-Quality Clusterings
|
cs.LG cs.DB
|
We provide a new framework for generating multiple good quality partitions
(clusterings) of a single data set. Our approach decomposes this problem into
two components, generating many high-quality partitions, and then grouping
these partitions to obtain k representatives. The decomposition makes the
approach extremely modular and allows us to optimize various criteria that
control the choice of representative partitions.
|
1108.0024
|
Achievable Rates and Outer Bound for the Half-Duplex MAC with
Generalized Feedback
|
cs.IT math.IT
|
This paper provides comprehensive coding and outer bound for the half-duplex
multiple access channel with generalized feedback (MAC-GF). Two users
communicate with one destination over a discrete memoryless channel using time
division. Each transmission block is divided into 3 time slots with variable
durations: the destination is always in receive mode, while each user
alternatively transmits and receives during the first 2 time slots, then both
cooperate to send information during the last one. The paper proposes two
decode-forward based coding schemes, analyzes their rate regions, and also
derives two outer bounds with rate constraints similar to the achievable
regions. Both schemes requires no block Makovity, allowing the destination to
decode at the end of each block without any delay. In the first scheme, the
codewords in the third time slot are superimposed on the codewords of the first
two, whereas in the second scheme, these codewords are independent. While the
second scheme is simpler, the first scheme helps emphasize the importance of
joint decoding over separate decoding among multiple time slots at the
destination. For the Gaussian channel, the two schemes with joint decoding are
equivalent, as are the two outer bounds. For physically degraded Gaussian
channels, the proposed schemes achieve the capacity. Extension to the m-user
half-duplex MAC-GF are provided. Numerical results for the Gaussian channel
shows significant rate region improvement over the classical MAC and that the
outer bound becomes increasingly tight as the inter-user link quality increases
|
1108.0027
|
Revisiting Degree Distribution Models for Social Graph Analysis
|
cs.SI physics.soc-ph
|
Degree distribution models are incredibly important tools for analyzing and
understanding the structure and formation of social networks, and can help
guide the design of efficient graph algorithms. In particular, the Power-law
degree distribution has long been used to model the structure of online social
networks, and is the basis for algorithms and heuristics in graph applications
such as influence maximization and social search. Along with recent measurement
results, our interest in this topic was sparked by our own experimental results
on social graphs that deviated significantly from those predicted by a
Power-law model. In this work, we seek a deeper understanding of these
deviations, and propose an alternative model with significant implications on
graph algorithms and applications. We start by quantifying this artifact using
a variety of real social graphs, and show that their structures cannot be
accurately modeled using elementary distributions including the Power-law.
Instead, we propose the Pareto-Lognormal (PLN) model, verify its
goodness-of-fit using graphical and statistical methods, and present an
analytical study of its asymptotical differences with the Power-law. To
demonstrate the quantitative benefits of the PLN model, we compare the results
of three wide-ranging graph applications on real social graphs against those on
synthetic graphs generated using the PLN and Power-law models. We show that
synthetic graphs generated using PLN are much better predictors of degree
distributions in real graphs, and produce experimental results with errors that
are orders-of-magnitude smaller than those produced by the Power-law model.
|
1108.0039
|
CBR with Commonsense Reasoning and Structure Mapping: An Application to
Mediation
|
cs.AI cs.LG
|
Mediation is an important method in dispute resolution. We implement a case
based reasoning approach to mediation integrating analogical and commonsense
reasoning components that allow an artificial mediation agent to satisfy
requirements expected from a human mediator, in particular: utilizing
experience with cases in different domains; and structurally transforming the
set of issues for a better solution. We utilize a case structure based on
ontologies reflecting the perceptions of the parties in dispute. The analogical
reasoning component, employing the Structure Mapping Theory from psychology,
provides a flexibility to respond innovatively in unusual circumstances, in
contrast with conventional approaches confined into specialized problem
domains. We aim to build a mediation case base incorporating real world
instances ranging from interpersonal or intergroup disputes to international
conflicts.
|
1108.0047
|
Reconstructing Isoform Graphs from RNA-Seq data
|
q-bio.GN cs.CE cs.DS
|
Next-generation sequencing (NGS) technologies allow new methodologies for
alternative splicing (AS) analysis. Current computational methods for AS from
NGS data are mainly focused on predicting splice site junctions or de novo
assembly of full-length transcripts. These methods are computationally
expensive and produce a huge number of full-length transcripts or splice
junctions, spanning the whole genome of organisms. Thus summarizing such data
into the different gene structures and AS events of the expressed genes is an
hard task.
To face this issue in this paper we investigate the computational problem of
reconstructing from NGS data, in absence of the genome, a gene structure for
each gene that is represented by the isoform graph: we introduce such graph and
we show that it uniquely summarizes the gene transcripts. We define the
computational problem of reconstructing the isoform graph and provide some
conditions that must be met to allow such reconstruction.
Finally, we describe an efficient algorithmic approach to solve this problem,
validating our approach with both a theoretical and an experimental analysis.
|
1108.0065
|
Approximating the Permanent with Fractional Belief Propagation
|
cs.DM cond-mat.stat-mech cs.CC cs.IT math.IT
|
We discuss schemes for exact and approximate computations of permanents, and
compare them with each other. Specifically, we analyze the Belief Propagation
(BP) approach and its Fractional Belief Propagation (FBP) generalization for
computing the permanent of a non-negative matrix. Known bounds and conjectures
are verified in experiments, and some new theoretical relations, bounds and
conjectures are proposed. The Fractional Free Energy (FFE) functional is
parameterized by a scalar parameter $\gamma\in[-1;1]$, where $\gamma=-1$
corresponds to the BP limit and $\gamma=1$ corresponds to the exclusion
principle (but ignoring perfect matching constraints) Mean-Field (MF) limit.
FFE shows monotonicity and continuity with respect to $\gamma$. For every
non-negative matrix, we define its special value $\gamma_*\in[-1;0]$ to be the
$\gamma$ for which the minimum of the $\gamma$-parameterized FFE functional is
equal to the permanent of the matrix, where the lower and upper bounds of the
$\gamma$-interval corresponds to respective bounds for the permanent. Our
experimental analysis suggests that the distribution of $\gamma_*$ varies for
different ensembles but $\gamma_*$ always lies within the $[-1;-1/2]$ interval.
Moreover, for all ensembles considered the behavior of $\gamma_*$ is highly
distinctive, offering an emprirical practical guidance for estimating
permanents of non-negative matrices via the FFE approach.
|
1108.0072
|
On the Throughput-Delay Trade-off in Georouting Networks
|
cs.IT cs.NI math.IT
|
We study the scaling properties of a georouting scheme in a wireless
multi-hop network of $n$ mobile nodes. Our aim is to increase the network
capacity quasi linearly with $n$ while keeping the average delay bounded. In
our model, mobile nodes move according to an i.i.d. random walk with velocity
$v$ and transmit packets to randomly chosen destinations. The average packet
delivery delay of our scheme is of order $1/v$ and it achieves the network
capacity of order $\frac{n}{\log n\log\log n}$. This shows a practical
throughput-delay trade-off, in particular when compared with the seminal result
of Gupta and Kumar which shows network capacity of order $\sqrt{n/\log n}$ and
negligible delay and the groundbreaking result of Grossglausser and Tse which
achieves network capacity of order $n$ but with an average delay of order
$\sqrt{n}/v$. We confirm the generality of our analytical results using
simulations under various interference models.
|
1108.0100
|
Explicit Solution of Worst-Case Secrecy Rate for MISO Wiretap Channels
with Spherical Uncertainty
|
cs.IT math.IT
|
A multiple-input single-output (MISO) wiretap channel model is considered,
that includes a multi-antenna transmitter, a single-antenna legitimate receiver
and a single-antenna eavesdropper. For the scenario in which spherical
uncertainty for both the legitimate and the eavesdropper channels is included,
the problem of finding the optimal input covariance that maximizes the
worst-case secrecy rate subject to a power constraint, is considered, and an
explicit expression for the maximum worst-case secrecy rate is provided.
|
1108.0128
|
Delay Optimal Multichannel Opportunistic Access
|
math.OC cs.SY
|
The problem of minimizing queueing delay of opportunistic access of multiple
continuous time Markov channels is considered. A new access policy based on
myopic sensing and adaptive transmission (MS-AT) is proposed. Under the
framework of risk sensitive constrained Markov decision process with effective
bandwidth as a measure of queueing delay, it is shown that MS-AT achieves
simultaneously throughput and delay optimality. It is shown further that both
the effective bandwidth and the throughput of MS-AT are two-segment piece-wise
linear functions of the collision constraint (maximum allowable conditional
collision probability) with the effective bandwidth and throughput coinciding
in the regime of tight collision constraints. Analytical and simulations
comparisons with the myopic sensing and memoryless transmission (MS-MT) policy
which is throughput optimal but delay suboptimal in the regime of tight
collision constraints.
|
1108.0129
|
Identifiability and inference of non-parametric rates-across-sites
models on large-scale phylogenies
|
math.PR cs.CE cs.DS math.ST q-bio.PE stat.TH
|
Mutation rate variation across loci is well known to cause difficulties,
notably identifiability issues, in the reconstruction of evolutionary trees
from molecular sequences. Here we introduce a new approach for estimating
general rates-across-sites models. Our results imply, in particular, that large
phylogenies are typically identifiable under rate variation. We also derive
sequence-length requirements for high-probability reconstruction.
Our main contribution is a novel algorithm that clusters sites according to
their mutation rate. Following this site clustering step, standard
reconstruction techniques can be used to recover the phylogeny. Our results
rely on a basic insight: that, for large trees, certain site statistics
experience concentration-of-measure phenomena.
|
1108.0155
|
Reasoning in the OWL 2 Full Ontology Language using First-Order
Automated Theorem Proving
|
cs.AI
|
OWL 2 has been standardized by the World Wide Web Consortium (W3C) as a
family of ontology languages for the Semantic Web. The most expressive of these
languages is OWL 2 Full, but to date no reasoner has been implemented for this
language. Consistency and entailment checking are known to be undecidable for
OWL 2 Full. We have translated a large fragment of the OWL 2 Full semantics
into first-order logic, and used automated theorem proving systems to do
reasoning based on this theory. The results are promising, and indicate that
this approach can be applied in practice for effective OWL reasoning, beyond
the capabilities of current Semantic Web reasoners.
This is an extended version of a paper with the same title that has been
published at CADE 2011, LNAI 6803, pp. 446-460. The extended version provides
appendices with additional resources that were used in the reported evaluation.
|
1108.0170
|
Optimization of Lyapunov Invariants in Verification of Software Systems
|
cs.SY math.OC
|
The paper proposes a control-theoretic framework for verification of
numerical software systems, and puts forward software verification as an
important application of control and systems theory. The idea is to transfer
Lyapunov functions and the associated computational techniques from control
systems analysis and convex optimization to verification of various software
safety and performance specifications. These include but are not limited to
absence of overflow, absence of division-by-zero, termination in finite time,
presence of dead-code, and certain user-specified assertions. Central to this
framework are Lyapunov invariants. These are properly constructed functions of
the program variables, and satisfy certain properties-resembling those of
Lyapunov functions-along the execution trace. The search for the invariants can
be formulated as a convex optimization problem. If the associated optimization
problem is feasible, the result is a certificate for the specification.
|
1108.0186
|
Differentially Private Search Log Sanitization with Optimal Output
Utility
|
cs.DB
|
Web search logs contain extremely sensitive data, as evidenced by the recent
AOL incident. However, storing and analyzing search logs can be very useful for
many purposes (i.e. investigating human behavior). Thus, an important research
question is how to privately sanitize search logs. Several search log
anonymization techniques have been proposed with concrete privacy models.
However, in all of these solutions, the output utility of the techniques is
only evaluated rather than being maximized in any fashion. Indeed, for
effective search log anonymization, it is desirable to derive the optimal
(maximum utility) output while meeting the privacy standard. In this paper, we
propose utility-maximizing sanitization based on the rigorous privacy standard
of differential privacy, in the context of search logs. Specifically, we
utilize optimization models to maximize the output utility of the sanitization
for different applications, while ensuring that the production process
satisfies differential privacy. An added benefit is that our novel
randomization strategy ensures that the schema of the output is identical to
that of the input. A comprehensive evaluation on real search logs validates the
approach and demonstrates its robustness and scalability.
|
1108.0194
|
Optimal Utilization of a Cognitive Shared Channel with a Rechargeable
Primary Source Node
|
cs.IT math.IT
|
This paper considers the scenario in which a set of nodes share a common
channel. Some nodes have a rechargeable battery and the others are plugged to a
reliable power supply and, thus, have no energy limitations. We consider two
source-destination pairs and apply the concept of cognitive radio communication
in sharing the common channel. Specifically, we give high-priority to the
energy-constrained source-destination pair, i.e., primary pair, and
low-priority to the pair which is free from such constraint, i.e., secondary
pair. In contrast to the traditional notion of cognitive radio, in which the
secondary transmitter is required to relinquish the channel as soon as the
primary is detected, the secondary transmitter not only utilizes the idle slots
of primary pair but also transmits along with the primary transmitter with
probability $p$. This is possible because we consider the general multi-packet
reception model. Given the requirement on the primary pair's throughput, the
probability $p$ is chosen to maximize the secondary pair's throughput. To this
end, we obtain two-dimensional maximum stable throughput region which describes
the theoretical limit on rates that we can push into the network while
maintaining the queues in the network to be stable. The result is obtained for
both cases in which the capacity of the battery at the primary node is infinite
and also finite.
|
1108.0239
|
Stability Criteria via Common Non-strict Lyapunov Matrix for
Discrete-time Linear Switched Systems
|
math.OC cs.SY math.DS
|
In this paper, we consider the stability of discrete-time linear switched
systems with a common non-strict Lyapunov matrix.
|
1108.0261
|
FIFA World Cup 2010: A Network Analysis of the Champion Team Play
|
cs.SI physics.soc-ph
|
We analyze the pass network among the players of the Spanish team (the world
champion in the FIFA World Cup 2010), with the objective of explaining the
results obtained from the behavior at the complex network level. The team is
considered a network with players as nodes and passes as (directed) edges, and
a temporal analysis of the resulting passes network is done, looking at the
number of passes, length of the chain of passes, and the centrality of players
in the turf. Results of the last three matches indicate that the clustering
coefficient of the pass network increases with time, and stays high, indicating
possession by Spanish players, which eventually leads to victory, even as the
density of the pass network decreases with time.
|
1108.0294
|
Scaling Inference for Markov Logic with a Task-Decomposition Approach
|
cs.AI cs.DB
|
Motivated by applications in large-scale knowledge base construction, we
study the problem of scaling up a sophisticated statistical inference framework
called Markov Logic Networks (MLNs). Our approach, Felix, uses the idea of
Lagrangian relaxation from mathematical programming to decompose a program into
smaller tasks while preserving the joint-inference property of the original
MLN. The advantage is that we can use highly scalable specialized algorithms
for common tasks such as classification and coreference. We propose an
architecture to support Lagrangian relaxation in an RDBMS which we show enables
scalable joint inference for MLNs. We empirically validate that Felix is
significantly more scalable and efficient than prior approaches to MLN
inference by constructing a knowledge base from 1.8M documents as part of the
TAC challenge. We show that Felix scales and achieves state-of-the-art quality
numbers. In contrast, prior approaches do not scale even to a subset of the
corpus that is three orders of magnitude smaller.
|
1108.0333
|
Fuzzy Consensus and Synchronization: Theory and Application to Critical
Infrastructure Protection Problems
|
cs.SY cs.MA math.OC
|
In this paper the Distributed Consensus and Synchronization problems with
fuzzy-valued initial conditions are introduced, in order to obtain a shared
estimation of the state of a system based on partial and distributed
observations, in the case where such a state is affected by ambiguity and/or
vagueness. The Discrete-Time Fuzzy Systems (DFS) are introduced as an extension
of scalar fuzzy difference equations and some conditions for their stability
and representation are provided. The proposed framework is then applied in the
field of Critical Infrastructures; the consensus framework is used to represent
a scenario where human operators, each able to observe directly the state of a
given infrastructure (or of a given area considering vast and geographically
dispersed infrastructures), reach an agreement on the overall situation, whose
severity is expressed in a linguistic, fuzzy way; conversely synchronization is
used to provide a distributed interdependency estimation system, where an array
of interdependency models is synchronized via partial observation.
|
1108.0342
|
Black-Box Complexities of Combinatorial Problems
|
cs.NE
|
Black-box complexity is a complexity theoretic measure for how difficult a
problem is to be optimized by a general purpose optimization algorithm. It is
thus one of the few means trying to understand which problems are tractable for
genetic algorithms and other randomized search heuristics.
Most previous work on black-box complexity is on artificial test functions.
In this paper, we move a step forward and give a detailed analysis for the two
combinatorial problems minimum spanning tree and single-source shortest paths.
Besides giving interesting bounds for their black-box complexities, our work
reveals that the choice of how to model the optimization problem is non-trivial
here. This in particular comes true where the search space does not consist of
bit strings and where a reasonable definition of unbiasedness has to be agreed
on.
|
1108.0347
|
Entropy Semiring Forward-backward Algorithm for HMM Entropy Computation
|
cs.IT math.IT
|
The paper presents Entropy Semiring Forwardbackward algorithm (ESRFB) and its
application for memory efficient computation of the subsequence constrained
entropy and state sequence entropy of a Hidden Markov Model (HMM) when an
observation sequence is given. ESRFB is based on forward-backward recursion
over the entropy semiring, having the lower memory requirement than the
algorithm developed by Mann and MacCallum, with the same time complexity.
Furthermore, when it is used with forward pass only, it is applicable for the
computation of HMM entropy for a given observation sequence, with the same time
and memory complexity as the previously developed algorithm by Hernando et al
|
1108.0353
|
Cross-moments computation for stochastic context-free grammars
|
cs.CL
|
In this paper we consider the problem of efficient computation of
cross-moments of a vector random variable represented by a stochastic
context-free grammar. Two types of cross-moments are discussed. The sample
space for the first one is the set of all derivations of the context-free
grammar, and the sample space for the second one is the set of all derivations
which generate a string belonging to the language of the grammar. In the past,
this problem was widely studied, but mainly for the cross-moments of scalar
variables and up to the second order. This paper presents new algorithms for
computing the cross-moments of an arbitrary order, and the previously developed
ones are derived as special cases.
|
1108.0355
|
Using Java for distributed computing in the Gaia satellite data
processing
|
cs.CE astro-ph.IM cs.MS
|
In recent years Java has matured to a stable easy-to-use language with the
flexibility of an interpreter (for reflection etc.) but the performance and
type checking of a compiled language. When we started using Java for
astronomical applications around 1999 they were the first of their kind in
astronomy. Now a great deal of astronomy software is written in Java as are
many business applications.
We discuss the current environment and trends concerning the language and
present an actual example of scientific use of Java for high-performance
distributed computing: ESA's mission Gaia. The Gaia scanning satellite will
perform a galactic census of about 1000 million objects in our galaxy. The Gaia
community has chosen to write its processing software in Java. We explore the
manifold reasons for choosing Java for this large science collaboration.
Gaia processing is numerically complex but highly distributable, some parts
being embarrassingly parallel. We describe the Gaia processing architecture and
its realisation in Java. We delve into the astrometric solution which is the
most advanced and most complex part of the processing. The Gaia simulator is
also written in Java and is the most mature code in the system. This has been
successfully running since about 2005 on the supercomputer "Marenostrum" in
Barcelona. We relate experiences of using Java on a large shared machine.
Finally we discuss Java, including some of its problems, for scientific
computing.
|
1108.0363
|
Typesafe Modeling in Text Mining
|
cs.PL cs.IR
|
Based on the concept of annotation-based agents, this report introduces tools
and a formal notation for defining and running text mining experiments using a
statically typed domain-specific language embedded in Scala. Using machine
learning for classification as an example, the framework is used to develop and
document text mining experiments, and to show how the concept of generic,
typesafe annotation corresponds to a general information model that goes beyond
text processing.
|
1108.0391
|
The Channel Capacity Increases with Power
|
cs.IT math.IT
|
It is proved that for memoryless vector channels, maximizing the mutual
information over all source distributions with a certain average power or over
the larger set of source distributions with upperbounded average power yields
the same channel capacity in both cases. Hence, the channel capacity cannot
decrease with increasing average transmitted power, not even for channels with
severe nonlinear distortion.
|
1108.0404
|
Exploiting Agent and Type Independence in Collaborative Graphical
Bayesian Games
|
cs.AI cs.GT
|
Efficient collaborative decision making is an important challenge for
multiagent systems. Finding optimal joint actions is especially challenging
when each agent has only imperfect information about the state of its
environment. Such problems can be modeled as collaborative Bayesian games in
which each agent receives private information in the form of its type. However,
representing and solving such games requires space and computation time
exponential in the number of agents. This article introduces collaborative
graphical Bayesian games (CGBGs), which facilitate more efficient collaborative
decision making by decomposing the global payoff function as the sum of local
payoff functions that depend on only a few agents. We propose a framework for
the efficient solution of CGBGs based on the insight that they posses two
different types of independence, which we call agent independence and type
independence. In particular, we present a factor graph representation that
captures both forms of independence and thus enables efficient solutions. In
addition, we show how this representation can provide leverage in sequential
tasks by using it to construct a novel method for decentralized partially
observable Markov decision processes. Experimental results in both random and
benchmark tasks demonstrate the improved scalability of our methods compared to
several existing alternatives.
|
1108.0442
|
Diffusive Logistic Model Towards Predicting Information Diffusion in
Online Social Networks
|
cs.SI math.AP physics.soc-ph
|
Online social networks have recently become an effective and innovative
channel for spreading information and influence among hundreds of millions of
end users. Many prior work have carried out empirical studies and proposed
diffusion models to understand the information diffusion process in online
social networks. However, most of these studies focus on the information
diffusion in temporal dimension, that is, how the information propagates over
time. Little attempt has been given on understanding information diffusion over
both temporal and spatial dimensions. In this paper, we propose a Partial
Differential Equation (PDE), specifically, a Diffusive Logistic (DL) equation
to model the temporal and spatial characteristics of information diffusion in
online social networks. To be more specific, we develop a PDE-based theoretical
framework to measure and predict the density of influenced users at a given
distance from the original information source after a time period. The density
of influenced users over time and distance provides valuable insight on the
actual information diffusion process. We present the temporal and spatial
patterns in a real dataset collected from Digg social news site, and validate
the proposed DL equation in terms of predicting the information diffusion
process. Our experiment results show that the DL model is indeed able to
characterize and predict the process of information propagation in online
social networks. For example, for the most popular news with 24,099 votes in
Digg, the average prediction accuracy of DL model over all distances during the
first 6 hours is 92.08%. To the best of our knowledge, this paper is the first
attempt to use PDE-based model to study the information diffusion process in
both temporal and spatial dimensions in online social networks.
|
1108.0443
|
Sparse Recovery with Graph Constraints: Fundamental Limits and
Measurement Construction
|
cs.IT cs.NI math.IT
|
This paper addresses the problem of sparse recovery with graph constraints in
the sense that we can take additive measurements over nodes only if they induce
a connected subgraph. We provide explicit measurement constructions for several
special graphs. A general measurement construction algorithm is also proposed
and evaluated. For any given graph $G$ with $n$ nodes, we derive order optimal
upper bounds of the minimum number of measurements needed to recover any
$k$-sparse vector over $G$ ($M^G_{k,n}$). Our study suggests that $M^G_{k,n}$
may serve as a graph connectivity metric.
|
1108.0454
|
Digital Shearlet Transform
|
math.NA cs.IT math.IT
|
Over the past years, various representation systems which sparsely
approximate functions governed by anisotropic features such as edges in images
have been proposed. We exemplarily mention the systems of contourlets,
curvelets, and shearlets. Alongside the theoretical development of these
systems, algorithmic realizations of the associated transforms were provided.
However, one of the most common shortcomings of these frameworks is the lack of
providing a unified treatment of the continuum and digital world, i.e.,
allowing a digital theory to be a natural digitization of the continuum theory.
In fact, shearlet systems are the only systems so far which satisfy this
property, yet still deliver optimally sparse approximations of cartoon-like
images. In this chapter, we provide an introduction to digital shearlet theory
with a particular focus on a unified treatment of the continuum and digital
realm. In our survey we will present the implementations of two shearlet
transforms, one based on band-limited shearlets and the other based on
compactly supported shearlets. We will moreover discuss various quantitative
measures, which allow an objective comparison with other directional transforms
and an objective tuning of parameters. The codes for both presented transforms
as well as the framework for quantifying performance are provided in the Matlab
toolbox ShearLab.
|
1108.0476
|
Specifying and Staging Mixed-Initiative Dialogs with Program Generation
and Transformation
|
cs.PL cs.AI cs.HC
|
Specifying and implementing flexible human-computer dialogs, such as those
used in kiosks and smart phone apps, is challenging because of the numerous and
varied directions in which each user might steer a dialog. The objective of
this research is to improve dialog specification and implementation. To do so
we enriched a notation based on concepts from programming languages, especially
partial evaluation, for specifying a variety of unsolicited reporting,
mixed-initiative dialogs in a concise representation that serves as a design
for dialog implementation. We also built a dialog mining system that extracts a
specification in this notation from requirements. To demonstrate that such a
specification provides a design for dialog implementation, we built a system
that automatically generates an implementation of the dialog, called a stager,
from it. These two components constitute a dialog modeling toolkit that
automates dialog specification and implementation. These results provide a
proof of concept and demonstrate the study of dialog specification and
implementation from a programming languages perspective. The ubiquity of
dialogs in domains such as travel, education, and health care combined with the
demand for smart phone apps provide a landscape for further investigation of
these results.
|
1108.0477
|
Asymptotic Analysis of Complex LASSO via Complex Approximate Message
Passing (CAMP)
|
cs.IT math.IT
|
Recovering a sparse signal from an undersampled set of random linear
measurements is the main problem of interest in compressed sensing. In this
paper, we consider the case where both the signal and the measurements are
complex. We study the popular reconstruction method of $\ell_1$-regularized
least squares or LASSO. While several studies have shown that the LASSO
algorithm offers desirable solutions under certain conditions, the precise
asymptotic performance of this algorithm in the complex setting is not yet
known. In this paper, we extend the approximate message passing (AMP) algorithm
to the complex signals and measurements and obtain the complex approximate
message passing algorithm (CAMP). We then generalize the state evolution
framework recently introduced for the analysis of AMP, to the complex setting.
Using the state evolution, we derive accurate formulas for the phase transition
and noise sensitivity of both LASSO and CAMP.
|
1108.0488
|
A Kalman Decomposition for Possibly Controllable Uncertain Linear
Systems
|
cs.SY math.OC
|
This paper considers the structure of uncertain linear systems building on
concepts of robust unobservability and possible controllability which were
introduced in previous papers. The paper presents a new geometric
characterization of the possibly controllable states. When combined with
previous geometric results on robust unobservability, the results of this paper
lead to a general Kalman type decomposition for uncertain linear systems which
can be applied to the problem of obtaining reduced order uncertain system
models.
|
1108.0502
|
An Efficient Real Time Method of Fingertip Detection
|
cs.CV cs.AI cs.MM
|
Fingertips detection has been used in many applications, and it is very
popular and commonly used in the area of Human Computer Interaction these days.
This paper presents a novel time efficient method that will lead to fingertip
detection after cropping the irrelevant parts of input image. Binary silhouette
of the input image is generated using HSV color space based skin filter and
hand cropping done based on histogram of the hand image. The cropped image will
be used to figure out the fingertips.
|
1108.0535
|
Universal Rateless Codes From Coupled LT Codes
|
cs.IT math.IT
|
It was recently shown that spatial coupling of individual low-density
parity-check codes improves the belief-propagation threshold of the coupled
ensemble essentially to the maximum a posteriori threshold of the underlying
ensemble. We study the performance of spatially coupled low-density
generator-matrix ensembles when used for transmission over binary-input
memoryless output-symmetric channels. We show by means of density evolution
that the threshold saturation phenomenon also takes place in this setting. Our
motivation for studying low-density generator-matrix codes is that they can
easily be converted into rateless codes. Although there are already several
classes of excellent rateless codes known to date, rateless codes constructed
via spatial coupling might offer some additional advantages. In particular, by
the very nature of the threshold phenomenon one expects that codes constructed
on this principle can be made to be universal, i.e., a single construction can
uniformly approach capacity over the class of binary-input memoryless
output-symmetric channels. We discuss some necessary conditions on the degree
distribution which universal rateless codes based on the threshold phenomenon
have to fulfill. We then show by means of density evolution and some simulation
results that indeed codes constructed in this way perform very well over a
whole range of channel types and channel conditions.
|
1108.0631
|
Serialising the ISO SynAF Syntactic Object Model
|
cs.CL
|
This paper introduces, an XML format developed to serialise the object model
defined by the ISO Syntactic Annotation Framework SynAF. Based on widespread
best practices we adapt a popular XML format for syntactic annotation,
TigerXML, with additional features to support a variety of syntactic phenomena
including constituent and dependency structures, binding, and different node
types such as compounds or empty elements. We also define interfaces to other
formats and standards including the Morpho-syntactic Annotation Framework MAF
and the ISOCat Data Category Registry. Finally a case study of the German
Treebank TueBa-D/Z is presented, showcasing the handling of constituent
structures, topological fields and coreference annotation in tandem.
|
1108.0679
|
A characterization of entanglement-assisted quantum low-density
parity-check codes
|
cs.IT math.CO math.IT quant-ph
|
As in classical coding theory, quantum analogues of low-density parity-check
(LDPC) codes have offered good error correction performance and low decoding
complexity by employing the Calderbank-Shor-Steane (CSS) construction. However,
special requirements in the quantum setting severely limit the structures such
quantum codes can have. While the entanglement-assisted stabilizer formalism
overcomes this limitation by exploiting maximally entangled states (ebits),
excessive reliance on ebits is a substantial obstacle to implementation. This
paper gives necessary and sufficient conditions for the existence of quantum
LDPC codes which are obtainable from pairs of identical LDPC codes and consume
only one ebit, and studies the spectrum of attainable code parameters.
|
1108.0729
|
Estudo de Viabilidade de uma Plataforma de Baixo Custo para Data
Warehouse
|
cs.DB
|
Often corporations need tools to improve their decision making in a
competitive market. In general, these tools are based on data warehouse
platforms to mange and analyze large amounts of data. However, several of these
corporations do not have enough resources to buy such platforms because of the
high cost. This work is dedicated to a feasibility study of a low cost platform
to data warehouse. We consider as a low cost platform the use of open source
software like the PostgreSQL database system and the GNU/Linux operational
system. We verify the feasibility of this platform by executing two benchmarks
that simulate a data warehouse workload. The workload reproduces a multi-user
environment with the execution of complex queries, which executes:
aggregations, nested sub queries, multi joins, in-line views and more.
Considering the results we were able to highlight some problems on the
PostgreSQL database system, and discuss improvements in the context of data
warehouse.
|
1108.0748
|
Binary Particle Swarm Optimization based Biclustering of Web usage Data
|
cs.IR cs.SY
|
Web mining is the nontrivial process to discover valid, novel, potentially
useful knowledge from web data using the data mining techniques or methods. It
may give information that is useful for improving the services offered by web
portals and information access and retrieval tools. With the rapid development
of biclustering, more researchers have applied the biclustering technique to
different fields in recent years. When biclustering approach is applied to the
web usage data it automatically captures the hidden browsing patterns from it
in the form of biclusters. In this work, swarm intelligent technique is
combined with biclustering approach to propose an algorithm called Binary
Particle Swarm Optimization (BPSO) based Biclustering for Web Usage Data. The
main objective of this algorithm is to retrieve the global optimal bicluster
from the web usage data. These biclusters contain relationships between web
users and web pages which are useful for the E-Commerce applications like web
advertising and marketing. Experiments are conducted on real dataset to prove
the efficiency of the proposed algorithms.
|
1108.0775
|
Optimization with Sparsity-Inducing Penalties
|
cs.LG math.OC stat.ML
|
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
|
1108.0779
|
Basketball scoring in NBA games: an example of complexity
|
physics.soc-ph cs.SI physics.data-an
|
Scoring in a basketball game is a process highly dynamic and non-linear type.
The level of NBA teams improve each season. They incorporate to their rosters
the best players in the world. These and other mechanisms, make the scoring in
the NBA basketball games be something exciting, where, on rare occasions, we
really know what will be the result at the end of the game. We analyzed all the
games of the 2005-06, 2006-07, 2007-08, 2008-09, 2009-10 NBA regular seasons
(6150 games). We have studied the evolution of the scoring and the time
intervals between points. These do not behave uniformly, but present more
predictable areas. In turn, we have analyzed the scoring in the games regarding
the differences in points. Exists different areas of behavior related with the
scorea and each zone has a different nature. There are point that we can
consider as tipping points. The presence of these critical points suggests that
there are phase transitions where the dynamic scoring of the games varies
significantly.
|
1108.0786
|
All good things come in threes - Three beads learn to swim with lattice
Boltzmann and a rigid body solver
|
cs.CE cond-mat.soft physics.flu-dyn
|
We simulate the self-propulsion of devices in a fluid in the regime of low
Reynolds numbers. Each device consists of three bodies (spheres or capsules)
connected with two damped harmonic springs. Sinusoidal driving forces compress
the springs which are resolved within a rigid body physics engine. The latter
is consistently coupled to a 3D lattice Boltzmann framework for the fluid
dynamics. In simulations of three-sphere devices, we find that the propulsion
velocity agrees well with theoretical predictions. In simulations where some or
all spheres are replaced by capsules, we find that the asymmetry of the design
strongly affects the propelling efficiency.
|
1108.0831
|
Towards Spatio-Temporal SOLAP
|
cs.DB
|
The integration of Geographic Information Systems (GIS) and On-Line
Analytical Processing (OLAP), denoted SOLAP, is aimed at exploring and
analyzing spatial data. In real-world SOLAP applications, spatial and
non-spatial data are subject to changes. In this paper we present a temporal
query language for SOLAP, called TPiet-QL, supporting so-called discrete
changes (for example, in land use or cadastral applications there are
situations where parcels are merged or split). TPiet-QL allows expressing
integrated GIS-OLAP queries in an scenario where spatial objects change across
time.
|
1108.0840
|
Searching for Voltage Graph-Based LDPC Tailbiting Codes with Large Girth
|
cs.IT math.IT
|
The relation between parity-check matrices of quasi-cyclic (QC) low-density
parity-check (LDPC) codes and biadjacency matrices of bipartite graphs supports
searching for powerful LDPC block codes. Using the principle of tailbiting,
compact representations of bipartite graphs based on convolutional codes can be
found.
Bounds on the girth and the minimum distance of LDPC block codes constructed
in such a way are discussed. Algorithms for searching iteratively for LDPC
block codes with large girth and for determining their minimum distance are
presented. Constructions based on all-ones matrices, Steiner Triple Systems,
and QC block codes are introduced. Finally, new QC regular LDPC block codes
with girth up to 24 are given.
|
1108.0870
|
Noisy-Interference Sum-Rate Capacity for Vector Gaussian Interference
Channels
|
cs.IT math.IT
|
New sufficient conditions for a vector Gaussian interference channel to
achieve the sum-rate capacity by treating interference as noise are derived,
which generalize the existing results. More concise conditions for
multiple-input-single-output, and single-input-multiple-output scenarios are
obtained.
|
1108.0894
|
Evader Interdiction and Collateral Damage
|
cs.SI
|
In network interdiction problems, evaders (e.g., hostile agents or data
packets) may be moving through a network towards targets and we wish to choose
locations for sensors in order to intercept the evaders before they reach their
destinations. The evaders might follow deterministic routes or Markov chains,
or they may be reactive}, i.e., able to change their routes in order to avoid
sensors placed to detect them. The challenge in such problems is to choose
sensor locations economically, balancing security gains with costs, including
the inconvenience sensors inflict upon innocent travelers. We study the
objectives of 1) maximizing the number of evaders captured when limited by a
budget on sensing cost and 2) capturing all evaders as cheaply as possible. We
give optimal sensor placement algorithms for several classes of special graphs
and hardness and approximation results for general graphs, including for
deterministic or Markov chain-based and reactive or oblivious evaders. In a
similar-sounding but fundamentally different problem setting posed by
Rubinstein and Glazer where both evaders and innocent travelers are reactive,
we again give optimal algorithms for special cases and hardness and
approximation results on general graphs.
|
1108.0895
|
Accurate Estimators for Improving Minwise Hashing and b-Bit Minwise
Hashing
|
stat.ML cs.DB cs.IR cs.LG
|
Minwise hashing is the standard technique in the context of search and
databases for efficiently estimating set (e.g., high-dimensional 0/1 vector)
similarities. Recently, b-bit minwise hashing was proposed which significantly
improves upon the original minwise hashing in practice by storing only the
lowest b bits of each hashed value, as opposed to using 64 bits. b-bit hashing
is particularly effective in applications which mainly concern sets of high
similarities (e.g., the resemblance >0.5). However, there are other important
applications in which not just pairs of high similarities matter. For example,
many learning algorithms require all pairwise similarities and it is expected
that only a small fraction of the pairs are similar. Furthermore, many
applications care more about containment (e.g., how much one object is
contained by another object) than the resemblance. In this paper, we show that
the estimators for minwise hashing and b-bit minwise hashing used in the
current practice can be systematically improved and the improvements are most
significant for set pairs of low resemblance and high containment.
|
1108.0982
|
Outage Constrained Robust Transmit Optimization for Multiuser MISO
Downlinks: Tractable Approximations by Conic Optimization
|
cs.IT math.IT
|
In this paper we consider a probabilistic signal-to-interference and-noise
ratio (SINR) constrained problem for transmit beamforming design in the
presence of imperfect channel state information (CSI), under a multiuser
multiple-input single-output (MISO) downlink scenario. In particular, we deal
with outage-based quality-of-service constraints, where the probability of each
user's SINR not satisfying a service requirement must not fall below a given
outage probability specification. The study of solution approaches to the
probabilistic SINR constrained problem is important because CSI errors are
often present in practical systems and they may cause substantial SINR outages
if not handled properly. However, a major technical challenge is how to process
the probabilistic SINR constraints. To tackle this, we propose a novel
relaxation- restriction (RAR) approach, which consists of two key
ingredients-semidefinite relaxation (SDR), and analytic tools for
conservatively approximating probabilistic constraints. The underlying goal is
to establish approximate probabilistic SINR constrained formulations in the
form of convex conic optimization problems, so that they can be readily
implemented by available solvers. Using either an intuitive worst-case argument
or specialized probabilistic results, we develop various conservative
approximation schemes for processing probabilistic constraints with quadratic
uncertainties. Consequently, we obtain several RAR alternatives for handling
the probabilistic SINR constrained problem. Our techniques apply to both
complex Gaussian CSI errors and i.i.d. bounded CSI errors with unknown
distribution. Moreover, results obtained from our extensive simulations show
that the proposed RAR methods significantly improve upon existing ones, both in
terms of solution quality and computational complexity.
|
1108.1022
|
Information Complexity and Estimation
|
cs.IT math.IT
|
We consider an input $x$ generated by an unknown stationary ergodic source
$X$ that enters a signal processing system $J$, resulting in $w=J(x)$. We
observe $w$ through a noisy channel, $y=z(w)$; our goal is to estimate x from
$y$, $J$, and knowledge of $f_{Y|W}$. This is universal estimation, because
$f_X$ is unknown. We provide a formulation that describes a trade-off between
information complexity and noise. Initial theoretical, algorithmic, and
experimental evidence is presented in support of our approach.
|
1108.1045
|
A Data Mining Approach to the Diagnosis of Tuberculosis by Cascading
Clustering and Classification
|
cs.AI cs.DB
|
In this paper, a methodology for the automated detection and classification
of Tuberculosis(TB) is presented. Tuberculosis is a disease caused by
mycobacterium which spreads through the air and attacks low immune bodies
easily. Our methodology is based on clustering and classification that
classifies TB into two categories, Pulmonary Tuberculosis(PTB) and retroviral
PTB(RPTB) that is those with Human Immunodeficiency Virus (HIV) infection.
Initially K-means clustering is used to group the TB data into two clusters and
assigns classes to clusters. Subsequently multiple different classification
algorithms are trained on the result set to build the final classifier model
based on K-fold cross validation method. This methodology is evaluated using
700 raw TB data obtained from a city hospital. The best obtained accuracy was
98.7% from support vector machine (SVM) compared to other classifiers. The
proposed approach helps doctors in their diagnosis decisions and also in their
treatment planning procedures for different categories.
|
1108.1065
|
Onset of coherent attitude layers in a population of sports fans
|
cs.SI
|
The aim of this paper was to empirically investigate the behavior of fans,
globally coupled to a common environmental source of information. The
environmental stimuli were given in a form of referee's decisions list. The
sample of fans had to respond on each stimulus by associating points signifying
his/her own opinion, emotion and action that referee's decisions provoke. Data
were fitted by the Brillouin function which was a solution of an adapted model
of quantum statistical physics to social phenomena. Correlation and a principal
component analysis were performed in order to detect any collective behavior of
the social ensemble of fans. Results showed that fans behaved as a system
subject to a phase transition where the neutral state in the opinion, emotional
and action space has been destabilized and a new stable state of coherent
attitudes was formed. The enhancement of fluctuations and the increase of
social susceptibility (responsiveness) to referee's decisions were connected to
the first few decisions. The subsequent reduction of values in these parameters
signified the onset of coherent layering within the attitude space of the
social ensemble of fans. In the space of opinions fan coherence was maximal as
only one layer of coherence emerged. In the emotional and action spaces the
number of coherent levels was 2 and 4 respectively. The principal component
analysis revealed a strong collective behavior and a high degree of integration
within and between the opinion, emotional and action spaces of the sample of
fans. These results point to one possible way of how different proto-groups,
violent and moderate, may be formed as a consequence of global coupling to a
common source of information.
|
1108.1066
|
On the scalability and convergence of simultaneous parameter
identification and synchronization of dynamical systems
|
cs.SY math.DS math.OC nlin.CD
|
The synchronization of dynamical systems is a method that allows two systems
to have identical state trajectories, appart from an error converging to zero.
This method consists in an appropriate unidirectional coupling from one system
(drive) to the other (response). This requires that the response system shares
the same dynamical model with the drive. For the cases where the drive is
unknown, Chen proposed in 2002 a method to adapt the response system such that
synchronization is achieved, provided that (1) the response dynamical model is
linear with a vector of parameters, and (2) there is a parameter vector that
makes both system dynamics identical. However, this method has two limitations:
first, it does not scale well for complex parametric models (e.g., if the
number of parameters is greater than the state dimension), and second, the
model parameters are not guaranteed to converge, namely as the synchronization
error approaches zero. This paper presents an adaptation law addressing these
two limitations. Stability and convergence proofs, using Lyapunov's second
method, support the proposed adaptation law. Finally, numerical simulations
illustrate the advantages of the proposed method, namely showing cases where
the Chen's method fail, while the proposed one does not.
|
1108.1121
|
Analysis, Dimensioning and Robust Control of Shunt Active Filter for
Harmonic Currents Compensation in Electrical Mains
|
cs.SY math.OC
|
In this chapter some results related to Shunt Active Filters (SAFs) and
obtained by the authors and some coauthors are reported. SAFs are complex power
electronics equipments adopted to compensate for cur-rent harmonic pollution in
electric mains, due to nonlinear loads. By using a proper "floating" capacitor
as energy reservoir, the SAF purpose is to inject in the line grid currents
canceling the polluting har-monics. Control algorithms play a key role for such
devices and, in general, in many power electronics applications. Moreover,
systems theory is crucial, since it is the mathematical tool that enables a
deep understanding of the involved dynamics of such systems, allowing a correct
dimensioning, beside an effective control. As a matter of facts, current
injection objective can be straightforwardly formulated as an output tracking
control problem. In this fashion, the structural and insidious
marginally-stable internal/zero dynamics of SAFs can be immediately highlighted
and characterized in terms of sizing and control issues. For what concerns the
control design strictly, time-scale separation among output and internal
dynamics can be effectively exploited to split the control design in different
stages that can be later aggregated, by using singular perturbation analysis.
In addition, for robust asymptotic output tracking the Internal Model Principle
is adopted.
|
1108.1122
|
Leveraging Billions of Faces to Overcome Performance Barriers in
Unconstrained Face Recognition
|
cs.CV
|
We employ the face recognition technology developed in house at face.com to a
well accepted benchmark and show that without any tuning we are able to
considerably surpass state of the art results. Much of the improvement is
concentrated in the high-valued performance point of zero false positive
matches, where the obtained recall rate almost doubles the best reported result
to date. We discuss the various components and innovations of our system that
enable this significant performance gap. These components include extensive
utilization of an accurate 3D reconstructed shape model dealing with challenges
arising from pose and illumination. In addition, discriminative models based on
billions of faces are used in order to overcome aging and facial expression as
well as low light and overexposure. Finally, we identify a challenging set of
identification queries that might provide useful focus for future research.
|
1108.1136
|
Capacity Region of Vector Gaussian Interference Channels with Generally
Strong Interference
|
cs.IT math.IT
|
An interference channel is said to have strong interference if for all input
distributions, the receivers can fully decode the interference. This definition
of strong interference applies to discrete memoryless, scalar and vector
Gaussian interference channels. However, there exist vector Gaussian
interference channels that may not satisfy the strong interference condition
but for which the capacity can still be achieved by jointly decoding the signal
and the interference. This kind of interference is called generally strong
interference. Sufficient conditions for a vector Gaussian interference channel
to have generally strong interference are derived. The sum-rate capacity and
the boundary points of the capacity region are also determined.
|
1108.1161
|
On generic erasure correcting sets and related problems
|
cs.IT math.IT
|
Motivated by iterative decoding techniques for the binary erasure channel
Hollmann and Tolhuizen introduced and studied the notion of generic erasure
correcting sets for linear codes. A generic $(r,s)$--erasure correcting set
generates for all codes of codimension $r$ a parity check matrix that allows
iterative decoding of all correctable erasure patterns of size $s$ or less. The
problem is to derive bounds on the minimum size $F(r,s)$ of generic erasure
correcting sets and to find constructions for such sets. In this paper we
continue the study of these sets. We derive better lower and upper bounds.
Hollmann and Tolhuizen also introduced the stronger notion of $(r,s)$--sets and
derived bounds for their minimum size $G(r,s)$. Here also we improve these
bounds. We observe that these two conceps are closely related to so called
$s$--wise intersecting codes, an area, in which $G(r,s)$ has been studied
primarily with respect to ratewise performance. We derive connections. Finally,
we observed that hypergraph covering can be used for both problems to derive
good upper bounds.
|
1108.1169
|
Learning Representations by Maximizing Compression
|
cs.CV
|
We give an algorithm that learns a representation of data through
compression. The algorithm 1) predicts bits sequentially from those previously
seen and 2) has a structure and a number of computations similar to an
autoencoder. The likelihood under the model can be calculated exactly, and
arithmetic coding can be used directly for compression. When training on digits
the algorithm learns filters similar to those of restricted boltzman machines
and denoising autoencoders. Independent samples can be drawn from the model by
a single sweep through the pixels. The algorithm has a good compression
performance when compared to other methods that work under random ordering of
pixels.
|
1108.1170
|
Convex Optimization without Projection Steps
|
math.OC cs.AI cs.SY
|
For the general problem of minimizing a convex function over a compact convex
domain, we will investigate a simple iterative approximation algorithm based on
the method by Frank & Wolfe 1956, that does not need projection steps in order
to stay inside the optimization domain. Instead of a projection step, the
linearized problem defined by a current subgradient is solved, which gives a
step direction that will naturally stay in the domain. Our framework
generalizes the sparse greedy algorithm of Frank & Wolfe and its primal-dual
analysis by Clarkson 2010 (and the low-rank SDP approach by Hazan 2008) to
arbitrary convex domains. We give a convergence proof guaranteeing
{\epsilon}-small duality gap after O(1/{\epsilon}) iterations.
The method allows us to understand the sparsity of approximate solutions for
any l1-regularized convex optimization problem (and for optimization over the
simplex), expressed as a function of the approximation quality. We obtain
matching upper and lower bounds of {\Theta}(1/{\epsilon}) for the sparsity for
l1-problems. The same bounds apply to low-rank semidefinite optimization with
bounded trace, showing that rank O(1/{\epsilon}) is best possible here as well.
As another application, we obtain sparse matrices of O(1/{\epsilon}) non-zero
entries as {\epsilon}-approximate solutions when optimizing any convex function
over a class of diagonally dominant symmetric matrices.
We show that our proposed first-order method also applies to nuclear norm and
max-norm matrix optimization problems. For nuclear norm regularized
optimization, such as matrix completion and low-rank recovery, we demonstrate
the practical efficiency and scalability of our algorithm for large matrix
problems, as e.g. the Netflix dataset. For general convex optimization over
bounded matrix max-norm, our algorithm is the first with a convergence
guarantee, to the best of our knowledge.
|
1108.1228
|
An index for regular expression queries: Design and implementation
|
cs.DB cs.IR
|
The like regular expression predicate has been part of the SQL standard since
at least 1989. However, despite its popularity and wide usage, database vendors
provide only limited indexing support for regular expression queries which
almost always require a full table scan.
In this paper we propose a rigorous and robust approach for providing
indexing support for regular expression queries. Our approach consists of
formulating the indexing problem as a combinatorial optimization problem. We
begin with a database, abstracted as a collection of strings. From this data
set we generate a query workload. The input to the optimization problem is the
database and the workload. The output is a set of multigrams (substrings) which
can be used as keys to records which satisfy the query workload. The multigrams
can then be integrated with the data structure (like B+ trees) to provide
indexing support for the queries. We provide a deterministic and a randomized
approximation algorithm (with provable guarantees) to solve the optimization
problem. Extensive experiments on synthetic data sets demonstrate that our
approach is accurate and efficient.
We also present a case study on PROSITE patterns - which are complex regular
expression signatures for classes of proteins. Again, we are able to
demonstrate the utility of our indexing approach in terms of accuracy and
efficiency. Thus, perhaps for the first time, there is a robust and practical
indexing mechanism for an important class of database queries.
|
1108.1262
|
1st International Workshop on Complex Systems in Sports - Proceedings
|
cs.SI physics.soc-ph
|
Online proceedings for the first workshop on complex systems in sports; index
pointing to the papers that will be presented and discussed in that workshop.
The papers deal with sports from a complex systems point of view, and include
papers on a network analysis of the performance of the Spanish team in the 2010
world cup and basketball scoring, study of populations of sports fans, try to
select attributes for sports forecasting and finally try to analyze the
physical condition from the perspective of complexity.
|
1108.1275
|
Neutral evolution: A null model for language dynamics
|
physics.soc-ph cs.SI
|
We review the task of aligning simple models for language dynamics with
relevant empirical data, motivated by the fact that this is rarely attempted in
practice despite an abundance of abstract models. We propose that one way to
meet this challenge is through the careful construction of null models. We
argue in particular that rejection of a null model must have important
consequences for theories about language dynamics if modelling is truly to be
worthwhile. Our main claim is that the stochastic process of neutral evolution
(also known as genetic drift or random copying) is a viable null model for
language dynamics. We survey empirical evidence in favour and against neutral
evolution as a mechanism behind historical language changes, highlighting the
theoretical implications in each case.
|
1108.1331
|
Three-term Method and Dual Estimate on Static Problems of Continuum
Bodies
|
cs.CE math.OC
|
This work aims to provide standard formulations for direct minimization
approaches on various types of static problems of continuum mechanics.
Particularly, form-finding problems of tension structures are discussed in the
first half and the large deformation problems of continuum bodies are discussed
in the last half. In the first half, as the standards of iterative direct
minimization strategies, two types of simple recursive methods are presented,
namely the two-term method and the three-term method. The dual estimate is also
introduced as a powerful means of involving equally constraint conditions into
minimization problems. As examples of direct minimization approaches on usual
engineering issues, some form finding problems of tension structures which can
be solved by the presented strategies are illustrated. Additionally, it is
pointed out that while the two-term method sometimes becomes useless, the
three-term method always provides remarkable rate of global convergence
efficiency. Then, to show the potential ability of the three-term method, in
the last part of this work, some principle of virtual works which usually
appear in the continuum mechanics are approximated and discretized in a common
manner, which are suitable to be solved by the three-term method. Finally, some
large deformation analyses of continuum bodies which can be solved by the
three-term method are presented.
|
1108.1353
|
Real time face recognition using adaboost improved fast PCA algorithm
|
cs.CV
|
This paper presents an automated system for human face recognition in a real
time background world for a large homemade dataset of persons face. The task is
very difficult as the real time background subtraction in an image is still a
challenge. Addition to this there is a huge variation in human face image in
terms of size, pose and expression. The system proposed collapses most of this
variance. To detect real time human face AdaBoost with Haar cascade is used and
a simple fast PCA and LDA is used to recognize the faces detected. The matched
face is then used to mark attendance in the laboratory, in our case. This
biometric system is a real time attendance system based on the human face
recognition with a simple and fast algorithms and gaining a high accuracy
rate..
|
1108.1361
|
Variability of location management costs with different mobilities and
timer periods to update locations
|
cs.SI
|
In this article, we examine the Location Management costs in mobile
communication networks utilizing the timer-based method. From the study of the
probabilities that a mobile terminal changes a number of Location Areas between
two calls, we identify a threshold value of 0.7 for the Call-to-Mobility Ratio
(CMR) below which the application of the timer-based method is most
appropriate. We characterize the valley appearing in the evolution of the costs
with the timeout period, showing that the time interval required to reach 90%
of the stabilized costs grows with the mobility index, the paging cost per
Location Area and the movement dimension, in opposition to the behavior
presented by the time interval that achieves the minimum of the costs. The
results obtained for CMRs below the suggested 0.7 threshold show that the
valley appearing in the costs tends to disappear for CMRs within [0.001, 0.7]
in onedimensional movements and within [0.2, 0.7] in two-dimensional ones, and
when the normalized paging cost per Location Area is below 0.3.
|
1108.1367
|
Savings in location management costs leveraging user statistics
|
cs.NI cs.SI
|
The growth in the number of users in mobile communications networks and the
rise in the traffic generated by each user, are responsible for the increasing
importance of Mobility Management. Within Mobility Management, the main
objective of Location Management is to enable the roaming of the user in the
coverage area. In this paper, we analyze the savings in Location Management
costs obtained leveraging the users' statistics, in comparison with the
classical strategy. In particular, we introduce two novel algorithms to obtain
the Beta parameters (useful terms in the calculation of location update costs
for different Location Management strategies), utilizing a geographical study
of relative positions of the cells within the location areas. Eventually, we
discuss the influence of the different network parameters on the total Location
Management costs savings for both the radio interface and the fixed network
part, providing useful guidelines for the optimum design of the networks.
|
1108.1378
|
An Efficient Architecture for Information Retrieval in P2P Context Using
Hypergraph
|
cs.DB cs.PF
|
Peer-to-peer (P2P) Data-sharing systems now generate a significant portion of
Internet traffic. P2P systems have emerged as an accepted way to share enormous
volumes of data. Needs for widely distributed information systems supporting
virtual organizations have given rise to a new category of P2P systems called
schema-based. In such systems each peer is a database management system in
itself, ex-posing its own schema. In such a setting, the main objective is the
efficient search across peer databases by processing each incoming query
without overly consuming bandwidth. The usability of these systems depends on
successful techniques to find and retrieve data; however, efficient and
effective routing of content-based queries is an emerging problem in P2P
networks. This work was attended as an attempt to motivate the use of mining
algorithms in the P2P context may improve the significantly the efficiency of
such methods. Our proposed method based respectively on combination of
clustering with hypergraphs. We use ECCLAT to build approximate clustering and
discovering meaningful clusters with slight overlapping. We use an algorithm
MTMINER to extract all minimal transversals of a hypergraph (clusters) for
query routing. The set of clusters improves the robustness in queries routing
mechanism and scalability in P2P Network. We compare the performance of our
method with the baseline one considering the queries routing problem. Our
experimental results prove that our proposed methods generate impressive levels
of performance and scalability with with respect to important criteria such as
response time, precision and recall.
|
1108.1410
|
Distributed Detection over Noisy Networks: Large Deviations Analysis
|
cs.IT math.IT
|
We study the large deviations performance of consensus+innovations
distributed detection over noisy networks, where sensors at a time step k
cooperate with immediate neighbors (consensus) and assimilate their new
observations (innovation.) We show that, even under noisy communication,
\emph{all sensors} can achieve exponential decay e^{-k C_{\mathrm{dis}}} of the
detection error probability, even when certain (or most) sensors cannot detect
the event of interest in isolation. We achieve this by designing a single time
scale stochastic approximation type distributed detector with the optimal
weight sequence {\alpha_k}, by which sensors weigh their neighbors' messages.
The optimal design of {\alpha_k} balances the opposing effects of communication
noise and information flow from neighbors: larger, slowly decaying \alpha_k
improves information flow but injects more communication noise. Further, we
quantify the best achievable C_{\mathrm{dis}} as a function of the sensing
signal and noise, communication noise, and network connectivity. Finally, we
find a threshold on the communication noise power below which a sensor that can
detect the event in isolation still improves its detection by cooperation
through noisy links.
|
1108.1421
|
On the Secrecy Degrees of Freedom of Multi-Antenna Wiretap Channels with
Delayed CSIT
|
cs.IT math.IT
|
The secrecy degrees of freedom (SDoF) of the Gaussian multiple-input and
single-output (MISO) wiretap channel is studied under the assumption that
delayed channel state information (CSI) is available at the transmitter and
each receiver knows its own instantaneous channel. We first show that a
strictly positive SDoF can be guaranteed whenever the transmitter has delayed
CSI (either on the legitimate channel or/and the eavesdropper channel). In
particular, in the case with delayed CSI on both channels, it is shown that the
optimal SDoF is 2/3. We then generalize the result to the two-user Gaussian
MISO broadcast channel with confidential messages and characterize the SDoF
region when the transmitter has delayed CSI of both receivers. Interestingly,
the artificial noise schemes exploiting several time instances are shown to
provide the optimal SDoF region by masking the confidential message to the
unintended receiver while aligning the interference at each receiver.
|
1108.1434
|
A Novel Approach for Authenticating Textual or Graphical Passwords Using
Hopfield Neural Network
|
cs.CR cs.NE
|
Password authentication using Hopfield Networks is presented in this paper.
In this paper we discussed the Hopfield Network Scheme for Textual and
graphical passwords, for which input Password will be converted in to
probabilistic values. We observed how to get password authentication using
Probabilistic values for Textual passwords and Graphical passwords. This study
proposes the use of a Hopfield neural network technique for password
authentication. In comparison to existing layered neural network techniques,
the proposed method provides better accuracy and quicker response time to
registration and password changes.
|
1108.1440
|
Clustering in large networks does not promote upstream reciprocity
|
physics.soc-ph cs.SI
|
Upstream reciprocity (also called generalized reciprocity) is a putative
mechanism for cooperation in social dilemma situations with which players help
others when they are helped by somebody else. It is a type of indirect
reciprocity. Although upstream reciprocity is often observed in experiments,
most theories suggest that it is operative only when players form short cycles
such as triangles, implying a small population size, or when it is combined
with other mechanisms that promote cooperation on their own. An expectation is
that real social networks, which are known to be full of triangles and other
short cycles, may accommodate upstream reciprocity. In this study, I extend the
upstream reciprocity game proposed for a directed cycle by Boyd and Richerson
to the case of general networks. The model is not evolutionary and concerns the
conditions under which the unanimity of cooperative players is a Nash
equilibrium. I show that an abundance of triangles or other short cycles in a
network does little to promote upstream reciprocity. Cooperation is less likely
for a larger population size even if triangles are abundant in the network. In
addition, in contrast to the results for evolutionary social dilemma games on
networks, scale-free networks lead to less cooperation than networks with a
homogeneous degree distribution.
|
1108.1441
|
Spatial Degrees of Freedom of the Multicell MIMO Multiple Access Channel
|
cs.IT math.IT
|
We consider a homogeneous multiple cellular scenario with multiple users per
cell, i.e., $K\geq 1$ where $K$ denotes the number of users in a cell. In this
scenario, a degrees of freedom outer bound as well as an achievable scheme that
attains the degrees of freedom outer bound of the multicell multiple access
channel (MAC) with constant channel coefficients are investigated. The users
have $M$ antennas, and the base stations are equipped with $N$ antennas. The
found outer bound is general in that it characterizes a degrees of freedom
upper bound for $K\geq 1$ and $L>1$ where $L$ denotes the number of cells. The
achievability of the degrees of freedom outer bound is studied for two cell
case (i.e., L=2). The achievable schemes that attains the degrees of freedom
outer bound for L=2 are based on two approaches. The first scheme is a simple
zero forcing with $M=K\beta+\beta$ and $N=K\beta$, and the second approach is
null space interference alignment with $M=K\beta$ and $N=K\beta+\beta$ where
$\beta>0$ is a positive integer.
|
1108.1464
|
Cutaneous Force Feedback as a Sensory Subtraction Technique in Haptics
|
cs.RO
|
A novel sensory substitution technique is presented. Kinesthetic and
cutaneous force feedback are substituted by cutaneous feedback (CF) only,
provided by two wearable devices able to apply forces to the index finger and
the thumb, while holding a handle during a teleoperation task. The force
pattern, fed back to the user while using the cutaneous devices, is similar, in
terms of intensity and area of application, to the cutaneous force pattern
applied to the finger pad while interacting with a haptic device providing both
cutaneous and kinesthetic force feedback. The pattern generated using the
cutaneous devices can be thought as a subtraction between the complete haptic
feedback (HF) and the kinesthetic part of it. For this reason, we refer to this
approach as sensory subtraction instead of sensory substitution. A needle
insertion scenario is considered to validate the approach. The haptic device is
connected to a virtual environment simulating a needle insertion task.
Experiments show that the perception of inserting a needle using the
cutaneous-only force feedback is nearly indistinguishable from the one felt by
the user while using both cutaneous and kinesthetic feedback. As most of the
sensory substitution approaches, the proposed sensory subtraction technique
also has the advantage of not suffering from stability issues of teleoperation
systems due, for instance, to communication delays. Moreover, experiments show
that the sensory subtraction technique outperforms sensory substitution with
more conventional visual feedback (VF).
|
1108.1488
|
'Just Enough' Ontology Engineering
|
cs.AI
|
This paper introduces 'just enough' principles and 'systems engineering'
approach to the practice of ontology development to provide a minimal yet
complete, lightweight, agile and integrated development process, supportive of
stakeholder management and implementation independence.
|
1108.1500
|
Gender Recognition Based on Sift Features
|
cs.AI cs.CV
|
This paper proposes a robust approach for face detection and gender
classification in color images. Previous researches about gender recognition
suppose an expensive computational and time-consuming pre-processing step in
order to alignment in which face images are aligned so that facial landmarks
like eyes, nose, lips, chin are placed in uniform locations in image. In this
paper, a novel technique based on mathematical analysis is represented in three
stages that eliminates alignment step. First, a new color based face detection
method is represented with a better result and more robustness in complex
backgrounds. Next, the features which are invariant to affine transformations
are extracted from each face using scale invariant feature transform (SIFT)
method. To evaluate the performance of the proposed algorithm, experiments have
been conducted by employing a SVM classifier on a database of face images which
contains 500 images from distinct people with equal ratio of male and female.
|
1108.1502
|
Generalized Louvain Method for Community Detection in Large Networks
|
cs.SI physics.soc-ph
|
In this paper we present a novel strategy to discover the community structure
of (possibly, large) networks. This approach is based on the well-know concept
of network modularity optimization. To do so, our algorithm exploits a novel
measure of edge centrality, based on the k-paths. This technique allows to
efficiently compute a edge ranking in large networks in near linear time. Once
the centrality ranking is calculated, the algorithm computes the pairwise
proximity between nodes of the network. Finally, it discovers the community
structure adopting a strategy inspired by the well-known state-of-the-art
Louvain method (henceforth, LM), efficiently maximizing the network modularity.
The experiments we carried out show that our algorithm outperforms other
techniques and slightly improves results of the original LM, providing reliable
results. Another advantage is that its adoption is naturally extended even to
unweighted networks, differently with respect to the LM.
|
1108.1510
|
How Hidden are Hidden Processes? A Primer on Crypticity and Entropy
Convergence
|
physics.data-an cond-mat.stat-mech cs.IT math.DS math.IT math.ST nlin.CD stat.TH
|
We investigate a stationary process's crypticity---a measure of the
difference between its hidden state information and its observed
information---using the causal states of computational mechanics. Here, we
motivate crypticity and cryptic order as physically meaningful quantities that
monitor how hidden a hidden process is. This is done by recasting previous
results on the convergence of block entropy and block-state entropy in a
geometric setting, one that is more intuitive and that leads to a number of new
results. For example, we connect crypticity to how an observer synchronizes to
a process. We show that the block-causal-state entropy is a convex function of
block length. We give a complete analysis of spin chains. We present a
classification scheme that surveys stationary processes in terms of their
possible cryptic and Markov orders. We illustrate related entropy convergence
behaviors using a new form of foliated information diagram. Finally, along the
way, we provide a variety of interpretations of crypticity and cryptic order to
establish their naturalness and pervasiveness. Hopefully, these will inspire
new applications in spatially extended and network dynamical systems.
|
1108.1522
|
Wireless MIMO Switching with Zero-forcing Relaying and Network-coded
Relaying
|
cs.IT cs.NI math.IT
|
A wireless relay with multiple antennas is called a
multiple-input-multiple-output (MIMO) switch if it maps its input links to its
output links using "precode-and-forward." Namely, the MIMO switch precodes the
received signal vector in the uplink using some matrix for transmission in the
downlink. This paper studies the scenario of $K$ stations and a MIMO switch,
which has full channel state information. The precoder at the MIMO switch is
either a zero-forcing matrix or a network-coded matrix. With the zero-forcing
precoder, each destination station receives only its desired signal with
enhanced noise but no interference. With the network-coded precoder, each
station receives not only its desired signal and noise, but possibly also
self-interference, which can be perfectly canceled. Precoder design for
optimizing the received signal-to-noise ratios at the destinations is
investigated. For zero-forcing relaying, the problem is solved in closed form
in the two-user case, whereas in the case of more users, efficient algorithms
are proposed and shown to be close to what can be achieved by extensive random
search. For network-coded relaying, we present efficient iterative algorithms
that can boost the throughput further.
|
1108.1530
|
Evolving A-Type Artificial Neural Networks
|
cs.NE
|
We investigate Turing's notion of an A-type artificial neural network. We
study a refinement of Turing's original idea, motivated by work of Teuscher,
Bull, Preen and Copeland. Our A-types can process binary data by accepting and
outputting sequences of binary vectors; hence we can associate a function to an
A-type, and we say the A-type {\em represents} the function. There are two
modes of data processing: clamped and sequential. We describe an evolutionary
algorithm, involving graph-theoretic manipulations of A-types, which searches
for A-types representing a given function. The algorithm uses both mutation and
crossover operators. We implemented the algorithm and applied it to three
benchmark tasks. We found that the algorithm performed much better than a
random search. For two out of the three tasks, the algorithm with crossover
performed better than a mutation-only version.
|
1108.1535
|
Robust Coding for Lossy Computing with Receiver-Side Observation Costs
|
cs.IT math.IT
|
An encoder wishes to minimize the bit rate necessary to guarantee that a
decoder is able to calculate a symbolwise function of a sequence available only
at the encoder and a sequence that can be measured only at the decoder. This
classical problem, first studied by Yamamoto, is addressed here by including
two new aspects: (i) The decoder obtains noisy measurements of its sequence,
where the quality of such measurements can be controlled via a cost-constrained
"action" sequence; (ii) Measurement at the decoder may fail in a way that is
unpredictable to the encoder, thus requiring robust encoding. The considered
scenario generalizes known settings such as the Heegard-Berger-Kaspi and the
"source coding with a vending machine" problems. The rate-distortion-cost
function is derived and numerical examples are also worked out to obtain
further insight into the optimal system design.
|
1108.1549
|
A frequency approach to topological identification and graphical
modeling
|
cs.SY cs.SI math.OC
|
This works explores and illustrates recent results developed by the author in
field of dynamical network analysis. The considered approach is blind, i.e., no
a priori assumptions on the interconnected systems are available. Moreover, the
perspective is that of a simple "observer" who can perform no kind of test on
the network in order to study the related response, that is no action or
forcing input aimed to reveal particular responses of the system can be
performed. In such a scenario a frequency based method of investigation is
developed to obtain useful insights on the network. The information thus
derived can be fruitfully exploited to build acyclic graphical models, which
can be seen as extension of Bayesian Networks or Markov Chains. Moreover, it is
shown that the topology of polytree linear networks can be exactly identified
via the same mathematical tools. In this respect, it is worth observing that
important real systems, such as all the transportation networks, fit this
class.
|
1108.1572
|
Optimal Rate for Irregular LDPC Codes in Binary Erasure Channel
|
cs.IT math.IT
|
In this paper, we introduce a new practical and general method for solving
the main problem of designing the capacity approaching, optimal rate, irregular
low-density parity-check (LDPC) code ensemble over binary erasure channel
(BEC). Compared to some new researches, which are based on application of
asymptotic analysis tools out of optimization process, the proposed method is
much simpler, faster, accurate and practical. Because of not using any
relaxation or any approximate solution like previous works, the found answer
with this method is optimal. We can construct optimal variable node degree
distribution for any given binary erasure rate, {\epsilon}, and any check node
degree distribution. The presented method is implemented and works well in
practice. The time complexity of this method is of polynomial order. As a
result, we obtain some degree distribution which their rates are close to the
capacity.
|
1108.1589
|
Imitation of Life: Advanced system for native Artificial Evolution
|
cs.NE q-bio.PE
|
A model for artificial evolution in native x86 Windows systems has been
developed at the end of 2010. In this text, further improvements and additional
analogies to natural microbiologic processes are presented. Several experiments
indicate the capability of the system - and raise the question of possible
countermeasures.
|
1108.1597
|
Evolving network models under a dynamic growth rule
|
physics.soc-ph cs.SI
|
Evolving network models under a dynamic growth rule which comprises the
addition and deletion of nodes are investigated. By adding a node with a
probability $P_a$ or deleting a node with the probability $P_d=1-P_a$ at each
time step, where $P_a$ and $P_d$ are determined by the Logistic population
equation, topological properties of networks are studied. All the fat-tailed
degree distributions observed in real systems are obtained, giving the evidence
that the mechanism of addition and deletion can lead to the diversity of degree
distribution of real systems. Moreover, it is found that the networks exhibit
nonstationary degree distributions, changing from the power-law to the
exponential one or from the exponential to the Gaussian one. These results can
be expected to shed some light on the formation and evolution of real complex
real-world networks.
|
1108.1636
|
A new embedding quality assessment method for manifold learning
|
cs.CV cs.LG
|
Manifold learning is a hot research topic in the field of computer science. A
crucial issue with current manifold learning methods is that they lack a
natural quantitative measure to assess the quality of learned embeddings, which
greatly limits their applications to real-world problems. In this paper, a new
embedding quality assessment method for manifold learning, named as
Normalization Independent Embedding Quality Assessment (NIEQA), is proposed.
Compared with current assessment methods which are limited to isometric
embeddings, the NIEQA method has a much larger application range due to two
features. First, it is based on a new measure which can effectively evaluate
how well local neighborhood geometry is preserved under normalization, hence it
can be applied to both isometric and normalized embeddings. Second, it can
provide both local and global evaluations to output an overall assessment.
Therefore, NIEQA can serve as a natural tool in model selection and evaluation
tasks for manifold learning. Experimental results on benchmark data sets
validate the effectiveness of the proposed method.
|
1108.1645
|
A joint time-invariant filtering approach to the linear Gaussian relay
problem
|
cs.IT math.IT
|
In this paper, the linear Gaussian relay problem is considered. Under the
linear time-invariant (LTI) model the problem is formulated in the frequency
domain based on the Toeplitz distribution theorem. Under the further assumption
of realizable input spectra, the LTI Gaussian relay problem is converted to a
joint design problem of source and relay filters under two power constraints,
one at the source and the other at the relay, and a practical solution to this
problem is proposed based on the projected subgradient method. Numerical
results show that the proposed method yields a noticeable gain over the
instantaneous amplify-and-forward (AF) scheme in inter-symbol interference
(ISI) channels. Also, the optimality of the AF scheme within the class of
one-tap relay filters is established in flat-fading channels.
|
1108.1656
|
Emergent bipartiteness in a society of knights and knaves
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We propose a simple model of a social network based on so-called
knights-and-knaves puzzles. The model describes the formation of networks
between two classes of agents where links are formed by agents introducing
their neighbours to others of their own class. We show that if the proportion
of knights and knaves is within a certain range, the network self-organizes to
a perfectly bipartite state. However, if the excess of one of the two classes
is greater than a threshold value, bipartiteness is not observed. We offer a
detailed theoretical analysis for the behaviour of the model, investigate its
behaviou r in the thermodynamic limit, and argue that it provides a simple
example of a topology-driven model whose behaviour is strongly reminiscent of a
first-order phase transitions far from equilibrium.
|
1108.1676
|
Sub-modularity and Antenna Selection in MIMO systems
|
cs.IT math.IT
|
In this paper, we show that the optimal receive antenna subset selection
problem for maximizing the mutual information in a point-to-point MIMO system
is sub-modular. Consequently, a greedy step-wise optimization approach, where
at each step an antenna that maximizes the incremental gain is added to the
existing antenna subset, is guaranteed to be within a (1 - 1/e) fraction of the
global optimal value. For a single antenna equipped source and destination with
multiple relays, we show that the relay antenna selection problem to maximize
the mutual information is modular, when complete channel state information is
available at the relays. As a result a greedy step-wise optimization approach
leads to an optimal solution for the relay antenna selection problem with
linear complexity in comparison to the brute force search that incurs
exponential complexity.
|
1108.1689
|
A nonlinear preconditioner for experimental design problems
|
math.OC cs.SY
|
We address the slow convergence and poor stability of quasi-newton sequential
quadratic programming (SQP) methods that is observed when solving experimental
design problems, in particular when they are large. Our findings suggest that
this behavior is due to the fact that these problems often have bad absolute
condition numbers. To shed light onto the structure of the problem close to the
solution, we formulate a model problem (based on the $A$-criterion), that is
defined in terms of a given initial design that is to be improved. We prove
that the absolute condition number of the model problem grows without bounds as
the quality of the initial design improves. Additionally, we devise a
preconditioner that ensures that the condition number will instead stay
uniformly bounded. Using numerical experiments, we study the effect of this
reformulation on relevant cases of the general problem, and find that it leads
to significant improvements in stability and convergence behavior.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.