| id | title | categories | abstract |
|---|---|---|---|
1401.8044 | Preserving Lagrangian structure in nonlinear model reduction with
application to structural dynamics | cs.CE math.NA | This work proposes a model-reduction methodology that preserves Lagrangian
structure (equivalently Hamiltonian structure) and achieves computational
efficiency in the presence of high-order nonlinearities and arbitrary parameter
dependence. As such, the resulting reduced-order model retains key properties
such as energy conservation and symplectic time-evolution maps. We focus on
parameterized simple mechanical systems subjected to Rayleigh damping and
external forces, and consider an application to nonlinear structural dynamics.
To preserve structure, the method first approximates the system's `Lagrangian
ingredients'---the Riemannian metric, the potential-energy function, the
dissipation function, and the external force---and subsequently derives
reduced-order equations of motion by applying the (forced) Euler--Lagrange
equation with these quantities. From the algebraic perspective, key
contributions include two efficient techniques for approximating parameterized
reduced matrices while preserving symmetry and positive definiteness: matrix
gappy POD and reduced-basis sparsification (RBS). Results for a parameterized
truss-structure problem demonstrate the importance of preserving Lagrangian
structure and illustrate the proposed method's merits: it reduces computation
time while maintaining high accuracy and stability, in contrast to existing
nonlinear model-reduction techniques that do not preserve structure.
|
1401.8053 | Hallucinating optimal high-dimensional subspaces | cs.CV | Linear subspace representations of appearance variation are pervasive in
computer vision. This paper addresses the problem of robustly matching such
subspaces (computing the similarity between them) when they are used to
describe the scope of variations within sets of images of different (possibly
greatly so) scales. A naive solution of projecting the low-scale subspace into
the high-scale image space is described first and subsequently shown to be
inadequate, especially at large scale discrepancies. A successful approach is
proposed instead. It consists of (i) an interpolated projection of the
low-scale subspace into the high-scale space, which is followed by (ii) a
rotation of this initial estimate within the bounds of the imposed
``downsampling constraint''. The optimal rotation is found in closed form as
the one which best aligns the high-scale reconstruction of the low-scale subspace with
the reference it is compared to. The method is evaluated on the problem of
matching sets of (i) face appearances under varying illumination and (ii)
object appearances under varying viewpoint, using two large data sets. In
comparison to the naive matching, the proposed algorithm is shown to greatly
increase the separation of between-class and within-class similarities, as well
as produce far more meaningful modes of common appearance on which the match
score is based.
|
1401.8074 | Empirically Evaluating Multiagent Learning Algorithms | cs.GT cs.LG | There exist many algorithms for learning how to play repeated bimatrix games.
Most of these algorithms are justified in terms of some sort of theoretical
guarantee. On the other hand, little is known about the empirical performance
of these algorithms. Most such claims in the literature are based on small
experiments, which has hampered understanding as well as the development of new
multiagent learning (MAL) algorithms. We have developed a new suite of tools
for running multiagent experiments: the MultiAgent Learning Testbed (MALT).
These tools are designed to facilitate larger and more comprehensive
experiments by removing the need to build one-off experimental code. MALT also
provides baseline implementations of many MAL algorithms, hopefully eliminating
or reducing differences between algorithm implementations and increasing the
reproducibility of results. Using this test suite, we ran an experiment
unprecedented in size. We analyzed the results according to a variety of
performance metrics including reward, maxmin distance, regret, and several
notions of equilibrium convergence. We confirmed several pieces of conventional
wisdom, but also discovered some surprising results. For example, we found that
single-agent $Q$-learning outperformed many more complicated and more modern
MAL algorithms.
|
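The abstract's finding that plain single-agent Q-learning holds its own against more elaborate MAL algorithms is easy to reproduce in miniature. The sketch below is an illustrative setup (not MALT code): stateless epsilon-greedy Q-learning for the row player of a repeated bimatrix game against a fixed opponent.

```python
import random

def q_learning(payoff, opponent_action, episodes=2000,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Stateless epsilon-greedy Q-learning for the row player of a
    repeated bimatrix game; payoff[a][b] is the row player's reward."""
    rng = random.Random(seed)
    n = len(payoff)
    q = [0.0] * n
    for _ in range(episodes):
        a = rng.randrange(n) if rng.random() < eps else max(range(n), key=q.__getitem__)
        r = payoff[a][opponent_action]
        q[a] += alpha * (r + gamma * max(q) - q[a])
    return q

# Prisoner's-dilemma-style payoffs for the row player; against a defecting
# opponent (action 1), defection should end up with the higher Q-value.
q = q_learning([[3, 0], [5, 1]], opponent_action=1)
```

Against a fixed opponent the repeated game is effectively a bandit, which is why this simple learner can be competitive.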
1401.8090 | Analyzing Finite-length Protograph-based Spatially Coupled LDPC Codes | cs.IT math.IT | The peeling decoding for spatially coupled low-density parity-check (SC-LDPC)
codes is analyzed for a binary erasure channel. An analytical calculation of
the mean evolution of degree-one check nodes of protograph-based SC-LDPC codes
is given and an estimate for the covariance evolution of degree-one check nodes
is proposed in the stable decoding phase where the decoding wave propagates
along the chain of coupled codes. Both results are verified numerically.
Protograph-based SC-LDPC codes turn out to have a more robust behavior than
unstructured random SC-LDPC codes. Using the analytically calculated
parameters, the finite-length scaling laws for these constructions are given
and verified by numerical simulations.
|
1401.8092 | Cross-calibration of Time-of-flight and Colour Cameras | cs.CV cs.RO | Time-of-flight cameras provide depth information, which is complementary to
the photometric appearance of the scene in ordinary images. It is desirable to
merge the depth and colour information, in order to obtain a coherent scene
representation. However, the individual cameras will have different viewpoints,
resolutions and fields of view, which means that they must be mutually
calibrated. This paper presents a geometric framework for this multi-view and
multi-modal calibration problem. It is shown that three-dimensional projective
transformations can be used to align depth and parallax-based representations
of the scene, with or without Euclidean reconstruction. A new evaluation
procedure is also developed; this allows the reprojection error to be
decomposed into calibration and sensor-dependent components. The complete
approach is demonstrated on a network of three time-of-flight and six colour
cameras. The applications of such a system, to a range of automatic
scene-interpretation problems, are discussed.
|
1401.8126 | Extrinsic Methods for Coding and Dictionary Learning on Grassmann
Manifolds | cs.LG cs.CV stat.ML | Sparsity-based representations have recently led to notable results in
various visual recognition tasks. In a separate line of research, Riemannian
manifolds have been shown useful for dealing with features and models that do
not lie in Euclidean spaces. With the aim of building a bridge between the two
realms, we address the problem of sparse coding and dictionary learning over
the space of linear subspaces, which form Riemannian structures known as
Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into
the space of symmetric matrices by an isometric mapping. This in turn enables
us to extend two sparse coding schemes to Grassmann manifolds. Furthermore, we
propose closed-form solutions for learning a Grassmann dictionary, atom by
atom. Lastly, to handle non-linearity in data, we extend the proposed Grassmann
sparse coding and dictionary learning algorithms through embedding into Hilbert
spaces.
Experiments on several classification tasks (gender recognition, gesture
classification, scene analysis, face recognition, action recognition and
dynamic texture classification) show that the proposed approaches achieve
considerable improvements in discrimination accuracy, in comparison to
state-of-the-art methods such as kernelized Affine Hull Method and
graph-embedding Grassmann discriminant analysis.
|
1401.8132 | Heterogeneous Speed Profiles in Discrete Models for Pedestrian
Simulation | cs.MA | Discrete pedestrian simulation models are viable alternatives to
particle-based approaches that rely on a continuous spatial representation. The effects of
discretisation, however, also imply some difficulties in modelling certain
phenomena that can be observed in reality. This paper focuses on the
possibility to manage heterogeneity in the walking speed of the simulated
population of pedestrians by modifying an existing multi-agent model extending
the floor field approach. Whereas some discrete models allow pedestrians (or
cars, when applied to traffic modelling) to move more than a single cell per
time step, the present work proposes a maximum speed of one cell per step, but
we model lower speeds by having pedestrians yield their movement on some
turns. Different classes of pedestrians are associated with different desired
walking speeds and we define a stochastic mechanism ensuring that they maintain
an average speed close to this threshold. In the paper we formally describe the
model and we show the results of its application in benchmark scenarios.
Finally, we show how this approach can also support the definition of
slopes and stairs as elements reducing the walking speed of pedestrians
climbing them in a simulated scenario.
|
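The stochastic yielding mechanism lends itself to a one-line sketch: a pedestrian with desired speed v in (0, 1] cells per step moves with probability v and yields otherwise, so its average speed converges to the target. The simulation below is a single-pedestrian reduction for illustration, not the paper's full floor-field model.

```python
import random

def average_speed(desired_speed, steps=10000, seed=0):
    """Each turn the pedestrian advances one cell with probability equal
    to its desired speed (in cells/step) and yields the turn otherwise."""
    rng = random.Random(seed)
    pos = 0
    for _ in range(steps):
        if rng.random() < desired_speed:
            pos += 1
    return pos / steps

avg = average_speed(0.6)   # should settle close to the 0.6 cells/step target
```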
1401.8168 | Thresholds of absorbing sets in Low-Density-Parity-Check codes | cs.IT math.IT | In this paper, we investigate absorbing sets, responsible for error floors in
Low-Density Parity-Check (LDPC) codes. We look for a concise, quantitative way to rate
the absorbing sets' dangerousness. Based on a simplified model for iterative
decoding evolution, we show that absorbing sets exhibit a threshold behavior.
An absorbing set with at least one channel log-likelihood-ratio below the
threshold can stop the convergence towards the right codeword. Otherwise
convergence is guaranteed. We show that absorbing sets with negative thresholds
can be deactivated simply using proper saturation levels. We propose an
efficient algorithm to compute thresholds.
|
1401.8175 | Equilibrium Points of an AND-OR Tree: under Constraints on Probability | cs.AI | We study a probability distribution d on the truth assignments to a uniform
binary AND-OR tree. Liu and Tanaka [2007, Inform. Process. Lett.] showed the
following: If d achieves the equilibrium among independent distributions (ID)
then d is an independent identical distribution (IID). We show a stronger form
of the above result. Given a real number r such that 0 < r < 1, we consider a
constraint that the probability of the root node having the value 0 is r. Our
main result is the following: When we restrict ourselves to IDs satisfying this
constraint, the above result of Liu and Tanaka still holds. The proof employs
careful induction arguments. In particular, we show two fundamental
relationships between expected cost and probability in an IID on an OR-AND
tree: (1) The ratio of the cost to the probability (of the root having the
value 0) is a decreasing function of the probability x of the leaf. (2) The
ratio of derivative of the cost to the derivative of the probability is a
decreasing function of x, too.
|
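For an IID on a uniform binary tree, the probability that the root evaluates to 0 follows a simple bottom-up recursion, which the sketch below computes. The convention of an OR root with alternating levels is illustrative, not fixed by the paper.

```python
def root_zero_prob(leaf_prob, height):
    """Probability that the root of a uniform binary tree of the given
    height evaluates to 0, with IID leaves equal to 0 with probability
    leaf_prob. Levels alternate OR/AND with an OR root (assumed here)."""
    ops = []
    op = 'OR'
    for _ in range(height):
        ops.append(op)
        op = 'AND' if op == 'OR' else 'OR'
    p = leaf_prob
    for op in reversed(ops):            # fold from the leaves up
        if op == 'OR':                  # OR is 0 only if both children are 0
            p = p * p
        else:                           # AND is 0 if either child is 0
            p = 1 - (1 - p) ** 2
    return p

p = root_zero_prob(0.5, 2)   # AND layer: 0.75, then OR root: 0.75**2
```

Both maps are increasing in the leaf probability, so the root probability is monotone in x, consistent with the constraint-based analysis in the abstract.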
1401.8176 | Double percolation phase transition in clustered complex networks | physics.soc-ph cond-mat.dis-nn cs.SI | The internal organization of complex networks often has striking consequences
on either their response to external perturbations or on their dynamical
properties. In addition to small-world and scale-free properties, clustering is
the most common topological characteristic observed in many real networked
systems. In this paper, we report an extensive numerical study on the effects
of clustering on the structural properties of complex networks. Strong
clustering in heterogeneous networks induces the emergence of a core-periphery
organization that has a critical effect on the percolation properties of the
networks. We observe a novel double phase transition with an intermediate phase
in which only the core of the network is percolated and a final phase in which
the periphery percolates regardless of the core. This result implies breaking
of the same symmetry at two different values of the control parameter, in stark
contrast to the modern theory of continuous phase transitions. Inspired by this
core-periphery organization, we introduce a simple model that allows us to
analytically prove that such an anomalous phase transition is in fact possible.
|
1401.8192 | The Scheme of a Novel Methodology for Zonal Division Based on Power
Transfer Distribution Factors | cs.CE cs.CY cs.SY | One of the methodologies that carry out the division of the electrical grid
into zones is based on the aggregation of nodes characterized by similar Power
Transfer Distribution Factors (PTDFs). Here, we point out that a satisfactory
clustering algorithm should take into account two aspects. First, nodes of
similar impact on cross-border lines should be grouped together. Second,
cross-border power flows should be relatively insensitive to differences
between real and assumed Generation Shift Key matrices. We introduce a
theoretical basis of a novel clustering algorithm (BubbleClust) that fulfills
these requirements and we perform a case study to illustrate social welfare
consequences of the division.
|
1401.8199 | Wind Turbine Model and Observer in Takagi-Sugeno Model Structure | cs.SY | Based on a reduced-order, dynamic nonlinear wind turbine model in
Takagi-Sugeno (TS) model structure, a TS state observer is designed as a
disturbance observer to estimate the unknown effective wind speed. The TS
observer model is an exact representation of the underlying nonlinear model,
obtained by means of the sector-nonlinearity approach. The observer gain
matrices are obtained by means of a linear matrix inequality (LMI) design
approach for optimal fuzzy control, where weighting matrices for the individual
system states and outputs are included. The observer is tested in simulations
with the aero-elastic code FAST for the NREL 5 MW reference turbine, where it
shows a stable behaviour both for IEC wind gusts and turbulent wind input.
|
1401.8201 | Relative Expressive Power of Navigational Querying on Graphs | cs.DB cs.LO | Motivated by both established and new applications, we study navigational
query languages for graphs (binary relations). The simplest language has only
the two operators union and composition, together with the identity relation.
We make more powerful languages by adding any of the following operators:
intersection; set difference; projection; coprojection; converse; and the
diversity relation. All these operators map binary relations to binary
relations. We compare the expressive power of all resulting languages. We do
this not only for general path queries (queries where the result may be any
binary relation) but also for boolean or yes/no queries (expressed by the
nonemptiness of an expression). For both cases, we present the complete Hasse
diagram of relative expressiveness. In particular the Hasse diagram for boolean
queries contains some nontrivial separations and a few surprising collapses.
|
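The operators studied here have direct set-level semantics on finite graphs; a minimal sketch with binary relations represented as sets of pairs:

```python
def compose(r, s):
    """Relational composition: (a, c) whenever a -r-> b and b -s-> c."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def converse(r):
    """Swap every pair: the converse relation."""
    return {(b, a) for (a, b) in r}

def identity(nodes):
    """The identity relation over a node set."""
    return {(n, n) for n in nodes}

edges = {(1, 2), (2, 3), (3, 3)}
two_step = compose(edges, edges)       # pairs joined by a 2-edge path
expr = two_step | converse(edges)      # union, as in the simplest language
```

A boolean (yes/no) query in the abstract's sense is then just a nonemptiness test on the resulting set.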
1401.8206 | Decode-and-Forward Relay Beamforming with Secret and Non-Secret Messages | cs.IT math.IT | In this paper, we study beamforming in decode-and-forward (DF) relaying using
multiple relays, where the source node sends a secret message as well as a
non-secret message to the destination node in the presence of multiple
non-colluding eavesdroppers. The non-secret message is transmitted at a fixed
rate $R_{0}$ and requires no protection from the eavesdroppers, whereas the
secret message needs to be protected from the eavesdroppers. The source and
relays operate under a total power constraint. We find the optimum source
powers and weights of the relays for both secret and non-secret messages which
maximize the worst case secrecy rate for the secret message as well as meet the
information rate constraint $R_{0}$ for the non-secret message. We solve this
problem for the cases when ($i$) perfect channel state information (CSI) of all
links is known, and ($ii$) only the statistical CSI of the eavesdroppers' links
and perfect CSI of other links are known.
|
1401.8212 | Human Activity Recognition using Smartphone | cs.CY cs.HC cs.LG | Human activity recognition has wide applications in medical research and
human survey systems. In this project, we design a robust activity recognition
system based on a smartphone. The system uses a 3-dimensional smartphone
accelerometer as the only sensor to collect time-series signals, from which 31
features are generated in both the time and frequency domains. Activities are
classified using 4 different passive learning methods, i.e., quadratic
classifier, k-nearest neighbor algorithm, support vector machine, and
artificial neural networks. Dimensionality reduction is performed through both
feature extraction and subset selection. Besides passive learning, we also
apply active learning algorithms to reduce data-labeling expense. Experimental
results show that the classification rate of passive learning reaches 84.4% and
it is robust to common positions and poses of the cellphone. The results of active
learning on real data demonstrate a reduction of labeling labor to achieve
comparable performance with passive learning.
|
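The feature-then-classify pipeline can be sketched end to end. The two features below are toy stand-ins for the paper's 31 time/frequency features, and k-nearest neighbours is one of the four classifiers named; the synthetic accelerometer windows are invented for illustration.

```python
import math
import statistics

def extract_features(window):
    """Mean and standard deviation of the acceleration magnitude
    (toy stand-ins for the paper's 31 features)."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    return (statistics.mean(mags), statistics.pstdev(mags))

def knn_predict(train, query, k=3):
    """Plain k-nearest-neighbour majority vote in feature space."""
    neighbours = sorted(train, key=lambda tl: math.dist(tl[0], query))[:k]
    labels = [lbl for _, lbl in neighbours]
    return max(set(labels), key=labels.count)

# Synthetic windows: 'still' ~ near-constant gravity, 'walk' ~ oscillating.
still = [(0.0, 0.0, 9.8 + 0.01 * (i % 3)) for i in range(50)]
walk = [(math.sin(i / 2), math.cos(i / 3), 9.8 + 2 * math.sin(i)) for i in range(50)]
train = [(extract_features(still), 'still'), (extract_features(walk), 'walk'),
         (extract_features(still[10:40]), 'still'), (extract_features(walk[5:45]), 'walk')]
pred = knn_predict(train, extract_features(walk[20:]), k=3)
```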
1401.8226 | Sensing for Spectrum Sharing in Cognitive LTE-A Cellular Networks | cs.NI cs.IT math.IT | In this work we present a case for dynamic spectrum sharing between different
operators in systems with carrier aggregation (CA) which is an important
feature in 3GPP LTE-A systems. Cross-carrier scheduling and sensing are
identified as key enablers for such spectrum sharing in LTE-A. Sensing is
classified as Type 1 sensing and Type 2 sensing and the role of each in the
system operation is discussed. The more challenging Type 2 sensing which
involves sensing the interfering signal in the presence of a desired signal is
studied for a single-input single-output system. Energy detection and the most
powerful test are formulated. The probability of false alarm and of detection
are analyzed for energy detectors. Performance evaluations show that reasonable
sensing performance can be achieved with the use of channel state information,
making such sensing practically viable.
|
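The energy-detection step can be sketched in a few lines: set the test threshold from noise-only statistics for a target false-alarm probability, then compare the received energy against it. The Monte-Carlo calibration and the tone-plus-noise signal below are illustrative; a practical design would use the chi-square tail instead.

```python
import math
import random

def energy(samples):
    return sum(s * s for s in samples) / len(samples)

def empirical_threshold(noise_var, n, target_pfa, trials=20000, seed=1):
    """Pick the detection threshold from noise-only Monte-Carlo trials so
    that the false-alarm rate is approximately target_pfa."""
    rng = random.Random(seed)
    stats = sorted(
        energy([rng.gauss(0.0, math.sqrt(noise_var)) for _ in range(n)])
        for _ in range(trials))
    return stats[int((1 - target_pfa) * trials)]

n = 64
thr = empirical_threshold(noise_var=1.0, n=n, target_pfa=0.1)
rng = random.Random(2)
# A unit-power interfering tone buried in unit-variance noise: its energy
# should clear the noise-only threshold.
rx = [math.sqrt(2.0) * math.cos(0.3 * i) + rng.gauss(0.0, 1.0) for i in range(n)]
detected = energy(rx) > thr
```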
1401.8244 | On Routing-Optimal Network for Multiple Unicasts | cs.IT math.IT | In this paper, we consider networks with multiple unicast sessions.
Generally, non-linear network coding is needed to achieve the whole rate region
of network coding. Yet, there exist networks for which routing is sufficient to
achieve the whole rate region, and we refer to them as routing-optimal
networks. We identify a class of routing-optimal networks, which we refer to as
information-distributive networks, defined by three topological features. Due
to these features, for each rate vector achieved by network coding, there is
always a routing scheme such that it achieves the same rate vector, and the
traffic transmitted through the network is exactly the information transmitted
over the cut-sets between the sources and the sinks in the corresponding
network coding scheme. We present more examples of information-distributive
networks, including some examples from index coding and single unicast with
hard deadline constraint.
|
1401.8257 | Online Clustering of Bandits | cs.LG stat.ML | We introduce a novel algorithmic approach to content recommendation based on
adaptive clustering of exploration-exploitation ("bandit") strategies. We
provide a sharp regret analysis of this algorithm in a standard stochastic
noise setting, demonstrate its scalability properties, and prove its
effectiveness on a number of artificial and real-world datasets. Our
experiments show a significant increase in prediction performance over
state-of-the-art methods for bandit problems.
|
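The exploration-exploitation primitive underlying such recommenders can be sketched with plain UCB1 on Bernoulli arms. This is the generic bandit baseline, not the paper's clustering algorithm.

```python
import math
import random

def ucb1(arm_means, horizon=5000, seed=0):
    """UCB1 on Bernoulli arms: play each arm once, then pick the arm
    maximising empirical mean plus an optimism bonus."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts, sums = [0] * n, [0.0] * n
    for t in range(1, horizon + 1):
        if t <= n:
            a = t - 1                              # initialisation round
        else:
            a = max(range(n), key=lambda i:
                    sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = 1.0 if rng.random() < arm_means[a] else 0.0
        counts[a] += 1
        sums[a] += r
    return counts

counts = ucb1([0.2, 0.5, 0.8])   # the 0.8 arm should dominate the plays
```

Clustering users who share a bandit model, as the abstract proposes, effectively pools observations across such learners.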
1401.8261 | Infrared face recognition: a comprehensive review of methodologies and
databases | cs.CV | Automatic face recognition is an area with immense practical potential which
includes a wide range of commercial and law enforcement applications. Hence it
is unsurprising that it continues to be one of the most active research areas
of computer vision. Even after over three decades of intense research, the
state-of-the-art in face recognition continues to improve, benefitting from
advances in a range of different research fields such as image processing,
pattern recognition, computer graphics, and physiology. Systems based on
visible spectrum images, the most researched face recognition modality, have
reached a significant level of maturity with some practical success. However,
they continue to face challenges in the presence of illumination, pose and
expression changes, as well as facial disguises, all of which can significantly
decrease recognition accuracy. Amongst various approaches which have been
proposed in an attempt to overcome these limitations, the use of infrared (IR)
imaging has emerged as a particularly promising research direction. This paper
presents a comprehensive and timely review of the literature on this subject.
Our key contributions are: (i) a summary of the inherent properties of infrared
imaging which makes this modality promising in the context of face recognition,
(ii) a systematic review of the most influential approaches, with a focus on
emerging common trends as well as key differences between alternative
methodologies, (iii) a description of the main databases of infrared facial
images available to the researcher, and lastly (iv) a discussion of the most
promising avenues for future research.
|
1401.8265 | Sub-optimality of Treating Interference as Noise in the Cellular Uplink
with Weak Interference | cs.IT math.IT | Despite its simplicity, the scheme of treating interference as noise (TIN)
was shown to be sum-capacity optimal in the Gaussian interference channel
(IC) with very-weak (noisy) interference. In this paper, the 2-user IC is
altered by introducing an additional transmitter that wants to communicate with
one of the receivers of the IC. The resulting network thus consists of a
point-to-point channel interfering with a multiple access channel (MAC) and is
denoted PIMAC. The sum-capacity of the PIMAC is studied with main focus on the
optimality of TIN. It turns out that TIN in its naive variant, where all
transmitters are active and both receivers use TIN for decoding, is not the
best choice for the PIMAC. In fact, a scheme that combines both time division
multiple access and TIN (TDMA-TIN) strictly outperforms the naive-TIN scheme.
Furthermore, it is shown that in some regimes, TDMA-TIN achieves the
sum-capacity for the deterministic PIMAC and the sum-capacity within a constant
gap for the Gaussian PIMAC. Additionally, it is shown that, even for very-weak
interference, there are some regimes where a combination of interference
alignment with power control and treating interference as noise at the receiver
side outperforms TDMA-TIN. As a consequence, on the one hand, treating
interference as noise in a cellular uplink is approximately optimal in certain
regimes. On the other hand, those regimes cannot be simply described by the
strength of interference.
|
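The regime dependence is easy to see numerically even in the plain 2-user Gaussian IC. The unit-noise toy comparison below is an illustrative baseline, not the paper's PIMAC analysis.

```python
import math

def tin_sum_rate(p1, p2, a12, a21):
    """Both receivers decode their own signal treating the cross link
    (power gains a12, a21) as extra noise; unit-variance receiver noise."""
    r1 = math.log2(1 + p1 / (1 + a12 * p2))
    r2 = math.log2(1 + p2 / (1 + a21 * p1))
    return r1 + r2

def tdma_sum_rate(p1, p2):
    """Each user transmits half the time with doubled power, keeping the
    average power constraint and avoiding interference entirely."""
    return 0.5 * math.log2(1 + 2 * p1) + 0.5 * math.log2(1 + 2 * p2)

weak = tin_sum_rate(10.0, 10.0, 0.01, 0.01)    # very weak interference
strong = tin_sum_rate(10.0, 10.0, 1.0, 1.0)    # strong interference
tdma = tdma_sum_rate(10.0, 10.0)
```

With very weak cross gains TIN beats TDMA, while with strong interference the ordering flips, matching the abstract's point that neither scheme dominates across regimes.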
1401.8269 | Experiments with Three Approaches to Recognizing Lexical Entailment | cs.CL cs.AI cs.LG | Inference in natural language often involves recognizing lexical entailment
(RLE); that is, identifying whether one word entails another. For example,
"buy" entails "own". Two general strategies for RLE have been proposed: One
strategy is to manually construct an asymmetric similarity measure for context
vectors (directional similarity) and another is to treat RLE as a problem of
learning to recognize semantic relations using supervised machine learning
techniques (relation classification). In this paper, we experiment with two
recent state-of-the-art representatives of the two general strategies. The
first approach is an asymmetric similarity measure (an instance of the
directional similarity strategy), designed to capture the degree to which the
contexts of a word, a, form a subset of the contexts of another word, b. The
second approach (an instance of the relation classification strategy)
represents a word pair, a:b, with a feature vector that is the concatenation of
the context vectors of a and b, and then applies supervised learning to a
training set of labeled feature vectors. Additionally, we introduce a third
approach that is a new instance of the relation classification strategy. The
third approach represents a word pair, a:b, with a feature vector in which the
features are the differences in the similarities of a and b to a set of
reference words. All three approaches use vector space models (VSMs) of
semantics, based on word-context matrices. We perform an extensive evaluation
of the three approaches using three different datasets. The proposed new
approach (similarity differences) performs significantly better than the other
two approaches on some datasets and there is no dataset for which it is
significantly worse. Our results suggest it is beneficial to make connections
between the research in lexical entailment and the research in semantic
relation classification.
|
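The third approach reduces to a few lines once context vectors are available. The 3-dimensional vectors and reference words below are made-up toys for illustration, not drawn from the paper's word-context matrices.

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def similarity_difference_features(vec_a, vec_b, reference_vecs):
    """Represent the pair a:b by the differences between the similarities
    of a and of b to each reference word."""
    return [cosine(vec_a, r) - cosine(vec_b, r) for r in reference_vecs]

buy = [2.0, 1.0, 0.0]      # toy context vector for "buy"
own = [1.5, 1.2, 0.3]      # toy context vector for "own"
refs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
feats = similarity_difference_features(buy, own, refs)
```

The resulting vector would then be fed to a supervised classifier, as in the other relation-classification approach.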
1402.0009 | Qualitative Relational Mapping and Navigation for Planetary Rovers | cs.RO | This paper presents a novel method for qualitative mapping of large scale
spaces. The proposed framework makes use of a graphical representation of the
world in order to build a map consisting of qualitative constraints on the
geometric relationships between landmark triplets. A novel measurement method
based on camera imagery is presented which extends previous work from the field
of Qualitative Spatial Reasoning. Measurements are fused into the map using a
deterministic, iterative graph update. A Branch-and-Bound approach is taken to
solve a set of non-convex feasibility problems required for generating on-line
measurements and off-line operator lookup tables. A navigation approach for
travel between distant landmarks is developed, using estimates of the Relative
Neighborhood Graph extracted from the qualitative map in order to generate a
sequence of landmark objectives based on proximity. Average and asymptotic
performance of the mapping algorithm is evaluated using Monte Carlo tests on
randomly generated maps, and data-driven simulation results are presented for a
robot traversing the Jet Propulsion Laboratory Mars Yard while building a
relational map. Simulation results demonstrate an initial rapid convergence of
qualitative state estimates for visible landmarks, followed by a slow tapering
as the remaining ambiguous states are removed from the map.
|
1402.0013 | Classifying Latent Infection States in Complex Networks | cs.SI physics.soc-ph | Algorithms for identifying the infection states of nodes in a network are
crucial for understanding and containing infections. Often, however, only a
relatively small set of nodes have a known infection state. Moreover, the
length of time that each node has been infected is also unknown. This missing
data -- infection state of most nodes and infection time of the unobserved
infected nodes -- poses a challenge to the study of real-world cascades.
In this work, we develop techniques to identify the latent infected nodes in
the presence of missing infection time-and-state data. Based on the likely
epidemic paths predicted by the simple susceptible-infected epidemic model, we
propose a measure (Infection Betweenness) for uncovering these unknown
infection states. Our experimental results using machine learning algorithms
show that Infection Betweenness is the most effective feature for identifying
latent infected nodes.
|
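The simple susceptible-infected model that the predicted epidemic paths are based on is minimal; a discrete-time sketch on a toy adjacency list (the graph and parameters are illustrative):

```python
import random

def si_spread(adj, seeds, p=0.3, steps=5, seed=0):
    """Discrete-time SI epidemic: each step, every infected node infects
    each susceptible neighbour independently with probability p."""
    rng = random.Random(seed)
    infected = set(seeds)
    for _ in range(steps):
        newly = set()
        for u in sorted(infected):          # sorted for reproducibility
            for v in adj[u]:
                if v not in infected and v not in newly and rng.random() < p:
                    newly.add(v)
        infected |= newly
    return infected

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
final = si_spread(adj, seeds=[0])
```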
1402.0017 | Capacity of Binary State Symmetric Channel with and without Feedback and
Transmission Cost | cs.IT math.IT | We consider a unit-memory channel, called the Binary State Symmetric Channel
(BSSC), in which the channel state is the modulo-2 addition of the current
channel input and the previous channel output. We derive closed form
expressions for the capacity and the corresponding channel input distribution of
this BSSC with and without feedback and transmission cost. We also show that
the capacity of the BSSC is not increased by feedback, and it is achieved by a
first-order symmetric Markov process.
|
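The state recursion is simple to simulate. The output law below (state flipped with probability flip_prob) and the zero initial output are assumed stand-ins for the paper's channel kernel, not taken from it.

```python
import random

def simulate_bssc(inputs, flip_prob=0.1, seed=0):
    """Binary State Symmetric Channel sketch: the state is the modulo-2
    sum of the current input and the previous output."""
    rng = random.Random(seed)
    prev_y = 0                      # assumed initial condition
    states, outputs = [], []
    for x in inputs:
        s = x ^ prev_y              # modulo-2 addition
        y = s ^ (1 if rng.random() < flip_prob else 0)
        states.append(s)
        outputs.append(y)
        prev_y = y
    return states, outputs

states, outputs = simulate_bssc([1, 0, 1, 1, 0, 1])
```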
1402.0029 | Fuzzy Decision Analysis in Negotiation between the System of Systems
Agent and the System Agent in an Agent-Based Model | cs.MA | Previous papers have described a computational approach to System of Systems
(SoS) development using an Agent-Based Model (ABM). This paper describes the
Fuzzy Decision Analysis used in the negotiation between the SoS agent and a
System agent in the ABM of an Acknowledged SoS development. An Acknowledged SoS
has by definition a limited influence on the development of the individual
Systems. The individual Systems have their own priorities, pressures, and
agenda which may or may not align with the goals of the SoS. The SoS has some
funding and deadlines which can be used to negotiate with the individual System
in order to elicit the required capability from that System. The Fuzzy
Decision Analysis determines how the SoS agent will adjust the funding and
deadlines for each of the Systems in order to achieve the desired SoS
architecture quality. The Fuzzy Decision Analysis has inputs of performance,
funding, and deadlines as well as weights for each capability. The performance,
funding, and deadlines are crisp values which are fuzzified. The fuzzified
values are then used with a Fuzzy Inference Engine to get the fuzzy outputs of
funding adjustment and deadline adjustment which must then be defuzzified
before being passed to the System agent. The first contribution of this paper
is the fuzzy decision analysis that represents the negotiation between the SoS
agent and the System agent. A second contribution of this paper is the method
of implementing the fuzzy decision analysis which provides a generalized fuzzy
decision analysis.
|
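The fuzzify / infer / defuzzify pipeline can be sketched with triangular memberships and two illustrative rules. The membership ranges and output centroids below are assumptions for illustration, not values from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def funding_adjustment(performance):
    """Minimal Mamdani-style step: fuzzify a crisp performance score in
    [0, 1], fire two rules, defuzzify by a weighted average of the rule
    output centroids (+1 = increase funding, 0 = keep funding)."""
    low = tri(performance, -0.5, 0.0, 0.6)    # membership in "low performance"
    high = tri(performance, 0.4, 1.0, 1.5)    # membership in "high performance"
    # Rule 1: low performance  -> increase funding (centroid +1).
    # Rule 2: high performance -> keep funding   (centroid  0).
    if low + high == 0:
        return 0.0
    return (low * 1.0 + high * 0.0) / (low + high)

adj = funding_adjustment(0.5)   # mixed membership -> intermediate adjustment
```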
1402.0030 | Neural Variational Inference and Learning in Belief Networks | cs.LG stat.ML | Highly expressive directed latent variable models, such as sigmoid belief
networks, are difficult to train on large datasets because exact inference in
them is intractable and none of the approximate inference methods that have
been applied to them scale well. We propose a fast non-iterative approximate
inference method that uses a feedforward network to implement efficient exact
sampling from the variational posterior. The model and this inference network
are trained jointly by maximizing a variational lower bound on the
log-likelihood. Although the naive estimator of the inference model gradient is
too high-variance to be useful, we make it practical by applying several
straightforward model-independent variance reduction techniques. Applying our
approach to training sigmoid belief networks and deep autoregressive networks,
we show that it outperforms the wake-sleep algorithm on MNIST and achieves
state-of-the-art results on the Reuters RCV1 document dataset.
|
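The variance-reduction point is easy to demonstrate in a one-parameter toy model: subtracting a baseline from a score-function (REINFORCE-style) gradient estimator leaves its mean unchanged but shrinks its variance. The Bernoulli model, objective, and baseline value below are illustrative, not the paper's sigmoid belief network setup.

```python
import random
import statistics

def score_function_grads(theta, baseline, samples=4000, seed=0):
    """Gradient estimates of d/dtheta E[f(x)] for x ~ Bernoulli(theta),
    with and without a baseline subtracted from f."""
    rng = random.Random(seed)
    f = lambda x: 3.0 * x + 1.0                         # toy objective
    plain, with_base = [], []
    for _ in range(samples):
        x = 1.0 if rng.random() < theta else 0.0
        score = (x - theta) / (theta * (1.0 - theta))   # d/dtheta log p(x)
        plain.append(f(x) * score)
        with_base.append((f(x) - baseline) * score)
    return plain, with_base

plain, with_base = score_function_grads(theta=0.3, baseline=2.0)
```

For this model the true gradient is 3, and both estimators are unbiased for it; only their spread differs.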
1402.0049 | Single-shot security for one-time memories in the isolated qubits model | quant-ph cs.CR cs.IT math.IT | One-time memories (OTM's) are simple, tamper-resistant cryptographic devices,
which can be used to implement sophisticated functionalities such as one-time
programs. Can one construct OTM's whose security follows from some physical
principle? This is not possible in a fully-classical world, or in a
fully-quantum world, but there is evidence that OTM's can be built using
"isolated qubits" -- qubits that cannot be entangled, but can be accessed using
adaptive sequences of single-qubit measurements.
Here we present new constructions for OTM's using isolated qubits, which
improve on previous work in several respects: they achieve a stronger
"single-shot" security guarantee, which is stated in terms of the (smoothed)
min-entropy; they are proven secure against adversaries who can perform
arbitrary local operations and classical communication (LOCC); and they are
efficiently implementable.
These results use Wiesner's idea of conjugate coding, combined with
error-correcting codes that approach the capacity of the q-ary symmetric
channel, and a high-order entropic uncertainty relation, which was originally
developed for cryptography in the bounded quantum storage model.
|
1402.0051 | Distributed Algorithms for Stochastic Source Seeking with Mobile Robot
Networks: Technical Report | cs.MA cs.RO cs.SY | Autonomous robot networks are an effective tool for monitoring large-scale
environmental fields. This paper proposes distributed control strategies for
localizing the source of a noisy signal, which could represent a physical
quantity of interest such as magnetic force, heat, radio signal, or chemical
concentration. We develop algorithms specific to two scenarios: one in which
the sensors have a precise model of the signal formation process and one in
which a signal model is not available. In the model-free scenario, a team of
sensors is used to follow a stochastic gradient of the signal field. Our
approach is distributed, robust to deformations in the group geometry, does not
necessitate global localization, and is guaranteed to lead the sensors to a
neighborhood of a local maximum of the field. In the model-based scenario, the
sensors follow the stochastic gradient of the mutual information between their
expected measurements and the location of the source in a distributed manner.
The performance is demonstrated in simulation using a robot sensor network to
localize the source of a wireless radio signal.
|
1402.0052 | Performance of the Survey Propagation-guided decimation algorithm for
the random NAE-K-SAT problem | math.PR cond-mat.stat-mech cs.AI cs.CC cs.DS math.CO | We show that the Survey Propagation-guided decimation algorithm fails to find
satisfying assignments on random instances of the "Not-All-Equal-$K$-SAT"
problem if the number of message passing iterations is bounded by a constant
independent of the size of the instance and the clause-to-variable ratio is
above $(1+o_K(1)){2^{K-1}\over K}\log^2 K$ for sufficiently large $K$. Our
analysis in fact applies to a broad class of algorithms described as
"sequential local algorithms". Such algorithms iteratively set variables based
on some local information and then recurse on the reduced instance. Survey
Propagation-guided as well as Belief Propagation-guided decimation algorithms,
two widely studied message-passing-based algorithms, fall under this category
of algorithms provided the number of message passing iterations is bounded by a
constant. Another well-known algorithm falling into this category is the Unit
Clause algorithm. Our work constitutes the first rigorous analysis of the
performance of the SP-guided decimation algorithm.
The approach underlying our paper is based on an intricate geometry of the
solution space of random NAE-$K$-SAT problem. We show that above the
$(1+o_K(1)){2^{K-1}\over K}\log^2 K$ threshold, the overlap structure of
$m$-tuples of satisfying assignments exhibits a certain clustering behavior
expressed in the form of constraints on distances between the $m$ assignments,
for appropriately chosen $m$. We further show that if a sequential local
algorithm succeeds in finding a satisfying assignment with probability bounded
away from zero, then one can construct an $m$-tuple of solutions violating
these constraints, thus leading to a contradiction. Along with (citation), this
result is the first work which directly links the clustering property of random
constraint satisfaction problems to the computational hardness of finding
satisfying assignments.
|
1402.0060 | On Classification of Toric Surface Codes of Low Dimension | cs.IT math.CO math.IT | This work is a natural continuation of our previous work \cite{yz}. In this
paper, we give a complete classification of toric surface codes of dimension
less than or equal to 6, except for a special pair, $C_{P_6^{(4)}}$ and
$C_{P_6^{(5)}}$ over $\mathbb{F}_8$. Also, we give an example, $C_{P_6^{(5)}}$
and $C_{P_6^{(6)}}$ over $\mathbb{F}_7$, to illustrate that two monomially
equivalent toric codes can be constructed from two lattice-inequivalent
polygons.
|
1402.0062 | Exact Common Information | cs.IT math.IT | This paper introduces the notion of exact common information, which is the
minimum description length of the common randomness needed for the exact
distributed generation of two correlated random variables $(X,Y)$. We introduce
the quantity $G(X;Y)=\min_{X\to W \to Y} H(W)$ as a natural bound on the exact
common information and study its properties and computation. We then introduce
the exact common information rate, which is the minimum description rate of the
common randomness for the exact generation of a 2-DMS $(X,Y)$. We give a
multiletter characterization for it as the limit $\bar{G}(X;Y)=\lim_{n\to
\infty}(1/n)G(X^n;Y^n)$. While in general $\bar{G}(X;Y)$ is greater than or
equal to the Wyner common information, we show that they are equal for the
Symmetric Binary Erasure Source. We do not know, however, if the exact common
information rate has a single letter characterization in general.
|
1402.0068 | Radiation Pattern of Patch Antenna with Slits | cs.IT math.IT | The Microstrip antenna has been commercially used in many applications, such
as direct broadcast satellite service, mobile satellite communications, global
positioning systems, and medical hyperthermia. A size reduction of the patch
antenna at a given operating frequency is obtained. Mobile personal
communication systems and wireless computer networks are in widespread use
today and require antennas in different frequency bands. To incorporate these
antennas easily into individual systems, a microstrip patch antenna has been
chosen and designed for a specific application. The radiation pattern, the gain
and directivity of the antenna, and the electric far field are also analyzed.
The simulation results, obtained using the electromagnetic simulation software
FEKO, are presented and discussed.
|
1402.0092 | Mutual information of Contingency Tables and Related Inequalities | math.ST cs.IT math.IT stat.TH | For testing independence it is very popular to use either the
$\chi^{2}$-statistic or the $G^{2}$-statistic (mutual information).
Asymptotically both are $\chi^{2}$-distributed, so an obvious question is which
of the two statistics has a distribution closer to the
$\chi^{2}$-distribution. Surprisingly, the distribution of mutual information is
much better approximated by a $\chi^{2}$-distribution than the
$\chi^{2}$-statistic. For technical reasons we shall focus on the simplest case
with one degree of freedom. We introduce the signed log-likelihood and
demonstrate that its distribution function can be related to the distribution
function of a standard Gaussian by inequalities. For the hypergeometric
distribution we formulate a general conjecture about how close the signed
log-likelihood is to a standard Gaussian, and this conjecture gives much more
accurate estimates of the tail probabilities of this type of distribution than
previously published results. The conjecture has been proved numerically in all
cases relevant for testing independence and further evidence of its validity is
given.
|
1402.0099 | Dual-to-kernel learning with ideals | stat.ML cs.LG math.AC math.AG math.ST stat.TH | In this paper, we propose a theory which unifies kernel learning and symbolic
algebraic methods. We show that both worlds are inherently dual to each other,
and we use this duality to combine the structure-awareness of algebraic methods
with the efficiency and generality of kernels. The main idea lies in relating
polynomial rings to feature space, and ideals to manifolds, then exploiting
this generative-discriminative duality on kernel matrices. We illustrate this
by proposing two algorithms, IPCA and AVICA, for simultaneous manifold and
feature learning, and test their accuracy on synthetic and real world data.
|
1402.0108 | Markov Blanket Ranking using Kernel-based Conditional Dependence
Measures | stat.ML cs.LG | Developing feature selection algorithms that move beyond a pure correlational
to a more causal analysis of observational data is an important problem in the
sciences. Several algorithms attempt to do so by discovering the Markov blanket
of a target, but they all contain a forward selection step which variables must
pass in order to be included in the conditioning set. As a result, these
algorithms may not consider all possible conditional multivariate combinations.
We improve on this limitation by proposing a backward elimination method that
uses a kernel-based conditional dependence measure to identify the Markov
blanket in a fully multivariate fashion. The algorithm is easy to implement and
compares favorably to other methods on synthetic and real datasets.
|
1402.0119 | Randomized Nonlinear Component Analysis | stat.ML cs.LG | Classical methods such as Principal Component Analysis (PCA) and Canonical
Correlation Analysis (CCA) are ubiquitous in statistics. However, these
techniques are only able to reveal linear relationships in data. Although
nonlinear variants of PCA and CCA have been proposed, these are computationally
prohibitive at large scale.
In a separate strand of recent research, randomized methods have been
proposed to construct features that help reveal nonlinear patterns in data. For
basic tasks such as regression or classification, random features exhibit
little or no loss in performance, while achieving drastic savings in
computational requirements.
In this paper we leverage randomness to design scalable new variants of
nonlinear PCA and CCA; our ideas extend to key multivariate analysis tools such
as spectral clustering or LDA. We demonstrate our algorithms through
experiments on real-world data, on which we compare against the
state-of-the-art. A simple R implementation of the presented algorithms is
provided.
|
1402.0126 | Kantian fractionalization predicts the conflict propensity of the
international system | physics.soc-ph cs.SI physics.data-an | The study of complex social and political phenomena with the perspective and
methods of network science has proven fruitful in a variety of areas, including
applications in political science and more narrowly the field of international
relations. We propose a new line of research in the study of international
conflict by showing that the multiplex fractionalization of the international
system (which we label Kantian fractionalization) is a powerful predictor of
the propensity for violent interstate conflict, a key indicator of the system's
stability. In so doing, we also demonstrate the first use of multislice
modularity for community detection in a multiplex network application. Even
after controlling for established system-level conflict indicators, we find
that Kantian fractionalization contributes more to model fit for violent
interstate conflict than previously established measures. Moreover, evaluating
the influence of each of the constituent networks shows that joint democracy
plays little, if any, role in predicting system stability, thus challenging a
major empirical finding of the international relations literature. Lastly, a
series of Granger causal tests shows that the temporal variability of Kantian
fractionalization is consistent with a causal relationship with the prevalence
of conflict in the international system. This causal relationship has
real-world policy implications as changes in Kantian fractionalization could
serve as an early warning sign of international instability.
|
1402.0130 | Dynamical Properties of a Two-gene Network with Hysteresis | cs.SY math.DS | A mathematical model for a two-gene regulatory network is derived and several
of its properties are analyzed. Due to the presence of mixed continuous/discrete
dynamics and hysteresis, we employ a hybrid systems model to capture the
dynamics of the system. The proposed model incorporates binary hysteresis with
different thresholds capturing the interaction between the genes. We analyze
properties of the solutions and asymptotic stability of equilibria in the
system as a function of its parameters. Our analysis reveals the presence of
limit cycles for a certain range of parameters, behavior that is associated
with hysteresis. The set of points defining the limit cycle is characterized
and its asymptotic stability properties are studied. Furthermore, the stability
property of the limit cycle is robust to small perturbations. Numerical
simulations are presented to illustrate the results.
|
1402.0140 | Probabilistic Model Validation for Uncertain Nonlinear Systems | cs.SY math.OC | This paper presents a probabilistic model validation methodology for
nonlinear systems in the time domain. The proposed formulation is simple,
intuitive, and accounts for both deterministic and stochastic nonlinear systems
with parametric and nonparametric uncertainties. Instead of the hard
invalidation methods available in the literature, a relaxed notion of
validation in probability is introduced. To guarantee provably correct
inference, an algorithm for constructing a probabilistically robust validation
certificate is given along with its computational complexity. Several examples
are worked out to illustrate
its use.
|
1402.0147 | A Probabilistic Method for Nonlinear Robustness Analysis of F-16
Controllers | cs.SY math.OC | This paper presents a new framework for controller robustness verification
with respect to an F-16 aircraft's closed-loop performance in longitudinal flight.
We compare the state regulation performance of a linear quadratic regulator
(LQR) and a gain-scheduled linear quadratic regulator (gsLQR), applied to
the nonlinear open-loop dynamics of the F-16, in the presence of stochastic initial
condition and parametric uncertainties, as well as actuator disturbance. We
show that, in the presence of initial condition uncertainties alone, both LQR and
gsLQR have comparable immediate and asymptotic performances, but the gsLQR
exhibits better transient performance at intermediate times. This remains true
in the presence of additional actuator disturbance. Also, gsLQR is shown to be
more robust than LQR against parametric uncertainties. The probabilistic
framework proposed here leverages transfer-operator-based density computation
in exact arithmetic and introduces optimal-transport-theoretic performance
validation and verification (V&V) for nonlinear dynamical systems. Numerical
results from our proposed method are in agreement with Monte Carlo simulations.
|
1402.0170 | Collaborative Receptive Field Learning | cs.CV cs.LG cs.MM stat.ML | The challenge of object categorization in images is largely due to arbitrary
translations and scales of the foreground objects. To attack this difficulty,
we propose a new approach called collaborative receptive field learning to
extract specific receptive fields (RF's) or regions from multiple images, and
the selected RF's are supposed to focus on the foreground objects of a common
category. To this end, we solve the problem by maximizing a submodular function
over a similarity graph constructed by a pool of RF candidates. However,
measuring pairwise distance of RF's for building the similarity graph is a
nontrivial problem. Hence, we introduce a similarity metric called
pyramid-error distance (PED) to measure their pairwise distances through
summing up pyramid-like matching errors over a set of low-level features.
Moreover, consistent with the proposed PED, we construct a simple
nonparametric classifier for classification. Experimental results show that our
method effectively discovers the foreground objects in images, and improves
classification performance.
|
1402.0197 | Measuring the Complexity of Self-organizing Traffic Lights | nlin.AO cs.IT cs.SY math.IT nlin.CG physics.soc-ph | We apply measures of complexity, emergence and self-organization to an
abstract city traffic model for comparing a traditional traffic coordination
method with a self-organizing method in two scenarios: cyclic boundaries and
non-orientable boundaries. We show that the measures are useful to identify and
characterize different dynamical phases. It becomes clear that different
operation regimes are required for different traffic demands. Thus, not only is
traffic a non-stationary problem that requires controllers to adapt
constantly; controllers must also drastically change the complexity of their
behavior depending on the demand. Based on our measures, we can say that the
self-organizing method achieves an adaptability level comparable to a living
system.
|
1402.0215 | Mutually connected component of network of networks with replica nodes | physics.soc-ph cond-mat.dis-nn cs.SI | We describe the emergence of the giant mutually connected component in
networks of networks in which each node has a single replica node in any layer
and can be interdependent only on its replica nodes in the interdependent
layers. We prove that if in these networks, all the nodes of one network
(layer) are interdependent on the nodes of the same other interconnected layer,
then, remarkably, the mutually connected component does not depend on the
topology of the network of networks. This component coincides with the mutual
component of the fully connected network of networks constructed from the same
set of layers, i.e., a multiplex network.
|
1402.0238 | Classification of Complex Networks Based on Topological Properties | cs.SI physics.soc-ph | Complex networks are a powerful modeling tool, allowing the study of
countless real-world systems. They have been used in very different domains
such as computer science, biology, sociology, management, etc. Authors have
been trying to characterize them using various measures such as degree
distribution, transitivity or average distance. Their goal is to detect certain
properties such as the small-world or scale-free properties. Previous works
have shown some of these properties are present in many different systems,
while others are characteristic of certain types of systems only. However, each
one of these studies generally focuses on a very small number of topological
measures and networks. In this work, we aim at using a more systematic
approach. We first compile a dataset of 152 publicly available networks,
spanning 7 different domains. We then compute 14 different topological
measures to characterize them as completely as possible. Finally, we
apply standard data mining tools to analyze these data. A cluster analysis
reveals it is possible to obtain two significantly distinct clusters of
networks, corresponding roughly to a bisection of the domains modeled by the
networks. On these data, the most discriminant measures are density,
modularity, average degree and transitivity, and, to a lesser extent, closeness
and edge-betweenness centralities.
|
1402.0240 | Graph Cuts with Interacting Edge Costs - Examples, Approximations, and
Algorithms | cs.DS cs.CV cs.DM math.OC | We study an extension of the classical graph cut problem, wherein we replace
the modular (sum of edge weights) cost function by a submodular set function
defined over graph edges. Special cases of this problem have appeared in
different applications in signal processing, machine learning, and computer
vision. In this paper, we connect these applications via the generic
formulation of "cooperative graph cuts", for which we study complexity,
algorithms, and connections to polymatroidal network flows. Finally, we compare
the proposed algorithms empirically.
|
1402.0246 | Distributed Kalman Filtering over Massive Data Sets: Analysis Through
Large Deviations of Random Riccati Equations | cs.IT math.IT | This paper studies the convergence of the estimation error process and the
characterization of the corresponding invariant measure in distributed Kalman
filtering for potentially unstable and large linear dynamic systems. A gossip
network protocol termed Modified Gossip Interactive Kalman Filtering (M-GIKF)
is proposed, where sensors exchange their filtered states (estimates and error
covariances) and propagate their observations via inter-sensor communications
of rate $\overline{\gamma}$; $\overline{\gamma}$ is defined as the average
number of inter-sensor message passages per signal evolution epoch. The
filtered states are interpreted as stochastic particles swapped through local
interaction. The paper shows that the conditional estimation error covariance
sequence at each sensor under M-GIKF evolves as a random Riccati equation (RRE)
with Markov modulated switching. By formulating the RRE as a random dynamical
system, it is shown that the network achieves weak consensus, i.e., the
conditional estimation error covariance at a randomly selected sensor converges
weakly (in distribution) to a unique invariant measure. Further, it is proved
that as $\overline{\gamma} \rightarrow \infty$ this invariant measure satisfies
the Large Deviation (LD) upper and lower bounds, implying that this measure
converges exponentially fast (in probability) to the Dirac measure
$\delta_{P^*}$, where $P^*$ is the stable error covariance of the centralized
(Kalman) filtering setup. The LD results answer a fundamental question on how
to quantify the rate at which the distributed scheme approaches the centralized
performance as the inter-sensor communication rate increases.
|
1402.0247 | Secure Debit Card Device Model | cs.CE cs.CR | The project envisages the implementation of an e-payment system utilizing
FIPS-201 Smart Card. The system combines hardware and software modules. The
hardware module takes data insertions (e.g. currency notes), processes the data
and then creates connection with the smart card using serial/USB ports to
perform further mathematical manipulations. The hardware interacts with servers
at the back for authentication and identification of users and for data storage
pertaining to a particular user. The software module manages the database,
handles identities, and provides authentication and secure communication
between the various system components. It will also provide a component to the
end users. This component can be in the form of software for computers or executable binaries
for PoS devices. The idea is to receive data in the embedded system from data
reader and smart card. After manipulations, the updated data is imprinted on
smart card memory and also updated in the back end servers maintaining
database. The information to be sent to a server passes through a PoS device,
which supports multiple transfer media, both wired and wireless. The
user device also acts as an updater; therefore, whenever the smart card is
inserted by user, it is automatically updated by synchronizing with back-end
database. The project required expertise in embedded systems, networks, java
and C++ (Optional).
|
1402.0258 | A Rate-Distortion Approach to Index Coding | cs.IT math.IT | We approach index coding as a special case of rate-distortion with multiple
receivers, each with some side information about the source. Specifically,
using techniques developed for the rate-distortion problem, we provide two
upper bounds and one lower bound on the optimal index coding rate. The upper
bounds involve specific choices of the auxiliary random variables in the best
existing scheme for the rate-distortion problem. The lower bound is based on a
new lower bound for the general rate-distortion problem. The bounds are shown
to coincide for a number of (groupcast) index coding instances, including all
instances for which the number of decoders does not exceed three.
|
1402.0282 | Principled Graph Matching Algorithms for Integrating Multiple Data
Sources | cs.DB cs.LG stat.ML | This paper explores combinatorial optimization for problems of max-weight
graph matching on multi-partite graphs, which arise in integrating multiple
data sources. Entity resolution-the data integration problem of performing
noisy joins on structured data-typically proceeds by first hashing each record
into zero or more blocks, scoring pairs of records that are co-blocked for
similarity, and then matching pairs of sufficient similarity. In the most
common case of matching two sources, it is often desirable for the final
matching to be one-to-one (a record may be matched with at most one other);
members of the database and statistical record linkage communities accomplish
such matchings in the final stage by weighted bipartite graph matching on
similarity scores. Such matchings are intuitively appealing: they leverage a
natural global property of many real-world entity stores-that of being nearly
deduped-and are known to provide significant improvements to precision and
recall. Unfortunately unlike the bipartite case, exact max-weight matching on
multi-partite graphs is known to be NP-hard. Our two-fold algorithmic
contributions approximate multi-partite max-weight matching: our first
algorithm borrows optimization techniques common to Bayesian probabilistic
inference; our second is a greedy approximation algorithm. In addition to a
theoretical guarantee on the latter, we present comparisons on a real-world ER
problem from Bing significantly larger than typically found in the literature,
publication data, and on a series of synthetic problems. Our results quantify
significant improvements due to exploiting multiple sources, which are made
possible by global one-to-one constraints linking otherwise independent
matching sub-problems. We also discover that our algorithms are complementary:
one being much more robust under noise, and the other being simple to implement
and very fast to run.
|
1402.0288 | Transductive Learning with Multi-class Volume Approximation | cs.LG stat.ML | Given a hypothesis space, the large volume principle by Vladimir Vapnik
prioritizes equivalence classes according to their volume in the hypothesis
space. The volume approximation has hitherto been successfully applied to
binary learning problems. In this paper, we extend it naturally to a more
general definition which can be applied to several transductive problem
settings, such as multi-class, multi-label and serendipitous learning. Even
though the resultant learning method involves a non-convex optimization
problem, the globally optimal solution is almost surely unique and can be
obtained in O(n^3) time. We theoretically provide stability and error analyses
for the proposed method, and then experimentally show that it is promising.
|
1402.0289 | A Robust Framework for Moving-Object Detection and Vehicular Traffic
Density Estimation | cs.CV cs.RO cs.SY | Intelligent machines require basic information such as moving-object
detection from videos in order to deduce higher-level semantic information. In
this paper, we propose a methodology that uses a texture measure to detect
moving objects in video. The methodology is computationally inexpensive,
requires minimal parameter fine-tuning and also is resilient to noise,
illumination changes, dynamic background and low frame rate. Experimental
results show that the performance of the proposed approach is higher than that of
state-of-the-art approaches. We also present a framework for vehicular traffic
density estimation using the foreground object detection technique and present
a comparison between the foreground object detection-based framework and the
classical density state modelling-based framework for vehicular traffic density
estimation.
|
1402.0295 | Performance Analysis and Optimization for Interference Alignment over
MIMO Interference Channels with Limited Feedback | cs.IT math.IT | In this paper, we address the problem of interference alignment (IA) over
MIMO interference channels with limited channel state information (CSI)
feedback based on quantization codebooks. Due to limited feedback and hence
imperfect IA, there are residual interferences across different links and
different data streams. As a result, the performance of IA is greatly related
to the CSI accuracy (namely number of feedback bits) and the number of data
streams (namely transmission mode). In order to improve the performance of IA,
it makes sense to optimize the system parameters according to the channel
conditions. Motivated by this, we first give a quantitative performance
analysis for IA under limited feedback, and derive a closed-form expression for
the average transmission rate in terms of feedback bits and transmission mode.
By maximizing the average transmission rate, we obtain an adaptive feedback
allocation scheme, as well as a dynamic mode selection scheme. Furthermore,
through asymptotic analysis, we obtain several clear insights on the system
performance, and provide some guidelines on the system design. Finally,
simulation results validate our theoretical claims, and show that a clear
performance gain can be obtained by adjusting the feedback bits dynamically or
selecting the transmission mode adaptively.
|
1402.0313 | Rate-Distortion Properties of Single-Layer Quantize-and-Forward for
Two-Way Relaying | cs.IT math.IT | The Quantize & Forward (QF) scheme for two-way relaying is studied with a
focus on its rate-distortion properties. A sum rate maximization problem is
formulated and the associated quantizer optimization problem is investigated.
An algorithm to approximately solve the problem is proposed. In certain
cases, scalar quantizers maximize the sum rate.
|
1402.0327 | Spectrum Allocation for ICIC Based Picocell | cs.IT math.IT | In this work, we analytically study the impact of the spectrum allocation scheme
in picocells on the coverage probability (CP) of the Pico User (PU), when the
macro base stations (MBSs) employ either fractional frequency reuse (FFR) or
soft frequency reuse (SFR). Assuming a fixed size for the picocell, the CP
expression is derived for a PU present in either a FFR or SFR based deployment,
and when the PU uses either the centre or the edge frequency resources. Based
on these expressions, we propose two possible frequency allocation schemes for
the picocell when FFR is employed by the macrocell. The CP and the average rate
expressions for both these schemes are derived, and it is shown that these
schemes outperform the conventional scheme where no inter-cell interference
coordination (ICIC) is assumed. The impact of both schemes on the macro-user
performance is also analysed. When SFR is used by the MBS, it is shown that the
CP is maximized when the PU uses the same frequency resources as used by the
centre region.
|
1402.0349 | Zero-error capacity of binary channels with memory | math.CO cs.IT math.IT | We begin a systematic study of the problem of the zero-error capacity of
noisy binary channels with memory and solve some of the non-trivial cases.
|
1402.0362 | A quantitative analysis of the effect of flexible loads on reserve
markets | cs.GT cs.CE | We propose and analyze a day-ahead reserve market model that handles bids
from flexible loads. This pool market model takes into account the fact that a
load modulation in one direction must usually be compensated later by a
modulation of the same magnitude in the opposite direction. Our analysis takes
into account the gaming possibilities of producers and retailers, controlling
load flexibility, in the day-ahead energy and reserve markets, and in imbalance
settlement. This analysis is carried out by an agent-based approach where, for
every round, each actor uses linear programs to maximize its profit according
to forecasts of the prices. The procurement of a reserve is assumed to be
determined, for each period, as a fixed percentage of the total consumption
cleared in the energy market for the same period. The results show that the
provision of reserves by flexible loads has a negligible impact on the energy
market prices but markedly decreases the cost of reserve procurement. However,
as the rate of flexible loads increases, the system operator has to rely more
and more on non-contracted reserves, which may cancel out the benefits made in
the procurement of reserves.
|
1402.0375 | Highly symmetric POVMs and their informational power | quant-ph cs.IT math-ph math.IT math.MP | We discuss the dependence of the Shannon entropy of normalized finite rank-1
POVMs on the choice of the input state, looking for the states that minimize
this quantity. To distinguish the class of measurements where the problem can
be solved analytically, we introduce the notion of highly symmetric POVMs and
classify them in dimension two (for qubits). In this case we prove that the
entropy is minimal, and hence the relative entropy (informational power) is
maximal, if and only if the input state is orthogonal to one of the states
constituting a POVM. The method used in the proof, employing the Michel theory
of critical points for group action, the Hermite interpolation and the
structure of invariant polynomials for unitary-antiunitary groups, can also be
applied in higher dimensions and for other entropy-like functions. The links
between entropy minimization and entropic uncertainty relations, the Wehrl
entropy and the quantum dynamical entropy are described.
|
1402.0391 | Limited Feedback-Based Interference Alignment for Interfering
Multi-Access Channels | cs.IT math.IT | A limited feedback-based interference alignment (IA) scheme is proposed for
the interfering multi-access channel (IMAC). By employing a novel
performance-oriented quantization strategy, the proposed scheme is able to
achieve the minimum overall residual inter-cell interference (ICI) with the
optimized transceivers under limited feedback. Consequently, the scheme
outperforms the existing counterparts in terms of system throughput. In
addition, the proposed scheme can be implemented with flexible antenna
configurations.
|
1402.0400 | Exploration via Structured Triangulation by a Multi-Robot System with
Bearing-Only Low-Resolution Sensors | cs.RO cs.CG | This paper presents a distributed approach for exploring and triangulating an
unknown region using a multi- robot system. The objective is to produce a
covering of an unknown workspace by a fixed number of robots such that the
covered region is maximized, solving the Maximum Area Triangulation Problem
(MATP). The resulting triangulation is a physical data structure that is a
compact representation of the workspace; it contains distributed knowledge of
each triangle, adjacent triangles, and the dual graph of the workspace.
Algorithms can store information in this physical data structure, such as a
routing table for robot navigation. Our algorithm builds a triangulation in a
closed environment, starting from a single location. It provides coverage with
a breadth-first search pattern and completeness guarantees. We show the
computational and communication requirements to build and maintain the
triangulation and its dual graph are small. Finally, we present a physical
navigation algorithm that uses the dual graph, and show that the resulting path
lengths are within a constant factor of the shortest-path Euclidean distance.
We validate our theoretical results with experiments on triangulating a region
with a system of low-cost robots. Analysis of the resulting quality of the
triangulation shows that most of the triangles are of high quality, and cover a
large area. Implementation of the triangulation, dual graph, and navigation all
use communication messages of fixed size, and are a practical solution for
large populations of low-cost robots.
|
1402.0402 | Customizable Contraction Hierarchies | cs.DS cs.AI | We consider the problem of quickly computing shortest paths in weighted
graphs given auxiliary data derived in an expensive preprocessing phase. By
adding a fast weight-customization phase, we extend Contraction Hierarchies by
Geisberger et al. to support the three-phase workflow introduced by Delling et
al. Our Customizable Contraction Hierarchies use nested dissection orders as
suggested by Bauer et al. We provide an in-depth experimental analysis on large
road and game maps that clearly shows that Customizable Contraction Hierarchies
are a very practicable solution in scenarios where edge weights often change.
|
1402.0412 | Bots vs. Wikipedians, Anons vs. Logged-Ins | cs.DL cs.CY cs.SI | Wikipedia is a global crowdsourced encyclopedia that, at the time of writing,
is available in 287 languages. Wikidata is a likewise global crowdsourced
knowledge base that provides shared facts to be used by Wikipedias. In the
context of this research, we have developed an application and an underlying
Application Programming Interface (API) capable of monitoring realtime edit
activity of all language versions of Wikipedia and Wikidata. This application
allows us to easily analyze edits in order to answer questions such as "Bots
vs. Wikipedians, who edits more?", "Which is the most anonymously edited
Wikipedia?", or "Who are the bots and what do they edit?". To the best of our
knowledge, this is the first time such an analysis could be done in realtime
for Wikidata and for all Wikipedias, large and small. Our application is
publicly available at the URL http://wikipedia-edits.herokuapp.com/, and its
code has been open-sourced under the Apache 2.0 license.
|
1402.0420 | Multidisciplinary Optimization For Gas Turbines Design | math.OC cs.NE | State-of-the-art aeronautic Low Pressure gas Turbines (LPTs) are already
characterized by high quality standards, thus they offer very narrow margins of
improvement. The typical design process starts with a Concept Design (CD) phase,
defined using mean-line 1D and other low-order tools, and evolves through a
Preliminary Design (PD) phase, which defines the geometry in
detail. In this framework, multidisciplinary optimization is the only way to
properly handle the complicated peculiarities of the design. The authors
present different strategies and algorithms that have been implemented
exploiting the PD phase as a real-like design benchmark to illustrate results.
The purpose of this work is to describe the optimization techniques, their
settings and how to implement them effectively in a multidisciplinary
environment. Starting from a basic gradient method and a semi-random second
order method, the authors have introduced an Artificial Bee Colony-like
optimizer, a multi-objective Genetic Diversity Evolutionary Algorithm [1] and a
multi-objective response surface approach based on Artificial Neural Network,
parallelizing and customizing them for the gas turbine study. Moreover, speedup
and improvement arrangements are embedded in different hybrid strategies with
the aim of finding the best solutions for the different kinds of problems that
arise in this field.
|
1402.0422 | A high-reproducibility and high-accuracy method for automated topic
classification | stat.ML cs.IR cs.LG physics.soc-ph | Much of human knowledge sits in large databases of unstructured text.
Leveraging this knowledge requires algorithms that extract and record metadata
on unstructured text documents. Assigning topics to documents will enable
intelligent search, statistical characterization, and meaningful
classification. Latent Dirichlet allocation (LDA) is the state-of-the-art in
topic classification. Here, we perform a systematic theoretical and numerical
analysis that demonstrates that current optimization techniques for LDA often
yield results which are not accurate in inferring the most suitable model
parameters. Adapting approaches for community detection in networks, we propose
a new algorithm that displays high reproducibility and high accuracy, and also
has high computational efficiency. We apply it to a large set of documents in
the English Wikipedia and reveal its hierarchical structure. Our algorithm
promises to make "big data" text analysis systems more reliable.
|
1402.0429 | Defmod - Parallel multiphysics finite element code for modeling crustal
deformation during the earthquake/rifting cycle | physics.geo-ph cs.CE physics.comp-ph | In this article, we present Defmod, an open source, fully unstructured, two-
or three-dimensional, parallel finite element code for modeling crustal
deformation over time scales ranging from milliseconds to thousands of years.
Unlike existing public domain numerical codes, Defmod can simulate deformation
due to all major processes that make up the earthquake/rifting cycle, in
non-homogeneous media. Specifically, it can be used to model deformation due to
dynamic and quasistatic processes such as co-seismic slip or dike intrusion(s),
poroelastic rebound due to fluid flow and post-seismic or post-rifting
viscoelastic relaxation. It can also be used to model deformation due to
processes such as post-glacial rebound, hydrological (un)loading, injection
and/or withdrawal of fluids from subsurface reservoirs etc. Defmod is written
in Fortran 95 and uses PETSc's parallel sparse data structures and implicit
solvers. Problems can be solved using (stabilized) linear triangular,
quadrilateral, tetrahedral or hexahedral elements on shared or distributed
memory machines with hundreds or even thousands of processor cores. In the
current version of the code, prescribed loading is supported. Results are
written in ASCII VTK format for easy visualization. The source code is released
under the terms of GNU General Public License (v3.0) and is freely available
from https://bitbucket.org/stali/defmod/.
|
1402.0452 | A Lower Bound for the Variance of Estimators for Nakagami m Distribution | cs.LG | Recently, we have proposed a maximum likelihood iterative algorithm for
estimation of the parameters of the Nakagami-m distribution. This technique
performs better than state-of-the-art estimation techniques for this distribution.
This could be of particular use in low-data or block-based estimation problems.
In these scenarios, the estimator should be able to give accurate estimates in
the mean-square sense with smaller amounts of data. Also, the estimates should
improve as the number of blocks received increases. In this paper, we show
through our simulations that our proposal is well designed for such
requirements. Further, it is well known in the literature that an efficient
estimator does not exist for the Nakagami-m distribution. In this paper, we derive
a theoretical expression for the variance of our proposed estimator. We find
that this expression closely fits the experimental curve for the variance of
the proposed estimator, and that it is close to the Cramér-Rao lower
bound (CRLB).
|
1402.0453 | Fine-Grained Visual Categorization via Multi-stage Metric Learning | cs.CV cs.LG stat.ML | Fine-grained visual categorization (FGVC) is the task of categorizing objects
into subordinate classes instead of basic classes. One major challenge in FGVC is
the co-occurrence of two issues: 1) many subordinate classes are highly
correlated and are difficult to distinguish, and 2) there exists large
intra-class variation (e.g., due to object pose). This paper proposes to
explicitly address the above two issues via distance metric learning (DML). DML
addresses the first issue by learning an embedding so that data points from the
same class will be pulled together while those from different classes should be
pushed apart from each other; and it addresses the second issue by allowing the
flexibility that only a portion of the neighbors (not all data points) from the
same class need to be pulled together. However, feature representation of an
image is often high dimensional, and DML is known to have difficulty in dealing
with high dimensional feature vectors since it would require $\mathcal{O}(d^2)$
for storage and $\mathcal{O}(d^3)$ for optimization. To this end, we propose a
multi-stage metric learning framework that divides the large-scale
high-dimensional learning problem into a series of simple subproblems, achieving
$\mathcal{O}(d)$ computational complexity. The empirical study with FGVC
benchmark datasets verifies that our method is both effective and efficient
compared to the state-of-the-art FGVC approaches.
|
1402.0459 | Applying Supervised Learning Algorithms and a New Feature Selection
Method to Predict Coronary Artery Disease | cs.LG stat.ML | From a fresh data science perspective, this thesis discusses the prediction
of coronary artery disease based on genetic variations at the DNA base pair
level, called Single-Nucleotide Polymorphisms (SNPs), collected from the
Ontario Heart Genomics Study (OHGS).
First, the thesis explains two commonly used supervised learning algorithms,
the k-Nearest Neighbour (k-NN) and Random Forest classifiers, and includes a
complete proof that the k-NN classifier is universally consistent in any finite
dimensional normed vector space. Second, the thesis introduces two
dimensionality reduction steps, Random Projections, a known feature extraction
technique based on the Johnson-Lindenstrauss lemma, and a new method termed
Mass Transportation Distance (MTD) Feature Selection for discrete domains.
Then, this thesis compares the performance of Random Projections with the k-NN
classifier against MTD Feature Selection and Random Forest, for predicting
artery disease based on accuracy, the F-Measure, and area under the Receiver
Operating Characteristic (ROC) curve.
The comparative results demonstrate that MTD Feature Selection with Random
Forest is vastly superior to Random Projections and k-NN. The Random Forest
classifier is able to obtain an accuracy of 0.6660 and an area under the ROC
curve of 0.8562 on the OHGS genetic dataset, when 3335 SNPs are selected by MTD
Feature Selection for classification. This area is considerably better than the
previous high score of 0.608 obtained by Davies et al. in 2010 on the same
dataset.
|
1402.0480 | Efficient Gradient-Based Inference through Transformations between Bayes
Nets and Neural Nets | cs.LG stat.ML | Hierarchical Bayesian networks and neural networks with stochastic hidden
units are commonly perceived as two separate types of models. We show that
either of these types of models can often be transformed into an instance of
the other, by switching between centered and differentiable non-centered
parameterizations of the latent variables. The choice of parameterization
greatly influences the efficiency of gradient-based posterior inference; we
show that they are often complementary to each other, clarify when each
parameterization is preferred, and show how inference can be made robust. In the
non-centered form, a simple Monte Carlo estimator of the marginal likelihood
can be used for learning the parameters. Theoretical results are supported by
experiments.
|
1402.0501 | Large-deviation properties of resilience of transportation networks | physics.soc-ph cs.SI physics.comp-ph | Distributions of the resilience of transport networks are studied
numerically, in particular the large-deviation tails. Thus, not only typical
quantities like average or variance but the distributions over the (almost)
full support can be studied. For a proof of principle, a simple transport model
based on the edge-betweenness and three abstract yet widely studied random
network ensembles are considered here: Erdős-Rényi random networks with finite
connectivity, small world networks and spatial networks embedded in a
two-dimensional plane. Using specific numerical large-deviation techniques,
probability densities as small as 10^(-80) are obtained here. This allows one
to study typical but also the most and the least resilient networks. The
resulting distributions fulfill the mathematical large-deviation principle,
i.e., can be well described by rate functions in the thermodynamic limit. The
analysis of the limiting rate function reveals that the resilience follows an
exponential distribution almost everywhere. An analysis of the structure of the
network shows that the most-resilient networks can be obtained, as a rule of
thumb, by minimizing the diameter of a network. Also, trivially, by including
more links a network can typically be made more resilient. On the other hand,
the least-resilient networks are very rare and characterized by one (or few)
small core(s) to which all other nodes are connected. In total, the spatial
network ensemble turns out to be most suitable for obtaining and studying
resilience of real mostly finite-dimensional networks. Studying this ensemble
in combination with the presented large-deviation approach for more realistic,
in particular dynamic transport networks appears to be very promising.
|
1402.0525 | A Deterministic Annealing Approach to Witsenhausen's Counterexample | cs.IT cs.SY math.IT math.OC | This paper proposes a numerical method, based on information theoretic ideas,
to a class of distributed control problems. As a particular test case, the
well-known and numerically "over-mined" problem of decentralized control and
implicit communication, commonly referred to as Witsenhausen's counterexample,
is considered. The method provides a small improvement over the best numerical
result so far for this benchmark problem. The key idea is to randomize the
zero-delay mappings, which become "soft", probabilistic mappings to be
optimized in a deterministic annealing process, by incorporating a Shannon
entropy constraint in the problem formulation. The entropy of the mapping is
controlled and gradually lowered to zero to obtain deterministic mappings,
while avoiding poor local minima. The proposed method obtains new mappings that
shed light on the structure of the optimal solution, as well as achieving a
small improvement in total cost over the state of the art in numerical
approaches to this problem.
|
1402.0543 | How Does Latent Semantic Analysis Work? A Visualisation Approach | cs.CL cs.IR | By using a small example, an analogy to photographic compression, and a
simple visualization using heatmaps, we show that latent semantic analysis
(LSA) is able to extract what appears to be semantic meaning of words from a
set of documents by blurring the distinctions between the words.
|
1402.0555 | Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits | cs.LG stat.ML | We present a new algorithm for the contextual bandit learning problem, where
the learner repeatedly takes one of $K$ actions in response to the observed
context, and observes the reward only for that chosen action. Our method
assumes access to an oracle for solving fully supervised cost-sensitive
classification problems and achieves the statistically optimal regret guarantee
with only $\tilde{O}(\sqrt{KT/\log N})$ oracle calls across all $T$ rounds,
where $N$ is the number of policies in the policy class we compete against. By
doing so, we obtain the most practical contextual bandit learning algorithm
amongst approaches that work for general policy classes. We further conduct a
proof-of-concept experiment which demonstrates the excellent computational and
prediction performance of (an online variant of) our algorithm relative to
several baselines.
|
1402.0556 | Generating Extractive Summaries of Scientific Paradigms | cs.IR cs.CL | Researchers and scientists increasingly find themselves in the position of
having to quickly understand large amounts of technical material. Our goal is
to effectively serve this need by using bibliometric text mining and
summarization techniques to generate summaries of scientific literature. We
show how we can use citations to produce automatically generated, readily
consumable, technical extractive summaries. We first propose C-LexRank, a model
for summarizing single scientific articles based on citations, which employs
community detection and extracts salient information-rich sentences. Next, we
further extend our experiments to summarize a set of papers, which cover the
same scientific topic. We generate extractive summaries of a set of Question
Answering (QA) and Dependency Parsing (DP) papers, their abstracts, and their
citation sentences and show that citations have unique information amenable to
creating a summary.
|
1402.0557 | Optimal Rectangle Packing: An Absolute Placement Approach | cs.AI cs.DS | We consider the problem of finding all enclosing rectangles of minimum area
that can contain a given set of rectangles without overlap. Our rectangle
packer chooses the x-coordinates of all the rectangles before any of the
y-coordinates. We then transform the problem into a perfect-packing problem
with no empty space by adding additional rectangles. To determine the
y-coordinates, we branch on the different rectangles that can be placed in each
empty position. Our packer allows us to extend the known solutions for a
consecutive-square benchmark from 27 to 32 squares. We also introduce three new
benchmarks, avoiding properties that make a benchmark easy, such as rectangles
with shared dimensions. Our third benchmark consists of rectangles of
increasingly high precision. To pack them efficiently, we limit the rectangles'
coordinates and the bounding-box dimensions to the set of subset sums of the
rectangles' dimensions. Overall, our algorithms represent the current
state-of-the-art for this problem, outperforming other algorithms by orders of
magnitude, depending on the benchmark.
|
1402.0558 | Parameterized Complexity Results for Exact Bayesian Network Structure
Learning | cs.AI cs.LG | Bayesian network structure learning is the notoriously difficult problem of
discovering a Bayesian network that optimally represents a given set of
training data. In this paper we study the computational worst-case complexity
of exact Bayesian network structure learning under graph theoretic restrictions
on the (directed) super-structure. The super-structure is an undirected graph
that contains as subgraphs the skeletons of solution networks. We introduce the
directed super-structure as a natural generalization of its undirected
counterpart. Our results apply to several variants of score-based Bayesian
network structure learning where the score of a network decomposes into local
scores of its nodes. Results: We show that exact Bayesian network structure
learning can be carried out in non-uniform polynomial time if the
super-structure has bounded treewidth, and in linear time if in addition the
super-structure has bounded maximum degree. Furthermore, we show that if the
directed super-structure is acyclic, then exact Bayesian network structure
learning can be carried out in quadratic time. We complement these positive
results with a number of hardness results. We show that both restrictions
(treewidth and degree) are essential and cannot be dropped without losing
uniform polynomial-time tractability (subject to a complexity-theoretic
assumption). Similarly, exact Bayesian network structure learning remains
NP-hard for "almost acyclic" directed super-structures. Furthermore, we show
that the restrictions remain essential if we do not search for a globally
optimal network but aim to improve a given network by means of at most k arc
additions, arc deletions, or arc reversals (k-neighborhood local search).
|
1402.0559 | Short and Long Supports for Constraint Propagation | cs.AI | Special-purpose constraint propagation algorithms frequently make implicit
use of short supports -- by examining a subset of the variables, they can infer
support (a justification that a variable-value pair may still form part of an
assignment that satisfies the constraint) for all other variables and values
and save substantial work -- but short supports have not been studied in their
own right. The two main contributions of this paper are the identification of
short supports as important for constraint propagation, and the introduction of
HaggisGAC, an efficient and effective general purpose propagation algorithm for
exploiting short supports. Given the complexity of HaggisGAC, we present it as
an optimised version of a simpler algorithm ShortGAC. Although experiments
demonstrate the efficiency of ShortGAC compared with other general-purpose
propagation algorithms where a compact set of short supports is available, we
show theoretically and experimentally that HaggisGAC is even better. We also
find that HaggisGAC performs better than GAC-Schema on full-length supports. We
also introduce a variant algorithm HaggisGAC-Stable, which is adapted to avoid
work on backtracking and in some cases can be faster and have significant
reductions in memory use. All the proposed algorithms are excellent for
propagating disjunctions of constraints. In all experiments with disjunctions
we found our algorithms to be faster than Constructive Or and GAC-Schema by at
least an order of magnitude, and up to three orders of magnitude.
|
1402.0560 | Safe Exploration of State and Action Spaces in Reinforcement Learning | cs.LG cs.AI | In this paper, we consider the important problem of safe exploration in
reinforcement learning. While reinforcement learning is well-suited to domains
with complex transition dynamics and high-dimensional state-action spaces, an
additional challenge is posed by the need for safe and efficient exploration.
Traditional exploration techniques are not particularly useful for solving
dangerous tasks, where the trial and error process may lead to the selection of
actions whose execution in some states may result in damage to the learning
system (or any other system). Consequently, when an agent begins an interaction
with a dangerous and high-dimensional state-action space, an important question
arises; namely, that of how to avoid (or at least minimize) damage caused by
the exploration of the state-action space. We introduce the PI-SRL algorithm
which safely improves suboptimal albeit robust behaviors for continuous state
and action control tasks and which efficiently learns from the experience
gained from the environment. We evaluate the proposed method in four complex
tasks: automatic car parking, pole-balancing, helicopter hovering, and business
management.
|
1402.0561 | Irrelevant and independent natural extension for sets of desirable
gambles | cs.AI | The results in this paper add useful tools to the theory of sets of desirable
gambles, a growing toolbox for reasoning with partial probability assessments.
We investigate how to combine a number of marginal coherent sets of desirable
gambles into a joint set using the properties of epistemic irrelevance and
independence. We provide formulas for the smallest such joint, called their
independent natural extension, and study its main properties. The independent
natural extension of maximal coherent sets of desirable gambles allows us to
define the strong product of sets of desirable gambles. Finally, we explore an
easy way to generalise these results to also apply for the conditional versions
of epistemic irrelevance and independence. Having such a set of tools that are
easily implemented in computer programs is clearly beneficial to fields, like
AI, with a clear interest in coherent reasoning under uncertainty using general
and robust uncertainty models that require no full specification.
|
1402.0562 | Online Stochastic Optimization under Correlated Bandit Feedback | stat.ML cs.LG cs.SY | In this paper we consider the problem of online stochastic optimization of a
locally smooth function under bandit feedback. We introduce the high-confidence
tree (HCT) algorithm, a novel any-time $\mathcal{X}$-armed bandit algorithm,
and derive regret bounds matching the performance of existing state-of-the-art
in terms of dependency on number of steps and smoothness factor. The main
advantage of HCT is that it handles the challenging case of correlated rewards,
whereas existing methods require that the reward-generating process of each arm
be an independent and identically distributed (i.i.d.) random process. HCT also
improves on the state-of-the-art in terms of its memory requirement, as well as
requiring a weaker smoothness assumption on the mean-reward function compared
to previous anytime algorithms. Finally, we discuss how HCT can be applied
to the problem of policy search in reinforcement learning and we report
preliminary empirical results.
|
1402.0563 | Evaluating Indirect Strategies for Chinese-Spanish Statistical Machine
Translation | cs.CL | Although Chinese and Spanish are two of the most spoken languages in the
world, not much research has been done in machine translation for this language
pair. This paper focuses on investigating the state-of-the-art of
Chinese-to-Spanish statistical machine translation (SMT), which nowadays is one
of the most popular approaches to machine translation. For this purpose, we
report details of the available parallel corpora, which are the Basic Traveller
Expressions Corpus (BTEC), the Holy Bible and the United Nations (UN) corpus. Additionally, we
conduct experimental work with the largest of these three corpora to explore
alternative SMT strategies by means of using a pivot language. Three
alternatives are considered for pivoting: cascading, pseudo-corpus and
triangulation. As pivot language, we use either English, Arabic or French.
Results show that, for a phrase-based SMT system, English is the best pivot
language between Chinese and Spanish. We propose a system output combination
using the pivot strategies which is capable of outperforming the direct
translation strategy. The main objective of this work is motivating and
involving the research community to work in this important pair of languages
given their demographic impact.
|
1402.0564 | A Hybrid LP-RPG Heuristic for Modelling Numeric Resource Flows in
Planning | cs.AI | Although the use of metric fluents is fundamental to many practical planning
problems, the study of heuristics to support fully automated planners working
with these fluents remains relatively unexplored. The most widely used
heuristic is the relaxation of metric fluents into interval-valued variables,
an idea first proposed a decade ago. Other heuristics depend on domain
encodings that supply additional information about fluents, such as capacity
constraints or other resource-related annotations. A particular challenge to
these approaches is in handling interactions between metric fluents that
represent exchange, such as the transformation of quantities of raw materials
into quantities of processed goods, or trading of money for materials. The
usual relaxation of metric fluents is often very poor in these situations,
since it does not recognise that resources, once spent, are no longer available
to be spent again. We present a heuristic for numeric planning problems
building on the propositional relaxed planning graph, but using a mathematical
program for numeric reasoning. We define a class of producer--consumer planning
problems and demonstrate how the numeric constraints in these can be modelled
in a mixed integer program (MIP). This MIP is then combined with a metric
Relaxed Planning Graph (RPG) heuristic to produce an integrated hybrid
heuristic. The MIP tracks resource use more accurately than the usual
relaxation, but relaxes the ordering of actions, while the RPG captures the
causal propositional aspects of the problem. We discuss how these two
components interact to produce a single unified heuristic and go on to explore
how further numeric features of planning problems can be integrated into the
MIP. We show that encoding a limited subset of the propositional problem to
augment the MIP can yield more accurate guidance, partly by exploiting
structure such as propositional landmarks and propositional resources. Our
results show that the use of this heuristic enhances scalability on problems
where numeric resource interaction is key in finding a solution.
|
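The weakness of the interval relaxation on exchange-style resources, noted in the abstract above, is easy to see in a toy example (the numbers and action are illustrative inventions, not from the paper): with 5 units of money and a buy action that consumes 4 money and produces 1 wood, the relaxation believes 2 wood is reachable, because money's upper bound never decreases and so the action remains applicable forever.

```python
def relaxed_reachable(goal_wood, money=5, cost=4, layers=10):
    """Interval relaxation: money's upper bound never drops, so the buy
    action's precondition stays satisfied and wood's upper bound grows
    at every layer of the relaxed planning graph."""
    money_hi, wood_hi = money, 0
    for _ in range(layers):
        if money_hi >= cost:      # precondition checked against the upper bound
            wood_hi += 1          # effect only widens wood's interval
    return wood_hi >= goal_wood

def real_reachable(goal_wood, money=5, cost=4):
    """Actual semantics: each purchase consumes money."""
    wood = 0
    while money >= cost:
        money -= cost
        wood += 1
    return wood >= goal_wood

print(relaxed_reachable(2), real_reachable(2))  # True False
```

The relaxation's "yes" on an unsolvable instance is exactly the kind of heuristic error that tracking resource balances in a MIP avoids.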
1402.0565 | Lifted Variable Elimination: Decoupling the Operators from the
Constraint Language | cs.AI | Lifted probabilistic inference algorithms exploit regularities in the
structure of graphical models to perform inference more efficiently. More
specifically, they identify groups of interchangeable variables and perform
inference once per group, as opposed to once per variable. The groups are
defined by means of constraints, so the flexibility of the grouping is
determined by the expressivity of the constraint language. Existing approaches
for exact lifted inference use specific languages for (in)equality constraints,
which often have limited expressivity. In this article, we decouple lifted
inference from the constraint language. We define operators for lifted
inference in terms of relational algebra operators, so that they operate on the
semantic level (the constraints extension) rather than on the syntactic level,
making them language-independent. As a result, lifted inference can be
performed using more powerful constraint languages, which provide more
opportunities for lifting. We empirically demonstrate that this can improve
inference efficiency by orders of magnitude, allowing exact inference where
until now only approximate inference was feasible.
|
1402.0566 | Incremental Clustering and Expansion for Faster Optimal Planning in
Dec-POMDPs | cs.AI | This article presents the state-of-the-art in optimal solution methods for
decentralized partially observable Markov decision processes (Dec-POMDPs),
which are general models for collaborative multiagent planning under
uncertainty. Building off the generalized multiagent A* (GMAA*) algorithm,
which reduces the problem to a tree of one-shot collaborative Bayesian games
(CBGs), we describe several advances that greatly expand the range of
Dec-POMDPs that can be solved optimally. First, we introduce lossless
incremental clustering of the CBGs solved by GMAA*, which achieves exponential
speedups without sacrificing optimality. Second, we introduce incremental
expansion of nodes in the GMAA* search tree, which avoids the need to expand
all children, the number of which is in the worst case doubly exponential in
the node's depth. This is particularly beneficial when little clustering is
possible. In addition, we introduce new hybrid heuristic representations that
are more compact and thereby enable the solution of larger Dec-POMDPs. We
provide theoretical guarantees that, when a suitable heuristic is used, both
incremental clustering and incremental expansion yield algorithms that are both
complete and search equivalent. Finally, we present extensive empirical results
demonstrating that GMAA*-ICE, an algorithm that synthesizes these advances, can
optimally solve Dec-POMDPs of unprecedented size.
|
1402.0568 | Boolean Equi-propagation for Concise and Efficient SAT Encodings of
Combinatorial Problems | cs.AI | We present an approach to propagation-based SAT encoding of combinatorial
problems, Boolean equi-propagation, where constraints are modeled as Boolean
functions which propagate information about equalities between Boolean
literals. This information is then applied to simplify the CNF encoding of the
constraints. A key factor is that considering only a small fragment of a
constraint model at one time enables us to apply stronger, and even complete,
reasoning to detect equivalent literals in that fragment. Once detected,
equivalences apply to simplify the entire constraint model and facilitate
further reasoning on other fragments. Equi-propagation in combination with
partial evaluation and constraint simplification provide the foundation for a
powerful approach to SAT-based finite domain constraint solving. We introduce a
tool called BEE (Ben-Gurion Equi-propagation Encoder) based on these ideas and
demonstrate for a variety of benchmarks that our approach leads to a
considerable reduction in the size of CNF encodings and subsequent speed-ups in
SAT solving times.
|
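The core idea, detecting an equivalence between literals from a small fragment and using it to simplify the CNF of the entire model, can be sketched in a few lines. This is a toy substitution over DIMACS-style integer literals, not BEE's actual machinery; the example constraint and clauses are invented for illustration.

```python
def substitute(cnf, var, by):
    """Replace every occurrence of variable `var` with literal `by`
    (negated occurrences become -by), then drop clauses that became
    tautologies. Literals are DIMACS-style nonzero integers."""
    out = []
    for clause in cnf:
        new = {by if l == var else -by if l == -var else l for l in clause}
        if any(-l in new for l in new):   # tautology: clause is always true
            continue
        out.append(sorted(new))
    return out

# The fragment x1 + x2 = 1 (over Booleans) implies the equivalence x2 = -x1.
# Applying it shrinks the whole encoding, not just the fragment:
cnf = [[1, 2], [-1, -2],          # the x1 + x2 = 1 fragment itself
       [2, 3], [-2, 4], [1, 3, 4]]
print(substitute(cnf, 2, -1))     # → [[-1, 3], [1, 4], [1, 3, 4]]
```

Five clauses become three, and variable 2 disappears from the encoding entirely.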
1402.0569 | Description Logic Knowledge and Action Bases | cs.AI cs.LO | Description logic Knowledge and Action Bases (KAB) are a mechanism for
providing both a semantically rich representation of the information on the
domain of interest in terms of a description logic knowledge base and actions
to change such information over time, possibly introducing new objects. We
resort to a variant of DL-Lite where the unique name assumption is not enforced
and where equality between objects may be asserted and inferred. Actions are
specified as sets of conditional effects, where conditions are based on
epistemic queries over the knowledge base (TBox and ABox), and effects are
expressed in terms of new ABoxes. In this setting, we address verification of
temporal properties expressed in a variant of first-order mu-calculus with
quantification across states. Notably, we show decidability of verification,
under a suitable restriction inspired by the notion of weak acyclicity in data
exchange.
|
1402.0570 | A Feature Subset Selection Algorithm Automatic Recommendation Method | cs.LG | Many feature subset selection (FSS) algorithms have been proposed, but not
all of them are appropriate for a given feature selection problem. At the same
time, there is so far no good general way to choose appropriate FSS algorithms
for the problem at hand. Thus, FSS algorithm automatic recommendation is very
important and practically useful. In this paper, a meta learning based FSS
algorithm automatic recommendation method is presented. The proposed method
first identifies the data sets that are most similar to the one at hand by the
k-nearest neighbor classification algorithm, and the distances among these data
sets are calculated based on the commonly-used data set characteristics. Then,
it ranks all the candidate FSS algorithms according to their performance on
these similar data sets, and chooses the algorithms with best performance as
the appropriate ones. The performance of the candidate FSS algorithms is
evaluated by a multi-criteria metric that takes into account not only the
classification accuracy over the selected features, but also the runtime of
feature selection and the number of selected features. The proposed
recommendation method is extensively tested on 115 real-world data sets with 22
different well-known and frequently used FSS algorithms for five representative
classifiers. The results show the effectiveness of our proposed FSS algorithm
recommendation method.
|
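The recommendation pipeline described above (find the most similar historical data sets by their characteristics, then rank FSS algorithms by recorded performance on those neighbours) can be sketched roughly as follows. The meta-features, distance, and performance table are illustrative stand-ins, not the paper's actual characterizations or multi-criteria metric.

```python
import math

# Hypothetical meta-feature vectors for historical data sets
# (e.g. #instances, #features, class entropy), plus recorded
# performance of each candidate FSS algorithm on them.
meta_features = {
    "ds1": [100.0, 20.0, 0.9],
    "ds2": [5000.0, 50.0, 0.4],
    "ds3": [120.0, 25.0, 0.8],
}
performance = {  # performance[data_set][algorithm] -> score (higher is better)
    "ds1": {"relief": 0.82, "cfs": 0.75, "mrmr": 0.70},
    "ds2": {"relief": 0.60, "cfs": 0.78, "mrmr": 0.81},
    "ds3": {"relief": 0.85, "cfs": 0.72, "mrmr": 0.69},
}

def recommend(new_ds_features, k=2):
    """Rank FSS algorithms by mean performance on the k nearest data sets."""
    neighbours = sorted(meta_features,
                        key=lambda d: math.dist(meta_features[d],
                                                new_ds_features))[:k]
    algos = performance[neighbours[0]].keys()
    scores = {a: sum(performance[d][a] for d in neighbours) / k for a in algos}
    return sorted(scores, key=scores.get, reverse=True)

print(recommend([110.0, 22.0, 0.85]))  # → ['relief', 'cfs', 'mrmr']
```

The new data set is closest to ds1 and ds3, so the ranking reflects performance averaged over those two neighbours.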
1402.0571 | Analysis of Watson's Strategies for Playing Jeopardy! | cs.AI | Major advances in Question Answering technology were needed for IBM Watson to
play Jeopardy! at championship level -- the show requires rapid-fire answers to
challenging natural language questions, broad general knowledge, high
precision, and accurate confidence estimates. In addition, Jeopardy! features
four types of decision making carrying great strategic importance: (1) Daily
Double wagering; (2) Final Jeopardy wagering; (3) selecting the next square
when in control of the board; (4) deciding whether to attempt to answer, i.e.,
"buzz in." Using sophisticated strategies for these decisions that properly
account for the game state and future event probabilities, can significantly
boost a player's overall chances to win, when compared with simple "rule of
thumb" strategies. This article presents our approach to developing Watson's
game-playing strategies, comprising development of a faithful simulation model,
and then using learning and Monte-Carlo methods within the simulator to
optimize Watson's strategic decision-making. After giving a detailed description
of each of our game-strategy algorithms, we then focus in particular on
validating the accuracy of the simulator's predictions, and documenting
performance improvements using our methods. Quantitative performance benefits
are shown with respect to both simple heuristic strategies, and actual human
contestant performance in historical episodes. We further extend our analysis
of human play to derive a number of valuable and counterintuitive examples
illustrating how human contestants may improve their performance on the show.
|
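The Monte-Carlo optimization of a wagering decision inside a simulator can be illustrated with a drastically simplified Final Jeopardy example. All of the modelling choices here (a two-player game, fixed answer accuracies, an opponent who wagers a uniform random fraction of their score) are assumptions made up for the sketch, not Watson's actual models.

```python
import random

def win_probability(my_score, opp_score, my_wager, p_me=0.6, p_opp=0.6,
                    trials=20000, rng=random.Random(0)):
    """Estimate P(win) for a candidate Final Jeopardy wager by simulating
    both players' answer outcomes and the opponent's (toy) wager."""
    wins = 0
    for _ in range(trials):
        me = my_score + my_wager * (1 if rng.random() < p_me else -1)
        opp_wager = rng.random() * opp_score
        opp = opp_score + opp_wager * (1 if rng.random() < p_opp else -1)
        wins += me > opp
    return wins / trials

def best_wager(my_score, opp_score, step=1000):
    """Grid-search the wager that maximises estimated win probability."""
    return max(range(0, my_score + 1, step),
               key=lambda w: win_probability(my_score, opp_score, w))

print(best_wager(20000, 15000))
```

Replacing the grid search with a learned policy, and the toy opponent with models fit to historical contestant behaviour, is the direction the article's approach takes.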
1402.0573 | Identifying the Class of Maxi-Consistent Operators in Argumentation | cs.AI | Dung's abstract argumentation theory can be seen as a general framework for
non-monotonic reasoning. An important question is then: what is the class of
logics that can be subsumed as instantiations of this theory? The goal of this
paper is to identify and study the large class of logic-based instantiations of
Dung's theory which correspond to the maxi-consistent operator, i.e. to the
function which returns maximal consistent subsets of an inconsistent knowledge
base. In other words, we study the class of instantiations where every extension
of the argumentation system corresponds to exactly one maximal consistent
subset of the knowledge base. We show that an attack relation belonging to this
class must be conflict-dependent, must not be valid, must not be
conflict-complete, must not be symmetric etc. Then, we show that some attack
relations serve as lower or upper bounds of the class (e.g. if an attack
relation contains canonical undercut then it is not a member of this class). By
using our results, we show for all existing attack relations whether or not
they belong to this class. We also define new attack relations which are
members of this class. Finally, we interpret our results and discuss more
general questions, like: what is the added value of argumentation in such a
setting? We believe that this work is a first step towards achieving our
long-term goal, which is to better understand the role of argumentation and,
particularly, the expressivity of logic-based instantiations of Dung-style
argumentation frameworks.
|
1402.0574 | Learning to Predict from Textual Data | cs.CL cs.AI cs.IR | Given a current news event, we tackle the problem of generating plausible
predictions of future events it might cause. We present a new methodology for
modeling and predicting such future news events using machine learning and data
mining techniques. Our Pundit algorithm generalizes examples of causality pairs
to infer a causality predictor. To obtain precisely labeled causality examples,
we mine 150 years of news articles and apply semantic natural language modeling
techniques to headlines containing certain predefined causality patterns. For
generalization, the model uses a vast number of world knowledge ontologies.
Empirical evaluation on real news articles shows that our Pundit algorithm
performs as well as non-expert humans.
|
1402.0575 | Reasoning about Explanations for Negative Query Answers in DL-Lite | cs.AI cs.LO | In order to meet usability requirements, most logic-based applications
provide explanation facilities for reasoning services. This holds also for
Description Logics, where research has focused on the explanation of both TBox
reasoning and, more recently, query answering. Besides explaining the presence
of a tuple in a query answer, it is important to explain also why a given tuple
is missing. We address the latter problem for instance and conjunctive query
answering over DL-Lite ontologies by adopting abductive reasoning; that is, we
look for additions to the ABox that force a given tuple to be in the result. As
reasoning tasks we consider existence and recognition of an explanation, and
relevance and necessity of a given assertion for an explanation. We
characterize the computational complexity of these problems for arbitrary,
subset minimal, and cardinality minimal explanations.
|
1402.0576 | Optimizing SPARQL Query Answering over OWL Ontologies | cs.DB cs.AI | The SPARQL query language is currently being extended by the World Wide Web
Consortium (W3C) with so-called entailment regimes. An entailment regime
defines how queries are evaluated under more expressive semantics than SPARQL's
standard simple entailment, which is based on subgraph matching. The queries
are very expressive since variables can occur within complex concepts and can
also bind to concept or role names. In this paper, we describe a sound and
complete algorithm for the OWL Direct Semantics entailment regime. We further
propose several novel optimizations such as strategies for determining a good
query execution order, query rewriting techniques, and show how specialized OWL
reasoning tasks and the concept and role hierarchy can be used to reduce the
query execution time. For determining a good execution order, we propose a
cost-based model, where the costs are based on information about the instances
of concepts and roles that are extracted from a model abstraction built by an
OWL reasoner. We present two ordering strategies: a static and a dynamic one.
For the dynamic case, we improve the performance by exploiting an individual
clustering approach that allows for computing the cost functions based on one
individual sample from a cluster. We provide a prototypical implementation and
evaluate the efficiency of the proposed optimizations. Our experimental study
shows that the static ordering usually outperforms the dynamic one when
accurate statistics are available. This changes, however, when the statistics
are less accurate, e.g., due to nondeterministic reasoning decisions. For
queries that go beyond conjunctive instance queries we observe an improvement
of up to three orders of magnitude due to the proposed optimizations.
|
1402.0577 | A Survey on Latent Tree Models and Applications | cs.LG | In data analysis, latent variables play a central role because they help
provide powerful insights into a wide variety of phenomena, ranging from
biological to human sciences. The latent tree model, a particular type of
probabilistic graphical models, deserves attention. Its simple structure - a
tree - allows simple and efficient inference, while its latent variables
capture complex relationships. In the past decade, the latent tree model has
been subject to significant theoretical and methodological developments. In
this review, we propose a comprehensive study of this model. First we summarize
key ideas underlying the model. Second we explain how it can be efficiently
learned from data. Third we illustrate its use within three types of
applications: latent structure discovery, multidimensional clustering, and
probabilistic inference. Finally, we conclude and give promising directions for
future research in this field.
|
1402.0578 | Natural Language Inference for Arabic Using Extended Tree Edit Distance
with Subtrees | cs.CL | Many natural language processing (NLP) applications require the computation
of similarities between pairs of syntactic or semantic trees. Many researchers
have used tree edit distance for this task, but this technique suffers from the
drawback that it deals with single node operations only. We have extended the
standard tree edit distance algorithm to deal with subtree transformation
operations as well as single nodes. The extended algorithm with subtree
operations, TED+ST, is more effective and flexible than the standard algorithm,
especially for applications that pay attention to relations among nodes (e.g.
in linguistic trees, deleting a modifier subtree should be cheaper than the sum
of deleting its components individually). We describe the use of TED+ST for
checking entailment between two Arabic text snippets. The preliminary results
of using TED+ST were encouraging when compared with two string-based approaches
and with the standard algorithm.
|
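For context, the standard single-node tree edit distance that TED+ST extends can be written as a compact memoized recursion over forests (unit costs for insert, delete, and relabel). This is the baseline algorithm only; the paper's contribution, subtree-level operations, is not implemented here, and the example trees are invented.

```python
from functools import lru_cache

# A tree is (label, (child, child, ...)); a forest is a tuple of trees.
def size(forest):
    return sum(1 + size(children) for _, children in forest)

@lru_cache(maxsize=None)
def ted(f1, f2):
    """Forest edit distance with unit-cost single-node operations."""
    if not f1:
        return size(f2)
    if not f2:
        return size(f1)
    (l1, c1), (l2, c2) = f1[-1], f2[-1]
    return min(
        ted(f1[:-1] + c1, f2) + 1,            # delete root of last tree in f1
        ted(f1, f2[:-1] + c2) + 1,            # insert root of last tree in f2
        ted(f1[:-1], f2[:-1]) + ted(c1, c2)   # match roots, relabel if needed
        + (l1 != l2),
    )

t1 = ("S", (("NP", ()), ("VP", (("V", ()), ("NP", ())))))
t2 = ("S", (("NP", ()), ("VP", (("V", ()),))))
print(ted((t1,), (t2,)))  # → 1 (delete the object NP)
```

Under this algorithm deleting a whole modifier subtree costs the sum of deleting its nodes one by one, which is precisely the behaviour TED+ST's subtree operations are designed to cheapen.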
1402.0579 | Probabilistic Planning for Continuous Dynamic Systems under Bounded Risk | cs.AI | This paper presents a model-based planner called the Probabilistic Sulu
Planner or the p-Sulu Planner, which controls stochastic systems in a goal
directed manner within user-specified risk bounds. The objective of the p-Sulu
Planner is to allow users to command continuous, stochastic systems, such as
unmanned aerial and space vehicles, in a manner that is both intuitive and
safe. To this end, we first develop a new plan representation called a
chance-constrained qualitative state plan (CCQSP), through which users can
specify the desired evolution of the plant state as well as the acceptable
level of risk. An example of a CCQSP statement is "go to A through B within 30
minutes, with less than 0.001% probability of failure." We then develop the
p-Sulu Planner, which can tractably solve a CCQSP planning problem. In order to
enable CCQSP planning, we develop the following two capabilities in this paper:
1) risk-sensitive planning with risk bounds, and 2) goal-directed planning in a
continuous domain with temporal constraints. The first capability ensures
that the probability of failure is bounded. The second capability is essential
for the planner to solve problems with a continuous state space such as vehicle
path planning. We demonstrate the capabilities of the p-Sulu Planner by
simulations on two real-world scenarios: the path planning and scheduling of a
personal aerial vehicle as well as the space rendezvous of an autonomous cargo
spacecraft.
|
1402.0580 | On the Computation of Fully Proportional Representation | cs.GT cs.MA | We investigate two systems of fully proportional representation suggested by
Chamberlin and Courant, and by Monroe. Both systems assign a representative to each
voter so that the "sum of misrepresentations" is minimized. The winner
determination problem for both systems is known to be NP-hard, hence this work
aims at investigating whether there are variants of the proposed rules and/or
specific electorates for which these problems can be solved efficiently. As a
variation of these rules, instead of minimizing the sum of misrepresentations,
we considered minimizing the maximal misrepresentation, effectively introducing
two new rules. In the general case these "minimax" versions of the classical
rules turned out to still be NP-hard. We investigated the parameterized complexity of
winner determination of the two classical and two new rules with respect to
several parameters. Here we have a mixture of positive and negative results:
e.g., we proved fixed-parameter tractability with respect to the number of
candidates, but fixed-parameter intractability with respect to the number of winners. For
single-peaked electorates our results are overwhelmingly positive: we provide
polynomial-time algorithms for most of the considered problems. The only rule
that remains NP-hard for single-peaked electorates is the classical Monroe
rule.
|
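The utilitarian ("sum") and minimax ("max") objectives can be made concrete with a brute-force winner-determination sketch for a Chamberlin-Courant-style rule, where a voter's misrepresentation is the rank she assigns to her best committee member. Brute force is exponential in the number of candidates and is shown only to illustrate the two objectives; it is not one of the paper's algorithms, and the preference profile is invented.

```python
from itertools import combinations

def misrepresentation(committee, ranking):
    """A voter's misrepresentation: 0-based position of her most
    preferred committee member in her ranking."""
    return min(ranking.index(c) for c in committee)

def winners(rankings, candidates, k, aggregate=sum):
    """Pick the size-k committee minimising aggregate misrepresentation.
    aggregate=sum gives the classical objective, aggregate=max the minimax one."""
    def score(committee):
        return aggregate(misrepresentation(committee, r) for r in rankings)
    return min(combinations(candidates, k), key=score)

rankings = [list("abcd"), list("abcd"), list("dcba")]
print(winners(rankings, "abcd", 1))                 # → ('a',)
print(winners(rankings, "abcd", 1, aggregate=max))  # → ('b',)
```

The two objectives disagree on this profile: the sum rule picks the majority favourite `a` (total misrepresentation 3, but the third voter ranks `a` last), while the minimax rule prefers `b`, whose worst-off voter is only at rank 2.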
1402.0581 | Qualitative Order of Magnitude Energy-Flow-Based Failure Modes and
Effects Analysis | cs.AI | This paper presents a structured power and energy-flow-based qualitative
modelling approach that is applicable to a variety of system types including
electrical and fluid flow. The modelling is split into two parts. Power flow is
a global phenomenon and is therefore naturally represented and analysed by a
network comprised of the relevant structural elements from the components of a
system. The power flow analysis is a platform for higher-level behaviour
prediction of energy related aspects using local component behaviour models to
capture a state-based representation with a global time. The primary
application is Failure Modes and Effects Analysis (FMEA) and a form of
exaggeration reasoning is used, combined with an order of magnitude
representation to derive the worst case failure modes. The novel aspects of the
work are an order of magnitude (OM) qualitative network analyser to represent
any power domain and topology, including multiple power sources, a feature that
was not required for earlier specialised electrical versions of the approach.
Secondly, the representation of generalised energy related behaviour as
state-based local models is presented as a modelling strategy that can be more
vivid and intuitive for a range of topologically complex applications than
qualitative equation-based representations. The two-level modelling strategy
allows the broad system behaviour coverage of qualitative simulation to be
exploited for the FMEA task, while limiting the difficulties of qualitative
ambiguity explanation that can arise from abstracted numerical models. We have
used the method to support an automated FMEA system with examples of an
aircraft fuel system and a domestic heating system discussed in this paper.
|
1402.0582 | Scheduling a Dynamic Aircraft Repair Shop with Limited Repair Resources | cs.AI | We address a dynamic repair shop scheduling problem in the context of
military aircraft fleet management where the goal is to maintain a full
complement of aircraft over the long-term. A number of flights, each with a
requirement for a specific number and type of aircraft, are already scheduled
over a long horizon. We need to assign aircraft to flights and schedule repair
activities while considering the flights' requirements, repair capacity, and
aircraft failures. The number of aircraft awaiting repair dynamically changes
over time due to failures and it is therefore necessary to rebuild the repair
schedule online. To solve the problem, we view the dynamic repair shop as
successive static repair scheduling sub-problems over shorter time periods. We
propose a complete approach based on the logic-based Benders decomposition to
solve the static sub-problems, and design different rescheduling policies to
schedule the dynamic repair shop. Computational experiments demonstrate that
the Benders model is able to find and prove optimal solutions on average four
times faster than a mixed integer programming model. The rescheduling approach
having both aspects of scheduling over a longer horizon and quickly adjusting
the schedule increases the number of aircraft available in the long term by 10% compared to
the approaches having either one of the aspects alone.
|
1402.0583 | Decentralized Anti-coordination Through Multi-agent Learning | cs.GT cs.MA | To achieve an optimal outcome in many situations, agents need to choose
distinct actions from one another. This is the case notably in many resource
allocation problems, where a single resource can only be used by one agent at a
time. How shall a designer of a multi-agent system program its identical agents
to behave each in a different way? From a game theoretic perspective, such
situations lead to undesirable Nash equilibria. For example, consider a resource
allocation game in which two players compete for exclusive access to a single
resource. It has three Nash equilibria. The two pure-strategy NE are efficient,
but not fair. The one mixed-strategy NE is fair, but not efficient. Aumann's
notion of correlated equilibrium fixes this problem: It assumes a correlation
device that suggests an action for each agent to take. However, such a "smart"
coordination device might not be available. We propose using a randomly chosen,
"stupid" integer coordination signal. "Smart" agents learn which action they
should use for each value of the coordination signal. We present a multi-agent
learning algorithm that converges in a polynomial number of steps to a correlated
equilibrium of a channel allocation game, a variant of the resource allocation
game. We show that the agents learn to play for each coordination signal value
a randomly chosen pure-strategy Nash equilibrium of the game. Therefore, the
outcome is an efficient correlated equilibrium. This CE becomes more fair as
the number of the available coordination signal values increases.
|
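A much cruder trial-and-error scheme than the paper's algorithm already shows how a "stupid" random coordination signal lets identical agents learn distinct actions per signal value: agents that collide under the current signal re-draw their action for that signal, and uncontested agents keep theirs. The update rule and parameters are a toy invention for illustration.

```python
import random

def learn_anticoordination(n_agents, n_channels, n_signals,
                           rounds=20000, seed=0):
    """Toy signal-conditioned anti-coordination learning: each agent keeps
    one action per signal value; collisions trigger a uniform re-draw."""
    rng = random.Random(seed)
    # policy[agent][signal] -> channel
    policy = [[rng.randrange(n_channels) for _ in range(n_signals)]
              for _ in range(n_agents)]
    for _ in range(rounds):
        s = rng.randrange(n_signals)
        chosen = [policy[a][s] for a in range(n_agents)]
        for a in range(n_agents):
            if chosen.count(chosen[a]) > 1:        # collision: try another channel
                policy[a][s] = rng.randrange(n_channels)
    return policy

policy = learn_anticoordination(n_agents=3, n_channels=3, n_signals=4)
collision_free = all(len({policy[a][s] for a in range(3)}) == 3
                     for s in range(4))
print(collision_free)  # each signal value maps to a collision-free assignment
```

Once a signal value reaches a collision-free profile it is absorbing (nobody re-draws), so each signal settles into some pure-strategy equilibrium; averaging over the signal values is what makes the resulting correlated equilibrium fair.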
1402.0584 | NuMVC: An Efficient Local Search Algorithm for Minimum Vertex Cover | cs.AI cs.DS | The Minimum Vertex Cover (MVC) problem is a prominent NP-hard combinatorial
optimization problem of great importance in both theory and application. Local
search has proved successful for this problem. However, there are two main
drawbacks in state-of-the-art MVC local search algorithms. First, they select a
pair of vertices to exchange simultaneously, which is time-consuming. Secondly,
although using edge weighting techniques to diversify the search, these
algorithms lack mechanisms for decreasing the weights. To address these issues,
we propose two new strategies: two-stage exchange and edge weighting with
forgetting. The two-stage exchange strategy selects two vertices to exchange
separately and performs the exchange in two stages. The strategy of edge
weighting with forgetting not only increases weights of uncovered edges, but
also decreases some weights for each edge periodically. These two strategies
are used in designing a new MVC local search algorithm, which is referred to as
NuMVC. We conduct extensive experimental studies on the standard benchmarks,
namely DIMACS and BHOSLIB. The experiments comparing NuMVC with state-of-the-art
heuristic algorithms show that NuMVC is at least competitive with the nearest
competitor, namely PLS, on the DIMACS benchmark, and clearly dominates all
competitors on the BHOSLIB benchmark. Also, experimental results indicate that
NuMVC finds an optimal solution much faster than the current best exact
algorithm for Maximum Clique on random instances as well as some structured
ones. Moreover, we study the effectiveness of the two strategies and the
run-time behaviour through experimental analysis.
|
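The two strategies named in the abstract can be sketched in a minimal local search for a size-k vertex cover: stage one removes the vertex whose loss is smallest, stage two adds an endpoint of a still-uncovered edge, and uncovered-edge weights grow each step but are periodically scaled down ("forgetting"). This is a stripped-down illustration only; it omits NuMVC's further refinements (e.g. tie-breaking and its exact scoring details), and the parameters are arbitrary.

```python
import random

def local_search_vc(edges, k, steps=50000, forget_every=1000, rho=0.3, seed=0):
    """Minimal sketch of a two-stage-exchange local search for a size-k
    vertex cover, with edge weighting and periodic forgetting."""
    rng = random.Random(seed)
    vertices = {v for e in edges for v in e}
    cover = set(rng.sample(sorted(vertices), k))
    weight = {e: 1 for e in edges}

    def dscore(v):  # change in covered weight if v's membership is flipped
        s = sum(weight[e] for e in edges if v in e
                and len(cover & set(e)) == (1 if v in cover else 0))
        return -s if v in cover else s

    for step in range(1, steps + 1):
        if all(cover & set(e) for e in edges):
            return cover
        # Stage 1: remove the vertex whose loss is smallest.
        cover.remove(max(cover, key=dscore))
        # Stage 2: add the better endpoint of a random uncovered edge.
        e = rng.choice([e for e in edges if not cover & set(e)])
        cover.add(max(e, key=dscore))
        for e in edges:                      # uncovered edges gain weight
            if not cover & set(e):
                weight[e] += 1
        if step % forget_every == 0:         # forgetting: scale weights down
            for e in weight:
                weight[e] = max(1, int(rho * weight[e]))
    return None

# A 5-cycle has minimum vertex cover of size 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(local_search_vc(edges, 3))
```

Separating removal from addition avoids evaluating all vertex pairs at once, which is the time saving the two-stage exchange is designed to deliver.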
1402.0585 | AI Methods in Algorithmic Composition: A Comprehensive Survey | cs.AI | Algorithmic composition is the partial or total automation of the process of
music composition by using computers. Since the 1950s, different computational
techniques related to Artificial Intelligence have been used for algorithmic
composition, including grammatical representations, probabilistic methods,
neural networks, symbolic rule-based systems, constraint programming and
evolutionary algorithms. This survey aims to be a comprehensive account of
research on algorithmic composition, presenting a thorough view of the field
for researchers in Artificial Intelligence.
|