| id | title | categories | abstract |
|---|---|---|---|
1401.3516 | Clustering Evolving Networks | cs.SI cs.CY physics.soc-ph | Roughly speaking, clustering evolving networks aims at detecting structurally
dense subgroups in networks that evolve over time. This implies that the
subgroups we seek also evolve, which results in many additional tasks
compared to clustering static networks. We discuss these additional tasks and
the difficulties resulting from them, and present an overview of current
approaches to solving these problems. We focus on clustering approaches in
online scenarios,
i.e., approaches that incrementally use structural information from previous
time steps in order to incorporate temporal smoothness or to achieve low
running time. Moreover, we describe a collection of real world networks and
generators for synthetic data that are often used for evaluation.
|
1401.3520 | Adaptive Mode Selection for Bidirectional Relay Networks -- Fixed Rate
Transmission | cs.IT math.IT | In this paper, we consider the problem of sum throughput maximization for
bidirectional relay networks with block fading, in which user 1 and user 2
exchange information only via a relay node, i.e., a direct link between both
users is not present. We assume that channel state information at the
transmitter (CSIT) is not available and/or only one coding and modulation
scheme is used at the transmitters due to complexity constraints. Thus, the
nodes transmit with a fixed predefined rate regardless of the channel state
information (CSI). In general, the nodes in the network can assume one of three
possible states in each time slot, namely the transmit, receive, and silent
state. Most of the existing protocols assume a fixed schedule for the sequence
of the states of the nodes. In this paper, we abandon the restriction of having
a fixed and predefined schedule and propose a new protocol which, based on the
CSI at the receiver (CSIR), selects the optimal states of the nodes in each
time slot such that the sum throughput is maximized. To this end, the relay has
to be equipped with two buffers for storage of the information received from
the two users. Numerical results show that the proposed protocol significantly
outperforms the existing protocols.
|
1401.3521 | Analysis of Oscillator Phase-Noise Effects on Self-Interference
Cancellation in Full-Duplex OFDM Radio Transceivers | cs.IT math.IT | This paper addresses the analysis of oscillator phase-noise effects on the
self-interference cancellation capability of full-duplex direct-conversion
radio transceivers. Closed-form solutions are derived for the power of the
residual self-interference stemming from phase noise in two alternative cases
of having either independent oscillators or the same oscillator at the
transmitter and receiver chains of the full-duplex transceiver. The results
show that phase noise has a severe effect on self-interference cancellation in
both of the considered cases, and that using a common oscillator for
upconversion and downconversion results in clearly lower residual
self-interference levels. The results also show that it is in general vital to
use high quality oscillators in full-duplex transceivers, or have some means
for phase noise estimation and mitigation in order to suppress its effects. One
of the main findings is that in practical scenarios the subcarrier-wise
phase-noise spread of the multipath components of the self-interference channel
causes most of the residual phase-noise effect when a high amount of
self-interference cancellation is desired.
|
1401.3525 | Is the month of Ramadan marked by a reduction in the number of suicides? | physics.soc-ph cs.SI | For Muslims the month of Ramadan is a time of fasting but during the evenings
after sunset it is also an occasion for family and social gatherings.
Therefore, according to the Bertillon-Durkheim conception of suicide (that is
based on the strength of social ties), one would expect a fall in suicide rates
during Ramadan. Is this conjecture confirmed by observation? That is the
question addressed in the present paper. Surprisingly, the trickiest part of
the investigation was to find reliable monthly suicide data. In the Islamic
world Turkey seems to be the only country whose statistical institute publishes
such observations. The data do indeed reveal a fall of about $15\%$ in suicide
numbers during the month of Ramadan (with respect to the same calendar months
when they do not coincide with Ramadan).
As the standard deviation is only $4.7\%$ this effect has a high degree of
significance. This observation, along with the fact that other occasions of
social gathering such as Thanksgiving or Christmas are also marked by a drop in
suicides, adds further credence to the B-D thesis.
|
1401.3527 | Extensions of the I-MMSE Relationship to Gaussian Channels with Feedback
and Memory | cs.IT math.IT | Unveiling a fundamental link between information theory and estimation
theory, the I-MMSE relationship by Guo, Shamai and Verdu~\cite{gu05}, together
with its numerous extensions, has great theoretical significance and various
practical applications. On the other hand, its influence to date has been
restricted to channels without feedback or memory, due to the absence of its
extensions to such channels. In this paper, we propose extensions of the I-MMSE
relationship to discrete-time and continuous-time Gaussian channels with
feedback and/or memory. Our approach is based on a very simple observation,
which can be applied to other scenarios, such as a simple and direct proof of
the classical de Bruijn's identity. This submission corrects the mistakes in
the previous version.
|
1401.3529 | Capacity Regions of Families of Continuous-Time Multi-User Gaussian
Channels | cs.IT math.IT | In this paper, we propose to use Brownian motions to model families of
continuous-time multiuser Gaussian channels without bandwidth limit. It turns
out that such a formulation allows parallel translation of many fundamental
notions and techniques from the discrete-time setting to the continuous-time
regime, which enables us to derive the capacity regions of a continuous-time
white Gaussian multiple access channel with/without feedback, a continuous-time
white Gaussian interference channel without feedback and a continuous-time
white Gaussian broadcast channel without feedback. In theory, these capacity
results give the fundamental transmission limit that modulation/coding schemes can
achieve for families of continuous-time Gaussian one-hop channels without
bandwidth limit; in practice, the explicit capacity regions derived and
capacity achieving modulation/coding scheme proposed may provide engineering
insights into designing multi-user communication systems operating in an
ultra-wideband regime.
|
1401.3531 | Highly comparative feature-based time-series classification | cs.LG cs.AI cs.DB physics.data-an q-bio.QM | A highly comparative, feature-based approach to time series classification is
introduced that uses an extensive database of algorithms to extract thousands
of interpretable features from time series. These features are derived from
across the scientific time-series analysis literature, and include summaries of
time series in terms of their correlation structure, distribution, entropy,
stationarity, scaling properties, and fits to a range of time-series models.
After computing thousands of features for each time series in a training set,
those that are most informative of the class structure are selected using
greedy forward feature selection with a linear classifier. The resulting
feature-based classifiers automatically learn the differences between classes
using a reduced number of time-series properties, and circumvent the need to
calculate distances between time series. Representing time series in this way
results in orders of magnitude of dimensionality reduction, allowing the method
to perform well on very large datasets containing long time series or time
series of different lengths. For many of the datasets studied, classification
performance exceeded that of conventional instance-based classifiers, including
one-nearest-neighbor classifiers using Euclidean distances and dynamic time
warping. Most importantly, the selected features provide an understanding of
the properties of the dataset, insight that can guide further scientific
investigation.
|
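The greedy forward selection with a linear classifier described in the abstract above can be sketched as follows. This is a generic illustration: the function names, the least-squares classifier used as the scorer, and the synthetic data are our own choices, not taken from the paper.

```python
import numpy as np

def lstsq_accuracy(Xs, y):
    # Least-squares linear classifier on labels in {-1, +1};
    # score = training accuracy (a stand-in for any linear classifier).
    A = np.hstack([Xs, np.ones((len(y), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean(np.sign(A @ w) == y)

def greedy_forward_selection(X, y, fit_score, max_features=3):
    # Repeatedly add the single feature whose inclusion most improves
    # the classifier score; stop when no remaining feature helps.
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining and len(selected) < max_features:
        score, j = max((fit_score(X[:, selected + [j]], y), j) for j in remaining)
        if score <= best:
            break
        best = score
        selected.append(j)
        remaining.remove(j)
    return selected, best

rng = np.random.default_rng(0)
y = np.repeat([-1.0, 1.0], 50)
X = rng.normal(size=(100, 5))
X[:, 2] += y                     # only feature 2 carries class information
sel, acc = greedy_forward_selection(X, y, lstsq_accuracy)
```

On this toy data the procedure picks the informative feature first, which mirrors the paper's point: a small set of selected properties, rather than distances between raw series, drives classification.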
1401.3538 | Full-Duplex Transceiver System Calculations: Analysis of ADC and
Linearity Challenges | cs.IT cs.ET math.IT | Despite the intensive recent research on wireless single-channel full-duplex
communications, relatively little is known about the transceiver chain
nonidealities of full-duplex devices. In this paper, the effect of nonlinear
distortion occurring in the transmitter power amplifier (PA) and the receiver
chain is analyzed, along with the dynamic range requirements of
analog-to-digital converters (ADCs). This is done with detailed system
calculations, which combine the properties of the individual electronics
components to jointly model the complete transceiver chain, including
self-interference cancellation. They also quantify the decrease in the dynamic
range for the signal of interest caused by self-interference at the
analog-to-digital interface. Using these system calculations, we provide
comprehensive numerical results for typical transceiver parameters. The
analytical results are also confirmed with full waveform simulations. We
observe that the nonlinear distortion produced by the transmitter PA is a
significant issue in a full-duplex transceiver and, when using cheaper and less
linear components, also the receiver chain nonlinearities become considerable.
It is also shown that, with digitally-intensive self-interference cancellation,
the quantization noise of the ADCs is another significant problem.
|
1401.3556 | Equivalent Codes, Optimality, and Performance Analysis of OSTBC:
Textbook Study | cs.IT math.IT | An equivalent model for a multi-input multi-output (MIMO) communication
system with orthogonal space-time block codes (OSTBCs) is proposed based on a
newly revealed connection between OSTBCs and Euclidean codes. Examples of
distance spectra, signal constellations, and signal coordinate diagrams of
Euclidean codes equivalent to the simplest OSTBCs are given. A new asymptotic upper
bound for the symbol error rate (SER) of OSTBCs, based on the distance spectra
of the introduced equivalent Euclidean codes, is derived, and new general design
criteria for signal constellations of the optimal OSTBC are proposed. Some
bounds relating distance properties, dimensionality, and cardinality of OSTBCs
with constituent signals of equal energy are given, and new optimal signal
constellations with cardinalities M = 8 and M = 16 for Alamouti's code are
designed. Using the new model for MIMO communication systems with OSTBCs, a
general methodology for performance analysis of OSTBCs is developed. As an
example of the application of this methodology, an exact evaluation of the SER
of any OSTBC is given. Namely, a new expression for the SER of Alamouti's OSTBC
with binary phase shift keying (BPSK) signals is derived.
|
1401.3566 | Reweighted l1-norm Penalized LMS for Sparse Channel Estimation and Its
Analysis | cs.IT math.IT | A new reweighted l1-norm penalized least mean square (LMS) algorithm for
sparse channel estimation is proposed and studied in this paper. Since the
standard LMS algorithm does not take into account the sparsity information about the
channel impulse response (CIR), sparsity-aware modifications of the LMS
algorithm aim at outperforming the standard LMS by introducing a penalty term
to the standard LMS cost function which forces the solution to be sparse. Our
reweighted l1-norm penalized LMS algorithm introduces in addition a reweighting
of the CIR coefficient estimates to promote a sparse solution even more strongly
and to approximate the l0 pseudo-norm more closely. We provide an in-depth
quantitative analysis of the reweighted l1-norm penalized LMS algorithm. An
expression for the excess
mean square error (MSE) of the algorithm is also derived which suggests that
under the right conditions, the reweighted l1-norm penalized LMS algorithm
outperforms the standard LMS, which is expected. However, our quantitative
analysis also answers the question of what is the maximum sparsity level in the
channel for which the reweighted l1-norm penalized LMS algorithm is better than
the standard LMS. Simulation results showing the better performance of the
reweighted l1-norm penalized LMS algorithm compared to other existing LMS-type
algorithms are given.
|
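A minimal sketch of a reweighted-l1 (zero-attracting) LMS update of the kind discussed above. The step sizes, the reweighting form `rho * sign(w) / (eps + |w|)`, and the toy sparse channel are our own illustrative assumptions and may differ from the paper's exact algorithm.

```python
import numpy as np

def rza_lms(x, d, n_taps, mu=0.02, rho=2e-4, eps=0.05):
    # Standard LMS gradient step plus a reweighted-l1 shrinkage term
    # that attracts small taps toward zero (promotes sparse estimates).
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]            # regressor, newest first
        e = d[n] - w @ u                             # a priori error
        w += mu * e * u                              # LMS correction
        w -= rho * np.sign(w) / (eps + np.abs(w))    # reweighted-l1 penalty
    return w

rng = np.random.default_rng(1)
h = np.zeros(16)
h[3], h[11] = 1.0, -0.5                              # sparse channel (toy)
x = rng.normal(size=4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
w = rza_lms(x, d, n_taps=16)
```

The reweighting makes the shrinkage strongest for near-zero taps, so zero taps are pulled to zero while large taps suffer only a small bias, which is the mechanism behind the sparsity gain the abstract analyzes.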
1401.3567 | 2D Direction Of Arrival Estimation with Modified Propagator | cs.IT math.IT stat.AP | In this paper, a fast algorithm for the Direction Of Arrival (DOA) estimation
of radiating sources, based on a partial covariance matrix and requiring no
eigendecomposition of the incoming signals, is extended to the two-dimensional
problem of joint azimuth and elevation angle estimation using a Uniform
Circular Array (UCA) in the case of non-coherent narrowband signals.
Simulation results are presented with
both Additive White Gaussian Noise (AWGN) and real symmetric Toeplitz noise.
|
1401.3569 | Efficient Strategies for Single/Multi-Target Jamming on MIMO Gaussian
Channels | cs.IT math.IT | The problem of jamming on multiple-input multiple-output (MIMO) Gaussian
channels is investigated in this paper. In the case of a single target
legitimate signal, we show that the existing result based on the simplification
of the system model by neglecting the jamming channel leads to losing important
insights regarding the effect of jamming power and jamming channel on the
jamming strategy. We find a closed-form optimal solution for the problem under
a positive semi-definite (PSD) condition without considering simplifications in
the model. If the condition is not satisfied, so that the optimal solution may
not exist in closed form, we find the optimal solution using a numerical method and
also propose a suboptimal solution in closed-form as a close approximation of
the optimal solution. Then, the possibility of extending the results to solve
the problem of multi-target jamming is investigated for four scenarios, i.e.,
multiple access channel, broadcasting channel, multiple transceiver pairs with
orthogonal transmissions, and multiple transceiver pairs with interference,
respectively. It is shown that the proposed numerical method can be extended to
all scenarios while the proposed closed-form solutions for jamming may be
applied in the scenarios of the multiple access channel and multiple
transceiver pairs with orthogonal transmissions. Simulation results verify the
effectiveness of the proposed solutions.
|
1401.3579 | A Supervised Goal Directed Algorithm in Economical Choice Behaviour: An
Actor-Critic Approach | cs.GT cs.AI cs.LG | This paper aims to find an algorithmic structure that can predict and
explain economical choice behaviour, particularly under uncertainty (random
policies), by adapting the prevalent Actor-Critic learning method to the
requirements that have arisen since the field of neuroeconomics emerged. While
reviewing some basics of neuroeconomics relevant to our discussion, we outline
some of the important work done so far to simulate choice-making processes.
Motivated by neurological findings that suggest the existence of two specific
functions executed through the Basal Ganglia up to the subcortical areas,
namely 'rewards' and 'beliefs', we offer a modified version of the actor-critic
algorithm to shed light on the relation between these functions and, most
importantly, to address what is referred to as a challenge for actor-critic
algorithms: the lack of inheritance or hierarchy, which prevents the system
from evolving in continuous-time tasks where convergence may not emerge.
|
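The actor-critic separation of 'beliefs' (critic) and 'rewards' (actor) can be illustrated with a toy one-state sketch. The two-armed task, learning rates, and update forms below are our own illustrative assumptions, not the paper's modified algorithm.

```python
import numpy as np

def softmax(p):
    z = np.exp(p - p.max())
    return z / z.sum()

def actor_critic_bandit(reward_means, episodes=3000, alpha=0.1, beta=0.05, seed=0):
    # Critic: running estimate v of the expected reward (a 'belief').
    # Actor: action preferences nudged by the TD error (the 'reward' signal).
    rng = np.random.default_rng(seed)
    prefs = np.zeros(len(reward_means))
    v = 0.0
    for _ in range(episodes):
        pi = softmax(prefs)
        a = rng.choice(len(pi), p=pi)
        r = reward_means[a] + rng.normal(scale=0.1)
        delta = r - v              # TD error (single-state case)
        v += alpha * delta         # critic update
        prefs[a] += beta * delta   # actor update
    return softmax(prefs)

pi = actor_critic_bandit([0.2, 1.0])
```

The policy concentrates on the higher-reward action as the critic's estimate stabilizes, which is the basic interaction between the two functions that the paper builds on.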
1401.3580 | Bits Through Bufferless Queues | cs.IT math.IT | This paper investigates the capacity of a channel in which information is
conveyed by the timing of consecutive packets passing through a queue with
independent and identically distributed service times. Such timing channels are
commonly studied under the assumption of a work-conserving queue. In contrast,
this paper studies the case of a bufferless queue that drops arriving packets
while a packet is in service. Under this bufferless model, the paper provides
upper bounds on the capacity of timing channels and establishes achievable
rates for the case of bufferless M/M/1 and M/G/1 queues. In particular, it is
shown that a bufferless M/M/1 queue at worst suffers less than 10% reduction in
capacity when compared to an M/M/1 work-conserving queue.
|
1401.3582 | The equivalent identities of the MacWilliams identity for linear codes | cs.IT math.IT | We use derivatives to prove the equivalences between the MacWilliams identity
and its four equivalent forms, and present new interpretations of these forms.
Our results make explicit the relationships between the MacWilliams identity
and its four equivalent forms.
|
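The MacWilliams identity itself (though not the four equivalent forms the paper studies) can be checked numerically on a toy code. The Krawtchouk form of the transform is standard; the [3,1] repetition code is our own example.

```python
from itertools import product
from math import comb

def weight_dist(codewords, n):
    # A[w] = number of codewords of Hamming weight w.
    A = [0] * (n + 1)
    for c in codewords:
        A[sum(c)] += 1
    return A

def macwilliams_transform(A, n):
    # B_w = (1/|C|) * sum_i A_i * K_w(i), with Krawtchouk polynomial
    # K_w(i) = sum_j (-1)^j C(i, j) C(n - i, w - j).
    size = sum(A)
    return [sum(A[i] * sum((-1) ** j * comb(i, j) * comb(n - i, w - j)
                           for j in range(w + 1))
                for i in range(n + 1)) // size
            for w in range(n + 1)]

n = 3
C = [(0, 0, 0), (1, 1, 1)]                 # [3,1] binary repetition code
C_dual = [c for c in product([0, 1], repeat=n)
          if all(sum(a * b for a, b in zip(c, d)) % 2 == 0 for d in C)]
```

Applying the transform to the weight distribution of C reproduces the weight distribution of its dual (the even-weight code), as the identity requires.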
1401.3584 | Experiments of Distance Measurements in a Foliage Plant Retrieval System | cs.CV | One of the important components in an image retrieval system is selecting a
distance measure to rank the similarity between two objects. In this paper,
several distance measures were investigated for implementing a foliage plant
retrieval system. Sixty kinds of foliage plants with various leaf colors and
shapes were used to test the performance of 7 different distance measures: city
block distance, Euclidean distance, Canberra distance, Bray-Curtis distance,
χ² statistics, Jensen-Shannon divergence, and Kullback-Leibler divergence. The
results show that the city block and Euclidean distance measures gave the best
performance among the seven.
|
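Several of the distance measures listed above are easy to state in code. The definitions below follow common conventions (variants exist; the χ² statistic, for instance, is sometimes halved), and the sample vectors are our own.

```python
import numpy as np

def city_block(a, b):
    return np.sum(np.abs(a - b))

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def canberra(a, b):
    denom = np.abs(a) + np.abs(b)
    m = denom > 0                       # skip 0/0 terms by convention
    return np.sum(np.abs(a - b)[m] / denom[m])

def bray_curtis(a, b):
    return np.sum(np.abs(a - b)) / np.sum(np.abs(a + b))

def chi2_stat(a, b):
    s = a + b
    m = s > 0
    return np.sum((a - b)[m] ** 2 / s[m])

a = np.array([0.2, 0.5, 0.3])           # e.g. normalized feature histograms
b = np.array([0.1, 0.6, 0.3])
```

All five return 0 for identical vectors and grow with dissimilarity, so retrieval ranks candidates by ascending distance to the query.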
1401.3590 | An Enhanced Method For Evaluating Automatic Video Summaries | cs.CV cs.IR | Evaluation of automatic video summaries is a challenging problem. In the past
years, some evaluation methods were presented that utilize only a single
feature, such as color, to detect similarity between automatic video summaries and
ground-truth user summaries. One of the drawbacks of using a single feature is
that sometimes it gives a false similarity detection which makes the assessment
of the quality of the generated video summary less perceptual and not accurate.
In this paper, a novel method for evaluating automatic video summaries is
presented. This method is based on comparing automatic video summaries
generated by video summarization techniques with ground-truth user summaries.
The objective of this evaluation method is to quantify the quality of video
summaries, and allow comparing different video summarization techniques
utilizing both color and texture features of the video frames and using the
Bhattacharyya distance as a dissimilarity measure due to its advantages. Our
experiments show that the proposed evaluation method overcomes the drawbacks of
other methods and gives a more perceptual evaluation of the quality of the
automatic video summaries.
|
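A minimal sketch of the Bhattacharyya distance between two normalized histograms, as might be extracted from video frames. Note that some works use sqrt(1 - BC) rather than -ln BC, so the exact variant used in the paper may differ.

```python
import numpy as np

def bhattacharyya_distance(p, q, floor=1e-12):
    # BC = sum_i sqrt(p_i * q_i) equals 1 for identical histograms and
    # decreases as they diverge; the distance is -ln(BC).
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))
    return -np.log(max(bc, floor))

h_same = bhattacharyya_distance([4, 6, 10], [4, 6, 10])
h_diff = bhattacharyya_distance([4, 6, 10], [10, 6, 4])
```

Identical histograms give distance 0, and reordered mass gives a strictly positive distance, which is the property the evaluation method relies on when comparing summary frames to ground-truth frames.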
1401.3592 | Intelligent Systems for Information Security | cs.NE cs.CR | This thesis aims to use intelligent systems to extend and improve performance
and security of cryptographic techniques. A genetic-algorithm framework for
the cryptanalysis problem is addressed. A novel extension to differential
cryptanalysis using a genetic algorithm is proposed, and a fitness measure based
on the differential characteristics of the cipher being attacked is also
proposed. The complexity of the proposed attack is shown to be less than a
quarter of that of normal differential cryptanalysis of the same cipher by applying the
proposed attack to both the basic Substitution Permutation Network and the
Feistel Network. The basic models of modern block ciphers are attacked instead
of actual ciphers to prove that the attack is applicable to other ciphers
vulnerable to differential cryptanalysis. A new attack on block ciphers, based
on the ability of neural networks to approximate mappings, is
proposed. A complete problem formulation is explained and implementation of the
attack on some hypothetical Feistel cipher not vulnerable to differential or
linear attacks is presented. A new block cipher based on neural networks is
proposed. A complete cipher structure is given, and a key schedule is also
shown. The main property of neural networks, their ability to perform mappings
between large-dimensional domains very fast and with very little memory compared
to S-Boxes, is used as a basis for the cipher.
|
1401.3607 | A Brief History of Learning Classifier Systems: From CS-1 to XCS | cs.NE cs.LG | Modern Learning Classifier Systems can be characterized by their use of rule
accuracy as the utility metric for the search algorithm(s) discovering useful
rules. Such searching typically takes place within the restricted space of
co-active rules for efficiency. This paper gives an historical overview of the
evolution of such systems up to XCS, and then some of the subsequent
developments of XCS to different types of learning.
|
1401.3613 | Turing Minimalism and the Emergence of Complexity | cs.CC cs.IT math.IT | Not only did Turing help found one of the most exciting areas of modern
science (computer science), but it may be that his contribution to our
understanding of our physical reality is greater than we had hitherto supposed.
Here I explore the path that Alan Turing would have certainly liked to follow,
that of complexity science, which was launched in the wake of his seminal work
on computability and structure formation. In particular, I will explain how the
theory of algorithmic probability based on Turing's universal machine can also
explain how structure emerges at the most basic level, hence reconnecting two
of Turing's most cherished topics: computation and pattern formation.
|
1401.3615 | Performance Engineering for a Medical Imaging Application on the Intel
Xeon Phi Accelerator | cs.DC cs.CV cs.PF | We examine the Xeon Phi, which is based on Intel's Many Integrated Cores
architecture, for its suitability to run the FDK algorithm--the most commonly
used method for 3D image reconstruction in cone-beam computed
tomography. We study the challenges of efficiently parallelizing the
application and means to enable sensible data sharing between threads despite
the lack of a shared last level cache. Apart from parallelization, SIMD
vectorization is critical for good performance on the Xeon Phi; we perform
various micro-benchmarks to investigate the platform's new set of vector
instructions and put a special emphasis on the newly introduced vector gather
capability. We refine a previous performance model for the application and
adapt it for the Xeon Phi to validate the performance of our optimized
hand-written assembly implementation, as well as the performance of several
different auto-vectorization approaches.
|
1401.3617 | Power Allocation in MIMO Wiretap Channel with Statistical CSI and
Finite-Alphabet Input | cs.IT math.IT | In this paper, we consider the problem of power allocation in MIMO wiretap
channel for secrecy in the presence of multiple eavesdroppers. Perfect
knowledge of the destination channel state information (CSI) and only the
statistical knowledge of the eavesdroppers' CSI are assumed. We first consider
the MIMO wiretap channel with Gaussian input. Using Jensen's inequality, we
transform the secrecy rate max-min optimization problem to a single
maximization problem. We use generalized singular value decomposition and
transform the problem to a concave maximization problem which maximizes the sum
secrecy rate of scalar wiretap channels subject to linear constraints on the
transmit covariance matrix. We then consider the MIMO wiretap channel with
finite-alphabet input. We show that the transmit covariance matrix obtained for
the case of Gaussian input, when used in the MIMO wiretap channel with
finite-alphabet input, can lead to zero secrecy rate at high transmit powers.
We then propose a power allocation scheme with an additional power constraint
which alleviates this secrecy rate loss problem, and gives non-zero secrecy
rates at high transmit powers.
|
1401.3626 | Modeling Concept Combinations in a Quantum-theoretic Framework | cs.AI quant-ph | We present a model for conceptual combinations that uses the mathematical
formalism of quantum theory. Our model faithfully describes a large amount of
experimental data collected by different scholars on concept conjunctions and
disjunctions. Furthermore, our approach sheds a new light on long standing
drawbacks connected with vagueness, or fuzziness, of concepts, and puts forward
a completely novel possible solution to the 'combination problem' in concept
theory. Additionally, we introduce an explanation for the occurrence of quantum
structures in the mechanisms and dynamics of concepts and, more generally, in
cognitive and decision processes, according to which human thought is a well
structured superposition of a 'logical thought' and a 'conceptual thought', and
the latter usually prevails over the former, at variance with some widespread
beliefs.
|
1401.3632 | Bayesian Conditional Density Filtering | stat.ML cs.LG stat.CO | We propose a Conditional Density Filtering (C-DF) algorithm for efficient
online Bayesian inference. C-DF adapts MCMC sampling to the online setting,
sampling from approximations to conditional posterior distributions obtained by
propagating surrogate conditional sufficient statistics (a function of data and
parameter estimates) as new data arrive. These quantities eliminate the need to
store or process the entire dataset simultaneously and offer a number of
desirable features. Often, these include a reduction in memory requirements and
runtime and improved mixing, along with state-of-the-art parameter inference
and prediction. These improvements are demonstrated through several
illustrative examples including an application to high dimensional compressed
regression. Finally, we show that C-DF samples converge to the target posterior
distribution asymptotically as sampling proceeds and more data arrives.
|
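The core idea of propagating sufficient statistics instead of storing the data can be illustrated with the simplest conjugate case, a Gaussian mean with known variance. This toy sketch is our own illustration of the flavor of the approach, not the C-DF algorithm itself, which propagates surrogate conditional sufficient statistics within MCMC.

```python
import numpy as np

def online_gaussian_mean(stream, sigma2=1.0, mu0=0.0, tau2=100.0):
    # Only (n, sum_x) are stored -- the sufficient statistics -- so the
    # posterior can be updated as data arrive without keeping the dataset.
    n, sum_x = 0, 0.0
    for x in stream:
        n += 1
        sum_x += x
    prec = 1.0 / tau2 + n / sigma2            # posterior precision
    mean = (mu0 / tau2 + sum_x / sigma2) / prec
    return mean, 1.0 / prec                   # posterior mean and variance

rng = np.random.default_rng(2)
post_mean, post_var = online_gaussian_mean(rng.normal(loc=3.0, size=5000))
```

Memory stays constant in the stream length, which is the property C-DF generalizes to conditional posteriors where exact conjugate updates are unavailable.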
1401.3659 | Multipath Private Communication: An Information Theoretic Approach | cs.CR cs.IT math.IT | Sending private messages over communication environments under surveillance
is an important challenge in communication security and has attracted the
attention of cryptographers over time. We believe that resources other than
cryptographic keys can be used for communication privacy. We consider private
message transmission (PMT) in an abstract multipath communication model between
two communicants, Alice and Bob, in the presence of an eavesdropper, Eve. Alice
and Bob have pre-shared keys and Eve is computationally unbounded. There are a
total of $n$ paths and the three parties can have simultaneous access to at
most $t_a$, $t_b$, and $t_e$ paths. The parties can switch their paths after
every $\lambda$ bits of communication. We study perfect (P)-PMT versus
asymptotically-perfect (AP)-PMT protocols. The former has zero tolerance of
transmission error and leakage, whereas the latter allows for positive error
and leakage that tend to zero as the message length increases. We derive the
necessary and sufficient conditions under which P-PMT and AP-PMT are possible.
We also introduce explicit P-PMT and AP-PMT constructions. Our results show
AP-PMT protocols attain much higher information rates than P-PMT ones.
Interestingly, AP-PMT is possible even in the poorest condition, where $t_a=t_b=1$
and $t_e=n-1$. It remains however an open question whether the derived rates
can be improved by more sophisticated AP-PMT protocols.
We study applications of our results to private communication over the
real-life scenarios of multiple-frequency links and multiple-route networks. We
show practical examples of such scenarios that can be abstracted by the
multipath setting: Our results prove the possibility of keyless
information-theoretic private message transmission at rates $17\%$ and $20\%$
for the two example scenarios, respectively. We discuss open problems and
future work at the end.
|
1401.3660 | The Throughput of Slotted Aloha with Diversity | cs.NI cs.IT math.IT | In this paper, a simple variation of classical Slotted Aloha is introduced
and analyzed. The enhancement relies on adding multiple receivers that gather
different observations of the packets transmitted by a user population in one
slot. For each observation, the packets transmitted in one slot are assumed to
be subject to independent on-off fading, so that each of them is either
completely faded, and then does not bring any power or interference at the
receiver, or it arrives unfaded, and then may or may not, collide with other
unfaded transmissions. With this model, a novel type of diversity is introduced
to the conventional SA scheme, leading to relevant throughput gains already for
a moderate number of receivers. The analytical framework that we introduce
allows us to derive closed-form expressions for both the throughput and the
packet loss rate for an arbitrary number of receivers, providing interesting
hints on the key
trade-offs that characterize the system. We then focus on the problem of having
receivers forward the full set of collected packets to a final gateway using
the minimum possible amount of resources, i.e., avoiding delivery of duplicate
packets, without allowing any exchange of information among them. We derive
the minimum amount of resources needed and propose a scheme based on
random linear network coding that achieves asymptotically this bound without
the need for the receivers to coordinate among them.
|
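The on-off fading model above can be simulated directly. The Monte Carlo sketch below is our own; the decoding rule (a packet is recovered when it is the only unfaded one at some receiver) follows the model as described, while the function name and parameter values are illustrative.

```python
import numpy as np

def sa_diversity_throughput(G, K, eps, slots=20000, seed=0):
    # Poisson(G) packets per slot; each packet is independently faded
    # (erased) at each of the K receivers with probability eps.  A packet
    # is decoded if, at some receiver, it is the only unfaded packet.
    rng = np.random.default_rng(seed)
    decoded = 0
    for _ in range(slots):
        m = rng.poisson(G)
        if m == 0:
            continue
        on = rng.random((m, K)) > eps            # unfaded indicator per receiver
        singleton = on & (on.sum(axis=0) == 1)   # alone at that receiver
        decoded += int(np.any(singleton, axis=1).sum())
    return decoded / slots

t_single = sa_diversity_throughput(G=1.0, K=1, eps=0.5)
t_multi = sa_diversity_throughput(G=1.0, K=4, eps=0.5)
```

For K=1 and eps=0 this recovers the classical slotted-Aloha throughput G e^{-G} (about 0.368 at G=1), and adding receivers under fading yields the diversity gain the paper quantifies in closed form.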
1401.3667 | Group Testing with Prior Statistics | cs.IT math.IT | We consider a new group testing model wherein each item is a binary random
variable defined by an a priori probability of being defective. We assume that
each probability is small and that items are independent, but not necessarily
identically distributed. The goal of group testing algorithms is to identify
with high probability the subset of defectives via non-linear (disjunctive)
binary measurements. Our main contributions are two classes of algorithms: (1)
adaptive algorithms with tests based either on a maximum entropy principle, or
on a Shannon-Fano/Huffman code; (2) non-adaptive algorithms. Under loose
assumptions and with high probability, our algorithms only need a number of
measurements that is close to the information-theoretic lower bound, up to an
explicitly-calculated universal constant factor. We provide simulations to
support our results.
|
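The disjunctive-measurement setting can be illustrated with the classical adaptive binary-splitting strategy (not the paper's entropy- or Huffman-based algorithms, which additionally exploit the per-item prior probabilities). The example and numbers below are our own.

```python
def group_test(items, is_defective):
    # One pooled (OR / disjunctive) test per group; positive groups are
    # split in half until single items remain.
    tests = [0]
    def pooled(group):
        tests[0] += 1
        return any(is_defective[i] for i in group)
    def search(group):
        if not pooled(group):
            return []
        if len(group) == 1:
            return list(group)
        mid = len(group) // 2
        return search(group[:mid]) + search(group[mid:])
    return search(items), tests[0]

status = {i: i == 3 for i in range(16)}      # item 3 is the only defective
found, n_tests = group_test(list(range(16)), status)
```

Nine pooled tests locate the defective here, versus 16 individual tests; prior-aware designs like those in the paper aim to push the test count down toward the information-theoretic lower bound.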
1401.3669 | Hrebs and Cohesion Chains as similar tools for semantic text properties
research | cs.CL | In this study it is proven that the Hrebs used in Denotation analysis of
texts and Cohesion Chains (defined as a fusion between Lexical Chains and
Coreference Chains) represent similar linguistic tools. This result gives us
the possibility of extending to Cohesion Chains (CCs) some important indicators
such as, for example, the Kernel of CCs, the topicality of a CC, text concentration,
CC-diffuseness and mean diffuseness of the text. Let us mention that, until
now, these kinds of indicators have not been introduced or used anywhere in the
Lexical Chains or Coreference Chains literature. Similarly, some applications
of CCs in the study of a text (for example, segmentation or summarization)
could be realized starting from Hrebs. As an illustration of the similarity
between Hrebs and CCs, a detailed analysis of the poem "Lacul" by Mihai
Eminescu is given.
|
1401.3674 | Wireless Video Multicast with Cooperative and Incremental Transmission
of Parity Packets | cs.MM cs.IT cs.NI math.IT | In this paper, a cooperative multicast scheme that uses Randomized
Distributed Space Time Codes (R-DSTC), along with packet level Forward Error
Correction (FEC), is studied. Instead of sending source packets and/or parity
packets through two hops using R-DSTC as proposed in our prior work, the new
scheme delivers both source packets and parity packets using only one hop.
After the source station (access point, AP) first sends all the source packets,
the AP as well as all nodes that have received all source packets together send
the parity packets using R-DSTC. As more parity packets are transmitted, more
nodes can recover all source packets and join the parity packet transmission.
The process continues until all nodes acknowledge the receipt of enough packets
for recovering the source packets. For each given node distribution, the
optimum transmission rates for source and parity packets are determined such
that the video rate that can be sustained at all nodes is maximized. This new
scheme can support significantly higher video rates, and correspondingly higher
PSNR of decoded video, than the prior approaches. Three suboptimal approaches,
which do not require full information about user distribution or the feedback,
and hence are more feasible in practice, are also presented. The proposed
suboptimal scheme with only the node count information and without feedback
still outperforms our prior approach that assumes full channel information and
no feedback.
|
1401.3677 | The Ginibre Point Process as a Model for Wireless Networks with
Repulsion | cs.IT cs.NI math.IT math.PR | The spatial structure of transmitters in wireless networks plays a key role
in evaluating the mutual interference and hence the performance. Although the
Poisson point process (PPP) has been widely used to model the spatial
configuration of wireless networks, it is not suitable for networks with
repulsion. The Ginibre point process (GPP) is one of the main examples of
determinantal point processes that can be used to model random phenomena where
repulsion is observed. Considering the accuracy, tractability and
practicability tradeoffs, we introduce and promote the $\beta$-GPP, an
intermediate class between the PPP and the GPP, as a model for wireless
networks when the nodes exhibit repulsion. To show that the model leads to
analytically tractable results in several cases of interest, we derive the mean
and variance of the interference using two different approaches: the Palm
measure approach and the reduced second moment approach, and then provide
approximations of the interference distribution by three known probability
density functions. Moreover, to show that the model is relevant for cellular
systems, we derive the coverage probability of the typical user and also find
that the fitted $\beta$-GPP can closely model the deployment of actual base
stations in terms of the coverage probability and other statistics.
|
1401.3682 | Broadcast Classical-Quantum Capacity Region of Two-Phase Bidirectional
Relaying Channel | cs.IT math.IT math.QA quant-ph | We study a three-node quantum network which enables bidirectional
communication between two nodes with a half-duplex relay node. A
decode-and-forward protocol is used to perform the communication in two phases.
In the first phase, the messages of two nodes are transmitted to the relay
node. In the second phase, the relay node broadcasts a re-encoded composition
to the two nodes. We determine the capacity region of the broadcast phase.
|
1401.3690 | FindStat - the combinatorial statistics database | math.CO cs.DB | The FindStat project at www.FindStat.org provides an online platform for
mathematicians, particularly for combinatorialists, to gather information about
combinatorial statistics and their relations. This outline provides an overview
of the project.
|
1401.3700 | Convex Relaxations of SE(2) and SE(3) for Visual Pose Estimation | cs.CV | This paper proposes a new method for rigid body pose estimation based on
spectrahedral representations of the tautological orbitopes of $SE(2)$ and
$SE(3)$. The approach can use dense point cloud data from stereo vision or an
RGB-D sensor (such as the Microsoft Kinect), as well as visual appearance data.
The method is a convex relaxation of the classical pose estimation problem, and
is based on explicit linear matrix inequality (LMI) representations for the
convex hulls of $SE(2)$ and $SE(3)$. Given these representations, the relaxed
pose estimation problem can be framed as a robust least squares problem with
the optimization variable constrained to these convex sets. Although this
formulation is a relaxation of the original problem, numerical experiments
indicate that it is indeed exact - i.e. its solution is a member of $SE(2)$ or
$SE(3)$ - in many interesting settings. We additionally show that this method
is guaranteed to be exact for a large class of pose estimation problems.
|
1401.3717 | Physical Realizability and Mean Square Performance of Translation
Invariant Networks of Interacting Linear Quantum Stochastic Systems | cs.SY math.PR quant-ph | This paper is concerned with translation invariant networks of linear quantum
stochastic systems with nearest neighbour interaction mediated by boson fields.
The systems are associated with sites of a one-dimensional chain or a
multidimensional lattice and are governed by coupled linear quantum stochastic
differential equations (QSDEs). Such interconnections of open quantum systems
are relevant, for example, to the phonon theory of crystalline solids, atom
trapping in optical lattices and quantum metamaterials. In order to represent a
large-scale open quantum harmonic oscillator, the coefficients of the coupled
QSDEs must satisfy certain physical realizability conditions. These are
established in the form of matrix algebraic equations for the parameters of an
individual building block of the network and its interaction with the
neighbours and external fields. We also discuss the computation of mean square
performance functionals with block Toeplitz weighting matrices for such systems
in the thermodynamic limit per site for unboundedly increasing fragments of the
lattice.
|
1401.3737 | Coordinate Descent with Online Adaptation of Coordinate Frequencies | stat.ML cs.LG | Coordinate descent (CD) algorithms have become the method of choice for
solving a number of optimization problems in machine learning. They are
particularly popular for training linear models, including linear support
vector machine classification, LASSO regression, and logistic regression.
We consider general CD with non-uniform selection of coordinates. Instead of
fixing selection frequencies beforehand we propose an online adaptation
mechanism for this important parameter, called the adaptive coordinate
frequencies (ACF) method. This mechanism removes the need to estimate optimal
coordinate frequencies beforehand, and it automatically reacts to changing
requirements during an optimization run.
We demonstrate the usefulness of our ACF-CD approach for a variety of
optimization problems arising in machine learning contexts. Our algorithm
offers significant speed-ups over state-of-the-art training methods.
|
1401.3753 | LLR-based Successive Cancellation List Decoding of Polar Codes | cs.IT math.IT | We show that successive cancellation list decoding can be formulated
exclusively using log-likelihood ratios. In addition to numerical stability,
the log-likelihood ratio based formulation has useful properties which simplify
the sorting step involved in successive cancellation list decoding. We propose
a hardware architecture of the successive cancellation list decoder in the
log-likelihood ratio domain which, compared to a log-likelihood domain
implementation, requires less irregular and smaller memories. This
simplification, together with the gains in the metric sorter, leads to $56\%$ to
$137\%$ higher throughput per unit area than other recently proposed
architectures. We then evaluate the empirical performance of the CRC-aided
successive cancellation list decoder at different list sizes using different
CRCs and conclude that it is important to adapt the CRC length to the list size
in order to achieve the best error-rate performance of concatenated polar
codes. Finally, we synthesize conventional successive cancellation decoders at
large block-lengths with the same block-error probability as our proposed
CRC-aided successive cancellation list decoders to demonstrate that, while our
decoders have slightly lower throughput and larger area, they have a
significantly smaller decoding latency.
|
1401.3760 | Large Alphabet Compression and Predictive Distributions through
Poissonization and Tilting | cs.IT math.IT stat.ME | This paper introduces a convenient strategy for coding and predicting
sequences of independent, identically distributed random variables generated
from a large alphabet of size $m$. In particular, the size of the sample is
allowed to be variable. The employment of a Poisson model and tilting method
simplifies the implementation and analysis through independence. The resulting
strategy is optimal within the class of distributions satisfying a moment
condition, and is close to optimal for the class of all i.i.d. distributions on
strings of a given length. Moreover, the method can be used to code and predict
strings with a condition on the tail of the ordered counts. It can also be
applied to distributions in an envelope class.
|
1401.3781 | Random Number Conversion and LOCC Conversion via Restricted Storage | quant-ph cs.IT math.IT | We consider random number conversion (RNC) through random number storage with
restricted size. We clarify the relation between the performance of RNC and the
size of storage in the framework of first- and second-order asymptotics, and
derive their rate regions. Then, we show that the results for RNC with
restricted storage recover those for conventional RNC without storage in the
limit of storage size. To treat RNC via restricted storage, we introduce a new
kind of probability distributions named generalized Rayleigh-normal
distributions. Using the generalized Rayleigh-normal distributions, we can
describe the second-order asymptotic behaviour of RNC via restricted storage in
a unified manner. As an application to quantum information theory, we analyze
LOCC conversion via entanglement storage with restricted size. Moreover, we
derive the optimal LOCC compression rate under a constraint of conversion
accuracy.
|
1401.3785 | Adaptive Link Selection Strategies for Distributed Estimation in
Wireless Sensor Networks | cs.IT math.IT | In this work, we propose adaptive link selection strategies for distributed
estimation in diffusion-type wireless networks. We develop an exhaustive
search-based link selection algorithm and a sparsity-inspired link selection
algorithm that can exploit the topology of networks with poor-quality links. In
the exhaustive search-based algorithm, we choose the set of neighbors that
results in the smallest excess mean square error (EMSE) for a specific node. In
the sparsity-inspired link selection algorithm, a convex regularization term is
introduced into the selection of links. The proposed algorithms can be
incorporated into diffusion-type wireless networks and significantly improve
their performance. Simulation results illustrate that the
proposed algorithms have lower EMSE values, a better convergence rate and
significantly improve the network performance when compared with existing
methods.
|
1401.3801 | Finite-length Analysis on Tail probability for Markov Chain and
Application to Simple Hypothesis Testing | math.ST cs.IT math.IT math.PR stat.TH | Using the terminology of information geometry, we derive upper and lower bounds
on the tail probability of the sample mean. Employing these bounds, we obtain
upper and lower bounds on the minimum probability of the second kind of error
under an exponential constraint on the probability of the first kind of error
in a simple hypothesis test for a finite-length Markov chain, which yields a
Hoeffding-type bound. For these derivations, we derive upper and lower bounds
on the cumulant generating function for Markov chains. As a byproduct, we
obtain another simple proof of the central limit theorem for Markov chains.
|
1401.3807 | On the Existence of MDS Codes Over Small Fields With Constrained
Generator Matrices | cs.IT cs.DM math.IT | We study the existence over small fields of Maximum Distance Separable (MDS)
codes with generator matrices having specified supports (i.e. having specified
locations of zero entries). This problem unifies and simplifies the problems
posed in recent works of Yan and Sprintson (NetCod'13) on weakly secure
cooperative data exchange, of Halbawi et al. (arxiv'13) on distributed
Reed-Solomon codes for simple multiple access networks, and of Dau et al.
(ISIT'13) on MDS codes with balanced and sparse generator matrices. We
conjecture that there exist such $[n,k]_q$ MDS codes as long as $q \geq n + k -
1$, if the specified supports of the generator matrices satisfy the so-called
MDS condition, which can be verified in polynomial time. We propose a
combinatorial approach to tackle the conjecture, and prove that the conjecture
holds for a special case when the sets of zero coordinates of rows of the
generator matrix share with each other (pairwise) at most one common element.
Based on our numerical result, the conjecture is also verified for all $k \leq
7$. Our approach is based on a novel generalization of the well-known Hall's
marriage theorem, which allows (overlapping) multiple representatives instead
of a single representative for each subset.
|
1401.3809 | An Information-Spectrum Approach to Weak Variable-Length Source Coding
with Side-Information | cs.IT math.IT | This paper studies variable-length (VL) source coding of general sources with
side-information. Novel one-shot coding theorems for coding with common
side-information available at the encoder and the decoder and Slepian-Wolf
(SW) coding (i.e., with side-information only at the decoder) are given, and
then, are applied to asymptotic analyses of these coding problems. In particular,
a general formula for the infimum of the coding rate asymptotically achievable
by weak VL-SW coding (i.e., VL-SW coding with vanishing error probability) is
derived. Further, the general formula is applied to investigating weak VL-SW
coding of mixed sources. Our results recover and extend several known results on
SW coding and weak VL coding; e.g., the optimal achievable rate of VL-SW coding
for mixtures of i.i.d. sources is given for the countably infinite alphabet case
under a mild condition. In addition, the usefulness of the encoder
side-information is investigated. Our result shows that if the encoder
side-information is useless in weak VL coding then it is also useless even in
the case where the error probability may be positive asymptotically.
|
1401.3814 | Information Geometry Approach to Parameter Estimation in Markov Chains | math.ST cs.IT math.IT stat.TH | We consider the parameter estimation of a Markov chain when the unknown
transition matrix belongs to an exponential family of transition matrices.
Then, we show that the sample mean of the generator of the exponential family
is an asymptotically efficient estimator. Further, we also define a curved
exponential family of transition matrices. Using a transition matrix version of
the Pythagorean theorem, we give an asymptotically efficient estimator for a
curved exponential family.
|
1401.3815 | On Swarm Stability of Linear Time-Invariant Descriptor Compartmental
Networks | cs.SY | Swarm stability is studied for descriptor compartmental networks with a
linear time-invariant protocol. A compartmental network is a specific type of
dynamical multi-agent system. Necessary and sufficient conditions for both
consensus and critical swarm stability are presented, which require a joint
matching between the interactive dynamics of nearest-neighboring vertices and
the Laplacian spectrum of the overall network topology. Three numerical
examples are presented to verify the theoretical results.
|
1401.3818 | Structured Priors for Sparse-Representation-Based Hyperspectral Image
Classification | cs.CV cs.LG stat.ML | Pixel-wise classification, where each pixel is assigned to a predefined
class, is one of the most important procedures in hyperspectral image (HSI)
analysis. By representing a test pixel as a linear combination of a small
subset of labeled pixels, a sparse representation classifier (SRC) gives rather
plausible results compared with those of traditional classifiers such as the
support vector machine (SVM). Recently, by incorporating additional structured
sparsity priors, second-generation SRCs have appeared in the literature and
are reported to further improve the performance of HSI classification. These
priors are based
on exploiting the spatial dependencies between the neighboring pixels, the
inherent structure of the dictionary, or both. In this paper, we review and
compare several structured priors for sparse-representation-based HSI
classification. We also propose a new structured prior called the low rank
group prior, which can be considered as a modification of the low rank prior.
Furthermore, we investigate how different structured priors improve the
result for the HSI classification.
|
1401.3825 | Reasoning About the Transfer of Control | cs.AI cs.LO | We present DCL-PC: a logic for reasoning about how the abilities of agents
and coalitions of agents are altered by transferring control from one agent to
another. The logical foundation of DCL-PC is CL-PC, a logic for reasoning about
cooperation in which the abilities of agents and coalitions of agents stem from
a distribution of atomic Boolean variables to individual agents -- the choices
available to a coalition correspond to assignments to the variables the
coalition controls. The basic modal constructs of DCL-PC are of the form
coalition C can cooperate to bring about phi. DCL-PC extends CL-PC with dynamic
logic modalities in which atomic programs are of the form agent i gives control
of variable p to agent j; as usual in dynamic logic, these atomic programs may
be combined using sequence, iteration, choice, and test operators to form
complex programs. By combining such dynamic transfer programs with cooperation
modalities, it becomes possible to reason about how the power of agents and
coalitions is affected by the transfer of control. We give two alternative
semantics for the logic: a direct semantics, in which we capture the
distributions of Boolean variables to agents; and a more conventional Kripke
semantics. We prove that these semantics are equivalent, and then present an
axiomatization for the logic. We investigate the computational complexity of
model checking and satisfiability for DCL-PC, and show that both problems are
PSPACE-complete (and hence no worse than the underlying logic CL-PC). Finally,
we investigate the characterisation of control in DCL-PC. We distinguish
between first-order control -- the ability of an agent or coalition to control
some state of affairs through the assignment of values to the variables under
the control of the agent or coalition -- and second-order control -- the
ability of an agent to exert control over the control that other agents have by
transferring variables to other agents. We give a logical characterisation of
second-order control.
|
1401.3827 | Efficient Planning under Uncertainty with Macro-actions | cs.AI | Deciding how to act in partially observable environments remains an active
area of research. Identifying good sequences of decisions is particularly
challenging when good control performance requires planning multiple steps into
the future in domains with many states. Towards addressing this challenge, we
present an online, forward-search algorithm called the Posterior Belief
Distribution (PBD). PBD leverages a novel method for calculating the posterior
distribution over beliefs that result after a sequence of actions is taken,
given the set of observation sequences that could be received during this
process. This method allows us to efficiently evaluate the expected reward of a
sequence of primitive actions, which we refer to as macro-actions. We present a
formal analysis of our approach, and examine its performance on two very large
simulation experiments: scientific exploration and a target monitoring domain.
We also demonstrate our algorithm being used to control a real robotic
helicopter in a target monitoring experiment, which suggests that our approach
has practical potential for planning in real-world, large partially observable
domains where a multi-step lookahead is required to achieve good performance.
|
1401.3829 | RoxyBot-06: Stochastic Prediction and Optimization in TAC Travel | cs.GT cs.LG | In this paper, we describe our autonomous bidding agent, RoxyBot, who emerged
victorious in the travel division of the 2006 Trading Agent Competition in a
photo finish. At a high level, the design of many successful trading agents can
be summarized as follows: (i) price prediction: build a model of market prices;
and (ii) optimization: solve for an approximately optimal set of bids, given
this model. To predict, RoxyBot builds a stochastic model of market prices by
simulating simultaneous ascending auctions. To optimize, RoxyBot relies on the
sample average approximation method, a stochastic optimization technique.
|
1401.3830 | Interactive Cost Configuration Over Decision Diagrams | cs.AI | In many AI domains such as product configuration, a user should interactively
specify a solution that must satisfy a set of constraints. In such scenarios,
offline compilation of feasible solutions into a tractable representation is an
important approach to delivering efficient backtrack-free user interaction
online. In particular, binary decision diagrams (BDDs) have been successfully
used as a compilation target for product and service configuration. In this
paper we discuss how to extend BDD-based configuration to scenarios involving
cost functions which express user preferences.
We first show that an efficient, robust and easy to implement extension is
possible if the cost function is additive, and feasible solutions are
represented using multi-valued decision diagrams (MDDs). We also discuss the
effect on MDD size if the cost function is non-additive or if it is encoded
explicitly into the MDD. We then discuss interactive configuration in the presence
of multiple cost functions. We prove that even in its simplest form,
multiple-cost configuration is NP-hard in the input MDD. However, for solving
two-cost configuration we develop a pseudo-polynomial scheme and a fully
polynomial approximation scheme. The applicability of our approach is
demonstrated through experiments over real-world configuration models and
product-catalogue datasets. Response times are generally within a fraction of a
second even for very large instances.
|
1401.3831 | An Investigation into Mathematical Programming for Finite Horizon
Decentralized POMDPs | cs.AI | Decentralized planning in uncertain environments is a complex task generally
dealt with by using a decision-theoretic approach, mainly through the framework
of Decentralized Partially Observable Markov Decision Processes (DEC-POMDPs).
Although DEC-POMDPs are a general and powerful modeling tool, solving them is a
task with an overwhelming complexity that can be doubly exponential. In this
paper, we study an alternate formulation of DEC-POMDPs relying on a
sequence-form representation of policies. From this formulation, we show how to
derive Mixed Integer Linear Programming (MILP) problems that, once solved, give
exact optimal solutions to the DEC-POMDPs. We show that these MILPs can be
derived either by using some combinatorial characteristics of the optimal
solutions of the DEC-POMDPs or by using concepts borrowed from game theory.
Through an experimental validation on classical test problems from the
DEC-POMDP literature, we compare our approach to existing algorithms. Results
show that mathematical programming outperforms dynamic programming but is less
efficient than forward search, except for some particular problems. The main
contributions of this work are the use of mathematical programming for
DEC-POMDPs and a better understanding of DEC-POMDPs and of their solutions.
Besides, we argue that our alternate representation of DEC-POMDPs could be
helpful for designing novel algorithms looking for approximate solutions to
DEC-POMDPs.
|
1401.3832 | Constructing Reference Sets from Unstructured, Ungrammatical Text | cs.CL cs.IR | Vast amounts of text on the Web are unstructured and ungrammatical, such as
classified ads, auction listings, forum postings, etc. We call such text
"posts." Despite their inconsistent structure and lack of grammar, posts are
full of useful information. This paper presents work on semi-automatically
building tables of relational information, called "reference sets," by
analyzing such posts directly. Reference sets can be applied to a number of
tasks such as ontology maintenance and information extraction. Our
reference-set construction method starts with just a small amount of background
knowledge, and constructs tuples representing the entities in the posts to form
a reference set. We also describe an extension to this approach for the special
case where even this small amount of background knowledge is impossible to
discover and use. To evaluate the utility of the machine-constructed reference
sets, we compare them to manually constructed reference sets in the context of
reference-set-based information extraction. Our results show the reference sets
constructed by our method outperform manually constructed reference sets. We
also compare the reference-set-based extraction approach using the
machine-constructed reference set to supervised extraction approaches using
generic features. These results demonstrate that using machine-constructed
reference sets outperforms the supervised methods, even though the supervised
methods require training data.
|
1401.3833 | Active Tuples-based Scheme for Bounding Posterior Beliefs | cs.AI | The paper presents a scheme for computing lower and upper bounds on the
posterior marginals in Bayesian networks with discrete variables. Its power
lies in its ability to use any available scheme that bounds the probability of
evidence or posterior marginals and enhance its performance in an anytime
manner. The scheme uses the cutset conditioning principle to tighten existing
bounding schemes and to facilitate anytime behavior, utilizing a fixed number
of cutset tuples. The accuracy of the bounds improves as the number of used
cutset tuples increases and so does the computation time. We demonstrate
empirically the value of our scheme for bounding posterior marginals and
probability of evidence using a variant of the bound propagation algorithm as a
plug-in scheme.
|
1401.3835 | On Action Theory Change | cs.AI | As historically acknowledged in the Reasoning about Actions and Change
community, intuitiveness of a logical domain description cannot be fully
automated. Moreover, like any other logical theory, action theories may also
evolve, and thus knowledge engineers need revision methods to help in
accommodating new incoming information about the behavior of actions in an
adequate manner. The present work is about changing action domain descriptions
in multimodal logic. Its contribution is threefold: first we revisit the
semantics of action theory contraction proposed in previous work, giving more
robust operators that express minimal change based on a notion of distance
between Kripke-models. Second we give algorithms for syntactical action theory
contraction and establish their correctness with respect to our semantics for
those action theories that satisfy a principle of modularity investigated in
previous work. Since modularity can be ensured for every action theory and, as
we show here, needs to be computed at most once during the evolution of a
domain description, it does not represent a limitation at all to the method
studied here. Finally we state AGM-like postulates for action theory
contraction and assess the behavior of our operators with respect to them.
Moreover, we also address the revision counterpart of action theory change,
showing that it benefits from our semantics for contraction.
|
1401.3836 | An Active Learning Approach for Jointly Estimating Worker Performance
and Annotation Reliability with Crowdsourced Data | cs.LG cs.HC | Crowdsourcing platforms offer a practical solution to the problem of
affordably annotating large datasets for training supervised classifiers.
Unfortunately, poor worker performance frequently threatens to compromise
annotation reliability, and requesting multiple labels for every instance can
lead to large cost increases without guaranteeing good results. Minimizing the
required training samples using an active learning selection procedure reduces
the labeling requirement but can jeopardize classifier training by focusing on
erroneous annotations. This paper presents an active learning approach in which
worker performance, task difficulty, and annotation reliability are jointly
estimated and used to compute the risk function guiding the sample selection
procedure. We demonstrate that the proposed approach, which employs active
learning with Bayesian networks, significantly improves training accuracy and
correctly ranks the expertise of unknown labelers in the presence of annotation
noise.
|
1401.3838 | Change in Abstract Argumentation Frameworks: Adding an Argument | cs.AI | In this paper, we address the problem of change in an abstract argumentation
system. We focus on a particular change: the addition of a new argument which
interacts with previous arguments. We study the impact of such an addition on
the outcome of the argumentation system, more particularly on the set of its
extensions. Several properties for this change operation are defined by
comparing the new set of extensions to the initial one; these properties are
called structural when the comparisons are based on set-cardinality or
set-inclusion relations. Several other properties are proposed where
comparisons are based on the status of some particular arguments: the accepted
arguments; these properties refer to the evolution of this status during the
change, e.g., Monotony and Priority to Recency. All these properties may be
more or less desirable according to specific applications. They are studied
under two particular semantics: the grounded and preferred semantics.
|
1401.3839 | The LAMA Planner: Guiding Cost-Based Anytime Planning with Landmarks | cs.AI | LAMA is a classical planning system based on heuristic forward search. Its
core feature is the use of a pseudo-heuristic derived from landmarks,
propositional formulas that must be true in every solution of a planning task.
LAMA builds on the Fast Downward planning system, using finite-domain rather
than binary state variables and multi-heuristic search. The latter is employed
to combine the landmark heuristic with a variant of the well-known FF
heuristic. Both heuristics are cost-sensitive, focusing on high-quality
solutions in the case where actions have non-uniform cost. A weighted A* search
is used with iteratively decreasing weights, so that the planner continues to
search for plans of better quality until the search is terminated. LAMA showed
best performance among all planners in the sequential satisficing track of the
International Planning Competition 2008. In this paper we present the system in
detail and investigate which features of LAMA are crucial for its performance.
We present individual results for some of the domains used at the competition,
demonstrating good and bad cases for the techniques implemented in LAMA.
Overall, we find that using landmarks improves performance, whereas the
incorporation of action costs into the heuristic estimators proves not to be
beneficial. We show that in some domains a search that ignores cost solves far
more problems, raising the question of how to deal with action costs more
effectively in the future. The iterated weighted A* search greatly improves
results, and shows synergy effects with the use of landmarks.
|
1401.3840 | Grounding FO and FO(ID) with Bounds | cs.LO cs.AI | Grounding is the task of reducing a first-order theory and finite domain to
an equivalent propositional theory. It is used as preprocessing phase in many
logic-based reasoning systems. Such systems provide a rich first-order input
language to a user and can rely on efficient propositional solvers to perform
the actual reasoning. Besides a first-order theory and finite domain, the input
for grounders contains in many applications also additional data. By exploiting
this data, the size of the grounder's output can often be reduced significantly.
A common practice to improve the efficiency of a grounder in this context is by
manually adding semantically redundant information to the input theory,
indicating where and when the grounder should exploit the data. In this paper
we present a method to compute and add such redundant information
automatically. Our method therefore simplifies the task of writing input
theories that can be grounded efficiently by current systems. We first present
our method for classical first-order logic (FO) theories. Then we extend it to
FO(ID), the extension of FO with inductive definitions, which allows for more
concise and comprehensive input theories. We discuss implementation issues and
experimentally validate the practical applicability of our method.
|
1401.3841 | Narrative Planning: Balancing Plot and Character | cs.AI | Narrative, and in particular storytelling, is an important part of the human
experience. Consequently, computational systems that can reason about narrative
can be more effective communicators, entertainers, educators, and trainers. One
of the central challenges in computational narrative reasoning is narrative
generation, the automated creation of meaningful event sequences. There are
many factors -- logical and aesthetic -- that contribute to the success of a
narrative artifact. Central to this success is its understandability. We argue
that the following two attributes of narratives are universal: (a) the logical
causal progression of plot, and (b) character believability. Character
believability is the perception by the audience that the actions performed by
characters do not negatively impact the audience's suspension of disbelief.
Specifically, characters must be perceived by the audience to be intentional
agents. In this article, we explore the use of refinement search as a technique
for solving the narrative generation problem -- to find a sound and believable
sequence of character actions that transforms an initial world state into a
world state in which goal propositions hold. We describe a novel refinement
search planning algorithm -- the Intent-based Partial Order Causal Link (IPOCL)
planner -- that, in addition to creating causally sound plot progression,
reasons about character intentionality by identifying possible character goals
that explain their actions and creating plan structures that explain why those
characters commit to their goals. We present the results of an empirical
evaluation that demonstrates that narrative plans generated by the IPOCL
algorithm support audience comprehension of character intentions better than
plans generated by conventional partial-order planners.
|
1401.3842 | Developing Approaches for Solving a Telecommunications Feature
Subscription Problem | cs.AI | Call control features (e.g., call-divert, voice-mail) are primitive options
to which users can subscribe off-line to personalise their service. The
configuration of a feature subscription involves choosing and sequencing
features from a catalogue and is subject to constraints that prevent
undesirable feature interactions at run-time. When the subscription requested
by a user is inconsistent, one problem is to find an optimal relaxation, which
is a generalisation of the feedback vertex set problem on directed graphs, and
thus it is an NP-hard task. We present several constraint programming
formulations of the problem. We also present formulations using partial
weighted maximum Boolean satisfiability and mixed integer linear programming.
We study all these formulations by experimentally comparing them on a variety
of randomly generated instances of the feature subscription problem.
|
1401.3843 | Theta*: Any-Angle Path Planning on Grids | cs.CG cs.AI | Grids with blocked and unblocked cells are often used to represent terrain in
robotics and video games. However, paths formed by grid edges can be longer
than true shortest paths in the terrain since their headings are artificially
constrained. We present two new correct and complete any-angle path-planning
algorithms that avoid this shortcoming. Basic Theta* and Angle-Propagation
Theta* are both variants of A* that propagate information along grid edges
without constraining paths to grid edges. Basic Theta* is simple to understand
and implement, fast and finds short paths. However, it is not guaranteed to
find true shortest paths. Angle-Propagation Theta* achieves a better worst-case
complexity per vertex expansion than Basic Theta* by propagating angle ranges
when it expands vertices, but is more complex, not as fast and finds slightly
longer paths. We refer to Basic Theta* and Angle-Propagation Theta*
collectively as Theta*. Theta* has unique properties, which we analyze in
detail. We show experimentally that it finds shorter paths than both A* with
post-smoothed paths and Field D* (the only other version of A* we know of that
propagates information along grid edges without constraining paths to grid
edges) with a runtime comparable to that of A* on grids. Finally, we extend
Theta* to grids that contain unblocked cells with non-uniform traversal costs
and introduce variants of Theta* which provide different tradeoffs between path
length and runtime.
|
1401.3844 | Multiattribute Auctions Based on Generalized Additive Independence | cs.GT cs.AI | We develop multiattribute auctions that accommodate generalized additive
independent (GAI) preferences. We propose an iterative auction mechanism that
maintains prices on potentially overlapping GAI clusters of attributes, thus
decreasing the elicitation and computational burden, and creating an open
competition
among suppliers over a multidimensional domain. Most significantly, the auction
is guaranteed to achieve surplus which approximates optimal welfare up to a
small additive factor, under reasonable equilibrium strategies of traders. The
main departure of GAI auctions from previous literature is to accommodate
non-additive trader preferences, hence allowing traders to condition their
evaluation of specific attributes on the value of other attributes. At the same
time, the GAI structure supports a compact representation of prices, enabling a
tractable auction process. We perform a simulation study, demonstrating and
quantifying the significant efficiency advantage of more expressive preference
modeling. We draw random GAI-structured utility functions with various internal
structures, generate additive functions that approximate the GAI utility, and
compare the performance of the auctions using the two representations. We find
that allowing traders to express existing dependencies among attributes
improves the economic efficiency of multiattribute auctions.
|
1401.3845 | Resource-Driven Mission-Phasing Techniques for Constrained Agents in
Stochastic Environments | cs.MA cs.AI | Because an agent's resources dictate what actions it can possibly take, it
should plan which resources it holds over time carefully, considering its
inherent limitations (such as power or payload restrictions), the competing
needs of other agents for the same resources, and the stochastic nature of the
environment. Such agents can, in general, achieve more of their objectives if
they can use --- and even create --- opportunities to change which resources
they hold at various times. Driven by resource constraints, the agents could
break their overall missions into an optimal series of phases, optimally
reconfiguring their resources at each phase, and optimally using their assigned
resources in each phase, given their knowledge of the stochastic environment.
In this paper, we formally define and analyze this constrained, sequential
optimization problem in both the single-agent and multi-agent contexts. We
present a family of mixed integer linear programming (MILP) formulations of
this problem that can optimally create phases (when phases are not predefined)
accounting for costs and limitations in phase creation. Because our
formulations simultaneously also find the optimal allocations of resources at
each phase and the optimal policies for using the allocated resources at each
phase, they exploit structure across these coupled problems. This allows them
to find solutions significantly faster (orders of magnitude faster in larger
problems) than alternative solution techniques, as we demonstrate empirically.
|
1401.3846 | Fast Set Bounds Propagation Using a BDD-SAT Hybrid | cs.AI | Binary Decision Diagram (BDD) based set bounds propagation is a powerful
approach to solving set-constraint satisfaction problems. However, prior BDD
based techniques incur the significant overhead of constructing and
manipulating graphs during search. We present a set-constraint solver which
combines BDD-based set-bounds propagators with the learning abilities of a
modern SAT solver. Together with a number of improvements beyond the basic
algorithm, this solver is highly competitive with existing propagation based
set constraint solvers.
|
1401.3847 | Automatic Induction of Bellman-Error Features for Probabilistic Planning | cs.AI | Domain-specific features are important in representing problem structure
throughout machine learning and decision-theoretic planning. In planning, once
state features are provided, domain-independent algorithms such as approximate
value iteration can learn weighted combinations of those features that often
perform well as heuristic estimates of state value (e.g., distance to the
goal). Successful applications in real-world domains often require features
crafted by human experts. Here, we propose automatic processes for learning
useful domain-specific feature sets with little or no human intervention. Our
methods select and add features that describe state-space regions of high
inconsistency in the Bellman equation (statewise Bellman error) during
approximate value iteration. Our method can be applied using any
real-valued-feature hypothesis space and corresponding learning method for
selecting features from training sets of state-value pairs. We evaluate the
method with hypothesis spaces defined by both relational and propositional
feature languages, using nine probabilistic planning domains. We show that
approximate value iteration using a relational feature space performs at the
state-of-the-art in domain-independent stochastic relational planning. Our
method provides the first domain-independent approach that plays Tetris
successfully (without human-engineered features).
|
1401.3848 | Approximate Model-Based Diagnosis Using Greedy Stochastic Search | cs.AI | We propose a StochAstic Fault diagnosis AlgoRIthm, called SAFARI, which
trades off guarantees of computing minimal diagnoses for computational
efficiency. We empirically demonstrate, using the 74XXX and ISCAS-85 suites of
benchmark combinatorial circuits, that SAFARI achieves several
orders-of-magnitude speedup over two well-known deterministic algorithms, CDA*
and HA*, for multiple-fault diagnoses; further, SAFARI can compute a range of
multiple-fault diagnoses that CDA* and HA* cannot. We also prove that SAFARI is
optimal for a range of propositional fault models, such as the widely-used
weak-fault models (models with ignorance of abnormal behavior). We discuss the
optimality of SAFARI in a class of strong-fault circuit models with stuck-at
failure modes. By modeling the algorithm itself as a Markov chain, we provide
exact bounds on the minimality of the diagnosis computed. SAFARI also displays
strong anytime behavior, and will return a diagnosis after any non-trivial
inference time.
|
1401.3849 | Nominals, Inverses, Counting, and Conjunctive Queries or: Why Infinity
is your Friend! | cs.LO cs.AI | Description Logics are knowledge representation formalisms that provide, for
example, the logical underpinning of the W3C OWL standards. Conjunctive
queries, the standard query language in databases, have recently gained
significant attention as an expressive formalism for querying Description Logic
knowledge bases. Several different techniques for deciding conjunctive query
entailment are available for a wide range of DLs. Nevertheless, the combination
of nominals, inverse roles, and number restrictions in OWL 1 and OWL 2 DL
causes unsolvable problems for the techniques hitherto available. We tackle
this problem and present a decidability result for entailment of unions of
conjunctive queries in the DL ALCHOIQb that contains all three problematic
constructors simultaneously. Provided that queries contain only simple roles,
our result also shows decidability of entailment of (unions of) conjunctive
queries in the logic that underpins OWL 1 DL and we believe that the presented
results will pave the way for further progress towards conjunctive query
entailment decision procedures for the Description Logics underlying the OWL
standards.
|
1401.3850 | A Model-Based Active Testing Approach to Sequential Diagnosis | cs.AI | Model-based diagnostic reasoning often leads to a large number of diagnostic
hypotheses. The set of diagnoses can be reduced by taking into account extra
observations (passive monitoring), measuring additional variables (probing) or
executing additional tests (sequential diagnosis/test sequencing). In this
paper we combine the above approaches with techniques from Automated Test
Pattern Generation (ATPG) and Model-Based Diagnosis (MBD) into a framework
called FRACTAL (FRamework for ACtive Testing ALgorithms). Apart from the inputs
and outputs that connect a system to its environment, in active testing we
consider additional input variables to which a sequence of test vectors can be
supplied. We address the computationally hard problem of computing optimal
control assignments (as defined in FRACTAL) in terms of a greedy approximation
algorithm called FRACTAL-G. We compare the decrease in the number of remaining
minimal cardinality diagnoses of FRACTAL-G to that of two more FRACTAL
algorithms: FRACTAL-ATPG and FRACTAL-P. FRACTAL-ATPG is based on ATPG and
sequential diagnosis while FRACTAL-P is based on probing and, although not an
active testing algorithm, provides a baseline for comparing the lower bound on
the number of reachable diagnoses for the FRACTAL algorithms. We empirically
evaluate the trade-offs of the three FRACTAL algorithms by performing extensive
experimentation on the ISCAS85/74XXX benchmark of combinational circuits.
|
1401.3851 | Intrusion Detection using Continuous Time Bayesian Networks | cs.AI cs.CR | Intrusion detection systems (IDSs) fall into two high-level categories:
network-based systems (NIDS) that monitor network behaviors, and host-based
systems (HIDS) that monitor system calls. In this work, we present a general
technique for both systems. We use anomaly detection, which identifies patterns
not conforming to a historic norm. In both types of systems, the rates of
change vary dramatically over time (due to burstiness) and over components (due
to service difference). To efficiently model such systems, we use continuous
time Bayesian networks (CTBNs) and avoid specifying a fixed update interval
common to discrete-time models. We build generative models from the normal
training data, and abnormal behaviors are flagged based on their likelihood
under this norm. For NIDS, we construct a hierarchical CTBN model for the
network packet traces and use Rao-Blackwellized particle filtering to learn the
parameters. We illustrate the power of our method through experiments on
detecting real worms and identifying hosts on two publicly available network
traces, the MAWI dataset and the LBNL dataset. For HIDS, we develop a novel
learning method to deal with the finite resolution of system log file time
stamps, without losing the benefits of our continuous time model. We
demonstrate the method by detecting intrusions in the DARPA 1998 BSM dataset.
|
1401.3853 | Implicit Abstraction Heuristics | cs.AI | State-space search with explicit abstraction heuristics is at the state of
the art of cost-optimal planning. These heuristics are inherently limited,
nonetheless, because the size of the abstract space must be bounded by some,
even if a very large, constant. Targeting this shortcoming, we introduce the
notion of (additive) implicit abstractions, in which the planning task is
abstracted by instances of tractable fragments of optimal planning. We then
introduce a concrete setting of this framework, called fork-decomposition, that
is based on two novel fragments of tractable cost-optimal planning. The induced
admissible heuristics are then studied formally and empirically. This study
testifies for the accuracy of the fork decomposition heuristics, yet our
empirical evaluation also stresses the tradeoff between their accuracy and the
runtime complexity of computing them. Indeed, some of the power of the explicit
abstraction heuristics comes from precomputing the heuristic function offline
and then determining h(s) for each evaluated state s by a very fast lookup in a
database. By contrast, while fork-decomposition heuristics can be calculated in
polynomial time, computing them is far from being fast. To address this
problem, we show that the time-per-node complexity bottleneck of the
fork-decomposition heuristics can be successfully overcome. We demonstrate that
an equivalent of the explicit abstraction notion of a database exists for the
fork-decomposition abstractions as well, despite their exponential-size
abstract spaces. We then verify empirically that heuristic search with the
"databased" fork-decomposition heuristics favorably competes with the state of
the art of cost-optimal planning.
|
1401.3854 | A Constraint Satisfaction Framework for Executing Perceptions and
Actions in Diagrammatic Reasoning | cs.AI | Diagrammatic reasoning (DR) is pervasive in human problem solving as a
powerful adjunct to symbolic reasoning based on language-like representations.
The research reported in this paper is a contribution to building a general
purpose DR system as an extension to a SOAR-like problem solving architecture.
The work is in a framework in which DR is modeled as a process where subtasks
are solved, as appropriate, either by inference from symbolic representations
or by interaction with a diagram, i.e., perceiving specified information from a
diagram or modifying/creating objects in a diagram in specified ways according
to problem solving needs. The perceptions and actions in most DR systems built
so far are hand-coded for the specific application, even when the rest of the
system is built using the general architecture. The absence of a general
framework for executing perceptions/actions poses a major hindrance to using
them opportunistically -- the essence of open-ended search in problem solving.
Our goal is to develop a framework for executing a wide variety of specified
perceptions and actions across tasks/domains without human intervention. We
observe that the domain/task-specific visual perceptions/actions can be
transformed into domain/task-independent spatial problems. We specify a spatial
problem as a quantified constraint satisfaction problem in the real domain
using an open-ended vocabulary of properties, relations and actions involving
three kinds of diagrammatic objects -- points, curves, regions. Solving a
spatial problem from this specification requires computing the equivalent
simplified quantifier-free expression, the complexity of which is inherently
doubly exponential. We represent objects as configuration of simple elements to
facilitate decomposition of complex problems into simpler and similar
subproblems. We show that, if the symbolic solution to a subproblem can be
expressed concisely, quantifiers can be eliminated from spatial problems in
low-order polynomial time using similar previously solved subproblems. This
requires determining the similarity of two problems, the existence of a mapping
between them computable in polynomial time, and designing a memory for storing
previously solved problems so as to facilitate search. The efficacy of the idea
is shown by time complexity analysis. We demonstrate the proposed approach by
executing perceptions and actions involved in DR tasks in two army
applications.
|
1401.3855 | Algorithms for Closed Under Rational Behavior (CURB) Sets | cs.GT cs.AI | We provide a series of algorithms demonstrating that solutions according to
the fundamental game-theoretic solution concept of closed under rational
behavior (CURB) sets in two-player, normal-form games can be computed in
polynomial time (we also discuss extensions to n-player games). First, we
describe an algorithm that identifies all of a player's best responses
conditioned on the belief that the other player will play from within a given
subset of its strategy space. This algorithm serves as a subroutine in a series
of polynomial-time algorithms for finding all minimal CURB sets, one minimal
CURB set, and the smallest minimal CURB set in a game. We then show that the
complexity of finding a Nash equilibrium can be exponential only in the size of
a game's smallest CURB set. Related to this, we show that the smallest CURB set
can be an arbitrarily small portion of the game, but it can also be arbitrarily
larger than the supports of its only enclosed Nash equilibrium. We test our
algorithms empirically and find that most commonly studied academic games tend
to have either very large or very small minimal CURB sets.
|
1401.3857 | Case-Based Subgoaling in Real-Time Heuristic Search for Video Game
Pathfinding | cs.AI | Real-time heuristic search algorithms satisfy a constant bound on the amount
of planning per action, independent of problem size. As a result, they scale up
well as problems become larger. This property would make them well suited for
video games where Artificial Intelligence controlled agents must react quickly
to user commands and to other agents' actions. On the downside, real-time search
algorithms employ learning methods that frequently lead to poor solution
quality and cause the agent to appear irrational by re-visiting the same
problem states repeatedly. The situation changed recently with a new algorithm,
D LRTA*, which attempted to eliminate learning by automatically selecting
subgoals. D LRTA* is well poised for video games, except it has a complex and
memory-demanding pre-computation phase during which it builds a database of
subgoals. In this paper, we propose a simpler and more memory-efficient way of
pre-computing subgoals thereby eliminating the main obstacle to applying
state-of-the-art real-time search methods in video games. The new algorithm
solves a number of randomly chosen problems off-line, compresses the solutions
into a series of subgoals and stores them in a database. When presented with a
novel problem on-line, it queries the database for the most similar previously
solved case and uses its subgoals to solve the problem. In the domain of
pathfinding on four large video game maps, the new algorithm delivers solutions
eight times better while using 57 times less memory and requiring 14% less
pre-computation time.
|
1401.3858 | Logical Foundations of RDF(S) with Datatypes | cs.LO cs.AI | The Resource Description Framework (RDF) is a Semantic Web standard that
provides a data language, simply called RDF, as well as a lightweight ontology
language, called RDF Schema. We investigate embeddings of RDF in logic and show
how standard logic programming and description logic technology can be used for
reasoning with RDF. We subsequently consider extensions of RDF with datatype
support, considering D entailment, defined in the RDF semantics specification,
and D* entailment, a semantic weakening of D entailment, introduced by ter
Horst. We use the embeddings and properties of the logics to establish novel
upper bounds for the complexity of deciding entailment. We subsequently
establish two novel lower bounds, establishing that RDFS entailment is
PTime-complete and that simple-D entailment is coNP-hard, when considering
arbitrary datatypes, both in the size of the entailing graph. The results
indicate that RDFS may not be as lightweight as one may expect.
|
1401.3859 | A Utility-Theoretic Approach to Privacy in Online Services | cs.AI cs.CR cs.CY | Online offerings such as web search, news portals, and e-commerce
applications face the challenge of providing high-quality service to a large,
heterogeneous user base. Recent efforts have highlighted the potential to
improve performance by introducing methods to personalize services based on
special knowledge about users and their context. For example, a user's
demographics, location, and past search and browsing may be useful in enhancing
the results offered in response to web search queries. However, reasonable
concerns about privacy by users, providers, and government agencies acting
on behalf of citizens, may limit access by services to such information. We
introduce and explore an economics of privacy in personalization, where people
can opt to share personal information, in a standing or on-demand manner, in
return for expected enhancements in the quality of an online service. We focus
on the example of web search and formulate realistic objective functions for
search efficacy and privacy. We demonstrate how we can find a provably
near-optimal optimization of the utility-privacy tradeoff in an efficient
manner. We evaluate our methodology on data drawn from a log of the search
activity of volunteer participants. We separately assess users' preferences
about privacy and utility via a large-scale survey, aimed at eliciting
preferences about people's willingness to trade the sharing of personal data in
return for gains in search efficiency. We show that a significant level of
personalization can be achieved using a relatively small amount of information
about users.
|
1401.3860 | Planning with Noisy Probabilistic Relational Rules | cs.AI | Noisy probabilistic relational rules are a promising world model
representation for several reasons. They are compact and generalize over world
instantiations. They are usually interpretable and they can be learned
effectively from the action experiences in complex worlds. We investigate
reasoning with such rules in grounded relational domains. Our algorithms
exploit the compactness of rules for efficient and flexible decision-theoretic
planning. As a first approach, we combine these rules with the Upper Confidence
Bounds applied to Trees (UCT) algorithm based on look-ahead trees. Our second
approach converts these rules into a structured dynamic Bayesian network
representation and predicts the effects of action sequences using approximate
inference and beliefs over world states. We evaluate the effectiveness of our
approaches for planning in a simulated complex 3D robot manipulation scenario
with an articulated manipulator and realistic physics and in domains of the
probabilistic planning competition. Empirical results show that our methods can
solve problems where existing methods fail.
|
1401.3861 | Best-First Heuristic Search for Multicore Machines | cs.AI cs.DC | To harness modern multicore processors, it is imperative to develop parallel
versions of fundamental algorithms. In this paper, we compare different
approaches to parallel best-first search in a shared-memory setting. We present
a new method, PBNF, that uses abstraction to partition the state space and to
detect duplicate states without requiring frequent locking. PBNF allows
speculative expansions when necessary to keep threads busy. We identify and fix
potential livelock conditions in our approach, proving its correctness using
temporal logic. Our approach is general, allowing it to extend easily to
suboptimal and anytime heuristic search. In an empirical comparison on STRIPS
planning, grid pathfinding, and sliding tile puzzle problems using 8-core
machines, we show that A*, weighted A* and Anytime weighted A* implemented
using PBNF yield faster search than improved versions of previous parallel
search proposals.
|
1401.3862 | A Probabilistic Approach for Maintaining Trust Based on Evidence | cs.MA | Leading agent-based trust models address two important needs. First, they
show how an agent may estimate the trustworthiness of another agent based on
prior interactions. Second, they show how agents may share their knowledge in
order to cooperatively assess the trustworthiness of others. However, in
real-life settings, information relevant to trust is usually obtained
piecemeal, not all at once. Unfortunately, the problem of maintaining trust has
drawn little attention. Existing approaches handle trust updates in a
heuristic, not a principled, manner. This paper builds on a formal model that
considers probability and certainty as two dimensions of trust. It proposes a
mechanism by which an agent can update the amount of trust it places in
other agents on an ongoing basis. This paper shows via simulation that the
proposed approach (a) provides accurate estimates of the trustworthiness of
agents that change behavior frequently; and (b) captures the dynamic behavior
of the agents. This paper includes an evaluation based on a real dataset drawn
from Amazon Marketplace, a leading e-commerce site.
|
1401.3863 | An Effective Algorithm for and Phase Transitions of the Directed
Hamiltonian Cycle Problem | cs.AI | The Hamiltonian cycle problem (HCP) is an important combinatorial problem
with applications in many areas. It is among the first problems used for
studying intrinsic properties, including phase transitions, of combinatorial
problems. While thorough theoretical and experimental analyses have been made
on the HCP in undirected graphs, a limited amount of work has been done for the
HCP in directed graphs (DHCP). The main contribution of this work is an
effective algorithm for the DHCP. Our algorithm explores and exploits the close
relationship between the DHCP and the Assignment Problem (AP) and utilizes a
technique based on Boolean satisfiability (SAT). By combining effective
algorithms for the AP and SAT, our algorithm significantly outperforms previous
exact DHCP algorithms, including an algorithm based on the award-winning
Concorde TSP algorithm. The second result of the current study is an
experimental analysis of phase transitions of the DHCP, verifying and refining
a known phase transition of the DHCP.
|
1401.3864 | A Logical Study of Partial Entailment | cs.LO cs.AI | We introduce a novel logical notion--partial entailment--to propositional
logic. In contrast with classical entailment, that a formula P partially
entails another formula Q with respect to a background formula set \Gamma
intuitively means that under the circumstance of \Gamma, if P is true then some
"part" of Q will also be true. We distinguish three different kinds of partial
entailments and formalize them by using an extended notion of prime implicant.
We study their semantic properties, which show that, surprisingly, partial
entailments fail for many simple inference rules. Then, we study the related
computational properties, which indicate that partial entailments are
relatively difficult to compute. Finally, we consider a potential
application of partial entailments in reasoning about rational agents.
|
1401.3865 | Evaluating Temporal Graphs Built from Texts via Transitive Reduction | cs.CL cs.IR | Temporal information has been the focus of recent attention in information
extraction, leading to some standardization effort, in particular for the task
of relating events in a text. This task raises the problem of comparing two
annotations of a given text, because relations between events in a story are
intrinsically interdependent and cannot be evaluated separately. A proper
evaluation measure is also crucial in the context of a machine learning
approach to the problem. Finding a common comparison referent at the text level
is not obvious, and we argue here in favor of a shift from event-based measures
to measures on a unique textual object, a minimal underlying temporal graph, or
more formally the transitive reduction of the graph of relations between event
boundaries. We support it by an investigation of its properties on synthetic
data and on a well-known temporal corpus.
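The transitive reduction the proposed measure is built on can be computed directly from reachability when the relation graph is acyclic. A small illustrative sketch (names are ours; real event-boundary graphs require the temporal relation algebra to be resolved first):

```python
from collections import defaultdict

def transitive_reduction(nodes, edges):
    """Smallest edge set with the same reachability as `edges` (DAG assumed)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)

    def reachable(u):
        # all nodes reachable from u via one or more edges
        seen, stack = set(), list(adj[u])
        while stack:
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(adj[x])
        return seen

    reach = {u: reachable(u) for u in nodes}
    # drop (u, v) when v is already reachable through another successor of u
    return {(u, v) for u, v in edges
            if not any(v in reach[w] for w in adj[u] if w != v)}
```

For instance, given A before B, B before C, and the redundant shortcut A before C, the reduction keeps only the two covering edges, so two annotations are compared on the same minimal object.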
|
1401.3866 | Automated Search for Impossibility Theorems in Social Choice Theory:
Ranking Sets of Objects | cs.AI cs.LO cs.MA | We present a method for using standard techniques from satisfiability
checking to automatically verify and discover theorems in an area of economic
theory known as ranking sets of objects. The key question in this area, which
has important applications in social choice theory and decision making under
uncertainty, is how to extend an agent's preferences over a number of objects to
a preference relation over nonempty sets of such objects. Certain combinations
of seemingly natural principles for this kind of preference extension can
result in logical inconsistencies, which has led to a number of important
impossibility theorems. We first prove a general result that shows that for a
wide range of such principles, characterised by their syntactic form when
expressed in a many-sorted first-order logic, any impossibility exhibited at a
fixed (small) domain size will necessarily extend to the general case. We then
show how to formulate candidates for impossibility theorems at a fixed domain
size in propositional logic, which in turn enables us to automatically search
for (general) impossibility theorems using a SAT solver. When applied to a
space of 20 principles for preference extension familiar from the literature,
this method yields a total of 84 impossibility theorems, including both known
and nontrivial new results.
|
1401.3867 | Iterated Belief Change Due to Actions and Observations | cs.AI | In action domains where agents may have erroneous beliefs, reasoning about
the effects of actions involves reasoning about belief change. In this paper,
we use a transition system approach to reason about the evolution of an agent's
beliefs as actions are executed. Some actions cause an agent to perform belief
revision while others cause an agent to perform belief update, but the
interaction between revision and update can be non-elementary. We present a set
of rationality properties describing the interaction between revision and
update, and we introduce a new class of belief change operators for reasoning
about alternating sequences of revisions and updates. Our belief change
operators can be characterized in terms of a natural shifting operation on
total pre-orderings over interpretations. We compare our approach with related
work on iterated belief change due to action, and we conclude with some
directions for future research.
|
1401.3868 | Clause-Learning Algorithms with Many Restarts and Bounded-Width
Resolution | cs.LO cs.AI | We offer a new understanding of some aspects of practical SAT-solvers that
are based on DPLL with unit-clause propagation, clause-learning, and restarts.
We do so by analyzing a concrete algorithm which we claim is faithful to what
practical solvers do. In particular, before making any new decision or restart,
the solver repeatedly applies the unit-resolution rule until saturation, and
leaves no component to the mercy of non-determinism except for some internal
randomness. We prove the perhaps surprising fact that, although the solver is
not explicitly designed for it, with high probability it ends up behaving as
width-k resolution after no more than O(n^(2k+2)) conflicts and restarts, where n
is the number of variables. In other words, width-k resolution can be thought
of as O(n^(2k+2)) restarts of the unit-resolution rule with learning.
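The unit-resolution-until-saturation step the analysis centers on can be sketched as follows (DIMACS-style signed-integer literals; decisions, learning, and restarts are omitted, and all names are ours):

```python
def unit_propagate(clauses):
    """Apply the unit-clause rule until saturation.
    Literals are nonzero ints (positive = true, negative = false);
    returns (assignment, conflict)."""
    assignment = {}
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            open_lits = [l for l in clause if abs(l) not in assignment]
            if not open_lits:
                return assignment, True   # every literal falsified: conflict
            if len(open_lits) == 1:       # unit clause forces its literal
                l = open_lits[0]
                assignment[abs(l)] = l > 0
                changed = True
    return assignment, False
```

On the chain (x1), (not x1 or x2), (not x2 or x3) this propagates all three variables to true without any decision.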
|
1401.3869 | False-Name Manipulations in Weighted Voting Games | cs.GT cs.MA | Weighted voting is a classic model of cooperation among agents in
decision-making domains. In such games, each player has a weight, and a
coalition of players wins the game if its total weight meets or exceeds a given
quota. A player's power in such games is usually not directly proportional to
his weight, and is measured by a power index, the most prominent among which
are the Shapley-Shubik index and the Banzhaf index. In this paper, we
investigate by how much a player can change his power, as measured by the
Shapley-Shubik index or the Banzhaf index, by means of a false-name
manipulation, i.e., splitting his weight among two or more identities. For both
indices, we provide upper and lower bounds on the effect of weight-splitting.
We then show that checking whether a beneficial split exists is NP-hard, and
discuss efficient algorithms for restricted cases of this problem, as well as
randomized algorithms for the general case. We also provide an experimental
evaluation of these algorithms. Finally, we examine related forms of
manipulative behavior, such as annexation, where a player subsumes other
players, or merging, where several players unite into one. We characterize the
computational complexity of such manipulations and provide limits on their
effects. For the Banzhaf index, we describe a new paradox, which we term the
Annexation Non-monotonicity Paradox.
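The raw Banzhaf index mentioned above counts, for each player, the coalitions of the remaining players in which that player is critical. A brute-force sketch, exponential in the number of players and intended only to make the definition concrete:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Raw Banzhaf index of each player in a weighted voting game."""
    n = len(weights)
    index = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        swings = 0
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                w = sum(weights[j] for j in coalition)
                # player i is critical: coalition loses, coalition + i wins
                if w < quota <= w + weights[i]:
                    swings += 1
        index.append(swings / 2 ** (n - 1))
    return index
```

With weights [4, 2, 1] and quota 4, all power concentrates on the large player; a false-name split replaces one weight by two or more smaller ones and the manipulator's question is whether the split identities' combined index exceeds the original.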
|
1401.3870 | Learning to Make Predictions In Partially Observable Environments
Without a Generative Model | cs.LG cs.AI stat.ML | When faced with the problem of learning a model of a high-dimensional
environment, a common approach is to limit the model to make only a restricted
set of predictions, thereby simplifying the learning problem. These partial
models may be directly useful for making decisions or may be combined together
to form a more complete, structured model. However, in partially observable
(non-Markov) environments, standard model-learning methods learn generative
models, i.e. models that provide a probability distribution over all possible
futures (such as POMDPs). It is not straightforward to restrict such models to
make only certain predictions, and doing so does not always simplify the
learning problem. In this paper we present prediction profile models:
non-generative partial models for partially observable systems that make only a
given set of predictions, and are therefore far simpler than generative models
in some cases. We formalize the problem of learning a prediction profile model
as a transformation of the original model-learning problem, and show
empirically that one can learn prediction profile models that make a small set
of important predictions even in systems that are too complex for standard
generative models.
|
1401.3871 | Non-Deterministic Policies in Markovian Decision Processes | cs.AI cs.LG | Markovian processes have long been used to model stochastic environments.
Reinforcement learning has emerged as a framework to solve sequential planning
and decision-making problems in such environments. In recent years, attempts
were made to apply methods from reinforcement learning to construct decision
support systems for action selection in Markovian environments. Although
conventional methods in reinforcement learning have proved to be useful in
problems concerning sequential decision-making, they cannot be applied in their
current form to decision support systems, such as those in medical domains, as
they suggest policies that are often highly prescriptive and leave little room
for the user's input. Without the ability to provide flexible guidelines, it is
unlikely that these methods can gain ground with users of such systems. This
paper introduces the new concept of non-deterministic policies to allow more
flexibility in the user's decision-making process, while constraining decisions
to remain near optimal solutions. We provide two algorithms to compute
non-deterministic policies in discrete domains. We study the output and running
time of these methods on a set of synthetic and real-world problems. In an
experiment with human subjects, we show that humans assisted by hints based on
non-deterministic policies outperform both human-only and computer-only agents
in a web navigation task.
|
1401.3872 | Second-Order Consistencies | cs.AI | In this paper, we propose a comprehensive study of second-order consistencies
(i.e., consistencies identifying inconsistent pairs of values) for constraint
satisfaction. We build a full picture of the relationships existing between
four basic second-order consistencies, namely path consistency (PC),
3-consistency (3C), dual consistency (DC) and 2-singleton arc consistency
(2SAC), as well as their conservative and strong variants. Interestingly, dual
consistency is an original property that can be established by using the
outcome of the enforcement of generalized arc consistency (GAC), which makes it
rather easy to obtain since constraint solvers typically maintain GAC during
search. On binary constraint networks, DC is equivalent to PC, but its
restriction to existing constraints, called conservative dual consistency
(CDC), is strictly stronger than traditional conservative consistencies derived
from path consistency, namely partial path consistency (PPC) and conservative
path consistency (CPC). After introducing a general algorithm to enforce strong
(C)DC, we present the results of an experimentation over a wide range of
benchmarks that demonstrate the interest of (conservative) dual consistency. In
particular, we show that enforcing (C)DC before search clearly improves the
performance of MAC (the algorithm that maintains GAC during search) on several
binary and non-binary structured problems.
|
1401.3874 | Identifying Aspects for Web-Search Queries | cs.IR cs.DB | Many web-search queries serve as the beginning of an exploration of an
unknown space of information, rather than looking for a specific web page. To
answer such queries effectively, the search engine should attempt to organize
the space of relevant information in a way that facilitates exploration. We
describe the Aspector system that computes aspects for a given query. Each
aspect is a set of search queries that together represent a distinct
information need relevant to the original search query. To serve as an
effective means to explore the space, Aspector computes aspects that are
orthogonal to each other and have high combined coverage. Aspector combines
two sources of information to compute aspects. We discover candidate aspects by
analyzing query logs, and cluster them to eliminate redundancies. We then use a
mass-collaboration knowledge base (e.g., Wikipedia) to compute candidate
aspects for queries that occur less frequently and to group together aspects
that are likely to be "semantically" related. We present a user study that
indicates that the aspects we compute are rated favorably against three
competing alternatives: related searches proposed by Google, cluster labels
assigned by the Clusty search engine, and navigational searches proposed by
Bing.
|
1401.3875 | On-line Planning and Scheduling: An Application to Controlling Modular
Printers | cs.AI | We present a case study of artificial intelligence techniques applied to the
control of production printing equipment. Like many other real-world
applications, this complex domain requires high-speed autonomous
decision-making and robust continual operation. To our knowledge, this work
represents the first successful industrial application of embedded
domain-independent temporal planning. Our system handles execution failures and
multi-objective preferences. At its heart is an on-line algorithm that combines
techniques from state-space planning and partial-order scheduling. We suggest
that this general architecture may prove useful in other applications as more
intelligent systems operate in continual, on-line settings. Our system has been
used to drive several commercial prototypes and has enabled a new product
architecture for our industrial partner. When compared with state-of-the-art
off-line planners, our system is hundreds of times faster and often finds
better plans. Our experience demonstrates that domain-independent AI planning
based on heuristic search can flexibly handle time, resources, replanning, and
multiple objectives in a high-speed practical application without requiring
hand-coded control knowledge.
|
1401.3876 | Determining Possible and Necessary Winners Given Partial Orders | cs.GT cs.MA | Usually a voting rule requires agents to give their preferences as linear
orders. However, in some cases it is impractical for an agent to give a linear
order over all the alternatives. It has been suggested to let agents submit
partial orders instead. Then, given a voting rule, a profile of partial orders,
and an alternative (candidate) c, two important questions arise: first, is it
still possible for c to win, and second, is c guaranteed to win? These are the
possible winner and necessary winner problems, respectively. Each of these two
problems is further divided into two sub-problems: determining whether c is a
unique winner (that is, c is the only winner), or determining whether c is a
co-winner (that is, c is in the set of winners). We consider the setting where
the number of alternatives is unbounded and the votes are unweighted. We
completely characterize the complexity of possible/necessary winner problems
for the following common voting rules: a class of positional scoring rules
(including Borda), Copeland, maximin, Bucklin, ranked pairs, voting trees, and
plurality with runoff.
|
1401.3877 | Properties of Bethe Free Energies and Message Passing in Gaussian Models | cs.LG cs.AI stat.ML | We address the problem of computing approximate marginals in Gaussian
probabilistic models by using mean field and fractional Bethe approximations.
We define the Gaussian fractional Bethe free energy in terms of the moment
parameters of the approximate marginals, derive a lower and an upper bound on
the fractional Bethe free energy and establish a necessary condition for the
lower bound to be bounded from below. It turns out that the condition is
identical to the pairwise normalizability condition, which is known to be a
sufficient condition for the convergence of the message passing algorithm. We
show that stable fixed points of the Gaussian message passing algorithm are
local minima of the Gaussian Bethe free energy. By a counterexample, we
disprove the conjecture stating that the unboundedness of the free energy
implies the divergence of the message passing algorithm.
|
1401.3878 | Computing Small Unsatisfiable Cores in Satisfiability Modulo Theories | cs.LO cs.AI | The problem of finding small unsatisfiable cores for SAT formulas has
recently received a lot of interest, mostly for its applications in formal
verification. However, propositional logic is often not expressive enough for
representing many interesting verification problems, which can be more
naturally addressed in the framework of Satisfiability Modulo Theories, SMT.
Surprisingly, the problem of finding unsatisfiable cores in SMT has received
very little attention in the literature. In this paper we present a novel
approach to this problem, called the Lemma-Lifting approach. The main idea is
to combine an SMT solver with an external propositional core extractor. The SMT
solver produces the theory lemmas found during the search, dynamically lifting
the suitable amount of theory information to the Boolean level. The core
extractor is then called on the Boolean abstraction of the original SMT problem
and of the theory lemmas. This results in an unsatisfiable core for the
original SMT problem, once the remaining theory lemmas are removed. The
approach is conceptually interesting, and has several advantages in practice.
In fact, it is extremely simple to implement and to update, and it can be
interfaced with every propositional core extractor in a plug-and-play manner,
so as to benefit for free from all unsat-core reduction techniques which have
been or will be made available.
We have evaluated our algorithm with a very extensive empirical test on
SMT-LIB benchmarks, which confirms the validity and potential of this approach.
|
1401.3879 | Soft Constraints of Difference and Equality | cs.AI cs.DS | In many combinatorial problems one may need to model the diversity or
similarity of assignments in a solution. For example, one may wish to maximise
or minimise the number of distinct values in a solution. To formulate problems
of this type, we can use soft variants of the well known AllDifferent and
AllEqual constraints. We present a taxonomy of six soft global constraints,
generated by combining the two latter ones and the two standard cost functions,
which are either maximised or minimised. We characterise the complexity of
achieving arc and bounds consistency on these constraints, resolving those
cases for which NP-hardness was neither proven nor disproven. In particular, we
explore in depth the constraint ensuring that at least k pairs of variables
have a common value. We show that achieving arc consistency is NP-hard; however,
achieving bounds consistency can be done in polynomial time through dynamic
programming. Moreover, we show that the maximum number of pairs of equal
variables can be approximated within a factor of 1/2 by a linear-time greedy
algorithm. Finally, we provide a fixed-parameter tractable algorithm with
respect to the number of values appearing in more than two distinct domains.
Interestingly, this taxonomy shows that enforcing equality is harder than
enforcing difference.
|
1401.3880 | Regression Conformal Prediction with Nearest Neighbours | cs.LG | In this paper we apply Conformal Prediction (CP) to the k-Nearest Neighbours
Regression (k-NNR) algorithm and propose ways of extending the typical
nonconformity measure used for regression so far. Unlike traditional regression
methods which produce point predictions, Conformal Predictors output predictive
regions that satisfy a given confidence level. The regions produced by any
Conformal Predictor are automatically valid; however, their tightness, and
therefore their usefulness, depend on the nonconformity measure used by each CP. In
effect a nonconformity measure evaluates how strange a given example is
compared to a set of other examples based on some traditional machine learning
algorithm. We define six novel nonconformity measures based on the k-Nearest
Neighbours Regression algorithm and develop the corresponding CPs following
both the original (transductive) and the inductive CP approaches. A comparison
of the predictive regions produced by our measures with those of the typical
regression measure suggests that a major improvement in terms of predictive
region tightness is achieved by the new measures.
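An inductive conformal predictor of the kind compared against, using the plain absolute-error nonconformity measure on top of a k-NN regressor, might look as follows (1-D inputs, mean-of-neighbours prediction, and all names are our simplifications, not the paper's six measures):

```python
import math

def knn_predict(train, x, k):
    """Mean target of the k nearest training points (1-D inputs for brevity)."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def conformal_interval(train, calibration, x, k, confidence=0.9):
    """Inductive conformal interval with the plain |y - y_hat| nonconformity."""
    scores = sorted(abs(y - knn_predict(train, xc, k))
                    for xc, y in calibration)
    # smallest calibration score covering the requested confidence level
    idx = math.ceil(confidence * (len(scores) + 1)) - 1
    q = scores[min(idx, len(scores) - 1)]
    y_hat = knn_predict(train, x, k)
    return y_hat - q, y_hat + q
```

The interval is valid by construction; tighter nonconformity measures, such as those normalized by neighbourhood distance, shrink q and hence the region.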
|
1401.3881 | Value of Information Lattice: Exploiting Probabilistic Independence for
Effective Feature Subset Acquisition | cs.AI | We address the cost-sensitive feature acquisition problem, where
misclassifying an instance is costly but the expected misclassification cost
can be reduced by acquiring the values of the missing features. Because
acquiring the features is costly as well, the objective is to acquire the right
set of features so that the sum of the feature acquisition cost and
misclassification cost is minimized. We describe the Value of Information
Lattice (VOILA), an optimal and efficient feature subset acquisition framework.
Unlike the common practice, which is to acquire features greedily, VOILA can
reason with subsets of features. VOILA efficiently searches the space of
possible feature subsets by discovering and exploiting conditional independence
properties between the features and it reuses probabilistic inference
computations to further speed up the process. Through empirical evaluation on
five medical datasets, we show that the greedy strategy is often reluctant to
acquire features, as it cannot forecast the benefit of acquiring multiple
features in combination.
|
1401.3882 | Probabilistic Relational Planning with First Order Decision Diagrams | cs.AI | Dynamic programming algorithms have been successfully applied to
propositional stochastic planning problems by using compact representations, in
particular algebraic decision diagrams, to capture domain dynamics and value
functions. Work on symbolic dynamic programming lifted these ideas to first
order logic using several representation schemes. Recent work introduced a
first order variant of decision diagrams (FODD) and developed a value iteration
algorithm for this representation. This paper develops several improvements to
the FODD algorithm that make the approach practical. These include new
reduction operators that decrease the size of the representation, several
speedup techniques, and techniques for value approximation. Incorporating
these, the paper presents a planning system, FODD-Planner, for solving
relational stochastic planning problems. The system is evaluated on several
domains, including problems from the recent international planning competition,
and shows competitive performance with top ranking systems. This is the first
demonstration of feasibility of this approach and it shows that abstraction
through compact representation is a promising approach to stochastic planning.
|
1401.3883 | From "Identical" to "Similar": Fusing Retrieved Lists Based on
Inter-Document Similarities | cs.IR | Methods for fusing document lists that were retrieved in response to a query
often utilize the retrieval scores and/or ranks of documents in the lists. We
present a novel fusion approach that is based on using, in addition,
information induced from inter-document similarities. Specifically, our methods
let similar documents from different lists provide relevance-status support to
each other. We use a graph-based method to model relevance-status propagation
between documents. The propagation is governed by inter-document-similarities
and by retrieval scores of documents in the lists. Empirical evaluation
demonstrates the effectiveness of our methods in fusing TREC runs. The
performance of our most effective methods transcends that of effective fusion
methods that utilize only retrieval scores or ranks.
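For contrast, a standard fusion method that uses only retrieval scores, of the family the proposed similarity-based methods are compared against, is CombSUM; a minimal sketch (names are ours):

```python
def combsum(lists):
    """CombSUM fusion: sum each document's retrieval scores across lists;
    a document missing from a list contributes nothing there."""
    fused = {}
    for ranked in lists:
        for doc, score in ranked:
            fused[doc] = fused.get(doc, 0.0) + score
    return sorted(fused.items(), key=lambda kv: -kv[1])
```

The similarity-based approach additionally lets near-duplicate documents across lists reinforce each other's scores before ranking.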
|
1401.3885 | Scaling up Heuristic Planning with Relational Decision Trees | cs.AI | Current evaluation functions for heuristic planning are expensive to compute.
In numerous planning problems these functions provide good guidance to the
solution, so they are worth the expense. However, when evaluation functions are
misguiding or when planning problems are large enough, lots of node evaluations
must be computed, which severely limits the scalability of heuristic planners.
In this paper, we present a novel solution for reducing node evaluations in
heuristic planning based on machine learning. Particularly, we define the task
of learning search control for heuristic planning as a relational
classification task, and we use an off-the-shelf relational classification tool
to address this learning task. Our relational classification task captures the
preferred action to select in the different planning contexts of a specific
planning domain. These planning contexts are defined by the set of helpful
actions of the current state, the goals remaining to be achieved, and the
static predicates of the planning task. This paper shows two methods for
guiding the search of a heuristic planner with the learned classifiers. The
first one consists of using the resulting classifier as an action policy. The
second one consists of applying the classifier to generate lookahead states
within a Best First Search algorithm. Experiments over a variety of domains
reveal that our heuristic planner using the learned classifiers solves larger
problems than state-of-the-art planners.
|
1401.3886 | Exploiting Structure in Weighted Model Counting Approaches to
Probabilistic Inference | cs.AI | Previous studies have demonstrated that encoding a Bayesian network into a
SAT formula and then performing weighted model counting using a backtracking
search algorithm can be an effective method for exact inference. In this paper,
we present techniques for improving this approach for Bayesian networks with
noisy-OR and noisy-MAX relations---two relations that are widely used in
practice as they can dramatically reduce the number of probabilities one needs
to specify. In particular, we present two SAT encodings for noisy-OR and two
encodings for noisy-MAX that exploit the structure or semantics of the
relations to improve both time and space efficiency, and we prove the
correctness of the encodings. We experimentally evaluated our techniques on
large-scale real and randomly generated Bayesian networks. On these benchmarks,
our techniques gave speedups of up to two orders of magnitude over the best
previous approaches for networks with noisy-OR/MAX relations and scaled up to
larger networks. As well, our techniques extend the weighted model counting
approach for exact inference to networks that were previously intractable for
the approach.
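Weighted model counting itself, which the encodings above target, sums the weights of all satisfying assignments. A brute-force reference sketch (the backtracking search counters discussed in the paper are vastly more efficient):

```python
from itertools import product

def weighted_model_count(clauses, weights):
    """Brute-force weighted model count of a CNF over variables 1..n.
    weights[v-1] = (weight when v is False, weight when v is True)."""
    total = 0.0
    for bits in product([False, True], repeat=len(weights)):
        # a model must satisfy at least one literal of every clause
        if all(any((l > 0) == bits[abs(l) - 1] for l in c) for c in clauses):
            w = 1.0
            for v, b in enumerate(bits):
                w *= weights[v][b]
            total += w
    return total
```

Encoding a Bayesian network assigns each variable's weights from its conditional probabilities, so the count of the constrained formula yields the desired marginal probability.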
|
1401.3887 | The Complexity of Integer Bound Propagation | cs.AI cs.LO | Bound propagation is an important Artificial Intelligence technique used in
Constraint Programming tools to deal with numerical constraints. It is
typically embedded within a search procedure ("branch and prune") and used at
every node of the search tree to narrow down the search space, so it is
critical that it be fast. The procedure invokes constraint propagators until a
common fixpoint is reached, but the known algorithms for this have a
pseudo-polynomial worst-case time complexity: they are fast indeed when the
variables have a small numerical range, but they have the well-known problem of
being prohibitively slow when these ranges are large. An important question is
therefore whether strongly-polynomial algorithms exist that compute the common
bound consistent fixpoint of a set of constraints. This paper answers this
question. In particular we show that this fixpoint computation is in fact
NP-complete, even when restricted to binary linear constraints.
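A propagation-to-fixpoint loop of the kind analyzed above, restricted for illustration to difference constraints x - y <= c (a simple special case, not the general binary linear constraints shown NP-complete; names are ours):

```python
def propagate_bounds(domains, constraints):
    """Narrow integer bounds to a common fixpoint for difference
    constraints of the form x - y <= c."""
    changed = True
    while changed:
        changed = False
        for x, y, c in constraints:
            # x - y <= c gives x <= hi(y) + c and y >= lo(x) - c
            if domains[x][1] > domains[y][1] + c:
                domains[x][1] = domains[y][1] + c
                changed = True
            if domains[y][0] < domains[x][0] - c:
                domains[y][0] = domains[x][0] - c
                changed = True
            if any(lo > hi for lo, hi in domains.values()):
                return None  # a domain emptied: constraints are inconsistent
    return domains
```

The loop terminates because each update strictly shrinks an integer interval, which is exactly the pseudo-polynomial behaviour the abstract describes: fast on small ranges, slow when ranges are huge.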
|