id | title | categories | abstract |
|---|---|---|---|
1401.3049 | On the Secrecy Outage Capacity of Physical Layer Security in Large-Scale
MIMO Relaying Systems with Imperfect CSI | cs.IT math.IT | In this paper, we study the problem of physical layer security in a
large-scale multiple-input multiple-output (LS-MIMO) relaying system. The
advantage of LS-MIMO relaying systems is exploited to enhance both wireless
security and spectral efficiency. In particular, the challenging issue incurred
by short interception distance is well addressed. Under very practical
assumptions, i.e., no eavesdropper's channel state information (CSI) and
imperfect legitimate channel CSI, this paper gives a thorough investigation of
the impact of imperfect CSI in two classic relaying systems, i.e.,
amplify-and-forward (AF) and decode-and-forward (DF) systems, and obtain
explicit expressions of secrecy outage capacities for both cases. Finally, our
theoretical claims are validated by the numerical results.
|
1401.3056 | Power of individuals -- Controlling centrality of temporal networks | cs.SI physics.soc-ph | Temporal networks are networks in which nodes and interactions may appear
and disappear at various time scales. Given the ubiquity of temporal networks
in economy, nature and society, it is urgent and important to study the
structural controllability of temporal networks, which is so far an untouched
topic. We develop graphical tools to study the structural
controllability of temporal networks, identifying the intrinsic mechanism of
the ability of individuals in controlling a dynamic and large-scale temporal
network. Classifying temporal trees of a temporal network into different types,
we give (both upper and lower) analytical bounds of the controlling centrality,
which are verified by numerical simulations of both artificial and empirical
temporal networks. We find that the scale-free distribution of nodes'
controlling centrality is virtually independent of the time scale and of the
type of dataset, indicating the inherent heterogeneity and robustness of the
controlling centrality of temporal networks.
|
1401.3069 | Use Case Point Approach Based Software Effort Estimation using Various
Support Vector Regression Kernel Methods | cs.SE cs.LG | The job of software effort estimation is a critical one in the early stages
of the software development life cycle when the details of requirements are
usually not clearly identified. Various optimization techniques help in
improving the accuracy of effort estimation. The Support Vector Regression
(SVR) is one of several different soft-computing techniques that help in
getting optimal estimated values. The idea of SVR is based upon the computation
of a linear regression function in a high dimensional feature space where the
input data are mapped via a nonlinear function. Further, the SVR kernel methods
can be applied in transforming the input data and then based on these
transformations, an optimal boundary between the possible outputs can be
obtained. The main objective of the research work carried out in this paper is
to estimate the software effort using use case point approach. The use case
point approach relies on the use case diagram to estimate the size and effort
of software projects. Then, an attempt has been made to optimize the results
obtained from use case point analysis using various SVR kernel methods to
achieve better prediction accuracy.
|
1401.3071 | A Framework of Performance Analysis for Distributed Antenna Systems
Based on Random Matrix Theory | cs.IT math.IT | Future communication systems will be built on green
infrastructures. To realize this goal, a new network infrastructure named
cloud radio access network (C-RAN) has recently been proposed by China Mobile
to enhance network coverage and save energy simultaneously. In C-RANs, in
order to save energy, the radio front ends are separated from the co-located
baseband units and distributed across different physical locations. C-RAN can
thus be viewed as a variant of distributed antenna systems (DASs). In this
paper we analyze the performance of C-RANs using random matrix theory. Because the
antennas are distributed geographically instead of being installed nearby, the
variances of the entries in the considered channel matrix are different from
each other. To the best of the authors' knowledge, the work on random matrices
with elements having different variances is largely open, which is of great
importance for DASs. In our work, some fundamental results on the eigenvalue
distributions of the random matrices with different variances are derived
first. Then based on these fundamental conclusions the outage probability of
the considered DAS is derived. Finally, the accuracy of our analytical results
is assessed by some numerical results.
|
1401.3075 | Multicast Network Coding and Field Sizes | cs.IT math.IT | In an acyclic multicast network, it is well known that a linear network
coding solution over GF($q$) exists when $q$ is sufficiently large. In
particular, for each prime power $q$ no smaller than the number of receivers, a
linear solution over GF($q$) can be efficiently constructed. In this work, we
reveal that a linear solution over a given finite field does \emph{not}
necessarily imply the existence of a linear solution over all larger finite
fields. Specifically, we prove by construction that: (i) For every source
dimension no smaller than 3, there is a multicast network linearly solvable
over GF(7) but not over GF(8), and another multicast network linearly solvable
over GF(16) but not over GF(17); (ii) There is a multicast network linearly
solvable over GF(5) but not over such GF($q$) that $q > 5$ is a Mersenne prime
plus 1, which can be extremely large; (iii) A multicast network linearly
solvable over GF($q^{m_1}$) and over GF($q^{m_2}$) is \emph{not} necessarily
linearly solvable over GF($q^{m_1+m_2}$); (iv) There exists a class of
multicast networks with a set $T$ of receivers such that the minimum field size
$q_{min}$ for a linear solution over GF($q_{min}$) is lower bounded by
$\Theta(\sqrt{|T|})$, but not every larger field than GF($q_{min}$) suffices to
yield a linear solution. The insight brought from this work is that not only
the field size, but also the order of subgroups in the multiplicative group of
a finite field affects the linear solvability of a multicast network.
|
1401.3088 | Secret Message Transmission by HARQ with Multiple Encoding | cs.CR cs.IT math.IT | Secure transmission between two agents, Alice and Bob, over block fading
channels can be achieved similarly to conventional hybrid automatic repeat
request (HARQ) by letting Alice transmit multiple blocks, each containing an
encoded version of the secret message, until Bob informs Alice about successful
decoding over a public error-free return channel. In the existing literature
each block is a differently punctured version of a single codeword generated
with a Wyner code that uses a common randomness for all blocks. In this paper
we instead propose a more general approach in which multiple codewords are
generated from independent randomnesses. The class of channels for which
decodability and secrecy are ensured is characterized, with derivations for
the existence of secret codes. We show in particular that the classes are not a trivial subset
(or superset) of those of existing schemes, thus highlighting the novelty of
the proposed solution. The result is further confirmed by deriving the average
achievable secrecy throughput, thus taking into account both decoding and
secrecy outage.
|
1401.3093 | Rate-Distortion for Ranking with Incomplete Information | cs.IT math.IT | We study the rate-distortion relationship in the set of permutations endowed
with the Kendall Tau metric and the Chebyshev metric. Our study is motivated by
the application of permutation rate-distortion to the average-case and
worst-case analysis of algorithms for ranking with incomplete information and
approximate sorting algorithms. For the Kendall Tau metric we provide bounds
for small, medium, and large distortion regimes, while for the Chebyshev metric
we present bounds that are valid for all distortions and are especially
accurate for small distortions. In addition, for the Chebyshev metric, we
provide a construction for covering codes.
|
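The two permutation metrics named in the abstract above are easy to compute directly; a short sketch in pure Python (function names are illustrative, not from the paper):

```python
from itertools import combinations

def kendall_tau(p, q):
    """Kendall tau distance: number of pairwise order disagreements."""
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]          # p re-expressed in q's coordinates
    return sum(1 for i, j in combinations(range(len(r)), 2) if r[i] > r[j])

def chebyshev(p, q):
    """Chebyshev (l-infinity) distance: maximum displacement of any element."""
    pos_p = {v: i for i, v in enumerate(p)}
    pos_q = {v: i for i, v in enumerate(q)}
    return max(abs(pos_p[v] - pos_q[v]) for v in pos_p)

identity = [0, 1, 2, 3]
swapped  = [1, 0, 3, 2]   # two adjacent transpositions
print(kendall_tau(identity, swapped))  # 2
print(chebyshev(identity, swapped))    # 1
```

The Kendall tau distance matches the comparison-count view of approximate sorting, while the Chebyshev distance matches the "no element ranked too far from its true position" view, which is why the abstract treats the two regimes separately.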
1401.3098 | Interference Alignment (IA) and Coordinated Multi-Point (CoMP) overheads
and RF impairments: testbed results | cs.IT math.IT | In this work we investigate the network MIMO techniques of interference
alignment (IA) and fully adaptive joint transmission coordinated multipoint
(CoMP) in an indoor very small cell environment. Our focus is on the overheads
in a system with quantized channel state feedback from the receiver to the
transmitter (based on the 802.11ac standard) and on the impact of non-ideal
hardware. The indoor office scenario should be the most favourable case in
terms of the required feedback rates due to the large coherence bandwidth and
coherence time of the channel. The evaluations are done using a real-world
wireless testbed with three BSs and three MSs all having two antennas. The
signal-to-noise ratio in the measurements is very high, 35-60 dB, due to the
short transmission range. Under such conditions radio hardware impairments
become a major limitation on performance, and we quantify their impact. For a
23 ms update interval the overhead is 2.5%, and IA and CoMP improve the sum
throughput by 27% and 47% on average (over reference schemes such as TDMA
MIMO) under stationary conditions. When two people are walking in the
measurement area, the throughput improvements drop to 16% and 45%,
respectively.
|
1401.3112 | Achieving Low-Complexity Maximum-Likelihood Detection for the 3D MIMO
Code | cs.IT cs.NI math.IT | The 3D MIMO code is a robust and efficient space-time block code (STBC) for
the distributed MIMO broadcasting but suffers from high maximum-likelihood (ML)
decoding complexity. In this paper, we first analyze some properties of the 3D
MIMO code to show that the 3D MIMO code is fast-decodable. It is proved that
the ML decoding performance can be achieved with a complexity of O(M^{4.5})
instead of O(M^8) in quasi-static channels with M-ary square QAM modulation.
Consequently, we propose a simplified ML decoder exploiting the unique
properties of 3D MIMO code. Simulation results show that the proposed
simplified ML decoder can achieve much lower processing time latency compared
to the classical sphere decoder with Schnorr-Euchner enumeration.
|
1401.3126 | Exploiting all phone media? A multidimensional network analysis of phone
users' sociality | cs.SI physics.soc-ph | The growing awareness that human communications and social interactions are
assuming a stratified structure, due to the availability of multiple
techno-communication channels, including online social networks, mobile phone
calls, short messages (SMS) and e-mails, has recently led to the study of
multidimensional networks, a step beyond classical Social Network Analysis. A
few papers have been dedicated to developing the theoretical framework for
such multiplex networks and to analyzing some examples of multidimensional
social networks. In this context we perform the first study of
the multiplex mobile social network, gathered from the records of both call and
text message activities of millions of users of a large mobile phone operator
over a period of 12 weeks. While social networks constructed from mobile phone
datasets have drawn great attention in recent years, studies so far have dealt
with text message and call data separately, providing only a partial view of
the sociality people express on the phone. Here we analyze how the call and
the text message dimensions overlap, showing how much information about links
and nodes can be lost by accounting for only a single layer, and how users
adopt different media channels to interact with their neighborhood.
|
1401.3127 | From Polar to Reed-Muller Codes: a Technique to Improve the
Finite-Length Performance | cs.IT math.IT | We explore the relationship between polar and RM codes and we describe a
coding scheme which improves upon the performance of the standard polar code at
practical block lengths. Our starting point is the experimental observation
that RM codes have a smaller error probability than polar codes under MAP
decoding. This motivates us to introduce a family of codes that "interpolates"
between RM and polar codes, which we call ${\mathcal C}_{\rm inter} =
\{C_{\alpha} : \alpha \in [0, 1]\}$, where $C_{\alpha} \big |_{\alpha = 1}$ is
the original polar code, and $C_{\alpha} \big |_{\alpha = 0}$ is an RM code.
Based on numerical observations, we remark that the error probability under MAP
decoding is an increasing function of $\alpha$. MAP decoding has in general
exponential complexity, but empirically the performance of polar codes at
finite block lengths is boosted by moving along the family ${\mathcal C}_{\rm
inter}$ even under low-complexity decoding schemes such as, for instance,
belief propagation or successive cancellation list decoder. We demonstrate the
performance gain via numerical simulations for transmission over the erasure
channel as well as the Gaussian channel.
|
1401.3129 | Cancellation of Power Amplifier Induced Nonlinear Self-Interference in
Full-Duplex Transceivers | cs.IT cs.NI math.IT | Recently, full-duplex (FD) communications with simultaneous transmission and
reception on the same channel has been proposed. The FD receiver, however,
suffers from inevitable self-interference (SI) from the much more powerful
transmit signal. Analogue radio-frequency (RF) and baseband, as well as digital
baseband, cancellation techniques have been proposed for suppressing the SI,
but so far most of the studies have failed to take into account the inherent
nonlinearities of the transmitter and receiver front-ends. To fill this gap,
this article proposes a novel digital nonlinear interference cancellation
technique to mitigate the power amplifier (PA) induced nonlinear SI in a FD
transceiver. The technique is based on modeling the nonlinear SI channel, which
is comprised of the nonlinear PA, the linear multipath SI channel, and the RF
SI canceller, with a parallel Hammerstein nonlinearity. Stemming from the
modeling, and appropriate parameter estimation, the known transmit data is then
processed with the developed nonlinear parallel Hammerstein structure and
suppressed from the receiver path at digital baseband. The results illustrate
that with a given IIP3 figure for the PA, the proposed technique enables higher
transmit power to be used compared to existing linear SI cancellation methods.
Alternatively, for a given maximum transmit power level, a lower-quality PA
(i.e., lower IIP3) can be used.
|
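As a rough illustration of the cancellation idea in the abstract above, the sketch below uses a drastically simplified, memoryless third-order PA model (the paper models a parallel Hammerstein nonlinearity with memory); all function names and coefficient values are illustrative, not from the paper:

```python
# Regenerate and subtract PA-induced nonlinear self-interference (SI).
# Toy memoryless PA model: y = a1*x + a3*x*|x|^2  (third-order nonlinearity).
def pa(x, a1=1.0, a3=-0.05):
    return [a1 * s + a3 * s * abs(s) ** 2 for s in x]

def estimate_a3(x, r, a1=1.0):
    """Least-squares fit of the cubic coefficient from known transmit data x
    and received self-interference r (a1 assumed known for simplicity)."""
    basis = [s * abs(s) ** 2 for s in x]
    num = sum(b.conjugate() * (ri - a1 * si) for b, ri, si in zip(basis, r, x))
    den = sum(abs(b) ** 2 for b in basis)
    return num / den

tx = [complex(c, -c / 2) for c in (0.5, 1.0, -1.5, 2.0)]   # known transmit data
rx = pa(tx)                                 # received SI (noise-free toy case)
a3_hat = estimate_a3(tx, rx)
# regenerate the nonlinear SI with the estimated model and subtract it
residual = [ri - (si + a3_hat * si * abs(si) ** 2) for ri, si in zip(rx, tx)]
```

In this noise-free toy the residual SI after subtraction is essentially zero; the paper's digital canceller additionally has to estimate the multipath SI channel and the RF canceller response jointly with the PA nonlinearity.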
1401.3146 | The Blackwell relation defines no lattice | cs.IT math.IT math.ST stat.TH | Blackwell's theorem shows the equivalence of two preorders on the set of
information channels. Here, we restate, and slightly generalize, his result in
terms of random variables. Furthermore, we prove that the corresponding partial
order is not a lattice; that is, least upper bounds and greatest lower bounds
do not exist.
|
1401.3148 | Dynamic Topology Adaptation and Distributed Estimation for Smart Grids | cs.IT cs.LG math.IT | This paper presents new dynamic topology adaptation strategies for
distributed estimation in smart grid systems. We propose a dynamic exhaustive
search-based topology adaptation algorithm and a dynamic sparsity-inspired
topology adaptation algorithm, which can exploit the topology of smart grids
with poor-quality links and obtain performance gains. We incorporate an
optimized combining rule, known as the Hastings rule, into our proposed
dynamic topology adaptation algorithms. Compared with existing work in the
literature on distributed estimation, the proposed algorithms have a better
convergence rate and significantly improve the system performance. The
performance of the proposed algorithms is compared with that of existing
algorithms in the IEEE 14-bus system.
|
1401.3168 | On the Design of Relay--Assisted Primary--Secondary Networks | cs.IT cs.NI math.IT | The use of $N$ cognitive relays to assist primary and secondary transmissions
in a time-slotted cognitive setting with one primary user (PU) and one
secondary user (SU) is investigated. An overlapped spectrum sensing strategy is
proposed for channel sensing, where the SU senses the channel for $\tau$
seconds from the beginning of the time slot and the cognitive relays sense the
channel for $2 \tau$ seconds from the beginning of the time slot, thus
providing the SU with an intrinsic priority over the relays. The relays sense
the channel over the interval $[0,\tau]$ to detect primary activity and over
the interval $[\tau,2\tau]$ to detect secondary activity. The relays help both
the PU and SU to deliver their undelivered packets and transmit when both are
idle. Two optimization-based formulations with quality of service constraints
involving queueing delay are studied. Both cases of perfect and imperfect
spectrum sensing are investigated. These results show the benefits of relaying
and its ability to enhance both primary and secondary performance, especially
in the case of no direct link between the PU and the SU transmitters and their
respective receivers. Three packet decoding strategies at the relays are also
investigated and their performance is compared.
|
1401.3174 | Comments on "Optimal Utilization of a Cognitive Shared Channel with a
Rechargeable Primary Source Node" | cs.IT cs.NI math.IT | In a recent paper [1], the authors investigated the maximum stable throughput
region of a network composed of a rechargeable primary user and a secondary
user plugged to a reliable power supply. The authors studied the cases of an
infinite and a finite energy queue at the primary transmitter. However, the
results of the finite case are incorrect. We show that under the proposed
energy queue model (a decoupled ${\rm M/D/1}$ queueing system with Bernoulli
arrivals and the consumption of one energy packet per time slot), the energy
queue capacity does not affect the stability region of the network.
|
1401.3189 | Asymmetric Compute-and-Forward with CSIT | cs.IT math.IT | We present a modified compute-and-forward scheme which utilizes Channel State
Information at the Transmitters (CSIT) in a natural way. The modified scheme
allows different users to have different coding rates, and uses CSIT to
achieve a larger rate region. This idea is applicable to all systems that use the
compute-and-forward technique and can be arbitrarily better than the regular
scheme in some settings.
|
1401.3198 | Online Markov decision processes with Kullback-Leibler control cost | math.OC cs.LG cs.SY | This paper considers an online (real-time) control problem that involves an
agent performing a discrete-time random walk over a finite state space. The
agent's action at each time step is to specify the probability distribution for
the next state given the current state. Following the set-up of Todorov, the
state-action cost at each time step is a sum of a state cost and a control cost
given by the Kullback-Leibler (KL) divergence between the agent's next-state
distribution and that determined by some fixed passive dynamics. The online
aspect of the problem is due to the fact that the state cost functions are
generated by a dynamic environment, and the agent learns the current state cost
only after selecting an action. An explicit construction of a computationally
efficient strategy with small regret (i.e., expected difference between its
actual total cost and the smallest cost attainable using noncausal knowledge of
the state costs) under mild regularity conditions is presented, along with a
demonstration of the performance of the proposed strategy on a simulated target
tracking problem. A number of new results on Markov decision processes with KL
control cost are also obtained.
|
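The KL-control cost structure described in the abstract above has a well-known closed form in Todorov's framework: the one-step optimal next-state distribution tilts the passive dynamics by the exponentiated state cost. The sketch below shows only this one-step special case (not the paper's online regret-minimizing strategy); the function name is illustrative:

```python
import math

def kl_optimal_step(passive, cost):
    """One-step optimal next-state distribution for state cost + KL control
    cost: p*(x') is proportional to passive(x') * exp(-cost(x'))
    (exponential tilting of the passive dynamics)."""
    w = [p * math.exp(-c) for p, c in zip(passive, cost)]
    z = sum(w)
    return [wi / z for wi in w]

passive = [0.25, 0.25, 0.25, 0.25]   # uniform passive dynamics over 4 states
cost    = [0.0, 1.0, 1.0, 5.0]       # state costs revealed by the environment
p = kl_optimal_step(passive, cost)
# probability mass shifts toward low-cost states, by an amount that trades off
# the state cost against the KL penalty for deviating from the passive dynamics
```

In the online setting of the paper the costs change every step and are revealed only after the action is chosen, which is what makes regret the right performance measure.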
1401.3201 | Privacy Preserving Social Network Publication Against Mutual Friend
Attacks | cs.DB cs.CR cs.SI | Publishing social network data for research purposes has raised serious
concerns for individual privacy. There exist many privacy-preserving works that
can deal with different attack models. In this paper, we introduce a novel
privacy attack model and refer to it as a mutual friend attack. In this model, the
adversary can re-identify a pair of friends by using their number of mutual
friends. To address this issue, we propose a new anonymity concept, called
k-NMF anonymity, i.e., k-anonymity on the number of mutual friends, which
ensures that there exist at least k-1 other friend pairs in the graph that
share the same number of mutual friends. We devise algorithms to achieve the
k-NMF anonymity while preserving the original vertex set in the sense that we
allow the occasional addition but no deletion of vertices. Further we give an
algorithm to ensure the k-degree anonymity in addition to the k-NMF anonymity.
The experimental results on real-world datasets demonstrate that our approach
can preserve the privacy and utility of social networks effectively against
mutual friend attacks.
|
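The mutual-friend statistic at the heart of the attack model above is straightforward to compute; a minimal sketch with an adjacency dict of sets (the anonymization algorithms themselves are the paper's contribution and are not reproduced here; function names are illustrative):

```python
from collections import Counter

def mutual_friend_counts(adj):
    """Number of mutual friends for every friend pair (undirected graph)."""
    return {(u, v): len(adj[u] & adj[v])
            for u in adj for v in adj[u] if u < v}

def is_k_nmf_anonymous(adj, k):
    """k-NMF anonymity: every mutual-friend count occurring in the graph must
    be shared by at least k friend pairs."""
    hist = Counter(mutual_friend_counts(adj).values())
    return all(n >= k for n in hist.values())

# triangle 1-2-3 with a pendant vertex 4
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
# edge (1,2) has one mutual friend (vertex 3); edge (3,4) has none,
# so the count 0 occurs for only one pair and 2-NMF anonymity fails
```

An adversary who knows a pair shares, say, exactly 0 mutual friends can then re-identify edge (3,4) uniquely, which is precisely what k-NMF anonymity rules out.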
1401.3202 | Capacity bounds for MIMO microwave backhaul links affected by phase
noise | cs.IT math.IT | We present bounds and a closed-form high-SNR expression for the capacity of
multiple-antenna systems affected by Wiener phase noise. Our results are
developed for the scenario where a single oscillator drives all the
radio-frequency circuitries at each transceiver (common oscillator setup), the
input signal is subject to a peak-power constraint, and the channel matrix is
deterministic. This scenario is relevant for line-of-sight multiple-antenna
microwave backhaul links with sufficiently small antenna spacing at the
transceivers. For the 2 by 2 multiple-antenna case, for a Wiener phase-noise
process with standard deviation equal to 6 degrees, and at the medium/high SNR
values at which microwave backhaul links operate, the upper bound reported in
the paper exhibits a 3 dB gap from a lower bound obtained using 64-QAM.
Furthermore, in this SNR regime the closed-form high-SNR expression is shown to
be accurate.
|
1401.3215 | Constructions of Pure Asymmetric Quantum Alternant Codes Based on
Subclasses of Alternant Codes | cs.IT math.IT | In this paper, we construct asymmetric quantum error-correcting codes (AQCs)
based on subclasses of Alternant codes. First, we propose a new subclass of
Alternant codes which can attain the classical Gilbert-Varshamov bound to
construct AQCs. It is shown that when $d_x=2$, $Z$-parts of the AQCs can attain
the classical Gilbert-Varshamov bound. Then we construct AQCs based on a famous
subclass of Alternant codes called Goppa codes. As an illustrative example, we
get three $[[55,6,19/4]],[[55,10,19/3]],[[55,15,19/2]]$ AQCs from the
well-known $[55,16,19]$ binary Goppa code. Finally, we obtain asymptotically good
binary expansions of asymmetric quantum GRS codes, which are quantum
generalizations of Retter's classical results. All the AQCs constructed in this
paper are pure.
|
1401.3222 | Uncovering nodes that spread information between communities in social
networks | cs.SI physics.soc-ph | In many datasets gathered from online social networks, well-defined community
structures have been observed. A large number of users participate in these
networks and the size of the resulting graphs poses computational challenges.
There is a particular demand in identifying the nodes responsible for
information flow between communities; for example, in temporal Twitter networks
edges between communities play a key role in propagating spikes of activity
when the connectivity between communities is sparse and few edges exist between
different clusters of nodes. The new algorithm proposed here is aimed at
revealing these key connections by measuring a node's vicinity to nodes of
another community. We look at the nodes which have edges in more than one
community and the locality of nodes around them which influence the information
received and broadcasted to them. The method relies on independent random walks
of a chosen fixed number of steps, originating from nodes with edges in more
than one community. For the large networks that we have in mind, existing
measures such as betweenness centrality are difficult to compute, even with
recent methods that approximate the large number of operations required. We
therefore design an algorithm that scales up to the demand of current big data
requirements and has the ability to harness parallel processing capabilities.
The new algorithm is illustrated on synthetic data, where results can be
judged carefully, and also on a real, large-scale Twitter activity dataset,
where new insights can be gained.
|
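The measurement described in the abstract above, short independent random walks seeded at nodes whose edges span more than one community, can be sketched as follows on a toy graph (the partition, walk counts and function names are illustrative, not the authors' implementation):

```python
import random

def boundary_visit_scores(adj, community, walks=200, steps=3, seed=1):
    """Count node visits from short independent random walks started at nodes
    whose neighbourhood spans more than one community."""
    rng = random.Random(seed)
    seeds = [u for u in adj
             if len({community[v] for v in adj[u]} | {community[u]}) > 1]
    visits = {u: 0 for u in adj}
    for s in seeds:
        for _ in range(walks):
            u = s
            for _ in range(steps):
                u = rng.choice(sorted(adj[u]))   # uniform step to a neighbour
                visits[u] += 1
    return seeds, visits

# two triangles bridged by the single edge (2, 3)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
community = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
seeds, visits = boundary_visit_scores(adj, community)
```

Because each walk is independent and of fixed length, the seeds can be distributed across workers, which is the scalability property the abstract contrasts with betweenness centrality.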
1401.3225 | Cyclic Interference Alignment and Cancellation in 3-User X-Networks with
Minimal Backhaul | cs.IT math.IT | We consider the problem of Cyclic Interference Alignment (IA) on the 3-user
X-network and show that it is infeasible to exactly achieve the upper bound of
$\frac{K^2}{2K-1}=\frac{9}{5}$ degrees of freedom for the lower bound of $n=5$
signalling dimensions and $K=3$ user-pairs. This infeasibility goes beyond the
problem of common eigenvectors in invariant subspaces within spatial IA.
In order to gain non-asymptotic feasibility with minimal intervention, we
first investigate an alignment strategy that enables IA by feedforwarding a
subset of messages with minimal rate. In a second step, we replace the proposed
feedforward strategy by an analogous Cyclic Interference Alignment and
Cancellation scheme with a backhaul network on the receiver side and also by a
dual Cyclic Interference Neutralization scheme with a backhaul network on the
transmitter side.
|
1401.3230 | Optimization Of Cross Domain Sentiment Analysis Using Sentiwordnet | cs.CL cs.IR | Sentiment analysis of reviews is typically carried out with manually built or
automatically generated lexicon resources, against which terms are matched to
compute term counts for positive and negative polarity. SentiWordNet differs
from other lexicon resources in that it assigns scores (weights) for the
positive and negative polarity of each word. Each polarity of a word, namely
positive, negative and neutral, has a score between 0 and 1 that indicates
the strength of the word in that sentiment orientation. In this paper, we
show how SentiWordNet can be used to enhance classification performance at
both the sentence and document levels.
|
1401.3250 | Half-Duplex Relaying for the Multiuser Channel | cs.IT math.IT | This work focuses on studying the half-duplex (HD) relaying in the Multiple
Access Relay Channel (MARC) and the Compound Multiple Access Channel with a
Relay (cMACr). A generalized Quantize-and-Forward (GQF) scheme is proposed to
establish the achievable rate regions. The scheme is developed based on a
variation of the Quantize-and-Forward (QF) scheme and a single-block, two-slot
coding structure. The results in this paper can also be considered a
significant extension of the achievable rate region of the Half-Duplex Relay
Channel (HDRC). Furthermore, the rate regions based on the GQF scheme are
extended to the Gaussian channel case. The performance of the scheme is shown
through numerical examples.
|
1401.3258 | A Boosting Approach to Learning Graph Representations | cs.LG cs.SI stat.ML | Learning the right graph representation from noisy, multisource data has
garnered significant interest in recent years. A central tenet of this problem
is relational learning. Here the objective is to incorporate the partial
information each data source gives us in a way that captures the true
underlying relationships. To address this challenge, we present a general,
boosting-inspired framework for combining weak evidence of entity associations
into a robust similarity metric. We explore the extent to which different
quality measurements yield graph representations that are suitable for
community detection. We then present empirical results on both synthetic and
real datasets demonstrating the utility of this framework. Our framework leads
to suitable global graph representations from quality measurements local to
each edge. Finally, we discuss future extensions and theoretical considerations
of learning useful graph representations from weak feedback in general
application settings.
|
1401.3277 | A Novel Rate Control Algorithm for Onboard Predictive Coding of
Multispectral and Hyperspectral Images | cs.IT math.IT | Predictive coding is attractive for onboard compression in spacecraft thanks
to its low computational complexity, modest memory requirements and the ability
to accurately control quality on a pixel-by-pixel basis. Traditionally,
predictive compression focused on the lossless and near-lossless modes of
operation where the maximum error can be bounded but the rate of the compressed
image is variable. Rate control is considered a challenging problem for
predictive encoders due to the dependencies between quantization and prediction
in the feedback loop, and the lack of a signal representation that packs the
signal's energy into few coefficients. In this paper, we show that it is
possible to design a rate control scheme intended for onboard implementation.
In particular, we propose a general framework to select quantizers in each
spatial and spectral region of an image so as to achieve the desired target
rate while minimizing distortion. The rate control algorithm makes it possible
to achieve lossy and near-lossless compression, as well as any in-between type
of compression, e.g., lossy compression with a near-lossless constraint. While
this framework is independent of the specific predictor used, in order to show
its performance, in this paper we tailor it to the predictor adopted by the
CCSDS-123 lossless compression standard, obtaining an extension that can
perform lossless, near-lossless and lossy compression in a single package. We
show that the rate
controller has excellent performance in terms of accuracy in the output rate,
rate-distortion characteristics and is extremely competitive with respect to
state-of-the-art transform coding.
|
1401.3322 | A Subband-Based SVM Front-End for Robust ASR | cs.CL cs.LG cs.SD | This work proposes a novel support vector machine (SVM) based robust
automatic speech recognition (ASR) front-end that operates on an ensemble of
the subband components of high-dimensional acoustic waveforms. The key issues
of selecting the appropriate SVM kernels for classification in frequency
subbands and the combination of individual subband classifiers using ensemble
methods are addressed. The proposed front-end is compared with state-of-the-art
ASR front-ends in terms of robustness to additive noise and linear filtering.
Experiments performed on the TIMIT phoneme classification task demonstrate the
benefits of the proposed subband based SVM front-end: it outperforms the
standard cepstral front-end in the presence of noise and linear filtering for
signal-to-noise ratio (SNR) below 12 dB. A combination of the proposed
front-end with a conventional front-end such as MFCC yields further
improvements over the individual front ends across the full range of noise
levels.
|
1401.3357 | Back-pressure traffic signal control with unknown routing rates | cs.SY | The control of a network of signalized intersections is considered. Previous
works proposed a feedback control belonging to the family of the so-called
back-pressure controls that ensures provably maximum stability given
pre-specified routing probabilities. However, this optimal back-pressure
controller (BP*) requires routing rates and a measure of the number of vehicles
queuing at a node for each possible routing decision. This is an idealistic
assumption for our application, since vehicles (going straight, turning
left/right) are all gathered in the same lane except near the intersection,
and cameras can only give estimates of the aggregated queue length. In this
paper, we present a back-pressure traffic signal controller (BP) that does not
require routing rates; it requires only aggregated queue-length estimates
(without direction information) and loop detectors at the stop line for each
possible direction. A theoretical result on the Lyapunov
drift in heavy load conditions under BP control is provided and tends to
indicate that BP should have good stability properties. Simulations confirm
this and show that BP stabilizes the queuing network in a significant part of
the capacity region.
|
1401.3372 | Learning Language from a Large (Unannotated) Corpus | cs.CL cs.LG | A novel approach to the fully automated, unsupervised extraction of
dependency grammars and associated syntax-to-semantic-relationship mappings
from large text corpora is described. The suggested approach builds on the
authors' prior work with the Link Grammar, RelEx and OpenCog systems, as well
as on a number of prior papers and approaches from the statistical language
learning literature. If successful, this approach would enable the mining of
all the information needed to power a natural language comprehension and
generation system, directly from a large, unannotated corpus.
|
1401.3375 | Flexible Backhaul Design and Degrees of Freedom for Linear Interference
Networks | cs.IT math.IT | The considered problem is that of maximizing the degrees of freedom (DoF) in
cellular downlink, under a backhaul load constraint that limits the number of
messages that can be delivered from a centralized controller to the base
station transmitters. A linear interference channel model is considered, where
each transmitter is connected to the receiver having the same index as well as
one succeeding receiver. The backhaul load is defined as the sum of all the
messages available at all the transmitters normalized by the number of users.
When the backhaul load is constrained to an integer level B, the asymptotic per
user DoF is shown to equal (4B-1)/(4B), and it is shown that the optimal
assignment of messages to transmitters is asymmetric and satisfies a local
cooperation constraint and that the optimal coding scheme relies only on
zero-forcing transmit beamforming. Finally, an extension of the presented
coding scheme is shown to apply for more general locally connected and
two-dimensional networks.
|
1401.3376 | Across neighbourhood search for numerical optimization | cs.NE | Population-based search algorithms (PBSAs), including swarm intelligence
algorithms (SIAs) and evolutionary algorithms (EAs), are competitive
alternatives for solving complex optimization problems and they have been
widely applied to real-world optimization problems in different fields. In this
study, a novel population-based across neighbourhood search (ANS) is proposed
for numerical optimization. ANS is motivated by two straightforward assumptions
and three important issues raised in improving and designing efficient PBSAs.
In ANS, a group of individuals collaboratively search the solution space for an
optimal solution of the optimization problem considered. A collection of
superior solutions found by individuals so far is maintained and updated
dynamically. At each generation, an individual directly searches across the
neighbourhoods of multiple superior solutions with the guidance of a Gaussian
distribution. This search manner is referred to as across neighbourhood search.
The characteristics of ANS are discussed, and conceptual comparisons with other
PBSAs are given. The principle behind ANS is simple. Moreover, ANS is easy to
implement and apply, with only three parameters to tune.
Extensive experiments on 18 benchmark optimization functions of different types
show that ANS has well balanced exploration and exploitation capabilities and
performs competitively compared with many efficient PBSAs (Related Matlab codes
used in the experiments are available from
http://guohuawunudt.gotoip2.com/publications.html).
|
1401.3381 | Promises, Impositions, and other Directionals | cs.MA | Promises, impositions, proposals, predictions, and suggestions are
categorized as voluntary co-operational methods. The class of voluntary
co-operational methods is included in the class of so-called directionals.
Directionals are mechanisms supporting the mutual coordination of autonomous
agents.
Notations are provided capable of expressing residual fragments of
directionals. An extensive example, involving promises about the suitability of
programs for tasks imposed on the promisee, is presented. The example
illustrates the dynamics of promises and more specifically the corresponding
mechanism of trust updating and credibility updating. Trust levels and
credibility levels then determine the way certain promises and impositions are
handled.
The ubiquity of promises and impositions is further demonstrated with two
extensive examples involving human behaviour: an artificial example about an
agent planning a purchase, and a realistic example describing
technology-mediated interaction concerning the solution of problems related to
pay station failures, arising for an agent intending to leave the parking area.
|
1401.3385 | A programme to determine the exact interior of any connected digital
picture | cs.CG cs.CV cs.GR | Region filling is one of the most important and fundamental operations in
computer graphics and image processing. Many filling algorithms and their
implementations are based on the Euclidean geometry, which are then translated
into computational models, moving carelessly from the continuous to the finite
discrete space of the computer. The consequence of this approach is that most
implementations fail when tested for challenging degenerate and nearly
degenerate regions. We present a correct integer-only procedure that works for
all connected digital pictures. It finds all possible interior points, which
are then displayed and stored in a locating matrix. Namely, we present a
filling and locating procedure that can be used in computer graphics and image
processing applications.
|
1401.3387 | Maximum Throughput of a Cooperative Energy Harvesting Cognitive Radio
User | cs.IT cs.NI math.IT | In this paper, we investigate the maximum throughput of a saturated
rechargeable secondary user (SU) sharing the spectrum with a primary user (PU).
The SU harvests energy packets (tokens) from the environment with a certain
harvesting rate. All transmitters are assumed to have data buffers to store the
incoming data packets. In addition to its own traffic buffer, the SU has a
buffer for storing the admitted primary packets for relaying; and a buffer for
storing the energy tokens harvested from the environment. We propose a new
cooperative cognitive relaying protocol that allows the SU to relay a fraction
of the undelivered primary packets. We consider an interference channel model
(or a multipacket reception (MPR) channel model), where concurrent
transmissions can survive interference with a certain probability
characterized by the complement of channel outages. The proposed protocol
exploits the primary queue burstiness and receivers' MPR capability. In
addition, it efficiently expends the secondary energy tokens under the
objective of secondary throughput maximization. Our numerical results show the
benefits of cooperation, receivers' MPR capability, and secondary energy queue
arrival rate on the system performance from a network layer standpoint.
|
1401.3390 | Binary Classifier Calibration: Non-parametric approach | stat.ML cs.LG | Accurate calibration of learned probabilistic predictive models is critical
for many practical prediction and decision-making tasks. There are two main
categories of methods for building calibrated classifiers. One approach is to
develop methods for learning probabilistic models that are well-calibrated, ab
initio. The other approach is to use some post-processing methods for
transforming the output of a classifier to be well calibrated, as for example
histogram binning, Platt scaling, and isotonic regression. One advantage of the
post-processing approach is that it can be applied to any existing
probabilistic classification model that was constructed using any
machine-learning method.
In this paper, we first introduce two measures for evaluating how well a
classifier is calibrated. We prove three theorems showing that using a simple
histogram binning post-processing method, it is possible to make a classifier
well calibrated while retaining its discrimination capability. Also, by
casting the histogram binning method as a density-based non-parametric binary
classifier, we can extend it using two simple non-parametric density estimation
methods. We demonstrate the performance of the proposed calibration methods on
synthetic and real datasets. Experimental results show that the proposed
methods either outperform or are comparable to existing calibration methods.
|
1401.3409 | Low-Rank Modeling and Its Applications in Image Analysis | cs.CV cs.LG stat.ML | Low-rank modeling generally refers to a class of methods that solve problems
by representing variables of interest as low-rank matrices. It has achieved
great success in various fields including computer vision, data mining, signal
processing and bioinformatics. Recently, much progress has been made in
theories, algorithms and applications of low-rank modeling, such as exact
low-rank matrix recovery via convex programming and matrix completion applied
to collaborative filtering. These advances have drawn increasing attention to
this topic. In this paper, we review recent advances in
low-rank modeling, the state-of-the-art algorithms, and related applications in
image analysis. We first give an overview of the concept of low-rank modeling
and challenging problems in this area. Then, we summarize the models and
algorithms for low-rank matrix recovery and illustrate their advantages and
limitations with numerical experiments. Next, we introduce a few applications
of low-rank modeling in the context of image analysis. Finally, we conclude
this paper with some discussions.
|
1401.3410 | Effect of ISI Mitigation on Modulation Techniques in Communication via
Diffusion | cs.ET cs.IT math.IT | Communication via diffusion (CvD) is an effective and energy efficient method
for transmitting information in nanonetworks. In this work, we focus on a
diffusion-based communication system where the reception process is an
absorption via receptors. Whenever a molecule hits the receiver, it is removed
from the environment. This kind of reception process is called a first-passage
process, and it is more complicated than the diffusion process alone. In 3-D
environments, obtaining an analytical solution for the hitting time
distribution in realistic cases is complicated; hence, we develop an
end-to-end simulator for the diffusion-based communication system that sends
consecutive symbols.
  In CvD, each symbol is modulated and demodulated in a time slot called the
symbol duration; however, the long-tailed distribution of the hitting time is
the main challenge affecting symbol detection error. The molecules arriving in
the following slots become an interference source when detection takes place.
The end-to-end simulator enables us to analyze the effect of inter-symbol
interference (ISI) without making any assumptions about the ISI. We propose an
ISI cancellation technique that utilizes decision feedback to compensate for
the effect of the previously demodulated symbol. Three different modulation
types are considered, with pulse, square, and cosine carrier waves. When there
are constraints on the transmitter or receiver node, it may not be possible to
use a pulse as the carrier, and a peak-to-average messenger molecule metric is
defined for this purpose. Results show that the proposed ISI mitigation
technique improves symbol detection performance, and that amplitude-based
modulations improve more than frequency-based modulations.
|
1401.3413 | Infinite Mixed Membership Matrix Factorization | cs.LG cs.IR | Rating and recommendation systems have become a popular area for
applying a suite of machine learning techniques. Current approaches rely
primarily on probabilistic interpretations and extensions of matrix
factorization, which factorizes a user-item ratings matrix into latent user and
item vectors. Most of these methods fail to model significant variations in
item ratings from otherwise similar users, a phenomenon known as the "Napoleon
Dynamite" effect. Recent efforts have addressed this problem by adding a
contextual bias term to the rating, which captures the mood under which a user
rates an item or the context in which an item is rated by a user. In this work,
we extend this model in a nonparametric sense by learning the optimal number of
moods or contexts from the data, and derive Gibbs sampling inference procedures
for our model. We evaluate our approach on the MovieLens 1M dataset, and show
significant improvements over the optimal parametric baseline, more than twice
the improvements previously encountered for this task. We also extract and
evaluate a DBLP dataset, wherein we predict the number of papers co-authored by
two authors, and present improvements over the parametric baseline on this
alternative domain as well.
|
1401.3420 | Democratic Representations | cs.IT math.IT | Minimization of the $\ell_{\infty}$ (or maximum) norm subject to a constraint
that imposes consistency to an underdetermined system of linear equations finds
use in a large number of practical applications, including vector quantization,
approximate nearest neighbor search, peak-to-average power ratio (or "crest
factor") reduction in communication systems, and peak force minimization in
robotics and control. This paper analyzes the fundamental properties of signal
representations obtained by solving such a convex optimization problem. We
develop bounds on the maximum magnitude of such representations using the
uncertainty principle (UP) introduced by Lyubarskii and Vershynin, and study
the efficacy of $\ell_{\infty}$-norm-based dynamic range reduction. Our
analysis shows that matrices satisfying the UP, such as randomly subsampled
Fourier or i.i.d. Gaussian matrices, enable the computation of what we call
democratic representations, whose entries all have small and similar magnitude,
as well as low dynamic range. To compute democratic representations at low
computational complexity, we present two new, efficient convex optimization
algorithms. We finally demonstrate the efficacy of democratic representations
for dynamic range reduction in a DVB-T2-based broadcast system.
|
1401.3426 | Networks of Influence Diagrams: A Formalism for Representing Agents'
Beliefs and Decision-Making Processes | cs.GT cs.AI | This paper presents Networks of Influence Diagrams (NID), a compact, natural
and highly expressive language for reasoning about agents' beliefs and
decision-making processes. NIDs are graphical structures in which agents'
mental models are represented as nodes in a network; a mental model for an agent may
itself use descriptions of the mental models of other agents. NIDs are
demonstrated by examples, showing how they can be used to describe conflicting
and cyclic belief structures, and certain forms of bounded rationality. In an
opponent modeling domain, NIDs were able to outperform other computational
agents whose strategies were not known in advance. NIDs are equivalent in
representation to Bayesian games but they are more compact and structured than
this formalism. In particular, the equilibrium definition for NIDs makes an
explicit distinction between agents' optimal strategies and how they actually
behave in reality.
|
1401.3427 | Analogical Dissimilarity: Definition, Algorithms and Two Experiments in
Machine Learning | cs.LG cs.AI | This paper defines the notion of analogical dissimilarity between four
objects, with a special focus on objects structured as sequences. Firstly, it
studies the case where the four objects have a null analogical dissimilarity,
i.e. are in analogical proportion. Secondly, when one of these objects is
unknown, it gives algorithms to compute it. Thirdly, it tackles the problem of
defining analogical dissimilarity, which is a measure of how far four objects
are from being in analogical proportion. In particular, when objects are
sequences, it gives a definition and an algorithm based on an optimal alignment
of the four sequences. It also gives learning algorithms, i.e. methods to find
the triple of objects in a learning sample which has the least analogical
dissimilarity with a given object. Two practical experiments are described: the
first is a classification problem on benchmarks of binary and nominal data, the
second shows how the generation of sequences by solving analogical equations
enables a handwritten character recognition system to rapidly be adapted to a
new writer.
|
1401.3428 | A Heuristic Search Approach to Planning with Continuous Resources in
Stochastic Domains | cs.AI | We consider the problem of optimal planning in stochastic domains with
resource constraints, where the resources are continuous and the choice of
action at each step depends on resource availability. We introduce the HAO*
algorithm, a generalization of the AO* algorithm that performs search in a
hybrid state space that is modeled using both discrete and continuous state
variables, where the continuous variables represent monotonic resources. Like
other heuristic search algorithms, HAO* leverages knowledge of the start state
and an admissible heuristic to focus computational effort on those parts of the
state space that could be reached from the start state by following an optimal
policy. We show that this approach is especially effective when resource
constraints limit how much of the state space is reachable. Experimental
results demonstrate its effectiveness in the domain that motivates our
research: automated planning for planetary exploration rovers.
|
1401.3429 | Latent Tree Models and Approximate Inference in Bayesian Networks | cs.LG | We propose a novel method for approximate inference in Bayesian networks
(BNs). The idea is to sample data from a BN, learn a latent tree model (LTM)
from the data offline, and when online, make inference with the LTM instead of
the original BN. Because LTMs are tree-structured, inference takes linear time.
At the same time, they can represent complex relationships among leaf nodes,
and hence the approximation accuracy is often good.
our method can achieve good approximation accuracy at low online computational
cost.
|
1401.3430 | A Unifying Framework for Structural Properties of CSPs: Definitions,
Complexity, Tractability | cs.AI cs.LO | The literature on constraint satisfaction defines several structural
properties that CSPs can possess, such as (in)consistency,
substitutability or interchangeability. Current tools for constraint solving
typically detect such properties efficiently by means of incomplete yet
effective algorithms, and use them to reduce the search space and boost search.
In this paper, we provide a unifying framework encompassing most of the
properties known so far, both in the CSP literature and in other fields, and
shed light on the semantic relationships among them. This gives a unified and
comprehensive view of the topic, allows new, previously unknown properties to
emerge, and clarifies the computational complexity of the various detection
problems.
In particular, two new concepts, fixability and removability, emerge, which
turn out to be the ideal characterisations of values that may be safely
assigned to or removed from a variable's domain while preserving problem
satisfiability. These two notions subsume a large number of
known properties, including inconsistency, substitutability and others.
Because of the computational intractability of all the property-detection
problems, by following the CSP approach we then determine a number of
relaxations which provide sufficient conditions for their tractability. In
particular, we exploit forms of language restrictions and local reasoning.
|
1401.3431 | Compositional Belief Update | cs.AI | In this paper we explore a class of belief update operators, in which the
definition of the operator is compositional with respect to the sentence to be
added. The goal is to provide an update operator that is intuitive, in that its
definition is based on a recursive decomposition of the update sentence's
structure, and that may be reasonably implemented. In addressing update, we
first provide a definition phrased in terms of the models of a knowledge base.
While this operator satisfies a core group of the benchmark Katsuno-Mendelzon
update postulates, not all of the postulates are satisfied. Other
Katsuno-Mendelzon postulates can be obtained by suitably restricting the
syntactic form of the sentence for update, as we show. In restricting the
syntactic form of the sentence for update, we also obtain a hierarchy of update
operators with Winslett's standard semantics as the most basic interesting
approach captured. We subsequently give an algorithm which captures this
approach; in the general case the algorithm is exponential, but with some
not-unreasonable assumptions we obtain an algorithm that is linear in the size
of the knowledge base. Hence the resulting approach has much better complexity
characteristics than other operators in some situations. We also explore other
compositional belief change operators: erasure is developed as a dual operator
to update; we show that a forget operator is definable in terms of update; and
we give a definition of the compositional revision operator. We obtain that
compositional revision, under the most natural definition, yields the Satoh
revision operator.
|
1401.3432 | A Rigorously Bayesian Beam Model and an Adaptive Full Scan Model for
Range Finders in Dynamic Environments | cs.AI cs.LG | This paper proposes and experimentally validates a Bayesian network model of
a range finder adapted to dynamic environments. All modeling assumptions are
rigorously explained, and all model parameters have a physical interpretation.
This approach results in a transparent and intuitive model. With respect to the
state-of-the-art beam model, this paper: (i) proposes a different functional
form for the probability of range measurements caused by unmodeled objects,
(ii) intuitively explains the discontinuity encountered in the state-of-the-art
beam model, and (iii) reduces the number of model parameters while maintaining
the same representational power for experimental data. The proposed beam model
is called RBBM, short for Rigorously Bayesian Beam Model. A maximum likelihood
and a variational Bayesian estimator (both based on expectation-maximization)
are proposed to learn the model parameters.
Furthermore, the RBBM is extended to a full scan model in two steps: first,
to a full scan model for static environments and next, to a full scan model for
general, dynamic environments. The full scan model accounts for the dependency
between beams and adapts to the local sample density when using a particle
filter. In contrast to Gaussian-based state-of-the-art models, the proposed
full scan model uses a sample-based approximation. This sample-based
approximation enables handling dynamic environments and capturing
multi-modality, which occurs even in simple static environments.
|
1401.3434 | Adaptive Stochastic Resource Control: A Machine Learning Approach | cs.LG | The paper investigates stochastic resource allocation problems with scarce,
reusable resources and non-preemptive, time-dependent, interconnected tasks.
This approach is a natural generalization of several standard resource
management problems, such as scheduling and transportation problems. First,
reactive solutions are considered and defined as control policies of suitably
reformulated Markov decision processes (MDPs). We argue that this reformulation
has several favorable properties: it has finite state and action spaces, it is
aperiodic (hence all policies are proper), and the space of control policies
can be safely restricted. Next, approximate dynamic programming (ADP)
methods, such as fitted Q-learning, are suggested for computing an efficient
control policy. In order to compactly maintain the cost-to-go function, two
representations are studied: hash tables and support vector regression (SVR),
particularly, nu-SVRs. Several additional improvements, such as the application
of limited-lookahead rollout algorithms in the initial phases, action space
decomposition, task clustering and distributed sampling are investigated, too.
Finally, experimental results on both benchmark and industry-related data are
presented.
|
1401.3436 | Online Planning Algorithms for POMDPs | cs.AI | Partially Observable Markov Decision Processes (POMDPs) provide a rich
framework for sequential decision-making under uncertainty in stochastic
domains. However, solving a POMDP is often intractable except for small
problems, due to its complexity. Here, we focus on online approaches that
alleviate the computational complexity by computing good local policies at each
decision step during the execution. Online algorithms generally consist of a
lookahead search to find the best action to execute at each time step in an
environment. Our objectives here are to survey the various existing online
POMDP methods, analyze their properties and discuss their advantages and
disadvantages; and to thoroughly evaluate these online approaches in different
environments under various metrics (return, error bound reduction, lower bound
improvement). Our experimental results indicate that state-of-the-art online
heuristic search methods can handle large POMDP domains efficiently.
|
1401.3437 | Learning Partially Observable Deterministic Action Models | cs.AI | We present exact algorithms for identifying deterministic actions' effects and
preconditions in dynamic partially observable domains. They apply when one does
not know the action model (the way actions affect the world) of a domain and
must learn it from partial observations over time. Such scenarios are common in
real world applications. They are challenging for AI tasks because traditional
domain structures that underlie tractability (e.g., conditional independence)
fail there (e.g., world features become correlated). Our work departs from
traditional assumptions about partial observations and action models. In
particular, it focuses on problems in which actions are deterministic and of
simple logical structure, and observation models have all features observed with some
frequency. We yield tractable algorithms for the modified problem for such
domains.
Our algorithms take sequences of partial observations over time as input, and
output deterministic action models that could have led to those observations.
The algorithms output all or one of those models (depending on our choice), and
are exact in that no model is misclassified given the observations. Our
algorithms take polynomial time in the number of time steps and state features
for some traditional action classes examined in the AI-planning literature,
e.g., STRIPS actions. In contrast, traditional approaches for HMMs and
Reinforcement Learning are inexact and exponentially intractable for such
domains. Our experiments verify the theoretical tractability guarantees, and
show that we identify action models exactly. Several applications in planning,
autonomous exploration, and adventure-game playing already use these results.
They are also promising for probabilistic settings, partially observable
reinforcement learning, and diagnosis.
|
1401.3438 | The Ultrametric Constraint and its Application to Phylogenetics | cs.AI | A phylogenetic tree shows the evolutionary relationships among species.
Internal nodes of the tree represent speciation events and leaf nodes
correspond to species. A goal of phylogenetics is to combine such trees into
larger trees, called supertrees, whilst respecting the relationships in the
original trees. A rooted tree exhibits an ultrametric property; that is, for
any three leaves of the tree it must be that one pair has a deeper most recent
common ancestor than the other pairs, or that all three have the same most
recent common ancestor. This inspires a constraint programming encoding for
rooted trees. We present an efficient constraint that enforces the ultrametric
property over a symmetric array of constrained integer variables, with the
inevitable property that the lower bounds of any three variables are mutually
supportive. We show that this allows an efficient constraint-based solution to
the supertree construction problem. We demonstrate that the versatility of
constraint programming can be exploited to allow solutions to variants of the
supertree construction problem.
|
1401.3439 | Interactive Policy Learning through Confidence-Based Autonomy | cs.AI | We present Confidence-Based Autonomy (CBA), an interactive algorithm for
policy learning from demonstration. The CBA algorithm consists of two
components which take advantage of the complementary abilities of humans and
computer agents. The first component, Confident Execution, enables the agent to
identify states in which demonstration is required, to request a demonstration
from the human teacher and to learn a policy based on the acquired data. The
algorithm selects demonstrations based on a measure of action selection
confidence, and our results show that using Confident Execution the agent
requires fewer demonstrations to learn the policy than when demonstrations are
selected by a human teacher. The second algorithmic component, Corrective
Demonstration, enables the teacher to correct any mistakes made by the agent
through additional demonstrations in order to improve the policy and future
task performance. CBA and its individual components are compared and evaluated
in a complex simulated driving domain. The complete CBA algorithm results in
the best overall learning performance, successfully reproducing the behavior of
the teacher while balancing the tradeoff between number of demonstrations and
number of incorrect actions during learning.
|
1401.3441 | Transductive Rademacher Complexity and its Applications | cs.LG cs.AI stat.ML | We develop a technique for deriving data-dependent error bounds for
transductive learning algorithms based on transductive Rademacher complexity.
Our technique is based on a novel general error bound for transduction in terms
of transductive Rademacher complexity, together with a novel bounding technique
for Rademacher averages for particular algorithms, in terms of their
"unlabeled-labeled" representation. This technique is relevant to many advanced
graph-based transductive algorithms and we demonstrate its effectiveness by
deriving error bounds for three well-known algorithms. Finally, we present a new
PAC-Bayesian bound for mixtures of transductive algorithms based on our
Rademacher bounds.
|
1401.3442 | Asynchronous Forward Bounding for Distributed COPs | cs.AI | A new search algorithm for solving distributed constraint optimization
problems (DisCOPs) is presented. Agents assign variables sequentially and
compute bounds on partial assignments asynchronously. The asynchronous bounds
computation is based on the propagation of partial assignments. The
asynchronous forward-bounding algorithm (AFB) is a distributed optimization
search algorithm that keeps one consistent partial assignment at all times. The
algorithm is described in detail and its correctness proven. Experimental
evaluation shows that AFB outperforms synchronous branch and bound by many
orders of magnitude, and produces a phase transition as the tightness of the
problem increases. This is an analogous effect to the phase transition that has
been observed when local consistency maintenance is applied to MaxCSPs. The AFB
algorithm is further enhanced by the addition of a backjumping mechanism,
resulting in the AFB-BJ algorithm. Distributed backjumping is based on
accumulated information on bounds of all values and on processing concurrently
a queue of candidate goals for the next move back. The AFB-BJ algorithm is
compared experimentally to other DisCOP algorithms (ADOPT, DPOP, OptAPO) and is
shown to be a very efficient algorithm for DisCOPs.
|
1401.3443 | Computational Logic Foundations of KGP Agents | cs.AI | This paper presents the computational logic foundations of a model of agency
called the KGP (Knowledge, Goals and Plan) model. This model allows the
specification of heterogeneous agents that can interact with each other, and
can exhibit both proactive and reactive behaviour allowing them to function in
dynamic environments by adjusting their goals and plans when changes happen in
such environments. KGP provides a highly modular agent architecture that
integrates a collection of reasoning and physical capabilities, synthesised
within transitions that update the agent's state in response to reasoning,
sensing and acting. Transitions are orchestrated by cycle theories that specify
the order in which transitions are executed while taking into account the
dynamic context and agent preferences, as well as selection operators for
providing inputs to transitions.
|
1401.3444 | On the Qualitative Comparison of Decisions Having Positive and Negative
Features | cs.AI | Making a decision is often a matter of listing and comparing positive and
negative arguments. In such cases, the evaluation scale for decisions should be
considered bipolar, that is, negative and positive values should be explicitly
distinguished. That is what is done, for example, in Cumulative Prospect
Theory. However, contrary to the latter framework, which presupposes genuine
numerical assessments, human agents often decide on the basis of an ordinal
ranking of the pros and the cons, and by focusing on the most salient
arguments. In other terms, the decision process is qualitative as well as
bipolar. In this article, based on a bipolar extension of possibility theory,
we define and axiomatically characterize several decision rules tailored for
the joint handling of positive and negative arguments in an ordinal setting.
The simplest rules can be viewed as extensions of the maximin and maximax
criteria to the bipolar case, and consequently suffer from poor decisive power.
More decisive rules that refine the former are also proposed. These refinements
agree both with principles of efficiency and with the spirit of
order-of-magnitude reasoning, that prevails in qualitative decision theory. The
most refined decision rule uses leximin rankings of the pros and the cons, and
the ideas of counting arguments of equal strength and cancelling pros by cons.
It is shown to come down to a special case of Cumulative Prospect Theory, and
to subsume the Take the Best heuristic studied by cognitive psychologists.
|
1401.3446 | Amino Acid Interaction Network Prediction using Multi-objective
Optimization | cs.CE cs.NE | A protein can be represented by an amino acid interaction network. This network
is a graph whose vertices are the protein's amino acids and whose edges are the
interactions between them. This interaction network is the first step of
protein three-dimensional structure prediction. In this paper we present a
multi-objective evolutionary algorithm for interaction prediction; an ant
colony probabilistic optimization algorithm is used to confirm the interaction.
|
1401.3447 | Anytime Induction of Low-cost, Low-error Classifiers: a Sampling-based
Approach | cs.LG | Machine learning techniques are gaining prevalence in the production of a
wide range of classifiers for complex real-world applications with nonuniform
testing and misclassification costs. The increasing complexity of these
applications poses a real challenge to resource management during learning and
classification. In this work we introduce ACT (anytime cost-sensitive tree
learner), a novel framework for operating in such complex environments. ACT is
an anytime algorithm that allows learning time to be increased in return for
lower classification costs. It builds a tree top-down and exploits additional
time resources to obtain better estimations for the utility of the different
candidate splits. Using sampling techniques, ACT approximates the cost of the
subtree under each candidate split and favors the one with a minimal cost. As a
stochastic algorithm, ACT is expected to be able to escape local minima, into
which greedy methods may be trapped. Experiments with a variety of datasets
were conducted to compare ACT to the state-of-the-art cost-sensitive tree
learners. The results show that for the majority of domains ACT produces
significantly less costly trees. ACT also exhibits good anytime behavior with
diminishing returns.
|
1401.3448 | AND/OR Multi-Valued Decision Diagrams (AOMDDs) for Graphical Models | cs.AI | Inspired by the recently introduced framework of AND/OR search spaces for
graphical models, we propose to augment Multi-Valued Decision Diagrams (MDD)
with AND nodes, in order to capture function decomposition structure and to
extend these compiled data structures to general weighted graphical models
(e.g., probabilistic models). We present the AND/OR Multi-Valued Decision
Diagram (AOMDD) which compiles a graphical model into a canonical form that
supports polynomial (e.g., solution counting, belief updating) or constant time
(e.g., equivalence of graphical models) queries. We provide two algorithms for
compiling the AOMDD of a graphical model. The first is search-based, and works
by applying reduction rules to the trace of the memory intensive AND/OR search
algorithm. The second is inference-based and uses a Bucket Elimination schedule
to combine the AOMDDs of the input functions via the APPLY operator. For
both algorithms, the compilation time and the size of the AOMDD are, in the
worst case, exponential in the treewidth of the graphical model, rather than
pathwidth as is known for ordered binary decision diagrams (OBDDs). We
introduce the concept of semantic treewidth, which helps explain why the size
of a decision diagram is often much smaller than the worst case bound. We
provide an experimental evaluation that demonstrates the potential of AOMDDs.
|
1401.3450 | Completeness and Performance Of The APO Algorithm | cs.AI | Asynchronous Partial Overlay (APO) is a search algorithm that uses
cooperative mediation to solve Distributed Constraint Satisfaction Problems
(DisCSPs). The algorithm partitions the search into different subproblems of
the DisCSP. The original proof of completeness of the APO algorithm is based on
the growth of the size of the subproblems. The present paper demonstrates that
this expected growth of subproblems does not occur in some situations, leading
to a termination problem of the algorithm. The problematic parts in the APO
algorithm that interfere with its completeness are identified and necessary
modifications to the algorithm that fix these problematic parts are given. The
resulting version of the algorithm, Complete Asynchronous Partial Overlay
(CompAPO), ensures its completeness. Formal proofs for the soundness and
completeness of CompAPO are given. A detailed performance evaluation of CompAPO
comparing it to other DisCSP algorithms is presented, along with an extensive
experimental evaluation of the algorithm's unique behavior. Additionally, an
optimization version of the algorithm, CompOptAPO, is presented, discussed, and
evaluated.
|
1401.3453 | The Computational Complexity of Dominance and Consistency in CP-Nets | cs.AI | We investigate the computational complexity of testing dominance and
consistency in CP-nets. Previously, the complexity of dominance has been
determined for restricted classes in which the dependency graph of the CP-net
is acyclic. However, there are preferences of interest that define cyclic
dependency graphs; these are modeled with general CP-nets. In our main results,
we show here that both dominance and consistency for general CP-nets are
PSPACE-complete. We then consider the concept of strong dominance, dominance
equivalence and dominance incomparability, and several notions of optimality,
and identify the complexity of the corresponding decision problems. The
reductions used in the proofs are from STRIPS planning, and thus reinforce the
earlier established connections between both areas.
|
1401.3454 | A Multiagent Reinforcement Learning Algorithm with Non-linear Dynamics | cs.LG cs.MA | Several multiagent reinforcement learning (MARL) algorithms have been
proposed to optimize agents' decisions. Due to the complexity of the problem,
the majority of the previously developed MARL algorithms assumed agents either
had some knowledge of the underlying game (such as Nash equilibria) and/or
observed other agents' actions and the rewards they received.
We introduce a new MARL algorithm called the Weighted Policy Learner (WPL),
which allows agents to reach a Nash Equilibrium (NE) in benchmark
2-player-2-action games with minimum knowledge. Using WPL, the only feedback an
agent needs is its own local reward (the agent does not observe other agents'
actions or rewards). Furthermore, WPL does not assume that agents know the
underlying game or the corresponding Nash Equilibrium a priori. We
experimentally show that our algorithm converges in benchmark
two-player-two-action games. We also show that our algorithm converges in the
challenging Shapley's game, where previous MARL algorithms failed to converge
without knowing the underlying game or the NE. Furthermore, we show that WPL
outperforms the state-of-the-art algorithms in a more realistic setting of 100
agents interacting and learning concurrently.
An important aspect of understanding the behavior of a MARL algorithm is
analyzing the dynamics of the algorithm: how the policies of multiple learning
agents evolve over time as agents interact with one another. Such an analysis
not only verifies whether agents using a given MARL algorithm will eventually
converge, but also reveals the behavior of the MARL algorithm prior to
convergence. We analyze our algorithm in two-player-two-action games and show
that symbolically proving WPL's convergence is difficult, because of the
non-linear nature of WPL's dynamics, unlike previous MARL algorithms that had
either linear or piece-wise-linear dynamics. Instead, we numerically solve WPL's
dynamics differential equations and compare the solution to the dynamics of
previous MARL algorithms.
|
1401.3455 | Monte Carlo Sampling Methods for Approximating Interactive POMDPs | cs.AI | Partially observable Markov decision processes (POMDPs) provide a principled
framework for sequential planning in uncertain single agent settings. An
extension of POMDPs to multiagent settings, called interactive POMDPs
(I-POMDPs), replaces POMDP belief spaces with interactive hierarchical belief
systems which represent an agent's belief about the physical world, about
beliefs of other agents, and about their beliefs about others' beliefs. This
modification makes the difficulties of obtaining solutions due to the complexity of
the belief and policy spaces even more acute. We describe a general method for
obtaining approximate solutions of I-POMDPs based on particle filtering (PF).
We introduce the interactive PF, which descends the levels of the interactive
belief hierarchies and samples and propagates beliefs at each level. The
interactive PF is able to mitigate the belief space complexity, but it does not
address the policy space complexity. To mitigate the policy space complexity --
sometimes also called the curse of history -- we utilize a complementary method
based on sampling likely observations while building the look ahead
reachability tree. While this approach does not completely address the curse of
history, it beats back the curse's impact substantially. We provide
experimental results and chart future work.
|
1401.3457 | Learning Document-Level Semantic Properties from Free-Text Annotations | cs.CL cs.IR | This paper presents a new method for inferring the semantic properties of
documents by leveraging free-text keyphrase annotations. Such annotations are
becoming increasingly abundant due to the recent dramatic growth in
semi-structured, user-generated online content. One especially relevant domain
is product reviews, which are often annotated by their authors with pros/cons
keyphrases such as "a real bargain" or "good value". These annotations are
representative of the underlying semantic properties; however, unlike expert
annotations, they are noisy: lay authors may use different labels to denote the
same property, and some labels may be missing. To learn using such noisy
annotations, we find a hidden paraphrase structure which clusters the
keyphrases. The paraphrase structure is linked with a latent topic model of the
review texts, enabling the system to predict the properties of unannotated
documents and to effectively aggregate the semantic properties of multiple
reviews. Our approach is implemented as a hierarchical Bayesian model with
joint inference. We find that joint inference increases the robustness of the
keyphrase clustering and encourages the latent topics to correlate with
semantically meaningful properties. Multiple evaluations demonstrate that our
model substantially outperforms alternative approaches for summarizing single
and multiple documents into a set of semantically salient keyphrases.
|
1401.3458 | Solving #SAT and Bayesian Inference with Backtracking Search | cs.AI | Inference in Bayes Nets (BAYES) is an important problem with numerous
applications in probabilistic reasoning. Counting the number of satisfying
assignments of a propositional formula (#SAT) is a closely related problem of
fundamental theoretical importance. Both these problems, and others, are
members of the class of sum-of-products (SUMPROD) problems. In this paper we
show that standard backtracking search when augmented with a simple memoization
scheme (caching) can solve any sum-of-products problem with time complexity
that is at least as good as that of any other state-of-the-art exact algorithm, and that
it can also achieve the best known time-space tradeoff. Furthermore,
backtracking's ability to utilize more flexible variable orderings allows us to
prove that it can achieve an exponential speedup over other standard algorithms
for SUMPROD on some instances.
The ideas presented here have been utilized in a number of solvers that have
been applied to various types of sum-of-products problems. These systems have
exploited the fact that backtracking can naturally exploit more of the
problem's structure to achieve improved performance on a range of
problem instances. Empirical evidence of this performance gain has appeared in
published works describing these solvers, and we provide references to these
works.
|
1401.3459 | Generic Preferences over Subsets of Structured Objects | cs.AI | Various tasks in decision making and decision support systems require
selecting a preferred subset of a given set of items. Here we focus on problems
where the individual items are described using a set of characterizing
attributes, and a generic preference specification is required, that is, a
specification that can work with an arbitrary set of items. For example,
preferences over the content of an online newspaper should have this form: At
each viewing, the newspaper contains a subset of the set of articles currently
available. Our preference specification over this subset should be provided
offline, but we should be able to use it to select a subset of any currently
available set of articles, e.g., based on their tags. We present a general
approach for lifting formalisms for specifying preferences over objects with
multiple attributes into ones that specify preferences over subsets of such
objects. We also show how we can compute an optimal subset given such a
specification in a relatively efficient manner. We provide an empirical
evaluation of the approach as well as some worst-case complexity results.
|
1401.3460 | Policy Iteration for Decentralized Control of Markov Decision Processes | cs.AI | Coordination of distributed agents is required for problems arising in many
areas, including multi-robot systems, networking and e-commerce. As a formal
framework for such problems, we use the decentralized partially observable
Markov decision process (DEC-POMDP). Though much work has been done on optimal
dynamic programming algorithms for the single-agent version of the problem,
optimal algorithms for the multiagent case have been elusive. The main
contribution of this paper is an optimal policy iteration algorithm for solving
DEC-POMDPs. The algorithm uses stochastic finite-state controllers to represent
policies. The solution can include a correlation device, which allows agents to
correlate their actions without communicating. This approach alternates between
expanding the controller and performing value-preserving transformations, which
modify the controller without sacrificing value. We present two efficient
value-preserving transformations: one can reduce the size of the controller and
the other can improve its value while keeping the size fixed. Empirical results
demonstrate the usefulness of value-preserving transformations in increasing
value while keeping controller size to a minimum. To broaden the applicability
of the approach, we also present a heuristic version of the policy iteration
algorithm, which sacrifices convergence to optimality. This algorithm further
reduces the size of the controllers at each step by assuming that probability
distributions over the other agents' actions are known. While this assumption
may not hold in general, it helps produce higher quality solutions in our test
problems.
|
1401.3461 | A Bilinear Programming Approach for Multiagent Planning | cs.AI | Multiagent planning and coordination problems are common and known to be
computationally hard. We show that a wide range of two-agent problems can be
formulated as bilinear programs. We present a successive approximation
algorithm that significantly outperforms the coverage set algorithm, which is
the state-of-the-art method for this class of multiagent problems. Because the
algorithm is formulated for bilinear programs, it is more general and simpler
to implement. The new algorithm can be terminated at any time and, unlike the
coverage set algorithm, it facilitates the derivation of a useful online
performance bound. It is also much more efficient, on average reducing the
computation time of the optimal solution by about four orders of magnitude.
Finally, we introduce an automatic dimensionality reduction method that
improves the effectiveness of the algorithm, extending its applicability to new
domains and providing a new way to analyze a subclass of bilinear programs.
|
1401.3462 | Efficient Informative Sensing using Multiple Robots | cs.RO cs.AI | The need for efficient monitoring of spatio-temporal dynamics in large
environmental applications, such as the water quality monitoring in rivers and
lakes, motivates the use of robotic sensors in order to achieve sufficient
spatial coverage. Typically, these robots have bounded resources, such as
limited battery or limited amounts of time to obtain measurements. Thus,
careful coordination of their paths is required in order to maximize the amount
of information collected, while respecting the resource constraints. In this
paper, we present an efficient approach for near-optimally solving the NP-hard
optimization problem of planning such informative paths. In particular, we
first develop eSIP (efficient Single-robot Informative Path planning), an
approximation algorithm for optimizing the path of a single robot. Hereby, we
use a Gaussian Process to model the underlying phenomenon, and use the mutual
information between the visited locations and remainder of the space to
quantify the amount of information collected. We prove that the mutual
information collected using paths obtained by using eSIP is close to the
information obtained by an optimal solution. We then provide a general
technique, sequential allocation, which can be used to extend any single robot
planning algorithm, such as eSIP, for the multi-robot problem. This procedure
approximately generalizes any guarantees for the single-robot problem to the
multi-robot case. We extensively evaluate the effectiveness of our approach on
several experiments performed in-field for two important environmental sensing
applications, lake and river monitoring, and simulation experiments performed
using several real world sensor network data sets.
|
1401.3463 | Automated Reasoning in Modal and Description Logics via SAT Encoding:
the Case Study of K(m)/ALC-Satisfiability | cs.LO cs.AI | In the last two decades, modal and description logics have been applied to
numerous areas of computer science, including knowledge representation, formal
verification, database theory, distributed computing and, more recently,
semantic web and ontologies. For this reason, the problem of automated
reasoning in modal and description logics has been thoroughly investigated. In
particular, many approaches have been proposed for efficiently handling the
satisfiability of the core normal modal logic K(m), and of its notational
variant, the description logic ALC. Although simple in structure, K(m)/ALC is
computationally very hard to reason on, its satisfiability being
PSPACE-complete.
In this paper we start exploring the idea of performing automated reasoning
tasks in modal and description logics by encoding them into SAT, so that they
can be handled by state-of-the-art SAT tools; as with most previous approaches, we
begin our investigation from the satisfiability in K(m). We propose an
efficient encoding, and we test it on an extensive set of benchmarks, comparing
the approach with the main state-of-the-art tools available. Although the
encoding is necessarily worst-case exponential, from our experiments we notice
that, in practice, this approach can handle most or all the problems which are
at the reach of the other approaches, with performances which are comparable
with, or even better than, those of the current state-of-the-art tools.
|
1401.3464 | Learning Bayesian Network Equivalence Classes with Ant Colony
Optimization | cs.NE cs.AI cs.LG | Bayesian networks are a useful tool in the representation of uncertain
knowledge. This paper proposes a new algorithm called ACO-E, to learn the
structure of a Bayesian network. It does this by conducting a search through
the space of equivalence classes of Bayesian networks using Ant Colony
Optimization (ACO). To this end, two novel extensions of traditional ACO
techniques are proposed and implemented. Firstly, multiple types of moves are
allowed. Secondly, moves can be given in terms of indices that are not based on
construction graph nodes. The results of testing show that ACO-E performs
better than a greedy search and other state-of-the-art and metaheuristic
algorithms whilst searching in the space of equivalence classes.
|
1401.3465 | Learning to Reach Agreement in a Continuous Ultimatum Game | cs.GT cs.SI physics.soc-ph | It is well-known that acting in an individually rational manner, according to
the principles of classical game theory, may lead to sub-optimal solutions in a
class of problems named social dilemmas. In contrast, humans generally do not
have much difficulty with social dilemmas, as they are able to balance personal
benefit and group benefit. As agents in multi-agent systems are regularly
confronted with social dilemmas, for instance in tasks such as resource
allocation, these agents may benefit from the inclusion of mechanisms thought
to facilitate human fairness. Although many such mechanisms have already
been implemented in a multi-agent systems context, their application is usually
limited to rather abstract social dilemmas with a discrete set of available
strategies (usually two). Given that many real-world examples of social
dilemmas are actually continuous in nature, we extend this previous work to
more general dilemmas, in which agents operate in a continuous strategy space.
The social dilemma under study here is the well-known Ultimatum Game, in which
an optimal solution is achieved if agents agree on a common strategy. We
investigate whether a scale-free interaction network facilitates agents to
reach agreement, especially in the presence of fixed-strategy agents that
represent a desired (e.g. human) outcome. Moreover, we study the influence of
rewiring in the interaction network. The agents are equipped with
continuous-action learning automata and play a large number of random pairwise
games in order to establish a common strategy. From our experiments, we may
conclude that results obtained in discrete-strategy games can be generalized to
continuous-strategy games to a certain extent: a scale-free interaction network
structure allows agents to achieve agreement on a common strategy, and rewiring
in the interaction network greatly enhances the agents' ability to reach
agreement. However, it also becomes clear that some alternative mechanisms,
such as reputation and volunteering, have many subtleties involved and do not
have convincing beneficial effects in the continuous case.
|
1401.3466 | An Anytime Algorithm for Optimal Coalition Structure Generation | cs.MA cs.AI | Coalition formation is a fundamental type of interaction that involves the
creation of coherent groupings of distinct, autonomous, agents in order to
efficiently achieve their individual or collective goals. Forming effective
coalitions is a major research challenge in the field of multi-agent systems.
Central to this endeavour is the problem of determining which of the many
possible coalitions to form in order to achieve some goal. This usually
requires calculating a value for every possible coalition, known as the
coalition value, which indicates how beneficial that coalition would be if it
was formed. Once these values are calculated, the agents usually need to find a
combination of coalitions, in which every agent belongs to exactly one
coalition, and by which the overall outcome of the system is maximized.
However, this coalition structure generation problem is extremely challenging
due to the number of possible solutions that need to be examined, which grows
exponentially with the number of agents involved. To date, therefore, many
algorithms have been proposed to solve this problem using different techniques
ranging from dynamic programming, to integer programming, to stochastic search
all of which suffer from major limitations relating to execution time, solution
quality, and memory requirements.
With this in mind, we develop an anytime algorithm to solve the coalition
structure generation problem. Specifically, the algorithm uses a novel
representation of the search space, which partitions the space of possible
solutions into sub-spaces such that it is possible to compute upper and lower
bounds on the values of the best coalition structures in them. These bounds are
then used to identify the sub-spaces that have no potential of containing the
optimal solution so that they can be pruned. The algorithm, then, searches
through the remaining sub-spaces very efficiently using a branch-and-bound
technique to avoid examining all the solutions within the searched subspace(s).
In this setting, we prove that our algorithm enumerates all coalition
structures efficiently by avoiding redundant and invalid solutions
automatically. Moreover, in order to effectively test our algorithm we develop
a new type of input distribution which allows us to generate more reliable
benchmarks compared to the input distributions previously used in the field.
Given this new distribution, we show that for 27 agents our algorithm is able
to find solutions that are optimal in 0.175% of the time required by the
fastest available algorithm in the literature. The algorithm is anytime, and if
interrupted before it would have normally terminated, it can still provide a
solution that is guaranteed to be within a bound from the optimal one.
Moreover, the guarantees we provide on the quality of the solution are
significantly better than those provided by the previous state of the art
algorithms designed for this purpose. For example, for the worst case
distribution given 25 agents, our algorithm is able to find a 90% efficient
solution in around 10% of time it takes to find the optimal solution.
|
1401.3467 | Planning over Chain Causal Graphs for Variables with Domains of Size 5
Is NP-Hard | cs.AI cs.CC | Recently, considerable focus has been given to the problem of determining the
boundary between tractable and intractable planning problems. In this paper, we
study the complexity of planning in the class C_n of planning problems,
characterized by unary operators and directed path causal graphs. Although this
is one of the simplest forms of causal graphs a planning problem can have, we
show that planning is intractable for C_n (unless P = NP), even if the domains
of state variables have bounded size. In particular, we show that plan
existence for C_n^k is NP-hard for k>=5 by reduction from CNFSAT. Here, k
denotes the upper bound on the size of the state variable domains. Our result
reduces the complexity gap for the class C_n^k to cases k=3 and k=4 only, since
C_n^2 is known to be tractable.
|
1401.3468 | Compiling Uncertainty Away in Conformant Planning Problems with Bounded
Width | cs.AI | Conformant planning is the problem of finding a sequence of actions for
achieving a goal in the presence of uncertainty in the initial state or action
effects. The problem has been approached as a path-finding problem in belief
space where good belief representations and heuristics are critical for scaling
up. In this work, a different formulation is introduced for conformant problems
with deterministic actions where they are automatically converted into
classical ones and solved by an off-the-shelf classical planner. The
translation maps literals L and sets of assumptions t about the initial
situation, into new literals KL/t that represent that L must be true if t is
initially true. We lay out a general translation scheme that is sound and
establish the conditions under which the translation is also complete. We show
that the complexity of the complete translation is exponential in a parameter
of the problem called the conformant width, which for most benchmarks is
bounded. The planner based on this translation exhibits good performance in
comparison with existing planners, and is the basis for T0, the best performing
planner in the Conformant Track of the 2006 International Planning Competition.
|
1401.3469 | Exploiting Single-Cycle Symmetries in Continuous Constraint Problems | cs.AI | Symmetries in discrete constraint satisfaction problems have been explored
and exploited in the last years, but symmetries in continuous constraint
problems have not received the same attention. Here we focus on permutations of
the variables consisting of one single cycle. We propose a procedure that takes
advantage of these symmetries by interacting with a continuous constraint
solver without interfering with it. A key concept in this procedure are the
classes of symmetric boxes formed by bisecting an n-dimensional cube at the same
point in all dimensions at the same time. We analyze these classes and quantify
them as a function of the cube dimensionality. Moreover, we propose a simple
algorithm to generate the representatives of all these classes for any number
of variables at very high rates. A problem example from the chemical
field and the cyclic n-roots problem are used to show the performance
of the approach in practice.
|
1401.3470 | Message-Based Web Service Composition, Integrity Constraints, and
Planning under Uncertainty: A New Connection | cs.AI | Thanks to recent advances, AI Planning has become the underlying technique
for several applications. Figuring prominently among these is automated Web
Service Composition (WSC) at the "capability" level, where services are
described in terms of preconditions and effects over ontological concepts. A
key issue in addressing WSC as planning is that ontologies are not only formal
vocabularies; they also axiomatize the possible relationships between concepts.
Such axioms correspond to what has been termed "integrity constraints" in the
actions and change literature, and applying a web service is essentially a
belief update operation. The reasoning required for belief update is known to
be harder than reasoning in the ontology itself. The support for belief update
is severely limited in current planning tools.
Our first contribution consists in identifying an interesting special case of
WSC which is both significant and more tractable. The special case, which we
term "forward effects", is characterized by the fact that every ramification of
a web service application involves at least one new constant generated as
output by the web service. We show that, in this setting, the reasoning
required for belief update simplifies to standard reasoning in the ontology
itself. This relates to, and extends, current notions of "message-based" WSC,
where the need for belief update is removed by a strong (often implicit or
informal) assumption of "locality" of the individual messages. We clarify the
computational properties of the forward effects case, and point out a strong
relation to standard notions of planning under uncertainty, suggesting that
effective tools for the latter can be successfully adapted to address the
former.
Furthermore, we identify a significant sub-case, named "strictly forward
effects", where an actual compilation into planning under uncertainty exists.
This enables us to exploit off-the-shelf planning tools to solve message-based
WSC in a general form that involves powerful ontologies, and requires reasoning
about partial matches between concepts. We provide empirical evidence that this
approach may be quite effective, using Conformant-FF as the underlying planner.
|
1401.3471 | Conservative Inference Rule for Uncertain Reasoning under Incompleteness | cs.AI | In this paper we formulate the problem of inference under incomplete
information in very general terms. This includes modelling the process
responsible for the incompleteness, which we call the incompleteness process.
We allow the process behaviour to be partly unknown. Then we use Walley's theory
of coherent lower previsions, a generalisation of the Bayesian theory to
imprecision, to derive the rule to update beliefs under incompleteness that
logically follows from our assumptions, and that we call conservative inference
rule. This rule has some remarkable properties: it is an abstract rule to
update beliefs that can be applied in any situation or domain; it gives us the
opportunity to be neither too optimistic nor too pessimistic about the
incompleteness process, which is a necessary condition to draw conclusions that
are both reliable and sufficiently strong; and it is a coherent rule, in the sense that it
cannot lead to inconsistencies. We give examples to show how the new rule can
be applied in expert systems, in parametric statistical inference, and in
pattern classification, and discuss more generally the view of incompleteness
processes defended here as well as some of its consequences.
|
1401.3472 | Variable Forgetting in Reasoning about Knowledge | cs.LO cs.AI | In this paper, we investigate knowledge reasoning within a simple framework
called knowledge structure. We use variable forgetting as a basic operation for
one agent to reason about its own or other agents' knowledge. In our framework,
two notions, namely agents' observable variables and the weakest sufficient
condition play important roles in knowledge reasoning. Given a background
knowledge base and a set of observable variables for each agent, we show that
the notion of an agent knowing a formula can be defined as a weakest sufficient
condition of the formula under background knowledge base. Moreover, we show how
to capture the notion of common knowledge by using a generalized notion of
weakest sufficient condition. Also, we show that public announcement operator
can be conveniently dealt with via our notion of knowledge structure. Further,
we explore the computational complexity of the problem whether an epistemic
formula is realized in a knowledge structure. In the general case, this problem
is PSPACE-hard; however, for some interesting subcases, it can be reduced to
co-NP. Finally, we discuss possible applications of our framework in some
interesting domains such as the automated analysis of the well-known muddy
children puzzle and the verification of the revised Needham-Schroeder protocol.
We believe that there are many scenarios where the natural presentation of the
available information about knowledge is under the form of a knowledge
structure. What makes it valuable compared with the corresponding multi-agent
S5 Kripke structure is that it can be much more succinct.
|
1401.3474 | Optimal Value of Information in Graphical Models | cs.AI | Many real-world decision making tasks require us to choose among several
expensive observations. In a sensor network, for example, it is important to
select the subset of sensors that is expected to provide the strongest
reduction in uncertainty. In medical decision making tasks, one needs to select
which tests to administer before deciding on the most effective treatment. It
has been general practice to use heuristic-guided procedures for selecting
observations. In this paper, we present the first efficient optimal algorithms
for selecting observations for a class of probabilistic graphical models. For
example, our algorithms allow us to optimally label hidden variables in Hidden
Markov Models (HMMs). We provide results for both selecting the optimal subset
of observations, and for obtaining an optimal conditional observation plan.
Furthermore, we prove a surprising result: in most graphical model tasks, if
one designs an efficient algorithm for chain graphs, such as HMMs, this
procedure can be generalized to polytree graphical models. We prove that
optimizing the value of information is $NP^{PP}$-hard even for polytrees. It also
follows from our results that just computing decision theoretic value of
information objective functions, which are commonly used in practice, is a
#P-complete problem even on Naive Bayes models (a simple special case of
polytrees).
In addition, we consider several extensions, such as using our algorithms for
scheduling observation selection for multiple sensors. We demonstrate the
effectiveness of our approach on several real-world datasets, including a
prototype sensor network deployment for energy conservation in buildings.
|
1401.3475 | Prime Implicates and Prime Implicants: From Propositional to Modal Logic | cs.LO cs.AI | Prime implicates and prime implicants have proven relevant to a number of
areas of artificial intelligence, most notably abductive reasoning and
knowledge compilation. The purpose of this paper is to examine how these
notions might be appropriately extended from propositional logic to the modal
logic K. We begin the paper by considering a number of potential definitions of
clauses and terms for K. The different definitions are evaluated with respect
to a set of syntactic, semantic, and complexity-theoretic properties
characteristic of the propositional definition. We then compare the definitions
with respect to the properties of the notions of prime implicates and prime
implicants that they induce. While there is no definition that perfectly
generalizes the propositional notions, we show that there does exist one
definition which satisfies many of the desirable properties of the
propositional case. In the second half of the paper, we consider the
computational properties of the selected definition. To this end, we provide
sound and complete algorithms for generating and recognizing prime implicates,
and we show the prime implicate recognition task to be PSPACE-complete. We also
prove upper and lower bounds on the size and number of prime implicates. While
the paper focuses on the logic K, all of our results hold equally well for
multi-modal K and for concept expressions in the description logic ALC.
|
1401.3476 | The Complexity of Circumscription in DLs | cs.LO cs.AI | As fragments of first-order logic, Description logics (DLs) do not provide
nonmonotonic features such as defeasible inheritance and default rules. Since
many applications would benefit from the availability of such features, several
families of nonmonotonic DLs have been developed that are mostly based on
default logic and autoepistemic logic. In this paper, we consider
circumscription as an interesting alternative approach to nonmonotonic DLs
that, in particular, supports defeasible inheritance in a natural way. We study
DLs extended with circumscription under different language restrictions and
under different constraints on the sets of minimized, fixed, and varying
predicates, and pinpoint the exact computational complexity of reasoning for
DLs ranging from ALC to ALCIO and ALCQO. When the minimized and fixed
predicates include only concept names but no role names, then reasoning is
complete for NExpTime^NP. It becomes complete for NP^NExpTime when the number
of minimized and fixed predicates is bounded by a constant. If roles can be
minimized or fixed, then complexity ranges from NExpTime^NP to undecidability.
|
1401.3477 | Solving Weighted Constraint Satisfaction Problems with Memetic/Exact
Hybrid Algorithms | cs.AI | A weighted constraint satisfaction problem (WCSP) is a constraint
satisfaction problem in which preferences among solutions can be expressed.
Bucket elimination is a complete technique commonly used to solve this kind of
constraint satisfaction problem. When the memory required to apply bucket
elimination is too high, a heuristic method based on it (denominated
mini-buckets) can be used to calculate bounds for the optimal solution.
Nevertheless, the curse of dimensionality makes these techniques impractical on
large scale problems. In response to this situation, we present a memetic
algorithm for WCSPs in which bucket elimination is used as a mechanism for
recombining solutions, providing the best possible child from the parental set.
Subsequently, a multi-level model in which this exact/metaheuristic hybrid is
further hybridized with branch-and-bound techniques and mini-buckets is
studied. As a case study, we have applied these algorithms to the resolution of
the maximum density still life problem, a hard constraint optimization problem
based on Conway's Game of Life. The resulting algorithm consistently finds
optimal patterns for all instances solved to date in less time than current
approaches. Moreover, it is shown that this proposal provides new best known
solutions for very large instances.
|
1401.3478 | Efficient Markov Network Structure Discovery Using Independence Tests | cs.LG cs.AI stat.ML | We present two algorithms for learning the structure of a Markov network from
data: GSMN* and GSIMN. Both algorithms use statistical independence tests to
infer the structure by successively constraining the set of structures
consistent with the results of these tests. Until very recently, algorithms for
structure learning were based on maximum likelihood estimation, which has been
proved to be NP-hard for Markov networks due to the difficulty of estimating
the parameters of the network, needed for the computation of the data
likelihood. The independence-based approach does not require the computation of
the likelihood, and thus both GSMN* and GSIMN can compute the structure
efficiently (as shown in our experiments). GSMN* is an adaptation of the
Grow-Shrink algorithm of Margaritis and Thrun for learning the structure of
Bayesian networks. GSIMN extends GSMN* by additionally exploiting Pearls
well-known properties of the conditional independence relation to infer novel
independences from known ones, thus avoiding the need to perform statistical
tests to estimate them. To accomplish this efficiently, GSIMN uses the Triangle
theorem, also introduced in this work, which is a simplified version of the set
of Markov axioms. Experimental comparisons on artificial and real-world data
sets show GSIMN can yield significant savings with respect to GSMN*, while
generating a Markov network with comparable or in some cases improved quality.
We also compare GSIMN to a forward-chaining implementation, called GSIMN-FCH,
that produces all possible conditional independences resulting from repeatedly
applying Pearl's theorems on the known conditional independence tests. The
results of this comparison show that GSIMN, by the sole use of the Triangle
theorem, is nearly optimal in terms of the set of independence tests that it
infers.
|
1401.3479 | Complex Question Answering: Unsupervised Learning Approaches and
Experiments | cs.CL cs.IR cs.LG | Complex questions that require inferencing and synthesizing information from
multiple documents can be seen as a kind of topic-oriented, informative
multi-document summarization where the goal is to produce a single text as a
compressed version of a set of documents with a minimum loss of relevant
information. In this paper, we experiment with one empirical method and two
unsupervised statistical machine learning techniques: K-means and Expectation
Maximization (EM), for computing relative importance of the sentences. We
compare the results of these approaches. Our experiments show that the
empirical approach outperforms the other two techniques and EM performs better
than K-means. However, the performance of these approaches depends entirely on
the feature set used and the weighting of these features. In order to measure
the importance and relevance to the user query we extract different kinds of
features (i.e. lexical, lexical semantic, cosine similarity, basic element,
tree kernel based syntactic and shallow-semantic) for each of the document
sentences. We use a local search technique to learn the weights of the
features. To the best of our knowledge, no study has used tree kernel functions
to encode syntactic/semantic information for more complex tasks such as
computing the relatedness between the query sentences and the document
sentences in order to generate query-focused summaries (or answers to complex
questions). For each of our methods of generating summaries (i.e. empirical,
K-means and EM) we show the effects of syntactic and shallow-semantic features
over the bag-of-words (BOW) features.
|
1401.3481 | Bounds Arc Consistency for Weighted CSPs | cs.AI | The Weighted Constraint Satisfaction Problem (WCSP) framework allows
representing and solving problems involving both hard constraints and cost
functions. It has been applied to various problems, including resource
allocation, bioinformatics, scheduling, etc. To solve such problems, solvers
usually rely on branch-and-bound algorithms equipped with local consistency
filtering, mostly soft arc consistency. However, these techniques are not well
suited to solve problems with very large domains. Motivated by the resolution
of an RNA gene localization problem inside large genomic sequences, and in the
spirit of bounds consistency for large domains in crisp CSPs, we introduce soft
bounds arc consistency, a new weighted local consistency specifically designed
for WCSP with very large domains. Compared to soft arc consistency, BAC
provides significantly improved time and space asymptotic complexity. In this
paper, we show how the semantics of cost functions can be exploited to further
improve the time complexity of BAC. We also compare both in theory and in
practice the efficiency of BAC on a WCSP with bounds consistency enforced on a
crisp CSP using cost variables. On two different real problems modeled as WCSP,
including our RNA gene localization problem, we observe that maintaining bounds
arc consistency outperforms arc consistency and also improves over bounds
consistency enforced on a constraint model with cost variables.
|
1401.3482 | Enhancing QA Systems with Complex Temporal Question Processing
Capabilities | cs.CL cs.AI cs.IR | This paper presents a multilayered architecture that enhances the
capabilities of current QA systems and allows different types of complex
questions or queries to be processed. The answers to these questions need to be
gathered from factual information scattered throughout different documents.
Specifically, we designed a specialized layer to process the different types of
temporal questions. Complex temporal questions are first decomposed into simple
questions, according to the temporal relations expressed in the original
question. In the same way, the answers to the resulting simple questions are
recomposed, fulfilling the temporal restrictions of the original complex
question. A novel aspect of this approach resides in the decomposition which
uses a minimal quantity of resources, with the final aim of obtaining a
portable platform that is easily extensible to other languages. In this paper
we also present a methodology for evaluation of the decomposition of the
questions as well as the ability of the implemented temporal layer to perform
at a multilingual level. The temporal layer was first implemented for English,
then evaluated and compared with: a) a general purpose QA system (F-measure
65.47% for QA plus English temporal layer vs. 38.01% for the general QA
system), and b) a well-known QA system. Much better results were obtained for
temporal questions with the multilayered system. This system was therefore
extended to Spanish and very good results were again obtained in the evaluation
(F-measure 40.36% for QA plus Spanish temporal layer vs. 22.94% for the general
QA system).
|
1401.3483 | Relaxed Survey Propagation for The Weighted Maximum Satisfiability
Problem | cs.AI | The survey propagation (SP) algorithm has been shown to work well on large
instances of the random 3-SAT problem near its phase transition. It was shown
that SP estimates marginals over covers that represent clusters of solutions.
The SP-y algorithm generalizes SP to work on the maximum satisfiability
(Max-SAT) problem, but the cover interpretation of SP does not generalize to
SP-y. In this paper, we formulate the relaxed survey propagation (RSP)
algorithm, which extends the SP algorithm to apply to the weighted Max-SAT
problem. We show that RSP has an interpretation of estimating marginals over
covers violating a set of clauses with minimal weight. This naturally
generalizes the cover interpretation of SP. Empirically, we show that RSP
outperforms SP-y and other state-of-the-art Max-SAT solvers on random Max-SAT
instances. RSP also outperforms state-of-the-art weighted Max-SAT solvers on
random weighted Max-SAT instances.
|
1401.3484 | Modularity Aspects of Disjunctive Stable Models | cs.LO cs.AI | Practically all programming languages allow the programmer to split a program
into several modules which brings along several advantages in software
development. In this paper, we are interested in the area of answer-set
programming where fully declarative and nonmonotonic languages are applied. In
this context, obtaining a modular structure for programs is by no means
straightforward since the output of an entire program cannot in general be
composed from the output of its components. To better understand the effects of
disjunctive information on modularity we restrict the scope of analysis to the
case of disjunctive logic programs (DLPs) subject to stable-model semantics. We
define the notion of a DLP-function, where a well-defined input/output
interface is provided, and establish a novel module theorem which indicates the
compositionality of stable-model semantics for DLP-functions. The module
theorem extends the well-known splitting-set theorem and enables the
decomposition of DLP-functions given their strongly connected components based
on positive dependencies induced by rules. In this setting, it is also possible
to split shared disjunctive rules among components using a generalized shifting
technique. The concept of modular equivalence is introduced for the mutual
comparison of DLP-functions using a generalization of a translation-based
verification method.
|
1401.3485 | Hypertableau Reasoning for Description Logics | cs.LO cs.AI | We present a novel reasoning calculus for the description logic SHOIQ^+---a
knowledge representation formalism with applications in areas such as the
Semantic Web. Unnecessary nondeterminism and the construction of large models
are two primary sources of inefficiency in the tableau-based reasoning calculi
used in state-of-the-art reasoners. In order to reduce nondeterminism, we base
our calculus on hypertableau and hyperresolution calculi, which we extend with
a blocking condition to ensure termination. In order to reduce the size of the
constructed models, we introduce anywhere pairwise blocking. We also present an
improved nominal introduction rule that ensures termination in the presence of
nominals, inverse roles, and number restrictions---a combination of DL
constructs that has proven notoriously difficult to handle. Our implementation
shows significant performance improvements over state-of-the-art reasoners on
several well-known ontologies.
|
1401.3486 | The Role of Macros in Tractable Planning | cs.AI | This paper presents several new tractability results for planning based on
macros. We describe an algorithm that optimally solves planning problems in a
class that we call inverted tree reducible, and is provably tractable for
several subclasses of this class. By using macros to store partial plans that
recur frequently in the solution, the algorithm is polynomial in time and space
even for exponentially long plans. We generalize the inverted tree reducible
class in several ways and describe modifications of the algorithm to deal with
these new classes. Theoretical results are validated in experiments.
|
1401.3487 | The DL-Lite Family and Relations | cs.LO cs.AI | The recently introduced series of description logics under the common moniker
DL-Lite has attracted attention of the description logic and semantic web
communities due to the low computational complexity of inference, on the one
hand, and the ability to represent conceptual modeling formalisms, on the
other. The main aim of this article is to carry out a thorough and systematic
investigation of inference in extensions of the original DL-Lite logics along
five axes: by (i) adding the Boolean connectives and (ii) number restrictions
to concept constructs, (iii) allowing role hierarchies, (iv) allowing role
disjointness, symmetry, asymmetry, reflexivity, irreflexivity and transitivity
constraints, and (v) adopting or dropping the unique name assumption. We
analyze the combined complexity of satisfiability for the resulting logics, as
well as the data complexity of instance checking and answering positive
existential queries. Our approach is based on embedding DL-Lite logics in
suitable fragments of one-variable first-order logic, which provides useful
insights into their properties and, in particular, computational behavior.
|
1401.3488 | Content Modeling Using Latent Permutations | cs.IR cs.CL cs.LG | We present a novel Bayesian topic model for learning discourse-level document
structure. Our model leverages insights from discourse theory to constrain
latent topic assignments in a way that reflects the underlying organization of
document topics. We propose a global model in which both topic selection and
ordering are biased to be similar across a collection of related documents. We
show that this space of orderings can be effectively represented using a
distribution over permutations called the Generalized Mallows Model. We apply
our method to three complementary discourse-level tasks: cross-document
alignment, document segmentation, and information ordering. Our experiments
show that incorporating our permutation-based model in these applications
yields substantial improvements in performance over previously proposed
methods.
|
1401.3489 | Join-Graph Propagation Algorithms | cs.AI | The paper investigates parameterized approximate message-passing schemes that
are based on bounded inference and are inspired by Pearl's belief propagation
algorithm (BP). We start with the bounded inference mini-clustering algorithm
and then move to the iterative scheme called Iterative Join-Graph Propagation
(IJGP), that combines both iteration and bounded inference. Algorithm IJGP
belongs to the class of Generalized Belief Propagation algorithms, a framework
that allowed connections with approximate algorithms from statistical physics
and is shown empirically to surpass the performance of mini-clustering and
belief propagation, as well as a number of other state-of-the-art algorithms on
several classes of networks. We also provide insight into the accuracy of
iterative BP and IJGP by relating these algorithms to well known classes of
constraint propagation schemes.
|
1401.3490 | BnB-ADOPT: An Asynchronous Branch-and-Bound DCOP Algorithm | cs.AI | Distributed constraint optimization (DCOP) problems are a popular way of
formulating and solving agent-coordination problems. A DCOP problem is a
problem where several agents coordinate their values such that the sum of the
resulting constraint costs is minimal. It is often desirable to solve DCOP
problems with memory-bounded and asynchronous algorithms. We introduce
Branch-and-Bound ADOPT (BnB-ADOPT), a memory-bounded asynchronous DCOP search
algorithm that uses the message-passing and communication framework of ADOPT
(Modi, Shen, Tambe, and Yokoo, 2005), a well known memory-bounded asynchronous
DCOP search algorithm, but changes the search strategy of ADOPT from best-first
search to depth-first branch-and-bound search. Our experimental results show
that BnB-ADOPT finds cost-minimal solutions up to one order of magnitude faster
than ADOPT for a variety of large DCOP problems and is as fast as NCBB, a
memory-bounded synchronous DCOP search algorithm, for most of these DCOP
problems. Additionally, it is often desirable to find bounded-error solutions
for DCOP problems within a reasonable amount of time since finding cost-minimal
solutions is NP-hard. The existing bounded-error approximation mechanism allows
users only to specify an absolute error bound on the solution cost but a
relative error bound is often more intuitive. Thus, we present two new
bounded-error approximation mechanisms that allow for relative error bounds and
implement them on top of BnB-ADOPT.
|
1401.3491 | Soft Goals Can Be Compiled Away | cs.AI | Soft goals extend the classical model of planning with a simple model of
preferences. The best plans are then not the ones with least cost but the ones
with maximum utility, where the utility of a plan is the sum of the utilities
of the soft goals achieved minus the plan cost. Finding plans with high utility
appears to involve two linked problems: choosing a subset of soft goals to
achieve and finding a low-cost plan to achieve them. New search algorithms and
heuristics have been developed for planning with soft goals, and a new track
has been introduced in the International Planning Competition (IPC) to test
their performance. In this note, we show however that these extensions are not
needed: soft goals do not increase the expressive power of the basic model of
planning with action costs, as they can easily be compiled away. We apply this
compilation to the problems of the net-benefit track of the most recent IPC,
and show that optimal and satisficing cost-based planners do better on the
compiled problems than optimal and satisficing net-benefit planners on the
original problems with explicit soft goals. Furthermore, we show that
penalties, or negative preferences expressing conditions to avoid, can also be
compiled away using a similar idea.
|
1401.3492 | ParamILS: An Automatic Algorithm Configuration Framework | cs.AI | The identification of performance-optimizing parameter settings is an
important part of the development and application of algorithms. We describe an
automatic framework for this algorithm configuration problem. More formally, we
provide methods for optimizing a target algorithm's performance on a given
class of problem instances by varying a set of ordinal and/or categorical
parameters. We review a family of local-search-based algorithm configuration
procedures and present novel techniques for accelerating them by adaptively
limiting the time spent for evaluating individual configurations. We describe
the results of a comprehensive experimental evaluation of our methods, based on
the configuration of prominent complete and incomplete algorithms for SAT. We
also present what is, to our knowledge, the first published work on
automatically configuring the CPLEX mixed integer programming solver. All the
algorithms we considered had default parameter settings that were manually
identified with considerable effort. Nevertheless, using our automated
algorithm configuration procedures, we achieved substantial and consistent
performance improvements.
|
1401.3493 | Predicting the Performance of IDA* using Conditional Distributions | cs.AI | Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes
IDA* will expand on a single iteration for a given consistent heuristic, and
experimentally demonstrated that it could make very accurate predictions. In
this paper we show that, in addition to requiring the heuristic to be
consistent, their formula's predictions are accurate only at levels of the
brute-force search tree where the heuristic values obey the unconditional
distribution that they defined and then used in their formula. We then propose
a new formula that works well without these requirements, i.e., it can make
accurate predictions of IDA*'s performance for inconsistent heuristics and when
the heuristic values at any level do not obey the unconditional distribution.
In order to achieve this we introduce the conditional distribution of heuristic
values which is a generalization of their unconditional heuristic distribution.
We also provide extensions of our formula that handle individual start states
and the augmentation of IDA* with bidirectional pathmax (BPMX), a technique for
propagating heuristic values when inconsistent heuristics are used.
Experimental results demonstrate the accuracy of our new method and all its
variations.
|
1401.3510 | Improving Performance Of English-Hindi Cross Language Information
Retrieval Using Transliteration Of Query Terms | cs.IR cs.CL | The main issue in Cross Language Information Retrieval (CLIR) is the poor
performance of retrieval in terms of average precision when compared to
monolingual retrieval performance. The main reasons behind poor performance of
CLIR are mismatching of query terms, lexical ambiguity and un-translated query
terms. These problems need to be addressed in order to increase the
performance of CLIR systems. In this paper, we propose an algorithm for
improving the performance of an English-Hindi CLIR system. We generate all
possible combinations of Hindi translated queries using transliteration of the
English query terms, and choose the best query among them for document
retrieval. The experiment
is performed on FIRE 2010 (Forum of Information Retrieval Evaluation) datasets.
The experimental results show that the proposed approach improves the
performance of the English-Hindi CLIR system, helps overcome the existing
problems, and outperforms the existing English-Hindi CLIR system in terms of
average precision.
|
1401.3511 | Optimal CSMA-based Wireless Communication with Worst-case Delay and
Non-uniform Sizes | cs.NI cs.IT math.IT | Carrier Sense Multiple Access (CSMA) protocols have been shown to reach the
full capacity region for data communication in wireless networks, with
polynomial complexity. However, current literature achieves the throughput
optimality with an exponential delay scaling with the network size, even in a
simplified scenario for transmission jobs with uniform sizes. Although CSMA
protocols with order-optimal average delay have been proposed for specific
topologies, no existing work can provide worst-case delay guarantee for each
job in general network settings, not to mention the case when the jobs have
non-uniform lengths while the throughput optimality is still targeted. In this
paper, we tackle this issue by proposing a two-timescale CSMA-based data
communication protocol with dynamic decisions on rate control, link scheduling,
job transmission and dropping in polynomial complexity. Through rigorous
analysis, we demonstrate that the proposed protocol can achieve a throughput
utility arbitrarily close to its offline optima for jobs with non-uniform sizes
and worst-case delay guarantees, with a tradeoff of longer maximum allowable
delay.
|