| text | source | __index_level_0__ |
|---|---|---|
A definition of intelligence is given in terms of performance that can be quantitatively measured. In this study, we have presented a conceptual model of an Intelligent Agent System for an Automatic Vehicle Checking Agent (VCA). To achieve this goal, we have introduced several kinds of agents that exhibit intelligent features: the Management agent, Internal agent, External agent, Watcher agent and Report agent. Metrics and measurements are suggested for evaluating the performance of the Automatic Vehicle Checking Agent (VCA). Calibration data and test facilities are suggested to facilitate the development of intelligent systems.
|
Automatic Vehicle Checking Agent (VCA)
| 800
|
This paper presents the design and development of a proposed rule-based Decision Support System that will help students in selecting the most suitable faculty/major when seeking admission to Gomal University, Dera Ismail Khan, Pakistan. The basic idea of our approach is to design a model for testing and measuring student capabilities such as intelligence, understanding, comprehension and mathematical concepts, together with the student's past academic record, and to apply the module results to a rule-based decision support system that determines the compatibility of those capabilities with the available faculties/majors in Gomal University. The result is shown as a list of suggested faculties/majors matched to the student's capabilities and abilities.
|
A Proposed Decision Support System/Expert System for Guiding Fresh
Students in Selecting a Faculty in Gomal University, Pakistan
| 801
|
Heuristics are crucial tools in decreasing search effort in varied fields of AI. In order to be effective, a heuristic must be efficient to compute, as well as provide useful information to the search algorithm. However, some well-known heuristics which do well in reducing backtracking are so heavy that the gain of deploying them in a search algorithm might be outweighed by their overhead. We propose a rational metareasoning approach to decide when to deploy heuristics, using CSP backtracking search as a case study. In particular, a value of information approach is taken to adaptive deployment of solution-count estimation heuristics for value ordering. Empirical results show that indeed the proposed mechanism successfully balances the tradeoff between decreasing backtracking and heuristic computational overhead, resulting in a significant overall search time reduction.
|
Rational Deployment of CSP Heuristics
| 802
|
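The metareasoning idea in the entry above can be illustrated with a minimal sketch: invoke an expensive value-ordering heuristic (such as a solution-count estimator) only when its estimated benefit exceeds its estimated cost. The function names and the two estimates below are hypothetical placeholders, not the authors' actual value-of-information formulation.

```python
def should_deploy_heuristic(est_search_time_saved, est_heuristic_cost):
    """Deploy the expensive heuristic only if its estimated net benefit is positive.

    Both arguments are assumed to be estimates maintained online
    (e.g., from statistics gathered at earlier search nodes).
    """
    return est_search_time_saved > est_heuristic_cost


def order_values(domain, cheap_key, expensive_key, est_saved, est_cost):
    """Adaptive value ordering: fall back to a cheap ordering when the
    expensive heuristic (e.g., a solution-count estimate) is not worth computing."""
    if should_deploy_heuristic(est_saved, est_cost):
        return sorted(domain, key=expensive_key, reverse=True)
    return sorted(domain, key=cheap_key)
```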
Regularization is a well studied problem in the context of neural networks. It is usually used to improve the generalization performance when the number of input samples is relatively small or heavily contaminated with noise. The regularization of a parametric model can be achieved in different manners, some of which are early stopping (Morgan and Bourlard, 1990), weight decay, and output smoothing, all used to avoid overfitting during the training of the considered model. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters (Krogh and Hertz, 1991). Using Bishop's approximation (Bishop, 1995) of the objective function when a restricted type of noise is added to the input of a parametric function, we derive the higher-order terms of the Taylor expansion and analyze the coefficients of the regularization terms induced by the noisy input. In particular, we study the effect of penalizing the Hessian of the mapping function with respect to the input in terms of generalization performance. We also show how we can control this coefficient independently by explicitly penalizing the Jacobian of the mapping function on corrupted inputs.
|
Adding noise to the input of a model trained with a regularized
objective
| 803
|
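For concreteness, the Bishop (1995) approximation referenced above can be stated for squared error and small isotropic Gaussian input noise: a second-order Taylor expansion of the corrupted loss, averaged over the noise, yields a Jacobian penalty plus a residual-weighted Hessian term (the term whose coefficient the abstract proposes to control), with higher-order corrections beyond that. A standard form of this expansion is:

```latex
\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\,\sigma^{2} I)}
  \bigl[\, \lVert f(x+\epsilon) - y \rVert^{2} \,\bigr]
\;\approx\;
\lVert f(x) - y \rVert^{2}
+ \sigma^{2} \Bigl( \lVert J_{f}(x) \rVert_{F}^{2}
  + \sum_{k} \bigl(f_{k}(x) - y_{k}\bigr)\,
    \operatorname{tr}\!\bigl(\nabla^{2}_{x} f_{k}(x)\bigr) \Bigr)
+ O(\sigma^{4})
```

At a solution that fits the conditional mean of the targets, the Hessian term vanishes on average, which recovers the familiar Tikhonov/Jacobian view of training with input noise.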
We solve constraint satisfaction problems through translation to answer set programming (ASP). Our reformulations have the property that unit-propagation in the ASP solver achieves well defined local consistency properties like arc, bound and range consistency. Experiments demonstrate the computational value of this approach.
|
Translation-based Constraint Answer Set Solving
| 804
|
Recent papers address the issue of updating the instance level of knowledge bases expressed in Description Logic following a model-based approach. One of the outcomes of these papers is that the result of updating a knowledge base K is generally not expressible in the Description Logic used to express K. In this paper we introduce a formula-based approach to this problem, by revisiting some research work on formula-based updates developed in the '80s, in particular the WIDTIO (When In Doubt, Throw It Out) approach. We show that our operator enjoys desirable properties, including that both insertions and deletions according to such operator can be expressed in the DL used for the original KB. Also, we present polynomial-time algorithms for the evolution of the instance level of knowledge bases expressed in the most expressive Description Logics of the DL-lite family.
|
On the evolution of the instance level of DL-lite knowledge bases
| 805
|
We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. We show that this penalty term results in a localized space contraction which in turn yields robust features on the activation layer. Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising auto-encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders. We find empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. Finally, we show that by using the learned features to initialize an MLP, we achieve state-of-the-art classification error on a range of datasets, surpassing other methods of pre-training.
|
Learning invariant features through local space contraction
| 806
|
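For the common case of a sigmoid encoder h = s(Wx + b), the Frobenius norm of the encoder Jacobian used as the penalty above has a cheap closed form, since dh_j/dx_i = h_j(1 - h_j) W_ji. A minimal NumPy sketch of that computation (illustrative only, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(x, W, b):
    """Squared Frobenius norm of the encoder Jacobian dh/dx for a sigmoid encoder.

    x: input vector of shape (d,); W: weights of shape (k, d); b: biases of shape (k,).
    Uses the factorization ||J||_F^2 = sum_j (h_j (1 - h_j))^2 * sum_i W[j, i]^2.
    """
    h = sigmoid(W @ x + b)
    return np.sum((h * (1.0 - h)) ** 2 * np.sum(W ** 2, axis=1))

# Sketch of the training objective: reconstruction_error(x) + lam * contractive_penalty(x, W, b)
```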
The study of arguments as abstract entities and their interaction as introduced by Dung (Artificial Intelligence 77, 1995) has become one of the most active research branches within Artificial Intelligence and Reasoning. A main issue for abstract argumentation systems is the selection of acceptable sets of arguments. Value-based argumentation, as introduced by Bench-Capon (J. Logic Comput. 13, 2003), extends Dung's framework. It takes into account the relative strength of arguments with respect to some ranking representing an audience: an argument is subjectively accepted if it is accepted with respect to some audience, and it is objectively accepted if it is accepted with respect to all audiences. Deciding whether an argument is subjectively or objectively accepted is, in each case, a computationally intractable problem. In fact, the problems remain intractable under structural restrictions that render the main computational problems for non-value-based argumentation systems tractable. In this paper we identify nontrivial classes of value-based argumentation systems for which the acceptance problems are polynomial-time tractable. The classes are defined by means of structural restrictions in terms of the underlying graphical structure of the value-based system. Furthermore we show that the acceptance problems are intractable for two classes of value-based systems that were conjectured to be tractable by Dunne (Artificial Intelligence 171, 2007).
|
Algorithms and Complexity Results for Persuasive Argumentation
| 807
|
In this paper, we investigate the hybrid tractability of binary Quantified Constraint Satisfaction Problems (QCSPs). First, a basic tractable class of binary QCSPs is identified by using the broken-triangle property. In this class, the variable ordering for the broken-triangle property must be the same as that in the prefix of the QCSP. Second, we break this restriction to allow existentially quantified variables to be shifted within or out of their blocks, and thus identify some novel tractable classes by introducing the broken-angle property. Finally, we identify a more generalized tractable class, i.e., the min-of-max extendable class for QCSPs.
|
Hybrid Tractable Classes of Binary Quantified Constraint Satisfaction
Problems
| 808
|
Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In such cases, the goal should be to generate plans that are robust with respect to any known incompleteness of the domain. In this paper, we first introduce annotations expressing the knowledge of the domain incompleteness, and formalize the notion of plan robustness with respect to an incomplete domain model. We then propose an approach to compiling the problem of finding robust plans to the conformant probabilistic planning problem. We present experimental results with Probabilistic-FF, a state-of-the-art planner, showing the promise of our approach.
|
Synthesizing Robust Plans under Incomplete Domain Models
| 809
|
Over the years, nonmonotonic rules have proven to be a very expressive and useful knowledge representation paradigm. They have recently been used to complement the expressive power of Description Logics (DLs), leading to the study of integrative formal frameworks, generally referred to as hybrid knowledge bases, where both DL axioms and rules can be used to represent knowledge. The need to use these hybrid knowledge bases in dynamic domains has called for the development of update operators, which, given the substantially different way Description Logics and rules are usually updated, has turned out to be an extremely difficult task. In [SL10], a first step towards addressing this problem was taken, and an update operator for hybrid knowledge bases was proposed. Despite its significance -- not only for being the first update operator for hybrid knowledge bases in the literature, but also because it has some applications -- this operator was defined for a restricted class of problems where only the ABox was allowed to change, which considerably diminished its applicability. Many applications that use hybrid knowledge bases in dynamic scenarios require both DL axioms and rules to be updated. In this paper, motivated by real world applications, we introduce an update operator for a large class of hybrid knowledge bases where both the DL component and the rule component are allowed to change dynamically. We introduce splitting sequences and a splitting theorem for hybrid knowledge bases, use them to define a modular update semantics, investigate its basic properties, and illustrate its use on a realistic example about cargo imports.
|
Splitting and Updating Hybrid Knowledge Bases (Extended Version)
| 810
|
A fundamental task for propositional logic is to compute models of propositional formulas. Programs developed for this task are called satisfiability solvers. We show that transition systems introduced by Nieuwenhuis, Oliveras, and Tinelli to model and analyze satisfiability solvers can be adapted for solvers developed for two other propositional formalisms: logic programming under the answer-set semantics, and the logic PC(ID). We show that in each case the task of computing models can be seen as "satisfiability modulo answer-set programming," where the goal is to find a model of a theory that also is an answer set of a certain program. The unifying perspective we develop shows, in particular, that the solvers CLASP and MINISATID are closely related despite being developed for different formalisms, the former for answer-set programming and the latter for the logic PC(ID).
|
Transition Systems for Model Generators - A Unifying Approach
| 811
|
This paper describes a graph clustering algorithm that aims to minimize the normalized cut criterion and has a model order selection procedure. The performance of the proposed algorithm is comparable to spectral approaches in terms of minimizing normalized cut. However, unlike spectral approaches, the proposed algorithm scales to graphs with millions of nodes and edges. The algorithm consists of three components that are processed sequentially: a greedy agglomerative hierarchical clustering procedure, model order selection, and a local refinement. For a graph of n nodes and O(n) edges, the computational complexity of the algorithm is O(n log^2 n), a major improvement over the O(n^3) complexity of spectral methods. Experiments are performed on real and synthetic networks to demonstrate the scalability of the proposed approach, the effectiveness of the model order selection procedure, and the performance of the proposed algorithm in terms of minimizing the normalized cut metric.
|
GANC: Greedy Agglomerative Normalized Cut
| 812
|
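The normalized cut criterion minimized above can be computed directly from a weighted adjacency matrix; the small sketch below evaluates it for a given partition (the dense-matrix representation is for illustration only and would not scale to the graph sizes discussed in the abstract):

```python
import numpy as np

def normalized_cut(adj, labels):
    """Normalized cut of a partition of a weighted undirected graph.

    adj: symmetric (n, n) weight matrix; labels: length-n cluster assignment.
    ncut = sum over clusters k of cut(A_k, rest of graph) / vol(A_k).
    """
    labels = np.asarray(labels)
    degrees = adj.sum(axis=1)
    ncut = 0.0
    for k in np.unique(labels):
        in_k = labels == k
        vol = degrees[in_k].sum()              # total degree (volume) of cluster k
        cut = adj[np.ix_(in_k, ~in_k)].sum()   # edge weight leaving cluster k
        if vol > 0:
            ncut += cut / vol
    return ncut
```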
Machine-part cell formation is used in cellular manufacturing to process a large variety of parts with high quality, lower work-in-process levels, and reduced manufacturing lead time and customer response time, while retaining flexibility for new products. This paper presents a new and novel approach for obtaining machine cells and part families. In cellular manufacturing the fundamental problem is the formation of part families and machine cells. The present paper applies the Self-Organising Map (SOM) method, an unsupervised learning algorithm in Artificial Intelligence, as a visually decipherable clustering tool for machine-part cell formation. The objective of the paper is to cluster the binary machine-part matrix through visually decipherable clusters of SOM color-coding and labelling via the SOM map nodes, in such a way that the part families are processed in those machine cells. The U-matrix, component planes, principal component projection, scatter plot and histogram of SOM are reported in the present work for the successful visualization of the machine-part cell formation. Computational results with the proposed algorithm on a set of group technology problems available in the literature are also presented. The proposed SOM approach produced solutions with a grouping efficacy that is at least as good as any result earlier reported in the literature, improved the grouping efficacy for 70% of the problems, and should prove immensely useful to both industry practitioners and researchers.
|
Machine-Part cell formation through visual decipherable clustering of
Self Organizing Map
| 813
|
Rubik's Cube is an easily-understood puzzle, originally called the "magic cube". It is a well-known planning problem that has been studied for a long time, yet many simple properties remain unknown. This paper studies whether modern SAT solvers are applicable to this puzzle. To the best of our knowledge, we are the first to translate Rubik's Cube to a SAT problem. To reduce the number of variables and clauses needed for the encoding, we replace a naive approach using 6 Boolean variables to represent each color on each facelet with a new approach using 3 or 2 Boolean variables. In order to solve Rubik's Cube quickly, we replace the direct encoding of 18 turns with a layer encoding of 18 subtype turns based on 6 turn types. To speed up the solving further, we encode some properties of the two-phase algorithm as an additional constraint, and restrict some move sequences by adding constraint clauses. An efficient encoding alone is not sufficient to solve this puzzle. For this reason, we improve on existing SAT solvers and develop a new SAT solver based on PrecoSAT, though it is suited only for Rubik's Cube. The new SAT solver replaces the lookahead solving strategy with an ALO (at-least-one) solving strategy, and decomposes the original problem into sub-problems, each of which is solved by PrecoSAT. The empirical results demonstrate that both our SAT translation and the new solving technique are efficient. Without the efficient SAT encoding and the new solving technique, Rubik's Cube still cannot be solved by any SAT solver. Using the improved SAT solver, we can always find a solution of length 20 in a reasonable time. Although our solver is slower than Kociemba's algorithm, which uses lookup tables, it does not require a huge lookup table.
|
Solving Rubik's Cube Using SAT Solvers
| 814
|
The World Wide Web no longer consists just of HTML pages. Our work sheds light on a number of trends on the Internet that go beyond simple Web pages. The hidden Web provides a wealth of data in semi-structured form, accessible through Web forms and Web services. These services, as well as numerous other applications on the Web, commonly use XML, the eXtensible Markup Language. XML has become the lingua franca of the Internet that allows customized markups to be defined for specific domains. On top of XML, the Semantic Web grows as a common structured data source. In this work, we first explain each of these developments in detail. Using real-world examples from scientific domains of great interest today, we then demonstrate how these new developments can assist the managing, harvesting, and organization of data on the Web. On the way, we also illustrate the current research avenues in these domains. We believe that this effort would help bridge multiple database tracks, thereby attracting researchers with a view to extend database technology.
|
The Hidden Web, XML and Semantic Web: A Scientific Data Management
Perspective
| 815
|
Developing smart house systems has been a great challenge for researchers and engineers in this area because implementing and evaluating these systems is costly and very time consuming. The difficulty of testing a designed smart house before actually building it is an obstacle to an efficient smart house project, because of the variety of sensors, home appliances and devices available for a real smart environment. In this paper, we present the design and implementation of a multi-purpose smart house simulation system for designing and simulating all aspects of a smart house environment. This simulator provides the ability to design the house plan and place different virtual sensors and appliances in a two-dimensional model of the virtual house environment. The simulator can connect to any external smart house remote-controlling system, making it much easier than before to evaluate such systems. It also supports adding newly emerging sensors and devices in detail, to help maintain its compatibility with future simulation needs. Scenarios can also be defined for testing various possible combinations of device states, so different criteria and variables can be evaluated simply, without the need to experiment on a real environment.
|
A Multi-Purpose Scenario-based Simulator for Smart House Environments
| 816
|
We introduce the Xapagy cognitive architecture: a software system designed to perform narrative reasoning. The architecture has been designed from scratch to model and mimic the activities performed by humans when witnessing, reading, recalling, narrating and talking about stories.
|
Xapagy: a cognitive architecture for narrative reasoning
| 817
|
This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be directly obtained without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is consistently handled in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications.
|
Probabilistic Inference from Arbitrary Uncertainty using Mixtures of
Factorized Generalized Gaussians
| 818
|
Decision-theoretic agents predict and evaluate the results of their actions using a model, or ontology, of their environment. An agent's goal, or utility function, may also be specified in terms of the states of, or entities within, its ontology. If the agent may upgrade or replace its ontology, it faces a crisis: the agent's original goal may not be well-defined with respect to its new ontology. This crisis must be resolved before the agent can make plans towards achieving its goals. We discuss in this paper which sorts of agents will undergo ontological crises and why we may want to create such agents. We present some concrete examples, and argue that a well-defined procedure for resolving ontological crises is needed. We point to some possible approaches to solving this problem, and evaluate these methods on our examples.
|
Ontological Crises in Artificial Agents' Value Systems
| 819
|
A knowledge system S describing a part of the real world does, in general, not contain complete information. Reasoning with incomplete information is prone to errors since any belief derived from S may be false in the present state of the world. A false belief may suggest wrong decisions and lead to harmful actions. So an important goal is to make false beliefs as unlikely as possible. This work introduces the notions of "typical atoms" and "typical models", and shows that reasoning with typical models minimizes the expected number of false beliefs over all ways of using incomplete information. Various properties of typical models are studied, in particular, correctness and stability of beliefs suggested by typical models, and their connection to oblivious reasoning.
|
Typical models: minimizing false beliefs
| 820
|
We present a new approach to path planning, called the "Ariadne's clew algorithm". It is designed to find paths in high-dimensional continuous spaces and applies to robots with many degrees of freedom in static, as well as dynamic environments - ones where obstacles may move. The Ariadne's clew algorithm comprises two sub-algorithms, called Search and Explore, applied in an interleaved manner. Explore builds a representation of the accessible space while Search looks for the target. Both are posed as optimization problems. We describe a real implementation of the algorithm to plan paths for a six degrees of freedom arm in a dynamic environment where another six degrees of freedom arm is used as a moving obstacle. Experimental results show that a path is found in about one second without any pre-processing.
|
The Ariadne's Clew Algorithm
| 821
|
This article studies the problem of modifying the action ordering of a plan in order to optimise the plan according to various criteria. One of these criteria is to make a plan less constrained and the other is to minimize its parallel execution time. Three candidate definitions are proposed for the first of these criteria, constituting a sequence of increasing optimality guarantees. Two of these are based on deordering plans, which means that ordering relations may only be removed, not added, while the third one uses reordering, where arbitrary modifications to the ordering are allowed. It is shown that only the weakest one of the three criteria is tractable to achieve, the other two being NP-hard and even difficult to approximate. Similarly, optimising the parallel execution time of a plan is studied both for deordering and reordering of plans. In the general case, both of these computations are NP-hard. However, it is shown that optimal deorderings can be computed in polynomial time for a class of planning languages based on the notions of producers, consumers and threats, which includes most of the commonly used planning languages. Computing optimal reorderings can potentially lead to even faster parallel executions, but this problem remains NP-hard and difficult to approximate even under quite severe restrictions.
|
Computational Aspects of Reordering Plans
| 822
|
It is common to view programs as a combination of logic and control: the logic part defines what the program must do, the control part -- how to do it. The Logic Programming paradigm was developed with the intention of separating the logic from the control. Recently, extensive research has been conducted on automatic generation of control for logic programs. Only a few of these works considered the issue of automatic generation of control for improving the efficiency of logic programs. In this paper we present a novel algorithm for automatic finding of lowest-cost subgoal orderings. The algorithm works using the divide-and-conquer strategy. The given set of subgoals is partitioned into smaller sets, based on co-occurrence of free variables. The subsets are ordered recursively and merged, yielding a provably optimal order. We experimentally demonstrate the utility of the algorithm by testing it in several domains, and discuss the possibilities of its cooperation with other existing methods.
|
The Divide-and-Conquer Subgoal-Ordering Algorithm for Speeding up Logic
Inference
| 823
|
Using an improved backtrack algorithm with sophisticated pruning techniques, we revise previous observations correlating a high frequency of hard to solve Hamiltonian Cycle instances with the Gn,m phase transition between Hamiltonicity and non-Hamiltonicity. Instead all tested graphs of 100 to 1500 vertices are easily solved. When we artificially restrict the degree sequence with a bounded maximum degree, although there is some increase in difficulty, the frequency of hard graphs is still low. When we consider more regular graphs based on a generalization of knight's tours, we observe frequent instances of really hard graphs, but on these the average degree is bounded by a constant. We design a set of graphs with a feature our algorithm is unable to detect and so are very hard for our algorithm, but in these we can vary the average degree from O(1) to O(n). We have so far found no class of graphs correlated with the Gn,m phase transition which asymptotically produces a high frequency of hard instances.
|
The Gn,m Phase Transition is Not Hard for the Hamiltonian Cycle Problem
| 824
|
This article presents a measure of semantic similarity in an IS-A taxonomy based on the notion of shared information content. Experimental evaluation against a benchmark set of human similarity judgments demonstrates that the measure performs better than the traditional edge-counting approach. The article presents algorithms that take advantage of taxonomic similarity in resolving syntactic and semantic ambiguity, along with experimental results demonstrating their effectiveness.
|
Semantic Similarity in a Taxonomy: An Information-Based Measure and its
Application to Problems of Ambiguity in Natural Language
| 825
|
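The information-content measure referred to above is usually written as follows, where S(c1, c2) denotes the set of concepts in the taxonomy that subsume both c1 and c2, and p(c) is the probability (estimated from corpus frequencies) of encountering an instance of concept c:

```latex
\mathrm{sim}(c_1, c_2) \;=\; \max_{c \,\in\, S(c_1, c_2)} \bigl[ -\log p(c) \bigr]
```

Informally, two concepts are as similar as the information content of their most informative common subsumer.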
A class of interval-based temporal languages for uniformly representing and reasoning about actions and plans is presented. Actions are represented by describing what is true while the action itself is occurring, and plans are constructed by temporally relating actions and world states. The temporal languages are members of the family of Description Logics, which are characterized by high expressivity combined with good computational properties. The subsumption problem for a class of temporal Description Logics is investigated and sound and complete decision procedures are given. The basic language TL-F is considered first: it is the composition of a temporal logic TL -- able to express interval temporal networks -- together with the non-temporal logic F -- a Feature Description Logic. It is proven that subsumption in this language is an NP-complete problem. Then it is shown how to reason with the more expressive languages TLU-FU and TL-ALCF. The former adds disjunction both at the temporal and non-temporal sides of the language, the latter extends the non-temporal side with set-valued features (i.e., roles) and a propositionally complete language.
|
A Temporal Description Logic for Reasoning about Actions and Plans
| 826
|
Many of the artificial intelligence techniques developed to date rely on heuristic search through large spaces. Unfortunately, the size of these spaces and the corresponding computational effort reduce the applicability of otherwise novel and effective algorithms. A number of parallel and distributed approaches to search have considerably improved the performance of the search process. Our goal is to develop an architecture that automatically selects parallel search strategies for optimal performance on a variety of search problems. In this paper we describe one such architecture realized in the Eureka system, which combines the benefits of many different approaches to parallel heuristic search. Through empirical and theoretical analyses we observe that features of the problem space directly affect the choice of optimal parallel search strategy. We then employ machine learning techniques to select the optimal parallel search strategy for a given problem space. When a new search task is input to the system, Eureka uses features describing the search space and the chosen architecture to automatically select the appropriate search strategy. Eureka has been tested on a MIMD parallel processor, a distributed network of workstations, and a single workstation using multithreading. Results generated from fifteen puzzle problems, robot arm motion problems, artificial search spaces, and planning problems indicate that Eureka outperforms any of the tested strategies used exclusively for all problem instances and is able to greatly reduce the search time for these applications.
|
Adaptive Parallel Iterative Deepening Search
| 827
|
Order of magnitude reasoning - reasoning by rough comparisons of the sizes of quantities - is often called 'back of the envelope calculation', with the implication that the calculations are quick though approximate. This paper exhibits an interesting class of constraint sets in which order of magnitude reasoning is demonstrably fast. Specifically, we present a polynomial-time algorithm that can solve a set of constraints of the form 'Points a and b are much closer together than points c and d.' We prove that this algorithm can be applied if `much closer together' is interpreted either as referring to an infinite difference in scale or as referring to a finite difference in scale, as long as the difference in scale is greater than the number of variables in the constraint set. We also prove that the first-order theory over such constraints is decidable.
|
Order of Magnitude Comparisons of Distance
| 828
|
This paper introduces AntNet, a novel approach to the adaptive learning of routing tables in communications networks. AntNet is a distributed, mobile agents based Monte Carlo system that was inspired by recent work on the ant colony metaphor for solving optimization problems. AntNet's agents concurrently explore the network and exchange collected information. The communication among the agents is indirect and asynchronous, mediated by the network itself. This form of communication is typical of social insects and is called stigmergy. We compare our algorithm with six state-of-the-art routing algorithms coming from the telecommunications and machine learning fields. The algorithms' performance is evaluated over a set of realistic testbeds. We run many experiments over real and artificial IP datagram networks with increasing number of nodes and under several paradigmatic spatial and temporal traffic distributions. Results are very encouraging. AntNet showed superior performance under all the experimental conditions with respect to its competitors. We analyze the main characteristics of the algorithm and try to explain the reasons for its superiority.
|
AntNet: Distributed Stigmergetic Control for Communications Networks
| 829
|
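A simplified version of the kind of stigmergic routing-table update used by AntNet-style algorithms is sketched below: when a backward ant for a destination arrives at a node having come through neighbor f, the probability of choosing f is reinforced and the other neighbors' probabilities are scaled down so the row still sums to one. The reinforcement r would be derived from the observed trip time; the paper's actual reinforcement formula and statistics are more elaborate.

```python
def update_routing_row(probs, chosen_neighbor, r):
    """Reinforce the chosen next hop and renormalize the alternatives.

    probs: dict mapping neighbor -> routing probability for one destination (sums to 1).
    r: reinforcement in (0, 1), assumed to be computed from the measured trip time.
    """
    for n in probs:
        if n == chosen_neighbor:
            probs[n] += r * (1.0 - probs[n])
        else:
            probs[n] -= r * probs[n]
    return probs
```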
Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability.
|
A Counter Example to Theorems of Cox and Fine
| 830
|
As planning is applied to larger and richer domains the effort involved in constructing domain descriptions increases and becomes a significant burden on the human application designer. If general planners are to be applied successfully to large and complex domains it is necessary to provide the domain designer with some assistance in building correctly encoded domains. One way of doing this is to provide domain-independent techniques for extracting, from a domain description, knowledge that is implicit in that description and that can assist domain designers in debugging domain descriptions. This knowledge can also be exploited to improve the performance of planners: several researchers have explored the potential of state invariants in speeding up the performance of domain-independent planners. In this paper we describe a process by which state invariants can be extracted from the automatically inferred type structure of a domain. These techniques are being developed for exploitation by STAN, a Graphplan based planner that employs state analysis techniques to enhance its performance.
|
The Automatic Inference of State Invariants in TIM
| 831
|
The notion of class is ubiquitous in computer science and is central in many formalisms for the representation of structured knowledge used both in knowledge representation and in databases. In this paper we study the basic issues underlying such representation formalisms and single out both their common characteristics and their distinguishing features. Such investigation leads us to propose a unifying framework in which we are able to capture the fundamental aspects of several representation languages used in different contexts. The proposed formalism is expressed in the style of description logics, which have been introduced in knowledge representation as a means to provide a semantically well-founded basis for the structural aspects of knowledge representation systems. The description logic considered in this paper is a subset of first order logic with nice computational characteristics. It is quite expressive and features a novel combination of constructs that has not been studied before. The distinguishing constructs are number restrictions, which generalize existence and functional dependencies, inverse roles, which allow one to refer to the inverse of a relationship, and possibly cyclic assertions, which are necessary for capturing real world domains. We are able to show that it is precisely such combination of constructs that makes our logic powerful enough to model the essential set of features for defining class structures that are common to frame systems, object-oriented database languages, and semantic data models. As a consequence of the established correspondences, several significant extensions of each of the above formalisms become available. The high expressiveness of the logic we propose and the need for capturing the reasoning in different contexts forces us to distinguish between unrestricted and finite model reasoning. A notable feature of our proposal is that reasoning in both cases is decidable. We argue that, by virtue of the high expressive power and of the associated reasoning capabilities on both unrestricted and finite models, our logic provides a common core for class-based representation formalisms.
|
Unifying Class-Based Representation Formalisms
| 832
|
In default reasoning, usually not all possible ways of resolving conflicts between default rules are acceptable. Criteria expressing acceptable ways of resolving the conflicts may be hardwired in the inference mechanism, for example specificity in inheritance reasoning can be handled this way, or they may be given abstractly as an ordering on the default rules. In this article we investigate formalizations of the latter approach in Reiter's default logic. Our goal is to analyze and compare the computational properties of three such formalizations in terms of their computational complexity: the prioritized default logics of Baader and Hollunder, and Brewka, and a prioritized default logic that is based on lexicographic comparison. The analysis locates the propositional variants of these logics on the second and third levels of the polynomial hierarchy, and identifies the boundary between tractable and intractable inference for restricted classes of prioritized default theories.
|
Complexity of Prioritized Default Logics
| 833
|
We describe a general approach to optimization which we term `Squeaky Wheel' Optimization (SWO). In SWO, a greedy algorithm is used to construct a solution which is then analyzed to find the trouble spots, i.e., those elements that, if improved, are likely to improve the objective function score. The results of the analysis are used to generate new priorities that determine the order in which the greedy algorithm constructs the next solution. This Construct/Analyze/Prioritize cycle continues until some limit is reached, or an acceptable solution is found. SWO can be viewed as operating on two search spaces: solutions and prioritizations. Successive solutions are only indirectly related, via the re-prioritization that results from analyzing the prior solution. Similarly, successive prioritizations are generated by constructing and analyzing solutions. This `coupled search' has some interesting properties, which we discuss. We report encouraging experimental results on two domains, scheduling problems that arise in fiber-optic cable manufacturing, and graph coloring problems. The fact that these domains are very different supports our claim that SWO is a general technique for optimization.
|
Squeaky Wheel Optimization
| 834
|
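The Construct/Analyze/Prioritize cycle described above can be written down generically; the callback names below are placeholders for problem-specific components (greedy constructor, blame analysis, solution comparison) and are not taken from the paper.

```python
def squeaky_wheel(elements, construct, analyze, better, max_iters=50):
    """Generic Squeaky Wheel Optimization loop.

    construct(priorities) -> solution   # greedy builder that follows the priority order
    analyze(solution)     -> dict mapping element -> blame score ("trouble spots")
    better(a, b)          -> True if solution a is better than solution b
    """
    priorities = list(elements)                      # initial priority order
    best = None
    for _ in range(max_iters):
        solution = construct(priorities)
        if best is None or better(solution, best):
            best = solution
        blame = analyze(solution)
        # Elements causing the most trouble move to the front of the next pass.
        priorities.sort(key=lambda e: blame.get(e, 0.0), reverse=True)
    return best
```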
Intractable distributions present a common difficulty in inference within the probabilistic knowledge representation framework and variational methods have recently been popular in providing an approximate solution. In this article, we describe a perturbational approach in the form of a cumulant expansion which, to lowest order, recovers the standard Kullback-Leibler variational bound. Higher-order terms describe corrections on the variational approach without incurring much further computational cost. The relationship to other perturbational approaches such as TAP is also elucidated. We demonstrate the method on a particular class of undirected graphical models, Boltzmann machines, for which our simulation results confirm improved accuracy and enhanced stability during learning.
|
Variational Cumulant Expansions for Intractable Distributions
| 835
|
STAN is a Graphplan-based planner, so-called because it uses a variety of STate ANalysis techniques to enhance its performance. STAN competed in the AIPS-98 planning competition where it compared well with the other competitors in terms of speed, finding solutions fastest to many of the problems posed. Although the domain analysis techniques STAN exploits are an important factor in its overall performance, we believe that the speed at which STAN solved the competition problems is largely due to the implementation of its plan graph. The implementation is based on two insights: that many of the graph construction operations can be implemented as bit-level logical operations on bit vectors, and that the graph should not be explicitly constructed beyond the fix point. This paper describes the implementation of STAN's plan graph and provides experimental results which demonstrate the circumstances under which advantages can be obtained from using this implementation.
|
Efficient Implementation of the Plan Graph in STAN
| 836
|
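The bit-vector insight mentioned above (fact layers as machine words, applicability tests as bitwise operations) can be illustrated with a toy sketch; the fact numbering and the (precondition, effect) action representation here are hypothetical.

```python
def facts_to_mask(fact_ids):
    """Encode a set of fact indices as a single integer bit mask."""
    mask = 0
    for f in fact_ids:
        mask |= 1 << f
    return mask

def applicable(precond_mask, fact_layer_mask):
    """An action is applicable iff every precondition bit is set in the fact layer."""
    return precond_mask & fact_layer_mask == precond_mask

def next_fact_layer(fact_layer_mask, actions):
    """Next proposition layer: OR together the effects of all applicable actions."""
    new_layer = fact_layer_mask                      # no-ops carry facts forward
    for precond_mask, effect_mask in actions:
        if applicable(precond_mask, fact_layer_mask):
            new_layer |= effect_mask
    return new_layer
```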
Top-down and bottom-up theorem proving approaches each have specific advantages and disadvantages. Bottom-up provers profit from strong redundancy control but suffer from the lack of goal-orientation, whereas top-down provers are goal-oriented but often have weak calculi when their proof lengths are considered. In order to integrate both approaches, we try to achieve cooperation between a top-down and a bottom-up prover in two different ways: the first technique aims at supporting a bottom-up prover with a top-down prover, where the top-down prover generates subgoal clauses that are then processed by the bottom-up prover. The second technique deals with the use of bottom-up generated lemmas in a top-down prover. We apply our concept to the areas of model elimination and superposition. We discuss the ability of our techniques to shorten proofs as well as to reorder the search space in an appropriate manner. Furthermore, in order to identify subgoal clauses and lemmas which are actually relevant for the proof task, we develop methods for relevancy-based filtering. Experiments with the provers SETHEO and SPASS performed on the problem library TPTP reveal the high potential of our cooperation approaches.
|
Cooperation between Top-Down and Bottom-Up Theorem Provers
| 837
|
A previously developed quantum search algorithm for solving 1-SAT problems in a single step is generalized to apply to a range of highly constrained k-SAT problems. We identify a bound on the number of clauses in satisfiability problems for which the generalized algorithm can find a solution in a constant number of steps as the number of variables increases. This performance contrasts with the linear growth in the number of steps required by the best classical algorithms, and the exponential number required by classical and quantum methods that ignore the problem structure. In some cases, the algorithm can also guarantee that insoluble problems in fact have no solutions, unlike previously proposed quantum search algorithms.
|
Solving Highly Constrained Search Problems with Quantum Computers
| 838
|
Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory. This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans. Planning problems commonly possess structure in the reward and value functions used to describe performance criteria, in the functions used to describe state transitions and observations, and in the relationships among features used to describe states, actions, rewards, and observations. Specialized representations, and algorithms employing these representations, can achieve computational leverage by exploiting these various forms of structure. Certain AI techniques -- in particular those based on the use of structured, intensional representations -- can be viewed in this way. This paper surveys several types of representations for both classical and decision-theoretic planning problems, and planning algorithms that exploit these representations in a number of different ways to ease the computational burden of constructing policies or plans. It focuses primarily on abstraction, aggregation and decomposition techniques based on AI-style representations.
|
Decision-Theoretic Planning: Structural Assumptions and Computational
Leverage
| 839
|
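The decision-theoretic machinery that the survey above builds on centers on the Bellman optimality equation for fully observable, discounted MDPs; with states s, actions a, transition probabilities P, reward R, and discount factor gamma:

```latex
V^{*}(s) \;=\; \max_{a} \Bigl[ R(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Bigr]
```

The structured representations discussed in the paper aim to exploit regularities in R and P so that this equation can be solved, exactly or approximately, without enumerating the full state space.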
We study the problem of probabilistic deduction with conditional constraints over basic events. We show that globally complete probabilistic deduction with conditional constraints over basic events is NP-hard. We then concentrate on the special case of probabilistic deduction in conditional constraint trees. We elaborate very efficient techniques for globally complete probabilistic deduction. In detail, for conditional constraint trees with point probabilities, we present a local approach to globally complete probabilistic deduction, which runs in linear time in the size of the conditional constraint trees. For conditional constraint trees with interval probabilities, we show that globally complete probabilistic deduction can be done in a global approach by solving nonlinear programs. We show how these nonlinear programs can be transformed into equivalent linear programs, which are solvable in polynomial time in the size of the conditional constraint trees.
|
Probabilistic Deduction with Conditional Constraints over Basic Events
| 840
|
We describe a variational approximation method for efficient inference in large-scale probabilistic models. Variational methods are deterministic procedures that provide approximations to marginal and conditional probabilities of interest. They provide alternatives to approximate inference methods based on stochastic sampling or search. We describe a variational approach to the problem of diagnostic inference in the `Quick Medical Reference' (QMR) network. The QMR network is a large-scale probabilistic graphical model built on statistical and expert knowledge. Exact probabilistic inference is infeasible in this model for all but a small set of cases. We evaluate our variational inference algorithm on a large set of diagnostic test cases, comparing the algorithm to a state-of-the-art stochastic sampling method.
|
Variational Probabilistic Inference and the QMR-DT Network
| 841
|
This paper offers an approach to extensible knowledge representation and reasoning for a family of formalisms known as Description Logics. The approach is based on the notion of adding new concept constructors, and includes a heuristic methodology for specifying the desired extensions, as well as a modularized software architecture that supports implementing extensions. The architecture detailed here falls in the normalize-compare paradigm, and supports both intensional reasoning (subsumption) involving concepts, and extensional reasoning involving individuals after incremental updates to the knowledge base. The resulting approach can be used to extend the reasoner with specialized notions that are motivated by specific problems or application areas, such as reasoning about dates, plans, etc. In addition, it provides an opportunity to implement constructors that are not currently yet sufficiently well understood theoretically, but are needed in practice. Also, for constructors that are provably hard to reason with (e.g., ones whose presence would lead to undecidability), it allows the implementation of incomplete reasoners where the incompleteness is tailored to be acceptable for the application at hand.
|
Extensible Knowledge Representation: the Case of Description Reasoners
| 842
|
The research on conditional planning rejects the assumptions that there is no uncertainty or incompleteness of knowledge with respect to the state and changes of the system the plans operate on. Without these assumptions the sequences of operations that achieve the goals depend on the initial state and the outcomes of nondeterministic changes in the system. This setting raises the questions of how to represent the plans and how to perform plan search. The answers are quite different from those in the simpler classical framework. In this paper, we approach conditional planning from a new viewpoint that is motivated by the use of satisfiability algorithms in classical planning. Translating conditional planning to formulae in the propositional logic is not feasible because of inherent computational limitations. Instead, we translate conditional planning to quantified Boolean formulae. We discuss three formalizations of conditional planning as quantified Boolean formulae, and present experimental results obtained with a theorem-prover.
|
Constructing Conditional Plans by a Theorem-Prover
| 843
|
Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In this paper we address two crucial issues which have been considered to be a `black art' in classification tasks ever since the introduction of stacked generalization in 1992 by Wolpert: the type of generalizer that is suitable to derive the higher-level model, and the kind of attributes that should be used as its input. We find that best results are obtained when the higher-level model combines the confidence (and not just the predictions) of the lower-level ones. We demonstrate the effectiveness of stacked generalization for combining three different types of learning algorithms for classification tasks. We also compare the performance of stacked generalization with majority vote and published results of arcing and bagging.
|
Issues in Stacked Generalization
| 844
|
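One common way to realize the recommendation above (train the higher-level model on the base learners' class-probability estimates, obtained out-of-fold) is sketched below with scikit-learn; the particular base and meta learners are an arbitrary, illustrative choice, not the ones studied in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

def stacked_fit(X, y, cv=5):
    """Level-1 model trained on cross-validated class probabilities of level-0 models."""
    base_models = [DecisionTreeClassifier(), GaussianNB(), KNeighborsClassifier()]
    meta_model = LogisticRegression(max_iter=1000)
    # Out-of-fold probabilities keep the meta-model from training on leaked predictions.
    meta_features = np.hstack([
        cross_val_predict(m, X, y, cv=cv, method="predict_proba") for m in base_models
    ])
    for m in base_models:
        m.fit(X, y)                     # refit each base model on all data for later use
    meta_model.fit(meta_features, y)
    return base_models, meta_model
```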
We present PARIS, an approach for the automatic alignment of ontologies. PARIS aligns not only instances, but also relations and classes. Alignments at the instance-level cross-fertilize with alignments at the schema-level. Thereby, our system provides a truly holistic solution to the problem of ontology alignment. The heart of the approach is probabilistic. This allows PARIS to run without any parameter tuning. We demonstrate the efficiency of the algorithm and its precision through extensive experiments. In particular, we obtain a precision of around 90% in experiments with two of the world's largest ontologies.
|
Ontology Alignment at the Instance and Schema Level
| 845
|
We prove that it is NP-hard for a coalition of two manipulators to compute how to manipulate the Borda voting rule. This resolves one of the last open problems in the computational complexity of manipulating common voting rules. Because of this NP-hardness, we treat computing a manipulation as an approximation problem where we try to minimize the number of manipulators. Based on ideas from bin packing and multiprocessor scheduling, we propose two new approximation methods to compute manipulations of the Borda rule. Experiments show that these methods significantly outperform the previous best known approximation method. We are able to find optimal manipulations in almost all the randomly generated elections tested. Our results suggest that, whilst computing a manipulation of the Borda rule by a coalition is NP-hard, computational complexity may provide only a weak barrier against manipulation in practice.
|
Complexity of and Algorithms for Borda Manipulation
| 846
|
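For reference, under the Borda rule each voter awards a candidate m - 1 - j points for being ranked at (0-indexed) position j among m candidates. The sketch below scores a profile and implements the simple greedy manipulation baseline of ranking the preferred candidate first and the remaining candidates in increasing order of current score; this is a standard heuristic shown for illustration, not the bin-packing-based methods proposed in the paper.

```python
from collections import defaultdict

def borda_scores(profile, candidates):
    """profile: list of rankings, each a list of candidates ordered best-first."""
    m = len(candidates)
    scores = defaultdict(int)
    for ranking in profile:
        for pos, c in enumerate(ranking):
            scores[c] += m - 1 - pos
    return scores

def greedy_manipulator_vote(profile, candidates, preferred):
    """Rank `preferred` first, then the rest by ascending current Borda score,
    so the strongest rivals receive the fewest points from this vote."""
    scores = borda_scores(profile, candidates)
    rest = sorted((c for c in candidates if c != preferred), key=lambda c: scores[c])
    return [preferred] + rest
```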
We introduce a temporal model for reasoning on disjunctive metric constraints on intervals and time points in temporal contexts. This temporal model is composed of a labeled temporal algebra and its reasoning algorithms. The labeled temporal algebra defines labeled disjunctive metric point-based constraints, where each disjunct in each input disjunctive constraint is univocally associated to a label. Reasoning algorithms manage labeled constraints, associated label lists, and sets of mutually inconsistent disjuncts. These algorithms guarantee consistency and obtain a minimal network. Additionally, constraints can be organized in a hierarchy of alternative temporal contexts. Therefore, we can reason on context-dependent disjunctive metric constraints on intervals and points. Moreover, the model is able to represent non-binary constraints, such that logical dependencies on disjuncts in constraints can be handled. The computational cost of reasoning algorithms is exponential in accordance with the underlying problem complexity, although some improvements are proposed.
|
Reasoning on Interval and Point-based Disjunctive Metric Constraints in
Temporal Contexts
| 847
|
Negation as failure and incomplete information in logic programs have been studied by many researchers. In order to explain HOW a negated conclusion was reached, we introduce and prove a different way of negating facts that overcomes misleading results in logic programs. Facts can be negated by asking the user for constants that do not appear elsewhere in the knowledge base.
|
Overcoming Misleads In Logic Programs by Redefining Negation
| 848
|
Despite the prevalence of the Computational Theory of Mind and the Connectionist Model, the key principles of Cognitive Science remain controversial and inconclusively established. This paper proposes the concept of Pattern Recognition as a necessary and sufficient principle for general cognitive science modeling, a very ambitious scientific proposal. A formal physical definition of the pattern recognition concept is also proposed, to address many key conceptual gaps in the field.
|
Proposal of Pattern Recognition as a necessary and sufficient Principle
to Cognitive Science
| 849
|
As was shown recently, many important AI problems require counting the number of models of propositional formulas. The problem of counting models of such formulas is, according to present knowledge, computationally intractable in the worst case. Based on the Davis-Putnam procedure, we present an algorithm, CDP, that computes the exact number of models of a propositional CNF or DNF formula F. Let m and n be the number of clauses and variables of F, respectively, and let p denote the probability that a literal l of F occurs in a clause C of F; then the average running time of CDP is shown to be O(nm^d), where d = -1/log(1-p). The practical performance of CDP has been estimated in a series of experiments on a wide variety of CNF formulas.
|
The Good Old Davis-Putnam Procedure Helps Counting Models
| 850
|
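The core recursion behind Davis-Putnam-style model counting is: split on a variable, and when the clause set becomes empty, credit 2 to the power of the number of still-unassigned variables. The sketch below shows only that skeleton and omits CDP's unit propagation and pruning refinements.

```python
def count_models(clauses, variables):
    """Exact model count of a CNF formula.

    clauses: list of sets of signed literals, e.g. [{1, -3}, {2}];
    variables: set of variable ids appearing in the clauses.
    """
    if any(len(c) == 0 for c in clauses):
        return 0                           # an empty clause: this branch is unsatisfiable
    if not clauses:
        return 2 ** len(variables)         # every remaining variable is free
    v = next(iter(variables))
    total = 0
    for lit in (v, -v):
        # Setting lit true: drop satisfied clauses, delete the falsified literal elsewhere.
        reduced = [c - {-lit} for c in clauses if lit not in c]
        total += count_models(reduced, variables - {v})
    return total
```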
This paper presents a new approach to identifying and eliminating mislabeled training instances for supervised learning. The goal of this approach is to improve classification accuracies produced by learning algorithms by improving the quality of the training data. Our approach uses a set of learning algorithms to create classifiers that serve as noise filters for the training data. We evaluate single algorithm, majority vote and consensus filters on five datasets that are prone to labeling errors. Our experiments illustrate that filtering significantly improves classification accuracy for noise levels up to 30 percent. An analytical and empirical evaluation of the precision of our approach shows that consensus filters are conservative at throwing away good data at the expense of retaining bad data and that majority filters are better at detecting bad data at the expense of throwing away good data. This suggests that for situations in which there is a paucity of data, consensus filters are preferable, whereas majority vote filters are preferable for situations with an abundance of data.
|
Identifying Mislabeled Training Data
| 851
|
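A minimal sketch of the majority/consensus filtering idea above: obtain out-of-fold predictions from several learners and flag an instance as mislabeled when the filters disagree with its given label. The particular filter classifiers are an illustrative choice, not the exact set used in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

def label_noise_filter(X, y, mode="majority", cv=5):
    """Return a boolean mask of the instances to KEEP after filtering."""
    filters = [DecisionTreeClassifier(), GaussianNB(), KNeighborsClassifier()]
    # Each filter votes "mislabeled" when its out-of-fold prediction disagrees with y.
    disagree = np.stack([cross_val_predict(f, X, y, cv=cv) != y for f in filters])
    if mode == "consensus":
        mislabeled = disagree.all(axis=0)                       # all filters must agree
    else:
        mislabeled = disagree.sum(axis=0) > len(filters) / 2    # majority vote
    return ~mislabeled
```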
In many real-world learning tasks, it is expensive to acquire a sufficient number of labeled examples for training. This paper investigates methods for reducing annotation cost by `sample selection'. In this approach, during training the learning program examines many unlabeled examples and selects for labeling only those that are most informative at each stage. This avoids redundantly labeling examples that contribute little new information. Our work follows on previous research on Query By Committee, extending the committee-based paradigm to the context of probabilistic classification. We describe a family of empirical methods for committee-based sample selection in probabilistic classification models, which evaluate the informativeness of an example by measuring the degree of disagreement between several model variants. These variants (the committee) are drawn randomly from a probability distribution conditioned by the training set labeled so far. The method was applied to the real-world natural language processing task of stochastic part-of-speech tagging. We find that all variants of the method achieve a significant reduction in annotation cost, although their computational efficiency differs. In particular, the simplest variant, a two member committee with no parameters to tune, gives excellent results. We also show that sample selection yields a significant reduction in the size of the model used by the tagger.
|
Committee-Based Sample Selection for Probabilistic Classifiers
| 852
|
We investigate the problem of reasoning in the propositional fragment of MBNF, the logic of minimal belief and negation as failure introduced by Lifschitz, which can be considered as a unifying framework for several nonmonotonic formalisms, including default logic, autoepistemic logic, circumscription, epistemic queries, and logic programming. We characterize the complexity and provide algorithms for reasoning in propositional MBNF. In particular, we show that entailment in propositional MBNF lies at the third level of the polynomial hierarchy, hence it is harder than reasoning in all the above mentioned propositional formalisms for nonmonotonic reasoning. We also prove the exact correspondence between negation as failure in MBNF and negative introspection in Moore's autoepistemic logic.
|
Reasoning about Minimal Belief and Negation as Failure
| 853
|
We show how to find a minimum weight loop cutset in a Bayesian network with high probability. Finding such a loop cutset is the first step in the method of conditioning for inference. Our randomized algorithm for finding a loop cutset outputs a minimum loop cutset after O(c · 6^k · k · n) steps with probability at least 1 - (1 - 1/6^k)^(c · 6^k), where c > 1 is a constant specified by the user, k is the minimal size of a minimum weight loop cutset, and n is the number of vertices. We also show empirically that a variant of this algorithm often finds a loop cutset that is closer to the minimum weight loop cutset than the ones found by the best deterministic algorithms known.
|
Randomized Algorithms for the Loop Cutset Problem
| 854
|
Recently model checking representation and search techniques were shown to be efficiently applicable to planning, in particular to non-deterministic planning. Such planning approaches use Ordered Binary Decision Diagrams (OBDDs) to encode a planning domain as a non-deterministic finite automaton and then apply fast algorithms from model checking to search for a solution. OBDDs can effectively scale and can provide universal plans for complex planning domains. We are particularly interested in addressing the complexities arising in non-deterministic, multi-agent domains. In this article, we present UMOP, a new universal OBDD-based planning framework for non-deterministic, multi-agent domains. We introduce a new planning domain description language, NADL, to specify non-deterministic, multi-agent domains. The language contributes the explicit definition of controllable agents and uncontrollable environment agents. We describe the syntax and semantics of NADL and show how to build an efficient OBDD-based representation of an NADL description. The UMOP planning system uses NADL and different OBDD-based universal planning algorithms. It includes the previously developed strong and strong cyclic planning algorithms. In addition, we introduce our new optimistic planning algorithm that relaxes optimality guarantees and generates plausible universal plans in some domains where no strong nor strong cyclic solution exists. We present empirical results applying UMOP to domains ranging from deterministic and single-agent with no environment actions to non-deterministic and multi-agent with complex environment actions. UMOP is shown to be a rich and efficient planning system.
|
OBDD-based Universal Planning for Synchronized Agents in
Non-Deterministic Domains
| 855
|
This paper reviews the connections between Graphplan's planning-graph and the dynamic constraint satisfaction problem and motivates the need for adapting CSP search techniques to the Graphplan algorithm. It then describes how explanation based learning, dependency directed backtracking, dynamic variable ordering, forward checking, sticky values and random-restart search strategies can be adapted to Graphplan. Empirical results are provided to demonstrate that these augmentations improve Graphplan's performance significantly (up to 1000x speedups) on several benchmark problems. Special attention is paid to the explanation-based learning and dependency directed backtracking techniques as they are empirically found to be most useful in improving the performance of Graphplan.
|
Planning Graph as a (Dynamic) CSP: Exploiting EBL, DDB and other CSP
Search Techniques in Graphplan
| 856
|
We investigate the space efficiency of a Propositional Knowledge Representation (PKR) formalism. Intuitively, the space efficiency of a formalism F in representing a certain piece of knowledge A, is the size of the shortest formula of F that represents A. In this paper we assume that knowledge is either a set of propositional interpretations (models) or a set of propositional formulae (theorems). We provide a formal way of talking about the relative ability of PKR formalisms to compactly represent a set of models or a set of theorems. We introduce two new compactness measures, the corresponding classes, and show that the relative space efficiency of a PKR formalism in representing models/theorems is directly related to such classes. In particular, we consider formalisms for nonmonotonic reasoning, such as circumscription and default logic, as well as belief revision operators and the stable model semantics for logic programs with negation. One interesting result is that formalisms with the same time complexity do not necessarily belong to the same space efficiency class.
|
Space Efficiency of Propositional Knowledge Representation Formalisms
| 857
|
Partially observable Markov decision processes (POMDPs) provide an elegant mathematical framework for modeling complex decision and planning problems in stochastic domains in which states of the system are observable only indirectly, via a set of imperfect or noisy observations. The modeling advantage of POMDPs, however, comes at a price -- exact methods for solving them are computationally very expensive and thus applicable in practice only to very simple problems. We focus on efficient approximation (heuristic) methods that attempt to alleviate the computational problem and trade off accuracy for speed. We have two objectives here. First, we survey various approximation methods, analyze their properties and relations and provide some new insights into their differences. Second, we present a number of new approximation methods and novel refinements of existing techniques. The theoretical results are supported by experiments on a problem from the agent navigation domain.
|
Value-Function Approximations for Partially Observable Markov Decision
Processes
| 858
|
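One of the simplest value-function approximations discussed in this literature is the QMDP heuristic: solve the underlying fully observable MDP and act as if uncertainty vanished after one step. The sketch below is a generic illustration of that idea, not necessarily one of the paper's own refinements; the array shapes and names are assumptions.

```python
import numpy as np

def qmdp_policy(T, R, gamma=0.95, n_iter=200):
    """QMDP approximation for a POMDP.

    T[a, s, s'] : transition probabilities, R[s, a] : immediate rewards.
    The observation model is ignored, which is exactly what makes QMDP cheap
    (and what limits it when information-gathering actions matter).
    """
    n_actions, n_states, _ = T.shape
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iter):                   # value iteration on the underlying MDP
        V = Q.max(axis=1)
        for a in range(n_actions):
            Q[:, a] = R[:, a] + gamma * T[a] @ V
    def act(belief):
        return int(np.argmax(belief @ Q))     # expected Q-value under the belief
    return act
```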
Pearl and Dechter (1996) claimed that the d-separation criterion for conditional independence in acyclic causal networks also applies to networks of discrete variables that have feedback cycles, provided that the variables of the system are uniquely determined by the random disturbances. I show by example that this is not true in general. Some condition stronger than uniqueness is needed, such as the existence of a causal dynamics guaranteed to lead to the unique solution.
|
On Deducing Conditional Independence from d-Separation in Causal Graphs
with Feedback (Research Note)
| 859
|
Functional relationships between objects, called `attributes', are of considerable importance in knowledge representation languages, including Description Logics (DLs). A study of the literature indicates that papers have made, often implicitly, different assumptions about the nature of attributes: whether they are always required to have a value, or whether they can be partial functions. The work presented here is the first explicit study of this difference for subclasses of the CLASSIC DL, involving the same-as concept constructor. It is shown that although determining subsumption between concept descriptions has the same complexity (though requiring different algorithms), the story is different in the case of determining the least common subsumer (lcs). For attributes interpreted as partial functions, the lcs exists and can be computed relatively easily; even in this case our results correct and extend three previous papers about the lcs of DLs. In the case where attributes must have a value, the lcs may not exist, and even if it exists it may be of exponential size. Interestingly, it is possible to decide in polynomial time if the lcs exists.
|
What's in an Attribute? Consequences for the Least Common Subsumer
| 860
|
We study the complexity of the combination of the Description Logics ALCQ and ALCQI with a terminological formalism based on cardinality restrictions on concepts. These combinations can naturally be embedded into C^2, the two variable fragment of predicate logic with counting quantifiers, which yields decidability in NExpTime. We show that this approach leads to an optimal solution for ALCQI, as ALCQI with cardinality restrictions has the same complexity as C^2 (NExpTime-complete). In contrast, we show that for ALCQ, the problem can be solved in ExpTime. This result is obtained by a reduction of reasoning with cardinality restrictions to reasoning with the (in general weaker) terminological formalism of general axioms for ALCQ extended with nominals. Using the same reduction, we show that, for the extension of ALCQI with nominals, reasoning with general axioms is a NExpTime-complete problem. Finally, we sharpen this result and show that pure concept satisfiability for ALCQI with nominals is NExpTime-complete. Without nominals, this problem is known to be PSpace-complete.
|
The Complexity of Reasoning with Cardinality Restrictions and Nominals
in Expressive Description Logics
| 861
|
The local search algorithm WSat is one of the most successful algorithms for solving the satisfiability (SAT) problem. It is notably effective at solving hard Random 3-SAT instances near the so-called `satisfiability threshold', but still shows a peak in search cost near the threshold and large variations in cost over different instances. We make a number of significant contributions to the analysis of WSat on high-cost random instances, using the recently-introduced concept of the backbone of a SAT instance. The backbone is the set of literals which are entailed by an instance. We find that the number of solutions predicts the cost well for small-backbone instances but is much less relevant for the large-backbone instances which appear near the threshold and dominate in the overconstrained region. We show a very strong correlation between search cost and the Hamming distance to the nearest solution early in WSat's search. This pattern leads us to introduce a measure of the backbone fragility of an instance, which indicates how persistent the backbone is as clauses are removed. We propose that high-cost random instances for local search are those with very large backbones which are also backbone-fragile. We suggest that the decay in cost beyond the satisfiability threshold is due to increasing backbone robustness (the opposite of backbone fragility). Our hypothesis makes three correct predictions. First, that the backbone robustness of an instance is negatively correlated with the local search cost when other factors are controlled for. Second, that backbone-minimal instances (which are 3-SAT instances altered so as to be more backbone-fragile) are unusually hard for WSat. Third, that the clauses most often unsatisfied during search are those whose deletion has the most effect on the backbone. In understanding the pathologies of local search methods, we hope to contribute to the development of new and better techniques.
|
Backbone Fragility and the Local Search Cost Peak
| 862
|
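For orientation, the WSat procedure whose cost is analyzed above follows the basic WalkSAT scheme sketched below: pick a random unsatisfied clause, then flip one of its variables at random with some noise probability, otherwise by minimum break count. This is a generic sketch, not the exact implementation studied in the paper.

```python
import random

def walksat(clauses, n_vars, noise=0.5, max_flips=100000, seed=0):
    """Basic WalkSAT. Clauses are collections of non-zero int literals,
    e.g. [(1, -2), (2, 3)]; positive = variable, negative = negation."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]   # index 0 unused

    def satisfied(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    def break_count(var):
        """Clauses satisfied now that would become unsatisfied by flipping `var`."""
        sat_before = [satisfied(c) for c in clauses]
        assign[var] = not assign[var]
        broken = sum(1 for c, was in zip(clauses, sat_before)
                     if was and not satisfied(c))
        assign[var] = not assign[var]
        return broken

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign                        # model found
        clause = rng.choice(unsat)
        if rng.random() < noise:
            var = abs(rng.choice(list(clause)))  # random walk move
        else:
            var = min((abs(lit) for lit in clause), key=break_count)
        assign[var] = not assign[var]
    return None                                  # flip budget exhausted
```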
This paper describes a novel method by which a spoken dialogue system can learn to choose an optimal dialogue strategy from its experience interacting with human users. The method is based on a combination of reinforcement learning and performance modeling of spoken dialogue systems. The reinforcement learning component applies Q-learning (Watkins, 1989), while the performance modeling component applies the PARADISE evaluation framework (Walker et al., 1997) to learn the performance function (reward) used in reinforcement learning. We illustrate the method with a spoken dialogue system named ELVIS (EmaiL Voice Interactive System), that supports access to email over the phone. We conduct a set of experiments for training an optimal dialogue strategy on a corpus of 219 dialogues in which human users interact with ELVIS over the phone. We then test that strategy on a corpus of 18 dialogues. We show that ELVIS can learn to optimize its strategy selection for agent initiative, for reading messages, and for summarizing email folders.
|
An Application of Reinforcement Learning to Dialogue Strategy Selection
in a Spoken Dialogue System for Email
| 863
|
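A minimal sketch of the reinforcement-learning component described in the abstract above: tabular Q-learning over logged dialogues, with a single scalar (PARADISE-style) reward assigned at the end of each dialogue. The action names and data layout below are hypothetical, not the system's actual state or action space.

```python
from collections import defaultdict

# Hypothetical strategy choices; the real system's action set differs.
ACTIONS = ["system_initiative", "mixed_initiative", "read_first", "summarize_by_folder"]

def q_learning(dialogues, alpha=0.1, gamma=1.0):
    """Tabular Q-learning over logged dialogues.

    `dialogues` is a list of (turns, final_reward) pairs, where `turns` is a
    list of (state, action) pairs and `final_reward` is a scalar performance
    score (e.g. a PARADISE-style reward) for the whole dialogue.
    """
    Q = defaultdict(float)
    for turns, final_reward in dialogues:
        for t, (state, action) in enumerate(turns):
            if t + 1 < len(turns):
                next_state = turns[t + 1][0]
                target = gamma * max(Q[(next_state, a)] for a in ACTIONS)
            else:
                target = final_reward            # reward arrives only at dialogue end
            Q[(state, action)] += alpha * (target - Q[(state, action)])
    return Q

def best_action(Q, state):
    """Greedy strategy choice for a dialogue state."""
    return max(ACTIONS, key=lambda a: Q[(state, a)])
```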
We show that for several variations of partially observable Markov decision processes, polynomial-time algorithms for finding control policies either provably do not, or are unlikely to, have guarantees of finding policies within a constant factor or a constant summand of optimal. Here "unlikely" means "unless some complexity classes collapse," where the collapses considered are P=NP, P=PSPACE, or P=EXP. Until or unless these collapses are shown to hold, any control-policy designer must choose between such performance guarantees and efficient computation.
|
Nonapproximability Results for Partially Observable Markov Decision
Processes
| 864
|
The paper addresses the problem of computing goal orderings, which is one of the longstanding issues in AI planning. It makes two new contributions. First, it formally defines and discusses two different goal orderings, which are called the reasonable and the forced ordering. Both orderings are defined for simple STRIPS operators as well as for more complex ADL operators supporting negation and conditional effects. The complexity of these orderings is investigated and their practical relevance is discussed. Secondly, two different methods to compute reasonable goal orderings are developed. One of them is based on planning graphs, while the other investigates the set of actions directly. Finally, it is shown how the ordering relations, which have been derived for a given set of goals G, can be used to compute a so-called goal agenda that divides G into an ordered set of subgoals. Any planner can then, in principle, use the goal agenda to plan for increasing sets of subgoals. This can lead to an exponential complexity reduction, as the solution to a complex planning problem is found by solving easier subproblems. Since only a polynomial overhead is caused by the goal agenda computation, a potential exists to dramatically speed up planning algorithms as we demonstrate in the empirical evaluation, where we use this method in the IPP planner.
|
On Reasonable and Forced Goal Orderings and their Use in an
Agenda-Driven Planning Algorithm
| 865
|
The goal of this research is to develop agents that are adaptive, predictable, and timely. At first blush, these three requirements seem contradictory. For example, adaptation risks introducing undesirable side effects, thereby making agents' behavior less predictable. Furthermore, although formal verification can assist in ensuring behavioral predictability, it is known to be time-consuming. Our solution to the challenge of satisfying all three requirements is the following. Agents have finite-state automaton plans, which are adapted online via evolutionary learning (perturbation) operators. To ensure that critical behavioral constraints are always satisfied, agents' plans are first formally verified. They are then reverified after every adaptation. If reverification concludes that constraints are violated, the plans are repaired. The main objective of this paper is to improve the efficiency of reverification after learning, so that agents have a sufficiently rapid response time. We present two solutions: positive results that certain learning operators are a priori guaranteed to preserve useful classes of behavioral assurance constraints (which implies that no reverification is needed for these operators), and efficient incremental reverification algorithms for those learning operators that have negative a priori results.
|
Asimovian Adaptive Agents
| 866
|
A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Explicit bounds are also derived demonstrating that learning multiple tasks within an environment of related tasks can potentially give much better generalization than learning a single task.
|
A Model of Inductive Bias Learning
| 867
|
The chief aim of this paper is to propose mean-field approximations for a broad class of Belief networks, of which sigmoid and noisy-or networks can be seen as special cases. The approximations are based on a powerful mean-field theory suggested by Plefka. We show that Saul, Jaakkola and Jordan's approach is the first-order approximation in Plefka's approach, via a variational derivation. The application of Plefka's theory to belief networks is not computationally tractable. To tackle this problem we propose new approximations based on Taylor series. Small-scale experiments show that the proposed schemes are attractive.
|
Mean Field Methods for a Special Class of Belief Networks
| 868
|
The recent approaches of extending the GRAPHPLAN algorithm to handle more expressive planning formalisms raise the question of what the formal meaning of "expressive power" is. We formalize the intuition that expressive power is a measure of how concisely planning domains and plans can be expressed in a particular formalism by introducing the notion of "compilation schemes" between planning formalisms. Using this notion, we analyze the expressiveness of a large family of propositional planning formalisms, ranging from basic STRIPS to a formalism with conditional effects, partial state specifications, and propositional formulae in the preconditions. One of the results is that conditional effects cannot be compiled away if plan size should grow only linearly but can be compiled away if we allow for polynomial growth of the resulting plans. This result confirms that the recently proposed extensions to the GRAPHPLAN algorithm concerning conditional effects are optimal with respect to the "compilability" framework. Another result is that general propositional formulae cannot be compiled into conditional effects if the plan size should be preserved linearly. This implies that allowing general propositional formulae in preconditions and effect conditions adds another level of difficulty in generating a plan.
|
On the Compilability and Expressive Power of Propositional Planning
Formalisms
| 869
|
In order to generate plans for agents with multiple actuators, agent teams, or distributed controllers, we must be able to represent and plan using concurrent actions with interacting effects. This has historically been considered a challenging task requiring a temporal planner with the ability to reason explicitly about time. We show that with simple modifications, the STRIPS action representation language can be used to represent interacting actions. Moreover, algorithms for partial-order planning require only small modifications in order to be applied in such multiagent domains. We demonstrate this fact by developing a sound and complete partial-order planner for planning with concurrent interacting actions, POMP, that extends existing partial-order planners in a straightforward way. These results open the way to the use of partial-order planners for the centralized control of cooperative multiagent systems.
|
Partial-Order Planning with Concurrent Interacting Actions
| 870
|
Domain-independent planning is a hard combinatorial problem. Taking into account plan quality makes the task even more difficult. This article introduces Planning by Rewriting (PbR), a new paradigm for efficient high-quality domain-independent planning. PbR exploits declarative plan-rewriting rules and efficient local search techniques to transform an easy-to-generate, but possibly suboptimal, initial plan into a high-quality plan. In addition to addressing the issues of planning efficiency and plan quality, this framework offers a new anytime planning algorithm. We have implemented this planner and applied it to several existing domains. The experimental results show that the PbR approach provides significant savings in planning effort while generating high-quality plans.
|
Planning by Rewriting
| 871
|
Partially observable Markov decision processes (POMDPs) have recently become popular among many AI researchers because they serve as a natural model for planning under uncertainty. Value iteration is a well-known algorithm for finding optimal policies for POMDPs. It typically takes a large number of iterations to converge. This paper proposes a method for accelerating the convergence of value iteration. The method has been evaluated on an array of benchmark problems and was found to be very effective: It enabled value iteration to converge after only a few iterations on all the test problems.
|
Speeding Up the Convergence of Value Iteration in Partially Observable
Markov Decision Processes
| 872
|
We tackle the problem of planning in nondeterministic domains, by presenting a new approach to conformant planning. Conformant planning is the problem of finding a sequence of actions that is guaranteed to achieve the goal despite the nondeterminism of the domain. Our approach is based on the representation of the planning domain as a finite state automaton. We use Symbolic Model Checking techniques, in particular Binary Decision Diagrams, to compactly represent and efficiently search the automaton. In this paper we make the following contributions. First, we present a general planning algorithm for conformant planning, which applies to fully nondeterministic domains, with uncertainty in the initial condition and in action effects. The algorithm is based on a breadth-first, backward search, and returns conformant plans of minimal length, if a solution to the planning problem exists, otherwise it terminates concluding that the problem admits no conformant solution. Second, we provide a symbolic representation of the search space based on Binary Decision Diagrams (BDDs), which is the basis for search techniques derived from symbolic model checking. The symbolic representation makes it possible to analyze potentially large sets of states and transitions in a single computation step, thus providing for an efficient implementation. Third, we present CMBP (Conformant Model Based Planner), an efficient implementation of the data structures and algorithm described above, directly based on BDD manipulations, which allows for a compact representation of the search layers and an efficient implementation of the search steps. Finally, we present an experimental comparison of our approach with the state-of-the-art conformant planners CGP, QBFPLAN and GPT. Our analysis includes all the planning problems from the distribution packages of these systems, plus other problems defined to stress a number of specific factors. Our approach appears to be the most effective: CMBP is strictly more expressive than QBFPLAN and CGP and, in all the problems where a comparison is possible, CMBP outperforms its competitors, sometimes by orders of magnitude.
|
Conformant Planning via Symbolic Model Checking
| 873
|
Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, AIS-BN, that shows promising convergence rates even under extreme conditions and seems to outperform the existing sampling algorithms consistently. Three sources of this performance improvement are (1) two heuristics for initialization of the importance function that are based on the theoretical properties of importance sampling in finite-dimensional integrals and the structural advantages of Bayesian networks, (2) a smooth learning method for the importance function, and (3) a dynamic weighting function for combining samples from different stages of the algorithm. We tested the performance of the AIS-BN algorithm along with two state of the art general purpose sampling algorithms, likelihood weighting (Fung and Chang, 1989; Shachter and Peot, 1989) and self-importance sampling (Shachter and Peot, 1989). We used in our tests three large real Bayesian network models available to the scientific community: the CPCS network (Pradhan et al., 1994), the PathFinder network (Heckerman, Horvitz, and Nathwani, 1990), and the ANDES network (Conati, Gertner, VanLehn, and Druzdzel, 1997), with evidence as unlikely as 10^-41. While the AIS-BN algorithm always performed better than the other two algorithms, in the majority of the test cases it achieved orders of magnitude improvement in precision of the results. Improvement in speed given a desired precision is even more dramatic, although we are unable to report numerical results here, as the other algorithms almost never achieved the precision reached even by the first few iterations of the AIS-BN algorithm.
|
AIS-BN: An Adaptive Importance Sampling Algorithm for Evidential
Reasoning in Large Bayesian Networks
| 874
|
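The sketch below illustrates, on a deliberately tiny two-node network, the core idea behind adaptive importance sampling for evidential reasoning: sample from an importance function Q, weight each sample by P(x, E)/Q(x), and adapt Q toward the weighted posterior between stages. It simplifies AIS-BN considerably (for instance, it averages the stage estimates uniformly instead of using the paper's dynamic weighting function), and all numbers are made up for illustration.

```python
import random

# Deliberately tiny network A -> B with rare evidence B = 1 (all numbers made up).
P_A1 = 0.3
P_B1_GIVEN_A = {0: 0.001, 1: 0.2}

def estimate_p_evidence(n_stages=5, samples_per_stage=2000, eta=0.5, seed=0):
    """Estimate P(B=1) by importance sampling on A, adapting the importance
    function Q(A) toward the weighted posterior between stages."""
    rng = random.Random(seed)
    q_a1 = P_A1                                   # start from the prior
    stage_estimates = []
    for _ in range(n_stages):
        weights, weight_a1 = [], 0.0
        for _ in range(samples_per_stage):
            a = 1 if rng.random() < q_a1 else 0
            p_a = P_A1 if a == 1 else 1.0 - P_A1
            q_a = q_a1 if a == 1 else 1.0 - q_a1
            w = p_a * P_B1_GIVEN_A[a] / q_a       # importance weight P(a, E) / Q(a)
            weights.append(w)
            if a == 1:
                weight_a1 += w
        stage_estimates.append(sum(weights) / len(weights))
        posterior_a1 = weight_a1 / max(sum(weights), 1e-300)
        q_a1 = (1.0 - eta) * q_a1 + eta * posterior_a1   # learn the importance function
    return sum(stage_estimates) / len(stage_estimates)   # uniform, not dynamic, weighting

# Exact answer: 0.3 * 0.2 + 0.7 * 0.001 = 0.0607
print(estimate_p_evidence())
```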
In recent years, many improvements to backtracking algorithms for solving constraint satisfaction problems have been proposed. The techniques for improving backtracking algorithms can be conveniently classified as look-ahead schemes and look-back schemes. Unfortunately, look-ahead and look-back schemes are not entirely orthogonal as it has been observed empirically that the enhancement of look-ahead techniques is sometimes counterproductive to the effects of look-back techniques. In this paper, we focus on the relationship between the two most important look-ahead techniques---using a variable ordering heuristic and maintaining a level of local consistency during the backtracking search---and the look-back technique of conflict-directed backjumping (CBJ). We show that there exists a "perfect" dynamic variable ordering such that CBJ becomes redundant. We also show theoretically that as the level of local consistency that is maintained in the backtracking search is increased, the less that backjumping will be an improvement. Our theoretical results partially explain why a backtracking algorithm doing more in the look-ahead phase cannot benefit more from the backjumping look-back scheme. Finally, we show empirically that adding CBJ to a backtracking algorithm that maintains generalized arc consistency (GAC), an algorithm that we refer to as GAC-CBJ, can still provide orders of magnitude speedups. Our empirical results contrast with Bessiere and Regin's conclusion (1996) that CBJ is useless to an algorithm that maintains arc consistency.
|
Conflict-Directed Backjumping Revisited
| 875
|
This paper presents an implemented system for recognizing the occurrence of events described by simple spatial-motion verbs in short image sequences. The semantics of these verbs is specified with event-logic expressions that describe changes in the state of force-dynamic relations between the participants of the event. An efficient finite representation is introduced for the infinite sets of intervals that occur when describing liquid and semi-liquid events. Additionally, an efficient procedure using this representation is presented for inferring occurrences of compound events, described with event-logic expressions, from occurrences of primitive events. Using force dynamics and event logic to specify the lexical semantics of events allows the system to be more robust than prior systems based on motion profile.
|
Grounding the Lexical Semantics of Verbs in Visual Perception using
Force Dynamics and Event Logic
| 876
|
An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund and Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithms. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier -- especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing their performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes in the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees.
|
Popular Ensemble Methods: An Empirical Study
| 877
|
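A minimal sketch of the Bagging side of the comparison above: each classifier is trained on a bootstrap resample of the training set and test predictions are combined by unweighted majority vote. The use of scikit-learn decision trees is an assumption for illustration; Boosting (which reweights hard examples) is not shown.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_predict(X_train, y_train, X_test, n_estimators=25, seed=0):
    """Bagging: bootstrap resamples plus unweighted majority vote.
    Assumes numpy arrays and non-negative integer class labels."""
    rng = np.random.default_rng(seed)
    all_preds = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(y_train), size=len(y_train))   # bootstrap sample
        tree = DecisionTreeClassifier(random_state=seed).fit(X_train[idx], y_train[idx])
        all_preds.append(tree.predict(X_test))
    votes = np.stack(all_preds)                                   # (n_estimators, n_test)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```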
This paper presents an evolutionary algorithm with a new goal-sequence domination scheme for better decision support in multi-objective optimization. The approach allows the inclusion of advanced hard/soft priority and constraint information on each objective component, and is capable of incorporating multiple specifications with overlapping or non-overlapping objective functions via logical 'OR' and 'AND' connectives to drive the search towards multiple regions of trade-off. In addition, we propose a dynamic sharing scheme that is simple and adaptively estimated according to the on-line population distribution without needing any a priori parameter setting. Each feature in the proposed algorithm is examined to show its respective contribution, and the performance of the algorithm is compared with other evolutionary optimization methods. It is shown that the proposed algorithm has performed well in the diversity of evolutionary search and uniform distribution of non-dominated individuals along the final trade-offs, without significant computational effort. The algorithm is also applied to the design optimization of a practical servo control system for hard disk drives with a single voice-coil-motor actuator. Results of the evolutionary designed servo control system show a superior closed-loop performance compared to classical PID or RPT approaches.
|
An Evolutionary Algorithm with Advanced Goal and Priority Specification
for Multi-objective Optimization
| 878
|
This paper presents GRT, a domain-independent heuristic planning system for STRIPS worlds. GRT solves problems in two phases. In the pre-processing phase, it estimates the distance between each fact and the goals of the problem, in a backward direction. Then, in the search phase, these estimates are used in order to further estimate the distance between each intermediate state and the goals, thus guiding the search process in a forward direction on a best-first basis. The paper presents the benefits of adopting opposite directions in the pre-processing and search phases, discusses some difficulties that arise in the pre-processing phase and introduces techniques to cope with them. Moreover, it presents several methods of improving the efficiency of the heuristic, by enriching the representation and by reducing the size of the problem. Finally, a method of overcoming locally optimal states, based on domain axioms, is proposed. According to it, difficult problems are decomposed into easier sub-problems that have to be solved sequentially. The performance results from various domains, including those of the recent planning competitions, show that GRT is among the fastest planners.
|
The GRT Planning System: Backward Heuristic Construction in Forward
State-Space Planning
| 879
|
In the recent Artificial Intelligence literature, an intensive research effort has been devoted, for various algebras of qualitative relations used in the representation of temporal and spatial knowledge, to the problem of classifying the computational complexity of reasoning problems for subsets of those algebras. The main purpose of this research is to describe a restricted set of maximal tractable subalgebras, ideally in an exhaustive fashion with respect to the hosting algebras. In this paper we introduce a novel algebra for reasoning about Spatial Congruence, show that the satisfiability problem in the spatial algebra MC-4 is NP-complete, and present a complete classification of tractability in the algebra, based on the identification of three maximal tractable subclasses, one containing the basic relations. The three subclasses are formed by 14, 10 and 9 relations out of the 16 that form the full algebra.
|
The Complexity of Reasoning about Spatial Congruence
| 880
|
Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a {\em biased} estimate of the gradient of the {\em average reward} in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter $\beta\in [0,1)$ (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter $\beta$ is related to the {\em mixing time} of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward
|
Infinite-Horizon Policy-Gradient Estimation
| 881
|
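A sketch of the core GPOMDP recursion described above: an eligibility trace of score functions discounted by beta, and a running average of reward times trace as the gradient estimate. The `env` and `policy` interfaces below are assumptions made for the sketch, not part of the paper.

```python
import numpy as np

def gpomdp_gradient(env, policy, theta, beta=0.9, horizon=100000, seed=0):
    """Single-trajectory estimate of the average-reward gradient, GPOMDP-style.

    Assumed interfaces (not from the paper):
      env.reset() -> observation
      env.step(action) -> (observation, reward)
      policy.sample(theta, obs, rng) -> action
      policy.grad_log_prob(theta, obs, action) -> d/dtheta log mu(action | theta, obs)
    """
    rng = np.random.default_rng(seed)
    z = np.zeros_like(theta)          # eligibility trace of score functions
    delta = np.zeros_like(theta)      # running gradient estimate
    obs = env.reset()
    for t in range(horizon):
        action = policy.sample(theta, obs, rng)
        score = policy.grad_log_prob(theta, obs, action)
        obs, reward = env.step(action)
        z = beta * z + score                       # discount the trace by beta in [0, 1)
        delta += (reward * z - delta) / (t + 1)    # running average of r * z
    return delta
```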
Description Logics (DLs) are suitable, well-known logics for managing structured knowledge. They allow reasoning about individuals and well-defined concepts, i.e., sets of individuals with common properties. Experience in using DLs in applications has shown that in many cases we would like to extend their capabilities. In particular, their use in the context of Multimedia Information Retrieval (MIR) leads to the conviction that such DLs should allow the treatment of the inherent imprecision in multimedia object content representation and retrieval. In this paper we will present a fuzzy extension of ALC, combining Zadeh's fuzzy logic with a classical DL. In particular, concepts become fuzzy and, thus, reasoning about imprecise concepts is supported. We will define its syntax, its semantics, describe its properties and present a constraint propagation calculus for reasoning in it.
|
Reasoning within Fuzzy Description Logics
| 882
|
Top-down induction of decision trees has been observed to suffer from the inadequate functioning of the pruning phase. In particular, it is known that the size of the resulting tree grows linearly with the sample size, even though the accuracy of the tree does not improve. Reduced Error Pruning is an algorithm that has been used as a representative technique in attempts to explain the problems of decision tree learning. In this paper we present analyses of Reduced Error Pruning in three different settings. First we study the basic algorithmic properties of the method, properties that hold independent of the input decision tree and pruning examples. Then we examine a situation that intuitively should lead to the subtree under consideration being replaced by a leaf node, one in which the class label and attribute values of the pruning examples are independent of each other. This analysis is conducted under two different assumptions. The general analysis shows that the pruning probability of a node fitting pure noise is bounded by a function that decreases exponentially as the size of the tree grows. In a specific analysis we assume that the examples are distributed uniformly across the tree. This assumption lets us approximate the number of subtrees that are pruned because they do not receive any pruning examples. This paper clarifies the different variants of the Reduced Error Pruning algorithm, brings new insight to its algorithmic properties, analyses the algorithm with fewer imposed assumptions than before, and includes the previously overlooked empty subtrees in the analysis.
|
An Analysis of Reduced Error Pruning
| 883
|
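A minimal sketch of bottom-up Reduced Error Pruning on a simple binary-tree representation: a subtree is replaced by a majority-label leaf whenever that does not increase error on the held-out pruning set. The tree data structure and the handling of empty pruning sets below are illustrative choices; the paper's analysis distinguishes variants on exactly such details.

```python
class Node:
    """A binary decision tree node; `test(x)` routes an example left (True) or right."""
    def __init__(self, test=None, left=None, right=None, label=None):
        self.test, self.left, self.right, self.label = test, left, right, label

    def is_leaf(self):
        return self.label is not None

    def predict(self, x):
        if self.is_leaf():
            return self.label
        return (self.left if self.test(x) else self.right).predict(x)

def errors(node, examples):
    return sum(1 for x, y in examples if node.predict(x) != y)

def majority_label(examples, default):
    labels = [y for _, y in examples]
    return max(set(labels), key=labels.count) if labels else default

def reduced_error_prune(node, pruning_examples, default_label=0):
    """Bottom-up REP: replace a subtree by a majority-label leaf whenever this
    does not increase error on the held-out pruning set. Subtrees that receive
    no pruning examples collapse to the default label here; variants differ."""
    if node.is_leaf():
        return node
    left_ex = [(x, y) for x, y in pruning_examples if node.test(x)]
    right_ex = [(x, y) for x, y in pruning_examples if not node.test(x)]
    node.left = reduced_error_prune(node.left, left_ex, default_label)
    node.right = reduced_error_prune(node.right, right_ex, default_label)
    leaf = Node(label=majority_label(pruning_examples, default_label))
    if errors(leaf, pruning_examples) <= errors(node, pruning_examples):
        return leaf
    return node
```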
This paper investigates the problems arising in the construction of a program to play the game of contract bridge. These problems include both the difficulty of solving the game's perfect information variant, and techniques needed to address the fact that bridge is not, in fact, a perfect information game. GIB, the program being described, involves five separate technical advances: partition search, the practical application of Monte Carlo techniques to realistic problems, a focus on achievable sets to solve problems inherent in the Monte Carlo approach, an extension of alpha-beta pruning from total orders to arbitrary distributive lattices, and the use of squeaky wheel optimization to find approximately optimal solutions to cardplay problems. GIB is believed to be of approximately expert caliber, and is currently the strongest computer bridge program in the world.
|
GIB: Imperfect Information in a Computationally Challenging Game
| 884
|
Enforcing local consistencies is one of the main features of constraint reasoning. Which level of local consistency should be used when searching for solutions in a constraint network is a basic question. Arc consistency and partial forms of arc consistency have been widely studied, and have been known for some time through the forward checking and MAC search algorithms. Until recently, stronger forms of local consistency remained limited to those that change the structure of the constraint graph, and thus, could not be used in practice, especially on large networks. This paper focuses on the local consistencies that are stronger than arc consistency, without changing the structure of the network, i.e., only removing inconsistent values from the domains. In the last five years, several such local consistencies have been proposed by us or by others. We give an overview of all of them, and highlight some relations between them. We compare them both theoretically and experimentally, considering their pruning efficiency and the time required to enforce them.
|
Domain Filtering Consistencies
| 885
|
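Arc consistency, the baseline that the stronger consistencies surveyed above are compared against, can be enforced with the classic AC-3 scheme sketched below. The CSP encoding (an explicit predicate per directed arc) is an assumption made for brevity.

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 arc consistency for a binary CSP.

    domains:     dict mapping each variable to a set of candidate values.
    constraints: dict mapping each directed arc (x, y) to a predicate p(vx, vy);
                 both directions of every constraint must be present.
    Removes unsupported values in place; returns False iff a domain empties.
    """
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        removed = {vx for vx in domains[x]
                   if not any(allowed(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False
            # domain of x shrank: revisit every arc that uses x as its support
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True

# Example: x < y over {1, 2, 3} prunes 3 from x and 1 from y.
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda vx, vy: vx < vy, ("y", "x"): lambda vy, vx: vx < vy}
print(ac3(doms, cons), doms)   # -> True {'x': {1, 2}, 'y': {2, 3}}
```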
In this paper, we present a method for recognising an agent's behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem on-line plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent's plan. Our contributions in this paper are twofold. In terms of probabilistic inference, we introduce the Abstract Hidden Markov Model (AHMM), a novel type of stochastic processes, provide its dynamic Bayesian network (DBN) structure and analyse the properties of this network. We then describe an application of the Rao-Blackwellised Particle Filter to the AHMM which allows us to construct an efficient, hybrid inference method for this model. In terms of plan recognition, we propose a novel plan recognition framework based on the AHMM as the plan execution model. The Rao-Blackwellised hybrid inference for AHMM can take advantage of the independence properties inherent in a model of plan execution, leading to an algorithm for online probabilistic plan recognition that scales well with the number of levels in the plan hierarchy. This illustrates that while stochastic models for plan execution can be complex, they exhibit special structures which, if exploited, can lead to efficient plan recognition algorithms. We demonstrate the usefulness of the AHMM framework via a behaviour recognition system in a complex spatial environment using distributed video surveillance data.
|
Policy Recognition in the Abstract Hidden Markov Model
| 886
|
We describe and evaluate the algorithmic techniques that are used in the FF planning system. Like the HSP system, FF relies on forward state space search, using a heuristic that estimates goal distances by ignoring delete lists. Unlike HSP's heuristic, our method does not assume facts to be independent. We introduce a novel search strategy that combines hill-climbing with systematic search, and we show how other powerful heuristic information can be extracted and used to prune the search space. FF was the most successful automatic planner at the recent AIPS-2000 planning competition. We review the results of the competition, give data for other benchmark domains, and investigate the reasons for the runtime performance of FF compared to HSP.
|
The FF Planning System: Fast Plan Generation Through Heuristic Search
| 887
|
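The heuristic idea mentioned above, estimating goal distance by ignoring delete lists, can be sketched as a forward reachability computation on the relaxed problem. The version below simply counts relaxed layers until the goals appear; FF itself extracts a relaxed plan from this structure, so this is a simplified stand-in with illustrative names.

```python
def relaxed_distance(init_facts, goal_facts, actions):
    """Estimate goal distance on the delete relaxation (delete lists ignored).

    actions: list of (preconditions, add_effects) pairs, each a set of facts.
    Returns the number of relaxed layers needed before all goals hold, or None
    if the goals are unreachable even when deletes are ignored.
    """
    facts = set(init_facts)
    goals = set(goal_facts)
    layers = 0
    while not goals <= facts:
        new_facts = set()
        for pre, add in actions:
            if pre <= facts:
                new_facts |= add
        if new_facts <= facts:       # fixpoint reached without the goals
            return None
        facts |= new_facts
        layers += 1
    return layers

# Toy example: two relaxed layers are needed to reach the goal.
acts = [({"a"}, {"b"}), ({"b"}, {"c"})]
print(relaxed_distance({"a"}, {"c"}, acts))   # -> 2
```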
The First Trading Agent Competition (TAC) was held from June 22nd to July 8th, 2000. TAC was designed to create a benchmark problem in the complex domain of e-marketplaces and to motivate researchers to apply unique approaches to a common task. This article describes ATTac-2000, the first-place finisher in TAC. ATTac-2000 uses a principled bidding strategy that includes several elements of adaptivity. In addition to the success at the competition, isolated empirical results are presented indicating the robustness and effectiveness of ATTac-2000's adaptive strategy.
|
ATTac-2000: An Adaptive Autonomous Bidding Agent
| 888
|
The theoretical properties of qualitative spatial reasoning in the RCC8 framework have been analyzed extensively. However, no empirical investigation has been made yet. Our experiments show that adapting the algorithms used for qualitative temporal reasoning can solve large RCC8 instances, even if they are in the phase transition region -- provided that one uses the maximal tractable subsets of RCC8 that we have identified. In particular, we demonstrate that the orthogonal combination of heuristic methods is successful in solving almost all apparently hard instances in the phase transition region up to a certain size in reasonable time.
|
Efficient Methods for Qualitative Spatial Reasoning
| 889
|
This paper presents our work on development of OWL-driven systems for formal representation and reasoning about terminological knowledge and facts in petrology. The long-term aim of our project is to provide solid foundations for a large-scale integration of various kinds of knowledge, including basic terms, rock classification algorithms, findings and reports. We describe three steps we have taken towards that goal here. First, we develop a semi-automated procedure for transforming a database of igneous rock samples to texts in a controlled natural language (CNL), and then a collection of OWL ontologies. Second, we create an OWL ontology of important petrology terms currently described in natural language thesauri. We describe a prototype of a tool for collecting definitions from domain experts. Third, we present an approach to formalization of current industrial standards for classification of rock samples, which requires linear equations in OWL 2. In conclusion, we discuss a range of opportunities arising from the use of semantic technologies in petrology and outline the future work in this area.
|
Towards OWL-based Knowledge Representation in Petrology
| 890
|
This paper discusses a system that accelerates reinforcement learning by using transfer from related tasks. Without such transfer, even if two tasks are very similar at some abstract level, an extensive re-learning effort is required. The system achieves much of its power by transferring parts of previously learned solutions rather than a single complete solution. The system exploits strong features in the multi-dimensional function produced by reinforcement learning in solving a particular task. These features are stable and easy to recognize early in the learning process. They generate a partitioning of the state space and thus the function. The partition is represented as a graph. This is used to index and compose functions stored in a case base to form a close approximation to the solution of the new task. Experiments demonstrate that function composition often produces more than an order of magnitude increase in learning rate compared to a basic reinforcement learning algorithm.
|
Accelerating Reinforcement Learning by Composing Solutions of
Automatically Identified Subtasks
| 891
|
We propose a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e. definite clause programs containing probabilistic facts with a parameterized distribution. It extends the traditional least Herbrand model semantics in logic programming to distribution semantics, possible world semantics with a probability distribution which is unconditionally applicable to arbitrary logic programs including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM algorithm, the graphical EM algorithm, that runs for a class of parameterized logic programs representing sequential decision processes where each decision is exclusive and independent. It runs on a new data structure called support graphs describing the logical relationship between observations and their explanations, and learns parameters by computing inside and outside probability generalized for logic programs. The complexity analysis shows that when combined with OLDT search for all explanations for observations, the graphical EM algorithm, despite its generality, has the same time complexity as existing EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside algorithm for PCFGs, and the one for singly connected Bayesian networks that have been developed independently in each research field. Learning experiments with PCFGs using two corpora of moderate size indicate that the graphical EM algorithm can significantly outperform the Inside-Outside algorithm.
|
Parameter Learning of Logic Programs for Symbolic-Statistical Modeling
| 892
|
I consider the problem of learning an optimal path graphical model from data and show the problem to be NP-hard for the maximum likelihood and minimum description length approaches and a Bayesian approach. This hardness result holds despite the fact that the problem is a restriction of the polynomially solvable problem of finding the optimal tree graphical model.
|
Finding a Path is Harder than Finding a Tree
| 893
|
Simple conceptual graphs are considered the kernel of most knowledge representation formalisms built upon Sowa's model. Reasoning in this model can be expressed by a graph homomorphism called projection, whose semantics is usually given in terms of positive, conjunctive, existential FOL. We present here a family of extensions of this model, based on rules and constraints, keeping graph homomorphism as the basic operation. We focus on the formal definitions of the different models obtained, including their operational semantics and relationships with FOL, and we analyze the decidability and complexity of the associated problems (consistency and deduction). As soon as rules are involved in reasoning, these problems are not decidable, but we exhibit a condition under which they fall in the polynomial hierarchy. These results extend and complete the ones already published by the authors. Moreover, we systematically study the complexity of some particular cases obtained by restricting the form of constraints and/or rules.
|
Extensions of Simple Conceptual Graphs: the Complexity of Rules and
Constraints
| 894
|
Fusions are a simple way of combining logics. For normal modal logics, fusions have been investigated in detail. In particular, it is known that, under certain conditions, decidability transfers from the component logics to their fusion. Though description logics are closely related to modal logics, they are not necessarily normal. In addition, ABox reasoning in description logics is not covered by the results from modal logics. In this paper, we extend the decidability transfer results from normal modal logics to a large class of description logics. To cover different description logics in a uniform way, we introduce abstract description systems, which can be seen as a common generalization of description and modal logics, and show the transfer results in this general setting.
|
Fusions of Description Logics and Abstract Description Systems
| 895
|
Inductive logic programming, or relational learning, is a powerful paradigm for machine learning or data mining. However, in order for ILP to become practically useful, the efficiency of ILP systems must improve substantially. To this end, the notion of a query pack is introduced: it structures sets of similar queries. Furthermore, a mechanism is described for executing such query packs. A complexity analysis shows that considerable efficiency improvements can be achieved through the use of this query pack execution mechanism. This claim is supported by empirical results obtained by incorporating support for query pack execution in two existing learning systems.
|
Improving the Efficiency of Inductive Logic Programming Through the Use
of Query Packs
| 896
|
Recent trends in planning research have led to empirical comparison becoming commonplace. The field has started to settle into a methodology for such comparisons, which for obvious practical reasons requires running a subset of planners on a subset of problems. In this paper, we characterize the methodology and examine eight implicit assumptions about the problems, planners and metrics used in many of these comparisons. The problem assumptions are: PR1) the performance of a general purpose planner should not be penalized/biased if executed on a sampling of problems and domains, PR2) minor syntactic differences in representation do not affect performance, and PR3) problems should be solvable by STRIPS capable planners unless they require ADL. The planner assumptions are: PL1) the latest version of a planner is the best one to use, PL2) default parameter settings approximate good performance, and PL3) time cut-offs do not unduly bias outcome. The metrics assumptions are: M1) performance degrades similarly for each planner when run on degraded runtime environments (e.g., machine platform) and M2) the number of plan steps distinguishes performance. We find that most of these assumptions are not supported empirically; in particular, that planners are affected differently by these assumptions. We conclude with a call to the community to devote research resources to improving the state of the practice and especially to enhancing the available benchmark problems.
|
A Critical Assessment of Benchmark Comparison in Planning
| 897
|
An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of "normal" examples with only a small percentage of "abnormal" or "interesting" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
|
SMOTE: Synthetic Minority Over-sampling Technique
| 898
|
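A minimal sketch of the synthetic over-sampling step described above: each synthetic example is an interpolation between a minority-class example and one of its k nearest minority-class neighbours. The helper name and the brute-force neighbour computation are illustrative; it assumes more than k minority examples with numeric features.

```python
import numpy as np

def smote(X_minority, n_synthetic, k=5, seed=0):
    """Generate synthetic minority examples by interpolating each sampled
    minority point with one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X_minority, dtype=float)
    # pairwise distances within the minority class (brute force, for clarity)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    neighbours = np.argsort(dists, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X))
        j = neighbours[i][rng.integers(k)]
        gap = rng.random()                        # interpolation factor in [0, 1)
        synthetic.append(X[i] + gap * (X[j] - X[i]))
    return np.vstack(synthetic)
```

The synthetic rows would then be appended to the minority class before (or alongside) under-sampling the majority class, as in the combination evaluated in the abstract.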
Common wisdom has it that small distinctions in the probabilities (parameters) quantifying a belief network do not matter much for the results of probabilistic queries. Yet, one can develop realistic scenarios under which small variations in network parameters can lead to significant changes in computed queries. A pending theoretical question is then to analytically characterize parameter changes that do or do not matter. In this paper, we study the sensitivity of probabilistic queries to changes in network parameters and prove some tight bounds on the impact that such parameters can have on queries. Our analytic results pinpoint some interesting situations under which parameter changes do or do not matter. These results are important for knowledge engineers as they help them identify influential network parameters. They also help explain some of the previous experimental results and observations with regards to network robustness against parameter changes.
|
When do Numbers Really Matter?
| 899
|