Columns: id (string, 9-16 chars) | title (string, 4-278 chars) | categories (string, 5-104 chars) | abstract (string, 6-4.09k chars)
cs/0001023
Structured Language Modeling for Speech Recognition
cs.CL
A new language model for speech recognition is presented. The model develops hidden hierarchical syntactic-like structure incrementally and uses it to extract meaningful information from the word history, thus complementing the locality of currently used trigram models. The structured language model (SLM) and its performance in a two-pass speech recognizer --- lattice decoding --- are presented. Experiments on the WSJ corpus show an improvement in both perplexity (PPL) and word error rate (WER) over conventional trigram models.
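For orientation, a minimal sketch of the conventional trigram baseline that the SLM is measured against (not the SLM itself); the add-alpha smoothing and sentence-boundary tokens are illustrative assumptions, not the paper's setup:

    import math
    from collections import Counter

    def train_trigram(corpus):
        # corpus: list of tokenized sentences (lists of word strings)
        tri, bi = Counter(), Counter()
        for sent in corpus:
            toks = ["<s>", "<s>"] + sent + ["</s>"]
            for i in range(2, len(toks)):
                tri[(toks[i-2], toks[i-1], toks[i])] += 1
                bi[(toks[i-2], toks[i-1])] += 1
        return tri, bi

    def perplexity(corpus, tri, bi, vocab_size, alpha=1.0):
        # PPL = exp(-(1/N) * sum log P(w_i | w_{i-2}, w_{i-1})), add-alpha smoothed
        log_sum, n = 0.0, 0
        for sent in corpus:
            toks = ["<s>", "<s>"] + sent + ["</s>"]
            for i in range(2, len(toks)):
                num = tri[(toks[i-2], toks[i-1], toks[i])] + alpha
                den = bi[(toks[i-2], toks[i-1])] + alpha * vocab_size
                log_sum += math.log(num / den)
                n += 1
        return math.exp(-log_sum / n)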
cs/0001024
A Parallel Algorithm for Dilated Contour Extraction from Bilevel Images
cs.CV
We describe a simple but efficient algorithm for the generation of dilated contours from bilevel images. The initial part of the contour extraction is argued to be a good candidate for parallel computer code generation. The remainder of the algorithm is of a linear nature.
cs/0001025
Computational Geometry Column 38
cs.CG cs.CV
Recent results on curve reconstruction are described.
cs/0001027
Pattern Discovery and Computational Mechanics
cs.LG cs.NE
Computational mechanics is a method for discovering, describing and quantifying patterns, using tools from statistical physics. It constructs optimal, minimal models of stochastic processes and their underlying causal structures. These models tell us about the intrinsic computation embedded within a process---how it stores and transforms information. Here we summarize the mathematics of computational mechanics, especially recent optimality and uniqueness results. We also expound the principles and motivations underlying computational mechanics, emphasizing its connections to the minimum description length principle, PAC theory, and other aspects of machine learning.
cs/0002001
Computing large and small stable models
cs.LO cs.AI
In this paper, we focus on the problems of the existence and computation of small and large stable models. We show that for every fixed integer k, there is a linear-time algorithm to decide the problem LSM (large stable models problem): does a logic program P have a stable model of size at least |P|-k? In contrast, we show that the problem SSM (small stable models problem), deciding whether a logic program P has a stable model of size at most k, is much harder. We present two algorithms for this problem, but their running time is given by polynomials of order depending on k. We show that the problem SSM is fixed-parameter intractable by demonstrating that it is W[2]-hard. This result implies that it is unlikely that an algorithm exists to compute stable models of size at most k running in time O(n^c), where c is a constant independent of k. We also provide an upper bound on the fixed-parameter complexity of the problem SSM by showing that it belongs to the class W[3].
cs/0002002
Uniform semantic treatment of default and autoepistemic logics
cs.AI
We revisit the issue of connections between two leading formalisms in nonmonotonic reasoning: autoepistemic logic and default logic. For each logic we develop a comprehensive semantic framework based on the notion of a belief pair. The set of all belief pairs together with the so-called knowledge ordering forms a complete lattice. For each logic, we introduce several semantics by means of fixpoints of operators on the lattice of belief pairs. Our results elucidate an underlying isomorphism of the respective semantic constructions. In particular, we show that the interpretation of defaults as modal formulas proposed by Konolige allows us to represent all semantics for default logic in terms of the corresponding semantics for autoepistemic logic. Thus, our results conclusively establish that default logic can indeed be viewed as a fragment of autoepistemic logic. However, as we also demonstrate, the semantics of Moore and Reiter are given by different operators and occupy different locations in their corresponding families of semantics. This result explains the source of the longstanding difficulty of formally relating these two semantics. In the paper, we also discuss approximating skeptical reasoning with autoepistemic and default logics and establish constructive principles behind such approximations.
cs/0002003
On the accuracy and running time of GSAT
cs.AI
Randomized algorithms for deciding satisfiability were shown to be effective in solving problems with thousands of variables. However, these algorithms are not complete. That is, they provide no guarantee that a satisfying assignment, if one exists, will be found. Thus, when studying randomized algorithms, there are two important characteristics that need to be considered: the running time and, even more importantly, the accuracy --- a measure of likelihood that a satisfying assignment will be found, provided one exists. In fact, we argue that without a reference to the accuracy, the notion of the running time for randomized algorithms is not well-defined. In this paper, we introduce a formal notion of accuracy. We use it to define a concept of the running time. We use both notions to study the random walk strategy GSAT algorithm. We investigate the dependence of accuracy on properties of input formulas such as clause-to-variable ratio and the number of satisfying assignments. We demonstrate that the running time of GSAT grows exponentially in the number of variables of the input formula for randomly generated 3-CNF formulas and for the formulas encoding 3- and 4-colorability of graphs.
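A textbook rendering of the random walk strategy GSAT algorithm studied above; the parameter names and default values are illustrative assumptions, not the paper's experimental settings:

    import random

    def gsat_walk(clauses, n_vars, max_flips=10000, p_walk=0.5):
        # clauses: list of clauses, each a list of nonzero ints
        # (DIMACS-style literals: v means variable v is true, -v means false)
        assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
        sat = lambda lit: assign[abs(lit)] == (lit > 0)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not any(sat(l) for l in c)]
            if not unsat:
                return assign                  # satisfying assignment found
            if random.random() < p_walk:
                # random walk move: flip a variable from a random unsatisfied clause
                v = abs(random.choice(random.choice(unsat)))
            else:
                # greedy GSAT move: flip the variable maximizing satisfied clauses
                def score(v):
                    assign[v] = not assign[v]
                    s = sum(any(sat(l) for l in c) for c in clauses)
                    assign[v] = not assign[v]
                    return s
                v = max(range(1, n_vars + 1), key=score)
            assign[v] = not assign[v]
        return None                            # incomplete method: no guarantee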
cs/0002006
Multiplicative Nonholonomic/Newton-like Algorithm
cs.LG
We construct new algorithms from scratch which use the fourth-order cumulant of stochastic variables for the cost function. The multiplicative updating rule constructed here is natural given the homogeneous nature of the Lie group and has numerous merits for the rigorous treatment of the dynamics. As one consequence, second-order convergence is shown. For the cost function, functions invariant under componentwise scaling are chosen. By identifying points which can be transformed into each other by this scaling, we assume that the dynamics take place in a coset space. In our method, a point can move in any direction within this coset. Thus, no prewhitening is required.
cs/0002007
Requirements of Text Processing Lexicons
cs.CL
As text processing systems expand in scope, they will require ever larger lexicons along with a parsing capability for discriminating among many senses of a word. Existing systems do not incorporate such subtleties in meaning for their lexicons. Ordinary dictionaries contain such information, but are largely untapped. When the contents of dictionaries are scrutinized, they reveal many requirements that must be satisfied in representing meaning and in developing semantic parsers. These requirements were identified in research designed to find primitive verb concepts. The requirements are outlined and general procedures for satisfying them through the use of ordinary dictionaries are described, illustrated by building frames for and examining the definitions of "change" and its uses as a hypernym in other definitions.
cs/0002009
Syntactic Autonomy: Why There is no Autonomy without Symbols and How Self-Organization Might Evolve Them
cs.AI
Two different types of agency are discussed, based on dynamically coherent and incoherent couplings with an environment, respectively. I propose that until a private syntax (syntactic autonomy) is discovered by dynamically coherent agents, there are no significant or interesting types of closure or autonomy. When syntactic autonomy is established, then, because of a process of description-based selected self-organization, open-ended evolution is enabled. At this stage, agents depend, in addition to dynamics, on localized, symbolic memory, thus adding a level of dynamical incoherence to their interaction with the environment. Furthermore, it is the appearance of syntactic autonomy which enables much more interesting types of closures amongst agents which share the same syntax. To investigate how we can study the emergence of syntax from dynamical systems, experiments with cellular automata leading to emergent computation to solve non-trivial tasks are discussed. RNA editing is also mentioned as a process that may have been used to obtain a primordial biological code necessary for open-ended evolution.
cs/0002010
Biologically Motivated Distributed Designs for Adaptive Knowledge Management
cs.IR
We discuss how distributed designs that draw from biological network metaphors can greatly improve the current state of information retrieval and knowledge management of distributed information systems. In particular, two adaptive recommendation systems named TalkMine and @ApWeb are discussed in more detail. TalkMine operates at the semantic level of keywords. It leads different databases to learn new keywords and adapt existing ones to the categories recognized by their communities of users, using distributed algorithms. @ApWeb operates at the structural level of information resources, namely citation or hyperlink structure. It relies on collective behavior to adapt such structure to the expectations of users. TalkMine and @ApWeb are currently being implemented for the research library of the Los Alamos National Laboratory under the Active Recommendation Project. Together they define a biologically motivated information retrieval system, recommending simultaneously at the level of user knowledge categories expressed in keywords, and at the level of individual documents and their associations to other documents. Rather than passive information retrieval, with this system, users obtain an active, evolving interaction with information resources.
cs/0002012
On The Closest String and Substring Problems
cs.CE cs.CC
The problem of finding a center string that is `close' to every given string arises in many applications in computational biology and coding theory. This problem has two versions: the Closest String problem and the Closest Substring problem. Assume that we are given a set ${\cal S}=\{s_1, s_2, ..., s_n\}$ of strings, each of length $m$. The Closest String problem asks for the smallest $d$ and a string $s$ of length $m$ which is within Hamming distance $d$ of each $s_i\in {\cal S}$. This problem comes from coding theory, where we look for a code not too far away from a given set of codes. The problem is NP-hard. Berman et al. give a polynomial time algorithm for constant $d$. For super-logarithmic $d$, Ben-Dor et al. give an efficient approximation algorithm using a linear program relaxation technique. The best polynomial time approximation, due to Lanctot et al. and Gasieniec et al., has ratio 4/3 for all $d$. The Closest Substring problem looks for a string $t$ which is within Hamming distance $d$ of a substring of each $s_i$. This problem previously had only a $2- \frac{2}{2|\Sigma|+1}$ approximation algorithm (Lanctot et al.) and is much more elusive than the Closest String problem, but it has many applications in finding conserved regions, genetic drug target identification, and genetic probes in molecular biology. Whether there are efficient approximation algorithms for both problems has been a major open question in this area. We present two polynomial time approximation algorithms with approximation ratio $1+ \epsilon$ for any small $\epsilon$, settling both questions.
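For concreteness, a brute-force reference implementation of the Closest String objective (the paper's contribution is the polynomial-time approximation scheme; this exponential search is only for tiny instances):

    from itertools import product

    def radius(center, strings):
        # largest Hamming distance from `center` to any string in the set
        return max(sum(a != b for a, b in zip(center, s)) for s in strings)

    def closest_string(strings, alphabet):
        # exact Closest String by exhaustive search over alphabet^m;
        # exponential in m -- illustrative only
        m = len(strings[0])
        return min((''.join(c) for c in product(alphabet, repeat=m)),
                   key=lambda cand: radius(cand, strings))

    s = closest_string(["ACCA", "ACGA", "CCGA"], "ACG")
    d = radius(s, ["ACCA", "ACGA", "CCGA"])   # the smallest achievable d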
cs/0002013
Computing and Comparing Semantics of Programs in Multi-valued Logics
cs.LO cs.DB
The different semantics that can be assigned to a logic program correspond to different assumptions made concerning the atoms whose logical values cannot be inferred from the rules. Thus, the well founded semantics corresponds to the assumption that every such atom is false, while the Kripke-Kleene semantics corresponds to the assumption that every such atom is unknown. In this paper, we propose to unify and extend this assumption-based approach by introducing parameterized semantics for logic programs. The parameter holds the value that one assumes for all atoms whose logical values cannot be inferred from the rules. We work within multi-valued logic with bilattice structure, and we consider the class of logic programs defined by Fitting. Following Fitting's approach, we define a simple operator that allows us to compute the parameterized semantics, and to compare and combine semantics obtained for different values of the parameter. The semantics proposed by Fitting corresponds to the value false. We also show that our approach captures and extends the usual semantics of conventional logic programs thereby unifying their computation.
cs/0002014
Safe cooperative robot dynamics on graphs
cs.RO cs.AI
This paper initiates the use of vector fields to design, optimize, and implement reactive schedules for safe cooperative robot patterns on planar graphs. We consider Automated Guided Vehicles (AGV's) operating upon a predefined network of pathways. In contrast to the case of locally Euclidean configuration spaces, regularization of collisions is no longer a local procedure, and issues concerning the global topology of configuration spaces must be addressed. The focus of the present inquiry is the achievement of safe, efficient, cooperative patterns in the simplest nontrivial example (a pair of robots on a Y-network) by means of a state-event hierarchical controller.
cs/0002015
Genetic Algorithms for Extension Search in Default Logic
cs.AI cs.LO
A default theory can be characterized by its sets of plausible conclusions, called its extensions. But, due to the theoretical complexity of Default Logic ($\Sigma_2^p$-complete), the problem of finding such an extension is very difficult if one wants to deal with nontrivial knowledge bases. Based on the principle of natural selection, Genetic Algorithms have been quite successfully applied to combinatorial problems and seem useful for problems with huge search spaces for which no tractable algorithm is available. The purpose of this paper is to show that techniques drawn from Genetic Algorithms can be used to build an efficient default reasoning system. After providing a formal description of the components required for an extension search based on Genetic Algorithm principles, we exhibit some experimental results.
cs/0002016
SLT-Resolution for the Well-Founded Semantics
cs.AI cs.PL
Global SLS-resolution and SLG-resolution are two representative mechanisms for top-down evaluation of the well-founded semantics of general logic programs. Global SLS-resolution is linear for query evaluation but suffers from infinite loops and redundant computations. In contrast, SLG-resolution resolves infinite loops and redundant computations by means of tabling, but it is not linear. The principal disadvantage of a non-linear approach is that it cannot be implemented using a simple, efficient stack-based memory structure nor can it be easily extended to handle some strictly sequential operators such as cuts in Prolog. In this paper, we present a linear tabling method, called SLT-resolution, for top-down evaluation of the well-founded semantics. SLT-resolution is a substantial extension of SLDNF-resolution with tabling. Its main features include: (1) It resolves infinite loops and redundant computations while preserving the linearity. (2) It is terminating, and sound and complete w.r.t. the well-founded semantics for programs with the bounded-term-size property with non-floundering queries. Its time complexity is comparable with SLG-resolution and polynomial for function-free logic programs. (3) Because of its linearity for query evaluation, SLT-resolution bridges the gap between the well-founded semantics and standard Prolog implementation techniques. It can be implemented by an extension to any existing Prolog abstract machines such as WAM or ATOAM.
cs/0002017
A Usage Measure Based on Psychophysical Relations
cs.CL
A new word usage measure is proposed. It is based on psychophysical relations and makes it possible to rank words by their degree of "importance" when building basic dictionaries of sublanguages.
cs/0003001
Making news understandable to computers
cs.IR
Computers and devices are largely unaware of events taking place in the world. This could be changed if news were made available in a computer-understandable form. In this paper we present XML documents called NewsForms that represent the key points of 17 types of news events. We discuss the benefits of computer-understandable news and present the NewsExtract program for converting text news stories into NewsForms.
cs/0003003
Prospects for in-depth story understanding by computer
cs.AI cs.CL
While much research on the hard problem of in-depth story understanding by computer was performed starting in the 1970s, interest shifted in the 1990s to information extraction and word sense disambiguation. Now that a degree of success has been achieved on these easier problems, I propose it is time to return to in-depth story understanding. In this paper I examine the shift away from story understanding, discuss some of the major problems in building a story understanding system, present some possible solutions involving a set of interacting understanding agents, and provide pointers to useful tools and resources for building story understanding systems.
cs/0003004
A database and lexicon of scripts for ThoughtTreasure
cs.AI cs.CL
Since scripts were proposed in the 1970's as an inferencing mechanism for AI and natural language processing programs, there have been few attempts to build a database of scripts. This paper describes a database and lexicon of scripts that has been added to the ThoughtTreasure commonsense platform. The database provides the following information about scripts: sequence of events, roles, props, entry conditions, results, goals, emotions, places, duration, frequency, and cost. English and French words and phrases are linked to script concepts.
cs/0003005
Don't Trash your Intermediate Results, Cache 'em
cs.DB
In data warehouse and data mart systems, queries often take a long time to execute due to their complex nature. Query response times can be greatly improved by caching final/intermediate results of previous queries, and using them to answer later queries. In this paper we describe a caching system called Exchequer which incorporates several novel features including optimization aware cache maintenance and the use of a cache aware optimizer. In contrast, in existing work, the module that makes cost-benefit decisions is part of the cache manager and works independently of the optimizer, which essentially reconsiders these decisions while finding the best plan for a query. In our work, the optimizer takes the decisions for the cache manager. Furthermore, existing approaches are either restricted to cube (slice/point) queries, or cache just the query results. On the other hand, our work is extensible and in fact presents a data-model independent framework and algorithm. Our experimental results attest to the efficacy of our cache management techniques and show that over a wide range of parameters (a) Exchequer's query response times are lower by more than 30% compared to the best performing competitor, and (b) Exchequer can deliver the same response time as its competitor with just one tenth of the cache size.
cs/0003006
Materialized View Selection and Maintenance Using Multi-Query Optimization
cs.DB
Because the presence of views enhances query performance, materialized views are increasingly being supported by commercial database/data warehouse systems. Whenever the data warehouse is updated, the materialized views must also be updated. However, whereas the amount of data entering a warehouse, the query loads, and the need to obtain up-to-date responses are all increasing, the time window available for making the warehouse up-to-date is shrinking. These trends necessitate efficient techniques for the maintenance of materialized views. In this paper, we show how to find an efficient plan for maintenance of a {\em set} of views, by exploiting common subexpressions between different view maintenance expressions. These common subexpressions may be materialized temporarily during view maintenance. Our algorithms also choose subexpressions/indices to be materialized permanently (and maintained along with other materialized views), to speed up view maintenance. While there has been much work on view maintenance in the past, our novel contributions lie in exploiting a recently developed framework for multiquery optimization to efficiently find good view maintenance plans as above. In addition to faster view maintenance, our algorithms can also be used to efficiently select materialized views to speed up workloads containing queries.
cs/0003007
Computing Circumscriptive Databases by Integer Programming: Revisited (Extended Abstract)
cs.AI cs.LO
In this paper, we consider a method of computing minimal models in circumscription using integer programming, in propositional logic and in first-order logic with domain closure axioms and unique name axioms. This treatment is important because it enables various techniques developed in operations research to be applied to nonmonotonic reasoning. Nerode et al. (1995) were the first to propose a method of computing circumscription using integer programming. They claimed their method was correct for circumscription with fixed predicates, but we show that their method does not correctly reflect this claim. We give a correct method of computing all the minimal models not only with fixed predicates but also with varied predicates, and we extend our method to compute prioritized circumscription as well.
cs/0003008
Consistency Management of Normal Logic Program by Top-down Abductive Proof Procedure
cs.AI
This paper presents a method of computing a revision of a function-free normal logic program. If an added rule is inconsistent with a program, that is, if it leads to a situation in which no stable model exists for the new program, then deletions and additions of rules are performed to avoid inconsistency. We specify a revision by translating a normal logic program into an abductive logic program with abducibles that represent the deletion and addition of rules. To compute such deletions and additions, we propose an adaptation of our top-down abductive proof procedure that computes the abducibles relevant to an added rule. We compute a minimally revised program by choosing a minimal set of abducibles among all the sets of abducibles computed by the top-down proof procedure.
cs/0003009
Conditional indifference and conditional preservation
cs.AI cs.LO
The idea of preserving conditional beliefs emerged recently as a new paradigm apt to guide the revision of epistemic states. Conditionals are substantially different from propositional beliefs and need specific treatment. In this paper, we present a new approach to conditionals, capturing particularly well their dynamic part as revision policies. We thoroughly axiomatize a principle of conditional preservation as an indifference property with respect to conditional structures of worlds. This principle is developed in a semi-quantitative setting, so as to reveal its fundamental meaning for belief revision in quantitative as well as in qualitative frameworks. In fact, it is shown to cover other proposed approaches to conditional preservation.
cs/0003011
Automatic Belief Revision in SNePS
cs.AI cs.LO
SNePS is a logic- and network-based knowledge representation, reasoning, and acting system, based on a monotonic, paraconsistent, first-order term logic, with compositional intensional semantics. It has an ATMS-style facility for belief contraction, and an acting component, including a well-defined syntax and semantics for primitive and composite acts, as well as for ``rules'' that allow for acting in support of reasoning and reasoning in support of acting. SNePS has been designed to support natural language competent cognitive agents. When the current version of SNePS detects an explicit contradiction, it interacts with the user, providing information that helps the user decide what to remove from the knowledge base in order to remove the contradiction. The forthcoming SNePS 2.6 will also do automatic belief contraction if the information in the knowledge base warrants it.
cs/0003012
Defeasible Reasoning in OSCAR
cs.AI
This is a system description for the OSCAR defeasible reasoner.
cs/0003013
A flexible framework for defeasible logics
cs.AI cs.LO
Logics for knowledge representation suffer from over-specialization: while each logic may provide an ideal representation formalism for some problems, it is less than optimal for others. A solution to this problem is to choose from several logics and, when necessary, combine the representations. In general, such an approach results in a very difficult problem of combination. However, if we can choose the logics from a uniform framework then the problem of combining them is greatly simplified. In this paper, we develop such a framework for defeasible logics. It supports all defeasible logics that satisfy a strong negation principle. We use logic meta-programs as the basis for the framework.
cs/0003014
Applying Maxi-adjustment to Adaptive Information Filtering Agents
cs.AI cs.MA
Learning and adaptation are fundamental properties of intelligent agents. In the context of adaptive information filtering, a filtering agent's beliefs about a user's information needs have to be revised regularly with reference to the user's most current information preferences. This learning and adaptation process is essential for maintaining the agent's filtering performance. The AGM belief revision paradigm provides a rigorous foundation for modelling rational and minimal changes to an agent's beliefs. In particular, the maxi-adjustment method, which follows the AGM rationale of belief change, offers a sound and robust computational mechanism to develop adaptive agents so that the learning autonomy of these agents can be enhanced. This paper describes how the maxi-adjustment method is applied to develop the learning components of adaptive information filtering agents, and discusses possible difficulties of applying such a framework to these agents.
cs/0003015
On the semantics of merging
cs.AI cs.LO
Intelligent agents are often faced with the problem of trying to merge possibly conflicting pieces of information obtained from different sources into a consistent view of the world. We propose a framework for the modelling of such merging operations with roots in the work of Spohn (1988, 1991). Unlike most approaches we focus on the merging of epistemic states, not knowledge bases. We construct a number of plausible merging operations and measure them against various properties that merging operations ought to satisfy. Finally, we discuss the connection between merging and the use of infobases (Meyer 1999; Meyer et al. 2000).
cs/0003016
Abductive and Consistency-Based Diagnosis Revisited: a Modeling Perspective
cs.AI
Diagnostic reasoning has been characterized logically as consistency-based reasoning or abductive reasoning. Previous analyses in the literature have shown, on the one hand, that choosing the (in general more restrictive) abductive definition may be appropriate or not, depending on the content of the knowledge base [Console&Torasso91], and, on the other hand, that, depending on the choice of definition, the same knowledge should be expressed in a different form [Poole94]. Since a major problem in Model-Based Diagnosis is finding the right way of abstracting the behavior of the system to be modeled, this paper discusses the relation between modeling, in particular abstraction in the model, and the notion of diagnosis.
cs/0003017
The lexicographic closure as a revision process
cs.AI cs.LO
The connections between nonmonotonic reasoning and belief revision are well-known. A central problem in the area of nonmonotonic reasoning is the problem of default entailment, i.e., when should an item of default information representing "if A is true then, normally, B is true" be said to follow from a given set of items of such information. Many answers to this question have been proposed but, surprisingly, virtually none have attempted any explicit connection to belief revision. The aim of this paper is to give an example of how such a connection can be made by showing how the lexicographic closure of a set of defaults may be conceptualised as a process of iterated revision by sets of sentences. Specifically we use the revision process of Nayak.
cs/0003018
Description of GADEL
cs.AI cs.LO
This article describes the first implementation of the GADEL system: a Genetic Algorithm for Default Logic. The goal of GADEL is to compute extensions in Reiter's default logic. It accepts every kind of finite propositional default theory and is based on the evolutionary principles of Genetic Algorithms. Its first experimental results on certain instances of the problem show that this new approach to the problem can be successful.
cs/0003019
Extending Classical Logic with Inductive Definitions
cs.LO cs.AI
The goal of this paper is to extend classical logic with a generalized notion of inductive definition supporting positive and negative induction, to investigate the properties of this logic, its relationships to other logics in the area of non-monotonic reasoning, logic programming and deductive databases, and to show its application for knowledge representation by giving a typology of definitional knowledge.
cs/0003020
ACLP: Integrating Abduction and Constraint Solving
cs.AI
ACLP is a system which combines abductive reasoning and constraint solving by integrating the frameworks of Abductive Logic Programming (ALP) and Constraint Logic Programming (CLP). It forms a general high-level knowledge representation environment for abductive problems in Artificial Intelligence and other areas. In ACLP, the task of abduction is supported and enhanced by its non-trivial integration with constraint solving, facilitating its application to complex problems. The ACLP system is currently implemented on top of the CLP language of ECLiPSe as a meta-interpreter exploiting its underlying constraint solver for finite domains. It has been applied to the problems of planning and scheduling in order to test its computational effectiveness compared with the direct use of the (lower level) constraint solving framework of CLP on which it is built. These experiments provide evidence that the abductive framework of ACLP does not significantly compromise the computational efficiency of the solutions. Other experiments show the natural ability of ACLP to accommodate easily and in a robust way new or changing requirements of the original problem.
cs/0003021
Relevance Sensitive Non-Monotonic Inference on Belief Sequences
cs.AI
We present a method for relevance sensitive non-monotonic inference from belief sequences which incorporates insights pertaining to prioritized inference and relevance sensitive, inconsistency tolerant belief revision. Our model uses a finite, logically open sequence of propositional formulas as a representation for beliefs and defines a notion of inference from maxiconsistent subsets of formulas guided by two orderings: a temporal sequencing and an ordering based on relevance relations between the conclusion and formulas in the sequence. The relevance relations are ternary (using context as a parameter) as opposed to standard binary axiomatizations. The inference operation thus defined easily handles iterated revision by maintaining a revision history, blocks the derivation of inconsistent answers from a possibly inconsistent sequence and maintains the distinction between explicit and implicit beliefs. In doing so, it provides a finitely presented formalism and a plausible model of reasoning for automated agents.
cs/0003022
Hypothetical revision and matter-of-fact supposition
cs.AI cs.CL
The paper studies the notion of supposition encoded in non-Archimedean conditional probability (and revealed in the acceptance of the so-called indicative conditionals). The notion of qualitative change of view that thus arises is axiomatized and compared with standard notions like AGM and UPDATE. Applications in the following fields are discussed: (1) theory of games and decisions, (2) causal models, (3) non-monotonic logic.
cs/0003023
Probabilistic Default Reasoning with Conditional Constraints
cs.AI
We propose a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. In detail, we generalize the notions of Pearl's entailment in system Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment to conditional constraints. We give some examples showing that the new notions of z-, lexicographic, and conditional entailment have properties similar to those of their classical counterparts. Moreover, we show that the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints.
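The classical construction being generalized here, Pearl's system Z ranking of propositional defaults, can be sketched as follows; this is a brute-force illustration of the classical notion only, not the paper's conditional-constraint generalization:

    from itertools import product

    def z_partition(defaults, atoms):
        # defaults: list of (ante, cons) pairs of functions mapping a model
        # (dict atom -> bool) to bool; brute-forces all models, so small atom
        # sets only; returns layers, rank 0 (most general defaults) first
        def models():
            for bits in product([False, True], repeat=len(atoms)):
                yield dict(zip(atoms, bits))

        def tolerated(d, ds):
            # d is tolerated by ds iff some model verifies d (ante and cons
            # both true) while satisfying the material version of every rule
            ante, cons = d
            return any(ante(m) and cons(m) and
                       all((not a(m)) or c(m) for a, c in ds)
                       for m in models())

        remaining, layers = list(defaults), []
        while remaining:
            layer = [d for d in remaining if tolerated(d, remaining)]
            if not layer:
                raise ValueError("default base is inconsistent")
            layers.append(layer)
            remaining = [d for d in remaining if d not in layer]
        return layers

    # birds fly; penguins are birds; penguins don't fly:
    # the two penguin defaults land at rank 1, above "birds fly" at rank 0
    atoms = ["b", "p", "f"]
    defaults = [(lambda m: m["b"], lambda m: m["f"]),
                (lambda m: m["p"], lambda m: m["b"]),
                (lambda m: m["p"], lambda m: not m["f"])]
    layers = z_partition(defaults, atoms)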
cs/0003024
A Compiler for Ordered Logic Programs
cs.AI
This paper describes a system, called PLP, for compiling ordered logic programs into standard logic programs under the answer set semantics. In an ordered logic program, rules are named by unique terms, and preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Since the result of the translation is an extended logic program, existing logic programming systems can be used as underlying reasoning engine. In particular, PLP is conceived as a front-end to the logic programming systems dlv and smodels.
cs/0003025
Logic Programming for Describing and Solving Planning Problems
cs.AI cs.LO
A logic programming paradigm which expresses solutions to problems as stable models has recently been promoted as a declarative approach to solving various combinatorial and search problems, including planning problems. In this paradigm, all program rules are considered as constraints and solutions are stable models of the rule set. This is a rather radical departure from the standard paradigm of logic programming. In this paper we revisit abductive logic programming and argue that it allows a programming style which is as declarative as programming based on stable models. However, within abductive logic programming, one has two kinds of rules: on the one hand, predicate definitions (which may depend on the abducibles), which are nothing other than standard logic programs (with their non-monotonic semantics when they contain negation); on the other hand, rules which constrain the models for the abducibles. In this sense abductive logic programming is a smooth extension of the standard paradigm of logic programming, not a radical departure.
cs/0003027
SLDNFA-system
cs.AI
The SLDNFA-system results from the LP+ project at the K.U.Leuven, which investigates logics and proof procedures for these logics for declarative knowledge representation. Within this project inductive definition logic (ID-logic) is used as representation logic. Different solvers are being developed for this logic and one of these is SLDNFA. A prototype of the system is available and used for investigating how to solve efficiently problems represented in ID-logic.
cs/0003028
Logic Programs with Compiled Preferences
cs.AI
We describe an approach for compiling preferences into logic programs under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Our approach allows both the specification of static orderings (as found in most previous work), in which preferences are external to a logic program, as well as orderings on sets of rules. In large part then, we are interested in describing a general methodology for uniformly incorporating preference information in a logic program. Since the result of our translation is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a compiler, available on the web, as a front-end for these programming systems.
cs/0003029
Fuzzy Approaches to Abductive Inference
cs.AI
This paper proposes two kinds of fuzzy abductive inference in the framework of fuzzy rule bases. The abductive inference processes described here depend on the semantics of the rule. We distinguish two classes of interpretation of a fuzzy rule: certainty generation rules and possible generation rules. In this paper we present the architecture of abductive inference for the first class of interpretation. We give two kinds of problems that can be resolved using the proposed models of inference.
cs/0003030
Problem solving in ID-logic with aggregates: some experiments
cs.AI
The goal of the LP+ project at the K.U.Leuven is to design an expressive logic, suitable for declarative knowledge representation, and to develop intelligent systems based on Logic Programming technology for solving computational problems using the declarative specifications. The ID-logic is an integration of typed classical logic and a definition logic. Different abductive solvers for this language are being developed. This paper reports on the integration of higher-order aggregates into ID-logic and the consequences for the solver SLDNFA.
cs/0003031
Optimal Belief Revision
cs.AI
We propose a new approach to belief revision that provides a way to change knowledge bases with a minimum of effort. We call this way of revising belief states optimal belief revision. Our revision method gives special attention to the fact that most belief revision processes are directed to a specific informational objective. This approach to belief change is founded on notions such as optimal context and accessibility. For the sentential model of belief states we provide both a formal description of contexts as sub-theories determined by three parameters and a method to construct contexts. Next, we introduce an accessibility ordering for belief sets, which we then use for selecting the best (optimal) contexts with respect to the processing effort involved in the revision. Then, for finitely axiomatizable knowledge bases, we characterize a finite accessibility ranking from which the accessibility ordering for the entire base is generated and show how to determine the ranking of an arbitrary sentence in the language. Finally, we define the adjustment of the accessibility ranking of a revised base of a belief set.
cs/0003032
cc-Golog: Towards More Realistic Logic-Based Robot Controllers
cs.AI
High-level robot controllers in realistic domains typically deal with processes which operate concurrently, change the world continuously, and where the execution of actions is event-driven as in ``charge the batteries as soon as the voltage level is low''. While non-logic-based robot control languages are well suited to express such scenarios, they fare poorly when it comes to projecting, in a conspicuous way, how the world evolves when actions are executed. On the other hand, a logic-based control language like ConGolog, based on the situation calculus, is well-suited for the latter. However, it has problems expressing event-driven behavior. In this paper, we show how these problems can be overcome by first extending the situation calculus to support continuous change and event-driven behavior and then presenting cc-Golog, a variant of ConGolog which is based on the extended situation calculus. One benefit of cc-Golog is that it narrows the gap in expressiveness compared to non-logic-based control languages while preserving a semantically well-founded projection mechanism.
cs/0003033
Smodels: A System for Answer Set Programming
cs.AI
The Smodels system implements the stable model semantics for normal logic programs. It handles a subclass of programs which contain no function symbols and are domain-restricted but supports extensions including built-in functions as well as cardinality and weight constraints. On top of this core engine more involved systems can be built. As an example, we have implemented total and partial stable model computation for disjunctive logic programs. An interesting application method is based on answer set programming, i.e., encoding an application problem as a set of rules so that its solutions are captured by the stable models of the rules. Smodels has been applied to a number of areas including planning, model checking, reachability analysis, product configuration, dynamic constraint satisfaction, and feature interaction.
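A brute-force illustration of the stable model semantics that Smodels implements: guess a candidate set of atoms and check it against the least model of its Gelfond-Lifschitz reduct. Smodels replaces this naive enumeration with sophisticated grounding and search; the rule encoding here is an assumption of the sketch:

    from itertools import combinations

    def stable_models(rules, atoms):
        # rules: list of (head, positive_body, negative_body), atoms as strings
        def least_model(definite):
            m, changed = set(), True
            while changed:
                changed = False
                for head, pos in definite:
                    if head not in m and pos <= m:
                        m.add(head)
                        changed = True
            return m

        for r in range(len(atoms) + 1):
            for cand in combinations(atoms, r):
                m = set(cand)
                # reduct: drop rules whose negative body intersects M,
                # then delete the negative literals from the remaining rules
                reduct = [(h, set(p)) for h, p, n in rules if not (set(n) & m)]
                if least_model(reduct) == m:
                    yield m

    # p :- not q.   q :- not p.   -->  two stable models: {p} and {q}
    list(stable_models([("p", [], ["q"]), ("q", [], ["p"])], ["p", "q"]))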
cs/0003034
E-RES: A System for Reasoning about Actions, Events and Observations
cs.AI
E-RES is a system that implements the Language E, a logic for reasoning about narratives of action occurrences and observations. E's semantics is model-theoretic, but this implementation is based on a sound and complete reformulation of E in terms of argumentation, and uses general computational techniques of argumentation frameworks. The system derives sceptical non-monotonic consequences of a given reformulated theory which exactly correspond to consequences entailed by E's model-theory. The computation relies on a complementary ability of the system to derive credulous non-monotonic consequences together with a set of supporting assumptions which is sufficient for the (credulous) conclusion to hold. E-RES allows theories to contain general action laws, statements about action occurrences, observations and statements of ramifications (or universal laws). It is able to derive consequences both forward and backward in time. This paper gives a short overview of the theoretical basis of E-RES and illustrates its use on a variety of examples. Currently, E-RES is being extended so that the system can be used for planning.
cs/0003035
Declarative Representation of Revision Strategies
cs.AI cs.LO
In this paper we introduce a nonmonotonic framework for belief revision in which reasoning about the reliability of different pieces of information based on meta-knowledge about the information is possible, and where revision strategies can be described declaratively. The approach is based on a Poole-style system for default reasoning in which entrenchment information is represented in the logical language. A notion of inference based on the least fixed point of a monotone operator is used to make sure that all theories possess a consistent set of conclusions.
cs/0003036
DLV - A System for Declarative Problem Solving
cs.AI cs.LO
DLV is an efficient logic programming and non-monotonic reasoning (LPNMR) system with advanced knowledge representation mechanisms and interfaces to classic relational database systems. Its core language is disjunctive datalog (function-free disjunctive logic programming) under the Answer Set Semantics with integrity constraints, both default and strong (or explicit) negation, and queries. Integer arithmetics and various built-in predicates are also supported. In addition DLV has several frontends, namely brave and cautious reasoning, abductive diagnosis, consistency-based diagnosis, a subset of SQL3, planning with action languages, and logic programming with inheritance.
cs/0003037
QUIP - A Tool for Computing Nonmonotonic Reasoning Tasks
cs.AI
In this paper, we outline the prototype of an automated inference tool, called QUIP, which provides a uniform implementation for several nonmonotonic reasoning formalisms. The theoretical basis of QUIP is derived from well-known results about the computational complexity of nonmonotonic logics and exploits a representation of the different reasoning tasks in terms of quantified boolean formulae.
cs/0003038
A Splitting Set Theorem for Epistemic Specifications
cs.AI
Over the past decade a considerable amount of research has been done to expand logic programming languages to handle incomplete information. One such language is the language of epistemic specifications. As is usual with logic programming languages, the problem of answering queries is intractable in the general case. For extended disjunctive logic programs, an idea that has proven useful in simplifying the investigation of answer sets is the use of splitting sets. In this paper we will present an extended definition of splitting sets that will be applicable to epistemic specifications. Furthermore, an extension of the splitting set theorem will be presented. Also, a characterization of stratified epistemic specifications will be given in terms of splitting sets. This characterization leads us to an algorithmic method of computing world views of a subclass of epistemic logic programs.
cs/0003039
DES: a Challenge Problem for Nonmonotonic Reasoning Systems
cs.AI
The US Data Encryption Standard, DES for short, is put forward as an interesting benchmark problem for nonmonotonic reasoning systems because (i) it provides a set of test cases of industrial relevance which shares features of randomly generated problems and real-world problems, (ii) the representation of DES using normal logic programs with the stable model semantics is simple and easy to understand, and (iii) this subclass of logic programs can be seen as an interesting special case for many other formalizations of nonmonotonic reasoning. In this paper we present two encodings of DES as logic programs: a direct one out of the standard specifications and an optimized one extending the work of Massacci and Marraro. The computational properties of the encodings are studied by using them for DES key search with the Smodels system as the implementation of the stable model semantics. Results indicate that the encodings and Smodels are quite competitive: they outperform state-of-the-art SAT-checkers working with an optimized encoding of DES into SAT and are comparable with a SAT-checker that is customized and tuned for the optimized SAT encoding.
cs/0003040
Implementing Integrity Constraints in an Existing Belief Revision System
cs.AI cs.LO
SNePS is a mature knowledge representation, reasoning, and acting system that has long contained a belief revision subsystem, called SNeBR. SNeBR is triggered when an explicit contradiction is introduced into the SNePS belief space, either because of a user's new assertion, or because of a user's query. SNeBR then makes the user decide what belief to remove from the belief space in order to restore consistency, although it provides information to help the user in making that decision. We have recently added automatic belief revision to SNeBR, by which, under certain circumstances, SNeBR decides by itself which belief to remove, and then informs the user of the decision and its consequences. We have used the well-known belief revision integrity constraints as a guide in designing automatic belief revision, taking into account, however, that SNePS's belief space is not deductively closed, and that it would be infeasible to form the deductive closure in order to decide what belief to remove. This paper briefly describes SNeBR both before and after this revision, discusses how we adapted the integrity constraints for this purpose, and gives an example of the new SNeBR in action.
cs/0003041
Coherence, Belief Expansion and Bayesian Networks
cs.AI cs.LO
We construct a probabilistic coherence measure for information sets which determines a partial coherence ordering. This measure is applied in constructing a criterion for expanding our beliefs in the face of new information. A number of idealizations are made which can be relaxed by an appeal to Bayesian Networks.
cs/0003042
Fages' Theorem and Answer Set Programming
cs.AI
We generalize a theorem by Francois Fages that describes the relationship between the completion semantics and the answer set semantics for logic programs with negation as failure. The study of this relationship is important in connection with the emergence of answer set programming. Whenever the two semantics are equivalent, answer sets can be computed by a satisfiability solver, and the use of answer set solvers such as smodels and dlv is unnecessary. A logic programming representation of the blocks world due to Ilkka Niemelae is discussed as an example.
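A standard textbook illustration of the theorem's scope (our example, not taken from the paper): for the program $P = \{p \leftarrow \mathrm{not}\ q;\ q \leftarrow \mathrm{not}\ p\}$ the completion is $p \leftrightarrow \neg q$ and $q \leftrightarrow \neg p$, whose models $\{p\}$ and $\{q\}$ coincide with the answer sets, so a satisfiability solver suffices. For $P' = \{p \leftarrow p\}$ the completion $p \leftrightarrow p$ admits the model $\{p\}$, yet the only answer set is $\emptyset$; the positive loop through $p$ is exactly the kind of structure that Fages-style tightness conditions exclude.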
cs/0003043
Automatic Classification of Text Databases through Query Probing
cs.DB cs.IR
Many text databases on the web are "hidden" behind search interfaces, and their documents are only accessible through querying. Search engines typically ignore the contents of such search-only databases. Recently, Yahoo-like directories have started to manually organize these databases into categories that users can browse to find these valuable resources. We propose a novel strategy to automate the classification of search-only text databases. Our technique starts by training a rule-based document classifier, and then uses the classifier's rules to generate probing queries. The queries are sent to the text databases, which are then classified based on the number of matches that they produce for each query. We report some initial exploratory experiments showing that our approach is a promising way to automatically characterize the contents of text databases accessible on the web.
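A minimal sketch of the probing loop described above; the rule format, the count_matches function (a stand-in for an HTTP wrapper around a site's search form), and the threshold are hypothetical:

    from collections import Counter

    def classify_database(rules, count_matches, threshold=5):
        # rules: (category, query_terms) pairs extracted from a trained
        # rule-based document classifier, e.g. ("Health", ["cancer", "treatment"])
        votes = Counter()
        for category, terms in rules:
            # probe the search-only database and record the reported hit count
            votes[category] += count_matches(" ".join(terms))
        # assign every category whose probe volume clears the threshold
        return [c for c, hits in votes.items() if hits >= threshold]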
cs/0003044
On the tractable counting of theory models and its application to belief revision and truth maintenance
cs.AI
We introduced decomposable negation normal form (DNNF) recently as a tractable form of propositional theories, and provided a number of powerful logical operations that can be performed on it in polynomial time. We also presented an algorithm for compiling any conjunctive normal form (CNF) into DNNF and provided a structure-based guarantee on its space and time complexity. We present in this paper a linear-time algorithm for converting an ordered binary decision diagram (OBDD) representation of a propositional theory into an equivalent DNNF, showing that DNNFs scale as well as OBDDs. We also identify a subclass of DNNF which we call deterministic DNNF, d-DNNF, and show that the previous complexity guarantees on compiling DNNF continue to hold for this stricter subclass, which has stronger properties. In particular, we present a new operation on d-DNNF which allows us to count its models under the assertion, retraction and flipping of every literal by traversing the d-DNNF twice. That is, after such traversal, we can test in constant time: the entailment of any literal by the d-DNNF, and the consistency of the d-DNNF under the retraction or flipping of any literal. We demonstrate the significance of these new operations by showing how they allow us to implement linear-time, complete truth maintenance systems and linear-time, complete belief revision systems for two important classes of propositional theories.
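For intuition, a sketch of why model counting is a single linear traversal on this representation, assuming a smooth d-DNNF (every OR's disjuncts mention the same variables): literals count one model, decomposable ANDs multiply, deterministic ORs add. The node encoding is ours, not the paper's:

    # node encoding: ("lit", v) is a literal over variable |v|, positive iff
    # v > 0; ("and", children) must be decomposable (children share no
    # variables); ("or", children) must be deterministic (mutually
    # inconsistent disjuncts) and smooth for the sum below to be exact
    def count_models(node):
        kind = node[0]
        if kind == "lit":
            return 1                      # one model over the literal's variable
        counts = [count_models(c) for c in node[1]]
        if kind == "and":
            result = 1
            for c in counts:
                result *= c               # disjoint variables: counts multiply
            return result
        return sum(counts)                # deterministic "or": counts add

    # (x and y) or (not x and not y) has exactly two models over {x, y}
    circuit = ("or", [("and", [("lit", 1), ("lit", 2)]),
                      ("and", [("lit", -1), ("lit", -2)])])
    assert count_models(circuit) == 2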
cs/0003046
Linear Tabulated Resolution Based on Prolog Control Strategy
cs.AI cs.LO
Infinite loops and redundant computations are long recognized open problems in Prolog. Two ways have been explored to resolve these problems: loop checking and tabling. Loop checking can cut infinite loops, but it cannot be both sound and complete even for function-free logic programs. Tabling seems to be an effective way to resolve infinite loops and redundant computations. However, existing tabulated resolutions, such as OLDT-resolution, SLG- resolution, and Tabulated SLS-resolution, are non-linear because they rely on the solution-lookup mode in formulating tabling. The principal disadvantage of non-linear resolutions is that they cannot be implemented using a simple stack-based memory structure like that in Prolog. Moreover, some strictly sequential operators such as cuts may not be handled as easily as in Prolog. In this paper, we propose a hybrid method to resolve infinite loops and redundant computations. We combine the ideas of loop checking and tabling to establish a linear tabulated resolution called TP-resolution. TP-resolution has two distinctive features: (1) It makes linear tabulated derivations in the same way as Prolog except that infinite loops are broken and redundant computations are reduced. It handles cuts as effectively as Prolog. (2) It is sound and complete for positive logic programs with the bounded-term-size property. The underlying algorithm can be implemented by an extension to any existing Prolog abstract machines such as WAM or ATOAM.
cs/0003047
BDD-based reasoning in the fluent calculus - first results
cs.AI
The paper reports on first preliminary results and insights gained in a project aiming at implementing the fluent calculus using methods and techniques based on binary decision diagrams. After reporting on an initial experiment showing promising results we discuss our findings concerning various techniques and heuristics used to speed up the reasoning process.
cs/0003048
PAL: Pertinence Action Language
cs.AI cs.LO
The current document contains a brief description of a system for Reasoning about Actions and Change called PAL (Pertinence Action Language) which makes use of several reasoning properties extracted from a Temporal Expert Systems tool called Medtool.
cs/0003049
Planning with Incomplete Information
cs.AI
Planning is a natural domain of application for frameworks of reasoning about actions and change. In this paper we study how one such framework, the Language E, can form the basis for planning under (possibly) incomplete information. We define two types of plans: weak and safe plans, and propose a planner, called the E-Planner, which is often able to extend an initial weak plan into a safe plan even though the (explicit) information available is incomplete, e.g. for cases where the initial state is not completely known. The E-Planner is based upon a reformulation of the Language E in argumentation terms and a natural proof theory resulting from the reformulation. It uses an extension of this proof theory by means of abduction for the generation of plans and adopts argumentation-based techniques for extending weak plans into safe plans. We provide representative examples illustrating the behaviour of the E-Planner, in particular for cases where the status of fluents is incompletely known.
cs/0003050
A tableau methodology for deontic conditional logics
cs.LO cs.AI
In this paper we present a theorem proving methodology for a restricted but significant fragment of the conditional language made up of (boolean combinations of) conditional statements with unnested antecedents. The method is based on the possible world semantics for conditional logics. The KEM label formalism, designed to account for the semantics of normal modal logics, is easily adapted to the semantics of conditional logics by simply indexing labels with formulas. The inference rules are provided by the propositional system KE+ - a tableau-like analytic proof system devised to be used both as a refutation and a direct method of proof - enlarged with suitable elimination rules for the conditional connective. The theorem proving methodology we are going to present can be viewed as a first step towards developing an appropriate algorithmic framework for several conditional logics for (defeasible) conditional obligation.
cs/0003051
Local Diagnosis
cs.AI
In earlier work, we presented operations of belief change which only affect the relevant part of a belief base. In this paper, we propose applying the same strategy to the problem of model-based diagnosis. We first isolate the subset of the system description which is relevant for a given observation and then solve the diagnosis problem for this subset.
cs/0003052
A Consistency-Based Model for Belief Change: Preliminary Report
cs.AI
We present a general, consistency-based framework for belief change. Informally, in revising K by A, we begin with A and incorporate as much of K as consistently possible. Formally, a knowledge base K and sentence A are expressed, via renaming propositions in K, in separate languages. Using a maximization process, we assume the languages are the same insofar as consistently possible. Lastly, we express the resultant knowledge base in a single language. There may be more than one way in which A can be so extended by K: in choice revision, one such ``extension'' represents the revised state; alternatively, revision consists of the intersection of all such extensions. The most general formulation of our approach is flexible enough to express other approaches to revision and update, the merging of knowledge bases, and the incorporation of static and dynamic integrity constraints. Our framework differs from work based on ordinal conditional functions, notably with respect to iterated revision. We argue that the approach is well-suited for implementation: the choice revision operator gives better complexity results than general revision; the approach can be expressed in terms of a finite knowledge base; and the scope of a revision can be restricted to just those propositions mentioned in the sentence for revision A.
cs/0003055
TnT - A Statistical Part-of-Speech Tagger
cs.CL
Trigrams'n'Tags (TnT) is an efficient statistical part-of-speech tagger. Contrary to claims found elsewhere in the literature, we argue that a tagger based on Markov models performs at least as well as other current approaches, including the Maximum Entropy framework. A recent comparison has even shown that TnT performs significantly better for the tested corpora. We describe the basic model of TnT, the techniques used for smoothing and for handling unknown words. Furthermore, we present evaluations on two corpora.
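As a point of reference for the model described above, here is a minimal Python sketch of a trigram tag model with linear interpolation smoothing of the kind TnT-style taggers use. The fixed weights below are placeholders (TnT estimates its weights from data by deleted interpolation), and the sketch omits the lexical model and unknown-word handling entirely.

    from collections import Counter

    def train_trigram_model(tag_sequences):
        """Estimate P(t3 | t1, t2) with linear interpolation smoothing."""
        uni, bi, tri = Counter(), Counter(), Counter()
        total = 0
        for tags in tag_sequences:
            for i, t in enumerate(tags):
                total += 1
                uni[t] += 1
                if i >= 1:
                    bi[(tags[i - 1], t)] += 1
                if i >= 2:
                    tri[(tags[i - 2], tags[i - 1], t)] += 1
        # Placeholder weights; TnT sets these by deleted interpolation.
        l1, l2, l3 = 0.1, 0.3, 0.6
        def prob(t1, t2, t3):
            p1 = uni[t3] / total if total else 0.0
            p2 = bi[(t2, t3)] / uni[t2] if uni[t2] else 0.0
            p3 = tri[(t1, t2, t3)] / bi[(t1, t2)] if bi[(t1, t2)] else 0.0
            return l1 * p1 + l2 * p2 + l3 * p3
        return prob
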
cs/0003056
A note on the Declarative reading(s) of Logic Programming
cs.LO cs.AI
This paper analyses the declarative readings of logic programming. Logic programming - and negation as failure - has no unique declarative reading. One common view is that logic programming is a logic for default reasoning, a sub-formalism of default logic or autoepistemic logic. In this view, negation as failure is a modal operator. In an alternative view, a logic program is interpreted as a definition. In this view, negation as failure is classical objective negation. From a commonsense point of view, there is definitely a difference between these views. Surprisingly though, both types of declarative readings lead, grosso modo, to the same model semantics. This note investigates the causes of this.
cs/0003057
XNMR: A tool for knowledge bases exploration
cs.LO cs.AI
XNMR is a system designed to explore the results of combining the well-founded semantics system XSB with the stable-models evaluator SMODELS. Its main goal is to work as a tool for fast and interactive exploration of knowledge bases.
cs/0003059
SATEN: An Object-Oriented Web-Based Revision and Extraction Engine
cs.AI
SATEN is an object-oriented web-based extraction and belief revision engine. It runs on any computer via a Java 1.1-enabled browser such as Netscape 4. SATEN performs belief revision based on the AGM approach. The extraction and belief revision reasoning engines operate on a user specified ranking of information. One of the features of SATEN is that it can be used to integrate mutually inconsistent commensurate rankings into a consistent ranking.
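The abstract does not spell out the revision algorithm, but one standard way to revise over a user-specified ranking, sketched here under our own assumptions rather than as SATEN's actual procedure, is to accept whole rank levels from most to least entrenched, keeping a level only if it is consistent with everything accepted so far. Consistency below is checked by brute-force model search over a small vocabulary, standing in for a real reasoning engine.

    from itertools import product

    def satisfiable(clauses, variables):
        """Brute-force SAT test; a clause is a list of (var, polarity) literals.
        Every variable mentioned in the clauses must appear in `variables`."""
        for bits in product([False, True], repeat=len(variables)):
            model = dict(zip(variables, bits))
            if all(any(model[v] == pol for v, pol in c) for c in clauses):
                return True
        return False

    def revise(ranking, new_info, variables):
        """Accept whole rank levels, most entrenched first, while consistent.
        ranking: list of clause lists; new_info: clauses encoding A."""
        accepted = list(new_info)
        for level in ranking:
            if satisfiable(accepted + level, variables):
                accepted = accepted + level
        return accepted
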
cs/0003060
Message Classification in the Call Center
cs.CL
Customer care in technical domains is increasingly based on e-mail communication, allowing for the reproduction of approved solutions. Identifying the customer's problem is often time-consuming, as the problem space changes if new products are launched. This paper describes a new approach to the classification of e-mail requests based on shallow text processing and machine learning techniques. It is implemented within an assistance system for call center agents that is used in a commercial setting.
cs/0003061
dcs: An Implementation of DATALOG with Constraints
cs.AI
Answer-set programming (ASP) has emerged recently as a viable programming paradigm. We describe here an ASP system, DATALOG with constraints or DC, based on non-monotonic logic. Informally, DC theories consist of propositional clauses (constraints) and of Horn rules. The semantics is a simple and natural extension of the semantics of the propositional logic. However, thanks to the presence of Horn rules in the system, modeling of transitive closure becomes straightforward. We describe the syntax, use and implementation of DC and provide experimental results.
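To make the two ingredients concrete, here is a small sketch of the flavor of DC (ours, not the dcs implementation, and coarser than the actual DC semantics): the Horn rules contribute a least model, computable by forward chaining, and the clausal constraints are then checked against a candidate set of atoms.

    def least_model(horn_rules, facts):
        """Forward chaining; a rule is (body_atoms, head_atom)."""
        model = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in horn_rules:
                if head not in model and all(b in model for b in body):
                    model.add(head)
                    changed = True
        return model

    def satisfies_constraints(clauses, atoms):
        """A clause (pos, neg) holds if some atom in pos is in the set
        or some atom in neg is not."""
        return all(any(a in atoms for a in pos) or any(a not in atoms for a in neg)
                   for pos, neg in clauses)
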
cs/0003065
Image Compression with Iterated Function Systems, Finite Automata and Zerotrees: Grand Unification
cs.CV
Fractal image compression, Culik's image compression and zerotree prediction coding of wavelet image decomposition coefficients succeed only because typical images being compressed possess a significant degree of self-similarity. Besides the common concept, these methods turn out to be even more tightly related, to the point of algorithmic reducibility of one technique to another. The goal of the present paper is to demonstrate these relations. The paper offers a plain-term interpretation of Culik's image compression, in regular image processing terms, without resorting to finite state machines and similar lofty language. The interpretation is shown to be algorithmically related to an IFS fractal image compression method: an IFS can be exactly transformed into Culik's image code. Using this transformation, we will prove that in a self-similar (part of an) image any zero wavelet coefficient is the root of a zerotree, or its branch. The paper discusses the zerotree coding of (wavelet/projection) coefficients as a common predictor/corrector, applied vertically through different layers of a multiresolutional decomposition, rather than within the same view. This interpretation leads to an insight into the evolution of image compression techniques: from a causal single-layer prediction, to non-causal same-view predictions (wavelet decomposition among others) and to a causal cross-layer prediction (zero-trees, Culik's method).
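A zerotree, in the sense used above, is a (sub)tree of wavelet coefficients that is zero at its root and at every descendant. Under the usual quadtree convention, where coefficient (i, j) at one level has as children the 2x2 block at positions (2i..2i+1, 2j..2j+1) on the next finer level, the root test is a short recursion; a sketch under that convention:

    def is_zerotree_root(levels, level, i, j, tol=0.0):
        """levels: list of 2D coefficient arrays, coarsest first."""
        if abs(levels[level][i][j]) > tol:
            return False
        if level + 1 == len(levels):
            return True                      # leaf level: no children
        return all(is_zerotree_root(levels, level + 1, 2*i + di, 2*j + dj, tol)
                   for di in (0, 1) for dj in (0, 1))
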
cs/0003067
Detecting Unsolvable Queries for Definite Logic Programs
cs.LO cs.AI
In solving a query, the SLD proof procedure for definite programs sometimes searches an infinite space for a non-existent solution; for example, when querying a planner for an unreachable goal state. Such programs motivate the development of methods to prove the absence of a solution. Considering the definite program and the query ``<- Q'' as clauses of a first-order theory, one can apply model generators which search for a finite interpretation in which the program clauses as well as the clause ``false <- Q'' are true. This paper develops a new approach which exploits the fact that all clauses are definite. It is based on a goal-directed abductive search in the space of finite pre-interpretations for a pre-interpretation such that ``Q'' is false in the least model of the program based on it. Several methods for efficiently searching the space of pre-interpretations are presented. Experimental results confirm that our approach finds solutions with less search than a first-order model generator.
cs/0003072
MOO: A Methodology for Online Optimization through Mining the Offline Optimum
cs.DS cs.LG
Ports, warehouses and courier services have to decide online how an arriving task is to be served in order that cost is minimized (or profit maximized). These operators have a wealth of historical data on task assignments; can these data be mined for knowledge or rules that can help the decision-making? MOO is a novel application of data mining to online optimization. The idea is to mine (logged) expert decisions or the offline optimum for rules that can be used for online decisions. It requires little knowledge about the task distribution and cost structure, and is applicable to a wide range of problems. This paper presents a feasibility study of the methodology for the well-known k-server problem. Experiments with synthetic data show that optimization can be recast as classification of the optimum decisions; the resulting heuristic can achieve the optimum for strong request patterns, consistently outperforms other heuristics for weak patterns, and is robust despite changes in cost model.
cs/0003073
Proceedings of the 8th International Workshop on Non-Monotonic Reasoning, NMR'2000
cs.AI cs.LO
The papers gathered in this collection were presented at the 8th International Workshop on Nonmonotonic Reasoning, NMR2000. The series was started by John McCarthy in 1978. The first international NMR workshop was held at Mohonk Mountain House, New Paltz, New York in June, 1984, and was organized by Ray Reiter and Bonnie Webber. In the last 10 years the area of nonmonotonic reasoning has seen a number of important developments. Significant theoretical advances were made in the understanding of general abstract principles underlying nonmonotonicity. Key results on the expressibility and computational complexity of nonmonotonic logics were established. The role of nonmonotonic reasoning in belief revision, abduction, reasoning about action, planning and uncertainty was further clarified. Several successful NMR systems were built and used in applications such as planning, scheduling, logic programming and constraint satisfaction. The papers in the proceedings reflect these recent advances in the field. They are grouped into sections corresponding to special sessions as they were held at the workshop: 1. General NMR track 2. Abductive reasoning 3. Belief revision: theory and practice 4. Representing action and planning 5. Systems descriptions and demonstrations 6. Uncertainty frameworks in NMR
cs/0003074
A Finite State and Data-Oriented Method for Grapheme to Phoneme Conversion
cs.CL
A finite-state method, based on leftmost longest-match replacement, is presented for segmenting words into graphemes, and for converting graphemes into phonemes. A small set of hand-crafted conversion rules for Dutch achieves a phoneme accuracy of over 93%. The accuracy of the system is further improved by using transformation-based learning. The phoneme accuracy of the best system (using a large set of rule templates and a `lazy' variant of Brill's algorithm), trained on only 40K words, reaches 99%.
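Leftmost longest-match replacement itself is easy to state procedurally: scan the word left to right and, at each position, apply the longest rule whose left-hand side matches there. A sketch with a toy rule set (the rules below are invented for illustration, not the paper's hand-crafted Dutch rules):

    # Toy grapheme-to-phoneme rules; purely illustrative.
    RULES = {"sch": "sx", "ng": "N", "oo": "o:", "ij": "Ei", "e": "@"}

    def leftmost_longest_match(word, rules):
        """Segment `word` greedily and map each grapheme to a phoneme."""
        keys = sorted(rules, key=len, reverse=True)   # longest match first
        out, i = [], 0
        while i < len(word):
            for k in keys:
                if word.startswith(k, i):
                    out.append(rules[k])
                    i += len(k)
                    break
            else:
                out.append(word[i])                   # default: copy the letter
                i += 1
        return " ".join(out)

    # e.g. leftmost_longest_match("school", RULES) -> "sx o: l"
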
cs/0003076
Constraint Programming viewed as Rule-based Programming
cs.AI cs.PL
We study here a natural situation when constraint programming can be entirely reduced to rule-based programming. To this end we explain first how one can compute on constraint satisfaction problems using rules represented by simple first-order formulas. Then we consider constraint satisfaction problems that are based on predefined, explicitly given constraints. To solve them we first derive rules from these explicitly given constraints and limit the computation process to a repeated application of these rules, combined with labeling. We consider here two types of rules. The first type, that we call equality rules, leads to a new notion of local consistency, called {\em rule consistency}, that turns out to be weaker than arc consistency for constraints of arbitrary arity (called hyper-arc consistency in \cite{MS98b}). For Boolean constraints rule consistency coincides with the closure under the well-known propagation rules for Boolean constraints. The second type of rules, that we call membership rules, yields a rule-based characterization of arc consistency. To show the feasibility of this rule-based approach to constraint programming, we show how both types of rules can be automatically generated, as {\tt CHR} rules of \cite{fruhwirth-constraint-95}. This yields an implementation of this approach to programming by means of constraint logic programming. We illustrate the usefulness of this approach to constraint programming by discussing various examples, including Boolean constraints, two typical examples of many-valued logics, constraints dealing with Waltz's language for describing polyhedral scenes, and Allen's qualitative approach to temporal logic.
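For reference, since the membership rules above are shown to characterize arc consistency: here is a standard AC-3 style propagator for binary constraints, which is what such rules enforce when applied exhaustively. This is textbook code, not the paper's rule-generation algorithm.

    from collections import deque

    def revise(domains, relation, x, y):
        """Remove values of x that no value of y supports."""
        removed = False
        for vx in list(domains[x]):
            if not any(relation(vx, vy) for vy in domains[y]):
                domains[x].remove(vx)
                removed = True
        return removed

    def ac3(domains, constraints):
        """domains: dict var -> set of values;
        constraints: dict mapping an ordered pair (x, y) to a predicate."""
        queue = deque(constraints)
        while queue:
            x, y = queue.popleft()
            if revise(domains, constraints[(x, y)], x, y):
                if not domains[x]:
                    return None                       # wipe-out: inconsistent
                queue.extend(arc for arc in constraints
                             if arc[1] == x and arc[0] != y)
        return domains
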
cs/0003077
DATALOG with constraints - an answer-set programming system
cs.AI
Answer-set programming (ASP) has emerged recently as a viable programming paradigm well attuned to search problems in AI, constraint satisfaction and combinatorics. Propositional logic is, arguably, the simplest ASP system with an intuitive semantics supporting direct modeling of problem constraints. However, for some applications, especially those requiring that transitive closure be computed, it requires additional variables and results in large theories. Consequently, it may not be a practical computational tool for such problems. On the other hand, ASP systems based on nonmonotonic logics, such as stable logic programming, can handle transitive closure computation efficiently and, in general, yield very concise theories as problem representations. Their semantics is, however, more complex. Searching for the middle ground, in this paper we introduce a new nonmonotonic logic, DATALOG with constraints or DC. Informally, DC theories consist of propositional clauses (constraints) and of Horn rules. The semantics is a simple and natural extension of the semantics of the propositional logic. However, thanks to the presence of Horn rules in the system, modeling of transitive closure becomes straightforward. We describe the syntax and semantics of DC, and study its properties. We discuss an implementation of DC and present results of an experimental study of its effectiveness, comparing it with CSAT, a satisfiability checker, and SMODELS, an implementation of stable logic programming. Our results show that DC is competitive with the other two approaches on many search problems, often yielding much more efficient solutions.
cs/0003079
Differential Invariants under Gamma Correction
cs.CV
This paper presents invariants under gamma correction and similarity transformations. The invariants are local features based on differentials which are implemented using derivatives of the Gaussian. The use of the proposed invariant representation is shown to yield improved correlation results in a template matching scenario.
cs/0003080
Some Remarks on Boolean Constraint Propagation
cs.AI
We study here the well-known propagation rules for Boolean constraints. First we propose a simple notion of completeness for sets of such rules and establish a completeness result. Then we show an equivalence in an appropriate sense between Boolean constraint propagation and unit propagation, a form of resolution for propositional logic. Subsequently we characterize one set of such rules by means of the notion of hyper-arc consistency introduced in (Mohr and Masini 1988). Also, we clarify the status of a similar, though different, set of rules introduced in (Simonis 1989a) and more fully in (Codognet and Diaz 1996).
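Since the equivalence above is stated against unit propagation, it may help to have that side of the equivalence spelled out; a minimal sketch:

    def unit_propagate(clauses):
        """clauses: list of sets of literals, a literal being (var, polarity).
        Returns (assignment, residual clauses), or (None, []) on contradiction."""
        clauses = [set(c) for c in clauses]
        assignment = {}
        while True:
            unit = next((c for c in clauses if len(c) == 1), None)
            if unit is None:
                return assignment, clauses
            (var, pol), = unit
            assignment[var] = pol
            reduced = []
            for c in clauses:
                if (var, pol) in c:
                    continue                    # clause already satisfied
                c = c - {(var, not pol)}        # drop the falsified literal
                if not c:
                    return None, []             # empty clause: contradiction
                reduced.append(c)
            clauses = reduced
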
cs/0003081
Variable Word Rate N-grams
cs.CL
The rate of occurrence of words is not uniform but varies from document to document. Despite this observation, parameters for conventional n-gram language models are usually derived using the assumption of a constant word rate. In this paper we investigate the use of a variable word rate assumption, modelled by a Poisson distribution or a continuous mixture of Poissons. We present an approach to estimating the relative frequencies of words or n-grams that takes prior information about their occurrences into account. Discounting and smoothing schemes are also considered. Using the Broadcast News task, the approach demonstrates a reduction in perplexity of up to 10%.
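The basic modelling move is to let the expected count of a word grow with document length and score observed counts with a Poisson; a gamma-weighted continuous mixture of Poissons then yields a negative binomial. A sketch of the plain Poisson case (parameter names are ours):

    import math

    def poisson_pmf(k, lam):
        return math.exp(-lam) * lam ** k / math.factorial(k)

    def word_count_prob(k, doc_len, rate):
        """Probability of seeing a word k times in a document of doc_len
        tokens, given its per-token occurrence rate."""
        return poisson_pmf(k, doc_len * rate)

    # e.g. a word with per-token rate 0.001 in a 2000-token document:
    # word_count_prob(0, 2000, 0.001) ~ 0.135; word_count_prob(2, 2000, 0.001) ~ 0.271
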
cs/0003082
Representation results for defeasible logic
cs.LO cs.AI
The importance of transformations and normal forms in logic programming, and generally in computer science, is well documented. This paper investigates transformations and normal forms in the context of Defeasible Logic, a simple but efficient formalism for nonmonotonic reasoning based on rules and priorities. The transformations described in this paper have two main benefits: on one hand they can be used as a theoretical tool that leads to a deeper understanding of the formalism, and on the other hand they have been used in the development of an efficient implementation of defeasible logic.
cs/0003083
Advances in domain independent linear text segmentation
cs.CL
This paper describes a method for linear text segmentation which is twice as accurate and over seven times as fast as the state-of-the-art (Reynar, 1998). Inter-sentence similarity is replaced by rank in the local context. Boundary locations are discovered by divisive clustering.
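The replacement of raw similarity by local rank can be sketched directly: each entry of the inter-sentence similarity matrix is replaced by the fraction of its neighbours, inside a small window, that it exceeds. A sketch under our own choice of window (the paper's exact masking may differ):

    import numpy as np

    def rank_in_local_context(sim, radius=5):
        """sim: square inter-sentence similarity matrix."""
        n = sim.shape[0]
        ranked = np.zeros_like(sim, dtype=float)
        for i in range(n):
            for j in range(n):
                window = sim[max(0, i - radius):min(n, i + radius + 1),
                             max(0, j - radius):min(n, j + radius + 1)]
                ranked[i, j] = (window < sim[i, j]).mean()
        return ranked
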
cs/0003084
Information Extraction from Broadcast News
cs.CL
This paper discusses the development of trainable statistical models for extracting content from television and radio news broadcasts. In particular we concentrate on statistical finite state models for identifying proper names and other named entities in broadcast speech. Two models are presented: the first represents name class information as a word attribute; the second represents both word-word and class-class transitions explicitly. A common n-gram based formulation is used for both models. The task of named entity identification is characterized by relatively sparse training data, and issues related to smoothing are discussed. Experiments are reported using the DARPA/NIST Hub-4E evaluation for North American Broadcast News.
cs/0004001
A Theory of Universal Artificial Intelligence based on Algorithmic Complexity
cs.AI cs.IT cs.LG math.IT
Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible. We outline for a number of problem classes, including sequence prediction, strategic games, function minimization, reinforcement and supervised learning, how the AIXI model can formally solve them. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXI-tl, which is still effectively more intelligent than any other time t and space l bounded agent. The computation time of AIXI-tl is of the order t \cdot 2^l. Other discussed topics are formal definitions of intelligence order relations, the horizon problem and relations of the AIXI theory to other AI approaches.
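For orientation, the action rule of the AIXI model is usually presented as an expectimax over a Solomonoff-style mixture; in one common notation (our paraphrase, following later expositions, with actions a, observations o, rewards r, programs q for the universal machine U, and horizon m; details such as the treatment of the horizon vary):

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m)
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
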
cs/0004002
Programming in Alma-0, or Imperative and Declarative Programming Reconciled
cs.LO cs.AI cs.PL
In (Apt et al, TOPLAS 1998) we introduced the imperative programming language Alma-0 that supports declarative programming. In this paper we illustrate the hybrid programming style of Alma-0 by means of various examples that complement those presented in (Apt et al, TOPLAS 1998). The presented Alma-0 programs illustrate the versatility of the language and show that ``don't know'' nondeterminism can be naturally combined with assignment.
cs/0004003
Searching for Spaceships
cs.AI nlin.CG
We describe software that searches for spaceships in Conway's Game of Life and related two-dimensional cellular automata. Our program searches through a state space related to the de Bruijn graph of the automaton, using a method that combines features of breadth first and iterative deepening search, and includes fast bit-parallel graph reachability and path enumeration algorithms for finding the successors of each state. Successful results include a new 2c/7 spaceship in Life, found by searching a space with 2^126 states.
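Whatever the search produced it, verifying that a found pattern really is a spaceship is simple: evolve it for one period and test that the result is a pure translation of the start. A minimal Life sketch, checked on the familiar glider:

    from collections import Counter

    def life_step(cells):
        """One Game of Life generation on a set of live (x, y) cells."""
        counts = Counter((x + dx, y + dy) for (x, y) in cells
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in cells)}

    def is_spaceship(cells, period, dx, dy):
        g = set(cells)
        for _ in range(period):
            g = life_step(g)
        return g == {(x + dx, y + dy) for (x, y) in cells}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    assert is_spaceship(glider, 4, 1, 1)      # the glider: a c/4 diagonal ship
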
cs/0004005
Exact Phase Transitions in Random Constraint Satisfaction Problems
cs.AI cs.CC cs.DM
In this paper we propose a new type of random CSP model, called Model RB, which is a revision to the standard Model B. It is proved that phase transitions from a region where almost all problems are satisfiable to a region where almost all problems are unsatisfiable do exist for Model RB as the number of variables approaches infinity. Moreover, the critical values at which the phase transitions occur are also known exactly. By relating the hardness of Model RB to Model B, it is shown that there exist many hard instances in Model RB.
cs/0004007
Deciding first-order properties of locally tree-decomposable structures
cs.DS cs.CC cs.DB
We introduce the concept of a class of graphs, or more generally, relational structures, being locally tree-decomposable. There are numerous examples of locally tree-decomposable classes, among them the class of planar graphs and all classes of bounded valence or of bounded tree-width. We also consider a slightly more general concept of a class of structures having bounded local tree-width. We show that for each property P of structures that is definable in first-order logic and for each locally tree-decomposable class C of graphs, there is a linear time algorithm deciding whether a given structure A in C has property P. For classes C of bounded local tree-width, we show that for every k\ge 1 there is an algorithm that solves the same problem in time O(n^{1+(1/k)}) (where n is the cardinality of the input structure).
cs/0004008
How to Evaluate your Question Answering System Every Day and Still Get Real Work Done
cs.CL cs.IR
In this paper, we report on Qaviar, an experimental automated evaluation system for question answering applications. The goal of our research was to find an automatically calculated measure that correlates well with human judges' assessment of answer correctness in the context of question answering tasks. Qaviar judges the response by computing recall against the stemmed content words in the human-generated answer key. It counts the answer correct if it exceeds a given recall threshold. We determined that the answer correctness predicted by Qaviar agreed with the human assessors 93% to 95% of the time. Forty-one question-answering systems were ranked by both Qaviar and human assessors, and these rankings correlated with a Kendall's Tau measure of 0.920, compared to a correlation of 0.956 between human assessors on the same data.
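The scoring rule above is straightforward to reproduce in outline: stem the content words of the answer key, compute the fraction found in the system response, and accept if it clears a threshold. The stemmer and stopword list below are crude stand-ins, and the 0.5 threshold is our placeholder, not the value used in the evaluation.

    import re

    STOPWORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "is", "was"}

    def stem(word):
        # Crude suffix stripping; a real system would use e.g. a Porter stemmer.
        for suffix in ("ing", "ed", "es", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[: -len(suffix)]
        return word

    def content_stems(text):
        return {stem(w) for w in re.findall(r"[a-z]+", text.lower())} - STOPWORDS

    def judge_correct(response, answer_key, threshold=0.5):
        key = content_stems(answer_key)
        if not key:
            return False
        recall = len(key & content_stems(response)) / len(key)
        return recall >= threshold
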
cs/0004010
Design and Evaluation of Mechanisms for a Multicomputer Object Store
cs.DC cs.DB
Multicomputers have traditionally been viewed as powerful compute engines. It is from this perspective that they have been applied to various problems in order to achieve significant performance gains. There are many applications for which this compute intensive approach is only a partial solution. CAD, virtual reality, simulation, document management and analysis all require timely access to large amounts of data. This thesis investigates the use of the object store paradigm to harness the large distributed memories found on multicomputers. The design, implementation, and evaluation of a distributed object server on the Fujitsu AP1000 is described. The performance of the distributed object server under example applications, mainly physical simulation problems, is used to evaluate solutions to the problems of client space recovery, object migration, and coherence maintenance. The distributed object server follows the client-server model, allows object replication, and uses binary semaphores as a concurrency control measure. Instrumentation of the server under these applications supports several conclusions: client space recovery should be dynamically controlled by the application, predictively prefetching object replicas yields benefits in restricted circumstances, object migration by storage unit (segment) is not generally suitable where there are many objects per storage unit, and binary semaphores are an expensive concurrency control measure in this environment.
cs/0004012
Assisted Video Sequences Indexing: Motion Analysis Based on Interest Points
cs.CV
This work deals with content-based video indexing. Our viewpoint is semi-automatic analysis of compressed video. We consider the possible applications of motion analysis and moving object detection: assisting moving object indexing, summarising videos, and allowing image and motion queries. We propose an approach based on interest points. As first results, we test and compare the stability of different types of interest point detectors in compressed sequences.
cs/0004016
Looking at discourse in a corpus: The role of lexical cohesion
cs.CL
This paper reports on the development and application of a computer model for discourse analysis through segmentation. Segmentation refers to the principled division of texts into contiguous constituents. Other studies have looked at the application of a number of models to the analysis of discourse by computer. The segmentation procedure developed for the present investigation is called LSM ('Link Set Median'). It was applied to three corpora of 300 texts from three different genres. The results obtained by applying the LSM procedure to the corpora were then compared to segmentation carried out at random. Statistical analyses suggested that LSM significantly outperformed random segmentation, thus indicating that the segmentation was meaningful.
cs/0005001
Robustness of Regional Matching Scheme over Global Matching Scheme
cs.CV
The paper establishes and verifies the theory, prevailing widely among image and pattern recognition specialists, that a bottom-up, indirect regional matching process is more stable and more robust than a global matching process against concentrated types of noise such as clutter, outliers or occlusion in the imagery. We demonstrate this by analyzing the effect of concentrated noise on a typical decision-making process, a simplified two-candidate voting model. Our theorem establishes that the lower bounds on the critical breakdown point of the election (or decision) result for the bottom-up matching process exceed the exact bound for the global matching process, implying that the regional process can accommodate a higher level of noise than the global process before the decision result overturns. We present experimental verification supporting the theory, via a white-black flag recognition problem in the presence of localized noise, as well as the conjecture that the theorem remains valid for other decision-making processes involving an important dimension-reducing transform such as principal component analysis or a Gabor transform, via a facial recognition problem.
cs/0005006
A Simple Approach to Building Ensembles of Naive Bayesian Classifiers for Word Sense Disambiguation
cs.CL
This paper presents a corpus-based approach to word sense disambiguation that builds an ensemble of Naive Bayesian classifiers, each of which is based on lexical features that represent co-occurring words in varying sized windows of context. Despite the simplicity of this approach, empirical results disambiguating the widely studied nouns ``line'' and ``interest'' show that such an ensemble achieves accuracy rivaling the best previously published results.
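A compact rendering of the idea: train one Naive Bayes classifier per window size, each seeing only the context words within its window, and let the ensemble vote. The add-one smoothing and simple majority vote here are our own simplifications, not necessarily the paper's choices.

    from collections import Counter
    import math

    class NaiveBayes:
        def __init__(self):
            self.senses = Counter()
            self.word_given_sense = {}
            self.vocab = set()

        def train(self, examples):            # examples: (words, sense) pairs
            for words, sense in examples:
                self.senses[sense] += 1
                counts = self.word_given_sense.setdefault(sense, Counter())
                for w in words:
                    counts[w] += 1
                    self.vocab.add(w)

        def classify(self, words):
            def log_score(sense):
                counts = self.word_given_sense[sense]
                denom = sum(counts.values()) + len(self.vocab)   # add-one smoothing
                return (math.log(self.senses[sense]) +
                        sum(math.log((counts[w] + 1) / denom) for w in words))
            return max(self.senses, key=log_score)

    def ensemble_vote(members, window_contexts):
        # members[i] classifies the context restricted to its own window size
        votes = Counter(m.classify(ctx) for m, ctx in zip(members, window_contexts))
        return votes.most_common(1)[0][0]
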
cs/0005008
A Denotational Semantics for First-Order Logic
cs.PL cs.AI
In Apt and Bezem [AB99] (see cs.LO/9811017) we provided a computational interpretation of first-order formulas over arbitrary interpretations. Here we complement this work by introducing a denotational semantics for first-order logic. Additionally, by allowing an assignment of a non-ground term to a variable, we introduce logical variables into this framework. The semantics combines a number of well-known ideas from the areas of semantics of imperative programming languages and logic programming. In the resulting computational view, conjunction corresponds to sequential composition, disjunction to ``don't know'' nondeterminism, existential quantification to declaration of a local variable, and negation to the ``negation as finite failure'' rule. The soundness result shows correctness of the semantics with respect to the notion of truth. The proof resembles in some aspects the proof of the soundness of SLDNF-resolution.
cs/0005009
PSPACE Reasoning for Graded Modal Logics
cs.LO cs.AI cs.CC cs.DS
We present a PSPACE algorithm that decides satisfiability of the graded modal logic Gr(K_R)---a natural extension of propositional modal logic K_R by counting expressions---which plays an important role in the area of knowledge representation. The algorithm employs a tableaux approach and is the first known algorithm which meets the lower bound for the complexity of the problem. Thus, we exactly fix the complexity of the problem and refute an ExpTime-hardness conjecture. We extend the results to the logic Gr(K_(R \cap I)), which augments Gr(K_R) with inverse relations and intersection of accessibility relations. This establishes a kind of ``theoretical benchmark'' that all algorithmic approaches can be measured against.
cs/0005010
Extending and Implementing the Stable Model Semantics
cs.LO cs.AI
An algorithm for computing the stable model semantics of logic programs is developed. It is shown that one can extend the semantics and the algorithm to handle new and more expressive types of rules. Emphasis is placed on the use of efficient implementation techniques. In particular, an implementation of lookahead that safely avoids testing every literal for failure and that makes the use of lookahead feasible is presented. In addition, a good heuristic is derived from the principle that the search space should be minimized. Due to the lack of competitive algorithms and implementations for the computation of stable models, the system is compared with three satisfiability solvers. This shows that the heuristic can be improved by breaking ties, but leaves open the question of how to break them. It also demonstrates that the more expressive rules of the stable model semantics make the semantics clearly preferable over propositional logic when a problem has a more compact logic program representation. Conjunctive normal form representations are never more compact than logic program ones.
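The lookahead idea can be seen in miniature on clausal form: tentatively assign each unassigned atom both values, propagate, and whenever one value leads to contradiction, commit to the other. Full systems iterate this to a fixpoint and, as the abstract notes, avoid testing every literal; the sketch below is the naive one-pass version on CNF, not the actual stable-model algorithm.

    def unit_propagate(clauses, assignment):
        """clauses: lists of int literals (negative = negated atom).
        Returns an extended assignment, or None on contradiction."""
        assignment = dict(assignment)
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                    continue                          # clause satisfied
                free = [l for l in clause if abs(l) not in assignment]
                if not free:
                    return None                       # clause falsified
                if len(free) == 1:
                    assignment[abs(free[0])] = free[0] > 0
                    changed = True
        return assignment

    def lookahead(clauses, assignment, atoms):
        for a in atoms:
            if a in assignment:
                continue
            for value in (True, False):
                if unit_propagate(clauses, {**assignment, a: value}) is None:
                    assignment = unit_propagate(clauses, {**assignment, a: not value})
                    if assignment is None:
                        return None                   # both values fail
                    break
        return assignment
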
cs/0005011
An Average Analysis of Backtracking on Random Constraint Satisfaction Problems
cs.CC cs.AI
In this paper we propose a random CSP model, called Model GB, which is a natural generalization of standard Model B. It is proved that Model GB in which each constraint is easy to satisfy exhibits non-trivial behaviour (not trivially satisfiable or unsatisfiable) as the number of variables approaches infinity. A detailed analysis to obtain an asymptotic estimate (good to 1+o(1)) of the average number of nodes in a search tree used by the backtracking algorithm on Model GB is also presented. It is shown that the average number of nodes required for finding all solutions or proving that no solution exists grows exponentially with the number of variables. So this model might be an interesting distribution for studying the nature of hard instances and evaluating the performance of CSP algorithms. In addition, we further investigate the behaviour of the average number of nodes as r (the ratio of constraints to variables) varies. The results indicate that as r increases, random CSP instances get easier and easier to solve, and the base of the exponential growth in the average number of nodes tends to 1 as r approaches infinity. Therefore, although the average number of nodes used by the backtracking algorithm on random CSP is exponential, many CSP instances will be very easy to solve when r is sufficiently large.
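The quantity being averaged above, the number of nodes in the backtracking search tree, is easy to instrument: count every partial assignment the solver visits. A sketch for enumerating all solutions, as in the analysis above (the interface is our own, not the paper's):

    def backtrack_count_nodes(variables, domains, consistent):
        """Counts nodes visited while enumerating all solutions.
        consistent(assignment) checks the constraints on assigned variables."""
        stats = {"nodes": 0, "solutions": 0}

        def extend(assignment, remaining):
            stats["nodes"] += 1
            if not consistent(assignment):
                return
            if not remaining:
                stats["solutions"] += 1
                return
            var, rest = remaining[0], remaining[1:]
            for value in domains[var]:
                extend({**assignment, var: value}, rest)

        extend({}, list(variables))
        return stats
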
cs/0005012
Reasoning with Axioms: Theory and Practice
cs.LO cs.AI
When reasoning in description, modal or temporal logics it is often useful to consider axioms representing universal truths in the domain of discourse. Reasoning with respect to an arbitrary set of axioms is hard, even for relatively inexpressive logics, and it is essential to deal with such axioms in an efficient manner if implemented systems are to be effective in real applications. This is particularly relevant to Description Logics, where subsumption reasoning with respect to a terminology is a fundamental problem. Two optimisation techniques that have proved to be particularly effective in dealing with terminologies are lazy unfolding and absorption. In this paper we seek to improve our theoretical understanding of these important techniques. We define a formal framework that allows the techniques to be precisely described, establish conditions under which they can be safely applied, and prove that, provided these conditions are respected, subsumption testing algorithms will still function correctly. These results are used to show that the procedures used in the FaCT system are correct and, moreover, to show how efficiency can be significantly improved, while still retaining the guarantee of correctness, by relaxing the safety conditions for absorption.