| id | title | categories | abstract |
|---|---|---|---|
cs/0206001
|
Neural Net Model for Featured Word Extraction
|
cs.NE cs.NI
|
Search engines perform the task of retrieving information related to the
user-supplied query words. This task has two parts: one is finding "featured
words" that best describe an article, and the other is matching these words to
the user-defined search terms. There are two main independent approaches to
this task. The first, using concepts from semantics, has been implemented
partially; for more details see Marko et al. (2002). The second approach is
reported in this paper. It is a theoretical model based on a neural network
(NN). Instead of using keywords or reading the first few lines of
papers/articles, the present model emphasizes extracting "featured words" from
an article. Naturally, we propose to exclude prepositions, articles, and so on
(that is, English words like "of, the, are, so, therefore", etc.) from such a
list. A neural model is taken with its nodes pre-assigned energies. Whenever a
match is found between featured words and user-defined search words, the
corresponding node fires and jumps to a higher energy. This firing continues
until the model attains a steady energy level, at which point the total energy
is calculated. Clearly, more matches generate higher energy; so, on the basis
of total energy, articles are ranked by their degree of relevance to the
user's interest. Another important feature of the proposed model is the
incorporation of a semantic module to refine the search words, for example by
finding associations among search words. In this manner, information retrieval
can be improved markedly.
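The firing-to-steady-state ranking described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the stop-word list, energy values, and one-shot firing rule are assumptions:

```python
# Hypothetical sketch of the energy-based ranking idea: each featured word
# is a node with a pre-assigned energy; a match with a search word "fires"
# the node, jumping it to a higher energy; articles are ranked by total energy.
STOP_WORDS = {"of", "the", "are", "so", "therefore", "a", "an", "is"}

def featured_words(article_text):
    """Crude featured-word extraction: all non-stop-words (an assumption)."""
    return {w.lower().strip(".,") for w in article_text.split()} - STOP_WORDS

def total_energy(article_text, search_words, base=1.0, jump=2.0):
    """Fire nodes matching search words; return the steady-state total energy."""
    energies = {w: base for w in featured_words(article_text)}
    for w in search_words:
        if w in energies:
            energies[w] += jump  # node fires and jumps to a higher energy
    return sum(energies.values())

def rank(articles, search_words):
    """Rank articles by total energy (higher energy = more relevant)."""
    return sorted(articles, key=lambda a: -total_energy(a, search_words))
```

More matching featured words yield a higher steady-state energy, reproducing the relevance ordering the abstract describes.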
|
cs/0206003
|
Handling Defeasibilities in Action Domains
|
cs.AI
|
Representing defeasibility is an important issue in common sense reasoning.
In reasoning about action and change, this issue becomes more difficult because
domain and action related defeasible information may conflict with general
inertia rules. Furthermore, different types of defeasible information may also
interfere with each other during the reasoning. In this paper, we develop a
prioritized logic programming approach to handle defeasibilities in reasoning
about action. In particular, we propose three action languages {\cal AT}^{0},
{\cal AT}^{1} and {\cal AT}^{2}, which handle three types of defeasibility in
action domains: defeasible constraints, defeasible observations, and actions
with defeasible and abnormal effects, respectively. Each language with a
higher superscript can be viewed as an extension of the language with a lower
superscript. These action languages inherit the simple syntax of {\cal A}
language but their semantics is developed in terms of transition systems where
transition functions are defined based on prioritized logic programs. By
illustrating various examples, we show that our approach eventually provides a
powerful mechanism to handle various defeasibilities in temporal prediction and
postdiction. We also investigate semantic properties of these three action
languages and characterize classes of action domains that present more
desirable solutions in reasoning about action within the underlying action
languages.
|
cs/0206004
|
Mining All Non-Derivable Frequent Itemsets
|
cs.DB cs.AI
|
Recent studies on frequent itemset mining algorithms resulted in significant
performance improvements. However, if the minimal support threshold is set too
low, or the data is highly correlated, the number of frequent itemsets itself
can be prohibitively large. To overcome this problem, recently several
proposals have been made to construct a concise representation of the frequent
itemsets, instead of mining all frequent itemsets. The main goal of this paper
is to identify redundancies in the set of all frequent itemsets and to exploit
these redundancies in order to reduce the result of a mining operation. We
present deduction rules to derive tight bounds on the support of candidate
itemsets. We show how the deduction rules allow for constructing a minimal
representation for all frequent itemsets. We also present connections between
our proposal and recent proposals for concise representations and we give the
results of experiments on real-life datasets that show the effectiveness of the
deduction rules. In fact, the experiments even show that in many cases, first
mining the concise representation, and then creating the frequent itemsets from
this representation outperforms existing frequent set mining algorithms.
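The deduction rules bound the support of a candidate itemset from the supports of its subsets via inclusion-exclusion; when the lower and upper bounds coincide, the itemset's support is derivable and need not be counted. A minimal sketch of the bounding step (not the paper's optimized mining algorithm; the example supports in the test are made up):

```python
from itertools import combinations

def proper_subsets(I):
    """All proper subsets of itemset I (including the empty set)."""
    items = sorted(I)
    return [frozenset(c) for r in range(len(items))
            for c in combinations(items, r)]

def deduced_bounds(I, supp):
    """Tight (lower, upper) deduction-rule bounds on supp(I), given the
    supports of all proper subsets of I (supp: frozenset -> count)."""
    lo, hi = 0, float("inf")
    for J in proper_subsets(I):
        rest = sorted(I - J)
        s = 0
        # inclusion-exclusion sum over all X with J <= X < I
        for r in range(len(rest)):
            for extra in combinations(rest, r):
                X = J | frozenset(extra)
                s += (-1) ** (len(I) - len(X) + 1) * supp[X]
        if (len(I) - len(J)) % 2 == 1:
            hi = min(hi, s)  # odd |I \ J| gives an upper bound
        else:
            lo = max(lo, s)  # even |I \ J| gives a lower bound
    return lo, hi
```

If `lo == hi`, the itemset is derivable: its support follows from its subsets, which is exactly the redundancy the paper exploits.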
|
cs/0206006
|
Robust Feature Selection by Mutual Information Distributions
|
cs.AI cs.LG
|
Mutual information is widely used in artificial intelligence, in a
descriptive way, to measure the stochastic dependence of discrete random
variables. In order to address questions such as the reliability of the
empirical value, one must consider sample-to-population inferential approaches.
This paper deals with the distribution of mutual information, as obtained in a
Bayesian framework by a second-order Dirichlet prior distribution. The exact
analytical expression for the mean and an analytical approximation of the
variance are reported. Asymptotic approximations of the distribution are
proposed. The results are applied to the problem of selecting features for
incremental learning and classification of the naive Bayes classifier. A fast,
newly defined method is shown to outperform the traditional approach based on
empirical mutual information on a number of real data sets. Finally, a
theoretical development is reported that allows one to efficiently extend the
above methods to incomplete samples in an easy and effective way.
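For the empirical baseline the paper compares against, mutual information is computed directly from a contingency table of counts; the paper's contribution is to treat this quantity as a random variable under a Dirichlet prior rather than as a point estimate. A minimal sketch of the empirical value (in nats):

```python
from math import log

def empirical_mi(counts):
    """Empirical mutual information (in nats) of a 2D contingency table
    of joint counts for two discrete random variables."""
    n = sum(sum(row) for row in counts)
    row = [sum(r) for r in counts]          # marginal counts of variable X
    col = [sum(c) for c in zip(*counts)]    # marginal counts of variable Y
    mi = 0.0
    for i, r in enumerate(counts):
        for j, nij in enumerate(r):
            if nij:
                mi += (nij / n) * log(n * nij / (row[i] * col[j]))
    return mi
```

Feature selection with the raw empirical value ranks features by this number alone; the distributional approach additionally asks how reliable each value is given the sample size.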
|
cs/0206007
|
Using the Annotated Bibliography as a Resource for Indicative
Summarization
|
cs.CL cs.DL
|
We report on a language resource consisting of 2000 annotated bibliography
entries, which is being analyzed as part of our research on indicative document
summarization. We show how annotated bibliographies cover certain aspects of
summarization that have not been well-covered by other summary corpora, and
motivate why they constitute an important form to study for information
retrieval. We detail our methodology for collecting the corpus, and overview
our document feature markup that we introduced to facilitate summary analysis.
We present the characteristics of the corpus, methods of collection, and show
its use in finding the distribution of types of information included in
indicative summaries and their relative ordering within the summaries.
|
cs/0206008
|
Computer modeling of feelings and emotions: a quantitative neural
network model of the feeling-of-knowing
|
cs.AI cs.NE q-bio.NC q-bio.QM
|
The first quantitative neural network model of feelings and emotions is
proposed on the basis of available data on their neuroscientific and
evolutionary-biological nature, and on a neural network model of human memory
that admits a distinct, time-dependent description of conscious and unconscious
mental processes. As an example, the proposed model is applied to a
quantitative description of the feeling of knowing.
|
cs/0206013
|
High-order fundamental and general solutions of convection-diffusion
equation and their applications with boundary particle method
|
cs.CE cs.CG
|
In this study, we present the high-order fundamental solutions and general
solutions of the convection-diffusion equation. To demonstrate their efficacy, we
applied the high-order general solutions to the boundary particle method (BPM)
for the solution of some inhomogeneous convection-diffusion problems, where the
BPM is a new truly boundary-only meshfree collocation method based on multiple
reciprocity principle. For the sake of completeness, the BPM is also briefly
described here.
|
cs/0206014
|
A Method for Open-Vocabulary Speech-Driven Text Retrieval
|
cs.CL
|
While recent retrieval techniques do not limit the number of index terms,
out-of-vocabulary (OOV) words are crucial in speech recognition. Aiming at
retrieving information with spoken queries, we fill the gap between speech
recognition and text retrieval in terms of the vocabulary size. Given a spoken
query, we generate a transcription and detect OOV words through speech
recognition. We then map detected OOV words to terms indexed in a target
collection to complete the transcription, and search the collection for
documents relevant to the completed transcription. We show the effectiveness of
our method by way of experiments.
|
cs/0206015
|
Japanese/English Cross-Language Information Retrieval: Exploration of
Query Translation and Transliteration
|
cs.CL
|
Cross-language information retrieval (CLIR), where queries and documents are
in different languages, has of late become one of the major topics within the
information retrieval community. This paper proposes a Japanese/English CLIR
system that combines query translation and retrieval modules. We
currently target the retrieval of technical documents, and therefore the
performance of our system is highly dependent on the quality of the translation
of technical terms. However, the technical term translation is still
problematic in that technical terms are often compound words, and thus new
terms are progressively created by combining existing base words. In addition,
Japanese often represents loanwords using its own phonetic script.
Consequently, existing dictionaries find it difficult to achieve sufficient
coverage. To counter the first problem, we produce a Japanese/English
dictionary for base words, and translate compound words on a word-by-word
basis. We also use a probabilistic method to resolve translation ambiguity. For
the second problem, we use a transliteration method, which maps words
unlisted in the base word dictionary to their phonetic equivalents in the
target language. We evaluate our system using a test collection for CLIR, and
show that both the compound word translation and transliteration methods
improve the system performance.
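The word-by-word compound translation step can be sketched as follows. The dictionary entries are hypothetical toy data, and a simple candidate-frequency table stands in for the paper's collocational statistics:

```python
from itertools import product

# Toy base-word dictionary (hypothetical entries, for illustration only).
DICT = {
    "jouhou": ["information", "intelligence"],
    "kensaku": ["retrieval", "search"],
}
# Proxy for collocational statistics: corpus frequency of each candidate.
FREQ = {("information", "retrieval"): 120, ("information", "search"): 30,
        ("intelligence", "retrieval"): 1, ("intelligence", "search"): 5}

def translate_compound(base_words):
    """Translate a compound word-by-word; resolve translation ambiguity by
    picking the candidate combination with the highest corpus frequency."""
    candidates = product(*(DICT[w] for w in base_words))
    return max(candidates, key=lambda c: FREQ.get(tuple(c), 0))
```

New compounds built from known base words get translations without being listed in the dictionary themselves, which is the coverage gain the abstract targets.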
|
cs/0206016
|
Distance function wavelets - Part III: "Exotic" transforms and series
|
cs.CE cs.CG
|
Part III of the reports consists of various unconventional distance function
wavelets (DFW). The dimension and the order of partial differential equation
(PDE) are first used as a substitute of the scale parameter in the DFW
transforms and series, especially with the space and time-space potential
problems. It is noted that the recursive multiple reciprocity formulation is
the DFW series. The Green second identity is used to avoid the singularity of
the zero-order fundamental solution in creating the DFW series. The fundamental
solutions of various composite PDEs are found very flexible and efficient for
handling a broad range of problems. We also discuss the underlying connections
between the crucial concepts of dimension, scale and the order of PDE through
the analysis of dissipative acoustic wave propagation. The shape parameter of
the potential problems is also employed as the "scale parameter" to create the
non-orthogonal DFW. This paper also briefly discusses and conjectures the DFW
correspondences of a variety of coordinate variable transforms and series.
Of practical importance, anisotropic and inhomogeneous DFWs are developed by
using the geodesic distance variable. The DFW and the related basis functions
are also used to construct kernel distance sigmoidal functions, which are
potentially useful in artificial neural networks and machine learning. As with
(or even more than) the preceding two reports, this study sacrifices
mathematical rigor and in turn unfetters imagination. Most results are
intuitively obtained
without rigorous analysis. Follow-up research is still under way. The paper is
intended to inspire more research into this promising area.
|
cs/0206017
|
The Prioritized Inductive Logic Programs
|
cs.AI cs.LG
|
The limit behavior of inductive logic programs has not been explored, but
when considering incremental or online inductive learning algorithms, which
typically run continuously, such behavior of the programs should be taken into
account. An example is given to show that an inductive learning algorithm may
not be correct in the long run if the limit behavior is not considered. An
inductive logic program is convergent if given an increasing sequence of
example sets, the program produces a corresponding sequence of the Horn logic
programs which has the set-theoretic limit, and is limit-correct if the limit
of the produced sequence of the Horn logic programs is correct with respect to
the limit of the sequence of the example sets. It is shown that the GOLEM
system is not limit-correct. Finally, a limit-correct inductive logic system,
called the prioritized GOLEM system, is proposed as a solution.
|
cs/0206023
|
Relational Association Rules: getting WARMeR
|
cs.DB cs.AI
|
In recent years, the problem of association rule mining in transactional data
has been well studied. We propose to extend the discovery of classical
association rules to the discovery of association rules of conjunctive queries
in arbitrary relational data, inspired by the WARMR algorithm, developed by
Dehaspe and Toivonen, that discovers association rules over a limited set of
conjunctive queries. Conjunctive query evaluation in relational databases is
well understood, but still poses some great challenges when approached from a
discovery viewpoint in which patterns are generated and evaluated with respect
to some well defined search space and pruning operators.
|
cs/0206026
|
Interleaved semantic interpretation in environment-based parsing
|
cs.CL cs.HC
|
This paper extends a polynomial-time parsing algorithm that resolves
structural ambiguity in input to a speech-based user interface by calculating
and comparing the denotations of rival constituents, given some model of the
interfaced application environment (Schuler 2001). The algorithm is extended to
incorporate a full set of logical operators, including quantifiers and
conjunctions, into this calculation without increasing the complexity of the
overall algorithm beyond polynomial time, both in terms of the length of the
input and the number of entities in the environment model.
|
cs/0206027
|
Behaviour-based Knowledge Systems: An Epigenetic Path from Behaviour to
Knowledge
|
cs.AI cs.AR cs.NE
|
In this paper we expose the theoretical background underlying our current
research. This consists of the development of behaviour-based knowledge
systems, for closing the gaps between behaviour-based and knowledge-based
systems, and also between the understandings of the phenomena they model. We
expose the requirements and stages for developing behaviour-based knowledge
systems and discuss their limits. We believe that these are necessary
conditions for the development of higher order cognitive capacities, in
artificial and natural cognitive systems.
|
cs/0206028
|
Knowledge management for enterprises (Wissensmanagement fuer
Unternehmen)
|
cs.IR cs.AI
|
Although knowledge is one of the most valuable resources of an enterprise and
an important factor of production and competition, this intellectual potential
is often used (or maintained) only inadequately by enterprises. Therefore, in a
globalised and growing market, the optimal use of existing knowledge represents
a key factor for the enterprise of the future. Knowledge management systems
should facilitate this. Since geographically widely distributed sites, however,
call for a distributed system, this paper uncovers the spectrum of issues
connected with this and presents a possible basic approach based on ontologies
and modern, platform-independent technologies. Last but not least, this
approach, as well as general questions of knowledge management, is discussed.
|
cs/0206030
|
A Probabilistic Method for Analyzing Japanese Anaphora Integrating Zero
Pronoun Detection and Resolution
|
cs.CL
|
This paper proposes a method to analyze Japanese anaphora, in which zero
pronouns (omitted obligatory cases) are used to refer to preceding entities
(antecedents). Unlike the case of general coreference resolution, zero pronouns
have to be detected prior to resolution because they are not expressed in
discourse. Our method integrates two probability parameters to perform zero
pronoun detection and resolution in a single framework. The first parameter
quantifies the degree to which a given case is a zero pronoun. The second
parameter quantifies the degree to which a given entity is the antecedent for a
detected zero pronoun. To compute these parameters efficiently, we use corpora
with/without annotations of anaphoric relations. We show the effectiveness of
our method by way of experiments.
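The two-parameter combination can be sketched as follows. The probability tables, threshold, and multiplicative combination rule here are illustrative assumptions, not the paper's trained models:

```python
# Hypothetical sketch of the two-parameter anaphora model: P_DETECT scores
# whether a case slot is a zero pronoun; P_RESOLVE scores each candidate
# antecedent. Both tables below are made-up illustrative values.
P_DETECT = {("taberu", "ga"): 0.9, ("taberu", "o"): 0.3}   # (verb, case slot)
P_RESOLVE = {("taberu", "ga", "Taro"): 0.7, ("taberu", "ga", "inu"): 0.2}

def analyze(verb, case, candidates, threshold=0.5):
    """Detect a zero pronoun for (verb, case); if one is detected, resolve
    it to the candidate antecedent maximizing the combined probability."""
    p_zero = P_DETECT.get((verb, case), 0.0)
    if p_zero < threshold:
        return None  # case slot judged not to be a zero pronoun
    return max(candidates,
               key=lambda c: p_zero * P_RESOLVE.get((verb, case, c), 0.0))
```

Detection and resolution share one scoring framework, so a weakly detected zero pronoun never forces a resolution decision.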
|
cs/0206034
|
Applying a Hybrid Query Translation Method to Japanese/English
Cross-Language Patent Retrieval
|
cs.CL
|
This paper applies an existing query translation method to cross-language
patent retrieval. In our method, multiple dictionaries are used to derive all
possible translations for an input query, and collocational statistics are used
to resolve translation ambiguity. We used Japanese/English parallel patent
abstracts to perform comparative experiments, where our method outperformed a
simple dictionary-based query translation method, and achieved 76% of the
monolingual retrieval performance in terms of average precision.
|
cs/0206035
|
PRIME: A System for Multi-lingual Patent Retrieval
|
cs.CL
|
Given the growing number of patents filed in multiple countries, users are
interested in retrieving patents across languages. We propose a multi-lingual
patent retrieval system, which translates a user query into the target
language, searches a multilingual database for patents relevant to the query,
and improves the browsing efficiency by way of machine translation and
clustering. Our system also extracts new translations from patent families
consisting of comparable patents, to enhance the translation dictionary.
|
cs/0206036
|
Language Modeling for Multi-Domain Speech-Driven Text Retrieval
|
cs.CL
|
We report experimental results associated with speech-driven text retrieval,
which facilitates retrieving information in multiple domains with spoken
queries. Since users speak contents related to a target collection, we produce
language models used for speech recognition based on the target collection, so
as to improve both the recognition and retrieval accuracy. Experiments using
existing test collections combined with dictated queries showed the
effectiveness of our method.
|
cs/0206037
|
Speech-Driven Text Retrieval: Using Target IR Collections for
Statistical Language Model Adaptation in Speech Recognition
|
cs.CL
|
Speech recognition has of late become a practical technology for real world
applications. Aiming at speech-driven text retrieval, which facilitates
retrieving information with spoken queries, we propose a method to integrate
speech recognition and retrieval methods. Since users speak contents related to
a target collection, we adapt statistical language models used for speech
recognition based on the target collection, so as to improve both the
recognition and retrieval accuracy. Experiments using existing test collections
combined with dictated queries showed the effectiveness of our method.
|
cs/0206039
|
Hidden Markov model segmentation of hydrological and environmental time
series
|
cs.CE cs.NA math.NA nlin.CD physics.data-an
|
Motivated by Hubert's segmentation procedure we discuss the application of
hidden Markov models (HMM) to the segmentation of hydrological and environmental
time series. We use a HMM algorithm which segments time series of several
hundred terms in a few seconds and is computationally feasible for even longer
time series. The segmentation algorithm computes the Maximum Likelihood
segmentation by use of an expectation / maximization iteration. We rigorously
prove algorithm convergence and use numerical experiments, involving
temperature and river discharge time series, to show that the algorithm usually
converges to the globally optimal segmentation. The relation of the proposed
algorithm to Hubert's segmentation procedure is also discussed.
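To illustrate the objective behind segmentation, Hubert's procedure (which motivates the paper) seeks change points minimizing within-segment deviation. The following dynamic-programming sketch solves that simplified stand-in for a fixed number of segments; it is not the paper's HMM/EM algorithm, which treats segmentation probabilistically:

```python
def best_segmentation(series, k):
    """Split series into k contiguous segments minimizing the total squared
    deviation from segment means; returns the interior change points.
    A simplified stand-in for the segmentation objective, not the HMM fit."""
    n = len(series)

    def cost(i, j):
        # squared deviation of series[i:j] around its mean
        seg = series[i:j]
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    INF = float("inf")
    # dp[s][j]: best cost of splitting series[:j] into s segments
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for s in range(1, k + 1):
        for j in range(s, n + 1):
            for i in range(s - 1, j):
                c = dp[s - 1][i] + cost(i, j)
                if c < dp[s][j]:
                    dp[s][j], cut[s][j] = c, i
    # backtrack the change points
    bounds, j = [], n
    for s in range(k, 0, -1):
        bounds.append(cut[s][j])
        j = cut[s][j]
    return sorted(bounds[:-1])
```

On a series with two well-separated levels, the recovered change points coincide with the level shifts; the HMM formulation generalizes this by allowing probabilistic state assignments and recurring regimes.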
|
cs/0206041
|
Anticipatory Guidance of Plot
|
cs.AI
|
An anticipatory system for guiding plot development in interactive narratives
is described. The executable model is a finite automaton that provides the
implemented system with a look-ahead. The identification of undesirable future
states in the model is used to guide the player, in a transparent manner. In
this way, too radical twists of the plot can be avoided. Since the player
participates in the development of the plot, such guidance can have many forms,
depending on the environment of the player, on the behavior of the other
players, and on the means of player interaction. We present a design method for
interactive narratives which produces designs suitable for the implementation
of anticipatory mechanisms. Use of the method is illustrated by application to
our interactive computer game Kaktus.
|
cs/0207001
|
National Infrastructure Contingencies: Survey of Wireless Technology
Support
|
cs.DC cs.CE
|
In modern society, the flow of information has become the lifeblood of
commerce and social interaction. This movement of data supports most aspects of
the United States economy in particular, as well as serving as the vehicle
through which governmental agencies react to social conditions. In addition, it is
understood that the continuance of efficient and reliable data communications
during times of national or regional disaster remains a priority in the United
States. The coordination of emergency response and area revitalization /
rehabilitation efforts between local, state, and federal emergency response is
increasingly necessary as agencies strive to work more seamlessly between the
affected organizations. Additionally, international support is often made
available to react to such adverse conditions as wildfire suppression
scenarios, which in turn requires the efficient management of the workforce and
associated logistics support.
It is through the examination of the issues related to un-tethered data
transmission during infrastructure contingencies that responders may best
tailor a unified approach to the rapid recovery after disasters occur.
|
cs/0207002
|
Using eigenvectors of the bigram graph to infer morpheme identity
|
cs.CL
|
This paper describes the results of some experiments exploring statistical
methods to infer syntactic behavior of words and morphemes from a raw corpus in
an unsupervised fashion. It shares certain points in common with Brown et al.
(1992) and work that has grown out of that: it employs statistical techniques
to analyze syntactic behavior based on what words occur adjacent to a given
word. However, we use an eigenvector decomposition of a nearest-neighbor graph
to produce a two-dimensional rendering of the words of a corpus in which words
of the same syntactic category tend to form neighborhoods. We exploit this
technique for extending the value of automatic learning of morphology. In
particular, we look at the suffixes derived from a corpus by unsupervised
learning of morphology, and we ask which of these suffixes have a consistent
syntactic function (e.g., in English, -tion is primarily a mark of nouns, but
-s marks both noun plurals and 3rd person present on verbs), and we determine
that this method works well for this task.
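The eigenvector embedding step can be sketched as follows. A toy corpus and plain power iteration stand in for the paper's nearest-neighbor graph construction and full eigendecomposition:

```python
# Build a symmetric adjacency matrix from bigram counts, then compute its
# dominant eigenvector by power iteration; words with similar neighbors get
# similar coordinates (a toy stand-in for the two-dimensional rendering).
def bigram_adjacency(tokens, vocab):
    n = len(vocab)
    idx = {w: i for i, w in enumerate(vocab)}
    A = [[0.0] * n for _ in range(n)]
    for a, b in zip(tokens, tokens[1:]):
        A[idx[a]][idx[b]] += 1.0
        A[idx[b]][idx[a]] += 1.0  # symmetrize the bigram graph
    return A

def dominant_eigenvector(A, iters=200):
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]  # rescale to avoid overflow
    return v
```

Words that occur in the same contexts (here, "cat" and "dog") receive nearly identical coordinates, which is the neighborhood effect the paper exploits to assign syntactic categories to suffixes.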
|
cs/0207003
|
Analysis of Titles and Readers For Title Generation Centered on the
Readers
|
cs.CL
|
The title of a document has two roles, to give a compact summary and to lead
the reader to read the document. Conventional title generation focuses on
finding key expressions from the author's wording in the document to give a
compact summary and pays little attention to the reader's interest. To make the
title play its second role properly, it is indispensable to clarify the content
(``what to say'') and wording (``how to say'') of titles that are effective to
attract the target reader's interest. In this article, we first identify
typical content and wording of titles aimed at general readers in a comparative
study between titles of technical papers and headlines rewritten for
newspapers. Next, we describe the results of a questionnaire survey on the
effects of the content and wording of titles on the reader's interest. The
survey of general and knowledgeable readers shows both common and different
tendencies in interest.
|
cs/0207005
|
Efficient Deep Processing of Japanese
|
cs.CL
|
We present a broad coverage Japanese grammar written in the HPSG formalism
with MRS semantics. The grammar is created for use in real world applications,
such that robustness and performance issues play an important role. It is
connected to a POS tagging and word segmentation tool. This grammar is being
developed in a multilingual context, requiring MRS structures that are easily
comparable across languages.
|
cs/0207008
|
Agent Programming with Declarative Goals
|
cs.AI cs.PL
|
A long and lasting problem in agent research has been to close the gap
between agent logics and agent programming frameworks. The main reason for this
problem of establishing a link between agent logics and agent programming
frameworks is identified and explained by the fact that agent programming
frameworks have not incorporated the concept of a `declarative goal'. Instead,
such frameworks have focused mainly on plans or `goals-to-do' instead of the
end goals to be realised which are also called `goals-to-be'. In this paper, a
new programming language called GOAL is introduced which incorporates such
declarative goals. The notion of a `commitment strategy' - one of the main
theoretical insights due to agent logics, which explains the relation between
beliefs and goals - is used to construct a computational semantics for GOAL.
Finally, a proof theory for proving properties of GOAL agents is introduced.
Thus, we offer a complete theory of agent programming in the sense that our
theory provides both for a programming framework and a programming logic for
such agents. An example program is proven correct by using this programming
logic.
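The flavor of declarative goals and a commitment strategy can be sketched as follows. This is a toy illustration with made-up actions and a single blind-commitment rule, not the actual GOAL language or its semantics:

```python
# Toy GOAL-style agent: beliefs and declarative goals are sets of atomic
# propositions; an action fires when its precondition holds and its effect
# serves a goal; a goal is dropped once it is believed achieved.
class Agent:
    def __init__(self, beliefs, goals):
        self.beliefs = set(beliefs)
        self.goals = set(goals) - set(beliefs)  # only unachieved goals remain

    def step(self, actions):
        """Pick an enabled action whose effect serves some declarative goal.
        actions: name -> (precondition set, effect proposition)."""
        for name, (pre, effect) in actions.items():
            if pre <= self.beliefs and effect in self.goals:
                self.beliefs.add(effect)
                self.goals.discard(effect)  # blind commitment: drop when done
                return name
        return None
```

The goal is an end state ("goal-to-be") rather than a plan ("goal-to-do"): the agent selects whatever enabled action realises it, and commitment persists exactly until the goal is believed achieved.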
|
cs/0207010
|
Symmetric boundary knot method
|
cs.CE cs.CG
|
The boundary knot method (BKM) is a recent boundary-type radial basis
function (RBF) collocation scheme for general PDEs. Like the method of
fundamental solution (MFS), the RBF is employed to approximate the
inhomogeneous terms via the dual reciprocity principle. Unlike the MFS, the
method uses a nonsingular general solution instead of a singular fundamental
solution to evaluate the homogeneous solution so as to circumvent the
controversial artificial boundary outside the physical domain. The BKM is
meshfree, superconvergent, integration free, very easy to learn and program.
The original BKM, however, loses symmetry in the presence of mixed boundary
conditions. In this study, by analogy with Hermite RBF interpolation, we
developed a symmetric BKM scheme. The accuracy and efficiency of the symmetric
BKM are also numerically validated in some 2D and 3D Helmholtz and diffusion
reaction problems under complicated geometries.
|
cs/0207011
|
Improving Web Database Access Using Decision Diagrams
|
cs.LO cs.DB
|
In some areas of management and commerce, especially in electronic commerce
(E-commerce), which are accelerated by advances in Web technologies, it is
essential to support the decision-making process using formal methods. Among
the problems of E-commerce applications are reducing data access time so that
huge databases can be searched quickly, decreasing the cost of database design,
etc. We present the application of Decision Diagram design using an
information-theoretic approach to improve database access speeds. We show that
such use provides systematic and visual ways of applying decision-making
methods to simplify complex Web engineering problems.
|
cs/0207015
|
New advances in dual reciprocity and boundary-only RBF methods
|
cs.CE cs.CG
|
This paper makes some significant advances in the dual reciprocity and
boundary-only RBF techniques. The proposed boundary knot method (BKM) differs
from the standard boundary element method in a number of important aspects.
Namely, it is truly meshless, exponentially convergent, integration-free (in
particular, free of singular integration), boundary-only for general problems,
and leads to a symmetric matrix under certain conditions (extensible to general
cases after further modification). The BKM also avoids the artificial boundary
of the method of fundamental solutions. An amazing finding is that the BKM can
formulate linear modeling equations for nonlinear partial differential systems
with linear boundary conditions. This merit allows it to circumvent all
perplexing issues in the iterative solution of nonlinear
equations. On the other hand, by analogy with Green's second identity, this
paper also presents a general solution RBF (GSR) methodology to construct
efficient RBFs in the dual reciprocity and domain-type RBF collocation methods.
The GSR approach first establishes an explicit relationship between the BEM and
RBF itself on the ground of the weighted residual principle. This paper also
discusses the RBF convergence and stability problems within the framework of
integral equation theory.
|
cs/0207016
|
Relationship between boundary integral equation and radial basis
function
|
cs.CE cs.CG
|
This paper aims to survey our recent work relating to the radial basis
function (RBF) from some new points of view. In the first part, we established
the RBF on numerical integration analysis based on an intrinsic relationship
between the Green's boundary integral representation and RBF. It is found that
the kernel function of the integral equation is important for creating
efficient RBFs. The fundamental solution RBF (FS-RBF) was presented as a novel
strategy for constructing operator-dependent RBFs. We proposed a conjectured
formula featuring the effect of dimension on the error bound to show the
dimension-independent merit of
the RBF techniques. We also discussed wavelet RBF, localized RBF schemes, and
the influence of node placement on the RBF solution accuracy. The
centrosymmetric matrix structure of the RBF interpolation matrix under
symmetric node placing is proved.
The second part of this paper is concerned with the boundary knot method
(BKM), a new boundary-only, meshless, spectral convergent, integration-free RBF
collocation technique. The BKM was tested on the Helmholtz, Laplace, linear and
nonlinear convection-diffusion problems. In particular, we introduced the
response knot-dependent nonsingular general solution to calculate
varying-parameter and nonlinear steady convection-diffusion problems very
efficiently. By comparing with the multiple dual reciprocity method, we
discussed the completeness issue of the BKM.
Finally, the nonsingular solutions for some known differential operators were
given in appendix. Also we expanded the RBF concepts by introducing time-space
RBF for transient problems.
|
cs/0207017
|
New Insights in Boundary-only and Domain-type RBF Methods
|
cs.CE cs.CG
|
This paper has made some significant advances in the boundary-only and
domain-type RBF techniques. The proposed boundary knot method (BKM) differs
from the standard boundary element method in a number of important aspects.
Namely, it is truly meshless, exponentially convergent, integration-free (in
particular, free of singular integration), boundary-only for general problems,
and leads to a symmetric matrix under certain conditions (extensible to general
cases after further modification). The BKM also avoids the artificial boundary
of the method of fundamental solutions. An amazing finding is that the BKM can
formulate linear modeling equations for nonlinear partial differential systems
with linear boundary conditions. This merit allows it to circumvent all
perplexing issues in the iterative solution of nonlinear equations. On the
other hand, by analogy with Green's second identity, we also present a general
solution RBF (GSR) methodology to construct efficient RBFs
in the domain-type RBF collocation method and dual reciprocity method. The GSR
approach first establishes an explicit relationship between the BEM and RBF
itself on the ground of the potential theory. This paper also discusses some
essential issues relating to the RBF computing, which include time-space RBFs,
direct and indirect RBF schemes, finite RBF method, and the application of
multipole and wavelet to the RBF solution of the PDEs.
|
cs/0207018
|
Definitions of distance function in radial basis function approach
|
cs.CE cs.CG
|
Very few studies address how to construct efficient RBFs by means of
problem features. Recently the present author presented a general solution RBF
(GS-RBF) methodology to successfully create operator-dependent RBFs [1]. On the
other hand, the normal radial basis function (RBF) is defined via the Euclidean
distance function or the geodesic distance [2]. The purpose of this note
is to redefine the distance function in conjunction with problem features,
including problem-dependent and time-space distance functions.
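A redefined distance function is easy to prototype: a standard RBF interpolant touches the metric only through the pairwise distance matrix, so the Euclidean distance can be swapped for a time-space variant. A minimal numpy sketch, where the multiquadric basis, the 40-node test function, and the wave-speed-like parameter `c` are illustrative assumptions rather than details from the note:

```python
import numpy as np

def euclidean(a, b):
    # pairwise Euclidean distances between point sets (n,d) and (m,d)
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def time_space(a, b, c=1.0):
    # hypothetical time-space distance: the last coordinate is treated as
    # time and scaled by a wave-speed-like parameter c (illustration only)
    space = np.linalg.norm(a[:, None, :-1] - b[None, :, :-1], axis=-1)
    time = np.abs(a[:, None, -1] - b[None, :, -1])
    return np.sqrt(space**2 + (c * time)**2)

def rbf_interpolate(centers, values, query, dist=euclidean, shape=1.0):
    # multiquadric RBF: phi(r) = sqrt(r^2 + shape^2)
    phi = lambda r: np.sqrt(r**2 + shape**2)
    coeffs = np.linalg.solve(phi(dist(centers, centers)), values)
    return phi(dist(query, centers)) @ coeffs

# interpolate f(x, t) = sin(x) + t on scattered space-time nodes
rng = np.random.default_rng(0)
nodes = rng.uniform(0, 1, size=(40, 2))
f = np.sin(nodes[:, 0]) + nodes[:, 1]
approx = rbf_interpolate(nodes, f, np.array([[0.5, 0.5]]), dist=time_space)
print(abs(approx[0] - (np.sin(0.5) + 0.5)))  # small interpolation error
```

Only the `dist` argument changes between the two metrics; the interpolation machinery is untouched.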
|
cs/0207021
|
Abduction, ASP and Open Logic Programs
|
cs.AI
|
Open logic programs and open entailment have been recently proposed as an
abstract framework for the verification of incomplete specifications based upon
normal logic programs and the stable model semantics. There are obvious
analogies between open predicates and abducible predicates. However, despite
superficial similarities, there are features of open programs that have no
immediate counterpart in the framework of abduction and vice versa. Similarly,
open programs cannot be immediately simulated with answer set programming
(ASP). In this paper we start a thorough investigation of the relationships
between open inference, abduction and ASP. We shall prove that open programs
generalize the other two frameworks. The generalized framework suggests
interesting extensions of abduction under the generalized stable model
semantics. In some cases, we will be able to reduce open inference to abduction
and ASP, thereby estimating its computational complexity. At the same time, the
aforementioned reduction opens the way to new applications of abduction and
ASP.
|
cs/0207022
|
What is a Joint Goal? Games with Beliefs and Defeasible Desires
|
cs.MA cs.GT
|
In this paper we introduce a qualitative decision and game theory based on
belief (B) and desire (D) rules. We show that a group of agents acts as if it
is maximizing achieved joint goals.
|
cs/0207023
|
Domain-Dependent Knowledge in Answer Set Planning
|
cs.AI
|
In this paper we consider three different kinds of domain-dependent control
knowledge (temporal, procedural and HTN-based) that are useful in planning. Our
approach is declarative and relies on the language of logic programming with
answer set semantics (AnsProlog*). AnsProlog* is designed to plan without
control knowledge. We show how temporal, procedural and HTN-based control
knowledge can be incorporated into AnsProlog* by the modular addition of a
small number of domain-dependent rules, without the need to modify the planner.
We formally prove the correctness of our planner, both in the absence and
presence of the control knowledge. Finally, we perform some initial
experimentation that demonstrates the potential reduction in planning time that
can be achieved when procedural domain knowledge is used to solve planning
problems with large plan length.
|
cs/0207024
|
On Concise Encodings of Preferred Extensions
|
cs.AI cs.CC cs.DS
|
Much work on argument systems has focussed on preferred extensions which
define the maximal collectively defensible subsets. Identification and
enumeration of these subsets is (under the usual assumptions) computationally
demanding. We consider approaches to deciding if a subset S is a preferred
extension which query a representation encoding all such extensions, so that
the computational effort is invested once only (for the initial enumeration)
rather than for each separate query.
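For context, a preferred extension is a maximal admissible set of a Dung framework. The naive per-query enumeration that such encodings aim to avoid repeating can be sketched by brute force (exponential in the number of arguments; the three-argument chain is an illustrative example):

```python
from itertools import combinations

def preferred_extensions(args, attacks):
    """Enumerate the preferred extensions (maximal admissible sets) of a
    Dung framework given as (set of arguments, attack relation)."""
    def conflict_free(s):
        return not any((a, b) in attacks for a in s for b in s)
    def defends(s, a):
        # every attacker of a is counter-attacked by some member of s
        return all(any((d, b) in attacks for d in s)
                   for b in args if (b, a) in attacks)
    def admissible(s):
        return conflict_free(s) and all(defends(s, a) for a in s)
    adm = [set(c) for n in range(len(args) + 1)
           for c in combinations(sorted(args), n) if admissible(set(c))]
    # keep only the maximal admissible sets
    return [s for s in adm if not any(s < t for t in adm)]

# chain a -> b -> c: the single preferred extension is {a, c}
print(preferred_extensions({'a', 'b', 'c'}, {('a', 'b'), ('b', 'c')}))
```

The exponential sweep over subsets is exactly the cost a one-time encoding of all extensions would amortize across queries.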
|
cs/0207025
|
"Minimal defence": a refinement of the preferred semantics for
argumentation frameworks
|
cs.AI
|
Dung's abstract framework for argumentation enables a study of the
interactions between arguments based solely on an ``attack'' binary relation on
the set of arguments. Various ways to solve conflicts between contradictory
pieces of information have been proposed in the context of argumentation,
nonmonotonic reasoning or logic programming, and can be captured by appropriate
semantics within Dung's framework. A common feature of these semantics is that
one can always maximize in some sense the set of acceptable arguments. We
propose in this paper to extend Dung's framework in order to allow for the
representation of what we call ``restricted'' arguments: these arguments should
only be used if absolutely necessary, that is, in order to support other
arguments that would otherwise be defeated. We modify Dung's preferred
semantics accordingly: a set of arguments becomes acceptable only if it
contains a minimum of restricted arguments, for a maximum of unrestricted
arguments.
|
cs/0207029
|
Two Representations for Iterative Non-prioritized Change
|
cs.AI
|
We address a general representation problem for belief change, and describe
two interrelated representations for iterative non-prioritized change: a
logical representation in terms of persistent epistemic states, and a
constructive representation in terms of flocks of bases.
|
cs/0207030
|
Collective Argumentation
|
cs.AI
|
An extension of an abstract argumentation framework, called collective
argumentation, is introduced in which the attack relation is defined directly
among sets of arguments. The extension turns out to be suitable, in particular,
for representing semantics of disjunctive logic programs. Two special kinds of
collective argumentation are considered in which the opponents can share their
arguments.
|
cs/0207031
|
Intuitions and the modelling of defeasible reasoning: some case studies
|
cs.AI cs.LO
|
The purpose of this paper is to address some criticisms recently raised by
John Horty in two articles against the validity of two commonly accepted
defeasible reasoning patterns, viz. reinstatement and floating conclusions. I
shall argue that Horty's counterexamples, although they significantly raise our
understanding of these reasoning patterns, do not show their invalidity. Some
of them reflect patterns which, if made explicit in the formalisation, avoid
the unwanted inference without having to give up the criticised inference
principles. Other examples seem to involve hidden assumptions about the
specific problem which, if made explicit, are nothing but extra information
that defeat the defeasible inference. These considerations will be put in a
wider perspective by reflecting on the nature of defeasible reasoning
principles as principles of justified acceptance rather than `real' logical
inference.
|
cs/0207032
|
Alternative Characterizations for Strong Equivalence of Logic Programs
|
cs.AI cs.LO
|
In this work we present additional results related to the property of strong
equivalence of logic programs. This property asserts that two programs share
the same set of stable models, even under the addition of new rules. As shown
in a recent work by Lifschitz, Pearce and Valverde, strong equivalence can be
simply reduced to equivalence in the logic of Here-and-There (HT). In this
paper we provide two alternatives respectively based on classical logic and
3-valued logic. The former is applicable to general rules, but not for nested
expressions, whereas the latter is applicable for nested expressions but, when
moving to an unrestricted syntax, it generally yields different results from
HT.
|
cs/0207033
|
Reducing the Computational Requirements of the Differential Quadrature
Method
|
cs.CE cs.CG
|
This paper shows that the weighting coefficient matrices of the differential
quadrature method (DQM) are centrosymmetric or skew-centrosymmetric if the grid
spacings are symmetric irrespective of whether they are equal or unequal. A new
skew centrosymmetric matrix is also discussed. The application of the
properties of centrosymmetric and skew centrosymmetric matrix can reduce the
computational effort of the DQM for calculations of the inverse, determinant,
eigenvectors and eigenvalues by 75%. This computational advantage is also
demonstrated via several numerical examples.
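The roughly 75% saving follows from the classical splitting of an even-order centrosymmetric matrix: an orthogonal similarity reduces it to two half-size blocks, so O(n^3) eigenvalue work drops to two problems of size n/2, i.e. about a quarter of the cost. A numpy sketch of the decomposition, where the random test matrix is illustrative rather than an actual DQM weighting matrix:

```python
import numpy as np

def split_centrosymmetric(A):
    """Split an even-order centrosymmetric matrix A (J A J = A) into two
    half-size blocks whose eigenvalues together are those of A."""
    m = A.shape[0] // 2
    J = np.fliplr(np.eye(m))
    B, C = A[:m, :m], A[:m, m:]
    return B + C @ J, B - C @ J

# build a centrosymmetric test matrix: A = (M + J M J) / 2
rng = np.random.default_rng(1)
n = 6
Jn = np.fliplr(np.eye(n))
M = rng.standard_normal((n, n))
A = (M + Jn @ M @ Jn) / 2
assert np.allclose(Jn @ A @ Jn, A)  # centrosymmetry holds by construction

P, Q = split_centrosymmetric(A)
eig_split = np.sort_complex(np.concatenate([np.linalg.eigvals(P),
                                            np.linalg.eigvals(Q)]))
eig_full = np.sort_complex(np.linalg.eigvals(A))
print(np.allclose(eig_split, eig_full))  # the spectra agree
```

The same block structure reduces determinant and inverse computations analogously.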
|
cs/0207035
|
A Lyapunov Formulation for Efficient Solution of the Poisson and
Convection-Diffusion Equations by the Differential Quadrature Method
|
cs.CE cs.CG
|
Civan and Sliepcevich [1, 2] suggested that a special matrix solver should be
developed to further reduce the computing effort in applying the differential
quadrature (DQ) method for the Poisson and convection-diffusion equations.
Therefore, the purpose of the present communication is to introduce and apply
the Lyapunov formulation which can be solved much more efficiently than the
Gaussian elimination method. Civan and Sliepcevich [2] first presented DQ
approximate formulas in polynomial form for partial derivatives in
two-dimensional variable domains. To simplify the formulation effort, Chen et
al. [3] proposed the compact matrix form of these DQ approximate formulas. In
this study, by using these matrix approximate formulas, the DQ formulations for
the Poisson and convection-diffusion equations can be expressed as the Lyapunov
algebraic matrix equation. The formulation effort is simplified, and a simple
and explicit matrix formulation is obtained. A variety of fast algorithms in
the solution of the Lyapunov equation [4-6] can be successfully applied in the
DQ analysis of these two-dimensional problems, and, thus, the computing effort
can be greatly reduced. Finally, we also point out that the present reduction
technique can be easily extended to the three-dimensional cases.
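The reduction can be sketched concretely: on a tensor grid, the DQ-discretized Poisson equation takes the matrix form D U + U D^T = F, a Sylvester (Lyapunov-type) equation solvable without forming the large Kronecker system. In the sketch below, a second-order finite-difference matrix stands in for the DQ weighting coefficients and `scipy.linalg.solve_sylvester` plays the role of the fast Lyapunov solver; both stand-ins are assumptions for illustration:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Poisson equation u_xx + u_yy = f on the unit square with homogeneous
# Dirichlet data, written in Sylvester/Lyapunov form  D U + U D^T = F.
n = 30
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
# second-derivative matrix (finite differences, standing in for DQ weights)
D = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

X, Y = np.meshgrid(x, x, indexing='ij')
# manufactured solution u = sin(pi x) sin(pi y), so f = -2 pi^2 u
U_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
F = -2 * np.pi**2 * U_exact

U = solve_sylvester(D, D.T, F)     # solves D U + U D^T = F directly
print(np.max(np.abs(U - U_exact)))  # small discretization error
```

The Sylvester solve works with n-by-n matrices throughout, whereas Gaussian elimination on the equivalent Kronecker form would handle an n^2-by-n^2 system.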
|
cs/0207037
|
Some logics of belief and disbelief
|
cs.AI cs.LO
|
The introduction of explicit notions of rejection, or disbelief, into logics
for knowledge representation can be justified in a number of ways. Motivations
range from the need for versions of negation weaker than classical negation, to
the explicit recording of classic belief contraction operations in the area of
belief change, and the additional levels of expressivity obtained from an
extended version of belief change which includes disbelief contraction. In this
paper we present four logics of disbelief which address some or all of these
intuitions. Soundness and completeness results are supplied and the logics are
compared with respect to applicability and utility.
|
cs/0207038
|
Iterated revision and the axiom of recovery: a unified treatment via
epistemic states
|
cs.AI cs.LO
|
The axiom of recovery, while capturing a central intuition regarding belief
change, has been the source of much controversy. We argue briefly against
putative counterexamples to the axiom--while agreeing that some of their
insight deserves to be preserved--and present additional recovery-like axioms
in a framework that uses epistemic states, which encode preferences, as the
object of revisions. This provides a framework in which iterated revision
becomes possible and makes explicit the connection between iterated belief
change and the axiom of recovery. We provide a representation theorem that
connects the semantic conditions that we impose on iterated revision and the
additional syntactical properties mentioned. We also show some interesting
similarities between our framework and that of Darwiche-Pearl. In particular,
we show that the intuitions underlying the controversial (C2) postulate are
captured by the recovery axiom and our recovery-like postulates (the latter can
be seen as weakenings of (C2)).
|
cs/0207039
|
Dual reciprocity BEM and dynamic programming filter for inverse
elastodynamic problems
|
cs.CE cs.CG
|
This paper presents the first coupling application of the dual reciprocity
BEM (DRBEM) and dynamic programming filter to inverse elastodynamic problem.
The DRBEM is the only BEM method that does not require domain discretization
for general linear and nonlinear dynamic problems. Since the size of numerical
discretization system has a great effect on the computing effort of recursive
or iterative calculations of inverse analysis, the intrinsic boundary-only
merit of the DRBEM causes a considerable computational saving. On the other
hand, the strengths of the dynamic programming filter lie in its mathematical
simplicity, ease of programming, and great flexibility in the type, number and
locations of measurements and unknown inputs. The combination of these two
techniques is therefore very attractive for the solution of practical inverse
problems. In this study, the spatial and temporal partial derivatives of the
governing equation are respectively discretized first by the DRBEM and the
precise integration method, and then, by using dynamic programming with
regularization, dynamic load is estimated based on noisy measurements of
velocity and displacement at very few locations. Numerical experiments involving
periodic and Heaviside impact loads are conducted to demonstrate the
applicability, efficiency and simplicity of this strategy. The effect of noise
level, regularization parameter, and measurement type on the estimation is
also investigated.
|
cs/0207040
|
Well-Founded Argumentation Semantics for Extended Logic Programming
|
cs.LO cs.AI
|
This paper defines an argumentation semantics for extended logic programming
and shows its equivalence to the well-founded semantics with explicit negation.
We set up a general framework in which we extensively compare this semantics to
other argumentation semantics, including those of Dung, and Prakken and Sartor.
We present a general dialectical proof theory for these argumentation
semantics.
|
cs/0207041
|
RBF-based meshless boundary knot method and boundary particle method
|
cs.CE cs.CG
|
This paper is concerned with the two new boundary-type radial basis function
collocation schemes, boundary knot method (BKM) and boundary particle method
(BPM). The BKM is developed based on the dual reciprocity theorem, while the
BPM employs the multiple reciprocity technique. Unlike the method of
fundamental solution, the two methods use nonsingular general solutions
instead of the singular fundamental solution to circumvent the controversial
artificial boundary outside the physical domain. Compared with the boundary
element method, both the BKM and BPM are meshfree, superconvergent,
integration-free, symmetric, and mathematically simple collocation techniques
for general PDEs. In particular, the BPM does not require any inner nodes for
inhomogeneous problems. In this study, the accuracy and efficiency of the two
methods are numerically demonstrated to some 2D, 3D Helmholtz and
convection-diffusion problems under complicated geometries.
|
cs/0207042
|
Logic Programming with Ordered Disjunction
|
cs.AI
|
Logic programs with ordered disjunction (LPODs) combine ideas underlying
Qualitative Choice Logic (Brewka et al. KR 2002) and answer set programming.
Logic programming under answer set semantics is extended with a new connective
called ordered disjunction. The new connective allows us to represent
alternative, ranked options for problem solutions in the heads of rules: A
\times B intuitively means: if possible A, but if A is not possible then at
least B. The semantics of logic programs with ordered disjunction is based on a
preference relation on answer sets. LPODs are useful for applications in design
and configuration, and can serve as a basis for qualitative decision making.
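The preference relation on answer sets can be illustrated through satisfaction degrees: a rule with head A x B is satisfied to degree 1 if A holds, degree 2 if only B holds, and so on, and preferred answer sets minimize these degrees. A sketch of the degree function only, not a full LPOD solver; the hotel/camping rule is an illustrative example:

```python
def degree(rule, answer_set):
    """Satisfaction degree of an ordered-disjunction rule in a candidate
    answer set: 1 if the body fails, otherwise the position of the first
    head option contained in the set (sketch of the LPOD semantics)."""
    head, body = rule  # head: tuple of ranked options, body: set of atoms
    if not body <= answer_set:
        return 1
    for i, option in enumerate(head, start=1):
        if option in answer_set:
            return i
    return len(head) + 1  # rule not satisfied at all

# hotel x camping :- travel.   (prefer a hotel, settle for camping)
rule = (('hotel', 'camping'), {'travel'})
s1 = {'travel', 'hotel'}
s2 = {'travel', 'camping'}
print(degree(rule, s1), degree(rule, s2))  # 1 2: s1 is preferred
```

Comparing degree vectors across all rules yields the preference order on answer sets described in the abstract.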
|
cs/0207043
|
A meshless, integration-free, and boundary-only RBF technique
|
cs.CE cs.CG
|
Based on the radial basis function (RBF), non-singular general solution and
dual reciprocity method (DRM), this paper presents an inherently meshless,
integration-free, boundary-only RBF collocation technique for the numerical
solution of various partial differential equation systems. The basic ideas
behind this methodology are very mathematically simple. In this study, the RBFs
are employed to approximate the inhomogeneous terms via the DRM, while
non-singular general solution leads to a boundary-only RBF formulation for
homogeneous solution. The present scheme is named the boundary knot method
(BKM) to differentiate it from the other numerical techniques. In particular,
due to the use of nonsingular general solutions rather than singular
fundamental solutions, the BKM is different from the method of fundamental
solution in that the former does not require the artificial boundary and results
in the symmetric system equations under certain conditions. The efficiency and
utility of this new technique are validated through a number of typical
numerical examples. The completeness concern of the BKM, arising from its use of
only the non-singular part of the complete fundamental solution, is also discussed.
|
cs/0207045
|
Compilation of Propositional Weighted Bases
|
cs.AI
|
In this paper, we investigate the extent to which knowledge compilation can
be used to improve inference from propositional weighted bases. We present a
general notion of compilation of a weighted base that is parametrized by any
equivalence--preserving compilation function. Both negative and positive
results are presented. On the one hand, complexity results are identified,
showing that the inference problem from a compiled weighted base is as
difficult as in the general case, when the prime implicates, Horn cover or
renamable Horn cover classes are targeted. On the other hand, we show that the
inference problem becomes tractable whenever DNNF-compilations are used and
clausal queries are considered. Moreover, we show that the set of all preferred
models of a DNNF-compilation of a weighted base can be computed in time
polynomial in the output size. Finally, we sketch how our results can be used
in model-based diagnosis in order to compute the most probable diagnoses of a
system.
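As background for what the compilation targets, the preferred models of a weighted base are the interpretations minimizing the total weight of violated formulas. The brute-force definition, whose exponential cost is what DNNF compilation is meant to beat for clausal queries, can be sketched as follows (the three-formula base is an illustrative example):

```python
from itertools import product

def preferred_models(weighted_base, symbols):
    """Brute-force the preferred models of a propositional weighted base:
    the interpretations minimizing the total weight of violated formulas."""
    def cost(v):
        return sum(w for formula, w in weighted_base if not formula(v))
    models = [dict(zip(symbols, bits))
              for bits in product([False, True], repeat=len(symbols))]
    best = min(cost(v) for v in models)
    return [v for v in models if cost(v) == best]

# an inconsistent base: every model violates something, so the preferred
# models violate only the cheapest formula
base = [(lambda v: v['a'], 3),                # a            (weight 3)
        (lambda v: not v['a'] or v['b'], 2),  # a -> b       (weight 2)
        (lambda v: not v['b'], 1)]            # not b        (weight 1)
print(preferred_models(base, ['a', 'b']))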
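As background for what the compilation targets, the preferred models of a weighted base are the interpretations minimizing the total weight of violated formulas. The brute-force definition, whose exponential cost is what DNNF compilation is meant to beat for clausal queries, can be sketched as follows (the three-formula base is an illustrative example):

```python
from itertools import product

def preferred_models(weighted_base, symbols):
    """Brute-force the preferred models of a propositional weighted base:
    the interpretations minimizing the total weight of violated formulas."""
    def cost(v):
        return sum(w for formula, w in weighted_base if not formula(v))
    models = [dict(zip(symbols, bits))
              for bits in product([False, True], repeat=len(symbols))]
    best = min(cost(v) for v in models)
    return [v for v in models if cost(v) == best]

# an inconsistent base: every model violates something, so the preferred
# models violate only the cheapest formula
base = [(lambda v: v['a'], 3),                # a            (weight 3)
        (lambda v: not v['a'] or v['b'], 2),  # a -> b       (weight 2)
        (lambda v: not v['b'], 1)]            # not b        (weight 1)
print(preferred_models(base, ['a', 'b']))  # only {a: True, b: True}
```

Compilation precomputes a structure from which such minimal-cost models can be read off in output-polynomial time instead of enumerated from scratch.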
|
cs/0207055
|
The Rise and Fall of the Church-Turing Thesis
|
cs.CC cs.AI
|
The essay consists of three parts. In the first part, it is explained how the
theory of algorithms and computations evaluates the contemporary situation with
computers and global networks. In the second part, it is demonstrated what new
perspectives this theory opens through its new direction that is called theory
of super-recursive algorithms. These algorithms have much higher computing
power than conventional algorithmic schemes. In the third part, we explicate
how realization of what this theory suggests might influence life of people in
future. It is demonstrated that the theory is now far ahead of computing practice
and practice has to catch up with the theory. We conclude with a comparison of
different approaches to the development of information technology.
|
cs/0207056
|
Modeling Complex Domains of Actions and Change
|
cs.AI
|
This paper studies the problem of modeling complex domains of actions and
change within high-level action description languages. We investigate two main
issues of concern: (a) can we represent complex domains that capture together
different problems such as ramifications, non-determinism and concurrency of
actions, at a high-level, close to the given natural ontology of the problem
domain and (b) what features of such a representation can affect, and how, its
computational behaviour. The paper describes the main problems faced in this
representation task and presents the results of an empirical study, carried out
through a series of controlled experiments, to analyze the computational
performance of reasoning in these representations. The experiments compare
different representations obtained, for example, by changing the basic ontology
of the domain or by varying the degree of use of indirect effect laws through
domain constraints. This study has helped to expose the main sources of
computational difficulty in the reasoning and suggest some methodological
guidelines for representing complex domains. Although our work has been carried
out within one particular high-level description language, we believe that the
results, especially those that relate to the problems of representation, are
independent of the specific modeling language.
|
cs/0207058
|
Question Answering over Unstructured Data without Domain Restrictions
|
cs.CL cs.IR
|
Information needs are naturally represented as questions. Automatic
Natural-Language Question Answering (NLQA) has only recently become a practical
task on a larger scale and without domain constraints.
This paper gives a brief introduction to the field, its history and the
impact of systematic evaluation competitions.
It is then demonstrated that an NLQA system for English can be built and
evaluated in a very short time using off-the-shelf parsers and thesauri. The
system is based on Robust Minimal Recursion Semantics (RMRS) and is portable
with respect to the parser used as a frontend. It applies atomic term
unification supported by question classification and WordNet lookup for
semantic similarity matching of parsed question representation and free text.
|
cs/0207059
|
Value Based Argumentation Frameworks
|
cs.AI
|
This paper introduces the notion of value-based argumentation frameworks, an
extension of the standard argumentation frameworks proposed by Dung, which are
able to show how rational decision is possible in cases where arguments derive
their force from the social values their acceptance would promote.
|
cs/0207060
|
Preferred well-founded semantics for logic programming by alternating
fixpoints: Preliminary report
|
cs.AI
|
We analyze the problem of defining well-founded semantics for ordered logic
programs within a general framework based on alternating fixpoint theory. We
start by showing that generalizations of existing answer set approaches to
preference are too weak in the setting of well-founded semantics. We then
specify some informal yet intuitive criteria and propose a semantical framework
for preference handling that is more suitable for defining well-founded
semantics for ordered logic programs. The suitability of the new approach is
confirmed by the fact that many attractive properties are satisfied by our
semantics. In particular, our semantics is still correct with respect to
various existing answer sets semantics while it successfully overcomes the
weakness of their generalization to well-founded semantics. Finally, we
indicate how an existing preferred well-founded semantics can be captured
within our semantical framework.
|
cs/0207062
|
Some addenda on distance function wavelets
|
cs.NA cs.CE
|
This report will add some supplements to the recently finished report series
on the distance function wavelets (DFW). First, we define the general distance
in terms of the Riesz potential, and then, the distance function Abel wavelets
are derived via the fractional integral and Laplacian. Second, the DFW Weyl
transform is found to be a shifted Laplace potential DFW. The DFW Radon
transform is also presented. Third, we present a conjecture on truncation error
formula of the multiple reciprocity Laplace DFW series and discuss its error
distributions in terms of node density distributions. Fourth, we point out that
the Hermite distance function interpolation can be used to replace overlapping
in the domain decomposition in order to produce sparse matrix. Fifth, the shape
parameter is explained as a virtual extra axis contribution in terms of the
MQ-type Poisson kernel. The report is concluded with some remarks on a range of
other issues.
|
cs/0207064
|
Interpolation Theorems for Nonmonotonic Reasoning Systems
|
cs.AI cs.LO
|
Craig's interpolation theorem (Craig 1957) is an important theorem known for
propositional logic and first-order logic. It says that if a logical formula
$\beta$ logically follows from a formula $\alpha$, then there is a formula
$\gamma$, including only symbols that appear in both $\alpha,\beta$, such that
$\beta$ logically follows from $\gamma$ and $\gamma$ logically follows from
$\alpha$. Such theorems are important and useful for understanding those logics
in which they hold as well as for speeding up reasoning with theories in those
logics. In this paper we present interpolation theorems in this spirit for
three nonmonotonic systems: circumscription, default logic and logic programs
with the stable models semantics (a.k.a. answer set semantics). These results
give us better understanding of those logics, especially in contrast to their
nonmonotonic characteristics. They suggest that some \emph{monotonicity}
principle holds despite the failure of classic monotonicity for these logics.
Also, they sometimes allow us to use methods for the decomposition of reasoning
for these systems, possibly increasing their applicability and tractability.
Finally, they allow us to build structured representations that use those
logics.
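The classical statement can be checked on a toy instance: with alpha = p and q, and beta = p or r, the formula gamma = p mentions only the shared symbol and satisfies both alpha |= gamma and gamma |= beta. A brute-force propositional check; the formulas are an illustrative example, not drawn from the paper:

```python
from itertools import product

def entails(premise, conclusion, symbols):
    """Brute-force propositional entailment over the listed symbols."""
    valuations = (dict(zip(symbols, bits))
                  for bits in product([False, True], repeat=len(symbols)))
    return all(conclusion(v) for v in valuations if premise(v))

syms = ['p', 'q', 'r']
alpha = lambda v: v['p'] and v['q']   # mentions p, q
beta  = lambda v: v['p'] or v['r']    # mentions p, r
gamma = lambda v: v['p']              # interpolant: shared symbol p only

print(entails(alpha, gamma, syms), entails(gamma, beta, syms))  # True True
```

The paper's contribution is establishing analogues of this property for circumscription, default logic and stable-model programs, where monotonic entailment is not available.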
|
cs/0207065
|
Embedding Default Logic in Propositional Argumentation Systems
|
cs.AI
|
In this paper we present a transformation of finite propositional default
theories into so-called propositional argumentation systems. This
transformation allows one to characterize all notions of Reiter's default logic in
the framework of argumentation systems. As a consequence, computing extensions,
or determining whether a given formula belongs to one extension or all
extensions can be answered without leaving the field of classical propositional
logic. The transformation proposed is linear in the number of defaults.
|
cs/0207067
|
On the existence and multiplicity of extensions in dialectical
argumentation
|
cs.AI
|
In the present paper, the existence and multiplicity problems of extensions
are addressed. The focus is on extensions of the stable type. The main result of
the paper is an elegant characterization of the existence and multiplicity of
extensions in terms of the notion of dialectical justification, a close cousin
of the notion of admissibility. The characterization is given in the context of
the particular logic for dialectical argumentation DEFLOG. The results are of
direct relevance for several well-established models of defeasible reasoning
(like default logic, logic programming and argumentation frameworks), since
elsewhere dialectical argumentation has been shown to have close formal
connections with these models.
|
cs/0207070
|
A continuation semantics of interrogatives that accounts for Baker's
ambiguity
|
cs.CL cs.PL
|
Wh-phrases in English can appear both raised and in-situ. However, only
in-situ wh-phrases can take semantic scope beyond the immediately enclosing
clause. I present a denotational semantics of interrogatives that naturally
accounts for these two properties. It invokes neither movement nor economy, and
posits no lexical ambiguity between raised and in-situ occurrences of the same
wh-phrase. My analysis is based on the concept of continuations. It uses a
novel type system for higher-order continuations to handle wide-scope
wh-phrases while remaining strictly compositional. This treatment sheds light
on the combinatorics of interrogatives as well as other kinds of so-called
A'-movement.
|
cs/0207071
|
A Polynomial Translation of Logic Programs with Nested Expressions into
Disjunctive Logic Programs: Preliminary Report
|
cs.AI cs.LO
|
Nested logic programs have recently been introduced in order to allow for
arbitrarily nested formulas in the heads and the bodies of logic program rules
under the answer sets semantics. Nested expressions can be formed using
conjunction, disjunction, as well as the negation as failure operator in an
unrestricted fashion. This provides a very flexible and compact framework for
knowledge representation and reasoning. Previous results show that nested logic
programs can be transformed into standard (unnested) disjunctive logic programs
in an elementary way, applying the negation as failure operator to body
literals only. This is of great practical relevance since it allows us to
evaluate nested logic programs by means of off-the-shelf disjunctive logic
programming systems, like DLV. However, it turns out that this straightforward
transformation results in an exponential blow-up in the worst-case, despite the
fact that complexity results indicate that there is a polynomial translation
among both formalisms. In this paper, we take up this challenge and provide a
polynomial translation of logic programs with nested expressions into
disjunctive logic programs. Moreover, we show that this translation is modular
and (strongly) faithful. We have implemented both the straightforward as well
as our advanced transformation; the resulting compiler serves as a front-end to
DLV and is publicly available on the Web.
|
cs/0207072
|
Complexity of Nested Circumscription and Nested Abnormality Theories
|
cs.AI cs.CC cs.LO
|
The need for a circumscriptive formalism that allows for simple yet elegant
modular problem representation has led Lifschitz (AIJ, 1995) to introduce
nested abnormality theories (NATs) as a tool for modular knowledge
representation, tailored for applying circumscription to minimize exceptional
circumstances. Abstracting from this particular objective, we propose L_{CIRC},
which is an extension of generic propositional circumscription by allowing
propositional combinations and nesting of circumscriptive theories. As shown,
NATs are naturally embedded into this language, and are in fact of equal
expressive capability. We then analyze the complexity of L_{CIRC} and NATs, and
in particular the effect of nesting. The latter is found to be a source of
complexity, which climbs the Polynomial Hierarchy as the nesting depth
increases and reaches PSPACE-completeness in the general case. We also identify
meaningful syntactic fragments of NATs which have lower complexity. In
particular, we show that the generalization of Horn circumscription in the NAT
framework remains CONP-complete, and that Horn NATs without fixed letters can
be efficiently transformed into an equivalent Horn CNF, which implies
polynomial solvability of principal reasoning tasks. Finally, we also study
extensions of NATs and briefly address the complexity in the first-order case.
Our results give insight into the ``cost'' of using L_{CIRC} (resp. NATs) as a
host language for expressing other formalisms such as action theories,
narratives, or spatial theories.
|
cs/0207073
|
Reinforcing Reachable Routes
|
cs.NI cs.AI
|
This paper studies the evaluation of routing algorithms from the perspective
of reachability routing, where the goal is to determine all paths between a
sender and a receiver. Reachability routing is becoming relevant with the
changing dynamics of the Internet and the emergence of low-bandwidth
wireless/ad-hoc networks. We make the case for reinforcement learning as the
framework of choice to realize reachability routing, within the confines of the
current Internet infrastructure. The setting of the reinforcement learning
problem offers several advantages, including loop resolution, multi-path
forwarding capability, cost-sensitive routing, and minimizing state overhead,
while maintaining the incremental spirit of current backbone routing
algorithms. We identify research issues in reinforcement learning applied to
the reachability routing problem to achieve a fluid and robust backbone routing
framework. The paper is targeted toward practitioners seeking to implement a
reachability routing algorithm.
|
cs/0207075
|
Nonmonotonic Probabilistic Logics between Model-Theoretic Probabilistic
Logic and Probabilistic Logic under Coherence
|
cs.AI
|
Recently, it has been shown that probabilistic entailment under coherence is
weaker than model-theoretic probabilistic entailment. Moreover, probabilistic
entailment under coherence is a generalization of default entailment in System
P. In this paper, we continue this line of research by presenting probabilistic
generalizations of more sophisticated notions of classical default entailment
that lie between model-theoretic probabilistic entailment and probabilistic
entailment under coherence. That is, the new formalisms properly generalize
their counterparts in classical default reasoning, they are weaker than
model-theoretic probabilistic entailment, and they are stronger than
probabilistic entailment under coherence. The new formalisms are useful
especially for handling probabilistic inconsistencies related to conditioning
on zero events. They can also be applied for probabilistic belief revision.
More generally, in the same spirit as a similar previous paper, this paper
sheds light on exciting new formalisms for probabilistic reasoning beyond the
well-known standard ones.
|
cs/0207076
|
Introducing Dynamic Behavior in Amalgamated Knowledge Bases
|
cs.PL cs.DB cs.LO
|
The problem of integrating knowledge from multiple and heterogeneous sources
is a fundamental issue in current information systems. In order to cope with
this problem, the concept of mediator has been introduced as a software
component providing intermediate services, linking data resources and
application programs, and making transparent the heterogeneity of the
underlying systems. In designing a mediator architecture, we believe that an
important aspect is the definition of a formal framework by which one is able
to model integration according to a declarative style. To this purpose, the use
of a logical approach seems very promising. Another important aspect is the
ability to model both static integration aspects, concerning query execution,
and dynamic ones, concerning data updates and their propagation among the
various data sources. Unfortunately, as far as we know, no formal proposals for
logically modeling mediator architectures both from a static and dynamic point
of view have already been developed. In this paper, we extend the framework for
amalgamated knowledge bases, presented by Subrahmanian, to deal with dynamic
aspects. The language we propose is based on the Active U-Datalog language, and
extends it with annotated logic and amalgamation concepts. We model the sources
of information and the mediator (also called supervisor) as Active U-Datalog
deductive databases, thus modeling queries, transactions, and active rules,
interpreted according to the PARK semantics. By using active rules, the system
can efficiently perform update propagation among different databases. The
result is a logical environment, integrating active and deductive rules, to
perform queries and update propagation in a heterogeneous mediated framework.
|
cs/0207083
|
Evaluating Defaults
|
cs.AI
|
We seek to find normative criteria of adequacy for nonmonotonic logic similar
to the criterion of validity for deductive logic. Rather than stipulating that
the conclusion of an inference be true in all models in which the premises are
true, we require that the conclusion of a nonmonotonic inference be true in
``almost all'' models of a certain sort in which the premises are true. This
``certain sort'' specification picks out the models that are relevant to the
inference, taking into account factors such as specificity and vagueness, and
previous inferences. The frequencies characterizing the relevant models reflect
known frequencies in our actual world. The criteria of adequacy for a default
inference can be extended by thresholding to criteria of adequacy for an
extension. We show that this avoids the implausibilities that might otherwise
result from the chaining of default inferences. The model proportions, when
construed in terms of frequencies, provide a verifiable grounding of default
rules, and can become the basis for generating default rules from statistics.
|
cs/0207085
|
Repairing Inconsistent Databases: A Model-Theoretic Approach and
Abductive Reasoning
|
cs.LO cs.DB
|
In this paper we consider two points of views to the problem of coherent
integration of distributed data. First we give a pure model-theoretic analysis
of the possible ways to `repair' a database. We do so by characterizing the
possibilities to `recover' consistent data from an inconsistent database in
terms of those models of the database that exhibit as minimal inconsistent
information as reasonably possible. Then we introduce an abductive application
to restore the consistency of a given database. This application is based on an
abductive solver (A-system) that implements an SLDNFA-resolution procedure, and
computes a list of data-facts that should be inserted to the database or
retracted from it in order to keep the database consistent. The two approaches
for coherent data integration are related by soundness and completeness
results.
|
cs/0207088
|
A Paraconsistent Higher Order Logic
|
cs.LO cs.AI
|
Classical logic predicts that everything (thus nothing useful at all) follows
from inconsistency. A paraconsistent logic is a logic where an inconsistency
does not lead to such an explosion, and since in practice consistency is
difficult to achieve there are many potential applications of paraconsistent
logics in knowledge-based systems, logical semantics of natural language, etc.
Higher order logics have the advantages of being expressive and with several
automated theorem provers available. Also the type system can be helpful. We
present a concise description of a paraconsistent higher order logic with
countably infinite indeterminacy, where each basic formula can get its own
indeterminate truth value (or as we prefer: truth code). The meaning of the
logical operators is new and rather different from traditional many-valued
logics as well as from logics based on bilattices. The adequacy of the logic is
examined by a case study in the domain of medicine. Thus we try to build a
bridge between the HOL and MVL communities. A sequent calculus is proposed
based on recent work by Muskens.
|
cs/0207093
|
Preference Queries
|
cs.DB
|
The handling of user preferences is becoming an increasingly important issue
in present-day information systems. Among others, preferences are used for
information filtering and extraction to reduce the volume of data presented to
the user. They are also used to keep track of user profiles and formulate
policies to improve and automate decision making.
We propose here a simple, logical framework for formulating preferences as
preference formulas. The framework does not impose any restrictions on the
preference relations and allows arbitrary operation and predicate signatures in
preference formulas. It also makes the composition of preference relations
straightforward. We propose a simple, natural embedding of preference formulas
into relational algebra (and SQL) through a single winnow operator
parameterized by a preference formula. The embedding makes possible the
formulation of complex preference queries, e.g., involving aggregation, by
piggybacking on existing SQL constructs. It also leads in a natural way to the
definition of further, preference-related concepts like ranking. Finally, we
present general algebraic laws governing the winnow operator and its
interaction with other relational algebra operators. The preconditions on the
applicability of the laws are captured by logical formulas. The laws provide a
formal foundation for the algebraic optimization of preference queries. We
demonstrate the usefulness of our approach through numerous examples.
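The winnow idea can be sketched in a few lines. The code below is our own minimal illustration, not the paper's relational-algebra embedding: `winnow` keeps the tuples not dominated by any other tuple under a user-supplied preference relation, and the book data and `prefers` function are invented for the example.

```python
# Illustrative sketch (not the paper's implementation): the winnow operator
# selects the tuples of a relation that are not dominated by any other tuple
# according to a given preference relation.

def winnow(relation, prefers):
    """Return tuples t such that no other tuple is preferred to t."""
    return [t for t in relation
            if not any(prefers(t2, t) for t2 in relation)]

# Hypothetical example: for the same title, prefer a cheaper book, and
# among equally cheap books prefer a newer edition.
books = [
    ("TCP/IP Illustrated", 65, 1994),
    ("TCP/IP Illustrated", 45, 2011),
    ("Unix Network Programming", 45, 2003),
]

def prefers(a, b):
    # a is preferred to b when the titles match and a is cheaper,
    # or equally cheap but more recent.
    return a[0] == b[0] and (a[1] < b[1] or (a[1] == b[1] and a[2] > b[2]))

print(winnow(books, prefers))
# [('TCP/IP Illustrated', 45, 2011), ('Unix Network Programming', 45, 2003)]
```

Composing preference relations then amounts to composing such `prefers` functions before winnowing.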
|
cs/0207094
|
Answer Sets for Consistent Query Answering in Inconsistent Databases
|
cs.DB
|
A relational database is inconsistent if it does not satisfy a given set of
integrity constraints. Nevertheless, it is likely that most of the data in it
is consistent with the constraints. In this paper we apply logic programming
based on answer sets to the problem of retrieving consistent information from a
possibly inconsistent database. Since consistent information persists from the
original database to each of its minimal repairs, the approach is based on a
specification of database repairs using disjunctive logic programs with
exceptions, whose answer set semantics can be represented and computed by
systems that implement stable model semantics. These programs allow us to
declare persistence by defaults and repairing changes by exceptions. We
concentrate mainly on logic programs for binary integrity constraints, among
which we find most of the integrity constraints found in practice.
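To make the repair idea concrete, here is a toy enumeration in plain Python rather than in answer-set programming (our own simplification, not the paper's logic-program encoding): for a single denial constraint "no x is both P and Q", each minimal repair deletes one side of every violating pair, and a fact is a consistent answer iff it survives every repair. The facts and constraint are invented for the example.

```python
# Toy consistent query answering for one denial constraint: no x may be
# both P and Q. Each repair removes one fact from every violating pair;
# the consistent answers are the facts common to all repairs.
from itertools import product

db = {("P", "a"), ("Q", "a"), ("P", "b")}
conflicts = [(("P", x), ("Q", x))
             for (r, x) in db if r == "P" and ("Q", x) in db]

# one repair per way of choosing a fact to delete from each conflict
repairs = [db - set(choice)
           for choice in product(*[[f, g] for f, g in conflicts])]
consistent_answers = set.intersection(*repairs) if repairs else db
print(sorted(consistent_answers))  # [('P', 'b')] -- only the unconflicted fact
```

In the paper this enumeration is implicit: the disjunctive logic program's answer sets play the role of the `repairs` list.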
|
cs/0207097
|
Optimal Ordered Problem Solver
|
cs.AI cs.CC cs.LG
|
We present a novel, general, optimally fast, incremental way of searching for
a universal algorithm that solves each task in a sequence of tasks. The Optimal
Ordered Problem Solver (OOPS) continually organizes and exploits previously
found solutions to earlier tasks, efficiently searching not only the space of
domain-specific algorithms, but also the space of search algorithms.
Essentially we extend the principles of optimal nonincremental universal search
to build an incremental universal learner that is able to improve itself
through experience. In illustrative experiments, our self-improver becomes the
first general system that learns to solve all n disk Towers of Hanoi tasks
(solution size 2^n-1) for n up to 30, profiting from previously solved, simpler
tasks involving samples of a simple context free language.
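For readers unfamiliar with the benchmark, the 2^n-1 move count comes from the standard recursive decomposition of the Towers of Hanoi. The sketch below is only that classic solver, shown to illustrate the task family OOPS learns to solve; it is not OOPS itself, which searches for such solvers incrementally.

```python
# Plain recursive Towers of Hanoi: move n disks from src to dst via aux.
# The recurrence moves(n) = 2*moves(n-1) + 1 gives 2**n - 1 moves total.

def hanoi(n, src, aux, dst, moves):
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # clear the n-1 smaller disks
    moves.append((src, dst))             # move the largest remaining disk
    hanoi(n - 1, aux, src, dst, moves)   # restack the smaller disks on top

moves = []
hanoi(10, "A", "B", "C", moves)
print(len(moves))  # 2**10 - 1 = 1023
```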
|
cs/0208005
|
Probabilistic Search for Object Segmentation and Recognition
|
cs.CV
|
The problem of searching for a model-based scene interpretation is analyzed
within a probabilistic framework. Object models are formulated as generative
models for range data of the scene. A new statistical criterion, the truncated
object probability, is introduced to infer an optimal sequence of object
hypotheses to be evaluated for their match to the data. The truncated
probability is partly determined by prior knowledge of the objects and partly
learned from data. Some experiments on sequence quality and object segmentation
and recognition from stereo data are presented. The article recovers classic
concepts from object recognition (grouping, geometric hashing, alignment) from
the probabilistic perspective and adds insight into the optimal ordering of
object hypotheses for evaluation. Moreover, it introduces point-relation
densities, a key component of the truncated probability, as statistical models
of local surface shape.
|
cs/0208008
|
Soft Concurrent Constraint Programming
|
cs.PL cs.AI
|
Soft constraints extend classical constraints to represent multiple
consistency levels, and thus provide a way to express preferences, fuzziness,
and uncertainty. While there are many soft constraint solving formalisms, even
distributed ones, by now there seems to be no concurrent programming framework
where soft constraints can be handled. In this paper we show how the classical
concurrent constraint (cc) programming framework can work with soft
constraints, and we also propose an extension of cc languages which can use
soft constraints to prune and direct the search for a solution. We believe that
this new programming paradigm, called soft cc (scc), can be also very useful in
many web-related scenarios. In fact, the language level allows web agents to
express their interaction and negotiation protocols, and also to post their
requests in terms of preferences, and the underlying soft constraint solver can
find an agreement among the agents even if their requests are incompatible.
|
cs/0208009
|
Offline Specialisation in Prolog Using a Hand-Written Compiler Generator
|
cs.PL cs.AI
|
The so called ``cogen approach'' to program specialisation, writing a
compiler generator instead of a specialiser, has been used with considerable
success in partial evaluation of both functional and imperative languages. This
paper demonstrates that the cogen approach is also applicable to the
specialisation of logic programs (also called partial deduction) and leads to
effective specialisers. Moreover, using good binding-time annotations, the
speed-ups of the specialised programs are comparable to the speed-ups obtained
with online specialisers. The paper first develops a generic approach to
offline partial deduction and then a specific offline partial deduction method,
leading to the offline system LIX for pure logic programs. While this is a
usable specialiser by itself, it is used to develop the cogen system LOGEN.
Given a program, a specification of what inputs will be static, and an
annotation specifying which calls should be unfolded, LOGEN generates a
specialised specialiser for the program at hand. Running this specialiser with
particular values for the static inputs results in the specialised program.
While this requires two steps instead of one, the efficiency of the
specialisation process is improved in situations where the same program is
specialised multiple times. The paper also presents and evaluates an automatic
binding-time analysis that is able to derive the annotations. While the derived
annotations are still suboptimal compared to hand-crafted ones, they enable
non-expert users to use the LOGEN system in a fully automated way. Finally,
LOGEN is extended so as to directly support a large part of Prolog's
declarative and non-declarative features and so as to be able to perform so
called mixline specialisations.
|
cs/0208010
|
TerraService.NET: An Introduction to Web Services
|
cs.DL cs.DB
|
This article explores the design and construction of a geo-spatial Internet
web service application from the host web site perspective and from the
perspective of an application using the web service. The TerraService.NET web
service was added to the popular TerraServer database and web site with no
major structural changes to the database. The article discusses web service
design, implementation, and deployment concepts and design guidelines. Web
services enable applications that aggregate and interact with information and
resources from Internet-scale distributed servers. The article presents the
design of two USDA applications that interoperate with database and web service
resources in Fort Collins Colorado and the TerraService web service located in
Tukwila Washington.
|
cs/0208013
|
Petabyte Scale Data Mining: Dream or Reality?
|
cs.DB cs.CE
|
Science is becoming very data intensive. Today's astronomy datasets with
tens of millions of galaxies already present substantial challenges for data
mining. In less than 10 years the catalogs are expected to grow to billions of
objects, and image archives will reach Petabytes. Imagine having a 100GB
database in 1996, when disk scanning speeds were 30MB/s, and database tools
were immature. Today such a task is trivial, manageable even with a laptop.
We think that the issue of a PB database will be very similar in six years. In
this paper we scale our current experiments in data archiving and analysis on
the Sloan Digital Sky Survey data six years into the future. We analyze
these projections and look at the requirements of performing data mining on
such data sets. We conclude that the task scales rather well: we could do the
job today, although it would be expensive. There do not seem to be any
show-stoppers that would prevent us from storing and using a Petabyte dataset
six years from today.
|
cs/0208015
|
Spatial Clustering of Galaxies in Large Datasets
|
cs.DB cs.DS
|
Datasets with tens of millions of galaxies present new challenges for the
analysis of spatial clustering. We have built a framework that integrates a
database of object catalogs, tools for creating masks of bad regions, and a
fast (NlogN) correlation code. This system has enabled unprecedented efficiency
in carrying out the analysis of galaxy clustering in the SDSS catalog. A
similar approach is used to compute the three-dimensional spatial clustering of
galaxies on very large scales. We describe our strategy to estimate the effect
of photometric errors using a database. We discuss our efforts as an early
example of data-intensive science. While it would have been possible to get
these results without the framework we describe, it will be infeasible to
perform these computations on the future huge datasets without using this
framework.
|
cs/0208016
|
A note on fractional derivative modeling of broadband
frequency-dependent absorption: Model III
|
cs.CE cs.CC
|
So far, the fractional derivative model has mainly been applied to the
modelling of complicated solid viscoelastic materials. In this study, we try to
build a fractional derivative PDE model for broadband ultrasound propagation
through human tissues.
|
cs/0208017
|
Linking Makinson and Kraus-Lehmann-Magidor preferential entailments
|
cs.AI
|
About ten years ago, various notions of preferential entailment have been
introduced. The main reference is a paper by Kraus, Lehmann and Magidor (KLM),
one of the main competitor being a more general version defined by Makinson
(MAK). These two versions have already been compared, but it is time to revisit
these comparisons. Here are our three main results: (1) These two notions are
equivalent, provided that we restrict our attention, as done in KLM, to the
cases where the entailment respects logical equivalence (on the left and on the
right). (2) A serious simplification of the description of the fundamental
cases in which MAK is equivalent to KLM, including a natural passage in both
ways. (3) The two previous results are given for preferential entailments more
general than considered in some of the original texts, but they apply also to
the original definitions and, for this particular case also, the models can be
simplified.
|
cs/0208019
|
Knowledge Representation
|
cs.AI
|
This work analyses main features that should be present in knowledge
representation. It suggests a model for representation and a way to implement
this model in software. Representation takes care of both low-level sensor
information and high-level concepts.
|
cs/0208020
|
Using the DIFF Command for Natural Language Processing
|
cs.CL
|
Diff is a software program that detects differences between two data sets and
is useful in natural language processing. This paper shows several examples of
the application of diff. They include the detection of differences between two
different datasets, extraction of rewriting rules, merging of two different
datasets, and the optimal matching of two different data sets. Since diff comes
with any standard UNIX system, it is readily available and very easy to use.
Our studies showed that diff is a practical tool for research into natural
language processing.
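As a rough analogue of the UNIX command, the same kind of comparison can be reproduced with Python's difflib. The sketch below is our own illustration with invented example sentences: it extracts a rewriting-rule candidate from the replaced span between two versions of a sentence, one of the applications listed above.

```python
# Compare two versions of a sentence token by token (difflib plays the
# role of diff) and extract "rewriting rule" candidates from spans that
# were replaced.
import difflib

before = "the results was quite surprising".split()
after = "the results were quite surprising".split()

matcher = difflib.SequenceMatcher(a=before, b=after)
rules = [(before[i1:i2], after[j1:j2])
         for op, i1, i2, j1, j2 in matcher.get_opcodes()
         if op == "replace"]
print(rules)  # [(['was'], ['were'])]
```

Running this over many sentence pairs yields a frequency table of candidate rewriting rules.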
|
cs/0208022
|
Symbolic Methodology in Numeric Data Mining: Relational Techniques for
Financial Applications
|
cs.CE
|
Currently statistical and artificial neural network methods dominate in
financial data mining. Alternative relational (symbolic) data mining methods
have shown their effectiveness in robotics, drug design and other applications.
Traditionally symbolic methods prevail in the areas with significant
non-numeric (symbolic) knowledge, such as relative location in robot
navigation. At first glance, stock market forecasting looks like a purely
numeric area
irrelevant to symbolic methods. One of our major goals is to show that
financial time series can benefit significantly from relational data mining
based on symbolic methods. The paper overviews relational data mining
methodology and develops these techniques for financial data mining.
|
cs/0208030
|
A direct time-domain FEM modeling of broadband frequency-dependent
absorption with the presence of matrix fractional power: Model I
|
cs.CE cs.CG
|
The frequency-dependent attenuation of broadband acoustics is often
confronted in many different areas. However, the related time domain simulation
is rarely found in the literature due to enormous technical difficulty. The
currently popular relaxation models with the presence of convolution operation
require some material parameters which are not readily available. In this
study, three reports are contributed to address broadband ultrasound
frequency-dependent absorptions using the readily available empirical
parameters. This report is the first in series concerned with developing a
direct time domain FEM formulation. The next two reports are about the
frequency decomposition model and the fractional derivative model.
|
cs/0208033
|
Complete Axiomatizations for Reasoning About Knowledge and Time
|
cs.LO cs.AI
|
Sound and complete axiomatizations are provided for a number of different
logics involving modalities for knowledge and time. These logics arise from
different choices for various parameters. All the logics considered involve the
discrete time linear temporal logic operators `next' and `until' and an
operator for the knowledge of each of a number of agents. Both the single agent
and multiple agent cases are studied: in some instances of the latter there is
also an operator for the common knowledge of the group of all agents. Four
different semantic properties of agents are considered: whether they have a
unique initial state, whether they operate synchronously, whether they have
perfect recall, and whether they learn. The property of no learning is
essentially dual to perfect recall. Not all settings of these parameters lead
to recursively axiomatizable logics, but sound and complete axiomatizations are
presented for all the ones that do.
|
cs/0208034
|
Causes and Explanations: A Structural-Model Approach. Part II:
Explanations
|
cs.AI
|
We propose new definitions of (causal) explanation, using structural
equations to model counterfactuals. The definition is based on the notion of
actual cause, as defined and motivated in a companion paper. Essentially, an
explanation is a fact that is not known for certain but, if found to be true,
would constitute an actual cause of the fact to be explained, regardless of the
agent's initial uncertainty. We show that the definition handles well a number
of problematic examples from the literature.
|
cs/0208035
|
Evaluation of Coreference Rules on Complex Narrative Texts
|
cs.CL
|
This article studies the problem of assessing relevance to each of the rules
of a reference resolution system. The reference solver described here stems
from a formal model of reference and is integrated in a reference processing
workbench. Evaluation of the reference resolution is essential, as it enables
differential evaluation of individual rules. Numerical values of these measures
are given, and discussed, for simple selection rules and other processing
rules; such measures are then studied for numerical parameters.
|
cs/0208036
|
Three New Methods for Evaluating Reference Resolution
|
cs.CL
|
Reference resolution on extended texts (several thousand references) cannot
be evaluated manually. An evaluation algorithm has been proposed for the MUC
tests, using equivalence classes for the coreference relation. However, we show
here that this algorithm is too indulgent, yielding good scores even for poor
resolution strategies. We elaborate on the same formalism to propose two new
evaluation algorithms, comparing them first with the MUC algorithm and giving
then results on a variety of examples. A third algorithm using only
distributional comparison of equivalence classes is finally described; it
assesses the relative importance of the recall vs. precision errors.
|
cs/0208037
|
Cooperation between Pronoun and Reference Resolution for Unrestricted
Texts
|
cs.CL
|
Anaphora resolution is envisaged in this paper as part of the reference
resolution process. A general open architecture is proposed, which can be
particularized and configured in order to simulate some classic anaphora
resolution methods. With the aim of improving pronoun resolution, the system
takes advantage of elementary cues about characters of the text, which are
represented through a particular data structure. In its most robust
configuration, the system uses only a general lexicon, a local morpho-syntactic
parser and a dictionary of synonyms. A short comparative corpus analysis shows
that narrative texts are the most suitable for testing such a system.
|
cs/0208038
|
Reference Resolution Beyond Coreference: a Conceptual Frame and its
Application
|
cs.CL
|
A model for reference use in communication is proposed, from a
representationist point of view. Both the sender and the receiver of a message
handle representations of their common environment, including mental
representations of objects. Reference resolution by a computer is viewed as the
construction of object representations using referring expressions from the
discourse, whereas often only coreference links between such expressions are
looked for. Differences between these two approaches are discussed. The model
has been implemented with elementary rules, and tested on complex narrative
texts (hundreds to thousands of referring expressions). The results support the
mental representations paradigm.
|
cs/0208040
|
Using Hierarchical Data Mining to Characterize Performance of Wireless
System Configurations
|
cs.CE
|
This paper presents a statistical framework for assessing wireless systems
performance using hierarchical data mining techniques. We consider WCDMA
(wideband code division multiple access) systems with two-branch STTD (space
time transmit diversity) and 1/2 rate convolutional coding (forward error
correction codes). Monte Carlo simulation estimates the bit error probability
(BEP) of the system across a wide range of signal-to-noise ratios (SNRs). A
performance database of simulation runs is collected over a targeted space of
system configurations. This database is then mined to obtain regions of the
configuration space that exhibit acceptable average performance. The shape of
the mined regions illustrates the joint influence of configuration parameters
on system performance. The role of data mining in this application is to
provide explainable and statistically valid design conclusions. The research
issue is to define statistically meaningful aggregation of data in a manner
that permits efficient and effective data mining algorithms. We achieve a good
compromise between these goals and help establish the applicability of data
mining for characterizing wireless systems performance.
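The Monte Carlo step can be illustrated on a much simpler system. The sketch below (ours, a stand-in for the WCDMA/STTD simulator in the paper) estimates the BEP of uncoded BPSK over AWGN at a single SNR point; repeating it over a grid of configurations would produce the kind of performance database described above.

```python
# Monte Carlo bit-error-probability estimate for uncoded BPSK over AWGN.
import math
import random

def bep_bpsk(snr_db, trials=200_000, seed=1):
    random.seed(seed)                   # reproducible run
    snr = 10 ** (snr_db / 10)           # Eb/N0 as a linear ratio
    sigma = math.sqrt(1 / (2 * snr))    # noise std for unit-energy symbols
    # transmit +1 every time; a bit error occurs when noise flips the sign
    errors = sum(1 for _ in range(trials)
                 if 1 + random.gauss(0, sigma) < 0)
    return errors / trials

print(bep_bpsk(6))  # roughly 2.4e-3 at 6 dB Eb/N0
```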
|
cs/0209001
|
A Novel Statistical Diagnosis of Clinical Data
|
cs.CE cs.CC
|
In this paper, we present a method for diagnosing diseases from clinical data.
The data are routine tests such as urine tests, hematology, chemistries, etc.
Though these tests have been performed for people who check in at medical
institutes, how the items of the data interact with each other and which
combinations of them cause a disease are neither well understood nor well
studied. Here we attack this practically important problem by putting the data
into a mathematical setup and applying a support vector machine. Finally we
present simulation results for fatty liver, gastritis, etc., and discuss their
implications.
|
cs/0209002
|
A Chart-Parsing Algorithm for Efficient Semantic Analysis
|
cs.CL
|
In some contexts, well-formed natural language cannot be expected as input to
information or communication systems. In these contexts, the use of
grammar-independent input (sequences of uninflected semantic units like e.g.
language-independent icons) can be an answer to the users' needs. A semantic
analysis can be performed, based on lexical semantic knowledge: it is
equivalent to a dependency analysis with no syntactic or morphological clues.
However, this requires that an intelligent system should be able to interpret
this input with reasonable accuracy and in reasonable time. Here we propose a
method allowing a purely semantic-based analysis of sequences of semantic
units. It uses an algorithm inspired by the idea of ``chart parsing'' known in
Natural Language Processing, which stores intermediate parsing results in order
to bring the calculation time down. In comparison with declarative logic
programming, where the calculation time, left to a Prolog engine, is
hyperexponential, this method brings the calculation time down to polynomial
time, whose order depends on the valency of the predicates.
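A toy version of such a chart can be sketched as follows. Everything here (the valency encoding, the combination rule, the icon sequence) is our own simplification for illustration, not the authors' algorithm; the point is only that the analyses of each sub-span are stored once in the chart and reused, which is what keeps the time polynomial.

```python
# Minimal CYK-style chart over a sequence of semantic units: a unit is an
# entity (valency 0) or a predicate with a given valency; two adjacent
# analyses combine when one is a predicate still missing an argument.

def parse(units, valency):
    n = len(units)
    # chart[(i, j)] = set of (head, missing_args) analyses for units[i:j]
    chart = {(i, i + 1): {(u, valency.get(u, 0))}
             for i, u in enumerate(units)}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            cell = set()
            for k in range(i + 1, j):          # reuse stored sub-analyses
                for (h1, m1) in chart[(i, k)]:
                    for (h2, m2) in chart[(k, j)]:
                        if m1 > 0 and m2 == 0:   # left head takes right arg
                            cell.add((h1, m1 - 1))
                        if m2 > 0 and m1 == 0:   # right head takes left arg
                            cell.add((h2, m2 - 1))
            chart[(i, j)] = cell
    # a full analysis covers the sequence with no missing arguments
    return [h for (h, m) in chart[(0, n)] if m == 0]

# Hypothetical icon sequence: GIVE is a 3-place predicate, the rest entities.
print(parse(["MARY", "GIVE", "BOOK", "JOHN"], {"GIVE": 3}))  # ['GIVE']
```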
|
cs/0209003
|
Rerendering Semantic Ontologies: Automatic Extensions to UMLS through
Corpus Analytics
|
cs.CL
|
In this paper, we discuss the utility and deficiencies of existing ontology
resources for a number of language processing applications. We describe a
technique for increasing the semantic type coverage of a specific ontology, the
National Library of Medicine's UMLS, with the use of robust finite state
methods used in conjunction with large-scale corpus analytics of the domain
corpus. We call this technique "semantic rerendering" of the ontology. This
research has been done in the context of Medstract, a joint Brandeis-Tufts
effort aimed at developing tools for analyzing biomedical language (i.e.,
Medline), as well as creating targeted databases of bio-entities, biological
relations, and pathway data for biological researchers. Motivating the current
research is the need to have robust and reliable semantic typing of syntactic
elements in the Medline corpus, in order to improve the overall performance of
the information extraction applications mentioned above.
|
cs/0209008
|
The partition semantics of questions, syntactically
|
cs.CL cs.AI cs.LO
|
Groenendijk and Stokhof (1984, 1996; Groenendijk 1999) provide a logically
attractive theory of the semantics of natural language questions, commonly
referred to as the partition theory. Two central notions in this theory are
entailment between questions and answerhood. For example, the question "Who is
going to the party?" entails the question "Is John going to the party?", and
"John is going to the party" counts as an answer to both. Groenendijk and
Stokhof define these two notions in terms of partitions of a set of possible
worlds.
We provide a syntactic characterization of entailment between questions and
answerhood. We show that answers are, in some sense, exactly those formulas
that are built up from instances of the question. This result lets us compare
the partition theory with other approaches to interrogation -- both linguistic
analyses, such as Hamblin's and Karttunen's semantics, and computational
systems, such as Prolog. Our comparison separates a notion of answerhood into
three aspects: equivalence (when two questions or answers are interchangeable),
atomic answers (what instances of a question count as answers), and compound
answers (how answers compose).
|
cs/0209009
|
Question answering: from partitions to Prolog
|
cs.CL cs.AI cs.LO
|
We implement Groenendijk and Stokhof's partition semantics of questions in a
simple question answering algorithm. The algorithm is sound, complete, and
based on tableau theorem proving. The algorithm relies on a syntactic
characterization of answerhood: Any answer to a question is equivalent to some
formula built up only from instances of the question. We prove this
characterization by translating the logic of interrogation to classical
predicate logic and applying Craig's interpolation theorem.
|
cs/0209010
|
Introduction to the CoNLL-2002 Shared Task: Language-Independent Named
Entity Recognition
|
cs.CL
|
We describe the CoNLL-2002 shared task: language-independent named entity
recognition. We give background information on the data sets and the evaluation
method, present a general overview of the systems that have taken part in the
task and discuss their performance.
|
cs/0209019
|
Reasoning about Evolving Nonmonotonic Knowledge Bases
|
cs.AI
|
Recently, several approaches to updating knowledge bases modeled as extended
logic programs have been introduced, ranging from basic methods to incorporate
(sequences of) sets of rules into a logic program, to more elaborate methods
which use an update policy for specifying how updates must be incorporated. In
this paper, we introduce a framework for reasoning about evolving knowledge
bases, which are represented as extended logic programs and maintained by an
update policy. We first describe a formal model which captures various update
approaches, and we define a logical language for expressing properties of
evolving knowledge bases. We then investigate semantical and computational
properties of our framework, where we focus on properties of knowledge states
with respect to the canonical reasoning task of whether a given formula holds
on a given evolving knowledge base. In particular, we present finitary
characterizations of the evolution for certain classes of framework instances,
which can be exploited for obtaining decidability results. In more detail, we
characterize the complexity of reasoning for some meaningful classes of
evolving knowledge bases, ranging from polynomial to double exponential space
complexity.
|
cs/0209020
|
A new definition of the fractional Laplacian
|
cs.NA cs.CE
|
It is noted that the standard definition of the fractional Laplacian leads to
a hyper-singular convolution integral and is also obscure about how to
implement the boundary conditions. The purpose of this note is to introduce a
new definition of the fractional Laplacian to overcome these major drawbacks.
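For context, the standard definition the note refers to is the Riesz form below (standard textbook material, not the authors' new definition); the kernel's growth like |x - y|^{-(n + alpha)} near x = y is what makes the convolution integral hypersingular.

```latex
% Standard (Riesz) fractional Laplacian, 0 < \alpha < 2:
(-\Delta)^{\alpha/2} u(x)
  = C_{n,\alpha}\,\mathrm{P.V.}\!\int_{\mathbb{R}^n}
    \frac{u(x) - u(y)}{|x - y|^{n+\alpha}}\,dy
```

Here C_{n,alpha} is a normalizing constant and P.V. denotes the Cauchy principal value.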
|