Columns: id (string, 9-16 chars) · title (string, 4-278 chars) · categories (string, 5-104 chars) · abstract (string, 6-4.09k chars)
cmp-lg/9502039
Multilingual Sentence Categorization according to Language
cmp-lg cs.CL
In this paper, we describe an approach to sentence categorization whose originality is to be based on natural properties of languages, with no dependency on a training set. The implementation is fast, small, robust and tolerant of textual errors. Tested on French, English, Spanish and German discrimination, the system gives very interesting results, achieving in one test 99.4% correct assignments on real sentences. The resolution power is based on grammatical words (not the most common words) and on the alphabet. Having the grammatical words and the alphabet of each language at its disposal, the system computes for each of them its likelihood of being selected. The name of the language with the optimal likelihood tags the sentence --- but unresolved ambiguities are maintained. We discuss the reasons which led us to use these linguistic facts and present several directions for improving the system's classification performance. Categorizing sentences by linguistic properties shows that difficult problems sometimes have simple solutions.
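The function-word scoring the abstract describes can be sketched in a few lines of Python. The miniature word inventories and the `identify` helper below are hypothetical illustrations, not the paper's system; a real implementation would use full grammatical-word lists and per-language alphabet information as well.

```python
# Hypothetical miniature function-word inventories; a real system would
# use complete grammatical-word lists plus alphabet statistics.
FUNCTION_WORDS = {
    "english": {"the", "and", "of", "is", "in"},
    "french":  {"le", "la", "et", "de", "est"},
    "spanish": {"el", "la", "y", "de", "es"},
    "german":  {"der", "die", "und", "ist", "von"},
}

def identify(sentence: str) -> list[str]:
    """Score each language by function-word hits; keep ties, since
    unresolved ambiguities are maintained rather than forced."""
    tokens = sentence.lower().split()
    scores = {lang: sum(t in words for t in tokens)
              for lang, words in FUNCTION_WORDS.items()}
    best = max(scores.values())
    return sorted(lang for lang, s in scores.items() if s == best and s > 0)
```

A sentence containing only the ambiguous article "la" illustrates how a tie between French and Spanish is preserved rather than resolved arbitrarily.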
cmp-lg/9503001
Using a Corpus for Teaching Turkish Morphology
cmp-lg cs.CL
This paper reports on the preliminary phase of our ongoing research towards developing an intelligent tutoring environment for Turkish grammar. One of the components of this environment is a corpus search tool which, among other aspects of the language, will be used to present the learner with sample sentences along with their morphological analyses. Following a brief introduction to the Turkish language and its morphology, the paper describes the morphological analysis and ambiguity resolution used to construct the corpus used in the search tool. Finally, implementation issues and details involving the user interface of the tool are discussed.
cmp-lg/9503002
Computational dialectology in Irish Gaelic
cmp-lg cs.CL
Dialect groupings can be discovered objectively and automatically by cluster analysis of phonetic transcriptions such as those found in a linguistic atlas. The first step in the analysis, the computation of linguistic distance between each pair of sites, can be computed as Levenshtein distance between phonetic strings. This correlates closely with the much more laborious technique of determining and counting isoglosses, and is more accurate than the more familiar metric of computing Hamming distance based on whether vocabulary entries match. In the actual clustering step, traditional agglomerative clustering works better than the top-down technique of partitioning around medoids. When agglomerative clustering of phonetic string comparison distances is applied to Gaelic, reasonable dialect boundaries are obtained, corresponding to national and (within Ireland) provincial boundaries.
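The first step described above, Levenshtein distance between phonetic strings, can be sketched as follows. The `site_distance` helper and the toy transcriptions in the test are hypothetical illustrations, not the atlas data used in the paper.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two phonetic strings, unit costs."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def site_distance(site_a: dict, site_b: dict) -> float:
    """Linguistic distance between two atlas sites: mean edit distance
    over the vocabulary items transcribed at both sites."""
    shared = site_a.keys() & site_b.keys()
    return sum(levenshtein(site_a[w], site_b[w]) for w in shared) / len(shared)
```

The per-pair site distances form the input matrix for the agglomerative clustering step the abstract goes on to describe.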
cmp-lg/9503003
Tagging French -- comparing a statistical and a constraint-based method
cmp-lg cs.CL
In this paper we compare two competing approaches to part-of-speech tagging, statistical and constraint-based disambiguation, using French as our test language. We imposed a time limit on our experiment: the amount of time spent on the design of our constraint system was about the same as the time we used to train and test the easy-to-implement statistical model. We describe the two systems and compare the results. The accuracy of the statistical method is reasonably good, comparable to taggers for English. But the constraint-based tagger seems to be superior even with the limited time we allowed ourselves for rule development.
cmp-lg/9503004
Creating a tagset, lexicon and guesser for a French tagger
cmp-lg cs.CL
We earlier described two taggers for French, a statistical one and a constraint-based one. The two taggers have the same tokeniser and morphological analyser. In this paper, we describe aspects of this work concerned with the definition of the tagset, the building of the lexicon, derived from an existing two-level morphological analyser, and the definition of a lexical transducer for guessing unknown words.
cmp-lg/9503005
A specification language for Lexical Functional Grammars
cmp-lg cs.CL
This paper defines a language L for specifying LFG grammars. This enables constraints on LFG's composite ontology (c-structures synchronised with f-structures) to be stated directly; no appeal to the LFG construction algorithm is needed. We use L to specify rules annotated with schemata and the LFG uniqueness, completeness and coherence principles. Broader issues raised by this work are noted and discussed.
cmp-lg/9503006
ParseTalk about Sentence- and Text-Level Anaphora
cmp-lg cs.CL
We provide a unified account of sentence-level and text-level anaphora within the framework of a dependency-based grammar model. Criteria for anaphora resolution within sentence boundaries rephrase major concepts from GB's binding theory, while those for text-level anaphora incorporate an adapted version of a Grosz-Sidner-style focus model.
cmp-lg/9503007
The Semantics of Motion
cmp-lg cs.CL
In this paper we present a semantic study of motion complexes (i.e., a motion verb followed by a spatial preposition). We focus on the intrinsic spatial and temporal semantic properties of the motion verbs, on the one hand, and of the spatial prepositions, on the other. Then, we address the problem of combining these basic semantics in order to formally and automatically derive the spatiotemporal semantics of a motion complex from the spatiotemporal properties of its components.
cmp-lg/9503008
Ellipsis and Higher-Order Unification
cmp-lg cs.CL
We present a new method for characterizing the interpretive possibilities generated by elliptical constructions in natural language. Unlike previous analyses, which postulate ambiguity of interpretation or derivation in the full clause source of the ellipsis, our analysis requires no such hidden ambiguity. Further, the analysis follows relatively directly from an abstract statement of the ellipsis interpretation problem. It predicts correctly a wide range of interactions between ellipsis and other semantic phenomena such as quantifier scope and bound anaphora. Finally, although the analysis itself is stated nonprocedurally, it admits of a direct computational method for generating interpretations.
cmp-lg/9503009
Distributional Part-of-Speech Tagging
cmp-lg cs.CL
This paper presents an algorithm for tagging words whose part-of-speech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.
cmp-lg/9503010
Corpus-based Method for Automatic Identification of Support Verbs for Nominalizations
cmp-lg cs.CL
Nominalization is a highly productive phenomenon in most languages. The process of nominalization ejects a verb from its syntactic role into a nominal position. The original verb is often replaced by a semantically emptied support verb (e.g., "make a proposal"). The choice of a support verb for a given nominalization is unpredictable, causing a problem for language learners as well as for natural language processing systems. We present here a method of discovering support verbs from an untagged corpus via low-level syntactic processing and comparison of arguments attached to verbal forms and potential nominalized forms. The result of the process is a list of potential support verbs for the nominalized form of a given predicate.
cmp-lg/9503011
Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies
cmp-lg cs.CL
An automatic word classification system has been designed which processes word unigram and bigram frequency statistics extracted from a corpus of natural language utterances. The system implements a binary top-down form of word clustering which employs an average class mutual information metric. Resulting classifications are hierarchical, allowing variable class granularity. Words are represented as structural tags --- unique $n$-bit numbers the most significant bit-patterns of which incorporate class information. Access to a structural tag immediately provides access to all classification levels for the corresponding word. The classification system has successfully revealed some of the structure of English, from the phonemic to the semantic level. The system has been compared --- directly and indirectly --- with other recent word classification systems. Class based interpolated language models have been constructed to exploit the extra information supplied by the classifications and some experiments have shown that the new models improve model performance.
cmp-lg/9503012
A Note on Zipf's Law, Natural Languages, and Noncoding DNA regions
cmp-lg cs.CL q-bio
In Phys. Rev. Letters (73:2, 5 Dec. 94), Mantegna et al. conclude on the basis of Zipf rank frequency data that noncoding DNA sequence regions are more like natural languages than coding regions. We argue on the contrary that an empirical fit to Zipf's ``law'' cannot be used as a criterion for similarity to natural languages. Although DNA is presumably an ``organized system of signs'' in Mandelbrot's (1961) sense, an observation of statistical features of the sort presented in the Mantegna et al. paper does not shed light on the similarity between DNA's ``grammar'' and natural language grammars, just as the observation of exact Zipf-like behavior cannot distinguish between the underlying processes of tossing an $M$-sided die or a finite-state branching process.
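The die-tossing point is easy to reproduce: random "monkey" text over a small alphabet, with spaces as word breaks, yields an approximately Zipf-like rank-frequency curve despite having no linguistic structure at all. The parameters below are arbitrary choices for illustration, not values from the paper.

```python
import random
from collections import Counter

def monkey_text(n_chars=200_000, alphabet="abcde", p_space=0.2, seed=0):
    """Random characters from a small 'die'; spaces delimit words."""
    rng = random.Random(seed)
    chars = [" " if rng.random() < p_space else rng.choice(alphabet)
             for _ in range(n_chars)]
    return "".join(chars).split()

words = monkey_text()
counts = Counter(words)
ranks = [n for _, n in counts.most_common()]
# Plotted on log-log axes, rank vs. frequency is roughly linear, i.e.
# Zipf-like, even though the generating process is pure die tossing.
```

Short words dominate the top ranks, exactly as geometric word-length statistics predict, with no grammar anywhere in sight.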
cmp-lg/9503013
Incremental Interpretation: Applications, Theory, and Relationship to Dynamic Semantics
cmp-lg cs.CL
Why should computers interpret language incrementally? In recent years psycholinguistic evidence for incremental interpretation has become more and more compelling, suggesting that humans perform semantic interpretation before constituent boundaries, possibly word by word. However, possible computational applications have received less attention. In this paper we consider various potential applications, in particular graphical interaction and dialogue. We then review the theoretical and computational tools available for mapping from fragments of sentences to fully scoped semantic representations. Finally, we tease apart the relationship between dynamic semantics and incremental interpretation.
cmp-lg/9503014
Non-Constituent Coordination: Theory and Practice
cmp-lg cs.CL
Despite the large amount of theoretical work done on non-constituent coordination during the last two decades, many computational systems still treat coordination using adapted parsing strategies, in a similar fashion to the SYSCONJ system developed for ATNs. This paper reviews the theoretical literature, and shows why many of the theoretical accounts actually have worse coverage than accounts based on processing. Finally, it shows how processing accounts can be described formally and declaratively in terms of Dynamic Grammars.
cmp-lg/9503015
Incremental Interpretation of Categorial Grammar
cmp-lg cs.CL
The paper describes a parser for Categorial Grammar which provides fully word by word incremental interpretation. The parser does not require fragments of sentences to form constituents, and thereby avoids problems of spurious ambiguity. The paper includes a brief discussion of the relationship between basic Categorial Grammar and other formalisms such as HPSG, Dependency Grammar and the Lambek Calculus. It also includes a discussion of some of the issues which arise when parsing lexicalised grammars, and the possibilities for using statistical techniques for tuning to particular languages.
cmp-lg/9503016
Natural Language Interfaces to Databases - An Introduction
cmp-lg cs.CL
This paper is an introduction to natural language interfaces to databases (NLIDBs). A brief overview of the history of NLIDBs is first given. Some advantages and disadvantages of NLIDBs are then discussed, comparing NLIDBs to formal query languages, form-based interfaces, and graphical interfaces. An introduction to some of the linguistic problems NLIDBs have to confront follows, for the benefit of readers less familiar with computational linguistics. The discussion then moves on to NLIDB architectures, portability issues, restricted natural language input systems (including menu-based NLIDBs), and NLIDBs with reasoning capabilities. Some less explored areas of NLIDB research are then presented, namely database updates, meta-knowledge questions, temporal questions, and multi-modal NLIDBs. The paper ends with reflections on the current state of the art.
cmp-lg/9503017
Redundancy in Collaborative Dialogue
cmp-lg cs.CL
In dialogues in which both agents are autonomous, each agent deliberates whether to accept or reject the contributions of the current speaker. A speaker cannot simply assume that a proposal or an assertion will be accepted. However, an examination of a corpus of naturally-occurring problem-solving dialogues shows that agents often do not explicitly indicate acceptance or rejection. Rather the speaker must infer whether the hearer understands and accepts the current contribution based on indirect evidence provided by the hearer's next dialogue contribution. In this paper, I propose a model of the role of informationally redundant utterances in providing evidence to support inferences about mutual understanding and acceptance. The model (1) requires a theory of mutual belief that supports mutual beliefs of various strengths; (2) explains the function of a class of informationally redundant utterances that cannot be explained by other accounts; and (3) contributes to a theory of dialogue by showing how mutual beliefs can be inferred in the absence of the master-slave assumption.
cmp-lg/9503018
Discourse and Deliberation: Testing a Collaborative Strategy
cmp-lg cs.CL
A discourse strategy is a strategy for communicating with another agent. Designing effective dialogue systems requires designing agents that can choose among discourse strategies. We claim that the design of effective strategies must take cognitive factors into account, propose a new method for testing the hypothesized factors, and present experimental results on an effective strategy for supporting deliberation. The proposed method of computational dialogue simulation provides a new empirical basis for computational linguistics.
cmp-lg/9503019
SATZ - An Adaptive Sentence Segmentation System
cmp-lg cs.CL
This paper provides a detailed description of the sentence segmentation system first introduced in cmp-lg/9411022. It provides results of systematic experiments involving sentence boundary determination, including context size, lexicon size, and single-case texts. Also included are the results of successfully adapting the system to German and French. The source code for the system is available as a compressed tar file at ftp://cs-tr.CS.Berkeley.EDU/pub/cstr/satz.tar.Z .
cmp-lg/9503020
Different Issues in the Design of a Lemmatizer/Tagger for Basque
cmp-lg cs.CL
This paper presents relevant issues that have been considered in the design of a general-purpose lemmatizer/tagger for Basque (EUSLEM). The lemmatizer/tagger is conceived as a basic tool necessary for other linguistic applications. It uses the lexical database and the morphological analyzer previously developed and implemented. Due to the characteristics of the language, the tagset proposed here is structured in four levels, so that each level is a refinement of the previous one in the sense that it adds more detailed information. We will focus on the problems found in designing this tagset and on the strategies for morphological disambiguation that will be used.
cmp-lg/9503021
A Note on the Complexity of Restricted Attribute-Value Grammars
cmp-lg cs.CL
The recognition problem for attribute-value grammars (AVGs) was shown to be undecidable by Johnson in 1988. Therefore, the general form of AVGs is of no practical use. In this paper we study a very restricted form of AVG, for which the recognition problem is decidable (though still NP-complete), the R-AVG. We show that the R-AVG formalism captures all of the context free languages and more, and introduce a variation on the so-called `off-line parsability constraint', the `honest parsability constraint', which lets different types of R-AVG coincide precisely with well-known time complexity classes.
cmp-lg/9503022
Assessing Complexity Results in Feature Theories
cmp-lg cs.CL
In this paper, we assess the complexity results of formalisms that describe the feature theories used in computational linguistics. We show that from these complexity results no immediate conclusions can be drawn about the complexity of the recognition problem of unification grammars using these feature theories. On the one hand, the complexity of feature theories does not provide an upper bound for the complexity of such unification grammars. On the other hand, the complexity of feature theories need not provide a lower bound. Therefore, we argue for formalisms that describe actual unification grammars instead of feature theories. The complexity results for such formalisms then bear directly on the hardness of unification grammars in computational linguistics.
cmp-lg/9503023
A fast partial parse of natural language sentences using a connectionist method
cmp-lg cs.CL
The pattern matching capabilities of neural networks can be used to locate syntactic constituents of natural language. This paper describes a fully automated hybrid system, using neural nets operating within a grammatic framework. It addresses the representation of language for connectionist processing, and describes methods of constraining the problem size. The function of the network is briefly explained, and results are given.
cmp-lg/9503024
From compositional to systematic semantics
cmp-lg cs.CL
We prove a theorem stating that any semantics can be encoded as a compositional semantics, which means that, essentially, the standard definition of compositionality is formally vacuous. We then show that when compositional semantics is required to be "systematic" (that is, the meaning function cannot be arbitrary, but must belong to some class), it is possible to distinguish between compositional and non-compositional semantics. As a result, we believe that the paper clarifies the concept of compositionality and opens a possibility of making systematic formal comparisons of different systems of grammars.
cmp-lg/9503025
Co-occurrence Vectors from Corpora vs. Distance Vectors from Dictionaries
cmp-lg cs.CL
A comparison was made of vectors derived by using ordinary co-occurrence statistics from large text corpora and of vectors derived by measuring the inter-word distances in dictionary definitions. The precision of word sense disambiguation by using co-occurrence vectors from the 1987 Wall Street Journal (20M total words) was higher than that by using distance vectors from the Collins English Dictionary (60K head words + 1.6M definition words). However, other experimental results suggest that distance vectors contain some different semantic information from co-occurrence vectors.
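A minimal sketch of the co-occurrence side of this comparison, assuming whole-sentence context windows; the helper names and toy data below are illustrative assumptions, not the Wall Street Journal setup used in the paper.

```python
import math
from collections import Counter

def cooccurrence_vector(target, sentences):
    """Counts of words co-occurring with `target` in the same sentence."""
    vec = Counter()
    for sent in sentences:
        if target in sent:
            vec.update(w for w in sent if w != target)
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0
```

Disambiguation then amounts to comparing the context vector of an ambiguous occurrence against reference vectors for each sense and picking the nearest.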
cmp-lg/9504001
Automatic processing of proper names in texts
cmp-lg cs.CL
This paper first presents the problems raised by proper names in natural language processing. Second, it introduces the knowledge representation structure we use, based on conceptual graphs. Then it explains the techniques used to process known and unknown proper names. Finally, it gives the performance of the system and the further work we intend to pursue.
cmp-lg/9504002
Tagset Design and Inflected Languages
cmp-lg cs.CL
An experiment designed to explore the relationship between tagging accuracy and the nature of the tagset is described, using corpora in English, French and Swedish. In particular, the question of internal versus external criteria for tagset design is considered, with the general conclusion that external (linguistic) criteria should be followed. Some problems associated with tagging unknown words in inflected languages are briefly considered.
cmp-lg/9504003
Collaborating on Referring Expressions
cmp-lg cs.CL
This paper presents a computational model of how conversational participants collaborate in order to make a referring action successful. The model is based on the view of language as goal-directed behavior. We propose that the content of a referring expression can be accounted for by the planning paradigm. Not only does this approach allow the processes of building referring expressions and identifying their referents to be captured by plan construction and plan inference, it also allows us to account for how participants clarify a referring expression by using meta-actions that reason about and manipulate the plan derivation that corresponds to the referring expression. To account for how clarification goals arise and how inferred clarification plans affect the agent, we propose that the agents are in a certain state of mind, and that this state includes an intention to achieve the goal of referring and a plan that the agents are currently considering. It is this mental state that sanctions the adoption of goals and the acceptance of inferred plans, and so acts as a link between understanding and generation.
cmp-lg/9504004
A Computational Treatment of HPSG Lexical Rules as Covariation in Lexical Entries
cmp-lg cs.CL
We describe a compiler which translates a set of HPSG lexical rules and their interaction into definite relations used to constrain lexical entries. The compiler ensures automatic transfer of properties unchanged by a lexical rule. Thus an operational semantics for the full lexical rule mechanism as used in HPSG linguistics is provided. Program transformation techniques are used to advance the resulting encoding. The final output constitutes a computational counterpart of the linguistic generalizations captured by lexical rules and allows ``on the fly'' application.
cmp-lg/9504005
Constraint Logic Programming for Natural Language Processing
cmp-lg cs.CL
This paper proposes an evaluation of the adequacy of the constraint logic programming paradigm for natural language processing. Theoretical aspects of this question have been discussed in several works. We adopt here a pragmatic point of view and our argumentation relies on concrete solutions. Using actual constraints (in the CLP sense) is neither easy nor direct. However, CLP can improve parsing techniques in several respects, such as concision, control, efficiency, and direct representation of linguistic formalisms. This discussion is illustrated by several examples and the presentation of an HPSG parser.
cmp-lg/9504006
Cues and control in Expert-Client Dialogues
cmp-lg cs.CL
We conducted an empirical analysis into the relation between control and discourse structure. We applied control criteria to four dialogues and identified 3 levels of discourse structure. We investigated the mechanism for changing control between these structures and found that utterance type and not cue words predicted shifts of control. Participants used certain types of signals when discourse goals were proceeding successfully but resorted to interruptions when they were not.
cmp-lg/9504007
Mixed Initiative in Dialogue: An Investigation into Discourse Segmentation
cmp-lg cs.CL
Conversation between two people is usually of mixed-initiative, with control over the conversation being transferred from one person to another. We apply a set of rules for the transfer of control to 4 sets of dialogues consisting of a total of 1862 turns. The application of the control rules lets us derive domain-independent discourse structures. The derived structures indicate that initiative plays a role in the structuring of discourse. In order to explore the relationship of control and initiative to discourse processes like centering, we analyze the distribution of four different classes of anaphora for two data sets. This distribution indicates that some control segments are hierarchically related to others. The analysis suggests that discourse participants often mutually agree to a change of topic. We also compared initiative in Task Oriented and Advice Giving dialogues and found that both the allocation of control and the manner in which control is transferred are radically different for the two dialogue types. These differences can be explained in terms of collaborative planning principles.
cmp-lg/9504008
SKOPE: A connectionist/symbolic architecture of spoken Korean processing
cmp-lg cs.CL
Spoken language processing requires speech and natural language integration. Moreover, spoken Korean calls for unique processing methodology due to its linguistic characteristics. This paper presents SKOPE, a connectionist/symbolic spoken Korean processing engine, which emphasizes that: 1) connectionist and symbolic techniques must be selectively applied according to their relative strengths and weaknesses, and 2) the linguistic characteristics of Korean must be fully considered for phoneme recognition, speech and language integration, and morphological/syntactic processing. The design and implementation of SKOPE demonstrates how connectionist/symbolic hybrid architectures can be constructed for spoken agglutinative language processing. SKOPE also presents many novel ideas for speech and language processing. The phoneme recognition, morphological analysis, and syntactic analysis experiments show that SKOPE is a viable approach for spoken Korean processing.
cmp-lg/9504009
Abstract Machine for Typed Feature Structures
cmp-lg cs.CL
This paper describes an abstract machine for linguistic formalisms that are based on typed feature structures, such as HPSG. The core design of the abstract machine is given in detail, including the compilation process from a high-level language to the abstract machine language and the implementation of the abstract instructions. The machine's engine supports the unification of typed, possibly cyclic, feature structures. A separate module deals with control structures and instructions to accommodate parsing for phrase structure grammars. We treat the linguistic formalism as a high-level declarative programming language, applying methods that were proved useful in computer science to the study of natural languages: a grammar specified using the formalism is endowed with an operational semantics.
cmp-lg/9504010
Maximum Likelihood and Minimum Entropy Identification of Grammars
cmp-lg cs.CL
Using the Thermodynamic Formalism, we introduce a Gibbsian model for the identification of regular grammars based only on positive evidence. This model mimics the natural language acquisition procedure driven by prosody, which is here represented by the thermodynamical potential. The statistical question we face is how to estimate the incidence matrix of a subshift of finite type from a sample produced by a Gibbs state whose potential is known. The model accounts for both the robustness of the language acquisition procedure and language changes. The probabilistic approach we use avoids invoking ad hoc restrictions such as Berwick's Subset Principle.
cmp-lg/9504011
A Processing Model for Free Word Order Languages
cmp-lg cs.CL
Like many verb-final languages, German displays considerable word-order freedom: there is no syntactic constraint on the ordering of the nominal arguments of a verb, as long as the verb remains in final position. This effect is referred to as ``scrambling'', and is interpreted in transformational frameworks as leftward movement of the arguments. Furthermore, arguments from an embedded clause may move out of their clause; this effect is referred to as ``long-distance scrambling''. While scrambling has recently received considerable attention in the syntactic literature, the status of long-distance scrambling has only rarely been addressed. The reason for this is the problematic status of the data: not only is long-distance scrambling highly dependent on pragmatic context, it is also strongly subject to degradation due to processing constraints. As in the case of center-embedding, it is not immediately clear whether to assume that observed unacceptability of highly complex sentences is due to grammatical restrictions, or whether we should assume that the competence grammar does not place any restrictions on scrambling (and that, therefore, all such sentences are in fact grammatical), and the unacceptability of some (or most) of the grammatically possible word orders is due to processing limitations. In this paper, we will argue for the second view by presenting a processing model for German.
cmp-lg/9504012
Linear Logic for Meaning Assembly
cmp-lg cs.CL
Semantic theories of natural language associate meanings with utterances by providing meanings for lexical items and rules for determining the meaning of larger units given the meanings of their parts. Meanings are often assumed to combine via function application, which works well when constituent structure trees are used to guide semantic composition. However, we believe that the functional structure of Lexical-Functional Grammar is best used to provide the syntactic information necessary for constraining derivations of meaning in a cross-linguistically uniform format. It has been difficult, however, to reconcile this approach with the combination of meanings by function application. In contrast to compositional approaches, we present a deductive approach to assembling meanings, based on reasoning with constraints, which meshes well with the unordered nature of information in the functional structure. Our use of linear logic as a `glue' for assembling meanings allows for a coherent treatment of the LFG requirements of completeness and coherence as well as of modification and quantification.
cmp-lg/9504013
NLG vs. Templates
cmp-lg cs.CL
One of the most important questions in applied NLG is what benefits (or `value-added', in business-speak) NLG technology offers over template-based approaches. Despite the importance of this question to the applied NLG community, however, it has not been discussed much in the research NLG community, which I think is a pity. In this paper, I try to summarize the issues involved and recap current thinking on this topic. My goal is not to answer this question (I don't think we know enough to be able to do so), but rather to increase the visibility of this issue in the research community, in the hope of getting some input and ideas on this very important question. I conclude with a list of specific research areas I would like to see more work in, because I think they would increase the `value-added' of NLG over templates.
cmp-lg/9504014
LexGram - a practical categorial grammar formalism -
cmp-lg cs.CL
We present the LexGram system, an amalgam of (Lambek) categorial grammar and Head Driven Phrase Structure Grammar (HPSG), and show that the grammar formalism it implements is a well-structured and useful tool for actual grammar development.
cmp-lg/9504015
Estimating Lexical Priors for Low-Frequency Syncretic Forms
cmp-lg cs.CL
Given a previously unseen form that is morphologically n-ways ambiguous, what is the best estimator for the lexical prior probabilities for the various functions of the form? We argue that the best estimator is provided by computing the relative frequencies of the various functions among the hapax legomena --- the forms that occur exactly once in a corpus. This result has important implications for the development of stochastic morphological taggers, especially when some initial hand-tagging of a corpus is required: For predicting lexical priors for very low-frequency morphologically ambiguous types (most of which would not occur in any given corpus) one should concentrate on tagging a good representative sample of the hapax legomena, rather than extensively tagging words of all frequency ranges.
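The hapax-based estimator the abstract argues for can be sketched in a few lines. The input format, a list of (form, tag) pairs, is an assumption for illustration, not the paper's corpus representation.

```python
from collections import Counter

def hapax_priors(tagged_corpus):
    """Estimate lexical priors for unseen ambiguous forms from the tag
    distribution of hapax legomena (forms occurring exactly once)."""
    freq = Counter(form for form, _tag in tagged_corpus)
    hapax_tags = Counter(tag for form, tag in tagged_corpus
                         if freq[form] == 1)
    total = sum(hapax_tags.values())
    return {tag: n / total for tag, n in hapax_tags.items()}
```

High-frequency forms contribute nothing here: only the once-seen forms, which best resemble the unseen forms being modeled, shape the prior.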
cmp-lg/9504016
Memoization of Top Down Parsing
cmp-lg cs.CL
This paper discusses the relationship between memoized top-down recognizers and chart parsers. It presents a version of memoization suitable for continuation-passing style programs. When applied to a simple formalization of a top-down recognizer it yields a terminating parser.
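A minimal illustration of the memoized top-down recognizer / chart parser correspondence: memoizing a set-of-positions recognizer makes it do the work of a chart. This is a simplified variant, not the continuation-passing formulation of the paper, and it still loops on left-recursive grammars; the grammar and sentence are invented:

```python
from functools import lru_cache

def recognizer(grammar, words):
    """Memoized top-down CFG recognizer.

    grammar: dict mapping nonterminals to lists of right-hand sides;
    symbols absent from the dict are treated as terminals.
    Memoizing (symbol, position) pairs gives chart-parser behaviour.
    """
    @lru_cache(maxsize=None)
    def parse(sym, i):
        # Return the set of positions reachable after recognizing sym from i.
        if sym not in grammar:  # terminal symbol
            ok = i < len(words) and words[i] == sym
            return frozenset([i + 1]) if ok else frozenset()
        ends = set()
        for rhs in grammar[sym]:
            positions = {i}
            for s in rhs:
                positions = {j for p in positions for j in parse(s, p)}
            ends |= positions
        return frozenset(ends)

    return len(words) in parse("S", 0)
```

The memo table plays the role of the chart: each (symbol, position) pair is computed at most once.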
cmp-lg/9504017
A Uniform Treatment of Pragmatic Inferences in Simple and Complex Utterances and Sequences of Utterances
cmp-lg cs.CL
Drawing appropriate defeasible inferences has been proven to be one of the most pervasive puzzles of natural language processing and a recurrent problem in pragmatics. This paper provides a theoretical framework, called ``stratified logic'', that can accommodate defeasible pragmatic inferences. The framework yields an algorithm that computes the conversational, conventional, scalar, clausal, and normal state implicatures; and the presuppositions that are associated with utterances. The algorithm applies equally to simple and complex utterances and sequences of utterances.
cmp-lg/9504018
An Implemented Formalism for Computing Linguistic Presuppositions and Existential Commitments
cmp-lg cs.CL
We rely on the strength of linguistic and philosophical perspectives in constructing a framework that offers a unified explanation for presuppositions and existential commitment. We use a rich ontology and a set of methodological principles that embed the essence of Meinong's philosophy and Grice's conversational principles into a stratified logic, under an unrestricted interpretation of the quantifiers. The result is a logical formalism that yields a tractable computational method that uniformly calculates all the presuppositions of a given utterance, including the existential ones.
cmp-lg/9504019
A Formalism and an Algorithm for Computing Pragmatic Inferences and Detecting Infelicities
cmp-lg cs.CL
Since Austin introduced the term ``infelicity'', the linguistic literature has been flooded with its use, but no formal or computational explanation has been given for it. This thesis provides one for those infelicities that occur when a pragmatic inference is cancelled. Our contribution assumes the existence of a finer grained taxonomy with respect to pragmatic inferences. It is shown that if one wants to account for natural language expressiveness, one should distinguish between pragmatic inferences that are felicitous to defeat and pragmatic inferences that are infelicitously defeasible. Thus, it is shown that one should consider at least three types of information: indefeasible, felicitously defeasible, and infelicitously defeasible. The cancellation of the last of these determines the pragmatic infelicities. A new formalism has been devised to accommodate the three levels of information, called ``stratified logic''. Within it, we are able to express formally notions such as ``utterance U presupposes P'' or ``utterance U is infelicitous''. Special attention is paid to the implications that our work has in solving some well-known existential philosophical puzzles. The formalism yields an algorithm for computing interpretations for utterances, for determining their associated presuppositions, and for signalling infelicitous utterances that has been implemented in Common Lisp. The algorithm applies equally to simple and complex utterances and sequences of utterances.
cmp-lg/9504020
Computational Interpretations of the Gricean Maxims in the Generation of Referring Expressions
cmp-lg cs.CL
We examine the problem of generating definite noun phrases that are appropriate referring expressions; i.e., noun phrases that (1) successfully identify the intended referent to the hearer whilst (2) not conveying to her any false conversational implicatures (Grice, 1975). We review several possible computational interpretations of the conversational implicature maxims, with different computational costs, and argue that the simplest may be the best, because it seems to be closest to what human speakers do. We describe our recommended algorithm in detail, along with a specification of the resources a host system must provide in order to make use of the algorithm, and an implementation used in the natural language generation component of the IDAS system. This paper will appear in the April--June 1995 issue of Cognitive Science, and is made available on cmp-lg with the permission of Ablex, the publishers of that journal.
cmp-lg/9504021
Phonological Derivation in Optimality Theory
cmp-lg cs.CL
Optimality Theory is a constraint-based theory of phonology which allows constraints to be violated. Consequently, implementing the theory presents problems for declarative constraint-based processing frameworks. On the basis of two regularity assumptions, that candidate sets are regular and that constraints can be modelled by transducers, this paper presents and proves correct algorithms for computing the action of constraints, and hence deriving surface forms.
cmp-lg/9504022
Constraints, Exceptions and Representations
cmp-lg cs.CL
This paper shows that default-based phonologies have the potential to capture morphophonological generalisations which cannot be captured by non-default theories. In achieving this result, I offer a characterisation of Underspecification Theory and Optimality Theory in terms of their methods for ordering defaults. The result means that machine learning techniques for building non-default analyses may not provide a suitable basis for morphophonological analysis.
cmp-lg/9504023
TAKTAG: Two-phase learning method for hybrid statistical/rule-based part-of-speech disambiguation
cmp-lg cs.CL
Both statistical and rule-based approaches to part-of-speech (POS) disambiguation have their own advantages and limitations. Especially for Korean, the narrow windows provided by hidden Markov models (HMMs) cannot cover the lexical and long-distance dependencies necessary for POS disambiguation. On the other hand, rule-based approaches are neither accurate nor easily adapted to new tag-sets and languages. In this regard, a statistical/rule-based hybrid method that can take advantage of both approaches is called for to achieve robust and flexible POS disambiguation. We present one such method: a two-phase learning architecture for hybrid statistical/rule-based POS disambiguation, especially for Korean. In this method, the statistical learning of morphological tagging is error-corrected by the rule-based learning of a Brill [1992] style tagger. We also design a hierarchical and flexible Korean tag-set to cope with multiple tagging applications, each of which requires a different tag-set. Our experiments show that the two-phase learning method can overcome the undesirable features of solely HMM-based or solely rule-based tagging, especially for morphologically complex Korean.
cmp-lg/9504024
A Morphographemic Model for Error Correction in Nonconcatenative Strings
cmp-lg cs.CL
This paper introduces a spelling correction system which integrates seamlessly with morphological analysis using a multi-tape formalism. Handling of various Semitic error problems is illustrated, with reference to Arabic and Syriac examples. The model handles errors in vocalisation, diacritics, phonetic syncopation and morphographemic idiosyncrasies, in addition to Damerau errors. A complementary correction strategy for morphologically sound but morphosyntactically ill-formed words is outlined.
cmp-lg/9504025
Discourse Processing of Dialogues with Multiple Threads
cmp-lg cs.CL
In this paper we will present our ongoing work on a plan-based discourse processor developed in the context of the Enthusiast Spanish to English translation system as part of the JANUS multi-lingual speech-to-speech translation system. We will demonstrate that theories of discourse which postulate a strict tree structure of discourse on either the intentional or attentional level are not totally adequate for handling spontaneous dialogues. We will present our extension to this approach along with its implementation in our plan-based discourse processor. We will demonstrate that the implementation of our approach outperforms an implementation based on the strict tree structure approach.
cmp-lg/9504026
The intersection of Finite State Automata and Definite Clause Grammars
cmp-lg cs.CL
Bernard Lang defines parsing as the calculation of the intersection of a FSA (the input) and a CFG. Viewing the input for parsing as a FSA rather than as a string combines well with some approaches in speech understanding systems, in which parsing takes a word lattice as input (rather than a word string). Furthermore, certain techniques for robust parsing can be modelled as finite state transducers. In this paper we investigate how we can generalize this approach for unification grammars. In particular we will concentrate on how we might compute the intersection of a FSA and a DCG. It is shown that existing parsing algorithms can be easily extended for FSA inputs. However, we also show that the termination properties change drastically: we show that it is undecidable whether the intersection of a FSA and a DCG is empty (even if the DCG is off-line parsable). Furthermore we discuss approaches to cope with the problem.
cmp-lg/9504027
An Efficient Generation Algorithm for Lexicalist MT
cmp-lg cs.CL
The lexicalist approach to Machine Translation offers significant advantages in the development of linguistic descriptions. However, the Shake-and-Bake generation algorithm of (Whitelock, COLING-92) is NP-complete. We present a polynomial time algorithm for lexicalist MT generation provided that sufficient information can be transferred to ensure more determinism.
cmp-lg/9504028
Memoization of Coroutined Constraints
cmp-lg cs.CL
Some linguistic constraints cannot be effectively resolved during parsing at the location in which they are most naturally introduced. This paper shows how constraints can be propagated in a memoizing parser (such as a chart parser) in much the same way that variable bindings are, providing a general treatment of constraint coroutining in memoization. Prolog code for a simple application of our technique to Bouma and van Noord's (1994) categorial grammar analysis of Dutch is provided.
cmp-lg/9504029
Quantifiers, Anaphora, and Intensionality
cmp-lg cs.CL
The relationship between Lexical-Functional Grammar (LFG) {\em functional structures} (f-structures) for sentences and their semantic interpretations can be expressed directly in a fragment of linear logic in a way that correctly explains the constrained interactions between quantifier scope ambiguity, bound anaphora and intensionality. This deductive approach to semantic interpretation obviates the need for additional mechanisms, such as Cooper storage, to represent the possible scopes of a quantified NP, and explains the interactions between quantified NPs, anaphora and intensional verbs such as `seek'. A single specification in linear logic of the argument requirements of intensional verbs is sufficient to derive the correct reading predictions for intensional-verb clauses both with nonquantified and with quantified direct objects. In particular, both de dicto and de re readings are derived for quantified objects. The effects of type-raising or quantifying-in rules in other frameworks here just follow as linear-logic theorems. While our approach resembles current categorial approaches in important ways, it differs from them in allowing the greater type flexibility of categorial semantics while maintaining a precise connection to syntax. As a result, we are able to provide derivations for certain readings of sentences with intensional verbs and complex direct objects that are not derivable in current purely categorial accounts of the syntax-semantics interface.
cmp-lg/9504030
Statistical Decision-Tree Models for Parsing
cmp-lg cs.CL
Syntactic natural language parsers have shown themselves to be inadequate for processing highly-ambiguous large-vocabulary text, as is evidenced by their poor performance on domains like the Wall Street Journal, and by the movement away from parsing-based approaches to text-processing in general. In this paper, I describe SPATTER, a statistical parser based on decision-tree learning techniques which constructs a complete parse for every sentence and achieves accuracy rates far better than any published result. This work is based on the following premises: (1) grammars are too complex and detailed to develop manually for most interesting domains; (2) parsing models must rely heavily on lexical and contextual information to analyze sentences accurately; and (3) existing {$n$}-gram modeling techniques are inadequate for parsing models. In experiments comparing SPATTER with IBM's computer manuals parser, SPATTER significantly outperforms the grammar-based parser. Evaluating SPATTER against the Penn Treebank Wall Street Journal corpus using the PARSEVAL measures, SPATTER achieves 86\% precision, 86\% recall, and 1.3 crossing brackets per sentence for sentences of 40 words or less, and 91\% precision, 90\% recall, and 0.5 crossing brackets for sentences between 10 and 20 words in length.
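The crossing-brackets figure quoted in this abstract is defined over constituent spans. A hypothetical PARSEVAL-style sketch (the span representation is my own, not from the paper): a candidate constituent crosses the gold standard if the two spans partially overlap without one containing the other.

```python
def crossing_brackets(candidate, gold):
    """Count candidate constituents whose (start, end) span crosses
    some gold-standard constituent span.  Spans are half-open
    word-index pairs; identical or nested spans do not cross."""
    def crosses(a, b):
        return (a[0] < b[0] < a[1] < b[1]) or (b[0] < a[0] < b[1] < a[1])
    return sum(1 for c in candidate if any(crosses(c, g) for g in gold))
```

Averaging this count over all sentences gives a per-sentence crossing-brackets score of the kind reported above.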
cmp-lg/9504031
Error-tolerant Finite State Recognition with Applications to Morphological Analysis and Spelling Correction
cmp-lg cs.CL
Error-tolerant recognition enables the recognition of strings that deviate mildly from any string in the regular set recognized by the underlying finite state recognizer. Such recognition has applications in error-tolerant morphological processing, spelling correction, and approximate string matching in information retrieval. After a description of the concepts and algorithms involved, we give examples from two applications: In the context of morphological analysis, error-tolerant recognition allows misspelled input word forms to be corrected, and morphologically analyzed concurrently. We present an application of this to error-tolerant analysis of agglutinative morphology of Turkish words. The algorithm can be applied to morphological analysis of any language whose morphology is fully captured by a single (and possibly very large) finite state transducer, regardless of the word formation processes and morphographemic phenomena involved. In the context of spelling correction, error-tolerant recognition can be used to enumerate correct candidate forms from a given misspelled string within a certain edit distance. Again, it can be applied to any language with a word list comprising all inflected forms, or whose morphology is fully described by a finite state transducer. We present experimental results for spelling correction for a number of languages. These results indicate that such recognition works very efficiently for candidate generation in spelling correction for many European languages such as English, Dutch, French, German, Italian (and others) with very large word lists of root and inflected forms (some containing well over 200,000 forms), generating all candidate solutions within 10 to 45 milliseconds (with edit distance 1) on a SparcStation 10/41. For spelling correction in Turkish, error-tolerant
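The candidate-generation step for edit distance 1 can be sketched over an explicit word list (the paper instead runs the search over a finite state recognizer, which is what makes it efficient for agglutinative morphology; the lexicon below is invented):

```python
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings within Damerau edit distance 1 of word:
    deletions, insertions, substitutions and transpositions."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    substitutes = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    return set(deletes + inserts + substitutes + transposes)

def correct(word, lexicon):
    """Return correction candidates for word within edit distance 1."""
    if word in lexicon:
        return {word}
    return edits1(word) & lexicon
```

Intersecting the edit-distance-1 neighbourhood with the lexicon is the naive analogue of traversing the recognizer with a bounded error counter.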
cmp-lg/9504032
The Replace Operator
cmp-lg cs.CL
This paper introduces to the calculus of regular expressions a replace operator, ->, and defines a set of replacement expressions that concisely encode several alternate variations of the operation. The basic case is unconditional obligatory replacement: UPPER -> LOWER Conditional versions of replacement, such as, UPPER -> LOWER || LEFT _ RIGHT constrain the operation by left and right contexts. UPPER, LOWER, LEFT, and RIGHT may be regular expressions of any complexity. Replace expressions denote regular relations. The replace operator is defined in terms of other regular expression operators using techniques introduced by Ronald M. Kaplan and Martin Kay in "Regular Models of Phonological Rule Systems" (Computational Linguistics 20:3 331-378. 1994).
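As a rough illustration only, the behaviour of the conditional replacement UPPER -> LOWER || LEFT _ RIGHT can be mimicked with regular-expression lookaround; the paper itself defines the operator within the finite-state calculus, where it denotes a regular relation rather than a string substitution:

```python
import re

def conditional_replace(upper, lower, left="", right=""):
    """Approximate UPPER -> LOWER || LEFT _ RIGHT with regex lookaround.
    A sketch, not the paper's construction: left must be fixed-width
    (Python lookbehind restriction), and contexts are plain regexes."""
    pattern = upper
    if left:
        pattern = f"(?<={left})" + pattern
    if right:
        pattern = pattern + f"(?={right})"
    return lambda s: re.sub(pattern, lower, s)
```

With empty contexts this reduces to the basic unconditional obligatory replacement.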
cmp-lg/9504033
Corpus Statistics Meet the Noun Compound: Some Empirical Results
cmp-lg cs.CL
A variety of statistical methods for noun compound analysis are implemented and compared. The results support two main conclusions. First, the use of conceptual association not only enables a broad coverage, but also improves the accuracy. Second, an analysis model based on dependency grammar is substantially more accurate than one based on deepest constituents, even though the latter is more prevalent in the literature.
cmp-lg/9504034
Bayesian Grammar Induction for Language Modeling
cmp-lg cs.CL
We describe a corpus-based induction algorithm for probabilistic context-free grammars. The algorithm employs a greedy heuristic search within a Bayesian framework, and a post-pass using the Inside-Outside algorithm. We compare the performance of our algorithm to n-gram models and the Inside-Outside algorithm in three language modeling tasks. In two of the tasks, the training data is generated by a probabilistic context-free grammar and in both tasks our algorithm outperforms the other techniques. The third task involves naturally-occurring data, and in this task our algorithm does not perform as well as n-gram models but vastly outperforms the Inside-Outside algorithm.
cmp-lg/9505001
Response Generation in Collaborative Negotiation
cmp-lg cs.CL
In collaborative planning activities, since the agents are autonomous and heterogeneous, it is inevitable that conflicts arise in their beliefs during the planning process. In cases where such conflicts are relevant to the task at hand, the agents should engage in collaborative negotiation as an attempt to square away the discrepancies in their beliefs. This paper presents a computational strategy for detecting conflicts regarding proposed beliefs and for engaging in collaborative negotiation to resolve the conflicts that warrant resolution. Our model is capable of selecting the most effective aspect to address in its pursuit of conflict resolution in cases where multiple conflicts arise, and of selecting appropriate evidence to justify the need for such modification. Furthermore, by capturing the negotiation process in a recursive Propose-Evaluate-Modify cycle of actions, our model can successfully handle embedded negotiation subdialogues.
cmp-lg/9505002
New Techniques for Context Modeling
cmp-lg cs.CL
We introduce three new techniques for statistical language models: extension modeling, nonmonotonic contexts, and the divergence heuristic. Together these techniques result in language models that have few states, even fewer parameters, and low message entropies. For example, our techniques achieve a message entropy of 1.97 bits/char on the Brown corpus using only 89,325 parameters. In contrast, the character 4-gram model requires more than 250 times as many parameters in order to achieve a message entropy of only 2.47 bits/char. The fact that our model performs significantly better while using vastly fewer parameters indicates that it is a better probability model of natural language text.
cmp-lg/9505003
Compiling HPSG type constraints into definite clause programs
cmp-lg cs.CL
We present a new approach to HPSG processing: compiling HPSG grammars expressed as type constraints into definite clause programs. This provides a clear and computationally useful correspondence between linguistic theories and their implementation. The compiler performs off-line constraint inheritance and code optimization. As a result, we are able to efficiently process with HPSG grammars without having to hand-translate them into definite clause or phrase structure based systems.
cmp-lg/9505004
DATR Theories and DATR Models
cmp-lg cs.CL
Evans and Gazdar introduced DATR as a simple, non-monotonic language for representing natural language lexicons. Although a number of implementations of DATR exist, the full language has until now lacked an explicit, declarative semantics. This paper rectifies the situation by providing a mathematical semantics for DATR. We present a view of DATR as a language for defining certain kinds of partial functions by cases. The formal model provides a transparent treatment of DATR's notion of global context. It is shown that DATR's default mechanism can be accounted for by interpreting value descriptors as families of values indexed by paths.
cmp-lg/9505005
Learning Syntactic Rules and Tags with Genetic Algorithms for Information Retrieval and Filtering: An Empirical Basis for Grammatical Rules
cmp-lg cs.CL
The grammars of natural languages may be learned by using genetic algorithms that reproduce and mutate grammatical rules and part-of-speech tags, improving the quality of later generations of grammatical components. Syntactic rules are randomly generated and then evolve; those rules resulting in improved parsing and occasionally improved retrieval and filtering performance are allowed to further propagate. The LUST system learns the characteristics of the language or sublanguage used in document abstracts by learning from the document rankings obtained from the parsed abstracts. Unlike the application of traditional linguistic rules to retrieval and filtering applications, LUST develops grammatical structures and tags without the prior imposition of some common grammatical assumptions (e.g., part-of-speech assumptions), producing grammars that are empirically based and are optimized for this particular application.
cmp-lg/9505006
Treating Coordination with Datalog Grammars
cmp-lg cs.CL
In previous work we studied a new type of DCGs, Datalog grammars, which are inspired on database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a meta-grammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
cmp-lg/9505007
Parsing a Flexible Word Order Language
cmp-lg cs.CL
A logic formalism is presented which increases the expressive power of the ID/LP format of GPSG by enlarging the inventory of ordering relations and extending the domain of their application to non-siblings. This allows a concise, modular and declarative statement of intricate word order regularities.
cmp-lg/9505008
Conciseness through Aggregation in Text Generation
cmp-lg cs.CL
Aggregating different pieces of similar information is necessary to generate concise and easy to understand reports in technical domains. This paper presents a general algorithm that combines similar messages in order to generate one or more coherent sentences for them. The process is not as trivial as might be expected. Problems encountered are briefly described.
cmp-lg/9505009
Compilation of HPSG to TAG
cmp-lg cs.CL
We present an implemented compilation algorithm that translates HPSG into lexicalized feature-based TAG, relating concepts of the two theories. While HPSG has a more elaborated principle-based theory of possible phrase structures, TAG provides the means to represent lexicalized structures more explicitly. Our objectives are met by giving clear definitions that determine the projection of structures from the lexicon, and identify maximal projections, auxiliary trees and foot nodes.
cmp-lg/9505010
Tagset Reduction Without Information Loss
cmp-lg cs.CL
A technique for reducing a tagset used for n-gram part-of-speech disambiguation is introduced and evaluated in an experiment. The technique ensures that all information that is provided by the original tagset can be restored from the reduced one. This is crucial, since we are interested in the linguistically motivated tags for part-of-speech disambiguation. The reduced tagset needs fewer parameters for its statistical model and allows more accurate parameter estimation. Additionally, there is a slight but not significant improvement of tagging accuracy.
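The losslessness requirement can be stated operationally: a merge of tags is admissible only if the original tag remains recoverable from the word together with its reduced tag. A toy check of that property (corpus and merge map invented, not the paper's):

```python
def check_lossless(tagged_corpus, merge_map):
    """Apply a candidate tag merge and verify it loses no information:
    for every token, the original tag must be recoverable from the
    pair (word, reduced tag).  Returns the restoration table, or
    raises ValueError if the proposed merge is lossy."""
    restore = {}
    for word, tag in tagged_corpus:
        key = (word, merge_map.get(tag, tag))
        if restore.setdefault(key, tag) != tag:
            raise ValueError(f"merge is lossy at {key}")
    return restore
```

The n-gram model is then trained on the reduced tags, and the restoration table maps its output back to the original linguistically motivated tagset.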
cmp-lg/9505011
Evaluation of Semantic Clusters
cmp-lg cs.CL
Semantic clusters of a domain form an important feature that can be useful for performing syntactic and semantic disambiguation. Several attempts have been made to extract the semantic clusters of a domain by probabilistic or taxonomic techniques. However, not much progress has been made in evaluating the obtained semantic clusters. This paper focuses on an evaluation mechanism that can be used to evaluate semantic clusters produced by a system against those provided by human experts.
cmp-lg/9505012
A Symbolic and Surgical Acquisition of Terms through Variation
cmp-lg cs.CL
Terminological acquisition is an important issue in learning for NLP due to the constant terminological renewal through technological changes. Terms play a key role in several NLP-activities such as machine translation, automatic indexing or text understanding. In opposition to classical once-and-for-all approaches, we propose an incremental process for terminological enrichment which operates on existing reference lists and large corpora. Candidate terms are acquired by extracting variants of reference terms through {\em FASTR}, a unification-based partial parser. As acquisition is performed within specific morpho-syntactic contexts (coordinations, insertions or permutations of compounds), rich conceptual links are learned together with candidate terms. A clustering of terms related through coordination yields classes of conceptually close terms while graphs resulting from insertions denote generic/specific relations. A graceful degradation of the volume of acquisition on partial initial lists confirms the robustness of the method to incomplete data.
cmp-lg/9505013
Utilizing Statistical Dialogue Act Processing in Verbmobil
cmp-lg cs.CL
In this paper, we present a statistical approach for dialogue act processing in the dialogue component of the speech-to-speech translation system Verbmobil. Statistics in dialogue processing is used to predict follow-up dialogue acts. As an application example we show how it supports repair when unexpected dialogue states occur.
cmp-lg/9505014
Compositionality for Presuppositions over Tableaux
cmp-lg cs.CL
Tableaux originate as a decision method for a logical language. They can also be extended to obtain a structure that spells out all the information in a set of sentences in terms of truth value assignments to atomic formulas that appear in them. This approach is pursued here. Over such a structure, compositional rules are provided for obtaining the presuppositions of a logical statement from its atomic subformulas and their presuppositions. The rules are based on classical logic semantics and they are shown to model the behaviour of presuppositions observed in natural language sentences built with {\em if \ldots then}, {\em and} and {\em or}. The advantages of this method over existing frameworks for presuppositions are discussed.
cmp-lg/9505015
Efficient Analysis of Complex Diagrams using Constraint-Based Parsing
cmp-lg cs.CL
This paper describes substantial advances in the analysis (parsing) of diagrams using constraint grammars. The addition of set types to the grammar and spatial indexing of the data make it possible to efficiently parse real diagrams of substantial complexity. The system is probably the first to demonstrate efficient diagram parsing using grammars that can easily be retargeted to other domains. The work assumes that the diagrams are available as a flat collection of graphics primitives: lines, polygons, circles, Bezier curves and text. This is appropriate for future electronic documents or for vectorized diagrams converted from scanned images. The classes of diagrams that we have analyzed include x,y data graphs and genetic diagrams drawn from the biological literature, as well as finite state automata diagrams (states and arcs). As an example, parsing a four-part data graph composed of 133 primitives required 35 sec using Macintosh Common Lisp on a Macintosh Quadra 700.
cmp-lg/9505016
A Pattern Matching method for finding Noun and Proper Noun Translations from Noisy Parallel Corpora
cmp-lg cs.CL
We present a pattern matching method for compiling a bilingual lexicon of nouns and proper nouns from unaligned, noisy parallel texts of Asian/Indo-European language pairs. Tagging information of one language is used. Word frequency and position information for high and low frequency words are represented in two different vector forms for pattern matching. New anchor point finding and noise elimination techniques are introduced. We obtained a 73.1\% precision. We also show how the results can be used in the compilation of domain-specific noun phrases.
cmp-lg/9505017
Robust Parsing of Spoken Dialogue Using Contextual Knowledge and Recognition Probabilities
cmp-lg cs.CL
In this paper we describe the linguistic processor of a spoken dialogue system. The parser receives a word graph from the recognition module as its input. Its task is to find the best path through the graph. If no complete solution can be found, a robust mechanism for selecting multiple partial results is applied. We show how the information content rate of the results can be improved if the selection is based on an integrated quality score combining word recognition scores and context-dependent semantic predictions. Results of parsing word graphs with and without predictions are reported.
cmp-lg/9505018
Acquiring a Lexicon from Unsegmented Speech
cmp-lg cs.CL
We present work-in-progress on the machine acquisition of a lexicon from sentences that are each an unsegmented phone sequence paired with a primitive representation of meaning. A simple exploratory algorithm is described, along with the direction of current work and a discussion of the relevance of the problem for child language acquisition and computer speech recognition.
cmp-lg/9505019
Measuring semantic complexity
cmp-lg cs.CL
We define {\em semantic complexity} using a new concept of {\em meaning automata}. We measure the semantic complexity of understanding of prepositional phrases, of an "in depth understanding system", and of a natural language interface to an on-line calendar. We argue that it is possible to measure some semantic complexities of natural language processing systems before building them, and that systems that exhibit relatively complex behavior can be built from semantically simple components.
cmp-lg/9505020
CRYSTAL: Inducing a Conceptual Dictionary
cmp-lg cs.CL
One of the central knowledge sources of an information extraction system is a dictionary of linguistic patterns that can be used to identify the conceptual content of a text. This paper describes CRYSTAL, a system which automatically induces a dictionary of "concept-node definitions" sufficient to identify relevant information from a training corpus. Each of these concept-node definitions is generalized as far as possible without producing errors, so that a minimum number of dictionary entries cover the positive training instances. Because it tests the accuracy of each proposed definition, CRYSTAL can often surpass human intuitions in creating reliable extraction rules.
cmp-lg/9505021
Improving the Efficiency of a Generation Algorithm for Shake and Bake Machine Translation Using Head-Driven Phrase Structure Grammar
cmp-lg cs.CL
A Shake and Bake machine translation algorithm for Head-Driven Phrase Structure Grammar is introduced based on the algorithm proposed by Whitelock for unification categorial grammar. The translation process is then analysed to determine where the potential sources of inefficiency reside, and some proposals are introduced which greatly improve the efficiency of the generation algorithm. Preliminary empirical results from tests involving a small grammar are presented, and suggestions for greater improvement to the algorithm are provided.
cmp-lg/9505022
Generating One-Anaphoric Expressions: Where Does the Decision Lie?
cmp-lg cs.CL
Most natural language generation systems embody mechanisms for choosing whether to subsequently refer to an already-introduced entity by means of a pronoun or a definite noun phrase. Relatively few systems, however, consider referring to entities by means of one-anaphoric expressions such as ``the small green one''. This paper looks at what is involved in generating referring expressions of this type. Consideration of how to fit this capability into a standard algorithm for referring expression generation leads us to suggest a revision of some of the assumptions that underlie existing approaches. We demonstrate the usefulness of our approach to one-anaphora generation in the context of a simple database interface application, and make some observations about the impact of this approach on referring expression generation more generally.
cmp-lg/9505023
Some Novel Applications of Explanation-Based Learning to Parsing Lexicalized Tree-Adjoining Grammars
cmp-lg cs.CL
In this paper we present some novel applications of the Explanation-Based Learning (EBL) technique to parsing Lexicalized Tree-Adjoining Grammars. The novel aspects are (a) immediate generalization of parses in the training set, (b) generalization over recursive structures, and (c) representation of generalized parses as Finite State Transducers. A highly impoverished parser called a ``stapler'' has also been introduced. We present experimental results using EBL for different corpora and architectures to show the effectiveness of our approach.
cmp-lg/9505024
Exploring the role of Punctuation in Parsing Natural Text
cmp-lg cs.CL
Few, if any, current NLP systems make any significant use of punctuation. Intuitively, a treatment of punctuation seems necessary to the analysis and production of text. Whilst this has been suggested in the field of discourse structure, it is still unclear whether punctuation can help at the syntactic level. This investigation attempts to answer this question by parsing some corpus-based material with two similar grammars --- one including rules for punctuation, the other ignoring it. The punctuated grammar significantly outperforms the unpunctuated one, and so the conclusion is that punctuation can play a useful role in syntactic processing.
cmp-lg/9505025
Combining Multiple Knowledge Sources for Discourse Segmentation
cmp-lg cs.CL
We predict discourse segment boundaries from linguistic features of utterances, using a corpus of spoken narratives as data. We present two methods for developing segmentation algorithms from training data: hand tuning and machine learning. When multiple types of features are used, results approach human performance on an independent test set (both methods), and using cross-validation (machine learning).
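The hand-tuning method the abstract mentions amounts to weighting boundary cues and thresholding their sum; a minimal sketch, where the feature names, weights, and threshold are invented for illustration rather than taken from the paper:

```python
def boundary_score(utt):
    """Combine linguistic cues for a discourse segment boundary.
    Feature names and weights are hypothetical, hand-tuned values."""
    score = 0
    if utt.get("pause"):        # preceding silent pause
        score += 2
    if utt.get("cue_word"):     # utterance-initial cue word ("so", "now", "okay")
        score += 1
    if not utt.get("pronoun"):  # no referential tie back to the prior utterance
        score += 1
    return score

def segment(utterances, threshold=3):
    """Predict a boundary before each utterance whose cue score reaches the threshold."""
    return [i for i, u in enumerate(utterances) if boundary_score(u) >= threshold]

narrative = [
    {"pause": False, "cue_word": False, "pronoun": False},
    {"pause": True,  "cue_word": True,  "pronoun": False},  # strong boundary cues
    {"pause": False, "cue_word": False, "pronoun": True},
]
print(segment(narrative))  # → [1]
```

The machine-learning alternative replaces the hand-set weights and threshold with values induced from the labelled narratives.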
cmp-lg/9505026
Tagging the Teleman Corpus
cmp-lg cs.CL
Experiments were carried out comparing the Swedish Teleman and the English Susanne corpora using an HMM-based and a novel reductionistic statistical part-of-speech tagger. They indicate that tagging the Teleman corpus is the more difficult task, and that the performance of the two different taggers is comparable.
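For readers unfamiliar with the first of the two taggers compared, an HMM-based tagger chooses the tag sequence maximizing transition and emission probabilities, typically via the Viterbi algorithm. A self-contained sketch with an invented toy model (the probabilities below are illustrative, not estimated from either corpus):

```python
from math import log

def viterbi(words, tags, trans, emit, start):
    """Most likely tag sequence under a bigram HMM, computed in log space.
    Unseen transitions/emissions get a large penalty instead of probability zero."""
    LOW = -99.0
    V = [{t: start.get(t, LOW) + emit.get((t, words[0]), LOW) for t in tags}]
    back = []
    for w in words[1:]:
        col, ptr = {}, {}
        for t in tags:
            prev = max(tags, key=lambda p: V[-1][p] + trans.get((p, t), LOW))
            col[t] = V[-1][prev] + trans.get((prev, t), LOW) + emit.get((t, w), LOW)
            ptr[t] = prev
        V.append(col)
        back.append(ptr)
    last = max(tags, key=lambda t: V[-1][t])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy model: "can" is ambiguous between noun and verb.
tags = ["DT", "NN", "VB"]
start = {"DT": log(0.6), "NN": log(0.2), "VB": log(0.2)}
trans = {("DT", "NN"): log(0.8), ("DT", "VB"): log(0.1),
         ("NN", "VB"): log(0.7), ("NN", "NN"): log(0.2),
         ("VB", "NN"): log(0.3), ("VB", "VB"): log(0.1)}
emit = {("DT", "the"): log(0.9), ("NN", "can"): log(0.5),
        ("VB", "can"): log(0.5), ("NN", "rusts"): log(0.1),
        ("VB", "rusts"): log(0.5)}

print(viterbi(["the", "can", "rusts"], tags, trans, emit, start))
```

Here the transition probabilities, not the emissions, disambiguate "can" as a noun after the determiner.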
cmp-lg/9505027
Quantifier Scope and Constituency
cmp-lg cs.CL
Traditional approaches to quantifier scope typically need stipulation to exclude readings that are unavailable to human understanders. This paper shows that quantifier scope phenomena can be precisely characterized by a semantic representation constrained by surface constituency, if the distinction between referential and quantificational NPs is properly observed. A CCG implementation is described and compared to other approaches.
cmp-lg/9505028
D-Tree Grammars
cmp-lg cs.CL
DTG are designed to share some of the advantages of TAG while overcoming some of its limitations. DTG involve two composition operations called subsertion and sister-adjunction. The most distinctive feature of DTG is that, unlike TAG, there is complete uniformity in the way that the two DTG operations relate lexical items: subsertion always corresponds to complementation and sister-adjunction to modification. Furthermore, DTG, unlike TAG, can provide a uniform analysis for {\em wh}-movement in English and Kashmiri, despite the fact that the {\em wh} element in Kashmiri appears in sentence-second position, and not sentence-initial position as in English.
cmp-lg/9505029
Mapping Scrambled Korean Sentences into English Using Synchronous TAGs
cmp-lg cs.CL
Synchronous Tree Adjoining Grammars can be used for Machine Translation. However, translating a free order language such as Korean to English is complicated. I present a mechanism to translate scrambled Korean sentences into English by combining the concepts of Multi-Component TAGs (MC-TAGs) and Synchronous TAGs (STAGs).
cmp-lg/9505030
Encoding Lexicalized Tree Adjoining Grammars with a Nonmonotonic Inheritance Hierarchy
cmp-lg cs.CL
This paper shows how DATR, a widely used formal language for lexical knowledge representation, can be used to define an LTAG lexicon as an inheritance hierarchy with internal lexical rules. A bottom-up featural encoding is used for LTAG trees and this allows lexical rules to be implemented as covariation constraints within feature structures. Such an approach eliminates the considerable redundancy otherwise associated with an LTAG lexicon.
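The redundancy elimination rests on defeasible inheritance: a lexical class inherits feature values from its parent unless it states a local override. A toy Python analogue of that mechanism (this is an illustration of the idea, not DATR syntax or the paper's LTAG encoding):

```python
class Node:
    """A node in an inheritance hierarchy: local feature values
    defeat values inherited from the parent (nonmonotonic lookup)."""
    def __init__(self, parent=None, **local):
        self.parent, self.local = parent, local

    def get(self, feature):
        if feature in self.local:
            return self.local[feature]  # local override wins
        if self.parent is not None:
            return self.parent.get(feature)
        raise KeyError(feature)

# Hypothetical fragment of a verb hierarchy.
verb       = Node(subcat="np", voice="active")
trans_verb = Node(parent=verb)                         # inherits everything
passive    = Node(parent=trans_verb, voice="passive")  # overrides one default

print(passive.get("subcat"), passive.get("voice"))  # → np passive
```

Only the exceptional value ("voice" for the passive class) is stated locally; everything else is inherited, which is exactly how such a hierarchy removes redundancy from the lexicon.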
cmp-lg/9505031
The Compactness of Construction Grammars
cmp-lg cs.CL
We present an argument for {\em construction grammars} based on the minimum description length (MDL) principle (a formal version of Ockham's Razor). The argument consists in using linguistic and computational evidence in setting up a formal model, and then applying the MDL principle to prove its superiority with respect to alternative models. We show that construction-based representations are at least an order of magnitude more compact than the corresponding lexicalized representations of the same linguistic data. The result is significant for our understanding of the relationship between syntax and semantics, and consequently for choosing NLP architectures: for instance, whether processing should proceed in a pipeline from syntax to semantics to pragmatics, or whether all linguistic information should be combined in a set of constraints. From a broader perspective, this paper not only argues for a certain model of processing, but also provides a methodology for determining the advantages of different approaches to NLP.
cmp-lg/9505032
Context and ontology in understanding of dialogs
cmp-lg cs.CL
We present a model of NLP in which ontology and context are directly included in a grammar. The model is based on the concept of {\em construction}, consisting of a set of features of form, a set of semantic and pragmatic conditions describing its application context, and a description of its meaning. In this model ontology is embedded into the grammar; e.g. the hierarchy of {\it np} constructions is based on the corresponding ontology. Ontology is also used in defining contextual parameters; e.g. $\left[ current\_question \ time(\_) \right] $. A parser based on this model allowed us to build a set of dialog understanding systems that include an on-line calendar, a banking machine, and an insurance quote system. The proposed approach is an alternative to the standard "pipeline" design of morphology-syntax-semantics-pragmatics; the account of meaning conforms to our intuitions about compositionality, but there is no homomorphism from syntax to semantics.
cmp-lg/9505033
User-Defined Nonmonotonicity in Unification-Based Formalisms
cmp-lg cs.CL
A common feature of recent unification-based grammar formalisms is that they give the user the ability to define his own structures. However, this possibility is mostly limited and does not include nonmonotonic operations. In this paper we show how nonmonotonic operations can also be user-defined by applying default logic (Reiter 1980) and generalizing previous results on nonmonotonic sorts (Young & Rounds 1993).
cmp-lg/9505034
Semantic Ambiguity and Perceived Ambiguity
cmp-lg cs.CL
I explore some of the issues that arise when trying to establish a connection between the underspecification hypothesis pursued in the NLP literature and work on ambiguity in semantics and in the psychological literature. A theory of underspecification is developed `from first principles', i.e., starting from a definition of what it means for a sentence to be semantically ambiguous and from what we know about the way humans deal with ambiguity. An underspecified language is specified as the translation language of a grammar covering sentences that display three classes of semantic ambiguity: lexical ambiguity, scopal ambiguity, and referential ambiguity. The expressions of this language denote sets of senses. A formalization of defeasible reasoning with underspecified representations is presented, based on Default Logic. Some issues to be confronted by such a formalization are discussed.
cmp-lg/9505035
Development of a Spanish Version of the Xerox Tagger
cmp-lg cs.CL
This paper describes work performed within the CRATER ({\em C}orpus {\em R}esources {\em A}nd {\em T}erminology {\em E}xt{\em R}action, MLAP-93/20) project, funded by the Commission of the European Communities. In particular, it addresses the issue of adapting the Xerox Tagger to Spanish in order to tag the Spanish version of the ITU (International Telecommunications Union) corpus. The model implemented by this tagger is briefly presented, along with some modifications performed on it in order to use some parameters not probabilistically estimated. Initial decisions, such as the tagset, the lexicon and the training corpus, are also discussed. Finally, results are presented and the benefits of the {\em mixed model} justified.
cmp-lg/9505036
Integrating Gricean and Attentional Constraints
cmp-lg cs.CL
This paper concerns how to generate and understand discourse anaphoric noun phrases. I present the results of an analysis of all discourse anaphoric noun phrases (N=1,233) in a corpus of ten narrative monologues, where the choice between a definite pronoun or phrasal NP conforms largely to Gricean constraints on informativeness. I discuss Dale and Reiter's [To appear] recent model and show how it can be augmented for understanding as well as generating the range of data presented here. I argue that integrating centering [Grosz et al., 1983] [Kameyama, 1985] with this model can be applied uniformly to discourse anaphoric pronouns and phrasal NPs. I conclude with a hypothesis for addressing the interaction between local and global discourse processing.
cmp-lg/9505037
Identifying Word Translations in Non-Parallel Texts
cmp-lg cs.CL
Common algorithms for sentence and word-alignment allow the automatic identification of word translations from parallel texts. This study suggests that the identification of word translations should also be possible with non-parallel and even unrelated texts. The method proposed is based on the assumption that there is a correlation between the patterns of word co-occurrences in texts of different languages.
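The underlying assumption can be illustrated concretely: if the co-occurrence vector of each word is expressed over a small seed lexicon of known translation pairs, a word and its translation should have correlated vectors. A minimal sketch with invented co-occurrence counts (the vectors, dimensions, and similarity measure are assumptions for illustration, not the paper's actual data or method):

```python
def cosine(u, v):
    """Cosine similarity between two co-occurrence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) * sum(b * b for b in v)) ** 0.5
    return dot / norm if norm else 0.0

# Hypothetical co-occurrence vectors; each dimension is one seed pair,
# here (school/Schule, water/Wasser, book/Buch).
english = {"teacher": [8, 1, 5], "river": [0, 7, 1]}
german  = {"Lehrer":  [9, 0, 4], "Fluss":  [1, 8, 0]}

def best_translation(word, source, target):
    """Pick the target word whose co-occurrence pattern correlates best."""
    return max(target, key=lambda t: cosine(source[word], target[t]))

print(best_translation("teacher", english, german))  # → Lehrer
```

No aligned sentences are required: only the monolingual co-occurrence statistics of each text and a way to put their dimensions into correspondence.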
cmp-lg/9505038
Ubiquitous Talker: Spoken Language Interaction with Real World Objects
cmp-lg cs.CL
Augmented reality is a research area that tries to embody an electronic information space within the real world, through computational devices. A crucial issue within this area is the recognition of real world objects or situations. In natural language processing, it is much easier to determine interpretations of utterances, even if they are ill-formed, when the context or situation is fixed. We therefore introduce robust, natural language processing into a system of augmented reality with situation awareness. Based on this idea, we have developed a portable system, called the Ubiquitous Talker. This consists of an LCD display that reflects the scene at which a user is looking as if it were transparent glass, a CCD camera for recognizing real world objects with color-bar ID codes, a microphone for recognizing a human voice and a speaker which outputs a synthesized voice. The Ubiquitous Talker provides its user with some information related to a recognized object, by using the display and voice. It also accepts requests or questions as voice inputs. The user feels as if he/she is talking with the object itself through the system.
cmp-lg/9505039
Generating efficient belief models for task-oriented dialogues
cmp-lg cs.CL
We have shown that belief modelling for dialogue can be simplified if the assumption is made that the participants are cooperating, i.e., they are not committed to any goals requiring deception. In such domains, there is no need to maintain individual representations of deeply nested beliefs; instead, three specific types of belief can be used to summarize all the states of nested belief that can exist about a domain entity. Here, we set out to design a ``compiler'' for belief models. This system will accept as input a description of agents' interactions with a task domain expressed in a fully-expressive belief logic with non-monotonic and temporal extensions. It generates an operational belief model for use in that domain, sufficient for the requirements of cooperative dialogue, including the negotiation of complex domain plans. The compiled model incorporates the belief simplification mentioned above, and also uses a simplified temporal logic of belief based on the restricted circumstances under which beliefs can change. We shall review the motivation for creating such a system, and introduce a general procedure for taking a logical specification for a domain and processing it into an operational model. We shall then discuss the specific changes that are made during this procedure for limiting the level of abstraction at which the concepts of belief nesting, default reasoning and time are expressed. Finally we shall go through a worked example relating to the Map Task, a simple cooperative problem-solving exercise.
cmp-lg/9505040
Text Chunking using Transformation-Based Learning
cmp-lg cs.CL
Eric Brill introduced transformation-based learning and showed that it can do part-of-speech tagging with fairly high accuracy. The same method can be applied at a higher level of textual interpretation for locating chunks in the tagged text, including non-recursive ``baseNP'' chunks. For this purpose, it is convenient to view chunking as a tagging problem by encoding the chunk structure in new tags attached to each word. In automatic tests using Treebank-derived data, this technique achieved recall and precision rates of roughly 92% for baseNP chunks and 88% for somewhat more complex chunks that partition the sentence. Some interesting adaptations to the transformation-based learning approach are also suggested by this application.
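The key move — "encoding the chunk structure in new tags attached to each word" — can be made concrete with an I/O/B scheme, where B marks a word that starts a chunk immediately after another chunk; the encoding below is one common variant, and the input representation is invented for illustration:

```python
def chunks_to_tags(tokens):
    """Encode bracketed chunks as per-word tags: 'I' inside a chunk,
    'O' outside, 'B' at a chunk start that directly follows another chunk."""
    tags, prev_in_chunk = [], False
    for word, in_chunk, starts_chunk in tokens:
        if not in_chunk:
            tags.append("O")
            prev_in_chunk = False
        elif starts_chunk and prev_in_chunk:
            tags.append("B")   # needed to keep two adjacent chunks apart
            prev_in_chunk = True
        else:
            tags.append("I")
            prev_in_chunk = True
    return tags

# "[the cat] [the dog] chased sat": adjacent NPs need a B to mark the split.
tokens = [("the", True, True), ("cat", True, False),
          ("the", True, True), ("dog", True, False),
          ("chased", False, False), ("sat", False, False)]
print(chunks_to_tags(tokens))  # → ['I', 'I', 'B', 'I', 'O', 'O']
```

Once chunking is recast this way, any word-level tagger — including a transformation-based one — can learn the chunk tags exactly as it learns part-of-speech tags.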