Fields per record: id, title, categories, abstract.
cmp-lg/9505041
On Descriptive Complexity, Language Complexity, and GB
cmp-lg cs.CL
We introduce $L^2_{K,P}$, a monadic second-order language for reasoning about trees which characterizes the strongly Context-Free Languages in the sense that a set of finite trees is definable in $L^2_{K,P}$ iff it is (modulo a projection) a Local Set---the set of derivation trees generated by a CFG. This provides a flexible approach to establishing language-theoretic complexity results for formalisms that are based on systems of well-formedness constraints on trees. We demonstrate this technique by sketching two such results for Government and Binding Theory. First, we show that {\em free-indexation\/}, the mechanism assumed to mediate a variety of agreement and binding relationships in GB, is not definable in $L^2_{K,P}$ and therefore not enforceable by CFGs. Second, we show how, in spite of this limitation, a reasonably complete GB account of English can be defined in $L^2_{K,P}$. Consequently, the language licensed by that account is strongly context-free. We illustrate some of the issues involved in establishing this result by looking at the definition, in $L^2_{K,P}$, of chains. The limitations of this definition provide some insight into the types of natural linguistic principles that correspond to higher levels of language complexity. We close with some speculation on the possible significance of these results for generative linguistics.
cmp-lg/9505042
Robust Parsing Based on Discourse Information: Completing partial parses of ill-formed sentences on the basis of discourse information
cmp-lg cs.CL
In a consistent text, many words and phrases are repeatedly used in more than one sentence. When an identical phrase (a set of consecutive words) is repeated in different sentences, the constituent words of those sentences tend to be associated in identical modification patterns with identical parts of speech and identical modifiee-modifier relationships. Thus, when a syntactic parser cannot parse a sentence as a unified structure, parts of speech and modifiee-modifier relationships among morphologically identical words in complete parses of other sentences within the same text provide useful information for obtaining partial parses of the sentence. In this paper, we describe a method for completing partial parses by maintaining consistency among morphologically identical words within the same text as regards their part of speech and their modifiee-modifier relationship. The experimental results obtained by using this method with technical documents offer good prospects for improving the accuracy of sentence analysis in a broad-coverage natural language processing system such as a machine translation system.
cmp-lg/9505043
Using Decision Trees for Coreference Resolution
cmp-lg cs.CL
This paper describes RESOLVE, a system that uses decision trees to learn how to classify coreferent phrases in the domain of business joint ventures. An experiment is presented in which the performance of RESOLVE is compared to the performance of a manually engineered set of rules for the same task. The results show that decision trees achieve higher performance than the rules in two of three evaluation metrics developed for the coreference task. In addition to achieving better performance than the rules, RESOLVE provides a framework that facilitates the exploration of the types of knowledge that are useful for solving the coreference problem.
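The decision-tree approach the abstract evaluates can be illustrated with a minimal ID3-style learner over binary coreference features. This is a sketch only: RESOLVE's actual learner and feature set are not reproduced here, and the feature names below are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def build_tree(examples, features):
    """Greedy ID3 induction over binary features.
    examples: list of (feature_dict, label) pairs."""
    labels = [y for _, y in examples]
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority label
    def split_entropy(f):
        total = 0.0
        for v in (True, False):
            part = [y for x, y in examples if x[f] == v]
            if part:
                total += len(part) / len(examples) * entropy(part)
        return total
    best = min(features, key=split_entropy)       # most informative feature
    rest = [f for f in features if f != best]
    branches = {}
    for v in (True, False):
        sub = [(x, y) for x, y in examples if x[best] == v]
        branches[v] = (build_tree(sub, rest) if sub
                       else Counter(labels).most_common(1)[0][0])
    return (best, branches)

def classify(tree, x):
    """Follow branches until a leaf label is reached."""
    while isinstance(tree, tuple):
        feature, branches = tree
        tree = branches[x[feature]]
    return tree
```

Trained on phrase-pair feature vectors (e.g., string match, number agreement) labelled coreferent or not, such a tree makes the learned classification criteria inspectable, which is part of what the abstract means by exploring the types of knowledge useful for coreference.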
cmp-lg/9505044
Automatic Evaluation and Uniform Filter Cascades for Inducing N-Best Translation Lexicons
cmp-lg cs.CL
This paper shows how to induce an N-best translation lexicon from a bilingual text corpus using statistical properties of the corpus together with four external knowledge sources. The knowledge sources are cast as filters, so that any subset of them can be cascaded in a uniform framework. A new objective evaluation measure is used to compare the quality of lexicons induced with different filter cascades. The best filter cascades improve lexicon quality by up to 137% over the plain vanilla statistical method, and approach human performance. Drastically reducing the size of the training corpus has a much smaller impact on lexicon quality when these knowledge sources are used. This makes it practical to train on small hand-built corpora for language pairs where large bilingual corpora are unavailable. Moreover, three of the four filters prove useful even when used with large training corpora.
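The filter-cascade idea, with knowledge sources cast as uniform predicates over candidate word pairs so that any subset can be chained, can be sketched as follows. The two filters shown (a crude cognate heuristic and a POS-match test) are illustrative assumptions, not the paper's four knowledge sources.

```python
from difflib import SequenceMatcher

def cognate_filter(src, tgt, threshold=0.6):
    """One possible knowledge-source filter: keep candidate pairs whose
    surface forms are similar (a rough cognate heuristic)."""
    return SequenceMatcher(None, src, tgt).ratio() >= threshold

def make_pos_filter(pos_of):
    """Another filter: keep pairs whose words carry the same POS tag."""
    return lambda src, tgt: pos_of.get(src) == pos_of.get(tgt)

def cascade(candidates, filters):
    """Because all filters share one predicate interface, any subset of
    them can be cascaded in a uniform framework."""
    for keep in filters:
        candidates = [(s, t, score) for s, t, score in candidates
                      if keep(s, t)]
    return candidates

def n_best(candidates, n):
    """Retain the n highest-scoring translations per source word."""
    best = {}
    for s, t, score in sorted(candidates, key=lambda c: -c[2]):
        if len(best.setdefault(s, [])) < n:
            best[s].append((t, score))
    return best
```

The scores attached to each pair stand in for the statistical corpus properties the abstract mentions; the filters only prune, never re-rank.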
cmp-lg/9505045
Hybrid Transfer in an English-French Spoken Language Translator
cmp-lg cs.CL
The paper argues the importance of high-quality translation for spoken language translation systems. It describes an architecture suitable for rapid development of high-quality limited-domain translation systems, which has been implemented within an advanced prototype English to French spoken language translator. The focus of the paper is the hybrid transfer model which combines unification-based rules and a set of trainable statistical preferences; roughly, rules encode domain-independent grammatical information and preferences encode domain-dependent distributional information. The preferences are trained from sets of examples produced by the system, which have been annotated by human judges as correct or incorrect. An experiment is described in which the model was tested on a 2000 utterance sample of previously unseen data.
cmp-lg/9506001
Ma(r)king concessions in English and German
cmp-lg cs.CL
In order to generate cohesive discourse, many of the relations holding between text segments need to be signalled to the reader by means of cue words, or {\em discourse markers}. Programs usually do this in a simplistic way, e.g., by using one marker per relation. In reality, however, language offers a very wide range of markers from which informed choices should be made. In order to account for the variety and to identify the parameters governing the choices, detailed linguistic analyses are necessary. We worked with one area of discourse relations, the Concession family, identified its underlying pragmatics and semantics, and undertook extensive corpus studies to examine the range of markers used in both English and German. On the basis of an initial classification of these markers, we propose a generation model for producing bilingual text that can incorporate marker choice into its overall decision framework.
cmp-lg/9506002
Weak Subsumption Constraints for Type Diagnosis: An Incremental Algorithm
cmp-lg cs.CL
We introduce constraints necessary for type checking a higher-order concurrent constraint language, and solve them with an incremental algorithm. Our constraint system extends rational unification by constraints $x \subseteq y$ saying that ``$x$ has at least the structure of $y$'', modelled by a weak instance relation between trees. This notion of instance has been carefully chosen to be weaker than the usual one, which renders semi-unification undecidable. Semi-unification has more than once served to link unification problems arising from type inference and those considered in computational linguistics. Just as polymorphic recursion corresponds to subsumption through the semi-unification problem, our type constraint problem corresponds to weak subsumption of feature graphs in linguistics. The decidability problem for weak subsumption of feature graphs has been settled by D\"orre~\cite{Doerre:WeakSubsumption:94}. In contrast to D\"orre's, our algorithm is fully incremental and does not refer to finite state automata. Our algorithm is also considerably more flexible. It allows a number of extensions (records, sorts, disjunctive types, type declarations, and others) which make it suitable for type inference of a full-fledged programming language.
cmp-lg/9506003
Syllable parsing in English and French
cmp-lg cs.CL
In this paper I argue that Optimality Theory provides for an explanatory model of syllabic parsing in English and French. The argument is based on psycholinguistic facts that have been mysterious up to now. This argument is further buttressed by the computational implementation developed here. This model is important for several reasons. First, it provides a demonstration of how OT can be used in a performance domain. Second, it suggests a new relationship between phonological theory and psycholinguistics. (Code in Perl is included and a WWW-interface is running at http://mayo.douglass.arizona.edu.)
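Standard OT evaluation of the kind this abstract appeals to can be stated compactly: the winning candidate has the lexicographically smallest violation profile under the ranked constraints. The paper's implementation is in Perl; the Python sketch below and its two toy constraints (ONSET, NOCODA, over syllabifications represented as lists of syllable strings) are illustrative assumptions.

```python
VOWELS = set("aeiou")

def onset(syllables):
    """ONSET: one violation per syllable that begins with a vowel."""
    return sum(1 for s in syllables if s[0] in VOWELS)

def no_coda(syllables):
    """NOCODA: one violation per syllable that ends in a consonant."""
    return sum(1 for s in syllables if s[-1] not in VOWELS)

def ot_winner(candidates, constraints):
    """OT evaluation: compare candidates by the lexicographic order of
    their violation profiles under the ranked constraints."""
    return min(candidates,
               key=lambda cand: tuple(c(cand) for c in constraints))
```

Re-ranking the constraint list models cross-linguistic variation, e.g. different parses of the same string in English versus French.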
cmp-lg/9506004
Using Higher-Order Logic Programming for Semantic Interpretation of Coordinate Constructs
cmp-lg cs.CL
Many theories of semantic interpretation use lambda-term manipulation to compositionally compute the meaning of a sentence. These theories are usually implemented in a language such as Prolog that can simulate lambda-term operations with first-order unification. However, for some interesting cases, such as a Combinatory Categorial Grammar account of coordination constructs, this can only be done by obscuring the underlying linguistic theory with the ``tricks'' needed for implementation. This paper shows how the use of abstract syntax permitted by higher-order logic programming allows an elegant implementation of the semantics of Combinatory Categorial Grammar, including its handling of coordination constructs.
cmp-lg/9506005
A Support Tool for Tagset Mapping
cmp-lg cs.CL
Many different tagsets are used in existing corpora; these tagsets vary according to the objectives of specific projects (which may be as far apart as robust parsing vs. spelling correction). In many situations, however, one would like to have uniform access to the linguistic information encoded in corpus annotations without having to know the classification schemes in detail. This paper describes a tool which maps unstructured morphosyntactic tags to a constraint-based, typed, configurable specification language, a ``standard tagset''. The mapping relies on a manually written set of mapping rules, which is automatically checked for consistency. In certain cases, unsharp mappings are unavoidable, and noise, i.e. groups of word forms {\sl not} conforming to the specification, will appear in the output of the mapping. The system automatically detects such noise and informs the user about it. The tool has been tested with rules for the UPenn tagset \cite{up} and the SUSANNE tagset \cite{garside}, in the framework of the EAGLES\footnote{LRE project EAGLES, cf. \cite{eagles}.} validation phase for standardised tagsets for European languages.
cmp-lg/9506006
Automatic Extraction of Tagset Mappings from Parallel-Annotated Corpora
cmp-lg cs.CL
This paper describes some of the recent work of project AMALGAM (automatic mapping among lexico-grammatical annotation models). We are investigating ways to map between the leading corpus annotation schemes in order to improve their reusability. Collation of all the included corpora into a single large annotated corpus will provide a more detailed language model to be developed for tasks such as speech and handwriting recognition. In particular, we focus here on a method of extracting mappings from corpora that have been annotated according to more than one annotation scheme.
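The core extraction step can be sketched as a co-occurrence count over token-aligned tag sequences from a parallel-annotated corpus. This is a deliberate simplification: AMALGAM's actual method is richer than a most-frequent-partner rule, and the tag names below are invented.

```python
from collections import Counter, defaultdict

def extract_mapping(tags_a, tags_b):
    """Given two tag sequences over the same tokens (one per annotation
    scheme), count how often each A-tag co-occurs with each B-tag, then
    map every A-tag to its most frequent B-tag partner."""
    cooc = defaultdict(Counter)
    for a, b in zip(tags_a, tags_b):
        cooc[a][b] += 1
    return {a: counts.most_common(1)[0][0] for a, counts in cooc.items()}
```

Where one scheme makes a distinction the other lacks, several A-tags simply map to the same B-tag; genuinely ambiguous correspondences show up as co-occurrence counts split across partners.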
cmp-lg/9506007
Features and Agreement
cmp-lg cs.CL
This paper compares the consistency-based account of agreement phenomena in `unification-based' grammars with an implication-based account based on a simple feature extension to Lambek Categorial Grammar (LCG). We show that the LCG treatment accounts for constructions that have been recognized as problematic for `unification-based' treatments.
cmp-lg/9506008
CLiFF Notes: Research in the Language, Information and Computation Laboratory of the University of Pennsylvania
cmp-lg cs.CL
Short abstracts by computational linguistics researchers at the University of Pennsylvania describing ongoing individual and joint projects.
cmp-lg/9506009
Filling Knowledge Gaps in a Broad-Coverage Machine Translation System
cmp-lg cs.CL
Knowledge-based machine translation (KBMT) techniques yield high quality in domains with detailed semantic models, limited vocabulary, and controlled input grammar. Scaling up along these dimensions means acquiring large knowledge resources. It also means behaving reasonably when definitive knowledge is not yet available. This paper describes how we can fill various KBMT knowledge gaps, often using robust statistical techniques. We describe quantitative and qualitative results from JAPANGLOSS, a broad-coverage Japanese-English MT system.
cmp-lg/9506010
Two-level, Many-Paths Generation
cmp-lg cs.CL
Large-scale natural language generation requires the integration of vast amounts of knowledge: lexical, grammatical, and conceptual. A robust generator must be able to operate well even when pieces of knowledge are missing. It must also be robust against incomplete or inaccurate inputs. To attack these problems, we have built a hybrid generator, in which gaps in symbolic knowledge are filled by statistical methods. We describe algorithms and show experimental results. We also discuss how the hybrid generation model can be used to simplify current generators and enhance their portability, even when perfect knowledge is in principle obtainable.
cmp-lg/9506011
Unification-Based Glossing
cmp-lg cs.CL
We present an approach to syntax-based machine translation that combines unification-style interpretation with statistical processing. This approach enables us to translate any Japanese newspaper article into English, with quality far better than a word-for-word translation. Novel ideas include the use of feature structures to encode word lattices and the use of unification to compose and manipulate lattices. Unification also allows us to specify abstract features that delay target-language synthesis until enough source-language information is assembled. Our statistical component enables us to search efficiently among competing translations and locate those with high English fluency.
cmp-lg/9506012
Presenting Punctuation
cmp-lg cs.CL
Until recently, punctuation has received very little attention in the linguistics and computational linguistics literature. Since the publication of Nunberg's (1990) monograph on the topic, however, punctuation has seen its stock begin to rise: spurred in part by Nunberg's ground-breaking work, a number of valuable inquiries have been subsequently undertaken, including Hovy and Arens (1991), Dale (1991), Pascual (1993), Jones (1994), and Briscoe (1994). Continuing this line of research, I investigate in this paper how Nunberg's approach to presenting punctuation (and other formatting devices) might be incorporated into NLG systems. Insofar as the present paper focuses on the proper syntactic treatment of punctuation, it differs from these other subsequent works in that it is the first to examine this issue from the generation perspective.
cmp-lg/9506013
A Study of the Context(s) in a Specific Type of Texts: Car Accident Reports
cmp-lg cs.CL
This paper addresses the issue of defining context, and more specifically the different contexts needed for understanding a particular type of texts. The corpus chosen is homogeneous and allows us to determine characteristic properties of the texts from which certain inferences can be drawn by the reader. These characteristic properties come from the real world domain (K-context), the type of events the texts describe (F-context) and the genre of the texts (E-context). Together, these three contexts provide elements for the resolution of anaphoric expressions and for several types of disambiguation. We show in particular that the argumentation aspect of these texts is an essential part of the context and explains some of the inferences that can be drawn.
cmp-lg/9506014
Inducing Features of Random Fields
cmp-lg cs.CL
We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The statistical modeling techniques introduced in this paper differ from those common to much of the natural language processing literature since there is no probabilistic finite state or push-down automaton on which the model is built. Our approach also differs from the techniques common to the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches including decision trees and Boltzmann machines are given. As a demonstration of the method, we describe its application to the problem of automatic word classification in natural language processing. Key words: random field, Kullback-Leibler divergence, iterative scaling, divergence geometry, maximum entropy, EM algorithm, statistical learning, clustering, word morphology, natural language processing
cmp-lg/9506015
Ambiguity in the Acquisition of Lexical Information
cmp-lg cs.CL
This paper describes an approach to the automatic identification of lexical information in on-line dictionaries. This approach uses bootstrapping techniques, specifically so that ambiguity in the dictionary text can be treated properly. This approach consists of processing an on-line dictionary multiple times, each time refining the lexical information previously acquired and identifying new lexical information. The strength of this approach is that lexical information can be acquired from definitions which are syntactically ambiguous, given that information acquired during the first pass can be used to improve the syntactic analysis of definitions in subsequent passes. In the context of a lexical knowledge base, the types of lexical information that need to be represented cannot be viewed as a fixed set, but rather as a set that will change given the resources of the lexical knowledge base and the requirements of analysis systems which access it.
cmp-lg/9506016
Indefeasible Semantics and Defeasible Pragmatics
cmp-lg cs.CL
An account of utterance interpretation in discourse needs to face the issue of how the discourse context controls the space of interacting preferences. Assuming a discourse processing architecture that distinguishes the grammar and pragmatics subsystems in terms of monotonic and nonmonotonic inferences, I will discuss how independently motivated default preferences interact in the interpretation of intersentential pronominal anaphora. In the framework of a general discourse processing model that integrates both the grammar and pragmatics subsystems, I will propose a fine structure of the preferential interpretation in pragmatics in terms of defeasible rule interactions. The pronoun interpretation preferences that serve as the empirical grounding draw on survey data obtained specifically for the present purpose.
cmp-lg/9506017
The Effect of Pitch Accenting on Pronoun Referent Resolution
cmp-lg cs.CL
By strictest interpretation, theories of both centering and intonational meaning fail to predict the existence of pitch accented pronominals. Yet they occur felicitously in spoken discourse. To explain this, I emphasize the dual functions served by pitch accents, as markers of both propositional (semantic/pragmatic) and attentional salience. This distinction underlies my proposals about the attentional consequences of pitch accents when applied to pronominals, in particular, that while most pitch accents may weaken or reinforce a cospecifier's status as the center of attention, a contrastively stressed pronominal may force a shift, even when contraindicated by textual features.
cmp-lg/9506018
Intelligent Voice Prosthesis: Converting Icons into Natural Language Sentences
cmp-lg cs.CL
The Intelligent Voice Prosthesis is a communication tool which reconstructs the meaning of an ill-structured sequence of icons or symbols, and expresses this meaning into sentences of a Natural Language (French). It has been developed for the use of people who cannot express themselves orally in natural language, and further, who are not able to comply with grammatical rules such as those of natural language. We describe how available corpora of iconic communication by children with Cerebral Palsy have led us to implement a simple and relevant semantic description of the symbol lexicon. We then show how a unification-based, bottom-up semantic analysis allows the system to uncover the meaning of the user's utterances by computing proper dependencies between the symbols. The result of the analysis is then passed to a lexicalization module which chooses the right words of natural language to use, and builds a linguistic semantic network. This semantic network is then generated into French sentences via hierarchization into trees, using a lexicalized Tree Adjoining Grammar. Finally we describe the modular, customizable interface which has been developed for this system.
cmp-lg/9506019
Review of Charniak's "Statistical Language Learning"
cmp-lg cs.CL
This article is an in-depth review of Eugene Charniak's book, "Statistical Language Learning". The review evaluates the appropriateness of the book as an introductory text for statistical language learning for a variety of audiences. It also includes an extensive bibliography of articles and papers which might be used as a supplement to this book for learning or teaching statistical language modeling.
cmp-lg/9506020
GLR-Parsing of Word Lattices Using a Beam Search Method
cmp-lg cs.CL
This paper presents an approach that allows the efficient integration of speech recognition and language understanding using Tomita's generalized LR-parsing algorithm. For this purpose the GLRP-algorithm is revised so that an agenda mechanism can be used to control the flow of computation of the parsing process. This new approach is used to integrate speech recognition and speech understanding incrementally with a beam search method. These considerations have been implemented and tested on ten word lattices.
cmp-lg/9506021
Prepositional Phrase Attachment through a Backed-Off Model
cmp-lg cs.CL
Recent work has considered corpus-based or statistical approaches to the problem of prepositional phrase attachment ambiguity. Typically, ambiguous verb phrases of the form {v np1 p np2} are resolved through a model which considers values of the four head words (v, n1, p and n2). This paper shows that the problem is analogous to n-gram language models in speech recognition, and that one of the most common methods for language modeling, the backed-off estimate, is applicable. Results on Wall Street Journal data of 84.5% accuracy are obtained using this method. A surprising result is the importance of low-count events: ignoring events which occur fewer than 5 times in the training data reduces performance to 81.6%.
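A backed-off estimate of this kind can be sketched as follows: estimate p(noun attachment | v, n1, p, n2) from counts over the full 4-tuple, backing off to lower-order contexts that always retain the preposition whenever the higher-order counts are empty. The class structure, tuple encoding, and the noun-attachment default below are illustrative assumptions, not the paper's exact implementation.

```python
from collections import Counter
from itertools import combinations

class BackedOffAttacher:
    """Backs off from the full (v, n1, p, n2) context through triples and
    pairs down to p alone; every context keeps the preposition, which is
    the most informative of the four head words."""
    def __init__(self):
        self.count = Counter()       # context occurrences
        self.count_noun = Counter()  # occurrences with noun attachment

    def _contexts(self, v, n1, p, n2):
        # All subtuples of the non-preposition heads, plus p (orders 4..1).
        others = [("v", v), ("n1", n1), ("n2", n2)]
        for k in (3, 2, 1, 0):
            for combo in combinations(others, k):
                yield combo + (("p", p),)

    def train(self, v, n1, p, n2, noun_attach):
        for ctx in self._contexts(v, n1, p, n2):
            self.count[ctx] += 1
            if noun_attach:
                self.count_noun[ctx] += 1

    def p_noun_attach(self, v, n1, p, n2):
        # Use the highest order whose contexts have been seen in training.
        others = [("v", v), ("n1", n1), ("n2", n2)]
        for k in (3, 2, 1, 0):
            num = den = 0
            for combo in combinations(others, k):
                ctx = combo + (("p", p),)
                num += self.count_noun[ctx]
                den += self.count[ctx]
            if den > 0:
                return num / den
        return 1.0  # assumed default: attach to the noun
```

Low-count events matter here exactly as the abstract reports: dropping rare tuples from training forces more decisions down to the sparsest back-off levels.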
cmp-lg/9506022
Deriving Procedural and Warning Instructions from Device and Environment Models
cmp-lg cs.CL
This study is centred on the generation of instructions for household appliances. We show how knowledge about a device, together with knowledge about the environment, can be used for reasoning about instructions. The information communicated by the instructions can be planned from a version of the knowledge of the artifact and environment. We present the latter, which we call the {\it planning knowledge}, in the form of axioms in the {\it situation calculus}. This planning knowledge formally characterizes the behaviour of the artifact, and it is used to produce a basic plan of actions that the device and user take to accomplish a given goal. We explain how both procedural and warning instructions can be generated from this basic plan. In order to partially justify that instruction generation can be automated from a formal device design specification, we assume that the planning knowledge is {\it derivable\/} from the device and world knowledge.
cmp-lg/9506023
Empirical Discovery in Linguistics
cmp-lg cs.CL
A discovery system for detecting correspondences in data is described, based on the familiar induction methods of J. S. Mill. Given a set of observations, the system induces the ``causally'' related facts in these observations. Its application to empirical linguistic discovery is described.
cmp-lg/9506024
An Approach to Proper Name Tagging for German
cmp-lg cs.CL
This paper presents an incremental method for the tagging of proper names in German newspaper texts. The tagging is performed by the analysis of the syntactic and textual contexts of proper names together with a morphological analysis. The proper names selected by this process supply new contexts which can be used for finding new proper names, and so on. This procedure was applied to a small German corpus (50,000 words) and correctly disambiguated 65% of the capitalized words, which should improve when it is applied to a very large corpus.
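The incremental procedure, in which already-tagged names supply contexts that in turn license further names, can be sketched as a bootstrap loop. This is heavily simplified: the paper's contexts are syntactic, textual, and morphological, whereas the sketch below uses only the immediately preceding word.

```python
def bootstrap_names(tokens, seed_names, rounds=3):
    """Incremental name tagging: each round, collect the words that
    precede known names as trigger contexts, then accept capitalized
    words appearing after a trigger as new names. Stops when a round
    adds nothing."""
    names = set(seed_names)
    for _ in range(rounds):
        contexts = {tokens[i - 1] for i, t in enumerate(tokens)
                    if i > 0 and t in names}
        new = {t for i, t in enumerate(tokens)
               if i > 0 and tokens[i - 1] in contexts and t[:1].isupper()}
        if new <= names:
            break
        names |= new
    return names
```

For German this one-word context is far too weak on its own (all nouns are capitalized), which is why the paper combines contextual analysis with morphological analysis before accepting a candidate.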
cmp-lg/9506025
A Categorial Framework for Composition in Multiple Linguistic Domains
cmp-lg cs.CL
This paper describes a computational framework for a grammar architecture in which different linguistic domains such as morphology, syntax, and semantics are treated not as separate components but compositional domains. Word and phrase formation are modeled as uniform processes contributing to the derivation of the semantic form. The morpheme, as well as the lexeme, has lexical representation in the form of semantic content, tactical constraints, and phonological realization. The motivation for this work is to handle morphology-syntax interaction (e.g., valency change in causatives, subcategorization imposed by case-marking affixes) in an incremental way. The model is based on Combinatory Categorial Grammars.
cmp-lg/9506026
A Computational Approach to Aspectual Composition
cmp-lg cs.CL
In this paper, I argue, contrary to the prevailing opinion in the linguistics and philosophy literature, that a sortal approach to aspectual composition can indeed be explanatory. In support of this view, I develop a synthesis of competing proposals by Hinrichs, Krifka and Jackendoff which takes Jackendoff's cross-cutting sortal distinctions as its point of departure. To show that the account is well-suited for computational purposes, I also sketch an implemented calculus of eventualities which yields many of the desired inferences. Further details on both the model-theoretic semantics and the implementation can be found in (White, 1994).
cmp-lg/9507001
Constraint Categorial Grammars
cmp-lg cs.CL
Although unification can be used to implement a weak form of $\beta$-reduction, several linguistic phenomena are better handled by using some form of $\lambda$-calculus. In this paper we present a higher order feature description calculus based on a typed $\lambda$-calculus. We show how the techniques used in CLG for resolving complex feature constraints can be efficiently extended. CCLG is a simple formalism, based on categorial grammars, designed to test the practical feasibility of such a calculus.
cmp-lg/9507002
A framework for lexical representation
cmp-lg cs.CL
In this paper we present a unification-based lexical platform designed for highly inflected languages (like Romance ones). A formalism is proposed for encoding a lemma-based lexical source, well suited for linguistic generalizations. From this source, we automatically generate an allomorph indexed dictionary, adequate for efficient processing. A set of software tools has been implemented around this formalism: access libraries, morphological processors, etc.
cmp-lg/9507003
Robust Processing of Natural Language
cmp-lg cs.CL
Previous approaches to robustness in natural language processing usually treat deviant input by relaxing grammatical constraints whenever a successful analysis cannot be provided by ``normal'' means. This schema implies that error detection always comes prior to error handling, a behaviour which can hardly compete with its human model, where many erroneous situations are treated without even noticing them. The paper analyses the necessary preconditions for achieving a higher degree of robustness in natural language processing and suggests a quite different approach based on a procedure for structural disambiguation. It not only offers the possibility to cope with robustness issues in a more natural way but eventually might be suited to accommodate quite different aspects of robust behaviour within a single framework.
cmp-lg/9507004
GRAMPAL: A Morphological Processor for Spanish implemented in Prolog
cmp-lg cs.CL
A model for the full treatment of Spanish inflection for verbs, nouns and adjectives is presented. This model is based on feature unification and it relies upon a lexicon of allomorphs both for stems and morphemes. Word forms are built by the concatenation of allomorphs by means of special contextual features. We make use of standard Definite Clause Grammars (DCG) included in most Prolog implementations, instead of the typical finite-state approach. This allows us to take advantage of the declarativity and bidirectionality of Logic Programming for NLP. The most salient feature of this approach is simplicity: a really straightforward rule component and lexical component. We have developed a very simple model for complex phenomena. Declarativity, bidirectionality, consistency and completeness of the model are discussed: all and only correct word forms are analysed or generated, even alternative ones, and gaps in paradigms are preserved. A Prolog implementation has been developed for both analysis and generation of Spanish word forms. It consists of only six DCG rules, owing to our {\em lexicalist\/} approach: most of the information resides in the dictionary. Although it is quite efficient, the current implementation could be improved for analysis by using the non logical features of Prolog, especially in word segmentation and dictionary access.
cmp-lg/9507005
Comparative Ellipsis and Variable Binding
cmp-lg cs.CL
In this paper, we discuss the question whether phrasal comparatives should be given a direct interpretation, or require an analysis as elliptic constructions, and answer it with Yes and No. The most adequate analysis of wide reading attributive (WRA) comparatives seems to be as cases of ellipsis, while a direct (but asymmetric) analysis fits the data for narrow scope attributive comparatives. The question whether it is a syntactic or a semantic process which provides the missing linguistic material in the complement of WRA comparatives is also given a complex answer: Linguistic context is accessed by combining a reconstruction operation and a mechanism of anaphoric reference. The analysis makes only few and straightforward syntactic assumptions. In part, this is made possible because the use of Generalized Functional Application as a semantic operation allows us to model semantic composition in a flexible way.
cmp-lg/9507006
Transfer in a Connectionist Model of the Acquisition of Morphology
cmp-lg cs.CL
The morphological systems of natural languages are replete with examples of the same devices used for multiple purposes: (1) the same type of morphological process (for example, suffixation for both noun case and verb tense) and (2) identical morphemes (for example, the same suffix for English noun plural and possessive). These sorts of similarity would be expected to confer advantages on language learners in the form of transfer from one morphological category to another. Connectionist models of morphology acquisition have been faulted for their supposed inability to represent phonological similarity across morphological categories and hence to facilitate transfer. This paper describes a connectionist model of the acquisition of morphology which is shown to exhibit transfer of this type. The model treats the morphology acquisition problem as one of learning to map forms onto meanings and vice versa. As the network learns these mappings, it makes phonological generalizations which are embedded in connection weights. Since these weights are shared by different morphological categories, transfer is enabled. In a set of experiments with artificial stimuli, networks were trained first on one morphological task (e.g., tense) and then on a second (e.g., number). It is shown that in the context of suffixation, prefixation, and template rules, the second task is facilitated when the second category either makes use of the same forms or the same general process type (e.g., prefixation) as the first.
cmp-lg/9507007
An Efficient Algorithm for Surface Generation
cmp-lg cs.CL
A method is given that "inverts" a logic grammar and displays it from the point of view of the logical form, rather than from that of the word string. LR-compiling techniques are used to allow a recursive-descent generation algorithm to perform "functor merging" much in the same way as an LR parser performs prefix merging. This is an improvement on the semantic-head-driven generator that results in a much smaller search space. The amount of semantic lookahead can be varied, and appropriate tradeoff points between table size and resulting nondeterminism can be found automatically.
cmp-lg/9507008
A Constraint-based Case Frame Lexicon Architecture
cmp-lg cs.CL
In Turkish (and possibly in many other languages), verbs often convey several meanings (some totally unrelated) when they are used with subjects, objects, oblique objects, and adverbial adjuncts that have certain lexical, morphological, and semantic features and co-occurrence restrictions. In addition to the usual sense variations due to selectional restrictions on verbal arguments, in most cases, the meaning conveyed by a case frame is idiomatic and not compositional, with subtle constraints. In this paper, we present an approach to building a constraint-based case frame lexicon for use in natural language processing in Turkish, a prototype of which we have implemented under the TFS system developed at the Univ. of Stuttgart. A number of observations that we have made on Turkish indicate that we need something beyond the traditional transitive/intransitive distinction, and we utilize a framework in which verb valence is considered the obligatory co-existence of an arbitrary subset of possible arguments, along with the obligatory exclusion of certain others, relative to a verb sense. Additional morphological, lexical, and semantic constraints on the syntactic constituents, organized as a 5-tier constraint hierarchy, are utilized to map a given syntactic case frame structure to a specific verb sense.
cmp-lg/9507009
Specifying Logic Programs in Controlled Natural Language
cmp-lg cs.CL
Writing specifications for computer programs is not easy since one has to take into account the disparate conceptual worlds of the application domain and of software development. To bridge this conceptual gap we propose controlled natural language as a declarative and application-specific specification language. Controlled natural language is a subset of natural language that can be accurately and efficiently processed by a computer, but is expressive enough to allow natural usage by non-specialists. Specifications in controlled natural language are automatically translated into Prolog clauses, hence become formal and executable. The translation uses a definite clause grammar (DCG) enhanced by feature structures. Inter-text references of the specification, e.g. anaphora, are resolved with the help of discourse representation theory (DRT). The generated Prolog clauses are added to a knowledge base. We have implemented a prototypical specification system that successfully processes the specification of a simple automated teller machine.
cmp-lg/9507010
On-line Learning of Binary Lexical Relations Using Two-dimensional Weighted Majority Algorithms
cmp-lg cs.CL
We consider the problem of learning a certain type of lexical semantic knowledge that can be expressed as a binary relation between words, such as the so-called sub-categorization of verbs (a verb-noun relation) and the compound noun phrase relation (a noun-noun relation). Specifically, we view this problem as an on-line learning problem in the sense of Littlestone's learning model in which the learner's goal is to minimize the total number of prediction mistakes. In the computational learning theory literature, Goldman, Rivest and Schapire and subsequently Goldman and Warmuth have considered the on-line learning problem for binary relations R : X * Y -> {0, 1} in which one of the domain sets X can be partitioned into a relatively small number of types, namely clusters consisting of behaviorally indistinguishable members of X. In this paper, we extend this model and suppose that both of the sets X, Y can be partitioned into a small number of types, and propose a host of prediction algorithms which are two-dimensional extensions of Goldman and Warmuth's weighted majority type algorithm proposed for the original model. We apply these algorithms to the learning problem for the `compound noun phrase' relation, in which a noun is related to another just in case they can form a noun phrase together. Our experimental results show that all of our algorithms outperform Goldman and Warmuth's algorithm. We also theoretically analyze the performance of one of our algorithms, in the form of an upper bound on the worst case number of prediction mistakes it makes.
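The weighted-majority-style scheme that Goldman and Warmuth adapt, and that this paper extends to two dimensions, is easiest to see in its original one-dimensional form. A minimal sketch of the classic Littlestone-Warmuth weighted majority algorithm on toy data (background only, not the paper's two-dimensional extension; the experts and trials are invented):

```python
def weighted_majority(expert_predictions, outcomes, beta=0.5):
    """Classic weighted majority: predict by weighted vote over the
    experts, then multiply the weight of each mistaken expert by beta."""
    n_experts = len(expert_predictions[0])
    weights = [1.0] * n_experts
    mistakes = 0
    for preds, outcome in zip(expert_predictions, outcomes):
        vote_1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_0 = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote_1 >= vote_0 else 0
        if guess != outcome:
            mistakes += 1
        # demote every expert that was wrong on this trial
        weights = [w * beta if p != outcome else w
                   for w, p in zip(weights, preds)]
    return mistakes, weights

# Three experts over three trials; the first expert is always right.
preds = [(1, 0, 1), (0, 0, 1), (1, 1, 0)]
truth = [1, 0, 1]
m, w = weighted_majority(preds, truth)
```

The mistake bound of such algorithms is logarithmic in the number of experts, which is why clustering the domain into a small number of "types" (as in the paper) keeps the bound small.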
cmp-lg/9507011
Generalizing Case Frames Using a Thesaurus and the MDL Principle
cmp-lg cs.CL
We address the problem of automatically acquiring case-frame patterns from large corpus data. In particular, we view this problem as the problem of estimating a (conditional) distribution over a partition of words, and propose a new generalization method based on the MDL (Minimum Description Length) principle. To improve efficiency, our method makes use of an existing thesaurus and restricts its attention to those partitions that are present as `cuts' in the thesaurus tree, thus reducing the generalization problem to that of estimating the `tree cut models' of the thesaurus. We then give an efficient algorithm which provably obtains the optimal tree cut model for the given frequency data, in the sense of MDL. We have used the case-frame patterns obtained using our method to resolve pp-attachment ambiguity. Our experimental results indicate that our method improves upon or is at least as effective as existing methods.
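The idea of scoring thesaurus `cuts' by description length can be sketched on a toy tree. The following is an illustrative reconstruction (invented thesaurus and frequencies, and a simplified coding scheme; the paper's exact description-length formula and its efficient dynamic-programming algorithm may differ): each cut of k classes is scored by a parameter cost of (k-1)/2 * log2 N bits plus the cost of encoding the observed data when each word shares its class probability uniformly.

```python
import math
from itertools import product

# Tiny illustrative thesaurus: each node is (name, children); the
# leaves are words with observed corpus frequencies.
tree = ("ANIMAL", [
    ("BIRD", [("swallow", []), ("crow", []), ("eagle", [])]),
    ("INSECT", [("bug", []), ("bee", [])]),
])
freq = {"swallow": 10, "crow": 8, "eagle": 2, "bug": 0, "bee": 0}
N = sum(freq.values())

def leaves(node):
    name, children = node
    return [name] if not children else [w for c in children for w in leaves(c)]

def cuts(node):
    """All tree cuts: the node itself, or any combination of child cuts."""
    name, children = node
    result = [[node]]
    if children:
        for combo in product(*(cuts(c) for c in children)):
            result.append([n for part in combo for n in part])
    return result

def desc_len(cut):
    """Parameter description length plus data description length (bits)."""
    param_len = (len(cut) - 1) / 2 * math.log2(N)
    data_len = 0.0
    for node in cut:
        ws = leaves(node)
        f = sum(freq[w] for w in ws)
        if f > 0:  # 0 * log 0 taken as 0
            data_len -= f * math.log2((f / N) / len(ws))
    return param_len + data_len

best = min(cuts(tree), key=desc_len)
best_names = [name for name, _ in best]
```

On this toy data the MDL criterion keeps the frequent bird names distinct but generalizes the unseen insects to the single class INSECT, which is exactly the kind of selective generalization the abstract describes.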
cmp-lg/9507012
A Grammar Formalism and Cross-Serial Dependencies
cmp-lg cs.CL
First we define a unification grammar formalism called the Tree Homomorphic Feature Structure Grammar. It is based on Lexical Functional Grammar (LFG), but has a strong restriction on the syntax of the equations. We then show that this grammar formalism defines a full abstract family of languages, and that it is capable of describing cross-serial dependencies of the type found in Swiss German.
cmp-lg/9507013
Indexed Languages and Unification Grammars
cmp-lg cs.CL
Indexed languages are interesting in computational linguistics because they constitute the smallest class of languages in the Chomsky hierarchy that has not been shown to be inadequate for describing the string set of natural language sentences. We here define a class of unification grammars that exactly describe the class of indexed languages.
cmp-lg/9507014
Co-Indexing Labelled DRSs to Represent and Reason with Ambiguities
cmp-lg cs.CL
The paper addresses the problem of representing ambiguities in a way that allows for monotonic disambiguation and for direct deductive computation. The paper focuses on an extension of the formalism of underspecified DRSs to ambiguities introduced by plural NPs. It deals with the collective/distributive distinction, and also with generic and cumulative readings. In addition it provides a systematic account for an underspecified treatment of plural pronoun resolution.
cmp-lg/9508001
Bridging as Coercive Accommodation
cmp-lg cs.CL
In this paper we discuss the notion of "bridging" in Discourse Representation Theory as a tool to account for discourse referents that have only been established implicitly, through the lexical semantics of other referents. In doing so, we use ideas from Generative Lexicon theory, to introduce antecedents for anaphoric expressions that cannot be "linked" to a proper antecedent, but that do not need to be "accommodated" because they have some connection to the network of discourse referents that is already established.
cmp-lg/9508002
A Compositional Treatment of Polysemous Arguments in Categorial Grammar
cmp-lg cs.CL
We discuss an extension of the standard logical rules (functional application and abstraction) in Categorial Grammar (CG), in order to deal with some specific cases of polysemy. We borrow from Generative Lexicon theory, which proposes the mechanism of {\em coercion}, next to a rich nominal lexical semantic structure called {\em qualia structure}. In a previous paper we introduced coercion into the framework of {\em sign-based} Categorial Grammar and investigated its impact on traditional Fregean compositionality. In this paper we elaborate on this idea, mostly working towards the introduction of a new semantic dimension. Whereas in current versions of sign-based Categorial Grammar only two representations are derived, a prosodic one (form) and a logical one (modelling), here we also introduce a more detailed representation of the lexical semantics. This extra knowledge will serve to account for linguistic phenomena like {\em metonymy\/}.
cmp-lg/9508003
A Robust Parsing Algorithm For Link Grammars
cmp-lg cs.CL
In this paper we present a robust parsing algorithm based on the link grammar formalism for parsing natural languages. Our algorithm is a natural extension of the original dynamic programming recognition algorithm which recursively counts the number of linkages between two words in the input sentence. The modified algorithm uses the notion of a null link in order to allow a connection between any pair of adjacent words, regardless of their dictionary definitions. The algorithm proceeds by making three dynamic programming passes. In the first pass, the input is parsed using the original algorithm which enforces the constraints on links to ensure grammaticality. In the second pass, the total cost of each substring of words is computed, where cost is determined by the number of null links necessary to parse the substring. The final pass counts the total number of parses with minimal cost. All of the original pruning techniques have natural counterparts in the robust algorithm. When used together with memoization, these techniques enable the algorithm to run efficiently with cubic worst-case complexity. We have implemented these ideas and tested them by parsing the Switchboard corpus of conversational English. This corpus is comprised of approximately three million words of text, corresponding to more than 150 hours of transcribed speech collected from telephone conversations restricted to 70 different topics. Although only a small fraction of the sentences in this corpus are "grammatical" by standard criteria, the robust link grammar parser is able to extract relevant structure for a large portion of the sentences. We present the results of our experiments using this system, including the analyses of selected and random sentences from the corpus.
cmp-lg/9508004
Parsing English with a Link Grammar
cmp-lg cs.CL
We develop a formal grammatical system called a link grammar, show how English grammar can be encoded in such a system, and give algorithms for efficiently parsing with a link grammar. Although the expressive power of link grammars is equivalent to that of context free grammars, encoding natural language grammars appears to be much easier with the new system. We have written a program for general link parsing and written a link grammar for the English language. The performance of this preliminary system -- both in the breadth of English phenomena that it captures and in the computational resources used -- indicates that the approach may have practical uses as well as linguistic significance. Our program is written in C and may be obtained through the internet.
cmp-lg/9508005
A Matching Technique in Example-Based Machine Translation
cmp-lg cs.CL
This paper addresses an important problem in Example-Based Machine Translation (EBMT), namely how to measure similarity between a sentence fragment and a set of stored examples. A new method is proposed that measures similarity according to both surface structure and content. A second contribution is the use of clustering to make retrieval of the best matching example from the database more efficient. Results on a large number of test cases from the CELEX database are presented.
cmp-lg/9508006
Bi-Lexical Rules for Multi-Lexeme Translation in Lexicalist MT
cmp-lg cs.CL
The paper presents a prototype lexicalist Machine Translation system (based on the so-called `Shake-and-Bake' approach of Whitelock (1992)) consisting of an analysis component, a dynamic bilingual lexicon, and a generation component, and shows how it is applied to a range of MT problems. Multi-lexeme translations are handled through bi-lexical rules which map bilingual lexical signs into new bilingual lexical signs. It is argued that much translation can be handled by equating translationally equivalent lists of lexical signs, either directly in the bilingual lexicon, or by deriving them through bi-lexical rules. Lexical semantic information organized as Qualia structures (Pustejovsky 1991) is used as a mechanism for restricting the domain of the rules.
cmp-lg/9508007
A Dynamic Approach to Rhythm in Language: Toward a Temporal Phonology
cmp-lg cs.CL
It is proposed that the theory of dynamical systems offers appropriate tools to model many phonological aspects of both speech production and perception. A dynamic account of speech rhythm is shown to be useful for description of both Japanese mora timing and English timing in a phrase repetition task. This orientation contrasts fundamentally with the more familiar symbolic approach to phonology, in which time is modeled only with sequentially arrayed symbols. It is proposed that an adaptive oscillator offers a useful model for perceptual entrainment (or `locking in') to the temporal patterns of speech production. This helps to explain why speech is often perceived to be more regular than experimental measurements seem to justify. Because dynamic models deal with real time, they also help us understand how languages can differ in their temporal detail---contributing to foreign accents, for example. The fact that languages differ greatly in their temporal detail suggests that these effects are not mere motor universals, but that dynamical models are intrinsic components of the phonological characterization of language.
cmp-lg/9508008
On Constraint-Based Lambek Calculi
cmp-lg cs.CL
We explore the consequences of layering a Lambek proof system over an arbitrary (constraint) logic. A simple model-theoretic semantics for our hybrid language is provided, for which a particularly simple combination of Lambek's proof system and that of the base logic is complete. Furthermore, the proof system for the underlying base logic can be treated as a black box. The essential reasoning that the black box must perform is {\em entailment checking}. Assuming feature logic as the base logic, entailment checking amounts to a {\em subsumption} test, which is a well-known quasi-linear time decidable problem.
cmp-lg/9508009
A Labelled Analytic Theorem Proving Environment for Categorial Grammar
cmp-lg cs.CL
We present a system for the investigation of computational properties of categorial grammar parsing based on a labelled analytic tableaux theorem prover. This proof method allows us to take a modular approach, in which the basic grammar can be kept constant, while a range of categorial calculi can be captured by assigning different properties to the labelling algebra. The theorem proving strategy is particularly well suited to the treatment of categorial grammar, because it allows us to distribute the computational cost between the algorithm which deals with the grammatical types and the algebraic checker which constrains the derivation.
cmp-lg/9508010
Heuristics and Parse Ranking
cmp-lg cs.CL
There are currently two philosophies for building grammars and parsers -- statistically induced grammars and wide-coverage grammars. One way to combine the strengths of both approaches is to have a wide-coverage grammar with a heuristic component which is domain independent but whose contribution is tuned to particular domains. In this paper, we discuss a three-stage approach to disambiguation in the context of a lexicalized grammar, using a variety of domain independent heuristic techniques. We present a training algorithm which uses hand-bracketed treebank parses to set the weights of these heuristics. We compare the performance of our grammar against the performance of the IBM statistical grammar, using both untrained and trained weights for the heuristics.
cmp-lg/9508011
The Use of Knowledge Preconditions in Language Processing
cmp-lg cs.CL
If an agent does not possess the knowledge needed to perform an action, it may privately plan to obtain the required information on its own, or it may involve another agent in the planning process by engaging it in a dialogue. In this paper, we show how the requirements of knowledge preconditions can be used to account for information-seeking subdialogues in discourse. We first present an axiomatization of knowledge preconditions for the SharedPlan model of collaborative activity (Grosz & Kraus, 1993), and then provide an analysis of information-seeking subdialogues within a general framework for discourse processing. In this framework, SharedPlans and relationships among them are used to model the intentional component of Grosz and Sidner's (1986) theory of discourse structure.
cmp-lg/9508012
A Natural Law of Succession
cmp-lg cs.CL
Consider the problem of multinomial estimation. You are given an alphabet of k distinct symbols and are told that the i-th symbol occurred exactly n_i times in the past. On the basis of this information alone, you must now estimate the conditional probability that the next symbol will be i. In this report, we present a new solution to this fundamental problem in statistics and demonstrate that our solution outperforms standard approaches, both in theory and in practice.
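For context, the standard approach against which laws of succession are measured is Laplace's add-one estimator, (n_i + 1)/(N + k). A minimal sketch of that classical baseline (the report's own proposed estimator differs and is not reproduced here):

```python
def laplace_estimate(counts, k):
    """Laplace's law of succession: add one to every symbol count.
    counts maps symbol index -> observed count; k is the alphabet size."""
    n = sum(counts.values())
    return {i: (counts.get(i, 0) + 1) / (n + k) for i in range(k)}

# Alphabet of 4 symbols; symbol 0 was seen 3 times, symbol 1 once.
probs = laplace_estimate({0: 3, 1: 1}, k=4)
```

Note that the unseen symbols (2 and 3) still receive probability 1/8 each; how much mass to reserve for unseen events is exactly the question a law of succession must answer.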
cmp-lg/9509001
How much is enough?: Data requirements for statistical NLP
cmp-lg cs.CL
In this paper I explore a number of issues in the analysis of data requirements for statistical NLP systems. A preliminary framework for viewing such systems is proposed and a sample of existing work is compared within this framework. The first steps toward a theory of data requirements are made by establishing some results relevant to bounding the expected error rate of a class of simplified statistical language learners as a function of the volume of training data.
cmp-lg/9509002
Conserving Fuel in Statistical Language Learning: Predicting Data Requirements
cmp-lg cs.CL
In this paper I address the practical concern of predicting how much training data is sufficient for a statistical language learning system. First, I briefly review earlier results and show how these can be combined to bound the expected accuracy of a mode-based learner as a function of the volume of training data. I then develop a more accurate estimate of the expected accuracy function under the assumption that inputs are uniformly distributed. Since this estimate is expensive to compute, I also give a close but cheaply computable approximation to it. Finally, I report on a series of simulations exploring the effects of inputs that are not uniformly distributed. Although these results are based on simplistic assumptions, they are a tentative step toward a useful theory of data requirements for SLL systems.
cmp-lg/9509003
Cluster Expansions and Iterative Scaling for Maximum Entropy Language Models
cmp-lg cs.CL
The maximum entropy method has recently been successfully introduced to a variety of natural language applications. In each of these applications, however, the power of the maximum entropy method is achieved at the cost of a considerable increase in computational requirements. In this paper we present a technique, closely related to the classical cluster expansion from statistical mechanics, for reducing the computational demands necessary to calculate conditional maximum entropy language models.
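For context, generalized iterative scaling (GIS), the procedure whose computational cost the paper's cluster-expansion technique targets, can be sketched on a toy unconditional model (illustrative distribution and features; conditional language models add an outer sum over contexts, which is where the cost explosion arises):

```python
import math

# Toy event space: two binary features plus a slack feature so that
# every event's feature sum equals the constant C required by GIS.
events = [(0, 0), (0, 1), (1, 0), (1, 1)]
C = 2

def feats(x):
    f1, f2 = x
    return [f1, f2, C - f1 - f2]

# Illustrative empirical distribution whose feature expectations we match.
empirical = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}
target = [sum(p * feats(x)[j] for x, p in empirical.items()) for j in range(3)]

lam = [0.0, 0.0, 0.0]
for _ in range(200):
    w = {x: math.exp(sum(l * f for l, f in zip(lam, feats(x)))) for x in events}
    z = sum(w.values())
    model = [sum(w[x] / z * feats(x)[j] for x in events) for j in range(3)]
    # GIS update: shift each parameter by (1/C) log(target / model expectation)
    lam = [l + math.log(t / m) / C for l, t, m in zip(lam, target, model)]

w = {x: math.exp(sum(l * f for l, f in zip(lam, feats(x)))) for x in events}
z = sum(w.values())
p = {x: wx / z for x, wx in w.items()}
```

With only the two marginal constraints, the fitted maximum entropy model is the independence model, so p[(1, 1)] converges to 0.7 * 0.6 = 0.42 rather than the empirical 0.4.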
cmp-lg/9509004
The Development and Migration of Concepts from Donor to Borrower Disciplines: Sublanguage Term Use in Hard & Soft Sciences
cmp-lg cs.CL
Academic disciplines, often divided into hard and soft sciences, may be understood as "donor disciplines" if they produce more concepts than they borrow from other disciplines, or "borrower disciplines" if they import more than they originate. Terms used to describe these concepts can be used to distinguish between hard and soft, donor and borrower, as well as individual discipline-specific sublanguages. Using term frequencies, the birth, growth, death, and migration of concepts and their associated terms are examined.
cmp-lg/9509005
ParseTalk about Textual Ellipsis
cmp-lg cs.CL
A hybrid methodology for the resolution of text-level ellipsis is presented in this paper. It incorporates conceptual proximity criteria applied to ontologically well-engineered domain knowledge bases and an approach to centering based on functional topic/comment patterns. We state text grammatical predicates for ellipsis and then turn to the procedural aspects of their evaluation within the framework of an actor-based implementation of a lexically distributed parser.
cmp-lg/9510001
POS Tagging Using Relaxation Labelling
cmp-lg cs.CL
Relaxation labelling is an optimization technique used in many fields to solve constraint satisfaction problems. The algorithm finds a combination of values for a set of variables that satisfies, to the maximum possible degree, a set of given constraints. This paper describes some experiments performed applying it to POS tagging, and the results obtained. It also considers the possibility of applying it to word sense disambiguation.
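The update rule underlying relaxation labelling is compact enough to sketch. Below, two adjacent ambiguous words each hold a probability distribution over tags, and compatibility coefficients (illustrative values, not taken from the paper) iteratively reinforce mutually compatible labels:

```python
# Two adjacent ambiguous words, tags N and V.  r[(a, b)] in [-1, 1]
# scores how compatible tag a on word 0 is with tag b on word 1.
r = {("N", "V"): 1.0, ("V", "N"): 0.2, ("N", "N"): -0.5, ("V", "V"): -1.0}
tags = ["N", "V"]
p0 = {"N": 0.5, "V": 0.5}  # label probabilities for word 0
p1 = {"N": 0.5, "V": 0.5}  # label probabilities for word 1

for _ in range(20):
    # support each label receives from the other word's current labelling
    q0 = {a: sum(r[(a, b)] * p1[b] for b in tags) for a in tags}
    q1 = {b: sum(r[(a, b)] * p0[a] for a in tags) for b in tags}
    # standard relaxation update p(l) <- p(l)(1 + q(l)), then renormalise
    new0 = {a: p0[a] * (1 + q0[a]) for a in tags}
    new1 = {b: p1[b] * (1 + q1[b]) for b in tags}
    z0, z1 = sum(new0.values()), sum(new1.values())
    p0 = {a: v / z0 for a, v in new0.items()}
    p1 = {b: v / z1 for b, v in new1.items()}
```

After a few iterations the distributions sharpen toward the mutually compatible assignment (N, V), without any variable ever being assigned a hard label during the process.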
cmp-lg/9510002
Using Chinese Text Processing Technique for the Processing of Sanskrit Based Indian Languages: Maximum Resource Utilization and Maximum Compatibility
cmp-lg cs.CL
Chinese text processing systems use double-byte coding, while almost all existing systems for Sanskrit-based Indian languages use single-byte coding. Chinese information processing technology has already achieved great technical development both in the East and in the West; in contrast, Indian languages are processed by computer, more or less, only for word-processing purposes. This paper emphasizes methods for processing Indian languages from a computational-linguistic point of view, and illustrates an overall design method that concentrates on maximum resource utilization and compatibility; the ultimate goal is a multi-platform multilingual system. Keywords: Text Processing, Multilingual Text Processing, Chinese Language Processing, Indian Language Processing, Character Coding.
cmp-lg/9510003
A Proposal for Word Sense Disambiguation using Conceptual Distance
cmp-lg cs.CL
This paper presents a method for the resolution of lexical ambiguity and its automatic evaluation over the Brown Corpus. The method relies on the use of the wide-coverage noun taxonomy of WordNet and the notion of conceptual distance among concepts, captured by a Conceptual Density formula developed for this purpose. This fully automatic method requires no hand-coding of lexical entries, no hand-tagging of text, nor any kind of training process. The results of the experiment have been automatically evaluated against SemCor, the sense-tagged version of the Brown Corpus.
cmp-lg/9510004
Disambiguating bilingual nominal entries against WordNet
cmp-lg cs.CL
This paper explores the acquisition of conceptual knowledge from bilingual dictionaries (French/English, Spanish/English and English/Spanish) using a pre-existing broad coverage Lexical Knowledge Base (LKB), WordNet. Bilingual nominal entries are disambiguated against WordNet, therefore linking the bilingual dictionaries to WordNet and yielding a multilingual LKB (MLKB). The resulting MLKB has the same structure as WordNet, but some nodes are additionally attached to disambiguated vocabulary of other languages. Two different, complementary approaches are explored. In one of the approaches each entry of the dictionary is taken in turn, exploiting the information in the entry itself. The inferential capability for disambiguating the translation is given by Semantic Density over WordNet. In the other approach, the bilingual dictionary was merged with WordNet, exploiting mainly synonymy relations. Each of the approaches was used on a different dictionary. Both approaches attain high levels of precision on their own, showing that disambiguating bilingual nominal entries, and therefore linking bilingual dictionaries to WordNet, is a feasible task.
cmp-lg/9510005
Developing and Evaluating a Probabilistic LR Parser of Part-of-Speech and Punctuation Labels
cmp-lg cs.CL
We describe an approach to robust domain-independent syntactic parsing of unrestricted naturally-occurring (English) input. The technique involves parsing sequences of part-of-speech and punctuation labels using a unification-based grammar coupled with a probabilistic LR parser. We describe the coverage of several corpora using this grammar and report the results of a parsing experiment using probabilities derived from bracketed training data. We report the first substantial experiments to assess the contribution of punctuation to deriving an accurate syntactic analysis, by parsing identical texts both with and without naturally-occurring punctuation marks.
cmp-lg/9510006
Incorporating Discourse Aspects in English -- Polish MT: Towards Robust Implementation
cmp-lg cs.CL
The main aim of translation is an accurate transfer of meaning so that the result is not only grammatically and lexically correct but also communicatively adequate. This paper stresses the need for discourse analysis the aim of which is to preserve the communicative meaning in English--Polish machine translation. Unlike English, which is a positional language with word order grammatically determined, Polish displays a strong tendency to order constituents according to their degree of salience, so that the most informationally salient elements are placed towards the end of the clause regardless of their grammatical function. The Centering Theory developed for tracking down given information units in English and the Theory of Functional Sentence Perspective predicting informativeness of subsequent constituents provide theoretical background for this work. The notion of {\em center} is extended to accommodate not only pronominalisation and exact reiteration but also definiteness and other center-pointing constructs. Center information is additionally graded and applicable to all primary constituents in a given utterance. This information is used to order the post-transfer constituents correctly, relying on statistical regularities and some syntactic clues.
cmp-lg/9510007
Automatic Identification of Support Verbs: A Step Towards a Definition of Semantic Weight
cmp-lg cs.CL
Current definitions of notions of lexical density and semantic weight are based on the division of words into closed and open classes, and on intuition. This paper develops a computationally tractable definition of semantic weight, concentrating on what it means for a word to be semantically light; the definition involves looking at the frequency of a word in particular syntactic constructions which are indicative of lightness. Verbs such as "make" and "take", when they function as support verbs, are often considered to be semantically light. To test our definition, we carried out an experiment based on that of Grefenstette and Teufel (1995), where we automatically identify light instances of these words in a corpus; this was done by incorporating our frequency-related definition of semantic weight into a statistical approach similar to that of Grefenstette and Teufel. The results show that this is a plausible definition of semantic lightness for verbs, which can possibly be extended to defining semantic lightness for other classes of words.
cmp-lg/9510008
Toward an MT System without Pre-Editing --- Effects of New Methods in ALT-J/E ---
cmp-lg cs.CL
Recently, several types of Japanese-to-English machine translation systems have been developed, but all of them require an initial process of rewriting the original text into easily translatable Japanese. Therefore these systems are unsuitable for translating information that needs to be speedily disseminated. To overcome this limitation, a Multi-Level Translation Method based on the Constructive Process Theory has been proposed. This paper describes the benefits of using this method in the Japanese-to-English machine translation system ALT-J/E. In comparison with conventional compositional methods, the Multi-Level Translation Method emphasizes the importance of the meaning contained in expression structures as a whole. It is shown to be capable of translating typical written Japanese based on the meaning of the text in its context, with comparative ease. We are now hopeful of carrying out useful machine translation with no manual pre-editing.
cmp-lg/9511001
Countability and Number in Japanese-to-English Machine Translation
cmp-lg cs.CL
This paper presents a heuristic method that uses information in the Japanese text along with knowledge of English countability and number stored in transfer dictionaries to determine the countability and number of English noun phrases. Incorporating this method into the machine translation system ALT-J/E helped to raise the percentage of noun phrases generated with correct use of articles and number from 65% to 73%.
cmp-lg/9511002
Letting the Cat out of the Bag: Generation for Shake-and-Bake MT
cmp-lg cs.CL
This paper describes an algorithm for the generation phase of a Shake-and-Bake Machine Translation system. Since the problem is NP-complete, it is unlikely that the algorithm will be efficient in all cases, but for the cases tested it offers an improvement over Whitelock's previously published algorithm. The work was carried out while the author was employed at Sharp Laboratories of Europe Ltd.
cmp-lg/9511003
The Effect of Resource Limits and Task Complexity on Collaborative Planning in Dialogue
cmp-lg cs.CL
This paper shows how agents' choice in communicative action can be designed to mitigate the effect of their resource limits in the context of particular features of a collaborative planning task. I first motivate a number of hypotheses about effective language behavior based on a statistical analysis of a corpus of natural collaborative planning dialogues. These hypotheses are then tested in a dialogue testbed whose design is motivated by the corpus analysis. Experiments in the testbed examine the interaction between (1) agents' resource limits in attentional capacity and inferential capacity; (2) agents' choice in communication; and (3) features of communicative tasks that affect task difficulty such as inferential complexity, degree of belief coordination required, and tolerance for errors. The results show that good algorithms for communication must be defined relative to the agents' resource limits and the features of the task. Algorithms that are inefficient for inferentially simple, low coordination or fault-tolerant tasks are effective when tasks require coordination or complex inferences, or are fault-intolerant. The results provide an explanation for the occurrence of utterances in human dialogues that, prima facie, appear inefficient, and provide the basis for the design of effective algorithms for communicative choice for resource limited agents.
cmp-lg/9511004
An investigation into the correlation of cue phrases, unfilled pauses and the structuring of spoken discourse
cmp-lg cs.CL
Expectations about the correlation of cue phrases, the duration of unfilled pauses and the structuring of spoken discourse are framed in light of Grosz and Sidner's theory of discourse and are tested for a directions-giving dialogue. The results suggest that cue phrase and discourse structuring tasks may align, and show a correlation for pause length and some of the modifications that speakers can make to discourse structure.
cmp-lg/9511005
Chart-driven Connectionist Categorial Parsing of Spoken Korean
cmp-lg cs.CL
While most of the speech and natural language systems developed for English and other Indo-European languages neglect morphological processing and integrate speech and natural language at the word level, for agglutinative languages such as Korean and Japanese morphological processing plays a major role in language processing, since these languages have very complex morphological phenomena and relatively simple syntactic functionality. Degenerate morphological processing limits the usable vocabulary size of the system, and a word-level dictionary results in an exponential explosion in the number of dictionary entries. Agglutinative languages therefore call for sub-word-level integration, which leaves room for general morphological processing. In this paper, we develop a phoneme-level model for integrating speech and linguistic processing through general morphological analysis for agglutinative languages, together with an efficient parsing scheme for that integration. Korean is modeled lexically, based on the categorial grammar formalism with unordered-argument and suppressed-category extensions, and a chart-driven connectionist parsing method is introduced.
cmp-lg/9511006
Disambiguating Noun Groupings with Respect to WordNet Senses
cmp-lg cs.CL
Word groupings useful for language processing tasks are increasingly available, as thesauri appear on-line, and as distributional word clustering techniques improve. However, for many tasks, one is interested in relationships among word {\em senses}, not words. This paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns --- the kind of data one finds in on-line thesauri, or as the output of distributional clustering algorithms. Disambiguation is performed with respect to WordNet senses, which are fairly fine-grained; however, the method also permits the assignment of higher-level WordNet categories rather than sense labels. The method is illustrated primarily by example, though results of a more rigorous evaluation are also presented.
cmp-lg/9511007
Using Information Content to Evaluate Semantic Similarity in a Taxonomy
cmp-lg cs.CL
This paper presents a new measure of semantic similarity in an IS-A taxonomy, based on the notion of information content. Experimental evaluation suggests that the measure performs encouragingly well (a correlation of r = 0.79 with a benchmark set of human similarity judgments, with an upper bound of r = 0.90 for human subjects performing the same task), and significantly better than the traditional edge counting approach (r = 0.66).
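The measure can be sketched in a few lines. The toy IS-A taxonomy and counts below are invented for illustration, standing in for the WordNet taxonomy and corpus frequencies used in the paper:

```python
import math

# Toy IS-A taxonomy (child -> parent) and per-concept corpus counts.
# All names and numbers are illustrative, not the paper's data.
parent = {"dime": "coin", "nickel": "coin", "coin": "cash",
          "cash": "money", "credit": "money", "money": None}
count = {"dime": 5, "nickel": 5, "coin": 10, "cash": 15, "credit": 5, "money": 5}
total = sum(count.values())

def subsumers(c):
    """All concepts on the path from c up to the root, including c itself."""
    out = []
    while c is not None:
        out.append(c)
        c = parent[c]
    return out

def p(c):
    """Probability of encountering concept c: counts of c and everything below it."""
    subtree = [x for x in count if c in subsumers(x)]
    return sum(count[x] for x in subtree) / total

def sim(c1, c2):
    """Information content -log p(c) of the most informative common subsumer."""
    common = set(subsumers(c1)) & set(subsumers(c2))
    return max(-math.log(p(c)) for c in common)

print(sim("dime", "nickel") > sim("dime", "credit"))  # → True
```

Closer concepts share a more informative (less probable) subsumer, so they score higher; two concepts whose only common subsumer is the root get similarity zero.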
cmp-lg/9512001
Analysis of the Arabic Broken Plural and Diminutive
cmp-lg cs.CL
This paper demonstrates how the challenging problem of the Arabic broken plural and diminutive can be handled under a multi-tape two-level model, an extension to two-level morphology.
cmp-lg/9512002
The Unsupervised Acquisition of a Lexicon from Continuous Speech
cmp-lg cs.CL
We present an unsupervised learning algorithm that acquires a natural-language lexicon from raw speech. The algorithm is based on the optimal encoding of symbol sequences in an MDL framework, and uses a hierarchical representation of language that overcomes many of the problems that have stymied previous grammar-induction procedures. The forward mapping from symbol sequences to the speech stream is modeled using features based on articulatory gestures. We present results on the acquisition of lexicons and language models from raw speech, text, and phonetic transcripts, and demonstrate that our algorithm compares very favorably to other reported results with respect to segmentation performance and statistical efficiency.
cmp-lg/9512003
Limited Attention and Discourse Structure
cmp-lg cs.CL
This squib examines the role of limited attention in a theory of discourse structure and proposes a model of attentional state that relates current hierarchical theories of discourse structure to empirical evidence about human discourse processing capabilities. First, I present examples that are not predicted by Grosz and Sidner's stack model of attentional state. Then I consider an alternative model of attentional state, the cache model, which accounts for the examples, and which makes particular processing predictions. Finally I suggest a number of ways that future research could distinguish the predictions of the cache model and the stack model.
cmp-lg/9512004
Natural language processing: she needs something old and something new (maybe something borrowed and something blue, too)
cmp-lg cs.CL
Given the present state of work in natural language processing, this address argues first, that advance in both science and applications requires a revival of concern about what language is about, broadly speaking the world; and second, that an attack on the summarising task, which is made ever more important by the growth of electronic text resources and requires an understanding of the role of large-scale discourse structure in marking important text content, is a good way forward.
cmp-lg/9512005
Term Encoding of Typed Feature Structures
cmp-lg cs.CL
This paper presents an approach to Prolog-style term encoding of typed feature structures. The typed feature structures to be encoded are constrained by appropriateness conditions as in Carpenter's ALE system. But unlike ALE, we impose a further, independently motivated closed-world assumption. This assumption allows us to apply term encoding in cases that were problematic for previous approaches. In particular, previous approaches have ruled out multiple inheritance and further specification of feature-value declarations on subtypes. In the present approach, these special cases can be handled as well, though with some increase in complexity. For grammars without multiple inheritance and further specification of feature values, the encoding presented here reduces to that of previous approaches.
cmp-lg/9601001
Automatic Inference of DATR Theories
cmp-lg cs.CL
This paper presents an approach for the automatic acquisition of linguistic knowledge from unstructured data. The acquired knowledge is represented in the lexical knowledge representation language DATR. A set of transformation rules that establish inheritance relationships and a default-inference algorithm make up the basic components of the system. Since the overall approach is not restricted to a special domain, the heuristic inference strategy uses criteria to evaluate the quality of a DATR theory, where different domains may require different criteria. The system is applied to the linguistic learning task of German noun inflection.
cmp-lg/9601002
Generic rules and non-constituent coordination
cmp-lg cs.CL
We present a metagrammatical formalism, {\em generic rules}, to give a default interpretation to grammar rules. Our formalism introduces a process of {\em dynamic binding} that interfaces the level of pure grammatical knowledge representation with the parsing level. We present an approach to non-constituent coordination within categorial grammars, and reformulate it as a generic rule. This reformulation is context-free parsable and drastically reduces the search space associated with parsing such phenomena.
cmp-lg/9601003
Report of the Study Group on Assessment and Evaluation
cmp-lg cs.CL
This is an interim report discussing possible guidelines for the assessment and evaluation of projects developing speech and language systems. It was prepared at the request of the European Commission DG XIII by an ad hoc study group, and is now being made available in the form in which it was submitted to the Commission. However, the report is not an official European Commission document, and does not reflect European Commission policy, official or otherwise. After a discussion of terminology, the report focusses on combining user-centred and technology-centred assessment, and on how meaningful comparisons can be made of a variety of systems performing different tasks for different domains. The report outlines the kind of infrastructure that might be required to support comparative assessment and evaluation of heterogeneous projects, and also presents the results of a questionnaire concerning different approaches to evaluation.
cmp-lg/9601004
Similarity between Words Computed by Spreading Activation on an English Dictionary
cmp-lg cs.CL
This paper proposes a method for measuring semantic similarity between words as a new tool for text analysis. The similarity is measured on a semantic network constructed systematically from a subset of the English dictionary, LDOCE (Longman Dictionary of Contemporary English). Spreading activation on the network can directly compute the similarity between any two words in the Longman Defining Vocabulary, and indirectly the similarity of all the other words in LDOCE. The similarity represents the strength of lexical cohesion or semantic relation, and also provides valuable information about similarity and coherence of texts.
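A toy sketch of the spreading-activation idea on a tiny hand-built network, standing in for the semantic network the paper constructs from LDOCE (the graph and parameters are invented for illustration):

```python
# Tiny hand-built semantic network: word -> neighboring words.
graph = {"bank": ["money", "river"], "money": ["bank", "loan"],
         "loan": ["money"], "river": ["bank", "water"], "water": ["river"]}

def similarity(w1, w2, steps=10, decay=0.5):
    """Activate w1, spread activation along edges for a few steps,
    and read off how much activation reaches w2."""
    act = {w: 0.0 for w in graph}
    act[w1] = 1.0
    for _ in range(steps):
        new = {w: 0.0 for w in graph}
        for w, a in act.items():
            # Each node passes a decayed share of its activation to neighbors.
            for nb in graph[w]:
                new[nb] += decay * a / len(graph[w])
        new[w1] += 1.0  # keep the source node clamped on
        act = new
    return act[w2]

print(similarity("bank", "money") > similarity("bank", "water"))  # → True
```

Words connected by short, strong paths accumulate more activation from the source, so the activation level serves as a graded similarity score.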
cmp-lg/9601005
Text Segmentation Based on Similarity between Words
cmp-lg cs.CL
This paper proposes a new indicator of text structure, called the lexical cohesion profile (LCP), which locates segment boundaries in a text. A text segment is a coherent scene; the words in a segment are linked together via lexical cohesion relations. LCP records mutual similarity of words in a sequence of text. The similarity of words, which represents their cohesiveness, is computed using a semantic network. Comparison with the text segments marked by a number of subjects shows that LCP closely correlates with the human judgments. LCP may provide valuable information for resolving anaphora and ellipsis.
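A rough illustration of the profile idea, not the paper's implementation: the two-dimensional word vectors below stand in for similarity computed on a semantic network, and the window size and words are invented:

```python
# Toy word vectors standing in for network-based word similarity.
vec = {"cat": (1, 0), "dog": (0.9, 0.1), "pet": (0.8, 0.2),
       "tax": (0, 1), "loan": (0.1, 0.9), "bank": (0.2, 0.8)}

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def lcp(words, w=2):
    """Cohesion at each gap: mean similarity between the up-to-w words on
    each side. Local minima of this profile suggest segment boundaries."""
    profile = []
    for i in range(1, len(words)):
        left = words[max(0, i - w):i]
        right = words[i:i + w]
        sims = [cos(vec[a], vec[b]) for a in left for b in right]
        profile.append(sum(sims) / len(sims))
    return profile

text = ["cat", "dog", "pet", "tax", "loan", "bank"]
prof = lcp(text)
boundary = prof.index(min(prof)) + 1  # gap with the lowest cohesion
print(boundary)  # → 3
```

The lowest point of the profile falls at the gap between the animal words and the finance words, which is where a segment boundary would be hypothesized.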
cmp-lg/9601006
Possessive Pronouns as Determiners in Japanese-to-English Machine Translation
cmp-lg cs.CL
Possessive pronouns are used as determiners in English when no equivalent would be used in a Japanese sentence with the same meaning. This paper proposes a heuristic method of generating such possessive pronouns even when there is no equivalent in the Japanese. The method uses information about the use of possessive pronouns in English, treated as a lexical property of nouns, in addition to contextual information about noun phrase referentiality and the subject and main verb of the sentence that the noun phrase appears in. The proposed method has been implemented in NTT Communication Science Laboratories' Japanese-to-English machine translation system ALT-J/E. In a test set of 6,200 sentences, the proposed method increased the number of noun phrases generated with appropriate possessive pronouns by 263, to 609, at the cost of generating 83 noun phrases with inappropriate possessive pronouns.
cmp-lg/9601007
Context-Sensitive Measurement of Word Distance by Adaptive Scaling of a Semantic Space
cmp-lg cs.CL
The paper proposes a computationally feasible method for measuring context-sensitive semantic distance between words. The distance is computed by adaptive scaling of a semantic space. In the semantic space, each word in the vocabulary V is represented by a multi-dimensional vector which is obtained from an English dictionary through a principal component analysis. Given a word set C which specifies a context for measuring word distance, each dimension of the semantic space is scaled up or down according to the distribution of C in the semantic space. In the space thus transformed, distance between words in V becomes dependent on the context C. An evaluation through a word prediction task shows that the proposed measurement successfully extracts the context of a text.
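The paper's exact scaling rule is not reproduced here; the sketch below only illustrates the general idea with invented three-dimensional vectors, using the assumption that each dimension is weighted by the inverse spread of the context words along it, so that dimensions on which the context agrees dominate the distance:

```python
# Invented low-dimensional word vectors; the paper derives its vectors
# from an English dictionary via principal component analysis.
vec = {"doctor": (0.9, 0.1, 0.0), "nurse": (0.8, 0.2, 0.1),
       "lawyer": (0.2, 0.9, 0.1), "hospital": (0.85, 0.15, 0.9)}

def scales(context):
    """One scale factor per dimension, from the context set's spread
    along that axis (an illustrative rule, not the paper's)."""
    dims = len(next(iter(vec.values())))
    out = []
    for d in range(dims):
        vals = [vec[w][d] for w in context]
        mean = sum(vals) / len(vals)
        spread = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        out.append(1.0 / (spread + 0.1))  # small constant avoids division by zero
    return out

def dist(w1, w2, context):
    """Euclidean distance in the axis-scaled space."""
    s = scales(context)
    return sum((si * (a - b)) ** 2
               for si, a, b in zip(s, vec[w1], vec[w2])) ** 0.5

d_medical = dist("doctor", "lawyer", ["doctor", "nurse"])
d_place = dist("doctor", "lawyer", ["doctor", "hospital"])
print(d_medical != d_place)  # the distance depends on the context set
```

The point of the construction is visible in the last two lines: the same word pair is assigned different distances under different context sets, because the axes of the space have been rescaled.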
cmp-lg/9601008
Noun Phrase Reference in Japanese-to-English Machine Translation
cmp-lg cs.CL
This paper shows the necessity of distinguishing different referential uses of noun phrases in machine translation. We argue that differentiating between the generic, referential and ascriptive uses of noun phrases is the minimum necessary to generate articles and number correctly when translating from Japanese to English. Heuristics for determining these differences are proposed for a Japanese-to-English machine translation system. Finally the results of using the proposed heuristics are shown to have raised the percentage of noun phrases generated with correct use of articles and number in the Japanese-to-English machine translation system ALT-J/E from 65% to 77%.
cmp-lg/9601009
A General Architecture for Language Engineering (GATE) - a new approach to Language Engineering R&D
cmp-lg cs.CL
This report argues for the provision of a common software infrastructure for NLP systems. Current trends in Language Engineering research are reviewed as motivation for this infrastructure, and relevant recent work discussed. A freely-available system called GATE is described which builds on this work.
cmp-lg/9601010
Parsing with Typed Feature Structures
cmp-lg cs.CL
In this paper we provide for parsing with respect to grammars expressed in a general TFS-based formalism, a restriction of ALE. Our motivation being the design of an abstract (WAM-like) machine for the formalism, we consider parsing as a computational process and use it as an operational semantics to guide the design of the control structures for the abstract machine. We emphasize the notion of abstract typed feature structures (AFSs) that encode the essential information of TFSs and define unification over AFSs rather than over TFSs. We then introduce an explicit construct of multi-rooted feature structures (MRSs) that naturally extend TFSs and use them to represent phrasal signs as well as grammar rules. We also employ abstractions of MRSs and give the mathematical foundations needed for manipulating them. We then present a simple bottom-up chart parser as a model for computation: grammars written in the TFS-based formalism are executed by the parser. Finally, we show that the parser is correct.
cmp-lg/9601011
Parsing with Typed Feature Structures
cmp-lg cs.CL
In this paper we provide for parsing with respect to grammars expressed in a general TFS-based formalism, a restriction of ALE. Our motivation being the design of an abstract (WAM-like) machine for the formalism, we consider parsing as a computational process and use it as an operational semantics to guide the design of the control structures for the abstract machine. We emphasize the notion of abstract typed feature structures (AFSs) that encode the essential information of TFSs and define unification over AFSs rather than over TFSs. We then introduce an explicit construct of multi-rooted feature structures (MRSs) that naturally extend TFSs and use them to represent phrasal signs as well as grammar rules. We also employ abstractions of MRSs and give the mathematical foundations needed for manipulating them. We formally define grammars and the languages they generate, and then describe a model for computation that corresponds to bottom-up chart parsing: grammars written in the TFS-based formalism are executed by the parser. We show that the computation is correct with respect to the independent definition. Finally, we discuss the class of grammars for which computations terminate and prove that termination can be guaranteed for off-line parsable grammars.
cmp-lg/9602001
How Part-of-Speech Tags Affect Text Retrieval and Filtering Performance
cmp-lg cs.CL
Natural language processing (NLP) applied to information retrieval (IR) and filtering problems may assign part-of-speech tags to terms and, more generally, modify queries and documents. Analytic models can predict the performance of a text filtering system as it incorporates changes suggested by NLP, allowing us to make precise statements about the average effect of NLP operations on IR. Here we provide a model of retrieval and tagging that allows us both to compute the performance change due to syntactic parsing and to understand what factors affect performance and how. In addition to a prediction of performance with tags, upper and lower bounds for retrieval performance are derived, giving the best and worst effects of including part-of-speech tags. Empirical grounds for selecting sets of tags are considered.
cmp-lg/9602002
Situations and Computation: An Overview of Recent Research
cmp-lg cs.CL
Serious thinking about the computational aspects of situation theory is just starting. There have been some recent proposals in this direction (viz. PROSIT and ASTL), with varying degrees of divergence from the ontology of the theory. We believe that a programming environment incorporating bona fide situation-theoretic constructs is needed and describe our very recent BABY-SIT implementation. A detailed critical account of PROSIT and ASTL is also offered in order to compare our system with these pioneering and influential frameworks.
cmp-lg/9602003
Text Windows and Phrases Differing by Discipline, Location in Document, and Syntactic Structure
cmp-lg cs.CL
Knowledge of window style, content, location and grammatical structure may be used to classify documents as originating within a particular discipline or may be used to place a document on a theory versus practice spectrum. This distinction is also studied here using the type-token ratio to differentiate between sublanguages. The statistical significance of windows is computed, based on the presence of terms in titles, abstracts, citations, and section headers, as well as binary independent (BI) and inverse document frequency (IDF) weightings. The characteristics of windows are studied by examining their within window density (WWD) and the S concentration (SC), the concentration of terms from various document fields (e.g. title, abstract) in the fulltext. The rate of window occurrences from the beginning to the end of document fulltext differs between academic fields. Different syntactic structures in sublanguages are examined, and their use is considered for discriminating between specific academic disciplines and, more generally, between theory versus practice or knowledge versus applications oriented documents.
cmp-lg/9602004
Assessing agreement on classification tasks: the kappa statistic
cmp-lg cs.CL
Currently, computational linguists and cognitive scientists working in the area of discourse and dialogue argue that their subjective judgments are reliable using several different statistics, none of which are easily interpretable or comparable to each other. Meanwhile, researchers in content analysis have already experienced the same difficulties and come up with a solution in the kappa statistic. We discuss what is wrong with reliability measures as they are currently used for discourse and dialogue work in computational linguistics and cognitive science, and argue that we would be better off as a field adopting techniques from content analysis.
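The statistic the authors advocate is straightforward to compute. The following is a minimal sketch of Cohen's kappa for two coders, with invented label data for illustration:

```python
def kappa(a, b, labels):
    """Cohen's kappa for two coders' labels on the same items:
    (P(A) - P(E)) / (1 - P(E)), agreement corrected for chance."""
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement P(A): fraction of items labeled identically.
    p_a = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement P(E): product of the coders' marginal label frequencies.
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_a - p_e) / (1 - p_e)

coder1 = ["yes", "yes", "no", "yes", "no", "no"]
coder2 = ["yes", "no", "no", "yes", "no", "yes"]
print(round(kappa(coder1, coder2, ["yes", "no"]), 3))  # → 0.333
```

Here the coders agree on 4 of 6 items (P(A) = 0.667), but chance alone would give P(E) = 0.5, so kappa credits them with only a third of the agreement above chance; this correction is what makes kappa interpretable across studies with different label distributions.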
cmp-lg/9603001
Speech Recognition by Composition of Weighted Finite Automata
cmp-lg cs.CL
We present a general framework based on weighted finite automata and weighted finite-state transducers for describing and implementing speech recognizers. The framework allows us to represent uniformly the information sources and data structures used in recognition, including context-dependent units, pronunciation dictionaries, language models and lattices. Furthermore, general but efficient algorithms can be used for combining information sources in actual recognizers and for optimizing their application. In particular, a single composition algorithm is used both to combine in advance information sources such as language models and dictionaries, and to combine acoustic observations and information sources dynamically during recognition.
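A minimal sketch of the core operation, composition of weighted transducers in the tropical semiring (weights are negative log probabilities, so path weights add). The representation and names are illustrative, not the paper's, and epsilon labels are ignored for simplicity:

```python
# A transducer here is a dict: state -> list of (input, output, weight, next_state).
def compose(t1, f1, t2, f2):
    """Compose transducers t1 (final state f1) and t2 (final state f2).
    An arc i:m/w1 in t1 pairs with an arc m:o/w2 in t2 when the middle
    label m matches, yielding i:o/(w1 + w2) in the result. Composed
    states are pairs of component states."""
    arcs = {}
    for s1, out1 in t1.items():
        for (i, m, w1, n1) in out1:
            for s2, out2 in t2.items():
                for (m2, o, w2, n2) in out2:
                    if m == m2:
                        arcs.setdefault((s1, s2), []).append(
                            (i, o, w1 + w2, (n1, n2)))
    return arcs, (f1, f2)

# t1 maps symbol "a" to intermediate label "A" with weight 1.0;
# t2 maps "A" to output "x" with weight 2.0.
t1 = {0: [("a", "A", 1.0, 1)], 1: []}
t2 = {0: [("A", "x", 2.0, 1)], 1: []}
arcs, final = compose(t1, 1, t2, 1)
print(arcs[(0, 0)])  # → [('a', 'x', 3.0, (1, 1))]
```

In a recognizer, the same cascade pattern composes, for example, a context-dependency transducer with a pronunciation dictionary and a language model; real implementations add epsilon handling, lazy state expansion, and shortest-path search over the result.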
cmp-lg/9603002
Finite-State Approximation of Phrase-Structure Grammars
cmp-lg cs.CL
Phrase-structure grammars are effective models for important syntactic and semantic aspects of natural languages, but can be computationally too demanding for use as language models in real-time speech recognition. Therefore, finite-state models are used instead, even though they lack expressive power. To reconcile those two alternatives, we designed an algorithm to compute finite-state approximations of context-free grammars and context-free-equivalent augmented phrase-structure grammars. The approximation is exact for certain context-free grammars generating regular languages, including all left-linear and right-linear context-free grammars. The algorithm has been used to build finite-state language models for limited-domain speech recognition tasks.
cmp-lg/9603003
Attempto Controlled English (ACE)
cmp-lg cs.CL
Attempto Controlled English (ACE) allows domain specialists to interactively formulate requirements specifications in domain concepts. ACE can be accurately and efficiently processed by a computer, but is expressive enough to allow natural usage. The Attempto system translates specification texts in ACE into discourse representation structures and optionally into Prolog. Translated specification texts are incrementally added to a knowledge base. This knowledge base can be queried in ACE for verification, and it can be executed for simulation, prototyping and validation of the specification.