| id (string, lengths 9-16) | title (string, lengths 4-278) | categories (string, lengths 5-104) | abstract (string, lengths 6-4.09k) |
|---|---|---|---|
cmp-lg/9702013
|
Knowledge Acquisition for Content Selection
|
cmp-lg cs.CL
|
An important part of building a natural-language generation (NLG) system is
knowledge acquisition (KA), that is, deciding on the specific schemas, plans,
grammar rules, and so forth that should be used in the NLG system. We discuss
some experiments we have performed with KA for content-selection rules, in the
context of building an NLG system which generates health-related material.
These experiments suggest that it is useful to supplement corpus analysis with
KA techniques developed for building expert systems, such as structured group
discussions and think-aloud protocols. They also raise the point that KA issues
may influence architectural design issues, in particular the decision on
whether a planning approach is used for content selection. We suspect that in
some cases, KA may be easier if other constructive expert-system techniques
(such as production rules, or case-based reasoning) are used to determine the
content of a generated text.
|
cmp-lg/9702014
|
Building a Generation Knowledge Source using Internet-Accessible
Newswire
|
cmp-lg cs.CL
|
In this paper, we describe a method for automatic creation of a knowledge
source for text generation using information extraction over the Internet. We
present a prototype system called PROFILE which uses a client-server
architecture to extract noun-phrase descriptions of entities such as people,
places, and organizations. The system serves two purposes: as an information
extraction tool, it allows users to search for textual descriptions of
entities; as a utility to generate functional descriptions (FD), it is used in
a functional-unification based generation system. We present an evaluation of
the approach and its applications to natural language generation and
summarization.
|
cmp-lg/9702015
|
Improvising Linguistic Style: Social and Affective Bases for Agent
Personality
|
cmp-lg cs.CL
|
This paper introduces Linguistic Style Improvisation, a theory and set of
algorithms for improvisation of spoken utterances by artificial agents, with
applications to interactive story and dialogue systems. We argue that
linguistic style is a key aspect of character, and show how speech act
representations common in AI can provide abstract representations from which
computer characters can improvise. We show that the mechanisms proposed
introduce the possibility of socially oriented agents, meet the requirements
that lifelike characters be believable, and satisfy particular criteria for
improvisation proposed by Hayes-Roth.
|
cmp-lg/9702016
|
Instructions for Temporal Annotation of Scheduling Dialogs
|
cmp-lg cs.CL
|
Human annotation of natural language facilitates standardized evaluation of
natural language processing systems and supports automated feature extraction.
This document consists of instructions for annotating the temporal information
in scheduling dialogs, dialogs in which the participants schedule a meeting
with one another. Task-oriented dialogs such as these arise in many useful
applications, for instance automated information providers and automated phone
operators. Explicit instructions support good inter-rater
reliability and serve as documentation for the classes being annotated.
|
cmp-lg/9703001
|
Domain Adaptation with Clustered Language Models
|
cmp-lg cs.CL
|
In this paper, a method of domain adaptation for clustered language models is
developed. It is based on a previously developed clustering algorithm, but with
a modified optimisation criterion. The results are shown to be slightly
superior to the previously published 'Fillup' method, which can be used to
adapt standard n-gram models. However, the improvement both methods give
compared to models built from scratch on the adaptation data is quite small
(less than 11% relative improvement in word error rate). This suggests that
both methods are still unsatisfactory from a practical point of view.
|
cmp-lg/9703002
|
Concept Clustering and Knowledge Integration from a Children's
Dictionary
|
cmp-lg cs.CL
|
Knowledge structures called Concept Clustering Knowledge Graphs (CCKGs) are
introduced along with a process for their construction from a machine readable
dictionary. CCKGs contain multiple concepts interrelated through multiple
semantic relations together forming a semantic cluster represented by a
conceptual graph. The knowledge acquisition is performed on a children's first
dictionary. A collection of conceptual clusters together can form the basis of
a lexical knowledge base, where each CCKG contains a limited number of highly
connected words giving useful information about a particular domain or
situation.
|
cmp-lg/9703003
|
A Semantics-based Communication System for Dysphasic Subjects
|
cmp-lg cs.CL
|
Dysphasic subjects do not have complete linguistic abilities and only produce
a weakly structured, topicalized language. They are offered artificial symbolic
languages to help them communicate in a way more adapted to their linguistic
abilities. After a structural analysis of a corpus of utterances from children
with cerebral palsy, we define a semantic lexicon for such a symbolic language.
We use it as the basis of a semantic analysis process able to retrieve an
interpretation of the utterances. This semantic analyser is currently used in
an application designed to convert iconic languages into natural language; it
might find other uses in the field of language rehabilitation.
|
cmp-lg/9703004
|
Insights into the Dialogue Processing of VERBMOBIL
|
cmp-lg cs.CL
|
We present the dialogue module of the speech-to-speech translation system
VERBMOBIL. We follow the approach that the solution to dialogue processing in a
mediating scenario cannot depend on a single constrained processing tool, but
on a combination of several simple, efficient, and robust components. We show
how our solution to dialogue processing works when applied to real data, and
give some examples where our module contributes to the correct translation from
German to English.
|
cmp-lg/9703005
|
Semi-Automatic Acquisition of Domain-Specific Translation Lexicons
|
cmp-lg cs.CL
|
We investigate the utility of an algorithm for translation lexicon
acquisition (SABLE), used previously on a very large corpus to acquire general
translation lexicons, when that algorithm is applied to a much smaller corpus
to produce candidates for domain-specific translation lexicons.
|
cmp-lg/9704001
|
Evaluating Multilingual Gisting of Web Pages
|
cmp-lg cs.CL
|
We describe a prototype system for multilingual gisting of Web pages, and
present an evaluation methodology based on the notion of gisting as decision
support. This evaluation paradigm is straightforward, rigorous, permits fair
comparison of alternative approaches, and should easily generalize to
evaluation in other situations where the user is faced with decision-making on
the basis of information in restricted or alternative form.
|
cmp-lg/9704002
|
A Maximum Entropy Approach to Identifying Sentence Boundaries
|
cmp-lg cs.CL
|
We present a trainable model for identifying sentence boundaries in raw text.
Given a corpus annotated with sentence boundaries, our model learns to classify
each occurrence of ., ?, and ! as either a valid or invalid sentence boundary.
The training procedure requires no hand-crafted rules, lexica, part-of-speech
tags, or domain-specific information. The model can therefore be trained easily
on any genre of English, and should be trainable on any other Roman-alphabet
language. Performance is comparable to or better than the performance of
similar systems, but we emphasize the simplicity of retraining for new domains.
|
cmp-lg/9704003
|
Machine Transliteration
|
cmp-lg cs.CL
|
It is challenging to translate names and technical terms across languages
with different alphabets and sound inventories. These items are commonly
transliterated, i.e., replaced with approximate phonetic equivalents. For
example, "computer" in English comes out as "konpyuutaa" in Japanese.
Translating such items from Japanese back to English is even more challenging,
and of practical interest, as transliterated items make up the bulk of text
phrases not found in bilingual dictionaries. We describe and evaluate a method
for performing backwards transliterations by machine. This method uses a
generative model, incorporating several distinct stages in the transliteration
process.
|
cmp-lg/9704004
|
PARADISE: A Framework for Evaluating Spoken Dialogue Agents
|
cmp-lg cs.CL
|
This paper presents PARADISE (PARAdigm for DIalogue System Evaluation), a
general framework for evaluating spoken dialogue agents. The framework
decouples task requirements from an agent's dialogue behaviors, supports
comparisons among dialogue strategies, enables the calculation of performance
over subdialogues and whole dialogues, specifies the relative contribution of
various factors to performance, and makes it possible to compare agents
performing different tasks by normalizing for task complexity.
|
cmp-lg/9704005
|
Tracking Initiative in Collaborative Dialogue Interactions
|
cmp-lg cs.CL
|
In this paper, we argue for the need to distinguish between task and dialogue
initiatives, and present a model for tracking shifts in both types of
initiatives in dialogue interactions. Our model predicts the initiative holders
in the next dialogue turn based on the current initiative holders and the
effect that observed cues have on changing them. Our evaluation across various
corpora shows that the use of cues consistently improves the accuracy in the
system's prediction of task and dialogue initiative holders by 2-4 and 8-13
percentage points, respectively, thus illustrating the generality of our model.
|
cmp-lg/9704006
|
Representing Constraints with Automata
|
cmp-lg cs.CL
|
In this paper we describe an approach to constraint-based syntactic theories
in terms of finite tree automata. The solutions to constraints expressed in
weak monadic second order (MSO) logic are represented by tree automata
recognizing the assignments which make the formulas true. We show that this
allows an efficient representation of knowledge about the content of
constraints which can be used as a practical tool for grammatical theory
verification. We achieve this by using the intertranslatability of formulas of
MSO logic and tree automata and the embedding of MSO logic into a constraint
logic programming scheme. The usefulness of the approach is discussed with
examples from the realm of Principles-and-Parameters based parsing.
|
cmp-lg/9704007
|
Combining Unsupervised Lexical Knowledge Methods for Word Sense
Disambiguation
|
cmp-lg cs.CL
|
This paper presents a method to combine a set of unsupervised algorithms that
can accurately disambiguate word senses in a large, completely untagged corpus.
Although most of the techniques for word sense resolution have been presented
as stand-alone, it is our belief that full-fledged lexical ambiguity resolution
should combine several information sources and techniques. The set of
techniques have been applied in a combined way to disambiguate the genus terms
of two machine-readable dictionaries (MRD), enabling us to construct complete
taxonomies for Spanish and French. Tested accuracy is above 80% overall and 95%
for two-way ambiguous genus terms, showing that taxonomy building is not
limited to structured dictionaries such as LDOCE.
|
cmp-lg/9704008
|
Intonational Boundaries, Speech Repairs and Discourse Markers: Modeling
Spoken Dialog
|
cmp-lg cs.CL
|
To understand a speaker's turn in a conversation, one needs to segment it
into intonational phrases, clean up any speech repairs that might have
occurred, and identify discourse markers. In this paper, we argue that these
problems must be resolved together, and that they must be resolved early in the
processing stream. We put forward a statistical language model that resolves
these problems, does POS tagging, and can be used as the language model of a
speech recognizer. We find that by accounting for the interactions between
these tasks, the performance on each task improves, as do POS tagging accuracy
and perplexity.
|
cmp-lg/9704009
|
Developing a hybrid NP parser
|
cmp-lg cs.CL
|
We describe the use of energy function optimization in very shallow syntactic
parsing. The approach can use linguistic rules and corpus-based statistics, so
the strengths of both linguistic and statistical approaches to NLP can be
combined in a single framework. The rules are contextual constraints for
resolving syntactic ambiguities expressed as alternative tags, and the
statistical language model consists of corpus-based n-grams of syntactic tags.
The success of the hybrid syntactic disambiguator is evaluated against a
held-out benchmark corpus, and the contributions of the linguistic and
statistical language models to the hybrid model are estimated.
|
cmp-lg/9704010
|
The Theoretical Status of Ontologies in Natural Language Processing
|
cmp-lg cs.CL
|
This paper discusses the use of `ontologies' in Natural Language Processing.
It classifies various kinds of ontologies that have been employed in NLP and
discusses various benefits and problems with those designs. Particular focus is
then placed on experiences gained in the use of the Upper Model, a
linguistically-motivated `ontology' originally designed for use with the Penman
text generation system. Some proposals for further NLP ontology design criteria
are then made.
|
cmp-lg/9704011
|
Morphological Disambiguation by Voting Constraints
|
cmp-lg cs.CL
|
We present a constraint-based morphological disambiguation system in which
individual constraints vote on matching morphological parses, and
disambiguation of all the tokens in a sentence is performed at the end by
selecting parses that receive the highest votes. This constraint application
paradigm makes the outcome of the disambiguation independent of the rule
sequence, and hence relieves the rule developer from worrying about potentially
conflicting rule sequencing. Our results for disambiguating Turkish indicate
that using about 500 constraint rules and some additional simple statistics, we
can attain a recall of 95-96% and a precision of 94-95% with about 1.01 parses
per token. Our system is implemented in Prolog and we are currently
investigating an efficient implementation based on finite state transducers.
|
cmp-lg/9704012
|
Emphatic generation: employing the theory of semantic emphasis for text
generation
|
cmp-lg cs.CL
|
The paper deals with the problem of text generation and planning approaches
making only limited formally specifiable contact with accounts of grammar. We
propose an enhancement of a systemically-based generation architecture for
German (the KOMET system) by aspects of Kunze's theory of semantic emphasis.
Doing this, we gain more control over both concept selection in generation and
choice of fine-grained grammatical variation.
|
cmp-lg/9704013
|
A Theory of Parallelism and the Case of VP Ellipsis
|
cmp-lg cs.CL
|
We provide a general account of parallelism in discourse, and apply it to the
special case of resolving possible readings for instances of VP ellipsis. We
show how several problematic examples are accounted for in a natural and
straightforward fashion. The generality of the approach makes it directly
applicable to a variety of other types of ellipsis and reference.
|
cmp-lg/9704014
|
Centering in-the-large: Computing referential discourse segments
|
cmp-lg cs.CL
|
We specify an algorithm that builds up a hierarchy of referential discourse
segments from local centering data. The spatial extension and nesting of these
discourse segments constrain the reachability of potential antecedents of an
anaphoric expression beyond the local level of adjacent center pairs. Thus, the
centering model is scaled up to the level of the global referential structure
of discourse. An empirical evaluation of the algorithm is supplied.
|
cmp-lg/9705001
|
Co-evolution of Language and of the Language Acquisition Device
|
cmp-lg cs.CL
|
A new account of parameter setting during grammatical acquisition is
presented in terms of Generalized Categorial Grammar embedded in a default
inheritance hierarchy, providing a natural partial ordering on the setting of
parameters. Experiments show that several experimentally effective learners can
be defined in this framework. Evolutionary simulations suggest that a learner
with default initial settings for parameters will emerge, provided that
learning is memory limited and the environment of linguistic adaptation
contains an appropriate language.
|
cmp-lg/9705002
|
Sloppy Identity
|
cmp-lg cs.CL
|
Although sloppy interpretation is usually accounted for by theories of
ellipsis, it often arises in non-elliptical contexts. In this paper, a theory
of sloppy interpretation is provided which captures this fact. The underlying
idea is that sloppy interpretation results from a semantic constraint on
parallel structures and the theory is shown to predict sloppy readings for
deaccented and paycheck sentences as well as relational-, event-, and
one-anaphora. It is further shown to capture the interaction of sloppy/strict
ambiguity with quantification and binding.
|
cmp-lg/9705003
|
Grammatical analysis in the OVIS spoken-dialogue system
|
cmp-lg cs.CL
|
We argue that grammatical processing is a viable alternative to concept
spotting for processing spoken input in a practical dialogue system. We discuss
the structure of the grammar, the properties of the parser, and a method for
achieving robustness. We discuss test results suggesting that grammatical
processing allows fast and accurate processing of spoken input.
|
cmp-lg/9705004
|
Computing Parallelism in Discourse
|
cmp-lg cs.CL
|
Although much has been said about parallelism in discourse, a formal,
computational theory of parallelism structure is still outstanding. In this
paper, we present a theory which given two parallel utterances predicts which
are the parallel elements. The theory consists of a sorted, higher-order
abductive calculus and we show that it reconciles the insights of discourse
theories of parallelism with those of Higher-Order Unification approaches to
discourse semantics, thereby providing a natural framework in which to capture
the effect of parallelism on discourse semantics.
|
cmp-lg/9705005
|
Document Classification Using a Finite Mixture Model
|
cmp-lg cs.CL
|
We propose a new method of classifying documents into categories. The simple
method of conducting hypothesis testing over word-based distributions in
categories suffers from the data sparseness problem. In order to address this
difficulty, Guthrie et al. have developed a method using distributions based on
hard clustering of words, i.e., in which a word is assigned to a single cluster
and words in the same cluster are treated uniformly. This method might,
however, degrade classification results, since the distributions it employs are
not always precise enough for representing the differences between categories.
We propose here the use of soft clustering of words, i.e., in which a word can
be assigned to several different clusters and each cluster is characterized by
a specific word probability distribution. We define for each document category
a finite mixture model, which is a linear combination of the probability
distributions of the clusters. We thereby treat the problem of classifying
documents as that of conducting statistical hypothesis testing over finite
mixture models. In order to accomplish this testing, we employ the EM algorithm
which helps efficiently estimate parameters in a finite mixture model.
Experimental results indicate that our method outperforms not only the method
using distributions based on hard clustering, but also the method using
word-based distributions and the method based on cosine-similarity.
|
cmp-lg/9705006
|
Quantitative Constraint Logic Programming for Weighted Grammar
Applications
|
cmp-lg cs.CL
|
Constraint logic grammars provide a powerful formalism for expressing complex
logical descriptions of natural language phenomena in exact terms. Describing
some of these phenomena may, however, require some form of graded distinctions
which are not provided by such grammars. Recent approaches to weighted
constraint logic grammars attempt to address this issue by adding numerical
calculation schemata to the deduction scheme of the underlying CLP framework.
Currently, these extralogical extensions are not related to the model-theoretic
counterpart of the operational semantics of CLP, i.e., they do not come with a
formal semantics at all. The aim of this paper is to present a clear formal
semantics for weighted constraint logic grammars, which abstracts away from
specific interpretations of weights, but nevertheless gives insights into the
parsing problem for such weighted grammars. Building on the formalization of
constraint logic grammars in the CLP scheme of Hoehfeld and Smolka 1988, this
formal semantics will be given by a quantitative version of CLP. Such a
quantitative CLP scheme can also be valuable for CLP tasks independent of
grammars.
|
cmp-lg/9705007
|
Recycling Lingware in a Multilingual MT System
|
cmp-lg cs.CL
|
We describe two methods relevant to multi-lingual machine translation
systems, which can be used to port linguistic data (grammars, lexicons and
transfer rules) between systems used for processing related languages. The
methods are fully implemented within the Spoken Language Translator system, and
were used to create versions of the system for two new language pairs using
only a month of expert effort.
|
cmp-lg/9705008
|
The TreeBanker: a Tool for Supervised Training of Parsed Corpora
|
cmp-lg cs.CL
|
I describe the TreeBanker, a graphical tool for the supervised training
involved in domain customization of the disambiguation component of a speech-
or language-understanding system. The TreeBanker presents a user, who need not
be a system expert, with a range of properties that distinguish competing
analyses for an utterance and that are relatively easy to judge. This allows
training on a corpus to be completed in far less time, and with far less
expertise, than would be needed if analyses were inspected directly: it becomes
possible for a corpus of about 20,000 sentences of the complexity of those in
the ATIS corpus to be judged in around three weeks of work by a linguistically
aware non-expert.
|
cmp-lg/9705009
|
Charts, Interaction-Free Grammars, and the Compact Representation of
Ambiguity
|
cmp-lg cs.CL
|
Recently researchers working in the LFG framework have proposed algorithms
for taking advantage of the implicit context-free components of a unification
grammar [Maxwell 96]. This paper clarifies the mathematical foundations of
these techniques, provides a uniform framework in which they can be formally
studied and eliminates the need for special purpose runtime data-structures
recording ambiguity. The paper posits the identity: Ambiguous Feature
Structures = Grammars, which states that (finitely) ambiguous representations
are best seen as unification grammars of a certain type, here called
``interaction-free'' grammars, which generate in a backtrack-free way each of
the feature structures subsumed by the ambiguous representation. This work
extends a line of research [Billot and Lang 89, Lang 94] which stresses the
connection between charts and grammars: a chart can be seen as a specialization
of the reference grammar for a given input string. We show how this
specialization grammar can be transformed into an interaction-free form which
has the same practicality as a listing of the individual solutions, but is
produced in less time and space.
|
cmp-lg/9705010
|
Memory-Based Learning: Using Similarity for Smoothing
|
cmp-lg cs.CL
|
This paper analyses the relation between the use of similarity in
Memory-Based Learning and the notion of backed-off smoothing in statistical
language modeling. We show that the two approaches are closely related, and we
argue that feature weighting methods in the Memory-Based paradigm can offer the
advantage of automatically specifying a suitable domain-specific hierarchy
between most specific and most general conditioning information without the
need for a large number of parameters. We report two applications of this
approach: PP-attachment and POS-tagging. Our method achieves state-of-the-art
performance in both domains, and allows the easy integration of diverse
information sources, such as rich lexical representations.
|
cmp-lg/9705011
|
A Lexicon for Underspecified Semantic Tagging
|
cmp-lg cs.CL
|
The paper defends the notion that semantic tagging should be viewed as more
than disambiguation between senses. Instead, semantic tagging should be a first
step in the interpretation process by assigning each lexical item a
representation of all of its systematically related senses, from which further
semantic processing steps can derive discourse dependent interpretations. This
leads to a new type of semantic lexicon (CoreLex) that supports underspecified
semantic tagging through a design based on systematic polysemous classes and a
class-based acquisition of lexical knowledge for specific domains.
|
cmp-lg/9705012
|
A Comparative Study of the Application of Different Learning Techniques
to Natural Language Interfaces
|
cmp-lg cs.CL
|
In this paper we present first results from a comparative study. Its aim is
to test the feasibility of different inductive learning techniques to perform
the automatic acquisition of linguistic knowledge within a natural language
database interface. In our interface architecture the machine learning module
replaces an elaborate semantic analysis component. The learning module learns
the correct mapping of a user's input to the corresponding database command
based on a collection of past input data. We use an existing interface to a
production planning and control system for evaluation, and compare the results
achieved by different instance-based and model-based learning algorithms.
|
cmp-lg/9705013
|
FASTUS: A Cascaded Finite-State Transducer for Extracting Information
from Natural-Language Text
|
cmp-lg cs.CL
|
FASTUS is a system for extracting information from natural language text for
entry into a database and for other applications. It works essentially as a
cascaded, nondeterministic finite-state automaton. There are five stages in the
operation of FASTUS. In Stage 1, names and other fixed form expressions are
recognized. In Stage 2, basic noun groups, verb groups, and prepositions and
some other particles are recognized. In Stage 3, certain complex noun groups
and verb groups are constructed. Patterns for events of interest are identified
in Stage 4 and corresponding ``event structures'' are built. In Stage 5,
distinct event structures that describe the same event are identified and
merged, and these are used in generating database entries. This decomposition
of language processing enables the system to do exactly the right amount of
domain-independent syntax, so that domain-dependent semantic and pragmatic
processing can be applied to the right larger-scale structures. FASTUS is very
efficient and effective, and has been used successfully in a number of
applications.
|
cmp-lg/9705014
|
Incorporating POS Tagging into Language Modeling
|
cmp-lg cs.CL
|
Language models for speech recognition tend to concentrate solely on
recognizing the words that were spoken. In this paper, we redefine the speech
recognition problem so that its goal is to find both the best sequence of words
and their syntactic role (part-of-speech) in the utterance. This is a necessary
first step towards tightening the interaction between speech recognition and
natural language understanding.
|
cmp-lg/9705015
|
Translation Methodology in the Spoken Language Translator: An Evaluation
|
cmp-lg cs.CL
|
In this paper we describe how the translation methodology adopted for the
Spoken Language Translator (SLT) addresses the characteristics of the speech
translation task in a context where it is essential to achieve easy
customization to new languages and new domains. We then discuss the issues that
arise in any attempt to evaluate a speech translator, and present the results
of such an evaluation carried out on SLT for several language pairs.
|
cmp-lg/9705016
|
Sense Tagging: Semantic Tagging with a Lexicon
|
cmp-lg cs.CL
|
Sense tagging, the automatic assignment of the appropriate sense from some
lexicon to each of the words in a text, is a specialised instance of the
general problem of semantic tagging by category or type. We discuss which
recent word sense disambiguation algorithms are appropriate for sense tagging.
It is our belief that sense tagging can be carried out effectively by combining
several simple, independent, methods and we include the design of such a
tagger. A prototype of this system has been implemented, correctly tagging 86%
of polysemous word tokens in a small test set, providing evidence that our
hypothesis is correct.
|
cmp-lg/9706001
|
Assigning Grammatical Relations with a Back-off Model
|
cmp-lg cs.CL
|
This paper presents a corpus-based method to assign grammatical
subject/object relations to ambiguous German constructs. It makes use of an
unsupervised learning procedure to collect training and test data, and the
back-off model to make assignment decisions.
|
cmp-lg/9706002
|
Learning Parse and Translation Decisions From Examples With Rich Context
|
cmp-lg cs.CL
|
We present a knowledge and context-based system for parsing and translating
natural language and evaluate it on sentences from the Wall Street Journal.
Applying machine learning techniques, the system uses parse action examples
acquired under supervision to generate a deterministic shift-reduce parser in
the form of a decision structure. It relies heavily on context, as encoded in
features which describe the morphological, syntactic, semantic and other
aspects of a given parse state.
|
cmp-lg/9706003
|
Three New Probabilistic Models for Dependency Parsing: An Exploration
|
cmp-lg cs.CL
|
After presenting a novel O(n^3) parsing algorithm for dependency grammar, we
develop three contrasting ways to stochasticize it. We propose (a) a lexical
affinity model where words struggle to modify each other, (b) a sense tagging
model where words fluctuate randomly in their selectional preferences, and (c)
a generative model where the speaker fleshes out each word's syntactic and
conceptual structure without regard to the implications for the hearer. We also
give preliminary empirical results from evaluating the three models' parsing
performance on annotated Wall Street Journal training text (derived from the
Penn Treebank). In these results, the generative (i.e., top-down) model
performs significantly better than the others, and does about equally well at
assigning part-of-speech tags.
|
cmp-lg/9706004
|
An Empirical Comparison of Probability Models for Dependency Grammar
|
cmp-lg cs.CL
|
This technical report is an appendix to Eisner (1996): it gives superior
experimental results that were reported only in the talk version of that paper.
Eisner (1996) trained three probability models on a small set of about 4,000
conjunction-free, dependency-grammar parses derived from the Wall Street
Journal section of the Penn Treebank, and then evaluated the models on a
held-out test set, using a novel O(n^3) parsing algorithm.
The present paper describes some details of the experiments and repeats them
with a larger training set of 25,000 sentences. As reported at the talk, the
more extensive training yields greatly improved performance. Nearly half the
sentences are parsed with no misattachments; two-thirds are parsed with at most
one misattachment.
Of the models described in the original written paper, the best score is
still obtained with the generative (top-down) "model C." However, slightly
better models are also explored, in particular, two variants on the
comprehension (bottom-up) "model B." The better of these has an attachment
accuracy of 90%, and (unlike model C) tags words more accurately than the
comparable trigram tagger. Differences are statistically significant.
If tags are roughly known in advance, search error is all but eliminated and
the new model attains an attachment accuracy of 93%. We find that the parser of
Collins (1996), when combined with a highly-trained tagger, also achieves 93%
when trained and tested on the same sentences. Similarities and differences are
discussed.
|
cmp-lg/9706005
|
Comparing a Linguistic and a Stochastic Tagger
|
cmp-lg cs.CL
|
We compare two approaches to automatic PoS tagging: EngCG-2, a
constraint-based morphological tagger, is evaluated in a double-blind test
against a state-of-the-art statistical tagger on a common disambiguation task
using a common tag set. The experiments show that for the same amount of remaining
ambiguity, the error rate of the statistical tagger is one order of magnitude
greater than that of the rule-based one. The two related issues of priming
effects compromising the results and disagreement between human annotators are
also addressed.
|
cmp-lg/9706006
|
Mistake-Driven Learning in Text Categorization
|
cmp-lg cs.CL
|
Learning problems in the text processing domain often map the text to a space
whose dimensions are the measured features of the text, e.g., its words. Three
characteristic properties of this domain are (a) very high dimensionality, (b)
both the learned concepts and the instances reside very sparsely in the feature
space, and (c) a high variation in the number of active features in an
instance. In this work we study three mistake-driven learning algorithms for a
typical task of this nature -- text categorization. We argue that these
algorithms -- which categorize documents by learning a linear separator in the
feature space -- have a few properties that make them ideal for this domain. We
then show that a quantum leap in performance is achieved when we further modify
the algorithms to better address some of the specific characteristics of the
domain. In particular, we demonstrate (1) how variation in document length can
be tolerated by either normalizing feature weights or by using negative
weights, (2) the positive effect of applying a threshold range in training, (3)
alternatives in considering feature frequency, and (4) the benefits of
discarding features while training. Overall, we present an algorithm, a
variation of Littlestone's Winnow, which performs significantly better than any
other algorithm tested on this task using a similar feature set.
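The mistake-driven, multiplicative-update idea behind the algorithms studied here can be illustrated with a bare-bones positive Winnow in the spirit of Littlestone's algorithm; the threshold, update factors, and toy feature sets below are invented for illustration and are not the paper's configuration.

```python
# Minimal mistake-driven learner (Winnow-style): multiplicative updates,
# applied only when the current linear separator makes a mistake.
# Threshold and factors are illustrative choices.

def winnow_train(examples, n_features, theta, alpha=2.0, beta=0.5, epochs=10):
    """Train on (active_feature_indices, label) pairs, label in {0, 1}."""
    w = [1.0] * n_features
    for _ in range(epochs):
        for active, label in examples:
            pred = 1 if sum(w[i] for i in active) >= theta else 0
            if pred != label:                 # mistake-driven update
                factor = alpha if label == 1 else beta
                for i in active:
                    w[i] *= factor            # promote or demote active features
    return w

def predict(w, theta, active):
    return 1 if sum(w[i] for i in active) >= theta else 0

# Toy task: documents are sets of active features; label 1 iff feature 0 is active.
examples = [([0, 1], 1), ([2, 3], 0), ([0, 3], 1), ([1, 2], 0)]
w = winnow_train(examples, n_features=4, theta=2.0)
```

Because updates touch only the features active in the mistaken example, training cost scales with the number of active features rather than the (very high) dimensionality, which is what makes this family attractive for text.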
|
cmp-lg/9706007
|
Aggregate and mixed-order Markov models for statistical language
processing
|
cmp-lg cs.CL
|
We consider the use of language models whose size and accuracy are
intermediate between different order n-gram models. Two types of models are
studied in particular. Aggregate Markov models are class-based bigram models in
which the mapping from words to classes is probabilistic. Mixed-order Markov
models combine bigram models whose predictions are conditioned on different
words. Both types of models are trained by Expectation-Maximization (EM)
algorithms for maximum likelihood estimation. We examine smoothing procedures
in which these models are interposed between different order n-grams. This is
found to significantly reduce the perplexity of unseen word combinations.
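The aggregate Markov model described above marginalizes a bigram probability over latent word classes, P(w2|w1) = sum_c P(c|w1) P(w2|c). A toy sketch with invented, untrained parameters (the paper fits these by EM):

```python
# Aggregate (class-based) bigram: the word-to-class mapping is probabilistic,
# so the bigram probability sums over latent classes.
# The parameters below are illustrative, not trained values.

p_class_given_word = {            # P(c | w1)
    "the": {"DET": 1.0},
    "dog": {"NOUN": 0.9, "VERB": 0.1},
}
p_word_given_class = {            # P(w2 | c)
    "DET":  {"dog": 0.02, "cat": 0.02},
    "NOUN": {"barks": 0.05, "runs": 0.03},
    "VERB": {"barks": 0.0, "runs": 0.01},
}

def aggregate_bigram(w1, w2):
    """P(w2 | w1) = sum_c P(c | w1) * P(w2 | c)."""
    return sum(pc * p_word_given_class[c].get(w2, 0.0)
               for c, pc in p_class_given_word[w1].items())

prob = aggregate_bigram("dog", "runs")   # 0.9*0.03 + 0.1*0.01 = 0.028
```

With C classes, the model needs O(V*C) parameters instead of the O(V^2) of a full bigram, which is the sense in which its size is intermediate between n-gram orders.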
|
cmp-lg/9706008
|
Distinguishing Word Senses in Untagged Text
|
cmp-lg cs.CL
|
This paper describes an experimental comparison of three unsupervised
learning algorithms that distinguish the sense of an ambiguous word in untagged
text. The methods described in this paper, McQuitty's similarity analysis,
Ward's minimum-variance method, and the EM algorithm, assign each instance of
an ambiguous word to a known sense definition based solely on the values of
automatically identifiable features in text. These methods and feature sets are
found to be more successful at disambiguating nouns than adjectives or
verbs. Overall, the most accurate of these procedures is McQuitty's similarity
analysis in combination with a high dimensional feature set.
|
cmp-lg/9706009
|
Library of Practical Abstractions, Release 1.2
|
cmp-lg cs.CL
|
The library of practical abstractions (LIBPA) provides efficient
implementations of conceptually simple abstractions, in the C programming
language. We believe that the best library code is conceptually simple so that
it will be easily understood by the application programmer; parameterized by
type so that it enjoys wide applicability; and at least as efficient as a
straightforward special-purpose implementation. You will find that our software
satisfies the highest standards of software design, implementation, testing,
and benchmarking.
The current LIBPA release is a source code distribution only. It consists of
modules for portable memory management, one dimensional arrays of arbitrary
types, compact symbol tables, hash tables for arbitrary types, a trie module
for length-delimited strings over arbitrary alphabets, single precision
floating point numbers with extended exponents, and logarithmic representations
of probability values using either fixed or floating point numbers.
We have used LIBPA to implement a wide range of statistical models for both
continuous and discrete domains. The time and space efficiency of LIBPA has
allowed us to build larger statistical models than previously reported, and to
investigate more computationally-intensive techniques than previously possible.
We have found LIBPA to be indispensable in our own research, and hope that you
will find it useful in yours.
|
cmp-lg/9706010
|
Exemplar-Based Word Sense Disambiguation: Some Recent Improvements
|
cmp-lg cs.CL
|
In this paper, we report recent improvements to the exemplar-based learning
approach for word sense disambiguation that have achieved higher disambiguation
accuracy. By using a larger value of $k$, the number of nearest neighbors to
use for determining the class of a test example, and through 10-fold cross
validation to automatically determine the best $k$, we have obtained improved
disambiguation accuracy on a large sense-tagged corpus first used in
\cite{ng96}. The accuracy achieved by our improved exemplar-based classifier is
comparable to the accuracy on the same data set obtained by the Naive-Bayes
algorithm, which was reported in \cite{mooney96} to have the highest
disambiguation accuracy among seven state-of-the-art machine learning
algorithms.
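The core procedure, classifying by the k nearest exemplars and choosing k automatically, can be sketched as follows. This uses simple leave-one-out scoring rather than the paper's 10-fold cross validation, and the senses, feature vectors, and Hamming distance are invented for illustration.

```python
# k-nearest-neighbour sense classifier over binary feature vectors,
# with k selected by leave-one-out accuracy on the training exemplars.
from collections import Counter

def knn_predict(train, x, k):
    """Majority sense among the k exemplars nearest to x (Hamming distance)."""
    nearest = sorted(train, key=lambda ex: sum(a != b for a, b in zip(ex[0], x)))[:k]
    return Counter(sense for _, sense in nearest).most_common(1)[0][0]

def choose_k(train, candidates):
    """Pick the k that maximizes leave-one-out accuracy."""
    def loo_acc(k):
        hits = sum(knn_predict(train[:i] + train[i + 1:], x, k) == s
                   for i, (x, s) in enumerate(train))
        return hits / len(train)
    return max(candidates, key=loo_acc)

# Toy sense-tagged exemplars: (feature vector, sense label).
train = [((1, 0, 1), "bank/river"), ((1, 0, 0), "bank/river"),
         ((0, 1, 1), "bank/money"), ((0, 1, 0), "bank/money")]
best_k = choose_k(train, [1, 3])
```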
|
cmp-lg/9706011
|
Applying Reliability Metrics to Co-Reference Annotation
|
cmp-lg cs.CL
|
Studies of the contextual and linguistic factors that constrain discourse
phenomena such as reference are coming to depend increasingly on annotated
language corpora. In preparing the corpora, it is important to evaluate the
reliability of the annotation, but methods for doing so have not been readily
available. In this report, I present a method for computing reliability of
coreference annotation. First I review a method for applying the information
retrieval metrics of recall and precision to coreference annotation proposed by
Marc Vilain and his collaborators. I show how this method makes it possible to
construct contingency tables for computing Cohen's Kappa, a familiar
reliability metric. By comparing recall and precision to reliability on the
same data sets, I also show that recall and precision can be misleadingly high.
Because Kappa factors out chance agreement among coders, it is a preferable
measure for developing annotated corpora where no pre-existing target
annotation exists.
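The Kappa computation the report builds on is standard: observed agreement minus chance agreement, normalized by the maximum possible improvement over chance. A small sketch over a hypothetical two-coder contingency table (the counts are invented):

```python
# Cohen's Kappa from a square contingency table of counts
# (rows: coder A's categories, columns: coder B's categories).

def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(len(table))) / n
    p_chance = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical link / no-link judgments by two coders:
table = [[20, 5],
         [10, 15]]
kappa = cohens_kappa(table)   # (0.7 - 0.5) / (1 - 0.5) = 0.4
```

Because p_chance is subtracted, two coders who agree 70% of the time on a task where chance agreement is 50% score only 0.4, which is how Kappa exposes agreement figures that recall and precision would report as misleadingly high.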
|
cmp-lg/9706012
|
Probabilistic Coreference in Information Extraction
|
cmp-lg cs.CL
|
Certain applications require that the output of an information extraction
system be probabilistic, so that a downstream system can reliably fuse the
output with possibly contradictory information from other sources. In this
paper we consider the problem of assigning a probability distribution to
alternative sets of coreference relationships among entity descriptions. We
present the results of initial experiments with several approaches to
estimating such distributions in an application using SRI's FASTUS information
extraction system.
|
cmp-lg/9706013
|
A Corpus-Based Approach for Building Semantic Lexicons
|
cmp-lg cs.CL
|
Semantic knowledge can be a great asset to natural language processing
systems, but it is usually hand-coded for each application. Although some
semantic information is available in general-purpose knowledge bases such as
WordNet and Cyc, many applications require domain-specific lexicons that
represent words and categories for a particular topic. In this paper, we
present a corpus-based method that can be used to build semantic lexicons for
specific categories. The input to the system is a small set of seed words for a
category and a representative text corpus. The output is a ranked list of words
that are associated with the category. A user then reviews the top-ranked words
and decides which ones should be entered in the semantic lexicon. In
experiments with five categories, users typically found about 60 words per
category in 10-15 minutes to build a core semantic lexicon.
|
cmp-lg/9706014
|
A Linear Observed Time Statistical Parser Based on Maximum Entropy
Models
|
cmp-lg cs.CL
|
This paper presents a statistical parser for natural language that obtains a
parsing accuracy---roughly 87% precision and 86% recall---which surpasses the
best previously published results on the Wall St. Journal domain. The parser
itself requires very little human intervention, since the information it uses
to make parsing decisions is specified in a concise and simple manner, and is
combined in a fully automatic way under the maximum entropy framework. The
observed running time of the parser on a test sentence is linear with respect
to the sentence length. Furthermore, the parser returns several scored parses
for a sentence, and this paper shows that a scheme to pick the best parse from
the 20 highest scoring parses could yield a dramatically higher accuracy of 93%
precision and recall.
|
cmp-lg/9706015
|
Determining Internal and External Indices for Chart Generation
|
cmp-lg cs.CL
|
This paper presents a compilation procedure which determines internal and
external indices for signs in a unification based grammar to be used in
improving the computational efficiency of lexicalist chart generation. The
procedure takes as input a grammar and a set of feature paths indicating the
position of semantic indices in a sign, and calculates the fixed-point of a set
of equations derived from the grammar. The result is a set of independent
constraints stating which indices in a sign can be bound to other signs within
a complete sentence. Based on these constraints, two tests are formulated which
reduce the search space during generation.
|
cmp-lg/9706016
|
Text Segmentation Using Exponential Models
|
cmp-lg cs.CL
|
This paper introduces a new statistical approach to partitioning text
automatically into coherent segments. Our approach enlists both short-range and
long-range language models to help it sniff out likely sites of topic changes
in text. To aid its search, the system consults a set of simple lexical hints
it has learned to associate with the presence of boundaries through inspection
of a large corpus of annotated data. We also propose a new probabilistically
motivated error metric for use by the natural language processing and
information retrieval communities, intended to supersede precision and recall
for appraising segmentation algorithms. Qualitative assessment of our algorithm
as well as evaluation using this new metric demonstrate the effectiveness of
our approach in two very different domains, Wall Street Journal articles and
the TDT Corpus, a collection of newswire articles and broadcast news
transcripts.
|
cmp-lg/9706017
|
Name Searching and Information Retrieval
|
cmp-lg cs.CL
|
The main application of name searching has been name matching in a database
of names. This paper discusses a different application: improving information
retrieval through name recognition. It investigates name recognition accuracy,
and the effect on retrieval performance of indexing and searching personal
names differently from non-name terms in the context of ranked retrieval. The
main conclusions are: that name recognition in text can be effective; that
names occur frequently enough in a variety of domains, including those of legal
documents and news databases, to make recognition worthwhile; and that
retrieval performance can be improved using name searching.
|
cmp-lg/9706018
|
A Model of Lexical Attraction and Repulsion
|
cmp-lg cs.CL
|
This paper introduces new methods based on exponential families for modeling
the correlations between words in text and speech. While previous work assumed
the effects of word co-occurrence statistics to be constant over a window of
several hundred words, we show that their influence is nonstationary on a much
smaller time scale. Empirical data drawn from English and Japanese text, as
well as conversational speech, reveals that the ``attraction'' between words
decays exponentially, while stylistic and syntactic constraints create a
``repulsion'' between words that discourages close co-occurrence. We show that
these characteristics are well described by simple mixture models based on
two-stage exponential distributions which can be trained using the EM
algorithm. The resulting distance distributions can then be incorporated as
penalizing features in an exponential language model.
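A rough sketch of the rise-then-decay shape such a distance distribution takes: an unnormalized repulsion term multiplied by an exponential attraction decay, normalized numerically over a finite range. The functional form and parameters here are invented for illustration, not the paper's EM-trained mixture.

```python
# Toy distance distribution over word-pair distances d = 1, 2, ...:
# repulsion (1 - e^{-a d}) suppresses close co-occurrence, while
# attraction decays as e^{-b d}; normalized numerically.
import math

def two_stage_weights(a=1.0, b=0.2, max_d=500):
    u = [(1 - math.exp(-a * d)) * math.exp(-b * d) for d in range(1, max_d + 1)]
    z = sum(u)
    return [x / z for x in u]

weights = two_stage_weights()
# weights[0] is P(d=1); the density peaks a few words out, then decays.
```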
|
cmp-lg/9706019
|
Evaluating Competing Agent Strategies for a Voice Email Agent
|
cmp-lg cs.CL
|
This paper reports experimental results comparing a mixed-initiative to a
system-initiative dialog strategy in the context of a personal voice email
agent. To independently test the effects of dialog strategy and user expertise,
users interact with either the system-initiative or the mixed-initiative agent
to perform three successive tasks which are identical for both agents. We
report performance comparisons across agent strategies as well as over tasks.
This evaluation utilizes and tests the PARADISE evaluation framework, and
discusses the performance function derivable from the experimental data.
|
cmp-lg/9706020
|
An Empirical Approach to Temporal Reference Resolution
|
cmp-lg cs.CL
|
This paper presents the results of an empirical investigation of temporal
reference resolution in scheduling dialogs. The algorithm adopted is primarily
a linear-recency based approach that does not include a model of global focus.
A fully automatic system has been developed and evaluated on unseen test data
with good results. This paper presents the results of an intercoder reliability
study, a model of temporal reference resolution that supports linear recency
and has very good coverage, the results of the system evaluated on unseen test
data, and a detailed analysis of the dialogs assessing the viability of the
approach.
|
cmp-lg/9706021
|
An Efficient Distribution of Labor in a Two Stage Robust Interpretation
Process
|
cmp-lg cs.CL
|
Although Minimum Distance Parsing (MDP) offers a theoretically attractive
solution to the problem of extragrammaticality, it is often computationally
infeasible in large scale practical applications. In this paper we present an
alternative approach where the labor is distributed between a more restrictive
partial parser and a repair module. Though two stage approaches have grown in
popularity in recent years because of their efficiency, they have done so at
the cost of requiring hand coded repair heuristics. In contrast, our two stage
approach does not require any hand coded knowledge sources dedicated to repair,
thus making it possible to achieve a similar run time advantage over MDP
without losing the quality of domain independence.
|
cmp-lg/9706022
|
Three Generative, Lexicalised Models for Statistical Parsing
|
cmp-lg cs.CL
|
In this paper we first propose a new statistical parsing model, which is a
generative model of lexicalised context-free grammar. We then extend the model
to include a probabilistic treatment of both subcategorisation and wh-movement.
Results on Wall Street Journal text show that the parser performs at 88.1/87.5%
constituent precision/recall, an average improvement of 2.3% over (Collins 96).
|
cmp-lg/9706023
|
An Information Extraction Core System for Real World German Text
Processing
|
cmp-lg cs.CL
|
This paper describes SMES, an information extraction core system for real
world German text processing. The basic design criterion of the system is to
provide a set of basic, powerful, robust, and efficient natural language
components and generic linguistic knowledge sources which can easily be
customized for processing different tasks in a flexible manner.
|
cmp-lg/9706024
|
A Lexicalist Approach to the Translation of Colloquial Text
|
cmp-lg cs.CL
|
Colloquial English (CE) as found in television programs or typical
conversations differs from the text found in technical manuals, newspapers and
books. Phrases tend to be shorter and less sophisticated. In this paper, we
look at some of the theoretical and implementational issues involved in
translating CE. We present a fully automatic large-scale multilingual natural
language processing system for translation of CE input text, as found in the
commercially transmitted closed-caption television signal, into simple target
sentences. Our approach is based on Whitelock's Shake and Bake machine
translation paradigm, which relies heavily on lexical resources. The system
currently translates from English to Spanish, with translation modules for
Brazilian Portuguese under development.
|
cmp-lg/9706025
|
A Portable Algorithm for Mapping Bitext Correspondence
|
cmp-lg cs.CL
|
The first step in most empirical work in multilingual NLP is to construct
maps of the correspondence between texts and their translations ({\bf bitext
maps}). The Smooth Injective Map Recognizer (SIMR) algorithm presented here is
a generic pattern recognition algorithm that is particularly well-suited to
mapping bitext correspondence. SIMR is faster and significantly more accurate
than other algorithms in the literature. The algorithm is robust enough to use
on noisy texts, such as those resulting from OCR input, and on translations
that are not very literal. SIMR encapsulates its language-specific heuristics,
so that it can be ported to any language pair with a minimal effort.
|
cmp-lg/9706026
|
A Word-to-Word Model of Translational Equivalence
|
cmp-lg cs.CL
|
Many multilingual NLP applications need to translate words between different
languages, but cannot afford the computational expense of inducing or applying
a full translation model. For these applications, we have designed a fast
algorithm for estimating a partial translation model, which accounts for
translational equivalence only at the word level. The model's precision/recall
trade-off can be directly controlled via one threshold parameter. This feature
makes the model more suitable for applications that are not fully statistical.
The model's hidden parameters can be easily conditioned on information
extrinsic to the model, providing an easy way to integrate pre-existing
knowledge such as part-of-speech tags, dictionaries, word order, etc. Our model can
link word tokens in parallel texts as well as other translation models in the
literature. Unlike other translation models, it can automatically produce
dictionary-sized translation lexicons, and it can do so with over 99% accuracy.
|
cmp-lg/9706027
|
Automatic Discovery of Non-Compositional Compounds in Parallel Data
|
cmp-lg cs.CL
|
Automatic segmentation of text into minimal content-bearing units is an
unsolved problem even for languages like English. Spaces between words offer an
easy first approximation, but this approximation is not good enough for machine
translation (MT), where many word sequences are not translated word-for-word.
This paper presents an efficient automatic method for discovering sequences of
words that are translated as a unit. The method proceeds by comparing pairs of
statistical translation models induced from parallel texts in two languages. It
can discover hundreds of non-compositional compounds on each iteration, and
constructs longer compounds out of shorter ones. Objective evaluation on a
simple machine translation task has shown the method's potential to improve the
quality of MT output. The method makes few assumptions about the data, so it
can be applied to parallel data other than parallel texts, such as word
spellings and pronunciations.
|
cmp-lg/9706028
|
Efficient Construction of Underspecified Semantics under Massive
Ambiguity
|
cmp-lg cs.CL
|
We investigate the problem of determining a compact underspecified semantic
representation for sentences that may be highly ambiguous. Due to combinatorial
explosion, the naive method of building semantics for the different syntactic
readings independently is prohibitive. We present a method that takes as input
a syntactic parse forest with associated constraint-based semantic construction
rules and directly builds a packed semantic structure. The algorithm is fully
implemented and runs in $O(n^4 \log n)$ in sentence length, if the grammar
meets some reasonable `normality' restrictions.
|
cmp-lg/9706029
|
Learning Parse and Translation Decisions From Examples With Rich Context
|
cmp-lg cs.CL
|
We propose a system for parsing and translating natural language that learns
from examples and uses some background knowledge.
As our parsing model we choose a deterministic shift-reduce type parser that
integrates part-of-speech tagging and syntactic and semantic processing.
Applying machine learning techniques, the system uses parse action examples
acquired under supervision to generate a parser in the form of a decision
structure, a generalization of decision trees.
To learn good parsing and translation decisions, our system relies heavily on
context, as encoded in currently 205 features describing the morphological,
syntactic and semantic aspects of a given parse state. Compared with recent
probabilistic systems that were trained on 40,000 sentences, our system relies
on more background knowledge and a deeper analysis, but radically fewer
examples, currently 256 sentences.
We test our parser on lexically limited sentences from the Wall Street
Journal and achieve accuracy rates of 89.8% for labeled precision, 98.4% for
part of speech tagging and 56.3% of test sentences without any crossing
brackets. Machine translations of 32 Wall Street Journal sentences to German
have been evaluated by 10 bilingual volunteers and been graded as 2.4 on a 1.0
(best) to 6.0 (worst) scale for both grammatical correctness and meaning
preservation.
|
cmp-lg/9707001
|
Reluctant Paraphrase: Textual Restructuring under an Optimisation Model
|
cmp-lg cs.CL
|
This paper develops a computational model of paraphrase under which text
modification is carried out reluctantly; that is, there are external
constraints, such as length or readability, on an otherwise ideal text, and
modifications to the text are necessary to ensure conformance to these
constraints. This problem is analogous to a mathematical optimisation problem:
the textual constraints can be described as a set of constraint equations, and
the requirement for minimal change to the text can be expressed as a function
to be minimised; so techniques from this domain can be used to solve the
problem.
The work is done as part of a computational paraphrase system using the XTAG
system as a base. The paper will present a theoretical computational framework
for working within the Reluctant Paraphrase paradigm: three types of textual
constraints are specified, effects of paraphrase on text are described, and a
model incorporating mathematical optimisation techniques is outlined.
|
cmp-lg/9707002
|
Automatic Detection of Text Genre
|
cmp-lg cs.CL
|
As the text databases available to users become larger and more
heterogeneous, genre becomes increasingly important for computational
linguistics as a complement to topical and structural principles of
classification. We propose a theory of genres as bundles of facets, which
correlate with various surface cues, and argue that genre detection based on
surface cues is as successful as detection based on deeper structural
properties.
|
cmp-lg/9707003
|
A Flexible POS tagger Using an Automatically Acquired Language Model
|
cmp-lg cs.CL
|
We present an algorithm that automatically learns context constraints using
statistical decision trees. We then use the acquired constraints in a flexible
POS tagger. The tagger is able to use information of any degree: n-grams,
automatically learned context constraints, linguistically motivated manually
written constraints, etc. The sources and kinds of constraints are
unrestricted, and the language model can be easily extended, improving the
results. The tagger has been tested and evaluated on the WSJ corpus.
|
cmp-lg/9707004
|
Discourse Preferences in Dynamic Logic
|
cmp-lg cs.CL
|
In order to enrich dynamic semantic theories with a `pragmatic' capacity, we
combine dynamic and nonmonotonic (preferential) logics in a modal logic
setting. We extend a fragment of Van Benthem and De Rijke's dynamic modal logic
with additional preferential operators in the underlying static logic, which
enables us to define defeasible (pragmatic) entailments over a given piece of
discourse. We will show how this setting can be used for a dynamic logical
analysis of preferential resolutions of ambiguous pronouns in discourse.
|
cmp-lg/9707005
|
Intrasentential Centering: A Case Study
|
cmp-lg cs.CL
|
One of the necessary extensions to the centering model is a mechanism to
handle pronouns with intrasentential antecedents. Existing centering models
deal only with discourses consisting of simple sentences, leaving it unclear
how to delimit center-updating utterance units and how to process complex
utterances consisting of multiple clauses. In this paper, I will explore the
extent to which a straightforward extension of an existing intersentential
centering model contributes to this effect. I will motivate an approach that
breaks a complex sentence into a hierarchy of center-updating units and
proposes the preferred interpretation of a pronoun in its local context
arbitrarily deep in the given sentence structure. This approach will be
substantiated with examples from naturally occurring written discourses.
|
cmp-lg/9707006
|
Finite State Transducers Approximating Hidden Markov Models
|
cmp-lg cs.CL
|
This paper describes the conversion of a Hidden Markov Model into a
sequential transducer that closely approximates the behavior of the stochastic
model. This transformation is especially advantageous for part-of-speech
tagging because the resulting transducer can be composed with other transducers
that encode correction rules for the most frequent tagging errors. The speed of
tagging is also improved. The described methods have been implemented and
successfully tested on six languages.
|
cmp-lg/9707007
|
Tailored Patient Information: Some Issues and Questions
|
cmp-lg cs.CL
|
Tailored patient information (TPI) systems are computer programs which
produce personalised health-information material for patients. TPI systems are
of growing interest to the natural-language generation (NLG) community; many
TPI systems have also been developed in the medical community, usually with
mail-merge technology. No matter what technology is used, experience shows that
it is not easy to field a TPI system, even if it is shown to be effective in
clinical trials. In this paper we discuss some of the difficulties in fielding
TPI systems. This is based on our experiences with two TPI systems, one for
generating asthma-information booklets and one for generating smoking-cessation
letters.
|
cmp-lg/9707008
|
Stressed and Unstressed Pronouns: Complementary Preferences
|
cmp-lg cs.CL
|
I present a unified account of interpretation preferences of stressed and
unstressed pronouns in discourse. The central intuition is the Complementary
Preference Hypothesis that predicts the interpretation preference of a stressed
pronoun from that of an unstressed pronoun in the same discourse position. The
base preference must be computed in a total pragmatics module including
commonsense preferences. The focus constraint in Rooth's theory of semantic
focus is interpreted to be the salient subset of the domain in the local
attentional state in the discourse context independently motivated for other
purposes in Centering Theory.
|
cmp-lg/9707009
|
Recognizing Referential Links: An Information Extraction Perspective
|
cmp-lg cs.CL
|
We present an efficient and robust reference resolution algorithm in an
end-to-end state-of-the-art information extraction system, which must work with
a considerably impoverished syntactic analysis of the input sentences.
Considering this disadvantage, the basic setup to collect, filter, then order
by salience does remarkably well with third-person pronouns, but needs more
semantic and discourse information to improve the treatments of other
expression types.
|
cmp-lg/9707010
|
Experiences with the GTU grammar development environment
|
cmp-lg cs.CL
|
In this paper we describe our experiences with a tool for the development and
testing of natural language grammars called GTU (German:
Grammatik-Testumgebung; grammar test environment). GTU supports four grammar
formalisms under a window-oriented user interface. Additionally, it contains a
set of German test sentences covering various syntactic phenomena as well as
three types of German lexicons that can be attached to a grammar via an
integrated lexicon interface. What follows is a description of the experiences
we gained when we used GTU as a tutoring tool for students and as an
experimental tool for CL researchers. From these we will derive the features
necessary for a future grammar workbench.
|
cmp-lg/9707011
|
A lexical database tool for quantitative phonological research
|
cmp-lg cs.CL
|
A lexical database tool tailored for phonological research is described.
Database fields include transcriptions, glosses and hyperlinks to speech files.
Database queries are expressed using HTML forms, and these permit regular
expression search on any combination of fields. Regular expressions are passed
directly to a Perl CGI program, enabling the full flexibility of Perl extended
regular expressions. The regular expression notation is extended to better
support phonological searches, such as search for minimal pairs. Search results
are presented in the form of HTML or LaTeX tables, where each cell is either a
number (representing frequency) or a designated subset of the fields. Tables
have up to four dimensions, with an elegant system for specifying which
fragments of which fields should be used for the row/column labels. The tool
offers several advantages over traditional methods of analysis: (i) it supports
a quantitative method of doing phonological research; (ii) it gives universal
access to the same set of informants; (iii) it enables other researchers to
hear the original speech data without having to rely on published
transcriptions; (iv) it makes the full power of regular expression search
available, and search results are full multimedia documents; and (v) it enables
the early refutation of false hypotheses, shortening the
analysis-hypothesis-test loop. A life-size application to an African tone
language (Dschang) is used for exemplification throughout the paper. The
database contains 2200 records, each with approximately 15 fields. Running on a
PC laptop with a stand-alone web server, the `Dschang HyperLexicon' has already
been used extensively in phonological fieldwork and analysis in Cameroon.
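The field-restricted regular-expression queries described above can be sketched as follows. The real tool is an HTML-forms front end to a Perl CGI program; this toy Python version, its field names, and its records are invented for illustration.

```python
# Field-restricted regex query over lexical-database records: a record
# matches only if every supplied pattern matches its named field.
import re

records = [
    {"transcription": "lète", "gloss": "market"},
    {"transcription": "létè", "gloss": "house"},
    {"transcription": "mbà",  "gloss": "field"},
]

def query(records, **patterns):
    """Return records whose fields match every given regular expression."""
    return [r for r in records
            if all(re.search(pat, r[field]) for field, pat in patterns.items())]

# All entries whose transcription starts with "l" and whose gloss contains "e":
hits = query(records, transcription=r"^l", gloss=r"e")
```

Combining per-field patterns with `all()` is what makes queries like "minimal pairs on tone" expressible: each field contributes an independent constraint, and only their conjunction selects a record.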
|
cmp-lg/9707012
|
Adjunction As Substitution: An Algebraic Formulation of Regular,
Context-Free and Tree Adjoining Languages
|
cmp-lg cs.CL
|
This note presents a method of interpreting the tree adjoining languages as
the natural third step in a hierarchy that starts with the regular and the
context-free languages. The central notion in this account is that of a
higher-order substitution. Whereas in traditional presentations of rule systems
for abstract language families the emphasis has been on a first-order
substitution process in which auxiliary variables are replaced by elements of
the carrier of the proper algebra - concatenations of terminal and auxiliary
category symbols in the string case - we lift this process to the level of
operations defined on the elements of the carrier of the algebra. Our own view
is that this change of emphasis provides the adequate platform for a better
understanding of the operation of adjunction. To put it in a nutshell:
Adjoining is not a first-order, but a second-order substitution operation.
|
cmp-lg/9707013
|
On Cloning Context-Freeness
|
cmp-lg cs.CL
|
To Rogers (1994) we owe the insight that monadic second order predicate logic
with multiple successors (MSO) is well suited in many respects as a realistic
formal base for syntactic theorizing. However, the agreeable formal properties
of this logic come at a cost: MSO is equivalent with the class of regular tree
automata/grammars, and, thereby, with the class of context-free languages.
This paper outlines one approach towards a solution of MSO's expressivity
problem. Against the background of an algebraically refined Chomsky hierarchy, which
allows the definition of several classes of languages--in particular, a whole
hierarchy between CF and CS--via regular tree grammars over unambiguously
derivable alphabets of varying complexity plus their respective
yield-functions, it shows that not only some non-context-free string languages
can be captured by context-free means in this way, but that this approach can
be generalized to the corresponding structures. I.e., non-recognizable sets of
structures can--up to homomorphism--be coded context-freely. Since the class of
languages covered--Fischer's (1968) OI family of indexed languages--includes
all attested instances of non-context-freeness in natural language, there
exists an indirect, to be sure, but completely general way to formally describe
the natural languages using a weak framework like MSO.
|
cmp-lg/9707014
|
Towards a PURE Spoken Dialogue System for Information Access
|
cmp-lg cs.CL
|
With the rapid explosion of the World Wide Web, it is becoming increasingly
possible to easily acquire a wide variety of information such as flight
schedules, yellow pages, used car prices, current stock prices, entertainment
event schedules, account balances, etc. It would be very useful to have spoken
dialogue interfaces for such information access tasks. We identify portability,
usability, robustness, and extensibility as the four primary design objectives
for such systems. In other words, the objective is to develop a PURE (Portable,
Usable, Robust, Extensible) system. A two-layered dialogue architecture for
spoken dialogue systems is presented where the upper layer is
domain-independent and the lower layer is domain-specific. We are implementing
this architecture in a mixed-initiative system that accesses flight
arrival/departure information from the World Wide Web.
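The two-layer split described above can be sketched as follows: a domain-independent manager that tracks slots and decides when to ask for missing information, delegating actual lookups to a domain-specific layer. The flight data, slot names, and prompt wording are all invented for illustration; they are not the paper's implementation.

```python
class DomainLayer:
    """Domain-specific lower layer: a toy flight-information backend."""
    def query(self, slots):
        flights = {("UA123",): "arrives 17:05"}  # hypothetical data
        return flights.get((slots.get("flight"),), "no information found")

class DialogueManager:
    """Domain-independent upper layer: slot tracking and mixed initiative.
    It knows nothing about flights beyond the names of required slots."""
    def __init__(self, domain, required=("flight",)):
        self.domain, self.required, self.slots = domain, required, {}

    def handle(self, utterance_slots):
        self.slots.update(utterance_slots)
        missing = [r for r in self.required if r not in self.slots]
        if missing:
            return f"Which {missing[0]}?"   # ask for missing information
        return self.domain.query(self.slots)  # delegate to the lower layer

dm = DialogueManager(DomainLayer())
ask = dm.handle({})                     # system takes the initiative
answer = dm.handle({"flight": "UA123"})
```

Porting to a new domain would then mean replacing only the lower layer.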
|
cmp-lg/9707015
|
Tagging Grammatical Functions
|
cmp-lg cs.CL
|
This paper addresses issues in automated treebank construction. We show how
standard part-of-speech tagging techniques extend to the more general problem
of structural annotation, especially for determining grammatical functions and
syntactic categories. Annotation is viewed as an interactive process where
manual and automatic processing alternate. Efficiency and accuracy results are
presented. We also discuss further automation steps.
|
cmp-lg/9707016
|
On aligning trees
|
cmp-lg cs.CL
|
The increasing availability of corpora annotated for linguistic structure
prompts the question: if we have the same texts, annotated for phrase structure
under two different schemes, to what extent do the annotations agree on
structuring within the text? We suggest the term tree alignment to indicate the
situation where two markup schemes choose to bracket off the same text
elements. We propose a general method for determining agreement between two
analyses. We then describe an efficient implementation, which is also modular
in that the core of the implementation can be reused regardless of the format
of markup used in the corpora. The output of the implementation on the Susanne
and Penn treebank corpora is discussed.
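A simple way to make "agreement on structuring" concrete is to compare the sets of token spans the two analyses bracket off. The sketch below does this for trees represented as nested lists of token strings; this representation and the harmonic-mean agreement score are assumptions for illustration, not the paper's exact method.

```python
def spans(tree, start=0):
    """Collect (start, end) token spans of all constituents in a
    nested-list tree. Returns (set of spans, end position)."""
    if isinstance(tree, str):          # a leaf covers one token
        return set(), start + 1
    out, pos = set(), start
    for child in tree:
        s, pos = spans(child, pos)
        out |= s
    out.add((start, pos))              # the constituent's own span
    return out, pos

def agreement(t1, t2):
    """Harmonic mean of the bracket overlap between two analyses."""
    s1, _ = spans(t1)
    s2, _ = spans(t2)
    return 2 * len(s1 & s2) / (len(s1) + len(s2))

a = [["the", "cat"], ["sat"]]
b = [["the", "cat"], "sat"]
```

Because the comparison works on spans, it is independent of the markup format, mirroring the modularity of the implementation described above.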
|
cmp-lg/9707017
|
Stochastic phonological grammars and acceptability
|
cmp-lg cs.CL
|
In foundational works of generative phonology it is claimed that subjects can
reliably discriminate between possible but non-occurring words and words that
could not be English. In this paper we examine the use of a probabilistic
phonological parser for words to model experimentally-obtained judgements of
the acceptability of a set of nonsense words. We compared various methods of
scoring the goodness of the parse as a predictor of acceptability. We found
that the probability of the worst part is not the best score of acceptability,
indicating that classical generative phonology and Optimality Theory miss an
important fact, as these approaches do not recognise a mechanism by which the
frequency of well-formed parts may ameliorate the unacceptability of
low-frequency parts. We argue that probabilistic generative grammars are
demonstrably a more psychologically realistic model of phonological competence
than standard generative phonology or Optimality Theory.
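The contrast between scoring methods can be sketched numerically: a worst-part (minimum) score cannot be offset by frequent parts, whereas a product score can. The part probabilities below are invented for illustration, not drawn from the paper's experiments.

```python
import math

# Hypothetical probabilities for the parts of two nonsense words.
word_parts = {
    "blick": [0.04, 0.20],    # all parts reasonably frequent
    "bnick": [0.001, 0.20],   # one very rare part
}

def worst_part(probs):
    """Score = probability of the least likely part."""
    return min(probs)

def product(probs):
    """Score = joint probability; frequent parts can partially
    ameliorate the contribution of a low-frequency part."""
    return math.prod(probs)
```

Under the worst-part score, "bnick" is judged solely by its rare onset; under the product score, the frequency of its rhyme still matters.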
|
cmp-lg/9707018
|
Multilingual phonological analysis and speech synthesis
|
cmp-lg cs.CL
|
We give an overview of multilingual speech synthesis using the IPOX system.
The first part discusses work in progress for various languages: Tashlhit
Berber, Urdu and Dutch. The second part discusses a multilingual phonological
grammar, which can be adapted to a particular language by setting parameters
and adding language-specific details.
|
cmp-lg/9707019
|
Generating Coherent Messages in Real-time Decision Support: Exploiting
Discourse Theory for Discourse Practice
|
cmp-lg cs.CL
|
This paper presents a message planner, TraumaGEN, that draws on rhetorical
structure and discourse theory to address the problem of producing integrated
messages from individual critiques, each of which is designed to achieve its
own communicative goal. TraumaGEN takes into account the purpose of the
messages, the situation in which the messages will be received, and the social
role of the system.
|
cmp-lg/9707020
|
A Czech Morphological Lexicon
|
cmp-lg cs.CL
|
In this paper, a treatment of Czech phonological rules in the two-level
morphology approach is described. First, the possible phonological alternations
in Czech are listed, and then their treatment in a practical application, a
Czech morphological lexicon, is presented.
|
cmp-lg/9708001
|
Expectations in Incremental Discourse Processing
|
cmp-lg cs.CL
|
The way in which discourse features express connections back to the previous
discourse has been described in the literature in terms of adjoining at the
right frontier of discourse structure. But this does not allow for discourse
features that express expectations about what is to come in the subsequent
discourse. After characterizing these expectations and their distribution in
text, we show how an approach that makes use of substitution as well as
adjoining on a suitably defined right frontier, can be used to both process
expectations and constrain discourse processing in general.
|
cmp-lg/9708002
|
Natural Language Generation in Healthcare: Brief Review
|
cmp-lg cs.CL
|
Good communication is vital in healthcare, both among healthcare
professionals, and between healthcare professionals and their patients. And
well-written documents, describing and/or explaining the information in
structured databases may be easier to comprehend, more edifying and even more
convincing, than the structured data, even when presented in tabular or graphic
form. Documents may be automatically generated from structured data, using
techniques from the field of natural language generation. These techniques are
concerned with how the content, organisation and language used in a document
can be dynamically selected, depending on the audience and context. They have
been used to generate health education materials, explanations and critiques in
decision support systems, and medical reports and progress notes.
|
cmp-lg/9708003
|
Structure and Ostension in the Interpretation of Discourse Deixis
|
cmp-lg cs.CL
|
This paper examines demonstrative pronouns used as deictics to refer to the
interpretation of one or more clauses. Although this usage is frowned upon in
style manuals (for example Strunk and White (1959) state that ``This. The
pronoun 'this', referring to the complete sense of a preceding sentence or
clause, cannot always carry the load and so may produce an imprecise
statement.''), it is nevertheless very common in written text. Handling this
usage poses a problem for Natural Language Understanding systems. The solution
I propose is based on distinguishing between what can be pointed to and what
can be referred to by virtue of pointing. I argue that a restricted set of
discourse segments yield what such demonstrative pronouns can point to and a
restricted set of what Nunberg (1979) has called referring functions yield what
they can refer to by virtue of that pointing.
|
cmp-lg/9708004
|
Epistemic NP Modifiers
|
cmp-lg cs.CL
|
The paper considers participles such as "unknown", "identified" and
"unspecified", which in sentences such as "Solange is staying in an unknown
hotel" have readings equivalent to an indirect question "Solange is staying in
a hotel, and it is not known which hotel it is." We discuss phenomena including
disambiguation of quantifier scope and a restriction on the set of determiners
which allow the reading in question. Epistemic modifiers are analyzed in a DRT
framework with file (information state) discourse referents. The proposed
semantics uses a predication on files and discourse referents which is related
to recent developments in dynamic modal predicate calculus. It is argued that a
compositional DRT semantics must employ a semantic type of discourse referents,
as opposed to just a type of individuals. A connection is developed between the
scope effects of epistemic modifiers and the scope-disambiguating effect of "a
certain".
|
cmp-lg/9708005
|
Centering, Anaphora Resolution, and Discourse Structure
|
cmp-lg cs.CL
|
Centering was formulated as a model of the relationship between attentional
state, the form of referring expressions, and the coherence of an utterance
within a discourse segment (Grosz, Joshi and Weinstein, 1986; Grosz, Joshi and
Weinstein, 1995). In this chapter, I argue that the restriction of centering to
operating within a discourse segment should be abandoned in order to integrate
centering with a model of global discourse structure. The within-segment
restriction causes three problems. The first problem is that centers are often
continued over discourse segment boundaries with pronominal referring
expressions whose form is identical to those that occur within a discourse
segment. The second problem is that recent work has shown that listeners
perceive segment boundaries at various levels of granularity. If centering
models a universal processing phenomenon, it is implausible that each listener
is using a different centering algorithm. The third issue is that even for
utterances within a discourse segment, there are strong contrasts between
utterances whose adjacent utterance within a segment is hierarchically recent
and those whose adjacent utterance within a segment is linearly recent. This
chapter argues that these problems can be eliminated by replacing Grosz and
Sidner's stack model of attentional state with an alternate model, the cache
model. I show how the cache model is easily integrated with the centering
algorithm, and provide several types of data from naturally occurring
discourses that support the proposed integrated model. Future work should
provide additional support for these claims with an examination of a larger
corpus of naturally occurring discourses.
|
cmp-lg/9708006
|
Global Thresholding and Multiple Pass Parsing
|
cmp-lg cs.CL
|
We present a variation on classic beam thresholding techniques that is up to
an order of magnitude faster than the traditional method, at the same
performance level. We also present a new thresholding technique, global
thresholding, which, combined with the new beam thresholding, gives an
additional factor of two improvement, and a novel technique, multiple pass
parsing, that can be combined with the others to yield yet another 50%
improvement. We use a new search algorithm to simultaneously optimize the
thresholding parameters of the various algorithms.
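Classic beam thresholding, the baseline the abstract improves on, can be sketched in a few lines: within a chart cell, keep only entries whose score is within a fixed factor of the cell's best score. The cell contents and beam width below are illustrative assumptions; the paper's global and multiple-pass techniques are not reproduced here.

```python
def beam_prune(cell, beam=1e-3):
    """Keep chart entries within a factor `beam` of the best score
    in the cell. `cell` maps nonterminal labels to inside scores."""
    if not cell:
        return {}
    best = max(cell.values())
    return {label: p for label, p in cell.items() if p >= beam * best}

# Hypothetical cell: VP's score is far below the best entry and is pruned.
cell = {"NP": 0.02, "VP": 1e-6, "S": 4e-5}
pruned = beam_prune(cell, beam=1e-3)
```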
|
cmp-lg/9708007
|
A complexity measure for diachronic Chinese phonology
|
cmp-lg cs.CL
|
This paper addresses the problem of deriving distance measures between parent
and daughter languages with specific relevance to historical Chinese phonology.
The diachronic relationship between the languages is modelled as a
Probabilistic Finite State Automaton. The Minimum Message Length principle is
then employed to find the complexity of this structure. The idea is that this
measure is representative of the amount of dissimilarity between the two
languages.
|
cmp-lg/9708008
|
Fast Context-Free Parsing Requires Fast Boolean Matrix Multiplication
|
cmp-lg cs.CL
|
Valiant showed that Boolean matrix multiplication (BMM) can be used for CFG
parsing. We prove a dual result: CFG parsers running in time $O(|G||w|^{3 -
\epsilon})$ on a grammar $G$ and a string $w$ can be used to multiply $m \times
m$ Boolean matrices in time $O(m^{3 - \epsilon/3})$. In the process we also
provide a formal definition of parsing motivated by an informal notion due to
Lang. Our result establishes one of the first limitations on general CFG
parsing: a fast, practical CFG parser would yield a fast, practical BMM
algorithm, which is not believed to exist.
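The structural link between the two problems runs through the triple loop they share: CKY's step that combines chart cells over all split points has the same shape as a Boolean matrix product. The sketch below shows only naive BMM for orientation; it does not reproduce either reduction.

```python
def bmm(A, B):
    """Naive Boolean matrix multiplication:
    C[i][j] = OR over k of (A[i][k] AND B[k][j]).
    The i, j, k loops mirror CKY's start, end, and split-point loops,
    which is why parsing speed and BMM speed are tied together."""
    m = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

A = [[1, 0], [0, 1]]   # identity
B = [[0, 1], [1, 0]]
C = bmm(A, B)
```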
|
cmp-lg/9708009
|
DIA-MOLE: An Unsupervised Learning Approach to Adaptive Dialogue Models
for Spoken Dialogue Systems
|
cmp-lg cs.CL
|
The DIAlogue MOdel Learning Environment supports an engineering-oriented
approach towards dialogue modelling for a spoken-language interface. A major
step towards a dialogue model is identifying the basic units from which it is
constructed, together with their possible sequences. In contrast to many other
approaches, the set of dialogue acts is not predefined by a theory or manually
during the engineering process, but is learned from data available in an
advised spoken dialogue system. The architecture is outlined and the approach
is applied to the domain of appointment scheduling. Even though it is based on
a word correctness of only about 70%, the predictability of dialogue acts in
DIA-MOLE turns out to be comparable to that of human-assigned dialogue acts.
|
cmp-lg/9708010
|
Similarity-Based Methods For Word Sense Disambiguation
|
cmp-lg cs.CL
|
We compare four similarity-based estimation methods against back-off and
maximum-likelihood estimation methods on a pseudo-word sense disambiguation
task in which we controlled for both unigram and bigram frequency. The
similarity-based methods perform up to 40% better on this particular task. We
also conclude that events that occur only once in the training set have a major
impact on similarity-based estimates.
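The general shape of similarity-based estimation can be sketched as follows: the probability of an unseen bigram (w1, w2) is estimated as a similarity-weighted average of P(w2 | w1') over words w1' similar to w1. The vocabulary, conditional probabilities, and uniform weighting below are placeholders, not the paper's actual similarity functions.

```python
def similarity_estimate(w1, w2, cond_probs, similar, weight):
    """P_sim(w2 | w1): weighted average of P(w2 | w1') over the
    neighbours w1' of w1. `weight` is a placeholder similarity."""
    neighbours = similar[w1]
    total = sum(weight(w1, n) for n in neighbours)
    return sum(weight(w1, n) * cond_probs[n].get(w2, 0.0)
               for n in neighbours) / total

# Invented training estimates: "consume bread" itself is unseen, but its
# probability is borrowed from the distributionally similar verbs.
cond_probs = {"eat":    {"bread": 0.3, "rice": 0.2},
              "devour": {"bread": 0.1, "meat": 0.4}}
similar = {"consume": ["eat", "devour"]}
weight = lambda a, b: 1.0   # uniform weights for the sketch

p = similarity_estimate("consume", "bread", cond_probs, similar, weight)
```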
|
cmp-lg/9708011
|
Similarity-Based Approaches to Natural Language Processing
|
cmp-lg cs.CL
|
This thesis presents two similarity-based approaches to sparse data problems.
The first approach is to build soft, hierarchical clusters: soft, because each
event belongs to each cluster with some probability; hierarchical, because
cluster centroids are iteratively split to model finer distinctions. Our second
approach is a nearest-neighbor approach: instead of calculating a centroid for
each class, as in the hierarchical clustering approach, we in essence build a
cluster around each word. We compare several such nearest-neighbor approaches
on a word sense disambiguation task and find that as a whole, their performance
is far superior to that of standard methods. In another set of experiments, we
show that using estimation techniques based on the nearest-neighbor model
enables us to achieve perplexity reductions of more than 20 percent over
standard techniques in the prediction of low-frequency events, and
statistically significant speech recognition error-rate reduction.
|
cmp-lg/9708012
|
Encoding Frequency Information in Lexicalized Grammars
|
cmp-lg cs.CL
|
We address the issue of how to associate frequency information with
lexicalized grammar formalisms, using Lexicalized Tree Adjoining Grammar as a
representative framework. We consider systematically a number of alternative
probabilistic frameworks, evaluating their adequacy from both a theoretical and
empirical perspective using data from existing large treebanks. We also propose
three orthogonal approaches for backing off probability estimates to cope with
the large number of parameters involved.
|