| id (string, len 9–16) | title (string, len 4–278) | categories (string, len 5–104) | abstract (string, len 6–4.09k) |
|---|---|---|---|
cmp-lg/9406021
|
A symbolic description of punning riddles and its computer
implementation
|
cmp-lg cs.CL
|
Riddles based on simple puns can be classified according to the patterns of
word, syllable or phrase similarity they depend upon. We have devised a formal
model of the semantic and syntactic regularities underlying some of the simpler
types of punning riddle. We have also implemented this preliminary theory in a
computer program which can generate riddles from a lexicon containing general
data about words and phrases; that is, the lexicon content is not customised to
produce jokes. Informal evaluation of the program's results by a set of human
judges suggests that the riddles produced by this program are of comparable
quality to those in general circulation among school children.
|
cmp-lg/9406022
|
An implemented model of punning riddles
|
cmp-lg cs.CL
|
In this paper, we discuss a model of simple question-answer punning,
implemented in a program, JAPE, which generates riddles from humour-independent
lexical entries. The model uses two main types of structure: schemata, which
determine the relationships between key words in a joke, and templates, which
produce the surface form of the joke. JAPE succeeds in generating pieces of
text that are recognizably jokes, but some of them are not very good jokes. We
mention some potential improvements and extensions, including post-production
heuristics for ordering the jokes according to quality.
|
cmp-lg/9406023
|
A Spanish Tagset for the CRATER Project
|
cmp-lg cs.CL
|
This working paper describes the Spanish tagset to be used in the context of
CRATER, a CEC funded project aiming at the creation of a multilingual (English,
French, Spanish) aligned corpus using the International Telecommunications
Union corpus. Each version of the corpus will be (or is currently being)
tagged; the Xerox PARC tagger will be adapted to Spanish in order to
perform the tagging of the Spanish version. The tagset has been devised as the
ideal one for Spanish, and has been posted to several mailing lists in order to
get feedback on it.
|
cmp-lg/9406024
|
Learning Fault-tolerant Speech Parsing with SCREEN
|
cmp-lg cs.CL
|
This paper describes a new approach and a system SCREEN for fault-tolerant
speech parsing. SCREEN stands for Symbolic Connectionist Robust EnterprisE for
Natural language. Speech parsing describes the syntactic and semantic analysis
of spontaneous spoken language. The general approach is based on incremental
immediate flat analysis, learning of syntactic and semantic speech parsing,
parallel integration of current hypotheses, and the consideration of various
forms of speech related errors. The goal for this approach is to explore the
parallel interactions between various knowledge sources for learning
incremental fault-tolerant speech parsing. This approach is examined in a
system SCREEN using various hybrid connectionist techniques. Hybrid
connectionist techniques are examined because of their promising properties of
inherent fault tolerance, learning, gradedness and parallel constraint
integration. The input for SCREEN is hypotheses about recognized words of a
spoken utterance potentially analyzed by a speech system, the output is
hypotheses about the flat syntactic and semantic analysis of the utterance. In
this paper we focus on the general approach, the overall architecture, and
examples for learning flat syntactic speech parsing. Unlike most other
speech-language architectures, SCREEN emphasizes an interactive rather than an
autonomous position, learning rather than encoding, flat analysis rather than
in-depth analysis, and fault-tolerant processing of phonetic, syntactic and
semantic knowledge.
|
cmp-lg/9406025
|
Emergent Parsing and Generation with Generalized Chart
|
cmp-lg cs.CL
|
A new, flexible inference method for Horn logic programs is proposed: a
drastic generalization of chart parsing in which partial instantiations of
clauses in a program roughly correspond to arcs in a chart. Chart-like parsing and
semantic-head-driven generation emerge from this method. With a parsimonious
instantiation scheme for ambiguity packing, the parsing complexity reduces to
that of standard chart-based algorithms.
|
cmp-lg/9406026
|
The Very Idea of Dynamic Semantics
|
cmp-lg cs.CL
|
"Natural languages are programming languages for minds." Can we or should we
take this slogan seriously? If so, how? Can answers be found by looking at the
various "dynamic" treatments of natural language developed over the last decade
or so, mostly in response to problems associated with donkey anaphora? In
Dynamic Logic of Programs, the meaning of a program is a binary relation on the
set of states of some abstract machine. This relation is meant to model aspects
of the effects of the execution of the program, in particular its input-output
behavior. What, if anything, are the dynamic aspects of various proposed
dynamic semantics for natural languages supposed to model? Is there anything
dynamic to be modeled? If not, what is all the fuss about? We shall try to
answer some, at least, of these questions and provide materials for answers to
others.
|
cmp-lg/9406027
|
Analyzing and Improving Statistical Language Models for Speech
Recognition
|
cmp-lg cs.CL
|
In many current speech recognizers, a statistical language model is used to
indicate how likely it is that a certain word will be spoken next, given the
words recognized so far. How can statistical language models be improved so
that more complex speech recognition tasks can be tackled? Since the knowledge
of the weaknesses of any theory often makes improving the theory easier, the
central idea of this thesis is to analyze the weaknesses of existing
statistical language models in order to subsequently improve them. To that end,
we formally define a weakness of a statistical language model in terms of the
logarithm of the total probability, LTP, a term closely related to the standard
perplexity measure used to evaluate statistical language models. We apply our
definition of a weakness to a frequently used statistical language model,
called a bi-pos model. This results, for example, in a new modeling of unknown
words which improves the performance of the model by 14% to 21%. Moreover, one
of the identified weaknesses has prompted the development of our generalized
N-pos language model, which is also outlined in this thesis. It can incorporate
linguistic knowledge even if it extends over many words and this is not
feasible in a traditional N-pos model. This leads to a discussion of
what knowledge should be added to statistical language models in general, and we
give criteria for selecting potentially useful knowledge. These results show
the usefulness of both our definition of a weakness and of performing an
analysis of weaknesses of statistical language models in general.
|
cmp-lg/9406028
|
Resolution of Syntactic Ambiguity: the Case of New Subjects
|
cmp-lg cs.CL
|
I review evidence for the claim that syntactic ambiguities are resolved on
the basis of the meaning of the competing analyses, not their structure. I
identify a collection of ambiguities that do not yet have a meaning-based
account and propose one which is based on the interaction of discourse and
grammatical function. I provide evidence for my proposal by examining
statistical properties of the Penn Treebank of syntactically annotated text.
|
cmp-lg/9406029
|
A Computational Model of Syntactic Processing: Ambiguity Resolution from
Interpretation
|
cmp-lg cs.CL
|
Syntactic ambiguity abounds in natural language, yet humans have no
difficulty coping with it. In fact, the process of ambiguity resolution is
almost always unconscious. It is not infallible, however, as example 1
demonstrates.
1. The horse raced past the barn fell.
This sentence is perfectly grammatical, as is evident when it appears in the
following context:
2. Two horses were being shown off to a prospective buyer. One was raced past
a meadow, and the other was raced past a barn. ...
Grammatical yet unprocessable sentences such as 1 are called `garden-path
sentences.' Their existence provides an opportunity to investigate the human
sentence processing mechanism by studying how and when it fails. The aim of
this thesis is to construct a computational model of language understanding
which can predict processing difficulty. The data to be modeled are known
examples of garden path and non-garden path sentences, and other results from
psycholinguistics.
It is widely believed that there are two distinct loci of computation in
sentence processing: syntactic parsing and semantic interpretation. One
longstanding controversy is which of these two modules bears responsibility for
the immediate resolution of ambiguity. My claim is that it is the latter, and
that the syntactic processing module is a very simple device which blindly and
faithfully constructs all possible analyses for the sentence up to the current
point of processing. The interpretive module serves as a filter, occasionally
discarding certain of these analyses which it deems less appropriate for the
ongoing discourse than their competitors.
This document is divided into three parts. The first is introductory, and
reviews a selection of proposals from the sentence processing literature. The
second part explores a body of data which has been adduced in support of a
theory of structural preferences --- one that is inconsistent with the present
claim. I show how the current proposal can be specified to account for the
available data, and moreover to predict where structural preference theories
will go wrong. The third part is a theoretical investigation of how well the
proposed architecture can be realized using current conceptions of linguistic
competence. In it, I present a parsing algorithm and a meaning-based ambiguity
resolution method.
|
cmp-lg/9406030
|
The complexity of normal form rewrite sequences for Associativity
|
cmp-lg cs.CL
|
The complexity of a particular term-rewrite system is considered: the rule of
associativity (x*y)*z --> x*(y*z). Algorithms and exact calculations are given
for the longest and shortest sequences of applications of --> that result in
normal form (NF). The shortest NF sequence for a term x always has length
n - drm(x), where n is the number of occurrences of * in x and drm(x) is the
depth of the rightmost leaf of x. The longest NF sequence for any term has
length n(n-1)/2.
|
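The bounds in the abstract above are easy to check by brute force on small terms. The sketch below (the tuple encoding and function names are mine, not the paper's) enumerates every rewrite sequence under (x*y)*z --> x*(y*z) and confirms that the shortest route to normal form has length n - drm(x), and that a fully left-combed term admits a longest derivation of length n(n-1)/2:

```python
from functools import lru_cache

# Terms over a single binary operator *: a leaf is the string "a",
# and an application (x*y) is the pair (x, y). The rewrite rule is
# associativity oriented rightward: (x*y)*z -> x*(y*z).

def redexes(t):
    """Yield every term reachable from t in one rewrite step."""
    if isinstance(t, tuple):
        l, r = t
        if isinstance(l, tuple):        # root redex: (x*y)*z -> x*(y*z)
            x, y = l
            yield (x, (y, r))
        for l2 in redexes(l):           # rewrite inside the left subterm
            yield (l2, r)
        for r2 in redexes(r):           # rewrite inside the right subterm
            yield (l, r2)

@lru_cache(maxsize=None)
def min_max_steps(t):
    """Lengths of the shortest and longest rewrite sequences to normal form."""
    succs = list(redexes(t))
    if not succs:                       # no redex left: t is a right comb
        return (0, 0)
    pairs = [min_max_steps(s) for s in succs]
    return (1 + min(p[0] for p in pairs), 1 + max(p[1] for p in pairs))

def n_stars(t):
    """Number of occurrences of * in t."""
    return 0 if t == "a" else 1 + n_stars(t[0]) + n_stars(t[1])

def drm(t):
    """Depth of the rightmost leaf of t."""
    return 0 if t == "a" else 1 + drm(t[1])
```

For the left comb ((((a*a)*a)*a)*a), with n = 4 and drm = 1, `min_max_steps` returns (3, 6), matching n - drm(x) and n(n-1)/2.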
cmp-lg/9406031
|
A Psycholinguistically Motivated Parser for CCG
|
cmp-lg cs.CL
|
Considering the speed with which humans resolve syntactic ambiguity, and the
overwhelming evidence that syntactic ambiguity is resolved through selection of
the analysis whose interpretation is the most `sensible', one comes to the
conclusion that interpretation, and hence parsing, takes place incrementally, at
just about every word. Considerations of parsimony in the theory of the syntactic
processor lead one to explore the simplest of parsers: one which represents
only analyses as defined by the grammar and no other information.
Toward this aim of a simple, incremental parser I explore the proposal that
the competence grammar is a Combinatory Categorial Grammar (CCG). I address the
problem of the proliferating analyses that stem from CCG's associativity of
derivation. My solution involves maintaining only the maximally incremental
analysis and, when necessary, computing the maximally right-branching analysis.
I use results from the study of rewrite systems to show that this computation
is efficient.
|
cmp-lg/9406032
|
Anytime Algorithms for Speech Parsing?
|
cmp-lg cs.CL
|
This paper discusses to what extent the concept of ``anytime algorithms''
can be applied to parsing algorithms with feature unification. We first try to
give a more precise definition of what an anytime algorithm is. We argue that
parsing algorithms have to be classified as contract algorithms as opposed to
(truly) interruptible algorithms. With the restriction that the transaction
being active at the time an interrupt is issued has to be completed before the
interrupt can be executed, it is possible to provide a parser with limited
anytime behavior, which is in fact being realized in our research prototype.
|
cmp-lg/9406033
|
Verb Semantics and Lexical Selection
|
cmp-lg cs.CL
|
This paper will focus on the semantic representation of verbs in computer
systems and its impact on lexical selection problems in machine translation
(MT). Two groups of English and Chinese verbs are examined to show that lexical
selection must be based on interpretation of the sentence as well as selection
restrictions placed on the verb arguments. A novel representation scheme is
suggested, and is compared to representations with selection restrictions used
in transfer-based MT. We see our approach as closely aligned with
knowledge-based MT approaches (KBMT), and as a separate component that could be
incorporated into existing systems. Examples and experimental results will show
that, using this scheme, inexact matches can achieve correct lexical selection.
|
cmp-lg/9406034
|
Decision Lists for Lexical Ambiguity Resolution: Application to Accent
Restoration in Spanish and French
|
cmp-lg cs.CL
|
This paper presents a statistical decision procedure for lexical ambiguity
resolution. The algorithm exploits both local syntactic patterns and more
distant collocational evidence, generating an efficient, effective, and highly
perspicuous recipe for resolving a given ambiguity. By identifying and
utilizing only the single best disambiguating evidence in a target context, the
algorithm avoids the problematic complex modeling of statistical dependencies.
Although directly applicable to a wide class of ambiguities, the algorithm is
described and evaluated in a realistic case study, the problem of restoring
missing accents in Spanish and French text.
|
cmp-lg/9406035
|
DISCO---An HPSG-based NLP System and its Application for Appointment
Scheduling (Project Note)
|
cmp-lg cs.CL
|
The natural language system DISCO is described. It combines: a powerful and
flexible grammar development system; linguistic competence for German,
including morphology, syntax and semantics; new methods for linguistic
performance modelling on the basis of high-level competence grammars; new
methods for modelling multi-agent dialogue competence; and an interesting sample
application for appointment scheduling and calendar management.
|
cmp-lg/9406036
|
Text Analysis Tools in Spoken Language Processing
|
cmp-lg cs.CL
|
This submission contains the PostScript of the final version of the slides
used in our ACL-94 tutorial.
|
cmp-lg/9406037
|
Multi-Paragraph Segmentation of Expository Text
|
cmp-lg cs.CL
|
This paper describes TextTiling, an algorithm for partitioning expository
texts into coherent multi-paragraph discourse units which reflect the subtopic
structure of the texts. The algorithm uses domain-independent lexical frequency
and distribution information to recognize the interactions of multiple
simultaneous themes. Two fully-implemented versions of the algorithm are
described and shown to produce segmentation that corresponds well to human
judgments of the major subtopic boundaries of thirteen lengthy texts.
|
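The lexical-cohesion comparison underlying TextTiling-style segmentation can be sketched as below; the block size and the valley-as-boundary reading are illustrative simplifications, not the paper's exact parameterization:

```python
from collections import Counter
from math import sqrt

def cosine(c1, c2):
    """Cosine similarity between two bags of words (Counter objects)."""
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = sqrt(sum(v * v for v in c1.values()))
    n2 = sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def gap_scores(sentences, block=2):
    """Lexical-cohesion score at each gap between adjacent sentences.

    sentences: list of token lists. Low-scoring gaps (valleys) suggest
    subtopic boundaries.
    """
    scores = []
    for gap in range(1, len(sentences)):
        left = Counter(w for s in sentences[max(0, gap - block):gap] for w in s)
        right = Counter(w for s in sentences[gap:gap + block] for w in s)
        scores.append(cosine(left, right))
    return scores
```

On a toy text whose vocabulary shifts halfway through, the deepest valley falls at the topic change, which is the behavior the abstract reports matching human boundary judgments.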
cmp-lg/9406038
|
An Empirical Model of Acknowledgment for Spoken-Language Systems
|
cmp-lg cs.CL
|
We refine and extend prior views of the description, purposes, and
contexts-of-use of acknowledgment acts through empirical examination of the use
of acknowledgments in task-based conversation. We distinguish three broad
classes of acknowledgments (other-->ackn, self-->other-->ackn, and self+ackn)
and present a catalogue of 13 patterns within these classes that account for
the specific uses of acknowledgment in the corpus.
|
cmp-lg/9406039
|
Three studies of grammar-based surface-syntactic parsing of unrestricted
English text. A summary and orientation
|
cmp-lg cs.CL
|
The dissertation addresses the design of parsing grammars for automatic
surface-syntactic analysis of unconstrained English text. It consists of a
summary and three articles. ``Morphological disambiguation'' documents a
grammar for morphological (or part-of-speech) disambiguation of English, done
within the Constraint Grammar framework proposed by Fred Karlsson. The
disambiguator seeks to discard those of the alternative morphological analyses
proposed by the lexical analyser that are contextually illegitimate. The 1,100
constraints express some 23 general, essentially syntactic statements as
restrictions on the linear order of morphological tags. The error rate of the
morphological disambiguator is about ten times smaller than that of another
state-of-the-art probabilistic disambiguator, given that both are allowed to
leave some of the hardest ambiguities unresolved. This accuracy suggests the
viability of the grammar-based approach to natural language parsing, thus also
contributing to the more general debate concerning the viability of
probabilistic vs. linguistic techniques. ``Experiments with heuristics''
addresses the question of how to resolve those ambiguities that survive the
morphological disambiguator. Two approaches are presented and empirically
evaluated: (i) heuristic disambiguation constraints and (ii) techniques for
learning from the fully disambiguated part of the corpus and then applying this
information to resolving remaining ambiguities.
|
cmp-lg/9406040
|
Learning unification-based grammars using the Spoken English Corpus
|
cmp-lg cs.CL
|
This paper describes a grammar learning system that combines model-based and
data-driven learning within a single framework. Our results from learning
grammars using the Spoken English Corpus (SEC) suggest that combined
model-based and data-driven learning can produce a more plausible grammar than
is the case when using either learning style in isolation.
|
cmp-lg/9407001
|
Morphology with a Null-Interface
|
cmp-lg cs.CL
|
We present an integrated architecture for word-level and sentence-level
processing in a unification-based paradigm. The core of the system is a CLP
implementation of a unification engine for feature structures supporting
relational values. In this framework an HPSG-style grammar is implemented.
Word-level processing uses X2MorF, a morphological component based on an
extended version of two-level morphology. This component is tightly integrated
with the grammar as a relation. The advantage of this approach is that
morphology and syntax are kept logically autonomous while at the same time
minimizing interface problems.
|
cmp-lg/9407002
|
Syntactic Analysis by Local Grammars Automata: an Efficient Algorithm
|
cmp-lg cs.CL
|
Local grammars can be represented in a very convenient way by automata. This
paper describes and illustrates an efficient algorithm for the application of
local grammars put in this form to lemmatized texts.
|
cmp-lg/9407003
|
Compact Representations by Finite-State Transducers
|
cmp-lg cs.CL
|
Finite-state transducers give efficient representations of many Natural
Language phenomena. They make it possible to account for the complex lexicon
restrictions encountered, without resorting to a large set of complex rules
that are difficult to analyze. We here show that these representations can be
made very
compact, indicate how to perform the corresponding minimization, and point out
interesting linguistic side-effects of this operation.
|
cmp-lg/9407004
|
Japanese word sense disambiguation based on examples of synonyms
|
cmp-lg cs.CL
|
(This is not the abstract.) The language is Japanese. If your printer does
not have fonts for Japanese characters, the characters in the figures will not
be printed out correctly. Dissertation for a Bachelor's degree at Kyoto
University (Nagao lab.), March 1994.
|
cmp-lg/9407005
|
A Corrective Training Algorithm for Adaptive Learning in Bag Generation
|
cmp-lg cs.CL
|
The sampling problem in the training corpus is one of the major sources of errors
in corpus-based applications. This paper proposes a corrective training
algorithm to best fit the run-time context domain in the application of bag
generation. It shows which objects should be adjusted and how to adjust their
probabilities. The resulting techniques are greatly simplified and the
experimental results demonstrate the promising effects of the training
algorithm from generic domain to specific domain. In general, these techniques
can be easily extended to various language models and corpus-based
applications.
|
cmp-lg/9407006
|
Interleaving Syntax and Semantics in an Efficient Bottom-Up Parser
|
cmp-lg cs.CL
|
We describe an efficient bottom-up parser that interleaves syntactic and
semantic structure building. Two techniques are presented for reducing search
by reducing local ambiguity: Limited left-context constraints are used to
reduce local syntactic ambiguity, and deferred sortal-constraint application is
used to reduce local semantic ambiguity. We experimentally evaluate these
techniques, and show dramatic reductions in both number of chart-edges and
total parsing time. The robust processing capabilities of the parser are
demonstrated in its use in improving the accuracy of a speech recognizer.
|
cmp-lg/9407007
|
GEMINI: A Natural Language System for Spoken-Language Understanding
|
cmp-lg cs.CL
|
Gemini is a natural language understanding system developed for spoken
language applications. The paper describes the architecture of Gemini, paying
particular attention to resolving the tension between robustness and
overgeneration. Gemini features a broad-coverage unification-based grammar of
English, fully interleaved syntactic and semantic processing in an all-paths,
bottom-up parser, and an utterance-level parser to find interpretations of
sentences that might not be analyzable as complete sentences. Gemini also
includes novel components for recognizing and correcting grammatical
disfluencies, and for doing parse preferences. This paper presents a
component-by-component view of Gemini, providing detailed relevant measurements
of size, efficiency, and performance.
|
cmp-lg/9407008
|
Tricolor DAGs for Machine Translation
|
cmp-lg cs.CL
|
Machine translation (MT) has recently been formulated in terms of
constraint-based knowledge representation and unification theories, but it is
becoming more and more evident that it is not possible to design a practical MT
system without an adequate method of handling mismatches between semantic
representations in the source and target languages. In this paper, we introduce
the idea of ``information-based'' MT, which is considerably more flexible than
interlingual MT or the conventional transfer-based MT.
|
cmp-lg/9407009
|
Estimating Performance of Pipelined Spoken Language Translation Systems
|
cmp-lg cs.CL
|
Most spoken language translation systems developed to date rely on a
pipelined architecture, in which the main stages are speech recognition,
linguistic analysis, transfer, generation and speech synthesis. When making
projections of error rates for systems of this kind, it is natural to assume
that the error rates for the individual components are independent, making the
system accuracy the product of the component accuracies.
The paper reports experiments carried out using the SRI-SICS-Telia Research
Spoken Language Translator and a 1000-utterance sample of unseen data. The
results suggest that the naive performance model leads to serious overestimates
of system error rates, since there are in fact strong dependencies between the
components. Predicting the system error rate on the independence assumption by
simple multiplication resulted in a 16% proportional overestimate for all
utterances, and a 19% overestimate when only utterances of length 1-10 words
were considered.
|
cmp-lg/9407010
|
Combining Knowledge Sources to Reorder N-Best Speech Hypothesis Lists
|
cmp-lg cs.CL
|
A simple and general method is described that can combine different knowledge
sources to reorder N-best lists of hypotheses produced by a speech recognizer.
The method is automatically trainable, acquiring information from both positive
and negative examples. Experiments are described in which it was tested on a
1000-utterance sample of unseen ATIS data.
|
cmp-lg/9407011
|
Discourse Obligations in Dialogue Processing
|
cmp-lg cs.CL
|
We show that in modeling social interaction, particularly dialogue, the
attitude of obligation can be a useful adjunct to the popularly considered
attitudes of belief, goal, and intention and their mutual and shared
counterparts. In particular, we show how discourse obligations can be used to
account in a natural manner for the connection between a question and its
answer in dialogue and how obligations can be used along with other parts of
the discourse context to extend the coverage of a dialogue system.
|
cmp-lg/9407012
|
Phoneme Recognition Using Acoustic Events
|
cmp-lg cs.CL
|
This paper presents a new approach to phoneme recognition using nonsequential
sub-phoneme units. These units are called acoustic events and are
phonologically meaningful as well as recognizable from speech signals. Acoustic
events form a phonologically incomplete representation as compared to
distinctive features. This problem may partly be overcome by incorporating
phonological constraints. Currently, 24 binary events describing manner and
place of articulation, vowel quality and voicing are used to recognize all
German phonemes. Phoneme recognition in this paradigm consists of two steps:
After the acoustic events have been determined from the speech signal, a
phonological parser is used to generate syllable and phoneme hypotheses from
the event lattice. Results obtained on a speaker-dependent corpus are
presented.
|
cmp-lg/9407013
|
The Acquisition of a Lexicon from Paired Phoneme Sequences and Semantic
Representations
|
cmp-lg cs.CL
|
We present an algorithm that acquires words (pairings of phonological forms
and semantic representations) from larger utterances of unsegmented phoneme
sequences and semantic representations. The algorithm maintains from utterance
to utterance only a single coherent dictionary, and learns in the presence of
homonymy, synonymy, and noise. Test results over a corpus of utterances
generated from the CHILDES database of mother-child interactions are presented.
|
cmp-lg/9407014
|
Abstract Machine for Typed Feature Structures
|
cmp-lg cs.CL
|
This paper describes a first step towards the definition of an abstract
machine for linguistic formalisms that are based on typed feature structures,
such as HPSG. The core design of the abstract machine is given in detail,
including the compilation process from a high-level specification language to
the abstract machine language and the implementation of the abstract
instructions. We thus apply methods that were proved useful in computer science
to the study of natural languages: a grammar specified using the formalism is
endowed with an operational semantics. Currently, our machine supports the
unification of simple feature structures, unification of sequences of such
structures, cyclic structures and disjunction.
|
cmp-lg/9407015
|
Specifying Intonation from Context for Speech Synthesis
|
cmp-lg cs.CL
|
This paper presents a theory and a computational implementation for
generating prosodically appropriate synthetic speech in response to database
queries. Proper distinctions of contrast and emphasis are expressed in an
intonation contour that is synthesized by rule under the control of a grammar,
a discourse model, and a knowledge base. The theory is based on Combinatory
Categorial Grammar, a formalism which easily integrates the notions of
syntactic constituency, semantics, prosodic phrasing and information structure.
Results from our current implementation demonstrate the system's ability to
generate a variety of intonational possibilities for a given sentence depending
on the discourse context.
|
cmp-lg/9407016
|
The Role of Cognitive Modeling in Achieving Communicative Intentions
|
cmp-lg cs.CL
|
A discourse planner for (task-oriented) dialogue must be able to make choices
about whether relevant, but optional information (for example, the "satellites"
in an RST-based planner) should be communicated. We claim that effective text
planners must explicitly model aspects of the Hearer's cognitive state, such as
what the hearer is attending to and what inferences the hearer can draw, in
order to make these choices. We argue that a mere representation of the
Hearer's knowledge is inadequate. We support this claim by (1) an analysis of
naturally occurring dialogue, and (2) by simulating the generation of
discourses in a situation in which we can vary the cognitive parameters of the
hearer. Our results show that modeling cognitive state can lead to more
effective discourses (measured with respect to a simple task).
|
cmp-lg/9407017
|
Generating Context-Appropriate Word Orders in Turkish
|
cmp-lg cs.CL
|
Turkish has considerably freer word order than English. The interpretations
of different word orders in Turkish rely on information that describes how a
sentence relates to its discourse context. To capture the syntactic features of
a free word order language, I present an adaptation of Combinatory Categorial
Grammars called {}-CCGs (set-CCGs). In {}-CCGs, a verb's subcategorization
requirements are relaxed so that it requires a set of arguments without
specifying their linear order. I integrate a level of information structure,
representing pragmatic functions such as topic and focus, with {}-CCGs to allow
certain pragmatic distinctions in meaning to influence the word order of a
sentence in a compositional way. Finally, I discuss how this strategy is used
within an implemented generation system which produces Turkish sentences with
context-appropriate word orders in a simple database query task.
|
cmp-lg/9407018
|
Generating Multilingual Documents from a Knowledge Base: The TECHDOC
Project
|
cmp-lg cs.CL
|
TECHDOC is an implemented system demonstrating the feasibility of generating
multilingual technical documents on the basis of a language-independent
knowledge base. Its application domain is user and maintenance instructions,
which are produced from underlying plan structures representing the activities,
the participating objects with their properties, relations, and so on. This
paper gives a brief outline of the system architecture and discusses some
recent developments in the project: the addition of actual event simulation in
the KB, steps towards a document authoring tool, and a multimodal user
interface. (slightly corrected version of a paper to appear in: COLING 94,
Proceedings)
|
cmp-lg/9407019
|
Tracking Point of View in Narrative
|
cmp-lg cs.CL
|
Third-person fictional narrative text is composed not only of passages that
objectively narrate events, but also of passages that present characters'
thoughts, perceptions, and inner states. Such passages take a character's
``psychological point of view''. A language understander must determine the
current psychological point of view in order to distinguish the beliefs of the
characters from the facts of the story, to correctly attribute beliefs and
other attitudes to their sources, and to understand the discourse relations
among sentences. Tracking the psychological point of view is not a trivial
problem, because many sentences are not explicitly marked for point of view,
and whether the point of view of a sentence is objective or that of a character
(and if the latter, which character it is) often depends on the context in
which the sentence appears. Tracking the psychological point of view is the
problem addressed in this work. The approach is to seek, by extensive
examinations of naturally-occurring narrative, regularities in the ways that
authors manipulate point of view, and to develop an algorithm that tracks point
of view on the basis of the regularities found. This paper presents this
algorithm, gives demonstrations of an implemented system, and describes the
results of some preliminary empirical studies, which lend support to the
algorithm.
|
cmp-lg/9407020
|
A Sequential Algorithm for Training Text Classifiers
|
cmp-lg cs.CL
|
The ability to cheaply train text classifiers is critical to their use in
information retrieval, content analysis, natural language processing, and other
tasks involving data which is partly or fully textual. An algorithm for
sequential sampling during machine learning of statistical classifiers was
developed and tested on a newswire text categorization task. This method, which
we call uncertainty sampling, reduced by as much as 500-fold the amount of
training data that would have to be manually classified to achieve a given
level of effectiveness.
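The core selection step can be sketched as follows: ask a human to label the documents the current model is least sure about. This is a minimal illustration, not the paper's exact procedure; the scorer below is a toy stand-in for a trained classifier, and all names and data are invented.

```python
# Minimal sketch of uncertainty sampling for a binary text classifier.

def uncertainty_sample(unlabeled, prob_positive, batch_size=2):
    """Return the examples whose predicted probability of the
    positive class is closest to 0.5 (least certain)."""
    return sorted(unlabeled,
                  key=lambda doc: abs(prob_positive(doc) - 0.5))[:batch_size]

# Toy scorer: confidence grows with occurrences of the word "trade".
def prob_positive(doc):
    return min(0.5 + 0.2 * doc.count("trade"), 0.99)

docs = ["no cue words here",
        "trade talks on trade tariffs",
        "one trade mention"]
print(uncertainty_sample(docs, prob_positive))
```

The most uncertain documents are routed to the annotator first; confidently classified ones are never labeled by hand, which is where the savings come from.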
|
cmp-lg/9407021
|
K-vec: A New Approach for Aligning Parallel Texts
|
cmp-lg cs.CL
|
Various methods have been proposed for aligning texts in two or more
languages such as the Canadian Parliamentary Debates (Hansards). Some of these

methods generate a bilingual lexicon as a by-product. We present an alternative
alignment strategy which we call K-vec, that starts by estimating the lexicon.
For example, it discovers that the English word "fisheries" is similar to the
French "pe^ches" by noting that the distribution of "fisheries" in the English
text is similar to the distribution of "pe^ches" in the French. K-vec does not
depend on sentence boundaries.
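The distributional comparison can be sketched like this: split each half of the parallel corpus into K pieces, record which pieces each word occurs in, and score candidate pairs by how well those occurrence vectors agree. This is illustrative only; the scoring here is a simple Dice coefficient rather than the paper's statistic, tokens are ASCII-simplified, and the texts are toys.

```python
# Sketch of the K-vec idea: compare binary occurrence vectors of a
# word across K equal pieces of each text.

def k_vec(text, word, k=4):
    words = text.split()
    size = max(1, len(words) // k)
    pieces = [words[i * size:(i + 1) * size] for i in range(k)]
    return [1 if word in piece else 0 for piece in pieces]

def dice(u, v):
    both = sum(a & b for a, b in zip(u, v))
    return 2 * both / (sum(u) + sum(v) or 1)

en = "fisheries report . trade law . fisheries quota . budget vote"
fr = "peches rapport . commerce loi . peches quota . budget vote"
print(dice(k_vec(en, "fisheries"), k_vec(fr, "peches")))   # high agreement
print(dice(k_vec(en, "fisheries"), k_vec(fr, "commerce"))) # low agreement
```

Because only piece-level co-occurrence is compared, the method never needs sentence boundaries, matching the abstract's claim.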
|
cmp-lg/9407022
|
Comparative Discourse Analysis of Parallel Texts
|
cmp-lg cs.CL
|
A quantitative representation of discourse structure can be computed by
measuring lexical cohesion relations among adjacent blocks of text. These
representations have been proposed to deal with sub-topic text segmentation. In
a parallel corpus, similar representations can be derived for versions of a
text in various languages. These can be used for parallel segmentation and as
an alternative measure of text-translation similarity.
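The block-comparison idea behind such representations can be sketched as cosine similarity over word counts between adjacent blocks, with dips in the resulting curve suggesting sub-topic boundaries. The measure and blocks below are illustrative, not the paper's exact formulation.

```python
# Lexical cohesion curve between adjacent text blocks.
from collections import Counter
import math

def cosine(a, b):
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

blocks = ["the cat sat on the mat".split(),
          "the cat chased the mouse".split(),
          "stocks fell as markets closed".split()]
curve = [cosine(blocks[i], blocks[i + 1]) for i in range(len(blocks) - 1)]
boundary = curve.index(min(curve)) + 1   # block index after the deepest dip
print(curve, boundary)
```

Computing the same curve over each language's version of a parallel text is what allows the cross-language comparisons the abstract describes.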
|
cmp-lg/9407023
|
Multi-Tape Two-Level Morphology: A Case Study in Semitic Non-linear
Morphology
|
cmp-lg cs.CL
|
This paper presents an implemented multi-tape two-level model capable of
describing Semitic non-linear morphology. The computational framework behind
the current work is motivated by Kay (1987); the formalism presented here is an
extension to the formalism reported by Pulman and Hepple (1993). The objectives
of the current work are: to stay as close as possible, in spirit, to standard
two-level morphology, to stay close to the linguistic description of Semitic
stems, and to present a model which can be used with ease by the Semitist. The
paper illustrates that if finite-state transducers (FSTs) in a standard
two-level morphology model are replaced with multi-tape auxiliary versions
(AFSTs), one can account for Semitic root-and-pattern morphology using a
high-level notation.
|
cmp-lg/9407024
|
PRINCIPAR---An Efficient, Broad-coverage, Principle-based Parser
|
cmp-lg cs.CL
|
We present an efficient, broad-coverage, principle-based parser for English.
The parser has been implemented in C++ and runs on SUN Sparcstations with
X-windows. It contains a lexicon with over 90,000 entries, constructed
automatically by applying a set of extraction and conversion rules to entries
from machine readable dictionaries.
|
cmp-lg/9407025
|
Recovering From Parser Failures: A Hybrid Statistical/Symbolic Approach
|
cmp-lg cs.CL
|
We describe an implementation of a hybrid statistical/symbolic approach to
repairing parser failures in a speech-to-speech translation system. We describe
a module which takes as input a fragmented parse and returns a repaired meaning
representation. It negotiates with the speaker about what the complete meaning
of the utterance is by generating hypotheses about how to fit the fragments of
the partial parse together into a coherent meaning representation. By drawing
upon both statistical and symbolic information, it constrains its repair
hypotheses to those which are both likely and meaningful. Because it updates
its statistical model during use, it improves its performance over time.
|
cmp-lg/9407026
|
Tagging and Morphological Disambiguation of Turkish Text
|
cmp-lg cs.CL
|
Automatic text tagging is an important component in higher level analysis of
text corpora, and its output can be used in many natural language processing
applications. In languages like Turkish or Finnish, with agglutinative
morphology, morphological disambiguation is a very crucial process in tagging,
as the structures of many lexical forms are morphologically ambiguous. This
paper describes a POS tagger for Turkish text based on a full-scale two-level
specification of Turkish morphology that is based on a lexicon of about 24,000
root words. This is augmented with a multi-word and idiomatic construct
recognizer and, most importantly, a morphological disambiguator based on local
neighborhood constraints, heuristics, and a limited amount of statistical
information. The tagger also has functionality for statistics compilation and
fine tuning of the morphological analyzer, such as logging erroneous
morphological parses, commonly used roots, etc. Preliminary results indicate
that the tagger can tag about 98-99\% of the texts accurately with very minimal
user intervention. Furthermore, for sentences morphologically disambiguated
with the tagger, an LFG parser developed for Turkish generates, on average,
50\% fewer ambiguous parses and parses almost 2.5 times faster. The tagging
functionality is not specific to Turkish, and can be applied to any language
with a proper morphological analysis interface.
|
cmp-lg/9407027
|
Parsing as Tree Traversal
|
cmp-lg cs.CL
|
This paper presents a unified approach to parsing, in which top-down,
bottom-up and left-corner parsers are related to preorder, postorder and
inorder tree traversals. It is shown that the simplest bottom-up and
left-corner parsers are left recursive and must be converted using an extended
Greibach normal form. With further partial execution, the bottom-up and
left-corner parsers collapse together as in the BUP parser of Matsumoto.
|
cmp-lg/9407028
|
Automated Postediting of Documents
|
cmp-lg cs.CL
|
Large amounts of low- to medium-quality English texts are now being produced
by machine translation (MT) systems, optical character readers (OCR), and
non-native speakers of English. Most of this text must be postedited by hand
before it sees the light of day. Improving text quality is tedious work, but
its automation has not received much research attention. Anyone who has
postedited a technical report or thesis written by a non-native speaker of
English knows the potential of an automated postediting system. For the case of
MT-generated text, we argue for the construction of postediting modules that
are portable across MT systems, as an alternative to hardcoding improvements
inside any one system. As an example, we have built a complete self-contained
postediting module for the task of article selection (a, an, the) for English
noun phrases. This is a notoriously difficult problem for Japanese-English MT.
Our system contains over 200,000 rules derived automatically from online text
resources. We report on learning algorithms, accuracy, and comparisons with
human performance.
|
cmp-lg/9407029
|
Building a Large-Scale Knowledge Base for Machine Translation
|
cmp-lg cs.CL
|
Knowledge-based machine translation (KBMT) systems have achieved excellent
results in constrained domains, but have not yet scaled up to newspaper text.
The reason is that knowledge resources (lexicons, grammar rules, world models)
must be painstakingly handcrafted from scratch. One of the hypotheses being
tested in the PANGLOSS machine translation project is whether or not these
resources can be semi-automatically acquired on a very large scale. This paper
focuses on the construction of a large ontology (or knowledge base, or world
model) for supporting KBMT. It contains representations for some 70,000
commonly encountered objects, processes, qualities, and relations. The ontology
was constructed by merging various online dictionaries, semantic networks, and
bilingual resources, through semi-automatic methods. Some of these methods
(e.g., conceptual matching of semantic taxonomies) are broadly applicable to
problems of importing/exporting knowledge from one KB to another. Other methods
(e.g., bilingual matching) allow a knowledge engineer to build up an index to a
KB in a second language, such as Spanish or Japanese.
|
cmp-lg/9407030
|
Computing FIRST and FOLLOW Functions for Feature-Theoretic Grammars
|
cmp-lg cs.CL
|
This paper describes an algorithm for the computation of FIRST and FOLLOW
sets for use with feature-theoretic grammars in which the value of the sets
consists of pairs of feature-theoretic categories. The algorithm preserves as
much information from the grammars as possible, using negative restriction to
define equivalence classes. Addition of a simple data structure leads to an
order of magnitude improvement in execution time over a naive implementation.
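For a plain context-free grammar with no empty productions, the FIRST computation that the paper generalizes to feature-theoretic categories can be sketched as a fixpoint iteration. The grammar below is a toy; the feature-theoretic pairing and negative restriction are not shown.

```python
# FIRST-set computation for a simple CFG (no empty productions, so
# only the first right-hand-side symbol matters).

def first_sets(grammar, terminals):
    """FIRST[A] = set of terminals that can begin a string derived
    from nonterminal A."""
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:                         # iterate to a fixpoint
        changed = False
        for nt, rhss in grammar.items():
            for rhs in rhss:
                sym = rhs[0]
                add = {sym} if sym in terminals else first[sym]
                if not add <= first[nt]:
                    first[nt] |= add
                    changed = True
    return first

g = {"S": [("NP", "VP")],
     "NP": [("det", "n"), ("n",)],
     "VP": [("v", "NP")]}
print(first_sets(g, {"det", "n", "v"}))
```

The sets grow monotonically and are bounded by the terminal vocabulary, so the iteration always terminates.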
|
cmp-lg/9408001
|
The Correct and Efficient Implementation of Appropriateness
Specifications for Typed Feature Structures
|
cmp-lg cs.CL
|
In this paper, we argue that type inferencing incorrectly implements
appropriateness specifications for typed feature structures, promote a
combination of type resolution and unfilling as a correct and efficient
alternative, and consider the expressive limits of this alternative approach.
Throughout, we use feature cooccurrence restrictions as illustration and
linguistic motivation.
|
cmp-lg/9408002
|
Computational Analyses of Arabic Morphology
|
cmp-lg cs.CL
|
This paper demonstrates how a (multi-tape) two-level formalism can be used to
write two-level grammars for Arabic non-linear morphology using a high level,
but computationally tractable, notation. Three illustrative grammars are
provided based on CV-, moraic- and affixational analyses. These are
complemented by a proposal for handling the hitherto computationally untreated
problem of the broken plural. It will be shown that the best grammars for
describing Arabic non-linear morphology are moraic in the case of templatic
stems, and affixational in the case of a-templatic stems. The paper will
demonstrate how the broken plural can be derived under two-level theory via the
`implicit' derivation of the singular.
|
cmp-lg/9408003
|
Typed Feature Structures as Descriptions
|
cmp-lg cs.CL
|
A description is an entity that can be interpreted as true or false of an
object, and using feature structures as descriptions accrues several
computational benefits. In this paper, I create an explicit interpretation of a
typed feature structure used as a description, define the notion of a
satisfiable feature structure, and create a simple and effective algorithm to
decide if a feature structure is satisfiable.
|
cmp-lg/9408004
|
Parsing with Principles and Probabilities
|
cmp-lg cs.CL
|
This paper is an attempt to bring together two approaches to language
analysis. The possible use of probabilistic information in principle-based
grammars and parsers is considered, including discussion on some theoretical
and computational problems that arise. Finally a partial implementation of
these ideas is presented, along with some preliminary results from testing on a
small set of sentences.
|
cmp-lg/9408005
|
A Modular and Flexible Architecture for an Integrated Corpus Query
System
|
cmp-lg cs.CL
|
The paper describes the architecture of an integrated and extensible corpus
query system developed at the University of Stuttgart and gives examples of
some of the modules realized within this architecture. The modules form the
core of a corpus workbench. Within the proposed architecture, information
required for the evaluation of queries may be derived from different knowledge
sources (the corpus text, databases, on-line thesauri) and by different means:
either through direct lookup in a database or by calling external tools which
may infer the necessary information at the time of query evaluation. The
information available and the method of information access can be stated
declaratively and individually for each corpus, leading to a flexible,
extensible and modular corpus workbench.
|
cmp-lg/9408006
|
LHIP: Extended DCGs for Configurable Robust Parsing
|
cmp-lg cs.CL
|
We present LHIP, a system for incremental grammar development using an
extended DCG formalism. The system uses a robust island-based parsing method
controlled by user-defined performance thresholds.
|
cmp-lg/9408007
|
Emergent Linguistic Rules from Inducing Decision Trees: Disambiguating
Discourse Clue Words
|
cmp-lg cs.CL
|
We apply decision tree induction to the problem of discourse clue word sense
disambiguation with a genetic algorithm. The automatic partitioning of the
training set which is intrinsic to decision tree induction gives rise to
linguistically viable rules.
|
cmp-lg/9408008
|
Statistical versus symbolic parsing for captioned-information retrieval
|
cmp-lg cs.CL
|
We discuss implementation issues of MARIE-1, a mostly symbolic parser fully
implemented, and MARIE-2, a more statistical parser partially implemented. They
address a corpus of 100,000 picture captions. We argue that the mixed approach
of MARIE-2 should be better for this corpus because its algorithms (not data)
are simpler.
|
cmp-lg/9408009
|
Tagging accurately -- Don't guess if you know
|
cmp-lg cs.CL
|
We discuss combining knowledge-based (or rule-based) and statistical
part-of-speech taggers. We use two mature taggers, ENGCG and Xerox Tagger, to
independently tag the same text and combine the results to produce a fully
disambiguated text. In a 27,000-word test sample taken from a previously unseen
corpus we achieve 98.5% accuracy. This paper presents the data in detail. We
describe the problems we encountered in the course of combining the two taggers
and discuss the problem of evaluating taggers.
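One simple way to combine two taggers' outputs, sketched below, is to keep only the tags both allow for a token and fall back to the first (rule-based) tagger's alternatives when the intersection is empty. This is an assumed combination scheme for illustration, not necessarily the paper's; the tag sets are invented.

```python
# Combining two taggers' per-token tag alternatives by intersection.

def combine(tags_a, tags_b):
    merged = []
    for a, b in zip(tags_a, tags_b):
        common = set(a) & set(b)
        merged.append(common if common else set(a))  # back off to tagger A
    return merged

# invented tag alternatives for a three-word sentence
engcg = [{"DET"}, {"NOUN", "ADJ"}, {"VERB"}]
xerox = [{"DET"}, {"NOUN"}, {"ADV", "VERB"}]
print(combine(engcg, xerox))
```

When the two taggers make independent errors, the intersection removes spurious readings that either tagger alone would have had to guess between.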
|
cmp-lg/9408010
|
On Using Selectional Restriction in Language Models for Speech
Recognition
|
cmp-lg cs.CL
|
In this paper, we investigate the use of selectional restriction -- the
constraints a predicate imposes on its arguments -- in a language model for
speech recognition. We use an un-tagged corpus, followed by a public domain
tagger and a very simple finite state machine to obtain verb-object pairs from
unrestricted English text. We then measure the impact the knowledge of the verb
has on the prediction of the direct object in terms of the perplexity of a
cluster-based language model. The results show that even though a clustered
bigram is more useful than a verb-object model, the combination of the two
leads to an improvement over the clustered bigram model.
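The abstract reports that combining the two predictors beats the clustered bigram alone; linear interpolation is one standard way to combine such models, sketched below. This is an assumption for illustration, not the paper's exact formula, and the probabilities and weight are invented.

```python
# Linear interpolation of a clustered bigram model with a
# verb-object (selectional restriction) model.

def interpolate(p_bigram, p_verb_obj, lam=0.7):
    """P(object | context) as a weighted mix of two predictors."""
    return lam * p_bigram + (1 - lam) * p_verb_obj

# P("contract" | previous word) from the clustered bigram vs.
# P("contract" | verb = "sign") from the verb-object pairs:
print(round(interpolate(p_bigram=0.02, p_verb_obj=0.15), 3))
```

The mixture weight would normally be tuned on held-out data so that each knowledge source contributes in proportion to its predictive value.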
|
cmp-lg/9408011
|
Distributional Clustering of English Words
|
cmp-lg cs.CL
|
We describe and experimentally evaluate a method for automatically clustering
words according to their distribution in particular syntactic contexts.
Deterministic annealing is used to find lowest distortion sets of clusters. As
the annealing parameter increases, existing clusters become unstable and
subdivide, yielding a hierarchical ``soft'' clustering of the data. Clusters
are used as the basis for class models of word cooccurrence, and the models
evaluated with respect to held-out test data.
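The "soft" assignment at the heart of deterministic annealing can be sketched as a Boltzmann-style membership probability that sharpens as the annealing parameter grows, which is why clusters subdivide along the way. The distortions below are invented; this shows only the assignment step, not the full annealing schedule.

```python
# Soft cluster membership under an annealing parameter beta.
import math

def soft_assign(distortions, beta):
    """p(cluster | word) proportional to exp(-beta * distortion)."""
    weights = [math.exp(-beta * d) for d in distortions]
    z = sum(weights)
    return [w / z for w in weights]

# distortion of one word against two cluster centroids
for beta in (0.1, 10.0):
    print(beta, [round(p, 3) for p in soft_assign([1.0, 2.0], beta)])
```

At low beta the word belongs almost equally to both clusters; at high beta the assignment is effectively hard, mirroring the hierarchical refinement the abstract describes.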
|
cmp-lg/9408012
|
Approximate N-Gram Markov Model for Natural Language Generation
|
cmp-lg cs.CL
|
This paper proposes an Approximate n-gram Markov Model for bag generation.
Directed word association pairs with distances are used to approximate
(n-1)-gram and n-gram training tables. This model has the parameters of a word
association model and the merits of both the word association model and the
Markov model. The training knowledge for bag generation can also be applied to
lexical selection in machine translation design.
|
cmp-lg/9408013
|
Training and Scaling Preference Functions for Disambiguation
|
cmp-lg cs.CL
|
We present an automatic method for weighting the contributions of preference
functions used in disambiguation. Initial scaling factors are derived as the
solution to a least-squares minimization problem, and improvements are then
made by hill-climbing. The method is applied to disambiguating sentences in the
ATIS (Air Travel Information System) corpus, and the performance of the
resulting scaling factors is compared with hand-tuned factors. We then focus on
one class of preference function, those based on semantic lexical collocations.
Experimental results are presented showing that such functions vary
considerably in selecting correct analyses. In particular we define a function
that performs significantly better than ones based on mutual information and
likelihood ratios of lexical associations.
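The hill-climbing stage can be sketched as follows: score each candidate analysis as a weighted sum of preference-function values and nudge the weights whenever a change improves training accuracy. The least-squares initialisation is omitted, and the candidates, scores, and gold answers are invented toys.

```python
# Tuning preference-function weights by simple hill-climbing.

def accuracy(weights, sentences):
    correct = 0
    for candidates, gold in sentences:
        # each candidate analysis is a tuple of preference scores
        best = max(range(len(candidates)),
                   key=lambda i: sum(w * f
                                     for w, f in zip(weights, candidates[i])))
        correct += (best == gold)
    return correct / len(sentences)

def hill_climb(weights, sentences, step=0.1, rounds=20):
    for _ in range(rounds):
        for i in range(len(weights)):
            for delta in (step, -step):
                trial = list(weights)
                trial[i] += delta
                if accuracy(trial, sentences) > accuracy(weights, sentences):
                    weights = trial
    return weights

# two preference functions; gold is the index of the correct analysis
data = [([(0.9, 0.1), (0.2, 0.8)], 1),   # only function 2 is right here
        ([(0.5, 0.9), (0.4, 0.3)], 0)]   # both functions agree here
w = hill_climb([0.5, 0.5], data)
print(w, accuracy(w, data))
```

On this toy data the climber learns to down-weight the misleading first function, which is the behaviour the paper's trained scaling factors are meant to capture.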
|
cmp-lg/9408014
|
Qualitative and Quantitative Models of Speech Translation
|
cmp-lg cs.CL
|
This paper compares a qualitative reasoning model of translation with a
quantitative statistical model. We consider these models within the context of
two hypothetical speech translation systems, starting with a logic-based design
and pointing out which of its characteristics are best preserved or eliminated
in moving to the second, quantitative design. The quantitative language and
translation models are based on relations between lexical heads of phrases.
Statistical parameters for structural dependency, lexical transfer, and linear
order are used to select a set of implicit relations between words in a source
utterance, a corresponding set of relations between target language words, and
the most likely translation of the original utterance.
|
cmp-lg/9408015
|
Experimentally Evaluating Communicative Strategies: The Effect of the
Task
|
cmp-lg cs.CL
|
Effective problem solving among multiple agents requires a better
understanding of the role of communication in collaboration. In this paper we
show that there are communicative strategies that greatly improve the
performance of resource-bounded agents, but that these strategies are highly
sensitive to the task requirements, situation parameters and agents' resource
limitations. We base our argument on two sources of evidence: (1) an analysis
of a corpus of 55 problem solving dialogues, and (2) experimental simulations
of collaborative problem solving dialogues in an experimental world,
Design-World, where we parameterize task requirements, agents' resources and
communicative strategies.
|
cmp-lg/9408016
|
On Implementing an HPSG theory -- Aspects of the logical architecture,
the formalization, and the implementation of head-driven phrase structure
grammars
|
cmp-lg cs.CL
|
The paper presents some aspects involved in the formalization and
implementation of HPSG theories. As a basis, the logical setups of Carpenter
(1992) and King (1989, 1994) are briefly compared regarding their usefulness
as a basis for HPSGII (Pollard and Sag 1994). The possibilities for expressing HPSG
theories in the HPSGII architecture and in various computational systems (ALE,
Troll, CUF, and TFS) are discussed. Beside a formal characterization of the
possibilities, the paper investigates the specific choices for constraints with
certain linguistic motivations, i.e. the lexicon, structure licensing, and
grammatical principles. An ALE implementation of a theory for German proposed
by Hinrichs and Nakazawa (1994) is used as example and the ALE grammar is
included in the appendix.
|
cmp-lg/9408017
|
Reaping the Benefits of Interactive Syntax and Semantics
|
cmp-lg cs.CL
|
Semantic feedback is an important source of information that a parser could
use to deal with local ambiguities in syntax. However, it is difficult to
devise a systematic communication mechanism for interactive syntax and
semantics. In this article, I propose a variant of left-corner parsing to
define the points at which syntax and semantics should interact, an account of
grammatical relations and thematic roles to define the content of the
communication, and a conflict resolution strategy based on independent
preferences from syntax and semantics. The resulting interactive model has been
implemented in a program called COMPERE and shown to account for a wide variety
of psycholinguistic data on structural and lexical ambiguities.
|
cmp-lg/9408018
|
Uniform Representations for Syntax-Semantics Arbitration
|
cmp-lg cs.CL
|
Psychological investigations have led to considerable insight into the
working of the human language comprehension system. In this article, we look at
a set of principles derived from psychological findings to argue for a
particular organization of linguistic knowledge along with a particular
processing strategy and present a computational model of sentence processing
based on those principles. Many studies have shown that human sentence
comprehension is an incremental and interactive process in which semantic and
other higher-level information interacts with syntactic information to make
informed commitments as early as possible at a local ambiguity. Early
commitments may be made by using top-down guidance from knowledge of different
types, each of which must be applicable independently of others. Further
evidence from studies of error recovery and delayed decisions points toward an
arbitration mechanism for combining syntactic and semantic information in
resolving ambiguities. In order to account for all of the above, we propose
that all types of linguistic knowledge must be represented in a common form but
must be separable so that they can be applied independently of each other and
integrated at processing time by the arbitrator. We present such a uniform
representation and a computational model called COMPERE based on the
representation and the processing strategy.
|
cmp-lg/9408019
|
Building a Parser That can Afford to Interact with Semantics
|
cmp-lg cs.CL
|
Natural language understanding programs get bogged down by the multiplicity
of possible syntactic structures while processing real world texts that human
understanders do not have much difficulty with. In this work, I analyze the
relationships between parsing strategies, the degree of local ambiguity
encountered by them, and semantic feedback to syntax, and propose a parsing
algorithm called {\em Head-Signaled Left Corner Parsing} (HSLC) that minimizes
local ambiguities while supporting interactive syntactic and semantic analysis.
Such a parser has been implemented in a sentence understanding program called
COMPERE.
|
cmp-lg/9408020
|
Having Your Cake and Eating It Too: Autonomy and Interaction in a Model
of Sentence Processing
|
cmp-lg cs.CL
|
Is the human language understander a collection of modular processes
operating with relative autonomy, or is it a single integrated process? This
ongoing debate has polarized the language processing community, with two
fundamentally different types of model posited, and with each camp concluding
that the other is wrong. One camp puts forth a model with separate processors
and distinct knowledge sources to explain one body of data, and the other
proposes a model with a single processor and a homogeneous, monolithic
knowledge source to explain the other body of data. In this paper we argue that
a hybrid approach which combines a unified processor with separate knowledge
sources provides an explanation of both bodies of data, and we demonstrate the
feasibility of this approach with the computational model called COMPERE. We
believe that this approach brings the language processing community
significantly closer to offering human-like language processing systems.
|
cmp-lg/9408021
|
A Unified Process Model of Syntactic and Semantic Error Recovery in
Sentence Understanding
|
cmp-lg cs.CL
|
The development of models of human sentence processing has traditionally
followed one of two paths. Either the model posited a sequence of processing
modules, each with its own task-specific knowledge (e.g., syntax and
semantics), or it posited a single processor utilizing different types of
knowledge inextricably integrated into a monolithic knowledge base. Our
previous work in modeling the sentence processor resulted in a model in which
different processing modules used separate knowledge sources but operated in
parallel to arrive at the interpretation of a sentence. One highlight of this
model is that it offered an explanation of how the sentence processor might
recover from an error in choosing the meaning of an ambiguous word. Recent
experimental work by Laurie Stowe strongly suggests that the human sentence
processor deals with syntactic error recovery using a mechanism very much like
that proposed by our model of semantic error recovery. Another way to interpret
Stowe's finding is this: the human sentence processor consists of a single
unified processing module utilizing multiple independent knowledge sources in
parallel. A sentence processor built upon this architecture should at times
exhibit behavior associated with modular approaches, and at other times act
like an integrated system. In this paper we explore some of these ideas via a
prototype computational model of sentence processing called COMPERE, and
propose a set of psychological experiments for testing our theories.
|
cmp-lg/9409001
|
Integrating Knowledge Bases and Statistics in MT
|
cmp-lg cs.CL
|
We summarize recent machine translation (MT) research at the Information
Sciences Institute of USC, and we describe its application to the development
of a Japanese-English newspaper MT system. Our work aims at scaling up
grammar-based, knowledge-based MT techniques. This scale-up involves the use of
statistical methods, both in acquiring effective knowledge resources and in
making reasonable linguistic choices in the face of knowledge gaps.
|
cmp-lg/9409002
|
Conceptual Association for Compound Noun Analysis
|
cmp-lg cs.CL
|
This paper describes research toward the automatic interpretation of compound
nouns using corpus statistics. An initial study aimed at syntactic
disambiguation is presented. The approach presented bases associations upon
thesaurus categories. Association data is gathered from unambiguous cases
extracted from a corpus and is then applied to the analysis of ambiguous
compound nouns. While the work presented is still in progress, a first attempt
to syntactically analyse a test set of 244 examples shows 75% correctness.
Future work is aimed at improving this accuracy and extending the technique to
assign semantic role information, thus producing a complete interpretation.
|
cmp-lg/9409003
|
A Probabilistic Model of Compound Nouns
|
cmp-lg cs.CL
|
Compound nouns such as ``example noun compound'' are becoming more common in
natural language and pose a number of difficult problems for NLP systems,
notably increasing the complexity of parsing. In this paper we develop a
probabilistic model for syntactically analysing such compounds. The model
predicts compound noun structures based on knowledge of affinities between
nouns, which can be acquired from a corpus. Problems inherent in this
corpus-based approach are addressed: data sparseness is overcome by the use of
semantically motivated word classes and sense ambiguity is explicitly handled
in the model. An implementation based on this model is described in Lauer
(1994) and correctly parses 77% of the test set.
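The dependency-style bracketing decision for three-noun compounds can be sketched as a comparison of affinities: prefer [[n1 n2] n3] when the n1-n2 affinity beats the n1-n3 affinity. The affinity table below is invented; in the paper such affinities are acquired from corpus counts over semantically motivated word classes.

```python
# Bracketing a three-noun compound from pairwise noun affinities.

AFFINITY = {("computer", "science"): 0.8,
            ("computer", "department"): 0.1,
            ("city", "department"): 0.5}

def bracket(n1, n2, n3, affinity=AFFINITY):
    left = affinity.get((n1, n2), 0.0)    # evidence for [[n1 n2] n3]
    right = affinity.get((n1, n3), 0.0)   # evidence for [n1 [n2 n3]]
    return f"[[{n1} {n2}] {n3}]" if left >= right else f"[{n1} [{n2} {n3}]]"

print(bracket("computer", "science", "department"))
print(bracket("city", "science", "department"))
```

Sparse pairs fall back to a zero affinity here; the paper instead backs off to word classes, which is how it addresses data sparseness.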
|
cmp-lg/9409004
|
An Experiment on Learning Appropriate Selectional Restrictions from a
Parsed Corpus
|
cmp-lg cs.CL
|
We present a methodology to extract Selectional Restrictions at a variable
level of abstraction from phrasally analyzed corpora. The method relies on the
use of a wide-coverage noun taxonomy and a statistical measure of the
co-occurrence of linguistic items. Some experimental results about the
performance of the method are provided.
|
cmp-lg/9409005
|
Focusing for Pronoun Resolution in English Discourse: An Implementation
|
cmp-lg cs.CL
|
Anaphora resolution is one of the most active research areas in natural
language processing. This study examines focusing as a tool for the resolution
of pronouns which are a kind of anaphora. Focusing is a discourse phenomenon
like anaphora. Candy Sidner formalized focusing in her 1979 MIT PhD thesis and
devised several algorithms to resolve definite anaphora including pronouns. She
presented her theory in a computational framework but did not generally
implement the algorithms. Her algorithms related to focusing and pronoun
resolution are implemented in this thesis. This implementation provides a
better comprehension of the theory both from a conceptual and a computational
point of view. The resulting program is tested on different discourse segments,
and evaluation and analysis of the experiments are presented together with the
statistical results.
|
cmp-lg/9409006
|
Situated Modeling of Epistemic Puzzles
|
cmp-lg cs.CL
|
Situation theory is a mathematical theory of meaning introduced by Jon
Barwise and John Perry. It has evoked great theoretical and practical interest
and motivated the framework of a few `computational' systems. PROSIT is the
pioneering work in this direction. Unfortunately, there is a lack of real-life
applications built on these systems, and this study is a preliminary attempt to remedy
this deficiency. Here, we examine how much PROSIT reflects situation-theoretic
concepts and solve a group of epistemic puzzles using the constructs provided
by this programming language.
|
cmp-lg/9409007
|
Treating `Free Word Order' in Machine Translation
|
cmp-lg cs.CL
|
In `free word order' languages, every sentence is embedded in its specific
context. Among others, the order of constituents is determined by the
categories `theme', `rheme' and `contrastive focus'. This paper shows how to
recognise and to translate these categories automatically on a sentential
basis, so that sentence embedding can be achieved without having to refer to
the context. Modifier classes, which are traditionally neglected in linguistic
description, are fully covered by the proposed method. (Coling 94, Kyoto, Vol.
I, pages 69-75)
|
cmp-lg/9409008
|
Parsing of Spoken Language under Time Constraints
|
cmp-lg cs.CL
|
Spoken language applications in natural dialogue settings place serious
requirements on the choice of processing architecture. Especially under adverse
phonetic and acoustic conditions, parsing procedures have to be developed which
not only analyse the incoming speech in a time-synchronous and incremental
manner, but are also able to schedule their resources according to the varying
conditions of the recognition process. Depending on the actual degree of local
ambiguity, the parser has to select among the available constraints in order to
narrow down the search space with as little effort as possible.
A parsing approach based on constraint satisfaction techniques is discussed.
It provides important characteristics of the desired real-time behaviour and
attempts to mimic some of the attention focussing capabilities of the human
speech comprehension mechanism.
|
cmp-lg/9409009
|
Linguistics Computation, Automatic Model Generation, and Intensions
|
cmp-lg cs.CL
|
Techniques are presented for defining models of computational linguistics
theories. The methods of generalized diagrams that were developed by this
author for modeling artificial intelligence planning and reasoning are shown to
be applicable to models of computation of linguistics theories. It is shown
that for extensional and intensional interpretations, models can be generated
automatically which assign meaning to computations of linguistics theories for
natural languages.
Keywords: Computational Linguistics, Reasoning Models, G-diagrams For Models,
Dynamic Model Implementation, Linguistics and Logics For Artificial
Intelligence
|
cmp-lg/9409010
|
Inducing Probabilistic Grammars by Bayesian Model Merging
|
cmp-lg cs.CL
|
We describe a framework for inducing probabilistic grammars from corpora of
positive samples. First, samples are {\em incorporated} by adding ad-hoc rules
to a working grammar; subsequently, elements of the model (such as states or
nonterminals) are {\em merged} to achieve generalization and a more compact
representation. The choice of what to merge and when to stop is governed by the
Bayesian posterior probability of the grammar given the data, which formalizes
a trade-off between a close fit to the data and a default preference for
simpler models (`Occam's Razor'). The general scheme is illustrated using three
types of probabilistic grammars: Hidden Markov models, class-based $n$-grams,
and stochastic context-free grammars.
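The posterior-guided merging loop can be sketched for the simplest possible case, a class-based unigram model: the prior penalises the number of classes (Occam bias) and the likelihood treats words within a class as equiprobable. This is a toy illustration of the scheme, not the paper's implementation; all names are hypothetical.

```python
import math
from itertools import combinations

def log_posterior(classes, counts, alpha=1.0):
    """log P(G) + log P(D|G) for a class-based unigram model.
    The prior penalises model size; the likelihood assumes a uniform
    distribution over the words inside each class."""
    n_total = sum(counts.values())
    log_prior = -alpha * len(classes)
    log_like = 0.0
    for cls in classes:
        mass = sum(counts[w] for w in cls)
        for w in cls:
            # P(w) = P(class) * 1/|class|
            log_like += counts[w] * math.log((mass / n_total) / len(cls))
    return log_prior + log_like

def merge_greedily(counts, alpha=1.0):
    """Start with one class per observed word (sample incorporation),
    then repeatedly commit the merge that most improves the posterior;
    stop when no merge helps."""
    classes = [frozenset([w]) for w in counts]
    score = log_posterior(classes, counts, alpha)
    improved = True
    while improved:
        improved = False
        best = None
        for a, b in combinations(classes, 2):
            cand = [c for c in classes if c not in (a, b)] + [a | b]
            s = log_posterior(cand, counts, alpha)
            if s > score:
                score, best, improved = s, cand, True
        if best is not None:
            classes = best
    return classes
```

Words with similar frequencies merge first, since merging them costs little likelihood while the size prior rewards the smaller model.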
|
cmp-lg/9409011
|
Aligning Noisy Parallel Corpora Across Language Groups : Word Pair
Feature Matching by Dynamic Time Warping
|
cmp-lg cs.CL
|
We propose a new algorithm called DK-vec for aligning pairs of
Asian/Indo-European noisy parallel texts without sentence boundaries. DK-vec
improves on previous alignment algorithms in that it handles better the
non-linear nature of noisy corpora. The algorithm uses frequency, position and
recency information as features for pattern matching. Dynamic Time Warping is
used as the matching technique between word pairs. This algorithm produces a
small bilingual lexicon which provides anchor points for alignment.
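The DK-vec features themselves are not spelled out in the abstract, but the matching step, Dynamic Time Warping over 1-D feature vectors such as word positions, follows the classic dynamic program (a generic sketch, not the authors' code):

```python
def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D feature sequences
    (e.g. the vectors of positions at which a word occurs). Returns the
    minimum cumulative |a[i]-b[j]| cost over all monotone alignments."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of the three predecessor alignments
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Word pairs whose feature vectors have a low warping cost become candidate anchor points for the alignment.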
|
cmp-lg/9409012
|
Towards an Automatic Dictation System for Translators: the TransTalk
Project
|
cmp-lg cs.CL
|
Professional translators often dictate their translations orally and have
them typed afterwards. The TransTalk project aims at automating the second part
of this process. Its originality as a dictation system lies in the fact that
both the acoustic signal produced by the translator and the source text under
translation are made available to the system. Probable translations of the
source text can be predicted and these predictions used to help the speech
recognition system in its lexical choices. We present the results of the first
prototype, which show a marked improvement in the performance of the speech
recognition task when translation predictions are taken into account.
|
cmp-lg/9410001
|
Improving Language Models by Clustering Training Sentences
|
cmp-lg cs.CL
|
Many of the kinds of language model used in speech understanding suffer from
imperfect modeling of intra-sentential contextual influences. I argue that this
problem can be addressed by clustering the sentences in a training corpus
automatically into subcorpora on the criterion of entropy reduction, and
calculating separate language model parameters for each cluster. This kind of
clustering offers a way to represent important contextual effects and can
therefore significantly improve the performance of a model. It also offers a
reasonably automatic means to gather evidence on whether a more complex,
context-sensitive model using the same general kind of linguistic information
is likely to reward the effort that would be required to develop it: if
clustering improves the performance of a model, this proves the existence of
further context dependencies, not exploited by the unclustered model. As
evidence for these claims, I present results showing that clustering improves
some models but not others for the ATIS domain. These results are consistent
with other findings for such models, suggesting that the existence or otherwise
of an improvement brought about by clustering is indeed a good pointer to
whether it is worth developing further the unclustered model.
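A minimal version of the proposal can be sketched as a greedy exchange loop: each sentence moves to the cluster whose current unigram model assigns it the highest probability, which in turn lowers per-cluster entropy. This is a simplified stand-in for the entropy-reduction criterion; all details are hypothetical.

```python
import math
from collections import Counter

def sent_logprob(sent, counts, total, vocab, smooth=1.0):
    """Add-one-smoothed unigram log probability of a sentence under a
    cluster's current word counts."""
    return sum(math.log((counts[w] + smooth) / (total + smooth * vocab))
               for w in sent)

def cluster_sentences(sents, k=2, iters=10):
    """Greedy exchange clustering: reassign every sentence to its most
    probable cluster, re-estimate the cluster models, and iterate to a
    fixed point."""
    vocab = len({w for s in sents for w in s})
    assign = [i % k for i in range(len(sents))]   # round-robin start
    for _ in range(iters):
        models = []
        for c in range(k):
            counts = Counter(w for s, a in zip(sents, assign) if a == c
                             for w in s)
            models.append((counts, sum(counts.values())))
        new = [max(range(k),
                   key=lambda c: sent_logprob(s, models[c][0],
                                              models[c][1], vocab))
               for s in sents]
        if new == assign:
            break
        assign = new
    return assign
```

Separate n-gram parameters would then be estimated from each resulting subcorpus.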
|
cmp-lg/9410002
|
Lexikoneintraege fuer deutsche Adverbien (Dictionary Entries for German
Adverbs)
|
cmp-lg cs.CL
|
Modifiers in general, and adverbs in particular, are neglected categories in
linguistics, and consequently, their treatment in Natural Language Processing
poses problems. In this article, we present the dictionary information for
German adverbs which is necessary to deal with word order, degree modifier
scope and other problems in NLP. We also give evidence for the claim that a
classification according to position classes differs from any semantic
classification.
|
cmp-lg/9410003
|
Principle Based Semantics for HPSG
|
cmp-lg cs.CL
|
The paper presents a constraint based semantic formalism for HPSG. The
advantages of the formalism are shown with respect to a grammar for a fragment
of German that deals with (i) quantifier scope ambiguities triggered by
scrambling and/or movement and (ii) ambiguities that arise from the
collective/distributive distinction of plural NPs. The syntax-semantics
interface directly implements syntactic conditions on quantifier scoping and
distributivity. The construction of semantic representations is guided by
general principles governing the interaction between syntax and semantics. Each
of these principles acts as a constraint to narrow down the set of possible
interpretations of a sentence. Meanings of ambiguous sentences are represented
by single partial representations (so-called U(nderspecified) D(iscourse)
R(epresentation) S(tructure)s) to which further constraints can be added
monotonically to gain more information about the content of a sentence. There
is no need to build up a large number of alternative representations of the
sentence which are then filtered by subsequent discourse and world knowledge.
The advantage of UDRSs is not only that they allow for monotonic incremental
interpretation but also that they are equipped with truth conditions and a
proof theory that allows for inferences to be drawn directly on structures
where quantifier scope is not resolved.
|
cmp-lg/9410004
|
Spelling Correction in Agglutinative Languages
|
cmp-lg cs.CL
|
This paper presents an approach to spelling correction in agglutinative
languages that is based on two-level morphology and a dynamic programming based
search algorithm. Spelling correction in agglutinative languages is
significantly different from that in languages like English. The concept of a word
in such languages is much wider than the entries found in a dictionary, owing
to productive word formation by derivational and inflectional affixation.
After an overview of certain issues and relevant mathematical preliminaries, we
formally present the problem and our solution. We then present results from our
experiments with spelling correction in Turkish, a Ural--Altaic agglutinative
language. Our results indicate that we can find the intended correct word in
95\% of the cases and offer it as the first candidate in 74\% of the cases,
when the edit distance is 1.
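The two-level morphological machinery is beyond a short sketch, but the candidate-ranking step reduces to edit distance over generated surface forms. Here a flat word list stands in for the morphological generator, and the Turkish forms are merely illustrative:

```python
def edit_distance(s, t):
    """Standard Levenshtein distance via dynamic programming,
    using a rolling one-row table."""
    d = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        prev, d[0] = d[0], i
        for j, ct in enumerate(t, 1):
            # prev holds the diagonal cell of the previous row
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (cs != ct))
    return d[-1]

def correct(word, lexicon, max_dist=1):
    """Rank candidates by edit distance; in the paper's setting the
    candidate set would come from the two-level morphological
    generator rather than a flat word list."""
    scored = [(edit_distance(word, w), w) for w in lexicon]
    return [w for dist, w in sorted(scored) if dist <= max_dist]
```

The search over the generator is pruned dynamically, but the ranking criterion is the same.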
|
cmp-lg/9410005
|
A Centering Approach to Pronouns
|
cmp-lg cs.CL
|
In this paper we present a formalization of the centering approach to
modeling attentional structure in discourse and use it as the basis for an
algorithm to track discourse context and bind pronouns. As described in Grosz,
Joshi and Weinstein (1986), the process of centering attention on entities in
the discourse gives rise to the intersentential transitional states of
continuing, retaining and shifting. We propose an extension to these states
which handles some additional cases of multiple ambiguous pronouns. The
algorithm has been implemented in an HPSG natural language system which serves
as the interface to a database query application.
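The transitional states mentioned above are computed from the backward-looking center (Cb) and the preferred center (Cp). A schematic sketch of the classification, leaving out the paper's extension for multiple ambiguous pronouns:

```python
def backward_center(cf_prev, mentions_cur):
    """Cb(Un): the highest-ranked element of Cf(Un-1) realized in Un.
    cf_prev is the ranked forward-looking center list (subject first)."""
    for entity in cf_prev:
        if entity in mentions_cur:
            return entity
    return None

def classify_transition(cb_prev, cb_cur, cp_cur):
    """Continuing, retaining or shifting, following Grosz, Joshi and
    Weinstein (1986): the center is retained when Cb stays the same
    but is no longer the preferred center."""
    if cb_prev is not None and cb_cur != cb_prev:
        return "SHIFT"
    return "CONTINUE" if cb_cur == cp_cur else "RETAIN"

# "Susan gave Betsy a hamster. She is generous."
# Cf of the first utterance, ranked: [susan, betsy, hamster]
cb = backward_center(["susan", "betsy", "hamster"], {"susan"})
state = classify_transition("susan", cb, "susan")   # CONTINUE
```

A pronoun-binding algorithm then prefers readings that yield the cheaper transitions (continuing over retaining over shifting).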
|
cmp-lg/9410006
|
Evaluating Discourse Processing Algorithms
|
cmp-lg cs.CL
|
In order to take steps towards establishing a methodology for evaluating
Natural Language systems, we conducted a case study. We attempt to evaluate two
different approaches to anaphoric processing in discourse by comparing the
accuracy and coverage of two published algorithms for finding the co-specifiers
of pronouns in naturally occurring texts and dialogues. We present the
quantitative results of hand-simulating these algorithms, but this analysis
naturally gives rise to both a qualitative evaluation and recommendations for
performing such evaluations in general. We illustrate the general difficulties
encountered with quantitative evaluation. These are problems with: (a) allowing
for underlying assumptions, (b) determining how to handle underspecifications,
and (c) evaluating the contribution of false positives and error chaining.
|
cmp-lg/9410007
|
A Formal Look at Dependency Grammars and Phrase-Structure Grammars, with
Special Consideration of Word-Order Phenomena
|
cmp-lg cs.CL
|
The central role of the lexicon in Meaning-Text Theory (MTT) and other
dependency-based linguistic theories cannot be replicated in linguistic
theories based on context-free grammars (CFGs). We describe Tree Adjoining
Grammar (TAG) as a system that arises naturally in the process of lexicalizing
CFGs. A TAG grammar can therefore be compared directly to a Meaning-Text Model
(MTM). We illustrate this point by discussing the computational complexity of
certain non-projective constructions, and suggest a way of incorporating
locality of word-order definitions into the Surface-Syntactic Component of MTT.
|
cmp-lg/9410008
|
Recognizing Text Genres with Simple Metrics Using Discriminant Analysis
|
cmp-lg cs.CL
|
A simple method for categorizing texts into predetermined text genre
categories using the statistical standard technique of discriminant analysis is
demonstrated with application to the Brown corpus. Discriminant analysis makes
it possible to use a large number of parameters that may be specific to a certain
corpus or information stream, and combine them into a small number of
functions, with the parameters weighted on the basis of how useful they are for
discriminating text genres. An application to information retrieval is
discussed.
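For the two-class case the discriminant function can be computed in closed form (Fisher's linear discriminant). A sketch over two hypothetical surface parameters, say mean sentence length and type/token ratio:

```python
def fisher_discriminant(X0, X1):
    """Two-class Fisher linear discriminant over 2-D feature vectors.
    Returns (w, c): classify x as class 1 iff dot(w, x) > c."""
    def mean(X):
        return [sum(x[d] for x in X) / len(X) for d in range(2)]
    m0, m1 = mean(X0), mean(X1)
    # pooled within-class scatter matrix
    S = [[0.0, 0.0], [0.0, 0.0]]
    for X, m in ((X0, m0), (X1, m1)):
        for x in X:
            dev = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    S[i][j] += dev[i] * dev[j]
    # invert the 2x2 scatter matrix by hand
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det, S[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    # threshold at the projected midpoint of the class means
    c = sum(w[d] * (m0[d] + m1[d]) / 2 for d in range(2))
    return w, c
```

With more genres and parameters the same idea yields a small number of discriminant functions, as standard statistics packages compute.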
|
cmp-lg/9410009
|
Lexical Functions and Machine Translation
|
cmp-lg cs.CL
|
This paper discusses the lexicographical concept of lexical functions and
their potential exploitation in the development of a machine translation
lexicon designed to handle collocations.
|
cmp-lg/9410010
|
XTAG system - A Wide Coverage Grammar for English
|
cmp-lg cs.CL
|
This paper presents the XTAG system, a grammar development tool based on the
Tree Adjoining Grammar (TAG) formalism that includes a wide-coverage syntactic
grammar for English. The various components of the system are discussed and
preliminary evaluation results from the parsing of various corpora are given.
Results from the comparison of XTAG against the IBM statistical parser and the
Alvey Natural Language Tool parser are also given.
|
cmp-lg/9410011
|
Dilemma - An Instant Lexicographer
|
cmp-lg cs.CL
|
Dilemma is intended to enhance quality and increase productivity of expert
human translators by presenting to the writer relevant lexical information
mechanically extracted from comparable existing translations, thus replacing -
or compensating for the absence of - a lexicographer and stand-by terminologist
rather than the translator. Using statistics and crude surface analysis and a
minimum of prior information, Dilemma identifies instances and suggests their
counterparts in parallel source and target texts, on all levels down to
individual words. Dilemma forms part of a tool kit for translation where focus
is on text structure and over-all consistency in large text volumes rather than
on framing sentences, on interaction between many actors in a large project
rather than on retrieval of machine-stored data and on decision making rather
than on application of given rules. In particular, the system has been tuned to
the needs of the ongoing translation of European Community legislation into the
languages of candidate member countries. The system has been demonstrated to
and used by professional translators with promising results.
|
cmp-lg/9410012
|
Does Baum-Welch Re-estimation Help Taggers?
|
cmp-lg cs.CL
|
In part-of-speech tagging with a Hidden Markov Model, a statistical model is used
to assign grammatical categories to words in a text. Early work in the field
relied on a corpus which had been tagged by a human annotator to train the
model. More recently, Cutting {\it et al.} (1992) suggest that training can be
achieved with a minimal lexicon and a limited amount of {\em a priori}
information about probabilities, by using Baum-Welch re-estimation to
automatically refine the model. In this paper, I report two experiments
designed to determine how much manual training information is needed. The first
experiment suggests that initial biasing of either lexical or transition
probabilities is essential to achieve good accuracy. The second experiment
reveals that there are three distinct patterns of Baum-Welch re-estimation. In
two of the patterns, the re-estimation ultimately reduces the accuracy of the
tagging rather than improving it. The pattern which is applicable can be
predicted from the quality of the initial model and the similarity between the
tagged training corpus (if any) and the corpus to be tagged. Heuristics for
deciding how to use re-estimation in an effective manner are given. The
conclusions are broadly in agreement with those of Merialdo (1994), but give
greater detail about the contributions of different parts of the model.
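One Baum-Welch re-estimation step for a discrete HMM, in its textbook unscaled form (adequate only for short toy sequences; a real tagger would use scaled probabilities and a lexicon-constrained model):

```python
def forward(obs, pi, A, B):
    """Unscaled forward probabilities alpha[t][i]."""
    n = len(pi)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    for t in range(1, len(obs)):
        prev = alpha[-1]
        alpha.append([B[i][obs[t]] * sum(prev[j] * A[j][i] for j in range(n))
                      for i in range(n)])
    return alpha

def backward(obs, A, B):
    """Unscaled backward probabilities beta[t][i]."""
    n = len(A)
    beta = [[1.0] * n]
    for t in range(len(obs) - 2, -1, -1):
        nxt = beta[0]
        beta.insert(0, [sum(A[i][j] * B[j][obs[t + 1]] * nxt[j]
                            for j in range(n)) for i in range(n)])
    return beta

def baum_welch_step(obs, pi, A, B):
    """One EM re-estimation step; returns updated (pi, A, B)."""
    n, T, M = len(pi), len(obs), len(B[0])
    al, be = forward(obs, pi, A, B), backward(obs, A, B)
    like = sum(al[-1])
    gamma = [[al[t][i] * be[t][i] / like for i in range(n)]
             for t in range(T)]
    xi = [[[al[t][i] * A[i][j] * B[j][obs[t + 1]] * be[t + 1][j] / like
            for j in range(n)] for i in range(n)] for t in range(T - 1)]
    pi2 = gamma[0][:]
    A2 = [[sum(xi[t][i][j] for t in range(T - 1)) /
           sum(gamma[t][i] for t in range(T - 1)) for j in range(n)]
          for i in range(n)]
    B2 = [[sum(gamma[t][i] for t in range(T) if obs[t] == k) /
           sum(gamma[t][i] for t in range(T)) for k in range(M)]
          for i in range(n)]
    return pi2, A2, B2
```

Each step is guaranteed not to decrease the data likelihood; whether the higher-likelihood model also tags more accurately is exactly the question the experiments address.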
|
cmp-lg/9410013
|
Automatic Error Detection in Part of Speech Tagging
|
cmp-lg cs.CL
|
A technique for detecting errors made by Hidden Markov Model taggers is
described, based on comparing observable values of the tagging process with a
threshold. The resulting approach allows the accuracy of the tagger to be
improved by accepting a lower efficiency, defined as the proportion of words
which are tagged. Empirical observations are presented which demonstrate the
validity of the technique and suggest how to choose an appropriate threshold.
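The accuracy/efficiency trade-off can be sketched as a simple confidence threshold over per-token scores (here a generic stand-in for the observable values of the HMM tagging process):

```python
def threshold_tagger(scored_tokens, threshold):
    """Accept a tag only when its confidence clears the threshold.
    scored_tokens is a list of (token, best_tag, confidence) triples."""
    tagged, skipped = [], []
    for token, tag, conf in scored_tokens:
        (tagged if conf >= threshold else skipped).append((token, tag))
    return tagged, skipped

def accuracy_and_efficiency(scored_tokens, gold, threshold):
    """Efficiency = proportion of words tagged; accuracy is measured
    over the tagged words only."""
    tagged, _ = threshold_tagger(scored_tokens, threshold)
    if not tagged:
        return 0.0, 0.0
    correct = sum(1 for token, tag in tagged if gold[token] == tag)
    return correct / len(tagged), len(tagged) / len(scored_tokens)
```

Raising the threshold trades efficiency for accuracy, which is the trade-off the empirical observations quantify.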
|
cmp-lg/9410014
|
A Freely Available Syntactic Lexicon for English
|
cmp-lg cs.CL
|
This paper presents a syntactic lexicon for English that was originally
derived from the Oxford Advanced Learner's Dictionary and the Oxford Dictionary
of Current Idiomatic English, and then modified and augmented by hand. There
are more than 37,000 syntactic entries from all 8 parts of speech. An X-windows
based tool is available for maintaining the lexicon and performing searches. C
and Lisp hooks are also available so that the lexicon can be easily utilized by
parsers and other programs.
|
cmp-lg/9410015
|
Lexicalization and Grammar Development
|
cmp-lg cs.CL
|
In this paper we present a fully lexicalized grammar formalism as a
particularly attractive framework for the specification of natural language
grammars. We discuss in detail Feature-based, Lexicalized Tree Adjoining
Grammars (FB-LTAGs), a representative of the class of lexicalized grammars. We
illustrate the advantages of lexicalized grammars in various contexts of
natural language processing, ranging from wide-coverage grammar development to
parsing and machine translation. We also present a method for compact and
efficient representation of lexicalized trees.
|
cmp-lg/9410016
|
Dutch Cross Serial Dependencies in HPSG
|
cmp-lg cs.CL
|
We present an analysis of Dutch cross serial dependencies in Head-driven
Phrase Structure Grammar. Arguably, our analysis differs from other analyses in
that we do not refer to `additional' mechanisms (e.g., sequence union, head
wrapping): just standard structure sharing, an immediate dominance schema and a
linear precedence rule.
|
cmp-lg/9410017
|
Concurrent Lexicalized Dependency Parsing: The ParseTalk Model
|
cmp-lg cs.CL
|
A grammar model for concurrent, object-oriented natural language parsing is
introduced. Complete lexical distribution of grammatical knowledge is achieved
building upon the head-oriented notions of valency and dependency, while
inheritance mechanisms are used to capture lexical generalizations. The
underlying concurrent computation model relies upon the actor paradigm. We
consider message passing protocols for establishing dependency relations and
ambiguity handling.
|