id | title | categories | abstract
cmp-lg/9410018
|
Part-of-Speech Tagging with Neural Networks
|
cmp-lg cs.CL
|
Text corpora which are tagged with part-of-speech information are useful in
many areas of linguistic research. In this paper, a new part-of-speech tagging
method based on neural networks (Net-Tagger) is presented and its performance
is compared to that of an HMM tagger and a trigram-based tagger. It is shown
that the Net-Tagger performs as well as the trigram-based tagger and better
than the HMM tagger.
|
cmp-lg/9410019
|
Concurrent Lexicalized Dependency Parsing: A Behavioral View on
ParseTalk Events
|
cmp-lg cs.CL
|
The behavioral specification of an object-oriented grammar model is
considered. The model is based on full lexicalization, head-orientation via
valency constraints and dependency relations, inheritance as a means for
non-redundant lexicon specification, and concurrency of computation. The
computation model relies upon the actor paradigm, with concurrency entering
through asynchronous message passing between actors. In particular, we here
elaborate on principles of how the global behavior of a lexically distributed
grammar and its corresponding parser can be specified in terms of event type
networks and event networks, respectively.
|
cmp-lg/9410020
|
Construction of a Bilingual Dictionary Intermediated by a Third Language
|
cmp-lg cs.CL
|
When using a third language to construct a bilingual dictionary, it is
necessary to discriminate equivalencies from inappropriate words derived as a
result of ambiguity in the third language. We propose a method to treat this by
utilizing the structures of dictionaries to measure the nearness of the
meanings of words. The resulting dictionary is a word-to-word bilingual
dictionary of nouns and can be used to refine the entries and equivalencies in
published bilingual dictionaries.
|
cmp-lg/9410021
|
Reference Resolution Using Semantic Patterns in Japanese Newspaper
Articles
|
cmp-lg cs.CL
|
Reference resolution is one of the important tasks in natural language
processing. In this paper, the author first determines the referents, and their
locations, of "dousha" (literally, "the same company") as it appears in
Japanese newspaper articles. Secondly, three heuristic methods, two of which
use semantic information in text such as company names and their patterns, are
proposed and tested on how accurately they identify the correct referents. The
proposed methods based on semantic patterns show high accuracy for reference
resolution of "dousha" (more than 90%). This suggests that semantic
pattern-matching methods are effective for reference resolution in newspaper
articles.
|
cmp-lg/9410022
|
Automated tone transcription
|
cmp-lg cs.CL
|
In this paper I report on an investigation into the problem of assigning
tones to pitch contours. The proposed model is intended to serve as a tool for
phonologists working on instrumentally obtained pitch data from tone languages.
Motivation and exemplification for the model are provided by data taken from my
fieldwork on Bamileke Dschang (Cameroon). Following recent work by Liberman and
others, I provide a parametrised F_0 prediction function P which generates F_0
values from a tone sequence, and I explore the asymptotic behaviour of
downstep. Next, I observe that transcribing a sequence X of pitch (i.e. F_0)
values amounts to finding a tone sequence T such that P(T) ~= X. This is a
combinatorial optimisation problem, for which two non-deterministic search
techniques are provided: a genetic algorithm and a simulated annealing
algorithm. Finally, two implementations---one for each technique---are
described and then compared using both artificial and real data for sequences
of up to 20 tones. These programs can be adapted to other tone languages by
adjusting the F_0 prediction function.
|
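The transcription-as-optimisation view can be illustrated in miniature. The sketch below (the two-tone inventory, the pitch values, and the downstep factor are all invented stand-ins, not the paper's model of Bamileke Dschang) anneals a tone sequence T so that a toy prediction function P(T) matches an observed pitch sequence X:

```python
import math
import random

random.seed(0)
TONES = ["H", "L"]

def predict_f0(tones, high=240.0, low=180.0, downstep=0.9):
    # Toy stand-in for the parametrised F_0 prediction function P:
    # each H that follows an L is realised on a downstepped register.
    out, ceiling, prev = [], high, None
    for t in tones:
        if t == "H":
            if prev == "L":
                ceiling *= downstep
            out.append(ceiling)
        else:
            out.append(low)
        prev = t
    return out

def cost(tones, X):
    # Squared error between predicted and observed pitch values.
    return sum((p - x) ** 2 for p, x in zip(predict_f0(tones), X))

def anneal(X, steps=2000, t0=1000.0):
    # Simulated annealing over tone sequences T, minimising cost(T, X).
    tones = [random.choice(TONES) for _ in X]
    best, best_cost = tones[:], cost(tones, X)
    for step in range(steps):
        temp = t0 * (0.995 ** step)
        cand = tones[:]
        cand[random.randrange(len(cand))] = random.choice(TONES)
        delta = cost(cand, X) - cost(tones, X)
        if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
            tones = cand
        if cost(tones, X) < best_cost:
            best, best_cost = tones[:], cost(tones, X)
    return best

observed = predict_f0(["H", "L", "H", "H", "L"])  # synthetic pitch data
recovered = anneal(observed)
```

Because the downstep couples adjacent positions, the candidate sequence cannot simply be read off position by position, which is what makes a global search technique appropriate.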
cmp-lg/9410023
|
Korean to English Translation Using Synchronous TAGs
|
cmp-lg cs.CL
|
It is often argued that accurate machine translation requires reference to
contextual knowledge for the correct treatment of linguistic phenomena such as
dropped arguments and accurate lexical selection. One of the historical
arguments in favor of the interlingua approach has been that, since it revolves
around a deep semantic representation, it is better able to handle the types of
linguistic phenomena that are seen as requiring a knowledge-based approach. In
this paper we present an alternative approach, exemplified by a prototype
system for machine translation of English and Korean which is implemented in
Synchronous TAGs. This approach is essentially transfer based, and uses
semantic feature unification for accurate lexical selection of polysemous
verbs. The same semantic features, when combined with a discourse model which
stores previously mentioned entities, can also be used for the recovery of
topicalized arguments. In this paper we concentrate on the translation of
Korean to English.
|
cmp-lg/9410024
|
A Freely Available Wide Coverage Morphological Analyzer for English
|
cmp-lg cs.CL
|
This paper presents a morphological lexicon for English that handles more
than 317000 inflected forms derived from over 90000 stems. The lexicon is
available in two formats. The first can be used by an implementation of a
two-level processor for morphological analysis. The second, derived from the
first one for efficiency reasons, consists of a disk-based database using a
UNIX hash table facility. We also built an X Window tool to facilitate the
maintenance and browsing of the lexicon. The package is ready to be integrated
into a natural language application such as a parser through hooks written in
Lisp and C.
|
cmp-lg/9410025
|
Syntactic Analysis Of Natural Language Using Linguistic Rules And
Corpus-based Patterns
|
cmp-lg cs.CL
|
We are concerned with the syntactic annotation of unrestricted text. We
combine a rule-based analysis with subsequent exploitation of empirical data.
The rule-based surface syntactic analyser leaves some amount of ambiguity in
the output that is resolved using empirical patterns. We have implemented a
system for generating and applying corpus-based patterns. Some patterns
describe the main constituents in the sentence and some the local context of
each syntactic function. There are several (partly) redundant patterns, and
the ``pattern'' parser selects the analysis of the sentence that matches the
strictest possible pattern(s). The system is applied to an experimental corpus.
We present the results and discuss possible refinements of the method from a
linguistic point of view.
|
cmp-lg/9410026
|
A Rule-Based Approach To Prepositional Phrase Attachment Disambiguation
|
cmp-lg cs.CL
|
In this paper, we describe a new corpus-based approach to prepositional
phrase attachment disambiguation, and present results comparing performance of
this algorithm with other corpus-based approaches to this problem.
|
cmp-lg/9410027
|
Probabilistic Tagging with Feature Structures
|
cmp-lg cs.CL
|
The described tagger is based on a hidden Markov model and uses tags composed
of features such as part-of-speech, gender, etc. The contextual probability of
a tag (state transition probability) is deduced from the contextual
probabilities of its feature-value-pairs. This approach is advantageous when
the available training corpus is small and the tag set large, which can be the
case with morphologically rich languages.
|
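The deduction of a composite tag's transition probability from its feature-value pairs might be sketched as follows (feature names and probability values are invented, and the independence-style product is an illustrative assumption, not necessarily the paper's exact estimator):

```python
from math import prod

# Hypothetical contextual probabilities of individual feature-value pairs,
# as might be estimated from a small training corpus (numbers invented).
pair_prob = {
    ("pos=noun", "prev-pos=det"): 0.6,
    ("gender=fem", "prev-gender=fem"): 0.7,
    ("number=sg", "prev-number=sg"): 0.8,
}

def transition_prob(pairs):
    # Approximate the state transition probability of a composite tag as
    # the product of the contextual probabilities of its feature-value
    # pairs, treating the features as independent.
    return prod(pair_prob[p] for p in pairs)

p = transition_prob([
    ("pos=noun", "prev-pos=det"),
    ("gender=fem", "prev-gender=fem"),
    ("number=sg", "prev-number=sg"),
])
```

The payoff is data efficiency: each feature-value pair is seen far more often in a small corpus than any full composite tag, which is exactly the situation the abstract describes for morphologically rich languages.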
cmp-lg/9410028
|
Minimal Change and Bounded Incremental Parsing
|
cmp-lg cs.CL
|
Ideally, the time that an incremental algorithm uses to process a change
should be a function of the size of the change rather than, say, the size of
the entire current input. Based on a formalization of ``the set of things
changed'' by an incremental modification, this paper investigates how and to
what extent it is possible to give such a guarantee for a chart-based parsing
framework and discusses the general utility of a minimality notion in
incremental processing.
|
cmp-lg/9410029
|
Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing
|
cmp-lg cs.CL
|
In a lexicalized grammar formalism such as Lexicalized Tree-Adjoining Grammar
(LTAG), each lexical item is associated with at least one elementary structure
(supertag) that localizes syntactic and semantic dependencies. Thus a parser
for a lexicalized grammar must search a large set of supertags to choose the
right ones to combine for the parse of the sentence. We present techniques for
disambiguating supertags using local information such as lexical preference and
local lexical dependencies. The similarity between LTAG and Dependency grammars
is exploited in the dependency model of supertag disambiguation. The
performance results for various models of supertag disambiguation such as
unigram, trigram and dependency-based models are presented.
|
cmp-lg/9410030
|
Feature-Based TAG in place of multi-component adjunction: Computational
Implications
|
cmp-lg cs.CL
|
Using feature-based Tree Adjoining Grammar (TAG), this paper presents
linguistically motivated analyses of constructions claimed to require
multi-component adjunction. These feature-based TAG analyses permit parsing of
these constructions using an existing unification-based Earley-style TAG
parser, thus obviating the need for a multi-component TAG parser without
sacrificing linguistic coverage for English.
|
cmp-lg/9410031
|
Towards a More User-friendly Correction
|
cmp-lg cs.CL
|
We first present our view of detection and correction of syntactic errors. We
then introduce a new correction method, based on heuristic criteria used to
decide which correction should be preferred. Weighting of these criteria leads
to a flexible and parameterizable system, which can adapt itself to the user. A
partitioning of the trees based on linguistic criteria (agreement rules), rather
than computational criteria, is then necessary. We end by proposing extensions
to lexical correction and to some syntactic errors. Our aim is an adaptable and
user-friendly system capable of automatic correction for some applications.
|
cmp-lg/9410032
|
Planning Argumentative Texts
|
cmp-lg cs.CL
|
This paper presents PROVERB, a text planner for argumentative texts.
PROVERB's main feature is that it combines global hierarchical planning and
unplanned organization of text with respect to local derivation relations in a
complementary way. The former splits the task of presenting a particular proof
into subtasks of presenting subproofs. The latter simulates how the next
intermediate conclusion to be presented is chosen under the guidance of the
local focus.
|
cmp-lg/9410033
|
Default Handling in Incremental Generation
|
cmp-lg cs.CL
|
Natural language generation must work with insufficient input.
Underspecifications can be caused by shortcomings of the component providing
the input or by the preliminary state of incrementally given input. The paper
aims to escape from such dead-end situations by making assumptions. We discuss
global aspects of default handling. Two problem classes for defaults in the
incremental syntactic generator VM-GEN are presented to substantiate our
discussion.
|
cmp-lg/9410034
|
A Comparison of Two Smoothing Methods for Word Bigram Models
|
cmp-lg cs.CL
|
(Thesis by Linda Bauman Peto, Department of Computer Science, University of
Toronto.) Word bigram models estimated from text corpora
require smoothing methods to estimate the probabilities of unseen bigrams. The
deleted estimation method uses the formula:
Pr(i|j) = lambda f_i + (1 - lambda) f_{i|j}, where f_i and f_{i|j} are the
relative frequency of i and the conditional relative frequency of i given j,
respectively, and lambda is an optimized parameter. MacKay (1994) proposes a
Bayesian approach using Dirichlet priors, which yields a different formula:
Pr(i|j) = (alpha/(F_j + alpha)) m_i + (1 - alpha/(F_j + alpha)) f_{i|j}, where
F_j is the count of j and alpha and m_i are optimized parameters. This thesis
describes an experiment in which the two methods were trained on a
two-million-word corpus taken from the Canadian _Hansard_ and compared on the
basis of the experimental perplexity that they assigned to a shared test
corpus. The methods proved to be about equally accurate, with MacKay's method
using fewer resources.
|
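The two estimators can be contrasted on toy counts. This minimal sketch (corpus, lambda, and alpha values are invented; it also reuses the unigram frequency f_i as a stand-in for MacKay's prior m_i, which the thesis treats as a separately optimized parameter) implements both formulas:

```python
from collections import Counter

# Toy corpus; all counts and parameter values are illustrative.
words = "the cat sat on the mat the cat ran".split()
unigrams = Counter(words)
bigrams = Counter(zip(words, words[1:]))
N = len(words)

def f(i):
    # Relative frequency f_i of word i.
    return unigrams[i] / N

def f_cond(i, j):
    # Conditional relative frequency f_{i|j} of i given preceding j.
    return bigrams[(j, i)] / unigrams[j] if unigrams[j] else 0.0

def deleted_interpolation(i, j, lam=0.5):
    # Pr(i|j) = lambda * f_i + (1 - lambda) * f_{i|j}
    return lam * f(i) + (1 - lam) * f_cond(i, j)

def dirichlet(i, j, alpha=2.0):
    # Pr(i|j) = w * m_i + (1 - w) * f_{i|j},  w = alpha / (F_j + alpha)
    # Here m_i falls back to the unigram frequency f_i.
    w = alpha / (unigrams[j] + alpha)
    return w * f(i) + (1 - w) * f_cond(i, j)

p1 = deleted_interpolation("cat", "the")
p2 = dirichlet("cat", "the")
```

Note the structural difference the formulas encode: lambda is one global mixing weight, while the Dirichlet weight alpha/(F_j + alpha) shrinks automatically as the context count F_j grows, trusting the conditional frequency more for well-attested contexts.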
cmp-lg/9411001
|
Sublanguage Terms: Dictionaries, Usage, and Automatic Classification
|
cmp-lg cs.CL
|
The use of terms from natural and social scientific titles and abstracts is
studied from the perspective of sublanguages and their specialized
dictionaries. Different notions of sublanguage distinctiveness are explored.
Objective methods for separating hard and soft sciences are suggested based on
measures of sublanguage use, dictionary characteristics, and sublanguage
distinctiveness. Abstracts were automatically classified with a high degree of
accuracy by using a formula that considers the degree of uniqueness of terms in
each sublanguage. This may prove useful for text filtering or information
retrieval systems.
|
cmp-lg/9411002
|
CLARE: A Contextual Reasoning and Cooperative Response Framework for the
Core Language Engine
|
cmp-lg cs.CL
|
This report describes the research, design and implementation work carried
out in building the CLARE system at SRI International, Cambridge, England.
CLARE was designed as a natural language processing system with facilities for
reasoning and understanding in context and for generating cooperative
responses. The project involved both further development of SRI's Core Language
Engine (Alshawi, 1992, MIT Press) natural language processor and the design and
implementation of new components for reasoning and response generation. The
CLARE system has advanced the state of the art in a wide variety of areas, both
through the use of novel techniques developed on the project, and by extending
the coverage or scale of known techniques. The language components are
application-independent and provide interfaces for the development of new types
of application.
|
cmp-lg/9411003
|
Adnominal adjectives, code-switching and lexicalized TAG
|
cmp-lg cs.CL
|
In codeswitching contexts, the language of a syntactic head determines the
distribution of its complements. Mahootian 1993 derives this generalization by
representing heads as the anchors of elementary trees in a lexicalized TAG.
However, not all codeswitching sequences are amenable to a head-complement
analysis. For instance, adnominal adjectives can occupy positions not available
to them in their own language, and the TAG derivation of such sequences must
use unanchored auxiliary trees:
  palabras heavy-duty `heavy-duty words' (Spanish-English; Poplack 1980:584)
  taste lousy sana `very lousy taste' (English-Swahili; Myers-Scotton 1993:29, (10))
Given the null hypothesis that
codeswitching and monolingual sequences are derived in an identical manner,
sequences like those above provide evidence that pure lexicalized TAGs are
inadequate for the description of natural language.
|
cmp-lg/9411004
|
Determining Determiner Sequencing: A Syntactic Analysis for English
|
cmp-lg cs.CL
|
Previous work on English determiners has primarily concentrated on their
semantics or scoping properties rather than their complex ordering behavior.
The little work that has been done on determiner ordering generally splits
determiners into three subcategories. However, this small number of categories
does not capture the finer distinctions necessary to correctly order
determiners. This paper presents a syntactic account of determiner sequencing
based on eight independently identified semantic features. Complex determiners,
such as genitives, partitives, and determiner modifying adverbials, are also
presented. This work has been implemented as part of XTAG, a wide-coverage
grammar for English based in the Feature-Based, Lexicalized Tree Adjoining
Grammar (FB-LTAG) formalism.
|
cmp-lg/9411005
|
Constraining Lexical Selection Across Languages Using TAGs
|
cmp-lg cs.CL
|
Lexical selection in Machine Translation consists of several related
components. Two that have received a lot of attention are lexical mapping from
an underlying concept or lexical item, and choosing the correct
subcategorization frame based on argument structure. Because most MT
applications are small or relatively domain specific, a third component of
lexical selection is generally overlooked - distinguishing between lexical
items that are closely related conceptually. While some MT systems have
proposed using a 'world knowledge' module to decide which word is more
appropriate based on various pragmatic or stylistic constraints, we are
interested in seeing how much we can accomplish using a combination of syntax
and lexical semantics. By using separate ontologies for each language
implemented in FB-LTAGs, we are able to elegantly model the more specific and
language dependent syntactic and semantic distinctions necessary to further
filter the choice of the lexical item.
|
cmp-lg/9411006
|
Status of the XTAG System
|
cmp-lg cs.CL
|
XTAG is an ongoing project to develop a wide-coverage grammar for English,
based on the Feature-based Lexicalized Tree Adjoining Grammar (FB-LTAG)
formalism. The XTAG system integrates a morphological analyzer, an N-best
part-of-speech tagger, an Earley-style parser and an X-window interface, along
with a wide-coverage grammar for English developed using the system. This
system serves as a linguist's workbench for developing FB-LTAG specifications.
This paper presents a description of and recent improvements to the various
components of the XTAG system. It also presents the recent performance of the
wide-coverage grammar on various corpora and compares it against the
performance of other wide-coverage and domain-specific grammars.
|
cmp-lg/9411007
|
The Linguistic Relevance of Quasi-Trees
|
cmp-lg cs.CL
|
We discuss two constructions (long scrambling and ECM verbs) which challenge
most syntactic theories (including traditional TAG approaches) since they seem
to require exceptional mechanisms and postulates. We argue that these
constructions should in fact be analyzed in a similar manner, namely as
involving a verb which selects for a ``defective'' complement. These
complements are defective in that they lack certain Case-assigning abilities
(represented as functional heads). The constructions differ in how many such
abilities are lacking. Following the previous analysis of scrambling of Rambow
(1994), we propose a TAG analysis based on quasi-trees.
|
cmp-lg/9411008
|
Parsing Free Word-Order Languages in Polynomial Time
|
cmp-lg cs.CL
|
We present a parsing algorithm with polynomial time complexity for a large
subset of V-TAG languages. V-TAG, a variant of multi-component TAG, can handle
free-word order phenomena which are beyond the class LCFRS (which includes
regular TAG). Our algorithm is based on a CYK-style parser for TAGs.
|
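As a point of reference for the CYK-style chart parsing mentioned above, here is plain CYK recognition for a context-free grammar in Chomsky normal form (a toy grammar of my own; the paper's algorithm extends this chart style to V-TAG, which this sketch does not attempt):

```python
from itertools import product

# Toy CNF grammar: S -> A B | S B,  A -> 'a',  B -> 'b'
unary = {"a": {"A"}, "b": {"B"}}                  # terminal -> nonterminals
binary = {("A", "B"): {"S"}, ("S", "B"): {"S"}}   # (X, Y) -> nonterminals

def cyk(s, start="S"):
    # chart[i][j] holds the nonterminals deriving s[i:j].
    n = len(s)
    chart = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(s):
        chart[i][i + 1] = set(unary.get(ch, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # try every split point
                for X, Y in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= binary.get((X, Y), set())
    return start in chart[0][n]
```

The cubic loop over (i, j, k) is the source of the polynomial bound that the paper's V-TAG parser generalizes.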
cmp-lg/9411009
|
Bootstrapping A Wide-Coverage CCG from FB-LTAG
|
cmp-lg cs.CL
|
A number of researchers have noted the similarities between LTAGs and CCGs.
Observing this resemblance, we felt that we could make use of the wide-coverage
grammar developed in the XTAG project to build a wide-coverage CCG. To our
knowledge there have been no attempts to construct a large-scale CCG parser
with the lexicon to support it. In this paper, we describe such a system, built
by adapting various XTAG components to CCG. We find that, despite the
similarities between the formalisms, certain parts of the grammatical workload
are distributed differently. In addition, the flexibility of CCG derivations
allows the translated grammar to handle a number of ``non-constituent''
constructions which the XTAG grammar cannot.
|
cmp-lg/9411010
|
The "Whiteboard" Architecture: a way to integrate heterogeneous
components of NLP systems
|
cmp-lg cs.CL
|
We present a new software architecture for NLP systems made of heterogeneous
components, and demonstrate an architectural prototype we have built at ATR in
the context of Speech Translation.
|
cmp-lg/9411011
|
Acquiring Knowledge from Encyclopedic Texts
|
cmp-lg cs.CL
|
A computational model for the acquisition of knowledge from encyclopedic
texts is described. The model has been implemented in a program, called SNOWY,
that reads unedited texts from _The World Book Encyclopedia_, and acquires
new concepts and conceptual relations about topics dealing with the dietary
habits of animals, their classifications and habitats. The program is also able
to answer an ample set of questions about the knowledge that it has acquired.
This paper describes the essential components of this model, namely semantic
interpretation, inferences and representation, and ends with an evaluation of
the performance of the program, a sample of the questions that it is able to
answer, and its relation to other programs of similar nature.
|
cmp-lg/9411012
|
From Regular to Context Free to Mildly Context Sensitive Tree Rewriting
Systems: The Path of Child Language Acquisition
|
cmp-lg cs.CL
|
Current syntactic theory limits the range of grammatical variation so
severely that the logical problem of grammar learning is trivial. Yet, children
exhibit characteristic stages in syntactic development at least through their
sixth year. Rather than positing maturational delays, I suggest that
acquisition difficulties are the result of limitations in manipulating
grammatical representations. I argue that the genesis of complex sentences
reflects increasing generative capacity in the systems generating structural
descriptions: conjoined clauses demand only a regular tree rewriting system;
sentential embedding uses a context-free tree substitution grammar;
modification requires TAG, a mildly context-sensitive system.
|
cmp-lg/9411013
|
Phoneme-level speech and natural language integration for agglutinative
languages
|
cmp-lg cs.CL
|
A new tightly coupled speech and natural language integration model is
presented for a TDNN-based large vocabulary continuous speech recognition
system. Unlike the popular n-best techniques developed for integrating mainly
HMM-based speech and natural language systems in word level, which is obviously
inadequate for the morphologically complex agglutinative languages, our model
constructs a spoken language system based on the phoneme-level integration. The
TDNN-CYK spoken language architecture is designed and implemented using the
TDNN-based diphone recognition module integrated with the table-driven
phonological/morphological co-analysis. Our integration model provides a
seamless integration of speech and natural language for connectionist speech
recognition systems especially for morphologically complex languages such as
Korean. Our experimental results show that the speaker-dependent continuous
Eojeol (word) recognition can be integrated with the morphological analysis
with over 80% morphological analysis success rate directly from the speech
input for the middle-level vocabularies.
|
cmp-lg/9411014
|
Automatically Identifying Morphological Relations in Machine-Readable
Dictionaries
|
cmp-lg cs.CL
|
We describe an automated method for identifying classes of morphologically
related words in an on-line dictionary, and for linking individual senses in
the derived form to one or more senses in the base form by means of
morphological relation attributes. We also present an algorithm for computing a
score reflecting the system's certainty in these derivational links; this
computation relies on the content of semantic relations associated with each
sense, which are extracted automatically by parsing each sense definition and
subjecting the parse structure to automated semantic analysis. By processing
the entire set of headwords in the dictionary in this fashion we create a large
set of directed derivational graphs, which can then be accessed by other
components in our broad-coverage NLP system. Spurious or unlikely derivations
are not discarded, but are rather added to the dictionary and assigned a
negative score; this allows the system to handle non-standard uses of these
forms.
|
cmp-lg/9411015
|
Parsing Using Linearly Ordered Phonological Rules
|
cmp-lg cs.CL
|
A generate and test algorithm is described which parses a surface form into
one or more lexical entries using linearly ordered phonological rules. This
algorithm avoids the exponential expansion of search space which a naive
parsing algorithm would face by encoding into the form being parsed the
ambiguities which arise during parsing. The algorithm has been implemented and
tested on real language data, and its speed compares favorably with that of a
KIMMO-type parser.
|
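The idea of un-applying ordered rules while carrying the ambiguity along can be sketched as follows (one invented rule and a toy lexicon; the paper's encoding of ambiguities into the form being parsed is more compact than this candidate-set version):

```python
# Final-devoicing-style rule, simplified to apply anywhere: d -> t.
# In generation the rules apply in order; in parsing we undo them in
# reverse order, since each surface 't' may or may not come from 'd'.
rules = [("d", "t")]
lexicon = {"bad", "nod"}

def unapply(a, b, form):
    # Every occurrence of b may or may not have been rewritten from a;
    # return the set of possible pre-rule forms.
    i = form.find(b)
    if i == -1:
        return {form}
    rest = unapply(a, b, form[i + 1:])
    return {form[:i] + c + r for c in (a, b) for r in rest}

def parse(surface):
    # Generate candidate underlying forms, then test them against the
    # lexicon.
    candidates = {surface}
    for a, b in reversed(rules):
        candidates = set().union(*(unapply(a, b, f) for f in candidates))
    return candidates & lexicon
```

Keeping the whole candidate set explicit is exactly the exponential blow-up the paper's encoding trick is designed to avoid; the sketch only shows the generate-and-test shape of the problem.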
cmp-lg/9411016
|
Extending DRT with a Focusing Mechanism for Pronominal Anaphora and
Ellipsis Resolution
|
cmp-lg cs.CL
|
Cormack (1992) proposed a framework for pronominal anaphora resolution. Her
proposal integrates focusing theory (Sidner et al.) and DRT (Kamp and Reyle).
We analyzed this methodology and adjusted it to the processing of Portuguese
texts. The scope of the framework was widened to cover sentences containing
restrictive relative clauses and subject ellipsis. Tests were conceived and
applied to probe the adequacy of proposed modifications when dealing with
processing of current texts.
|
cmp-lg/9411017
|
Comlex Syntax: Building a Computational Lexicon
|
cmp-lg cs.CL
|
We describe the design of Comlex Syntax, a computational lexicon providing
detailed syntactic information for approximately 38,000 English headwords. We
consider the types of errors which arise in creating such a lexicon, and how
such errors can be measured and controlled.
|
cmp-lg/9411018
|
Interlanguage Signs and Lexical Transfer Errors
|
cmp-lg cs.CL
|
A theory of interlanguage (IL) lexicons is outlined, with emphasis on IL
lexical entries, based on the HPSG notion of lexical sign. This theory accounts
for idiosyncratic or lexical transfer of syntactic subcategorisation and idioms
from the first language to the IL. It also accounts for developmental stages in
IL lexical grammar, and grammatical variation in the use of the same lexical
item. The theory offers a tool for robust parsing of lexical transfer errors
and diagnosis of such errors.
|
cmp-lg/9411019
|
Focus on ``only'' and ``not''
|
cmp-lg cs.CL
|
Krifka [1993] has suggested that focus should be seen as a means of providing
material for a range of semantic and pragmatic functions to work on, rather
than as a specific semantic or pragmatic function itself. The current paper
describes an implementation of this general idea, and applies it to the
interpretation of _only_ and _not_.
|
cmp-lg/9411020
|
Extraction in Dutch with Lexical Rules
|
cmp-lg cs.CL
|
Unbounded dependencies are often modelled by ``traces'' (and ``gap
threading'') in unification-based grammars. Pollard and Sag, however, suggest
an analysis of extraction based on lexical rules, which excludes the notion of
traces (P&S 1994, Chapter 9). In parsing, it suggests a trade of indeterminism
for lexical ambiguity. This paper provides a short introduction to this
approach to extraction with lexical rules, and illustrates the linguistic power
of the approach by applying it to particularly idiosyncratic Dutch extraction
data.
|
cmp-lg/9411021
|
Free-ordered CUG on Chemical Abstract Machine
|
cmp-lg cs.CL
|
We propose a paradigm for concurrent natural language generation. In order to
represent grammar rules distributively, we adopt categorial unification grammar
(CUG) where each category owns its functional type. We augment typed lambda
calculus with several new combinators, to make the order of lambda-conversions
free for partial / local processing. The concurrent calculus is modeled with
Chemical Abstract Machine. We show an example of a Japanese causative auxiliary
verb that requires a drastic rearrangement of case domination.
|
cmp-lg/9411022
|
Adaptive Sentence Boundary Disambiguation
|
cmp-lg cs.CL
|
Labeling of sentence boundaries is a necessary prerequisite for many natural
language processing tasks, including part-of-speech tagging and sentence
alignment. End-of-sentence punctuation marks are ambiguous; to disambiguate
them most systems use brittle, special-purpose regular expression grammars and
exception rules. As an alternative, we have developed an efficient, trainable
algorithm that uses a lexicon with part-of-speech probabilities and a
feed-forward neural network. After training for less than one minute, the
method correctly labels over 98.5% of sentence boundaries in a corpus of over
27,000 sentence-boundary marks. We show the method to be efficient and easily
adaptable to different text genres, including single-case texts.
|
cmp-lg/9411023
|
Abstract Generation based on Rhetorical Structure Extraction
|
cmp-lg cs.CL
|
We have developed an automatic abstract generation system for Japanese
expository writings based on rhetorical structure extraction. The system first
extracts the rhetorical structure, the compound of the rhetorical relations
between sentences, and then cuts out less important parts in the extracted
structure to generate an abstract of the desired length.
Evaluation of the generated abstract showed that it contains at most 74%
of the most important sentences of the original text. The system is now
utilized as a text browser for a prototypical interactive document retrieval
system.
|
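The cutting step can be caricatured without the rhetorical structure itself: given importance scores for text units, keep the top-scoring ones in their original order (a flat stand-in for the paper's tree pruning; the sentences and scores are invented):

```python
def make_abstract(sentences, scores, max_sentences):
    # Rank units by importance, keep the best max_sentences, and restore
    # the original document order before emitting the abstract.
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    keep = sorted(ranked[:max_sentences])
    return [sentences[i] for i in keep]

sentences = ["Background.", "Main claim.", "Aside.", "Evidence."]
scores = [0.4, 0.9, 0.1, 0.7]
summary = make_abstract(sentences, scores, 2)
```

In the actual system the scores come from the extracted rhetorical relations, so pruning respects the discourse tree rather than treating sentences as independent.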
cmp-lg/9411024
|
Reverse Queries in DATR
|
cmp-lg cs.CL
|
DATR is a declarative representation language for lexical information and as
such, in principle, neutral with respect to particular processing strategies.
Previous DATR compiler/interpreter systems support only one access strategy
that closely resembles the set of inference rules of the procedural semantics
of DATR (Evans & Gazdar 1989a). In this paper we present an alternative access
strategy (reverse query strategy) for a non-trivial subset of DATR.
|
cmp-lg/9411025
|
Multi-Dimensional Inheritance
|
cmp-lg cs.CL
|
In this paper, we present an alternative approach to multiple inheritance for
typed feature structures. In our approach, a feature structure can be
associated with several types coming from different hierarchies (dimensions).
In case of multiple inheritance, a type has supertypes from different
hierarchies. We contrast this approach with approaches based on a single type
hierarchy where a feature structure has only one unique most general type, and
multiple inheritance involves computation of greatest lower bounds in the
hierarchy. The proposed approach supports current linguistic analyses in
constraint-based formalisms like HPSG, inheritance in the lexicon, and
knowledge representation for NLP systems. Finally, we show that
multi-dimensional inheritance hierarchies can be compiled into a Prolog term
representation, which allows the conjunction of two types to be computed
efficiently by Prolog term unification.
|
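The dimension-wise view of conjunction might be sketched as follows (the hierarchies and type names are invented; the paper's actual implementation compiles the hierarchies into Prolog terms and computes the conjunction by term unification):

```python
# Each dimension is a separate hierarchy, encoded as child -> parent.
parent = {
    "intrans-verb": "verb", "trans-verb": "verb", "verb": "head",  # dim 1
    "base": "inflection", "finite": "inflection",                  # dim 2
}

def ancestors(t):
    out = {t}
    while t in parent:
        t = parent[t]
        out.add(t)
    return out

def meet(a, b):
    # Greatest lower bound within one dimension: the more specific type
    # if the two are comparable, otherwise failure (None).
    if a in ancestors(b):
        return b
    if b in ancestors(a):
        return a
    return None

def conjoin(t1, t2):
    # A multi-dimensional type is a tuple with one type per dimension;
    # conjunction is computed independently in each dimension.
    result = tuple(meet(a, b) for a, b in zip(t1, t2))
    return None if None in result else result
```

Because each dimension is handled separately, no global greatest-lower-bound computation over a single merged hierarchy is needed, which is the contrast the abstract draws with single-hierarchy approaches.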
cmp-lg/9411026
|
Manipulating Human-oriented Dictionaries with very simple tools
|
cmp-lg cs.CL
|
This paper presents a methodology for building and manipulating
human-oriented dictionaries. This methodology has been applied in the
construction of a French-English-Malay dictionary which has been obtained by
"crossing" semi-automatically two bilingual dictionaries. We use only Microsoft
Word, a specialized language for writing transcriptors and a small but powerful
dictionary tool.
|
cmp-lg/9411027
|
Classifier Assignment by Corpus-based Approach
|
cmp-lg cs.CL
|
This paper presents an algorithm for selecting an appropriate classifier word
for a noun. In the Thai language, there is frequent fluctuation in the choice
of classifier for a given concrete noun, both across the whole speech community
and for individual speakers. Basically, there is no exact rule for classifier
selection. The best we can do in a rule-based approach is to give a default
rule that picks a corresponding classifier for each noun. Registration of a
classifier for each noun is limited to the unit-classifier type, because the
other types are open due to the meaning they represent. We propose a
corpus-based method (Biber, 1993; Nagao, 1993;
Smadja, 1993) which generates Noun Classifier Associations (NCA) to overcome
the problems in classifier assignment and semantic construction of noun phrase.
The NCA is created statistically from a large corpus and recomposed under
concept hierarchy constraints and frequency of occurrences.
|
cmp-lg/9411028
|
The Speech-Language Interface in the Spoken Language Translator
|
cmp-lg cs.CL
|
The Spoken Language Translator is a prototype for practically useful systems
capable of translating continuous spoken language within restricted domains.
The prototype system translates air travel (ATIS) queries from spoken English
to spoken Swedish and to French. It is constructed, with as few modifications
as possible, from existing pieces of speech and language processing software.
The speech recognizer and language understander are connected by a fairly
conventional pipelined N-best interface. This paper focuses on the ways in
which the language processor makes intelligent use of the sentence hypotheses
delivered by the recognizer. These ways include (1) producing modified
hypotheses to reflect the possible presence of repairs in the uttered word
sequence; (2) fast parsing with a version of the grammar automatically
specialized to the more frequent constructions in the training corpus; and (3)
allowing syntactic and semantic factors to interact with acoustic ones in the
choice of a meaning structure for translation, so that the acoustically
preferred hypothesis is not always selected even if it is within linguistic
coverage.
|
cmp-lg/9411029
|
An Efficient Probabilistic Context-Free Parsing Algorithm that Computes
Prefix Probabilities
|
cmp-lg cs.CL
|
We describe an extension of Earley's parser for stochastic context-free
grammars that computes the following quantities given a stochastic context-free
grammar and an input string: a) probabilities of successive prefixes being
generated by the grammar; b) probabilities of substrings being generated by the
nonterminals, including the entire string being generated by the grammar; c)
most likely (Viterbi) parse of the string; d) posterior expected number of
applications of each grammar production, as required for reestimating rule
probabilities. (a) and (b) are computed incrementally in a single left-to-right
pass over the input. Our algorithm compares favorably to standard bottom-up
parsing methods for SCFGs in that it works efficiently on sparse grammars by
making use of Earley's top-down control structure. It can process any
context-free rule format without conversion to some normal form, and combines
computations for (a) through (d) in a single algorithm. Finally, the algorithm
has simple extensions for processing partially bracketed inputs, and for
finding partial parses and their likelihoods on ungrammatical inputs.
|
cmp-lg/9411030
|
Complexity of Scrambling: A New Twist to the Competence - Performance
Distinction
|
cmp-lg cs.CL
|
In this paper we discuss the following issue: How do we decide whether a
certain property of language is a competence property or a performance
property? Our claim is that the answer to this question is not given a priori.
The answer depends on the formal devices (formal grammars and machines)
available to us for describing language. We discuss this issue in the context
of the complexity of processing of center embedding (of relative clauses in
English) and scrambling (in German, for example) from arbitrary depths of
embedding.
|
cmp-lg/9411031
|
Automatic Generation of Technical Documentation
|
cmp-lg cs.CL
|
Natural-language generation (NLG) techniques can be used to automatically
produce technical documentation from a domain knowledge base and linguistic and
contextual models. We discuss this application of NLG technology from both a
technical and a usefulness (costs and benefits) perspective. This discussion is
based largely on our experiences with the IDAS documentation-generation
project, and the reactions various interested people from industry have had to
IDAS. We hope that this summary of our experiences with IDAS and the lessons we
have learned from it will be beneficial for other researchers who wish to build
technical-documentation generation systems.
|
cmp-lg/9411032
|
Has a Consensus NL Generation Architecture Appeared, and is it
Psycholinguistically Plausible?
|
cmp-lg cs.CL
|
I survey some recent applications-oriented NL generation systems, and claim
that despite very different theoretical backgrounds, these systems have a
remarkably similar architecture in terms of the modules they divide the
generation process into, the computations these modules perform, and the way
the modules interact with each other. I also compare this `consensus
architecture' among applied NLG systems with psycholinguistic knowledge about
how humans speak, and argue that at least some aspects of the consensus
architecture seem to be in agreement with what is known about human language
production, despite the fact that psycholinguistic plausibility was not in
general a goal of the developers of the surveyed systems.
|
cmp-lg/9412001
|
Dependency Grammar and the Parsing of Chinese Sentences
|
cmp-lg cs.CL
|
Dependency Grammar has been used by linguists as the basis of the syntactic
components of their grammar formalisms. It has also been used in natural
language parsing. In China, attempts have been made to use this grammar
formalism to parse Chinese sentences using corpus-based techniques. This paper
reviews the properties of Dependency Grammar as embodied in four axioms for the
well-formedness conditions for dependency structures. It is shown that allowing
multiple governors as done by some followers of this formalism is unnecessary.
The practice of augmenting Dependency Grammar with functional labels is also
discussed in the light of building functional structures when the sentence is
parsed. This will also facilitate semantic interpretation.
|
cmp-lg/9412002
|
N-Gram Cluster Identification During Empirical Knowledge Representation
Generation
|
cmp-lg cs.CL
|
This paper presents an overview of current research concerning knowledge
extraction from technical texts. In particular, the use of empirical techniques
during the identification and generation of a semantic representation is
considered. A key step is the discovery of useful n-grams and correlations
between clusters of these n-grams.
|
cmp-lg/9412003
|
An Extended Clustering Algorithm for Statistical Language Models
|
cmp-lg cs.CL
|
Statistical language models frequently suffer from a lack of training data.
This problem can be alleviated by clustering, because it reduces the number of
free parameters that need to be trained. However, clustered models have the
following drawback: if there is ``enough'' data to train an unclustered model,
then the clustered variant may perform worse. On currently used language
modeling corpora, e.g. the Wall Street Journal corpus, how do the performances
of a clustered and an unclustered model compare? While trying to address this
question, we develop the following two ideas. First, to get a clustering
algorithm with potentially high performance, an existing algorithm is extended
to deal with higher order N-grams. Second, to make it possible to cluster large
amounts of training data more efficiently, a heuristic to speed up the
algorithm is presented. The resulting clustering algorithm can be used to
cluster trigrams on the Wall Street Journal corpus and the language models it
produces can compete with existing back-off models. Especially when there is
only little training data available, the clustered models clearly outperform
the back-off models.
|
cmp-lg/9412004
|
Knowledge Representation for Lexical Semantics: Is Standard First Order
Logic Enough?
|
cmp-lg cs.CL
|
Natural language understanding applications such as interactive planning and
face-to-face translation require extensive inferencing. Many of these
inferences are based on the meaning of particular open class words. Providing a
representation that can support such lexically-based inferences is a primary
concern of lexical semantics. The representation language of first order logic
has well-understood semantics and a multitude of inferencing systems have been
implemented for it. Thus it is a prime candidate to serve as a lexical
semantics representation. However, we argue that FOL, although a good starting
point, needs to be extended before it can efficiently and concisely support all
the lexically-based inferences needed.
|
cmp-lg/9412005
|
Segmenting speech without a lexicon: The roles of phonotactics and
speech source
|
cmp-lg cs.CL
|
Infants face the difficult problem of segmenting continuous speech into words
without the benefit of a fully developed lexicon. Several sources of
information in speech might help infants solve this problem, including prosody,
semantic correlations and phonotactics. Research to date has focused on
determining to which of these sources infants might be sensitive, but little
work has been done to determine the potential usefulness of each source. The
computer simulations reported here are a first attempt to measure the
usefulness of distributional and phonotactic information in segmenting phoneme
sequences. The algorithms hypothesize different segmentations of the input into
words and select the best hypothesis according to the Minimum Description
Length principle. Our results indicate that while there is some useful
information in both phoneme distributions and phonotactic rules, the
combination of both sources is most useful.
|
cmp-lg/9412006
|
Robust stochastic parsing using the inside-outside algorithm
|
cmp-lg cs.CL
|
The paper describes a parser of sequences of (English) part-of-speech labels
which utilises a probabilistic grammar trained using the inside-outside
algorithm. The initial (meta)grammar is defined by a linguist and further rules
compatible with metagrammatical constraints are automatically generated. During
training, rules with very low probability are rejected yielding a wide-coverage
parser capable of ranking alternative analyses. A series of corpus-based
experiments describe the parser's performance.
|
cmp-lg/9412007
|
Coupling Phonology and Phonetics in a Constraint-Based Gestural Model
|
cmp-lg cs.CL
|
An implemented approach which couples a constraint-based phonology component
with an articulatory speech synthesizer is proposed. Articulatory gestures
ensure a tight connection between both components, as they comprise both
physical-phonetic and phonological aspects. The phonological modelling of e.g.
syllabification and phonological processes such as German final devoicing is
expressed in the constraint logic programming language CUF. Extending CUF by
arithmetic constraints allows the simultaneous description of both phonology
and phonetics. Thus declarative lexicalist theories of grammar such as HPSG may
be enriched up to the level of detailed phonetic realisation. Initial acoustic
demonstrations show that our approach is in principle capable of synthesizing
full utterances in a linguistically motivated fashion.
|
cmp-lg/9412008
|
Analysis of Japanese Compound Nouns using Collocational Information
|
cmp-lg cs.CL
|
Analyzing compound nouns is one of the crucial issues for natural language
processing systems, in particular for those systems that aim at a wide coverage
of domains. In this paper, we propose a method to analyze structures of
Japanese compound nouns by using both word collocations statistics and a
thesaurus. An experiment is conducted with 160,000 word collocations to analyze
compound nouns with an average length of 4.9 characters. The accuracy of
this method is about 80%.
|
cmp-lg/9501001
|
Using default inheritance to describe LTAG
|
cmp-lg cs.CL
|
We present the results of an investigation into how the set of elementary
trees of a Lexicalized Tree Adjoining Grammar can be represented in the lexical
knowledge representation language DATR (Evans & Gazdar 1989a,b). The LTAG under
consideration is based on the one described in Abeille et al. (1990). Our
approach is similar to that of Vijay-Shanker & Schabes (1992) in that we
formulate an inheritance hierarchy that efficiently encodes the elementary
trees. However, rather than creating a new representation formalism for this
task, we employ techniques of established utility in other lexically-oriented
frameworks. In particular, we show how DATR's default mechanism can be used to
eliminate the need for a non-immediate dominance relation in the descriptions
of the surface LTAG entries. This allows us to embed the tree structures in the
feature theory in a manner reminiscent of HPSG subcategorisation frames, and
hence express lexical rules as relations over feature structures.
|
cmp-lg/9501002
|
NL Understanding with a Grammar of Constructions
|
cmp-lg cs.CL
|
We present an approach to natural language understanding based on a
computable grammar of constructions. A "construction" consists of a set of
features of form and a description of meaning in a context. A grammar is a set
of constructions. This kind of grammar is the key element of Mincal, an
implemented natural language, speech-enabled interface to an on-line calendar
system. The system consists of a NL grammar, a parser, an on-line calendar, a
domain knowledge base (about dates, times and meetings), an application
knowledge base (about the calendar), a speech recognizer, a speech generator,
and the interfaces between those modules. We claim that this architecture
should work in general for spoken interfaces in small domains. In this paper we
present two novel aspects of the architecture: (a) the use of constructions,
integrating descriptions of form, meaning and context into one whole; and (b)
the separation of domain knowledge from application knowledge. We describe the
data structures for encoding constructions, the structure of the knowledge
bases, and the interactions of the key modules of the system.
|
cmp-lg/9501003
|
An HPSG Parser Based on Description Logics
|
cmp-lg cs.CL
|
In this paper I present a parser based on Description Logics (DL) for a
German HPSG-style fragment. The specified parser relies mainly on the
inferential capabilities of the underlying DL system. Given a preferential
default extension for DL, disambiguation is achieved by choosing the parse
containing a qualitatively minimal number of exceptions.
|
cmp-lg/9501004
|
Lexical Knowledge Representation in an Intelligent Dictionary Help
System
|
cmp-lg cs.CL
|
The frame-based knowledge representation model adopted in IDHS (Intelligent
Dictionary Help System) is described in this paper. It is used to represent the
lexical knowledge acquired automatically from a conventional dictionary.
Moreover, the enrichment processes that have been performed on the Dictionary
Knowledge Base and the dynamic exploitation of this knowledge - both based on
the exploitation of the properties of lexical semantic relations - are also
described.
|
cmp-lg/9501005
|
A Tool for Collecting Domain Dependent Sortal Constraints From Corpora
|
cmp-lg cs.CL
|
In this paper, we describe a tool designed to generate semi-automatically the
sortal constraints specific to a domain to be used in a natural language (NL)
understanding system. This tool is evaluated using the SRI Gemini NL
understanding system in the ATIS domain.
|
cmp-lg/9502001
|
Interlingual Lexical Organisation for Multilingual Lexical Databases in
NADIA
|
cmp-lg cs.CL
|
We propose a lexical organisation for multilingual lexical databases (MLDB).
This organisation is based on acceptions (word-senses). We detail this lexical
organisation and show a mock-up built to experiment with it. We also present
our current work in defining and prototyping a specialised system for the
management of acception-based MLDB. Keywords: multilingual lexical database,
acception, linguistic structure.
|
cmp-lg/9502002
|
Learning Unification-Based Natural Language Grammars
|
cmp-lg cs.CL
|
When parsing unrestricted language, wide-coverage grammars often
undergenerate. Undergeneration can be tackled either by sentence correction, or
by grammar correction. This thesis concentrates upon automatic grammar
correction (or machine learning of grammar) as a solution to the problem of
undergeneration. Broadly speaking, grammar correction approaches can be
classified as being either {\it data-driven}, or {\it model-based}. Data-driven
learners use data-intensive methods to acquire grammar. They typically use
grammar formalisms unsuited to the needs of practical text processing and
cannot guarantee that the resulting grammar is adequate for subsequent semantic
interpretation. That is, data-driven learners acquire grammars that generate
strings that humans would judge to be grammatically ill-formed (they {\it
overgenerate}) and fail to assign linguistically plausible parses. Model-based
learners are knowledge-intensive and are reliant for success upon the
completeness of a {\it model of grammaticality}. But in practice, the model
will be incomplete. Given that in this thesis we deal with undergeneration by
learning, we hypothesise that the combined use of data-driven and model-based
learning would allow data-driven learning to compensate for model-based
learning's incompleteness, whilst model-based learning would compensate for
data-driven learning's unsoundness. We describe a system that we have used to
test the hypothesis empirically. The system combines data-driven and
model-based learning to acquire unification-based grammars that are more
suitable for practical text parsing. Using the Spoken English Corpus as data,
and by quantitatively measuring undergeneration, overgeneration and parse
plausibility, we show that this hypothesis is correct.
|
cmp-lg/9502003
|
ProFIT: Prolog with Features, Inheritance and Templates
|
cmp-lg cs.CL
|
ProFIT is an extension of Standard Prolog with Features, Inheritance and
Templates. ProFIT allows the programmer or grammar developer to declare an
inheritance hierarchy, features and templates. Sorted feature terms can be used
in ProFIT programs together with Prolog terms to provide a clearer description
language for linguistic structures. ProFIT compiles all sorted feature terms
into a Prolog term representation, so that the built-in Prolog term unification
can be used for the unification of sorted feature structures, and no special
unification algorithm is needed. ProFIT programs are compiled into Prolog
programs, so that no meta-interpreter is needed for their execution. ProFIT
thus provides a direct step from grammars developed with sorted feature terms
to Prolog programs usable for practical NLP systems.
|
cmp-lg/9502004
|
Bottom-Up Earley Deduction
|
cmp-lg cs.CL
|
We propose a bottom-up variant of Earley deduction. Bottom-up deduction is
preferable to top-down deduction because it allows incremental processing (even
for head-driven grammars), it is data-driven, no subsumption check is needed,
and preference values attached to lexical items can be used to guide best-first
search. We discuss the scanning step for bottom-up Earley deduction and
indexing schemes that help avoid useless deduction steps.
|
cmp-lg/9502005
|
Off-line Optimization for Earley-style HPSG Processing
|
cmp-lg cs.CL
|
A novel approach to HPSG based natural language processing is described that
uses an off-line compiler to automatically prime a declarative grammar for
generation or parsing, and inputs the primed grammar to an advanced
Earley-style processor. This way we provide an elegant solution to the problems
with empty heads and efficient bidirectional processing which is illustrated
for the special case of HPSG generation. Extensive testing with a large HPSG
grammar revealed some important constraints on the form of the grammar.
|
cmp-lg/9502006
|
Rapid Development of Morphological Descriptions for Full Language
Processing Systems
|
cmp-lg cs.CL
|
I describe a compiler and development environment for feature-augmented
two-level morphology rules integrated into a full NLP system. The compiler is
optimized for a class of languages including many or most European ones, and
for rapid development and debugging of descriptions of new languages. The key
design decision is to compose morphophonological and morphosyntactic
information, but not the lexicon, when compiling the description. This results
in typical compilation times of about a minute, and has allowed a reasonably
full, feature-based description of French inflectional morphology to be
developed in about a month by a linguist new to the system.
|
cmp-lg/9502007
|
Utilization of a Lexicon for Spelling Correction in Modern Greek
|
cmp-lg cs.CL
|
In this paper we present an interactive spelling correction system for Modern
Greek. The entire system is based on a morphological lexicon. Emphasis is given
to the development of the lexicon, especially as far as storage economy, speed
efficiency and dictionary coverage are concerned. Extensive research was
conducted in both the computer engineering and linguistic fields, in order
to describe inflectional morphology as economically as possible.
|
cmp-lg/9502008
|
A Robust and Efficient Three-Layered Dialogue Component for a
Speech-to-Speech Translation System
|
cmp-lg cs.CL
|
We present the dialogue component of the speech-to-speech translation system
VERBMOBIL. In contrast to conventional dialogue systems, it mediates the
dialogue while processing at most 50% of the dialogue in depth. Special
requirements like robustness and efficiency lead to a 3-layered hybrid
architecture for the dialogue module, using statistics, an automaton and a
planner. A dialogue memory is constructed incrementally.
|
cmp-lg/9502009
|
On Learning More Appropriate Selectional Restrictions
|
cmp-lg cs.CL
|
We present some variations affecting the association measure and thresholding
on a technique for learning Selectional Restrictions from on-line corpora. It
uses a wide-coverage noun taxonomy and a statistical measure to generalize the
appropriate semantic classes. Evaluation measures for the Selectional
Restrictions learning task are discussed. Finally, an experimental evaluation
of these variations is reported.
|
cmp-lg/9502010
|
NPtool, a detector of English noun phrases
|
cmp-lg cs.CL
|
NPtool is a fast and accurate system for extracting noun phrases from English
texts for the purposes of e.g. information retrieval, translation unit
discovery, and corpus studies. After a general introduction, the system
architecture is presented in outline. Then follows an examination of a recently
written Constraint Syntax. An evaluation report concludes the paper.
|
cmp-lg/9502011
|
Specifying a shallow grammatical representation for parsing purposes
|
cmp-lg cs.CL
|
Is it possible to specify a grammatical representation (descriptors and their
application guidelines) to such a degree that it can be consistently applied by
different grammarians e.g. for producing a benchmark corpus for parser
evaluation? Arguments for and against have been given, but very little
empirical evidence. In this article we report on a double-blind experiment with
a surface-oriented morphosyntactic grammatical representation used in a
large-scale English parser. We argue that a consistently applicable
representation for morphology and also shallow syntax can be specified. A
grammatical representation with a near-100% coverage of running text can be
specified with a reasonable effort, especially if the representation is based
on structural distinctions (i.e. it is structurally resolvable).
|
cmp-lg/9502012
|
A syntax-based part-of-speech analyser
|
cmp-lg cs.CL
|
There are two main methodologies for constructing the knowledge base of a
natural language analyser: the linguistic and the data-driven. Recent
state-of-the-art part-of-speech taggers are based on the data-driven approach.
Because of the known feasibility of the linguistic rule-based approach at
related levels of description, the success of the data-driven approach in
part-of-speech analysis may appear surprising. In this paper, a case is made
for the syntactic nature of part-of-speech tagging. A new tagger of English
that uses only linguistic distributional rules is outlined and empirically
evaluated. Tested against a benchmark corpus of 38,000 words of previously
unseen text, this syntax-based system reaches an accuracy of above 99%.
Compared to the 95-97% accuracy of its best competitors, this result suggests
the feasibility of the linguistic approach also in part-of-speech analysis.
|
cmp-lg/9502013
|
Ambiguity resolution in a reductionistic parser
|
cmp-lg cs.CL
|
We are concerned with dependency-oriented morphosyntactic parsing of running
text. While a parsing grammar should avoid introducing structurally
unresolvable distinctions in order to optimise on the accuracy of the parser,
it also is beneficial for the grammarian to have as expressive a structural
representation available as possible. In a reductionistic parsing system this
policy may result in considerable ambiguity in the input; however, even massive
ambiguity can be tackled efficiently with an accurate parsing description and
effective parsing technology.
|
cmp-lg/9502014
|
Ellipsis and Quantification: a substitutional approach
|
cmp-lg cs.CL
|
The paper describes a substitutional approach to ellipsis resolution giving
comparable results to Dalrymple, Shieber and Pereira (1991), but without the
need for order-sensitive interleaving of quantifier scoping and ellipsis
resolution. It is argued that the order-independence results from viewing
semantic interpretation as building a description of a semantic composition,
instead of the more common view of interpretation as actually performing the
composition.
|
cmp-lg/9502015
|
The Semantics of Resource Sharing in Lexical-Functional Grammar
|
cmp-lg cs.CL
|
We argue that the resource sharing that is commonly manifest in semantic
accounts of coordination is instead appropriately handled in terms of
structure-sharing in LFG f-structures. We provide an extension to the previous
account of LFG semantics (Dalrymple et al., 1993b) according to which
dependencies between f-structures are viewed as resources; as a result a
one-to-one correspondence between uses of f-structures and meanings is
maintained. The resulting system is sufficiently restricted in cases where
other approaches overgenerate; the very property of resource-sensitivity for
which resource sharing appears to be problematic actually provides explanatory
advantages over systems that more freely replicate resources during derivation.
|
cmp-lg/9502016
|
Higher-order Linear Logic Programming of Categorial Deduction
|
cmp-lg cs.CL
|
We show how categorial deduction can be implemented in higher-order (linear)
logic programming, thereby realising parsing as deduction for the associative
and non-associative Lambek calculi. This provides a method of solution to the
parsing problem of Lambek categorial grammar applicable to a variety of its
extensions.
|
cmp-lg/9502017
|
Deterministic Consistency Checking of LP Constraints
|
cmp-lg cs.CL
|
We provide a constraint based computational model of linear precedence as
employed in the HPSG grammar formalism. An extended feature logic which adds a
wide range of constraints involving precedence is described. A sound, complete
and terminating deterministic constraint solving procedure is given.
A deterministic computational model is achieved by weakening the logic so
that it remains sufficient for linguistic applications involving word order.
|
cmp-lg/9502018
|
Algorithms for Analysing the Temporal Structure of Discourse
|
cmp-lg cs.CL
|
We describe a method for analysing the temporal structure of a discourse
which takes into account the effects of tense, aspect, temporal adverbials and
rhetorical structure and which minimises unnecessary ambiguity in the temporal
structure. It is part of a discourse grammar implemented in Carpenter's ALE
formalism. The method for building up the temporal structure of the discourse
combines constraints and preferences: we use constraints to reduce the number
of possible structures, exploiting the HPSG type hierarchy and unification for
this purpose; and we apply preferences to choose between the remaining options
using a temporal centering mechanism. We end by recommending that an
underspecified representation of the structure using these techniques be used
to avoid generating the temporal/rhetorical structure until higher-level
information can be used to disambiguate.
|
cmp-lg/9502019
|
Integrating "Free" Word Order Syntax and Information Structure
|
cmp-lg cs.CL
|
This paper describes a combinatory categorial formalism called Multiset-CCG
that can capture the syntax and interpretation of ``free'' word order in
languages such as Turkish. The formalism compositionally derives the
predicate-argument structure and the information structure (e.g. topic, focus)
of a sentence in parallel, and uniformly handles word order variation among the
arguments and adjuncts within a clause, as well as in complex clauses and
across clause boundaries.
|
cmp-lg/9502020
|
Formalization and Parsing of Typed Unification-Based ID/LP Grammars
|
cmp-lg cs.CL
|
This paper defines unification based ID/LP grammars based on typed feature
structures as nonterminals and proposes a variant of Earley's algorithm to
decide whether a given input sentence is a member of the language generated by
a particular typed unification ID/LP grammar. A solution to the problem of the
nonlocal flow of information in unification ID/LP grammars as discussed in
Seiffert (1991) is incorporated into the algorithm. At the same time, it tries
to connect this technical work with linguistics by presenting an example of the
problem resulting from HPSG approaches to linguistics (Hinrichs and Nakasawa
1994, Richter and Sailer 1995) and with computational linguistics by drawing
connections from this approach to systems implementing HPSG, especially the
TROLL system, Gerdemann et al. (forthcoming).
|
cmp-lg/9502021
|
A Tractable Extension of Linear Indexed Grammars
|
cmp-lg cs.CL
|
It has been shown that Linear Indexed Grammars can be processed in polynomial
time by exploiting constraints which make possible the extensive use of
structure-sharing. This paper describes a formalism that is more powerful than
Linear Indexed Grammar, but which can also be processed in polynomial time
using similar techniques. The formalism, which we refer to as Partially Linear
PATR, manipulates feature structures rather than stacks.
|
cmp-lg/9502022
|
Stochastic HPSG
|
cmp-lg cs.CL
|
In this paper we provide a probabilistic interpretation for typed feature
structures very similar to those used by Pollard and Sag. We begin with a
version of the interpretation which lacks a treatment of re-entrant feature
structures, then provide an extended interpretation which allows them. We
sketch algorithms allowing the numerical parameters of our probabilistic
interpretations of HPSG to be estimated from corpora.
|
cmp-lg/9502023
|
Splitting the Reference Time: Temporal Anaphora and Quantification in
DRT
|
cmp-lg cs.CL
|
This paper presents an analysis of temporal anaphora in sentences which
contain quantification over events, within the framework of Discourse
Representation Theory. The analysis in (Partee 1984) of quantified sentences,
introduced by a temporal connective, gives the wrong truth-conditions when the
temporal connective in the subordinate clause is "before" or "after". This
problem has been previously analyzed in (de Swart 1991) as an instance of the
proportion problem, and given a solution from a Generalized Quantifier
approach. By using a careful distinction between the different notions of
reference time, based on (Kamp and Reyle 1993), we propose a solution to this
problem, within the framework of DRT. We show some applications of this
solution to additional temporal anaphora phenomena in quantified sentences.
|
cmp-lg/9502024
|
A Robust Parser Based on Syntactic Information
|
cmp-lg cs.CL
|
In this paper, we propose a robust parser which can parse extragrammatical
sentences, recovering from them using only syntactic information. Because it
relies solely on syntactic information, it can be easily modified and
extended.
|
cmp-lg/9502025
|
Principle Based Semantics for HPSG
|
cmp-lg cs.CL
|
The paper presents a constraint based semantic formalism for HPSG. The
syntax-semantics interface directly implements syntactic conditions on
quantifier scoping and distributivity. The construction of semantic
representations is guided by general principles governing the interaction
between syntax and semantics. Each of these principles acts as a constraint to
narrow down the set of possible interpretations of a sentence. Meanings of
ambiguous sentences are represented by single partial representations
(so-called U(nderspecified) D(iscourse) R(epresentation) S(tructure)s) to which
further constraints can be added monotonically to gain more information about
the content of a sentence. There is no need to build up a large number of
alternative representations of the sentence which are then filtered by
subsequent discourse and world knowledge. The advantage of UDRSs is not only
that they allow for monotonic incremental interpretation but also that they are
equipped with truth conditions and a proof theory that allows for inferences to
be drawn directly on structures where quantifier scope is not resolved.
|
cmp-lg/9502026
|
On Reasoning with Ambiguities
|
cmp-lg cs.CL
|
The paper addresses the problem of reasoning with ambiguities. Semantic
representations are presented that leave scope relations between quantifiers
and/or other operators unspecified. Truth conditions are provided for these
representations, and different consequence relations are judged on the basis of
intuitive correctness. Finally, inference patterns are presented that operate
directly on these underspecified structures, i.e., they do not rely on any
translation into the set of their disambiguations.
|
cmp-lg/9502027
|
Towards an Account of Extraposition in HPSG
|
cmp-lg cs.CL
|
This paper investigates the syntax of extraposition in the HPSG framework. We
present English and German data (partly taken from corpora), and provide an
analysis using lexical rules and a nonlocal dependency. The condition for
binding this dependency is formulated relative to the antecedent of the
extraposed phrase, which entails that no fixed site for extraposition exists.
Our analysis accounts for the interaction of extraposition with fronting and
coordination, and predicts constraints on multiple extraposition.
|
cmp-lg/9502028
|
Lexical Acquisition via Constraint Solving
|
cmp-lg cs.CL
|
This paper describes a method to automatically acquire the syntactic and
semantic classifications of unknown words. Our method reduces the search space
of the lexical acquisition problem by utilizing both the left and the right
context of the unknown word. Link Grammar provides a convenient framework in
which to implement our method.
|
cmp-lg/9502029
|
Topic Identification in Discourse
|
cmp-lg cs.CL
|
This paper proposes a corpus-based language model for topic identification.
We analyze the association of noun-noun and noun-verb pairs in LOB Corpus. The
word association norms are based on three factors: 1) word importance, 2) pair
co-occurrence, and 3) distance. They are trained on the paragraph and sentence
levels for noun-noun and noun-verb pairs, respectively. Under the topic
coherence postulation, the nouns that have the strongest connectivities with
the other nouns and verbs in the discourse form the preferred topic set. The
collocational semantics then is used to identify the topics from paragraphs and
to discuss the topic shift phenomenon among paragraphs.
|
cmp-lg/9502030
|
Bi-directional memory-based dialog translation: The KEMDT approach
|
cmp-lg cs.CL
|
A bi-directional Korean/English dialog translation system is designed and
implemented using the memory-based translation technique. The system KEMDT
(Korean/English Memory-based Dialog Translation system) can perform Korean to
English, and English to Korean translation using unified memory network and
extended marker passing algorithm. We resolve the word order variation and
frequent word omission problems in Korean by classifying the concept sequence
elements into four different types and extending the marker-passing-based
translation algorithm. Unlike the previous memory-based
translation systems, the KEMDT system develops the bilingual memory network and
the unified bi-directional marker passing translation algorithm. For efficient
language specific processing, we separate the morphological processors from the
memory-based translator. The KEMDT technology provides a hierarchical memory
network and an efficient marker-based control for the recent example-based MT
paradigm.
|
cmp-lg/9502031
|
Cooperative Error Handling and Shallow Processing
|
cmp-lg cs.CL
|
This paper is concerned with the detection and correction of sub-sentential
English text errors. Previous spelling programs, unless restricted to a very
small set of words, have operated as post-processors. And to date, grammar
checkers and other programs which deal with ill-formed input usually step
directly from spelling considerations to a full-scale parse, assuming a
complete sentence. The work described below is aimed at evaluating the
effectiveness of shallow (sub-sentential) processing and the feasibility of
cooperative error checking, through building and testing an appropriate
error-processing system. A system under construction is outlined which
incorporates morphological checks (using new two-level error rules) over a
directed letter graph, tag positional trigrams and partial parsing. Intended
testing is discussed.
|
cmp-lg/9502032
|
An NLP Approach to a Specific Type of Texts: Car Accident Reports
|
cmp-lg cs.CL
|
The work reported here is the result of a study done within a larger project
on the ``Semantics of Natural Languages'' viewed from the field of Artificial
Intelligence and Computational Linguistics. In this project, we have chosen a
corpus of insurance claim reports. These texts deal with a relatively
circumscribed domain, that of road traffic, thereby limiting the
extra-linguistic knowledge necessary to understand them. Moreover, these texts
present a number of very specific characteristics, insofar as they are written
in a quasi-institutional setting which imposes many constraints on their
production. We first determine what these constraints are in order to then show
how they provide the writer with the means to create as succinct a text as
possible, and in a symmetric way, how they provide the reader with the means to
interpret the text and to distinguish between its factual and argumentative
aspects.
|
cmp-lg/9502033
|
An Algorithm to Co-Ordinate Anaphora Resolution and PPS Disambiguation
Process
|
cmp-lg cs.CL
|
This paper concerns both anaphora resolution and prepositional phrase (PP)
attachment, which are the most frequent ambiguities in natural language
processing. Several methods have been proposed to deal with each phenomenon
separately; however, none of the proposed systems has considered dealing with
both phenomena together. We tackle this issue by proposing an algorithm to
co-ordinate the treatment of these two problems efficiently, i.e., the aim is
also to exploit at each step all the results that each component can provide.
|
cmp-lg/9502034
|
Grouping Words Using Statistical Context
|
cmp-lg cs.CL
|
This paper (cmp-lg/yymmnnn) has been accepted for publication in the student
session of EACL-95. It outlines ongoing work using statistical and unsupervised
neural network methods for clustering words in untagged corpora. Such
approaches are of interest when attempting to understand the development of
human intuitive categorization of language as well as for trying to improve
computational methods in natural language understanding. Some preliminary
results using a simple statistical approach are described, along with work
using an unsupervised neural network to distinguish between the sense classes
into which words fall.
|
cmp-lg/9502035
|
Incorporating "Unconscious Reanalysis" into an Incremental, Monotonic
Parser
|
cmp-lg cs.CL
|
This paper describes an implementation based on a recent model in the
psycholinguistic literature. We define a parsing operation which allows the
reanalysis of dependencies within an incremental and monotonic processing
architecture, and discuss search strategies for its application in a
head-initial language (English) and a head-final language (Japanese).
|
cmp-lg/9502036
|
Literal Movement Grammars
|
cmp-lg cs.CL
|
Literal movement grammars (LMGs) provide a general account of extraposition
phenomena through an attribute mechanism allowing top-down displacement of
syntactical information. LMGs provide a simple and efficient treatment of
complex linguistic phenomena such as cross-serial dependencies in German and
Dutch---separating the treatment of natural language into a parsing phase
closely resembling traditional context-free treatment, and a disambiguation
phase which can be carried out using matching, as opposed to full unification
employed in most current grammar formalisms of linguistic relevance.
|
cmp-lg/9502037
|
A State-Transition Grammar for Data-Oriented Parsing
|
cmp-lg cs.CL
|
This paper presents a grammar formalism designed for use in data-oriented
approaches to language processing. The formalism is best described as a
right-linear indexed grammar extended in linguistically interesting ways. The
paper goes on to investigate how a corpus pre-parsed with this formalism may be
processed to provide a probabilistic language model for use in the parsing of
fresh texts.
|
cmp-lg/9502038
|
Implementation and evaluation of a German HMM for POS disambiguation
|
cmp-lg cs.CL
|
A German language model for the Xerox HMM tagger is presented. This model's
performance is compared with two other German taggers with partial parameter
re-estimation and full adaptation of parameters from pre-tagged corpora. The
ambiguity types resolved by this model are analysed and compared to ambiguity
types of English and French. Finally, the model's error types are described. I
argue that although the overall performance of these models for German is
comparable to results for English and French, a more exact analysis
demonstrates important differences in the types of disambiguation involved for
German.
|