| id | title | categories | abstract |
|---|---|---|---|
cmp-lg/9603004
|
Attempto - From Specifications in Controlled Natural Language towards
Executable Specifications
|
cmp-lg cs.CL
|
Deriving formal specifications from informal requirements is difficult since
one has to take into account the disparate conceptual worlds of the application
domain and of software development. To bridge the conceptual gap we propose
controlled natural language as a textual view on formal specifications in
logic. The specification language Attempto Controlled English (ACE) is a subset
of natural language that can be accurately and efficiently processed by a
computer, but is expressive enough to allow natural usage. The Attempto system
translates specifications in ACE into discourse representation structures and
into Prolog. The resulting knowledge base can be queried in ACE for
verification, and it can be executed for simulation, prototyping and validation
of the specification.
|
cmp-lg/9603005
|
Integrated speech and morphological processing in a connectionist
continuous speech understanding for Korean
|
cmp-lg cs.CL
|
A new tightly coupled speech and natural language integration model is
presented for a TDNN-based continuous, possibly large-vocabulary speech
recognition system for Korean. Unlike popular n-best techniques developed for
integrating mainly HMM-based speech recognition and natural language processing
at the {\em word level}, which is obviously inadequate for morphologically
complex agglutinative languages, our model constructs a spoken language system
based on {\em morpheme-level} speech and language integration. With this
integration scheme, the spoken Korean processing engine (SKOPE) is designed and
implemented using a TDNN-based diphone recognition module integrated with a
Viterbi-based lexical decoding and symbolic phonological/morphological
co-analysis. Our experimental results show that the speaker-dependent continuous
{\em eojeol} (Korean word) recognition and integrated morphological analysis
can be achieved with a success rate of over 80.6% directly from speech inputs
for middle-level vocabularies.
|
cmp-lg/9603006
|
Extraction of V-N-Collocations from Text Corpora: A Feasibility Study
for German
|
cmp-lg cs.CL
|
The usefulness of a statistical approach suggested by Church et al. (1991) is
evaluated for the extraction of verb-noun (V-N) collocations from German text
corpora. Some problematic issues of that method arising from properties of the
German language are discussed and various modifications of the method are
considered that might improve extraction results for German. The precision and
recall of all variant methods are evaluated for V-N collocations containing
support verbs, and the consequences for further work on the extraction of
collocations from German corpora are discussed.
With a sufficiently large corpus (>= 6 million word tokens), the average error
rate of wrong extractions can be reduced to 2.2% (97.8% precision) with the
most restrictive method, albeit with a loss in data of almost 50% compared to
a less restrictive method that still achieves 87.6% precision. Depending on the goal to
be achieved, emphasis can be put on a high recall for lexicographic purposes or
on high precision for automatic lexical acquisition, in each case unfortunately
at the cost of the other measure. Low recall can still
be acceptable if very large corpora (i.e. 50 - 100 million words) are available
or if corpora for special domains are used in addition to the data found in
machine readable (collocation) dictionaries.
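The association-score idea behind this line of work can be sketched as follows. This is a generic pointwise-mutual-information ranking over co-occurring verb-noun pairs, not the exact statistic of Church et al.; the function name, the frequency cutoff, and the example data are illustrative assumptions:

```python
import math
from collections import Counter

def pmi_collocations(pairs, min_count=2):
    """Rank verb-noun co-occurrence pairs by pointwise mutual
    information; higher PMI suggests a collocation.

    `pairs` is a list of (verb, noun) co-occurrence tokens drawn
    from a corpus.  Pairs rarer than `min_count` are dropped,
    since PMI is unreliable for low counts.
    """
    pair_counts = Counter(pairs)
    verb_counts = Counter(v for v, _ in pairs)
    noun_counts = Counter(n for _, n in pairs)
    total = len(pairs)
    scores = {}
    for (v, n), c in pair_counts.items():
        if c < min_count:
            continue  # too rare for a stable estimate
        p_vn = c / total
        p_v = verb_counts[v] / total
        p_n = noun_counts[n] / total
        scores[(v, n)] = math.log2(p_vn / (p_v * p_n))
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

A support-verb collocation such as *einen Beitrag leisten* would surface as a high-PMI (verb, noun) pair under this scheme.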
|
cmp-lg/9604001
|
Combining Hand-crafted Rules and Unsupervised Learning in
Constraint-based Morphological Disambiguation
|
cmp-lg cs.CL
|
This paper presents a constraint-based morphological disambiguation approach
that is applicable to languages with complex morphology--specifically
agglutinative languages with productive inflectional and derivational
morphological phenomena. In certain respects, our approach has been motivated
by Brill's recent work, but with the observation that his transformational
approach is not directly applicable to languages like Turkish. Our system
combines corpus independent hand-crafted constraint rules, constraint rules
that are learned via unsupervised learning from a training corpus, and
additional statistical information from the corpus to be morphologically
disambiguated. The hand-crafted rules are linguistically motivated and tuned to
improve precision without sacrificing recall. The unsupervised learning process
produces two sets of rules: (i) choose rules, which choose the morphological
parses of a lexical item that satisfy a constraint, effectively discarding the
other parses, and (ii) delete rules, which delete parses satisfying a constraint. Our
approach also uses a novel approach to unknown word processing by employing a
secondary morphological processor which recovers any relevant inflectional and
derivational information from a lexical item whose root is unknown. With this
approach, well below 1 percent of the tokens remain unknown in the texts we
have experimented with. Our results indicate that by combining these
hand-crafted, statistical, and learned information sources, we can attain a
recall of 96 to 97 percent with a corresponding precision of 93 to 94 percent,
and an ambiguity of 1.02 to 1.03 parses per token.
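The effect of the two learned rule types can be sketched as follows. This is a minimal illustration under assumed representations (parses as strings, rules as predicates); the recall-preserving guard is an assumption in the spirit of the abstract, not the paper's implementation:

```python
def apply_rules(parses, choose_rules, delete_rules):
    """Disambiguate a token's morphological parses with constraint rules.

    A choose rule keeps only the parses that satisfy its constraint,
    effectively discarding the others; a delete rule removes the parses
    that satisfy its constraint.  A rule is skipped if it would discard
    every remaining parse, so recall is not sacrificed.
    """
    for rule in choose_rules:
        kept = [p for p in parses if rule(p)]
        if kept:
            parses = kept
    for rule in delete_rules:
        kept = [p for p in parses if not rule(p)]
        if kept:
            parses = kept
    return parses
```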
|
cmp-lg/9604002
|
A Constraint-based Case Frame Lexicon
|
cmp-lg cs.CL
|
We present a constraint-based case frame lexicon architecture for
bi-directional mapping between a syntactic case frame and a semantic frame. The
lexicon uses a semantic sense as the basic unit and employs a multi-tiered
constraint structure for the resolution of syntactic information into the
appropriate senses and/or idiomatic usage. Valency changing transformations
such as morphologically marked passivized or causativized forms are handled via
lexical rules that manipulate case frame templates. The system has been
implemented in a typed-feature system and applied to Turkish.
|
cmp-lg/9604003
|
Error-tolerant Tree Matching
|
cmp-lg cs.CL
|
This paper presents an efficient algorithm for retrieving, from a database of
trees, all trees that match a given query tree approximately, that is, within a
certain error tolerance. It has natural language processing applications in
searching for matches in example-based translation systems, and retrieval from
lexical databases containing entries of complex feature structures. The
algorithm has been implemented on SparcStations, and for large randomly
generated synthetic tree databases (some having tens of thousands of trees) it
can associatively search for trees with a small error, in a matter of tenths of
a second to a few seconds.
|
cmp-lg/9604004
|
Apportioning Development Effort in a Probabilistic LR Parsing System
through Evaluation
|
cmp-lg cs.CL
|
We describe an implemented system for robust domain-independent syntactic
parsing of English, using a unification-based grammar of part-of-speech and
punctuation labels coupled with a probabilistic LR parser. We present
evaluations of the system's performance along several different dimensions;
these enable us to assess the contribution that each individual part is making
to the success of the system as a whole, and thus prioritise the effort to be
devoted to its further enhancement. Currently, the system is able to parse
around 80% of sentences in a substantial corpus of general text containing a
number of distinct genres. On a random sample of 250 such sentences the system
has a mean crossing bracket rate of 0.71 and recall and precision of 83% and
84% respectively when evaluated against manually-disambiguated analyses.
|
cmp-lg/9604005
|
Better Language Models with Model Merging
|
cmp-lg cs.CL
|
This paper investigates model merging, a technique for deriving Markov models
from text or speech corpora. Models are derived by starting with a large and
specific model and by successively combining states to build smaller and more
general models. We present methods to reduce the time complexity of the
algorithm and report on experiments on deriving language models for a speech
recognition task. The experiments show the advantage of model merging over the
standard bigram approach. The merged model assigns a lower perplexity to the
test set and uses considerably fewer states.
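The merge step at the heart of this technique can be sketched as follows. This is a minimal illustration of merging two states in a count-based Markov model only; the merge-candidate search, scoring, and stopping criterion of the paper are not shown, and the representation is an assumption:

```python
from collections import defaultdict

def merge_states(transitions, s1, s2):
    """Merge state s2 into s1 in a count-based Markov model.

    `transitions` maps state -> {next_state: count}.  Merging pools
    the outgoing counts of both states and redirects incoming arcs,
    yielding a smaller, more general model whose probabilities are
    re-estimated from the pooled counts.
    """
    merged = defaultdict(lambda: defaultdict(int))
    for src, arcs in transitions.items():
        new_src = s1 if src == s2 else src
        for dst, c in arcs.items():
            new_dst = s1 if dst == s2 else dst
            merged[new_src][new_dst] += c
    return {s: dict(a) for s, a in merged.items()}
```

Starting from a large, specific model and repeatedly applying such merges produces the successively smaller, more general models the abstract describes.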
|
cmp-lg/9604006
|
The Role of the Gricean Maxims in the Generation of Referring
Expressions
|
cmp-lg cs.CL
|
Grice's maxims of conversation [Grice 1975] are framed as directives to be
followed by a speaker of the language. This paper argues that, when considered
from the point of view of natural language generation, such a characterisation
is rather misleading, and that the desired behaviour falls out quite naturally
if we view language generation as a goal-oriented process. We argue this
position with particular regard to the generation of referring expressions.
|
cmp-lg/9604007
|
Collocational Grammar
|
cmp-lg cs.CL
|
A perspective of statistical language models which emphasizes their
collocational aspect is advocated. It is suggested that strings be generalized
in terms of classes of relationships instead of classes of objects. The single
most important characteristic of such a model is a mechanism for comparing
patterns. When patterns are fully generalized a natural definition of syntactic
class emerges as a subset of relational class. These collocational syntactic
classes should be an unambiguous partition of traditional syntactic classes.
|
cmp-lg/9604008
|
Efficient Algorithms for Parsing the DOP Model
|
cmp-lg cs.CL
|
Excellent results have been reported for Data-Oriented Parsing (DOP) of
natural language texts (Bod, 1993). Unfortunately, existing algorithms are both
computationally intensive and difficult to implement. Previous algorithms are
expensive due to two factors: the exponential number of rules that must be
generated and the use of a Monte Carlo parsing algorithm. In this paper we
solve the first problem by a novel reduction of the DOP model to a small,
equivalent probabilistic context-free grammar. We solve the second problem by a
novel deterministic parsing strategy that maximizes the expected number of
correct constituents, rather than the probability of a correct parse tree.
Using the optimizations, experiments yield a 97% crossing brackets rate and 88%
zero crossing brackets rate. This differs significantly from the results
reported by Bod, and is comparable to results from a duplication of Pereira and
Schabes's (1992) experiment on the same data. We show that Bod's results are at
least partially due to an extremely fortuitous choice of test data, and
partially due to using cleaner data than other researchers.
|
cmp-lg/9604009
|
Another Facet of LIG Parsing
|
cmp-lg cs.CL
|
In this paper we present a new parsing algorithm for linear indexed grammars
(LIGs) in the same spirit as the one described in (Vijay-Shanker and Weir,
1993) for tree adjoining grammars. For a LIG $L$ and an input string $x$ of
length $n$, we build an unambiguous context-free grammar whose sentences are
all (and exclusively) valid derivation sequences in $L$ which lead to $x$. We
show that this grammar can be built in ${\cal O}(n^6)$ time and that individual
parses can be extracted in time linear in the size of the extracted parse
tree. Though this ${\cal O}(n^6)$ upper bound does not improve over previous
results, the average case behaves much better. Moreover, practical parsing
times can be decreased by some statically performed computations.
|
cmp-lg/9604010
|
Off-line Constraint Propagation for Efficient HPSG Processing
|
cmp-lg cs.CL
|
We investigate the use of a technique developed in the constraint programming
community called constraint propagation to automatically make an HPSG theory
more specific at those places where linguistically motivated underspecification
would lead to inefficient processing. We discuss two concrete HPSG examples
showing how off-line constraint propagation helps improve processing
efficiency.
|
cmp-lg/9604011
|
Multi-level post-processing for Korean character recognition using
morphological analysis and linguistic evaluation
|
cmp-lg cs.CL
|
Most of the post-processing methods for character recognition rely on
contextual information of character and word-fragment levels. However, due to
linguistic characteristics of Korean, such low-level information alone is not
sufficient for high-quality character-recognition applications, and we need
much higher-level contextual information to improve the recognition results.
This paper presents a domain independent post-processing technique that
utilizes multi-level morphological, syntactic, and semantic information as well
as character-level information. The proposed post-processing system performs
three-level processing: candidate character-set selection, candidate eojeol
(Korean word) generation through morphological analysis, and final single
eojeol-sequence selection by linguistic evaluation. All the required linguistic
information and probabilities are automatically acquired from a statistical
corpus analysis. Experimental results demonstrate the effectiveness of our
method, yielding an error-correction rate of 80.46% and an improved recognition
rate of 95.53%, up from a before-post-processing rate of 71.2%, for single
best-solution selection.
|
cmp-lg/9604012
|
SemHe: A Generalised Two-Level System
|
cmp-lg cs.CL
|
This paper presents a generalised two-level implementation which can handle
linear and non-linear morphological operations. An algorithm for the
interpretation of multi-tape two-level rules is described. In addition, a
number of issues which arise when developing non-linear grammars are discussed
with examples from Syriac.
|
cmp-lg/9604013
|
Syntactic Analyses for Parallel Grammars: Auxiliaries and Genitive NPs
|
cmp-lg cs.CL
|
This paper focuses on two disparate aspects of German syntax from the
perspective of parallel grammar development. As part of a cooperative project,
we present an innovative approach to auxiliaries and multiple genitive NPs in
German. The LFG-based implementation presented here avoids unnecessary structural
complexity in the representation of auxiliaries by challenging the traditional
analysis of auxiliaries as raising verbs. The approach developed for multiple
genitive NPs provides a more abstract, language independent representation of
genitives associated with nominalized verbs. Taken together, the two approaches
represent a step towards providing uniformly applicable treatments for
differing languages, thus lightening the burden for machine translation.
|
cmp-lg/9604014
|
The importance of being lazy -- using lazy evaluation to process queries
to HPSG grammars
|
cmp-lg cs.CL
|
Linguistic theories formulated in the architecture of {\sc hpsg} can be very
precise and explicit since {\sc hpsg} provides a formally well-defined setup.
However, when querying a faithful implementation of such an explicit theory,
the large data structures specified can make it hard to see the relevant
aspects of the reply given by the system. Furthermore, the system spends much
time applying constraints which can never fail just to be able to enumerate
specific answers. In this paper we want to describe lazy evaluation as the
result of an off-line compilation technique. This method of evaluation can be
used to answer queries to an {\sc hpsg} system so that only the relevant
aspects are checked and output.
|
cmp-lg/9604015
|
Computing Prosodic Morphology
|
cmp-lg cs.CL
|
This paper establishes a framework under which various aspects of prosodic
morphology, such as templatic morphology and infixation, can be handled under
two-level theory using an implemented multi-tape two-level model. The paper
provides a new computational analysis of root-and-pattern morphology based on
prosody.
|
cmp-lg/9604016
|
Processing Metonymy: a Domain-Model Heuristic Graph Traversal Approach
|
cmp-lg cs.CL
|
We address here the treatment of metonymic expressions from a knowledge
representation perspective, that is, in the context of a text understanding
system which aims to build a conceptual representation from texts according to
a domain model expressed in a knowledge representation formalism.
We focus in this paper on the part of the semantic analyser which deals with
semantic composition. We explain how we use the domain model to handle metonymy
dynamically, and more generally, to underlie semantic composition, using the
knowledge descriptions attached to each concept of our ontology as a kind of
concept-level, multiple-role qualia structure.
We rely for this on a heuristic path search algorithm that exploits the
graphic aspects of the conceptual graphs formalism. The methods described have
been implemented and applied on French texts in the medical domain.
|
cmp-lg/9604017
|
Fast Parsing using Pruning and Grammar Specialization
|
cmp-lg cs.CL
|
We show how a general grammar may be automatically adapted for fast parsing
of utterances from a specific domain by means of constituent pruning and
grammar specialization based on explanation-based learning. These methods
together give an order of magnitude increase in speed, and the coverage loss
entailed by grammar specialization is reduced to approximately half that
reported in previous work. Experiments described here suggest that the loss of
coverage has been reduced to the point where it no longer causes significant
performance degradation in the context of a real application.
|
cmp-lg/9604018
|
The Measure of a Model
|
cmp-lg cs.CL
|
This paper describes measures for evaluating the three determinants of how
well a probabilistic classifier performs on a given test set. These
determinants are the appropriateness, for the test set, of the results of (1)
feature selection, (2) formulation of the parametric form of the model, and (3)
parameter estimation. These are part of any model formulation procedure, even
if not broken out as separate steps, so the tradeoffs explored in this paper
are relevant to a wide variety of methods. The measures are demonstrated in a
large experiment, in which they are used to analyze the results of roughly 300
classifiers that perform word-sense disambiguation.
|
cmp-lg/9604019
|
Magic for Filter Optimization in Dynamic Bottom-up Processing
|
cmp-lg cs.CL
|
Off-line compilation of logic grammars using Magic allows an incorporation of
filtering into the logic underlying the grammar. The explicit definite clause
characterization of filtering resulting from Magic compilation allows processor
independent and logically clean optimizations of dynamic bottom-up processing
with respect to goal-directedness. Two filter optimizations based on the
program transformation technique of Unfolding are discussed which are of
practical and theoretical interest.
|
cmp-lg/9604020
|
Translating into Free Word Order Languages
|
cmp-lg cs.CL
|
In this paper, I discuss machine translation of English text into Turkish, a
relatively ``free'' word order language. I present algorithms that determine
the topic and the focus of each target sentence (using salience (Centering
Theory), old vs. new information, and contrastiveness in the discourse model)
in order to generate the contextually appropriate word orders in the target
language.
|
cmp-lg/9604021
|
Extended Dependency Structures and their Formal Interpretation
|
cmp-lg cs.CL
|
We describe two ``semantically-oriented'' dependency-structure formalisms,
U-forms and S-forms. U-forms have been previously used in machine translation
as interlingual representations, but without being provided with a formal
interpretation. S-forms, which we introduce in this paper, are a scoped version
of U-forms, and we define a compositional semantics mechanism for them. Two
types of semantic composition are basic: complement incorporation and modifier
incorporation. Binding of variables is done at the time of incorporation,
permitting much flexibility in composition order and a simple account of the
semantic effects of permuting several incorporations.
|
cmp-lg/9604022
|
Unsupervised Learning of Word-Category Guessing Rules
|
cmp-lg cs.CL
|
Words unknown to the lexicon present a substantial problem to part-of-speech
tagging. In this paper we present a technique for fully unsupervised
statistical acquisition of rules which guess possible parts-of-speech for
unknown words. Three complementary sets of word-guessing rules are induced from
the lexicon and a raw corpus: prefix morphological rules, suffix morphological
rules and ending-guessing rules. The learning was performed on the Brown Corpus
data and rule-sets, with a highly competitive performance, were produced and
compared with the state-of-the-art.
|
cmp-lg/9604023
|
A Model-Theoretic Framework for Theories of Syntax
|
cmp-lg cs.CL
|
A natural next step in the evolution of constraint-based grammar formalisms
from rewriting formalisms is to abstract fully away from the details of the
grammar mechanism---to express syntactic theories purely in terms of the
properties of the class of structures they license. By focusing on the
structural properties of languages rather than on mechanisms for generating or
checking structures that exhibit those properties, this model-theoretic
approach can offer simpler and significantly clearer expression of theories and
can potentially provide a uniform formalization, allowing disparate theories to
be compared on the basis of those properties. We discuss $\LKP$, a monadic
second-order logical framework for such an approach to syntax that has the
distinctive virtue of being superficially expressive---supporting direct
statement of most linguistically significant syntactic properties---but having
well-defined strong generative capacity---languages are definable in $\LKP$ iff
they are strongly context-free. We draw examples from the realms of GPSG and
GB.
|
cmp-lg/9604024
|
Connectivity in Bag Generation
|
cmp-lg cs.CL
|
This paper presents a pruning technique which can be used to reduce the
number of paths searched in rule-based bag generators of the type proposed by
\cite{poznanskietal95} and \cite{popowich95}. Pruning the search space in these
generators is important given the computational cost of bag generation. The
technique relies on a connectivity constraint between the semantic indices
associated with each lexical sign in a bag. Testing the algorithm on a range of
sentences shows reductions in the generation time and the number of edges
constructed.
|
cmp-lg/9604025
|
Learning Part-of-Speech Guessing Rules from Lexicon: Extension to
Non-Concatenative Operations
|
cmp-lg cs.CL
|
One of the problems in part-of-speech tagging of real-world texts is that of
words unknown to the lexicon. In Mikheev (ACL-96 cmp-lg/9604022), a technique
for fully unsupervised statistical acquisition of rules which guess possible
parts-of-speech for unknown words was proposed. One of the over-simplifications
assumed by this learning technique was that the acquired morphological rules
obey only simple concatenative regularities of the main word with an affix. In
this paper we extend this technique to the non-concatenative cases of
suffixation and assess the gain in performance.
|
cmp-lg/9604026
|
Towards a Workbench for Acquisition of Domain Knowledge from Natural
Language
|
cmp-lg cs.CL
|
In this paper we describe the architecture and functionality of the main
components of a workbench for the acquisition of domain knowledge from large
text corpora. The workbench supports an incremental process of corpus analysis
starting from a rough automatic extraction and organization of lexico-semantic
regularities and ending with a computer supported analysis of extracted data
and a semi-automatic refinement of obtained hypotheses. For doing this the
workbench employs methods from computational linguistics, information retrieval
and knowledge engineering. Although the workbench is currently under
implementation, some of its components are already implemented and their
performance is illustrated with samples from engineering for a medical domain.
|
cmp-lg/9605001
|
Compiling a Partition-Based Two-Level Formalism
|
cmp-lg cs.CL
|
This paper describes an algorithm for the compilation of a two (or more)
level orthographic or phonological rule notation into finite state transducers.
The notation is an alternative to the standard one deriving from Koskenniemi's
work: it is believed to have some practical descriptive advantages, and is
quite widely used, but has a different interpretation. Efficient interpreters
exist for the notation, but until now it has not been clear how to compile to
equivalent automata in a transparent way. The present paper shows how to do
this, using some of the conceptual tools provided by Kaplan and Kay's regular
relations calculus.
|
cmp-lg/9605002
|
Building Natural-Language Generation Systems
|
cmp-lg cs.CL
|
This is a very short paper that briefly discusses some of the tasks that NLG
systems perform. It is of no research interest, but I have occasionally found
it useful as a way of introducing NLG to potential project collaborators who
know nothing about the field.
|
cmp-lg/9605003
|
Yet Another Paper about Partial Verb Phrase Fronting in German
|
cmp-lg cs.CL
|
I describe a very simple HPSG analysis for partial verb phrase fronting. I
will argue that the presented account is more adequate than others proposed in
recent years because it allows the description of constituents in fronted
positions with their modifier remaining in the non-fronted part of the
sentence. A problem with ill-formed signs that are admitted by all HPSG
accounts for partial verb phrase fronting known so far will be explained and a
solution will be suggested that uses the difference between combinatoric
relations of signs and their representation in word order domains.
|
cmp-lg/9605004
|
Higher-Order Coloured Unification and Natural Language Semantics
|
cmp-lg cs.CL
|
In this paper, we show that Higher-Order Coloured Unification - a form of
unification developed for automated theorem proving - provides a general theory
for modeling the interface between the interpretation process and other sources
of linguistic, non semantic information. In particular, it provides the general
theory for the Primary Occurrence Restriction which (Dalrymple, Shieber and
Pereira, 1991)'s analysis called for.
|
cmp-lg/9605005
|
Focus and Higher-Order Unification
|
cmp-lg cs.CL
|
Pulman has shown that Higher--Order Unification (HOU) can be used to model
the interpretation of focus. In this paper, we extend the unification--based
approach to cases which are often seen as a test--bed for focus theory:
utterances with multiple focus operators and second occurrence expressions. We
then show that the resulting analysis favourably compares with two prominent
theories of focus (namely, Rooth's Alternative Semantics and Krifka's
Structured Meanings theory) in that it correctly generates interpretations
which these alternative theories cannot yield. Finally, we discuss the formal
properties of the approach and argue that even though HOU need not terminate,
for the class of unification--problems dealt with in this paper, HOU avoids
this shortcoming and is in fact computationally tractable.
|
cmp-lg/9605006
|
Active Constraints for a Direct Interpretation of HPSG
|
cmp-lg cs.CL
|
In this paper, we characterize the properties of a direct interpretation of
HPSG and present the advantages of this approach. High-level programming
languages constitute in this perspective an efficient solution: we show how a
multi-paradigm approach, containing in particular constraint logic programming,
offers mechanisms close to those of the theory and preserves its fundamental
properties. We take the example of LIFE and describe the implementation of the
main HPSG mechanisms.
|
cmp-lg/9605007
|
Resolving Anaphors in Embedded Sentences
|
cmp-lg cs.CL
|
We propose an algorithm to resolve anaphors, tackling mainly the problem of
intrasentential antecedents. We base our methodology on the fact that such
antecedents are likely to occur in embedded sentences. Sidner's focusing
mechanism is used as the basic algorithm in a more complete approach. The
proposed algorithm has been tested and implemented as a part of a conceptual
analyser, mainly to process pronouns. Details of an evaluation are given.
|
cmp-lg/9605008
|
Tactical Generation in a Free Constituent Order Language
|
cmp-lg cs.CL
|
This paper describes tactical generation in Turkish, a free constituent order
language, in which the order of the constituents may change according to the
information structure of the sentences to be generated. In the absence of any
information regarding the information structure of a sentence (i.e., topic,
focus, background, etc.), the constituents of the sentence obey a default
order, but the order is almost freely changeable, depending on the constraints
of the text flow or discourse. We have used a recursively structured finite
state machine for handling the changes in constituent order, implemented as a
right-linear grammar backbone. Our implementation environment is the GenKit
system, developed at Carnegie Mellon University--Center for Machine
Translation. Morphological realization has been implemented using an external
morphological analysis/generation component which performs concrete morpheme
selection and handles morphographemic processes.
|
cmp-lg/9605009
|
Learning similarity-based word sense disambiguation from sparse data
|
cmp-lg cs.CL
|
We describe a method for automatic word sense disambiguation using a text
corpus and a machine-readable dictionary (MRD). The method is based on word
similarity and context similarity measures. Words are considered similar if
they appear in similar contexts; contexts are similar if they contain similar
words. The circularity of this definition is resolved by an iterative,
converging process, in which the system learns from the corpus a set of typical
usages for each of the senses of the polysemous word listed in the MRD. A new
instance of a polysemous word is assigned the sense associated with the typical
usage most similar to its context. Experiments show that this method performs
well, and can learn even from very sparse training data.
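The final sense-assignment step can be sketched as follows, with simple Jaccard overlap standing in for the paper's learned word/context similarity measures; the function name and example senses are illustrative assumptions:

```python
def assign_sense(context, typical_usages):
    """Assign a polysemous word the sense whose learned typical-usage
    word set best overlaps the context of the new occurrence.

    `typical_usages` maps sense -> list of context words learned for
    that sense; Jaccard overlap is a stand-in for the iterative
    similarity measure of the paper.
    """
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(typical_usages,
               key=lambda sense: jaccard(context, typical_usages[sense]))
```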
|
cmp-lg/9605010
|
Best-First Surface Realization
|
cmp-lg cs.CL
|
Current work in surface realization concentrates on the use of general,
abstract algorithms that interpret large, reversible grammars. Little
attention has been paid so far to the many small and simple applications that
require coverage of a small sublanguage at different degrees of sophistication.
The system TG/2 described in this paper can be smoothly integrated with deep
generation processes; it integrates canned text, templates, and context-free
rules into a single formalism; it allows for both textual and tabular output;
and it can be parameterized according to linguistic preferences. These features
are based on suitably restricted production system techniques and on a generic
backtracking regime.
|
cmp-lg/9605011
|
Counting Coordination Categorially
|
cmp-lg cs.CL
|
This paper presents a way of reducing the complexity of parsing free
coordination. It relies on the Coordinative Count Invariant, a property of
derivable sequences in occurrence-sensitive categorial grammar. This invariant
can be exploited to cut down deterministically the search space for coordinated
sentences to minimal fractions. The invariant is based on inequalities, which
is shown to be the best one can get in the presence of coordination without
proper parsing. It is implemented in a categorial parser for Dutch. Some
results of applying the invariant to the parsing of coordination in this parser
are presented.
|
cmp-lg/9605012
|
A New Statistical Parser Based on Bigram Lexical Dependencies
|
cmp-lg cs.CL
|
This paper describes a new statistical parser which is based on probabilities
of dependencies between head-words in the parse tree. Standard bigram
probability estimation techniques are extended to calculate probabilities of
dependencies between pairs of words. Tests using Wall Street Journal data show
that the method performs at least as well as SPATTER (Magerman 95, Jelinek et
al 94), which has the best published results for a statistical parser on this
task. The simplicity of the approach means the model trains on 40,000 sentences
in under 15 minutes. With a beam search strategy parsing speed can be improved
to over 200 sentences a minute with negligible loss in accuracy.
|
cmp-lg/9605013
|
Learning Dependencies between Case Frame Slots
|
cmp-lg cs.CL
|
We address the problem of automatically acquiring case frame patterns
(selectional patterns) from large corpus data. In particular, we propose a
method of learning dependencies between case frame slots. We view the problem
of learning case frame patterns as that of learning multi-dimensional discrete
joint distributions, where random variables represent case slots. We then
formalize the dependencies between case slots as the probabilistic dependencies
between these random variables. Since the number of parameters in a
multi-dimensional joint distribution is exponential, it is infeasible to
accurately estimate them in practice. To overcome this difficulty, we settle
for approximating the target joint distribution by the product of low-order
component distributions, based on corpus data. In particular we propose to
employ an efficient learning algorithm based on the MDL principle to realize
this task. Our experimental results indicate that for certain classes of verbs,
the accuracy achieved in a disambiguation experiment is improved by using the
acquired knowledge of dependencies.
|
cmp-lg/9605014
|
Clustering Words with the MDL Principle
|
cmp-lg cs.CL
|
We address the problem of automatically constructing a thesaurus
(hierarchically clustering words) based on corpus data. We view the problem of
clustering words as that of estimating a joint distribution over the Cartesian
product of a partition of a set of nouns and a partition of a set of verbs, and
propose an estimation algorithm using simulated annealing with an energy
function based on the Minimum Description Length (MDL) Principle. We
empirically compared the performance of our method based on the MDL Principle
against that of one based on the Maximum Likelihood Estimator, and found that
the former outperforms the latter. We also evaluated the method by conducting
pp-attachment disambiguation experiments using an automatically constructed
thesaurus. Our experimental results indicate that we can improve accuracy in
disambiguation by using such a thesaurus.
|
cmp-lg/9605015
|
Adapting the Core Language Engine to French and Spanish
|
cmp-lg cs.CL
|
We describe how substantial domain-independent language-processing systems
for French and Spanish were quickly developed by manually adapting an existing
English-language system, the SRI Core Language Engine. We explain the
adaptation process in detail, and argue that it provides a fairly general
recipe for converting a grammar-based system for English into a corresponding
one for a Romance language.
|
cmp-lg/9605016
|
Parsing for Semidirectional Lambek Grammar is NP-Complete
|
cmp-lg cs.CL
|
We study the computational complexity of the parsing problem of a variant of
Lambek Categorial Grammar that we call {\em semidirectional}. In
semidirectional Lambek calculus $\SDL$ there is an additional non-directional
abstraction rule allowing the formula abstracted over to appear anywhere in the
premise sequent's left-hand side, thus permitting non-peripheral extraction.
$\SDL$ grammars are able to generate each context-free language and more than
that. We show that the parsing problem for semidirectional Lambek Grammar is
NP-complete by a reduction of the 3-Partition problem.
|
cmp-lg/9605017
|
A Chart Generator for Shake and Bake Machine Translation
|
cmp-lg cs.CL
|
A generation algorithm based on an active chart parsing algorithm is
introduced which can be used in conjunction with a Shake and Bake machine
translation system. A concise Prolog implementation of the algorithm is
provided, and some performance comparisons with a shift-reduce based algorithm
are given which show the chart generator is much more efficient for generating
all possible sentences from an input specification.
|
cmp-lg/9605018
|
Efficient Tabular LR Parsing
|
cmp-lg cs.CL
|
We give a new treatment of tabular LR parsing, which is an alternative to
Tomita's generalized LR algorithm. The advantage is twofold. Firstly, our
treatment is conceptually more attractive because it uses simpler concepts,
such as grammar transformations and standard tabulation techniques also known as
chart parsing. Secondly, the static and dynamic complexity of parsing, both in
space and time, is significantly reduced.
|
cmp-lg/9605019
|
Noun-Phrase Analysis in Unrestricted Text for Information Retrieval
|
cmp-lg cs.CL
|
Information retrieval is an important application area of natural-language
processing where one encounters the genuine challenge of processing large
quantities of unrestricted natural-language text. This paper reports on the
application of a few simple, yet robust and efficient noun-phrase analysis
techniques to create better indexing phrases for information retrieval. In
particular, we describe a hybrid approach to the extraction of meaningful
(continuous or discontinuous) subcompounds from complex noun phrases using both
corpus statistics and linguistic heuristics. Results of experiments show that
indexing based on such extracted subcompounds improves both recall and
precision in an information retrieval system. The noun-phrase analysis
techniques are also potentially useful for book indexing and automatic
thesaurus extraction.
|
cmp-lg/9605020
|
Where Defaults Don't Help: the Case of the German Plural System
|
cmp-lg cs.CL
|
The German plural system has become a focal point for conflicting theories of
language, both linguistic and cognitive. We present simulation results with
three simple classifiers - an ordinary nearest neighbour algorithm, Nosofsky's
`Generalized Context Model' (GCM) and a standard, three-layer backprop network
- predicting the plural class from a phonological representation of the
singular in German. Though these are absolutely `minimal' models, in terms of
architecture and input information, they nevertheless do remarkably well. The
nearest neighbour predicts the correct plural class with an accuracy of 72% for
a set of 24,640 nouns from the CELEX database. With a subset of 8,598
(non-compound) nouns, the nearest neighbour, the GCM and the network score
71.0%, 75.0% and 83.5%, respectively, on novel items. Furthermore, they
outperform a hybrid, `pattern-associator + default rule', model, as proposed by
Marcus et al. (1995), on this data set.
|
cmp-lg/9605021
|
Functional Centering
|
cmp-lg cs.CL
|
Based on empirical evidence from a free word order language (German) we
propose a fundamental revision of the principles guiding the ordering of
discourse entities in the forward-looking centers within the centering model.
We claim that grammatical role criteria should be replaced by indicators of the
functional information structure of the utterances, i.e., the distinction
between context-bound and unbound discourse elements. This claim is backed up
by an empirical evaluation of functional centering.
|
cmp-lg/9605022
|
Processing Complex Sentences in the Centering Framework
|
cmp-lg cs.CL
|
We extend the centering model for the resolution of intra-sentential anaphora
and specify how to handle complex sentences. An empirical evaluation indicates
that the functional information structure guides the search for an antecedent
within the sentence.
|
cmp-lg/9605023
|
A Simple Transformation for Offline-Parsable Grammars and its
Termination Properties
|
cmp-lg cs.CL
|
We present, in easily reproducible terms, a simple transformation for
offline-parsable grammars which results in a provably terminating parsing
program directly top-down interpretable in Prolog. The transformation consists
in two steps: (1) removal of empty-productions, followed by: (2) left-recursion
elimination. It is related both to left-corner parsing (where the grammar is
compiled, rather than interpreted through a parsing program, and with the
advantage of guaranteed termination in the presence of empty productions) and
to the Generalized Greibach Normal Form for DCGs (with the advantage of
implementation simplicity).
|
cmp-lg/9605024
|
Using Terminological Knowledge Representation Languages to Manage
Linguistic Resources
|
cmp-lg cs.CL
|
I examine how terminological languages can be used to manage linguistic data
during NL research and development. In particular, I consider the lexical
semantics task of characterizing semantic verb classes and show how the
language can be extended to flag inconsistencies in verb class definitions,
identify the need for new verb classes, and identify appropriate linguistic
hypotheses for a new verb's behavior.
|
cmp-lg/9605025
|
A Conceptual Reasoning Approach to Textual Ellipsis
|
cmp-lg cs.CL
|
We present a hybrid text understanding methodology for the resolution of
textual ellipsis. It integrates conceptual criteria (based on the
well-formedness and conceptual strength of role chains in a terminological
knowledge base) and functional constraints reflecting the utterances'
information structure (based on the distinction between context-bound and
unbound discourse elements). The methodological framework for text ellipsis
resolution is the centering model that has been adapted to these constraints.
|
cmp-lg/9605026
|
Trading off Completeness for Efficiency --- The \textsc{ParseTalk}
Performance Grammar Approach to Real-World Text Parsing
|
cmp-lg cs.CL
|
We argue for a performance-based design of natural language grammars and
their associated parsers in order to meet the constraints posed by real-world
natural language understanding. This approach incorporates declarative and
procedural knowledge about language and language use within an object-oriented
specification framework. We discuss several message passing protocols for
real-world text parsing and provide reasons for sacrificing completeness of the
parse in favor of efficiency.
|
cmp-lg/9605027
|
Restricted Parallelism in Object-Oriented Lexical Parsing
|
cmp-lg cs.CL
|
We present an approach to parallel natural language parsing which is based on
a concurrent, object-oriented model of computation. A depth-first, yet
incomplete parsing algorithm for a dependency grammar is specified and several
restrictions on the degree of its parallelization are discussed.
|
cmp-lg/9605028
|
Towards Understanding Spontaneous Speech: Word Accuracy vs. Concept
Accuracy
|
cmp-lg cs.CL
|
In this paper we describe an approach to automatic evaluation of both the
speech recognition and understanding capabilities of a spoken dialogue system
for train time table information. We use word accuracy for recognition and
concept accuracy for understanding performance judgement. Both measures are
calculated by comparing these modules' output with a correct reference answer.
We report evaluation results for a spontaneous speech corpus with about 10000
utterances. We observed a nearly linear relationship between word accuracy and
concept accuracy.
|
cmp-lg/9605029
|
Learning Word Association Norms Using Tree Cut Pair Models
|
cmp-lg cs.CL
|
We consider the problem of learning co-occurrence information between two
word categories, or more in general between two discrete random variables
taking values in a hierarchically classified domain. In particular, we consider
the problem of learning the `association norm' defined by A(x,y)=p(x,
y)/(p(x)*p(y)), where p(x, y) is the joint distribution for x and y and p(x)
and p(y) are marginal distributions induced by p(x, y). We formulate this
problem as a sub-task of learning the conditional distribution p(x|y), by
exploiting the identity p(x|y) = A(x,y)*p(x). We propose a two-step estimation
method based on the MDL principle, which works as follows: It first estimates
p(x) as p1 using MDL, and then estimates p(x|y) for a fixed y by applying MDL
on the hypothesis class of {A * p1 | A \in B} for some given class B of
representations for association norm. The estimation of A is therefore obtained
as a side-effect of a near optimal estimation of p(x|y). We then apply this
general framework to the problem of acquiring case-frame patterns. We assume
that both p(x) and A(x, y) for given y are representable by a model based on a
classification that exists within an existing thesaurus tree as a `cut,' and
hence p(x|y) is represented as the product of a pair of `tree cut models.' We
then devise an efficient algorithm that implements our general strategy. We
tested our method by using it to actually acquire case-frame patterns and
conducted disambiguation experiments using the acquired knowledge. The
experimental results show that our method improves upon existing methods.
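The core identity used here, A(x,y) = p(x,y)/(p(x)*p(y)), can be illustrated with plain maximum-likelihood estimates. The following is a minimal sketch under that simplification; it uses raw relative frequencies rather than the paper's MDL-based tree cut pair estimation, and the toy word pairs are invented for illustration:

```python
from collections import Counter

def association_norm(pairs):
    """Estimate A(x, y) = p(x, y) / (p(x) * p(y)) from observed
    (x, y) co-occurrences, using maximum-likelihood counts.
    Illustrative only: the paper estimates A via MDL over tree cut
    models in a thesaurus, not by raw relative frequencies."""
    n = len(pairs)
    joint = Counter(pairs)                  # counts for p(x, y)
    px = Counter(x for x, _ in pairs)       # marginal counts for p(x)
    py = Counter(y for _, y in pairs)       # marginal counts for p(y)
    return {
        (x, y): (c / n) / ((px[x] / n) * (py[y] / n))
        for (x, y), c in joint.items()
    }

# Hypothetical verb-noun co-occurrence data
pairs = [("eat", "food"), ("eat", "food"), ("eat", "car"),
         ("drive", "car"), ("drive", "car"), ("drive", "food")]
A = association_norm(pairs)
```

A value above 1 indicates that x and y co-occur more often than independence would predict, which is the signal the acquired case-frame patterns exploit.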
|
cmp-lg/9605030
|
Incremental Centering and Center Ambiguity
|
cmp-lg cs.CL
|
In this paper, we present a model of anaphor resolution within the framework
of the centering model. The consideration of an incremental processing mode
introduces the need to manage structural ambiguity at the center level. Hence,
the centering framework is further refined to account for local and global
parsing ambiguities which propagate up to the level of center representations,
yielding moderately adapted data structures for the centering algorithm.
|
cmp-lg/9605031
|
Efficient Algorithms for Parsing the DOP Model? A Reply to Joshua
Goodman
|
cmp-lg cs.CL
|
This note is a reply to Joshua Goodman's paper "Efficient Algorithms for
Parsing the DOP Model" (Goodman, 1996; cmp-lg/9604008). In his paper, Goodman
makes a number of claims about (my work on) the Data-Oriented Parsing model
(Bod, 1992-1996). This note shows that some of these claims must be mistaken.
|
cmp-lg/9605032
|
Synchronous Models of Language
|
cmp-lg cs.CL
|
In synchronous rewriting, the productions of two rewriting systems are paired
and applied synchronously in the derivation of a pair of strings. We present a
new synchronous rewriting system and argue that it can handle certain phenomena
that are not covered by existing synchronous systems. We also prove some
interesting formal/computational properties of our system.
|
cmp-lg/9605033
|
Notes on LR Parser Design
|
cmp-lg cs.CL
|
The design of an LR parser based on interleaving the atomic symbol processing
of a context-free backbone grammar with the full constraints of the underlying
unification grammar is described. The parser employs a set of reduced
constraints derived from the unification grammar in the LR parsing step. Gap
threading is simulated to reduce the applicability of empty productions.
|
cmp-lg/9605034
|
Handling Sparse Data by Successive Abstraction
|
cmp-lg cs.CL
|
A general, practical method for handling sparse data that avoids held-out
data and iterative reestimation is derived from first principles. It has been
tested on a part-of-speech tagging task and outperformed (deleted)
interpolation with context-independent weights, even when the latter used a
globally optimal parameter setting determined a posteriori.
|
cmp-lg/9605035
|
Example-Based Optimization of Surface-Generation Tables
|
cmp-lg cs.CL
|
A method is given that "inverts" a logic grammar and displays it from the
point of view of the logical form, rather than from that of the word string.
LR-compiling techniques are used to allow a recursive-descent generation
algorithm to perform "functor merging" much in the same way as an LR parser
performs prefix merging. This is an improvement on the semantic-head-driven
generator that results in a much smaller search space. The amount of semantic
lookahead can be varied, and appropriate tradeoff points between table size and
resulting nondeterminism can be found automatically. This can be done by
removing all spurious nondeterminism for input sufficiently close to the
examples of a training corpus, and large portions of it for other input, while
preserving completeness.
|
cmp-lg/9605036
|
Parsing Algorithms and Metrics
|
cmp-lg cs.CL
|
Many different metrics exist for evaluating parsing results, including
Viterbi, Crossing Brackets Rate, Zero Crossing Brackets Rate, and several
others. However, most parsing algorithms, including the Viterbi algorithm,
attempt to optimize the same metric, namely the probability of getting the
correct labelled tree. By choosing a parsing algorithm appropriate for the
evaluation metric, better performance can be achieved. We present two new
algorithms: the ``Labelled Recall Algorithm,'' which maximizes the expected
Labelled Recall Rate, and the ``Bracketed Recall Algorithm,'' which maximizes
the Bracketed Recall Rate. Experimental results are given, showing that the two
new algorithms have improved performance over the Viterbi algorithm on many
criteria, especially the ones that they optimize.
|
cmp-lg/9605037
|
Combining Trigram-based and Feature-based Methods for Context-Sensitive
Spelling Correction
|
cmp-lg cs.CL
|
This paper addresses the problem of correcting spelling errors that result in
valid, though unintended words (such as ``peace'' and ``piece'', or ``quiet''
and ``quite'') and also the problem of correcting particular word usage errors
(such as ``amount'' and ``number'', or ``among'' and ``between''). Such
corrections require contextual information and are not handled by conventional
spelling programs such as Unix `spell'. First, we introduce a method called
Trigrams that uses part-of-speech trigrams to encode the context. This method
uses a small number of parameters compared to previous methods based on word
trigrams. However, it is effectively unable to distinguish among words that
have the same part of speech. For this case, an alternative feature-based
method called Bayes performs better; but Bayes is less effective than Trigrams
when the distinction among words depends on syntactic constraints. A hybrid
method called Tribayes is then introduced that combines the best of the
previous two methods. The improvement in performance of Tribayes over its
components is verified experimentally. Tribayes is also compared with the
grammar checker in Microsoft Word, and is found to have substantially higher
performance.
|
cmp-lg/9605038
|
Efficient Normal-Form Parsing for Combinatory Categorial Grammar
|
cmp-lg cs.CL
|
Under categorial grammars that have powerful rules like composition, a simple
n-word sentence can have exponentially many parses. Generating all parses is
inefficient and obscures whatever true semantic ambiguities are in the input.
This paper addresses the problem for a fairly general form of Combinatory
Categorial Grammar, by means of an efficient, correct, and easy to implement
normal-form parsing technique. The parser is proved to find exactly one parse
in each semantic equivalence class of allowable parses; that is, spurious
ambiguity (as carefully defined) is shown to be both safely and completely
eliminated.
|
cmp-lg/9606001
|
A Bayesian hybrid method for context-sensitive spelling correction
|
cmp-lg cs.CL
|
Two classes of methods have been shown to be useful for resolving lexical
ambiguity. The first relies on the presence of particular words within some
distance of the ambiguous target word; the second uses the pattern of words and
part-of-speech tags around the target word. These methods have complementary
coverage: the former captures the lexical ``atmosphere'' (discourse topic,
tense, etc.), while the latter captures local syntax. Yarowsky has exploited
this complementarity by combining the two methods using decision lists. The
idea is to pool the evidence provided by the component methods, and to then
solve a target problem by applying the single strongest piece of evidence,
whatever type it happens to be. This paper takes Yarowsky's work as a starting
point, applying decision lists to the problem of context-sensitive spelling
correction. Decision lists are found, by and large, to outperform either
component method. However, it is found that further improvements can be
obtained by taking into account not just the single strongest piece of
evidence, but ALL the available evidence. A new hybrid method, based on
Bayesian classifiers, is presented for doing this, and its performance
improvements are demonstrated.
|
cmp-lg/9606002
|
Clustered Language Models with Context-Equivalent States
|
cmp-lg cs.CL
|
In this paper, a hierarchical context definition is added to an existing
clustering algorithm in order to increase its robustness. The resulting
algorithm, which clusters contexts and events separately, is used to experiment
with different ways of defining the context a language model takes into
account. The contexts range from standard bigram and trigram contexts to
part-of-speech five-grams. Although none of the models can compete directly with a
backoff trigram, they give up to 9\% improvement in perplexity when
interpolated with a trigram. Moreover, the modified version of the algorithm
leads to a performance increase over the original version of up to 12\%.
|
cmp-lg/9606003
|
Morphological Cues for Lexical Semantics
|
cmp-lg cs.CL
|
Most natural language processing tasks require lexical semantic information.
Automated acquisition of this information would thus increase the robustness
and portability of NLP systems. This paper describes an acquisition method
which makes use of fixed correspondences between derivational affixes and
lexical semantic information. One advantage of this method, and of other
methods that rely only on surface characteristics of language, is that the
necessary input is currently available.
|
cmp-lg/9606004
|
Classification in Feature-based Default Inheritance Hierarchies
|
cmp-lg cs.CL
|
Increasingly, inheritance hierarchies are being used to reduce redundancy in
natural language processing lexicons. Systems that utilize inheritance
hierarchies need to be able to insert words under the optimal set of classes in
these hierarchies. In this paper, we formalize this problem for feature-based
default inheritance hierarchies. Since the problem turns out to be NP-complete,
we present an approximation algorithm for it. We show that this algorithm is
efficient and that it performs well with respect to a number of standard
problems for default inheritance. A prototype implementation has been tested on
lexical hierarchies and it has produced encouraging results. The work presented
here is also relevant to other types of default hierarchies.
|
cmp-lg/9606005
|
Part-of-Speech-Tagging using morphological information
|
cmp-lg cs.CL
|
This paper presents the results of an experiment to decide the question of
authenticity of the supposedly spurious Rhesus, an Attic tragedy sometimes
credited to Euripides. The experiment involves the use of statistics in order to
test whether significant deviations in the distribution of word categories
between Rhesus and the other works of Euripides can or cannot be found. To
count frequencies of word categories in the corpus, a part-of-speech-tagger for
Greek has been implemented. Some special techniques for reducing the problem of
sparse data are used resulting in an accuracy of ca. 96.6%.
|
cmp-lg/9606006
|
Coordination in Tree Adjoining Grammars: Formalization and
Implementation
|
cmp-lg cs.CL
|
In this paper we show that an account for coordination can be constructed
using the derivation structures in a lexicalized Tree Adjoining Grammar (LTAG).
We present a notion of derivation in LTAGs that preserves the notion of fixed
constituency in the LTAG lexicon while providing the flexibility needed for
coordination phenomena. We also discuss the construction of a practical parser
for LTAGs that can handle coordination including cases of non-constituent
coordination.
|
cmp-lg/9606007
|
Word Sense Disambiguation using Conceptual Density
|
cmp-lg cs.CL
|
This paper presents a method for the resolution of lexical ambiguity of nouns
and its automatic evaluation over the Brown Corpus. The method relies on the
use of the wide-coverage noun taxonomy of WordNet and the notion of conceptual
distance among concepts, captured by a Conceptual Density formula developed for
this purpose. This fully automatic method requires no hand coding of lexical
entries, hand tagging of text nor any kind of training process. The results of
the experiments have been automatically evaluated against SemCor, the
sense-tagged version of the Brown Corpus.
|
cmp-lg/9606008
|
Coordination as a Direct Process
|
cmp-lg cs.CL
|
We propose a treatment of coordination based on the concepts of functor,
argument and subcategorization. Its formalization comprises two parts which are
conceptually independent. On one hand, we have extended the feature structure
unification to disjunctive and set values in order to check the compatibility
and the satisfiability of subcategorization requirements by structured
complements. On the other hand, we have considered the conjunction {\em et
(and)} as the head of the coordinate structure, so that coordinate structures
stem simply from the subcategorization specifications of {\em et} and the
general schemata of a head saturation. Both parts have been encoded within HPSG
using the same resource that is the subcategorization and its principle which
we have just extended.
|
cmp-lg/9606009
|
Modularizing Contexted Constraints
|
cmp-lg cs.CL
|
This paper describes a method for compiling a constraint-based grammar into a
potentially more efficient form for processing. This method takes dependent
disjunctions within a constraint formula and factors them into non-interacting
groups whenever possible by determining their independence. When a group of
dependent disjunctions is split into smaller groups, an exponential amount of
redundant information is reduced. At runtime, this means that an exponential
amount of processing can be saved as well. Since the performance of an
algorithm for processing constraints with dependent disjunctions is highly
determined by its input, the transformation presented in this paper should
prove beneficial for all such algorithms.
|
cmp-lg/9606010
|
An Information Structural Approach to Spoken Language Generation
|
cmp-lg cs.CL
|
This paper presents an architecture for the generation of spoken monologues
with contextually appropriate intonation. A two-tiered information structure
representation is used in the high-level content planning and sentence planning
stages of generation to produce efficient, coherent speech that makes certain
discourse relationships, such as explicit contrasts, appropriately salient. The
system is able to produce appropriate intonational patterns that cannot be
generated by other systems which rely solely on word class and given/new
distinctions.
|
cmp-lg/9606011
|
An Empirical Study of Smoothing Techniques for Language Modeling
|
cmp-lg cs.CL
|
We present an extensive empirical comparison of several smoothing techniques
in the domain of language modeling, including those described by Jelinek and
Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the
first time how factors such as training data size, corpus (e.g., Brown versus
Wall Street Journal), and n-gram order (bigram versus trigram) affect the
relative performance of these methods, which we measure through the
cross-entropy of test data. In addition, we introduce two novel smoothing
techniques, one a variation of Jelinek-Mercer smoothing and one a very simple
linear interpolation technique, both of which outperform existing methods.
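A generic Jelinek-Mercer interpolated bigram, of the kind this comparison covers, can be sketched as follows. This is a sketch under simplifying assumptions, not the paper's tuned variants: the interpolation weight `lam` is a placeholder constant, whereas in practice such weights are fit on held-out data:

```python
from collections import Counter

def jelinek_mercer_bigram(tokens, lam=0.7):
    """Bigram model smoothed by simple linear interpolation with the
    unigram distribution:
        p(w2 | w1) = lam * p_ML(w2 | w1) + (1 - lam) * p_ML(w2)
    `lam` is an illustrative constant; real smoothers estimate it
    (often per-context) on held-out data."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)

    def prob(w1, w2):
        p_uni = unigrams[w2] / n
        p_bi = bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0
        return lam * p_bi + (1 - lam) * p_uni

    return prob

prob = jelinek_mercer_bigram("the cat sat on the mat".split(), lam=0.7)
```

Because the unigram term is always nonzero for seen words, unseen bigrams such as ("the", "sat") still receive probability mass, which is what keeps the cross-entropy of test data finite.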
|
cmp-lg/9606012
|
An Efficient Inductive Unsupervised Semantic Tagger
|
cmp-lg cs.CL
|
We report our development of a simple but fast and efficient inductive
unsupervised semantic tagger for Chinese words. A POS hand-tagged corpus of
348,000 words is used. The corpus is tagged in two steps. First, possible
semantic tags are selected using a semantic dictionary (Tong Yi Ci Ci Lin), the
POS, and the conditional probability of a semantic tag given the POS, i.e., P(S|P).
final semantic tag is then assigned by considering the semantic tags before and
after the current word and the semantic-word conditional probability P(S|W)
derived from the first step. Semantic bigram probabilities P(S|S) are used in
the second step. Final manual checking shows that this simple but efficient
algorithm has a hit rate of 91%. The tagger tags 142 words per second, using a
120 MHz Pentium running FOXPRO. It runs about 2.3 times faster than a Viterbi
tagger.
|
cmp-lg/9606013
|
Relating Turing's Formula and Zipf's Law
|
cmp-lg cs.CL
|
An asymptote is derived from Turing's local reestimation formula for
population frequencies, and a local reestimation formula is derived from Zipf's
law for the asymptotic behavior of population frequencies. The two are shown to
be qualitatively different asymptotically, but nevertheless to be instances of
a common class of reestimation-formula-asymptote pairs, in which they
constitute the upper and lower bounds of the convergence region of the
cumulative of the frequency function, as rank tends to infinity. The results
demonstrate that Turing's formula is qualitatively different from the various
extensions to Zipf's law, and suggest that it smooths the frequency estimates
towards a geometric distribution.
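Turing's local reestimation formula referred to here is the Good-Turing rule r* = (r+1) * N_{r+1} / N_r, where N_r is the number of types seen exactly r times. A minimal sketch, assuming the raw N_r counts are used directly (practical smoothers also smooth the N_r themselves, since N_{r+1} is often zero for large r):

```python
from collections import Counter

def turing_reestimate(samples):
    """Good-Turing reestimated counts r* = (r + 1) * N_{r+1} / N_r.
    Falls back to the raw count r when N_{r+1} = 0; real
    implementations instead smooth the N_r curve first."""
    counts = Counter(samples)
    freq_of_freq = Counter(counts.values())   # N_r: types seen r times
    total = sum(counts.values())
    reest = {}
    for word, r in counts.items():
        n_r = freq_of_freq[r]
        n_r1 = freq_of_freq.get(r + 1, 0)
        reest[word] = (r + 1) * n_r1 / n_r if n_r1 else r
    # probability mass reserved for unseen events: N_1 / N
    p_unseen = freq_of_freq.get(1, 0) / total
    return reest, p_unseen
```

Note that singletons are discounted (their reestimated count drops below 1), and the freed mass N_1/N is what the formula reserves for events never observed at all.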
|
cmp-lg/9606014
|
Building Probabilistic Models for Natural Language
|
cmp-lg cs.CL
|
In this thesis, we investigate three problems involving the probabilistic
modeling of language: smoothing n-gram models, statistical grammar induction,
and bilingual sentence alignment. These three problems employ models at three
different levels of language; they involve word-based, constituent-based, and
sentence-based models, respectively. We describe techniques for improving the
modeling of language at each of these levels, and surpass the performance of
existing algorithms for each problem. We approach the three problems using
three different frameworks. We relate each of these frameworks to the Bayesian
paradigm, and show why each framework used was appropriate for the given
problem. Finally, we show how our research addresses two central issues in
probabilistic modeling: the sparse data problem and the problem of inducing
hidden structure.
|
cmp-lg/9606015
|
Stabilizing the Richardson Algorithm by Controlling Chaos
|
cmp-lg chao-dyn comp-gas cond-mat cs.CL nlin.CD nlin.CG
|
By viewing the operations of the Richardson purification algorithm as a
discrete time dynamical process, we propose a method to overcome the
instability of the algorithm by controlling chaos. We present theoretical
analysis and numerical results on the behavior and performance of the
stabilized algorithm.
|
cmp-lg/9606016
|
A Probabilistic Disambiguation Method Based on Psycholinguistic
Principles
|
cmp-lg cs.CL
|
We address the problem of structural disambiguation in syntactic parsing. In
psycholinguistics, a number of principles of disambiguation have been proposed,
notably the Lexical Preference Rule (LPR), the Right Association Principle
(RAP), and the Attach Low and Parallel Principle (ALPP) (an extension of RAP).
We argue that in order to improve disambiguation results it is necessary to
implement these principles on the basis of a probabilistic methodology. We
define a `three-word probability' for implementing LPR, and a `length
probability' for implementing RAP and ALPP. Furthermore, we adopt the
`back-off' method to combine these two types of probabilities. Our experimental
results indicate our method to be effective, attaining an accuracy of 89.2%.
|
cmp-lg/9606017
|
With raised eyebrows or the eyebrows raised ? A Neural Network Approach
to Grammar Checking for Definiteness
|
cmp-lg cs.CL
|
In this paper, we use a feature model of the semantics of plural determiners
to present an approach to grammar checking for definiteness. Using neural
network techniques, a semantics -- morphological category mapping was learned.
We then applied a textual encoding technique to the 125 occurrences of the
relevant category in a 10 000 word narrative text and learned a surface --
semantics mapping. By applying the learned generation function to the newly
generated representations, we achieved a correct category assignment in many
cases (87 %). These results are considerably better than a direct surface
categorization approach (54 %), with a baseline (always guessing the dominant
category) of 60 %. We discuss how these results could be used in
multilingual NLP applications.
|
cmp-lg/9606018
|
Compilation of Weighted Finite-State Transducers from Decision Trees
|
cmp-lg cs.CL
|
We report on a method for compiling decision trees into weighted finite-state
transducers. The key assumptions are that the tree predictions specify how to
rewrite symbols from an input string, and the decision at each tree node is
stateable in terms of regular expressions on the input string. Each leaf node
can then be treated as a separate rule where the left and right contexts are
constructable from the decisions made traversing the tree from the root to the
leaf. These rules are compiled into transducers using the weighted
rewrite-rule compilation algorithm described in (Mohri and Sproat, 1996).
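The leaf-to-rule extraction described above can be sketched with a toy tree. The node representation (`test`, `yes`/`no` children, `rewrite` at leaves) is an assumption for illustration only; the paper's trees and the subsequent weighted transducer compilation are more elaborate.

```python
def tree_to_rules(node, path=()):
    """Walk a (hypothetical) decision tree whose internal nodes test
    a regular-expression context on the input and whose leaves predict
    rewrites. Each leaf yields one rule: the conjunction of decisions
    on the root-to-leaf path, paired with the leaf's rewrite."""
    if "rewrite" in node:                      # leaf: emit one rule
        return [(path, node["rewrite"])]
    test = node["test"]                        # a context regex (assumed)
    return (tree_to_rules(node["yes"], path + (test,)) +
            tree_to_rules(node["no"], path + (f"not({test})",)))

# toy tree: flap /t/ after a vowel, otherwise leave it unchanged
tree = {"test": "V _",
        "yes": {"rewrite": "t -> d"},
        "no": {"rewrite": "t -> t"}}
rules = tree_to_rules(tree)
print(rules)  # two leaves -> two context-dependent rules
```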
|
cmp-lg/9606019
|
Computational Complexity of Probabilistic Disambiguation by means of
Tree-Grammars
|
cmp-lg cs.CL
|
This paper studies the computational complexity of disambiguation under
probabilistic tree-grammars and context-free grammars. It presents a proof that
the following problems are NP-hard: computing the Most Probable Parse (MPP)
from a sentence or from a word-graph, and computing the Most Probable Sentence
(MPS) from a word-graph. The NP-hardness of computing the MPS from a word-graph
also holds for Stochastic Context-Free Grammars. Consequently, the existence of
deterministic polynomial-time algorithms for solving these disambiguation
problems is a highly improbable event.
|
cmp-lg/9606020
|
Computing Optimal Descriptions for Optimality Theory Grammars with
Context-Free Position Structures
|
cmp-lg cs.CL
|
This paper describes an algorithm for computing optimal structural
descriptions for Optimality Theory grammars with context-free position
structures. This algorithm extends Tesar's dynamic programming approach [Tesar
1994][Tesar 1995] to computing optimal structural descriptions from regular to
context-free structures. The generalization to context-free structures creates
several complications, all of which are overcome without compromising the core
dynamic programming approach. The resulting algorithm has a time complexity
cubic in the length of the input, and is applicable to grammars with universal
constraints that exhibit context-free locality.
|
cmp-lg/9606021
|
An Iterative Algorithm to Build Chinese Language Models
|
cmp-lg cs.CL
|
We present an iterative procedure to build a Chinese language model (LM). We
segment Chinese text into words based on a word-based Chinese language model.
However, the construction of a Chinese LM itself requires word boundaries. To
get out of the chicken-and-egg problem, we propose an iterative procedure that
alternates two operations: segmenting text into words and building an LM.
Starting with an initial segmented corpus and an LM based upon it, we use a
Viterbi-like algorithm to segment another set of data. Then, we build an LM
based on the second set and use the resulting LM to segment again the first
corpus. The alternating procedure provides a self-organized way for the
segmenter to detect automatically unseen words and correct segmentation errors.
Our preliminary experiment shows that the alternating procedure not only
improves the accuracy of our segmentation, but discovers unseen words
surprisingly well. The resulting word-based LM has a perplexity of 188 for a
general Chinese corpus.
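The alternating procedure can be sketched with a unigram model; this is a simplification under assumed data structures (a log-probability dictionary and a fixed unseen-word penalty), not the paper's actual LM.

```python
import math
from collections import Counter

def segment(text, logp, max_len=4):
    """Viterbi-style segmentation of an unspaced string under a
    unigram word log-probability table (the paper uses a richer LM)."""
    n = len(text)
    best = [0.0] + [-math.inf] * n   # best score ending at each position
    back = [0] * (n + 1)             # backpointers for recovery
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            score = best[j] + logp.get(text[j:i], -20.0)  # unseen penalty
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], n
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return words[::-1]

def build_lm(corpus_words):
    """Re-estimate the unigram LM from a segmented corpus."""
    counts = Counter(corpus_words)
    total = sum(counts.values())
    return {w: math.log(c / total) for w, c in counts.items()}

# alternate the two operations, as in the paper's loop:
# segment with the current LM, then rebuild the LM from the result
lm = build_lm(["ab", "cd", "ab"])
print(segment("abcdab", lm))  # ['ab', 'cd', 'ab']
```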
|
cmp-lg/9606022
|
Two Questions about Data-Oriented Parsing
|
cmp-lg cs.CL
|
In this paper I present ongoing work on the data-oriented parsing (DOP)
model. In previous work, DOP was tested on a cleaned-up set of analyzed
part-of-speech strings from the Penn Treebank, achieving excellent test
results. This left, however, two important questions unanswered: (1) how does
DOP perform if tested on unedited data, and (2) how can DOP be used for parsing
word strings that contain unknown words? This paper addresses these questions.
We show that parse results on unedited data are worse than on cleaned-up data,
although very competitive if compared to other models. As to the parsing of
word strings, we show that the hardness of the problem does not so much depend
on unknown words, but on previously unseen lexical categories of known words.
We give a novel method for parsing these words by estimating the probabilities
of unknown subtrees. The method is of general interest since it shows that good
performance can be obtained without the use of a part-of-speech tagger. To the
best of our knowledge, our method outperforms other statistical parsers tested
on Penn Treebank word strings.
|
cmp-lg/9606023
|
A Robust System for Natural Spoken Dialogue
|
cmp-lg cs.CL
|
This paper describes a system that leads us to believe in the feasibility of
constructing natural spoken dialogue systems in task-oriented domains. It
specifically addresses the issue of robust interpretation of speech in the
presence of recognition errors. Robustness is achieved by a combination of
statistical error post-correction, syntactically- and semantically-driven
robust parsing, and extensive use of the dialogue context. We present an
evaluation of the system using time-to-completion and the quality of the final
solution that suggests that most native speakers of English can use the system
successfully with virtually no training.
|
cmp-lg/9606024
|
A Data-Oriented Approach to Semantic Interpretation
|
cmp-lg cs.CL
|
In Data-Oriented Parsing (DOP), an annotated language corpus is used as a
stochastic grammar. The most probable analysis of a new input sentence is
constructed by combining sub-analyses from the corpus in the most probable way.
This approach has been successfully used for syntactic analysis, using corpora
with syntactic annotations such as the Penn Treebank. If a corpus with
semantically annotated sentences is used, the same approach can also generate
the most probable semantic interpretation of an input sentence. The present
paper explains this semantic interpretation method, and summarizes the results
of a preliminary experiment. Semantic annotations were added to the syntactic
annotations of most of the sentences of the ATIS corpus. A data-oriented
semantic interpretation algorithm was successfully tested on this semantically
enriched corpus.
|
cmp-lg/9606025
|
Two Sources of Control over the Generation of Software Instructions
|
cmp-lg cs.CL
|
This paper presents an analysis conducted on a corpus of software
instructions in French in order to establish whether task structure elements
(the procedural representation of the users' tasks) are alone sufficient to
control the grammatical resources of a text generator. We show that the
construct of genre provides a useful additional source of control enabling us
to resolve undetermined cases.
|
cmp-lg/9606026
|
An Efficient Compiler for Weighted Rewrite Rules
|
cmp-lg cs.CL
|
Context-dependent rewrite rules are used in many areas of natural language
and speech processing. Work in computational phonology has demonstrated that,
given certain conditions, such rewrite rules can be represented as finite-state
transducers (FSTs). We describe a new algorithm for compiling rewrite rules
into FSTs. We show the algorithm to be simpler and more efficient than existing
algorithms. Further, many of our applications demand the ability to compile
weighted rules into weighted FSTs, transducers generalized by providing
transitions with weights. We have extended the algorithm to allow for this.
|
cmp-lg/9606027
|
Linguistic Structure as Composition and Perturbation
|
cmp-lg cs.CL
|
This paper discusses the problem of learning language from unprocessed text
and speech signals, concentrating on the problem of learning a lexicon. In
particular, it argues for a representation of language in which linguistic
parameters like words are built by perturbing a composition of existing
parameters. The power of this representation is demonstrated by several
examples in text segmentation and compression, acquisition of a lexicon from
raw speech, and the acquisition of mappings between text and artificial
representations of meaning.
|
cmp-lg/9606028
|
Maximizing Top-down Constraints for Unification-based Systems
|
cmp-lg cs.CL
|
A left-corner parsing algorithm with top-down filtering has been reported to
show very efficient performance for unification-based systems. However, due to
the nontermination of parsing with left-recursive grammars, top-down
constraints must be weakened. In this paper, a general method of maximizing
top-down constraints is proposed. The method provides a procedure to
dynamically compute *restrictor*, a minimum set of features involved in an
infinite loop for every propagation path; thus top-down constraints are
maximally propagated.
|
cmp-lg/9606029
|
Directed Replacement
|
cmp-lg cs.CL
|
This paper introduces to the finite-state calculus a family of directed
replace operators. In contrast to the simple replace expression, UPPER ->
LOWER, defined in Karttunen (ACL-95), the new directed version, UPPER @->
LOWER, yields an unambiguous transducer if the lower language consists of a
single string. It transduces the input string from left to right, making only
the longest possible replacement at each point.
A new type of replacement expression, UPPER @-> PREFIX ... SUFFIX, yields a
transducer that inserts text around strings that are instances of UPPER. The
symbol ... denotes the matching part of the input which itself remains
unchanged. PREFIX and SUFFIX are regular expressions describing the insertions.
Expressions of the type UPPER @-> PREFIX ... SUFFIX may be used to compose a
deterministic parser for a ``local grammar'' in the sense of Gross (1989).
Other useful applications of directed replacement include tokenization and
filtering of text streams.
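The left-to-right, longest-match behavior of UPPER @-> PREFIX ... SUFFIX can be illustrated with Python regexes. This is only a sketch of the semantics; the paper constructs an actual finite-state transducer, not a regex scanner.

```python
import re

def directed_replace(pattern, prefix, suffix, text):
    """Left-to-right, longest-match bracketing in the spirit of
    UPPER @-> PREFIX ... SUFFIX. At each position, take the longest
    match of `pattern` (Python quantifiers are greedy), wrap it in
    prefix/suffix, and resume after it; other text passes unchanged."""
    out, i = [], 0
    rx = re.compile(pattern)
    while i < len(text):
        m = rx.match(text, i)
        if m and m.end() > m.start():      # guard against empty matches
            out.append(prefix + m.group(0) + suffix)
            i = m.end()
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

print(directed_replace(r"[0-9]+", "<num>", "</num>", "room 42, floor 7"))
# room <num>42</num>, floor <num>7</num>
```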
|
cmp-lg/9606030
|
Minimizing Manual Annotation Cost In Supervised Training From Corpora
|
cmp-lg cs.CL
|
Corpus-based methods for natural language processing often use supervised
training, requiring expensive manual annotation of training corpora. This paper
investigates methods for reducing annotation cost by {\it sample selection}. In
this approach, during training the learning program examines many unlabeled
examples and selects for labeling (annotation) only those that are most
informative at each stage. This avoids redundantly annotating examples that
contribute little new information. This paper extends our previous work on {\it
committee-based sample selection} for probabilistic classifiers. We describe a
family of methods for committee-based sample selection, and report experimental
results for the task of stochastic part-of-speech tagging. We find that all
variants achieve a significant reduction in annotation cost, though their
computational efficiency differs. In particular, the simplest method, which has
no parameters to tune, gives excellent results. We also show that sample
selection yields a significant reduction in the size of the model used by the
tagger.
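Committee-based sample selection can be sketched as follows: score each unlabeled example by how much a committee of classifiers disagrees on it, and send only the highest-disagreement examples to the annotator. The vote-entropy measure and the toy committee below are illustrative assumptions, not the paper's exact variants.

```python
import math
from collections import Counter

def vote_entropy(labels):
    """Disagreement of a committee's votes on one example:
    entropy of the empirical vote distribution (0 = full agreement)."""
    counts = Counter(labels)
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in counts.values())

def select_informative(committee, examples, budget):
    """Label each unlabeled example with every committee member
    (arbitrary callables here) and keep the `budget` examples the
    committee disagrees on most; only these get manual annotation."""
    scored = [(vote_entropy([m(x) for m in committee]), x) for x in examples]
    scored.sort(key=lambda t: -t[0])       # most informative first
    return [x for _, x in scored[:budget]]

# toy committee of three "classifiers" that disagree on odd numbers
committee = [lambda x: x % 2, lambda x: 0, lambda x: x % 2]
picked = select_informative(committee, [1, 2, 3, 4], budget=2)
print(picked)  # [1, 3]
```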
|
cmp-lg/9606031
|
Research on Architectures for Integrated Speech/Language Systems in
Verbmobil
|
cmp-lg cs.CL
|
The German joint research project Verbmobil (VM) aims at the development of a
speech to speech translation system. This paper reports on research done in our
group which belongs to Verbmobil's subproject on system architectures (TP15).
Our specific research areas are the construction of parsers for spontaneous
speech, the parallelization of parsing, and the development of a flexible
communication architecture with distributed control.
|
cmp-lg/9606032
|
Integrating Multiple Knowledge Sources to Disambiguate Word Sense: An
Exemplar-Based Approach
|
cmp-lg cs.CL
|
In this paper, we present a new approach for word sense disambiguation (WSD)
using an exemplar-based learning algorithm. This approach integrates a diverse
set of knowledge sources to disambiguate word sense, including part of speech
of neighboring words, morphological form, the unordered set of surrounding
words, local collocations, and verb-object syntactic relation. We tested our
WSD program, named {\sc Lexas}, on a common data set used in previous work,
as well as on a large sense-tagged corpus that we separately constructed.
{\sc Lexas} achieves a higher accuracy on the common data set, and performs
better than the most frequent heuristic on the highly ambiguous words in the
large corpus tagged with the refined senses of {\sc WordNet}.
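An exemplar-based disambiguator of this kind can be sketched as nearest-neighbor classification over feature vectors. The distance measure and the feature names below (`pos_left`, `colloc`) are illustrative assumptions, not {\sc Lexas}'s actual feature set or metric.

```python
from collections import Counter

def overlap_distance(query, exemplar):
    """Count the features on which two examples disagree
    (a simple stand-in for an exemplar-based distance metric)."""
    return sum(1 for k in query if query[k] != exemplar.get(k))

def knn_sense(exemplars, query, k=3):
    """Assign the majority sense among the k nearest sense-tagged
    exemplars of the ambiguous word."""
    ranked = sorted(exemplars, key=lambda e: overlap_distance(query, e["feats"]))
    votes = Counter(e["sense"] for e in ranked[:k])
    return votes.most_common(1)[0][0]

# toy sense-tagged exemplars for the word "bank"
exemplars = [
    {"feats": {"pos_left": "DET", "colloc": "river bank"}, "sense": "shore"},
    {"feats": {"pos_left": "DET", "colloc": "bank loan"}, "sense": "finance"},
    {"feats": {"pos_left": "PREP", "colloc": "bank loan"}, "sense": "finance"},
]
print(knn_sense(exemplars, {"pos_left": "DET", "colloc": "bank loan"}, k=1))
# finance
```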
|
cmp-lg/9607001
|
GramCheck: A Grammar and Style Checker
|
cmp-lg cs.CL
|
This paper presents a grammar and style checker demonstrator for Spanish and
Greek native writers developed within the project GramCheck. Besides a brief
grammar error typology for Spanish, a linguistically motivated approach to
detection and diagnosis is presented, based on the generalized use of PROLOG
extensions to highly typed unification-based grammars. The demonstrator,
currently including full coverage for agreement errors and certain
head-argument relation issues, also provides correction by means of an
analysis-transfer-synthesis cycle. Finally, future extensions to the current
system are discussed.
|