Each record below gives an arXiv id, a title, one or more arXiv category labels, and an abstract.
cs/9905001
Supervised Grammar Induction Using Training Data with Limited Constituent Information
cs.CL
Corpus-based grammar induction generally relies on hand-parsed training data to learn the structure of the language. Unfortunately, building large annotated corpora is prohibitively expensive. This work aims to improve the induction strategy when there are few labels in the training data. We show that the most informative linguistic constituents are the higher nodes in the parse trees, typically denoting complex noun phrases and sentential clauses. They account for only 20% of all constituents. For inducing grammars from sparsely labeled training data (e.g., only higher-level constituent labels), we propose an adaptation strategy, which produces grammars that parse almost as well as grammars induced from fully labeled corpora. Our results suggest that for a partial parser to replace human annotators, it must be able to automatically extract higher-level constituents rather than base noun phrases.
cs/9905003
Collective Choice Theory in Collaborative Computing
cs.MA cs.DC
This paper presents some fundamental collective choice theory for information system designers, particularly those working in the field of computer-supported cooperative work. It focuses on Arrow's Possibility and Impossibility theorems, which form the fundamental boundary on the efficacy of collective choice: voting and selection procedures. It restates the conditions that Arrow placed on collective choice functions in more rigorous second-order logic, which could be used as a set of test conditions for implementations, and presents a useful probabilistic result for analyzing votes on issue pairs. It also describes some simple collective choice functions. Finally, it discusses how enterprises should approach putting their resources under collective control, outlining a superstructure of performative agents to carry out this function and the distributed-processing technology that would be needed.
cs/9905004
Using Collective Intelligence to Route Internet Traffic
cs.LG adap-org cond-mat.stat-mech cs.DC cs.NI nlin.AO
A COllective INtelligence (COIN) is a set of interacting reinforcement learning (RL) algorithms designed in an automated fashion so that their collective behavior optimizes a global utility function. We summarize the theory of COINs, then present experiments using that theory to design COINs to control internet traffic routing. These experiments indicate that COINs outperform all previously investigated RL-based, shortest path routing algorithms.
cs/9905005
General Principles of Learning-Based Multi-Agent Systems
cs.MA adap-org cond-mat.stat-mech cs.DC cs.LG nlin.AO
We consider the problem of how to design large decentralized multi-agent systems (MAS's) in an automated fashion, with little or no hand-tuning. Our approach has each agent run a reinforcement learning algorithm. This converts the problem into one of how to automatically set/update the reward functions for each of the agents so that the global goal is achieved. In particular we do not want the agents to ``work at cross-purposes'' as far as the global goal is concerned. We use the term artificial COllective INtelligence (COIN) to refer to systems that embody solutions to this problem. In this paper we present a summary of a mathematical framework for COINs. We then investigate the real-world applicability of the core concepts of that framework via two computer experiments: we show that our COINs perform near optimally in a difficult variant of Arthur's bar problem (and in particular avoid the tragedy of the commons for that problem), and we also illustrate optimal performance for our COINs in the leader-follower problem.
cs/9905007
An Efficient, Probabilistically Sound Algorithm for Segmentation and Word Discovery
cs.CL cs.LG
This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word-order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on words; instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that this algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances. Keywords: Bayesian grammar induction, probability models, minimum description length (MDL), unsupervised learning, cognitive modeling, language acquisition, segmentation
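As a rough illustration of lexicon-driven segmentation (a minimal sketch, not the authors' model: their algorithm scores whole-corpus hypotheses rather than fixing word probabilities in advance), a Viterbi-style dynamic program recovers the most probable word sequence under a given unigram lexicon:

```python
import math

def segment(text, lexicon_probs, unknown_penalty=1e-9, max_len=20):
    """Viterbi-style segmentation: choose word boundaries maximizing
    the product of unigram word probabilities. Illustrative only."""
    n = len(text)
    best = [0.0] + [-math.inf] * n   # best log-prob of text[:i]
    back = [0] * (n + 1)             # start index of the last word
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            p = lexicon_probs.get(text[j:i], unknown_penalty)
            if best[j] + math.log(p) > best[i]:
                best[i], back[i] = best[j] + math.log(p), j
    words, i = [], n
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return list(reversed(words))

print(segment("thedogbarks", {"the": 0.5, "dog": 0.25, "barks": 0.25}))
# ['the', 'dog', 'barks']
```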
cs/9905008
Inducing a Semantically Annotated Lexicon via EM-Based Clustering
cs.CL cs.AI cs.LG
We present a technique for automatic induction of slot annotations for subcategorization frames, based on induction of hidden classes in the EM framework of statistical estimation. The models are empirically evaluated by a general decision test. Induction of slot labeling for subcategorization frames is accomplished by a further application of EM, and applied experimentally to frame observations derived from parsing large corpora. We outline an interpretation of the learned representations as theoretical-linguistic decompositional lexical entries.
cs/9905009
Inside-Outside Estimation of a Lexicalized PCFG for German
cs.CL cs.LG
The paper describes an extensive experiment in inside-outside estimation of a lexicalized probabilistic context-free grammar for German verb-final clauses. Grammar and formalism features which make the experiment feasible are described. Successive models are evaluated on precision and recall of phrase markup.
cs/9905010
Statistical Inference and Probabilistic Modelling for Constraint-Based NLP
cs.CL cs.LG
We present a probabilistic model for constraint-based grammars and a method for estimating the parameters of such models from incomplete, i.e., unparsed data. Whereas methods exist to estimate the parameters of probabilistic context-free grammars from incomplete data (Baum 1970), so far for probabilistic grammars involving context-dependencies only parameter estimation techniques from complete, i.e., fully parsed data have been presented (Abney 1997). However, complete-data estimation requires labor-intensive, error-prone, and grammar-specific hand-annotating of large language corpora. We present a log-linear probability model for constraint logic programming, and a general algorithm to estimate the parameters of such models from incomplete data by extending the estimation algorithm of Della-Pietra, Della-Pietra, and Lafferty (1997) to incomplete data settings.
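For reference, a log-linear model of the kind described assigns each analysis x a probability of the standard form (notation ours, following Della-Pietra, Della-Pietra, and Lafferty 1997):

$$ p_\lambda(x) = \frac{1}{Z_\lambda} \exp\Big(\sum_{i=1}^{n} \lambda_i f_i(x)\Big), \qquad Z_\lambda = \sum_{x'} \exp\Big(\sum_{i=1}^{n} \lambda_i f_i(x')\Big), $$

where the $f_i$ are feature functions over analyses and the $\lambda_i$ are the parameters to be estimated from the incomplete data.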
cs/9905011
Ensembles of Radial Basis Function Networks for Spectroscopic Detection of Cervical Pre-Cancer
cs.NE cs.LG q-bio
The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve a concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, non-invasively and quantitatively probes the biochemical and morphological changes that occur in pre-cancerous tissue. A multivariate statistical algorithm was used to extract clinically useful information from tissue spectra acquired from 361 cervical sites from 95 patients at 337, 380 and 460 nm excitation wavelengths. The multivariate statistical analysis was also employed to reduce the number of fluorescence excitation-emission wavelength pairs required to discriminate healthy tissue samples from pre-cancerous tissue samples. The use of connectionist methods such as multi-layered perceptrons, radial basis function (RBF) networks, and ensembles of such networks was investigated. RBF ensemble algorithms based on fluorescence spectra potentially provide automated, near real-time implementation of pre-cancer detection in the hands of non-experts. The results are more reliable, direct and accurate than those achieved by either human experts or multivariate statistical algorithms.
cs/9905012
Linear and Order Statistics Combiners for Pattern Classification
cs.NE cs.LG
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the "added" error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and, in general, the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
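A quick numerical check of the variance-reduction result (a minimal sketch assuming unbiased classifiers with independent Gaussian boundary offsets, the idealized case analyzed in the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

def boundary_variance(n_classifiers, n_trials=20000, sigma=1.0):
    # Each classifier places the boundary at the Bayes optimum plus
    # independent zero-mean noise; averaging N such boundaries
    # should shrink the variance (hence the "added" error) by N.
    offsets = rng.normal(0.0, sigma, size=(n_trials, n_classifiers))
    return offsets.mean(axis=1).var()

for n in (1, 2, 4, 8, 16):
    print(n, round(boundary_variance(n), 4))  # roughly sigma**2 / n
```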
cs/9905013
Robust Combining of Disparate Classifiers through Order Statistics
cs.LG cs.CV cs.NE
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In the typical setting investigated until now, each classifier is trained on data taken or resampled from a common data set, or (almost) randomly selected subsets thereof, and thus experiences similar quality of training data. However, in certain situations where data is acquired and analyzed on-line at several geographically distributed locations, the quality of data may vary substantially, leading to large discrepancies in performance of individual classifiers. In this article we introduce and investigate a family of classifiers based on order statistics, for robust handling of such cases. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when such combiners are used. We show analytically that the selection of the median, the maximum and, in general, the $i^{th}$ order statistic improves classification performance. Furthermore, we introduce the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that they are quite beneficial in the presence of outliers or uneven classifier performance. Experimental results on several public domain data sets corroborate these findings.
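To make the combiners concrete, here is a minimal sketch of median and trimmed combining of per-class classifier scores (interfaces and names are illustrative, not the article's):

```python
import numpy as np

def combine(outputs, method="median", trim=1):
    """outputs: array of shape (n_classifiers, n_classes).
    Sorting down the classifier axis yields the order statistics;
    'trim' drops the `trim` lowest and highest scores per class
    before averaging (a simplified trimmed combiner)."""
    outputs = np.sort(np.asarray(outputs), axis=0)
    if method == "median":
        return outputs[outputs.shape[0] // 2]
    if method == "trim":
        return outputs[trim:-trim].mean(axis=0)
    raise ValueError(method)

# Two reasonable classifiers plus one badly trained outlier:
scores = [[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]]
print(combine(scores, "median"))  # [0.7 0.3] (outlier ignored)
```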
cs/9905014
Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition
cs.LG
This paper presents the MAXQ approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this non-hierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.
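The heart of the decomposition fits in a few lines: the value of a composite subtask is the value of its best child action plus a learned completion term (a schematic rendering; the tables below stand in for what MAXQ-Q actually learns):

```python
def V(task, s, primitive_value, C, children):
    """MAXQ decomposed value function (evaluation only):
    V(task, s) = primitive reward estimate for primitive actions;
    V(task, s) = max_a [ V(a, s) + C(task, s, a) ] otherwise,
    where C is the completion function for `task` after doing a."""
    if task not in children:                 # primitive action
        return primitive_value[task][s]
    return max(V(a, s, primitive_value, C, children) + C[(task, s, a)]
               for a in children[task])
```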
cs/9905015
State Abstraction in MAXQ Hierarchical Reinforcement Learning
cs.LG
Many researchers have explored methods for hierarchical reinforcement learning (RL) with temporal abstractions, in which abstract actions are defined that can perform many primitive actions before terminating. However, little is known about learning with state abstractions, in which aspects of the state space are ignored. In previous work, we developed the MAXQ method for hierarchical RL. In this paper, we define five conditions under which state abstraction can be combined with the MAXQ value function decomposition. We prove that the MAXQ-Q learning algorithm converges under these conditions and show experimentally that state abstraction is important for the successful application of MAXQ-Q learning.
cs/9905016
Programs with Stringent Performance Objectives Will Often Exhibit Chaotic Behavior
cs.CE cs.CC
Software for the resolution of certain kinds of problems, namely those that rate high in the Stringent Performance Objectives adjustment factor (IFPUG scheme), can be described using a combination of game theory and autonomous systems. From this description it can be shown that some of those problems exhibit chaotic behavior, an important fact in understanding the functioning of the related software. As a relatively simple example, it is shown that chess exhibits chaotic behavior in its configuration space. This implies that static evaluators in chess programs have intrinsic limitations.
cs/9906001
On Bounded-Weight Error-Correcting Codes
cs.IT math.IT
This paper computationally obtains optimal bounded-weight, binary, error-correcting codes for a variety of distance bounds and dimensions. We compare the sizes of our codes to the sizes of optimal constant-weight, binary, error-correcting codes, and evaluate the differences.
cs/9906002
The Symbol Grounding Problem
cs.AI
How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) "iconic representations," which are analogs of the proximal sensory projections of distal objects and events, and (2) "categorical representations," which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) "symbolic representations," grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., "An X is a Y that is Z").
cs/9906003
The syntactic processing of particles in Japanese spoken language
cs.CL
Particles fulfill several distinct central roles in the Japanese language. They can mark arguments as well as adjuncts, and can be functional or have semantic functions. There is, however, no straightforward mapping from particles to functions, as, e.g., GA can mark the subject, the object or an adjunct of a sentence. Particles can co-occur. Verbal arguments that could be identified by particles can be eliminated in the Japanese sentence. And finally, in spoken language particles are often omitted. A proper treatment of particles is thus necessary to make an analysis of Japanese sentences possible. Our treatment is based on an empirical investigation of 800 dialogues. We set up a type hierarchy of particles motivated by their subcategorizational and modificational behaviour. This type hierarchy is part of the Japanese syntax in VERBMOBIL.
cs/9906004
Cascaded Grammatical Relation Assignment
cs.CL cs.LG
In this paper we discuss cascaded Memory-Based grammatical relations assignment. In the first stages of the cascade, we find chunks of several types (NP, VP, ADJP, ADVP, PP) and label them with their adverbial function (e.g. local, temporal). In the last stage, we assign grammatical relations to pairs of chunks. We studied the effect of adding several levels to this cascaded classifier and found that even the weaker-performing chunkers enhanced the performance of the relation finder.
cs/9906005
Memory-Based Shallow Parsing
cs.CL cs.LG
We present a memory-based learning (MBL) approach to shallow parsing in which POS tagging, chunking, and identification of syntactic relations are formulated as memory-based modules. The experiments reported in this paper show competitive results: the F-values on the Wall Street Journal (WSJ) treebank are 93.8% for NP chunking, 94.7% for VP chunking, 77.1% for subject detection, and 79.0% for object detection.
cs/9906006
Learning Efficient Disambiguation
cs.CL cs.AI
This dissertation analyses the computational properties of current performance-models of natural language parsing, in particular Data Oriented Parsing (DOP), points out some of their major shortcomings and suggests suitable solutions. It provides proofs that various problems of probabilistic disambiguation are NP-Complete under instances of these performance-models, and it argues that none of these models accounts for attractive efficiency properties of human language processing in limited domains, e.g. that frequent inputs are usually processed faster than infrequent ones. The central hypothesis of this dissertation is that these shortcomings can be eliminated by specializing the performance-models to the limited domains. The dissertation addresses "grammar and model specialization" and presents a new framework, the Ambiguity-Reduction Specialization (ARS) framework, that formulates the necessary and sufficient conditions for successful specialization. The framework is instantiated into specialization algorithms and applied to specializing DOP. Novelties of these learning algorithms are that 1) they limit the hypothesis space to include only "safe" models, 2) they are expressed as constrained optimization formulae that minimize the entropy of the training tree-bank given the specialized grammar, under the constraint that the size of the specialized model does not exceed a predefined maximum, and 3) they enable integrating the specialized model with the original one in a complementary manner. The dissertation provides experiments with initial implementations and compares the resulting Specialized DOP (SDOP) models to the original DOP models with encouraging results.
cs/9906009
Cascaded Markov Models
cs.CL
This paper presents a new approach to partial parsing of context-free structures. The approach is based on Markov Models. Each layer of the resulting structure is represented by its own Markov Model, and output of a lower layer is passed as input to the next higher layer. An empirical evaluation of the method yields very good results for NP/PP chunking of German newspaper texts.
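Schematically, the cascade is just function composition over layers (a sketch assuming each trained Markov model is exposed as a tagging function; the actual system trains one model per layer):

```python
def cascaded_parse(tokens, layers):
    """Each layer maps a symbol sequence to a sequence of chunk
    labels, which becomes the input of the next higher layer;
    `layers` is a list of per-layer decoders (illustrative)."""
    analyses, sequence = [tokens], tokens
    for decode in layers:            # bottom-up through the cascade
        sequence = decode(sequence)
        analyses.append(sequence)
    return analyses                  # one level of structure per model
```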
cs/9906010
Predicate Logic with Definitions
cs.LO cs.AI
Predicate Logic with Definitions (PLD or D-logic) is a modification of first-order logic intended mostly for practical formalization of mathematics. The main syntactic constructs of D-logic are terms, formulas and definitions. A definition is a definition of variables, a definition of constants, or a composite definition (D-logic also has abbreviation definitions, called abbreviations). Definitions can be used inside terms and formulas. This possibility reduces the need to introduce new quantifier-like names. Composite definitions allow constructing new definitions from existing ones.
cs/9906011
A Newton method without evaluation of nonlinear function values
cs.CE cs.NA math.NA
The present author recently proposed and proved a relationship theorem between nonlinear polynomial equations and the corresponding Jacobian matrix. By using this theorem, this paper derives a Newton iterative formula that does not require the evaluation of nonlinear function values in the solution of nonlinear polynomial-only problems.
cs/9906012
The application of special matrix product to differential quadrature solution of geometrically nonlinear bending of orthotropic rectangular plates
cs.CE cs.NA math.NA
The Hadamard and SJT products of matrices are two types of special matrix product. The latter was first defined by Chen. In this study, they are applied to the differential quadrature (DQ) solution of geometrically nonlinear bending of isotropic and orthotropic rectangular plates. By using the Hadamard product, the nonlinear formulations are greatly simplified, while the SJT product approach minimizes the effort to evaluate the Jacobian derivative matrix in the Newton-Raphson method for solving the resultant nonlinear formulations. In addition, the coupled nonlinear formulations for the present problems can easily be decoupled by means of the Hadamard and SJT products. Therefore, the size of the simultaneous nonlinear algebraic equations is reduced by two-thirds and the computing effort and storage requirements are alleviated greatly. Two recent approaches for applying multiple boundary conditions are employed in the present DQ nonlinear computations. The solution accuracy is clearly improved in comparison to that previously given by Bert et al. The numerical results and detailed solution procedures are provided to demonstrate the superb efficiency, accuracy and simplicity of the new approaches in applying the DQ method to nonlinear computations.
cs/9906014
Evaluation of the NLP Components of the OVIS2 Spoken Dialogue System
cs.CL
The NWO Priority Programme Language and Speech Technology is a 5-year research programme aiming at the development of spoken language information systems. In the Programme, two alternative natural language processing (NLP) modules are developed in parallel: a grammar-based (conventional, rule-based) module and a data-oriented (memory-based, stochastic, DOP) module. In order to compare the NLP modules, a formal evaluation has been carried out three years after the start of the Programme. This paper describes the evaluation procedure and the evaluation results. The grammar-based component performs much better than the data-oriented one in this comparison.
cs/9906015
Learning Transformation Rules to Find Grammatical Relations
cs.CL
Grammatical relationships are an important level of natural language processing. We present a trainable approach to find these relationships through transformation sequences and error-driven learning. Our approach finds grammatical relationships between core syntax groups and bypasses much of the parsing phase. On our training and test set, our procedure achieves 63.6% recall and 77.3% precision (f-score = 69.8).
cs/9906016
Automatically Selecting Useful Phrases for Dialogue Act Tagging
cs.AI cs.LG
We present an empirical investigation of various ways to automatically identify phrases in a tagged corpus that are useful for dialogue act tagging. We found that a new method (which measures a phrase's deviation from an optimally-predictive phrase), enhanced with a lexical filtering mechanism, produces significantly better cues than manually-selected cue phrases, the exhaustive set of phrases in a training corpus, and phrases chosen by traditional metrics, like mutual information and information gain.
cs/9906019
Resolving Part-of-Speech Ambiguity in the Greek Language Using Learning Techniques
cs.CL cs.AI
This article investigates the use of Transformation-Based Error-Driven learning for resolving part-of-speech ambiguity in the Greek language. The aim is not only to study the performance, but also to examine its dependence on different thematic domains. Results are presented here for two different test cases: a corpus on "management succession events" and a general-theme corpus. The two experiments show that the performance of this method does not depend on the thematic domain of the corpus, and its accuracy for the Greek language is around 95%.
cs/9906020
Temporal Meaning Representations in a Natural Language Front-End
cs.CL
Previous work in the context of natural language querying of temporal databases has established a method to map automatically from a large subset of English time-related questions to suitable expressions of a temporal logic-like language, called TOP. An algorithm to translate from TOP to the TSQL2 temporal database language has also been defined. This paper shows how TOP expressions could be translated into a simpler logic-like language, called BOT. BOT is very close to traditional first-order predicate logic (FOPL), and hence existing methods to manipulate FOPL expressions can be exploited to interface to time-sensitive applications other than TSQL2 databases, maintaining the existing English-to-TOP mapping.
cs/9906025
Mapping Multilingual Hierarchies Using Relaxation Labeling
cs.CL
This paper explores the automatic construction of a multilingual Lexical Knowledge Base from pre-existing lexical resources. We present a new and robust approach for linking already existing lexical/semantic hierarchies. We used a constraint satisfaction algorithm (relaxation labeling) to select --among all the candidate translations proposed by a bilingual dictionary-- the right English WordNet synset for each sense in a taxonomy automatically derived from a Spanish monolingual dictionary. Although, on average, there are 15 possible WordNet connections for each sense in the taxonomy, the method achieves an accuracy of over 80%. Finally, we also propose several ways in which this technique could be applied to enrich and improve existing lexical databases.
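A minimal sketch of the relaxation labeling iteration used in this family of algorithms (the `support` function, which would encode the constraints between the two hierarchies, is an illustrative stand-in):

```python
def relax(weights, support, iterations=50):
    """weights[v] maps each candidate label of variable v to its
    current weight. Each step scales a label's weight by the support
    its neighbours' labels give it, then renormalizes."""
    for _ in range(iterations):
        for v in weights:
            new = {l: w * (1.0 + support(v, l, weights))
                   for l, w in weights[v].items()}
            z = sum(new.values()) or 1.0
            weights[v] = {l: w / z for l, w in new.items()}
    return weights
```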
cs/9906026
Robust Grammatical Analysis for Spoken Dialogue Systems
cs.CL
We argue that grammatical analysis is a viable alternative to concept spotting for processing spoken input in a practical spoken dialogue system. We discuss the structure of the grammar, and a model for robust parsing which combines linguistic sources of information and statistical sources of information. We discuss test results suggesting that grammatical processing allows fast and accurate processing of spoken input.
cs/9906027
Human-Computer Conversation
cs.CL cs.HC
The article surveys a little of the history of the technology, sets out the main current theoretical approaches in brief, and discusses the on-going opposition between theoretical and empirical approaches. It illustrates the situation with some discussion of CONVERSE, a system that won the Loebner prize in 1997 and which displays features of both approaches.
cs/9906029
Events in Property Patterns
cs.SE cs.AI cs.CL cs.SC
A pattern-based approach to the presentation, codification and reuse of property specifications for finite-state verification was proposed by Dwyer and his colleagues. The patterns enable non-experts to read and write formal specifications for realistic systems and facilitate easy conversion of specifications between formalisms, such as LTL, CTL, QRE. In this paper, we extend the pattern system with events, i.e., changes of values of variables in the context of LTL.
cs/9906034
A Unified Example-Based and Lexicalist Approach to Machine Translation
cs.CL
We present an approach to Machine Translation that combines the ideas and methodologies of the Example-Based and Lexicalist theoretical frameworks. The approach has been implemented in a multilingual Machine Translation system.
cs/9907003
Annotation graphs as a framework for multidimensional linguistic data analysis
cs.CL
In recent work we have presented a formal framework for linguistic annotation based on labeled acyclic digraphs. These `annotation graphs' offer a simple yet powerful method for representing complex annotation structures incorporating hierarchy and overlap. Here, we motivate and illustrate our approach using discourse-level annotations of text and speech data drawn from the CALLHOME, COCONUT, MUC-7, DAMSL and TRAINS annotation schemes. With the help of domain specialists, we have constructed a hybrid multi-level annotation for a fragment of the Boston University Radio Speech Corpus which includes the following levels: segment, word, breath, ToBI, Tilt, Treebank, coreference and named entity. We show how annotation graphs can represent hybrid multi-level structures which derive from a diverse set of file formats. We also show how the approach facilitates substantive comparison of multiple annotations of a single signal based on different theoretical models. The discussion shows how annotation graphs open the door to wide-ranging integration of tools, formats and corpora.
cs/9907004
MAP Lexicon is useful for segmentation and word discovery in child-directed speech
cs.CL cs.LG
Because of rather fundamental changes to the underlying model proposed in the paper, it has been withdrawn from the archive.
cs/9907006
Representing Text Chunks
cs.CL
Dividing sentences in chunks of words is a useful preprocessing step for parsing, information extraction and information retrieval. Ramshaw and Marcus (1995) have introduced a "convenient" data representation for chunking by converting it to a tagging task. In this paper we will examine seven different data representations for the problem of recognizing noun phrase chunks. We will show that the data representation choice has a minor influence on chunking performance. However, equipped with the most suitable data representation, our memory-based learning chunker was able to improve the best published chunking results for a standard data set.
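One such representation, commonly called IOB2 (B- opens every chunk, I- continues it, O marks tokens outside any chunk), can be produced as follows; a minimal sketch, with tag names following common usage rather than the paper's exact notation:

```python
def to_iob2(tokens, chunks):
    """chunks: list of (start, end) token spans, end exclusive."""
    tags = ["O"] * len(tokens)
    for start, end in chunks:
        tags[start] = "B-NP"                 # chunk-initial token
        for i in range(start + 1, end):
            tags[i] = "I-NP"                 # chunk-internal tokens
    return tags

print(to_iob2(["the", "cat", "sat", "on", "the", "mat"],
              [(0, 2), (4, 6)]))
# ['B-NP', 'I-NP', 'O', 'O', 'B-NP', 'I-NP']
```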
cs/9907007
Cross-Language Information Retrieval for Technical Documents
cs.CL
This paper proposes a Japanese/English cross-language information retrieval (CLIR) system targeting technical documents. Our system first translates a given query containing technical terms into the target language, and then retrieves documents relevant to the translated query. The translation of technical terms is still problematic in that technical terms are often compound words, and thus new terms can be progressively created simply by combining existing base words. In addition, Japanese often represents loanwords using its phonograms. Consequently, existing dictionaries find it difficult to achieve sufficient coverage. To counter the first problem, we use a compound word translation method, which uses a bilingual dictionary for base words and collocational statistics to resolve translation ambiguity. For the second problem, we propose a transliteration method, which identifies phonetic equivalents in the target language. We also show the effectiveness of our system using a test collection for CLIR.
cs/9907008
Explanation-based Learning for Machine Translation
cs.CL
In this paper we present an application of explanation-based learning (EBL) in the parsing module of a real-time English-Spanish machine translation system designed to translate closed captions. We discuss the efficiency/coverage trade-offs available in EBL and introduce the techniques we use to increase coverage while maintaining a high level of space and time efficiency. Our performance results indicate that this approach is effective.
cs/9907009
Designing and Mining Multi-Terabyte Astronomy Archives: The Sloan Digital Sky Survey
cs.DB cs.DL
The next-generation astronomy digital archives will cover most of the universe at fine resolution in many wavelengths, from X-rays to ultraviolet, optical, and infrared. The archives will be stored at diverse geographical locations. One of the first of these projects, the Sloan Digital Sky Survey (SDSS) will create a 5-wavelength catalog over 10,000 square degrees of the sky (see http://www.sdss.org/). The 200 million objects in the multi-terabyte database will have mostly numerical attributes, defining a space of 100+ dimensions. Points in this space have highly correlated distributions. The archive will enable astronomers to explore the data interactively. Data access will be aided by a multidimensional spatial index and other indices. The data will be partitioned in many ways. Small tag objects consisting of the most popular attributes speed up frequent searches. Splitting the data among multiple servers enables parallel, scalable I/O and applies parallel processing to the data. Hashing techniques allow efficient clustering and pair-wise comparison algorithms that parallelize nicely. Randomly sampled subsets allow debugging otherwise large queries at the desktop. Central servers will operate a data pump that supports sweeping searches that touch most of the data. The anticipated queries require special operators related to angular distances and complex similarity tests of object properties, like shapes, colors, velocity vectors, or temporal behaviors. These issues pose interesting data management challenges.
cs/9907010
Language Identification With Confidence Limits
cs.CL
A statistical classification algorithm and its application to language identification from noisy input are described. The main innovation is to compute confidence limits on the classification, so that the algorithm terminates when enough evidence to make a clear decision has been gathered, thus avoiding problems with categories that have similar characteristics. A second application, to genre identification, is briefly examined. The results show that some of the problems of other language identification techniques can be avoided, and illustrate a more important point: that a statistical language process can be used to provide feedback about its own success rate.
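The early-stopping idea can be sketched as follows (a toy version that uses a fixed log-likelihood margin where the paper derives proper statistical confidence limits):

```python
import math

def identify(chars, char_models, margin=10.0):
    """char_models: language -> {character: probability}.
    Accumulate per-language log-likelihoods character by character
    and stop once the leader beats the runner-up by `margin` nats."""
    scores = {lang: 0.0 for lang in char_models}
    for c in chars:
        for lang, probs in char_models.items():
            scores[lang] += math.log(probs.get(c, 1e-6))
        best, second = sorted(scores.values(), reverse=True)[:2]
        if best - second > margin:
            break                    # confident enough to terminate
    return max(scores, key=scores.get)
```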
cs/9907012
Selective Magic HPSG Parsing
cs.CL
We propose a parser for constraint-logic grammars implementing HPSG that combines the advantages of dynamic bottom-up and advanced top-down control. The parser allows the user to apply magic compilation to specific constraints in a grammar which as a result can be processed dynamically in a bottom-up and goal-directed fashion. State of the art top-down processing techniques are used to deal with the remaining constraints. We discuss various aspects concerning the implementation of the parser as part of a grammar development system.
cs/9907013
Corpus Annotation for Parser Evaluation
cs.CL
We describe a recently developed corpus annotation scheme for evaluating parsers that avoids shortcomings of current methods. The scheme encodes grammatical relations between heads and dependents, and has been used to mark up a new public-domain corpus of naturally occurring English text. We show how the corpus can be used to evaluate the accuracy of a robust parser, and relate the corpus to extant resources.
cs/9907016
Microsoft TerraServer: A Spatial Data Warehouse
cs.DB cs.DL
The TerraServer stores aerial, satellite, and topographic images of the earth in a SQL database available via the Internet. It is the world's largest online atlas, combining five terabytes of image data from the United States Geological Survey (USGS) and SPIN-2. This report describes the system redesign based on our experience over the last year. It also reports usage and operations results over the last year -- over 2 billion web hits and over 20 terabytes of imagery served over the Internet. Internet browsers provide intuitive spatial and text interfaces to the data. Users need no special hardware, software, or knowledge to locate and browse imagery. This paper describes how terabytes of "Internet unfriendly" geo-spatial images were scrubbed and edited into hundreds of millions of "Internet friendly" image tiles and loaded into a SQL data warehouse. Microsoft TerraServer demonstrates that general-purpose relational database technology can manage large scale image repositories, and shows that web browsers can be a good geospatial image presentation system.
cs/9907017
A Bootstrap Approach to Automatically Generating Lexical Transfer Rules
cs.CL
We describe a method for automatically generating Lexical Transfer Rules (LTRs) from word equivalences using transfer rule templates. Templates are skeletal LTRs, unspecified for words. New LTRs are created by instantiating a template with words, provided that the words belong to the appropriate lexical categories required by the template. We define two methods for creating an inventory of templates and using them to generate new LTRs. A simpler method consists of extracting a finite set of templates from a sample of hand coded LTRs and directly using them in the generation process. A further method consists of abstracting over the initial finite set of templates to define higher level templates, where bilingual equivalences are defined in terms of correspondences involving phrasal categories. Phrasal templates are then mapped onto sets of lexical templates with the aid of grammars. In this way an infinite set of lexical templates is recursively defined. New LTRs are created by parsing input words, matching a template at the phrasal level and using the corresponding lexical categories to instantiate the lexical template. The definition of an infinite set of templates enables the automatic creation of LTRs for multi-word, non-compositional word equivalences of any cardinality.
cs/9907020
Generalized linearization in nonlinear modeling of data
cs.CE cs.NA math.NA
The principal innovative idea in this paper is to transform the original complex nonlinear modeling problem into a combination of a linear problem and very simple nonlinear problems. The key step is the generalized linearization of nonlinear terms. This paper only presents the introductory strategy of this methodology. The practical numerical experiments will be provided subsequently.
cs/9907021
Architectural Considerations for Conversational Systems -- The Verbmobil/INTARC Experience
cs.CL
The paper describes the speech-to-speech translation system INTARC, developed during the first phase of the Verbmobil project. The general design goals of the INTARC system architecture were time-synchronous processing as well as incrementality and interactivity as a means to achieve a higher degree of robustness and scalability. Interactivity means that, in addition to the bottom-up (in terms of processing levels) data flow, the system can process top-down restrictions concerning the same signal segment at all processing levels. The construction of INTARC 2.0, which has been operational since fall 1996, followed an engineering approach focussing on the integration of symbolic (linguistic) and stochastic (recognition) techniques, which led to a generalization of the concept of a ``one pass'' beam search.
cs/9907026
Mixing representation levels: The hybrid approach to automatic text generation
cs.CL cs.AI
Natural language generation systems (NLG) map non-linguistic representations into strings of words through a number of steps using intermediate representations of various levels of abstraction. Template based systems, by contrast, tend to use only one representation level, i.e. fixed strings, which are combined, possibly in a sophisticated way, to generate the final text. In some circumstances, it may be profitable to combine NLG and template based techniques. The issue of combining generation techniques can be seen in more abstract terms as the issue of mixing levels of representation of different degrees of linguistic abstraction. This paper aims at defining a reference architecture for systems using mixed representations. We argue that mixed representations can be used without abandoning a linguistically grounded approach to language generation.
cs/9907032
Clausal Temporal Resolution
cs.LO cs.AI
In this article, we examine how clausal resolution can be applied to a specific, but widely used, non-classical logic, namely discrete linear temporal logic. Thus, we first define a normal form for temporal formulae and show how arbitrary temporal formulae can be translated into the normal form, while preserving satisfiability. We then introduce novel resolution rules that can be applied to formulae in this normal form, provide a range of examples, and examine the correctness and complexity of this clausal resolution approach. Finally, we describe related work and future developments concerning this work.
cs/9907042
Raising Reliability of Web Search Tool Research through Replication and Chaos Theory
cs.IR cs.DL
Because the World Wide Web is a dynamic collection of information, the Web search tools (or "search engines") that index the Web are dynamic. Traditional information retrieval evaluation techniques may not provide reliable results when applied to the Web search tools. This study is the result of ten replications of the classic 1996 Ding and Marchionini Web search tool research. It explores the effects that replication can have on transforming unreliable results from one iteration into replicable and therefore reliable results after multiple iterations.
cs/9907043
A simple C++ library for manipulating scientific data sets as structured data
cs.CE cs.DB
Representing scientific data sets efficiently on external storage usually involves converting them to a byte string representation using specialized reader/writer routines. The resulting storage files are frequently difficult to interpret without these specialized routines, as they do not contain information about the logical structure of the data. Avoiding such problems usually involves heavy-weight data format libraries or database systems. We present a simple C++ library that allows one to create and access data files that store structured data. The structure of the data is described by a data type that can be built from elementary data types (integer and floating-point numbers, byte strings) and composite data types (arrays, structures, unions). An abstract data access class presents the data to the application. Different actual data file structures can be implemented under this layer. This method is particularly suited to applications that require complex data structures, e.g. molecular dynamics simulations. Extensions such as late type binding and object persistence are discussed.
cs/9908001
Detecting Sub-Topic Correspondence through Bipartite Term Clustering
cs.CL
This paper addresses a novel task of detecting sub-topic correspondence in a pair of text fragments, enhancing common notions of text similarity. This task is addressed by coupling corresponding term subsets through bipartite clustering. The paper presents a cost-based clustering scheme and compares it with a bipartite version of the single-link method, providing illustrative results.
cs/9908004
Extending the Stable Model Semantics with More Expressive Rules
cs.LO cs.AI
The rules associated with propositional logic programs and the stable model semantics are not expressive enough to let one write concise programs. This problem is alleviated by introducing some new types of propositional rules. Together with a decision procedure that has been used as a base for an efficient implementation, the new rules supplant the standard ones in practical applications of the stable model semantics.
cs/9908013
Collective Intelligence for Control of Distributed Dynamical Systems
cs.LG adap-org cond-mat cs.AI cs.DC cs.MA nlin.AO
We consider the El Farol bar problem, also known as the minority game (W. B. Arthur, ``The American Economic Review'', 84(2): 406--411 (1994), D. Challet and Y.C. Zhang, ``Physica A'', 256:514 (1998)). We view it as an instance of the general problem of how to configure the nodal elements of a distributed dynamical system so that they do not ``work at cross purposes'', in that their collective dynamics avoids frustration and thereby achieves a provided global goal. We summarize a mathematical theory for such configuration applicable when (as in the bar problem) the global goal can be expressed as minimizing a global energy function and the nodes can be expressed as minimizers of local free energy functions. We show that a system designed with that theory performs nearly optimally for the bar problem.
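A toy version of the setup (plain reinforcement learners rewarded directly with the world utility; the theory summarized above instead derives better-aligned local reward functions):

```python
import random

N_AGENTS, CAPACITY, ROUNDS = 100, 60, 500
q = [[0.0, 0.0] for _ in range(N_AGENTS)]   # action values: stay=0, go=1

def world_utility(attendance):
    # Global-energy-style objective: penalize deviation from capacity.
    return -abs(attendance - CAPACITY)

for _ in range(ROUNDS):
    acts = [max((0, 1), key=lambda a: q[i][a] + random.gauss(0, 0.1))
            for i in range(N_AGENTS)]       # noisy-greedy exploration
    attendance = sum(acts)
    for i, a in enumerate(acts):
        # Naive reward choice: every agent receives the world utility.
        q[i][a] += 0.1 * (world_utility(attendance) - q[i][a])

print("final attendance:", attendance, "capacity:", CAPACITY)
```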
cs/9908014
An Introduction to Collective Intelligence
cs.LG adap-org cond-mat cs.DC cs.MA nlin.AO
This paper surveys the emerging science of how to design a ``COllective INtelligence'' (COIN). A COIN is a large multi-agent system where: (i) There is little to no centralized communication or control; and (ii) There is a provided world utility function that rates the possible histories of the full system. In particular, we are interested in COINs in which each agent runs a reinforcement learning (RL) algorithm. Rather than use a conventional modeling approach (e.g., model the system dynamics, and hand-tune agents to cooperate), we aim to solve the COIN design problem implicitly, via the ``adaptive'' character of the RL algorithms of each of the agents. This approach introduces an entirely new, profound design problem: Assuming the RL algorithms are able to achieve high rewards, what reward functions for the individual agents will, when pursued by those agents, result in high world utility? In other words, what reward functions will best ensure that we do not have phenomena like the tragedy of the commons, Braess's paradox, or the liquidity trap? Although still very young, research specifically concentrating on the COIN design problem has already resulted in successes in artificial domains, in particular in packet-routing, the leader-follower problem, and in variants of Arthur's El Farol bar problem. It is expected that as it matures and draws upon other disciplines related to COINs, this research will greatly expand the range of tasks addressable by human engineers. Moreover, in addition to drawing on them, such a fully developed science of COIN design may provide much insight into other already established scientific fields, such as economics, game theory, and population biology.
cs/9908015
Representing Scholarly Claims in Internet Digital Libraries: A Knowledge Modelling Approach
cs.DL cs.AI cs.HC cs.IR
This paper is concerned with tracking and interpreting scholarly documents in distributed research communities. We argue that current approaches to document description, and current technological infrastructures particularly over the World Wide Web, provide poor support for these tasks. We describe the design of a digital library server which will enable authors to submit a summary of the contributions they claim their documents make, and their relations to the literature. We describe a knowledge-based Web environment to support the emergence of such a community-constructed semantic hypertext, and the services it could provide to assist the interpretation of an idea or document in the context of its literature. The discussion considers in detail how the approach addresses usability issues associated with knowledge structuring environments.
cs/9908017
A Differential Invariant for Zooming
cs.CV
This paper presents an invariant under scaling and linear brightness change. The invariant is based on differentials and therefore is a local feature. Rotationally invariant 2-d differential Gaussian operators up to third order are proposed for the implementation of the invariant. The performance is analyzed by simulating a camera zoom-out.
cs/9909002
Semantic robust parsing for noun extraction from natural language queries
cs.CL
This paper describes how robust parsing techniques can be fruitfully applied to building a query generation module which is part of a pipelined NLP architecture aimed at processing natural language queries in a restricted domain. We want to show that semantic robustness represents a key issue in those NLP systems where it is more likely to have partial and ill-formed utterances due to various factors (e.g. noisy environments, low quality of speech recognition modules, etc.) and where it is necessary to succeed, even if partially, in extracting some meaningful information.
cs/9909003
Iterative Deepening Branch and Bound
cs.AI
In tree search problems, the best-first search algorithm needs too much space. To remove such drawbacks, IDA* was developed, which is both space- and time-efficient. But IDA* cannot efficiently give an optimal solution for real-valued problems like Flow Shop Scheduling, Travelling Salesman and 0/1 Knapsack, due to their real-valued cost estimates. Thus further modifications are made to it, and the Iterative Deepening Branch and Bound search algorithm is developed, which meets the requirements. We have tried using this algorithm for the Flow Shop Scheduling Problem and have found that it is quite effective.
cs/9909009
The Rough Guide to Constraint Propagation
cs.AI cs.PL
We provide here a simple, yet very general framework that allows us to explain several constraint propagation algorithms in a systematic way. In particular, using the notions of commutativity and semi-commutativity, we show how the well-known AC-3, PC-2, DAC and DPC algorithms are instances of a single generic algorithm. The work reported here extends and simplifies that of Apt, cs.AI/9811024.
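For concreteness, here is AC-3 written as one instance of such a generic fixed-point iteration (a minimal sketch; the re-scheduling step is where the commutativity notions do their work):

```python
from collections import deque

def ac3(domains, constraints):
    """domains: variable -> set of values; constraints: ordered pair
    (x, y) -> predicate allowed(vx, vy). Repeatedly apply 'revise'
    steps until no domain changes (a fixed point is reached)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        pruned = {vx for vx in domains[x]
                  if not any(allowed(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            # Re-examine arcs into x; the framework's commutativity
            # results guarantee such scheduling choices are safe.
            queue.extend(arc for arc in constraints
                         if arc[1] == x and arc[0] != y)
    return domains

doms = {"a": {1, 2, 3}, "b": {1, 2, 3}}
cons = {("a", "b"): lambda va, vb: va < vb,
        ("b", "a"): lambda vb, va: va < vb}
print(ac3(doms, cons))  # {'a': {1, 2}, 'b': {2, 3}}
```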
cs/9909010
Automatic Generation of Constraint Propagation Algorithms for Small Finite Domains
cs.AI cs.PL
We study here constraint satisfaction problems that are based on predefined, explicitly given finite constraints. To solve them we propose a notion of rule consistency that can be expressed in terms of rules derived from the explicit representation of the initial constraints. This notion of local consistency is weaker than arc consistency for constraints of arbitrary arity but coincides with it when all domains are unary or binary. For Boolean constraints rule consistency coincides with the closure under the well-known propagation rules for Boolean constraints. By generalizing the format of the rules we obtain a characterization of arc consistency in terms of so-called inclusion rules. The advantage of rule consistency and this rule-based characterization of arc consistency is that the algorithms that enforce both notions can be automatically generated, as CHR rules. So these algorithms could be integrated into constraint logic programming systems such as Eclipse. We illustrate the usefulness of this approach to constraint propagation by discussing the implementations of both algorithms and their use on various examples, including Boolean constraints, the three-valued logic of Kleene, constraints dealing with Waltz's language for describing polyhedral scenes, and Allen's qualitative approach to temporal logic.
cs/9909014
Reasoning About Common Knowledge with Infinitely Many Agents
cs.LO cs.AI
Complete axiomatizations and exponential-time decision procedures are provided for reasoning about knowledge and common knowledge when there are infinitely many agents. The results show that reasoning about knowledge and common knowledge with infinitely many agents is no harder than when there are finitely many agents, provided that we can check the cardinality of certain set differences G - G', where G and G' are sets of agents. Since our complexity results are independent of the cardinality of the sets G involved, they represent improvements over the previous results even when the sets of agents involved are finite. Moreover, our results make clear the extent to which issues of complexity and completeness depend on how the sets of agents involved are represented.
cs/9909016
Least expected cost query optimization: an exercise in utility
cs.DB
We identify two unreasonable, though standard, assumptions made by database query optimizers that can adversely affect the quality of the chosen evaluation plans. One assumption is that it is enough to optimize for the expected case---that is, the case where various parameters (like available memory) take on their expected value. The other assumption is that the parameters are constant throughout the execution of the query. We present an algorithm based on the ``System R''-style query optimization algorithm that does not rely on these assumptions. The algorithm we present chooses the plan of the least expected cost instead of the plan of least cost given some fixed value of the parameters. In execution environments that exhibit a high degree of variability, our techniques should result in better performance.
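The core idea fits in a few lines (a minimal sketch with a toy cost model; the actual algorithm folds this into System R-style plan enumeration):

```python
def best_plan(plans, parameter_dist):
    """plans: name -> cost(parameter) function; parameter_dist:
    list of (value, probability) pairs. Choose the plan with least
    *expected* cost rather than least cost at the expected value."""
    def expected_cost(cost):
        return sum(p * cost(v) for v, p in parameter_dist)
    return min(plans, key=lambda name: expected_cost(plans[name]))

# Available memory is either plentiful or scarce, equally likely:
plans = {"hash-join":  lambda mem: 10 if mem >= 80 else 500,
         "sort-merge": lambda mem: 80}
# At the *expected* memory (85) hash-join looks cheapest (cost 10),
# but its expected cost 0.5*10 + 0.5*500 = 255 loses to sort-merge.
print(best_plan(plans, [(150, 0.5), (20, 0.5)]))  # sort-merge
```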
cs/9909019
Knowledge in Multi-Agent Systems: Initial Configurations and Broadcast
cs.LO cs.AI
The semantic framework for the modal logic of knowledge due to Halpern and Moses provides a way to ascribe knowledge to agents in distributed and multi-agent systems. In this paper we study two special cases of this framework: full systems and hypercubes. Both model static situations in which no agent has any information about another agent's state. Full systems and hypercubes are an appropriate model for the initial configurations of many systems of interest. We establish a correspondence between full systems and hypercube systems and certain classes of Kripke frames. We show that these classes of systems correspond to the same logic. Moreover, this logic is also the same as that generated by the larger class of weakly directed frames. We provide a sound and complete axiomatization, S5WDn, of this logic. Finally, we show that under certain natural assumptions, in a model where knowledge evolves over time, S5WDn characterizes the properties of knowledge not just at the initial configuration, but also at all later configurations. In particular, this holds for homogeneous broadcast systems, which capture settings in which agents are initially ignorant of each others local states, operate synchronously, have perfect recall and can communicate only by broadcasting.
cs/9910011
A statistical model for word discovery in child directed speech
cs.CL cs.LG
A statistical model for segmentation and word discovery in child directed speech is presented. An incremental unsupervised learning algorithm to infer word boundaries based on this model is described and results of empirical tests showing that the algorithm is competitive with other models that have been used for similar tasks are also presented.
cs/9910015
PIPE: Personalizing Recommendations via Partial Evaluation
cs.IR cs.AI
It is shown that personalization of web content can be advantageously viewed as a form of partial evaluation --- a technique well known in the programming languages community. The basic idea is to model a recommendation space as a program, then partially evaluate this program with respect to user preferences (and features) to obtain specialized content. This technique supports both content-based and collaborative approaches, and is applicable to a range of applications that require automatic information integration from multiple web sources. The effectiveness of this methodology is illustrated by two example applications --- (i) personalizing content for visitors to the Blacksburg Electronic Village (http://www.bev.net), and (ii) locating and selecting scientific software on the Internet. The scalability of this technique is demonstrated by its ability to interface with online web ontologies that index thousands of web pages.
cs/9910016
Probabilistic Agent Programs
cs.AI
Agents are small programs that autonomously take actions based on changes in their environment or ``state.'' Over the last few years, there have been an increasing number of efforts to build agents that can interact and/or collaborate with other agents. In one of these efforts, Eiter, Subrahmanian and Pick (AIJ, 108(1-2), pages 179-255) have shown how agents may be built on top of legacy code. However, their framework assumes that agent states are completely determined, and there is no uncertainty in an agent's state. Thus, their framework allows an agent developer to specify how his agents will react when the agent is 100% sure about what is true/false in the world state. In this paper, we propose the concept of a \emph{probabilistic agent program} and show how, given an arbitrary program written in any imperative language, we may build a declarative ``probabilistic'' agent program on top of it which supports decision making in the presence of uncertainty. We provide two alternative semantics for probabilistic agent programs. We show that the second semantics, though more epistemically appealing, is more complex to compute. We provide sound and complete algorithms to compute the semantics of \emph{positive} agent programs.
cs/9910019
Consistent Checkpointing in Distributed Databases: Towards a Formal Approach
cs.DB cs.DC
Whether it is for audit or for recovery purposes, data checkpointing is an important problem of distributed database systems. Actually, transactions establish dependence relations on data checkpoints taken by data object managers. So, given an arbitrary set of data checkpoints (including at least one data checkpoint from some data manager, and at most one data checkpoint from each data manager), an important question is the following: ``Can these data checkpoints be members of the same consistent global checkpoint?''. This paper answers this question by providing a necessary and sufficient condition suited for database systems. Moreover, to show the usefulness of this condition, two {\em non-intrusive} data checkpointing protocols are derived from this condition. It is also interesting to note that this paper, by exhibiting ``correspondences'', establishes a bridge between the data object/transaction model and the process/message-passing model.
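To make the flavor of such conditions concrete, here is a loose illustration (a simplification of my own, not the paper's necessary and sufficient condition): one obviously necessary requirement is that no data checkpoint in the candidate set transitively depends on another member of the set.

    from collections import defaultdict

    deps = defaultdict(set)          # edge u -> v: checkpoint v depends on u
    def add_dependency(u, v):
        deps[u].add(v)

    def reaches(u, v, seen=None):
        seen = seen or set()
        if u == v:
            return True
        seen.add(u)
        return any(reaches(w, v, seen) for w in deps[u] if w not in seen)

    def obviously_inconsistent(candidate_set):
        return any(a != b and reaches(a, b)
                   for a in candidate_set for b in candidate_set)

    # A transaction read x after checkpoint C_x1 and wrote y before C_y1:
    add_dependency("C_x1", "C_y1")
    print(obviously_inconsistent({"C_x1", "C_y1"}))   # -> True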
cs/9910020
Selective Sampling for Example-based Word Sense Disambiguation
cs.CL
This paper proposes an efficient example sampling method for example-based word sense disambiguation systems. To construct a database of practical size, a considerable overhead for manual sense disambiguation (overhead for supervision) is required. In addition, the time complexity of searching a large-sized database poses a considerable problem (overhead for search). To counter these problems, our method selectively samples a smaller-sized effective subset from a given example set for use in word sense disambiguation. Our method is characterized by its reliance on the notion of training utility: the degree to which each example is informative for future example sampling when used for training the system. The system progressively collects examples by selecting those with the greatest utility. The paper reports the effectiveness of our method through experiments on about one thousand sentences. Compared with other example sampling methods, our method reduced both the overhead for supervision and the overhead for search, without degrading the performance of the system.
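A minimal sketch of the selective-sampling loop (with a hypothetical utility function standing in for the paper's training-utility measure): at each step the system trains on what it has, then asks the annotator to label the pool example it deems most informative.

    def select_examples(pool, seed, fit, utility, oracle, budget):
        """pool: unlabeled examples; seed: labeled examples; fit: trains a model;
        utility(model, x): estimated informativeness of labeling x;
        oracle(x): asks the human annotator for x's label."""
        labeled = list(seed)
        for _ in range(budget):
            model = fit(labeled)
            x = max(pool, key=lambda ex: utility(model, ex))
            pool.remove(x)
            labeled.append((x, oracle(x)))
        return labeled

    # Toy demo: the 'model' is just the mean of labeled points, and the most
    # useful example is the one farthest from it (purely illustrative).
    fit = lambda labeled: sum(x for x, _ in labeled) / len(labeled)
    utility = lambda m, x: abs(x - m)
    oracle = lambda x: x > 5
    print(select_examples([1, 4, 9, 10], [(5, False)], fit, utility, oracle, 2))
    # -> [(5, False), (10, True), (1, False)]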
cs/9910021
Efficient and Extensible Algorithms for Multi Query Optimization
cs.DB
Complex queries are becoming commonplace with the growing use of decision support systems. These complex queries often have many common sub-expressions, either within a single query or across multiple such queries run as a batch. Multi-query optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hitherto been viewed as impractical, since earlier algorithms were exhaustive and explored a doubly exponential search space. In this paper we demonstrate that multi-query optimization using heuristics is practical and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that greatly improve efficiency. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.
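The greedy idea can be sketched as follows (a generic hill-climbing loop under an assumed cost-estimation callback, not the Volcano-SH/Volcano-RU algorithms themselves): keep materializing the shared sub-expression that most reduces the total estimated cost of the batch, and stop when no choice helps.

    def greedy_materialize(candidates, total_cost):
        """candidates: shareable sub-expressions; total_cost(materialized)
        returns the optimizer's cost estimate for the whole batch."""
        chosen = set()
        best = total_cost(chosen)
        improved = True
        while improved:
            improved = False
            for s in candidates - chosen:           # try each single addition
                c = total_cost(chosen | {s})
                if c < best:
                    best_s, best, improved = s, c, True
            if improved:
                chosen.add(best_s)
        return chosen, best

    # Toy cost table: materializing A pays off, B does not.
    costs = {frozenset(): 100, frozenset("A"): 75,
             frozenset("B"): 110, frozenset("AB"): 85}
    print(greedy_materialize({"A", "B"}, lambda m: costs[frozenset(m)]))
    # -> ({'A'}, 75)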
cs/9910022
Practical experiments with regular approximation of context-free languages
cs.CL
Several methods are discussed that construct a finite automaton given a context-free grammar, including both methods that lead to subsets and those that lead to supersets of the original context-free language. Some of these methods of regular approximation are new, and some others are presented here in a more refined form with respect to existing literature. Practical experiments with the different methods of regular approximation are performed for spoken-language input: hypotheses from a speech recognizer are filtered through a finite automaton.
cs/9911006
Question Answering System Using Syntactic Information
cs.CL
The question answering task is currently being studied in TREC 8 using English documents. We examined the question answering task for Japanese sentences. Our method selects the answer by matching the question sentence with knowledge-based data written in natural language. We use syntactic information to obtain highly accurate answers.
cs/9911011
One-Level Prosodic Morphology
cs.CL
Recent developments in theoretical linguistics have led to a widespread acceptance of constraint-based analyses of prosodic morphology phenomena such as truncation, infixation, floating morphemes and reduplication. Of these, reduplication is particularly challenging for state-of-the-art computational morphology, since it involves copying of some part of a phonological string. In this paper I argue for certain extensions to the one-level model of phonology and morphology (Bird & Ellison 1994) to cover the computational aspects of prosodic morphology using finite-state methods. In a nutshell, enriched lexical representations provide additional automaton arcs to repeat or skip sounds and also to allow insertion of additional material. A kind of resource consciousness is introduced to control this additional freedom, distinguishing between producer and consumer arcs. The non-finite-state copying aspect of reduplication is mapped to automata intersection, itself a non-finite-state operation. Bounded local optimization prunes certain automaton arcs that fail to contribute to linguistic optimization criteria. The paper then presents implemented case studies of Ulwa construct state infixation, German hypocoristic truncation and Tagalog over-applying reduplication that illustrate the expressive power of this approach, before its merits and limitations are discussed and possible extensions are sketched. I conclude that the one-level approach to prosodic morphology presents an attractive way of extending finite-state techniques to difficult phenomena that hitherto resisted elegant computational analyses.
cs/9911012
Cox's Theorem Revisited
cs.AI
The assumptions needed to prove Cox's Theorem are discussed and examined. Various sets of assumptions under which a Cox-style theorem can be proved are provided, although all are rather strong and, arguably, not natural.
cs/9912002
A Geometric Model for Information Retrieval Systems
cs.IR cs.CC cs.DL
This decade has seen a great deal of progress in the development of information retrieval systems. Unfortunately, we still lack a systematic understanding of the behavior of the systems and their relationship with documents. In this paper we present a completely new approach to understanding information retrieval systems. Recently, it has been observed that retrieval systems in TREC 6 show some remarkable patterns in retrieving relevant documents. Based on the TREC 6 observations, we introduce a geometric linear model of information retrieval systems. We then apply the model to predict the number of relevant documents retrieved by the systems. The model is also scalable to a much larger data set. Although the model is developed based on the TREC 6 routing test data, we believe it is readily applicable to other information retrieval systems. In the Appendix, we explain a simple and efficient way of building a better system from existing systems.
cs/9912003
Resolution of Indirect Anaphora in Japanese Sentences Using Examples 'X no Y (Y of X)'
cs.CL
A noun phrase can indirectly refer to an entity that has already been mentioned. For example, ``I went into an old house last night. The roof was leaking badly and ...'' indicates that ``the roof'' is associated with ``an old house'', which was mentioned in the previous sentence. This kind of reference (indirect anaphora) has not been studied well in natural language processing, but is important for coherence resolution, language understanding, and machine translation. In order to analyze indirect anaphora, we need a case frame dictionary for nouns that contains knowledge of the relationships between two nouns, but no such dictionary presently exists. Therefore, we are forced to use examples of ``X no Y'' (Y of X) and a verb case frame dictionary instead. We tried estimating indirect anaphora using this information and obtained a recall rate of 63% and a precision rate of 68% on test sentences. This indicates that the information of ``X no Y'' is useful to a certain extent when we cannot make use of a noun case frame dictionary. We estimated the results that would be given by a noun case frame dictionary, and obtained recall and precision rates of 71% and 82%, respectively. Finally, we proposed a way to construct a noun case frame dictionary by using examples of ``X no Y.''
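A minimal sketch of the scoring step (with made-up counts; the paper's full procedure also uses a verb case frame dictionary): candidate antecedents X are ranked by how often the pair appears in collected ``X no Y'' examples.

    from collections import Counter

    x_no_y_counts = Counter({("house", "roof"): 120,   # hypothetical "X no Y" counts
                             ("car", "roof"):   30,
                             ("night", "roof"):  0})

    def resolve_indirect_anaphora(anaphor, candidates):
        scored = [(x_no_y_counts[(c, anaphor)], c) for c in candidates]
        count, antecedent = max(scored)
        return antecedent if count > 0 else None

    print(resolve_indirect_anaphora("roof", ["night", "house"]))   # -> 'house'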
cs/9912004
Pronoun Resolution in Japanese Sentences Using Surface Expressions and Examples
cs.CL
In this paper, we present a method of estimating referents of demonstrative pronouns, personal pronouns, and zero pronouns in Japanese sentences using examples, surface expressions, topics and foci. Unlike conventional work, which used semantic markers for semantic constraints, we used examples for semantic constraints and showed in our experiments that examples are as useful as semantic markers. We also propose many new methods for estimating referents of pronouns. For example, we use the form ``X of Y'' for estimating referents of demonstrative adjectives. In addition to our new methods, we used many conventional methods. As a result, experiments using these methods obtained a precision rate of 87% in estimating referents of demonstrative pronouns, personal pronouns, and zero pronouns for training sentences, and obtained a precision rate of 78% for test sentences.
cs/9912005
An Estimate of Referent of Noun Phrases in Japanese Sentences
cs.CL
In machine translation and man-machine dialogue, it is important to clarify referents of noun phrases. We present a method for determining the referents of noun phrases in Japanese sentences by using the referential properties, modifiers, and possessors of noun phrases. Since the Japanese language has no articles, it is difficult to decide whether a noun phrase has an antecedent or not. We had previously estimated the referential properties of noun phrases that correspond to articles by using clue words in the sentences. By using these referential properties, our system determined the referents of noun phrases in Japanese sentences. Furthermore, we used the modifiers and possessors of noun phrases in determining the referents of noun phrases. As a result, on training sentences we obtained a precision rate of 82% and a recall rate of 85% in the determination of the referents of noun phrases that have antecedents. On test sentences, we obtained a precision rate of 79% and a recall rate of 77%.
cs/9912006
Resolution of Verb Ellipsis in Japanese Sentence using Surface Expressions and Examples
cs.CL
Verbs are sometimes omitted in Japanese sentences. It is necessary to recover omitted verbs for purposes of language understanding, machine translation, and conversational processing. This paper describes a practical way to recover omitted verbs by using surface expressions and examples. We experimented with the resolution of verb ellipses using this information, and obtained a recall rate of 73% and a precision rate of 66% on test sentences.
cs/9912007
An Example-Based Approach to Japanese-to-English Translation of Tense, Aspect, and Modality
cs.CL
We have developed a new method for Japanese-to-English translation of tense, aspect, and modality that uses an example-based method. In this method the similarity between input and example sentences is defined as the degree of semantic matching between the expressions at the ends of the sentences. Our method also uses the k-nearest neighbor method in order to exclude the effects of noise; for example, wrongly tagged data in the bilingual corpora. Experiments show that our method can translate tenses, aspects, and modalities more accurately than the top-level MT software currently available on the market. Moreover, it does not require hand-crafted rules.
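A minimal k-nearest-neighbor sketch of the approach (with a toy suffix-overlap similarity and toy romanized examples standing in for the paper's semantic matching of sentence-final expressions): the label is decided by majority vote over the k most similar stored endings.

    from collections import Counter

    examples = [("shita",      "past"), ("shimashita", "past"),
                ("suru",       "present"), ("shitai",  "modal:want")]

    def similarity(a, b):
        """Toy measure: length of the common suffix of the two expressions."""
        n = 0
        while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
            n += 1
        return n

    def knn_translate(ending, k=3):
        nearest = sorted(examples, key=lambda ex: -similarity(ending, ex[0]))[:k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    print(knn_translate("hanashita"))   # -> 'past' (nearest: 'shimashita', 'shita')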
cs/9912008
New Error Bounds for Solomonoff Prediction
cs.AI cs.LG
Solomonoff sequence prediction is a scheme to predict digits of binary strings without knowing the underlying probability distribution. We call a prediction scheme informed when it knows the true probability distribution of the sequence. Several new relations between universal Solomonoff sequence prediction and informed prediction and general probabilistic prediction schemes will be proved. Among other things, they show that the number of errors in Solomonoff prediction is finite for computable distributions, if finite in the informed case. Deterministic variants will also be studied. The most interesting result is that the deterministic variant of Solomonoff prediction is optimal compared to any other probabilistic or deterministic prediction scheme apart from additive square root corrections only. This makes it well suited even for difficult prediction problems, where it is not sufficient for the number of errors merely to be minimal to within some factor greater than one. Solomonoff's original bound and the ones presented here complement each other in a useful way.
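The ``additive square root corrections'' claim can be written schematically as follows (notation of my own; the paper's exact statement and constants differ):

    % Schematic form only: the deterministic Solomonoff variant makes at most
    % of the order of square-root-many more errors than any competitor.
    \[
      E_{\Theta}(n) \;\le\; E_{\rho}(n) + O\!\left(\sqrt{E_{\rho}(n)}\right),
    \]
    % where E_Theta(n) counts the errors of the deterministic Solomonoff
    % predictor on the first n digits and E_rho(n) those of an arbitrary
    % deterministic or probabilistic predictor rho.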
cs/9912009
Deduction over Mixed-Level Logic Representations for Text Passage Retrieval
cs.CL
A system is described that uses a mixed-level representation of (part of) the meaning of natural language documents (based on standard Horn Clause Logic) and a variable-depth search strategy that distinguishes between the different levels of abstraction in the knowledge representation to locate specific passages in the documents. Mixed-level representations as well as variable-depth search strategies are applicable in fields outside that of NLP.
cs/9912011
Adaptivity in Agent-Based Routing for Data Networks
cs.MA adap-org cs.NI nlin.AO
Adaptivity, both of the individual agents and of the interaction structure among the agents, seems indispensable for scaling up multi-agent systems (MAS's) in noisy environments. One important consideration in designing adaptive agents is choosing their action spaces to be as amenable as possible to machine learning techniques, especially to reinforcement learning (RL) techniques. One important way to have the interaction structure connecting agents itself be adaptive is to have the intentions and/or actions of the agents be in the input spaces of the other agents, much as in Stackelberg games. We consider both kinds of adaptivity in the design of a MAS to control network packet routing. We demonstrate on the OPNET event-driven network simulator the perhaps surprising fact that simply changing the action space of the agents to be better suited to RL can result in very large improvements in their potential performance: at their best settings, our learning-amenable router agents achieve a throughput up to three and one half times better than that of the standard Bellman-Ford routing algorithm, even when the Bellman-Ford protocol traffic is maintained. We then demonstrate that much of that potential improvement can be realized by having the agents learn their settings when the agent interaction structure is itself adaptive.
cs/9912012
Avoiding Braess' Paradox through Collective Intelligence
cs.DC adap-org cs.MA cs.NI nlin.AO
In an Ideal Shortest Path Algorithm (ISPA), at each moment each router in a network sends all of its traffic down the path that will incur the lowest cost to that traffic. In the limit of an infinitesimally small amount of traffic for a particular router, its routing that traffic via an ISPA is optimal, as far as cost incurred by that traffic is concerned. We demonstrate though that in many cases, due to the side-effects of one router's actions on another router's performance, having routers use ISPA's is suboptimal as far as global aggregate cost is concerned, even when only used to route infinitesimally small amounts of traffic. As a particular example of this we present an instance of Braess' paradox for ISPA's, in which adding new links to a network decreases overall throughput. We also demonstrate that load-balancing, in which the routing decisions are made to optimize the global cost incurred by all traffic currently being routed, is suboptimal as far as global cost averaged across time is concerned. This is also due to "side-effects", in this case of current routing decisions on future traffic. The theory of COllective INtelligence (COIN) is concerned precisely with the issue of avoiding such deleterious side-effects. We present key concepts from that theory and use them to derive an idealized algorithm whose performance is better than that of the ISPA, even in the infinitesimal limit. We present experiments verifying this, and also showing that a machine-learning-based version of this COIN algorithm in which costs are only imprecisely estimated (a version potentially applicable in the real world) also outperforms the ISPA, despite having access to less information than does the ISPA. In particular, this COIN algorithm avoids Braess' paradox.
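The paradox is easy to verify numerically. The sketch below reproduces the textbook four-node instance (not the paper's ISPA network): with 4000 drivers and two symmetric two-link paths, equilibrium delay is 65 minutes, yet after a zero-delay shortcut is added the dominant strategy sends everyone over both congestible links, at 80 minutes each.

    N = 4000.0                      # drivers from s to t

    def delay_two_path(x):
        """Delay on either original path when x drivers use it:
        one congestible link (x/100) plus one fixed 45-minute link."""
        return x / 100.0 + 45.0

    # Before the shortcut: by symmetry, traffic splits evenly at equilibrium.
    print(delay_two_path(N / 2))                    # -> 65.0 minutes

    # After adding a zero-delay shortcut, the dominant strategy is to take
    # both congestible links; with everyone doing so, each driver needs:
    print(N / 100.0 + 0.0 + N / 100.0)              # -> 80.0 minutes, worse for all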
cs/9912015
Comparative Analysis of Five XML Query Languages
cs.DB
XML is becoming the most relevant new standard for data representation and exchange on the WWW. Novel languages for extracting and restructuring XML content have been proposed, some in the tradition of database query languages (i.e. SQL, OQL), others more closely inspired by XML. No standard XML query language has yet been agreed upon, but the discussion is ongoing within the World Wide Web Consortium and within many academic institutions and Internet-related major companies. We present a comparison of five representative query languages for XML, highlighting their common features and differences.
cs/9912016
HMM Specialization with Selective Lexicalization
cs.CL cs.LG
We present a technique which complements Hidden Markov Models by incorporating some lexicalized states representing syntactically uncommon words. Our approach examines the distribution of transitions, selects the uncommon words, and makes lexicalized states for the words. We performed a part-of-speech tagging experiment on the Brown corpus to evaluate the resultant language model and discovered that this technique improved the tagging accuracy by 0.21% at the 95% level of confidence.
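A minimal sketch of the selection idea (toy counts and a KL-divergence criterion of my own choosing, not necessarily the paper's exact measure): a word earns its own lexicalized state when its outgoing-transition distribution differs sharply from that of its part-of-speech class as a whole.

    import math

    def kl(p, q, eps=1e-9):
        keys = set(p) | set(q)
        return sum(p.get(k, eps) * math.log(p.get(k, eps) / q.get(k, eps))
                   for k in keys)

    tag_next = {"DT": {"NN": 0.8, "JJ": 0.2}}                 # class-level distribution
    word_next = {"that": {"NN": 0.3, "JJ": 0.1, "VB": 0.6},   # syntactically odd
                 "the":  {"NN": 0.8, "JJ": 0.2}}              # behaves like its class

    scores = {w: kl(d, tag_next["DT"]) for w, d in word_next.items()}
    to_lexicalize = [w for w, s in sorted(scores.items(), key=lambda kv: -kv[1])
                     if s > 0.5]
    print(to_lexicalize)   # -> ['that']: give it its own lexicalized HMM state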
cs/9912017
Mixed-Level Knowledge Representation and Variable-Depth Inference in Natural Language Processing
cs.CL
A system is described that uses a mixed-level knowledge representation based on standard Horn Clause Logic to represent (part of) the meaning of natural language documents. A variable-depth search strategy is outlined that distinguishes between the different levels of abstraction in the knowledge representation to locate specific passages in the documents. A detailed description of the linguistic aspects of the system is given. Mixed-level representations as well as variable-depth search strategies are applicable in fields outside that of NLP.
cs/9912021
Seeing the Forest in the Tree: Applying VRML to Mathematical Problems in Number Theory
cs.MS cs.CE
We show how VRML (Virtual Reality Modeling Language) can provide potentially powerful insight into the 3x+1 problem via the introduction of a unique geometrical object, called the 'G-cell', akin to a fractal generator. We present an example of a VRML world developed programmatically with the G-cell. The role of VRML as a tool for furthering the understanding of the 3x+1 problem is potentially significant for several reasons: a) VRML permits the observer to zoom into the geometric structure at all scales (up to the limitations of the computing platform). b) VRML enables rotation to alter comparative visual perspective (similar to Tukey's data-spinning concept). c) VRML facilitates the demonstration of interesting tree features between collaborators on the internet who might otherwise have difficulty conveying their ideas unambiguously. d) VRML promises to reveal any dimensional dependencies among 3x+1 sequences.
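For reference, the iteration underlying the visualization is the standard 3x+1 map (the G-cell geometry itself is the paper's construction and is not reproduced here):

    def collatz_trajectory(n):
        """The 3x+1 iteration: halve even numbers, map odd n to 3n+1."""
        path = [n]
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            path.append(n)
        return path

    print(collatz_trajectory(27)[:8])   # -> [27, 82, 41, 124, 62, 31, 94, 47]
    print(len(collatz_trajectory(27)))  # -> 112 values, i.e. 111 steps to reach 1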
hep-lat/0003009
Data storage issues in lattice QCD calculations
hep-lat cs.DB
I describe some of the data management issues in lattice Quantum Chromodynamics calculations. I focus on the experience of the UKQCD collaboration. I describe an attempt to use a relational database to store part of the data produced by a lattice QCD calculation.
hep-lat/0505005
Parallel Programming with Matrix Distributed Processing
hep-lat cs.CE physics.comp-ph
Matrix Distributed Processing (MDP) is a C++ library for fast development of efficient parallel algorithms. It constitutes the core of FermiQCD. MDP enables programmers to focus on algorithms, while parallelization is dealt with automatically and transparently. Here we present a brief overview of MDP and examples of applications in Computer Science (Cellular Automata), Engineering (PDE Solver) and Physics (Ising Model).
hep-lat/9808001
Genetic Algorithm for SU(N) gauge theory on a lattice
hep-lat cs.NE
An algorithm is proposed for the simulation of pure SU(N) lattice gauge theories based on Genetic Algorithms (GAs). The main difference between GAs and Metropolis methods (MPs) is that GAs treat a population of points at once, while MPs treat only one point in the search space. This provides GAs with information about the assortment of candidate solutions as well as their fitness, producing better solutions. We apply GAs to SU(2) pure gauge theory on a 2-dimensional lattice and show that the results are consistent with those given by MPs and heatbath methods (HBs). In particular, GAs thermalize notably faster than simple MPs.
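A minimal GA sketch on a toy action (a one-dimensional Ising-like energy, not the SU(2) plaquette action of the paper): a population of configurations evolves by truncation selection, one-point crossover, and mutation. Note that sampling the correct Boltzmann distribution would require an appropriate acceptance step, which this sketch omits.

    import random

    L = 16
    def action(cfg):                      # toy ferromagnetic energy
        return -sum(cfg[i] * cfg[(i + 1) % L] for i in range(L))

    def crossover(a, b):
        cut = random.randrange(1, L)
        return a[:cut] + b[cut:]

    def mutate(cfg, rate=0.05):
        return [-s if random.random() < rate else s for s in cfg]

    pop = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(40)]
    for _ in range(200):
        pop.sort(key=action)
        parents = pop[:20]                               # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(20)]
    print(action(min(pop, key=action)))   # approaches the ground state, -16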
hep-lat/9809068
Genetic Algorithm for SU(2) Gauge Theory on a 2-dimensional Lattice
hep-lat cs.NE
An algorithm is proposed for the simulation of pure SU(N) lattice gauge theories based on Genetic Algorithms (GAs). We apply GAs to SU(2) pure gauge theory on a 2-dimensional lattice and show that the results, the action per plaquette and Wilson loops, are consistent with those given by the Metropolis method (MP) and the heatbath method (HB). In particular, GAs thermalize notably faster than simple MPs.
math-ph/0211067
Method of Additional Structures on the Objects of a Monoidal Kleisli Category as a Background for Information Transformers Theory
math-ph cs.MA math.CT math.MP
Category theory provides a compact method of encoding mathematical structures in a uniform way, thereby enabling the use of general theorems on, for example, equivalence and universal constructions. In this article we develop the method of additional structures on the objects of a monoidal Kleisli category. It is proposed to consider any uniform class of information transformers (ITs) as a family of morphisms of a category that satisfy a certain set of axioms. This makes it possible to study in a uniform way different types of ITs, e.g., statistical, multivalued, and fuzzy ITs. The proposed axioms define a category of ITs as a monoidal category that contains a subcategory (of deterministic ITs) with finite products. In addition, it is shown that many categories of ITs can be constructed as Kleisli categories with additional structures.
math-ph/0512026
MIMO Channel Correlation in General Scattering Environments
math-ph cs.IT math.IT math.MP
This paper presents an analytical model for the fading channel correlation in general scattering environments. In contrast to existing correlation models, our new approach treats the scattering environment as non-separable and models it using a bi-angular power distribution. The bi-angular power distribution is parameterized by the mean departure and arrival angles, the angular spreads of the univariate angular power distributions at the transmitter and receiver apertures, and a third parameter, the covariance between transmit and receive angles, which captures the statistical interdependency between the angular power distributions at the transmitter and receiver apertures. When this third parameter is zero, the new model reduces to the well-known "Kronecker" model. Using the proposed model, we show that the Kronecker model is a good approximation to the actual channel when the scattering channel consists of a single scattering cluster. In the presence of multiple remote scattering clusters, we show that the Kronecker model overestimates the performance by artificially increasing the number of multipaths in the channel.
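The Kronecker special case mentioned above is easy to write down numerically (the standard model, with made-up correlation values): the channel covariance separates into a transmit part and a receive part, and a correlated channel realization is drawn by coloring an i.i.d. Gaussian matrix on both sides.

    import numpy as np

    R_t = np.array([[1.0, 0.4], [0.4, 1.0]])     # transmit-side correlation
    R_r = np.array([[1.0, 0.7], [0.7, 1.0]])     # receive-side correlation

    def sqrtm_psd(R):
        """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
        w, V = np.linalg.eigh(R)
        return V @ np.diag(np.sqrt(w)) @ V.T

    G = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)   # i.i.d. Gaussian
    H = sqrtm_psd(R_r) @ G @ sqrtm_psd(R_t)                  # Kronecker channel
    R_full = np.kron(R_t, R_r)       # covariance of vec(H), up to scale
    print(R_full.shape)              # -> (4, 4)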
math-ph/9903036
Numerically Invariant Signature Curves
math-ph cs.CV math.MP
Corrected versions of the numerically invariant expressions for the affine and Euclidean signature of a planar curve proposed by E. Calabi et al. are presented. The new formulas are valid for fine but otherwise arbitrary partitions of the curve. We also give numerically invariant expressions for the four differential invariants parametrizing the three-dimensional version of the Euclidean signature curve, namely the curvature, the torsion, and their derivatives with respect to arc length.
math/0005058
An information-spectrum approach to joint source-channel coding
math.PR cs.IT math.IT
Given a general source $\mathbf{V}=\{V^n\}_{n=1}^{\infty}$ with {\em countably infinite} source alphabet and a general channel $\mathbf{W}=\{W^n\}_{n=1}^{\infty}$ with arbitrary {\em abstract} channel input and output alphabets, we study the joint source-channel coding problem from the information-spectrum point of view. First, we generalize Feinstein's lemma (direct part) and Verd\'u-Han's lemma (converse part) so as to be applicable to the general joint source-channel coding problem. Based on these lemmas, we establish a sufficient condition as well as a necessary condition for the source $\mathbf{V}$ to be reliably transmissible over the channel $\mathbf{W}$ with asymptotically vanishing probability of error. It is shown that our sufficient condition coincides with the sufficient condition derived by Vembu, Verd\'u and Steinberg, whereas our necessary condition is much stronger than the necessary condition derived by them. Actually, our necessary condition coincides with our sufficient condition if we disregard some asymptotically vanishing terms appearing in those conditions. Also, it is shown that the {\em Separation Theorem} in the generalized sense always holds. In addition, we demonstrate a sufficient condition as well as a necessary condition for $\varepsilon$-transmissibility ($0\le \varepsilon <1$). Finally, the separation theorem of the traditional standard form is shown to hold for the class of sources and channels that satisfy the (semi-)strong converse property.
math/0005281
Connections between Linear Systems and Convolutional Codes
math.OC cs.IT math.IT
The article reviews different definitions for a convolutional code which can be found in the literature. The algebraic differences between the definitions are worked out in detail. It is shown that bi-infinite support systems are dual to finite-support systems under Pontryagin duality. In this duality the dual of a controllable system is observable and vice versa. Uncontrollability can occur only if there are bi-infinite support trajectories in the behavior, so finite and half-infinite-support systems must be controllable. Unobservability can occur only if there are finite support trajectories in the behavior, so bi-infinite and half-infinite-support systems must be observable. It is shown that the different definitions for convolutional codes are equivalent if one restricts attention to controllable and observable codes.
math/0006233
Algorithmic Statistics
math.ST cs.IT cs.LG math.IT math.PR physics.data-an stat.TH
While Kolmogorov complexity is the accepted absolute measure of information content of an individual finite object, a similarly absolute notion is needed for the relation between an individual data sample and an individual model summarizing the information in the data, for example, a finite set (or probability distribution) where the data sample typically came from. The statistical theory based on such relations between individual objects can be called algorithmic statistics, in contrast to classical statistical theory that deals with relations between probabilistic ensembles. We develop the algorithmic theory of statistic, sufficient statistic, and minimal sufficient statistic. This theory is based on two-part codes consisting of the code for the statistic (the model summarizing the regularity, the meaningful information, in the data) and the model-to-data code. In contrast to the situation in probabilistic statistical theory, the algorithmic relation of (minimal) sufficiency is an absolute relation between the individual model and the individual data sample. We distinguish implicit and explicit descriptions of the models. We give characterizations of algorithmic (Kolmogorov) minimal sufficient statistic for all data samples for both description modes--in the explicit mode under some constraints. We also strengthen and elaborate earlier results on the ``Kolmogorov structure function'' and ``absolutely non-stochastic objects''--those rare objects for which the simplest models that summarize their relevant information (minimal sufficient statistics) are at least as complex as the objects themselves. We demonstrate a close relation between the probabilistic notions and the algorithmic ones.
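The two-part-code idea can be rendered schematically as follows (standard notation; the precise conditions and constants are in the paper):

    % A finite set S containing x is an algorithmic sufficient statistic for x
    % when the two-part description is as short as the best one-part description:
    \[
      K(S) + \log_2 |S| \;=\; K(x) + O(1),
    \]
    % with K(S) the code for the model (the regularity, the meaningful
    % information) and log_2 |S| the model-to-data code singling out x inside S.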
math/0009018
Critical Behavior in Lossy Source Coding
math.PR cs.IT math.IT
The following critical phenomenon was recently discovered. When a memoryless source is compressed using a variable-length fixed-distortion code, the fastest convergence rate of the (pointwise) compression ratio to the optimal $R(D)$ bits/symbol is either $O(\sqrt{n})$ or $O(\log n)$. We show it is always $O(\sqrt{n})$, except for discrete, uniformly distributed sources.
math/0010173
Hot-pressing process modeling for medium density fiberboard (MDF)
math.NA cs.CE
In this paper we present a numerical solution for the mathematical modeling of the hot-pressing process applied to medium density fiberboard. The model is based on the work of Humphrey [82], Humphrey and Bolton [89], and Carvalho and Costa [98], with some modifications and extensions in order to take into account mainly the convective effects on the phase-change term and also a conservative numerical treatment of the resulting system of partial differential equations.