Columns: _id (string, 4-10 chars), text (string, 0-18.4k chars), title (string, 0-8.56k chars)
d14967283
Lexical information for South African Bantu languages is not readily available in the form of machine-readable lexicons. At present the availability of lexical information is restricted to a variety of paper dictionaries. These dictionaries display considerable diversity in the organisation and representation of data. In order to proceed towards the development of reusable and suitably standardised machine-readable lexicons for these languages, a data model for lexical entries becomes a prerequisite. In this study the general purpose model as developed by Bell and Bird (2000) is used as a point of departure. Firstly, the extent to which the Bell and Bird (2000) data model may be applied to and modified for the above-mentioned languages is investigated. Initial investigations indicate that modification of this data model is necessary to make provision for the specific requirements of lexical entries in these languages. Secondly, a data model in the form of an XML DTD for the languages in question, based on our findings regarding (Bell & Bird, 2000) and (Weber, 2002), is presented. Included in this model are additional particular requirements for complete and appropriate representation of linguistic information as identified in the study of available paper dictionaries.
Towards machine-readable lexicons for South African Bantu languages
d4656111
This paper presents a novel approach to the task of temporal text classification combining text ranking and probability for the automatic dating of historical texts. The method was applied to three historical corpora: an English, a Portuguese and a Romanian corpus. It obtained performance ranging from 83% to 93% accuracy, using a fully automated approach with very basic features.
Temporal Text Ranking and Automatic Dating of Texts
d8037559
We present a Portuguese↔English hybrid deep MT system based on an analysis-transfer-synthesis architecture, with transfer being done at the level of deep syntax, a level that already includes a great deal of semantic information. The system received a few months of development, but its performance is already similar to that of baseline phrase-based MT, when evaluated using BLEU, and surpasses the baseline under human qualitative assessment.
Bootstrapping a hybrid deep MT system
d15749909
We present an approach for automatic triage of message posts in the ReachOut.com mental health forum, which was a shared task at the 2016 Computational Linguistics and Clinical Psychology (CLPsych) workshop. This effort is aimed at providing the trained moderators of ReachOut.com with a systematic triage of forum posts, enabling them to more efficiently support the young users aged 14-25 communicating with each other about their issues. We use different features and classifiers to predict the users' mental health states, marked as green, amber, red, and crisis. Our results show that random forests have significant success over our baseline multi-class SVM classifier. In addition, we perform feature importance analysis to characterize key features in identification of the critical posts.
Text Analysis and Automatic Triage of Posts in a Mental Health Forum
d218974090
d2852730
Ranking tweets is a fundamental task to make it easier to distill the vast amounts of information shared by users. In this paper, we explore the novel idea of ranking tweets on a topic using heterogeneous networks. We construct heterogeneous networks by harnessing cross-genre linkages between tweets and semantically-related web documents from formal genres, and inferring implicit links between tweets and users. To rank tweets effectively by capturing the semantics and importance of different linkages, we introduce Tri-HITS, a model to iteratively propagate ranking scores across heterogeneous networks. We show that integrating both formal genre and inferred social networks with tweet networks produces a higher-quality ranking than the tweet networks alone. Title and Abstract in Chinese: 基于异构网络的微信息排序
Tweet Ranking Based on Heterogeneous Networks
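As a rough illustration of the score propagation the abstract above describes, here is a minimal HITS-style mutual-reinforcement loop over a bipartite tweet-document graph. The adjacency matrix, the toy values, and the omission of the user network are all simplifying assumptions; the paper's actual Tri-HITS model propagates over three node types.

```python
import numpy as np

def hits_bipartite(A, n_iter=50):
    """HITS-style mutual reinforcement between tweet scores (rows of A)
    and document scores (columns of A) on a bipartite adjacency matrix."""
    tweets = np.ones(A.shape[0])
    docs = np.ones(A.shape[1])
    for _ in range(n_iter):
        tweets = A.dot(docs)      # a tweet gains score from its linked documents
        tweets /= tweets.sum()
        docs = A.T.dot(tweets)    # a document gains score from its linked tweets
        docs /= docs.sum()
    return tweets, docs

# toy graph: tweet 1 is linked to both web documents, tweets 0 and 2 to one each
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
tweet_scores, doc_scores = hits_bipartite(A)
```

With this toy graph, the best-connected tweet (row 1) ends up with the highest score, which is the intuition the ranking model builds on.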
d38708803
Statistical Machine Translation (SMT) relies on the availability of rich parallel corpora. However, in the case of under-resourced languages or some specific domains, parallel corpora are not readily available. This leads to under-performing machine translation systems in those sparse-data settings. To overcome the low availability of parallel resources, the machine translation community has recognized the potential of using comparable resources as training data. However, most efforts have focused on European languages, with fewer on Middle Eastern languages. In this study, we report comparable corpora created from news articles for the English-{Arabic, Persian, Urdu} language pairs. The data has been collected over a period of a year and covers the Arabic, Persian and Urdu languages. Furthermore, using English as a pivot language, comparable corpora that involve more than one language pair can be created, e.g. English-Arabic-Persian, English-Arabic-Urdu, English-Urdu-Persian, etc. Upon request the data can be provided for research purposes.
Creation of Comparable Corpora for English-{Urdu, Arabic, Persian}
d40110441
The South Asian languages are well-known for their replicative words.
Dealing with Replicative Words in Hindi for Machine Translation to English
d7672097
We present a machine translation framework that can incorporate arbitrary features of both input and output sentences. The core of the approach is a novel decoder based on lattice parsing with quasi-synchronous grammar (Smith and Eisner, 2006), a syntactic formalism that does not require source and target trees to be isomorphic. Using generic approximate dynamic programming techniques, this decoder can handle "non-local" features. Similar approximate inference techniques support efficient parameter estimation with hidden variables. We use the decoder to conduct controlled experiments on a German-to-English translation task, to compare lexical phrase, syntax, and combined models, and to measure effects of various restrictions on non-isomorphism.
Feature-Rich Translation by Quasi-Synchronous Lattice Parsing
d251402080
Entity linking in dialogue is the task of mapping entity mentions in utterances to a target knowledge base. Prior work on entity linking has mainly focused on well-written articles such as Wikipedia, annotated newswire, or domain-specific datasets. We extend the study of entity linking to open domain dialogue by presenting the OPENEL corpus: an annotated multi-domain corpus for linking entities in natural conversation to Wikidata. Each dialogic utterance, in 179 dialogues over 12 topics from the original EDINA corpus, has been annotated for entities realized by definite referring expressions as well as anaphoric forms such as he, she, it and they. OPENEL thus supports training and evaluation of entity linking in open-domain dialogue, as well as analysis of the effect of using dialogue context and anaphora resolution in model training. It can also be used for fine-tuning a coreference resolution algorithm. To the best of our knowledge, this is the first substantial entity linking corpus publicly available for open-domain dialogue. We also establish baselines for named entity linking in open domain conversation using several existing entity linking systems. We find that the Transformer-based system, Flair + BLINK, has the best performance with a 0.65 F1 score. Our results show that dialogue context is extremely beneficial for entity linking in conversations, with Flair + BLINK achieving an F1 of 0.61 without discourse context. These results also demonstrate the remaining performance gap between the baselines and human performance, highlighting the challenges of entity linking in open-domain dialogue, and suggesting many avenues for future research using OPENEL.
OPENEL: An Annotated Corpus for Entity Linking and Discourse in Open Domain Dialogue
d174800820
Language is gendered if the context surrounding a mention is suggestive of a particular binary gender for that mention. Detecting the different ways in which language is gendered is an important task since gendered language can bias NLP models (such as for coreference resolution). This task is challenging since genderedness is often expressed in subtle ways. Existing approaches need considerable annotation efforts for each language, domain, and author, and often require handcrafted lexicons and features. Additionally, these approaches do not provide a quantifiable measure of how gendered the text is, nor are they applicable at the fine-grained mention level. In this paper, we use existing NLP pipelines to automatically annotate the gender of mentions in text. On corpora labeled using this method, we train a supervised classifier to predict the gender of any mention from its context and evaluate it on unseen text. The model confidence for a mention's gender can be used as a proxy to indicate the level of genderedness of the context. We test this gendered language detector on movie summaries, movie reviews, news articles, and fiction novels, achieving an AUC-ROC of up to 0.71, and observe that the model predictions agree with human judgments collected for this task. We also provide examples of detected gendered sentences from the aforementioned domains.
GenderQuant: Quantifying Mention-Level Genderedness
d12070326
We describe our systems for the SemEval 2014 Task 5: L2 writing assistant, where a system has to find appropriate translations of L1 segments in a given L2 context. We participated in three out of four possible language pairs (English-Spanish, French-English and Dutch-English) and achieved the best performance for all our submitted systems according to word-based accuracy. Our models are based on phrase-based machine translation systems and combine topical context information and language model scoring.
UEdin: Translating L1 Phrases in L2 Context using Context-Sensitive SMT
d11996398
The syntactic structure of a sentence often manifests quite clearly the predicate-argument structure and relations of grammatical subordination. But scope dependencies are not so transparent. As a result, many systems for representing the semantics of sentences have ignored scoping or generated scopings with mechanisms that have often been inexplicit as to the range of scopings they choose among or profligate in the scopings they allow. This paper presents, along with proofs of some of its important properties, an algorithm that generates scoped semantic forms from unscoped expressions encoding predicate-argument structure. The algorithm is not profligate as are those based on permutation of quantifiers, and it can provide a solid foundation for computational solutions where completeness is sacrificed for efficiency and heuristic efficacy.
AN ALGORITHM FOR GENERATING QUANTIFIER SCOPINGS
d219301590
d60779225
A Linguistic and Computational Analysis of the German "Third Construction"
d10284738
Commonly used grammars which describe natural languages (e.g. ATN, Metamorphosis Grammars) can hardly be applied to describing highly inflectional languages. So I propose a grammar called the grammar with natural context, which takes into consideration properties of highly inflectional languages (e.g. Polish) as well as structural languages (e.g. English). I introduce its normal form.
L'IDEE DE GRAMMAIRE AVEC LE CONTEXTE NATUREL (The Idea of Grammar with Natural Context)
d32508655
We describe different architectures that combine rule-based and statistical machine translation (RBMT and SMT) engines into hybrid systems. One of them allows many existing MT engines to be combined in a multi-engine setup, which can be done under the control of a decoder for SMT. Another architecture uses lexical entries induced via SMT technology to be included in a rule-based system. For all these approaches prototypical implementations have been done within the EuroMatrix project, and some indicative results from the recent evaluation campaign are given, which help to highlight the strengths and weaknesses of these approaches.
Hybrid Architectures for Multi-Engine Machine Translation
d10771210
This paper describes the use of decision trees to learn lexical information for the enrichment of our natural language processing (NLP) system. Our approach to lexical learning differs from other approaches in the field in that our machine learning techniques exploit a deep knowledge understanding system. After the introduction we present the overall architecture of our lexical learning module. In the following sections we present a showcase of lexical learning using decision trees: we learn verbs that take a human subject in Spanish and French.
TALN 2003 Using decision trees to learn lexical information in a linguistics-based NLP system
d8666396
We present an implemented model of story understanding and apply it to the understanding of a children's story. We argue that understanding a story consists of building multirepresentation models of the story and that story models are efficiently constructed using a satisfiability solver. We present a computer program that contains multiple representations of commonsense knowledge, takes a narrative as input, transforms the narrative and representations of commonsense knowledge into a satisfiability problem, runs a satisfiability solver, and produces models of the story as output. The narrative, models, and representations are expressed in the language of Shanahan's event calculus.
Story understanding through multi-representation model construction
d11394500
Negative life events are an important cause of depression, such as the death of a family member, quarreling with one's spouse, being fired by the boss, or being blamed by a teacher.
應用詞向量於語言樣式探勘之研究 Mining Language Patterns Using Word Embeddings
d202635721
This paper applies BERT to ad hoc document retrieval on news articles, which requires addressing two challenges: relevance judgments in existing test collections are typically provided only at the document level, and documents often exceed the length that BERT was designed to handle. Our solution is to aggregate sentence-level evidence to rank documents. Furthermore, we are able to leverage passage-level relevance judgments fortuitously available in other domains to fine-tune BERT models that are able to capture cross-domain notions of relevance, and can be directly used for ranking news articles. Our simple neural ranking models achieve state-of-the-art effectiveness on three standard test collections.
Cross-Domain Modeling of Sentence-Level Evidence for Document Retrieval
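The sentence-to-document aggregation described above can be sketched as a weighted sum of the top-scoring sentences. The weights and per-sentence scores below are illustrative assumptions, not the paper's tuned values or actual BERT outputs.

```python
def aggregate_sentence_scores(sentence_scores, weights=(1.0, 0.5, 0.25)):
    """Score a document as a weighted sum of its top-k sentence-level
    relevance scores, where k = len(weights)."""
    top = sorted(sentence_scores, reverse=True)[:len(weights)]
    return sum(w * s for w, s in zip(weights, top))

# hypothetical per-sentence relevance scores for two documents
doc_a = [0.9, 0.2, 0.8, 0.1]   # two strong sentences
doc_b = [0.5, 0.5, 0.5, 0.5]   # uniformly mediocre sentences
score_a = aggregate_sentence_scores(doc_a)
score_b = aggregate_sentence_scores(doc_b)
```

Documents with a few highly relevant sentences outrank uniformly mediocre ones, which matches the intuition behind aggregating sentence-level evidence.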
d236460284
d219302945
d219301751
d9084229
This paper describes the RenTAL system, which enables sharing resources between the LTAG and HPSG formalisms by a method of grammar conversion from an FB-LTAG grammar to a strongly equivalent HPSG-style grammar. The system is applied to the latest version of the XTAG English grammar. Experimental results show that the obtained HPSG-style grammar successfully worked with an HPSG parser, and achieved a drastic speed-up over an LTAG parser. The system enables sharing not only grammars and lexicons but also parsing techniques. In this paper, we use the term LTAG to refer to FB-LTAG where this causes no confusion.
Resource sharing among HPSG and LTAG communities by a method of grammar conversion from FB-LTAG to HPSG
d221373773
d348276
Differences between large-scale historical population archives and small decentralized databases can be used to improve data quality and record connectedness in both types of databases. A parser is developed to account for differences in syntax and data representation models. A matching procedure is described to discover records from different databases referring to the same historical event. The problem of verification without reliable benchmark data is addressed by matching on a subset of record attributes and measuring support for the match using a different subset of attributes. An application of the matching procedure for comparison of family trees is discussed. A visualization tool is described to present an interactive overview of comparison results.
Comparison between historical population archives and decentralized databases
d7751200
Data sparseness is one of the factors that degrade statistical machine translation (SMT). Existing work has shown that using morpho-syntactic information is an effective solution to data sparseness. However, fewer efforts have been made for Chinese-to-English SMT using English morpho-syntactic analysis. We found that while English is a language with less inflection, using English lemmas in training can significantly improve the quality of word alignment, which leads to better translation performance. We carried out comprehensive experiments on multiple training datasets of varied sizes to prove this. We also propose a new and effective linear interpolation method to integrate multiple homologous features of translation models.
Boosting Statistical Machine Translation by Lemmatization and Linear Interpolation
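The linear interpolation of homologous translation features mentioned above can be sketched as a weighted mixture of probability tables. The phrase tables and interpolation weights below are made-up examples, not the paper's models or tuned weights.

```python
def interpolate_tables(tables, lambdas):
    """Linear interpolation of homologous translation features:
    p(e|f) = sum_i lambda_i * p_i(e|f), treating missing entries as 0."""
    assert abs(sum(lambdas) - 1.0) < 1e-9  # mixture weights must sum to 1
    keys = set().union(*tables)
    return {k: sum(lam * t.get(k, 0.0) for lam, t in zip(lambdas, tables))
            for k in keys}

# two hypothetical phrase tables giving p(e|f) for the same phrase pairs
p_surface = {("ta", "he"): 0.6, ("ta", "him"): 0.4}
p_lemma   = {("ta", "he"): 0.8, ("ta", "him"): 0.2}
mixed = interpolate_tables([p_surface, p_lemma], [0.5, 0.5])
```

The interpolated table remains a probability distribution over the same translation options, so it can replace either component table in the decoder.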
d15408203
Content localisation via machine translation (MT) is a sine qua non, especially for international online business. While most applications utilise rule-based solutions due to the lack of suitable in-domain parallel corpora for statistical MT (SMT) training, in this paper we investigate the possibility of applying SMT where huge amounts of monolingual content only are available. We describe a case study where an analysis of a very large amount of monolingual online trading data from eBay is conducted by ALS with a view to reducing this corpus to the most representative sample in order to ensure the widest possible coverage of the total data set. Furthermore, minimal yet optimal sets of sentences/words/terms are selected for generation of initial translation units for future SMT system-building.
Monolingual Data Optimisation for Bootstrapping SMT Engines
d6098592
In this paper, we discuss a method to screen inconsistencies in ontologies by applying natural language processing (NLP) techniques, in particular those used for word sense disambiguation (WSD). In the database research field, it is claimed that queries over target ontologies should play a significant role because they represent every aspect of the terms described in each ontology. According to (Calvanese et al., 2001), considering the global and the local ontologies, the terms in the global ontology can be viewed as queries over the local ontology, and the mapping between the global and the local ontologies is given by associating each term in the global ontology with a view. On the other hand, ontology screening systems should be able to take advantage of some popular techniques for WSD, which is supposed to decide the right sense in which the target word is used in a specific context. We present several examples of inconsistencies in ontologies with the aid of DAML+OIL notation (DAML+OIL, 2001), and propose that WSD can be one of the promising methods to screen such inconsistencies.
A Proposal for Screening Inconsistencies in Ontologies based on Query Languages using WSD
d218973790
d232021960
d8058321
Acceptance, accessibility, and usability data from a series of studies of a series of applications suggest that most users readily accept responsive virtual characters as valid conversational partners. By responsive virtual characters we mean full-body animated, conversant, realistic characters with whom the user interacts via natural language and who exhibit emotional, social, gestural, and cognitive intelligence. We have developed applications for medical clinicians interviewing pediatric patients, field interviewers learning about informed consent procedures, and telephone interviewers seeking to obtain cooperation from respondents on federally-funded surveys. Usage data from informational kiosks using the same underlying technology (e.g., at conference exhibits) provide additional corroboration. Our evidence suggests the technology is both sufficient to actively engage users and appropriate for consideration of use in training, assessment, and marketing environments.
Usability and Acceptability Studies of Conversational Virtual Human Technology
d2718926
eGIFT (Extracting Gene Information From Text) is an intelligent system which is intended to aid scientists in surveying literature relevant to genes of interest. From a gene-specific set of abstracts retrieved from PubMed, eGIFT determines the most important terms associated with the given gene. Annotators using eGIFT can quickly find articles describing gene functions, and individual scientists surveying the results of high-throughput experiments can quickly extract information important to their hits.
Mining the Biomedical Literature for Genic Information
d9186858
We propose a novel unsupervised method for separating out distinct authorial components of a document. In particular, we show that, given a book artificially "munged" from two thematically similar biblical books, we can separate out the two constituent books almost perfectly. This allows us to automatically recapitulate many conclusions reached by Bible scholars over centuries of research. One of the key elements of our method is exploitation of differences in synonym choice by different authors.
Unsupervised Decomposition of a Document into Authorial Components
d7173447
Accurate high-coverage translation is a vital component of reliable cross language information access (CLIA) systems. While machine translation (MT) has been shown to be effective for CLIA tasks in previous evaluation workshops, it is not well suited to specialized tasks where domain specific translations are required. We demonstrate that effective query translation for CLIA can be achieved in the domain of cultural heritage (CH). This is performed by augmenting a standard MT system with domainspecific phrase dictionaries automatically mined from the online Wikipedia. Experiments using our hybrid translation system with sample query logs from users of CH websites demonstrate a large improvement in the accuracy of domain specific phrase detection and translation.
Domain-Specific Query Translation for Multilingual Information Access using Machine Translation Augmented With Dictionaries Mined from Wikipedia
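A minimal sketch of the hybrid idea above: consult a mined phrase dictionary first and fall back to a generic MT function for everything else. The dictionary entries and the `mt` callback are hypothetical stand-ins, not the paper's actual pipeline.

```python
def translate_query(tokens, phrase_dict, mt):
    """Greedy longest-match translation against a mined phrase
    dictionary; tokens not covered fall back to the generic MT function."""
    out, i = [], 0
    while i < len(tokens):
        for j in range(len(tokens), i, -1):      # try the longest span first
            phrase = " ".join(tokens[i:j])
            if phrase in phrase_dict:
                out.append(phrase_dict[phrase])  # domain-specific translation
                i = j
                break
        else:
            out.append(mt(tokens[i]))            # fallback: generic MT
            i += 1
    return " ".join(out)

# hypothetical mined dictionary; str.upper stands in for a real MT system
mined = {"mona lisa": "la gioconda"}
result = translate_query(["the", "mona", "lisa"], mined, str.upper)
```

Longest-match-first ensures multiword domain phrases are translated as units instead of word by word, which is where generic MT typically fails.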
d8954558
This paper describes the participation of the SINAI research group in the 2013 edition of the International Workshop SemEval. The SINAI research group has submitted two systems, which cover the two main approaches in the field of sentiment analysis: supervised and unsupervised.
SINAI: Machine Learning and Emotion of the Crowd for Sentiment Analysis in Microblogs
d14802675
The focus of this study is positive feedback in one-on-one tutoring, its computational modeling, and its application to the design of more effective Intelligent Tutoring Systems. A data collection of tutoring sessions in the domain of basic Computer Science data structures has been carried out. A methodology based on multiple regression is proposed, and some preliminary results are presented. A prototype Intelligent Tutoring System on linked lists has been developed and deployed in a college-level Computer Science class.
The role of positive feedback in Intelligent Tutoring Systems
d21730483
Language disturbances can be a diagnostic marker for neurodegenerative diseases, such as Alzheimer's disease, at earlier stages. Connected speech analysis provides a non-invasive and easy-to-assess measure for determining aspects of the severity of language impairment. In this paper we focus on the development of a new corpus consisting of audio recordings of picture descriptions (including transcriptions) of the Cookie-theft, produced by Swedish speakers. The speech elicitation procedure provides an established method of obtaining highly constrained samples of connected speech that can allow us to study the intricate interactions between various linguistic levels and cognition. We chose the Cookie-theft picture since it is a standardized test that has been used in various studies in the past, and therefore comparisons can be made based on previous research. This type of picture description task might be useful for detecting subtle language deficits in patients with subjective and mild cognitive impairment. The resulting corpus is a new, rich and multi-faceted resource for the investigation of linguistic characteristics of connected speech, and a unique dataset for future research and experimentation in many areas, the study of language impairment in particular. The information in the corpus can also be combined and correlated with other collected data about the speakers, such as neuropsychological tests, brain physiology and cerebrospinal fluid markers, as well as imaging.
A Swedish Cookie-Theft Corpus
d219202134
d11473195
This work concerns automatic topic segmentation of email conversations. We present a corpus of email threads manually annotated with topics, and evaluate annotator reliability. To our knowledge, this is the first such email corpus. We show how the existing topic segmentation models (i.e., Lexical Chain Segmenter (LCSeg) and Latent Dirichlet Allocation (LDA)) which are solely based on lexical information, can be applied to emails. By pointing out where these methods fail and what any desired model should consider, we propose two novel extensions of the models that not only use lexical information but also exploit finer level conversation structure in a principled way. Empirical evaluation shows that LCSeg is a better model than LDA for segmenting an email thread into topical clusters and incorporating conversation structure into these models improves the performance significantly.
Exploiting Conversation Structure in Unsupervised Topic Segmentation for Emails
d6907782
This paper presents an unsupervised method for the resolution of lexical ambiguity of nouns. The method relies on the topological structure of the noun taxonomy of WordNet, where a notion of semantic distance is defined. An unsupervised semantic tagger, based on the above measure, is evaluated on a hand-annotated portion of the British National Corpus and compared with a supervised approach based on the Maximum Entropy Model.
A similarity measure for unsupervised semantic disambiguation
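A minimal sketch of a path-based semantic distance over a noun taxonomy, in the spirit of the measure described above. The toy taxonomy and the simple edge-counting scheme are illustrative assumptions, not the paper's exact definition over WordNet.

```python
def ancestors(taxonomy, node):
    """Chain from a node up to the root in an is-a taxonomy,
    given as a child -> parent dict."""
    chain = [node]
    while node in taxonomy:
        node = taxonomy[node]
        chain.append(node)
    return chain

def path_distance(taxonomy, a, b):
    """Edge count between two nouns through their lowest common
    ancestor; a smaller distance means the nouns are semantically closer."""
    anc_a, anc_b = ancestors(taxonomy, a), ancestors(taxonomy, b)
    for i, node in enumerate(anc_a):
        if node in anc_b:
            return i + anc_b.index(node)
    return None  # no common ancestor

# tiny toy taxonomy standing in for the WordNet noun hierarchy
toy = {"dog": "canine", "cat": "feline", "canine": "animal", "feline": "animal"}
```

An unsupervised tagger can then pick, for an ambiguous noun, the sense whose taxonomy node minimizes the distance to the senses of surrounding context words.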
d219308082
d45576850
In Natural Language Processing, and semantic analysis in particular, color information may be important in order to properly process textual information (word senses, disambiguation and indexing). More specifically, knowing which color or colors are generally associated with a term is a crucial piece of information. In this paper, we explore how crowdsourcing, through a game with a purpose (GWAP), can be an adequate strategy to collect such lexico-semantic data. Introduction: Color, as an element of our daily lives, is an interesting datum for Natural Language Processing. Indeed, supplying a system dedicated to the semantic analysis of texts with information about word-color associations, in addition to the classical data (hypernyms, part-of relations, semantic roles, etc.), could greatly improve its performance. Although it is a marginal observation with respect to the subject of this article, there is unquestionably a very strong link between colors and emotions. Thus, while associations are often observed between colors and abstract terms relating to emotions (such as fear, anger, danger, hope, etc.), this is even more true for terms designating concrete things (such as the sky, a lion, the sea, snow, etc.). In psychology, many studies concern the associations between words and colors and their impact on communication (verbal or non-verbal).
Saif (2011a) shows that the notion of color is of capital importance for the quality of the message delivered, whether in marketing (Sable and Akcay, 2010), in website design (Meier, 1988; Pribadi et al., 1990), or for characterizing information visually (Christ, 1975; Card et al., 1999). Color is unquestionably a datum that carries a great deal of meaning. But for many researchers, the meanings attributed to colors depend on several factors. Luscher (1969), the psychotherapist behind the eponymous Lüscher color test, a tool for evaluating a person's emotional state from their color preferences, asserts that a person's choice of one color or another directly depends on (indeed reveals) their emotions and psychological state. Child et al. (1968) and Ou et al. (2011) show that color preferences vary with age and sex. Moreover, for some colors, there are lexical variations across languages and cultures (for example, Turkish and Hungarian have two different words for red).
21ème Traitement Automatique des Langues Naturelles
d15315010
The importance of good weighting methods in information retrieval -- methods that stress the most useful features of a document or query representative -- is examined. Evidence is presented that good weighting methods are more important than the feature selection process, and it is suggested that the two need to go hand-in-hand in order to be effective. The paper concludes with a method for learning a good weight for a term based upon the characteristics of that term.
The Importance of Proper Weighting Methods
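One classic instance of the term weighting discussed above is tf-idf, which stresses terms that are frequent in a document but rare across the collection. The sketch below is illustrative and is not necessarily the weighting scheme the paper itself proposes.

```python
import math

def tfidf_weights(docs):
    """For each document (a list of terms), weight each term by
    tf * idf: term frequency times log of inverse document frequency."""
    n = len(docs)
    df = {}                                   # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {term: doc.count(term) * math.log(n / df[term]) for term in set(doc)}
        weights.append(w)
    return weights

# toy collection: "query" appears everywhere, so it carries no weight
corpus = [["query", "term", "weighting"], ["query", "expansion"]]
w = tfidf_weights(corpus)
```

A term present in every document gets weight zero, while collection-rare terms dominate, which is exactly the "stress the most useful features" behavior the abstract argues for.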
d53082628
Multilingual knowledge graphs (KGs) such as DBpedia and YAGO contain structured knowledge of entities in several distinct languages, and they are useful resources for cross-lingual AI and NLP applications. Cross-lingual KG alignment is the task of matching entities with their counterparts in different languages, which is an important way to enrich the crosslingual links in multilingual KGs. In this paper, we propose a novel approach for crosslingual KG alignment via graph convolutional networks (GCNs). Given a set of pre-aligned entities, our approach trains GCNs to embed entities of each language into a unified vector space. Entity alignments are discovered based on the distances between entities in the embedding space. Embeddings can be learned from both the structural and attribute information of entities, and the results of structure embedding and attribute embedding are combined to get accurate alignments. In the experiments on aligning real multilingual KGs, our approach gets the best performance compared with other embedding-based KG alignment approaches.
Cross-lingual Knowledge Graph Alignment via Graph Convolutional Networks
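The alignment-discovery step described above can be sketched as a nearest-neighbour search in the shared embedding space. The 2-d embeddings below are hypothetical; learning them with GCNs from structure and attributes is out of scope here.

```python
import numpy as np

def align_entities(emb_src, emb_tgt):
    """Match each source-language entity to its nearest target-language
    entity by Euclidean distance in a shared embedding space."""
    # pairwise distance matrix of shape (n_src, n_tgt) via broadcasting
    d = np.linalg.norm(emb_src[:, None, :] - emb_tgt[None, :, :], axis=-1)
    return d.argmin(axis=1)

# hypothetical 2-d embeddings of two entities in each language
src = np.array([[0.0, 1.0], [1.0, 0.0]])
tgt = np.array([[0.1, 0.9], [0.9, 0.1]])
matches = align_entities(src, tgt)
```

In practice one would rank all target candidates by distance (Hits@k) rather than take only the single nearest neighbour.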
d5805592
In Natural Language Generation, the task of attribute selection (AS) consists of determining the appropriate attribute-value pairs (or semantic properties) that represent the contents of a referring expression. Existing work on AS includes a wide range of algorithmic solutions to the problem, but the recent availability of corpora annotated with referring expressions data suggests that corpus-based AS strategies become possible as well. In this work we tentatively discuss a number of AS strategies using both semantic and surface information obtained from a corpus of this kind. Relying on semantic information, we attempt to learn both global and individual AS strategies that could be applied to a standard AS algorithm in order to generate descriptions found in the corpus. As an alternative, and perhaps less traditional approach, we also use surface information to build statistical language models of the referring expressions that are most likely to occur in the corpus, and let the model probabilities guide attribute selection.
Corpus-based Referring Expressions Generation
d1378655
This paper presents a model that extends semantic role labeling. Existing approaches independently analyze relations expressed by verb predicates or those expressed as nominalizations. However, sentences express relations via other linguistic phenomena as well. Furthermore, these phenomena interact with each other, thus restricting the structures they articulate. In this paper, we use this intuition to define a joint inference model that captures the inter-dependencies between verb semantic role labeling and relations expressed using prepositions. The scarcity of jointly labeled data presents a crucial technical challenge for learning a joint model. The key strength of our model is that we use existing structure predictors as black boxes. By enforcing consistency constraints between their predictions, we show improvements in the performance of both tasks without retraining the individual models.
A Joint Model for Extended Semantic Role Labeling
d372234
Monolingual alignment models have been shown to boost the performance of question answering systems by "bridging the lexical chasm" between questions and answers. The main limitation of these approaches is that they require semi-structured training data in the form of question-answer pairs, which is difficult to obtain in specialized domains or low-resource languages. We propose two inexpensive methods for training alignment models solely using free text, by generating artificial question-answer pairs from discourse structures. Our approach is driven by two representations of discourse: a shallow sequential representation, and a deep one based on Rhetorical Structure Theory. We evaluate the proposed model on two corpora from different genres and domains: one from Yahoo! Answers and one from the biology domain, and two types of non-factoid questions: manner and reason. We show that these alignment models trained directly from discourse structures imposed on free text improve performance considerably over an information retrieval baseline and a neural network language model trained on the same data.
Spinning Straw into Gold: Using Free Text to Train Monolingual Alignment Models for Non-factoid Question Answering
d7082252
One of the major challenges facing statistical machine translation is how to model differences in word order between languages. Although a great deal of research has focussed on this problem, progress is hampered by the lack of reliable metrics. Most current metrics are based on matching lexical items in the translation and the reference, and their ability to measure the quality of word order has not been demonstrated. This paper presents a novel metric, the LRscore, which explicitly measures the quality of word order by using permutation distance metrics. We show that the metric is more consistent with human judgements than other metrics, including the BLEU score. We also show that the LRscore can successfully be used as the objective function when training translation model parameters. Training with the LRscore leads to output which is preferred by humans. Moreover, the translations incur no penalty in terms of BLEU scores.
Reordering Metrics for MT
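The permutation distance metrics underlying the LRscore can be illustrated with the Kendall tau distance, a standard permutation distance that counts pairwise order disagreements between the system's word order and the reference order. This is an illustrative choice of metric, not necessarily the exact variant the paper uses:

```python
def kendall_tau_distance(perm):
    """Count pairs (i, j) with i < j whose relative order is inverted,
    taking the identity permutation as the reference order."""
    n = len(perm)
    return sum(
        1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j]
    )

def normalized_distance(perm):
    """Scale to [0, 1]: 0 = identical order, 1 = fully reversed."""
    n = len(perm)
    max_inv = n * (n - 1) // 2
    return kendall_tau_distance(perm) / max_inv

# Permutations mapping system word positions to reference positions.
monotone = normalized_distance([0, 1, 2, 3])  # no reordering: 0.0
reversed_ = normalized_distance([3, 2, 1, 0])  # fully reversed: 1.0
```

A metric like this scores word order directly, independently of lexical overlap with the reference, which is the gap the abstract identifies in BLEU-style metrics.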
d16684206
This paper describes an emotional speech database recorded for standard Basque. The database was recorded in the framework of a project whose goal was to develop an avatar; therefore, images corresponding to the expression of the different emotions were also needed, which is why an audiovisual database was developed. The database contains six basic emotions as well as the neutral speaking style. It consists of isolated words and sentences read by a professional dubbing actress. At present, this database is being used to study the prosodic models related to each emotion in standard Basque.
Designing and Recording an Audiovisual Database of Emotional Speech in Basque
d798995
Machine translation benefits from two types of decoding techniques: consensus decoding over multiple hypotheses under a single model and system combination over hypotheses from different models. We present model combination, a method that integrates consensus decoding and system combination into a unified, forest-based technique. Our approach makes few assumptions about the underlying component models, enabling us to combine systems with heterogeneous structure. Unlike most system combination techniques, we reuse the search space of component models, which entirely avoids the need to align translation hypotheses. Despite its relative simplicity, model combination improves translation quality over a pipelined approach of first applying consensus decoding to individual systems, and then applying system combination to their output. We demonstrate BLEU improvements across data sets and language pairs in large-scale experiments.
Model Combination for Machine Translation
d8073851
This paper presents our work on using the graph structure of Wiktionary for synonym detection. We implement semantic relatedness metrics using both a direct measure of information flow on the graph and a comparison of the list of vertices found to be "close" to a given vertex. Our algorithms, evaluated on ESL 50, TOEFL 80 and RDWP 300 data sets, perform better than or comparable to existing semantic relatedness measures.
Using the Wiktionary Graph Structure for Synonym Detection
d18199386
Using a specific example of a newspaper commentary, the paper explores the relationship between 'surface-oriented' and 'deep' analysis for purposes such as text summarization. The discussion is followed by a description of our ongoing work on automatic commentary understanding and the current state of the implementation.
Surfaces and Depths in Text Understanding: The Case of Newspaper Commentary
d222002098
d2820373
Nowadays, supervised sequence labeling models can reach competitive performance on the task of Chinese word segmentation. However, the ability of these models is restricted by the availability of annotated data and the design of features. We propose a scalable semi-supervised feature engineering approach. In contrast to previous work using pre-defined task-specific features with fixed values, we dynamically extract representations of label distributions from both an in-domain corpus and an out-of-domain corpus. We update the representation values with a semi-supervised approach. Experiments on the benchmark datasets show that our approach achieves good results, reaching an F-score of 0.961. The feature engineering approach proposed here is a general iterative semi-supervised method and is not limited to the word segmentation task.
Exploring Representations from Unlabeled Data with Co-training for Chinese Word Segmentation
d1078681
Previous work has emphasized the need for paraphrases
HINTING BY PARAPHRASING IN AN INSTRUCTION SYSTEM Vladimir PERICLIEV Svjatoslav HRAJNOU
d229365643
This paper presents our work in the WMT 2020 News Translation Shared Task. We participate in 3 language pairs, including Zh/En, Km/En, and Ps/En, in both directions under the constrained condition. We use the standard Transformer-Big model as the baseline and obtain the best performance via two variants with larger parameter sizes. We perform detailed pre-processing and filtering on the provided large-scale bilingual and monolingual datasets. Several commonly used strategies are used to train our models, such as Back Translation and Ensemble Knowledge Distillation. We also conduct experiments with similar-language augmentation, which led to positive results, although they were not used in our submission. Our submission obtains competitive results in the final evaluation.
HW-TSC's Participation in the WMT 2020 News Translation Shared Task
d8394214
The appropriateness of paraphrases for words depends often on context: "grab" can replace "catch" in "catch a ball", but not in "catch a cold". Structured Vector Space (SVS) (Erk and Padó, 2008) is a model that computes word meaning in context in order to assess the appropriateness of such paraphrases. This paper investigates "best-practice" parameter settings for SVS, and it presents a method to obtain large datasets for paraphrase assessment from corpora with WSD annotation.
Paraphrase assessment in structured vector space: Exploring parameters and datasets
d219302905
d16911660
It is expensive for companies to browse daily reports. Our aim is to create a system that extracts information about problems from such reports. The system operates in two steps. First, it records expressions involving troubles in a dictionary from training data. Second, it expands the dictionary to include information not covered by the training data. We experimentally tested this extraction system; in the tests, a two-value classifier attained an F-value of 0.772, and experimental extraction of troubles attained a precision of 0.400 and a recall of 0.827.
Extracting Troubles from Daily Reports based on Syntactic Pieces
d44119534
This paper describes our system, entitled
IronyMagnet at SemEval-2018 Task 3: A Siamese Network for Irony Detection in Social Media
d186241668
d5369006
d15032026
In this paper we propose several novel approaches to improve phrase reordering for statistical machine translation in the framework of maximum-entropy-based modeling. A smoothed prior probability is introduced to take into account the distortion effect in the priors. In addition to that we propose multiple novel distortion features based on syntactic parsing. A new metric is also introduced to measure the effect of distortion in the translation hypotheses. We show that both smoothed priors and syntax-based features help to significantly improve the reordering and hence the translation performance on a large-scale Chinese-to-English machine translation task.
Improving Reordering for Statistical Machine Translation with Smoothed Priors and Syntactic Features
d16937973
Traditional keyword-based search has some limitations, such as word sense ambiguity and query intent ambiguity, which can hurt precision. Semantic search uses the contextual meaning of terms, together with semantic matching techniques, to overcome these limitations. This paper introduces a query expansion approach that uses an ontology built from Wikipedia pages, in addition to other thesauri, to improve search accuracy for the Arabic language. Our approach outperformed the traditional keyword-based approach in terms of both F-score and NDCG measures.
Semantic Query Expansion for Arabic Information Retrieval
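Thesaurus-based query expansion of the kind described can be sketched as follows: each query term is augmented with related terms before matching. The synonym table below is invented for illustration, whereas the paper draws its related terms from an ontology built over Wikipedia pages:

```python
def expand_query(query_terms, thesaurus):
    """Add related terms from a thesaurus/ontology after each query term."""
    expanded = []
    for term in query_terms:
        expanded.append(term)
        # Append related terms, skipping any already present.
        expanded.extend(t for t in thesaurus.get(term, []) if t not in expanded)
    return expanded

# Invented example entries; a real system would populate this from the ontology.
thesaurus = {"car": ["automobile", "vehicle"], "fast": ["quick"]}
result = expand_query(["fast", "car"], thesaurus)
# result: ["fast", "quick", "car", "automobile", "vehicle"]
```

The expanded term list is then handed to the retrieval engine in place of the raw query, widening recall while (with appropriate weighting) preserving precision.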
d229365645
Priming is a well known and studied psychology phenomenon based on the prior presentation of one stimulus (cue) to influence the processing of a response. In this paper, we propose a framework to mimic the process of priming in the context of neural machine translation (NMT). We evaluate the effect of using similar translations as priming cues on the NMT network. We propose a method to inject priming cues into the NMT network and compare our framework to other mechanisms that perform micro-adaptation during inference. Overall, experiments conducted in a multi-domain setting confirm that adding priming cues in the NMT decoder can go a long way towards improving the translation accuracy. Besides, we show the suitability of our framework to gather valuable information for an NMT network from monolingual resources.
Priming Neural Machine Translation
d219307741
d17281269
This paper describes a study in which a corpus of spoken Danish annotated with focus and topic tags was used to investigate the relation between information structure and pauses. The results show that intra-clausal pauses in the focus domain tend to precede those words that express the property or semantic type whereby the object in focus is distinguished from other ones in the domain.
Information structure and pauses in a corpus of spoken Danish
d6701027
In this paper, we report a Japanese language resource for answering how-type questions. It was developed using mails posted to a mailing list. We present a QA system based on this language resource.
Confirmed Language Resource for Answering How Type Questions Developed by Using Mails Posted to a Mailing List
d250164552
We present a corpus of simulated counselling sessions consisting of speech- and text-based dialogs in Cantonese. Comprising 152K Chinese characters, the corpus labels the dialog act of both client and counsellor utterances, segments each dialog into stages, and identifies the forward and backward links in the dialog. We analyze the distribution of client and counsellor communicative intentions in the various stages, and discuss significant patterns of the dialog flow.
A Corpus of Simulated Counselling Sessions with Dialog Act Annotation
d13308868
Case frames are an important knowledge base for a variety of natural language processing (NLP) systems. For the practical use of these systems in the real world, wide-coverage case frames are required. In order to acquire such large-scale case frames, in this paper, we automatically compile case frames from a large corpus. The resultant case frames that are compiled from the English Gigaword corpus contain 9,300 verb entries. The case frames include most examples of normal usage, and are ready to be used in numerous NLP analyzers and applications.
A Method for Automatically Constructing Case Frames for English
d2079221
Author Index, Volume 27: The Need for Accurate Alignment in Natural Language System Evaluation, 27(2):231-248; Automatic Verb Classification Based on Statistical Distributions of Argument Structure: Combining Human Elicitation and Machine Learning; Saiz-Noeda, Maximiliano, and Muñoz, Rafael, An Algorithm for Anaphora Resolution in Spanish Texts, 27(4):545-567; Roark, Brian, Probabilistic Top-Down Parsing and Language Modeling, 27(2):249-276; A Machine Learning Approach to Coreference Resolution of Noun Phrases.
d1843242
We describe Prograder, a software package for automatic checking of requirements for programming homework assignments. Prograder lets instructors specify requirements in natural language as well as explains grading results to students in natural language. It does so using a grammar that generates as well as parses to translate between a small fragment of English and a first-order logical specification language that can be executed directly in Python. This execution embodies multiple semantics: both to check the requirement and to search for evidence that proves or disproves the requirement. Such a checker needs to interpret and generate sentences containing quantifiers and negation. To handle quantifier and negation scope, we systematically simulate continuation grammars using record structures in the Grammatical Framework.
Generating Quantifiers and Negation to Explain Homework Testing
d374683
Japanese emphatic particles such as mo, wa, and sae are known to present exceedingly recalcitrant problems for grammarians. Since they are clearly concerned with connecting discourse presuppositions with the assertive content of the current utterance, their nature has to be pragmatic as well as semantic. Their syntactic, or rather morphological, behaviour also seems highly unmanageable since they interact not only with themselves but also with other types of particles, especially case particles. In this paper, we try to present a basic scheme for treating emphatic particles based on four features: type, self, edge and polarity. We also try to place emphatic particles in their proper place within overall grammar of the Japanese language.
Emphatic Particles and their Scopal Interactions in Japanese
d8331522
This paper proposes a named entity (NE) ontology generation engine, called the XNE-Tree engine, which produces relational named entities given a seed. The engine incrementally extracts named entities that co-occur highly with the seed by using a common search engine. In each iterative step, the seed is replaced by its siblings or descendants, which form new seeds. In this way, the XNE-Tree engine incrementally builds a tree structure with the original seed as the root. Two seeds, the Chinese transliteration names of Nicole Kidman (a famous actress) and Ernest Hemingway (a famous writer), are used to evaluate the performance of XNE-Tree. To test the applicability of the ontology, we apply it to a phoneme-character conversion system, which converts input phoneme syllable sequences into text strings. A total of 100 Chinese transliteration names, including 50 person names and 50 location names, are used as test data. We derive an ontology composed of 7,642 named entities. The results of phoneme-character conversion show that the recall rate and the MRR improve from 0.79 and 0.50 to 0.84 and 0.55, respectively.
Constructing a Named Entity Ontology from Web Corpora
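The MRR figures reported above (0.50 improving to 0.55) are mean reciprocal rank scores: the average, over all test queries, of the reciprocal of the rank at which the correct answer first appears. A minimal sketch with invented ranked candidate lists:

```python
def mean_reciprocal_rank(ranked_lists, gold):
    """MRR over queries: 1/rank of the first correct item, 0 if absent."""
    total = 0.0
    for candidates, answer in zip(ranked_lists, gold):
        for rank, item in enumerate(candidates, start=1):
            if item == answer:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

# Two toy queries: correct answer at rank 1 and at rank 2.
lists = [["a", "b", "c"], ["x", "y", "z"]]
gold = ["a", "y"]
mrr = mean_reciprocal_rank(lists, gold)  # (1/1 + 1/2) / 2 = 0.75
```

MRR rewards systems that place the correct conversion near the top of the candidate list, which is why it complements plain recall in the evaluation above.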
d29633703
d17307513
We present a gold standard for the evaluation of Cross Language Information Retrieval systems in the domain of Organic Agriculture and AgroEcology. The presented resource is free to use for research purposes and includes a collection of multilingual documents annotated with respect to a domain ontology, the ontology used for annotating the resources, a set of 48 queries in 12 languages, and a gold standard with the correct resources for the proposed queries. The goal of this work is to contribute to the research community a resource for evaluating multilingual retrieval algorithms, with particular focus on domain adaptation strategies for "general purpose" multilingual information retrieval systems and on the effective exploitation of semantic annotations. Domain adaptation is in fact an important activity for tuning the retrieval system, reducing ambiguities and improving the precision of information retrieval. Domain ontologies are a widespread means of defining the conceptual space of a corpus and mapping resources to specific topics, and in our lab we also propose to investigate and evaluate the impact of this information in enhancing the retrieval of contents. An initial experiment is described, giving a baseline for further research with the proposed gold standard.
A Gold Standard for CLIR evaluation in the Organic Agriculture Domain
d17906989
Words in Semitic languages are formed by combining two morphemes: a root and a pattern. The root consists of consonants only, by default three, and the pattern is a combination of vowels and consonants, with non-consecutive "slots" into which the root consonants are inserted. Identifying the root of a given word is an important task, considered to be an essential part of the morphological analysis of Semitic languages, and information on roots is important for linguistics research as well as for practical applications. We present a machine learning approach, augmented by limited linguistic knowledge, to the problem of identifying the roots of Semitic words. Although programs exist which can extract the root of words in Arabic and Hebrew, they are all dependent on labor-intensive construction of large-scale lexicons which are components of full-scale morphological analyzers. The advantage of our method is an automation of this process, avoiding the bottleneck of having to laboriously list the root and pattern of each lexeme in the language. To the best of our knowledge, this is the first application of machine learning to this problem, and one of the few attempts to directly address non-concatenative morphology using machine learning. More generally, our results shed light on the problem of combining classifiers under (linguistically motivated) constraints.
Identifying Semitic Roots: Machine Learning with Linguistic Constraints
d235097369
d219302684
d13885496
This paper aims to provide an effective interface for progressive refinement of pattern-based information extraction systems. Pattern-based information extraction (IE) systems have an advantage over machine-learning-based systems in that patterns are easy to customize to cope with errors and are interpretable by humans. Building a pattern-based system is usually an iterative process of trying different parameters and thresholds to learn patterns and entities with high precision and recall. Since patterns are interpretable to humans, it is possible to identify sources of errors, such as patterns responsible for extracting incorrect entities and vice versa, and correct them. However, this involves time-consuming manual inspection of the extracted output. We present a light-weight tool, SPIED, to aid IE system developers in learning entities using patterns with bootstrapping, and in visualizing the learned entities and patterns with explanations. To the best of our knowledge, SPIED is the first publicly available tool to visualize diagnostic information of multiple pattern learning systems.
SPIED: Stanford Pattern-based Information Extraction and Diagnostics
d812040
The computational linguistics community in The Netherlands and Belgium has long recognized the dire need for a major reference corpus of written Dutch. In part to answer this need, the STEVIN programme was established. To pave the way for the effective building of a 500-million-word reference corpus of written Dutch, a pilot project was established. The Dutch Corpus Initiative project or D-Coi was highly successful in that it not only realized about 10% of the projected large reference corpus, but also established the best practices and developed all the protocols and the necessary tools for building the larger corpus within the confines of a necessarily limited budget. We outline the steps involved in an endeavour of this kind, including the major highlights and possible pitfalls. Once converted to a suitable XML format, further linguistic annotation based on the state-of-the-art tools developed either before or during the pilot by the consortium partners proved easily and fruitfully applicable. Linguistic enrichment of the corpus includes PoS tagging, syntactic parsing and semantic annotation, involving both semantic role labeling and spatiotemporal annotation. D-Coi is expected to be followed by SoNaR, during which the 500-million-word reference corpus of Dutch should be built.
From D-Coi to SoNaR: A reference corpus for Dutch
d5771067
We describe a distributed, modular architecture for platform independent natural language systems. It features automatic interface generation and self-organization. Adaptive (and nonadaptive) voting mechanisms are used for integrating discrete modules. The architecture is suitable for rapid prototyping and product delivery.
A flexible distributed architecture for NLP system development and use
d2658668
Augmented phrase structure grammars consist of phrase structure rules with embedded conditions and structure-building actions written in a specially developed language. An attribute-value, record-oriented information structure is an integral part of the theory.
AUGMENTED PHRASE STRUCTURE GRAMMARS
d259376534
This paper describes the development of a system for SemEval-2023 Shared Task 11 on Learning with Disagreements (Le-Wi-Di) (Leonardelli et al., 2023). Labelled data plays a vital role in the development of machine learning systems. The human-annotated labels are usually considered the truth for training or validation. To obtain truth labels, the traditional way is to hire domain experts to perform an expensive annotation process. Crowd-sourced labelling is comparably cheap, but it raises questions about the reliability of the annotators. A common strategy in a mixed-annotator dataset, with various sets of annotators for each instance, is to aggregate the labels among multiple groups of annotators to obtain the truth labels. However, these annotators might not reach an agreement, and there is no guarantee of the reliability of these labels either. With further problems caused by human label variation, subjective tasks usually suffer from the different opinions provided by the annotators. In this paper, we propose two simple heuristic functions to compute annotator ranking scores, namely AnnoHard and AnnoSoft, based on the hard labels (i.e., aggregated labels) and soft labels (i.e., cross-entropy values). By introducing these scores, we adjust the weights of the training instances to improve learning with disagreements among the annotators.
xiacui at SemEval-2023 Task 11: Learning a Model in Mixed-Annotator Datasets using Annotator Ranking Scores as Training Weights
d11474571
Idiomaticity and Classical Traditions in Some East Asian Languages
d11511539
General-purpose, high-quality and fully automatic MT is believed to be impossible. We are interested in scriptural translation problems, which are weak sub-problems of the general problem of translation. We introduce the characteristics of weak translation problems and of scriptural translation problems, describe different computational approaches (finite-state, statistical and hybrid) to solve these problems, and report our results on several combinations of Indo-Pak languages and writing systems.
Weak Translation Problems - A Case Study of Scriptural Translation
d36528709
We propose an automatic approach towards determining the relative location of adjectives on a common scale based on their strength. We focus on adjectives expressing different degrees of goodness occurring in French product (perfumes) reviews. Using morphosyntactic patterns, we extract from the reviews short phrases consisting of a noun that encodes a particular aspect of the perfume and an adjective modifying that noun. We then associate each such n-gram with the corresponding product aspect and its related star rating. Next, based on the star scores, we generate adjective scales reflecting the relative strength of specific adjectives associated with a shared attribute of the product. An automatic ordering of the adjectives "correct" (correct), "sympa" (nice), "bon" (good) and "excellent" (excellent) according to their score in our resource is consistent with an intuitive scale based on human judgments. Our long-term objective is to generate different adjective scales in an empirical manner, which could allow the enrichment of lexical resources.
Encoding Adjective Scales for Fine-grained Resources
d14746921
In this paper, we present and compare various centrality measures for graph-based keyphrase extraction. Through experiments carried out on three standard datasets of different languages and domains, we show that simple degree centrality achieves results comparable to the widely used TextRank algorithm, and that closeness centrality obtains the best results on short documents.
A Comparison of Centrality Measures for Graph-Based Keyphrase Extraction
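The two centrality measures compared above can be sketched on a toy graph. The word co-occurrence graph below is invented for illustration; a real keyphrase extractor would build it from candidate words co-occurring within a window in the document:

```python
from collections import deque

def degree_centrality(graph):
    """Degree centrality: neighbor count, normalized by (n - 1)."""
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

def closeness_centrality(graph):
    """Closeness: (n - 1) / sum of shortest-path distances (BFS, unweighted).
    Assumes the graph is connected."""
    n = len(graph)
    scores = {}
    for source in graph:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        scores[source] = (n - 1) / sum(dist[v] for v in dist if v != source)
    return scores

# Toy undirected word co-occurrence graph (illustrative only).
g = {
    "graph": {"keyphrase", "extraction", "centrality"},
    "keyphrase": {"graph", "extraction"},
    "extraction": {"graph", "keyphrase"},
    "centrality": {"graph"},
}
deg = degree_centrality(g)
clo = closeness_centrality(g)
```

Candidate words are then ranked by their centrality score and the top-ranked ones are kept as keyphrases; here "graph" tops both rankings.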
d1680521
A treebank is a basic language resource for training and testing syntactic parsers, which form a key module in various NLP systems such as machine translation systems. This paper reports ongoing research on building a dependency treebank for Kashmiri (KashTreeBank) and discusses some main annotation issues. The paper is based on a pilot annotation of 500 sentences.
Introducing Kashmiri Dependency Treebank
d233364898
d232021891
d17830818
Typically, only a very limited amount of in-domain data is available for training the language model component of a Handwritten Text Recognition (HTR) system for historical data. One has to rely on a combination of in-domain and out-of-domain data to develop language models. Accordingly, domain adaptation is a central issue in language modeling for HTR. We pursue a topic modeling approach to handle this issue, and propose two algorithms based on this approach. The first algorithm relies on posterior inference for topic modeling to construct a language model adapted to the development set, and the second algorithm proceeds by iterative selection, using a new ranking criterion, of topic-dependent language models. Our experimental results show that both approaches clearly outperform a strong baseline method.
An LDA-based Topic Selection Approach to Language Model Adaptation for Handwritten Text Recognition
d5679344
In this paper we present an opinion summarization technique for spoken dialogue systems. Opinion mining has been well studied for years, but very few studies have considered its application in spoken dialogue systems. Review summarization, when applied to real dialogue systems, is much more complicated than pure text-based summarization. We conduct a systematic study on dialogue-system-oriented review analysis and propose a three-level framework for a recommendation dialogue system. In previous work we explored a linguistic parsing approach to phrase extraction from reviews. In this paper we describe an approach using statistical models such as decision trees and SVMs to select the most representative phrases from the extracted phrase set. We also explain how to generate informative yet concise review summaries for dialogue purposes. Experimental results in the restaurant domain show that the proposed approach using decision tree algorithms achieves a 13% improvement over SVM models and a 36% improvement over a heuristic-rule baseline. Experiments also show that the decision-tree-based phrase selection model can achieve rather reliable predictions on the phrase label, comparable to human judgment. The proposed statistical approach is based on domain-independent learning features and can be extended to other domains effectively.
Dialogue-Oriented Review Summary Generation for Spoken Dialogue Recommendation Systems
d4889763
We present a method for the sentence-level alignment of short simplified texts to the original texts from which they were adapted. Our goal is to align a medium-sized corpus of parallel text, consisting of short news texts in Spanish with their simplified counterparts. No training data is available for this task, so we have to rely on unsupervised learning. In contrast to bilingual sentence alignment, in this task we can exploit the fact that the probability of sentence correspondence can be estimated from lexical similarity between sentences. We show that the algorithm employed performs better than a baseline which approaches the problem with a TF*IDF sentence similarity metric. The alignment algorithm is being used for the creation of a corpus for the study of text simplification in the Spanish language.
An Unsupervised Alignment Algorithm for Text Simplification Corpus Construction
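The TF*IDF sentence similarity baseline mentioned above can be sketched as follows. The tokenized toy sentences are invented, and a real system would add normalization, stemming and smoothing:

```python
import math
from collections import Counter

def tfidf_vectors(sentences):
    """Build TF*IDF weight vectors for a list of tokenized sentences."""
    n = len(sentences)
    df = Counter(w for s in sentences for w in set(s))  # document frequency
    idf = {w: math.log(n / df[w]) for w in df}
    return [{w: tf * idf[w] for w, tf in Counter(s).items()} for s in sentences]

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

sents = [
    ["the", "law", "was", "approved"],   # original sentence
    ["the", "law", "was", "passed"],     # simplified counterpart
    ["rain", "is", "expected", "tomorrow"],  # unrelated sentence
]
vecs = tfidf_vectors(sents)
sim_close = cosine(vecs[0], vecs[1])  # high: shared content words
sim_far = cosine(vecs[0], vecs[2])    # zero: no word overlap
```

Scoring all original-simplified sentence pairs this way and picking the best-scoring matches gives the baseline the paper's unsupervised algorithm improves on.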
d18766258
This paper elucidates the InterlinguaPlus design and its application in bi-directional text translation between Ekegusii and Kiswahili, unlike traditional one-by-one translation pairs; any of the languages can thus be the source or target language. The first section is an overview of the project, which is followed by a brief review of Machine Translation. The next section discusses the implementation of the system using Carabao's open machine translation framework and the results obtained. So far, the translation results have been plausible, particularly for these resource-scarce local languages, and clearly affirm morphological similarities inherent in Bantu languages.
InterlinguaPlus Machine Translation Approach for Under-Resourced Languages: Ekegusii & Swahili
d41936718
This paper argues for the existence of a deeper and more primitive structural unit of syntax and morphology than the constituent. Data from ellipsis, idiom formation, predicate complexes, bracketing paradoxes, and multiple auxiliary constructions challenge constituency-based analyses. In chain-based dependency grammar, however, constituents are seen as complete components. Components are units that are continuous both in the linear and in the dominance dimension. A unit continuous in the dominance dimension is called a chain. Evidence suggests that chains constitute the fundamental structural relationship between syntactic and morphological units, and that constituents are just a special subset of chains. If these assumptions are correct, linguistic research may need to change direction.
Chains in Syntax and Morphology
d157315
This paper presents an exploration into automated content scoring of non-native spontaneous speech using ontology-based information to enhance a vector space approach. We use content vector analysis as a baseline and evaluate the correlations between human rater proficiency scores and two cosine-similarity-based features, previously used in the context of automated essay scoring. We use two ontology-facilitated approaches to improve feature correlations by exploiting the semantic knowledge encoded in WordNet: (1) extending word vectors with semantic concepts from the WordNet ontology (synsets); and (2) using a reasoning approach for estimating the concept weights of concepts not present in the set of training responses by exploiting the hierarchical structure of WordNet. Furthermore, we compare features computed from human transcriptions of spoken responses with features based on output from an automatic speech recognizer. We find that (1) for one of the two features, both ontologically based approaches improve average feature correlations with human scores, and that (2) the correlations for both features decrease only marginally when moving from human speech transcriptions to speech recognizer output.
Using an Ontology for Improved Automated Content Scoring of Spontaneous Non-Native Speech (7th Workshop on the Innovative Use of NLP for Building Educational Applications)
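Approach (1) from the abstract above, extending word vectors with ontology concepts, can be sketched as follows. The tiny `SYNSETS` map is a hypothetical stand-in for WordNet synsets, and the response/reference sentences are invented:

```python
# Sketch of extending bag-of-words vectors with semantic concepts so
# that synonymous words contribute shared vector dimensions.
import math
from collections import Counter

SYNSETS = {  # word -> concept id (illustrative, not real WordNet data)
    "car": "motor_vehicle.n.01",
    "automobile": "motor_vehicle.n.01",
    "bus": "public_transport.n.01",
}

def concept_vector(tokens):
    """Count surface words plus the concepts they map to."""
    vec = Counter(tokens)
    vec.update(SYNSETS[t] for t in tokens if t in SYNSETS)
    return vec

def cosine(u, v):
    num = sum(w * v.get(t, 0) for t, w in u.items())
    den = math.sqrt(sum(w * w for w in u.values())) * \
          math.sqrt(sum(w * w for w in v.values()))
    return num / den if den else 0.0

resp = concept_vector("my car broke down".split())
ref = concept_vector("the automobile was fast".split())
plain = cosine(Counter("my car broke down".split()),
               Counter("the automobile was fast".split()))
extended = cosine(resp, ref)
```

With no surface words in common, the plain cosine is zero, while the shared `motor_vehicle.n.01` concept gives the extended vectors a nonzero similarity.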
d12084540
Users prefer natural language software requirements because of their usability and accessibility. When they describe their wishes for software development, they often provide off-topic information. We therefore present REaCT, an automated approach for identifying and semantically annotating the on-topic parts of requirement descriptions. It is designed to support requirement engineers in the elicitation process by detecting and analyzing requirements in user-generated content. Since no lexical resources with domain-specific information about requirements are available, we created a corpus of requirements written in controlled language by instructed users and in uncontrolled language by uninstructed users. We annotated these requirements regarding predicate-argument structures, conditions, priorities, motivations and semantic roles, and used this information to train classifiers for information extraction purposes. REaCT achieves an accuracy of 92% for the on- and off-topic classification task and an F1-measure of 72% for the semantic annotation.
On- and Off-Topic Classification and Semantic Annotation of User-Generated Software Requirements
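The on-/off-topic filtering step described in the abstract above can be sketched with a toy heuristic. The cue-word list and threshold are hypothetical; the paper trains classifiers on annotated data rather than using fixed keywords:

```python
# Illustrative on-/off-topic filter for requirement descriptions,
# standing in for a trained classifier with hand-picked cue words.

ON_TOPIC_CUES = {"shall", "must", "should", "system", "user", "allow"}

def is_on_topic(sentence, threshold=1):
    """Flag a sentence as on-topic if it contains enough cue words."""
    tokens = sentence.lower().replace(".", "").split()
    return sum(t in ON_TOPIC_CUES for t in tokens) >= threshold

reqs = [
    "The system must allow users to export reports.",
    "I first heard about this app from a friend.",
]
on_topic = [s for s in reqs if is_on_topic(s)]
```

A trained model would replace the cue list with learned feature weights, but the pipeline shape (score each sentence, keep those above a threshold) is the same.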