_id (string, 4-10 chars) | text (string, 0-18.4k chars) | title (string, 0-8.56k chars) |
|---|---|---|
d201058480 | This paper proposes a dually interactive matching network (DIM) for presenting the personalities of dialogue agents in retrieval-based chatbots. This model develops from the interactive matching network (IMN), which models the matching degree between a context composed of multiple utterances and a response candidate. Compared with previous persona fusion approaches, which enhance the representation of a context by calculating its similarity with a given persona, the DIM model adopts a dual matching architecture, which performs interactive matching between responses and contexts and between responses and personas respectively for ranking response candidates. Experimental results on the PERSONA-CHAT dataset show that the DIM model outperforms its baseline model, i.e., IMN with persona fusion, by a margin of 14.5% and outperforms the current state-of-the-art model by a margin of 27.7% in terms of top-1 accuracy hits@1. | Dually Interactive Matching Network for Personalized Response Selection in Retrieval-Based Chatbots |
d219302104 | ||
d1174079 | We describe a segmentation component that utilizes minimal syntactic knowledge to produce a lattice of word candidates for a broad coverage Japanese NL parser. The segmenter is a finite state morphological analyzer and text normalizer designed to handle the orthographic variations characteristic of written Japanese, including alternate spellings, script variation, vowel extensions and word-internal parenthetical material. This architecture differs from conventional Japanese wordbreakers in that it does not attempt to simultaneously attack the problems of identifying segmentation candidates and choosing the most probable analysis. To minimize duplication of effort between components and to give the segmenter greater freedom to address orthography issues, the task of choosing the best analysis is handled by the parser, which has access to a much richer set of linguistic information. By maximizing recall in the segmenter and allowing a precision of 34.7%, our parser currently achieves a breaking accuracy of ~97% over a wide variety of corpora. | Robust Segmentation of Japanese Text into a Lattice for Parsing |
d12767961 | We develop a new objective function for word alignment that measures the size of the bilingual dictionary induced by an alignment. A word alignment that results in a small dictionary is preferred over one that results in a large dictionary. In order to search for the alignment that minimizes this objective, we cast the problem as an integer linear program. We then extend our objective function to align corpora at the sub-word level, which we demonstrate on a small Turkish-English corpus. | A New Objective Function for Word Alignment |
d16052726 | This paper describes the Universitat d'Alacant submissions (labelled as UAlacant) for the machine translation quality estimation (MTQE) shared task in WMT 2015, where we participated in the word-level MTQE sub-task. The method we used to produce our submissions uses external sources of bilingual information as a black box to spot sub-segment correspondences between a source segment S and the translation hypothesis T produced by a machine translation system. This is done by segmenting both S and T into overlapping sub-segments of variable length and translating them in both translation directions, using the available sources of bilingual information on the fly. For our submissions, two sources of bilingual information were used: machine translation (Apertium and Google Translate) and the bilingual concordancer Reverso Context. After obtaining the sub-segment correspondences, a collection of features is extracted from them, which are then used by a binary classifier to obtain the final "GOOD" or "BAD" word-level quality labels. We prepared two submissions for this year's edition of WMT 2015: one using the features produced by our system, and one combining them with the baseline features published by the organisers of the task, which were ranked third and first for the sub-task, respectively. | UAlacant word-level machine translation quality estimation system at WMT 2015 |
d48357847 | We introduce Picturebook, a large-scale lookup operation to ground language via 'snapshots' of our physical world accessed through image search. For each word in a vocabulary, we extract the top-k images from Google image search and feed the images through a convolutional network to extract a word embedding. We introduce a multimodal gating function to fuse our Picturebook embeddings with other word representations. We also introduce Inverse Picturebook, a mechanism to map a Picturebook embedding back into words. We experiment and report results across a wide range of tasks: word similarity, natural language inference, semantic relatedness, sentiment/topic classification, image-sentence ranking and machine translation. We also show that gate activations corresponding to Picturebook embeddings are highly correlated to human judgments of concreteness ratings. | Illustrative Language Understanding: Large-Scale Visual Grounding with Image Search |
d41310809 | Since its inception in 2010, the Linguistic Data Consortium's data scholarship program has awarded no-cost grants in data to 64 recipients from 24 countries. A survey of the twelve cycles to date (two awards each in the Fall and Spring semesters from Fall 2010 through Spring 2016) yields an interesting view into graduate program research trends in human language technology and related fields and the particular data sets deemed important to support that research. The survey also reveals regions in which such activity appears to be on the rise, including in Arabic-speaking regions and portions of the Americas. | Trends in HLT Research: A Survey of LDC's Data Scholarship Program |
d15236315 | Newswire text is often linguistically complex and stylistically decorated, hence very difficult to comprehend for people with reading disabilities. Acknowledging that events represent the most important information in news, we propose an event-centered approach to news simplification. Our method relies on robust extraction of factual events and elimination of surplus information which is not part of event mentions. Experimental results obtained by combining automated readability measures with human evaluation of correctness justify the proposed event-centered approach to text simplification. | Event-Centered Simplification of News Stories |
d184482987 | ||
d549286 | Literal movement grammars (LMGs) provide a general account of extraposition phenomena through an attribute mechanism allowing top-down displacement of syntactical information. LMGs provide a simple and efficient treatment of complex linguistic phenomena such as cross-serial dependencies in German and Dutch, separating the treatment of natural language into a parsing phase closely resembling traditional context-free treatment, and a disambiguation phase which can be carried out using matching, as opposed to full unification employed in most current grammar formalisms of linguistic relevance. | Literal Movement Grammars |
d241583232 | We present an information retrieval-based question answer system to answer legal questions. The system is not limited to a predefined set of questions or patterns and uses both sparse vector search and embeddings for input to a BERT-based answer re-ranking system. A combination of general domain and legal domain data is used for training. This natural question answering system is in production and is used commercially. | A Free Format Legal Question Answering System |
d15387869 | Ontologies are a tool for Knowledge Representation that is now widely used, but the effort employed to build an ontology is still high. There are a few automatic and semi-automatic methods for extending ontologies with domain-specific information, but they use different training and test data, and different evaluation metrics. The work described in this paper is an attempt to build a benchmark corpus that can be used for comparing these systems. We provide standard evaluation metrics as well as two different annotated corpora: one in which every unknown word has been labelled with the places where it should be added onto the ontology, and another in which only the high-frequency unknown terms have been annotated | Proposal for Evaluating Ontology Refinement Methods |
d5662189 | In multilingual question answering, either the question needs to be translated into the document language, or vice versa. In addition to direction, there are multiple methods to perform the translation, four of which we explore in this paper: word-based, 10-best, context-based, and grammar-based. We build a feature for each combination of translation direction and method, and train a model that learns optimal feature weights. On a large forum dataset consisting of posts in English, Arabic, and Chinese, our novel learn-to-translate approach was more effective than a strong baseline (p < 0.05): translating all text into English, then training a classifier based only on English (original or translated) text. | Learning to Translate for Multilingual Question Answering |
d259370760 | Knowledge transfer can boost neural machine translation (NMT), for example, by finetuning a pretrained masked language model (LM). However, it may suffer from the forgetting problem and the structural inconsistency between pretrained LMs and NMT models. Knowledge distillation (KD) may be a potential solution to alleviate these issues, but few studies have investigated language knowledge transfer from pretrained language models to NMT models through KD. In this paper, we propose Pretrained Bidirectional Distillation (PBD) for NMT, which aims to efficiently transfer bidirectional language knowledge from masked language pretraining to NMT models. Its advantages are reflected in efficiency and effectiveness through a globally defined and bidirectional context-aware distillation objective. Bidirectional language knowledge of the entire sequence is transferred to an NMT model concurrently during translation training. Specifically, we propose self-distilled masked language pretraining to obtain the PBD objective. We also design PBD losses to efficiently distill the language knowledge, in the form of token probabilities, to the encoder and decoder of an NMT model using the PBD objective. Extensive experiments reveal that pretrained bidirectional distillation can significantly improve machine translation performance and achieve competitive or even better results than previous pretrain-finetune or unified multilingual translation methods in supervised, unsupervised, and zero-shot scenarios. Empirically, it is concluded that pretrained bidirectional distillation is an effective and efficient method for transferring language knowledge from pretrained language models to NMT models. | Pretrained Bidirectional Distillation for Machine Translation |
d18185205 | Although lexicography of Latin has a long tradition dating back to ancient grammarians, and almost all Latin grammars devote at least part of the section(s) concerning morphology to wordformation, none of the lexical resources and NLP tools of Latin available today feature a wordformation-based organization of the Latin lexicon. In this paper, we describe the first steps towards the semi-automatic development of a wordformation-based lexicon of Latin, by detailing several problems occurring while building the lexicon and presenting our solutions. Developing a wordformation-based lexicon of Latin is nowadays of utmost importance, as the last years have seen a large growth of annotated corpora of Latin texts of different eras. While these corpora include lemmatization, morphological tagging and syntactic analysis, none of them features segmentation of the word forms and wordformation relations between the lexemes. This restricts the browsing and the exploitation of the annotated data for linguistic research and NLP tasks, such as information retrieval and heuristics in PoS tagging of unknown words. | First Steps towards the Semi-automatic Development of a Wordformation-based Lexicon of Latin |
d1738274 | It is an honor to have this chance to tie together themes from my recent research, and to sketch some challenges and opportunities for NLG in face-to-face conversational interaction. Communication reflects our general involvement in one another's lives. Through the choices we manifest with one another, we share our thoughts and feelings, strengthen our relationships and further our joint projects. We rely not only on words to articulate our perspectives, but also on a heterogeneous array of accompanying efforts: embodied deixis, expressive movement, presentation of iconic imagery and instrumental action in the world. Words showcase the distinctive linguistic knowledge which human communication exploits. But people's diverse choices in conversation in fact come together to reveal multifaceted, interrelated meanings, in which all our actions, verbal and nonverbal, fit the situation and further social purposes. In the best case, they let interlocutors understand not just each other's words, but each other. As NLG researchers, I argue, we have good reason to work towards models of social cognition that embrace the breadth of conversation. Scientifically, it connects us to an emerging consensus in favor of a general human pragmatic competence, rooted in capacities for engagement, coordination, shared intentionality and extended relationships. Technically, it lets us position ourselves as part of an emerging revolution in integrative Artificial Intelligence, characterized by research challenges like human-robot interaction and the design of virtual humans, and applications in assistive and educational technology and interactive entertainment. | Language, Embodiment and Social Intelligence |
d53235300 | For the WMT 2018 shared task of translating documents pertaining to the Biomedical domain, we developed a scoring formula that uses an unsophisticated and effective method of weighting term frequencies and was integrated in a data selection pipeline. The method was applied on five language pairs and it performed best on Portuguese-English, where a BLEU score of 41.84 placed it third out of seven runs submitted by three institutions. In this paper, we describe our method and results with a special focus on Spanish-English where we compare it against a state-of-the-art method. Our contribution to the task lies in introducing a fast, unsupervised method for selecting domain-specific data for training models which obtain good results using only 10% of the general domain data. | Translation of Biomedical Documents with Focus on Spanish-English |
d17314470 | This paper presents an on-going project intended to enhance WordNet morphologically and semantically. The motivation for this work stems from the current limitations of WordNet when used as a linguistic knowledge base. We envision a software tool that automatically parses the conceptual defining glosses, attributing part-of-speech tags and phrasal brackets. The nouns, verbs, adjectives and adverbs from every definition are then disambiguated and linked to the corresponding synsets. This increases the connectivity between synsets, allowing the retrieval of topically related concepts. Furthermore, the tool transforms the glosses, first into logical forms and then into semantic forms. Using derivational morphology, new links are added between the synsets. | WordNet 2 - A Morphologically and Semantically Enhanced Resource |
d201741515 | For this round of the WMT 2019 APE shared task, our submission focuses on addressing the "over-correction" problem in APE. Over-correction occurs when the APE system tends to rephrase an already correct MT output, and the resulting sentence is penalized by a reference-based evaluation against human post-edits. Our intuition is that this problem can be prevented by informing the system about the predicted quality of the MT output or, in other terms, the expected amount of needed corrections. For this purpose, following the common approach in multilingual NMT, we prepend a special token to the beginning of both the source text and the MT output indicating the required amount of post-editing. Following the best submissions to the WMT 2018 APE shared task, our backbone architecture is based on multi-source Transformer to encode both the MT output and the corresponding source text. We participated both in the English-German and English-Russian subtasks. In the first subtask, our best submission improved the original MT output quality up to +0.98 BLEU and -0.47 TER. In the second subtask, where the higher quality of the MT output increases the risk of over-correction, none of our submitted runs was able to improve the MT output. | Effort-Aware Neural Automatic Post-Editing |
d8437725 | In this paper, we introduce a novel graph based technique for topic based multi-document summarization. We transform documents into a bipartite graph where one set of nodes represents entities and the other set of nodes represents sentences. To obtain the summary we apply a ranking technique to the bipartite graph which is followed by an optimization step. We test the performance of our method on several DUC datasets and compare it to the state-of-the-art. | Multi-document Summarization Using Bipartite Graphs |
d208978352 | ooking at the Proceedings of last year's Annual Meeting, one sccs that the session most closely parallcling this one was entitled Language Structure and Par~ing. [n avcry nice prescnu~fion, Martin Kay was able to unite the papers of that scssion uudcr a single theme. As hc stated it."Illcre has been a shift of emphasis away from highly ~tmctured systems of complex rules as the principal repository of infi~mmtion about the syntax of a language towards a view in which the responsibility is distributed among the Icxicoo. semantic parts of the linguistic description, aod a cognitive or strategic component. Concomitantly, interest has shiRed from al!lorithms for syntactic analysis and generation, in which the central stnlctorc and the exact seqtlencc of events are paramount, to systems iu which a heavier burden is carried by the data stl ucture and in wilich the order of,:vents is a m,.~ter of strategy.['his ycar. the papers of the session represent a greater diversity of rescan:h directions. The paper by Hayes. and thc paper by Wilcnsky and Aren~ arc both examples of what Kay had in mind. but the paper I)y Church, with rcgard to the question of algorithms, is quite the opposite. He {tolds that once the full range uf constraints dcscribing pc~plc's processing behavior has been captul'ed, the best parsing strategies will be rather straightforwarcL and easily cxplaincd as algorithms.Perilaps the seven papers in this year's session can best be introduecd by briefly citing ~mc of the achievcmcqts and problems reported in the works they refcrence,In thc late i960"s Woods tweeds70] capped an cfTort by several people to dcvch)p NI'N parsing. 'lllis well known technique applies a smdghtforward top down, left CO right` dcpth fic~t pat.~ing algorithm to a syntactic grammar. I-:~pccialiy in the compiled fi)rm produced by Ilorton [Bnrton76~,]. the parser was able to produce the first parse in good time. 
but without ~mantic constraints, numcroos syn~ictic analyses could be and ~,mctimcs were fou.nd, especially in scntenccs with conjunctions. A strength of the system was the ATN grammar, which can be dc~ribcd as a sct of context frec production rules whose right hand sides arc finite statc machincs and who.~ U'ansition arcs have bccn augmented with functions able to read and set registers, and also able to block a transition on their an:. Many people have found this a convenient fonnulism in which m develop grammars of Engtish.The Woods ATN parser was a great success and attempts were made to exploit it (a) as a modc[ of human processing and (b) as a tool for writing grammars. At the same time it was recognized to havc limimdoos. It wasn't tolerant of errors, and it couldn't handle unknown words or constructions (there were n~'tny syntactic constmcdons which it didn't know). In addidon, the question answering system fed by the parser had a weak notion of word and phrase .~emantics and it was not always able to handle quantificrs properly. It is not ctcar thcs¢ components could have supported a stronger interaction with syntactic parsing, had Woods chosen to a~cmpt it.On the success side. Kaplan [Kaplan72] was inspired to claim that the ATN parser provided a good model tbr some aspects of human processing. Some aspects which might bc modeled are:Linnuistic Phenomenon | Parsing w |
d15010990 | The document describes the knowledge-based Domain-WSD system using heuristic rules (knowledge-base). This HR-WSD system delivered the best performance (55.9%) among all Chinese systems in SemEval-2010 Task 17: All-words WSD on a specific domain. | HR-WSD: System Description for All-words Word Sense Disambiguation on a Specific Domain at SemEval-2010 |
d97238 | The Duluth-WSI systems in SemEval-2 built word co-occurrence matrices from the task test data to create a second order co-occurrence representation of those test instances. The senses of words were induced by clustering these instances, where the number of clusters was automatically predicted. The Duluth-Mix system was a variation of WSI that used the combination of training and test data to create the co-occurrence matrix. The Duluth-R system was a series of random baselines. | Duluth-WSI: SenseClusters Applied to the Sense Induction Task of SemEval-2 |
d1616975 | Simple questions require small snippets of text as the answers whereas complex questions require inferencing and synthesizing information from multiple documents to have multiple sentences as the answers. The traditional QA systems can handle simple questions easily but complex questions often need more sophisticated treatment, e.g. question decomposition. Therefore, it is necessary to automatically classify an input question as simple or complex to treat them accordingly. We apply two machine learning techniques and a Latent Semantic Analysis (LSA) based method to automatically classify the questions as simple or complex. KEYWORDS: Simple questions, complex questions, support vector machines, k-means clustering, latent semantic analysis. Automated Question Answering (QA), the ability of a machine to answer questions asked in natural language, is perhaps the most exciting technological development of the past few years (Strzalkowski and Harabagiu, 2008). QA research attempts to deal with a wide range of question types including: fact, list, definition, how, why, hypothetical, semantically-constrained, and cross-lingual questions. This paper concerns open-domain question answering where the QA system must handle questions of different types: simple or complex. Simple questions are easier to answer (Moldovan et al., 2007) as they require small snippets of texts as the answers. For example, with a simple (i.e. factoid) question like: "What is the magnitude of the earthquake in Japan?", it can be safely assumed that the submitter of the question is looking for a number. Current QA systems have been significantly advanced in demonstrating finer abilities to answer simple factoid and list questions. On the other hand, with complex questions like: "How is Japan affected by the earthquake?", the wider focus of this question suggests that the submitter may not have a single or well-defined information need. Therefore, to answer complex types of questions we often need to go through complex procedures such as question decomposition and multi-document summarization (Harabagiu et al., 2006; Chali et al., 2009). Hence, it is necessary to automatically classify an input question as simple or complex in order to answer them using the appropriate technique. Once we classify the questions as simple or complex, we can pass the simple questions to the traditional question answering systems whereas complex questions can be tackled differently in a sophisticated manner. For example, the above complex question can be decomposed into a series of simple questions such as "How many people had died by the earthquake?", "How many people became homeless?", and "Which cities were mostly damaged?". These simple questions can then be passed to the state-of-the-art QA systems, and a single answer to the complex question can be formed by combining the individual answers to the simple questions (Hickl et al., 2006). This motivates the significance of classifying a question as simple or complex. We experiment with two well-known machine learning methods and show that the task can be accomplished effectively using a simple feature set. We also use a LSA-based technique to automatically classify the questions as simple or complex. | Simple or Complex? Classifying the Question by the Answer Complexity |
d8339611 | In this paper we describe the overall model for MILE lexical entries and provide an instantiation of the model in RDF/OWL. This work has been done with an eye toward the goal of creating a web-based registry of lexical data categories and enabling the description of lexical information by establishing relations among them, and/or using predefined objects that may reside at various locations on the web. It is also assumed that using OWL specifications to enhance specifications of the ontology of lexical objects will eventually enable the exploitation of inferencing engines to retrieve and possibly create lexical information on the fly, as suited to particular contexts. As such, the model and RDF instantiation provided here are in line with the goals of ISO TC37 SC4, and should be fully mappable to the proposed pivot. | RDF Instantiation of ISLE/MILE Lexical Entries |
d202541481 | Contextualized word embeddings such as ELMo and BERT provide a foundation for strong performance across a wide range of natural language processing tasks by pretraining on large corpora of unlabeled text. However, the applicability of this approach is unknown when the target domain varies substantially from the pretraining corpus. We are specifically interested in the scenario in which labeled data is available in only a canonical source domain such as news text, and the target domain is distinct from both the labeled and pretraining texts. To address this scenario, we propose domain-adaptive fine-tuning, in which the contextualized embeddings are adapted by masked language modeling on text from the target domain. We test this approach on sequence labeling in two challenging domains: Early Modern English and Twitter. Both domains differ substantially from existing pretraining corpora, and domain-adaptive fine-tuning yields substantial improvements over strong BERT baselines, with particularly impressive results on out-of-vocabulary words. We conclude that domain-adaptive fine-tuning offers a simple and effective approach for the unsupervised adaptation of sequence labeling to difficult new domains. | Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence Labeling |
d253761968 | Various approaches have been proposed for automated stance detection, including those that use machine and deep learning models and natural language processing techniques. However, their cross-dataset performance, the impact of sample size on performance, and experimental aspects such as runtime have yet to be compared, limiting what is known about the generalizability of prominent approaches. This paper presents a replication study of stance detection approaches on current benchmark datasets. Specifically, we compare six existing machine and deep learning stance detection models on three publicly available datasets. We investigate performance as a function of the number of samples, length of samples (word count), representation across targets, type of text data, and the stance detection models themselves. We identify the current limitations of these approaches and categorize their utility for stance detection under varying circumstances (e.g., size of text samples), which provides valuable insight for future research in stance detection. | A Comparative Analysis of Stance Detection Approaches and Datasets |
d52009631 | State-of-the-art entity linkers achieve high accuracy scores with probabilistic methods. However, these scores should be considered in relation to the properties of the datasets they are evaluated on. Until now, there has not been a systematic investigation of the properties of entity linking datasets and their impact on system performance. In this paper we report on a series of hypotheses regarding the long tail phenomena in entity linking datasets, their interaction, and their impact on system performance. Our systematic study of these hypotheses shows that evaluation datasets mainly capture head entities and only incidentally cover data from the tail, thus encouraging systems to overfit to popular/frequent and non-ambiguous cases. We find the most difficult cases of entity linking among the infrequent candidates of ambiguous forms. With our findings, we hope to inspire future designs of both entity linking systems and evaluation datasets. To support this goal, we provide a list of recommended actions for better inclusion of tail cases. | Systematic Study of Long Tail Phenomena in Entity Linking |
d219189020 | ||
d258331772 | With the increasing number of clinical trial reports generated every day, it is becoming hard to keep up with novel discoveries that inform evidence-based healthcare recommendations. To help automate this process and assist medical experts, NLP solutions are being developed. This motivated the SemEval-2023 Task 7, where the goal was to develop an NLP system for two tasks: evidence retrieval and natural language inference from clinical trial data. In this paper, we describe our two developed systems. The first one is a pipeline system that models the two tasks separately, while the second one is a joint system that learns the two tasks simultaneously with a shared representation and a multi-task learning approach. The final system combines their outputs in an ensemble system. We formalize the models, present their characteristics and challenges, and provide an analysis of achieved results. Our system ranked 3rd out of 40 participants with our final submission. | Sebis at SemEval-2023 Task 7: A Joint System for Natural Language Inference and Evidence Retrieval from Clinical Trial Reports |
d3454975 | This paper presents the complete and consistent ontological annotation of the nominal part of WordNet. The annotation has been carried out using the semantic features defined in the EuroWordNet Top Concept Ontology and made available to the NLP community. Up to now only an initial core set of 1,024 synsets, the so-called Base Concepts, was ontologized in such a way. The work has been achieved by following a methodology based on an iterative and incremental expansion of the initial labeling through the hierarchy while setting inheritance blockage points. Since this labeling has been set on the EuroWordNet Interlingual Index (ILI), it can also be used to populate any other wordnet linked to it through a simple porting process. This feature-annotated WordNet is intended to be useful for a large number of semantic NLP tasks and for testing, for the first time, componential analysis in real environments. Moreover, the quantitative analysis of the work shows that more than 40% of the nominal part of WordNet is involved in structure errors or inadequacies. | Complete and Consistent Annotation of WordNet using the Top Concept Ontology |
d18069111 | During the last two decades psychologists and computational linguists have attempted to tackle the problem of word access via computational resources, yet hardly any of them has seriously tried to support 'interactive' word finding. Yet, a lot of work has been done to understand the causes of the tip-of-the-tongue problem (TOT). Given the progress made in neuroscience, corpus linguistics, and graph theory (complex graphs), one may be tempted to emulate the mental lexicon, or to build a resource likely to help authors (speakers, writers) to overcome word-finding problems. Our goal here is much more limited. We try to identify good hints for finding a target word. To this end we have built a co-occurrence network on the basis of Wikipedia abstracts. Since the network is built automatically and from raw data, i.e. non-annotated text, it does not reveal the kind of relationship holding between the nodes. Despite this shortcoming, we tried to see whether we can find a given word, or identify what makes a good clue word. KEYWORDS: lexical access, anomia, tip of the tongue (TOT), lexical networks. TALN-RÉCITAL 2013, 17-21 June, Les Sables d'Olonne. | Lexical access via a simple co-occurrence network
d16598006 | In the past few years, Indian languages have seen a welcome arrival of large part-of-speech annotated corpora, thanks to the DIT-funded projects across the country. A major corpus of 50,000 sentences in each of 12 major Indian languages is available for research purposes. This corpus has been annotated for parts of speech using the BIS annotation guideline. However, it remains to be seen how good these corpora are with respect to the annotation itself. Given that annotated corpora are prone to human errors which later affect the accuracies achieved by the statistical NLP tools based on these corpora, there is a need for open evaluation of such corpora. This paper focuses on finding annotation and other types of errors in two major part-of-speech annotated corpora of Hindi and correcting them using a tool developed for the identification of verb classes in Hindi. [1: https://catalog.ldc.upenn.edu/LDC2010T24; 2: http://tdil-dc.in] | Evaluating Two Annotated Corpora of Hindi Using a Verb Class Identifier
d33590512 | We describe a monolingual English corpus of original and (human) translated texts, with an accurate annotation of speaker properties, including the original language of the utterances and the speaker's country of origin. We thus obtain three sub-corpora of texts reflecting native English, non-native English, and English translated from a variety of European languages. This dataset will facilitate the investigation of similarities and differences between these kinds of sub-languages. Moreover, it will facilitate a unified comparative study of translations and language produced by (highly fluent) non-native speakers, two closely-related phenomena that have only been studied in isolation so far. | A Corpus of Native, Non-native and Translated Texts |
d9199676 | One of the central problems of opinion mining is to extract aspects of entities or topics that have been evaluated in an opinion sentence or document. Much of the existing research has focused on extracting explicit aspects, which are nouns and noun phrases that appear in sentences, e.g., price in "The price of this bike is very high." However, in many cases, people do not explicitly mention an aspect in a sentence, but the aspect is implied, e.g., "This bike is expensive," where expensive indicates the price aspect of the bike. Although there are some existing works dealing with the problem, they all used the corpus-based approach, which has several shortcomings. In this paper, we propose a dictionary-based approach to address these shortcomings. We formulate the problem as collective classification. Experimental results show that the proposed approach is effective and produces significantly better results than strong baselines based on traditional supervised classification. | A Dictionary-Based Approach to Identifying Aspects Implied by Adjectives for Opinion Mining
d226283883 | Automatic summarization research has traditionally focused on providing high quality general-purpose summaries of documents. However, there are many applications that require more specific summaries, such as supporting question answering or topic-based literature discovery. In this paper, we study the problem of conditional summarization in which content selection and surface realization are explicitly conditioned on an ad-hoc natural language question or topic description. Because of the difficulty in obtaining sufficient reference summaries to support arbitrary conditional summarization, we explore the use of multi-task fine-tuning (MTFT) on twenty-one natural language tasks to enable zero-shot conditional summarization on five tasks. We present four new summarization datasets, two novel "online" or adaptive task-mixing strategies, and report zero-shot performance using T5 and BART, demonstrating that MTFT can improve zero-shot summarization quality. | Towards Zero-Shot Conditional Summarization with Adaptive Multi-Task Fine-Tuning
d6699186 | We propose a principled and efficient phrase-to-phrase alignment model, useful in machine translation as well as other related natural language processing problems. In a hidden semi-Markov model, word-to-phrase and phrase-to-word translations are modeled directly by the system. Agreement between two directional models encourages the selection of parsimonious phrasal alignments, avoiding the overfitting commonly encountered in unsupervised training with multi-word units. Expanding the state space to include "gappy phrases" (such as French ne pas) makes the alignment space more symmetric; thus, it allows agreement between discontinuous alignments. The resulting system shows substantial improvements in both alignment quality and translation quality over word-based Hidden Markov Models, while maintaining asymptotically equivalent runtime. | Gappy Phrasal Alignment by Agreement
d21629710 | This article reports on three cases of teaching translation or English as a foreign language using pre-editing tasks with a machine translation system. Trainee translators or English learners were asked to input a Chinese or English paragraph into an MT system, observe the irregularities in the output, and subsequently edit the source text and input it again in the hope of getting better output from the MT system. Student reports on pre-editing procedures were collected and analysed, as well as thoughts and suggestions about using MT. It is argued that pre-editing MT input can boost student learning in the cognitive and affective domains, apart from training students to use MT systems. | Teaching MT Through Pre-editing: Three Case Studies
d255096237 | Transformer language models (TLMs) are critical for most NLP tasks, but they are difficult to create for low-resource languages because of how much pretraining data they require. In this work, we investigate two techniques for training monolingual TLMs in a low-resource setting: greatly reducing TLM size, and complementing the masked language modeling objective with two linguistically rich supervised tasks (part-of-speech tagging and dependency parsing). Results from 7 diverse languages indicate that our model, MicroBERT, is able to produce marked improvements in downstream task evaluations relative to a typical monolingual TLM pretraining approach. Specifically, we find that monolingual MicroBERT models achieve gains of up to 18% for parser LAS and 11% for NER F1 compared to a multilingual baseline, mBERT, while having less than 1% of its parameter count. We conclude that reducing TLM parameter count and using labeled data for pretraining low-resource TLMs can yield large quality benefits and, in some cases, produce models that outperform multilingual approaches. | MicroBERT: Effective Training of Low-resource Monolingual BERTs through Parameter Reduction and Multitask Learning
d64226 | Previous dialogue systems have focussed on dialogues between two agents. Many applications, however, require conversations between several participants. This paper extends speech act definitions to handle multi-agent conversations, based on a model of multi-agent belief attribution with some unique properties. Our approach has the advantage of capturing a number of interesting phenomena in a straightforward way. | COMMUNICATING WITH MULTIPLE AGENTS
d251465786 | Automatic post-editing (APE) refers to a research field that aims to automatically correct errors included in the translation sentences derived by the machine translation system. This study has several limitations, considering the data acquisition, because there is no official dataset for most language pairs. Moreover, the amount of data is restricted even for language pairs in which official data has been released, such as WMT. To solve this problem and promote universal APE research regardless of APE data existence, this study proposes a method for automatically generating APE data based on a noising scheme from a parallel corpus. Particularly, we propose a human mimicking errors-based noising scheme that considers a practical correction process at the human level. We propose a precise inspection to attain high performance, and we derived the optimal noising schemes that show substantial effectiveness. Through these, we also demonstrate that depending on the type of noise, the noising scheme-based APE data generation may lead to inferior performance. In addition, we propose a dynamic noise injection strategy that enables the acquisition of a robust error correction capability and demonstrated its effectiveness by comparative analysis. This study enables obtaining a high performance APE model without human-generated data and can promote universal APE research for all language pairs targeting English. | Empirical Analysis of Noising Scheme based Synthetic Data Generation for Automatic Post-editing |
d16766506 | Non-compositional expressions present a special challenge to NLP applications. We present a method for automatic identification of non-compositional expressions using their statistical properties in a text corpus. Our method is based on the hypothesis that when a phrase is non-compositional, its mutual information differs significantly from the mutual information of phrases obtained by substituting one of the words in the phrase with a similar word. | Automatic Identification of Non-compositional Phrases
d118033329 | This paper addresses the issue of how to organize linguistic principles for efficient processing. Based on the general characterization of principles in terms of purely computational properties, the effects of principle-ordering on parser performance are investigated. A novel parser that exploits the possible variation in principle-ordering to dynamically re-order principles is described. Heuristics for minimizing the amount of unnecessary work performed during the parsing process are also discussed. | The Computational Implementation of Principle-Based Parsers
d239020521 |  | What? Why? How? Factors That Impact the Success of Commercial MT Projects
d51867524 | Humor is one of the most attractive parts of human communication. However, automatically recognizing humor in text is challenging due to the complex characteristics of humor. This paper proposes to model sentiment association between discourse units to indicate how the punchline breaks the expectation of the setup. We found that discourse relation, sentiment conflict and sentiment transition are effective indicators for humor recognition. From the perspective of using sentiment-related features, sentiment association in discourse is more useful than counting the number of emotional words. | Modeling Sentiment Association in Discourse for Humor Recognition
d43876245 | Compositional Semantics for Relative Clauses in Lexicalized Tree Adjoining Grammars | |
d16872779 | In this article we report work on Chinese semantic role labeling, taking advantage of two recently completed corpora: the Chinese PropBank, a semantically annotated corpus of Chinese verbs, and the Chinese NomBank, a companion corpus that annotates the predicate-argument structure of nominalized predicates. Because the semantic role labels are assigned to the constituents in a parse tree, we first report experiments in which semantic role labels are automatically assigned to hand-crafted parses in the Chinese Treebank. This gives us a measure of the extent to which semantic role labels can be bootstrapped from the syntactic annotation provided in the treebank. We then report experiments using automatic parses with decreasing levels of human annotation in the input to the syntactic parser: parses that use gold-standard segmentation and POS-tagging, parses that use only gold-standard segmentation, and fully automatic parses. These experiments gauge how successful semantic role labeling for Chinese can be in more realistic situations. Our results show that when hand-crafted parses are used, semantic role labeling accuracy for Chinese is comparable to what has been reported for the state-of-the-art English semantic role labeling systems trained and tested on the English PropBank, even though the Chinese PropBank is significantly smaller in size. When an automatic parser is used, however, the accuracy of our system is significantly lower than the English state of the art. This indicates that an improvement in Chinese parsing is critical to high-performance semantic role labeling for Chinese. | Labeling Chinese Predicates with Semantic Roles
d468680 | In this paper, we describe the methodological procedures and issues that emerged from the development of a pilot Levantine Arabic Treebank (LATB) at the Linguistic Data Consortium (LDC) and its use at the Johns Hopkins University (JHU) Center for Language and Speech Processing workshop on Parsing Arabic Dialects (PAD). This pilot, consisting of morphological and syntactic annotation of approximately 26,000 words of Levantine Arabic conversational telephone speech, was developed under severe time constraints; hence the LDC team drew on their experience in treebanking Modern Standard Arabic (MSA) text. The resulting Levantine dialect treebanked corpus was used by the PAD team to develop and evaluate parsers for Levantine dialect texts. The parsers were trained on MSA resources and adapted using dialect-MSA lexical resources (some developed especially for this task) and existing linguistic knowledge about syntactic differences between MSA and dialect. The use of the LATB for development and evaluation of syntactic parsers allowed the PAD team to provide feedback to the LDC treebank developers. In this paper, we describe the creation of resources for this corpus, as well as transformations on the corpus to eliminate speech effects and lessen the gap between our preexisting MSA resources and the new dialectal corpus. | Developing and Using a Pilot Dialectal Arabic Treebank |
d236486121 | We participate in the DialDoc Shared Task sub-task 1 (Knowledge Identification). The task requires identifying the grounding knowledge, in the form of a document span, for the next dialogue turn. We employ two well-known pre-trained language models (RoBERTa and ELECTRA) to identify candidate document spans and propose a metric-based ensemble method for span selection. Our methods include data augmentation, model pre-training/fine-tuning, post-processing, and ensembling. On the submission page, we rank 2nd based on the average of normalized F1 and EM scores used for the final evaluation. Specifically, we rank 2nd on EM and 3rd on F1. In the other sub-task, the input includes the associated document, the output is the agent utterance, and the evaluation metrics are SacreBLEU (Post, 2018) and human evaluations; we only participate in sub-task 1. | Technical Report on Shared Task in DialDoc21
d257219882 | ROOTS is a 1.6TB multilingual text corpus developed for the training of BLOOM, currently the largest language model explicitly accompanied by commensurate data governance efforts. In continuation of these efforts, we present the ROOTS Search Tool: a search engine over the entire ROOTS corpus offering both fuzzy and exact search capabilities. ROOTS is the largest corpus to date that can be investigated this way. The ROOTS Search Tool is open-sourced and available on Hugging Face Spaces (hf.co/spaces/bigscience-data/roots-search). We describe our implementation and the possible use cases of our tool. The British National Corpus (Leech, 1992) was created to represent the spoken and written British English of the late 20th century, with each text hand-picked by experts, who also procured appropriate copyright exemptions. Similar national corpora were later created for many other languages, e.g. Japanese (Maekawa, 2008). The texts were often accompanied by multiple layers of annotations: syntactic, morphological, semantic, genre, source, etc. This enabled valuable empirical research on the variants of represented languages, finding use in early distributional semantic models. Corpus linguistics developed sophisticated methodologies including concordances, word sketches and various word association measures (Stefanowitsch and Gries, 2003; Baker, 2004; Kilgarriff, 2014, among others). However, this methodology did not adapt well to Web-scale corpora due to the lack of tools and resources that could support such scale. | The ROOTS Search Tool: Data Transparency for LLMs
d6647500 | Parsing, Projecting & Prototypes: Repurposing Linguistic Data on the Web | |
d219305740 | ||
d233365307 | ||
d12535 | Identifying textual inferences, where the meaning of one text follows from another, is a general underlying task within many natural language applications. Commonly, it is approached either by generative syntactic-based methods or by "lightweight" heuristic lexical models. We suggest a model which is confined to simple lexical information, but is formulated as a principled generative probabilistic model. We focus our attention on the task of ranking textual inferences and show substantially improved results on a recently investigated question answering data set. | A Probabilistic Lexical Model for Ranking Textual Inferences |
d18632014 | This study examines the acoustic variability in four 4-year-old children: two with cerebral palsy (CP) and two typically developing (TD). One recording from each child, collected from a picture-naming task and spontaneous interaction with adults, was analyzed. Acoustic vowel space, pitch and speech rate in their production were investigated. Study findings indicated the following: 1) children with CP have a smaller vowel space than TD children, and there was a scattered distribution of the formant frequencies in CP; 2) children with CP tend to spend more time producing the utterances, and their production of tones was unstable; and 3) both the speech rate and speech intelligibility in CP were lower. Future studies are needed to verify these preliminary findings. The variability features in the production of children with CP provide important references in speech therapy. | Acoustic variability in the speech of children with cerebral palsy
d17508890 | This paper introduces some of the research behind automatic scoring of the speaking part of the Arizona English Language Learner Assessment, a large-scale test now operational for students in Arizona. Approximately 70% of the students tested are in the range 4-11 years old. We cover the methods used to assess spoken responses automatically, considering both what the student says and the way in which the student speaks. We also provide evidence for the validity of machine scores. The assessments include 10 open-ended item types. For 9 of the 10 open item types, machine scoring performed at a similar level or better than human scoring at the item-type level. At the participant level, correlation coefficients between machine overall scores and average human overall scores were: Kindergarten: 0.88; Grades 1-2: 0.90; Grades 3-5: 0.94; Grades 6-8: 0.95; Grades 9-12: 0.93. The average correlation coefficient was 0.92. We include a note on implementing a detector to catch problematic test performances. | Automatic Assessment of the Speech of Young English Learners |
d3112835 | The NOMAD project (Policy Formulation and Validation through non Moderated Crowd-sourcing) is a project that supports policy making by providing rich, actionable information related to how citizens perceive different policies. NOMAD automatically analyzes citizen contributions to the informal web (e.g. forums, social networks, blogs, newsgroups and wikis) using a variety of tools. These tools comprise text retrieval, topic classification, argument detection and sentiment analysis, as well as argument summarization. NOMAD provides decision-makers with a full arsenal of solutions, starting from describing a domain and a policy to applying content search and acquisition, categorization and visualization. These solutions work in a collaborative manner in the policy-making arena. NOMAD, thus, embeds editing, analysis and visualization technologies into a concrete framework, applicable in a variety of policy-making and decision support settings. In this paper we provide an overview of the linguistic tools and resources of NOMAD. | NOMAD: Linguistic Resources and Tools Aimed at Policy Formulation and Validation
d14008875 | Most studies on statistical Korean word spacing do not utilize the information provided by the input sentence and assume that it was completely concatenated. This makes the word spacer ignore the correct spaced parts of the input sentence and erroneously alter them. To overcome such limit, this paper proposes a structural SVM-based Korean word spacing method that can utilize the space information of the input sentence. The experiment on sentences with 10% spacing errors showed that our method achieved 96.81% F-score, while the basic structural SVM method only achieved 92.53% F-score. The more the input sentence was correctly spaced, the more accurately our method performed. | Balanced Korean Word Spacing with Structural SVM |
d9830179 | LANGUAGE RESEARCH SPONSORED BY ONR | |
d17569794 | In this paper we announce the new BITS Synthesis Corpus for German. The BITS project is funded by the German Ministry of Education and Science to provide a publicly available synthesis corpus for German. The corpus comprises the voices of four German speakers (two male and two female) and consists of two parts: a set of logatome recordings for controlled diphone synthesis and a set of sentence recordings for unit selection. The paper gives an overview of the basic specifications, the profiles of the speakers, the casting procedure and quality control. Annotation and its organisation are described in detail. The final BITS speech synthesis corpus will be available via BAS and ELDA, probably by the end of 2005. | The BITS Speech Synthesis Corpus for German
d7606657 | This paper presents a method of retrieving bilingual collocations of a verb and its objective noun from cross-lingual documents with similar contents. Relevant documents are obtained by integrating cross-language category hierarchies. The results showed a 15.1% improvement over the baseline non-hierarchy model, and a 6.0% improvement over use of relevant documents retrieved from a single hierarchy. Moreover, we found that some of the retrieved collocations were domain-specific. | Retrieving Bilingual Verb-noun Collocations by Integrating Cross-Language Category Hierarchies
d8950336 | Word sense disambiguation aims to identify which meaning of a word is present in a given usage. Gathering word sense annotations is a laborious and difficult task. Several methods have been proposed to gather sense annotations using large numbers of untrained annotators, with mixed results. We propose three new annotation methodologies for gathering word senses where untrained annotators are allowed to use multiple labels and weight the senses. Our findings show that given the appropriate annotation task, untrained workers can obtain at least as high agreement as annotators in a controlled setting, and in aggregate generate equally as good of a sense labeling. | Embracing Ambiguity: A Comparison of Annotation Methodologies for Crowdsourcing Word Sense Labels |
d17347947 | We present a software architecture for data-driven dependency parsing of unrestricted natural language text, which achieves a strict modularization of parsing algorithm, feature model and learning method such that these parameters can be varied independently. The design has been realized in MaltParser, which supports several parsing algorithms and learning methods, for which complex feature models can be defined in a special description language. MaltParser is freely available for research and educational purposes. | A generic architecture for data-driven dependency parsing
d37742236 | To appreciate this article fully, it is essential to understand the historical context into which it fits, and which it has to some extent created. Although formally published for the first time here, it is already an extremely influential and classic piece of work.Finite-state machines, in one form or another, have been used for the description of natural language since the early 1950s, with the extension to transducers appearing in the 1960s. After Chomsky's stern condemnation of the adequacy of finite-state machines for describing sentence structures, they virtually disappeared from mainstream theoretical linguistics. Within computer science, they continued to be a standard formalism, although transducers were not accorded the same detailed algebraic attention as simple automata.Phonologists, meanwhile, were inventing a variety of rule mechanisms that were (with rare exceptions) only partly formalized. Superficially, most of these systems (as typified by those of Chomsky and Halle) appeared to have little to do with finite-state machines. Indeed, their notations tended to suggest that the rules had much more than finite-state power.Kaplan and Kay have integrated these two streams of work--algebraic treatment of automata in computer science, and phonologically-motivated formalisms within linguistics--and their results should feed back productively into both subfields. The framework they have established allows the comparison of different competing formalisms in a rigorous manner, and permits the exploration of the formal limitations or capabilities of rule notations that were previously more like expository devices than formally defined systems.What may not be clear to the casual reader is that this work has been developed over many years, and early versions of it have already escaped into the computational linguistics community in less prominent forums. 
In this way it has already affected the course of research into phonological/morphological formalisms. Perhaps the most notable (and in its turn, influential) development has been Koskenniemi's two-level morphology, which has been successfully applied to the morphology of a very wide number of languages. Koskenniemi's ideas are a direct development of Kaplan and Kay's, as explained in Section 7 of the paper here.The theory of regular relations and finite-state transducers should not be viewed as a mere re-formalisation of 1960s linguistics. As well as its relevance to the two-level model, Kaplan and Kay suggest that it may also throw light on the formal properties of autosegmental phonology. Although the ideas were first circulated about 15 years ago, they are still of central relevance to computational phonology today. | Syntactic Structures |
d241583273 | ||
d1559412 | At ATR Spoken Language Translation Research Laboratories, we are building a broad-coverage bilingual corpus to study corpus-based speech translation technologies for the real world. There are three important points to consider in designing and constructing a corpus for future speech translation research. The first is to have a variety of speech samples, with a wide range of pronunciations and speakers. The second is to have data for a variety of situations. The third is to have a variety of expressions. This paper reports our trials and discusses the methodology. First, we introduce a bilingual travel conversation (TC) corpus of spoken languages and a broad-coverage bilingual basic expression (BE) corpus. TC and BE are designed to be complementary. TC is a collection of transcriptions of bilingual spoken dialogues, while BE is a collection of Japanese sentences and their English translations. Whereas TC covers a small domain, BE covers a wide variety of domains. We compare the characteristics of vocabulary and expressions between these two corpora and suggest that we need a much greater variety of expressions. One promising approach might be to collect paraphrases representing various different expressions generated by many people for similar concepts. | Toward a Broad-coverage Bilingual Corpus for Speech Translation of Travel Conversations in the Real World |
d10084087 | We consider semi-supervised learning of information extraction methods, especially for extracting instances of noun categories (e.g., 'athlete,' 'team') and relations (e.g., 'playsForTeam(athlete,team)').Semisupervised approaches using a small number of labeled examples together with many unlabeled examples are often unreliable as they frequently produce an internally consistent, but nevertheless incorrect set of extractions. We propose that this problem can be overcome by simultaneously learning classifiers for many different categories and relations in the presence of an ontology defining constraints that couple the training of these classifiers. Experimental results show that simultaneously learning a coupled collection of classifiers for 30 categories and relations results in much more accurate extractions than training classifiers individually. | Coupling Semi-Supervised Learning of Categories and Relations |
d60227474 | The idea that statistical significance tests can be applied to the task of determining relatedness of languages is known to provoke rather emotional reactions. "Fallacious," "specious," "circular," "superficially plausible but in fact utterly unreliable," "exhibiting innumeracy to a fatal degree"--these are epithets actually used by researchers to describe one another's work in this area. What is generally agreed upon is that languages of common origin normally exhibit more common traits than unrelated languages. The main difficulty is finding an objective and reliable measure of the probability that the regularities are not due to chance. A number of phenomena make such statistical comparisons difficult. First, lexical replacement steadily decreases the number of cognates shared by related languages. Second, with the passage of time, words can radically change their phonetic form and sometimes also their meaning. Finally, lexical transfer of words between unrelated languages that come into contact further obscures the true nature of their relationship. Nevertheless, a number of proposals have been put forward, the most prominent being the method of Ringe, introduced in his 1992 monograph and refined in his subsequent papers. The new book by Kessler, based on his Ph.D. dissertation, is the most comprehensive work on the subject so far. It critically analyzes the previous approaches, points out a number of possible pitfalls of statistical testing, and proposes a novel solution to the problem. Although its main target audience is linguists interested in statistical argumentation, the book is also of computational interest. Kessler's approach crucially depends on computerized simulations, which are used in lieu of deduction in order to arrive at nontrivial results. Moreover, proper application of statistical reasoning is a topic of concern for many computational linguists.
No deep background in historical linguistics is necessary for understanding the problems discussed in the book. In the opening chapters, Kessler clearly introduces the problem by means of several illuminating examples. He explains in detail the important difference between measures based on phonetic similarities and those focused on sound recurrences, and his reasons for preferring the latter. The previously proposed methods are carefully analyzed, and their linguistic biases and mathematical flaws are pointed out. The | Book Reviews The Significance of Word Lists
d17162694 | Lexical substitution is a task of determining a meaning-preserving replacement for a word in context. We report on a preliminary study of this task for the Croatian language on a small-scale lexical sample dataset, manually annotated using three different annotation schemes. We compare the annotations, analyze the inter-annotator agreement, and observe a number of interesting language-specific details in the obtained lexical substitutes. Furthermore, we apply a recently-proposed, dependency-based lexical substitution model to our dataset. The model achieves a P@3 score of 0.35, which indicates the difficulty of the task. | A Preliminary Study of Croatian Lexical Substitution |
d252847557 | Metaphors are creative cognitive constructs that are employed in everyday conversation to describe abstract concepts and feelings. Prevalent conceptual metaphors such as WAR, MONSTER, and DARKNESS in COVID-19 online discourse sparked a multi-faceted debate over their efficacy in communication, their resultant psychological impact on listeners, and their appropriateness in social discourse. In this work, we investigate metaphors used in discussions around COVID-19 on Indian Twitter. We observe subtle transitions in metaphorical mappings as the pandemic progressed. Our experiments, however, did not indicate any affective impact of WAR metaphors on the COVID-19 discourse. |
d257258162 | Historical treebanking within the generative framework has gained in popularity. However, there are still many languages and historical periods yet to be represented. For German, a constituency treebank exists for historical Low German, but not Early New High German. We begin to fill this gap by presenting our initial work on the Parsed Corpus of Early New High German (PCENHG). We present the methodological considerations and workflow for the treebank's annotations and development. Given the limited amount of currently available PCENHG treebank data, we treat it as a low-resource language and leverage a larger, closely related variety, Middle Low German, to build a parser to help facilitate faster post-annotation correction. We present an analysis of annotation speeds and conclude with a small pilot use-case, highlighting potential for future linguistic analyses. In doing so we highlight the value of the treebank's development for historical linguistic analysis and demonstrate the benefits and challenges of developing a parser using two closely related historical Germanic varieties. |
d252819055 | Most attempts at the Text-to-SQL task using an encoder-decoder approach suffer a dramatic decline in performance on new databases. Models trained on the Spider dataset, despite achieving 75% accuracy on the Spider development or test sets, fall below 20% accuracy on databases not in Spider. We present a system that combines automated training-data augmentation and an ensemble technique. We achieve a double-digit percentage improvement for databases that are not part of the Spider corpus. | Addressing Limitations of Encoder-Decoder Based Approach to Text-to-SQL
d2924682 | ConceptNet is a knowledge representation project, providing a large semantic graph that describes general human knowledge and how it is expressed in natural language. This paper presents the latest iteration, ConceptNet 5, including its fundamental design decisions, ways to use it, and evaluations of its coverage and accuracy. Figure 1: A high-level view of the knowledge ConceptNet has about a cluster of related concepts. | Representing General Relational Knowledge in ConceptNet 5
d1851389 | We propose a computationally efficient graph-based approach for local coherence modeling. We evaluate our system on three tasks: sentence ordering, summary coherence rating and readability assessment. The performance is comparable to entity grid based approaches though these rely on a computationally expensive training phase and face data sparsity problems. | Graph-based Local Coherence Modeling |
d15654218 | We present in this paper an on-going research: the construction and annotation of a Romanian Generative Lexicon (RoGL). Our system follows the specifications of CLIPS project for Italian language. It contains a corpus, a type ontology, a graphical interface and a database from which we generate data in XML format. | Building a generative lexicon for Romanian |
d258463962 | Warning: This paper contains examples of language that some people may find offensive. Transformer-based language models have achieved state-of-the-art performance on a wide range of Natural Language Processing (NLP) tasks. This work examines the effectiveness of transformer language models like BERT, RoBERTa, ALBERT, and DistilBERT on existing Indian hate speech datasets such as HASOC-Hindi (2019), HASOC-Marathi (2021) and Bengali Hate Speech (BenHateSpeech) over binary classification. Most deep learning methods fail to recognize a hate sentence if hate words are wrapped into sophisticated words, whereas transformers understand the context of a hate word present in a sentence. Here, transformer-based multilingual models such as MuRIL-BERT, XLM-RoBERTa, etc. are compared with monolingual models like NeuralSpace-BERTHi (Hindi), MahaBERT (Marathi), BanglaBERT (Bengali), etc. It is noticed that the monolingual MahaBERT model performs best on HASOC-Marathi, whereas the multilingual MuRIL-BERT performs best on HASOC-Hindi and BenHateSpeech. Several other cross-language evaluations over Marathi and Hindi monolingual models, and mixed observations, are presented. | Hate speech detection: a comparison of mono and multilingual transformer model with cross-language evaluation
d202676718 | An interesting method of evaluating word representations is by how much they reflect the semantic representations in the human brain. However, most, if not all, previous works only focus on small datasets and a single modality. In this paper, we present the first multimodal framework for evaluating English word representations based on cognitive lexical semantics. Six types of word embeddings are evaluated by fitting them to 15 datasets of eye-tracking, EEG and fMRI signals recorded during language processing. To achieve a global score over all evaluation hypotheses, we apply statistical significance testing accounting for the multiple comparisons problem. This framework is easily extensible and available to include other intrinsic and extrinsic evaluation methods. We find strong correlations in the results between cognitive datasets, across recording modalities and to their performance on extrinsic NLP tasks. | CogniVal: A Framework for Cognitive Word Embedding Evaluation
d2489350 | Unsupervised dialogue act modeling holds great promise for decreasing the development time to build dialogue systems. Work to date has utilized manual annotation or a synthetic task to evaluate unsupervised dialogue act models, but each of these evaluation approaches has substantial limitations. This paper presents an in-context evaluation framework for an unsupervised dialogue act model within tutorial dialogue. The clusters generated by the model are mapped to tutor responses by a handcrafted policy, which is applied to unseen test data and evaluated by human judges. The results suggest that in-context evaluation may better reflect the performance of a model than comparing against manual dialogue act labels. | In-Context Evaluation of Unsupervised Dialogue Act Models for Tutorial Dialogue
d5692502 | Knowledge of the anaphoricity of a noun phrase might be profitably exploited by a coreference system to bypass the resolution of non-anaphoric noun phrases. Perhaps surprisingly, however, recent attempts to incorporate automatically acquired anaphoricity information into coreference systems have led to degradation in resolution performance. This paper examines several key issues in computing and using anaphoricity information to improve learning-based coreference systems. In particular, we present a new corpus-based approach to anaphoricity determination. Experiments on three standard coreference data sets demonstrate the effectiveness of our approach. | Learning Noun Phrase Anaphoricity to Improve Coreference Resolution: Issues in Representation and Optimization
d250391092 | In this paper we explore how a demographic distribution of occupations, along gender dimensions, is reflected in pre-trained language models. We give a descriptive assessment of the distribution of occupations, and investigate to what extent these are reflected in four Norwegian and two multilingual models. To this end, we introduce a set of simple bias probes, and perform five different tasks combining gendered pronouns, first names, and a set of occupations from the Norwegian statistics bureau. We show that language-specific models obtain more accurate results, and are much closer to the real-world distribution of clearly gendered occupations. However, we see that none of the models have correct representations of the occupations that are demographically balanced between genders. We also discuss the importance of the training data on which the models were trained, and argue that template-based bias probes can sometimes be fragile, and a simple alteration in a template can change a model's behavior. | Occupational Biases in Norwegian and Multilingual Language Models
d258378176 | A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks. In this work, we assume that each task is associated with a subset of latent skills from an (arbitrary size) inventory. In turn, each skill corresponds to a parameter-efficient (sparse / low-rank) model adapter. By jointly learning adapters and a routing function that allocates skills to each task, the full network is instantiated as the average of the parameters of active skills. We propose several inductive biases that encourage re-usage and composition of the skills, including variable-size skill allocation and a dual-speed learning rate. We evaluate our latent-skill model in two main settings: 1) multitask reinforcement learning for instruction following on 8 levels of the BabyAI platform; and 2) few-shot fine-tuning of language models on 160 NLP tasks of the CrossFit benchmark. We find that the modular design of our network enhances sample efficiency in reinforcement learning and few-shot generalisation in supervised learning, compared to a series of baselines. These include models where parameters are fully shared, task-specific, or conditionally generated (HyperFormer), as well as sparse mixture-of-experts (Task-MoE). | Combining Parameter-efficient Modules for Task-level Generalisation |
d252819175 | This article presents the specification and evaluation of DiaBiz.Kom, the corpus of dialogue texts in Polish. The corpus contains transcriptions of telephone conversations conducted according to a prepared scenario. The transcripts of conversations have been manually annotated with a layer of information concerning communicative functions. DiaBiz.Kom is the first corpus of this type prepared for the Polish language and will be used to develop a system of dialogue analysis and modules for creating advanced chatbots. | DiaBiz.Kom - Towards a Polish Dialogue Act Corpus Based on ISO 24617-2 Standard
d14782847 | A defining characteristic and advantage of natural language is that, as a symbolic system, it has an internal logical framework for organizing and positioning conceptual knowledge: its lexicon system. This framework implements the fundamental function of natural language to condense, absorb, organize and position conceptual knowledge, and progressively creates a very large and complex built-in knowledge system in the language. It is also the basis of the other two fundamental functions of natural language; i.e., it serves as a tool for communication and as a medium for conceptual thought. Natural language semantics should reproduce the basic framework of natural language in its theoretical realm to represent these three functions and their relationships. Lexical semantics thereby becomes its core. | Situation - A Suitable Framework for Organizing and Positioning Lexical Semantic Knowledge
d11047550 | When phrase-based statistical machine translation (SMT) is applied to languages with rather free word order and rich morphology, translated texts are often not fluent, due to misused inflectional forms and wrong word order between phrases or even inside a phrase. One possible way to improve translation quality is to apply factored models. The paper presents work on English-Latvian phrase-based and factored SMT systems and, using evaluation results, demonstrates that although factored models seem more appropriate for highly inflected languages, they have rather little influence on translation results, while better translation quality can be achieved using a phrase-based model with more data. | English-Latvian SMT: knowledge or data?
d20245049 | This paper presents the results of an experimental pilot user study, focusing on the evaluation of machine-translated user-generated content by users of an online community forum and how those users interact with the MT content that is presented to them. Preliminary results show that ratings are very difficult to obtain, that a low percentage of posts (21%) was rated, that users need to be well informed about their task and that there is a weak correlation between the length of the post (number of words) and its comprehensibility. | Evaluation of Machine-Translated User Generated Content: A pilot study based on User Ratings |
d233029487 | ||
d241583398 | ||
d34312692 | In this paper, we combine methods to estimate sense rankings from raw text with recent work on word embeddings to provide sense ranking estimates for the entries in the Open Multilingual WordNet (OMW). The existing Word2Vec pre-trained models from Polygot2 are built only for single-word entries; we therefore re-train them with multiword expressions from the wordnets, so that multiword expressions can also be ranked. The trained model thus gives embeddings for both single words and multiwords. The resulting lexicon gives a WSD baseline for five languages. The results are evaluated on Semcor sense corpora for 5 languages using Word2Vec and GloVe models. The GloVe model achieves an average accuracy of 0.47 and Word2Vec achieves 0.31 for languages such as English, Italian, Indonesian, Chinese and Japanese. The experimentation on OMW sense ranking shows that the rank correlation is generally similar to the human ranking. Hence distributional semantics can aid in WordNet sense ranking. | Multilingual Wordnet sense Ranking using nearest context
d13086372 | We introduce possibilities of automatic evaluation of surface text coherence (cohesion) in texts written by learners of Czech during certified exams for non-native speakers. On the basis of a corpus analysis, we focus on finding and describing relevant distinctive features for automatic detection of A1-C1 levels (established by CEFR, the Common European Framework of Reference for Languages) in terms of surface text coherence. The CEFR levels are evaluated by human assessors, and we try to reach this assessment automatically by using several discourse features like frequency and diversity of discourse connectives, density of discourse relations, etc. We present experiments with various features using two machine learning algorithms. Our results of automatic evaluation of CEFR coherence/cohesion marks (compared to human assessment) achieved a 73.2% success rate for the detection of A1-C1 levels and 74.9% for the detection of A2-B2 levels. | Automatic evaluation of surface coherence in L2 texts in Czech
d454696 | This paper provides a method for improving tensor-based compositional distributional models of meaning by the addition of an explicit disambiguation step prior to composition. In contrast with previous research where this hypothesis has been successfully tested against relatively simple compositional models, in our work we use a robust model trained with linear regression. The results we get in two experiments show the superiority of the prior disambiguation method and suggest that the effectiveness of this approach is model-independent. | Resolving Lexical Ambiguity in Tensor Regression Models of Meaning
d199442469 | Given many recent advanced embedding models, selecting pre-trained word embedding (a.k.a., word representation) models best fit for a specific downstream task is non-trivial. In this paper, we propose a systematic approach, called ETNLP, for extracting, evaluating, and visualizing multiple sets of pre-trained word embeddings to determine which embeddings should be used in a downstream task. We demonstrate the effectiveness of the proposed approach on our pre-trained word embedding models in Vietnamese to select which models are suitable for a named entity recognition (NER) task. Specifically, we create a large Vietnamese word analogy list to evaluate and select the pre-trained embedding models for the task. We then utilize the selected embeddings for the NER task and achieve the new state-of-the-art results on the task benchmark dataset. We also apply the approach to another downstream task of privacy-guaranteed embedding selection, and show that it helps users quickly select the most suitable embeddings. In addition, we create an open-source system using the proposed systematic approach to facilitate similar studies on other NLP tasks. The source code and data are available at https://github.com/vietnlp/etnlp. | ETNLP: A Visual-Aided Systematic Approach to Select Pre-Trained Embeddings for a Downstream Task
d250391045 | Recognizing offensive text is an important requirement for every content management system, especially for social networks. While the majority of the prior work formulate this problem as text classification, i.e., if a text excerpt is offensive or not, in this work we propose a novel model for offensive span detection (OSD), whose goal is to identify the spans responsible for the offensive tone of the text. One of the challenges to train a model for this novel setting is the lack of enough training data. To address this limitation, in this work we propose a novel method in which the large-scale pretrained language model GPT-2 is employed to generate synthetic training data for OSD. In particular, we propose to train the GPT-2 model in a dual-training setting using the REINFORCE algorithm to generate in-domain, natural and diverse training samples. Extensive experiments on the benchmark dataset for OSD reveal the effectiveness of the proposed method. | Data Augmentation with Dual Training for Offensive Span Detection |
d244464099 | Temporal commonsense reasoning is a challenging task as it requires temporal knowledge usually not explicitly stated in text. In this work, we propose an ensemble model for temporal commonsense reasoning. Our model relies on pre-trained contextual representations from transformer-based language models (i.e., BERT), and on a variety of training methods for enhancing model generalization: 1) multi-step fine-tuning using carefully selected auxiliary tasks and datasets, and 2) a specifically designed temporal task-adaptive pre-training task aimed to capture temporal commonsense knowledge. Our model greatly outperforms the standard fine-tuning approach and strong baselines on the MC-TACO dataset. | Towards a Language Model for Temporal Commonsense Reasoning
d236460200 | Misinformation has recently become a well-documented matter of public concern. Existing studies on this topic have hitherto adopted a coarse concept of misinformation, which incorporates a broad spectrum of story types ranging from political conspiracies to misinterpreted pranks. This paper aims to structurize these misinformation stories by leveraging fact-check articles. Our intuition is that key phrases in a fact-check article that identify the misinformation type(s) (e.g., doctored images, urban legends) also act as rationales that determine the verdict of the fact-check (e.g., false). We experiment on rationalized models with domain knowledge as weak supervision to extract these phrases as rationales, and then cluster semantically similar rationales to summarize prevalent misinformation types. Using archived fact-checks from Snopes.com, we identify ten types of misinformation stories. We discuss how these types have evolved over the last ten years and compare their prevalence between the 2016/2020 US presidential elections and the H1N1/COVID-19 pandemics. | Structurizing Misinformation Stories via Rationalizing Fact-Checks
d258765289 | This paper explores knowledge distillation for multi-domain neural machine translation (NMT). We focus on the Estonian-English translation direction and experiment with distilling the knowledge of multiple domain-specific teacher models into a single student model that is tiny and efficient. Our experiments use a large parallel dataset of 18 million sentence pairs, consisting of 10 corpora, divided into 6 domain groups based on source similarity, and incorporate forward-translated monolingual data. Results show that tiny student models can cope with multiple domains even in case of large corpora, with different approaches benefiting frequent and low-resource domains. | Distilling Estonian Text Domains for Production-Oriented Machine Translation |
d34257390 | Shallow semantic analyzers, such as semantic role labeling and sense tagging, are increasing in accuracy and becoming commonplace. However, they only provide limited and local representations of local words and individual predicate-argument structures. This talk will address some of the current challenges in producing deeper, connected representations of eventualities. Available resources, such as VerbNet, FrameNet and TimeBank, that can assist in this process will also be discussed, as well as some of their limitations. | Going Beyond Shallow Semantics (invited talk) |
d142503522 | We present a very simple, unsupervised method for the pairwise matching of documents from heterogeneous collections. We demonstrate our method with the Concept-Project matching task, which is a binary classification task involving pairs of documents from heterogeneous collections. Although our method only employs standard resources without any domain- or task-specific modifications, it clearly outperforms the more complex system of the original authors. In addition, our method is transparent, because it provides explicit information about how a similarity score was computed, and efficient, because it is based on the aggregation of (pre-computable) word-level similarities. | Semantic Matching of Documents from Heterogeneous Collections: A Simple and Transparent Method for Practical Applications
d13511420 | We report on the implementation of a morphological analyzer for the Sahidic dialect of Coptic, a now extinct Afro-Asiatic language. The system is developed in the finite-state paradigm. The main purpose of the project is to provide a method by which scholars and linguists can semi-automatically gloss extant texts written in Sahidic. Since a complete lexicon containing all attested forms in different manuscripts requires significant expertise in Coptic spanning almost 1,000 years, we have equipped the analyzer with a core lexicon and extended it with a 'guesser' ability to capture out-of-vocabulary items in any inflection. We also suggest an ASCII transliteration for the language. A brief evaluation is provided. | Morphological Analysis of Sahidic Coptic for Automatic Glossing
d12807398 | This paper describes a simple pattern-matching algorithm for recovering empty nodes and identifying their co-indexed antecedents in phrase structure trees that do not contain this information. The patterns are minimal connected tree fragments containing an empty node and all other nodes co-indexed with it. This paper also proposes an evaluation procedure for empty node recovery procedures which is independent of most of the details of phrase structure, which makes it possible to compare the performance of empty node recovery on parser output with the empty node annotations in a gold-standard corpus. Evaluating the algorithm on the output of Charniak's parser (Charniak, 2000) and the Penn treebank (Marcus et al., 1993) shows that the pattern-matching algorithm does surprisingly well on the most frequently occurring types of empty nodes given its simplicity. | A simple pattern-matching algorithm for recovering empty nodes and their antecedents
d261341916 |