Dataset schema (column name: type, with observed value statistics):

ID: string (length 11–54)
url: string (length 33–64)
title: string (length 11–184)
abstract: string (length 17–3.87k)
label_nlp4sg: bool (2 classes)
task: list
method: list
goal1: string (9 classes)
goal2: string (9 classes)
goal3: string (1 class)
acknowledgments: string (length 28–1.28k)
year: string (length 4)
sdg1: bool (1 class)
sdg2: bool (1 class)
sdg3: bool (2 classes)
sdg4: bool (2 classes)
sdg5: bool (2 classes)
sdg6: bool (1 class)
sdg7: bool (1 class)
sdg8: bool (2 classes)
sdg9: bool (2 classes)
sdg10: bool (2 classes)
sdg11: bool (2 classes)
sdg12: bool (1 class)
sdg13: bool (2 classes)
sdg14: bool (1 class)
sdg15: bool (2 classes)
sdg16: bool (2 classes)
sdg17: bool (2 classes)
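Read as a record type, the schema above maps naturally onto a small data class. A minimal illustrative sketch in Python (the PaperRecord class, the sdgs dictionary, and the field groupings are our own choices, not part of the dataset; the sample values are taken from the first record of the dump):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Hypothetical record type mirroring the dataset's columns.
@dataclass
class PaperRecord:
    ID: str                      # ACL Anthology ID, 11-54 chars
    url: str                     # paper URL
    title: str
    abstract: str
    label_nlp4sg: bool           # NLP-for-social-good label (2 classes)
    task: List[str]
    method: List[str]
    goal1: Optional[str]         # one of 9 goal classes, or None
    goal2: Optional[str]
    goal3: Optional[str]
    acknowledgments: Optional[str]
    year: str                    # 4-character year string
    sdgs: Dict[str, bool] = field(default_factory=dict)  # "sdg1".."sdg17"

# Sample values taken from the first record of the dump (abstract elided).
record = PaperRecord(
    ID="qi-etal-2018-universal",
    url="https://aclanthology.org/K18-2016",
    title="Universal Dependency Parsing from Scratch",
    abstract="This paper describes Stanford's system at the "
             "CoNLL 2018 UD Shared Task. ...",
    label_nlp4sg=False,
    task=[],
    method=[],
    goal1=None,
    goal2=None,
    goal3=None,
    acknowledgments=None,
    year="2018",
    sdgs={f"sdg{i}": False for i in range(1, 18)},
)

# All 17 SDG flags are present and none is set for this record.
assert len(record.sdgs) == 17 and not any(record.sdgs.values())
```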
qi-etal-2018-universal
https://aclanthology.org/K18-2016
Universal Dependency Parsing from Scratch
This paper describes Stanford's system at the CoNLL 2018 UD Shared Task. We introduce a complete neural pipeline system that takes raw text as input, and performs all tasks required by the shared task, ranging from tokenization and sentence segmentation, to POS tagging and dependency parsing. Our single system submission achieved very competitive performance on big treebanks. Moreover, after fixing an unfortunate bug, our corrected system would have placed 2nd, 1st, and 3rd on the official evaluation metrics LAS, MLAS, and BLEX, and would have outperformed all submission systems on low-resource treebank categories on all metrics by a large margin. We further show the effectiveness of different model components through extensive ablation studies.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sirai-1992-syntactic
https://aclanthology.org/C92-4172
Syntactic Constraints on Relativization in Japanese
This paper discusses the formalization of relative clauses in Japanese based on JPSG framework. We characterize them as adjuncts to nouns, and formalize them in terms of constraints among grammatical features. Furthermore, we claim that there is a constraint on the number of slash elements and show the supporting facts.
false
[]
[]
null
null
null
Acknowledgments. We are grateful to Dr. Takao Gunji, Dr. Kôiti Hasida and other members of the JPSG working group at ICOT for discussion. We also thank Dr. Phillip Morrow for proofreading.
1992
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yang-1999-towards
https://aclanthology.org/1999.mtsummit-1.58
Towards the automatic acquisition of lexical selection rules
This paper is a study of a certain type of collocation and its implications for and application to the acquisition of lexical selection rules in transfer-approach MT systems. Collocations reveal the co-occurrence possibilities of linguistic units in one language, and they often require lexical selection rules to enhance the natural flow and clarity of MT output. The study presents an automatic acquisition and human verification process to acquire collocations and suggest possible candidates for lexical selection rules. The mechanism has been used in the development and enhancement of the Chinese-English and Japanese-English MT systems, and can be easily adapted to other language pairs. Future work includes expanding its usage to more language pairs and furthering its application to MT customers.
false
[]
[]
null
null
null
The work was initiated for the Chinese-English MT system development, which has been supported in part by NAIC (National Air Force Intelligence Center). We thank Dale Bostad of NAIC for his continuous support. The SYSTRAN Chinese and Japanese development groups have contributed to the experiment and evaluation of the process. Many thanks to my colleagues Elke Lange and Dan Roffee for reviewing the paper and anonymous MT Summit VII reviewers for their helpful comments.
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ruckle-etal-2021-adapterdrop
https://aclanthology.org/2021.emnlp-main.626
AdapterDrop: On the Efficiency of Adapters in Transformers
Transformer models are expensive to fine-tune, slow for inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, dynamically reducing the model size, and by training lightweight adapters. In this paper, we propose AdapterDrop, removing adapters from lower transformer layers during training and inference, which incorporates concepts from all three directions. We show that AdapterDrop can dynamically reduce the computational overhead when performing inference over multiple tasks simultaneously, with minimal decrease in task performance. We further prune adapters from AdapterFusion, which improves inference efficiency while fully maintaining task performance.
false
[]
[]
null
null
null
This work has received financial support from multiple sources. (1) The German Federal Ministry of
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hutchins-2012-obituary
https://aclanthology.org/J12-3001
Obituary: Victor H. Yngve
(MIT), as editor of its first journal, as designer and developer of the first non-numerical programming language (COMIT), and as an influential contributor to linguistic theory. While still completing his Ph.D. on cosmic ray physics at the University of Chicago during 1950-1953, Yngve had an idea for using the newly invented computers to translate languages. He contemplated building a translation machine based on simple dictionary lookup. At this time he knew nothing of the earlier speculations of Warren Weaver and others (Hutchins 1997). Then during a visit to Claude Shannon at Bell Telephone Laboratories in early 1952 he heard about a conference on machine translation to be held at MIT in June of that year. He attended the opening public meeting and participated in conference discussions, and then, after Bar-Hillel's departure from MIT, he was appointed in July 1953 by Jerome Wiesner at the Research Laboratory of Electronics (RLE) to lead the MT research effort there. (For a retrospective survey of his MT research activities see Yngve [2000].) Yngve, along with many others at the time, deprecated the premature publicity around the Georgetown-IBM system demonstrated in January 1954. Yngve was appalled to see research of such a limited nature reported in newspapers; his background in physics required experiments to be carefully planned, with their assumptions made plain, and properly tested and reviewed by other researchers. He was determined to set the new field of MT on a proper scientific course. The first step was a journal for the field, to be named Mechanical Translation (the field became "machine translation" in later years). He found a collaborator for the journal in William N. Locke of the MIT Modern Languages department. The aim was to provide a forum for information about what research was going on in the form of abstracts, and then for peer-reviewed articles. The first issue appeared in March 1954.
Yngve's first experiments at MIT in October 1953 were an implementation of his earlier ideas on word-for-word translation. The results of translating from German were published in the collection edited by Locke and Booth (Yngve 1955b). One example of output began: Die CONVINCINGe CRITIQUE des CLASSICALen IDEA-OF-PROBABILITY IS eine der REMARKABLEen WORKS des AUTHORs. Er HAS BOTHen LAWe der GREATen NUMBERen ein DOUBLEes TO SHOWen: (1) wie sie IN seinem SYSTEM TO INTERPRETen ARE, (2) THAT sie THROUGH THISe INTERPRETATION NOT den CHARACTER von NOT-TRIVIALen DEMONSTRABLE PROPOSITIONen LOSen. . .
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zue-1989-acoustic
https://aclanthology.org/H89-1025
Acoustic-Phonetics Based Speech Recognition
The objective of this project is to develop a robust and high-performance speech recognition system using a segment-based approach to phonetic recognition. The recognition system will eventually be integrated with natural language processing to achieve spoken language understanding. We developed a phonetic recognition front-end and achieved 77% and 71% classification accuracy under speaker-dependent and speaker-independent conditions, respectively, using a set of 38 context-independent models.
false
[]
[]
null
null
null
null
1989
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
riedl-biemann-2012-sweeping
https://aclanthology.org/W12-0703
Sweeping through the Topic Space: Bad luck? Roll again!
Topic Models (TM) such as Latent Dirichlet Allocation (LDA) are increasingly used in Natural Language Processing applications. However, the model parameters and the influence of randomized sampling and inference are rarely examined; usually, the recommendations from the original papers are adopted. In this paper, we examine the parameter space of LDA topic models with respect to the application of Text Segmentation (TS), specifically targeting error rates and their variance across different runs. We find that the recommended settings result in error rates far from optimal for our application. We show substantial variance in the results for different runs of model estimation and inference, and give recommendations for increasing the robustness and stability of topic models. Running the inference step several times and selecting the last topic ID assigned per token shows considerable improvements. Similar improvements are achieved with the mode method: we store all assigned topic IDs during each inference iteration step and select the most frequent topic ID assigned to each word. These recommendations do not only apply to TS, but are generic enough to transfer to other applications.
false
[]
[]
null
null
null
This work has been supported by the Hessian research excellence program "Landes-Offensive zur Entwicklung Wissenschaftlich-ökonomischer Exzellenz" (LOEWE) as part of the research center "Digital Humanities". We would also like to thank the anonymous reviewers for their comments, which greatly helped to improve the paper.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liao-grishman-2011-using
https://aclanthology.org/I11-1080
Using Prediction from Sentential Scope to Build a Pseudo Co-Testing Learner for Event Extraction
Event extraction involves the identification of instances of a type of event, along with their attributes and participants. Developing a training corpus by annotating events in text is very labor intensive, and so selecting informative instances to annotate can save a great deal of manual work. We present an active learning (AL) strategy, pseudo co-testing, based on one view from a classifier aiming to solve the original problem of event extraction, and another view from a classifier aiming to solve a coarser granularity task. As the second classifier can provide more graded matching from a wider scope, we can build a set of pseudo contention points which are very informative, and can speed up the AL process. Moreover, we incorporate multiple selection criteria into the pseudo co-testing, seeking training examples that are informative, representative, and varied. Experiments show that pseudo co-testing can reduce annotation labor by 81%; incorporating multiple selection criteria reduces the labor by a further 7%.
false
[]
[]
null
null
null
Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Air Force Research Laboratory (AFRL) contract number FA8650-10-C-7058. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chiang-etal-2008-decomposability
https://aclanthology.org/D08-1064
Decomposability of Translation Metrics for Improved Evaluation and Efficient Algorithms
B is the de facto standard for evaluation and development of statistical machine translation systems. We describe three real-world situations involving comparisons between different versions of the same systems where one can obtain improvements in B scores that are questionable or even absurd. These situations arise because B lacks the property of decomposability, a property which is also computationally convenient for various applications. We propose a very conservative modification to B and a cross between B and word error rate that address these issues while improving correlation with human judgments.
false
[]
[]
null
null
null
Our thanks go to Daniel Marcu for suggesting modifying the BLEU brevity penalty, and to Jonathan May and Kevin Knight for their insightful comments. This research was supported in part by DARPA grant HR0011-06-C-0022 under BBN Technologies subcontract 9500008412.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vacher-etal-2014-sweet
http://www.lrec-conf.org/proceedings/lrec2014/pdf/118_Paper.pdf
The Sweet-Home speech and multimodal corpus for home automation interaction
Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home thanks to Smart Homes and Home Automation. However, many studies do not include tests in real settings, because data collection in this domain is very expensive and challenging and because few data sets are available. The SWEET-HOME multimodal corpus is a dataset recorded in realistic conditions in DOMUS, a fully equipped Smart Home with microphones and home automation sensors, in which participants performed Activities of Daily Living (ADL). This corpus is made of a multimodal subset, a French home automation speech subset recorded in distant speech conditions, and two interaction subsets, the first recorded by 16 persons without disabilities and the second by 6 seniors and 5 visually impaired people. This corpus was used in studies related to ADL recognition, context-aware interaction, and distant speech recognition applied to home automation controlled through voice.
false
[]
[]
null
null
null
This work is part of the SWEET-HOME project funded by the French National Research Agency (Agence Nationale de la Recherche / ANR-09-VERS-011). The authors would like to thank the participants who agreed to take part in the experiments. Thanks are extended to S. Humblot, S. Meignard, D. Guerin, C. Fontaine, D. Istrate, C. Roux and E. Elias for their support.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pallotta-etal-2007-user
https://aclanthology.org/P07-1127
User Requirements Analysis for Meeting Information Retrieval Based on Query Elicitation
We present a user requirements study for Question Answering on meeting records that assesses the difficulty of users' questions in terms of what type of knowledge is required in order to provide the correct answer. We grounded our work on the empirical analysis of elicited user queries. We found that the majority of elicited queries (around 60%) pertain to argumentative processes and outcomes. Our analysis also suggests that standard keyword-based Information Retrieval can only deal successfully with less than 20% of the queries, and that it must be complemented with other types of metadata and inference.
false
[]
[]
null
null
null
We wish to thank Martin Rajman and Hatem Ghorbel for their constant and valuable feedback. This work has been partially supported by the Swiss National Science Foundation NCCR IM2 and by the SNSF grant no. 200021-116235.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hasan-etal-2006-reranking
https://aclanthology.org/W06-2606
Reranking Translation Hypotheses Using Structural Properties
We investigate methods that add syntactically motivated features to a statistical machine translation system in a reranking framework. The goal is to analyze whether shallow parsing techniques help in identifying ungrammatical hypotheses. We show that improvements are possible by utilizing supertagging, lightweight dependency analysis, a link grammar parser and a maximum-entropy based chunk parser. Adding features to n-best lists and discriminatively training the system on a development set increases the BLEU score by up to 0.7% on the test set.
false
[]
[]
null
null
null
This work has been partly funded by the European Union under the integrated project TC-Star (Technology and Corpora for Speech to Speech Translation, IST-2002-FP6-506738, http://www.tc-star.org), and by the R&D project TRAMES managed by Bertin Technologies as prime contractor and operated by the French DGA (Délégation Générale pour l'Armement).
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gonzalez-martinez-2019-graphemic
https://aclanthology.org/W19-9001
Graphemic ambiguous queries on Arabic-scripted historical corpora
Arabic script is a multi-layered orthographic system that consists of a base of archigraphemes, roughly equivalent to the traditional so-called rasm, with several layers of diacritics. The archigrapheme represents the smallest logical unit of Arabic script; it consists of the shared features between two or more graphemes, i.e., eliminating diacritics. Archigraphemes are to orthography what archiphonemes are to phonology. An archiphoneme is the abstract representation of two or more phonemes without their distinctive phonological features. For example, in Spanish, occlusive consonants lose their distinctive feature of sonority in syllabic coda position; the words adjetivo 'adjective' [aDxe'tiβo] and atleta 'athlete' [aD'leta] both share an archiphoneme [D] (in careful speech) in their first syllable, corresponding to the phonemes /d/ and /t/ respectively. In some cases, the neutralisation of two phonemes may cause two words to be homophones. For example, vid 'vine' and bit 'bit' are both pronounced as [biD]. In paleo-orthographic Arabic script, consonant diacritics were not written down in all positions as they are in modern Arabic script, where they are mandatory. Consequently, homographic letter blocks were quite common. An additional characteristic of early Arabic script is that graphemic or logical spaces between words did not exist: Arabic orthography preserved the ancient practice of scriptio continua, in which script tries to represent connected speech. Diacritics are signs placed in relation to the archigraphemic skeleton. From a functional point of view, there are two basic types of diacritics: a layer of consonant diacritics for differentiating graphemes and a second layer for vowels. In early script, diacritics are marked in a different colour from that of the skeleton. Strokes were used for consonant diacritics, whereas dots were used for indicating vowels.
In modern Arabic script, dots are instead used for consonant diacritics and they are mandatory. On the other hand, vowels are marked by different types of symbols and are usually optional. Unicode, the standard for digital encoding of language information, evolved from a typographic approach to language, and its main concern is modern script. Typography is a technique for reproducing written language based on surface shape. As a consequence, it represents an obstacle to dealing with script from a linguistic point of view, since the same logical grapheme may be rendered using different glyphs. The main problems that arise are the following: 1. Only contemporary everyday use is covered, and that with a typographical approach: Unicode encodes multiple Arabic letters (archigraphemes + consonant diacritics) as single printing units. 2. Some calligraphic variants of the same letter were allowed separate Unicode characters. In practice, this means that a search for an Arabic word may yield nothing when typed on a Persian or an Urdu keyboard. This is also why you may find only a fraction of all the results when searching in an Arabic text. 3. There are currently no specialised tools that allow scholars to perform searches on Arabic historical orthography: archigraphemes. Additionally, in order to study early documents written in Arabic script, we need search tools that can handle continuous archigraphemic representation, i.e., Arabic script as scriptio continua. In collaboration with Thomas Milo from the Dutch company DecoType, we have developed a search utility that disambiguates and normalises Arabic text in real time and also allows the user to perform archigraphemic search on any Arabic-scripted text. The system is called Yakabikaj (a traditional invocation protecting texts against bugs), and we show the new perspectives it opens for research in the field of historical digital humanities for Arabic-scripted texts.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hu-etal-2022-deep
https://aclanthology.org/2022.acl-long.123
DEEP: DEnoising Entity Pre-training for Neural Machine Translation
It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus. Earlier named entity translation methods mainly focus on phonetic transliteration, which ignores the sentence context for translation and is limited in domain and language coverage. To address this limitation, we propose DEEP, a DEnoising Entity Pre-training method that leverages large amounts of monolingual data and a knowledge base to improve named entity translation accuracy within sentences. In addition, we investigate a multi-task learning strategy that fine-tunes a pre-trained neural machine translation model on both entity-augmented monolingual data and parallel data to further improve entity translation. Experimental results on three language pairs demonstrate that DEEP results in significant improvements over strong denoising autoencoding baselines, with a gain of up to 1.3 BLEU and up to 9.2 entity accuracy points for English-Russian translation.
false
[]
[]
null
null
null
This work was supported in part by a grant from the Singapore Defence Science and Technology Agency.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
stewart-etal-2018-si
https://aclanthology.org/N18-2022
Si O No, Que Penses? Catalonian Independence and Linguistic Identity on Social Media
Political identity is often manifested in language variation, but the relationship between the two is still relatively unexplored from a quantitative perspective. This study examines the use of Catalan, a language local to the semi-autonomous region of Catalonia in Spain, on Twitter in discourse related to the 2017 independence referendum. We corroborate prior findings that pro-independence tweets are more likely to include the local language than anti-independence tweets. We also find that Catalan is used more often in referendum-related discourse than in other contexts, contrary to prior findings on language variation. This suggests a strong role for the Catalan language in the expression of Catalonian political identity.
false
[]
[]
null
null
null
We thank Sandeep Soni, Umashanthi Pavalanathan, our anonymous reviewers, and members of Georgia Tech's Computational Social Science class for their feedback. This research was supported by NSF award IIS-1452443 and NIH award R01-GM112697-03.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dulceanu-etal-2018-photoshopquia
https://aclanthology.org/L18-1438
PhotoshopQuiA: A Corpus of Non-Factoid Questions and Answers for Why-Question Answering
Recent years have witnessed a high interest in non-factoid question answering using Community Question Answering (CQA) web sites. Despite ongoing research using state-of-the-art methods, there is a scarcity of available datasets for this task. Why-questions, which play an important role in open-domain and domain-specific applications, are difficult to answer automatically since the answers need to be constructed based on different information extracted from multiple knowledge sources. We introduce the PhotoshopQuiA dataset, a new publicly available set of 2,854 why-question and answer(s) (WhyQ, A) pairs related to Adobe Photoshop usage collected from five CQA web sites. We chose Adobe Photoshop because it is a popular and well-known product, with a lively, knowledgeable and sizable community. To the best of our knowledge, this is the first English dataset for Why-QA that focuses on a product, as opposed to previous open-domain datasets. The corpus is stored in JSON format and contains detailed data about questions and questioners as well as answers and answerers. The dataset can be used to build Why-QA systems, to evaluate current approaches for answering why-questions, and to develop new models for future QA systems research.
false
[]
[]
null
null
null
The authors express their sincere thanks to the University Gift Funding of Adobe Systems Incorporated for the partial financial support of this research.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fahmi-bouma-2006-learning
https://aclanthology.org/W06-2609
Learning to Identify Definitions using Syntactic Features
This paper describes an approach to learning concept definitions which operates on fully parsed text. A subcorpus of the Dutch version of Wikipedia was searched for sentences which have the syntactic properties of definitions. Next, we experimented with various text classification techniques to distinguish actual definitions from other sentences. A maximum entropy classifier which incorporates features referring to the position of the sentence in the document as well as various syntactic features, gives the best results.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rei-etal-2017-artificial
https://aclanthology.org/W17-5032
Artificial Error Generation with Machine Translation and Syntactic Patterns
Shortage of available training data is holding back progress in the area of automated error detection. This paper investigates two alternative methods for artificially generating writing errors, in order to create additional resources. We propose treating error generation as a machine translation task, where grammatically correct text is translated to contain errors. In addition, we explore a system for extracting textual patterns from an annotated corpus, which can then be used to insert errors into grammatically correct sentences. Our experiments show that the inclusion of artificially generated errors significantly improves error detection accuracy on both FCE and CoNLL 2014 datasets.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
feng-etal-2006-learning
https://aclanthology.org/N06-1027
Learning to Detect Conversation Focus of Threaded Discussions
In this paper we present a novel feature-enriched approach that learns to detect the conversation focus of threaded discussions by combining NLP analysis and IR techniques. Using the graph-based algorithm HITS, we integrate different features such as lexical similarity, poster trustworthiness, and speech act analysis of human conversations with feature-oriented link generation functions. It is the first quantitative study to analyze human conversation focus in the context of online discussions that takes into account heterogeneous sources of evidence. Experimental results using a threaded discussion corpus from an undergraduate class show that it achieves significant performance improvements compared with the baseline system.
false
[]
[]
null
null
null
The work was supported in part by DARPA grant DOI-NBC Contract No. NBCHC050051, Learning by Reading, and in part by a grant from the Lord Corporation Foundation to the USC Distance Education Network. The authors want to thank Deepak Ravichandran, Feng Pan, and Rahul Bhagat for their helpful suggestions with the manuscript. We would also like to thank the HLT-NAACL reviewers for their valuable comments.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hana-etal-2006-tagging
https://aclanthology.org/W06-2005
Tagging Portuguese with a Spanish Tagger
We describe a knowledge- and resource-light system for automatic morphological analysis and tagging of Brazilian Portuguese. We avoid the use of labor-intensive resources; in particular, large annotated corpora and lexicons. Instead, we use (i) an annotated corpus of Peninsular Spanish, a language related to Portuguese, (ii) an unannotated corpus of Portuguese, and (iii) a description of Portuguese morphology on the level of a basic grammar book. We extend our earlier work (Hana et al., 2004; Feldman et al., 2006) by proposing an alternative algorithm for cognate transfer that effectively projects the Spanish emission probabilities into Portuguese. Our experiments use minimal new human effort and show 21% error reduction over even emissions on a fine-grained tagset.
false
[]
[]
null
null
null
We would like to thank Maria das Graças Volpe Nunes, Sandra Maria Aluísio, and Ricardo Hasegawa for giving us access to the NILC corpus annotated with PALAVRAS and to Carlos Rodríguez Penagos for letting us use the Spanish part of the CLiC-TALP corpus.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
emerson-2005-second
https://aclanthology.org/I05-3017
The Second International Chinese Word Segmentation Bakeoff
The second international Chinese word segmentation bakeoff was held in the summer of 2005 to evaluate the current state of the art in word segmentation. Twenty-three groups submitted 130 result sets over two tracks and four different corpora. We found that the technology has improved over the intervening two years, though the out-of-vocabulary problem is still of paramount importance.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vulic-etal-2017-automatic
https://aclanthology.org/K17-1013
Automatic Selection of Context Configurations for Improved Class-Specific Word Representations
This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for the automatic selection of class-specific context configurations. We construct a context configuration space based on universal dependency relations between words, and efficiently search this space with an adapted beam search algorithm. In word similarity tasks for each word class, we show that our framework is both effective and efficient. In particular, it improves Spearman's ρ correlation with human scores on SimLex-999 over the best previously proposed class-specific contexts by 6 (A), 6 (V) and 5 (N) ρ points. With our selected context configurations, we train on only 14% (A), 26.2% (V), and 33.6% (N) of all dependency-based contexts, resulting in reduced training time. Our results generalise: we show that the configurations our algorithm learns for one English training setup outperform previously proposed context types in another training setup for English. Moreover, because the configuration space is based on universal dependencies, the learned configurations can be transferred to German and Italian. We also demonstrate improved per-class results over other context types in these two languages.
false
[]
[]
null
null
null
This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). Roy Schwartz was supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI). The authors are grateful to the anonymous reviewers for their helpful and constructive suggestions.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mehdad-etal-2013-abstractive
https://aclanthology.org/W13-2117
Abstractive Meeting Summarization with Entailment and Fusion
We propose a novel end-to-end framework for abstractive meeting summarization. We cluster sentences in the input into communities and build an entailment graph over the sentence communities to identify and select the most relevant sentences. We then aggregate those selected sentences by means of a word graph model. We exploit a ranking strategy to select the best path in the word graph as an abstract sentence. Despite not relying on the syntactic structure, our approach significantly outperforms previous models for meeting summarization in terms of informativeness. Moreover, the longer sentences generated by our method are competitive with shorter sentences generated by the previous word graph model in terms of grammaticality.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the paper, our annotators for their valuable work, and the NSERC Business Intelligence Network for financial support.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hsiao-etal-2007-word
https://aclanthology.org/O07-1011
Word Translation Disambiguation via Dependency (利用依存關係之辭彙翻譯)
We introduce a new method for automatically disambiguation of word translations by using dependency relationships. In our approach, we learn the relationships between translations and dependency relationships from a parallel corpus. The method consists of a training stage and a runtime stage. During the training stage, the system automatically learns a translation decision list based on source sentences and its dependency relationships. At runtime, for each content word in the given sentence, we give a most appropriate Chinese translation relevant to the context of the given sentence according to the decision list. We also describe the implementation of the proposed method using bilingual Hong Kong news and Hong Kong Hansard corpus. In the experiment, we use five different ways to translate content words in the test data and evaluate the results based an automatic BLEU-like evaluation methodology. Experimental results indicate that dependency relations can obviously help us to disambiguate word translations and some kinds of dependency are more effective than others.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hamoui-etal-2020-flodusta
https://aclanthology.org/2020.lrec-1.174
FloDusTA: Saudi Tweets Dataset for Flood, Dust Storm, and Traffic Accident Events
The rise of social media platforms makes it a valuable information source of recent events and users' perspective towards them. Twitter has been one of the most important communication platforms in recent years. Event detection, one of the information extraction aspects, involves identifying specified types of events in the text. Detecting events from tweets can help to predict real-world events precisely. A serious challenge that faces Arabic event detection is the lack of Arabic datasets that can be exploited in detecting events. This paper will describe FloDusTA, which is a dataset of tweets that we have built for the purpose of developing an event detection system. The dataset contains tweets written in both Modern Standard Arabic and Saudi dialect. The process of building the dataset starting from tweets collection to annotation by human annotators will be present. The tweets are labeled with four labels: flood, dust storm, traffic accident, and non-event. The dataset was tested for classification and the result was strongly encouraging.
true
[]
[]
Sustainable Cities and Communities
Climate Action
null
Alsaedi, N., & Burnap, P. (2015). Arabic event detection in social media. In International Conference on Intelligent Text . Springer, Cham. Al-Twairesh, N., Al-Khalifa, H., Al-Salman, A., and Al-Ohali, Y. (2017). AraSenTi-Tweet: A Corpus for Arabic Sentiment Analysis of Saudi Tweets. Procedia Computer Science, 117, 63-72. Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378-382. Gu, Y., Qian, Z. (S., and Chen, F. (2016). From Twitter to detector: Real-time traffic incident detection using social media data. -7) . Sakaki, T., Okazaki, M., and Matsuo, Y. (2010, April).Earthquake shakes Twitter users: real-time event detection by social sensors. In Proceedings of the 19th international conference on World wide web (pp. 851-860). ACM. Schulz, A., Hadjakos, A., Paulheim, H., Nachtwey, J., and Mühlhäuser, M. (2013, June). A multi-indicator approach for geolocalization of tweets. In Seventh international AAAI conference on weblogs and social media. Youssef, A. M., Sefry, S. A., Pradhan, B., and Alfadail, E. A. (2015). Analysis on causes of flash flood in Jeddah city (Kingdom of Saudi Arabia) of 2009 and 2011 using multi-sensor remote sensing data and GIS. Geomatics, Natural Hazards and Risk, 7(3), 1018-1042.
2020
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
ek-knuutinen-2017-mainstreaming
https://aclanthology.org/W17-0236
Mainstreaming August Strindberg with Text Normalization
This article explores the application of text normalization methods based on Levenshtein distance and Statistical Machine Translation to the literary genre, specifically on the collected works of August Strindberg. The goal is to normalize archaic spellings to modern day spelling. The study finds evidence of success in text normalization, and explores some problems and improvements to the process of analysing mid-19th to early 20th century Swedish texts. This article is part of an ongoing project at Stockholm University which aims to create a corpus and webfriendly texts from Strindsberg's collected works.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
damani-ghonge-2013-appropriately
https://aclanthology.org/D13-1017
Appropriately Incorporating Statistical Significance in PMI
Two recent measures incorporate the notion of statistical significance in basic PMI formulation. In some tasks, we find that the new measures perform worse than the PMI. Our analysis shows that while the basic ideas in incorporating statistical significance in PMI are reasonable, they have been applied slightly inappropriately. By fixing this, we get new measures that improve performance over not just PMI but on other popular co-occurrence measures as well. In fact, the revised measures perform reasonably well compared with more resource intensive non co-occurrence based methods also.
false
[]
[]
null
null
null
We thank Dipak Chaudhari for his help with the implementation.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sikdar-gamback-2016-language
https://aclanthology.org/W16-5817
Language Identification in Code-Switched Text Using Conditional Random Fields and Babelnet
The paper outlines a supervised approach to language identification in code-switched data, framing this as a sequence labeling task where the label of each token is identified using a classifier based on Conditional Random Fields and trained on a range of different features, extracted both from the training data and by using information from Babelnet and Babelfy. The method was tested on the development dataset provided by organizers of the shared task on language identification in codeswitched data, obtaining tweet level monolingual, code-switched and weighted F1-scores of 94%, 85% and 91%, respectively, with a token level accuracy of 95.8%. When evaluated on the unseen test data, the system achieved 90%, 85% and 87.4% monolingual, code-switched and weighted tweet level F1scores, and a token level accuracy of 95.7%.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
monson-etal-2004-data
http://www.lrec-conf.org/proceedings/lrec2004/pdf/747.pdf
Data Collection and Analysis of Mapudungun Morphology for Spelling Correction
This paper describes part of a three year collaboration between Carnegie Mellon University's Language Technologies Institute, the
false
[]
[]
null
null
null
This research was funded in part by NSF grant number IIS-0121-631. We would also like to thank the Chilean Ministry of Education funding the team at the Instituto de Estudios Indígenas, especially Carolina Huenchullán, the National Coordinator of the Chilean Ministry of Education's Programa de Educación Intercultural Bilingüe for her continuing support, and the team in Temuco-Flor Caniupil, Cristián Carrillán, Luis Canuipil, and Marcella Collío for their hard work in collecting, transcribing and translating the data. And Pascual Masullo for his expert linguistic advice.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hohensee-bender-2012-getting
https://aclanthology.org/N12-1032
Getting More from Morphology in Multilingual Dependency Parsing
We propose a linguistically motivated set of features to capture morphological agreement and add them to the MSTParser dependency parser. Compared to the built-in morphological feature set, ours is both much smaller and more accurate across a sample of 20 morphologically annotated treebanks. We find increases in accuracy of up to 5.3% absolute. While some of this results from the feature set capturing information unrelated to morphology, there is still significant improvement, up to 4.6% absolute, due to the agreement model.
false
[]
[]
null
null
null
We would like to thank everyone who assisted us in gathering treebanks, particularly Maite Oronoz and her colleagues at the University of the Basque Country and Yoav Goldberg, as well as three anonymous reviewers for their comments.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
doval-etal-2020-robustness
https://aclanthology.org/2020.lrec-1.495
On the Robustness of Unsupervised and Semi-supervised Cross-lingual Word Embedding Learning
Cross-lingual word embeddings are vector representations of words in different languages where words with similar meaning are represented by similar vectors, regardless of the language. Recent developments which construct these embeddings by aligning monolingual spaces have shown that accurate alignments can be obtained with little or no supervision, which usually comes in the form of bilingual dictionaries. However, the focus has been on a particular controlled scenario for evaluation, and there is no strong evidence on how current state-of-the-art systems would fare with noisy text or for language pairs with major linguistic differences. In this paper we present an extensive evaluation over multiple cross-lingual embedding models, analyzing their strengths and limitations with respect to different variables such as target language, training corpora and amount of supervision. Our conclusions put in doubt the view that high-quality cross-lingual embeddings can always be learned without much supervision.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
naz-etal-2021-fjwu
https://aclanthology.org/2021.wmt-1.86
FJWU Participation for the WMT21 Biomedical Translation Task
In this paper we present the FJWU's system submitted to the biomedical shared task at WMT21. We prepared state-of-the-art multilingual neural machine translation systems for three languages (i.e. German, Spanish and French) with English as target language. Our NMT systems based on Transformer architecture, were trained on combination of indomain and out-domain parallel corpora developed using Information Retrieval (IR) and domain adaptation techniques.
true
[]
[]
Good Health and Well-Being
null
null
This study is funded by the National Research Program for Universities (NRPU) by Higher Education Commission of Pakistan (5469/Punjab/NRPU/R&D/HEC/2016).
2021
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
reisert-etal-2014-corpus
https://aclanthology.org/W14-4910
A Corpus Study for Identifying Evidence on Microblogs
Microblogs are a popular way for users to communicate and have recently caught the attention of researchers in the natural language processing (NLP) field. However, regardless of their rising popularity, little attention has been given towards determining the properties of discourse relations for the rapid, large-scale microblog data. Therefore, given their importance for various NLP tasks, we begin a study of discourse relations on microblogs by focusing on evidence relations. As no annotated corpora for evidence relations on microblogs exist, we conduct a corpus study to identify such relations on Twitter, a popular microblogging service. We create annotation guidelines, conduct a large-scale annotation phase, and develop a corpus of annotated evidence relations. Finally, we report our observations, annotation difficulties, and data statistics.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
We would like to acknowledge MEXT (Ministry of Education, Culture, Sports, Science and Technology) for their generous financial support via the Research Student Scholarship. This study was partly supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant No. 23240018 and Japan Science and Technology Agency (JST). Furthermore, we would like to also thank Eric Nichols (Honda Research Institute Japan Co., Ltd.) for his discussions on the topic of evidence relations.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
kwong-2009-phonological
https://aclanthology.org/W09-3516
Phonological Context Approximation and Homophone Treatment for NEWS 2009 English-Chinese Transliteration Shared Task
This paper describes our systems participating in the NEWS 2009 Machine Transliteration Shared Task. Two runs were submitted for the English-Chinese track. The system for the standard run is based on graphemic approximation of local phonological context. The one for the non-standard run is based on parallel modelling of sound and tone patterns for treating homophones in Chinese. Official results show that both systems stand in the mid range amongst all participating systems.
false
[]
[]
null
null
null
The work described in this paper was substantially supported by a grant from City University of Hong Kong (Project No. 7002203).
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
faili-2009-partial
https://aclanthology.org/R09-1014
From Partial toward Full Parsing
Full-Parsing systems able to analyze sentences robustly and completely at an appropriate accuracy can be useful in many computer applications like information retrieval and machine translation systems. Increasing the domain of locality by using tree-adjoining-grammars (TAG) caused some researchers to use it as a modeling formalism in their language application. But parsing with a rich grammar like TAG faces two main obstacles: low parsing speed and a lot of ambiguous syntactical parses. In order to decrease the parse time and these ambiguities, we use an idea of combining statistical chunker based on TAG formalism, with a heuristically rule-based search method to achieve the full parses. The partial parses induced from statistical chunker are basically resulted from a system named supertagger, and are followed by two different phases: error detection and error correction, which in each phase, different completion heuristics apply on the partial parses. The experiments on Penn Treebank show that by using a trained probability model considerable improvement in full-parsing rate is achieved.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gao-vogel-2008-parallel
https://aclanthology.org/W08-0509
Parallel Implementations of Word Alignment Tool
Training word alignment models on large corpora is a very time-consuming processes. This paper describes two parallel implementations of GIZA++ that accelerate this word alignment process. One of the implementations runs on computer clusters, the other runs on multi-processor system using multi-threading technology. Results show a near-linear speedup according to the number of CPUs used, and alignment quality is preserved.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
theologitis-1997-euramis
https://aclanthology.org/1997.eamt-1.3
EURAMIS, the platform of the EC Translator
Linguistic technology brings new tools to the desktop of the translator: full-text retrieval systems, terminological systems, translation memories and machine translation. These systems are now being integrated into a single, seamless workflow in a large organisation, the Translation Service of the European Commission. SdTVista is used for full-text search of reference documents; Euramis powerful aligner creates translation memories which are stored in a central Linguistic Resources Database; not-found sentences are automatically sent to EC-Systran machine translation; pertinent terminology is retrieved in batch mode from Eurodicautom. The resulting resources are brought together on the translator's workbench. Screen-shots and a demonstration complete the presentation. Dimitri Theologitis Born in Athens, civil engineer, specialised in integrated transportation systems and computers. Opted for a major change of career in 1984 when he joined the Translation Service of the European Commission. Responsible for the Rationalisation of Working Methods from 1990. In 1994 became head of unit Development of Multilingual Computer Aids, a multilingual team active in the technological modernisation of the Translation Service.
false
[]
[]
null
null
null
null
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
popescu-belis-etal-2012-discourse
http://www.lrec-conf.org/proceedings/lrec2012/pdf/255_Paper.pdf
Discourse-level Annotation over Europarl for Machine Translation: Connectives and Pronouns
This paper describes methods and results for the annotation of two discourse-level phenomena, connectives and pronouns, over a multilingual parallel corpus. Excerpts from Europarl in English and French have been annotated with disambiguation information for connectives and pronouns, for about 3600 tokens. This data is then used in several ways: for cross-linguistic studies, for training automatic disambiguation software, and ultimately for training and testing discourse-aware statistical machine translation systems. The paper presents the annotation procedures and their results in detail, and overviews the first systems trained on the annotated resources and their use for machine translation.
false
[]
[]
null
null
null
We are grateful for the funding of this work to the Swiss National Science Foundation (SNSF), under its Sinergia program, grant n. CRSI22 127510. The resources described in this article will be made available through the project's website (www.idiap.ch/comtis) in the near future.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
seddah-etal-2012-ubiquitous
http://www.lrec-conf.org/proceedings/lrec2012/pdf/1130_Paper.pdf
Ubiquitous Usage of a Broad Coverage French Corpus: Processing the Est Republicain corpus
In this paper, we introduce a set of resources that we have derived from the EST RÉPUBLICAIN CORPUS, a large, freely-available collection of regional newspaper articles in French, totaling 150 million words. Our resources are the result of a full NLP treatment of the EST RÉPUBLICAIN CORPUS: handling of multi-word expressions, lemmatization, part-of-speech tagging, and syntactic parsing. Processing of the corpus is carried out using statistical machine-learning approaches-joint model of data driven lemmatization and partof-speech tagging, PCFG-LA and dependency based models for parsing-that have been shown to achieve state-of-the-art performance when evaluated on the French Treebank. Our derived resources are made freely available, and released according to the original Creative Common license for the EST RÉPUBLICAIN CORPUS. We additionally provide an overview of the use of these resources in various applications, in particular the use of generated word clusters from the corpus to alleviate lexical data sparseness for statistical parsing.
false
[]
[]
null
null
null
We are very grateful to Bertrand Gaiffe and Kamel Nehbi from the CNRTL for making this corpus available. Many thanks to Grzegorz Chrupala for making MORFETTE available to us and for providing unlimited support on this work. This work was partly supported by the ANR Sequoia (ANR-08-EMER-013).
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
berend-vincze-2012-evaluate
https://aclanthology.org/W12-3715
How to Evaluate Opinionated Keyphrase Extraction?
Evaluation often denotes a key issue in semantics-or subjectivity-related tasks. Here we discuss the difficulties of evaluating opinionated keyphrase extraction. We present our method to reduce the subjectivity of the task and to alleviate the evaluation process and we also compare the results of human and machine-based evaluation.
false
[]
[]
null
null
null
This work was supported in part by the NIH grant (project codename MASZEKER) of the Hungarian government.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
borin-etal-2014-linguistic
http://www.lrec-conf.org/proceedings/lrec2014/pdf/159_Paper.pdf
Linguistic landscaping of South Asia using digital language resources: Genetic vs. areal linguistics
Like many other research fields, linguistics is entering the age of big data. We are now at a point where it is possible to see how new research questions can be formulated-and old research questions addressed from a new angle or established results verified-on the basis of exhaustive collections of data, rather than small, carefully selected samples. For example, South Asia is often mentioned in the literature as a classic example of a linguistic area, but there is no systematic, empirical study substantiating this claim. Examination of genealogical and areal relationships among South Asian languages requires a large-scale quantitative and qualitative comparative study, encompassing more than one language family. Further, such a study cannot be conducted manually, but needs to draw on extensive digitized language resources and state-of-the-art computational tools. We present some preliminary results of our large-scale investigation of the genealogical and areal relationships among the languages of this region, based on the linguistic descriptions available in the 19 tomes of Grierson's monumental Linguistic Survey of India (1903-1927), which is currently being digitized with the aim of turning the linguistic information in the LSI into a digital language resource suitable for a broad array of linguistic investigations.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ling-etal-2011-reordering
https://aclanthology.org/P11-2079
Reordering Modeling using Weighted Alignment Matrices
In most statistical machine translation systems, the phrase/rule extraction algorithm uses alignments in the 1-best form, which might contain spurious alignment points. The usage of weighted alignment matrices that encode all possible alignments has been shown to generate better phrase tables for phrase-based systems. We propose two algorithms to generate the well known MSD reordering model using weighted alignment matrices. Experiments on the IWSLT 2010 evaluation datasets for two language pairs with different alignment algorithms show that our methods produce more accurate reordering models, as can be shown by an increase over the regular MSD models of 0.4 BLEU points in the BTEC French to English test set, and of 1.5 BLEU points in the DIALOG Chinese to English test set.
false
[]
[]
null
null
null
This work was partially supported by FCT (INESC-ID multiannual funding) through the PIDDAC Program funds, and also through projects CMU-PT/HuMach/0039/2008 and CMU-PT/0005/2007. The PhD thesis of Tiago Luís is supported by FCT grant SFRH/BD/62151/2009. The PhD thesis of Wang Ling is supported by FCT grant SFRH/BD/51157/2010. The authors also wish to thank the anonymous reviewers for many helpful comments.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kato-etal-2006-woz
https://aclanthology.org/W06-3002
WoZ Simulation of Interactive Question Answering
QACIAD (Question Answering Challenge for Information Access Dialogue) is an evaluation framework for measuring interactive question answering (QA) technologies. It assumes that users interactively collect information using a QA system for writing a report on a given topic and evaluates, among other things, the capabilities needed under such circumstances. This paper reports an experiment for examining the assumptions made by QACIAD. In this experiment, dialogues under the situation that QACIAD assumes are collected using WoZ (Wizard of Oz) simulating, which is frequently used for collecting dialogue data for designing speech dialogue systems, and then analyzed. The results indicate that the setting of QACIAD is real and appropriate and that one of the important capabilities for future interactive QA systems is providing cooperative and helpful responses.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
guo-etal-2016-unified
https://aclanthology.org/C16-1120
A Unified Architecture for Semantic Role Labeling and Relation Classification
This paper describes a unified neural architecture for identifying and classifying multi-typed semantic relations between words in a sentence. We investigate two typical and well-studied tasks: semantic role labeling (SRL) which identifies the relations between predicates and arguments, and relation classification (RC) which focuses on the relation between two entities or nominals. While mostly studied separately in prior work, we show that the two tasks can be effectively connected and modeled using a general architecture. Experiments on CoNLL-2009 benchmark datasets show that our SRL models significantly outperform state-of-the-art approaches. Our RC models also yield competitive performance with the best published records. Furthermore, we show that the two tasks can be trained jointly with multi-task learning, resulting in additive significant improvements for SRL.
false
[]
[]
null
null
null
We are grateful to Tao Lei for providing the outputs of their systems. We thank the anonymous reviewers for their insightful comments and suggestions. This work was supported by the National Key Basic Research Program of China via grant 2014CB340503 and the National Natural Science Foundation of China (NSFC) via grant 61300113 and 61370164.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nurani-venkitasubramanian-etal-2017-learning
https://aclanthology.org/W17-2003
Learning to Recognize Animals by Watching Documentaries: Using Subtitles as Weak Supervision
We investigate animal recognition models learned from wildlife video documentaries by using the weak supervision of the textual subtitles. This is a challenging setting, since i) the animals occur in their natural habitat and are often largely occluded and ii) subtitles are to a great degree complementary to the visual content, providing a very weak supervisory signal. This is in contrast to most work on integrated vision and language in the literature, where textual descriptions are tightly linked to the image content, and often generated in a curated fashion for the task at hand. We investigate different image representations and models, in particular a support vector machine on top of activations of a pretrained convolutional neural network, as well as a Naive Bayes framework on a 'bag-of-activations' image representation, where each element of the bag is considered separately. This representation allows key components in the image to be isolated, in spite of vastly varying backgrounds and image clutter, without an object detection or image segmentation step. The methods are evaluated based on how well they transfer to unseen camera-trap images captured across diverse topographical regions under different environmental conditions and illumination settings, involving a large domain shift.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pethe-etal-2020-chapter
https://aclanthology.org/2020.emnlp-main.672
Chapter Captor: Text Segmentation in Novels
Books are typically segmented into chapters and sections, representing coherent subnarratives and topics. We investigate the task of predicting chapter boundaries, as a proxy for the general task of segmenting long texts. We build a Project Gutenberg chapter segmentation data set of 9,126 English novels, using a hybrid approach combining neural inference and rule matching to recognize chapter title headers in books, achieving an F1-score of 0.77 on this task. Using this annotated data as ground truth after removing structural cues, we present cut-based and neural methods for chapter segmentation, achieving an F1-score of 0.453 on the challenging task of exact break prediction over book-length documents. Finally, we reveal interesting historical trends in the chapter structure of novels.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their helpful feedback. This work was partially supported by NSF grants IIS-1926751, IIS-1927227, and IIS-1546113.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
piasecki-etal-2012-recognition
http://www.lrec-conf.org/proceedings/lrec2012/pdf/926_Paper.pdf
Recognition of Polish Derivational Relations Based on Supervised Learning Scheme
The paper presents construction of Derywator-a language tool for the recognition of Polish derivational relations. It was built on the basis of machine learning in a way following the bootstrapping approach: a limited set of derivational pairs described manually by linguists in plWordNet is used to train Derivator. The tool is intended to be applied in semi-automated expansion of plWordNet with new instances of derivational relations. The training process is based on the construction of two transducers working in the opposite directions: one for prefixes and one for suffixes. Internal stem alternations are recognised, recorded in a form of mapping sequences and stored together with transducers. Raw results produced by Derivator undergo next corpus-based and morphological filtering. A set of derivational relations defined in plWordNet is presented. Results of tests for different derivational relations are discussed. A problem of the necessary corpus-based semantic filtering is analysed. The presented tool depends to a very little extent on the hand-crafted knowledge for a particular language, namely only a table of possible alternations and morphological filtering rules must be exchanged and it should not take longer than a couple of working days.
false
[]
[]
null
null
null
Work financed by the Polish Ministry of Education and Science, Project N N516 068637.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
boudin-etal-2010-clinical
https://aclanthology.org/N10-1124
Clinical Information Retrieval using Document and PICO Structure
In evidence-based medicine, clinical questions involve four aspects: Patient/Problem (P), Intervention (I), Comparison (C) and Outcome (O), known as PICO elements. In this paper we present a method that extends the language modeling approach to incorporate both document structure and PICO query formulation. We present an analysis of the distribution of PICO elements in medical abstracts that motivates the use of a location-based weighting strategy. In experiments carried out on a collection of 1.5 million abstracts, the method was found to lead to an improvement of roughly 60% in MAP and 70% in P@10 as compared to state-of-the-art methods.
true
[]
[]
Good Health and Well-Being
null
null
The work described in this paper was funded by the Social Sciences and Humanities Research Council (SSHRC). The authors would like to thank Dr. Ann McKibbon, Dr. Dina Demner-Fushman, Lorie Kloda, Laura Shea, Lucas Baire and Lixin Shi for their contribution in the project.
2010
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
safi-samghabadi-etal-2018-ritual
https://aclanthology.org/W18-4402
RiTUAL-UH at TRAC 2018 Shared Task: Aggression Identification
This paper presents our system for "TRAC 2018 Shared Task on Aggression Identification". Our best systems for the English dataset use a combination of lexical and semantic features. However, for Hindi data using only lexical features gave us the best results. We obtained weighted F1-measures of 0.5921 for the English Facebook task (ranked 12th), 0.5663 for the English Social Media task (ranked 6th), 0.6292 for the Hindi Facebook task (ranked 1st), and 0.4853 for the Hindi Social Media task (ranked 2nd).
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
mueller-waibel-2015-using
https://aclanthology.org/2015.iwslt-papers.7
Using language adaptive deep neural networks for improved multilingual speech recognition
null
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
silvestre-baquero-mitkov-2017-translation
https://doi.org/10.26615/978-954-452-042-7_006
Translation Memory Systems Have a Long Way to Go
null
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
steinberger-etal-2011-jrc
https://aclanthology.org/R11-1015
JRC-NAMES: A Freely Available, Highly Multilingual Named Entity Resource
This paper describes a new, freely available, highly multilingual named entity resource for person and organisation names that has been compiled over seven years of large-scale multilingual news analysis combined with Wikipedia mining, resulting in 205,000 person and organisation names plus about the same number of spelling variants written in over 20 different scripts and in many more languages. This resource, produced as part of the Europe Media Monitor activity (EMM, http://emm.newsbrief.eu/overview.html), can be used for a number of purposes. These include improving name search in databases or on the internet, seeding machine learning systems to learn named entity recognition rules, improving machine translation results, and more. We describe here how this resource was created; we give statistics on its current size; we address the issue of morphological inflection; and we give details regarding its functionality. Updates to this resource will be made available daily.
false
[]
[]
null
null
null
The Europe Media Monitor EMM is a multiannual group effort involving many tasks, of which some are much less visible to the outside world. We would thus like to thank all past and present OPTIMA team members for their help and dedication. We would also like to thank our Unit Head Delilah Al Khudhairy for her support.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2022-seeking
https://aclanthology.org/2022.findings-acl.195
Seeking Patterns, Not just Memorizing Procedures: Contrastive Learning for Solving Math Word Problems
Math Word Problem (MWP) solving needs to discover the quantitative relationships over natural language narratives. Recent work shows that existing models memorize procedures from context and rely on shallow heuristics to solve MWPs. In this paper, we look at this issue and argue that the cause is a lack of overall understanding of MWP patterns. We first investigate how a neural network understands patterns only from semantics, and observe that, if the prototype equations like n1 + n2 are the same, most problems get closer representations and those representations apart from them or close to other prototypes tend to produce wrong solutions. Inspired by it, we propose a contrastive learning approach, where the neural network perceives the divergence of patterns. We collect contrastive examples by converting the prototype equation into a tree and seeking similar tree structures. The solving model is trained with an auxiliary objective on the collected examples, resulting in the representations of problems with similar prototypes being pulled closer. We conduct experiments on the Chinese dataset Math23k and the English dataset MathQA. Our method greatly improves the performance in monolingual and multilingual settings.
false
[]
[]
null
null
null
null
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sakaguchi-etal-2018-comprehensive
https://aclanthology.org/L18-1050
Comprehensive Annotation of Various Types of Temporal Information on the Time Axis
In order to make the temporal interpretation of text, there have been many studies linking event and temporal information, such as temporal ordering of events and timeline generation. To train and evaluate models in these studies, many corpora that associate event information with time information have been developed. In this paper, we propose an annotation scheme that anchors expressions in text to the time axis comprehensively, extending the previous studies in the following two points. One of the points is to annotate not only expressions with strong temporality but also expressions with weak temporality, such as states and habits. The other point is that various types of temporal information, such as frequency and duration, can be anchored to the time axis. Using this annotation scheme, we annotated a subset of Kyoto University Text Corpus. Since the corpus has already been annotated predicate-argument structures and coreference relations, it can be utilized for integrated information analysis of events, entities and time.
false
[]
[]
null
null
null
This work was partially supported by JST CREST Grant Number JPMJCR1301 including AIP challenge program, Japan. We also thank Manami Ishikawa, Marika Horiuchi and Natsuki Nikaido for their careful annotation.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yu-2007-chinese
https://aclanthology.org/N07-2050
Chinese Named Entity Recognition with Cascaded Hybrid Model
We propose a high-performance cascaded hybrid model for Chinese NER. Firstly, we use Boosting, a standard and theoretically well-founded machine learning method to combine a set of weak classifiers together into a base system. Secondly, we introduce various types of heuristic human knowledge into Markov Logic Networks (MLNs), an effective combination of first-order logic and probabilistic graphical models to validate Boosting NER hypotheses. Experimental results show that the cascaded hybrid model significantly outperforms the state-of-the-art Boosting model.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
freedman-etal-2011-language
https://aclanthology.org/P11-2059
Language Use: What can it tell us?
For 20 years, information extraction has focused on facts expressed in text. In contrast, this paper is a snapshot of research in progress on inferring properties and relationships among participants in dialogs, even though these properties/relationships need not be expressed as facts. For instance, can a machine detect that someone is attempting to persuade another to action or to change beliefs or is asserting their credibility? We report results on both English and Arabic discussion forums.
false
[]
[]
null
null
null
This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the _____. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI or the U.S. Government.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
guinaudeau-strube-2013-graph
https://aclanthology.org/P13-1010
Graph-based Local Coherence Modeling
We propose a computationally efficient graph-based approach for local coherence modeling. We evaluate our system on three tasks: sentence ordering, summary coherence rating and readability assessment. The performance is comparable to entity grid based approaches though these rely on a computationally expensive training phase and face data sparsity problems.
false
[]
[]
null
null
null
Acknowledgments. This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a HITS postdoctoral scholarship. We would like to thank Mirella Lapata and Regina Barzilay for making their data available and Micha Elsner for providing his toolkit.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schuster-hegelich-2022-berts
https://aclanthology.org/2022.findings-acl.89
From BERT`s Point of View: Revealing the Prevailing Contextual Differences
Though successfully applied in research and industry large pretrained language models of the BERT family are not yet fully understood. While much research in the field of BERTology has tested whether specific knowledge can be extracted from layer activations, we invert the popular probing design to analyze the prevailing differences and clusters in BERT's high dimensional space. By extracting coarse features from masked token representations and predicting them by probing models with access to only partial information we can apprehend the variation from 'BERT's point of view'. By applying our new methodology to different datasets we show how much the differences can be described by syntax but further how they are to a great extent shaped by the most simple positional information.
false
[]
[]
null
null
null
This work was supported by the Heinrich Böll Foundation through a doctoral scholarship. We would like to thank the anonymous reviewers for their valuable feedback.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hossain-schwitter-2018-specifying
https://aclanthology.org/U18-1005
Specifying Conceptual Models Using Restricted Natural Language
The key activity to design an information system is conceptual modelling which brings out and describes the general knowledge that is required to build a system. In this paper we propose a novel approach to conceptual modelling where the domain experts will be able to specify and construct a model using a restricted form of natural language. A restricted natural language is a subset of a natural language that has well-defined computational properties and therefore can be translated unambiguously into a formal notation. We will argue that a restricted natural language is suitable for writing precise and consistent specifications that lead to executable conceptual models. Using a restricted natural language will allow the domain experts to describe a scenario in the terminology of the application domain without the need to formally encode this scenario. The resulting textual specification can then be automatically translated into the language of the desired conceptual modelling framework.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
madaan-sadat-2020-multilingual
https://aclanthology.org/2020.wildre-1.6
Multilingual Neural Machine Translation involving Indian Languages
Neural Machine Translation (NMT) models are capable of translating a single bilingual pair and require a new model for each new language pair. Multilingual Neural Machine Translation models are capable of translating multiple language pairs, even pairs which they haven't seen before in training. Availability of parallel sentences is a known problem in machine translation. A multilingual NMT model leverages information from all the languages to improve itself and performs better. We propose a data augmentation technique that further improves this model profoundly. The technique helps achieve a jump of more than 15 points in BLEU score from the Multilingual NMT Model. A BLEU score of 36.2 was achieved for Sindhi-English translation, which is higher than any score on the leaderboard of the LoResMT Shared Task at MT Summit 2019, which provided the data for the experiments.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ueffing-etal-2002-generation
https://aclanthology.org/W02-1021
Generation of Word Graphs in Statistical Machine Translation
null
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kuo-chen-2004-event
https://aclanthology.org/W04-0703
Event Clustering on Streaming News Using Co-Reference Chains and Event Words
Event clustering on streaming news aims to group documents by events automatically. This paper employs co-reference chains to extract the most representative sentences, and then uses them to select the most informative features for clustering. Due to the long span of events, a fixed threshold approach prohibits the latter documents to be clustered and thus decreases the performance. A dynamic threshold using time decay function and spanning window is proposed. Besides the noun phrases in co-reference chains, event words in each sentence are also introduced to improve the related performance. Two models are proposed. The experimental results show that both event words and co-reference chains are useful on event clustering.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chert-etal-1998-ntu
https://aclanthology.org/X98-1022
An NTU-Approach to Automatic Sentence Extraction for Summary Generation
Automatic summarization and information extraction are two important Internet services. MUC and SUMMAC play their appropriate roles in the next generation Internet. This paper focuses on the automatic summarization and proposes two different models to extract sentences for summary generation under two tasks initiated by SUMMAC-1. For categorization task, positive feature vectors and negative feature vectors are used cooperatively to construct generic, indicative summaries. For adhoc task, a text model based on relationship between nouns and verbs is used to filter out irrelevant discourse segment, to rank relevant sentences, and to generate the user-directed summaries. The result shows that the NormF of the best summary and that of the fixed summary for adhoc tasks are 0.456 and 0.447. The NormF of the best summary and that of the fixed summary for categorization task are 0.4090 and 0.4023. Our system outperforms the average system in categorization task but does a common job in adhoc task.
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kao-chen-2011-diagnosing
https://aclanthology.org/O11-2010
Diagnosing Discoursal Organization in Learner Writing via Conjunctive Adverbials (診斷學習者英語寫作篇章結構:以篇章連接副詞為例)
The present study aims to investigate genre influence on the use and misuse of conjunctive adverbials (hereafter CAs) by compiling a learner corpus annotated with discoursal information on CAs. To do so, an online interface is constructed to collect and annotate data, and an annotating system for identifying the use and misuse of CAs is developed. The results show that genre difference has no impact on the use and misuse of CAs, but that there does exist a norm distribution of textual relations performed by CAs, indicating a preference preset in human cognition. Statistical analysis also shows that the proposed misuse patterns do significantly differ from one another in terms of appropriateness and necessity, ratifying the need to differentiate these misuse patterns. The results in the present study have three possible applications. First, the annotated data can serve as training data for developing technology that automatically diagnoses learner writing on the discoursal level. Second, the finding that textual relations performed by CAs form a distribution norm can be used as a principle to evaluate discoursal organization in learner writing. Lastly, the misuse framework not only identifies the location of misuse of CAs but also indicates direction for correction.
true
[]
[]
Quality Education
null
null
null
2011
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
macherey-och-2007-empirical
https://aclanthology.org/D07-1105
An Empirical Study on Computing Consensus Translations from Multiple Machine Translation Systems
This paper presents an empirical study on how different selections of input translation systems affect translation quality in system combination. We give empirical evidence that the systems to be combined should be of similar quality and need to be almost uncorrelated in order to be beneficial for system combination. Experimental results are presented for composite translations computed from large numbers of different research systems as well as a set of translation systems derived from one of the best-ranked machine translation engines in the 2006 NIST machine translation evaluation.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
van-noord-bouma-1997-hdrug
https://aclanthology.org/W97-1513
Hdrug. A Flexible and Extendible Development Environment for Natural Language Processing.
Hdrug is an environment to develop grammars, parsers and generators for natural languages. The package is written in Sicstus Prolog and Tcl/Tk. The system provides a graphical user interface with a command interpreter, and a number of visualisation tools, including visualisation of feature structures, syntax trees, type hierarchies, lexical hierarchies, feature structure trees, definite clause definitions, grammar rules, lexical entries, and graphs of statistical information of various kinds. Hdrug is designed to be as flexible and extendible as possible. This is illustrated by the fact that Hdrug has been used both for the development of practical realtime systems, but also as a tool to experiment with new theoretical notions and alternative processing strategies. Grammatical formalisms that have been used range from context-free grammars to concatenative feature-based grammars (such as the grammars written for ALE) and nonconcatenative grammars such as Tree Adjoining Grammars.
false
[]
[]
null
null
null
Part of this research is being carried out within the framework of the Priority Programme Language and Speech Technology (TST). The TST-Programme is sponsored by NWO (Dutch Organisation for Scientific Research).
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
langedijk-etal-2022-meta
https://aclanthology.org/2022.acl-long.582
Meta-Learning for Fast Cross-Lingual Adaptation in Dependency Parsing
Meta-learning, or learning to learn, is a technique that can help to overcome resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to new tasks. We apply model-agnostic meta-learning (MAML) to the task of cross-lingual dependency parsing. We train our model on a diverse set of languages to learn a parameter initialization that can adapt quickly to new languages. We find that meta-learning with pre-training can significantly improve upon the performance of language transfer and standard supervised learning baselines for a variety of unseen, typologically diverse, and low-resource languages, in a few-shot learning setup.
false
[]
[]
null
null
null
null
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kuhn-etal-2010-phrase
https://aclanthology.org/C10-1069
Phrase Clustering for Smoothing TM Probabilities - or, How to Extract Paraphrases from Phrase Tables
This paper describes how to cluster together the phrases of a phrase-based statistical machine translation (SMT) system, using information in the phrase table itself. The clustering is symmetric and recursive: it is applied both to source-language and target-language phrases, and the clustering in one language helps determine the clustering in the other. The phrase clusters have many possible uses. This paper looks at one of these uses: smoothing the conditional translation model (TM) probabilities employed by the SMT system. We incorporated phrase-cluster-derived probability estimates into a baseline loglinear feature combination that included relative frequency and lexically-weighted conditional probability estimates. In Chinese-English (C-E) and French-English (F-E) learning curve experiments, we obtained a gain over the baseline in 29 of 30 tests, with a maximum gain of 0.55 BLEU points (though most gains were fairly small). The largest gains came with medium (200-400K sentence pairs) rather than with small (less than 100K sentence pairs) amounts of training data, contrary to what one would expect from the paraphrasing literature. We have only begun to explore the original smoothing approach described here.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pasquier-2010-single
https://aclanthology.org/S10-1032
Single Document Keyphrase Extraction Using Sentence Clustering and Latent Dirichlet Allocation
This paper describes the design of a system for extracting keyphrases from a single document. The principle of the algorithm is to cluster sentences of the documents in order to highlight parts of text that are semantically related. The clusters of sentences, that reflect the themes of the document, are then analyzed to find the main topics of the text. Finally, the most important words, or groups of words, from these topics are proposed as keyphrases.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
von-essen-hesslow-2020-building
https://aclanthology.org/2020.pam-1.16
Building a Swedish Question-Answering Model
High quality datasets for question answering exist in a few languages, but far from all. Producing such datasets for new languages requires extensive manual labour. In this work we look at different methods for using existing datasets to train question-answering models in languages lacking such datasets. We show that machine translation followed by cross-lingual projection is a viable way to create a full question-answering dataset in a new language. We introduce new methods both for bitext alignment, using optimal transport, and for direct cross-lingual projection, utilizing multilingual BERT. We show that our methods produce good Swedish question-answering models without any manual work. Finally, we apply our proposed methods on Spanish and evaluate it on the XQuAD and MLQA benchmarks where we achieve new state-of-the-art values of 80.4 F1 and 62.9 Exact Match (EM) points on the Spanish XQuAD corpus and 70.8 F1 and 53.0 EM on the Spanish MLQA corpus, showing that the technique is readily applicable to other languages.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kelly-etal-2009-investigating
https://aclanthology.org/W09-0623
Investigating Content Selection for Language Generation using Machine Learning
The content selection component of a natural language generation system decides which information should be communicated in its output. We use information from reports on the game of cricket. We first describe a simple factoid-to-text alignment algorithm, then treat content selection as a collective classification problem and demonstrate that simple 'grouping' of statistics at various levels of granularity yields substantially improved results over a probabilistic baseline. We additionally show that holding back of specific types of input data, and linking database structures with commonality, further increase performance.
false
[]
[]
null
null
null
This paper is based on Colin Kelly's M.Phil. thesis, written towards his completion of the University of Cambridge Computer Laboratory's Computer Speech, Text and Internet Technology course. Grateful thanks go to the EPSRC for funding.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
van-den-bogaert-etal-2020-mice
https://aclanthology.org/2020.eamt-1.59
MICE: a middleware layer for MT
The MICE project (2018-2020) will deliver a middleware layer for improving the output quality of the eTranslation system of EC's Connecting Europe Facility through additional services, such as domain adaptation and named-entity recognition. It will also deliver a user portal, allowing for human post-editing.
false
[]
[]
null
null
null
MICE is funded by the EC's CEF Telecom programme (project 2017-EU-IA-0169).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
deleger-zweigenbaum-2010-identifying
http://www.lrec-conf.org/proceedings/lrec2010/pdf/472_Paper.pdf
Identifying Paraphrases between Technical and Lay Corpora
In previous work, we presented a preliminary study to identify paraphrases between technical and lay discourse types from medical corpora dedicated to the French language. In this paper, we test the hypothesis that the same kinds of paraphrases as for French can be detected between English technical and lay discourse types and report the adaptation of our method from French to English. Starting from the constitution of monolingual comparable corpora, we extract two kinds of paraphrases: paraphrases between nominalizations and verbal constructions and paraphrases between neo-classical compounds and modern-language phrases. We do this relying on morphological resources and a set of extraction rules we adapt from the original approach for French. Results show that paraphrases could be identified with a rather good precision, and that these types of paraphrase are relevant in the context of the opposition between technical and lay discourse types. These observations are consistent with the results obtained for French, which demonstrates the portability of the approach as well as the similarity of the two languages as regards the use of those kinds of expressions in technical and lay discourse types.
true
[]
[]
Quality Education
null
null
null
2010
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
rosenthal-mckeown-2013-columbia
https://aclanthology.org/S13-2079
Columbia NLP: Sentiment Detection of Subjective Phrases in Social Media
We present a supervised sentiment detection system that classifies the polarity of subjective phrases as positive, negative, or neutral. It is tailored towards online genres, specifically Twitter, through the inclusion of dictionaries developed to capture vocabulary used in online conversations (e.g., slang and emoticons) as well as stylistic features common to social media. We show how to incorporate these new features within a state of the art system and evaluate it on subtask A in SemEval-2013 Task 2: Sentiment Analysis in Twitter.
false
[]
[]
null
null
null
This research was partially funded by (a) the ODNI, IARPA, through the U.S. Army Research Lab and (b) the DARPA DEFT Program. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views, policies, or positions of IARPA, the ODNI, the Department of Defense, or the U.S. Government.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gupta-2020-finlp
https://aclanthology.org/2020.fnp-1.12
FiNLP at FinCausal 2020 Task 1: Mixture of BERTs for Causal Sentence Identification in Financial Texts
This paper describes our system developed for the sub-task 1 of the FinCausal shared task in the FNP-FNS workshop held in conjunction with COLING-2020. The system classifies whether a financial news text segment contains causality or not. To address this task, we fine-tune and ensemble the generic and domain-specific BERT language models pre-trained on financial text corpora. The task data is highly imbalanced with the majority non-causal class; therefore, we train the models using strategies such as under-sampling, cost-sensitive learning, and data augmentation. Our best system achieves a weighted F1-score of 96.98, securing 4th position on the evaluation leaderboard. The code is available at https:
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bisk-etal-2016-natural
https://aclanthology.org/N16-1089
Natural Language Communication with Robots
We propose a framework for devising empirically testable algorithms for bridging the communication gap between humans and robots. We instantiate our framework in the context of a problem setting in which humans give instructions to robots using unrestricted natural language commands, with instruction sequences being subservient to building complex goal configurations in a blocks world. We show how one can collect meaningful training data and we propose three neural architectures for interpreting contextually grounded natural language commands. The proposed architectures allow us to correctly understand/ground the blocks that the robot should move when instructed by a human who uses unrestricted language. The architectures have more difficulty in correctly understanding/grounding the spatial relations required to place blocks correctly, especially when the blocks are not easily identifiable.
false
[]
[]
null
null
null
This work was supported by Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
patchala-bhatnagar-2018-authorship
https://aclanthology.org/C18-1234
Authorship Attribution By Consensus Among Multiple Features
Most existing research on authorship attribution uses various lexical, syntactic and semantic features. In this paper we demonstrate an effective template-based approach for combining various syntactic features of a document for authorship analysis. The parse-tree based features that we propose are independent of the topic of a document and reflect the innate writing styles of authors. We show that the use of templates including sub-trees of parse trees in conjunction with other syntactic features result in improved author attribution rates. Another contribution is the demonstration that Dempster's rule based combination of evidence from syntactic features performs better than other evidence-combination methods. We also demonstrate that our methodology works well for the case where actual author is not included in the candidate author set.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
peng-etal-2016-news
https://aclanthology.org/P16-1037
News Citation Recommendation with Implicit and Explicit Semantics
In this work, we focus on the problem of news citation recommendation. The task aims to recommend news citations for both authors and readers to create and search news references. Due to the sparsity issue of news citations and the engineering difficulty in obtaining information on authors, we focus on content similarity-based methods instead of collaborative filtering-based approaches. In this paper, we explore word embedding (i.e., implicit semantics) and grounded entities (i.e., explicit semantics) to address the variety and ambiguity issues of language. We formulate the problem as a reranking task and integrate different similarity measures under the learning to rank framework. We evaluate our approach on a real-world dataset. The experimental results show the efficacy of our method.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zheng-etal-2022-fewnlu
https://aclanthology.org/2022.acl-long.38
FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding
The few-shot natural language understanding (NLU) task has attracted much recent attention. However, prior methods have been evaluated under a disparate set of protocols, which hinders fair comparison and measuring progress of the field. To address this issue, we introduce an evaluation framework that improves previous evaluation procedures in three key aspects, i.e., test performance, dev-test correlation, and stability. Under this new evaluation framework, we re-evaluate several state-of-the-art few-shot methods for NLU tasks. Our framework reveals new insights: (1) both the absolute performance and relative gap of the methods were not accurately estimated in prior literature; (2) no single method dominates most tasks with consistent performance; (3) improvements of some methods diminish with a larger pretrained model; and (4) gains from different methods are often complementary and the best combined model performs close to a strong fully-supervised baseline. We open-source our toolkit, FewNLU, that implements our evaluation framework along with a number of state-of-the-art methods.
false
[]
[]
null
null
null
We thank Dani Yogatama for valuable feedback on a draft of this paper.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-etal-2020-zero
https://aclanthology.org/2020.repl4nlp-1.1
Zero-Resource Cross-Domain Named Entity Recognition
Existing models for cross-domain named entity recognition (NER) rely on numerous unlabeled corpus or labeled NER training data in target domains. However, collecting data for low-resource target domains is not only expensive but also time-consuming. Hence, we propose a cross-domain NER model that does not use any external resources. We first introduce a Multi-Task Learning (MTL) by adding a new objective function to detect whether tokens are named entities or not. We then introduce a framework called Mixture of Entity Experts (MoEE) to improve the robustness for zero-resource domain adaptation. Finally, experimental results show that our model outperforms strong unsupervised cross-domain sequence labeling models, and the performance of our model is close to that of the state-of-the-art model which leverages extensive resources.
false
[]
[]
null
null
null
This work is partially funded by ITF/319/16FP and MRP/055/18 of the Innovation Technology Commission, the Hong Kong SAR Government.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dufter-etal-2021-static
https://aclanthology.org/2021.naacl-main.186
Static Embeddings as Efficient Knowledge Bases?
Recent research investigates factual knowledge stored in large pretrained language models (PLMs). Instead of structural knowledge base (KB) queries, masked sentences such as "Paris is the capital of [MASK]" are used as probes. The good performance on this analysis task has been interpreted as PLMs becoming potential repositories of factual knowledge. In experiments across ten linguistically diverse languages, we study knowledge contained in static embeddings. We show that, when restricting the output space to a candidate set, simple nearest neighbor matching using static embeddings performs better than PLMs. E.g., static embeddings perform 1.6% points better than BERT while just using 0.3% of energy for training. One important factor in their good comparative performance is that static embeddings are standardly learned for a large vocabulary. In contrast, BERT exploits its more sophisticated, but expensive ability to compose meaningful representations from a much smaller subword vocabulary.
false
[]
[]
null
null
null
Acknowledgements. This work was supported by the European Research Council (# 740516) and the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A. The authors of this work take full responsibility for its content. The first author was supported by the Bavarian research institute for digital transformation (bidt) through their fellowship program. We thank Yanai Elazar and the anonymous reviewers for valuable comments.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-etal-2021-improving
https://aclanthology.org/2021.naacl-main.475
Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection
Despite significant progress in neural abstractive summarization, recent studies have shown that the current models are prone to generating summaries that are unfaithful to the original context. To address the issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique to correct the extrinsic hallucinations (i.e. information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries where named entities and quantities in the generated summary are replaced with ones with compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that our proposed method is effective in identifying and correcting extrinsic hallucinations. We analyze the typical hallucination phenomenon by different types of neural summarization systems, in the hope of providing insights for future work in this direction.
false
[]
[]
null
null
null
We thank Sunita Verma and Sugato Basu for valuable input and feedback on drafts of the paper. This work was supported in part by a Focused Award from Google, a gift from Tencent, and by Contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tsai-etal-2016-cross
https://aclanthology.org/K16-1022
Cross-Lingual Named Entity Recognition via Wikification
Named Entity Recognition (NER) models for language L are typically trained using annotated data in that language. We study cross-lingual NER, where a model for NER in L is trained on another, source, language (or multiple source languages). We introduce a language independent method for NER, building on cross-lingual wikification, a technique that grounds words and phrases in non-English text into English Wikipedia entries. Thus, mentions in any language can be described using a set of categories and FreeBase types, yielding, as we show, strong language-independent features. With this insight, we propose an NER model that can be applied to all languages in Wikipedia. When trained on English, our model outperforms comparable approaches on the standard CoNLL datasets (Spanish, German, and Dutch) and also performs very well on low-resource languages (e.g., Turkish, Tagalog, Yoruba, Bengali, and Tamil) that have significantly smaller Wikipedia. Moreover, our method allows us to train on multiple source languages, typically improving NER results on the target languages. Finally, we show that our language-independent features can be used also to enhance monolingual NER systems, yielding improved results for all 9 languages.
false
[]
[]
null
null
null
This research is supported by NIH grant U54-GM114838, a grant from the Allen Institute for Artificial Intelligence (allenai.org), and Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
arapov-herz-1973-frequency
https://aclanthology.org/C73-2001
Frequency and Age as Characteristics of a Word
1. The problem of relation between the frequency and age of a word is only a small part of the general problem of opposition of the synchronic and diachronic aspects of language. The frequency is obviously a purely synchronic characteristic of a word whereas the age (the time interval t between the appearance of the word and the present moment) is a purely diachronic one. However there is a simple dependence between both characteristics: the old age of a word corresponds to a high frequency ranking and vice-versa: among the words with low frequency the proportion of ancient words is small. The existence of this dependency was first discovered by G. K. Zipf (1947) . 2. To obtain this dependency in analytical form let us split the whole frequency dictionary into a number of groups of equal size (n words in each of the groups).* Each group consists of words of equal or nearly equal values of frequency. The most frequently used n words belong to the group with rank 1, the following n words constitute the group with rank 2 and the words having in the dictionary numbers from (i-1) n + 1 till i.n constitute a group with rank i.
false
[]
[]
null
null
null
null
1973
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-liu-2015-corpus
https://aclanthology.org/Y15-2029
A Corpus-based Comparatively Study on the Semantic Features and Syntactic patterns of Y\`ou/H\'ai in Mandarin Chinese
This study points out that Yòu (又) and Hái (还) have their own prominent semantic features and syntactic patterns compared with each other. The differences reflect in the combination with verbs 1. Hái (还) has absolute superiority in collocation with V+Bu (不)+V, which tends to express [durative]. Yòu (又) has advantages in collocations with V+Le (了)+V and derogatory verbs. Yòu (又)+V+Le (了)+V tends to express [repetition], and Yòu (又)+derogatory verbs tends to express [repetition, derogatory]. We also find that the two words represent different semantic features when they match with grammatical aspect markers Le (了), Zhe (着) and Guo (过). Different distributions have a close relation with their semantic features. This study is based on the investigation of the large-scale corpus and data statistics, applying methods of corpus linguistics, computational linguistics and semantic background model, etc. We also described and explained the language facts.
false
[]
[]
null
null
null
The study is supported by 1) National Language Committee Research Project .2) The Fundamental Research Funds for the Central Universities, and the Research Funds of Beijing Language and Culture University (No.15YCX101). 3) Science Foundation of Beijing Language and Culture University (supported by "the Fundamental Research Funds for the Central Universities") (13ZDY03)
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
emele-1991-unification
https://aclanthology.org/P91-1042
Unification With Lazy Non-Redundant Copying
This paper presents a unification procedure which eliminates the redundant copying of structures by using a lazy incremental copying approach to achieve structure sharing. Copying of structures accounts for a considerable amount of the total processing time. Several methods have been proposed to minimize the amount of necessary copying. Lazy Incremental Copying (LIC) is presented as a new solution to the copying problem. It synthesizes ideas of lazy copying with the notion of chronological dereferencing for achieving a high amount of structure sharing.
false
[]
[]
null
null
null
null
1991
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tu-etal-2016-modeling
https://aclanthology.org/P16-1008
Modeling Coverage for Neural Machine Translation
Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. It tends to ignore past alignment information, however, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which lets the NMT system consider more about untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.
false
[]
[]
null
null
null
This work is supported by China National 973 project 2014CB340301. Yang Liu is supported by the National Natural Science Foundation of China (No. 61522204) and the 863 Program (2015AA011808). We thank the anonymous reviewers for their insightful comments.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
munteanu-etal-2004-improved
https://aclanthology.org/N04-1034
Improved Machine Translation Performance via Parallel Sentence Extraction from Comparable Corpora
null
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
le-hoi-2020-video
https://aclanthology.org/2020.acl-main.518
Video-Grounded Dialogues with Pretrained Generation Language Models
Pre-trained language models have shown remarkable success in improving various downstream NLP tasks due to their ability to capture dependencies in textual data and generate natural responses. In this paper, we leverage the power of pre-trained language models for improving video-grounded dialogue, which is very challenging and involves complex features of different dynamics: (1) Video features which can extend across both spatial and temporal dimensions; and (2) Dialogue features which involve semantic dependencies over multiple dialogue turns. We propose a framework by extending GPT-2 models to tackle these challenges by formulating video-grounded dialogue tasks as a sequence-to-sequence task, combining both visual and textual representation into a structured sequence, and fine-tuning a large pre-trained GPT-2 network. Our framework allows fine-tuning language models to capture dependencies across multiple modalities over different levels of information: spatio-temporal level in video and token-sentence level in dialogue context. We achieve promising improvement on the Audio-Visual Scene-Aware Dialogues (AVSD) benchmark from DSTC7, which supports a potential direction in this line of research.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pacak-1963-slavic
https://aclanthology.org/1963.earlymt-1.27
Slavic languages---comparative morphosyntactic research
null
false
[]
[]
null
null
null
null
1963
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-etal-2021-continual
https://aclanthology.org/2021.findings-acl.239
Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation
The data scarcity in low-resource languages has become a bottleneck to building robust neural machine translation systems. Fine-tuning a multilingual pre-trained model (e.g., mBART (Liu et al., 2020a)) on the translation task is a good approach for low-resource languages; however, its performance will be greatly limited when there are unseen languages in the translation pairs. In this paper, we present a continual pre-training (CPT) framework on mBART to effectively adapt it to unseen languages. We first construct noisy mixed-language text from the monolingual corpus of the target language in the translation pair to cover both the source and target languages, and then, we continue pre-training mBART to reconstruct the original monolingual text. Results show that our method can consistently improve the fine-tuning performance upon the mBART baseline, as well as other strong baselines, across all tested low-resource translation pairs containing unseen languages. Furthermore, our approach also boosts the performance on translation pairs where both languages are seen in the original mBART's pre-training. The code is available at https://github.com/zliucr/cpt-nmt.
false
[]
[]
null
null
null
We want to say thanks to the anonymous reviewers for the insightful reviews and constructive feedback. This work is partially funded by ITF/319/16FP and MRP/055/18 of the Innovation Technology Commission, the Hong Kong SAR Government.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
austin-etal-1992-bbn
https://aclanthology.org/H92-1049
BBN Real-Time Speech Recognition Demonstrations
Typically, real-time speech recognition -if achieved at all -is accomplished either by greatly simplifying the processing to be done, or by the use of special-purpose hardware. Each of these approaches has obvious problems. The former results in a substantial loss in accuracy, while the latter often results in obsolete hardware being developed at great expense and delay. Starting in 1990 [1] [2] we have taken a different approach based on modifying the algorithms to provide increased speed without loss in accuracy. Our goal has been to use commercially available off-the-shelf (COTS) hardware to perform speech recognition. Initially, this meant using workstations with powerful but standard signal processing boards acting as accelerators. However, even these signal processing boards have two significant disadvantages:
false
[]
[]
null
null
null
null
1992
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
aggarwal-etal-2021-efficient
https://aclanthology.org/2021.ranlp-1.3
Efficient Multilingual Text Classification for Indian Languages
India is one of the richest language hubs on the earth and is very diverse and multilingual. But apart from a few Indian languages, most of them are still considered to be resource poor. Since most of the NLP techniques either require linguistic knowledge that can only be developed by experts and native speakers of that language or they require a lot of labelled data which is again expensive to generate, the task of text classification becomes challenging for most of the Indian languages. The main objective of this paper is to see how one can benefit from the lexical similarity found in Indian languages in a multilingual scenario. Can a classification model trained on one Indian language be reused for other Indian languages? So, we performed zero-shot text classification via exploiting lexical similarity and we observed that our model performs best in those cases where the vocabulary overlap between the language datasets is maximum. Our experiments also confirm that a single multilingual model trained via exploiting language relatedness outperforms the baselines by significant margins.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
uehara-harada-2020-unsupervised
https://aclanthology.org/2020.nlpbt-1.6
Unsupervised Keyword Extraction for Full-Sentence VQA
In the majority of the existing Visual Question Answering (VQA) research, the answers consist of short, often single words, as per instructions given to the annotators during dataset construction. This study envisions a VQA task for natural situations, where the answers are more likely to be sentences rather than single words. To bridge the gap between this natural VQA and existing VQA approaches, a novel unsupervised keyword extraction method is proposed. The method is based on the principle that the full-sentence answers can be decomposed into two parts: one that contains new information answering the question (i.e., keywords), and one that contains information already included in the question. Discriminative decoders were designed to achieve such decomposition, and the method was experimentally implemented on VQA datasets containing full-sentence answers. The results show that the proposed model can accurately extract the keywords without being given explicit annotations describing them.
false
[]
[]
null
null
null
This work was partially supported by JST CREST Grant Number JP-MJCR1403, and partially supported by JSPS KAKENHI Grant Number JP19H01115 and JP20H05556. We would like to thank Yang Li, Sho Maeoki, Sho Inayoshi, and Antonio Tejero-de-Pablos for helpful discussions.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dinu-moldovan-2021-automatic
https://aclanthology.org/2021.ranlp-1.41
Automatic Detection and Classification of Mental Illnesses from General Social Media Texts
Mental health is getting more and more attention recently, depression being a very common illness nowadays, but also other disorders like anxiety, obsessive-compulsive disorders, feeding disorders, autism, or attention-deficit/hyperactivity disorders. The huge amount of data from social media and the recent advances of deep learning models provide valuable means to automatically detecting mental disorders from plain text. In this article, we experiment with state-of-the-art methods on the SMHD mental health conditions dataset from Reddit (Cohan et al., 2018). Our contribution is threefold: using a dataset consisting of more illnesses than most studies, focusing on general text rather than mental health support groups and classification by posts rather than individuals or groups. For the automatic classification of the diseases, we employ three deep learning models: BERT, RoBERTa and XLNET. We double the baseline established by Cohan et al. (2018), on just a sample of their dataset. We improve the results obtained by Jiang et al. (2020) on post-level classification. The accuracy obtained by the eating disorder classifier is the highest due to the pregnant presence of discussions related to calories, diets, recipes etc., whereas depression had the lowest F1 score, probably because depression is more difficult to identify in linguistic acts.
true
[]
[]
Good Health and Well-Being
null
null
This research is supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS/CCCDI
2021
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-etal-2021-secoco-self
https://aclanthology.org/2021.findings-emnlp.396
Secoco: Self-Correcting Encoding for Neural Machine Translation
This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with input noise for robust neural machine translation by introducing self-correcting predictors. Different from previous robust approaches, Secoco enables NMT to explicitly correct noisy inputs and delete specific errors simultaneously with the translation decoding process. Secoco is able to achieve significant improvements of 1.6 BLEU points over strong baselines on two real-world test sets and a benchmark WMT dataset with good interpretability. The code and dataset are publicly available at https://github.com/rgwt123/Secoco.
false
[]
[]
null
null
null
Deyi Xiong was partially supported by the National Key Research and Development Program of China (Grant No.2019QY1802) and Natural Science Foundation of Tianjin (Grant No.19JCZDJC31400). We would like to thank the three anonymous reviewers for their insightful comments.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bustamante-diaz-2006-spelling
http://www.lrec-conf.org/proceedings/lrec2006/pdf/119_pdf.pdf
Spelling Error Patterns in Spanish for Word Processing Applications
This paper reports findings from the elaboration of a typology of spelling errors for Spanish. It also discusses previous generalizations about spelling error patterns found in other studies and offers new insights on them. The typology is based on the analysis of around 76K misspellings found in real-life texts produced by humans. The main goal of the elaboration of the typology was to help in the implementation of a spell checker that detects context-independent misspellings in general unrestricted texts with the most common confusion pairs (i.e. error/correction pairs) to improve the set of ranked correction candidates for misspellings. We found that spelling errors are language dependent and are closely related to the orthographic rules of each language. The statistical data we provide on spelling error patterns in Spanish and their comparison with other data in other related works are the novel contribution of this paper. In this line, this paper shows that some of the general statements found in the literature about spelling error patterns apply mainly to English and cannot be extrapolated to other languages.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
koshorek-etal-2018-text
https://aclanthology.org/N18-2075
Text Segmentation as a Supervised Learning Task
Text segmentation, the task of dividing a document into contiguous segments based on its semantic structure, is a longstanding challenge in language understanding. Previous work on text segmentation focused on unsupervised methods such as clustering or graph search, due to the paucity in labeled data. In this work, we formulate text segmentation as a supervised learning problem, and present a large new dataset for text segmentation that is automatically extracted and labeled from Wikipedia. Moreover, we develop a segmentation model based on this dataset and show that it generalizes well to unseen natural text.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their constructive feedback. This work was supported by the Israel Science Foundation, grant 942/16.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kaewphan-etal-2014-utu
https://aclanthology.org/S14-2143
UTU: Disease Mention Recognition and Normalization with CRFs and Vector Space Representations
In this paper we present our system participating in the SemEval-2014 Task 7 in both subtasks A and B, aiming at recognizing and normalizing disease and symptom mentions from electronic medical records respectively. In subtask A, we used an existing NER system, NERsuite, with our own feature set tailored for this task. For subtask B, we combined word vector representations and supervised machine learning to map the recognized mentions to the corresponding UMLS concepts. Our system was placed 2nd and 5th out of 21 participants on subtasks A and B respectively showing competitive performance.
true
[]
[]
Good Health and Well-Being
null
null
Computational resources were provided by CSC -IT Center for Science Ltd, Espoo, Finland. This work was supported by the Academy of Finland.
2014
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
niv-1992-right
https://aclanthology.org/P92-1039
Right Association Revisited
Consideration of when Right Association works and when it fails leads to a restatement of this parsing principle in terms of the notion of heaviness. A computational investigation of a syntactically annotated corpus provides evidence for this proposal and suggests circumstances when RA is likely to make correct attachment predictions.
false
[]
[]
null
null
null
null
1992
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false