Schema:
- ID: string (length 11–54)
- url: string (length 33–64)
- title: string (length 11–184)
- abstract: string (length 17–3.87k)
- label_nlp4sg: bool (2 classes)
- task: list
- method: list
- goal1: string (9 values)
- goal2: string (9 values)
- goal3: string (1 value)
- acknowledgments: string (length 28–1.28k)
- year: string (length 4)
- sdg1: bool (1 class)
- sdg2: bool (1 class)
- sdg3: bool (2 classes)
- sdg4: bool (2 classes)
- sdg5: bool (2 classes)
- sdg6: bool (1 class)
- sdg7: bool (1 class)
- sdg8: bool (2 classes)
- sdg9: bool (2 classes)
- sdg10: bool (2 classes)
- sdg11: bool (2 classes)
- sdg12: bool (1 class)
- sdg13: bool (2 classes)
- sdg14: bool (1 class)
- sdg15: bool (1 class)
- sdg16: bool (2 classes)
- sdg17: bool (2 classes)
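Rows that follow this schema can be consumed directly as Python dicts. The sketch below is a minimal, illustrative example of filtering by the label_nlp4sg flag and collecting the SDG indices set to true; the helper name sdg_labels is invented here, and the two example rows are abbreviated copies of records in this dump, not a published loader API.

```python
def sdg_labels(record):
    """Return the SDG indices (1-17) whose boolean flag is true for a record."""
    return [i for i in range(1, 18) if record.get(f"sdg{i}")]

# Two abbreviated rows following the schema above (most fields omitted).
records = [
    {"ID": "hur-etal-2020-domain", "label_nlp4sg": True,
     "goal1": "Good Health and Well-Being", "year": "2020",
     **{f"sdg{i}": (i == 3) for i in range(1, 18)}},
    {"ID": "huang-etal-2021-seq2emo", "label_nlp4sg": False,
     "goal1": None, "year": "2021",
     **{f"sdg{i}": False for i in range(1, 18)}},
]

# Keep only papers labelled as NLP-for-social-good, with their SDG indices.
positives = [(r["ID"], sdg_labels(r)) for r in records if r["label_nlp4sg"]]
print(positives)  # [('hur-etal-2020-domain', [3])]
```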
ID: huang-etal-2021-seq2emo
url: https://aclanthology.org/2021.naacl-main.375
title: Seq2Emo: A Sequence to Multi-Label Emotion Classification Model
abstract: Multi-label emotion classification is an important task in NLP and is essential to many applications. In this work, we propose a sequence-to-emotion (Seq2Emo) approach, which implicitly models emotion correlations in a bi-directional decoder. Experiments on SemEval'18 and GoEmotions datasets show that our approach outperforms state-of-the-art methods (without using external data). In particular, Seq2Emo outperforms the binary relevance (BR) and classifier chain (CC) approaches in a fair setting.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant Nos. RGPIN-2020-04465 and RGPIN-2020-04440. Chenyang Huang is supported by the Borealis AI Graduate Fellowship Program. Lili Mou and Osmar Zaïane are supported by the Amii Fellow Program and the Canada CIFAR AI Chair Program. This research is also supported in part by Compute Canada (www.computecanada.ca).
year: 2021
sdg1–sdg17: all false
ID: tay-etal-2018-attentive
url: https://aclanthology.org/D18-1381
title: Attentive Gated Lexicon Reader with Contrastive Contextual Co-Attention for Sentiment Classification
abstract: This paper proposes a new neural architecture that exploits readily available sentiment lexicon resources. The key idea is that incorporating a word-level prior can aid in the representation learning process, eventually improving model performance. To this end, our model employs two distinctly unique components, i.e., (1) we introduce a lexicon-driven contextual attention mechanism to imbue lexicon words with long-range contextual information and (2) we introduce a contrastive co-attention mechanism that models contrasting polarities between all positive and negative words in a sentence. Via extensive experiments, we show that our approach outperforms many other neural baselines on sentiment classification tasks on multiple benchmark datasets.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2018
sdg1–sdg17: all false
ID: rikters-2015-multi
url: https://aclanthology.org/W15-4102
title: Multi-system machine translation using online APIs for English-Latvian
abstract: This paper describes a hybrid machine translation (HMT) system that employs several online MT system application program interfaces (APIs) forming a Multi-System Machine Translation (MSMT) approach. The goal is to improve the automated translation of English-Latvian texts over each of the individual MT APIs. The selection of the best hypothesis translation is done by calculating the perplexity for each hypothesis. Experiment results show a slight improvement of BLEU score and WER (word error rate).
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This research work was supported by the research project "Optimization methods of large scale statistical models for innovative machine translation technologies", project financed by The State Education Development Agency (Latvia) and European Regional Development Fund, contract No. 2013/0038/2DP/2.1.1.1.0/13/APIA/VI-AA/029. The author would also like to thank Inguna Skadiņa for advice and contributions, and the anonymous reviewers for their comments and suggestions.
year: 2015
sdg1–sdg17: all false
ID: chandu-etal-2018-code
url: https://aclanthology.org/W18-3204
title: Code-Mixed Question Answering Challenge: Crowd-sourcing Data and Techniques
abstract: Code-Mixing (CM) is the phenomenon of alternating between two or more languages, which is prevalent in bi- and multilingual communities. Most NLP applications today are still designed with the assumption of a single interaction language and are most likely to break given a CM utterance with multiple languages mixed at a morphological, phrase or sentence level. For example, popular commercial search engines do not yet fully understand the intents expressed in CM queries. As a first step towards fostering research which supports CM in NLP applications, we systematically crowd-sourced and curated an evaluation dataset for factoid question answering in three CM languages: Hinglish (Hindi+English), Tenglish (Telugu+English) and Tamlish (Tamil+English), which belong to two language families. We share the details of our data collection process, techniques which were used to avoid inducing lexical bias amongst the crowd workers, and other CM-specific linguistic properties of the dataset. Our final dataset, which is available freely for research purposes, has 1,694 Hinglish, 2,848 Tamlish and 1,391 Tenglish factoid questions and their answers. We discuss the techniques used by the participants for the first edition of this ongoing challenge.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2018
sdg1–sdg17: all false
ID: bourdon-etal-1998-case
url: https://aclanthology.org/W98-0510
title: A Case Study in Implementing Dependency-Based Grammars
abstract: In creating an English grammar checking software product, we implemented a large-coverage grammar based on the dependency grammar formalism. This implementation required some adaptation of current linguistic description to prevent serious overgeneration of parse trees. Here, we present one particular example, that of preposition stranding and dangling prepositions, where implementing an alternative to existing linguistic analyses is warranted to limit such overgeneration.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We would like to thank Les Logiciels Machina Sapiens inc. for supporting us in writing this paper. We are indebted to all the people, past and present, who have contributed to the development of the grammar checkers. We thank Mary Howatt for editing advice and anonymous reviewers for their useful comments. All errors remain those of the authors.
year: 1998
sdg1–sdg17: all false
ID: nn-1981-technical-correspondence
url: https://aclanthology.org/J81-1005
title: Technical Correspondence: On the Utility of Computing Inferences in Data Base Query Systems
abstract: These implementations were significantly more efficient, but checked a somewhat narrower class of presumptions than COOP. Damerau mentions that queries with non-empty responses can also make presumptions. This is certainly true, even in more subtle ways than noted. (For example, "What is the youngest assistant professor's salary?" presumes that there is more than one assistant professor.) Issues such as these are indeed currently under investigation. Overall, we are pleased to see that Damerau has raised some very important issues and we hope that this exchange will be helpful to the natural language processing community.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 1981
sdg1–sdg17: all false
ID: hershcovich-etal-2019-syntactic
url: https://aclanthology.org/W19-2009
title: Syntactic Interchangeability in Word Embedding Models
abstract: Nearest neighbors in word embedding models are commonly observed to be semantically similar, but the relations between them can vary greatly. We investigate the extent to which word embedding models preserve syntactic interchangeability, as reflected by distances between word vectors, and the effect of hyper-parameters, context window size in particular. We use part of speech (POS) as a proxy for syntactic interchangeability, as generally speaking, words with the same POS are syntactically valid in the same contexts. We also investigate the relationship between interchangeability and similarity as judged by commonly-used word similarity benchmarks, and correlate the result with the performance of word embedding models on these benchmarks. Our results will inform future research and applications in the selection of word embedding models, suggesting a principle for an appropriate selection of the context window size parameter depending on the use case.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We thank the anonymous reviewers for their helpful comments.
year: 2019
sdg1–sdg17: all false
ID: treharne-etal-2006-towards
url: https://aclanthology.org/U06-1025
title: Towards Cognitive Optimisation of a Search Engine Interface
abstract: Search engine interfaces come in a range of variations from the familiar text-based approach to the more experimental graphical systems. It is rare however that psychological or human factors research is undertaken to properly evaluate or optimize the systems, and to the extent this has been done the results have tended to contradict some of the assumptions that have driven search engine design. Our research is focussed on a model in which at least 100 hits are selected from a corpus of documents based on a set of query words and displayed graphically. Matrix manipulation techniques in the SVD/LSA family are used to identify significant dimensions and display documents according to a subset of these dimensions. The research questions we are investigating in this context relate to the computational methods (how to rescale the data), the linguistic information (how to characterize a document), and the visual attributes (which linguistic dimensions to display using which attributes).
label_nlp4sg: true
task: []
method: []
goal1: Industry, Innovation and Infrastructure
goal2: null
goal3: null
acknowledgments: null
year: 2006
sdg1–sdg17: all false except sdg9 (true)
ID: nielsen-2019-danish
url: https://aclanthology.org/2019.gwc-1.5
title: Danish in Wikidata lexemes
abstract: Wikidata introduced support for lexicographic data in 2018. Here we describe the lexicographic part of Wikidata as well as experiences with setting up lexemes for the Danish language. We note various possible annotations for lexemes as well as discuss various choices made.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We thank Bolette Sandford Pedersen, Sanni Nimb, Sabine Kirchmeier, Nicolai Hartvig Sørensen and Lars Kai Hansen for discussions and answering questions, and the reviewers for suggestions for improvement of the manuscript. This work is funded by the Innovation Fund Denmark through the projects DAnish Center for Big Data Analytics driven Innovation (DABAI) and Teaching platform for developing and automatically tracking early stage literacy skills (ATEL).
year: 2019
sdg1–sdg17: all false
ID: hur-etal-2020-domain
url: https://aclanthology.org/2020.bionlp-1.17
title: Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes
abstract: Identifying the reasons for antibiotic administration in veterinary records is a critical component of understanding antimicrobial usage patterns. This informs antimicrobial stewardship programs designed to fight antimicrobial resistance, a major health crisis affecting both humans and animals in which veterinarians have an important role to play. We propose a document classification approach to determine the reason for administration of a given drug, with particular focus on domain adaptation from one drug to another, and instance selection to minimize annotation effort.
label_nlp4sg: true
task: []
method: []
goal1: Good Health and Well-Being
goal2: null
goal3: null
acknowledgments: We thank Simon Sȗster, Afshin Rahimi, and the anonymous reviewers for their insightful comments and valuable suggestions. This research was undertaken with the assistance of information and other resources from the VetCompass Australia consortium under the project "VetCompass Australia: Big Data and Realtime Surveillance for Veterinary Science", which is supported by the Australian Government through the Australian Research Council LIEF scheme (LE160100026).
year: 2020
sdg1–sdg17: all false except sdg3 (true)
ID: korkontzelos-etal-2009-graph
url: https://aclanthology.org/W09-1705
title: Graph Connectivity Measures for Unsupervised Parameter Tuning of Graph-Based Sense Induction Systems
abstract: Word Sense Induction (WSI) is the task of identifying the different senses (uses) of a target word in a given text. This paper focuses on the unsupervised estimation of the free parameters of a graph-based WSI method, and explores the use of eight Graph Connectivity Measures (GCM) that assess the degree of connectivity in a graph. Given a target word and a set of parameters, GCM evaluate the connectivity of the produced clusters, which correspond to subgraphs of the initial (unclustered) graph. Each parameter setting is assigned a score according to one of the GCM and the highest scoring setting is then selected. Our evaluation on the nouns of SemEval-2007 WSI task (SWSI) shows that: (1) all GCM estimate a set of parameters which significantly outperform the worst performing parameter setting in both SWSI evaluation schemes, (2) all GCM estimate a set of parameters which outperform the Most Frequent Sense (MFS) baseline by a statistically significant amount in the supervised evaluation scheme, and (3) two of the measures estimate a set of parameters that performs closely to a set of parameters estimated in supervised manner.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2009
sdg1–sdg17: all false
ID: yan-nakashole-2021-grounded
url: https://aclanthology.org/2021.nlp4posimpact-1.16
title: A Grounded Well-being Conversational Agent with Multiple Interaction Modes: Preliminary Results
abstract: Technologies for enhancing well-being, healthcare vigilance and monitoring are on the rise. However, despite patient interest, such technologies suffer from low adoption. One hypothesis for this limited adoption is loss of human interaction that is central to doctor-patient encounters. In this paper we seek to address this limitation via a conversational agent that adopts one aspect of in-person doctor-patient interactions: a human avatar to facilitate medical grounded question answering. This is akin to the in-person scenario where the doctor may point to the human body or the patient may point to their own body to express their conditions. Additionally, our agent has multiple interaction modes, which may give more options for the patient to use the agent, not just for medical question answering, but also to engage in conversations about general topics and current events. Both the avatar and the multiple interaction modes could help improve adherence. We present a high level overview of the design of our agent, Marie Bot Wellbeing. We also report implementation details of our early prototype, and present preliminary results.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2021
sdg1–sdg17: all false
ID: qian-etal-2019-comparative
url: https://aclanthology.org/W19-6714
title: A Comparative Study of English-Chinese Translations of Court Texts by Machine and Human Translators and the Word2Vec Based Similarity Measure's Ability To Gauge Human Evaluation Biases
abstract: In this comparative study, a jury instruction scenario was used to test the translating capabilities of multiple machine translation tools and a human translator with extensive court experience. Three certified translators/interpreters subjectively evaluated the target texts generated using adequacy and fluency as the evaluation metrics. This subjective evaluation found that the machine generated results had much poorer adequacy and fluency compared with results produced by their human counterpart. Human translators can use strategic omission and explicitation strategies such as addition, paraphrasing, substitution, and repetition to remove ambiguity, and achieve a natural flow in the target language. We also investigate instances where human evaluators have major disagreements and found that human experts could have very biased views. On the other hand, a word2vec based algorithm, if given a good reference translation, can serve as a robust and reliable similarity reference to quantify human evaluators' biases because it was trained on a large corpus using neural network models. Even though the machine generated versions had better fluency performance compared to their adequacy
label_nlp4sg: true
task: []
method: []
goal1: Peace, Justice and Strong Institutions
goal2: null
goal3: null
acknowledgments: null
year: 2019
sdg1–sdg17: all false except sdg16 (true)
ID: liu-etal-2022-end
url: https://aclanthology.org/2022.findings-acl.46
title: End-to-End Segmentation-based News Summarization
abstract: In this paper, we bring a new way of digesting news content by introducing the task of segmenting a news article into multiple sections and generating the corresponding summary to each section. We make two contributions towards this new task. First, we create and make available a dataset, SEGNEWS, consisting of 27k news articles with sections and aligned heading-style section summaries. Second, we propose a novel segmentation-based language generation model adapted from pretrained language models that can jointly segment a document and produce the summary for each section. Experimental results on SEGNEWS demonstrate that our model can outperform several state-of-the-art sequence-to-sequence generation models for this new task.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2022
sdg1–sdg17: all false
ID: schulze-wettendorf-etal-2014-snap
url: https://aclanthology.org/S14-2101
title: SNAP: A Multi-Stage XML-Pipeline for Aspect Based Sentiment Analysis
abstract: This paper describes the SNAP system, which participated in Task 4 of SemEval-2014: Aspect Based Sentiment Analysis. We use an XML-based pipeline that combines several independent components to perform each subtask. Key resources used by the system are Bing Liu's sentiment lexicon, Stanford CoreNLP, RFTagger, several machine learning algorithms and WordNet. SNAP achieved satisfactory results in the evaluation, placing in the top half of the field for most subtasks.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2014
sdg1–sdg17: all false
ID: jindal-2018-generating
url: https://aclanthology.org/N18-4020
title: Generating Image Captions in Arabic using Root-Word Based Recurrent Neural Networks and Deep Neural Networks
abstract: Image caption generation has gathered widespread interest in the artificial intelligence community. Automatic generation of an image description requires both computer vision and natural language processing techniques. While there has been advanced research in English caption generation, research on generating Arabic descriptions of an image is extremely limited. Semitic languages like Arabic are heavily influenced by root-words. We leverage this critical dependency of Arabic to generate captions of an image directly in Arabic using root-word based Recurrent Neural Networks and Deep Neural Networks. Experimental results on datasets from various Middle Eastern newspaper websites allow us to report the first BLEU score for direct Arabic caption generation. We also compare the results of our approach with BLEU score captions generated in English and translated into Arabic. Experimental results confirm that generating image captions using root-words directly in Arabic significantly outperforms the English-Arabic translated captions using state-of-the-art methods.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2018
sdg1–sdg17: all false
ID: hale-etal-2018-finding
url: https://aclanthology.org/P18-1254
title: Finding syntax in human encephalography with beam search
abstract: Recurrent neural network grammars (RNNGs) are generative models of (tree, string) pairs that rely on neural networks to evaluate derivational choices. Parsing with them using beam search yields a variety of incremental complexity metrics such as word surprisal and parser action count. When used as regressors against human electrophysiological responses to naturalistic text, they derive two amplitude effects: an early peak and a P600-like later peak. By contrast, a non-syntactic neural language model yields no reliable effects. Model comparisons attribute the early peak to syntactic composition within the RNNG. This pattern of results recommends the RNNG+beam search combination as a mechanistic model of the syntactic processing that occurs during normal human language comprehension.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This material is based upon work supported by the National Science Foundation under Grants No. 1607441 and No. 1607251. We thank Max Cantor and Rachel Eby for helping with data collection.
year: 2018
sdg1–sdg17: all false
ID: lee-etal-2016-feature
url: https://aclanthology.org/W16-4204
title: Feature-Augmented Neural Networks for Patient Note De-identification
abstract: Patient notes contain a wealth of information of potentially great interest to medical investigators. However, to protect patients' privacy, Protected Health Information (PHI) must be removed from the patient notes before they can be legally released, a process known as patient note de-identification. The main objective for a de-identification system is to have the highest possible recall. Recently, the first neural-network-based de-identification system has been proposed, yielding state-of-the-art results. Unlike other systems, it does not rely on human-engineered features, which allows it to be quickly deployed, but does not leverage knowledge from human experts or from electronic health records (EHRs). In this work, we explore a method to incorporate human-engineered features as well as features derived from EHRs to a neural-network-based de-identification system. Our results show that the addition of features, especially the EHR-derived features, further improves the state-of-the-art in patient note de-identification, including for some of the most sensitive PHI types such as patient names. Since in a real-life setting patient notes typically come with EHRs, we recommend developers of de-identification systems to leverage the information EHRs contain.
label_nlp4sg: true
task: []
method: []
goal1: Good Health and Well-Being
goal2: Peace, Justice and Strong Institutions
goal3: null
acknowledgments: The project was supported by Philips Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of Philips Research. We warmly thank Michele Filannino, Alistair Johnson, Li-wei Lehman, Roger Mark, and Tom Pollard for their helpful suggestions and technical assistance.
year: 2016
sdg1–sdg17: all false except sdg3 and sdg16 (true)
ID: sahlgren-coster-2004-using
url: https://aclanthology.org/C04-1070
title: Using Bag-of-Concepts to Improve the Performance of Support Vector Machines in Text Categorization
abstract: This paper investigates the use of concept-based representations for text categorization. We introduce a new approach to create concept-based text representations, and apply it to a standard text categorization collection. The representations are used as input to a Support Vector Machine classifier, and the results show that there are certain categories for which concept-based representations constitute a viable supplement to word-based ones. We also demonstrate how the performance of the Support Vector Machine can be improved by combining representations.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We have introduced a new method for producing concept-based (BoC) text representations, and we have compared the performance of an SVM classifier on the Reuters-21578 collection using both traditional word-based (BoW) and concept-based representations. The results show that BoC representations outperform BoW when only counting the ten largest categories, and that a combination of BoW and BoC representations improves the performance of the SVM over all categories. We conclude that concept-based representations constitute a viable supplement to word-based ones, and that there are categories in the Reuters-21578 collection that benefit from using concept-based representations.
year: 2004
sdg1–sdg17: all false
ID: xiao-etal-2005-principles
url: https://aclanthology.org/I05-1072
title: Principles of Non-stationary Hidden Markov Model and Its Applications to Sequence Labeling Task
abstract: Hidden Markov Model (Hmm) is one of the most popular language models. To improve its predictive power, one of the Hmm hypotheses, the limited history hypothesis, is usually relaxed, and a higher-order Hmm is built up. But there are several severe problems hampering the applications of high-order Hmm, such as the problem of parameter space explosion, the data sparseness problem and the system resource exhaustion problem. From another point of view, this paper relaxes the other Hmm hypothesis, the stationary (time-invariant) hypothesis, makes use of time information and proposes a non-stationary Hmm (NSHmm). This paper describes NSHmm in detail, including its definition, the representation of time information, the algorithms, the parameter space and so on. Moreover, to further reduce the parameter space for mobile applications, this paper proposes a variant form of NSHmm (VNSHmm). Then NSHmm and VNSHmm are applied to two sequence labeling tasks: POS tagging and pinyin-to-character conversion. Experiment results show that compared with Hmm, NSHmm and VNSHmm can greatly reduce the error rate in both of the two tasks, which proves that they have much more predictive power than Hmm does.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This investigation was supported emphatically by the National Natural Science Foundation of China (No. 60435020) and the High Technology Research and Development Programme of China (2002AA117010-09). We especially thank the three anonymous reviewers for their valuable suggestions and comments.
year: 2005
sdg1–sdg17: all false
ID: amin-etal-2020-data
url: https://aclanthology.org/2020.bionlp-1.20
title: A Data-driven Approach for Noise Reduction in Distantly Supervised Biomedical Relation Extraction
abstract: Fact triples are a common form of structured knowledge used within the biomedical domain. As the amount of unstructured scientific texts continues to grow, manual annotation of these texts for the task of relation extraction becomes increasingly expensive. Distant supervision offers a viable approach to combat this by quickly producing large amounts of labeled, but considerably noisy, data. We aim to reduce such noise by extending an entity-enriched relation classification BERT model to the problem of multiple instance learning, and defining a simple data encoding scheme that significantly reduces noise, reaching state-of-the-art performance for distantly-supervised biomedical relation extraction. Our approach further encodes knowledge about the direction of relation triples, allowing for increased focus on relation learning by reducing noise and alleviating the need for joint learning with knowledge graph completion.
label_nlp4sg: true
task: []
method: []
goal1: Good Health and Well-Being
goal2: null
goal3: null
acknowledgments: The authors would like to thank the anonymous reviewers for helpful feedback. The work was partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 777107 through the project Precise4Q and by the German Federal Ministry of Education and Research (BMBF) through the project DEEPLEE (01IW17001).
year: 2020
sdg1–sdg17: all false except sdg3 (true)
ID: daille-2003-conceptual
url: https://aclanthology.org/W03-1802
title: Conceptual Structuring through Term Variations
abstract: Term extraction systems are now an integral part of the compiling of specialized dictionaries and updating of term banks. In this paper, we present a term detection approach that discovers, structures, and infers conceptual relationships between terms for French. Conceptual relationships are deduced from specific types of term variations, morphological and syntagmatic, and are expressed through lexical functions. The linguistic precision of the conceptual structuring through morphological variations is 95%.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2003
sdg1–sdg17: all false
ID: tokunaga-etal-2005-automatic
url: https://aclanthology.org/I05-1010
title: Automatic Discovery of Attribute Words from Web Documents
abstract: We propose a method of acquiring attribute words for a wide range of objects from Japanese Web documents. The method is a simple unsupervised method that utilizes the statistics of words, lexico-syntactic patterns, and HTML tags. To evaluate the attribute words, we also establish criteria and a procedure based on question-answerability about the candidate word.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2005
sdg1–sdg17: all false
ID: gomez-1982-towards
url: https://aclanthology.org/P82-1006
title: Towards a Theory of Comprehension of Declarative Contexts
abstract: An outline of a theory of comprehension of declarative contexts is presented. The main aspect of the theory being developed is based on Kant's distinction between concepts as rules (we have called them conceptual specialists) and concepts as an abstract representation (schemata, frames). Comprehension is viewed as a process dependent on the conceptual specialists (they contain the inferential knowledge), the schemata or frames (they contain the declarative knowledge), and a parser. The function of the parser is to produce a segmentation of the sentences in a case frame structure, thus determining the meaning of prepositions, polysemous verbs, noun groups, etc. The function of this parser is not to produce an output to be interpreted by semantic routines or an interpreter, but to start the parsing process and proceed until a concept relevant to the theme of the text is recognized. Then the concept takes control of the comprehension process, overriding the lower level linguistic process. Hence comprehension is viewed as a process in which high level sources of knowledge (concepts) override lower level linguistic processes.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This research was supported by the Air Force Office of Scientific Research under contract F49620-79-0152, and was done in part while the author was a member of the AI group at the Ohio State University. I would like to thank Amar Mukhopadhyay for reading and providing constructive comments on drafts of this paper, and Mrs. Robin Cone for her wonderful work in typing it.
year: 1982
sdg1–sdg17: all false
cassani-etal-2015-distributional
https://aclanthology.org/W15-2406
Which distributional cues help the most? Unsupervised contexts selection for lexical category acquisition
Starting from the distributional bootstrapping hypothesis, we propose an unsupervised model that selects the most useful distributional information according to its salience in the input, incorporating psycholinguistic evidence. With a supervised Parts-of-Speech tagging experiment, we provide preliminary results suggesting that the distributional contexts extracted by our model yield similar performances as compared to current approaches from the literature, with a gain in psychological plausibility. We also introduce a more principled way to evaluate the effectiveness of distributional contexts in helping learners to group words in syntactic categories.
false
[]
[]
null
null
null
The presented research was supported by a BOF/TOP grant (ID 29072) of the Research Council of the University of Antwerp.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
grois-2005-learning
https://aclanthology.org/P05-2015
Learning Strategies for Open-Domain Natural Language Question Answering
This work presents a model for learning inference procedures for story comprehension through inductive generalization and reinforcement learning, based on classified examples. The learned inference procedures (or strategies) are represented as sequences of transformation rules. The approach is compared to three prior systems, and experimental results are presented demonstrating the efficacy of the model.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
moreno-etal-2002-speechdat
http://www.lrec-conf.org/proceedings/lrec2002/pdf/269.pdf
SpeechDat across all America: SALA II
SALA II is a project co-sponsored by several companies that focuses on collecting linguistic data dedicated for training speaker independent speech recognizers for mobile/cellular network telephone applications. The goal of the project is to produce SpeechDat-like databases in all the significant languages and dialects spoken across Latin America, US and Canada. Utterances will be recorded directly from calls made from cellular telephones and are composed of read text and answers to specific questions. The goal of the project should be reached within the year 2003.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pham-etal-2016-convolutional
https://aclanthology.org/D16-1123
Convolutional Neural Network Language Models
Convolutional Neural Networks (CNNs) have shown to yield very strong results in several Computer Vision tasks. Their application to language has received much less attention, and it has mainly focused on static classification tasks, such as sentence classification for Sentiment Analysis or relation extraction. In this work, we study the application of CNNs to language modeling, a dynamic, sequential prediction task that needs models to capture local as well as long-range dependency information. Our contribution is twofold. First, we show that CNNs achieve 11-26% better absolute performance than feed-forward neural language models, demonstrating their potential for language representation even in sequential tasks. As for recurrent models, our model outperforms RNNs but is below state of the art LSTM models. Second, we gain some understanding of the behavior of the model, showing that CNNs in language act as feature detectors at a high level of abstraction, like in Computer Vision, and that the model can profitably use information from as far as 16 words before the target.
false
[]
[]
null
null
null
We thank Marco Baroni and three anonymous reviewers for fruitful feedback. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 655577 (LOVe); ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES) and the Erasmus Mundus Scholarship for Joint Master Programs. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used in our research.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
padmakumar-he-2021-unsupervised
https://aclanthology.org/2021.eacl-main.213
Unsupervised Extractive Summarization using Pointwise Mutual Information
Unsupervised approaches to extractive summarization usually rely on a notion of sentence importance defined by the semantic similarity between a sentence and the document. We propose new metrics of relevance and redundancy using pointwise mutual information (PMI) between sentences, which can be easily computed by a pre-trained language model. Intuitively, a relevant sentence allows readers to infer the document content (high PMI with the document), and a redundant sentence can be inferred from the summary (high PMI with the summary). We then develop a greedy sentence selection algorithm to maximize relevance and minimize redundancy of extracted sentences. We show that our method outperforms similarity-based methods on datasets in a range of domains including news, medical journal articles, and personal anecdotes.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
swanson-etal-2020-rationalizing
https://aclanthology.org/2020.acl-main.496
Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport
Selecting input features of top relevance has become a popular method for building self-explaining models. In this work, we extend this selective rationalization approach to text matching, where the goal is to jointly select and align text pieces, such as tokens or sentences, as a justification for the downstream prediction. Our approach employs optimal transport (OT) to find a minimal cost alignment between the inputs. However, directly applying OT often produces dense and therefore uninterpretable alignments. To overcome this limitation, we introduce novel constrained variants of the OT problem that result in highly sparse alignments with controllable sparsity. Our model is end-to-end differentiable using the Sinkhorn algorithm for OT and can be trained without any alignment annotations. We evaluate our model on the Stack-Exchange, MultiNews, e-SNLI, and MultiRC datasets. Our model achieves very sparse rationale selections with high fidelity while preserving prediction accuracy compared to strong attention baseline models. * Denotes equal contribution.
false
[]
[]
null
null
null
We thank Jesse Michel, Derek Chen, Yi Yang, and the anonymous reviewers for their valuable discussions. We thank Sam Altschul, Derek Chen, Amit Ganatra, Alex Lin, James Mullenbach, Jen Seale, Siddharth Varia, and Lei Xu for providing the human evaluation.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
negi-buitelaar-2015-curse
https://aclanthology.org/W15-0115
Curse or Boon? Presence of Subjunctive Mood in Opinionated Text
In addition to the expression of positive and negative sentiments in the reviews, customers often tend to express wishes and suggestions regarding improvements in a product/service, which could be worth extracting. Subjunctive mood is often present in sentences which speak about a possibility or action that has not yet occurred. While this phenomenon poses challenges to the identification of positive and negative sentiments hidden in a text, it can be helpful to identify wishes and suggestions. In this paper, we extract features from a small dataset of subjunctive mood, and use those features to identify wishes and suggestions in opinionated text. Our study validates that subjunctive features can be good features for the detection of wishes. However, with the given dataset, such features did not perform well for suggestion detection.
false
[]
[]
null
null
null
This work has been funded by the the European Union's Horizon 2020 programme under grant agreement No 644632 MixedEmotions, and Science Foundation Ireland under Grant Number SFI/12/RC/2289.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sarkar-bandyopadhyay-2008-design
https://aclanthology.org/I08-3012
Design of a Rule-based Stemmer for Natural Language Text in Bengali
This paper presents a rule-based approach for finding out the stems from text in Bengali, a resource-poor language. It starts by introducing the concept of orthographic syllable, the basic orthographic unit of Bengali. Then it discusses the morphological structure of the tokens for different parts of speech, formalizes the inflection rule constructs and formulates a quantitative ranking measure for potential candidate stems of a token. These concepts are applied in the design and implementation of an extensible architecture of a stemmer system for Bengali text. The accuracy of the system is calculated to be ~89% and above.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dobrowolski-etal-2021-samsung
https://aclanthology.org/2021.wat-1.27
Samsung R\&D Institute Poland submission to WAT 2021 Indic Language Multilingual Task
This paper describes the submission to the WAT 2021 Indic Language Multilingual Task by Samsung R&D Institute Poland. The task covered translation between 10 Indic Languages (Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil and Telugu) and English. We combined a variety of techniques: transliteration, filtering, backtranslation, domain adaptation, knowledge distillation and finally ensembling of NMT models. We applied an effective approach to low-resource training that consists of pretraining on backtranslations and tuning on parallel corpora. We experimented with two different domain-adaptation techniques which significantly improved translation quality when applied to monolingual corpora. We researched and applied a novel approach for finding the best hyperparameters for ensembling a number of translation models. All techniques combined gave significant improvement of up to +8 BLEU over baseline results. The quality of the models has been confirmed by the human evaluation where SRPOL models scored best for all 5 manually evaluated languages.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
boggild-andersen-1990-valence
https://aclanthology.org/W89-0113
Valence Frames Used for Syntactic Disambiguation in the EUROTRA-DK Model
The EEC Machine Translation Programme EUROTRA is a multilingual, transfer-based, module-structured machine translation project. The result of the analysis, the interface structure, is based on a dependency grammar combined with a frame theory. The valency frames, specified in the lexicon, enable the grammar to analyse or generate the sentences. If information about the syntactical structure of the slot fillers is added to the lexicon, certain erroneous analyses may be discarded exclusively on a syntactical basis, and complex transfer may in some cases be avoided. Where semantic and syntactical differences are related, problems of ambiguity may be solved as well. This will be exemplified, and the frame theory will be explained. The paper concentrates on the valency of verbs; according to the EUROTRA theory the verb is the governor of a sentence.
false
[]
[]
null
null
null
null
1990
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zuber-1982-explicit
https://aclanthology.org/C82-2073
Explicit Sentences and Syntactic Complexity
(2) Leslie is a student (3) Leslie is a woman and Leslie is a student It is clear however that neither (2) nor (3) can be considered as an "exact" translation of (1). Sentence (2) does not carry the information that Leslie is a woman and sentence (3) does not carry this information in the same way as (1); the fact that Leslie is a woman is presupposed by (1) whereas it is asserted by (3). In other words sentence (3) is more explicit than sentence (1). Following Keenan (1973) we will say that a sentence S is more explicit than a sentence T iff S and T have the same consequences but some presupposition of T is an assertion of S. Not only translations can be more explicit. For instance (5) is more explicit than (4) since (4) presupposes (6) whereas (5) asserts (6):
false
[]
[]
null
null
null
null
1982
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nagaraju-etal-2017-rule
https://aclanthology.org/W17-7550
Rule Based Approch of Clause Boundary Identification in Telugu
One of the major challenges in Natural Language Processing is identifying Clauses and their Boundaries in Computational Linguistics. This paper attempts to develop an Automatic Clause Boundary Identifier (CBI) for Telugu language. The language Telugu belongs to South-Central Dravidian language family with features of head-final, left-branching and morphologically agglutinative in nature (Bh. Krishnamurti, 2003). A large amount of corpus data is studied to frame the rules for identifying clause boundaries and these rules are trained to a computational algorithm and also discussed some of the issues in identifying clause boundaries. A clause boundary annotated corpus can be developed from raw text which can be used to train a machine learning algorithm which in turn helps in development of a Hybrid Clause Boundary Identification Tool for Telugu. Its implementation and evaluation are discussed in this paper.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ye-ling-2019-distant
https://aclanthology.org/N19-1288
Distant Supervision Relation Extraction with Intra-Bag and Inter-Bag Attentions
This paper presents a neural relation extraction method to deal with the noisy training data generated by distant supervision. Previous studies mainly focus on sentence-level de-noising by designing neural networks with intra-bag attentions. In this paper, both intra-bag and inter-bag attentions are considered in order to deal with the noise at sentence-level and bag-level respectively. First, relation-aware bag representations are calculated by weighting sentence embeddings using intra-bag attentions. Here, each possible relation is utilized as the query for attention calculation instead of only using the target relation in conventional methods. Furthermore, the representation of a group of bags in the training set which share the same relation label is calculated by weighting bag representations using a similarity-based inter-bag attention module. Finally, a bag group is utilized as a training sample when building our relation extractor. Experimental results on the New York Times dataset demonstrate the effectiveness of our proposed intra-bag and inter-bag attention modules. Our method also achieves better relation extraction accuracy than state-of-the-art methods on this dataset.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their valuable comments.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fan-etal-2015-hpsg
https://aclanthology.org/W15-3303
An HPSG-based Shared-Grammar for the Chinese Languages: ZHONG [|]
This paper introduces our attempts to model the Chinese language using HPSG and MRS. Chinese refers to a family of various languages including Mandarin Chinese, Cantonese, Min, etc. These languages share a large amount of structure, though they may differ in orthography, lexicon, and syntax. To model these, we are building a family of grammars: ZHONG [|]. This grammar contains instantiations of various Chinese languages, sharing descriptions where possible. Currently we have prototype grammars for Cantonese and Mandarin in both simplified and traditional script, all based on a common core. The grammars also have facilities for robust parsing, sentence generation, and unknown word handling.
false
[]
[]
null
null
null
We would like to express special thanks to Justin Chunlei Yang and Dan Flickinger for their enormous work on ManGO, which our current grammar is based on. In addition, we received much inspiration from Yi Zhang and Rui Wang and their Mandarin Chinese Grammar. We are grateful to Michael Wayne Goodman, Luis Mortado da Costa, Bo Chen, Joanna Sio Ut Seong, Shan Wang, František Kratochvíl, Huizhen Wang, Wenjie Wang, Giulia Bonansinga, David Moeljadi, Tuấn Anh Lê, Woodley Packard, Leslie Lee, and Jong-Bok Kim for their help and comments. Valuable comments from four anonymous reviewers are also much appreciated. Of course, we are solely responsible for all the remaining errors and infelicities. This research was supported in part by
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
he-etal-2019-pointer
https://aclanthology.org/U19-1013
A Pointer Network Architecture for Context-Dependent Semantic Parsing
Semantic parsing targets at mapping human utterances into structured meaning representations, such as logical forms, programming snippets, SQL queries etc. In this work, we focus on logical form generation, which is extracted from an automated email assistant system. Since this task is dialogue-oriented, information across utterances must be well handled. Furthermore, certain inputs from users are used as arguments for the logical form, which requires a parser to distinguish the functional words and content words. Hence, an intelligent parser should be able to switch between generation mode and copy mode. In order to address the aforementioned issues, we equip the vanilla seq2seq model with a pointer network and a context-dependent architecture to generate more accurate logical forms. Our model achieves state-of-the-art performance on the email assistant task.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
munkhdalai-yu-2017-neural-semantic
https://aclanthology.org/E17-1038
Neural Semantic Encoders
We present a memory augmented neural network for natural language understanding: Neural Semantic Encoders. NSE is equipped with a novel memory update rule and has a variable sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations. NSE can also access multiple and shared memories. In this paper, we demonstrated the effectiveness and the flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis and machine translation where NSE achieved state-of-the-art performance when evaluated on publicly available benchmarks. For example, our shared-memory model showed an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU.
false
[]
[]
null
null
null
We would like to thank Abhyuday Jagannatha and the anonymous reviewers for their insightful comments and suggestions. This work was supported in part by the grant HL125089 from the National Institutes of Health (NIH). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
han-etal-2015-chinese
https://aclanthology.org/W15-3103
Chinese Named Entity Recognition with Graph-based Semi-supervised Learning Model
Named entity recognition (NER) plays an important role in the NLP literature. The traditional methods tend to employ large annotated corpora to achieve a high performance. Different from many semi-supervised learning models for the NER task, in this paper, we employ the graph-based semi-supervised learning (GBSSL) method to utilize the freely available unlabeled data. The experiment shows that the unlabeled corpus can enhance the state-of-the-art conditional random field (CRF) learning model and has potential to improve the tagging accuracy even though the margin is a little weak and not satisfying in current experiments.
false
[]
[]
null
null
null
This work was supported by the Research Committee of the University of Macau (Grant No. MYRG2015-00175-FST and MYRG2015-00188-FST) and the Science and Technology Development Fund of Macau (Grant No. 057/2014/A). The first author was supported by
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
leone-etal-2020-building
https://aclanthology.org/2020.lrec-1.366
Building Semantic Grams of Human Knowledge
Word senses are typically defined with textual definitions for human consumption and, in computational lexicons, put in context via lexical-semantic relations such as synonymy, antonymy, hypernymy, etc. In this paper we embrace a radically different paradigm that provides a slot-filler structure, called "semagram", to define the meaning of words in terms of their prototypical semantic information. We propose a semagram-based knowledge model composed of 26 semantic relationships which integrates features from a range of different sources, such as computational lexicons and property norms. We describe an annotation exercise regarding 50 concepts over 10 different categories and put forward different automated approaches for extending the semagram base to thousands of concepts. We finally evaluate the impact of the proposed resource on a semantic similarity task, showing significant improvements over state-of-the-art word embeddings. We release the complete semagram base and other data at http://nlp.uniroma1.it/semagrams.
false
[]
[]
null
null
null
The last author gratefully acknowledges the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union's Horizon 2020 research and innovation programme.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kotonya-etal-2021-graph
https://aclanthology.org/2021.fever-1.3
Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification
This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset. We experiment with both a multi-task learning paradigm to jointly train a graph attention network for both the task of evidence extraction and veracity prediction, as well as a single objective graph model for solely learning veracity prediction and separate evidence extraction. In both instances, we employ a framework for per-cell linearization of tabular evidence, thus allowing us to treat evidence from tables as sequences. The templates we employ for linearizing tables capture the context as well as the content of table data. We furthermore provide a case study to show the interpretability of our approach. Our best performing system achieves a FEVEROUS score of 0.23 and 53% label accuracy on the blind test data. * Work done while the author was an intern at J.P. Morgan AI Research.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
molla-van-zaanen-2005-learning
https://aclanthology.org/U05-1005
Learning of Graph Rules for Question Answering
AnswerFinder is a framework for the development of question-answering systems. AnswerFinder is currently being used to test the applicability of graph representations for the detection and extraction of answers. In this paper we briefly describe AnswerFinder and introduce our method to learn graph patterns that link questions with their corresponding answers in arbitrary sentences. The method is based on the translation of the logical forms of questions and answer sentences into graphs, and the application of operations based on graph overlaps and the construction of paths within graphs. The method is general and can be applied to any graph-based representation of the contents of questions and answers.
false
[]
[]
null
null
null
This research is funded by the Australian Research Council, ARC Discovery Grant no DP0450750.
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shao-etal-2018-greedy
https://aclanthology.org/D18-1510
Greedy Search with Probabilistic N-gram Matching for Neural Machine Translation
Neural machine translation (NMT) models are usually trained with the word-level loss using the teacher forcing algorithm, which not only evaluates the translation improperly but also suffers from exposure bias. Sequence-level training under the reinforcement framework can mitigate the problems of the word-level loss, but its performance is unstable due to the high variance of the gradient estimation. On these grounds, we present a method with a differentiable sequence-level training objective based on probabilistic n-gram matching which can avoid the reinforcement framework. In addition, this method performs greedy search in the training which uses the predicted words as context just as at inference to alleviate the problem of exposure bias. Experiment results on the NIST Chinese-to-English translation tasks show that our method significantly outperforms the reinforcement-based algorithms and achieves an improvement of 1.5 BLEU points on average over a strong baseline system.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their insightful comments. This work was supported by the National Natural Science Foundation of China (NSFC) under the project NO.61472428 and the project NO. 61662077.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hall-nivre-2006-generic
https://aclanthology.org/W05-1708
A generic architecture for data-driven dependency parsing
We present a software architecture for data-driven dependency parsing of unrestricted natural language text, which achieves a strict modularization of parsing algorithm, feature model and learning method such that these parameters can be varied independently. The design has been realized in MaltParser, which supports several parsing algorithms and learning methods, for which complex feature models can be defined in a special description language.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jiang-etal-2018-supervised
https://aclanthology.org/P18-1252
Supervised Treebank Conversion: Data and Approaches
Treebank conversion is a straightforward and effective way to exploit various heterogeneous treebanks for boosting parsing accuracy. However, previous work mainly focuses on unsupervised treebank conversion and makes little progress due to the lack of manually labeled data where each sentence has two syntactic trees complying with two different guidelines at the same time, referred to as bi-tree aligned data.
false
[]
[]
null
null
null
The authors would like to thank the anonymous reviewers for the helpful comments. We are greatly grateful to all participants in data annotation for their hard work. We also thank Guodong Zhou and Wenliang Chen for the helpful discussions, and Meishan Zhang for his help on the re-implementation of the Biaffine Parser. This work was supported by National Natural Science Foundation of China (Grant No. 61525205, 61502325 61432013), and was also partially supported by the joint research project of Alibaba and Soochow University.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sulem-etal-2015-conceptual
https://aclanthology.org/W15-3502
Conceptual Annotations Preserve Structure Across Translations: A French-English Case Study
Divergence of syntactic structures between languages constitutes a major challenge in using linguistic structure in Machine Translation (MT) systems. Here, we examine the potential of semantic structures. While semantic annotation is appealing as a source of cross-linguistically stable structures, little has been accomplished in demonstrating this stability through a detailed corpus study. In this paper, we experiment with the UCCA conceptual-cognitive annotation scheme in an English-French case study. First, we show that UCCA can be used to annotate French, through a systematic type-level analysis of the major French grammatical phenomena. Second, we annotate a parallel English-French corpus with UCCA, and quantify the similarity of the structures on both sides. Results show a high degree of stability across translations, supporting the usage of semantic annotations over syntactic ones in structure-aware MT systems.
false
[]
[]
null
null
null
We would like to thank Roy Schwartz for helpful comments. This research was supported by the Language, Logic and Cognition Center (LLCC) at the Hebrew University of Jerusalem (for the first author) and by the ERC Advanced Fellowship 249520 GRAMPLUS (for the second author).
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kalouli-etal-2021-really
https://aclanthology.org/2021.iwcs-1.13
Is that really a question? Going beyond factoid questions in NLP
Research in NLP has mainly focused on factoid questions, with the goal of finding quick and reliable ways of matching a query to an answer. However, human discourse involves more than that: it contains non-canonical questions deployed to achieve specific communicative goals. In this paper, we investigate this under-studied aspect of NLP by introducing a targeted task, creating an appropriate corpus for the task and providing baseline models of diverse nature. With this, we are also able to generate useful insights on the task and open the way for future research in this direction.
false
[]
[]
null
null
null
We thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for funding within project BU 1806/10-2 "Questions Visualized" of the FOR2111 "Questions at the Interfaces". We also thank our annotators, as well as the anonymous reviewers for their helpful comments.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
malmasi-dras-2015-large
https://aclanthology.org/N15-1160
Large-Scale Native Language Identification with Cross-Corpus Evaluation
We present a large-scale Native Language Identification (NLI) experiment on new data, with a focus on cross-corpus evaluation to identify corpus-and genre-independent language transfer features. We test a new corpus and show it is comparable to other NLI corpora and suitable for this task. Cross-corpus evaluation on two large corpora achieves good accuracy and evidences the existence of reliable language transfer features, but lower performance also suggests that NLI models are not completely portable across corpora. Finally, we present a brief case study of features distinguishing Japanese learners' English writing, demonstrating the presence of cross-corpus and cross-genre language transfer features that are highly applicable to SLA and ESL research.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sultan-etal-2014-dls
https://aclanthology.org/S14-2039
DLS@CU: Sentence Similarity from Word Alignment
We present an algorithm for computing the semantic similarity between two sentences. It adopts the hypothesis that semantic similarity is a monotonically increasing function of the degree to which (1) the two sentences contain similar semantic units, and (2) such units occur in similar semantic contexts. With a simplistic operationalization of the notion of semantic units with individual words, we experimentally show that this hypothesis can lead to state-of-the-art results for sentence-level semantic similarity. At the SemEval 2014 STS task (task 10), our system demonstrated the best performance (measured by correlation with human annotations) among 38 system runs.
false
[]
[]
null
null
null
This material is based in part upon work supported by the National Science Foundation under Grant
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
surcin-etal-2005-evaluation
https://aclanthology.org/2005.mtsummit-papers.16
Evaluation of Machine Translation with Predictive Metrics beyond BLEU/NIST: CESTA Evaluation Campaign \# 1
In this paper, we report on the results of a full-size evaluation campaign of various MT systems. This campaign is novel compared to the classical DARPA/NIST MT evaluation campaigns in the sense that French is the target language, and that it includes an experiment of meta-evaluation of various metrics claiming to better predict different attributes of translation quality. We first describe the campaign, its context, its protocol and the data we used. Then we summarise the results obtained by the participating systems and discuss the meta-evaluation of the metrics used.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
choi-2016-sketch
https://aclanthology.org/W16-6607
Sketch-to-Text Generation: Toward Contextual, Creative, and Coherent Composition
The need for natural language generation (NLG) arises in diverse, multimodal contexts: ranging from describing stories captured in a photograph, to instructing how to prepare a dish using a given set of ingredients, and to composing a sonnet for a given topic phrase. One common challenge among these types of NLG tasks is that the generation model often needs to work with relatively loose semantic correspondence between the input prompt and the desired output text. For example, an image caption that appeals to readers may require pragmatic interpretation of the scene beyond the literal content of the image. Similarly, composing a new recipe requires working out detailed how-to instructions that are not directly specified by the given set of ingredient names. In this talk, I will discuss our recent approaches to generating contextual, creative, and coherent text given a relatively lean and noisy input prompt with respect to three NLG tasks: (1) creative image captioning, (2) recipe composition, and (3) sonnet composition. A recurring theme is that our models learn most of the end-to-end mappings between the input and the output directly from data without requiring manual annotations for intermediate meaning representations. I will conclude the talk by discussing the strengths and the limitations of these types of data-driven approaches and point to avenues for future research.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gardent-2011-generation
https://aclanthology.org/2011.jeptalnrecital-invite.3
G\'en\'eration de phrase : entr\'ee, algorithmes et applications (Sentence Generation: Input, Algorithms and Applications)
Sentence Generation maps abstract linguistic representations into sentences. A necessary part of any natural language generation system, sentence generation has also recently received increasing attention in applications such as transfer based machine translation (cf. the LOGON project) and natural language interfaces to knowledge bases (e.g., to verbalise, to author and/or to query ontologies).
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhu-etal-2020-language
https://aclanthology.org/2020.acl-main.150
Language-aware Interlingua for Multilingual Neural Machine Translation
Multilingual neural machine translation (NMT) has led to impressive accuracy improvements in low-resource scenarios by sharing common linguistic information across languages. However, the traditional multilingual model fails to capture the diversity and specificity of different languages, resulting in inferior performance compared with individual models that are sufficiently trained. In this paper, we incorporate a language-aware interlingua into the Encoder-Decoder architecture. The interlingual network enables the model to learn a language-independent representation from the semantic spaces of different languages, while still allowing for language-specific specialization of a particular language-pair. Experiments show that our proposed method achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with strong individual models.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sakaguchi-etal-2016-reassessing
https://aclanthology.org/Q16-1013
Reassessing the Goals of Grammatical Error Correction: Fluency Instead of Grammaticality
The field of grammatical error correction (GEC) has grown substantially in recent years, with research directed at both evaluation metrics and improved system performance against those metrics. One unvisited assumption, however, is the reliance of GEC evaluation on error-coded corpora, which contain specific labeled corrections. We examine current practices and show that GEC's reliance on such corpora unnaturally constrains annotation and automatic evaluation, resulting in (a) sentences that do not sound acceptable to native speakers and (b) system rankings that do not correlate with human judgments. In light of this, we propose an alternate approach that jettisons costly error coding in favor of unannotated, whole-sentence rewrites. We compare the performance of existing metrics over different gold-standard annotations, and show that automatic evaluation with our new annotation scheme has very strong correlation with expert rankings (ρ = 0.82). As a result, we advocate for a fundamental and necessary shift in the goal of GEC, from correcting small, labeled error types, to producing text that has native fluency.
false
[]
[]
null
null
null
We would like to thank Christopher Bryant, Mariano Felice, Roman Grundkiewicz and Marcin Junczys-Dowmunt for providing data and code. We would also like to thank the TACL editor, Chris Quirk, and the three anonymous reviewers for their comments and feedback. This material is based upon work partially supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1232825.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kirchner-2020-insights
https://aclanthology.org/2020.eamt-1.38
Insights from Gathering MT Productivity Metrics at Scale
In this paper, we describe Dell EMC's framework to automatically collect MT-related productivity metrics from a large translation supply chain over an extended period of time, the characteristics and volume of the gathered data, and the insights from analyzing the data to guide our MT strategy. Aligning tools, processes and people required decisions, concessions and contributions from Dell management, technology providers, tool implementors, LSPs and linguists to harvest data at scale over 2+ years while Dell EMC migrated from customized SMT to generic NMT and then customized NMT systems.
false
[]
[]
null
null
null
The following individuals and organizations were instrumental in creating an environment to harvest MT metrics automatically for Dell EMC: Nancy Anderson, head of the EMC translation team at the time supported the proposal to take translations "online". She negotiated with our LSPs the necessary process and tools concessions. Keith Brazil and his team at Translations.com optimized GlobalLink as a collaborative platform for a multi-vendor supply chain. Jaap van der Meer proposed an integration with the TAUS DQF Dashboard. TAUS and
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yoo-2001-floating
https://aclanthology.org/Y01-1020
Floating Quantifiers and Lexical Specification of Quantifier Retrieval
Floating quantifiers (FQs) in English exhibit both universal and language-specific properties, and this paper shows that such syntactic and semantic characteristics can be explained in terms of a constraint-based, lexical approach to the construction within the framework of Head-Driven Phrase Structure Grammar (HPSG). Based on the assumption that FQs are base-generated VP modifiers, this paper proposes an account in which the semantic contribution of FQs consists of a "lexically retrieved" universal quantifier taking scope over the VP meaning.
false
[]
[]
null
null
null
null
2001
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rogers-1996-model
https://aclanthology.org/P96-1002
A Model-Theoretic Framework for Theories of Syntax
A natural next step in the evolution of constraint-based grammar formalisms from rewriting formalisms is to abstract fully away from the details of the grammar mechanism-to express syntactic theories purely in terms of the properties of the class of structures they license. By focusing on the structural properties of languages rather than on mechanisms for generating or checking structures that exhibit those properties, this model-theoretic approach can offer simpler and significantly clearer expression of theories and can potentially provide a uniform formalization, allowing disparate theories to be compared on the basis of those properties. We discuss L2K,P, a monadic second-order logical framework for such an approach to syntax that has the distinctive virtue of being superficially expressive-supporting direct statement of most linguistically significant syntactic properties-but having well-defined strong generative capacity-languages are definable in L2K,P iff they are strongly context-free. We draw examples from the realms of GPSG and GB.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wolfe-etal-2015-predicate
https://aclanthology.org/N15-1002
Predicate Argument Alignment using a Global Coherence Model
We present a joint model for predicate argument alignment. We leverage multiple sources of semantic information, including temporal ordering constraints between events. These are combined in a max-margin framework to find a globally consistent view of entities and events across multiple documents, which leads to improvements over a very strong local baseline.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hashimoto-etal-2019-high
https://aclanthology.org/W19-5212
A High-Quality Multilingual Dataset for Structured Documentation Translation
This paper presents a high-quality multilingual dataset for the documentation domain to advance research on localization of structured text. Unlike widely-used datasets for translation of plain text, we collect XML-structured parallel text segments from the online documentation for an enterprise software platform. These Web pages have been professionally translated from English into 16 languages and maintained by domain experts, and around 100,000 text segments are available for each language pair. We build and evaluate translation models for seven target languages from English, with several different copy mechanisms and an XML-constrained beam search. We also experiment with a non-English pair to show that our dataset has the potential to explicitly enable 17 × 16 translation settings. Our experiments show that learning to translate with the XML tags improves translation accuracy, and the beam search accurately generates XML structures. We also discuss tradeoffs of using the copy mechanisms by focusing on translation of numerical words and named entities. We further provide a detailed human analysis of gaps between the model output and human translations for real-world applications, including suitability for post-editing.
false
[]
[]
null
null
null
We thank anonymous reviewers and Xi Victoria Lin for their helpful feedbacks.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ahrenberg-2019-towards
https://aclanthology.org/W19-8011
Towards an adequate account of parataxis in Universal Dependencies
The parataxis relation as defined for Universal Dependencies 2.0 is general and, for this reason, sometimes hard to distinguish from competing analyses, such as coordination, conj, or apposition, appos. The specific subtypes that are listed for parataxis are also quite different in character. In this study we first show that the actual practice by UD-annotators is varied, using the parallel UD (PUD-) treebanks as data. We then review the current definitions and guidelines and suggest improvements.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
atterer-schlangen-2009-rubisc
https://aclanthology.org/W09-0509
RUBISC - a Robust Unification-Based Incremental Semantic Chunker
We present RUBISC, a new incremental chunker that can perform incremental slot filling and revising as it receives a stream of words. Slot values can influence each other via a unification mechanism. Chunks correspond to sense units, and end-of-sentence detection is done incrementally based on a notion of semantic/pragmatic completeness. One of RUBISC's main fields of application is in dialogue systems where it can contribute to responsiveness and hence naturalness, because it can provide a partial or complete semantics of an utterance while the speaker is still speaking. The chunker is evaluated on a German transcribed speech corpus and achieves a concept error rate of 43.3% and an F-Score of 81.5.
false
[]
[]
null
null
null
This work was funded by the DFG Emmy-Noether grant SCHL845/3-1. Many thanks to Ewan Klein for valuable comments. All errors are of course ours.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cook-etal-2016-dictionary
https://aclanthology.org/W16-3006
A dictionary- and rule-based system for identification of bacteria and habitats in text
The number of scientific papers published each year is growing exponentially and given the rate of this growth, automated information extraction is needed to efficiently extract information from this corpus. A critical first step in this process is to accurately recognize the names of entities in text. Previous efforts, such as SPECIES, have identified bacteria strain names, among other taxonomic groups, but have been limited to those names present in NCBI taxonomy. We have implemented a dictionary-based named entity tagger, TagIt, that is followed by a rule based expansion system to identify bacteria strain names and habitats and resolve them to the closest match possible in the NCBI taxonomy and the OntoBiotope ontology respectively. The rule based post processing steps expand acronyms, and extend strain names according to a set of rules, which captures additional aliases and strains that are not present in the dictionary. TagIt has the best performance out of three entries to BioNLP-ST BB3 cat+ner, with an overall SER of 0.628 on the independent test set.
true
[]
[]
Good Health and Well-Being
null
null
EU BON (EU FP7 Contract No. 308454 program), the Micro B3 Project (287589), the Earth System Science and Environmental Management COST Action (ES1103) and the Novo Nordisk Foundation (NNF14CC0001).
2016
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-etal-2021-advpicker
https://aclanthology.org/2021.acl-long.61
AdvPicker: Effectively Leveraging Unlabeled Data via Adversarial Discriminator for Cross-Lingual NER
Neural methods have been shown to achieve high performance in Named Entity Recognition (NER), but rely on costly high-quality labeled data for training, which is not always available across languages. While previous works have shown that unlabeled data in a target language can be used to improve cross-lingual model performance, we propose a novel adversarial approach (AdvPicker) to better leverage such data and further improve results. We design an adversarial learning framework in which an encoder learns entity domain knowledge from labeled source-language data and better shared features are captured via adversarial training-where a discriminator selects less language-dependent target-language data via similarity to the source language. Experimental results on standard benchmark datasets well demonstrate that the proposed method benefits strongly from this data selection process and outperforms existing state-of-the-art methods; without requiring any additional external resources (e.g., gazetteers or via machine translation).
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
meister-cotterell-2021-language
https://aclanthology.org/2021.acl-long.414
Language Model Evaluation Beyond Perplexity
We propose an alternate approach to quantifying how well language models learn natural language: we ask how well they match the statistical tendencies of natural language. To answer this question, we analyze whether text generated from language models exhibits the statistical tendencies present in the human-generated text on which they were trained. We provide a framework-paired with significance tests-for evaluating the fit of language models to these trends. We find that neural language models appear to learn only a subset of the tendencies considered, but align much more closely with empirical trends than proposed theoretical distributions (when present). Further, the fit to different distributions is highly-dependent on both model architecture and generation strategy. As concrete examples, text generated under the nucleus sampling scheme adheres more closely to the type-token relationship of natural language than text produced using standard ancestral sampling; text from LSTMs reflects the natural language distributions over length, stopwords, and symbols surprisingly well.
false
[]
[]
null
null
null
We thank Adhi Kuncoro for helpful discussion and feedback in the middle stages of our work and Tiago Pimentel, Jason Wei, and our anonymous reviewers for insightful feedback on the manuscript. We additionally thank B. Bou for his concern.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cui-bollegala-2019-self
https://aclanthology.org/R19-1025
Self-Adaptation for Unsupervised Domain Adaptation
Lack of labelled data in the target domain for training is a common problem in domain adaptation. To overcome this problem, we propose a novel unsupervised domain adaptation method that combines projection and self-training based approaches. Using the labelled data from the source domain, we first learn a projection that maximises the distance among the nearest neighbours with opposite labels in the source domain. Next, we project the source domain labelled data using the learnt projection and train a classifier for the target class prediction. We then use the trained classifier to predict pseudo labels for the target domain unlabelled data. Finally, we learn a projection for the target domain as we did for the source domain using the pseudo-labelled target domain data, where we maximise the distance between nearest neighbours having opposite pseudo labels. Experiments on a standard benchmark dataset for domain adaptation show that the proposed method consistently outperforms numerous baselines and returns competitive results comparable to that of SOTA including self-training, tri-training, and neural adaptations.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
graham-etal-2015-accurate
https://aclanthology.org/N15-1124
Accurate Evaluation of Segment-level Machine Translation Metrics
Evaluation of segment-level machine translation metrics is currently hampered by: (1) low inter-annotator agreement levels in human assessments; (2) lack of an effective mechanism for evaluation of translations of equal quality; and (3) lack of methods of significance testing improvements over a baseline. In this paper, we provide solutions to each of these challenges and outline a new human evaluation methodology aimed specifically at assessment of segment-level metrics. We replicate the human evaluation component of WMT-13 and reveal that the current state-of-the-art performance of segment-level metrics is better than previously believed. Three segment-level metrics-METEOR, NLEPOR and SENTBLEU-MOSES-are found to correlate with human assessment at a level not significantly outperformed by any other metric in both the individual language pair assessment for Spanish-to-English and the aggregated set of 9 language pairs.
false
[]
[]
null
null
null
We wish to thank the anonymous reviewers for their valuable comments. This research was supported by funding from the Australian Research Council and Science Foundation Ireland (Grant 12/CE/12267).
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
augenstein-etal-2017-semeval
https://aclanthology.org/S17-2091
SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications
We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
We would like to thank Elsevier for supporting this shared task. Special thanks go to Ronald Daniel Jr. for his feedback on the task setup and Pontus Stenetorp for his advice on brat and shared task organisation.
2017
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
ramsay-field-2009-using
https://aclanthology.org/W09-3717
Using English for commonsense knowledge
The work reported here arises from an attempt to provide a body of simple information about diet and its effect on various common medical conditions. Expressing this knowledge in natural language has a number of advantages. It also raises a number of difficult issues. We will consider solutions, and partial solutions, to these issues below. 1 Commonsense knowledge Suppose you wanted to have a system that could provide advice about what you should and should not eat if you suffer from various common medical conditions. You might expect, at the very least, to be able to have dialogues like (1). (1) a. User: I am allergic to eggs. Computer: OK User: Should I eat pancakes Computer: No, because pancakes contain eggs, and eating things which contain eggs will make you ill if you are allergic to eggs. b. User: My son is very fat. Computer: OK User: Should he go swimming. Computer: Yes, because swimming is a form of exercise, and exercise is good for people who are overweight.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hirao-etal-2017-enumeration
https://aclanthology.org/E17-1037
Enumeration of Extractive Oracle Summaries
To analyze the limitations and the future directions of the extractive summarization paradigm, this paper proposes an Integer Linear Programming (ILP) formulation to obtain extractive oracle summaries in terms of ROUGE-n. We also propose an algorithm that enumerates all of the oracle summaries for a set of reference summaries to exploit F-measures that evaluate which system summaries contain how many sentences that are extracted as an oracle summary. Our experimental results obtained from Document Understanding Conference (DUC) corpora demonstrated the following: (1) room still exists to improve the performance of extractive summarization; (2) the F-measures derived from the enumerated oracle summaries have significantly stronger correlations with human judgment than those derived from single oracle summaries.
false
[]
[]
null
null
null
The authors thank three anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
okumura-hovy-1994-lexicon
https://aclanthology.org/1994.amta-1.23
Lexicon-to-Ontology Concept Association Using a Bilingual Dictionary
This paper describes a semi-automatic method for associating a Japanese lexicon with a semantic concept taxonomy called an ontology, using a Japanese-English bilingual dictionary as a "bridge". The ontology supports semantic processing in a knowledge-based machine translation system by providing a set of language-neutral symbols with semantic information. To put the ontology to use, lexical items of each language of interest must be linked to appropriate ontology items. The association of ontology items with lexical items of various languages is a process fraught with difficulty: since much of this work depends on the subjective decisions of human workers, large MT dictionaries tend to be subject to some dispersion and inconsistency. The problem we focus on here is how to associate concepts in the ontology with Japanese lexical entities by automatic methods, since it is too difficult to define adequately many concepts manually. We have designed three algorithms to associate a Japanese lexicon with the concepts of the ontology: the equivalent-word match, the argument match, and the example match.
false
[]
[]
null
null
null
We would like to thank Kevin Knight and Matthew Haines for their significant assistance with this work. We also appreciate Kazunori Muraki of NEC Labs. for his support.
1994
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
stanojevic-steedman-2021-formal
https://aclanthology.org/2021.cl-1.2
Formal Basis of a Language Universal
Steedman (2020) proposes as a formal universal of natural language grammar that grammatical permutations of the kind that have given rise to transformational rules are limited to a class known to mathematicians and computer scientists as the "separable" permutations. This class of permutations is exactly the class that can be expressed in combinatory categorial grammars (CCGs). The excluded non-separable permutations do in fact seem to be absent in a number of studies of crosslinguistic variation in word order in nominal and verbal constructions. The number of permutations that are separable grows in the number n of lexical elements in the construction as the Large Schröder Number S_{n-1}. Because that number grows much more slowly than the n! number of all permutations, this generalization is also of considerable practical interest for computational applications such as parsing and machine translation. The present article examines the mathematical and computational origins of this restriction, and the reason it is exactly captured in CCG without the imposition of any further constraints.
false
[]
[]
null
null
null
We are grateful to Peter Buneman, Shay Cohen, Paula Merlo, Chris Stone, Bonnie Webber, and the Referees for Computational Linguistics for helpful comments and advice. The work was supported by ERC Advanced Fellowship 742137 SEMANTAX.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
horacek-1997-generating
https://aclanthology.org/W97-1408
Generating Referential Descriptions in Multimedia Environments
All known algorithms dedicated to the generation of referential descriptions use natural language alone to accomplish this communicative goal. Motivated by some limitations underlying these algorithms and the resulting restrictions in their scope, we attempt to extend the basic schema of these procedures to multimedia environments, that is, to descriptions consisting of images and text. We discuss several issues in this enterprise, including the transfer of basic ingredients to images and the hereby reinterpretation of language-specific concepts, matters of choice in the generation process, and the extended application potential in some typical scenarios. Moreover, we sketch our intended area of application, the identification of a particular object in the large visualization of mathematical proofs, which has some characteristic properties of each of these scenarios. Our achievement lies in extending the scope of techniques for generating referential descriptions through the incorporation of multimedia components and in enhancing the application areas for these techniques.
false
[]
[]
null
null
null
The graphical proof visualization component by which the proof tree representations depicted in this paper are produced has been designed and implemented by Stephan Hess. Work on this component is going on.
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lopatkova-kettnerova-2016-alternations
https://aclanthology.org/W16-3804
Alternations: From Lexicon to Grammar And Back Again
An excellent example of a phenomenon bridging a lexicon and a grammar is provided by grammaticalized alternations (e.g., passivization, reflexivity, and reciprocity): these alternations represent productive grammatical processes which are, however, lexically determined. While grammaticalized alternations keep lexical meaning of verbs unchanged, they are usually characterized by various changes in their morphosyntactic structure. In this contribution, we demonstrate on the example of reciprocity and its representation in the valency lexicon of Czech verbs, VALLEX, how a linguistic description of complex (and still systemic) changes characteristic of grammaticalized alternations can benefit from an integration of grammatical rules into a valency lexicon. In contrast to other types of grammaticalized alternations, reciprocity in Czech has received relatively little attention although it closely interacts with various linguistic phenomena (e.g., with light verbs, diatheses, and reflexivity).
false
[]
[]
null
null
null
The work on this project was partially supported by the grant GA 15-09979S of the Grant Agency of the Czech Republic. This work has been using language resources developed, stored, and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
loaiciga-wehrli-2015-rule
https://aclanthology.org/W15-2512
Rule-Based Pronominal Anaphora Treatment for Machine Translation
In this paper we describe the rule-based MT system Its-2 developed at the University of Geneva and submitted for the shared task on pronoun translation organized within the Second DiscoMT Workshop. For improving pronoun translation, an Anaphora Resolution (AR) step based on Chomsky's Binding Theory and Hobbs' algorithm has been implemented. Since this strategy is currently restricted to 3rd person personal pronouns (i.e. they, it translated as elle, elles, il, ils only), absolute performance is affected. However, qualitative differences between the submitted system and a baseline without the AR procedure can be observed.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
harriett-1983-tools
https://aclanthology.org/1983.tc-1.2
The tools for the job: an overview
Today, 10 November 1983, is just fifty-two days away from 1984. If you have read George Orwell's famous novel, then you will know that he predicted the invasion of video screens which would monitor everything you do and say, in order to ensure that everyone was loyal to the State. If he walked round offices and homes today he could be forgiven for believing that his prediction, made back in 1949, had already come true. But he wasn't far from the truth, was he? If a video screen is not attached to every product, then a microchip is certain to be incorporated. Even filing systems use microprocessors today, so that at the touch of a button the document you require, one out of thousands, appears in front of you without you having to search for it. Technological development is marvellous, if used for everyone's benefit. But I wonder how many of you will believe that the developments in speech recognition and speech synthesis are beneficial to you. At the Telecoms 83 exhibition held in Geneva two weeks ago, the Japanese company, NEC, showed off its world leadership in speech technology by demonstrating a research model of an automatic interpreting system. A conversation was held in Japanese and English, and another in English and Spanish; both were taking place as if the language barrier just didn't exist. At the moment only around 150 words are utilised, but it is not simply word recognition: it is continuous speech recognition with sentences being composed which are almost grammatically correct. NEC is also researching a speaker-independent system which can recognise words spoken by a variety of people, with the aim of producing an operational automatic interpreting system by the turn of the century.
false
[]
[]
null
null
null
null
1983
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
munot-nenkova-2019-emotion
https://aclanthology.org/N19-3003
Emotion Impacts Speech Recognition Performance
It has been established that the performance of speech recognition systems depends on multiple factors including the lexical content, speaker identity and dialect. Here we use three English datasets of acted emotion to demonstrate that emotional content also impacts the performance of commercial systems. On two of the corpora, emotion is a bigger contributor to recognition errors than speaker identity and on two, neutral speech is recognized considerably better than emotional speech. We further evaluate the commercial systems on spontaneous interactions that contain portions of emotional speech. We propose and validate on the acted datasets, a method that allows us to evaluate the overall impact of emotion on recognition even when manual transcripts are not available. Using this method, we show that emotion in natural spontaneous dialogue is a less prominent but still significant factor in recognition accuracy.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
johnson-zhang-2017-deep
https://aclanthology.org/P17-1052
Deep Pyramid Convolutional Neural Networks for Text Categorization
This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent long-range associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. Motivated by these findings, we carefully studied deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN. The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mayfield-finin-2012-evaluating
https://aclanthology.org/W12-3013
Evaluating the Quality of a Knowledge Base Populated from Text
The steady progress of information extraction systems has been helped by sound methodologies for evaluating their performance in controlled experiments. Annual events like MUC, ACE and TAC have developed evaluation approaches enabling researchers to score and rank their systems relative to reference results. Yet these evaluations have only assessed component technologies needed by a knowledge base population system; none has required the construction of a knowledge base that is then evaluated directly. We describe an approach to the direct evaluation of a knowledge base and an instantiation that will be used in a 2012 TAC Knowledge Base Population track.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hajicova-kucerova-2002-argument
http://www.lrec-conf.org/proceedings/lrec2002/pdf/63.pdf
Argument/Valency Structure in PropBank, LCS Database and Prague Dependency Treebank: A Comparative Pilot Study
null
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
de-amaral-2013-rule
https://aclanthology.org/R13-2009
Rule-based Named Entity Extraction For Ontology Population
Currently, text analysis techniques such as named entity recognition rely mainly on ontologies which represent the semantics of an application domain. To build such an ontology from specialized texts, this article presents a tool which detects proper names, locations and dates from texts by using manually written linguistic rules. The most challenging task is to extract not only entities but also to interpret the information and adapt it to a specific corpus in French.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2018-joint-learning
https://aclanthology.org/K18-2006
Joint Learning of POS and Dependencies for Multilingual Universal Dependency Parsing
This paper describes the system of team LeisureX in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Our system predicts the part-of-speech tag and dependency tree jointly. For the basic tasks, including tokenization, lemmatization and morphology prediction, we employ the official baseline model (UDPipe). To train the low-resource languages, we adopt a sampling method based on other rich-resource languages. Our system achieves a macro-average of 68.31% LAS F1 score, with an improvement of 2.51% compared with the UDPipe.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tam-etal-2007-bilingual
https://aclanthology.org/P07-1066
Bilingual-LSA Based LM Adaptation for Spoken Language Translation
We propose a novel approach to crosslingual language model (LM) adaptation based on bilingual Latent Semantic Analysis (bLSA). A bLSA model is introduced which enables latent topic distributions to be efficiently transferred across languages by enforcing a one-to-one topic correspondence during training. Using the proposed bLSA framework, crosslingual LM adaptation can be performed by, first, inferring the topic posterior distribution of the source text and then applying the inferred distribution to the target language N-gram LM via marginal adaptation. The proposed framework also enables rapid bootstrapping of LSA models for new languages based on a source LSA model from another language. On Chinese-to-English speech and text translation, the proposed bLSA framework successfully reduced word perplexity of the English LM by over 27% for a unigram LM and up to 13.6% for a 4-gram LM. Furthermore, the proposed approach consistently improved machine translation quality on both speech and text based adaptation.
false
[]
[]
null
null
null
This work is partly supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-06-2-0001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-etal-2017-using-context
https://aclanthology.org/D17-1231
Using Context Information for Dialog Act Classification in DNN Framework
Previous work on dialog act (DA) classification has investigated different methods, such as hidden Markov models, maximum entropy, conditional random fields, graphical models, and support vector machines. A few recent studies explored using deep learning neural networks for DA classification, however, it is not clear yet what is the best method for using dialog context or DA sequential information, and how much gain it brings. This paper proposes several ways of using context information for DA classification, all in the deep learning framework. The baseline system classifies each utterance using the convolutional neural networks (CNN). Our proposed methods include using hierarchical models (recurrent neural networks (RNN) or CNN) for DA sequence tagging where the bottom layer takes the sentence CNN representation as input, concatenating predictions from the previous utterances with the CNN vector for classification, and performing sequence decoding based on the predictions from the sentence CNN model. We conduct thorough experiments and comparisons on the Switchboard corpus, demonstrate that incorporating context information significantly improves DA classification, and show that we achieve new state-of-the-art performance for this task.
false
[]
[]
null
null
null
The authors thank Yandi Xia for preparing the Switchboard data, Xian Qian, Antoine Raux and Benoit Dumoulin for various discussions.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
stilo-velardi-2017-hashtag
https://aclanthology.org/J17-1005
Hashtag Sense Clustering Based on Temporal Similarity
Hashtags are creative labels used in micro-blogs to characterize the topic of a message/discussion. Regardless of the use for which they were originally intended, hashtags cannot be used as a means to cluster messages with similar content. First, because hashtags are created in a spontaneous and highly dynamic way by users in multiple languages, the same topic can be associated with different hashtags, and conversely, the same hashtag may refer to different topics in different time periods. Second, contrary to common words, hashtag disambiguation is complicated by the fact that no sense catalogs (e.g., Wikipedia or WordNet) are available; and, furthermore, hashtag labels are difficult to analyze, as they often consist of acronyms, concatenated words, and so forth. A common way to determine the meaning of hashtags has been to analyze their context, but, as we have just pointed out, hashtags can have multiple and variable meanings. In this article, we propose a temporal sense clustering algorithm based on the idea that semantically related hashtags have similar and synchronous usage patterns.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
doan-etal-2021-phomt
https://aclanthology.org/2021.emnlp-main.369
PhoMT: A High-Quality and Large-Scale Benchmark Dataset for Vietnamese-English Machine Translation
We introduce a high-quality and large-scale Vietnamese-English parallel dataset of 3.02M sentence pairs, which is 2.9M pairs larger than the benchmark Vietnamese-English machine translation corpus IWSLT15. We conduct experiments comparing strong neural baselines and well-known automatic translation engines on our dataset and find that in both automatic and human evaluations: the best performance is obtained by fine-tuning the pretrained sequence-to-sequence denoising autoencoder mBART. To our best knowledge, this is the first large-scale Vietnamese-English machine translation study. We hope our publicly available dataset and study can serve as a starting point for future research and applications on Vietnamese-English machine translation.
false
[]
[]
null
null
null
The authors would like to thank the anonymous reviewers for their helpful feedback.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lopopolo-etal-2019-dependency
https://aclanthology.org/W19-2909
Dependency Parsing with your Eyes: Dependency Structure Predicts Eye Regressions During Reading
Backward saccades during reading have been hypothesized to be involved in structural reanalysis, or to be related to the level of text difficulty. We test the hypothesis that backward saccades are involved in online syntactic analysis. If this is the case we expect that saccades will coincide, at least partially, with the edges of the relations computed by a dependency parser. In order to test this, we analyzed a large eye-tracking dataset collected while 102 participants read three short narrative texts. Our results show a relation between backward saccades and the syntactic structure of sentences.
false
[]
[]
null
null
null
The work presented here was funded by the Netherlands Organisation for Scientific Research (NWO) Gravitation Grant 024.001.006 to the Language in Interaction Consortium. The authors thank Marloes Mak for providing the eye-tracker data and help in the analyses.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
forcada-2003-45
https://aclanthology.org/2003.mtsummit-tttt.2
A 45-hour computers in translation course
This paper describes how a 45-hour Computers in Translation course is actually taught to 3rd-year translation students at the University of Alacant; the course described started in year 1995-1996 and has undergone substantial redesign until its present form. It is hoped that this description may be of use to instructors who are forced to teach a similar subject in such a small slot of time and need some design guidelines.
false
[]
[]
null
null
null
Acknowledgements: I thank Andy Way for comments and suggestions on the manuscript.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
savoldi-etal-2021-gender
https://aclanthology.org/2021.tacl-1.51
Gender Bias in Machine Translation
Machine translation (MT) technology has facilitated our daily tasks by providing accessible shortcuts for gathering, processing, and communicating information. However, it can suffer from biases that harm users and society at large. As a relatively new field of inquiry, studies of gender bias in MT still lack cohesion. This advocates for a unified framework to ease future research. To this end, we: i) critically review current conceptualizations of bias in light of theoretical insights from related disciplines, ii) summarize previous analyses aimed at assessing gender bias in MT, iii) discuss the mitigating strategies proposed so far, and iv) point toward potential directions for future work.
true
[]
[]
Gender Equality
null
null
We would like to thank the anonymous reviewers and the TACL Action Editors. Their insightful comments helped us improve on the current version of the paper.
2021
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
masala-etal-2020-robert
https://aclanthology.org/2020.coling-main.581
RoBERT -- A Romanian BERT Model
Deep pre-trained language models tend to become ubiquitous in the field of Natural Language Processing (NLP). These models learn contextualized representations by using a huge amount of unlabeled text data and obtain state of the art results on a multitude of NLP tasks, by enabling efficient transfer learning. For other languages besides English, there are limited options of such models, most of which are trained only on multilingual corpora. In this paper we introduce a Romanian-only pre-trained BERT model, RoBERT, and compare it with different multilingual models on seven Romanian specific NLP tasks grouped into three categories, namely: sentiment analysis, dialect and cross-dialect topic identification, and diacritics restoration. Our model surpasses the multilingual models, as well as another monolingual implementation of BERT, on all tasks.
false
[]
[]
null
null
null
This research was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS-UEFISCDI, project number PN-III 54PCCDI/2018, INTELLIT - "Prezervarea și valorificarea patrimoniului literar românesc folosind soluții digitale inteligente pentru extragerea și sistematizarea de cunoștințe", by the "Semantic Media Analytics - SeMAntic" subsidiary contract no. 20176/30.10.2019, from the NETIO project ID: P 40 270, MySMIS Code: 105976, as well as by "Spacetime Vision - Towards Unsupervised Learning in the 4D World", project Code: EEA-RO-NO-2018-0496.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ji-smith-2017-neural
https://aclanthology.org/P17-1092
Neural Discourse Structure for Text Categorization
We show that discourse structure, as defined by Rhetorical Structure Theory and provided by an existing discourse parser, benefits text categorization. Our approach uses a recursive neural network and a newly proposed attention mechanism to compute a representation of the text that focuses on salient content, from the perspective of both RST and the task. Experiments consider variants of the approach and illustrate its strengths and weaknesses.
false
[]
[]
null
null
null
We thank anonymous reviewers and members of Noah's ARK for helpful feedback on this work. We thank Dallas Card and Jesse Dodge for helping prepare the Media Frames Corpus and the Congressional bill corpus. This work was made possible by a University of Washington Innovation Award.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sidner-etal-2013-demonstration
https://aclanthology.org/W13-4024
Demonstration of an Always-On Companion for Isolated Older Adults
We summarize the status of an ongoing project to develop and evaluate a companion for isolated older adults. Four key scientific issues in the project are: embodiment, interaction paradigm, engagement and relationship. The system architecture is extensible and handles realtime behaviors. The system supports multiple activities, including discussing the weather, playing cards, telling stories, exercise coaching and video conferencing. A live, working demo system will be presented at the meeting.
true
[]
[]
Good Health and Well-Being
null
null
This work is supported in part by the National Science Foundation under award IIS-1012083. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2013
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nie-bansal-2017-shortcut
https://aclanthology.org/W17-5308
Shortcut-Stacked Sentence Encoders for Multi-Domain Inference
We present a simple sequential sentence encoder for multi-domain natural language inference. Our encoder is based on stacked bidirectional LSTM-RNNs with shortcut connections and fine-tuning of word embeddings. The overall supervised model uses the above encoder to encode two input sentences into two vectors, and then uses a classifier over the vector combination to label the relationship between these two sentences as that of entailment, contradiction, or neutral. Our Shortcut-Stacked sentence encoders achieve strong improvements over existing encoders on matched and mismatched multi-domain natural language inference (top single-model result in the EMNLP RepEval 2017 Shared Task (Nangia et al., 2017)). Moreover, they achieve the new state-of-the-art encoding result on the original SNLI dataset (Bowman et al., 2015).
false
[]
[]
null
null
null
We thank the shared task organizers and the anonymous reviewers. This work was partially supported by a Google Faculty Research Award, an IBM Faculty Award, a Bloomberg Data Science Research Grant, and NVidia GPU awards.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ohara-wiebe-2003-preposition
https://aclanthology.org/W03-0411
Preposition Semantic Classification via Treebank and FrameNet
This paper reports on experiments in classifying the semantic role annotations assigned to prepositional phrases in both the PENN TREEBANK and FRAMENET. In both cases, experiments are done to see how the prepositions can be classified given the dataset's role inventory, using standard word-sense disambiguation features. In addition to using traditional word collocations, the experiments incorporate class-based collocations in the form of WordNet hypernyms. For Treebank, the word collocations achieve slightly better performance: 78.5% versus 77.4% when separate classifiers are used per preposition. When using a single classifier for all of the prepositions together, the combined approach yields a significant gain at 85.8% accuracy versus 81.3% for word-only collocations. For FrameNet, the combined use of both collocation types achieves better performance for the individual classifiers: 70.3% versus 68.5%. However, classification using a single classifier is not effective due to confusion among the fine-grained roles.
false
[]
[]
null
null
null
The first author is supported by a generous GAANN fellowship from the Department of Education. Some of the work used computing resources at NMSU made possible through MII Grants EIA-9810732 and EIA-0220590.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
boufaden-2003-ontology
https://aclanthology.org/P03-2002
An Ontology-based Semantic Tagger for IE system
In this paper, we present a method for the semantic tagging of word chunks extracted from a written transcription of conversations. This work is part of an ongoing project for an information extraction system in the field of maritime Search And Rescue (SAR). Our purpose is to automatically annotate parts of texts with concepts from a SAR ontology. Our approach combines two knowledge sources, a SAR ontology and the Wordsmyth dictionary-thesaurus, and it uses a similarity measure for the classification. Evaluation is carried out by comparing the output of the system with key answers of predefined extraction templates.
false
[]
[]
null
null
null
We are grateful to Robert Parks at the Wordsmyth organization for giving us the electronic Wordsmyth version. Thanks to the Defense Research Establishment Valcartier for providing us with the dialog transcriptions and to the National Search and Rescue Secretariat for the valuable SAR manuals.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bauer-etal-2012-dependency
http://www.lrec-conf.org/proceedings/lrec2012/pdf/1037_Paper.pdf
The Dependency-Parsed FrameNet Corpus
When training semantic role labeling systems, the syntax of example sentences is of particular importance. Unfortunately, for the FrameNet annotated sentences, there is no standard parsed version. The integration of the automatic parse of an annotated sentence with its semantic annotation, while conceptually straightforward, is complex in practice. We present a standard dataset that is publicly available and that can be used in future research. This dataset contains parser-generated dependency structures (with POS tags and lemmas) for all FrameNet 1.5 sentences, with nodes automatically associated with FrameNet annotations.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sai-sharma-2021-towards
https://aclanthology.org/2021.dravidianlangtech-1.3
Towards Offensive Language Identification for Dravidian Languages
Offensive speech identification in countries like India poses several challenges due to the usage of code-mixed and romanized variants of multiple languages by the users in their posts on social media. The challenge of offensive language identification on social media for Dravidian languages is harder, considering the low resources available for the same. In this paper, we explored the zero-shot learning and few-shot learning paradigms based on multilingual language models for offensive speech detection in code-mixed and romanized variants of three Dravidian languages: Malayalam, Tamil, and Kannada. We propose a novel and flexible approach of selective translation and transliteration to reap better results from fine-tuning and ensembling multilingual transformer networks like XLM-RoBERTa and mBERT. We implemented pretrained, fine-tuned, and ensembled versions of XLM-RoBERTa for offensive speech classification. Further, we experimented with inter-language, inter-task, and multi-task transfer learning techniques to leverage the rich resources available for offensive speech identification in the English language and to enrich the models with knowledge transfer from related tasks. The proposed models yielded good results and are promising for effective offensive speech identification in low resource settings.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
The authors would like to convey their sincere thanks to the Department of Science and Technology (ICPS Division), New Delhi, India, for providing financial assistance under the Data Science (DS) Research of Interdisciplinary Cyber Physical Systems (ICPS) Programme [DST/ICPS/CLUSTER/Data Science/2018/Proposal-16:(T-856)] at the Department of Computer Science, Birla Institute of Technology and Science, Pilani, India.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
nanba-etal-2009-automatic
https://aclanthology.org/P09-2052
Automatic Compilation of Travel Information from Automatically Identified Travel Blogs
In this paper, we propose a method for compiling travel information automatically. For the compilation, we focus on travel blogs, which are defined as travel journals written by bloggers in diary form. We consider that travel blogs are a useful information source for obtaining travel information, because many bloggers' travel experiences are written in this form. Therefore, we identified travel blogs in a blog database and extracted travel information from them. We have confirmed the effectiveness of our method by experiment. For the identification of travel blogs, we obtained scores of 38.1% for Recall and 86.7% for Precision. In the extraction of travel information from travel blogs, we obtained 74.0% for Precision at the top 100 extracted local products, thereby confirming that travel blogs are a useful source of travel information.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhu-etal-2020-multitask
https://aclanthology.org/2020.coling-main.430
A Multitask Active Learning Framework for Natural Language Understanding
Natural language understanding (NLU) aims at identifying user intent and extracting semantic slots. This requires sufficient annotating data to get considerable performance in real-world situations. Active learning (AL) has been well-studied to decrease the needed amount of the annotating data and successfully applied to NLU. However, no research has been done on investigating how the relation information between intents and slots can improve the efficiency of AL algorithms. In this paper, we propose a multitask AL framework for NLU. Our framework enables pool-based AL algorithms to make use of the relation information between sub-tasks provided by a joint model, and we propose an efficient computation for the entropy of a joint model. Experimental results show our framework can achieve competitive performance with less training data than baseline methods on all datasets. We also demonstrate that when using the entropy as the query strategy, the model with complete relation information can perform better than those with partial information. Additionally, we demonstrate that the efficiency of these active learning algorithms in our framework remains effective when incorporated with Bidirectional Encoder Representations from Transformers (BERT).
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false