Dataset schema (column name, type, and observed value statistics):

  ID               string, lengths 11 to 54
  url              string, lengths 33 to 64
  title            string, lengths 11 to 184
  abstract         string, lengths 17 to 3.87k
  label_nlp4sg     bool, 2 classes
  task             list
  method           list
  goal1            string, 9 distinct values
  goal2            string, 9 distinct values
  goal3            string, 1 distinct value
  acknowledgments  string, lengths 28 to 1.28k
  year             string, length 4
  sdg1             bool, 1 class
  sdg2             bool, 1 class
  sdg3             bool, 2 classes
  sdg4             bool, 2 classes
  sdg5             bool, 2 classes
  sdg6             bool, 1 class
  sdg7             bool, 1 class
  sdg8             bool, 2 classes
  sdg9             bool, 2 classes
  sdg10            bool, 2 classes
  sdg11            bool, 2 classes
  sdg12            bool, 1 class
  sdg13            bool, 2 classes
  sdg14            bool, 1 class
  sdg15            bool, 1 class
  sdg16            bool, 2 classes
  sdg17            bool, 2 classes

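The column layout above maps naturally onto a flat record type. As a rough sketch of how a row of this shape might be handled in code (the `PaperRecord` class, the `sdg_labels` helper, and the abridged example row are illustrative assumptions, not part of the dataset):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PaperRecord:
    """One row of the dataset; field names mirror the schema columns."""
    ID: str
    url: str
    title: str
    abstract: str
    label_nlp4sg: bool
    task: list
    method: list
    goal1: Optional[str]
    goal2: Optional[str]
    goal3: Optional[str]
    acknowledgments: Optional[str]
    year: str
    sdgs: dict  # maps "sdg1" .. "sdg17" to bool

def sdg_labels(record: PaperRecord) -> list:
    """Return the names of the SDG columns marked true for this record."""
    return [name for name, flag in record.sdgs.items() if flag]

# Abridged example row, loosely based on the yu-etal-2020-mooccube record below.
row = PaperRecord(
    ID="yu-etal-2020-mooccube",
    url="https://aclanthology.org/2020.acl-main.285",
    title="MOOCCube: A Large-scale Data Repository for NLP Applications in MOOCs",
    abstract="...",
    label_nlp4sg=True,
    task=[], method=[],
    goal1="Quality Education", goal2=None, goal3=None,
    acknowledgments=None, year="2020",
    sdgs={f"sdg{i}": (i == 4) for i in range(1, 18)},
)
```

Only `sdg4` is set for that example, so `sdg_labels(row)` yields `["sdg4"]`.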
ID: milajevs-etal-2016-robust
url: https://aclanthology.org/P16-3009
title: Robust Co-occurrence Quantification for Lexical Distributional Semantics
abstract: Previous optimisations of parameters affecting the word-context association measure used in distributional vector space models have focused either on high-dimensional vectors with hundreds of thousands of dimensions, or dense vectors with dimensionality of a few hundreds; but dimensionality of a few thousands is often applied in compositional tasks as it is still computationally feasible and does not require the dimensionality reduction step. We present a systematic study of the interaction of the parameters of the association measure and vector dimensionality, and derive parameter selection heuristics that achieve performance across word similarity and relevance datasets competitive with the results previously reported in the literature achieved by highly dimensional or dense models.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We thank Ann Copestake for her valuable comments as part of the ACL SRW mentorship program and the anonymous reviewers for their comments. Support from EPSRC grant EP/J002607/1 is gratefully acknowledged by Dmitrijs Milajevs and Mehrnoosh Sadrzadeh. Matthew Purver is partly supported by ConCreTe: the project ConCreTe acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET grant number 611733.
year: 2016
sdg1-sdg17: all false

ID: lin-etal-2019-task
url: https://aclanthology.org/D19-1463
title: Task-Oriented Conversation Generation Using Heterogeneous Memory Networks
abstract: How to incorporate external knowledge into a neural dialogue model is critically important for dialogue systems to behave like real humans. To handle this problem, memory networks are usually a great choice and a promising way. However, existing memory networks do not perform well when leveraging heterogeneous information from different sources. In this paper, we propose novel and versatile external memory networks called Heterogeneous Memory Networks (HMNs), to simultaneously utilize user utterances, dialogue history and background knowledge tuples. In our method, historical sequential dialogues are encoded and stored into the context-aware memory enhanced by a gating mechanism, while grounding knowledge tuples are encoded and stored into the context-free memory. During decoding, the decoder augmented with HMNs recurrently selects each word in one response utterance from these two memories and a general vocabulary. Experimental results on multiple real-world datasets show that HMNs significantly outperform the state-of-the-art data-driven task-oriented dialogue models in most domains.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We thank the anonymous reviewers for their insightful comments on this paper. This work was supported by the NSFC (No. 61402403), DAMO Academy (Alibaba Group), Alibaba-Zhejiang University Joint Institute of Frontier Technologies, Chinese Knowledge Center for Engineering Sciences and Technology, and the Fundamental Research Funds for the Central Universities.
year: 2019
sdg1-sdg17: all false

ID: agarwal-kann-2020-acrostic
url: https://aclanthology.org/2020.emnlp-main.94
title: Acrostic Poem Generation
abstract: We propose a new task in the area of computational creativity: acrostic poem generation in English. Acrostic poems are poems that contain a hidden message; typically, the first letter of each line spells out a word or short phrase. We define the task as a generation task with multiple constraints: given an input word, 1) the initial letters of each line should spell out the provided word, 2) the poem's semantics should also relate to it, and 3) the poem should conform to a rhyming scheme. We further provide a baseline model for the task, which consists of a conditional neural language model in combination with a neural rhyming model. Since no dedicated datasets for acrostic poem generation exist, we create training data for our task by first training a separate topic prediction model on a small set of topic-annotated poems and then predicting topics for additional poems. Our experiments show that the acrostic poems generated by our baseline are received well by humans and do not lose much quality due to the additional constraints. Last, we confirm that poems generated by our model are indeed closely related to the provided prompts, and that pretraining on Wikipedia can boost performance.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We would like to thank the members of NYU's ML 2 group for their help with the human evaluation and their feedback on our paper! We are also grateful to the anonymous reviewers for their insightful comments.
year: 2020
sdg1-sdg17: all false

ID: vu-etal-2018-sentence
url: https://aclanthology.org/N18-2013
title: Sentence Simplification with Memory-Augmented Neural Networks
abstract: Sentence simplification aims to simplify the content and structure of complex sentences, and thus make them easier to interpret for human readers, and easier to process for downstream NLP applications. Recent advances in neural machine translation have paved the way for novel approaches to the task. In this paper, we adapt an architecture with augmented memory capacities called Neural Semantic Encoders (Munkhdalai and Yu, 2017) for sentence simplification. Our experiments demonstrate the effectiveness of our approach on different simplification datasets, both in terms of automatic evaluation measures and human judgments.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We would like to thank Emily Druhl, Jesse Lingeman, and the UMass BioNLP team for their help with this work. We also thank Xingxing Zhang and Sergiu Nisioi for valuable discussions, and the anonymous reviewers for their thoughtful comments and suggestions.
year: 2018
sdg1-sdg17: all false

ID: pedersen-2001-machine
url: https://aclanthology.org/S01-1034
title: Machine Learning with Lexical Features: The Duluth Approach to SENSEVAL-2
abstract: This paper describes the sixteen Duluth entries in the SENSEVAL-2 comparative exercise among word sense disambiguation systems. There were eight pairs of Duluth systems entered in the Spanish and English lexical sample tasks. These are all based on standard machine learning algorithms that induce classifiers from sense-tagged training text where the context in which ambiguous words occur is represented by simple lexical features. These are highly portable, robust methods that can serve as a foundation for more tailored approaches.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This work has been partially supported by a National Science Foundation Faculty Early CAREER Development award (#0092784). The Bigram Statistics Package and SenseTools have been implemented by Satanjeev Banerjee.
year: 2001
sdg1-sdg17: all false

ID: huang-etal-2009-bilingually
url: https://aclanthology.org/D09-1127
title: Bilingually-Constrained (Monolingual) Shift-Reduce Parsing
abstract: Jointly parsing two languages has been shown to improve accuracies on either or both sides. However, its search space is much bigger than the monolingual case, forcing existing approaches to employ complicated modeling and crude approximations. Here we propose a much simpler alternative, bilingually-constrained monolingual parsing, where a source-language parser learns to exploit reorderings as additional observation, but not bothering to build the target-side tree as well. We show specifically how to enhance a shift-reduce dependency parser with alignment features to resolve shift-reduce conflicts. Experiments on the bilingual portion of Chinese Treebank show that, with just 3 bilingual features, we can improve parsing accuracies by 0.6% (absolute) for both English and Chinese over a state-of-the-art baseline, with negligible (∼6%) efficiency overhead, thus much faster than biparsing.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We thank the anonymous reviewers for pointing to us references about "arc-standard". We also thank Aravind Joshi and Mitch Marcus for insights on PP attachment, Joakim Nivre for discussions on arc-eager, Yang Liu for suggestion to look at manual alignments, and David A. Smith for sending us his paper. The second and third authors were supported by National Natural Science Foundation of China, Contracts 60603095 and 60736014, and 863 State Key Project No. 2006AA010108.
year: 2009
sdg1-sdg17: all false

ID: shirakawa-etal-2017-never
url: https://aclanthology.org/D17-1251
title: Never Abandon Minorities: Exhaustive Extraction of Bursty Phrases on Microblogs Using Set Cover Problem
abstract: We propose a language-independent data-driven method to exhaustively extract bursty phrases of arbitrary forms (e.g., phrases other than simple noun phrases) from microblogs. The burst (i.e., the rapid increase of the occurrence) of a phrase causes the burst of overlapping N-grams including incomplete ones. In other words, bursty incomplete N-grams inevitably overlap bursty phrases. Thus, the proposed method performs the extraction of bursty phrases as the set cover problem in which all bursty N-grams are covered by a minimum set of bursty phrases. Experimental results using Japanese Twitter data showed that the proposed method outperformed word-based, noun phrase-based, and segmentation-based methods both in terms of accuracy and coverage.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This research is partially supported by the Grant-in-Aid for Scientific Research (A) (2620013) of the Ministry of Education, Culture, Sports, Science and Technology, Japan, and JST, Strategic International Collaborative Research Program, SICORP.
year: 2017
sdg1-sdg17: all false

ID: mohammadi-etal-2017-native
url: https://aclanthology.org/W17-5022
title: Native Language Identification Using a Mixture of Character and Word N-grams
abstract: Native language identification (NLI) is the task of determining an author's native language, based on a piece of his/her writing in a second language. In recent years, NLI has received much attention due to its challenging nature and its applications in language pedagogy and forensic linguistics. We participated in the NLI Shared Task 2017 under the name UT-DSP. In our effort to implement a method for native language identification, we made use of a mixture of character and word N-grams, and achieved an optimal F1-score of 0.7748, using both essay and speech transcription datasets.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2017
sdg1-sdg17: all false

ID: goyal-etal-2012-distributed
url: https://aclanthology.org/C12-1062
title: A Distributed Platform for Sanskrit Processing
abstract: Sanskrit, the classical language of India, presents specific challenges for computational linguistics: exact phonetic transcription in writing that obscures word boundaries, rich morphology and an enormous corpus, among others. Recent international cooperation has developed innovative solutions to these problems and significant resources for linguistic research. Solutions include efficient segmenting and tagging algorithms and dependency parsers based on constraint programming. The integration of lexical resources, text archives and linguistic software is achieved by distributed interoperable Web services. Resources include a morphological tagger and tagged corpus.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2012
sdg1-sdg17: all false

ID: dusek-jurcicek-2015-training
url: https://aclanthology.org/P15-1044
title: Training a Natural Language Generator From Unaligned Data
abstract: We present a novel syntax-based natural language generation system that is trainable from unaligned pairs of input meaning representations and output sentences. It is divided into sentence planning, which incrementally builds deep-syntactic dependency trees, and surface realization. The sentence planner is based on A* search with a perceptron ranker that uses novel differing subtree updates and a simple future promise estimation; surface realization uses a rule-based pipeline from the Treex NLP toolkit. Our first results show that training from unaligned data is feasible, and the outputs of our generator are mostly fluent and relevant.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 104, and GAUK grant 2058214 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2010013). The authors would like to thank Lukáš Žilka, Ondřej Plátek, and the anonymous reviewers for helpful comments on the draft.
year: 2015
sdg1-sdg17: all false

ID: jiang-etal-2016-ecnu
url: https://aclanthology.org/S16-1058
title: ECNU at SemEval-2016 Task 5: Extracting Effective Features from Relevant Fragments in Sentence for Aspect-Based Sentiment Analysis in Reviews
abstract: This paper describes our systems submitted to the Sentence-level and Text-level Aspect-Based Sentiment Analysis (ABSA) task (i.e., Task 5) in SemEval-2016. The task involves two phases, namely, the Aspect Detection phase and the Sentiment Polarity Classification phase. We participated in the second phase of both subtasks in laptop and restaurant domains, which focuses on the sentiment analysis based on the given aspect. In this task, we extracted four types of features (i.e., Sentiment Lexicon Features, Linguistic Features, Topic Model Features and Word2vec Feature) from certain fragments related to the aspect rather than the whole sentence. Then the proposed features are fed into supervised classifiers for sentiment analysis. Our submissions rank above average.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This research is supported by grants from Science and Technology Commission of Shanghai Municipality (14DZ2260800 and 15ZR1410700), Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things (ZF1213).
year: 2016
sdg1-sdg17: all false

ID: takehisa-2016-possessor
url: https://aclanthology.org/Y16-3014
title: On the Possessor Interpretation of Non-Agentive Subjects
abstract: It has been observed that the relation of possession contributes to the formation of so-called adversity causatives, whose subject is understood as a possessor of an object referent. This interpretation is reflected at face value in some studies, and it is assumed there that the subject argument is introduced as a possessor in syntax. This paper addresses the question of whether the observed relation should be directly encoded as such and argues that the subject argument is introduced as merely an event participant whose manner is underspecified. Moreover, it argues that the possessor interpretation arises from inference based on both linguistic and extralinguistic contexts, such as the presence of a possessum argument. This view is implemented as an analysis making use of a kind of applicative head (Pylkkänen, 2008) in conjunction with the postsyntactic inferential strategy (Rivero, 2004).
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: I am grateful to Chigusa Morita and three anonymous reviewers for their invaluable comments, which helped clarify the manuscript. I am solely responsible for any errors and inadequacies contained herein.
year: 2016
sdg1-sdg17: all false

ID: wang-etal-2020-neural
url: https://aclanthology.org/2020.aacl-main.21
title: Neural Gibbs Sampling for Joint Event Argument Extraction
abstract: Event Argument Extraction (EAE) aims at predicting event argument roles of entities in text, which is a crucial subtask and bottleneck of event extraction. Existing EAE methods either extract each event argument role independently or sequentially, which cannot adequately model the joint probability distribution among event arguments and their roles. In this paper, we propose a Bayesian model named Neural Gibbs Sampling (NGS) to jointly extract event arguments. Specifically, we train two neural networks to model the prior distribution and conditional distribution over event arguments respectively, and then use Gibbs sampling to approximate the joint distribution with the learned distributions. To overcome the high complexity of the original Gibbs sampling algorithm, we further apply simulated annealing to efficiently estimate the joint probability distribution over event arguments and make predictions. We conduct experiments on the two widely-used benchmark datasets ACE 2005 and TAC KBP 2016. The experimental results show that our NGS model can achieve comparable results to existing state-of-the-art EAE methods. The source code can be obtained from https://github.com/THU-KEG/NGS.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We thank Hedong (Ben) Hou for his help in the mathematical proof. This work is supported by the Key-Area Research and Development Program of Guangdong Province (2019B010153002), NSFC Key Projects (U1736204, 61533018), a grant from Institute for Guo Qiang, Tsinghua University (2019GQB0003) and THUNUS NExT Co-Lab. This work is also supported by the Pattern Recognition Center, WeChat AI, Tencent Inc. Xiaozhi Wang is supported by Tsinghua University Initiative Scientific Research Program.
year: 2020
sdg1-sdg17: all false

ID: tan-etal-2019-expressing
url: https://aclanthology.org/P19-1182
title: Expressing Visual Relationships via Language
abstract: Describing images with text is a fundamental problem in vision-language research. Current studies in this domain mostly focus on single image captioning. However, in various real applications (e.g., image editing, difference interpretation, and retrieval), generating relational captions for two images can also be very useful. This important problem has not been explored, mostly due to lack of datasets and effective models. To push forward the research in this direction, we first introduce a new language-guided image editing dataset that contains a large number of real image pairs with corresponding editing instructions. We then propose a new relational speaker model based on an encoder-decoder architecture with static relational attention and sequential multi-head attention. We also extend the model with dynamic relational attention, which calculates visual alignment while decoding. Our models are evaluated on our newly collected and two public datasets consisting of image pairs annotated with relationship sentences. Experimental results, based on both automatic and human evaluation, demonstrate that our model outperforms all baselines and existing methods on all the datasets.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We thank the reviewers for their helpful comments and Nham Le for helping with the initial data collection. This work was supported by Adobe, ARO-YIP Award #W911NF-18-1-0336, and faculty awards from Google, Facebook, and Salesforce. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency.
year: 2019
sdg1-sdg17: all false

ID: hao-etal-2019-modeling
url: https://aclanthology.org/N19-1122
title: Modeling Recurrence for Transformer
abstract: Recently, the Transformer model (Vaswani et al., 2017) that is based solely on attention mechanisms has advanced the state-of-the-art on various machine translation tasks. However, recent studies reveal that the lack of recurrence hinders its further improvement of translation capacity (Chen et al., 2018; Dehghani et al., 2019). In response to this problem, we propose to directly model recurrence for Transformer with an additional recurrence encoder. In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks. Experimental results on the widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness of the proposed approach. Our studies also reveal that the proposed model benefits from a shortcut that bridges the source and target sequences with a single recurrent layer, which outperforms its deep counterpart.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: J.Z. was supported by the National Institute of General Medical Sciences of the National Institute of Health under award number R01GM126558. We thank the anonymous reviewers for their insightful comments.
year: 2019
sdg1-sdg17: all false

ID: gustavson-1981-forarbeten
url: https://aclanthology.org/W81-0119
title: Förarbeten till en datoriserad runordbok (Preliminary work for a computerized rune lexicon) [In Swedish]
abstract: The following describes the work of establishing a computer-based register of the linguistic material in Sweden's runic inscriptions. The register is also intended to serve as a starting point for a planned runic dictionary. From a computing standpoint it may be of interest, since it is built on a microcomputer system and uses interactive programs that allow direct plain-text communication. The contents of the register will mainly be based on the inscriptions in the series Sveriges runinskrifter (1900-). The register is planned to consist of two major parts: a word register and a register of the individual inscriptions. Where appropriate and feasible, a name register of the personal names and place names in the runic inscriptions will be linked to the word register. Besides being stored for ongoing computer processing, the registers will also serve as the basis for printed indexes, among them the aforementioned runic dictionary.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 1981
sdg1-sdg17: all false

ID: yu-etal-2020-mooccube
url: https://aclanthology.org/2020.acl-main.285
title: MOOCCube: A Large-scale Data Repository for NLP Applications in MOOCs
abstract: The prosperity of Massive Open Online Courses (MOOCs) provides fodder for many NLP and AI research for education applications, e.g., course concept extraction, prerequisite relation discovery, etc. However, the publicly available datasets of MOOC are limited in size with few types of data, which hinders advanced models and novel attempts in related topics. Therefore, we present MOOCCube, a large-scale data repository of over 700 MOOC courses, 100k concepts, 8 million student behaviors with an external resource. Moreover, we conduct a prerequisite discovery task as an example application to show the potential of MOOCCube in facilitating relevant research. The data repository is now available at http://moocdata.cn/data/MOOCCube.
label_nlp4sg: true
task: []
method: []
goal1: Quality Education
goal2: null
goal3: null
acknowledgments: Zhiyuan Liu is supported by the National Key Research and Development Program of China (No. 2018YFB1004503), and others are supported by NSFC key project (U1736204, 61533018), a grant from Beijing Academy of Artificial Intelligence (BAAI2019ZD0502), a grant from the Institute for Guo Qiang, Tsinghua University, THUNUS NExT Co-Lab, the Center for Massive Online Education of Tsinghua University, and XuetangX.
year: 2020
sdg4: true; all other SDGs false

ID: chen-chang-1998-topical
url: https://aclanthology.org/J98-1003
title: Topical Clustering of MRD Senses Based on Information Retrieval Techniques
abstract: This paper describes a heuristic approach capable of automatically clustering senses in a machine-readable dictionary (MRD). Including these clusters in the MRD-based lexical database offers several positive benefits for word sense disambiguation (WSD). First, the clusters can be used as a coarser sense division, so unnecessarily fine sense distinction can be avoided. The clustered entries in the MRD can also be used as materials for supervised training to develop a WSD system. Furthermore, if the algorithm is run on several MRDs, the clusters also provide a means of linking different senses across multiple MRDs to create an integrated lexical database. An implementation of the method for clustering definition sentences in the Longman Dictionary of Contemporary English (LDOCE) is described. To this end, the topical word lists and topical cross-references in the Longman Lexicon of Contemporary English (LLOCE) are used. Nearly half of the senses in the LDOCE can be linked precisely to a relevant LLOCE topic using a simple heuristic. With the definitions of senses linked to the same topic viewed as a document, topical clustering of the MRD senses bears a striking resemblance to retrieval of relevant documents for a given query in information retrieval (IR) research. Relatively well-established IR techniques of weighting terms and ranking document relevancy are applied to find the topical clusters that are most relevant to the definition of each MRD sense. Finally, we describe an implemented version of the algorithms for the LDOCE and the LLOCE and assess the performance of the proposed approach in a series of experiments and evaluations.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This work is partially supported by ROC NSC grants 84-2213-E-007-023 and NSC 85-2213-E-007-042. We are grateful to Betty Teng and Nora Liu from Longman Asia Limited for the permission to use their lexicographical resources for research purposes. Finally, we would like to thank the anonymous reviewers for many constructive and insightful suggestions.
year: 1998
sdg1-sdg17: all false

ID: jagarlamudi-daume-iii-2012-low
url: https://aclanthology.org/N12-1088
title: Low-Dimensional Discriminative Reranking
abstract: The accuracy of many natural language processing tasks can be improved by a reranking step, which involves selecting a single output from a list of candidate outputs generated by a baseline system. We propose a novel family of reranking algorithms based on learning separate low-dimensional embeddings of the task's input and output spaces. This embedding is learned in such a way that prediction becomes a low-dimensional nearest-neighbor search, which can be done computationally efficiently. A key quality of our approach is that feature engineering can be done separately on the input and output spaces; the relationship between inputs and outputs is learned automatically. Experiments on part-of-speech tagging task in four languages show significant improvements over a baseline decoder and existing reranking approaches.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We thank Zhongqiang Huang for providing the code for the baseline systems, Raghavendra Udupa and the anonymous reviewers for their insightful comments. This work is partially funded by NSF grants IIS-1153487 and IIS-1139909.
year: 2012
sdg1-sdg17: all false

ID: massung-etal-2016-meta
url: https://aclanthology.org/P16-4016
title: MeTA: A Unified Toolkit for Text Retrieval and Analysis
abstract: META is developed to unite machine learning, information retrieval, and natural language processing in one easy-to-use toolkit. Its focus on indexing allows it to perform well on large datasets, supporting online classification and other out-of-core algorithms. META's liberal open source license encourages contributions, and its extensive online documentation, forum, and tutorials make this process straightforward. We run experiments and show META's performance is competitive with or better than existing software.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This material is based upon work supported by the NSF GRFP under Grant Number DGE-1144245.
year: 2016
sdg1-sdg17: all false

ID: kay-1987-machines
url: https://aclanthology.org/1987.mtsummit-1.21
title: Machines and People in Translation
abstract: It is useful to distinguish a narrower and a wider use for the term "machine translation". The narrow sense is the more usual one. In this sense, the term refers to a batch process in which a text is given over to a machine from which, some time later, a result is collected which we think of as the output of the machine translation process. When we use the term in the wider sense, it includes all the process required to obtain final translation output on paper. In particular, the wider usage allows for the possibility of an interactive process involving people and machines. Machine translation, narrowly conceived, is not appropriate for achieving engineering objectives. Machine translation, narrowly conceived, provides an extremely rich framework within which to conduct research on theoretical and computational linguistics, on cognitive modeling and, indeed, a variety of scientific problems. I believe that it provides the best view we can get of human cognitive performance, without introducing a perceptual component. When we learn more about vision, or other perceptual modalities, this situation may change. Machine translation, narrowly conceived, requires a solution to be found to almost every imaginable linguistic problem, and the solutions must be coherent with one another, so that it is a very demanding framework in which to work.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 1987
sdg1-sdg17: all false

ID: pilault-etal-2020-extractive
url: https://aclanthology.org/2020.emnlp-main.748
title: On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
abstract: We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher ROUGE scores. We provide extensive comparisons with strong baseline methods, prior state of the art work as well as multiple variants of our approach including those using only transformers, only extractive techniques and combinations of the two. We examine these models using four different summarization tasks and datasets: arXiv papers, PubMed papers, the Newsroom and BigPatent datasets. We find that transformer based methods produce summaries with fewer n-gram copies, leading to n-gram copying statistics that are more similar to human generated abstracts. We include a human evaluation, finding that transformers are ranked highly for coherence and fluency, but purely extractive methods score higher for informativeness and relevance. We hope that these architectures and experiments may serve as strong points of comparison for future work.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: Note: The abstract above was collaboratively written by the authors and one of the models presented in this paper based on an earlier draft of this paper.
year: 2020
sdg1-sdg17: all false

neubig-etal-2018-xnmt
https://aclanthology.org/W18-1818
XNMT: The eXtensible Neural Machine Translation Toolkit
This paper describes XNMT, the eXtensible Neural Machine Translation toolkit. XNMT distinguishes itself from other open-source NMT toolkits by its focus on modular code design, with the purpose of enabling fast iteration in research and replicable, reliable results. In this paper we describe the design of XNMT and its experiment configuration system, and demonstrate its utility on the tasks of machine translation, speech recognition, and multi-tasked machine translation/parsing. XNMT is available open-source at https://github.com/neulab/xnmt.
false
[]
[]
null
null
null
Part of the development of XNMT was performed at the Jelinek Summer Workshop in Speech and Language Technology (JSALT) "Speaking Rosetta Stone" project (Scharenborg et al., 2018) , and we are grateful to the JSALT organizers for the financial/logistical support, and also participants of the workshop for their feedback on XNMT as a tool.Parts of this work were sponsored by Defense Advanced Research Projects Agency Information Innovation Office (I2O). Program: Low Resource Languages for Emergent Incidents (LORELEI). Issued by DARPA/I2O under Contract No. HR0011-15-C-0114. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
scheible-schutze-2014-multi
https://aclanthology.org/E14-4039
Multi-Domain Sentiment Relevance Classification with Automatic Representation Learning
Sentiment relevance (SR) aims at identifying content that does not contribute to sentiment analysis. Previously, automatic SR classification has been studied in a limited scope, using a single domain and feature augmentation techniques that require large hand-crafted databases. In this paper, we present experiments on SR classification with automatically learned feature representations on multiple domains. We show that a combination of transfer learning and in-task supervision using features learned unsupervisedly by the stacked denoising autoencoder significantly outperforms a bag-of-words baseline for in-domain and cross-domain classification.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
stede-grishina-2016-anaphoricity
https://aclanthology.org/W16-0706
Anaphoricity in Connectives: A Case Study on German
Anaphoric connectives are event anaphors (or abstract anaphors) that in addition convey a coherence relation holding between the antecedent and the host clause of the connective. Some of them carry an explicitly-anaphoric morpheme, others do not. We analysed the set of German connectives for this property and found that many have an additional nonconnective reading, where they serve as nominal anaphors. Furthermore, many connectives can have multiple senses, so altogether the processing of these words can involve substantial disambiguation. We study the problem for one specific German word, demzufolge, which can be taken as representative for a large group of similar words.
false
[]
[]
null
null
null
We thank Tatjana Scheffler and Erik Haegert for their help with corpus annotation, and the anonymous reviewers for their valuable suggestions on improving the paper.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
goldwasser-zhang-2016-understanding
https://aclanthology.org/Q16-1038
Understanding Satirical Articles Using Common-Sense
Automatic satire detection is a subtle text classification task, for machines and at times, even for humans. In this paper we argue that satire detection should be approached using common-sense inferences, rather than traditional text classification methods. We present a highly structured latent variable model capturing the required inferences. The model abstracts over the specific entities appearing in the articles, grouping them into generalized categories, thus allowing the model to adapt to previously unseen situations.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dong-etal-2020-transformer
https://aclanthology.org/2020.figlang-1.38
Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media
We present a transformer-based sarcasm detection model that accounts for the context from the entire conversation thread for more robust predictions. Our model uses deep transformer layers to perform multi-head attentions among the target utterance and the relevant context in the thread. The context-aware models are evaluated on two datasets from social media, Twitter and Reddit, and show 3.1% and 7.0% improvements over their baselines. Our best models give the F1-scores of 79.0% and 75.0% for the Twitter and Reddit datasets respectively, becoming one of the highest performing systems among 36 participants in this shared task.
false
[]
[]
null
null
null
We gratefully acknowledge the support of the AWS Machine Learning Research Awards (MLRA). Any contents in this material are those of the authors and do not necessarily reflect the views of AWS.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
klementiev-roth-2006-weakly
https://aclanthology.org/P06-1103
Weakly Supervised Named Entity Transliteration and Discovery from Multilingual Comparable Corpora
Named Entity recognition (NER) is an important part of many natural language processing tasks. Current approaches often employ machine learning techniques and require supervised data. However, many languages lack such resources. This paper presents an (almost) unsupervised learning algorithm for automatic discovery of Named Entities (NEs) in a resource free language, given a bilingual corpora in which it is weakly temporally aligned with a resource rich language. NEs have similar time distributions across such corpora, and often some of the tokens in a multi-word NE are transliterated. We develop an algorithm that exploits both observations iteratively. The algorithm makes use of a new, frequency based, metric for time distributions and a resource free discriminative approach to transliteration. Seeded with a small number of transliteration pairs, our algorithm discovers multi-word NEs, and takes advantage of a dictionary (if one exists) to account for translated or partially translated NEs. We evaluate the algorithm on an English-Russian corpus, and show high level of NEs discovery in Russian.
false
[]
[]
null
null
null
We thank Richard Sproat, ChengXiang Zhai, and Kevin Small for their useful feedback during this work, and the anonymous referees for their helpful comments. This research is supported by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program and a DOI grant under the Reflex program.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
costa-etal-2016-mapping
https://aclanthology.org/2016.gwc-1.36
Mapping and Generating Classifiers using an Open Chinese Ontology
In languages such as Chinese, classifiers (CLs) play a central role in the quantification of noun-phrases. This can be a problem when generating text from input that does not specify the classifier, as in machine translation (MT) from English to Chinese. Many solutions to this problem rely on dictionaries of noun-CL pairs. However, there is no open large-scale machine-tractable dictionary of noun-CL associations. Many published resources exist, but they tend to focus on how a CL is used (e.g. what kinds of nouns can be used with it, or what features seem to be selected by each CL). In fact, since nouns are open class words, producing an exhaustive definite list of noun-CL associations is not possible, since it would quickly get out of date. Our work tries to address this problem by providing an algorithm for automatic building of a frequency based dictionary of noun-CL pairs, mapped to concepts in the Chinese Open Wordnet (Wang and Bond, 2013), an open machine-tractable dictionary for Chinese. All results will be released under an open license.
false
[]
[]
null
null
null
This research was supported in part by the MOE Tier 2 grant That's what you meant: a Rich Representation for Manipulation of Meaning (MOE ARC41/13).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bernier-colborne-drouin-2016-evaluation-distributional
https://aclanthology.org/W16-4707
Evaluation of distributional semantic models: a holistic approach
We investigate how both model-related factors and application-related factors affect the accuracy of distributional semantic models (DSMs) in the context of specialized lexicography, and how these factors interact. This holistic approach to the evaluation of DSMs provides valuable guidelines for the use of these models and insight into the kind of semantic information they capture.
false
[]
[]
null
null
null
This work was supported by the Social Sciences and Humanities Research Council (SSHRC) of Canada.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tong-etal-2021-learning
https://aclanthology.org/2021.acl-long.487
Learning from Miscellaneous Other-Class Words for Few-shot Named Entity Recognition
Few-shot Named Entity Recognition (NER) exploits only a handful of annotations to identify and classify named entity mentions. Prototypical network shows superior performance on few-shot NER. However, existing prototypical methods fail to differentiate rich semantics in other-class words, which will aggravate overfitting under few-shot scenario. To address the issue, we propose a novel model, Mining Undefined Classes from Other-class (MUCO), that can automatically induce different undefined classes from the other class to improve few-shot NER. With these extra-labeled undefined classes, our method will improve the discriminative ability of NER classifier and enhance the understanding of predefined classes with stand-by semantic knowledge. Experimental results demonstrate that our model outperforms five state-of-the-art models in both 1-shot and 5-shot settings on four NER benchmarks. We will release the code upon acceptance. The source code is released on https://github.com/shuaiwa16/OtherClassNER.git.
false
[]
[]
null
null
null
This work is supported by the National Key Research and Development Program of China (2018YFB1005100 and 2018YFB1005101) and NSFC Key Project (U1736204). This work is supported by National Engineering Laboratory for Cyberlearning and Intelligent Technology, Beijing Key Lab of Networked Multimedia and the Institute for Guo Qiang, Tsinghua University (2019GQB0003). This research was conducted in collaboration with SenseTime. This work is partially supported by A*STAR through the Industry Alignment Fund -Industry Collaboration Projects Grant, by NTU (NTU-ACE2020-01) and Ministry of Education (RG96/20).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
krishna-iyyer-2019-generating
https://aclanthology.org/P19-1224
Generating Question-Answer Hierarchies
The process of knowledge acquisition can be viewed as a question-answer game between a student and a teacher in which the student typically starts by asking broad, open-ended questions before drilling down into specifics (Hintikka, 1981; Hakkarainen and Sintonen, 2002). This pedagogical perspective motivates a new way of representing documents. In this paper, we present SQUASH (Specificity-controlled Question-Answer Hierarchies), a novel and challenging text generation task that converts an input document into a hierarchy of question-answer pairs. Users can click on high-level questions (e.g., "Why did Frodo leave the Fellowship?") to reveal related but more specific questions (e.g., "Who did Frodo leave with?"). Using a question taxonomy loosely based on Lehnert (1978), we classify questions in existing reading comprehension datasets as either GENERAL or SPECIFIC. We then use these labels as input to a pipelined system centered around a conditional neural language model. We extensively evaluate the quality of the generated QA hierarchies through crowdsourced experiments and report strong empirical results.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their insightful comments. In addition, we thank Nader Akoury, Ari Kobren, Tu Vu and the other members of the UMass NLP group for helpful comments on earlier drafts of the paper and suggestions on the paper's presentation. This work was supported in part by research awards from the Allen Institute for Artificial Intelligence and Adobe Research.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
clark-curran-2006-partial
https://aclanthology.org/N06-1019
Partial Training for a Lexicalized-Grammar Parser
We propose a solution to the annotation bottleneck for statistical parsing, by exploiting the lexicalized nature of Combinatory Categorial Grammar (CCG). The parsing model uses predicate-argument dependencies for training, which are derived from sequences of CCG lexical categories rather than full derivations. A simple method is used for extracting dependencies from lexical category sequences, resulting in high precision, yet incomplete and noisy data. The dependency parsing model of Clark and Curran (2004b) is extended to exploit this partial training data. Remarkably, the accuracy of the parser trained on data derived from category sequences alone is only 1.3% worse in terms of F-score than the parser trained on complete dependency structures.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
alshenaifi-azmi-2020-faheem
https://aclanthology.org/2020.wanlp-1.29
Faheem at NADI shared task: Identifying the dialect of Arabic tweet
This paper describes Faheem (adj. of understand), our submission to NADI (Nuanced Arabic Dialect Identification) shared task. With so many Arabic dialects being understudied due to the scarcity of the resources, the objective is to identify the Arabic dialect used in the tweet, at the country-level. We propose a machine learning approach where we utilize word-level ngram (n = 1 to 3) and tf-idf features and feed them to six different classifiers. We train the system using a data set of 21,000 tweets-provided by the organizers-covering twenty-one Arab countries. Our top performing classifiers are: Logistic Regression, Support Vector Machines, and Multinomial Naïve Bayes (MNB). We achieved our best result of macro-F1 = 0.151 using the MNB classifier.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nguyen-etal-2021-trankit
https://aclanthology.org/2021.eacl-demos.10
Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing
We introduce Trankit, a lightweight Transformer-based Toolkit for multilingual Natural Language Processing (NLP). It provides a trainable pipeline for fundamental NLP tasks over 100 languages, and 90 pretrained pipelines for 56 languages. Built on a state-of-the-art pretrained language model, Trankit significantly outperforms prior multilingual NLP pipelines over sentence segmentation, part-of-speech tagging, morphological feature tagging, and dependency parsing while maintaining competitive performance for tokenization, multi-word token expansion, and lemmatization over 90 Universal Dependencies treebanks. Despite the use of a large pretrained transformer, our toolkit is still efficient in memory usage and speed. This is achieved by our novel plugand-play mechanism with Adapters where a multilingual pretrained transformer is shared across pipelines for different languages. Our toolkit along with pretrained models and code are publicly available at: https://github.com/nlp-uoregon/trankit.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kurematsu-1993-automatic
https://aclanthology.org/1993.mtsummit-1.8
Automatic Speech Translation at ATR
Since Graham Bell first invented the telephone in 1876, it has become an indispensable means for communications. We can easily communicate with others domestically as well as internationally. However, another great barrier has not been overcome yet; communications between people speaking different languages. An interpreting telephone system, or a speech translation system, will solve this problem which has been annoying human-being from the beginning of their history. The first effort was made by NEC; they demonstrated a system in Telecom'83 held in Geneva. In 1987, British Telecom Research Laboratories implemented an experimental system which was based on fixed phrase translation [Stentiford] . At Carnegie-Mellon University (CMU), a speech translation system was developed on doctor patient domain in 1988 [Saitoh] . These systems were small and simple, but showed the possibility of speech translation.
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
van-der-meer-2013-dqf
https://aclanthology.org/2013.tc-1.8
The DQF - industry best-practices, metrics and benchmarks for translation quality estimation
null
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
louis-nenkova-2014-verbose
https://aclanthology.org/E14-1067
Verbose, Laconic or Just Right: A Simple Computational Model of Content Appropriateness under Length Constraints
Length constraints impose implicit requirements on the type of content that can be included in a text. Here we propose the first model to computationally assess if a text deviates from these requirements. Specifically, our model predicts the appropriate length for texts based on content types present in a snippet of constant length. We consider a range of features to approximate content type, including syntactic phrasing, constituent compression probability, presence of named entities, sentence specificity and intersentence continuity. Weights for these features are learned using a corpus of summaries written by experts and on high quality journalistic writing. During test time, the difference between actual and predicted length allows us to quantify text verbosity. We use data from manual evaluation of summarization systems to assess the verbosity scores produced by our model. We show that the automatic verbosity scores are significantly negatively correlated with manual content quality scores given to the summaries.
false
[]
[]
null
null
null
This work was partially supported by a NSF CA-REER 0953445 award. We also thank the anonymous reviewers for their comments.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
valkenier-etal-2011-psycho
https://aclanthology.org/W11-4630
Psycho-acoustically motivated formant feature extraction
Psycho-acoustical research investigates how human listeners are able to separate sounds that stem from different sources. This ability might be one of the reasons that human speech processing is robust to noise but methods that exploit this are, to our knowledge, not used in systems for automatic formant extraction or in modern speech recognition systems. Therefore we investigate the possibility to use harmonics that are consistent with a harmonic complex as the basis for a robust formant extraction algorithm. With this new method we aim to overcome limitations of most modern automatic speech recognition systems by taking advantage of the robustness of harmonics at formant positions. We tested the effectiveness of our formant detection algorithm on Hillenbrand's annotated American English Vowels dataset and found that in pink noise the results are competitive with existing systems. Furthermore, our method needs no training and is implementable as a realtime system which contrasts many of the existing systems.
false
[]
[]
null
null
null
BV was supported by STW grant DTF 7459, JDK was supported by NWO grant 634.000.432. The authors would like to thank Odette Scharenborg, Jennifer Spenader, Maria Niessen, Hedde van de Vooren and three anonymous reviewers for their useful comments on earlier versions of this manuscript.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
popescu-2009-person
https://aclanthology.org/D09-1104
Person Cross Document Coreference with Name Perplexity Estimates
The Person Cross Document Coreference systems depend on the context for making decisions on the possible coreferences between person name mentions. The amount of context required is a parameter that varies from corpora to corpora, which makes it difficult for usual disambiguation methods. In this paper we show that the amount of context required can be dynamically controlled on the basis of the prior probabilities of coreference and we present a new statistical model for the computation of these probabilities. The experiment we carried on a news corpus proves that the prior probabilities of coreference are an important factor for maintaining a good balance between precision and recall for cross document coreference systems.
false
[]
[]
null
null
null
The corpus used in this paper is Adige500k, a seven-year news corpus from an Italian local newspaper. The author thanks to all the people involved in the construction of Adige500k.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rita-etal-2020-lazimpa
https://aclanthology.org/2020.conll-1.26
``LazImpa'': Lazy and Impatient neural agents learn to communicate efficiently
Previous work has shown that artificial neural agents naturally develop surprisingly nonefficient codes. This is illustrated by the fact that in a referential game involving a speaker and a listener neural networks optimizing accurate transmission over a discrete channel, the emergent messages fail to achieve an optimal length. Furthermore, frequent messages tend to be longer than infrequent ones, a pattern contrary to the Zipf Law of Abbreviation (ZLA) observed in all natural languages. Here, we show that near-optimal and ZLA-compatible messages can emerge, but only if both the speaker and the listener are modified. We hence introduce a new communication system, "LazImpa", where the speaker is made increasingly lazy, i.e., avoids long messages, and the listener impatient, i.e., seeks to guess the intended content as soon as possible.
false
[]
[]
null
null
null
We would like to thank Emmanuel Chemla, Marco Baroni, Eugene Kharitonov, and the anonymous reviewers for helpful comments and suggestions.This work was funded in part by the European Research Council (ERC-2011-AdG-295810 BOOT-PHON), the Agence Nationale pour la Recherche (ANR-17-EURE-0017 Frontcog, ANR-10-IDEX-0001-02 PSL*, ANR-19-P3IA-0001 PRAIRIE 3IA Institute) and grants from CIFAR (Learning in Machines and Brains), Facebook AI Research (Research Grant), Google (Faculty Research Award), Microsoft Research (Azure Credits and Grant), and Amazon Web Service (AWS Research Credits).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
weller-seppi-2020-rjokes
https://aclanthology.org/2020.lrec-1.753
The rJokes Dataset: a Large Scale Humor Collection
Humor is a complicated language phenomenon that depends upon many factors, including topic, date, and recipient. Because of this variation, it can be hard to determine what exactly makes a joke humorous, leading to difficulties in joke identification and related tasks. Furthermore, current humor datasets are lacking in both joke variety and size, with almost all current datasets having less than 100k jokes. In order to alleviate this issue we compile a collection of over 550,000 jokes posted over an 11 year period on the Reddit r/Jokes subreddit (an online forum), providing a large scale humor dataset that can easily be used for a myriad of tasks. This dataset also provides quantitative metrics for the level of humor in each joke, as determined by subreddit user feedback. We explore this dataset through the years, examining basic statistics, most mentioned entities, and sentiment proportions. We also introduce this dataset as a task for future work, where models learn to predict the level of humor in a joke. On that task we provide strong state-of-the-art baseline models and show room for future improvement. We hope that this dataset will not only help those researching computational humor, but also help social scientists who seek to understand popular culture through humor.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bicici-van-genabith-2013-cngl-grading
https://aclanthology.org/S13-2098
CNGL: Grading Student Answers by Acts of Translation
We invent referential translation machines (RTMs), a computational model for identifying the translation acts between any two data sets with respect to a reference corpus selected in the same domain, which can be used for automatically grading student answers. RTMs make quality and semantic similarity judgments possible by using retrieved relevant training data as interpretants for reaching shared semantics. An MTPP (machine translation performance predictor) model derives features measuring the closeness of the test sentences to the training data, the difficulty of translating them, and the presence of acts of translation involved. We view question answering as translation from the question to the answer, from the question to the reference answer, from the answer to the reference answer, or from the question and the answer to the reference answer. Each view is modeled by an RTM model, giving us a new perspective on the ternary relationship between the question, the answer, and the reference answer. We show that all RTM models contribute and a prediction model based on all four perspectives performs the best. Our prediction model is the 2nd best system on some tasks according to the official results of the Student Response Analysis (SRA 2013) challenge.
true
[]
[]
Quality Education
null
null
This work is supported in part by SFI (07/CE/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Dublin City University and in part by the European Commission through the QTLaunchPad FP7 project (No: 296347). We also thank the SFI/HEA Irish Centre for High-End Computing (ICHEC) for the provision of computational facilities and support.
2013
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
nyberg-etal-2002-deriving
https://link.springer.com/chapter/10.1007/3-540-45820-4_15
Deriving semantic knowledge from descriptive texts using an MT system
null
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
morales-etal-2018-linguistically
https://aclanthology.org/W18-0602
A Linguistically-Informed Fusion Approach for Multimodal Depression Detection
Automated depression detection is inherently a multimodal problem. Therefore, it is critical that researchers investigate fusion techniques for multimodal design. This paper presents the first ever comprehensive study of fusion techniques for depression detection. In addition, we present novel linguistically-motivated fusion techniques, which we find outperform existing approaches.
true
[]
[]
Good Health and Well-Being
null
null
null
2018
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jain-mausam-2016-knowledge
https://aclanthology.org/N16-1011
Knowledge-Guided Linguistic Rewrites for Inference Rule Verification
A corpus of inference rules between a pair of relation phrases is typically generated using the statistical overlap of argument-pairs associated with the relations (e.g., PATTY, CLEAN). We investigate knowledge-guided linguistic rewrites as a secondary source of evidence and find that they can vastly improve the quality of inference rule corpora, obtaining 27 to 33 point precision improvement while retaining substantial recall. The facts inferred using cleaned inference rules are 29-32 points more accurate.
false
[]
[]
null
null
null
Acknowledgments: We thank Ashwini Vaidya and the anonymous reviewers for their helpful suggestions and feedback. We thank Abhishek, Aditya, Ankit, Jatin, Kabir, and Shikhar for helping with the data annotation. This work was supported by Google language understanding and knowledge discovery focused research grants to Mausam, a KISTI grant and a Bloomberg grant also to Mausam. Prachi was supported by a TCS fellowship.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sokolova-etal-2008-telling
https://aclanthology.org/I08-1034
The Telling Tail: Signals of Success in Electronic Negotiation Texts
We analyze the linguistic behaviour of participants in bilateral electronic negotiations, and discover that particular language characteristics are in contrast with face-to-face negotiations. Language patterns in the later part of electronic negotiation are highly indicative of the successful or unsuccessful outcome of the process, whereas in face-to-face negotiations, the first part of the negotiation is more useful for predicting the outcome. We formulate our problem in terms of text classification on negotiation segments of different sizes. The data are represented by a variety of linguistic features that capture the gist of the discussion: negotiation- or strategy-related words. We show that, as we consider ever smaller final segments of a negotiation transcript, the negotiation-related words become more indicative of the negotiation outcome, and give predictions with higher Accuracy than larger segments from the beginning of the process.
true
[]
[]
Partnership for the goals
null
null
Partial support for this work came from the Natural Sciences and Engineering Research Council of Canada.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
jana-biemann-2021-investigation
https://aclanthology.org/2021.privatenlp-1.4
An Investigation towards Differentially Private Sequence Tagging in a Federated Framework
To build machine learning-based applications for sensitive domains like medical, legal, etc. where the digitized text contains private information, anonymization of text is required for preserving privacy. Sequence tagging, e.g. as used for Named Entity Recognition (NER), can help to detect private information. However, to train sequence tagging models, a sufficient amount of labeled data are required but for privacy-sensitive domains, such labeled data also can not be shared directly. In this paper, we investigate the applicability of a privacy-preserving framework for sequence tagging tasks, specifically NER. Hence, we analyze a framework for the NER task, which incorporates two levels of privacy protection. Firstly, we deploy a federated learning (FL) framework where the labeled data are neither shared with the centralized server nor with the peer clients. Secondly, we apply differential privacy (DP) while the models are being trained in each client instance. While both privacy measures are suitable for privacy-aware models, their combination results in unstable models. To our knowledge, this is the first study of its kind on privacy-aware sequence tagging models.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This research was funded by the German Federal Ministry of Education and Research (BMBF) as part of the HILANO project, ID 01IS18085C.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
fang-etal-2020-video2commonsense
https://aclanthology.org/2020.emnlp-main.61
Video2Commonsense: Generating Commonsense Descriptions to Enrich Video Captioning
Captioning is a crucial and challenging task for video understanding. In videos that involve active agents such as humans, the agent's actions can bring about myriad changes in the scene. Observable changes such as movements, manipulations, and transformations of the objects in the scene, are reflected in conventional video captioning. Unlike images, actions in videos are also inherently linked to social aspects such as intentions (why the action is taking place), effects (what changes due to the action), and attributes that describe the agent. Thus for video understanding, such as when captioning videos or when answering questions about videos, one must have an understanding of these commonsense aspects. We present the first work on generating commonsense captions directly from videos, to describe latent aspects such as intentions, effects, and attributes. We present a new dataset "Video-to-Commonsense (V2C)" that contains ∼ 9k videos of human agents performing various actions, annotated with 3 types of commonsense descriptions. Additionally we explore the use of open-ended video-based commonsense question answering (V2C-QA) as a way to enrich our captions. Both the generation task and the QA task can be used to enrich video captions.
false
[]
[]
null
null
null
The authors acknowledge support from the NSF Robust Intelligence Program project #1816039, the DARPA KAIROS program (LESTAT project), the DARPA SAIL-ON program, and ONR award N00014-20-1-2332. ZF, TG, YY thank the organizers and the participants of the Telluride Neuromorphic Cognition Workshop, especially the Machine Common Sense (MCS) group.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hossain-etal-2019-cnl
https://aclanthology.org/U19-1017
CNL-ER: A Controlled Natural Language for Specifying and Verbalising Entity Relationship Models
The first step towards designing an information system is conceptual modelling where domain experts and knowledge engineers identify the necessary information together to build an information system. Entity relationship modelling is one of the most popular conceptual modelling techniques that represents an information system in terms of entities, attributes and relationships. Entity relationship models are constructed graphically but are often difficult to understand by domain experts. To overcome this problem, we suggest to verbalise these models in a controlled natural language. In this paper, we present CNL-ER, a controlled natural language for specifying and verbalising entity relationship (ER) models that not only solves the verbalisation problem for these models but also provides the benefits of automatic verification and validation, and semantic round-tripping which makes the communication process transparent between the domain experts and the knowledge engineers.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kumar-etal-2015-error
https://aclanthology.org/2015.mtsummit-papers.18
Error-tolerant speech-to-speech translation
Recent efforts to improve two-way speech-to-speech translation (S2S) systems have focused on developing error detection and interactive error recovery capabilities. This article describes our current work on developing an eyes-free English-Iraqi Arabic S2S system that detects ASR errors and attempts to resolve them by eliciting user feedback. Here, we report improvements in performance across multiple system components (ASR, MT and error detection). We also present a controlled evaluation of the S2S system that quantifies the effect of error recovery on user effort and conversational goal achievement.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
piccioni-zanchetta-2004-xterm
http://www.lrec-conf.org/proceedings/lrec2004/pdf/588.pdf
XTERM: A Flexible Standard-Compliant XML-Based Termbase Management System
This paper introduces XTerm, a Termbase management system (TBMS) currently under development at the Terminology Center of the School for Interpreters and Translators of the University of Bologna. The system is designed to be ISO and XML compliant and to provide a friendly environment for the insertion and visualization of terminological data. It is also open to the future evolution of international standards since it does not rely on a closed set of hard-coded data representation models. In this paper we will first introduce the project "Languages and Productive Activities", then we will outline the main features of the XTerm TBMS: XTerm.NET, the graphical user interface (the main tool of the terminographer), XTerm.portal, the web application that provides online access to the termbase and two tools that provide innovative functionalities to the whole system: CARMA and COSY Generator.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
beigman-klebanov-etal-2010-vocabulary
https://aclanthology.org/P10-2047
Vocabulary Choice as an Indicator of Perspective
We establish the following characteristics of the task of perspective classification: (a) using term frequencies in a document does not improve classification achieved with absence/presence features; (b) for datasets allowing the relevant comparisons, a small number of top features is found to be as effective as the full feature set and indispensable for the best achieved performance, testifying to the existence of perspective-specific keywords. We relate our findings to research on word frequency distributions and to discourse analytic studies of perspective.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-yang-2021-structure
https://aclanthology.org/2021.naacl-main.109
Structure-Aware Abstractive Conversation Summarization via Discourse and Action Graphs
Abstractive conversation summarization has received much attention recently. However, these generated summaries often suffer from insufficient, redundant, or incorrect content, largely due to the unstructured and complex characteristics of human-human interactions. To this end, we propose to explicitly model the rich structures in conversations for more precise and accurate conversation summarization, by first incorporating discourse relations between utterances and action triples ("WHO-DOING-WHAT") in utterances through structured graphs to better encode conversations, and then designing a multi-granularity decoder to generate summaries by combining all levels of information. Experiments show that our proposed models outperform state-of-the-art methods and generalize well in other domains in terms of both automatic evaluations and human judgments. We have publicly released our code at https://github.com/GT-SALT/Structure-Aware-BART.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their helpful comments, and the members of Georgia Tech SALT group for their feedback. This work is supported in part by grants from Google, Amazon and Salesforce.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
san-segundo-etal-2001-telephone
https://aclanthology.org/W01-1619
A Telephone-Based Railway Information System for Spanish: Development of a Methodology for Spoken Dialogue Design
This methodology is similar to the Life-Cycle Model presented in (Bernsen, 1998) and (www.disc2.dk), but we incorporate the step "design by observation" where human-human interactions are analysed and we present measures to evaluate the different design alternatives at every step of the methodology.
false
[]
[]
null
null
null
null
2001
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-clark-2010-fast
https://aclanthology.org/D10-1082
A Fast Decoder for Joint Word Segmentation and POS-Tagging Using a Single Discriminative Model
We show that the standard beam-search algorithm can be used as an efficient decoder for the global linear model of Zhang and Clark (2008) for joint word segmentation and POS-tagging, achieving a significant speed improvement. Such decoding is enabled by: (1) separating full word features from partial word features so that feature templates can be instantiated incrementally, according to whether the current character is separated or appended; (2) deciding the POS-tag of a potential word when its first character is processed. Early-update is used with perceptron training so that the linear model gives a high score to a correct partial candidate as well as a full output. Effective scoring of partial structures allows the decoder to give high accuracy with a small beam-size of 16. In our 10-fold crossvalidation experiments with the Chinese Treebank, our system performed over 10 times as fast as Zhang and Clark (2008) with little accuracy loss. The accuracy of our system on the standard CTB 5 test was competitive with the best in the literature.
false
[]
[]
null
null
null
We thank Canasai Kruengkrai for discussion on efficiency issues, and the anonymous reviewers for their suggestions. Yue Zhang and Stephen Clark are supported by the European Union Seventh Framework Programme (FP7-ICT-2009-4) under grant agreement no. 247762.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
eskander-etal-2013-automatic-correction
https://aclanthology.org/W13-2301
Automatic Correction and Extension of Morphological Annotations
For languages with complex morphologies, limited resources and tools, and/or lack of standard grammars, developing annotated resources can be a challenging task. Annotated resources developed under time/money constraints for such languages tend to tradeoff depth of representation with degree of noise. We present two methods for automatic correction and extension of morphological annotations, and demonstrate their success on three divergent Egyptian Arabic corpora.
false
[]
[]
null
null
null
This paper is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under contracts No. HR0011-12-C-0014 and HR0011-11-C-0145. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of DARPA. We also would like to thank Emad Mohamed and Kemal Oflazer for providing us with the CMUEAC corpus. We thank Ryan Roth for help with MADA-ARZ. Finally, we thank Owen Rambow, Mona Diab and Warren Churchill for helpful discussions.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sokolova-schramm-2011-building
https://aclanthology.org/R11-1111
Building a Patient-based Ontology for User-written Web Messages
We introduce an ontology that is representative of health discussions and vocabulary used by the general public. The ontology structure is built upon general categories of information that patients use when describing their health in clinical encounters. The pilot study shows that the general structure makes the ontology useful in text mining of social networking web sites.
true
[]
[]
Good Health and Well-Being
null
null
This work is in part funded by a NSERC Discovery grant available to the first author and The Ottawa Hospital Academic Medical Organization to the second author.
2011
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
l-2014-keynote
https://aclanthology.org/W14-5110
Keynote Lecture 2: Text Analysis for identifying Entities and their mentions in Indian languages
The talk deals with the analysis of text at syntactic-semantic level to identify a common feature set which can work across various Indian languages for recognizing named entities and their mentions. The development of corpora and the method adopted to develop each module is discussed. The talk includes the evaluation of the common feature set using a statistical method which gives acceptable levels of recall and precision.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nn-2007-briefly-noted
https://aclanthology.org/J07-4008
Briefly Noted/Publications Received
This comprehensive NLP textbook is strongly algorithm-oriented and designed for talented computer programmers who might or might not be linguists. The book occupies a market niche in between that of Jurafsky and Martin (2008) and my own humble effort (Covington 1994); it resembles the latter in approach and the former in scope. Perhaps more than either of those, Nugues's book is also useful to working professionals as a handbook of techniques and algorithms. Everything is here, everything, that is, except speech synthesis and recognition; phonetics receives only a four-page summary. Those wanting to start an NLP course by covering phonetics in some depth should consider Coleman (2005) as well as Jurafsky and Martin (2008). After a brief overview, Nugues covers corpus linguistics, markup languages, text statistics, morphology, part-of-speech tagging (two ways), parsing (several ways), semantics, and discourse. "Neat" and "scruffy" approaches are deftly interleaved and compared. Unification-based grammar, event semantics, and tools such as WordNet and the Penn Treebank are covered in some detail. The syntax section includes dependency grammar and even the very recent work of Nivre (2006), as well as partial parsing and statistical approaches. Many important algorithms are presented ready to run, or nearly so, as Prolog or Perl code. If, for example, you want to build a Cocke-Kasami-Younger parser, this is the place to look for directions. Explanations are lucid and to-the-point. Here is an example. Nugues is discussing the fact that, if you sample a corpus for n-grams, some will not occur in your sample at all, but it would be a mistake to consider the unseen ones to be infinitely rare (frequency 0). Thus the counts need to be adjusted: Good-Turing estimation ... reestimates the counts of the n-grams observed in the corpus by discounting them, and it shifts the probability mass it has shaved to the unseen bigrams.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fischer-1997-formal
https://aclanthology.org/W97-0804
Formal redundancy and consistency checking rules for the lexical database WordNet 1.5
In a manually built-up semantic net in which the position of the concepts in the net is determined not automatically by the concept definitions, but rather by the links coded by the lexicographers, the formal properties of the encoded attributes and relations provide necessary but not sufficient conditions to support maintenance of internal consistency and avoidance of redundancy. According to our experience the potential of this methodology has not yet been fully exploited due to lack of understanding of applicable formal rules, or due to inflexibility of available software tools. Based on a more comprehensive inquiry performed on the lexical database WordNet 1.5, this paper presents a selection of pertinent checking rules and the results of their application to WordNet 1.5. Transferable insights are: 1. Semantic relations which are closely related but differ in a checkable property should be differentiated. 2. Inferable relations, such as the transitive closure of a hierarchical relation or semantic relations induced by lexical ones, need to be taken into account when checking real relations, i.e. directly stored relations. 3. A semantic net needs proper representation of lexical gaps. A disjunctive hypernym, implemented as a set of hypernyms, is considered harmful.
false
[]
[]
null
null
null
I am indebted to Melina Alexa and John Bateman for encouraging this work, and to them both and Wiebke Möhr, Renato Reinau, Lothar Rostek, and Ingrid Schmidt for valuable help to improve this paper.
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
barrena-etal-2016-alleviating
https://aclanthology.org/P16-1179
Alleviating Poor Context with Background Knowledge for Named Entity Disambiguation
Named Entity Disambiguation (NED) algorithms disambiguate mentions of named entities with respect to a knowledge-base, but sometimes the context might be poor or misleading. In this paper we introduce the acquisition of two kinds of background information to alleviate that problem: entity similarity and selectional preferences for syntactic positions. We show, using a generative Naïve Bayes model for NED, that the additional sources of context are complementary, and improve results in the CoNLL 2003 and TAC KBP DEL 2014 datasets, yielding the third best and the best results, respectively. We provide examples and analysis which show the value of the acquired background information.
false
[]
[]
null
null
null
We thank the reviewers for their suggestions. This work was partially funded by MINECO (TUNER project, TIN2015-65308-C5-1-R). The IXA group
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xie-etal-2021-zjuklab
https://aclanthology.org/2021.semeval-1.108
ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning
This paper presents our systems for the three Subtasks of SemEval Task4: Reading Comprehension of Abstract Meaning (ReCAM). We explain the algorithms used to learn our models and the process of tuning the algorithms and selecting the best model. Inspired by the similarity of the ReCAM task and language pre-training, we propose a simple yet effective technique, namely, negative augmentation with language model. Evaluation results demonstrate the effectiveness of our proposed approach. Our models achieve the 4th rank on both official test sets of Subtask 1 and Subtask 2 with an accuracy of 87.9% and an accuracy of 92.8%, respectively. We further conduct comprehensive model analysis and observe interesting error cases, which may promote future research.
false
[]
[]
null
null
null
We want to express gratitude to the anonymous reviewers for their hard work and kind comments. This work is funded by 2018YFB1402800/NSFC91846204/NSFCU19B2027.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gimenez-marquez-2006-low
https://aclanthology.org/P06-2037
Low-Cost Enrichment of Spanish WordNet with Automatically Translated Glosses: Combining General and Specialized Models
This paper studies the enrichment of Spanish WordNet with synset glosses automatically obtained from the English Word-Net glosses using a phrase-based Statistical Machine Translation system. We construct the English-Spanish translation system from a parallel corpus of proceedings of the European Parliament, and study how to adapt statistical models to the domain of dictionary definitions. We build specialized language and translation models from a small set of parallel definitions and experiment with robust manners to combine them. A statistically significant increase in performance is obtained. The best system is finally used to generate a definition for all Spanish synsets, which are currently ready for a manual revision. As a complementary issue, we analyze the impact of the amount of in-domain data needed to improve a system trained entirely on out-of-domain data.
false
[]
[]
null
null
null
This research has been funded by the Spanish Ministry of Science and Technology (ALIADO TIC2002-04447-C02) and the Spanish Ministry of Education and Science (TRANGRAM, TIN2004-07925-C03-02). Our research group, TALP Research Center, is recognized as a Quality Research Group (2001 SGR 00254) by DURSI, the Research Department of the Catalan Government. Authors are grateful to Patrik Lambert for providing us with the implementation of the Simplex Method, and specially to German Rigau for motivating in its origin all this work.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vijay-etal-2018-corpus
https://aclanthology.org/N18-4018
Corpus Creation and Emotion Prediction for Hindi-English Code-Mixed Social Media Text
Emotion Prediction is a Natural Language Processing (NLP) task dealing with detection and classification of emotions in various monolingual and bilingual texts. While some work has been done on code-mixed social media text and in emotion prediction separately, our work is the first attempt which aims at identifying the emotion associated with Hindi-English code-mixed social media text. In this paper, we analyze the problem of emotion identification in code-mixed content and present a Hindi-English code-mixed corpus extracted from twitter and annotated with the associated emotion. For every tweet in the dataset, we annotate the source language of all the words present, and also the causal language of the expressed emotion. Finally, we propose a supervised classification system which uses various machine learning techniques for detecting the emotion associated with the text using a variety of character level, word level, and lexicon based features.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schafer-burtenshaw-2019-offence
https://aclanthology.org/R19-1125
Offence in Dialogues: A Corpus-Based Study
In recent years an increasing number of analyses of offensive language has been published, however, dealing mainly with the automatic detection and classification of isolated instances. In this paper we aim to understand the impact of offensive messages in online conversations diachronically, and in particular the change in offensiveness of dialogue turns. In turn, we aim to measure the progression of offence level as well as its direction, for example, whether a conversation is escalating or declining in offence. We present our method of extracting linear dialogues from tree-structured conversations in social media data and make our code publicly available. Furthermore, we discuss methods to analyse this dataset through changes in discourse offensiveness. Our paper includes two main contributions; first, using a neural network to measure the level of offensiveness in conversations; and second, the analysis of conversations around offensive comments using decoupling functions.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
minkov-cohen-2012-graph
https://aclanthology.org/W12-4104
Graph Based Similarity Measures for Synonym Extraction from Parsed Text
We learn graph-based similarity measures for the task of extracting word synonyms from a corpus of parsed text. A constrained graph walk variant that has been successfully applied in the past in similar settings is shown to outperform a state-of-the-art syntactic vector-based approach on this task. Further, we show that learning specialized similarity measures for different word types is advantageous.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yin-neubig-2018-tranx
https://aclanthology.org/D18-2002
TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation
We present TRANX, a transition-based neural semantic parser that maps natural language (NL) utterances into formal meaning representations (MRs). TRANX uses a transition system based on the abstract syntax description language for the target MR, which gives it two major advantages: (1) it is highly accurate, using information from the syntax of the target MR to constrain the output space and model the information flow, and (2) it is highly generalizable, and can easily be applied to new types of MR by just writing a new abstract syntax description corresponding to the allowable structures in the MR. Experiments on four different semantic parsing and code generation tasks show that our system is generalizable, extensible, and effective, registering strong results compared to existing neural semantic parsers.
false
[]
[]
null
null
null
This material is based upon work supported by the National Science Foundation under Grant No. 1815287.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
grishman-1976-survey
https://aclanthology.org/J76-2006
A Survey of Syntactic Analysis Procedures for Natural Language
This survey was prepared under contract No. N00014-67A-0467-0032 with the Office of Naval Research, and was originally issued as Report No. NSO-8 of the Courant Institute of Mathematical Sciences, New York University.
false
[]
[]
null
null
null
null
1976
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bai-etal-2013-translating
https://aclanthology.org/I13-1103
Translating Chinese Unknown Words by Automatically Acquired Templates
In this paper, we present a translation template model to translate Chinese unknown words. The model exploits translation templates, which are extracted automatically from a word-aligned parallel corpus, to translate unknown words. The translation templates are designed in accordance with the structure of unknown words. When an unknown word is detected during translation, the model applies translation templates to the word to get a set of matched templates, and then translates the word into a set of suggested translations. Our experiment results demonstrate that the translations suggested by the unknown word translation template model significantly improve the performance of the Moses machine translation system.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wu-1995-trainable
https://aclanthology.org/W95-0106
Trainable Coarse Bilingual Grammars for Parallel Text Bracketing
We describe two new strategies to automatic bracketing of parallel corpora, with particular application to languages where prior grammar resources are scarce: (1) coarse bilingual grammars, and (2) unsupervised training of such grammars via EM (expectation-maximization). Both methods build upon a formalism we recently introduced called stochastic inversion transduction grammars. The first approach borrows a coarse monolingual grammar into our bilingual formalism, in order to transfer knowledge of one language's constraints to the task of bracketing the texts in both languages. The second approach generalizes the inside-outside algorithm to adjust the grammar parameters so as to improve the likelihood of a training corpus. Preliminary experiments on parallel English-Chinese text are supportive of these strategies.
false
[]
[]
null
null
null
null
1995
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
laokulrat-etal-2018-incorporating
https://aclanthology.org/L18-1477
Incorporating Semantic Attention in Video Description Generation
Automatically generating video description is one of the approaches to enable computers to deeply understand videos, which can have a great impact and can be useful to many other applications. However, generated descriptions by computers often fail to correctly mention objects and actions appearing in the videos. This work aims to alleviate this problem by including external fine-grained visual information, which can be detected from all video frames, in the description generation model. In this paper, we propose an LSTM-based sequence-to-sequence model with semantic attention mechanism for video description generation. The model is flexible so that we can change the source of the external information without affecting the encoding and decoding parts of the model. The results show that using semantic attention to selectively focus on external fine-grained visual information can guide the system to correctly mention objects and actions in videos and have a better quality of video descriptions.
false
[]
[]
null
null
null
This paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO). We also would like to thank the anonymous reviewers for their insightful comments and suggestions, which were helpful in improving the quality of the paper.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
walker-etal-1992-case
https://aclanthology.org/C92-2122
A Case Study of Natural Language Customisation: The Practical Effects of World Knowledge
This paper proposes a methodology for the customisation of natural language interfaces to information retrieval applications. We report a field study in which we tested this methodology by customising a commercially available natural language system to a large database of sales and marketing information. We note that it was difficult to tailor the common sense reasoning capabilities of the particular system we used to our application. This study validates aspects of the suggested methodology as well as providing insights that should inform the design of natural language systems for this class of applications.
false
[]
[]
null
null
null
null
1992
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-etal-2013-tuning
https://aclanthology.org/I13-1032
Tuning SMT with a Large Number of Features via Online Feature Grouping
In this paper, we consider the tuning of statistical machine translation (SMT) models employing a large number of features. We argue that existing tuning methods for these models suffer serious sparsity problems, in which features appearing in the tuning data may not appear in the testing data and thus those features may be over tuned in the tuning data. As a result, we face an over-fitting problem, which limits the generalization abilities of the learned models. Based on our analysis, we propose a novel method based on feature grouping via OSCAR to overcome these pitfalls. Our feature grouping is implemented within an online learning framework and thus it is efficient for a large scale (both for features and examples) of learning in our scenario. Experiment results on IWSLT translation tasks show that the proposed method significantly outperforms the state of the art tuning methods.
false
[]
[]
null
null
null
We would like to thank our colleagues in both HIT and NICT for insightful discussions, and three anonymous reviewers for many invaluable comments and suggestions to improve our paper. This work is supported by National Natural Science Foundation of China (61173073, 61100093, 61073130, 61272384), and the Key Project of the National High Technology Research and Development Program of China (2011AA01A207).
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ying-etal-2021-longsumm
https://aclanthology.org/2021.sdp-1.12
LongSumm 2021: Session based automatic summarization model for scientific document
Most summarization tasks focus on generating relatively short summaries. Such a length constraint might not be appropriate when summarizing scientific work. The LongSumm task requires participants to generate long summaries for scientific documents. This task can usually be addressed with language models, but an important problem is that models like BERT are limited by memory and cannot handle a long input such as a full document. Generating a long output is also hard. In this paper, we propose a session-based automatic summarization model (SBAS) which uses a session and ensemble mechanism to generate long summaries. Our model achieves the best performance in the LongSumm task.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
null
2021
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
brugman-etal-2008-common
http://www.lrec-conf.org/proceedings/lrec2008/pdf/330_paper.pdf
A Common Multimedia Annotation Framework for Cross Linking Cultural Heritage Digital Collections
In the context of the CATCH research program that is currently carried out at a number of large Dutch cultural heritage institutions our ambition is to combine and exchange heterogeneous multimedia annotations between projects and institutions. As a first step we designed an Annotation Meta Model: a simple but powerful RDF/OWL model mainly addressing the anchoring of annotations to segments of the many different media types used in the collections of the archives, museums and libraries involved. The model includes support for the annotation of annotations themselves, and of segments of annotation values, to be able to layer annotations and in this way enable projects to process each other's annotation data as the primary data for further annotation. On the basis of AMM we designed an application programming interface for accessing annotation repositories and implemented it both as a software library and as a web service. Finally, we report on our experiences with the application of the model, API and repository when developing web applications for collection managers in cultural heritage institutions.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wilton-1973-bilingual
https://aclanthology.org/C73-1029
Bilingual Lexicography: Computer-Aided Editing
Bilingual dictionaries present special difficulties for the lexicographer who is determined to employ the computer to facilitate his editorial work. In a sense, these dictionaries include everything contained in a normal monolingual edition and a good deal more. The single-language definition dictionary is consulted as the authority on orthography, pronunciation and stress-pattern, grammar, level of formality, field of application, definitions, examples, usage and etymology. A bilingual dictionary which purports to be more than a pocket edition will treat all of these with the exception of etymology, which is not normally in the domain of the translator. In addition, it will devote itself to providing accurate translations, which necessarily presuppose an intimate acquaintance with the correct definitions in both languages. Such a dictionary is a far cry from its mediaeval ancestor, the two-language glossary, which was usually a one-way device furnishing equivalent forms for simple words and expressions in the opposite language. The modern bilingual dictionary is usually two-way, each section constituting a complete dictionary in its own right and contrived to cater for a variety of translation requirements. Yet the two sections are inextricably linked by an intricate network of translations and cross-references which guide the consulter and ensure that he does not falter when semantic equivalence fails to overlap smoothly. Since semantic equivalence is the important basic feature of bilingual dictionaries, deviations from the normal pattern will require special treatment. In closely related languages, like French and English, numerous pairs of words of common origin are only slightly, if at all, altered in their modern form (e.g. Eng. versatile/ Fr. versatile).
But the disparate development of two modes of expression in different cultural and historical environments has left a residue of such word pairs whose only similarity is in fact the visual image of the sign. Their definitions are often very remote from each other. It is yet another task of bilingual lexicography to distinguish clearly between the meanings of these deceptive cognates or "faux amis". These, then, in very brief outline, are some of the features common to all good bilingual dictionaries. The Canadian Dictionary (Dictionnaire canadien) is no exception to these general remarks. First published ten years ago under the editorship of Professor Jean-Paul Vinay at the University of Montreal, it is now undergoing a major revision and updating at the University of Victoria, still under Vinay's supervision. The new editions should see the corpus of the original version increased from 40,000 to about 100,000 entry words. The first edition was specifically tailored for the unique linguistic situation in Canada and takes into account the two main dialects of each of the official languages it represents, namely, European and Canadian French, and British and Canadian English. This, however, is a gross simplification of a complicated dialect situation fraught with all the problems associated with social and official acceptability. But it is sufficient for the purposes of this discussion to mention that a good deal of importance is attached to Canadian content in both languages, thereby adding a further unit of complexity to the material to be presented. Accordingly, in addition to the data common to all bilingual dictionaries, The Canadian Dictionary furnishes information on the dialect status of most words and expressions.
false
[]
[]
null
null
null
null
1973
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jing-2000-sentence
https://aclanthology.org/A00-1043
Sentence Reduction for Automatic Text Summarization
We present a novel sentence reduction system for automatically removing extraneous phrases from sentences that are extracted from a document for summarization purpose. The system uses multiple sources of knowledge to decide which phrases in an extracted sentence can be removed, including syntactic knowledge, context information, and statistics computed from a corpus which consists of examples written by human professionals. Reduction can significantly improve the conciseness of automatic summaries.
false
[]
[]
null
null
null
This material is based upon work supported by the National Science Foundation under Grant No. IRI 96-19124 and IRI 96-18797. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shirani-etal-2021-psed
https://aclanthology.org/2021.findings-acl.377
PSED: A Dataset for Selecting Emphasis in Presentation Slides
Emphasizing words in presentation slides allows viewers to direct their gaze to focal points without reading the entire slide, retaining their attention on the speaker. Despite many studies on automatic slide generation, few have addressed helping authors choose which words to emphasize. Motivated by this, we study the problem of choosing candidates for emphasis by introducing a new dataset containing presentation slides with a wide variety of topics. We evaluated a range of state-of-the-art models on this novel dataset by organizing a shared task and inviting multiple researchers to model emphasis in slides.
false
[]
[]
null
null
null
We thank the reviewers for their thoughtful comments and efforts towards improving our work. We also thank Andrew Greene for his help in creating the corpus.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
christodoulopoulos-etal-2012-turning
https://aclanthology.org/W12-1913
Turning the pipeline into a loop: Iterated unsupervised dependency parsing and PoS induction
Most unsupervised dependency systems rely on gold-standard Part-of-Speech (PoS) tags, either directly, using the PoS tags instead of words, or indirectly in the back-off mechanism of fully lexicalized models (Headden et al., 2009) .
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sheng-etal-2021-nice
https://aclanthology.org/2021.naacl-main.60
``Nice Try, Kiddo'': Investigating Ad Hominems in Dialogue Responses
Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining. These attacks are harmful because they propagate implicit biases and diminish a person's credibility. Since dialogue systems respond directly to user input, it is important to study ad hominems in dialogue responses. To this end, we propose categories of ad hominems, compose an annotated dataset, and build a classifier to analyze human and dialogue system responses to English Twitter posts. We specifically compare responses to Twitter topics about marginalized communities (#Black-LivesMatter, #MeToo) versus other topics (#Vegan, #WFH), because the abusive language of ad hominems could further amplify the skew of power away from marginalized populations. Furthermore, we propose a constrained decoding technique that uses salient n-gram similarity as a soft constraint for top-k sampling to reduce the amount of ad hominems generated. Our results indicate that 1) responses from both humans and DialoGPT contain more ad hominems for discussions around marginalized communities, 2) different quantities of ad hominems in the training data can influence the likelihood of generating ad hominems, and 3) we can use constrained decoding techniques to reduce ad hominems in generated dialogue responses. Post: Many are trying to co-opt and mischaracterize the #blacklivesmatter movement. We won't allow it! Resp: I hate how much of a victim complex you guys have.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
We would like to thank members of the PLUS Lab and the anonymous reviewers for the helpful feedback, and Jason Teoh for the many discussions. This paper is supported in part by NSF IIS 1927554 and by the CwC program under Con-tract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
zhou-etal-2013-statistical
https://aclanthology.org/P13-1084
Statistical Machine Translation Improves Question Retrieval in Community Question Answering via Matrix Factorization
Community question answering (CQA) has become an increasingly popular research topic. In this paper, we focus on the problem of question retrieval. Question retrieval in CQA can automatically find the most relevant and recent questions that have been solved by other users. However, the word ambiguity and word mismatch problems bring about new challenges for question retrieval in CQA. State-of-the-art approaches address these issues by implicitly expanding the queried questions with additional words or phrases using monolingual translation models. While useful, the effectiveness of these models is highly dependent on the availability of quality parallel monolingual corpora (e.g., question-answer pairs) in the absence of which they are troubled by noise issue. In this work, we propose an alternative way to address the word ambiguity and word mismatch problems by taking advantage of potentially rich semantic information drawn from other languages. Our proposed method employs statistical machine translation to improve question retrieval and enriches the question representation with the translated words from other languages via matrix factorization. Experiments conducted on a real CQA data show that our proposed approach is promising.
false
[]
[]
null
null
null
This work was supported by the National Natural Science Foundation of China (No. 61070106, No. 61272332 and No. 61202329). We thank the anonymous reviewers for their insightful comments. We also thank Dr. Gao Cong for providing the data set and Dr. Li Cai for some discussion.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gokhman-etal-2012-search
https://aclanthology.org/W12-0404
In Search of a Gold Standard in Studies of Deception
In this study, we explore several popular techniques for obtaining corpora for deception research. Through a survey of traditional as well as non-gold standard creation approaches, we identify advantages and limitations of these techniques for webbased deception detection and offer crowdsourcing as a novel avenue toward achieving a gold standard corpus. Through an indepth case study of online hotel reviews, we demonstrate the implementation of this crowdsourcing technique and illustrate its applicability to a broad array of online reviews.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This work was supported in part by National Science Foundation Grant NSCC-0904913, and the Jack Kent Cooke Foundation. We also thank the EACL reviewers for their insightful comments, suggestions and advice on various aspects of this work.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
sen-etal-2018-tempo
https://aclanthology.org/N18-1026
Tempo-Lexical Context Driven Word Embedding for Cross-Session Search Task Extraction
Search task extraction in information retrieval is the process of identifying search intents over a set of queries relating to the same topical information need. Search tasks may potentially span across multiple search sessions. Most existing research on search task extraction has focused on identifying tasks within a single session, where the notion of a session is defined by a fixed length time window. By contrast, in this work we seek to identify tasks that span across multiple sessions. To identify tasks, we conduct a global analysis of a query log in its entirety without restricting analysis to individual temporal windows. To capture inherent task semantics, we represent queries as vectors in an abstract space. We learn the embedding of query words in this space by leveraging the temporal and lexical contexts of queries. To evaluate the effectiveness of the proposed query embedding, we conduct experiments of clustering queries into tasks with a particular interest of measuring the cross-session search task recall. Results of our experiments demonstrate that task extraction effectiveness, including cross-session recall, is improved significantly with the help of our proposed method of embedding the query terms by leveraging the temporal and tempo-lexical contexts of queries.
false
[]
[]
null
null
null
This work was supported by Science Foundation Ireland as part of the ADAPT Centre (Grant No. 13/RC/2106) (www.adaptcentre.ie).
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xu-etal-2018-unpaired
https://aclanthology.org/P18-1090
Unpaired Sentiment-to-Sentiment Translation: A Cycled Reinforcement Learning Approach
The goal of sentiment-to-sentiment "translation" is to change the underlying sentiment of a sentence while keeping its content. The main challenge is the lack of parallel data. To solve this problem, we propose a cycled reinforcement learning method that enables training on unpaired data by collaboration between a neutralization module and an emotionalization module. We evaluate our approach on two review datasets, Yelp and Amazon. Experimental results show that our approach significantly outperforms the state-of-the-art systems. Especially, the proposed method substantially improves the content preservation performance. The BLEU score is improved from 1.64 to 22.46 and from 0.56 to 14.06 on the two datasets, respectively.
false
[]
[]
null
null
null
This work was supported in part by National Natural Science Foundation of China (No. 61673028), National High Technology Research and Development Program of China (863 Program, No. 2015AA015404), and the National Thousand Young Talents Program. Xu Sun is the corresponding author of this paper.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
delpeuch-preller-2014-natural
https://aclanthology.org/W14-1407
From Natural Language to RDF Graphs with Pregroups
We define an algorithm translating natural language sentences to the formal syntax of RDF, an existential conjunctive logic widely used on the Semantic Web. Our translation is based on pregroup grammars, an efficient type-logical grammatical framework with a transparent syntax-semantics interface. We introduce a restricted notion of side effects in the semantic category of finitely generated free semimodules over 0, 1 to that end. The translation gives an intensional counterpart to previous extensional models. We establish a one-to-one correspondence between extensional models and RDF models such that satisfaction is preserved. Our translation encompasses the expressivity of the target language and supports complex linguistic constructions like relative clauses and unbounded dependencies.
false
[]
[]
null
null
null
This work was supported by the École Normale Supérieure and the LIRMM. The first author wishes to thank David Naccache, Alain Lecomte, Antoine Amarilli, Hugo Vanneuville and both authors the members of the TEXTE group at the LIRMM for their interest in the project.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lager-1998-logic
https://aclanthology.org/W98-1616
Logic for Part-of-Speech Tagging and Shallow Parsing
null
false
[]
[]
null
null
null
This work was conducted within the TagLog Project, supported by NUTEK and HSFR. I am grateful to my colleagues at Uppsala University and Göteborg University for useful discussions, and in particular to Joakim Nivre in Göteborg.
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-etal-2018-denoising
https://aclanthology.org/W18-6314
Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection
Measuring domain relevance of data and identifying or selecting well-fit domain data for machine translation (MT) is a well-studied topic, but denoising is not yet. Denoising is concerned with a different type of data quality and tries to reduce the negative impact of data noise on MT training, in particular, neural MT (NMT) training. This paper generalizes methods for measuring and selecting data for domain MT and applies them to denoising NMT training. The proposed approach uses trusted data and a denoising curriculum realized by online data selection. Intrinsic and extrinsic evaluations of the approach show its significant effectiveness for NMT to train on data with severe noise.
false
[]
[]
null
null
null
The authors would like to thank George Foster for his help refine the paper and advice on various technical isses in the paper, Thorsten Brants for his earlier work on the topic, Christian Buck for his help with the Paracrawl data, Yuan Cao for his valuable comments and suggestions on the paper, and the anonymous reviewers for their constructive reviews.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
templeton-burger-1983-problems
https://aclanthology.org/A83-1002
Problems in Natural-Language Interface to DBMS With Examples From EUFID
For five years the End-User Friendly Interface to Data management (EUFID) project team at System Development Corporation worked on the design and implementation of a Natural-Language Interface (NLI) system that was to be independent of both the application and the database management system. In this paper we describe application, natural-language and database management problems involved in NLI development, with specific reference to the EUFID system as an example. Language", "World Language", and "Data Base Language" and appear to correspond roughly to the "external", "conceptual", and "internal" views of data as described by C. J. Date
false
[]
[]
null
null
null
We would like to acknowledge
1983
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kar-etal-2018-folksonomication
https://aclanthology.org/C18-1244
Folksonomication: Predicting Tags for Movies from Plot Synopses using Emotion Flow Encoded Neural Network
Folksonomy of movies covers a wide range of heterogeneous information about movies, like the genre, plot structure, visual experiences, soundtracks, metadata, and emotional experiences from watching a movie. Being able to automatically generate or predict tags for movies can help recommendation engines improve retrieval of similar movies, and help viewers know what to expect from a movie in advance. In this work, we explore the problem of creating tags for movies from plot synopses. We propose a novel neural network model that merges information from synopses and emotion flows throughout the plots to predict a set of tags for movies. We compare our system with multiple baselines and found that the addition of emotion flows boosts the performance of the network by learning ≈18% more tags than a traditional machine learning system.
false
[]
[]
null
null
null
This work was partially supported by the National Science Foundation under grant number 1462141 and by the U.S. Department of Defense under grant W911NF-16-1-0422.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
thomas-1980-computer
https://aclanthology.org/P80-1022
The Computer as an Active Communication Medium
Communication is often conceived of in basically the following terms. A person has some idea which he or she wants to communicate to a second person. The first person translates that idea into some symbol system which is transmitted through some medium to the receiver. The receiver receives the transmission and translates it into some internal idea. Communication, in this view, is considered good to the extent that there is an isomorphism between the idea in the head of the sender before sending the message and the idea in the receiver's head after receiving the message. A good medium of communication, in this view, is one that adds minimal noise to the signal. Messages are considered good partly to the extent that they are unambiguous. This is, by and large, the view of many of the people concerned with computers and communication. For a moment, consider a quite different view of communication. In this view, communication is basically a design-interpretation process. One person has goals that they believe can be aided by communicating. The person therefore designs a message which is intended to facilitate those goals. In most cases, the goal includes changing some cognitive structure in one or more other people's minds. Each receiver of a message however has his or her own goals in mind and a model of the world (including a model of the sender) and interprets the received message in light of that other world information and relative to the perceived goals of the sender. This view has been articulated further elsewhere.
false
[]
[]
null
null
null
null
1980
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
joshi-srinivas-1994-disambiguation
https://aclanthology.org/C94-1024
Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing
In a lexicalized grammar formalism such as Lexicalized Tree-Adjoining Grammar (LTAG), each lexical item is associated with at least one elementary structure (supertag) that localizes syntactic and semantic dependencies. Thus a parser for a lexicalized grammar must search a large set of supertags to choose the right ones to combine for the parse of the sentence. We present techniques for disambiguating supertags using local information such as lexical preference and local lexical dependencies. The similarity between LTAG and Dependency grammars is exploited in the dependency model of supertag disambiguation. The performance results for various models of supertag disambiguation such as unigram, trigram and dependency-based models are presented.
false
[]
[]
null
null
null
null
1994
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dugast-etal-2008-relearn
https://aclanthology.org/W08-0327
Can we Relearn an RBMT System?
This paper describes SYSTRAN submissions for the shared task of the third Workshop on Statistical Machine Translation at ACL. Our main contribution consists in a French-English statistical model trained without the use of any human-translated parallel corpus. In substitution, we translated a monolingual corpus with SYSTRAN rule-based translation engine to produce the parallel corpus. The results are provided herein, along with a measure of error analysis.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shutova-2009-sense
https://aclanthology.org/P09-3001
Sense-based Interpretation of Logical Metonymy Using a Statistical Method
The use of figurative language is ubiquitous in natural language texts and it is a serious bottleneck in automatic text understanding. We address the problem of interpretation of logical metonymy, using a statistical method. Our approach originates from that of Lapata and Lascarides (2003), which generates a list of nondisambiguated interpretations with their likelihood derived from a corpus. We propose a novel sense-based representation of the interpretation of logical metonymy and a more thorough evaluation method than that of Lapata and Lascarides (2003). By carrying out a human experiment we prove that such a representation is intuitive to human subjects. We derive a ranking scheme for verb senses using an unannotated corpus, WordNet sense numbering and glosses. We also provide an account of the requirements that different aspectual verbs impose onto the interpretation of logical metonymy. We tested our system on verb-object metonymic phrases. It identifies and ranks metonymic interpretations with the mean average precision of 0.83 as compared to the gold standard.
false
[]
[]
null
null
null
I would like to thank Simone Teufel and Anna Korhonen for their valuable feedback on this project and my anonymous reviewers whose comments helped to improve the paper. I am also very grateful to Cambridge Overseas Trust who made this research possible by funding my studies.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wilks-1993-corpora
https://aclanthology.org/1993.mtsummit-1.12
Corpora and Machine Translation
The paper discusses the benefits of the world-wide move in recent years towards the use of corpora in natural language processing. The spoken paper will discuss a range of trends in that area, but this version concentrates on one extreme example of work based only on corpora and statistics: the IBM approach to machine translation, where I argue that it has done rather better after a few years than many sceptics believed it could. However, it is neither as novel as its proponents suggest nor is it making claims as clear and simple as they would have us believe. The performance of the purely statistical system (and we discuss what that phrase could mean) has not equalled the performance of SYSTRAN. More importantly, the system is now being shifted to a hybrid that incorporates much of the linguistic information that it was initially claimed by IBM would not be needed for MT. Hence, one might infer that its own proponents do not believe "pure" statistics sufficient for MT of a usable quality. In addition to real limits on the statistical method, there are also strong economic limits imposed by their methodology of data gathering. However, the paper concludes that the IBM group have done the field a great service in pushing these methods far further than before, and by reminding everyone of the virtues of empiricism in the field and the need for large scale gathering of data.
false
[]
[]
null
null
null
Acknowledgements: James Pustejovsky, Bob Ingria, Bran Boguraev, Sergei Nirenburg, Ted Dunning and others in the CRL natural language processing group.
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
luce-etal-2016-cogalex
https://aclanthology.org/W16-5315
CogALex-V Shared Task: LOPE
This paper attempts to answer two questions posed by the CogALex shared task: how to determine if two words are semantically related and, if they are related, which semantic relation holds between them. We present a simple, effective approach to the first problem, using word vectors to calculate similarity, and a naive approach to the second problem, by assigning word pairs semantic relations based on their parts of speech. The results of the second task are significantly improved in our post-hoc experiment, where we attempt to apply linguistic regularities in word representations (Mikolov 2013b) to these particular semantic relations.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yoshimura-etal-2020-reference
https://aclanthology.org/2020.coling-main.573
SOME: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction
We propose a reference-less metric trained on manual evaluations of system outputs for grammatical error correction. Previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluation of the system output because there is no dataset of system output with manual evaluation. This study manually evaluates the output of grammatical error correction systems to optimize the metrics. Experimental results show that the proposed metric improves the correlation with manual evaluation in both system- and sentence-level meta-evaluation. Our dataset and metric will be made publicly available.
false
[]
[]
null
null
null
This work was supported by JSPS KAKENHI Grant Number JP20K19861. We would like to thank Hiroki Asano for giving the implementation code and Keisuke Sakaguchi for the system output of JFLEG.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
michel-etal-2020-exploring
https://aclanthology.org/2020.lrec-1.313
Exploring Bilingual Word Embeddings for Hiligaynon, a Low-Resource Language
This paper investigates the use of bilingual word embeddings for mining Hiligaynon translations of English words. There is very little research on Hiligaynon, an extremely low-resource language of Malayo-Polynesian origin with over 9 million speakers in the Philippines (we found just one paper). We use a publicly available Hiligaynon corpus with only 300K words, and match it with a comparable corpus in English. As there are no bilingual resources available, we manually develop an English-Hiligaynon lexicon and use this to train bilingual word embeddings. But we fail to mine accurate translations due to the small amount of data. To find out if the same holds true for a related language pair, we simulate the same low-resource setup on English to German and arrive at similar results. We then vary the size of the comparable English and German corpora to determine the minimum corpus size necessary to achieve competitive results. Further, we investigate the role of the seed lexicon. We show that with the same corpus size but with a smaller seed lexicon, performance can surpass results of previous studies. We release the lexicon of 1,200 English-Hiligaynon word pairs we created to encourage further investigation.
false
[]
[]
null
null
null
We would like to thank the reviewers for their valuable input. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement № 640550).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-zong-2013-learning
https://aclanthology.org/P13-1140
Learning a Phrase-based Translation Model from Monolingual Data with Application to Domain Adaptation
Currently, almost all of the statistical machine translation (SMT) models are trained with the parallel corpora in some specific domains. However, when it comes to a language pair or a different domain without any bilingual resources, the traditional SMT loses its power. Recently, some research works study the unsupervised SMT for inducing a simple word-based translation model from the monolingual corpora. It successfully bypasses the constraint of bitext for SMT and obtains a relatively promising result. In this paper, we take a step forward and propose a simple but effective method to induce a phrase-based model from the monolingual corpora given an automatically-induced translation lexicon or a manually-edited translation dictionary. We apply our method for the domain adaptation task and the extensive experiments show that our proposed method can substantially improve the translation quality.
false
[]
[]
null
null
null
The research work has been funded by the Hi-Tech Research and Development Program ("863" Program) of China under Grant No. 2011AA01A207, 2012AA011101 and 2012AA011102, and also supported by the Key Project of Knowledge Innovation of Program of Chinese Academy of Sciences under Grant No. KGZD-EW-501. We would also like to thank the anonymous reviewers for their valuable suggestions.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
grimm-cimiano-2021-biquad
https://aclanthology.org/2021.starsem-1.10
BiQuAD: Towards QA based on deeper text understanding
Recent question answering and machine reading benchmarks frequently reduce the task to one of pinpointing spans within a certain text passage that answers the given question. Typically, these systems are not required to actually understand the text on a deeper level that allows for more complex reasoning on the information contained. We introduce a new dataset called BiQuAD that requires deeper comprehension in order to answer questions in both extractive and deductive fashion. The dataset consists of 4,190 closed-domain texts and a total of 99,149 question-answer pairs. The texts are synthetically generated soccer match reports that verbalize the main events of each match. All texts are accompanied by a structured Datalog program that represents a (logical) model of its information. We show that state-of-the-art QA models do not perform well on the challenging long-form contexts and reasoning requirements posed by the dataset. In particular, transformer-based state-of-the-art models achieve F1-scores of only 39.0. We demonstrate how these synthetic datasets align structured knowledge with natural text and aid model introspection when approaching complex text understanding.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their valuable feedback.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false