| ID (string, 11-54 chars) | url (string, 33-64 chars) | title (string, 11-184 chars) | abstract (string, 17-3.87k chars, nullable) | label_nlp4sg (bool, 2 classes) | task (list) | method (list) | goal1 (string, 9 classes) | goal2 (string, 9 classes) | goal3 (string, 1 class) | acknowledgments (string, 28-1.28k chars, nullable) | year (string, 4 chars) | sdg1 (bool, 1 class) | sdg2 (bool, 1 class) | sdg3 (bool, 2 classes) | sdg4 (bool, 2 classes) | sdg5 (bool, 2 classes) | sdg6 (bool, 1 class) | sdg7 (bool, 1 class) | sdg8 (bool, 2 classes) | sdg9 (bool, 2 classes) | sdg10 (bool, 2 classes) | sdg11 (bool, 2 classes) | sdg12 (bool, 1 class) | sdg13 (bool, 2 classes) | sdg14 (bool, 1 class) | sdg15 (bool, 1 class) | sdg16 (bool, 2 classes) | sdg17 (bool, 2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tomita-1989-parsing
|
https://aclanthology.org/W89-0243
|
Parsing 2-Dimensional Language
|
2-Dimensional Context-Free Grammar (2D-CFG) for 2-dimensional input text is introduced and efficient parsing algorithms for 2D-CFG are presented. In 2D-CFG, a grammar rule's right-hand side symbols can be placed not only horizontally but also vertically. Terminal symbols in a 2-dimensional input text are combined to form a rectangular region, and regions are combined to form a larger region using a 2-dimensional phrase structure rule. The parsing algorithms presented in this paper are the 2D-Earley algorithm and the 2D-LR algorithm, which are 2-dimensionally extended versions of Earley's algorithm and the LR(0) algorithm, respectively. 1. Introduction. Existing grammar formalisms and formal language theories, as well as parsing algorithms, deal only with one-dimensional strings. However, 2-dimensional layout information plays an important role in understanding a text. It is especially crucial for texts such as title pages of articles, business cards, announcements and formal letters to be read by an optical character reader (OCR). A number of projects [11, 6, 7, 2], most notably by Fujisawa et al. [4], try to analyze and utilize the 2-dimensional layout information. Fujisawa et al., unlike others, use a procedural language called Form Definition Language (FDL) [5, 12] to specify layout rules. On the other hand, in the area of image understanding, several attempts have also been made to define a language to describe 2-dimensional images [3, 10]. This paper presents a formalism called 2-Dimensional Context-Free Grammar (2D-CFG), and two parsing algorithms to parse 2-dimensional language with 2D-CFG. Unlike all the previous attempts mentioned above, our approach is to extend existing, well-studied (one-dimensional) grammar formalisms and parsing techniques to handle 2-dimensional language. In the rest of this section, we informally describe the 2-dimensional context-free grammar (2D-CFG) in comparison with the traditional 1-dimensional context-free grammar.
Input to the traditional context-free grammar is a string, or sentence: namely, a one-dimensional array of terminal symbols. Input to the 2-dimensional context-free grammar, on the other hand, is a rectangular block of symbols, or text: namely, a 2-dimensional array of terminal symbols. In the traditional context-free grammar, a non-terminal symbol represents a phrase, which is a substring of the original input string. A grammar rule is applied to combine adjoining phrases to form a larger phrase. In the 2-dimensional context-free grammar, on the other hand, a non-terminal represents a region, which is a rectangular sub-block of the input text. A grammar rule is applied to combine two adjoining regions to form a larger region. Rules like … (Footnote: This research was supported by the National Science Foundation under contract IRI-8858085. International Parsing Workshop '89.) S: start symbol. Let LEFT(p) be the left-hand side symbol of p. Let RIGHT(p, i) be the i-th right-hand side symbol of p.
| false
|
[] |
[] | null | null | null | null |
1989
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
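The region-combination idea in the 2D-CFG abstract above can be made concrete with a tiny sketch. This is not the paper's code; the helper names `hcat` and `vcat` are hypothetical, and a region is simply modeled as a list of equal-length rows of symbols:

```python
# Minimal sketch of 2D-CFG region combination (hypothetical helpers,
# not the paper's implementation). A region is a rectangular block of
# terminal symbols, stored as a list of rows of equal length.

def hcat(left, right):
    """Combine two regions side by side; their heights must match."""
    if len(left) != len(right):
        raise ValueError("horizontal combination needs equal heights")
    return [lrow + rrow for lrow, rrow in zip(left, right)]

def vcat(top, bottom):
    """Stack two regions vertically; their widths must match."""
    if len(top[0]) != len(bottom[0]):
        raise ValueError("vertical combination needs equal widths")
    return top + bottom

a = [["a"], ["a"]]        # 2x1 region
b = [["b"], ["b"]]        # 2x1 region
ab = hcat(a, b)           # 2x2 region: [["a","b"], ["a","b"]]
abab = vcat(ab, ab)       # 4x2 region
```

A 2D-CFG rule whose right-hand side symbols are arranged horizontally would combine sub-regions with `hcat`; a rule arranged vertically would use `vcat`.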
li-etal-2020-event
|
https://aclanthology.org/2020.findings-emnlp.73
|
Event Extraction as Multi-turn Question Answering
|
Event extraction, which aims to identify event triggers of pre-defined event types and their arguments of specific roles, is a challenging task in NLP. Most traditional approaches formulate this task as classification problems, with event types or argument roles taken as golden labels. Such approaches fail to model rich interactions among event types and arguments of different roles, and cannot generalize to new types or roles. This work proposes a new paradigm that formulates event extraction as multi-turn question answering. Our approach, MQAEE, casts the extraction task into a series of reading comprehension problems, by which it extracts triggers and arguments successively from a given sentence. A history answer embedding strategy is further adopted to model question answering history in the multi-turn process. With this new formulation, MQAEE makes full use of the dependency among arguments and event types, and generalizes well to new types with new argument roles. Empirical results on ACE 2005 show that MQAEE outperforms the current state of the art, pushing the final F1 of argument extraction to 53.4% (+2.0%). It also has good generalization ability, achieving competitive performance on 13 new event types even when trained with only a few samples of them.
| false
|
[] |
[] | null | null | null | null |
2020
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
munoz-etal-2000-semantic
|
https://aclanthology.org/2000.bcs-1.17
|
Semantic approach to bridging reference resolution
| null | false
|
[] |
[] | null | null | null | null |
2000
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
maguino-valencia-etal-2018-wordnet
|
https://aclanthology.org/L18-1697
|
WordNet-Shp: Towards the Building of a Lexical Database for a Peruvian Minority Language
|
WordNet-like resources are lexical databases with highly relevant information and data which could be exploited in more complex computational linguistics research and applications. The building process requires manual and automatic tasks, which can be all the more arduous if the language is a minority one with few digital resources. This study focuses on the construction of an initial WordNet database for a low-resourced and indigenous language in Peru: Shipibo-Konibo (shp). First, the stages of development from a scarce scenario (a bilingual dictionary shp-es) are described. Then, a synset alignment method is proposed that compares the definition glosses in the dictionary (written in Spanish) with the content of a Spanish WordNet. In this sense, word2vec similarity was the chosen metric for the proximity measure. Finally, an evaluation process is performed for the synsets, using a manually annotated Gold Standard in Shipibo-Konibo. The obtained results are promising, and this resource is expected to serve well in further applications, such as word sense disambiguation and even machine translation for the shp-es language pair.
| false
|
[] |
[] | null | null | null |
We highly appreciate the linguistic team effort that made possible the creation of this resource: Dr. Roberto Zariquiey, Alonso Vásquez, Gabriela Tello, Renzo Ego-Aguirre, Lea Reinhardt and Marcela Castro. We are also thankful to our native speakers (Shipibo-Konibo) collaborators: Juan Agustín, Carlos Guimaraes, Ronald Suárez and Miguel Gomez. Finally, we gratefully acknowledge the support of the "Consejo Nacional de Ciencia, Tecnología e Innovación Tecnológica" (CONCYTEC, Peru) under the contract 225-2015-FONDECYT.
|
2018
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
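The gloss-alignment step described in the WordNet-Shp abstract above reduces to a vector-similarity comparison. A minimal sketch of ranking candidate synsets by cosine similarity follows; the vectors here are toy values, whereas the paper would derive them from word2vec embeddings of the Spanish glosses (an assumption on my part), and the synset ids are purely illustrative:

```python
import math

# Sketch of gloss-to-synset alignment by cosine similarity
# (toy vectors; not the paper's actual pipeline).

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def best_synset(gloss_vec, candidates):
    """Return the candidate synset id whose vector is closest to the gloss."""
    return max(candidates, key=lambda sid: cosine(gloss_vec, candidates[sid]))

gloss = [0.9, 0.1, 0.0]
candidates = {"dog.n.01": [1.0, 0.0, 0.1], "cat.n.01": [0.0, 1.0, 0.2]}
winner = best_synset(gloss, candidates)  # the dog synset is the closer one
```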
pu-etal-2017-sense
|
https://aclanthology.org/W17-4701
|
Sense-Aware Statistical Machine Translation using Adaptive Context-Dependent Clustering
|
Statistical machine translation (SMT) systems use local cues from n-gram translation and language models to select the translation of each source word. Such systems do not explicitly perform word sense disambiguation (WSD), although this would enable them to select translations depending on the hypothesized sense of each word. Previous attempts to constrain word translations based on the results of generic WSD systems have suffered from their limited accuracy. We demonstrate that WSD systems can be adapted to help SMT, thanks to three key achievements: (1) we consider a larger context for WSD than SMT can afford to consider; (2) we adapt the number of senses per word to the ones observed in the training data using clustering-based WSD with K-means; and (3) we initialize sense clustering with definitions or examples extracted from WordNet. Our WSD system is competitive, and in combination with a factored SMT system improves noun and verb translation from English to Chinese, Dutch, French, German, and Spanish.
| false
|
[] |
[] | null | null | null |
We are grateful for their support to the Swiss National Science Foundation (SNSF) under the Sinergia MODERN project (grant n. 147653, see www.idiap.ch/project/modern/) and to the European Union under the Horizon 2020 SUMMA project (grant n. 688139, see www.summaproject.eu). We thank the reviewers for their helpful suggestions.
|
2017
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
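Step (2) of the sense-aware SMT abstract above, clustering word occurrences into senses with K-means, can be sketched in a few lines. The context vectors below are toy 2-D points and the loop runs a fixed number of iterations; the real system's WordNet-based initialization is not reproduced here:

```python
import random

# Toy K-means for sense clustering (a sketch, not the paper's system).
# Each occurrence of a word is a context vector; the cluster an
# occurrence falls into is treated as its hypothesized sense.

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # recompute centers; keep the old center if a cluster went empty
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated "senses" of a word, as 2-D context vectors.
pts = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0), (5.0, 5.1), (5.1, 5.0), (5.0, 5.0)]
centers, clusters = kmeans(pts, 2)
```

The number of clusters `k` plays the role the abstract describes: it adapts the number of senses per word to what the training data supports.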
brun-2011-detecting
|
https://aclanthology.org/R11-1054
|
Detecting Opinions Using Deep Syntactic Analysis
|
In this paper, we present an opinion detection system built on top of a robust syntactic parser. The goal of this system is to extract opinions associated with products but also with characteristics of these products, i.e. to perform feature-based opinion extraction. To carry out this task, and following a target corpus study, the robust syntactic parser is enriched by associating polarities to pertinent lexical elements and by developing generic rules to extract relations of opinions together with their polarity, i.e. positive or negative. These relations are used to feed an opinion representation model. A first evaluation shows very encouraging results, but numerous perspectives and developments remain to be investigated.
| false
|
[] |
[] | null | null | null | null |
2011
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
burlot-etal-2016-limsi
|
https://aclanthology.org/2016.iwslt-1.19
|
LIMSI@IWSLT'16: MT Track
|
This paper describes LIMSI's submission to the MT track of IWSLT 2016. We report results for translation from English into Czech. Our submission is an attempt to address the difficulties of translating into a morphologically rich language by paying special attention to morphology generation on the target side. To this end, we propose two ways of improving the morphological fluency of the output: 1. by performing translation and inflection of the target language in two separate steps, and 2. by using a neural language model with character-based word representations. We finally present the combination of both methods used for our primary system submission.
| false
|
[] |
[] | null | null | null |
This work has been partly funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 645452 (QT21).
|
2016
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
dubbin-blunsom-2014-modelling
|
https://aclanthology.org/E14-1013
|
Modelling the Lexicon in Unsupervised Part of Speech Induction
|
Automatically inducing the syntactic part-of-speech categories for words in text is a fundamental task in Computational Linguistics. While the performance of unsupervised tagging models has been slowly improving, current state-of-the-art systems make the obviously incorrect assumption that all tokens of a given word type must share a single part-of-speech tag. This one-tag-per-type heuristic counters the tendency of Hidden Markov Model based taggers to overgenerate tags for a given word type. However, it is clearly incompatible with basic syntactic theory. In this paper we extend a state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model of the lexicon. In doing so we are able to incorporate a soft bias towards inducing few tags per type. We develop a particle filter for drawing samples from the posterior of our model and present empirical results that show that our model is competitive with and faster than the state-of-the-art without making any unrealistic restrictions.
| false
|
[] |
[] | null | null | null | null |
2014
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
rogers-2004-wrapping
|
https://aclanthology.org/P04-1071
|
Wrapping of Trees
|
We explore the descriptive power, in terms of syntactic phenomena, of a formalism that extends Tree-Adjoining Grammar (TAG) by adding a fourth level of hierarchical decomposition to the three levels TAG already employs. While extending the descriptive power minimally, the additional level of decomposition allows us to obtain a uniform account of a range of phenomena that has heretofore been difficult to encompass, an account that employs unitary elementary structures and eschews synchronized derivation operations, and which is, in many respects, closer to the spirit of the intuitions underlying TAG-based linguistic theory than previously considered extensions to TAG.
| false
|
[] |
[] | null | null | null | null |
2004
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
khairunnisa-etal-2020-towards
|
https://aclanthology.org/2020.aacl-srw.10
|
Towards a Standardized Dataset on Indonesian Named Entity Recognition
|
In recent years, named entity recognition (NER) tasks in the Indonesian language have undergone extensive development. There are only a few corpora for Indonesian NER; hence, recent Indonesian NER studies have used diverse datasets. Although an open dataset is available, it includes only approximately 2,000 sentences and contains inconsistent annotations, thereby preventing accurate training of NER models without reliance on pre-trained models. Therefore, we re-annotated the dataset and compared the two annotations' performance using the Bidirectional Long Short-Term Memory and Conditional Random Field (BiLSTM-CRF) approach. Fixing the annotation yielded a more consistent result for the organization tag and improved the prediction score by a large margin. Moreover, to take full advantage of pre-trained models, we compared different feature embeddings to determine their impact on the NER task for the Indonesian language.
| false
|
[] |
[] | null | null | null | null |
2020
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
wilcock-jokinen-2015-multilingual
|
https://aclanthology.org/W15-4623
|
Multilingual WikiTalk: Wikipedia-based talking robots that switch languages.
|
At SIGDIAL-2013 our talking robot demonstrated Wikipedia-based spoken information access in English. Our new demo shows a robot speaking different languages, getting content from different language Wikipedias, and switching languages to meet the linguistic capabilities of different dialogue partners.
| false
|
[] |
[] | null | null | null |
The second author gratefully acknowledges the financial support of Estonian Science Foundation project IUT20-56 (Eesti keele arvutimudelid; computational models for Estonian). We thank Niklas Laxström for his work on the internationalization of WikiTalk and the localized Finnish version. We also thank Kenichi Okonogi and Seiichi Yamamoto for their collaboration on the localized Japanese version.
|
2015
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
schwenk-etal-2009-smt
|
https://aclanthology.org/W09-0423
|
SMT and SPE Machine Translation Systems for WMT'09
|
This paper describes the development of several machine translation systems for the 2009 WMT shared task evaluation. We only consider the translation between French and English. We describe a statistical system based on the Moses decoder and a statistical post-editing system using SYSTRAN's rule-based system. We also investigated techniques to automatically extract additional bilingual texts from comparable corpora.
| false
|
[] |
[] | null | null | null |
This work has been partially funded by the French Government under the project INSTAR (ANR JCJC06 143038) and by the Higher Education Commission, Pakistan, through the HEC Overseas Scholarship 2005.
|
2009
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
ren-etal-2014-positive
|
https://aclanthology.org/D14-1055
|
Positive Unlabeled Learning for Deceptive Reviews Detection
|
Deceptive review detection has attracted significant attention from both business and research communities. However, due to the difficulty of the human labeling needed for supervised learning, the problem remains highly challenging. This paper proposes a novel angle on the problem by modeling it as PU (positive unlabeled) learning. A semi-supervised model, called mixing population and individual property PU learning (MPIPUL), is proposed. Firstly, some reliable negative examples are identified from the unlabeled dataset. Secondly, some representative positive and negative examples are generated based on LDA (Latent Dirichlet Allocation). Thirdly, for the remaining unlabeled examples (we call them spy examples), which cannot be explicitly identified as positive or negative, two similarity weights are assigned, which express the probability of a spy example belonging to the positive class and to the negative class. Finally, spy examples and their similarity weights are incorporated into an SVM (Support Vector Machine) to build an accurate classifier. Experiments on a gold-standard dataset demonstrate the effectiveness of MPIPUL, which outperforms the state-of-the-art baselines.
| true
|
[] |
[] |
Peace, Justice and Strong Institutions
| null | null |
We are grateful to the anonymous reviewers for their thoughtful comments.
|
2014
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
|
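The spy-weighting step in the MPIPUL abstract above — giving each ambiguous unlabeled example two weights reflecting how positive-like and negative-like it is — can be sketched with a toy centroid-distance scheme. This is a deliberate simplification: the helper names are hypothetical, and the actual model uses LDA-based representatives and a weighted SVM rather than plain centroids:

```python
import math

# Toy sketch of similarity weighting for spy examples in PU learning
# (hypothetical; the paper uses LDA representatives and a weighted SVM).
# A spy gets two weights that sum to 1: closer to a class's centroid
# means a larger weight for that class.

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def centroid(examples):
    return [sum(col) / len(examples) for col in zip(*examples)]

def spy_weights(spy, pos_examples, neg_examples):
    dp = dist(spy, centroid(pos_examples))
    dn = dist(spy, centroid(neg_examples))
    wp = dn / (dp + dn)          # near the positive centroid -> wp close to 1
    return wp, 1.0 - wp

positives = [(1.0, 1.0), (0.9, 1.1)]
negatives = [(-1.0, -1.0), (-1.1, -0.9)]
w_pos, w_neg = spy_weights((0.8, 0.9), positives, negatives)
# This spy sits near the positive region, so w_pos > w_neg.
```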
leinonen-etal-2018-new
|
https://aclanthology.org/W18-0208
|
New Baseline in Automatic Speech Recognition for Northern Sámi
|
Automatic speech recognition has gone through many changes in recent years. Advances both in computer hardware and machine learning have made it possible to develop systems far more capable and complex than the previous state of the art. However, almost all of these improvements have been tested on major, well-resourced languages. In this paper, we show that these techniques are capable of yielding improvements even in a small-data scenario. We experiment with different deep neural network architectures for acoustic modeling for Northern Sámi and report up to 50% relative error rate reductions. We also run experiments to compare the performance of subwords as language modeling units in Northern Sámi. Tiivistelmä: Automatic speech recognition has developed significantly in recent years. New innovations in both hardware and machine learning have enabled far more powerful and complex systems than before. Most of these improvements, however, have been tested only on major languages for which plenty of material is available for development. In this paper we show that these techniques also yield improvements for languages with little data. We experiment with and compare different deep neural networks as acoustic models for Northern Sámi and succeed in reducing recognition errors by up to 50%. We also study ways of splitting words into smaller units in Northern Sámi language models.
| false
|
[] |
[] | null | null | null |
We thank the University of Tromsø for access to their Northern Sámi datasets and acknowledge the computational resources provided by the Aalto Science-IT project. This work was financially supported by the Tekes Challenge Finland project TELLme, the Academy of Finland under grant number 251170, and the Kone Foundation.
|
2018
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
randria-etal-2020-subjective
|
https://aclanthology.org/2020.lrec-1.286
|
Subjective Evaluation of Comprehensibility in Movie Interactions
|
Various research works have dealt with the comprehensibility of textual, audio, or audiovisual documents, and showed that factors related to text (e.g., linguistic complexity), sound (e.g., speech intelligibility), image (e.g., presence of visual context), or even to cognition and emotion can play a major role in the ability of humans to understand the semantic and pragmatic contents of a given document. However, to date, no reference human data is available that could help investigate the role of the linguistic and extralinguistic information present at these different levels (i.e., linguistic, audio/phonetic, and visual) in multimodal documents (e.g., movies). The present work aimed at building a corpus of human annotations that would help to study further how much, and in which way, the human perception of comprehensibility (i.e., of the difficulty of comprehension, referred to in this paper as overall difficulty) of audiovisual documents is affected (1) by lexical complexity, grammatical complexity, and speech intelligibility, and (2) by the modality/ies (text, audio, video) available to the human recipient. To this end, a corpus of 55 short movie clips was created. Fifteen experts (language teachers) assessed the overall difficulty, the lexical difficulty, the grammatical difficulty and the speech intelligibility of the clips under different conditions in which one or more modality/ies was/were available. A study of the distribution of the experts' ratings showed that the perceived difficulty of the 55 clips ranged from very easy to very difficult in all the aspects studied except for grammatical complexity, for which most of the clips were considered easy or moderately difficult.
The study reflected the relationships between lexical complexity and difficulty, grammatical complexity and difficulty, and speech intelligibility and difficulty: lexical complexity and speech intelligibility are strongly and positively correlated with difficulty, while grammatical difficulty is moderately and positively correlated with difficulty. A multiple linear regression with difficulty as the dependent variable and lexical complexity, grammatical complexity and intelligibility as the independent variables achieved an adjusted R² of 0.82, indicating that these three variables explain most of the variance associated with the overall perceived difficulty. The results also suggest that documents were considered most difficult when only the audio modality was available, and that adding the text and/or video modalities decreased the difficulty, with difficulty scores minimized by the combination of text, audio and video modalities.
| false
|
[] |
[] | null | null | null | null |
2020
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
li-etal-2011-engtube
|
https://aclanthology.org/2011.mtsummit-systems.2
|
ENGtube: an Integrated Subtitle Environment for ESL
|
Movies and TV shows are probably the most attractive media for language learning, and the associated subtitles are an important resource in the learning process. Despite their significance, subtitles have never been exploited as effectively as they could be. In this paper we present ENGtube, a video service for ESL (English as a Second Language) learners. The key component of this service is an integrated environment for displaying the video clips, the source subtitle and the translated subtitle, with rich information at users' disposal. The rich information of the subtitles is produced by various speech and language technologies.
| true
|
[] |
[] |
Quality Education
| null | null | null |
2011
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
akama-etal-2018-unsupervised
|
https://aclanthology.org/P18-2091
|
Unsupervised Learning of Style-sensitive Word Vectors
|
This paper presents the first study aimed at capturing stylistic similarity between words in an unsupervised manner. We propose extending the continuous bag of words (CBOW) model (Mikolov et al., 2013a) to learn style-sensitive word vectors using a wider context window, under the assumption that the style of all the words in an utterance is consistent. In addition, we introduce a novel task to predict lexical stylistic similarity and create a benchmark dataset for this task. Our experiment with this dataset supports our assumption and demonstrates that the proposed extensions contribute to the acquisition of style-sensitive word embeddings.
| false
|
[] |
[] | null | null | null |
This work was supported by JSPS KAKENHI Grant Number 15H01702. We thank our anonymous reviewers for their helpful comments and suggestions.
|
2018
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
li-etal-2020-using
|
https://aclanthology.org/2020.coling-main.132
|
Using a Penalty-based Loss Re-estimation Method to Improve Implicit Discourse Relation Classification
|
We tackle implicit discourse relation classification, a task of automatically determining semantic relationships between arguments. The attention-worthy words in arguments are crucial clues for classifying the discourse relations. Attention mechanisms have been proven effective in highlighting the attention-worthy words during encoding. However, our survey shows that some inessential words are unintentionally misjudged as the attention-worthy words and, therefore, assigned heavier attention weights than should be. We propose a penalty-based loss re-estimation method to regulate the attention learning process, integrating penalty coefficients into the computation of loss by means of overstability of attention weight distributions. We conduct experiments on the Penn Discourse TreeBank (PDTB) corpus. The test results show that our loss re-estimation method leads to substantial improvements for a variety of attention mechanisms.
| false
|
[] |
[] | null | null | null |
We are grateful for the insightful comments of reviewers. This work is supported by the national NSF of China via Grant Nos. 62076174, 61672368, 61751206 and 61672367, as well as the Stability Support Program of National Defense Key Laboratory of Science and Technology via Grant No. 61421100407.
|
2020
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
zhang-wallace-2017-sensitivity
|
https://aclanthology.org/I17-1026
|
A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification
|
Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (Kim, 2014; Kalchbrenner et al., 2014; Johnson and Zhang, 2014; Zhang et al., 2016). However, these models require practitioners to specify an exact model architecture and set accompanying hyperparameters, including the filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance, which makes them a modern standard baseline method akin to Support Vector Machines (SVMs) and logistic regression. We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification in real-world settings.
| false
|
[] |
[] | null | null | null | null |
2017
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
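The one-layer CNN analyzed in the sensitivity study above — a convolution over the word-embedding matrix with a chosen filter region size, followed by 1-max pooling — can be sketched in plain Python. The embeddings and filter weights below are made up for illustration; a real model would learn them (typically with a deep learning library):

```python
# Toy sketch of one feature map in a one-layer CNN for sentence
# classification: slide a filter over the word-embedding matrix and
# keep the maximum activation (1-max pooling). All numbers invented.

def conv_1max(embeddings, filt):
    """embeddings: list of word vectors; filt: flat weights covering
    `region` consecutive words (region = len(filt) / embedding dim)."""
    region = len(filt) // len(embeddings[0])   # filter region size
    activations = []
    for i in range(len(embeddings) - region + 1):
        window = [x for vec in embeddings[i:i + region] for x in vec]
        activations.append(sum(w * x for w, x in zip(filt, window)))
    return max(activations)                    # 1-max pooling

sentence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 words, embedding dim 2
filt = [0.5, 0.5, 0.5, 0.5]                        # region size 2
feature = conv_1max(sentence, filt)
# windows: [1,0,0,1] -> 1.0 and [0,1,1,1] -> 1.5; max-pooled feature = 1.5
```

The filter region size, the paper's central hyperparameter, is just `len(filt) // dim` here: widening the filter changes how many consecutive words each activation sees.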
widdows-dorow-2002-graph
|
https://aclanthology.org/C02-1114
|
A Graph Model for Unsupervised Lexical Acquisition
|
This paper presents an unsupervised method for assembling semantic knowledge from a part-ofspeech tagged corpus using graph algorithms. The graph model is built by linking pairs of words which participate in particular syntactic relationships. We focus on the symmetric relationship between pairs of nouns which occur together in lists. An incremental cluster-building algorithm using this part of the graph achieves 82% accuracy at a lexical acquisition task, evaluated against WordNet classes. The model naturally realises domain and corpus specific ambiguities as distinct components in the graph surrounding an ambiguous word.
| false
|
[] |
[] | null | null | null |
The authors would like to thank the anonymous reviewers whose comments were a great help in making this paper more focussed: any shortcomings remain entirely our own responsibility. This research was supported in part by the Research Collaboration between the NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation and CSLI, Stanford University, and by EC/NSF grant IST-1999-11438 for the MUCHMORE project.
|
2002
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
sogaard-etal-2015-inverted
|
https://aclanthology.org/P15-1165
|
Inverted indexing for cross-lingual NLP
|
We present a novel, count-based approach to obtaining inter-lingual word representations based on inverted indexing of Wikipedia. We present experiments applying these representations to 17 datasets in document classification, POS tagging, dependency parsing, and word alignment. Our approach has the advantage that it is simple, computationally efficient and almost parameter-free, and, more importantly, it enables multi-source cross-lingual learning. In 14/17 cases, we improve over using state-of-the-art bilingual embeddings.
| false
|
[] |
[] | null | null | null | null |
2015
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
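The inverted-indexing idea above — representing a word by the set of interlinked Wikipedia concepts it occurs in, so that words from different languages share one space — can be sketched with a toy corpus. The three-document mini-corpus is invented for illustration; the paper indexes actual Wikipedia dumps:

```python
from collections import defaultdict

# Sketch of count-based inter-lingual word representations via an
# inverted index (toy corpus, not the paper's Wikipedia data).
# Words are similar if they occur under overlapping concept ids;
# cross-lingual transfer works because aligned articles in different
# languages map to the same concept id.

def inverted_index(docs):
    """docs: {concept_id: list of tokens} -> {token: set of concept ids}."""
    index = defaultdict(set)
    for concept, tokens in docs.items():
        for tok in tokens:
            index[tok].add(concept)
    return index

def jaccard(a, b):
    return len(a & b) / len(a | b)

docs = {
    "Dog": ["dog", "hund", "animal"],   # English + German tokens, one concept
    "Cat": ["cat", "katze", "animal"],
    "Car": ["car", "auto", "vehicle"],
}
idx = inverted_index(docs)
sim = jaccard(idx["dog"], idx["hund"])  # identical concept sets -> 1.0
```

In the full-scale version each word's representation is a (sparse) vector over thousands of concept columns rather than a small set, but the principle is the same.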
he-etal-2021-fast
|
https://aclanthology.org/2021.acl-long.246
|
Fast and Accurate Neural Machine Translation with Translation Memory
|
It is generally believed that a translation memory (TM) should be beneficial for machine translation tasks. Unfortunately, existing wisdom demonstrates the superiority of TM-based neural machine translation (NMT) only on TM-specialized translation tasks rather than general tasks, and with a non-negligible computational overhead. In this paper, we propose a fast and accurate approach to TM-based NMT within the Transformer framework: the model architecture is simple and employs a single bilingual sentence as its TM, leading to efficient training and inference; and its parameters are effectively optimized through a novel training criterion. Extensive experiments on six TM-specialized tasks show that the proposed approach substantially surpasses several strong baselines that use multiple TMs, in terms of both BLEU and running time. In particular, the proposed approach also advances the strong baselines on two general tasks (WMT news Zh→En and En→De).
| false
|
[] |
[] | null | null | null |
This work is supported by NSFC (grant No. 61877051). We thank Jiatao Gu and Mengzhou Xia for providing their preprocessed datasets. We also thank the anonymous reviewers for providing valuable suggestions and feedback.
|
2021
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
daniels-2005-parsing
|
https://aclanthology.org/W05-1523
|
Parsing Generalized ID/LP Grammars
|
The Generalized ID/LP (GIDLP) grammar formalism (Daniels and Meurers 2004a,b; Daniels 2005) was developed to serve as a processing backbone for linearization-HPSG grammars, separating the declaration of the recursive constituent structure from the declaration of word order domains. This paper shows that the key aspects of this formalism - the ability for grammar writers to explicitly declare word order domains and to arrange the right-hand side of each grammar rule to minimize the parser's search space - lead directly to improvements in parsing efficiency.
| false
|
[] |
[] | null | null | null | null |
2005
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
bramsen-etal-2011-extracting
|
https://aclanthology.org/P11-1078
|
Extracting Social Power Relationships from Natural Language
|
Sociolinguists have long argued that social context influences language use in all manner of ways, resulting in lects 1. This paper explores a text classification problem we will call lect modeling, an example of what has been termed computational sociolinguistics. In particular, we use machine learning techniques to identify social power relationships between members of a social network, based purely on the content of their interpersonal communication. We rely on statistical methods, as opposed to language-specific engineering, to extract features which represent vocabulary and grammar usage indicative of social power lect. We then apply support vector machines to model the social power lects representing superior-subordinate communication in the Enron email corpus. Our results validate the treatment of lect modeling as a text classification problem - albeit a hard one - and constitute a case for future research in computational sociolinguistics. * This work was done while these authors were at SET Corporation, an SAIC Company. 1 Fields that deal with society and language have inconsistent terminology; "lect" is chosen here because "lect" has no other English definitions and the etymology of the word gives it the sense we consider most relevant.
| false
|
[] |
[] | null | null | null |
Dr. Richard Sproat contributed time, valuable insights, and wise counsel on several occasions during the course of the research. Dr. Lillian Lee and her students in Natural Language Processing and Social Interaction reviewed the paper, offering valuable feedback and helpful leads. Our colleague, Diane Bramsen, created an excellent graphical interface for probing and understanding the results. Jeff Lau guided and advised throughout the project. We thank our anonymous reviewers for prudent advice. This work was funded by the Army Studies Board and sponsored by Col. Timothy Hill of the United States Army Intelligence and Security Command (INSCOM) Futures Directorate under contract W911W4-08-D-0011.
|
2011
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
lynn-etal-2017-human
|
https://aclanthology.org/D17-1119
|
Human Centered NLP with User-Factor Adaptation
|
We pose the general task of user-factor adaptation-adapting supervised learning models to real-valued user factors inferred from a background of their language, reflecting the idea that a piece of text should be understood within the context of the user that wrote it. We introduce a continuous adaptation technique, suited for real-valued user factors that are common in social science and bringing us closer to personalized NLP, adapting to each user uniquely. We apply this technique with known user factors including age, gender, and personality traits, as well as latent factors, evaluating over five tasks:
| false
|
[] |
[] | null | null | null |
This publication was made possible, in part, through the support of a grant from the Templeton Religion Trust -TRT0048. We wish to thank the following colleagues for their annotation help for the PP-attachment task: Chetan Naik, Heeyoung Kwon, Ibrahim Hammoud, Jun Kang, Masoud Rouhizadeh, Mohammadzaman Zamani, and Samuel Louvan.
|
2017
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
angelidis-lapata-2018-multiple
|
https://aclanthology.org/Q18-1002
|
Multiple Instance Learning Networks for Fine-Grained Sentiment Analysis
|
We consider the task of fine-grained sentiment analysis from the perspective of multiple instance learning (MIL). Our neural model is trained on document sentiment labels, and learns to predict the sentiment of text segments, i.e. sentences or elementary discourse units (EDUs), without segment-level supervision. We introduce an attention-based polarity scoring method for identifying positive and negative text snippets and a new dataset which we call SPOT (as shorthand for Segment-level POlariTy annotations) for evaluating MIL-style sentiment models like ours. Experimental results demonstrate superior performance against multiple baselines, whereas a judgement elicitation study shows that EDU-level opinion extraction produces more informative summaries than sentence-based alternatives.
| false
|
[] |
[] | null | null | null |
The authors gratefully acknowledge the support of the European Research Council (award number 681760). We thank TACL action editor Ani Nenkova and the anonymous reviewers whose feedback helped improve the present paper, as well as Charles Sutton, Timothy Hospedales, and members of EdinburghNLP for helpful discussions and suggestions.
|
2018
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
teixeira-etal-2004-acoustic
|
http://www.lrec-conf.org/proceedings/lrec2004/pdf/610.pdf
|
An Acoustic Corpus Contemplating Regional Variation for Studies of European Portuguese Nasals
|
Portuguese is one of the two standard Romance varieties having nasal vowels as independent phonemes. These are complex sounds that have a dynamic nature and present several problems for a complete description. In this paper we present a new corpus especially recorded to allow studies of European Portuguese nasal vowels. The main purpose of these studies is the improvement of knowledge about these sounds so that it can be applied to language teaching, speech therapy materials and the articulatory speech synthesizer that is being developed at the University of Aveiro. The corpus described is a valuable resource for such studies due to the regional and contextual coverage and the simultaneous availability of speech and EGG signal. Details about corpus definition, recording, annotation and availability are given.
| false
|
[] |
[] | null | null | null |
We thank all the informants participating in corpora recordings. Without their patience and cooperation this work wouldn't be possible. We also thank FCT for the funding of Project POSI/36427/PLP/2000, Phonetics Applied to Speech Processing: The Portuguese Nasals.
|
2004
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
watson-etal-2005-efficient
|
https://aclanthology.org/W05-1517
|
Efficient Extraction of Grammatical Relations
|
We present a novel approach for applying the Inside-Outside Algorithm to a packed parse forest produced by a unificationbased parser. The approach allows a node in the forest to be assigned multiple inside and outside probabilities, enabling a set of 'weighted GRs' to be computed directly from the forest. The approach improves on previous work which either loses efficiency by unpacking the parse forest before extracting weighted GRs, or places extra constraints on which nodes can be packed, leading to less compact forests. Our experiments demonstrate substantial increases in parser accuracy and throughput for weighted GR output.
| false
|
[] |
[] | null | null | null |
This work is in part funded by the Overseas Research Students Awards Scheme and the Poynton Scholarship appointed by the Cambridge Australia Trust in collaboration with the Cambridge Commonwealth Trust. We would like to thank four anonymous reviewers who provided many useful suggestions for improvement.
|
2005
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
niebuhr-etal-2013-speech
|
https://aclanthology.org/W13-4040
|
Speech Reduction, Intensity, and F0 Shape are Cues to Turn-Taking
|
Based on German production data from the 'Kiel Corpus of Spontaneous Speech', we conducted two perception experiments, using an innovative interactive task in which participants gave real oral responses to resynthesized question stimuli. Differences in the time interval between stimulus question and response show that segmental reduction, intensity level, and the shape of the phrase-final rise all function as cues to turn-taking in conversation. Thus, the phonetics of turn-taking goes beyond the traditional triad of duration, voice quality, and F0 level.
| false
|
[] |
[] | null | null | null | null |
2013
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
gupta-etal-2020-human
|
https://aclanthology.org/2020.sigdial-1.30
|
Human-Human Health Coaching via Text Messages: Corpus, Annotation, and Analysis
|
Our goal is to develop and deploy a virtual assistant health coach that can help patients set realistic physical activity goals and live a more active lifestyle. Since there is no publicly shared dataset of health coaching dialogues, the first phase of our research focused on data collection. We hired a certified health coach and 28 patients to collect the first round of human-human health coaching interaction which took place via text messages. This resulted in 2853 messages. The data collection phase was followed by conversation analysis to gain insight into the way information exchange takes place between a health coach and a patient. This was formalized using two annotation schemas: one that focuses on the goals the patient is setting and another that models the higher-level structure of the interactions. In this paper, we discuss these schemas and briefly talk about their application for automatically extracting activity goals and annotating the second round of data, collected with different health coaches and patients. Given the resource-intensive nature of data annotation, successfully annotating a new dataset automatically is key to answer the need for high quality, large datasets.
| true
|
[] |
[] |
Good Health and Well-Being
| null | null |
This work is supported by the National Science Foundation through awards IIS 1650900 and 1838770.
|
2020
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
hegde-etal-2022-mucs
|
https://aclanthology.org/2022.dravidianlangtech-1.23
|
MUCS@DravidianLangTech@ACL2022: Ensemble of Logistic Regression Penalties to Identify Emotions in Tamil Text
|
Emotion Analysis (EA) is the process of automatically analyzing and categorizing the input text into one of the predefined sets of emotions. In recent years, people have turned to social media to express their emotions, opinions or feelings about news, movies, products, services, and so on. These users' emotions may help the public, governments, business organizations, film producers, and others in devising strategies, making decisions, and so on. The increasing number of social media users and the increasing amount of user generated text containing emotions on social media demand automated tools for the analysis of such data, as handling this data manually is labor intensive and error prone. Further, the characteristics of social media data make EA challenging. Most of the EA research works have focused on English, leaving several Indian languages including Tamil unexplored for this task. To address the challenges of EA in Tamil texts, in this paper, we - team MUCS - describe the model submitted to the shared task on Emotion Analysis in Tamil at DravidianLangTech@ACL 2022. Out of the two subtasks in this shared task, our team submitted the model only for Task a. The proposed model comprises an Ensemble of Logistic Regression (LR) classifiers with three penalties, namely: L1, L2, and Elasticnet. This Ensemble model trained with Term Frequency-Inverse Document Frequency (TF-IDF) of character bigrams and trigrams secured 4th rank in Task a with a macro averaged F1-score of 0.04. The code to reproduce the proposed models is available in github.
| false
|
[] |
[] | null | null | null | null |
2022
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
chen-ng-2012-chinese
|
https://aclanthology.org/C12-2019
|
Chinese Noun Phrase Coreference Resolution: Insights into the State of the Art
|
Compared to the amount of research on English coreference resolution, relatively little work has been done on Chinese coreference resolution. Worse still, it has been difficult to determine the state of the art in Chinese coreference resolution, owing in part to the lack of a standard evaluation dataset. The organizers of the CoNLL-2012 shared task, Modeling Unrestricted Multilingual Coreference in OntoNotes, have recently addressed this issue by providing standard training and test sets for developing and evaluating Chinese coreference resolvers. We aim to gain insights into the state of the art via extensive experimentation with our Chinese resolver, which is ranked first in the shared task on the Chinese test data.
| false
|
[] |
[] | null | null | null |
We thank the three anonymous reviewers for their invaluable comments on an earlier draft of the paper. This work was supported in part by NSF Grants IIS-1147644 and IIS-1219142.
|
2012
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
milward-1992-dynamics
|
https://aclanthology.org/C92-4171
|
Dynamics, Dependency Grammar and Incremental Interpretation
|
The paper describes two equivalent grammatical formalisms. The first is a lexicalised version of dependency grammar, and this can be used to provide tree-structured analyses of sentences (though somewhat flatter than those usually provided by phrase structure grammars). The second is a new formalism, 'Dynamic Dependency Grammar', which uses axioms and deduction rules to provide analyses of sentences in terms of transitions between states.
A reformulation of dependency grammar using state transitions is of interest on several grounds. Firstly, it can be used to show that incremental interpretation is possible without requiring notions of overlapping, or flexible constituency (as in some versions of categorial grammar), and without destroying a transparent link between syntax and semantics. Secondly, the reformulation provides a level of description which can act as an intermediate stage between the original grammar and a parsing algorithm. Thirdly, it is possible to extend the reformulated grammars with further axioms and deduction rules to provide coverage of syntactic constructions such as coordination which are difficult to encode lexically.
| false
|
[] |
[] | null | null | null | null |
1992
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
kelly-etal-2012-semi
|
https://aclanthology.org/W12-1702
|
Semi-supervised learning for automatic conceptual property extraction
|
For a given concrete noun concept, humans are usually able to cite properties (e.g., elephant is animal, car has wheels) of that concept; cognitive psychologists have theorised that such properties are fundamental to understanding the abstract mental representation of concepts in the brain. Consequently, the ability to automatically extract such properties would be of enormous benefit to the field of experimental psychology. This paper investigates the use of semi-supervised learning and support vector machines to automatically extract concept-relation-feature triples from two large corpora (Wikipedia and UKWAC) for concrete noun concepts. Previous approaches have relied on manually-generated rules and hand-crafted resources such as WordNet; our method requires neither yet achieves better performance than these prior approaches, measured both by comparison with a property norm-derived gold standard as well as direct human evaluation. Our technique performs particularly well on extracting features relevant to a given concept, and suggests a number of promising areas for future focus.
| false
|
[] |
[] | null | null | null |
This research was supported by EPSRC grant EP/F030061/1. We are grateful to McRae and colleagues for making their norms publicly available, and to the anonymous reviewers for their helpful input.
|
2012
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
pal-etal-2010-handling
|
https://aclanthology.org/W10-3707
|
Handling Named Entities and Compound Verbs in Phrase-Based Statistical Machine Translation
|
Data preprocessing plays a crucial role in phrase-based statistical machine translation (PB-SMT). In this paper, we show how single-tokenization of two types of multi-word expressions (MWE), namely named entities (NE) and compound verbs, as well as their prior alignment can boost the performance of PB-SMT. Single-tokenization of compound verbs and named entities (NE) provides significant gains over the baseline PB-SMT system. Automatic alignment of NEs substantially improves the overall MT performance, and thereby the word alignment quality indirectly. For establishing NE alignments, we transliterate source NEs into the target language and then compare them with the target NEs. Target language NEs are first converted into a canonical form before the comparison takes place. Our best system achieves statistically significant improvements (4.59 BLEU points absolute, 52.5% relative improvement) on an English-Bangla translation task.
| false
|
[] |
[] | null | null | null |
This research is partially supported by the Science Foundation Ireland (Grant 07/CE/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Dublin City University, and EU projects PANACEA (Grant 7FP-ITC-248064) and META-NET (Grant FP7-ICT-249119).
|
2010
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
molino-etal-2019-parallax
|
https://aclanthology.org/P19-3028
|
Parallax: Visualizing and Understanding the Semantics of Embedding Spaces via Algebraic Formulae
|
Embeddings are a fundamental component of many modern machine learning and natural language processing models. Understanding them and visualizing them is essential for gathering insights about the information they capture and the behavior of the models. In this paper, we introduce Parallax, a tool explicitly designed for this task. Parallax allows the user to use both state-of-the-art embedding analysis methods (PCA and t-SNE) and a simple yet effective task-oriented approach where users can explicitly define the axes of the projection through algebraic formulae. In this approach, embeddings are projected into a semantically meaningful subspace, which enhances interpretability and allows for more fine-grained analysis. We demonstrate the power of the tool and the proposed methodology through a series of case studies and a user study.
| false
|
[] |
[] | null | null | null | null |
2019
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
litkowski-2005-cl
|
https://aclanthology.org/P05-3004
|
CL Research's Knowledge Management System
|
CL Research began experimenting with massive XML tagging of texts to answer questions in TREC 2002. In DUC 2003, the experiments were extended into text summarization. Based on these experiments, The Knowledge Management System (KMS) was developed to combine these two capabilities and to serve as a unified basis for other types of document exploration. KMS has been extended to include web question answering, both general and topic-based summarization, information extraction, and document exploration. The document exploration functionality includes identification of semantically similar concepts and dynamic ontology creation. As development of KMS has continued, user modeling has become a key research issue: how will different users want to use the information they identify.
| false
|
[] |
[] | null | null | null | null |
2005
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
loukanova-2019-computational
|
https://aclanthology.org/W19-1005
|
Computational Syntax-Semantics Interface with Type-Theory of Acyclic Recursion for Underspecified Semantics
|
The paper provides a technique for algorithmic syntax-semantics interface in computational grammar with underspecified semantic representations of human language. The technique is introduced for expressions that contain NP quantifiers, by using computational, generalised Constraint-Based Lexicalised Grammar (GCBLG) that represents major, common syntactic characteristics of a variety of approaches to formal grammar and natural language processing (NLP). Our solution can be realised by any of the grammar formalisms in the CBLG class, e.g., Head-Driven Phrase Structure Grammar (HPSG), Lexical Functional Grammar (LFG), Categorial Grammar (CG). The type-theory of acyclic recursion Lλar provides a facility for representing major semantic ambiguities, as underspecification, at the object level of the formal language of Lλar, without recourse to metalanguage variables. Specific semantic representations can be obtained by instantiations of underspecified Lλar-terms, in context. These are subject to constraints provided by a newly introduced feature-structure description of syntax-semantics interface in GCBLG. 1 Introduction Ambiguity permeates human language, in all of its manifestations, by interdependences, across lexicon, syntax, semantics, discourse, context, etc. Alternative interpretations may persist even when specific context and discourse resolve or discard some specific instances in syntax and semantics. We present computational grammar that integrates lexicon, syntax, types, constraints, and semantics. The formal facilities of the grammar have components that integrate syntactic constructions with semantic representations. The syntax-semantics interface, internally in the grammar, handles some ambiguities as phenomena of underspecification in human language. We employ a computational grammar, which we call Generalised Constraint-Based Lexicalised Grammar (GCBLG).
The formal system GCBLG uses feature-value descriptions and constraints in a grammar with a hierarchy of dependent types, which covers lexicon, phrasal structures, and semantic representations. In GCBLG, for the syntax, we use feature-value descriptions, similar to those in Sag et al. (2003), which are presented formally in Loukanova (2017a) as a class of formal languages designating mathematical structures of functional domains of linguistic information. GCBLG is a generalisation from major lexical and syntactic facilities of frameworks in the class of Constraint-Based Lexicalist Grammar (CBLG) approaches. To some extent, this is reminiscent of Vijay-Shanker and Weir (1994). We lift the idea of extending classic formal grammars to cover semantic representations with semantic underspecification via syntax-semantics interface within computational grammar. We introduce the technique here for varieties of grammar formalisms from the CBLG approach, in particular:
| false
|
[] |
[] | null | null | null | null |
2019
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
ciaramita-johnson-2003-supersense
|
https://aclanthology.org/W03-1022
|
Supersense Tagging of Unknown Nouns in WordNet
|
We present a new framework for classifying common nouns that extends namedentity classification. We used a fixed set of 26 semantic labels, which we called supersenses. These are the labels used by lexicographers developing WordNet. This framework has a number of practical advantages. We show how information contained in the dictionary can be used as additional training data that improves accuracy in learning new nouns. We also define a more realistic evaluation procedure than cross-validation.
| false
|
[] |
[] | null | null | null | null |
2003
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
xu-etal-2020-tero
|
https://aclanthology.org/2020.coling-main.139
|
TeRo: A Time-aware Knowledge Graph Embedding via Temporal Rotation
|
In the last few years, there has been a surge of interest in learning representations of entities and relations in knowledge graph (KG). However, the recent availability of temporal knowledge graphs (TKGs) that contain time information for each fact created the need for reasoning over time in such TKGs. In this regard, we present a new approach of TKG embedding, TeRo, which defines the temporal evolution of entity embedding as a rotation from the initial time to the current time in the complex vector space. Specially, for facts involving time intervals, each relation is represented as a pair of dual complex embeddings to handle the beginning and the end of the relation, respectively. We show our proposed model overcomes the limitations of the existing KG embedding models and TKG embedding models and has the ability of learning and inferring various relation patterns over time. Experimental results on four different TKGs show that TeRo significantly outperforms existing state-of-the-art models for link prediction. In addition, we analyze the effect of time granularity on link prediction over TKGs, which as far as we know has not been investigated in previous literature.
| false
|
[] |
[] | null | null | null |
This work is supported by the CLEOPATRA project (GA no. 812997), the German national funded BmBF project MLwin and the BOOST project.
|
2020
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
nikolova-ma-2008-assistive
|
https://aclanthology.org/W08-0806
|
Assistive Mobile Communication Support
|
This paper reflects on our work in providing communication support for people with speech and language disabilities. We discuss the role of mobile technologies in assistive systems and share ongoing research efforts.
| true
|
[] |
[] |
Reduced Inequalities
| null | null | null |
2008
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
|
roy-2016-perception
|
https://aclanthology.org/W16-6340
|
Perception of Phi-Phrase boundaries in Hindi.
|
This paper proposes an algorithm for finding phonological phrase boundaries in sentences with neutral focus spoken in both normal and fast tempos. A perceptual experiment is designed using Praat's experiment MFC program to investigate the phonological phrase boundaries. Phonological phrasing and its relation to syntactic structure in the framework of the end-based rules proposed by (Selkirk, 1986), and relation to purely phonological rules, i.e., the principle of increasing units proposed by (Ghini, 1993), are investigated. In addition to that, this paper explores the acoustic cues signalling phonological phrase boundaries in both normal and fast tempo speech. It is found that phonological phrasing in Hindi follows both the end-based rules (Selkirk, 1986) and the principle of increasing units (Ghini, 1993). The end-based rules are used for phonological phrasing and the principle of increasing units is used for phonological phrase restructuring.
| false
|
[] |
[] | null | null | null | null |
2016
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
jayez-rossari-1998-discourse
|
https://aclanthology.org/W98-0313
|
Discourse Relations versus Discourse Marker Relations
|
While it seems intuitively obvious that many discourse markers (DMs) are able to express discourse relations (DRs) which exist independently, the specific contribution of DMs - if any - is not clear. In this paper, we investigate the status of some consequence DMs in French. We observe that it is difficult to construct a clear and simple definition based on DRs for these DMs. Next, we show that the lexical constraints associated with such DMs extend far beyond simple compatibility with DRs. This suggests that the view of DMs as signaling general all-purpose DRs is to be seriously amended in favor of more precise descriptions of DMs, in which the compatibility with DRs is derived from a lexical semantic profile.
| false
|
[] |
[] | null | null | null | null |
1998
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
liu-etal-2021-progressively
|
https://aclanthology.org/2021.emnlp-main.733
|
Progressively Guide to Attend: An Iterative Alignment Framework for Temporal Sentence Grounding
|
A key solution to temporal sentence grounding (TSG) exists in how to learn effective alignment between vision and language features extracted from an untrimmed video and a sentence description. Existing methods mainly leverage vanilla soft attention to perform the alignment in a single-step process. However, such single-step attention is insufficient in practice, since complicated relations between inter- and intra-modality are usually obtained through multi-step reasoning. In this paper, we propose an Iterative Alignment Network (IA-Net) for the TSG task, which iteratively interacts inter- and intra-modal features within multiple steps for more accurate grounding. Specifically, during the iterative reasoning process, we pad multi-modal features with learnable parameters to alleviate the nowhere-to-attend problem of non-matched frame-word pairs, and enhance the basic co-attention mechanism in a parallel manner. To further calibrate the misaligned attention caused by each reasoning step, we also devise a calibration module following each attention module to refine the alignment knowledge. With such an iterative alignment scheme, our IA-Net can robustly capture the fine-grained relations between vision and language domains step-by-step for progressively reasoning the temporal boundaries. Extensive experiments conducted on three challenging benchmarks demonstrate that our proposed model performs better than the state-of-the-art methods.
| false
|
[] |
[] | null | null | null |
This work was supported in part by the National Natural Science Foundation of China under grant No. 61972448.
|
2021
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
zhang-etal-2021-point
|
https://aclanthology.org/2021.acl-long.307
|
Point, Disambiguate and Copy: Incorporating Bilingual Dictionaries for Neural Machine Translation
|
This paper proposes a sophisticated neural architecture to incorporate bilingual dictionaries into Neural Machine Translation (NMT) models. By introducing three novel components: Pointer, Disambiguator, and Copier, our method PDC achieves the following merits inherently compared with previous efforts: (1) Pointer leverages the semantic information from bilingual dictionaries, for the first time, to better locate source words whose translation in dictionaries can potentially be used; (2) Disambiguator synthesizes contextual information from the source view and the target view, both of which contribute to distinguishing the proper translation of a specific source word from multiple candidates in dictionaries; (3) Copier systematically connects Pointer and Disambiguator based on a hierarchical copy mechanism seamlessly integrated with Transformer, thereby building an end-to-end architecture that could avoid error propagation problems in alternative pipeline methods. The experimental results on Chinese-English and English-Japanese benchmarks demonstrate the PDC's overall superiority and effectiveness of each component.
| false
|
[] |
[] | null | null | null |
We thank anonymous reviewers for valuable comments. This research was supported by the Na-
|
2021
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
bolanos-etal-2009-multi
|
https://aclanthology.org/N09-2026
|
Multi-scale Personalization for Voice Search Applications
|
Voice Search applications provide a very convenient and direct access to a broad variety of services and information. However, due to the vast amount of information available and the open nature of the spoken queries, these applications still suffer from recognition errors. This paper explores the utilization of personalization features for the post-processing of recognition results in the form of n-best lists. Personalization is carried out from three different angles: short-term, long-term and Web-based, and a large variety of features are proposed for use in a log-linear classification framework. Experimental results on data obtained from a commercially deployed Voice Search system show that the combination of the proposed features leads to a substantial sentence error rate reduction. In addition, it is shown that personalization features which are very different in nature can successfully complement each other.
| false
|
[] |
[] | null | null | null | null |
2009
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
peters-etal-2019-knowledge
|
https://aclanthology.org/D19-1005
|
Knowledge Enhanced Contextual Word Representations
|
Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities. We propose a general method to embed multiple knowledge bases (KBs) into large scale models, and thereby enhance their representations with structured, human-curated knowledge. For each KB, we first use an integrated entity linker to retrieve relevant entity embeddings, then update contextual word representations via a form of word-to-entity attention. In contrast to previous approaches, the entity linkers and self-supervised language modeling objective are jointly trained end-to-end in a multitask setting that combines a small amount of entity linking supervision with a large amount of raw text. After integrating WordNet and a subset of Wikipedia into BERT, the knowledge enhanced BERT (KnowBert) demonstrates improved perplexity, ability to recall facts as measured in a probing task and downstream performance on relationship extraction, entity typing, and word sense disambiguation. KnowBert's runtime is comparable to BERT's and it scales to large KBs.
| false
|
[] |
[] | null | null | null |
The authors acknowledge helpful feedback from anonymous reviewers and the AllenNLP team. This research was funded in part by the NSF under awards IIS-1817183 and CNS-1730158.
|
2019
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
dimitroff-etal-2013-weighted
|
https://aclanthology.org/R13-1027
|
Weighted maximum likelihood loss as a convenient shortcut to optimizing the F-measure of maximum entropy classifiers
|
We link the weighted maximum entropy and the optimization of the expected F β-measure, by viewing them in the framework of a general common multi-criteria optimization problem. As a result, each solution of the expected F β-measure maximization can be realized as a weighted maximum likelihood solution, a well understood and behaved problem. The specific structure of maximum entropy models allows us to approximate this characterization via the much simpler class-wise weighted maximum likelihood. Our approach reveals any probabilistic learning scheme as a specific trade-off between different objectives and provides the framework to link it to the expected F β-measure.
| false
|
[] |
[] | null | null | null | null |
2013
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
jang-mostow-2012-inferring
|
https://aclanthology.org/E12-1038
|
Inferring Selectional Preferences from Part-Of-Speech N-grams
|
We present the PONG method to compute selectional preferences using part-of-speech (POS) N-grams. From a corpus labeled with grammatical dependencies, PONG learns the distribution of word relations for each POS N-gram. From the much larger but unlabeled Google N-grams corpus, PONG learns the distribution of POS N-grams for a given pair of words. We derive the probability that one word has a given grammatical relation to the other. PONG estimates this probability by combining both distributions, whether or not either word occurs in the labeled corpus. PONG achieves higher average precision on 16 relations than a state-of-the-art baseline in a pseudo-disambiguation task, but lower coverage and recall.
| false
|
[] |
[] | null | null | null |
The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A080157. The opinions expressed are those of the authors and do not necessarily represent the views of the Institute or the U.S. Department of Education. We thank the helpful reviewers and Katrin Erk for her generous assistance.
|
2012
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
bian-etal-2021-attention
|
https://aclanthology.org/2021.naacl-main.72
|
On Attention Redundancy: A Comprehensive Study
|
Multi-layer multi-head self-attention mechanism is widely applied in modern neural language models. Attention redundancy has been observed among attention heads but has not been deeply studied in the literature. Using BERT-base model as an example, this paper provides a comprehensive study on attention redundancy which is helpful for model interpretation and model compression. We analyze the attention redundancy with Five-Ws and How. (What) We define and focus the study on redundancy matrices generated from pre-trained and fine-tuned BERT-base model for GLUE datasets. (How) We use both token-based and sentence-based distance functions to measure the redundancy. (Where) Clear and similar redundancy patterns (cluster structure) are observed among attention heads. (When) Redundancy patterns are similar in both pre-training and fine-tuning phases. (Who) We discover that redundancy patterns are task-agnostic. Similar redundancy patterns even exist for randomly generated token sequences. ("Why") We also evaluate influences of the pre-training dropout ratios on attention redundancy. Based on the phase-independent and task-agnostic attention redundancy patterns, we propose a simple zero-shot pruning method as a case study. Experiments on fine-tuning GLUE tasks verify its effectiveness. The comprehensive analyses on attention redundancy make model understanding and zero-shot model pruning promising.
| false
|
[] |
[] | null | null | null | null |
2021
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
guo-kok-2021-bique
|
https://aclanthology.org/2021.emnlp-main.657
|
BiQUE: Biquaternionic Embeddings of Knowledge Graphs
|
Knowledge graph embeddings (KGEs) compactly encode multi-relational knowledge graphs (KGs). Existing KGE models rely on geometric operations to model relational patterns. Euclidean (circular) rotation is useful for modeling patterns such as symmetry, but cannot represent hierarchical semantics. In contrast, hyperbolic models are effective at modeling hierarchical relations, but do not perform as well on patterns on which circular rotation excels. It is crucial for KGE models to unify multiple geometric transformations so as to fully cover the multifarious relations in KGs. To do so, we propose BiQUE, a novel model that employs biquaternions to integrate multiple geometric transformations, viz., scaling, translation, Euclidean rotation, and hyperbolic rotation. BiQUE makes the best tradeoffs among geometric operators during training, picking the best one (or their best combination) for each relation. Experiments on five datasets show BiQUE's effectiveness.
| false
|
[] |
[] | null | null | null |
This research is partly supported by MOE's AcRF Tier 1 Grant to Stanley Kok. Any opinions, findings, conclusions, or recommendations expressed herein are solely those of the authors.
|
2021
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
ghosh-srivastava-2022-epic
|
https://aclanthology.org/2022.acl-long.276
|
ePiC: Employing Proverbs in Context as a Benchmark for Abstract Language Understanding
|
While large language models have shown exciting progress on several NLP benchmarks, evaluating their ability for complex analogical reasoning remains under-explored. Here, we introduce a high-quality crowdsourced dataset of narratives for employing proverbs in context as a benchmark for abstract language understanding. The dataset provides fine-grained annotation of aligned spans between proverbs and narratives, and contains minimal lexical overlaps between narratives and proverbs, ensuring that models need to go beyond surface-level reasoning to succeed. We explore three tasks: (1) proverb recommendation and alignment prediction, (2) narrative generation for a given proverb and topic, and (3) identifying narratives with similar motifs. Our experiments show that neural language models struggle on these tasks compared to humans, and these tasks pose multiple learning challenges.
| false
|
[] |
[] | null | null | null | null |
2022
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
ousidhoum-etal-2021-probing
|
https://aclanthology.org/2021.acl-long.329
|
Probing Toxic Content in Large Pre-Trained Language Models
|
Large pre-trained language models (PTLMs) have been shown to carry biases towards different social groups which leads to the reproduction of stereotypical and toxic content by major NLP systems. We propose a method based on logistic regression classifiers to probe English, French, and Arabic PTLMs and quantify the potentially harmful content that they convey with respect to a set of templates. The templates are prompted by a name of a social group followed by a cause-effect relation. We use PTLMs to predict masked tokens at the end of a sentence in order to examine how likely they enable toxicity towards specific communities. We shed the light on how such negative content can be triggered within unrelated and benign contexts based on evidence from a large-scale study, then we explain how to take advantage of our methodology to assess and mitigate the toxicity transmitted by PTLMs.
| true
|
[] |
[] |
Peace, Justice and Strong Institutions
|
Reduced Inequalities
|
Gender Equality
|
We thank the annotators and anonymous reviewers and meta-reviewer for their valuable feedback. This paper was supported by the Theme-based Research Scheme Project (T31-604/18-N), the NSFC Grant (No. U20B2053) from China, the Early Career Scheme (ECS, No. 26206717), the General Research Fund (GRF, No. 16211520), and the Research Impact Fund (RIF) from the Research Grants Council (RGC) of Hong Kong.
|
2021
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| true
| false
|
och-ney-2001-statistical
|
https://aclanthology.org/2001.mtsummit-papers.46
|
Statistical multi-source translation
|
We describe methods for translating a text given in multiple source languages into a single target language. The goal is to improve translation quality in applications where the ultimate goal is to translate the same document into many languages. We describe a statistical approach and two specific statistical models to deal with this problem. Our method is generally applicable as it is independent of specific models, languages or application domains. We evaluate the approach on a multilingual corpus covering all eleven official European Union languages that was collected automatically from the Internet. In various tests we show that these methods can significantly improve translation quality. As a side effect, we also compare the quality of statistical machine translation systems for many European languages in the same domain.
| false
|
[] |
[] | null | null | null | null |
2001
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
bin-wasi-etal-2014-cmuq
|
https://aclanthology.org/S14-2029
|
CMUQ@Qatar:Using Rich Lexical Features for Sentiment Analysis on Twitter
|
In this paper, we describe our system for the Sentiment Analysis of Twitter shared task in SemEval 2014. Our system uses an SVM classifier along with rich set of lexical features to detect the sentiment of a phrase within a tweet (Task-A) and also the sentiment of the whole tweet (Task-B). We start from the lexical features that were used in the 2013 shared tasks, we enhance the underlying lexicon and also introduce new features. We focus our feature engineering effort mainly on Task-A. Moreover, we adapt our initial framework and introduce new features for Task-B. Our system reaches weighted score of 87.11% in Task-A and 64.52% in Task-B. This places us in the 4th rank in the Task-A and 15th in the Task-B.
| false
|
[] |
[] | null | null | null |
We would like to thank Kemal Oflazer and the shared task organizers for their support throughout this work.
|
2014
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
ngo-ho-yvon-2020-generative
|
https://aclanthology.org/2020.amta-research.6
|
Generative latent neural models for automatic word alignment
|
Word alignments identify translational correspondences between words in a parallel sentence pair and are used, for instance, to learn bilingual dictionaries, to train statistical machine translation systems or to perform quality estimation. Variational autoencoders have been recently used in various areas of natural language processing to learn in an unsupervised way latent representations that are useful for language generation tasks. In this paper, we study these models for the task of word alignment and propose and assess several evolutions of a vanilla variational autoencoder. We demonstrate that these techniques can yield competitive results as compared to Giza++ and to a strong neural network alignment system for two language pairs.
| false
|
[] |
[] | null | null | null |
2 We omit the initial step, consisting in sampling the lengths I and J and the dependencies wrt. these variables.
|
2020
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
ochitani-etal-1997-goal
|
https://aclanthology.org/W97-0708
|
Goal-Directed Approach for Text Summarization
|
The information to include in a summary varies depending on the author's intention and the use of the summary. To create the best summaries, the appropriate goals of the extracting process should be set and a guide should be outlined that instructs the system how to meet the tasks. The approach described in this report is intended to be a basic architecture to extract a set of concise sentences that are indicated or predicted by goals and contexts. To evaluate a sentence, the sentence selection algorithm simply measures the informativeness of each sentence by comparing with the determined goals, and the algorithm extracts a set of the highest scored sentences by repeated application of this comparison. This approach is applied to the summary of newspaper articles. The headlines are used as the goals. Also the method to extract characteristic sentences by using property information of text is shown. In this experiment in which Japanese news articles are summarized, the summaries consist of about 30% of the original text. On average, this method extracts 50% less text than the simple title-keyword method.
| false
|
[] |
[] | null | null | null | null |
1997
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
kuhn-2004-experiments
|
https://aclanthology.org/P04-1060
|
Experiments in parallel-text based grammar induction
|
This paper discusses the use of statistical word alignment over multiple parallel texts for the identification of string spans that cannot be constituents in one of the languages. This information is exploited in monolingual PCFG grammar induction for that language, within an augmented version of the inside-outside algorithm. Besides the aligned corpus, no other resources are required. We discuss an implemented system and present experimental results with an evaluation against the Penn Treebank.
| false
|
[] |
[] | null | null | null | null |
2004
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
edmonds-1997-choosing
|
https://aclanthology.org/P97-1067
|
Choosing the Word Most Typical in Context Using a Lexical Co-occurrence Network
|
This paper presents a partial solution to a component of the problem of lexical choice: choosing the synonym most typical, or expected, in context. We apply a new statistical approach to representing the context of a word through lexical co-occurrence networks. The implementation was trained and evaluated on a large corpus, and results show that the inclusion of second-order co-occurrence relations improves the performance of our implemented lexical choice program.
| false
|
[] |
[] | null | null | null |
For comments and advice, I thank Graeme Hirst, Eduard Hovy, and Stephen Green. This work is financially supported by the Natural Sciences and Engineering Council of Canada.
|
1997
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
mackinlay-2005-using
|
https://aclanthology.org/U05-1011
|
Using Diverse Information Sources to Retrieve Samples of Low Density Languages
|
Language samples are useful as an object of study for a diverse range of people. Samples of low-density languages in particular are often valuable in their own right, yet it is these samples which are most difficult to locate, especially in a vast repository of information such as the World Wide Web. We identify here some shortcomings to the more obvious approaches to locating such samples and present an alternative technique based on a search query using publicly available wordlists augmented with geospatial evidence, and show that the technique is successful for a number of languages.
| false
|
[] |
[] | null | null | null | null |
2005
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
chung-etal-2014-sampling
|
https://aclanthology.org/J14-1007
|
Sampling Tree Fragments from Forests
|
We study the problem of sampling trees from forests, in the setting where probabilities for each tree may be a function of arbitrarily large tree fragments. This setting extends recent work for sampling to learn Tree Substitution Grammars to the case where the tree structure (TSG derived tree) is not fixed. We develop a Markov chain Monte Carlo algorithm which corrects for the bias introduced by unbalanced forests, and we present experiments using the algorithm to learn Synchronous Context-Free Grammar rules for machine translation. In this application, the forests being sampled represent the set of Hiero-style rules that are consistent with fixed input word-level alignments. We demonstrate equivalent machine translation performance to standard techniques but with much smaller grammars.
| false
|
[] |
[] | null | null | null |
1 We randomly sampled our data from various different sources (LDC2006E86, LDC2006E93, LDC2002E18, LDC2002L27, LDC2003E07, LDC2003E14, LDC2004T08, LDC2005T06, LDC2005T10, LDC2005T34, LDC2006E26, LDC2005E83, LDC2006E34, LDC2006E85, LDC2006E92, LDC2006E24, LDC2006E92, LDC2006E24). The language model is trained on the English side of entire data (1.65M sentences, which is 39.3M words).
|
2014
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
antona-tsujii-1993-treatment
|
https://aclanthology.org/1993.tmi-1.11
|
Treatment of Tense and Aspect in Translation from Italian to Greek --- An Example of Treatment of Implicit Information in Knowledge-based Transfer MT ---
|
Treatment of tense and aspect is one of the well-known difficulties in MT, since individual languages differ as to their temporal and aspectual systems and do not allow simple correspondence of verbal forms of two languages. An approach to time suitable for MT has been elaborated in the EUROTRA project (e.g. [van Eynde 1988] ) which avoids a direct mapping of forms by:
| false
|
[] |
[] | null | null | null |
We are grateful to Sophia Ananiadou for her comments on an earlier draft of the paper and for examples of Greek translations.
|
1993
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
granfeldt-etal-2006-cefle
|
http://www.lrec-conf.org/proceedings/lrec2006/pdf/246_pdf.pdf
|
CEFLE and Direkt Profil: a New Computer Learner Corpus in French L2 and a System for Grammatical Profiling
|
The importance of computer learner corpora for research in both second language acquisition and foreign language teaching is rapidly increasing. Computer learner corpora can provide us with data to describe the learner's interlanguage system at different points of its development and they can be used to create pedagogical tools. In this paper, we first present a new computer learner corpora in French. We then describe an analyzer called Direkt Profil, that we have developed using this corpus. The system carries out a sentence analysis based on developmental sequences, i.e. local morphosyntactic phenomena linked to a development in the acquisition of French as a foreign language. We present a brief introduction to developmental sequences and some examples in French. In the final section, we introduce and evaluate a method to optimize the definition and detection of learner profiles using machine-learning techniques.
| true
|
[] |
[] |
Quality Education
| null | null |
The research presented here is supported by a grant from the Swedish Research Council, grant number 2004-1674 to the first author and by grants from the Elisabeth Rausing foundation for research in the Humanities and from Erik Philip-Sörenssens foundation for research.
|
2006
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
zhao-etal-2016-textual
|
https://aclanthology.org/C16-1212
|
Textual Entailment with Structured Attentions and Composition
|
Deep learning techniques are increasingly popular in the textual entailment task, overcoming the fragility of traditional discrete models with hard alignments and logics. In particular, the recently proposed attention models (Rocktäschel et al., 2015; Wang and Jiang, 2015) achieves state-of-the-art accuracy by computing soft word alignments between the premise and hypothesis sentences. However, there remains a major limitation: this line of work completely ignores syntax and recursion, which is helpful in many traditional efforts. We show that it is beneficial to extend the attention model to tree nodes between premise and hypothesis. More importantly, this subtree-level attention reveals information about entailment relation. We study the recursive composition of this subtree-level entailment relation, which can be viewed as a soft version of the Natural Logic framework (MacCartney and Manning, 2009). Experiments show that our structured attention and entailment composition model can correctly identify and infer entailment relations from the bottom up, and bring significant improvements in accuracy.
| false
|
[] |
[] | null | null | null |
We thank the anonymous reviewers for helpful comments. We are also grateful to James Cross, Dezhong Deng, and Lemao Liu for suggestions. This project was supported in part by NSF IIS-1656051, DARPA FA8750-13-2-0041 (DEFT), and a Google Faculty Research Award.
|
2016
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
jin-etal-2022-good
|
https://aclanthology.org/2022.acl-long.197
|
A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models
|
Large pre-trained vision-language (VL) models can learn a new task with a handful of examples and generalize to a new task without fine-tuning. However, these VL models are hard to deploy for real-world applications due to their impractically huge sizes and slow inference speed. To solve this limitation, we study prompt-based low-resource learning of VL tasks with our proposed method, FEWVLM, relatively smaller than recent few-shot learners. For FEWVLM, we pre-train a sequence-to-sequence transformer model with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM). Furthermore, we analyze the effect of diverse prompts for few-shot tasks. Experimental results on VQA show that FEWVLM with prompt-based learning outperforms Frozen (Tsimpoukelli et al., 2021) which is 31× larger than FEWVLM by 18.2% point and achieves comparable results to a 246× larger model, PICa (Yang et al., 2021). In our analysis, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance, (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data, and (3) MaskedLM helps VQA tasks while PrefixLM boosts captioning performance. Our code is publicly available at https://github.com/woojeongjin/FewVLM (* Work was mainly done while interning at Microsoft Azure AI.)
| false
|
[] |
[] | null | null | null | null |
2022
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
cheng-etal-2020-exploiting
|
https://aclanthology.org/2020.rocling-1.27
|
Exploiting Text Prompts for the Development of an End-to-End Computer-Assisted Pronunciation Training System
|
More recently, there is a growing demand for the development of computer assisted pronunciation training (CAPT) systems, which can be capitalized to automatically assess the pronunciation quality of L2 learners. However, current CAPT systems that build on end-to-end (E2E) neural network architectures still fall short of expectation for the detection of
| false
|
[] |
[] | null | null | null | null |
2020
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
mohammad-2018-word
|
https://aclanthology.org/L18-1027
|
Word Affect Intensities
|
Words often convey affect-emotions, feelings, and attitudes. Further, different words can convey affect to various degrees (intensities). However, existing manually created lexicons for basic emotions (such as anger and fear) indicate only coarse categories of affect association (for example, associated with anger or not associated with anger). Automatic lexicons of affect provide fine degrees of association, but they tend not to be accurate as human-created lexicons. Here, for the first time, we present a manually created affect intensity lexicon with real-valued scores of intensity for four basic emotions: anger, fear, joy, and sadness. (We will subsequently add entries for more emotions such as disgust, anticipation, trust, and surprise.) We refer to this dataset as the NRC Affect Intensity Lexicon, or AIL for short. AIL has entries for close to 6,000 English words. We used a technique called best-worst scaling (BWS) to create the lexicon. BWS improves annotation consistency and obtains reliable fine-grained scores (split-half reliability > 0.91). We also compare the entries in AIL with the entries in the NRC VAD Lexicon, which has valence, arousal, and dominance (VAD) scores for 20K English words. We find that anger, fear, and sadness words, on average, have very similar VAD scores. However, sadness words tend to have slightly lower dominance scores than fear and anger words. The Affect Intensity Lexicon has applications in automatic emotion analysis in a number of domains such as commerce, education, intelligence, and public health. AIL is also useful in the building of natural language generation systems.
| false
|
[] |
[] | null | null | null |
Many thanks to Svetlana Kiritchenko and Tara Small for helpful discussions.
|
2018
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
leavitt-1992-morphe
|
https://aclanthology.org/A92-1034
|
MORPHE: A Practical Compiler for Reversible Morphology Rules
|
Morphē is a Common Lisp compiler for reversible inflectional morphology rules developed at the Center for Machine Translation at Carnegie Mellon University. This paper describes the Morphē processing model, its implementation, and how it handles some common morphological processes.
| false
|
[] |
[] | null | null | null |
I would like to thank Alex Franz, Nicholas Brownlow, and Deryle Lonsdale for their comments on drafts of this paper.
|
1992
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
sun-iyyer-2021-revisiting
|
https://aclanthology.org/2021.naacl-main.407
|
Revisiting Simple Neural Probabilistic Language Models
|
Recent progress in language modeling has been driven not only by advances in neural architectures, but also through hardware and optimization improvements. In this paper, we revisit the neural probabilistic language model (NPLM) of Bengio et al. (2003), which simply concatenates word embeddings within a fixed window and passes the result through a feed-forward network to predict the next word. When scaled up to modern hardware, this model (despite its many limitations) performs much better than expected on word-level language model benchmarks. Our analysis reveals that the NPLM achieves lower perplexity than a baseline Transformer with short input contexts but struggles to handle long-term dependencies. Inspired by this result, we modify the Transformer by replacing its first self-attention layer with the NPLM's local concatenation layer, which results in small but consistent perplexity decreases across three word-level language modeling datasets.
| false
|
[] |
[] | null | null | null |
We thank Nader Akoury, Andrew Drozdov, Shufan Wang, and the rest of UMass NLP group for their constructive suggestions on the draft of this paper. We also thank the anonymous reviewers for their helpful comments. This work was supported by award IIS-1955567 from the National Science Foundation (NSF).
|
2021
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
huang-kurohashi-2017-improving
|
https://aclanthology.org/W17-2704
|
Improving Shared Argument Identification in Japanese Event Knowledge Acquisition
|
Event relation knowledge represents the knowledge of causal and temporal relations between events. Shared arguments of event relation knowledge encode patterns of role shifting in successive events. A two-stage framework was proposed for the task of Japanese event relation knowledge acquisition, in which related event pairs are first extracted, and shared arguments are then identified to form the complete event relation knowledge. This paper focuses on the second stage of this framework, and proposes a method to improve the shared argument identification of related event pairs. We constructed a gold dataset for shared argument learning. By evaluating our system on this gold dataset, we found that our proposed model outperformed the baseline models by a large margin.
| false
|
[] |
[] | null | null | null | null |
2017
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
abbes-etal-2004-architecture
|
https://aclanthology.org/W04-1604
|
The Architecture of a Standard Arabic Lexical Database. Some Figures, Ratios and Categories from the DIINAR.1 Source Program
|
This paper is a contribution to the issue - which has, in the course of the last decade, become critical - of the basic requirements and validation criteria for lexical language resources in Standard Arabic. The work is based on a critical analysis of the architecture of the DIINAR.1 lexical database, the entries of which are associated with grammar-lexis relations operating at word-form level (i.e. in morphological analysis). Investigation shows a crucial difference, in the concept of 'lexical database', between source program and generated lexica. The source program underlying DIINAR.1 is analysed, and some figures and ratios are presented. The original categorisations are, in the course of scrutiny, partly revisited. Results and ratios are given here for basic entries on the one hand, and for generated lexica of inflected word-forms on the other. They aim at giving a first answer to the question of the ratios between the number of lemma-entries and inflected word-forms that can be expected to be included in, or generated by, a Standard Arabic lexical dB. These ratios can be considered as one overall language-specific criterion for the analysis, evaluation and validation of lexical dB-s in Arabic.
| false
|
[] |
[] | null | null | null | null |
2004
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
pust-etal-2015-parsing
|
https://aclanthology.org/D15-1136
|
Parsing English into Abstract Meaning Representation Using Syntax-Based Machine Translation
|
We present a parser for Abstract Meaning Representation (AMR). We treat English-to-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser significantly improves upon state-of-the-art results.
| false
|
[] |
[] | null | null | null |
Thanks to Julian Schamper and Allen Schmaltz for early attempts at this problem. This work was sponsored by DARPA DEFT (FA8750-13-2-0045), DARPA BOLT (HR0011-12-C-0014), and DARPA Big Mechanism (W911NF-14-1-0364).
|
2015
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
sogaard-2010-inversion
|
https://aclanthology.org/2010.eamt-1.5
|
Can inversion transduction grammars generate hand alignments
|
The adequacy of inversion transduction grammars (ITGs) has been widely debated, and the discussion's crux seems to be whether the search space is inclusive enough (Zens and Ney, 2003; Wellington et al., 2006; Søgaard and Wu, 2009). Parse failure rate when parses are constrained by word alignments is one metric that has been used, but no one has studied parse failure rates of the full class of ITGs on representative hand aligned corpora. It has also been noted that ITGs in Chomsky normal form induce strictly less alignments than ITGs (Søgaard and Wu, 2009). This study is the first study that directly compares parse failure rates for this subclass and the full class of ITGs.
| false
|
[] |
[] | null | null | null | null |
2010
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
klie-etal-2018-inception
|
https://aclanthology.org/C18-2002
|
The INCEpTION Platform: Machine-Assisted and Knowledge-Oriented Interactive Annotation
|
We introduce INCEpTION, a new annotation platform for tasks including interactive and semantic annotation (e.g., concept linking, fact linking, knowledge base population, semantic frame annotation). These tasks are very time consuming and demanding for annotators, especially when knowledge bases are used. We address these issues by developing an annotation platform that incorporates machine learning capabilities which actively assist and guide annotators. The platform is both generic and modular. It targets a range of research domains in need of semantic annotation, such as digital humanities, bioinformatics, or linguistics. INCEpTION is publicly available as open-source software. 1
| false
|
[] |
[] | null | null | null |
We thank Wei Ding, Peter Jiang and Marcel de Boer and Naveen Kumar for their valuable contributions and Teresa Botschen and Yevgeniy Puzikov for their helpful comments. This work was supported by the German Research Foundation under grant No. EC 503/1-1 and GU 798/21-1 (INCEpTION).
|
2018
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
shain-2021-cdrnn
|
https://aclanthology.org/2021.acl-long.288
|
CDRNN: Discovering Complex Dynamics in Human Language Processing
|
The human mind is a dynamical system, yet many analysis techniques used to study it are limited in their ability to capture the complex dynamics that may characterize mental processes. This study proposes the continuous-time deconvolutional regressive neural network (CDRNN), a deep neural extension of continuous-time deconvolutional regression (CDR, Shain and Schuler, 2021) that jointly captures time-varying, non-linear, and delayed influences of predictors (e.g. word surprisal) on the response (e.g. reading time). Despite this flexibility, CDRNN is interpretable and able to illuminate patterns in human cognition that are otherwise difficult to study. Behavioral and fMRI experiments reveal detailed and plausible estimates of human language processing dynamics that generalize better than CDR and other baselines, supporting a potential role for CDRNN in studying human language processing.
| false
|
[] |
[] | null | null | null | null |
2021
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
hovy-etal-2002-computer
|
http://www.lrec-conf.org/proceedings/lrec2002/pdf/5.pdf
|
Computer-Aided Specification of Quality Models for Machine Translation Evaluation
|
This article describes the principles and mechanism of an integrative effort in machine translation (MT) evaluation. Building upon previous standardization initiatives, above all ISO/IEC 9126, 14598 and EAGLES, we attempt to classify into a coherent taxonomy most of the characteristics, attributes and metrics that have been proposed for MT evaluation. The main articulation of this flexible framework is the link between a taxonomy that helps evaluators define a context of use for the evaluated software, and a taxonomy of the quality characteristics and associated metrics. The article explains the theoretical grounds of this articulation, along with an overview of the taxonomies in their present state, and a perspective on ongoing work in MT evaluation standardization.
| false
|
[] |
[] | null | null | null | null |
2002
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
trujillo-1992-locations
|
https://aclanthology.org/1992.tmi-1.2
|
Locations in the machine translation of prepositional phrases
|
An approach to the machine translation of locative prepositional phrases (PP) is presented. The technique has been implemented for use in an experimental transfer-based, multilingual machine translation system. Previous approaches to this problem are described and they are compared to the solution presented.
| false
|
[] |
[] | null | null | null |
This work was funded by the UK Science and Engineering Research Council. Many thanks to Ted Briscoe, Antonio Sanfilippo, John Beaven, Ann Copestake, Valeria de Paiva, and three anonymous reviewers. Thanks also to Trinity Hall, Cambridge, for a travel grant. All remaining errors are mine.
|
1992
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
zupon-etal-2019-lightly
|
https://aclanthology.org/W19-1504
|
Lightly-supervised Representation Learning with Global Interpretability
|
We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning. Our algorithm iteratively learns custom embeddings for both the multi-word entities to be extracted and the patterns that match them from a few example entities per category. We demonstrate that this representation-based approach outperforms three other state-of-the-art bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes. Additionally, using these embeddings, our approach outputs a globally-interpretable model consisting of a decision list, by ranking patterns based on their proximity to the average entity embedding in a given class. We show that this interpretable model performs close to our complete bootstrapping model, proving that representation learning can be used to produce interpretable models with small loss in performance. This decision list can be edited by human experts to mitigate some of that loss and in some cases outperform the original model.
| false
|
[] |
[] | null | null | null |
We gratefully thank Yoav Goldberg for his suggestions for the manual curation experiments.This work was supported by the Defense Advanced Research Projects Agency (DARPA) under the Big Mechanism program, grant W911NF-14-1-0395, and by the Bill and Melinda Gates Foundation HBGDki Initiative. Marco Valenzuela-Escárcega and Mihai Surdeanu declare a financial interest in lum.ai. This interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies.
|
2019
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
biesialska-etal-2019-talp
|
https://aclanthology.org/W19-5424
|
The TALP-UPC System for the WMT Similar Language Task: Statistical vs Neural Machine Translation
|
Although the problem of similar language translation has been an area of research interest for many years, it is still far from being solved. In this paper, we study the performance of two popular approaches: statistical and neural. We conclude that both methods yield similar results; however, the performance varies depending on the language pair. While the statistical approach outperforms the neural one by a difference of 6 BLEU points for the Spanish-Portuguese language pair, the proposed neural model surpasses the statistical one by a difference of 2 BLEU points for Czech-Polish. In the former case, the language similarity (based on perplexity) is much higher than in the latter case. Additionally, we report negative results for the system combination with back-translation. Our TALP-UPC system submission won 1st place for Czech→Polish and 2nd place for Spanish→Portuguese in the official evaluation of the 1st WMT Similar Language Translation task.
| false
|
[] |
[] | null | null | null |
The authors want to thank Pablo Gamallo, José Ramom Pichel Campos and Iñaki Alegria for sharing their valuable insights on their language distance studies.This work is supported in part by the Spanish Ministerio de Economía y Competitividad, the European Regional Development Fund and the Agencia Estatal de Investigación, through the postdoctoral senior grant Ramón y Cajal, the contract TEC2015-69266-P (MINECO/FEDER,EU) and the contract PCIN-2017-079 (AEI/MINECO).
|
2019
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
bhat-sharma-2013-animacy
|
https://aclanthology.org/I13-1008
|
Animacy Acquisition Using Morphological Case
|
Animacy is an inherent property of entities that nominals refer to in the physical world. This semantic property of a nominal has received much attention in both linguistics and computational linguistics. In this paper, we present a robust unsupervised technique to infer the animacy of nominals in languages with rich morphological case. The intuition behind our method is that the control/agency of a noun depicted by case marking can approximate its animacy. A higher control over an action implies higher animacy. Our experiments on Hindi show promising results with Fβ and Purity scores of 89 and 86 respectively.
| false
|
[] |
[] | null | null | null |
We would like to thank the anonymous reviewers for their useful comments which helped to improve this paper. We furthermore thank Sambhav Jain for his help and useful feedback.
|
2013
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
ahuja-desai-2020-accelerating
|
https://aclanthology.org/2020.nlp4convai-1.6
|
Accelerating Natural Language Understanding in Task-Oriented Dialog
|
Task-oriented dialog models typically leverage complex neural architectures and large-scale, pre-trained Transformers to achieve state-of-the-art performance on popular natural language understanding benchmarks. However, these models frequently have in excess of tens of millions of parameters, making them impossible to deploy on-device where resource-efficiency is a major concern. In this work, we show that a simple convolutional model compressed with structured pruning achieves largely comparable results to BERT (Devlin et al., 2019) on ATIS and Snips, with under 100K parameters. Moreover, we perform acceleration experiments on CPUs, where we observe our multi-task model predicts intents and slots nearly 63× faster than even DistilBERT (Sanh et al., 2019).
| false
|
[] |
[] | null | null | null |
Thanks to our anonymous reviewers for their helpful comments and feedback.
|
2020
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
jha-etal-2010-corpus
|
https://aclanthology.org/W10-0702
|
Corpus Creation for New Genres: A Crowdsourced Approach to PP Attachment
|
This paper explores the task of building an accurate prepositional phrase attachment corpus for new genres while avoiding a large investment in terms of time and money by crowdsourcing judgments. We develop and present a system to extract prepositional phrases and their potential attachments from ungrammatical and informal sentences and pose the subsequent disambiguation tasks as multiple choice questions to workers from Amazon's Mechanical Turk service. Our analysis shows that this two-step approach is capable of producing reliable annotations on informal and potentially noisy blog text, and this semi-automated strategy holds promise for similar annotation projects in new genres.
| false
|
[] |
[] | null | null | null |
The authors would like to thank Kevin Lerman for his help in formulating the original ideas for this work. This material is based on research supported in part by the U.S. National Science Foundation (NSF) under IIS-05-34871. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.
|
2010
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
st-jacques-barriere-2006-similarity
|
https://aclanthology.org/W06-1103
|
Similarity Judgments: Philosophical, Psychological and Mathematical Investigations
|
This study investigates similarity judgments from two angles. First, we look at models suggested in the psychology and philosophy literature which capture the essence of concept similarity evaluation for humans. Second, we analyze the properties of many metrics which simulate such evaluation capabilities. The first angle reveals that non-experts can judge similarity and that their judgments need not be based on predefined traits. We use such conclusions to inform us on how gold standards for word sense disambiguation tasks could be established. From the second angle, we conclude that more attention should be paid to metric properties before assigning them to perform a particular task.
| false
|
[] |
[] | null | null | null | null |
2006
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
tomuro-1998-semi
|
https://aclanthology.org/W98-0715
|
Semi-automatic Induction of Systematic Polysemy from WordNet
|
This paper describes a semi-automatic method of inducing underspecified semantic classes from WordNet verbs and nouns. An underspecified semantic class is an abstract semantic class which encodes systematic polysemy, a set of word senses that are related in systematic and predictable ways. We show the usefulness of the induced classes in the semantic interpretations and contextual inferences of real-world texts by applying them to the predicate-argument structures in the Brown corpus.
| false
|
[] |
[] | null | null | null |
The author would like to thank Paul Buitelaar for helpful discussions, insights and encouragement.
|
1998
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
yazdani-etal-2015-learning
|
https://aclanthology.org/D15-1201
|
Learning Semantic Composition to Detect Non-compositionality of Multiword Expressions
|
Non-compositionality of multiword expressions is an intriguing problem that can be the source of error in a variety of NLP tasks such as language generation, machine translation and word sense disambiguation. We present methods of non-compositionality detection for English noun compounds using the unsupervised learning of a semantic composition function. Compounds which are not well modeled by the learned semantic composition function are considered non-compositional. We explore a range of distributional vector-space models for semantic composition, empirically evaluate these models, and propose additional methods which improve results further. We show that a complex function such as polynomial projection can learn semantic composition and identify non-compositionality in an unsupervised way, beating all other baselines ranging from simple to complex. We show that enforcing sparsity is a useful regularizer in learning complex composition functions. We show further improvements by training a decomposition function in addition to the composition function. Finally, we propose an EM algorithm over latent compositionality annotations that also improves the performance.
| false
|
[] |
[] | null | null | null |
This research was partially funded by Hasler foundation project no. 15019, "Deep Neural Network Dependency Parser for Context-aware Representation Learning".
|
2015
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
liu-etal-2009-capturing
|
https://aclanthology.org/P09-2007
|
Capturing Errors in Written Chinese Words
|
A collection of 3208 reported errors of Chinese words were analyzed. Among these, 7.2% involved rarely used characters, and 98.4% were assigned common classifications of their causes by human subjects. In particular, 80% of the errors observed in writings of middle school students were related to the pronunciations and 30% were related to the compositions of words. Experimental results show that using intuitive Web-based statistics helped us capture only about 75% of these errors. In a related task, the Web-based statistics are useful for recommending incorrect characters for composing test items for "incorrect character identification" tests about 93% of the time.
| false
|
[] |
[] | null | null | null |
This research has been funded in part by the National Science Council of Taiwan under the grant NSC-97-2221-E-004-007-MY2. We thank the anonymous reviewers for invaluable comments, and more responses to the comments are available in (Liu et al. 2009) .
|
2009
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
saumya-etal-2021-offensive
|
https://aclanthology.org/2021.dravidianlangtech-1.5
|
Offensive language identification in Dravidian code mixed social media text
|
Hate speech and offensive language recognition in social media platforms have been an active field of research over recent years. In non-native English spoken countries, social media texts are mostly in code mixed or script mixed/switched form. The current study presents extensive experiments using multiple machine learning, deep learning, and transfer learning models to detect offensive content on Twitter. The data set used for this study are in Tanglish (Tamil and English), Manglish (Malayalam and English) code-mixed, and Malayalam script-mixed. The experimental results showed that 1 to 6-gram character TF-IDF features are better for the said task. The best performing models were naive bayes, logistic regression, and vanilla neural network for the dataset Tamil code-mix, Malayalam code-mixed, and Malayalam script-mixed, respectively instead of more popular transfer learning models such as BERT and ULMFiT and hybrid deep models.
| true
|
[] |
[] |
Peace, Justice and Strong Institutions
| null | null | null |
2021
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
|
lialin-etal-2022-life
|
https://aclanthology.org/2022.acl-long.227
|
Life after BERT: What do Other Muppets Understand about Language?
|
Existing pre-trained transformer analysis works usually focus only on one or two model families at a time, overlooking the variability of the architecture and pre-training objectives. In our work, we utilize the oLMpics benchmark and psycholinguistic probing datasets for a diverse set of 29 models including T5, BART, and ALBERT. Additionally, we adapt the oLMpics zero-shot setup for autoregressive models and evaluate GPT networks of different sizes. Our findings show that none of these models can resolve compositional questions in a zero-shot fashion, suggesting that this skill is not learnable using existing pre-training objectives. Furthermore, we find that global model decisions such as architecture, directionality, size of the dataset, and pre-training objective are not predictive of a model's linguistic capabilities. The code for this study is available on GitHub 1 .
| false
|
[] |
[] | null | null | null |
This work is funded in part by the NSF award number IIS-1844740.
|
2022
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
yan-etal-2021-unified-generative
|
https://aclanthology.org/2021.acl-long.451
|
A Unified Generative Framework for Various NER Subtasks
|
Named Entity Recognition (NER) is the task of identifying spans that represent entities in sentences. Whether the entity spans are nested or discontinuous, the NER task can be categorized into the flat NER, nested NER, and discontinuous NER subtasks. These subtasks have been mainly solved by the token-level sequence labelling or span-level classification. However, these solutions can hardly tackle the three kinds of NER subtasks concurrently. To that end, we propose to formulate the NER subtasks as an entity span sequence generation task, which can be solved by a unified sequence-to-sequence (Seq2Seq) framework. Based on our unified framework, we can leverage the pre-trained Seq2Seq model to solve all three kinds of NER subtasks without the special design of the tagging schema or ways to enumerate spans. We exploit three types of entity representations to linearize entities into a sequence. Our proposed framework is easy-to-implement and achieves state-of-the-art (SoTA) or near SoTA performance on eight English NER datasets, including two flat NER datasets, three nested NER datasets, and three discontinuous NER datasets.
| false
|
[] |
[] | null | null | null |
We would like to thank the anonymous reviewers for their insightful comments. The discussion with colleagues in AWS Shanghai AI Lab was quite fruitful. We also thank the developers of fastNLP 10 and fitlog 11 . We thank Juntao Yu for helpful discussion about dataset processing. This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106700) and National Natural Science Foundation of China (No. 62022027).
|
2021
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
siddharthan-2003-preserving
|
https://aclanthology.org/W03-2314
|
Preserving Discourse Structure when Simplifying Text
| null | false
|
[] |
[] | null | null | null | null |
2003
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
kilbury-etal-1991-datr
|
https://aclanthology.org/E91-1024
|
Datr as a Lexical Component for PATR
|
means that associated information is represented together or bundled. One advantage of this bundled information is its reusability, which allows redundancy to be reduced. The representation of lexical information should enable us to express a further kind of generalization, namely the relations between regularity, subregularity, and irregularity. Furthermore, the representation has to be computationally tractable and -- possibly with the addition of "syntactic sugar" -- more or less readable for human users.
In the project "Simulation of Lexical Acquisition" (SIMLEX) unification is used to create new lexical entries through the monotonic accumulation of contextual grammatical information during parsing. The system which we implemented for this purpose is a variant of PATR as described in (Shieber, 1986) .
| false
|
[] |
[] | null | null | null |
The research project SIMLEX is supported by the DFG under grant number Ki 374/1. The authors are indebted to the participants of the Workshop on Inheritance, Tilburg 1990.
|
1991
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
thorne-etal-2013-automated
|
https://aclanthology.org/I13-1160
|
Automated Activity Recognition in Clinical Documents
|
We describe a first experiment on the identification and extraction of computer-interpretable guideline (CIG) components (activities, actors and consumed artifacts) from clinical documents, based on clinical entity recognition techniques. We rely on MetaMap and the UMLS Metathesaurus to provide lexical information, and study the impact of clinical document syntax and semantics on activity recognition.
| true
|
[] |
[] |
Good Health and Well-Being
| null | null | null |
2013
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
farahmand-henderson-2016-modeling
|
https://aclanthology.org/W16-1809
|
Modeling the Non-Substitutability of Multiword Expressions with Distributional Semantics and a Log-Linear Model
|
Non-substitutability is a property of Multiword Expressions (MWEs) that often causes lexical rigidity and is relevant for most types of MWEs. Efficient identification of this property can result in the efficient identification of MWEs. In this work we propose using distributional semantics, in the form of word embeddings, to identify candidate substitutions for a candidate MWE and model its substitutability. We use our models to rank MWEs based on their lexical rigidity and study their performance in comparison with association measures. We also study the interaction between our models and association measures. We show that one of our models can significantly improve over the association measure baselines, identifying collocations.
| false
|
[] |
[] | null | null | null | null |
2016
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
vandeghinste-schuurman-2014-linking
|
http://www.lrec-conf.org/proceedings/lrec2014/pdf/189_Paper.pdf
|
Linking Pictographs to Synsets: Sclera2Cornetto
|
Social inclusion of people with Intellectual and Developmental Disabilities can be promoted by offering them ways to independently use the internet. People with reading or writing disabilities can use pictographs instead of text. We present a resource in which we have linked a set of 5710 pictographs to lexical-semantic concepts in Cornetto, a Wordnet-like database for Dutch. We show that, by using this resource in a text-to-pictograph translation system, we can greatly improve the coverage comparing with a baseline where words are converted into pictographs only if the word equals the filename.
| false
|
[] |
[] | null | null | null |
This research is done in the Picto project, funded by the Support Fund Marguerite-Marie Delacroix. 17 Follow up work on the localisation of the text to pictograph translator is funded by the European Commission CIP-621055 in the Able-to-Include project.
|
2014
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|
fu-etal-2014-improving
|
https://aclanthology.org/W14-6807
|
Improving Chinese Sentence Polarity Classification via Opinion Paraphrasing
|
While substantial studies have been achieved on sentiment polarity classification to date, lacking enough opinion-annotated corpora for reliable training is still a challenge. In this paper we propose to improve a support vector machines based polarity classifier by enriching both training data and test data via opinion paraphrasing. In particular, we first extract an equivalent set of attribute-evaluation pairs from the training data and then exploit it to generate opinion paraphrases in order to expand the training corpus or enrich opinionated sentences for polarity classification. We tested our system over two sets of online product reviews in car and mobile-phone domains. The experimental results show that using opinion paraphrases results in significant performance improvement in polarity classification.
| false
|
[] |
[] | null | null | null |
This study was supported by National Natural Science Foundation of China under Grant No.61170148 and No.60973081, the Returned Scholar Foundation of Heilongjiang Province, and Harbin Innovative Foundation for Returnees under Grant No.2009RFLXG007, respectively.
|
2014
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
|

ID: liu-etal-2020-unsupervised
url: https://aclanthology.org/2020.acl-main.28
title: Unsupervised Paraphrasing by Simulated Annealing
abstract: We propose UPSA, a novel approach that accomplishes Unsupervised Paraphrasing by Simulated Annealing. We model paraphrase generation as an optimization problem and propose a sophisticated objective function, involving semantic similarity, expression diversity, and language fluency of paraphrases. UPSA searches the sentence space towards this objective by performing a sequence of local edits. We evaluate our approach on various datasets, namely Quora, Wikianswers, MSCOCO, and Twitter. Extensive results show that UPSA achieves state-of-the-art performance compared with previous unsupervised methods in terms of both automatic and human evaluations. Further, our approach outperforms most existing domain-adapted supervised models, showing the generalizability of UPSA.
label_nlp4sg: false
task: []
method: []
goal1/goal2/goal3: null
acknowledgments: We thank the anonymous reviewers for their insightful suggestions. This work was supported in part by the Beijing Innovation Center for Future Chip. Lili Mou is supported by AltaML, the Amii Fellow Program, and the Canadian CIFAR AI Chair Program; he also acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), RGPIN-2020-04465. Sen Song is the corresponding author of this paper.
year: 2020
sdg1–sdg17: all false

ID: dohsaka-etal-2010-user
url: https://aclanthology.org/W10-4358
title: User-adaptive Coordination of Agent Communicative Behavior in Spoken Dialogue
abstract: In this paper, which addresses smooth spoken interaction between human users and conversational agents, we present an experimental study that evaluates a method for user-adaptive coordination of agent communicative behavior. Our method adapts the pause duration preceding agent utterances and the agent gaze duration to reduce the discomfort perceived by individual users during interaction. The experimental results showed a statistically significant tendency: the duration of the agent pause and the gaze converged during interaction with the method. The method also significantly improved the perceived relevance of the agent communicative behavior.
label_nlp4sg: true
task: []
method: []
goal1: Industry, Innovation and Infrastructure
goal2/goal3: null
acknowledgments: This work was partially supported by a Grant-in-Aid for Scientific Research on Innovative Areas, "Founding a creative society via collaboration between humans and robots" (21118004), from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
year: 2010
sdg9: true; all other sdg flags (sdg1–sdg8, sdg10–sdg17): false

ID: ikeda-etal-1998-information
url: https://aclanthology.org/C98-1090
title: Information Classification and Navigation Based on 5W1H of the Target Information
abstract: This paper proposes a method by which 5W1H (who, when, where, what, why, how, and predicate) information is used to classify and navigate Japanese-language texts. 5W1H information, extracted from text data, has an access platform with three functions: episodic retrieval, multi-dimensional classification, and overall classification. In a six-month trial, the platform was used by 50 people to access 6400 newspaper articles. The three functions proved to be effective for office documentation work, and the precision of extraction was approximately 82%.
label_nlp4sg: false
task: []
method: []
goal1/goal2/goal3: null
acknowledgments: We would like to thank Dr. Satoshi Goto and Dr. Takao Watanabe for their encouragement and continued support throughout this work. We also appreciate the contribution of Mr. Kenji Satoh, Mr. Takayoshi Ochiai, Mr. Satoshi Shimokawara, and Mr. Masahito Abe to this work.
year: 1998
sdg1–sdg17: all false

ID: klein-nabi-2020-contrastive
url: https://aclanthology.org/2020.acl-main.671
title: Contrastive Self-Supervised Learning for Commonsense Reasoning
abstract: We propose a self-supervised method to solve Pronoun Disambiguation and Winograd Schema Challenge problems. Our approach exploits the characteristic structure of training corpora related to so-called "trigger" words, which are responsible for flipping the answer in pronoun disambiguation. We achieve such commonsense reasoning by constructing pairwise contrastive auxiliary predictions. To this end, we leverage a mutual exclusive loss regularized by a contrastive margin. Our architecture is based on the recently introduced transformer network BERT, which exhibits strong performance on many NLP benchmarks. Empirical results show that our method alleviates the limitation of current supervised approaches for commonsense reasoning. This study opens up avenues for exploiting inexpensive self-supervision to achieve performance gain in commonsense reasoning tasks.
label_nlp4sg: false
task: []
method: []
goal1/goal2/goal3: null
acknowledgments: null
year: 2020
sdg1–sdg17: all false

ID: kumar-etal-2020-vocabulary
url: https://aclanthology.org/2020.aacl-main.78
title: Vocabulary Matters: A Simple yet Effective Approach to Paragraph-level Question Generation
abstract: Question generation (QG) has recently attracted considerable attention. Most of the current neural models take as input only one or two sentences and perform poorly when multiple sentences or complete paragraphs are given as input. However, in real-world scenarios, it is very important to be able to generate high-quality questions from complete paragraphs. In this paper, we present a simple yet effective technique for answer-aware question generation from paragraphs. We augment a basic sequence-to-sequence QG model with a dynamic, paragraph-specific dictionary and copy attention that is persistent across the corpus, without requiring features generated by sophisticated NLP pipelines or handcrafted rules. Our evaluation on SQuAD shows that our model significantly outperforms current state-of-the-art systems in question generation from paragraphs in both automatic and human evaluation. We achieve a 6-point improvement over the best system on BLEU-4, from 16.38 to 22.62.
label_nlp4sg: false
task: []
method: []
goal1/goal2/goal3: null
acknowledgments: null
year: 2020
sdg1–sdg17: all false