| title | abstract | introduction | content | abstract_len | intro_len | abs_len |
|---|---|---|---|---|---|---|
Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | An important task for designing QA systems is answer sentence selection (AS2): selecting the sentence containing (or constituting) the answer to a question from a set of retrieved relevant documents. In this paper, we propose three novel sentence-level transformer pre-training objectives that incorporate paragraph-leve... | Question Answering (QA) finds itself at the core of several commercial applications, for e.g., virtual assistants such as Google Home, Alexa and Siri. Answer Sentence Selection (AS2) is an important task for QA Systems operating on unstructured text such as web documents. When presented with a set of relevant documents... | Answer Sentence Selection (AS2) Earlier approaches for AS2 used CNNs Paragraph/Document-level Semantics Transformers for Long Inputs Longformer In this section we formally define the task of AS2. Given a question q and a set of answer candidates A={a 1 , . . ., a n }, the objective is to select the candidate ā ∈ A that... | 893 | 2,037 | 893 |
Style Transfer Through Back-Translation | Style transfer is the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context. This paper introduces a new method for automatic style transfer. We first learn a latent representation of the input sentence which is grounded in a language translation m... | Intelligent, situation-aware applications must produce naturalistic outputs, lexicalizing the same meaning differently, depending upon the environment. This is particularly relevant for language generation tasks such as machine translation This paper introduces a novel approach to transferring style of a sentence while... | Given two datasets 2 } which represent two different styles s 1 and s 2 , respectively, our task is to generate sentences of the desired style while preserving the meaning of the input sentence. Specifically, we generate samples of dataset X 1 such that they belong to style s 2 and samples of X 2 such that they belong ... | 815 | 1,317 | 815 |
Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training | Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, co... | Open-ended tasks such as dialogue reveal a number of issues with current neural text generation methods. In more strongly grounded tasks such as machine translation and image captioning, current encoder-decoder architectures provide strong performance, where mostly word-level decisions are often taken correctly by the ... | Dialogue Generation Dialogue generation consists in predicting an utterance y = (y 1 , . . . , y |y| ) given a context x = {s 1 , . . . , s k , u 1 , . . . , u t } that consists of initial context sentences s 1:k (e.g., scenario, knowledge, personas, etc.) followed by dialogue history utterances u 1:t from speakers who... | 964 | 1,409 | 964 |
Automatic Metric Validation for Grammatical Error Correction | Correction (GEC) is currently done by observing the correlation between human and metric-induced rankings. However, such correlation studies are costly, methodologically troublesome, and suffer from low inter-rater agreement. We propose MAEGE, an automatic methodology for GEC metric validation, that overcomes many of t... | Much recent effort has been devoted to automatic evaluation, both within GEC Human rankings are often considered as ground truth in text-to-text generation, but using them reliably can be challenging. Other than the costs of compiling a sizable validation set, human rank-ings are known to yield poor inter-rater agreeme... | We turn to presenting the metrics we experiment with. The standard practice in GEC evaluation is to define differences between the source and a correction (or a reference) as a set of edits BLEU. BLEU GLEU. GLEU iBLEU. iBLEU We set α = 0.8 as suggested by Sun and Zhou. F -Score computes the overlap of edits to the sour... | 672 | 2,406 | 672 |
Robust Hate Speech Detection via Mitigating Spurious Correlations | We develop a novel robust hate speech detection model that can defend against both wordand character-level adversarial attacks. We identify the essential factor that vanilla detection models are vulnerable to adversarial attacks is the spurious correlation between certain target words in the text and the prediction lab... | Online social media bring people together and encourage people to share their thoughts freely. However, it also allows some users to misuse the platforms to promote the hateful language. As a result, hate speech, which "expresses hate or encourages violence towards a person or group based on characteristics such as rac... | A hate speech detection model can be defined as a functional mapping from T to Y , where t ∈ T is a set of input texts and y ∈ Y is the target label set. In general, the output of the detection model is the softmax probability of predicting each class k, i.e., f k (t; θ) = P (Y = y k |t), where θ is the parameters of t... | 717 | 913 | 717 |
Named Entity Recognition with Character-Level Models | We discuss two named-entity recognition models which use characters and character ¤ -grams either exclusively or as an important part of their data representation. The first model is a character-level HMM with minimal context information, and the second model is a maximum-entropy conditional markov model with substanti... | For most sequence-modeling tasks with word-level evaluation, including named-entity recognition and part-ofspeech tagging, it has seemed natural to use entire words as the basic input features. For example, the classic HMM view of these two tasks is one in which the observations are words and the hidden states encode c... | Figure When using character-level models for word-evaluated tasks, one would not want multiple characters inside a single word to receive different labels. This can be avoided in two ways: by explicitly locking state transitions inside words, or by careful choice of transition topology. In our current implementation, w... | 565 | 888 | 565 |
A Formal Hierarchy of RNN Architectures | We develop a formal hierarchy of the expressive capacity of RNN architectures. The hierarchy is based on two formal properties: space complexity, which measures the RNN's memory, and rational recurrence, defined as whether the recurrent update can be described by a weighted finite-state machine. We place several RNN va... | While neural networks are central to the performance of today's strongest NLP systems, theoretical understanding of the formal properties of different kinds of networks is still limited. It is established, for example, that the Elman (1990) RNN is Turing-complete, given infinite precision and computation time Recently,... | We introduce a unified hierarchy (Figure We provide the first formal proof that LSTMs can encode functions that rational recurrences cannot. On the other hand, we show that the saturated Elman RNN and GRU are rational recurrences with constant space complexity, whereas the QRNN has unbounded space complexity. We also s... | 667 | 683 | 667 |
A Simple and Effective Approach to Coverage-Aware Neural Machine Translation | We offer a simple and effective method to seek a better balance between model confidence and length preference for Neural Machine Translation (NMT). Unlike the popular length normalization and coverage models, our model does not require training nor reranking the limited n-best outputs. Moreover, it is robust to large ... | In the past few years, Neural Machine Translation (NMT) has achieved state-of-the-art performance in many translation tasks. It models the translation problem using neural networks with no assumption of the hidden structures between two languages, and learns the model parameters from bilingual texts in an end-to-end fa... | Given a word sequence, a coverage vector indicates whether the word of each position is translated. This is trivial for statistical machine translation However, it is not the case for NMT where the coverage is modeled in a soft way. In NMT, no explicit translation units or rules are used. The attention mechanism is use... | 522 | 2,040 | 522 |
Deep-speare: A joint neural model of poetic language, meter and rhyme | In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evalua... | With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes? Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; Choi et al., ... | Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013). The earliest attempt at using statistical modelling for poetry generation was Greene et al. (2010), b... | 649 | 1,971 | 649 |
Evolutionary Data Measures: Understanding the Difficulty of Text Classification Tasks | Classification tasks are usually analysed and improved through new model architectures or hyperparameter optimisation but the underlying properties of datasets are discovered on an ad-hoc basis as errors occur. However, understanding the properties of the data is crucial in perfecting models. In this paper we analyse e... | If a machine learning (ML) model is trained on a dataset then the same machine learning model on the same dataset but with more granular labels will frequently have lower performance scores than the original model (see results in Such a difficulty measure would be useful as an analysis tool and as a performance estimat... | One source of difficulty in a dataset is mislabelled items of data (noise). Class Diversity. Class diversity provides information about the composition of a dataset by measuring the relative abundances of different classes Class Balance. Unbalanced classes are a known problem in machine learning Data Complexity. Humans... | 1,080 | 715 | 1,080 |
Neural Readability Pairwise Ranking for Sentences in Italian Administrative Language | Automatic Readability Assessment aims at assigning a complexity level to a given text, which could help improve the accessibility to information in specific domains, such as the administrative one. In this paper, we investigate the behavior of a Neural Pairwise Ranking Model (NPRM) for sentence-level readability assess... | Due to its complexity, the style of Italian administrative texts has been defined as "artificial" and "obscure" One way to tackle this problem is with technologies for Automatic Readability Assessment (ARA) that predict the complexity of texts In this paper, we tackle the data scarcity issue in two ways. First, we intr... | Early ARA techniques consisted in the so-called "readability formulae". Such formulae were created for educational purposes and mainly considered shallow text features, like word and sentence length or lists of common words However, longer words and sentences are not necessarily complex, and these formulae have been pr... | 1,114 | 1,897 | 1,114 |
A Human-Centric Evaluation Platform for Explainable Knowledge Graph Completion | Explanations for AI are expected to help human users understand AI-driven predictions. Evaluating plausibility, the helpfulness of the explanations, is therefore essential for developing eXplainable AI (XAI) that can really aid human users. Here we propose a human-centric evaluation platform 1 to measure plausibility o... | A Knowledge Graph (KG) is a structured representation of knowledge that captures the relationships between entities. It is composed of triples in the format (subject, relation, object), denoted as t = (s, r, o), where two entities are connected by a specified relation. For example, in the triple (London, isCapitalOf, U... | We build an online system to evaluate XKGCs in a human centric manner. Our system considers the real needs and interests of human users in collaboration with AI, allowing us to investigate: can humans assess correctness of a KGC prediction based on its explanations? Which explanations are helpful for human users? The a... | 721 | 2,363 | 721 |
Privacy Implications of Retrieval-Based Language Models | Retrieval-based language models (LMs) have demonstrated improved interpretability, factuality, and adaptability compared to their parametric counterparts by incorporating retrieved text from external datastores. While it is well known that parametric models are prone to leaking private data, it remains unclear how the ... | Retrieval-based language models | Email: mailme@alice.com mailme@bob.com mailme@charlie.com … URL: alice@bob.com harry@hogwarts.edu … text passages that are most relevant to the prompt provided to the model. These retrieved results are then utilized as additional information when generating the model's response to the prompt. Retrievalbased language mo... | 1,389 | 31 | 1,389 |
## Description
This dataset contains ACL papers filtered to fall within certain abstract length ranges. Each record includes columns such as paper_name, year, venue, url, bibkey, and cite_acl.
The accompanying file filtered_dataset.jsonl holds the main text fields for each paper (title, abstract, introduction, and content, along with their lengths).
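A minimal sketch of loading the JSONL file, assuming one JSON object per line with the text fields shown in the preview table above (the helper name `load_records` is illustrative, not part of the dataset):

```python
import json

def load_records(path="filtered_dataset.jsonl"):
    """Read one JSON object per line; skip blank lines."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records
```

Each returned dict can then be filtered by fields such as `abstract_len`, mirroring the length-based filtering described above.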
Note: some records may be missing certain fields, especially bibkey or cite_acl. We plan to fill these in through a combination of manual review and fuzzy matching.
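One way such fuzzy matching could work is to compare a paper's title against the titles of known bibliography entries and recover the bibkey of the closest match. This is only a sketch of the idea using Python's standard `difflib`, not the actual pipeline; the function and variable names are hypothetical:

```python
import difflib

def guess_bibkey(title, bib_entries, cutoff=0.8):
    """Return the bibkey whose title is most similar to `title`, or None.

    bib_entries: dict mapping bibkey -> paper title.
    cutoff: minimum similarity ratio (0..1) to accept a match.
    """
    title_to_key = {t: k for k, t in bib_entries.items()}
    match = difflib.get_close_matches(title, title_to_key.keys(), n=1, cutoff=cutoff)
    return title_to_key[match[0]] if match else None
```

Matches below the cutoff are left unresolved, which is where the manual review step would take over.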
## License
- Materials prior to 2016: CC BY-NC-SA 3.0.
- Materials from 2016 onward: CC BY 4.0.
Because these are ACL materials, older content is restricted to non-commercial use, and all content requires attribution. For more details, see the ACL policy.
## Citation
If you use this dataset, please cite the original ACL papers accordingly.