id | year | title | abstract | pdf_url | content | __index_level_0__ |
|---|---|---|---|---|---|---|
40 | 2023 | NeMo Guardrails: A Toolkit for Controllable and Safe LLM Applications with Programmable Rails | NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. Guardrails (or rails for short) are a specific way of controlling the output of an LLM, such as not talking about topics considered harmful, following a predefined dialogue path, using a particular l... | https://aclanthology.org/2023.emnlp-demo.40 | ## introduction steerability and trustworthiness are key factors for deploying large language models (llms) in production. enabling these models to stay on track for multiple turns of a conversation is essential for developing task-oriented dialogue systems. this seems like a serious challenge as llms can be easily led... | 22868 |
553 | 2024 | Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness | Dialects introduce syntactic and lexical variations in language that occur in regional or social groups. Most NLP methods are not sensitive to such variations. This may lead to unfair behavior of the methods, conveying negative bias towards dialect speakers. While previous work has studied dialect-related fairness for ... | https://aclanthology.org/2024.findings-acl.553 | ## introduction the term social bias is used broadly in the field of nlp. existing works approach various facets, such as the affected social group @xcite , the tasks for which bias is evaluated @xcite , and the limited fairness of nlp systems in real-world settings, which may put specific social groups at a disadvanta... | 31874 |
3 | 2022 | Disentangled Variational Topic Inference for Topic-Accurate Financial Report Generation | Automatic generating financial report from a set of news is important but challenging. The financial reports is composed of key points of the news and corresponding inferring and reasoning from specialists in financial domain with professional knowledge. The challenges lie in the effective learning of the extra knowled... | https://aclanthology.org/2022.finnlp-1.3 | ## introduction automatically generating long financial reports from a set of macro news have been recently studied with the objective to assist analysts to perform the time-consuming reporting task. a macro news, as shown in fig. 1 , is one paragraph with multiple sentences describing a finance-domain event with suppo... | 17003 |
157 | 2023 | Analyzing Modular Approaches for Visual Question Decomposition | Modular neural networks without additional training have recently been shown to surpass end-to-end neural networks on challenging vision–language tasks. The latest such methods simultaneously introduce LLM-based code generation to build programs and a number of skill-specific, task-oriented modules to execute them. In ... | https://aclanthology.org/2023.emnlp-main.157 | ## introduction end-to-end neural networks @xcite have been the predominant solution for visionlanguage tasks, like visual question answering (vqa) @xcite . however, these methods suffer from a lack of interpretability and generalization capabilities. instead, modular (or neurosymbolic) approaches @xcite @xcite have be... | 21933 |
179 | 2020 | Multi-Turn Dialogue Generation in E-Commerce Platform with the Context of Historical Dialogue | As an important research topic, customer service dialogue generation tends to generate generic seller responses by leveraging current dialogue information. In this study, we propose a novel and extensible dialogue generation method by leveraging sellers’ historical dialogue information, which can be both accessible and... | https://aclanthology.org/2020.findings-emnlp.179 | ## introduction over the past years, online shopping has experienced incredible growth. in e-commerce platforms, e.g., amazon and taobao, brilliant customer service is becoming increasingly important because of significantly reducing the workload of shop sellers. ideally, sellers should provide highquality responses to... | 4751 |
14 | 2021 | Familiar words but strange voices: Modelling the influence of speech variability on word recognition | We present a deep neural model of spoken word recognition which is trained to retrieve the meaning of a word (in the form of a word embedding) given its spoken form, a task which resembles that faced by a human listener. Furthermore, we investigate the influence of variability in speech signals on the model’s performan... | https://aclanthology.org/2021.eacl-srw.14 | ## introduction human speech is highly complex and variable. the sources underlying this variability include speakerrelated factors such as vocal tract shape, gender, age, and dialect as well as context-related factors such as word surprisal and phonological prominence. as a result, two acoustic realizations of the sam... | 8569 |
35 | 2022 | MultitraiNMT Erasmus+ project: Machine Translation Training for multilingual citizens (multitrainmt.eu) | The MultitraiNMT Erasmus+ project has developed an open innovative syl-labus in machine translation, focusing on neural machine translation (NMT) and targeting both language learners and translators. The training materials include an open access coursebook with more than 250 activities and a pedagogical NMT interface c... | https://aclanthology.org/2022.eamt-1.35 | ## the coursebook the open access coursebook addresses both the technical foundations of machine translation, and the ethical, societal and professional implications of this approach. it will soon be available from language science press. the coursebook is organized in 9 chapters: (1) multilingualism. (2) introduction ... | 14836 |
742 | 2020 | GRADE: Automatic Graph-Enhanced Coherence Metric for Evaluating Open-Domain Dialogue Systems | Automatically evaluating dialogue coherence is a challenging but high-demand ability for developing high-quality open-domain dialogue systems. However, current evaluation metrics consider only surface features or utterance-level semantics, without explicitly considering the fine-grained topic transition dynamics of dia... | https://aclanthology.org/2020.emnlp-main.742 | ## introduction coherence, what makes dialogue utterances unified rather than a random group of sentences, is an essential property to pursue an open-domain * equal contribution. ## related work automatic evaluation for open-domain dialogue systems is difficult since there are many appropriate responses for a dialogue ... | 4463 |
285 | 2020 | Task-Aware Representation of Sentences for Generic Text Classification | State-of-the-art approaches for text classification leverage a transformer architecture with a linear layer on top that outputs a class distribution for a given prediction problem. While effective, this approach suffers from conceptual limitations that affect its utility in few-shot or zero-shot transfer learning scena... | https://aclanthology.org/2020.coling-main.285 | ## introduction text classification is the task of predicting one or multiple class labels for a given text. it is used in a large number of applications such as spam filtering @xcite , sentiment analysis @xcite , intent detection @xcite or news topic classification @xcite . the current state-of-the-art approach to tex... | 3163 |
180 | 2023 | Operator Selection and Ordering in a Pipeline Approach to Efficiency Optimizations for Transformers | There exists a wide variety of efficiency methods for natural language processing (NLP) tasks, such as pruning, distillation, dynamic inference, quantization, etc. From a different perspective, we can consider an efficiency method as an operator applied on a model. Naturally, we may construct a pipeline of operators, i... | https://aclanthology.org/2023.findings-acl.180 | ## introduction natural language processing (nlp) tasks nowadays heavily rely on complex neural models, especially large-scale pre-trained language models based on the transformer architecture @xcite , such as bert @xcite , roberta @xcite , and gpt @xcite . despite being more accurate than previous models, transformer-... | 23368 |
130 | 2022 | Probing Structured Pruning on Multilingual Pre-trained Models: Settings, Algorithms, and Efficiency | Structured pruning has been extensively studied on monolingual pre-trained language models and is yet to be fully evaluated on their multilingual counterparts. This work investigates three aspects of structured pruning on multilingual pre-trained language models: settings, algorithms, and efficiency. Experiments on nin... | https://aclanthology.org/2022.acl-long.130 | ## introduction large-scale pre-trained monolingual language models like bert @xcite and roberta @xcite have shown promising results in various nlp tasks while suffering from their large model size and high latency. structured pruning has proven to be an effective approach to compressing and accelerating these large mo... | 12933 |
90 | 2023 | Alex-U 2023 NLP at Wojood NER shared task: AraBINDER (Bi-encoder for Arabic Named Entity Recognition) | Named Entity Recognition (NER) is a crucial task in natural language processing that facilitates the extraction of vital information from text. However, NER for Arabic presents a significant challenge due to the language’s unique characteristics. In this paper, we introduce AraBINDER, our submission to the Wojood NER S... | https://aclanthology.org/2023.arabicnlp-1.90 | ## introduction named entity recognition (ner) is a fundamental task in natural language processing that involves identifying and classifying named entities, such as person names, locations, organizations, and temporal expressions, within text. in recent years, deep learning models, particularly transformer-based archi... | 20729 |
4 | 2024 | Explainable CED: A Dataset for Explainable Critical Error Detection in Machine Translation | Critical error detection (CED) in machine translation is a task that aims to detect errors that significantly distort the intended meaning. However, the existing study of CED lacks explainability due to the absence of content addressing the reasons for catastrophic errors. To address this limitation, we propose Explain... | https://aclanthology.org/2024.naacl-srw.4 | ## introduction critical error detection (ced) is a sub-task of quality estimation (qe) that aims to identify sentences where the intended meaning from the source text is distorted due to catastrophic errors in machine translation (mt) systems @xcite . these distortions potentially lead to offensive interpretations or ... | 34500 |
4 | 2024 | nicolay-r at SemEval-2024 Task 3: Using Flan-T5 for Reasoning Emotion Cause in Conversations with Chain-of-Thought on Emotion States | Emotion expression is one of the essential traits of conversations. It may be self-related or caused by another speaker. The variety of reasons may serve as a source of the further emotion causes: conversation history, speaker’s emotional state, etc. Inspired by the most recent advances in Chain-of-Thought, in this wor... | https://aclanthology.org/2024.semeval-1.4 | ## methodology we propose a two-stage training mechanism for performing instruction-tuning on large language models (llms), aimed at accurately inferring of instead of directly asking llm the final result at each stage, we exploit the chain-of-thought (cot) concept in the form of the three-hop reasoning (thor) framewor... | 34892 |
563 | 2023 | Document-Level Multi-Event Extraction with Event Proxy Nodes and Hausdorff Distance Minimization | Document-level multi-event extraction aims to extract the structural information from a given document automatically. Most recent approaches usually involve two steps: (1) modeling entity interactions; (2) decoding entity interactions into events. However, such approaches ignore a global view of inter-dependency of mul... | https://aclanthology.org/2023.acl-long.563 | ## introduction event extraction aims to identify event triggers with certain types and extract their corresponding arguments from text. much research has been done on sentence-level event extraction @xcite @xcite . in recent years, there have been growing interests in tackling the more challenging task of document-lev... | 19882 |
5 | 2022 | Fractality of sentiment arcs for literary quality assessment: The case of Nobel laureates | In the few works that have used NLP to study literary quality, sentiment and emotion analysis have often been considered valuable sources of information. At the same time, the idea that the nature and polarity of the sentiments expressed by a novel might have something to do with its perceived quality seems limited at ... | https://aclanthology.org/2022.nlp4dh-1.5 | ## introduction the question of what defines the perception of quality in literature is probably as old as narrative itself, but the ability to process and analyze large quantities of literary texts, and to perform complex statistical experiments on them @xcite , has recently made new ways of studying this question pos... | 18183 |
364 | 2020 | Diverse dialogue generation with context dependent dynamic loss function | Dialogue systems using deep learning have achieved generation of fluent response sentences to user utterances. Nevertheless, they tend to produce responses that are not diverse and which are less context-dependent. To address these shortcomings, we propose a new loss function, an Inverse N-gram loss (INF), which incorp... | https://aclanthology.org/2020.coling-main.364 | ## introduction recently, many reports have described studies using deep learning for dialogue systems that have achieved good performance. they can generate fluent sentences based on a user's utterances @xcite @xcite . nevertheless, such neural dialogue systems tend to generate phrases such as "yes" and "i do not know... | 3241 |
345 | 2025 | Learning Task Representations from In-Context Learning | Large language models (LLMs) have demonstrated remarkable proficiency in in-context learning (ICL), where models adapt to new tasks through example-based prompts without requiring parameter updates. However, understanding how tasks are internally encoded and generalized remains a challenge. To address some of the empir... | https://aclanthology.org/2025.findings-acl.345 | ## introduction large language models (llms) based on the transformer architecture @xcite have seen dramatic improvements in recent years. a notable feature of these models, such as @xcite , is their capability for in-context learning (icl). this process involves the model receiving a prompt that includes demonstration... | 35637 |
43 | 2016 | Open Dutch WordNet | We describe Open Dutch WordNet, which has been derived from the Cornetto database, the Princeton WordNet and open source resources. We exploited existing equivalence relations between Cornetto synsets and WordNet synsets in order to move the open source content from Cornetto into WordNet synsets. Currently, Open Dutch ... | https://aclanthology.org/2016.gwc-1.43 | ## introduction the main goal of this project is to convert the dutch lexical semantic database cornetto version 2.0 @xcite into an open source version. cornetto is currently not distributed as open source, because a large portion of the database originates from the commercial publisher van dale. 2 the main task of thi... | 1029 |
14 | 2023 | Which is better? Exploring Prompting Strategy For LLM-based Metrics | This paper describes the DSBA submissions to the Prompting Large Language Models as Explainable Metrics shared task, where systems were submitted to two tracks: small and large summarization tracks. With advanced Large Language Models (LLMs) such as GPT-4, evaluating the quality of Natural Language Generation (NLG) has... | https://aclanthology.org/2023.eval4nlp-1.14 | ## introduction as large language models (llms) like gpt-4 continue to advance rapidly, the natural language generation (nlg) capability is approaching a level of expertise comparable to that of a human. as a result, the precise evaluation of nlg has become increasingly paramount. however, traditional similarity-based ... | 22971 |
152 | 2021 | An Iterative Multi-Knowledge Transfer Network for Aspect-Based Sentiment Analysis | Aspect-based sentiment analysis (ABSA) mainly involves three subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification, which are typically handled in a separate or joint manner. However, previous approaches do not well exploit the interactive relations among three subtasks an... | https://aclanthology.org/2021.findings-emnlp.152 | ## introduction aspect-based sentiment analysis (absa) has drawn increasing attention in the community, which includes three subtasks: aspect term extraction (ae), opinion term extraction (oe) and aspect-level sentiment classification (sc). the first two subtasks aim to extract the aspect term and the opinion term appe... | 9701 |
10 | 2024 | Ensemble-based Multilingual Euphemism Detection: a Behavior-Guided Approach | This paper describes the system submitted by our team to the Multilingual Euphemism Detection Shared Task for the Fourth Workshop on Figurative Language Processing (FigLang 2024). We propose a novel model for multilingual euphemism detection, combining contextual and behavior-related features. The system classifies tex... | https://aclanthology.org/2024.figlang-1.10 | ## introduction euphemism, as defined by the oxford english dictionary, is the substitution of mild or indirect expressions for harsh or blunt ones when referring to unpleasant topics. the american heritage dictionary of the english language similarly defines euphemism as replacing harsh or offensive terms with milder,... | 30887 |
1383 | 2025 | InterFeedback: Unveiling Interactive Intelligence of Large Multimodal Models with Human Feedback | Existing benchmarks do not test Large Multimodal Models (LMMs) on their interactive intelligence with human users which is vital for developing general-purpose AI assistants. We design InterFeedback, an interactive framework, which can be applied to any LMM and dataset to assess this ability autonomously. On top of thi... | https://aclanthology.org/2025.findings-emnlp.1383 | ## introduction in this paper, we are curious about the question "can large multimodal models evolve through interactive human feedback?" it is central to developing general-purpose ai assistants with large multimodal models (lmms). while these models show exceptional performance on tackling multimodal tasks directly, ... | 38058 |
58 | 2024 | GraSAME: Injecting Token-Level Structural Information to Pretrained Language Models via Graph-guided Self-Attention Mechanism | Pretrained Language Models (PLMs) benefit from external knowledge stored in graph structures for various downstream tasks. However, bridging the modality gap between graph structures and text remains a significant challenge. Traditional methods like linearizing graphs for PLMs lose vital graph connectivity, whereas Gra... | https://aclanthology.org/2024.findings-naacl.58 | ## introduction the paradigm of pre-training and fine-tuning has increasingly become the standard approach for leveraging the inherent knowledge of language models in a wide range of natural language processing (nlp) tasks @xcite . pretrained language models (plms) like transformer @xcite , t5 @xcite , and gpt @xcite ,... | 31104 |
41 | 2022 | Overview of the Shared Task on Machine Translation in Dravidian Languages | This paper presents an outline of the shared task on translation of under-resourced Dravidian languages at DravidianLangTech-2022 workshop to be held jointly with ACL 2022. A description of the datasets used, approach taken for analysis of submissions and the results have been illustrated in this paper. Five sub-tasks ... | https://aclanthology.org/2022.dravidianlangtech-1.41 | ## introduction the results of the shared task on machine translation (mt) of dravidian languages held as a part of dravidianlangtech-2022 workshop have been presented in this paper. five translation sub-tasks featured in this shared task, namely: kannada to tamil, kannada to telugu, kannada to sanskrit, kannada to mal... | 14802 |
304 | 2024 | Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once? | Large language models (LLMs) are typically prompted to follow a single instruction per inference call. In this work, we analyze whether LLMs also hold the capability to handle multiple instructions simultaneously, denoted as Multi-Task Inference. For this purpose, we introduce the MTI Bench (Multi-Task Inference Benchm... | https://aclanthology.org/2024.acl-long.304 | ## introduction large language models (llms) capable of following instructions have demonstrated impressive performance across a wide range of tasks @xcite @xcite @xcite . however, since llms are trained to follow a single instruction per inference call, it is questionable whether they also hold the ability to follow c... | 27471 |
10 | 2022 | Detecting Urgency in Multilingual Medical SMS in Kenya | Access to mobile phones in many low- and middle-income countries has increased exponentially over the last 20 years, providing an opportunity to connect patients with healthcare interventions through mobile phones (known as mobile health). A barrier to large-scale implementation of interactive mobile health interventio... | https://aclanthology.org/2022.aacl-srw.10 | ## introduction in many low-and middle-income countries, access to healthcare is limited and unaffordable. interactive short message service (sms) communication with healthcare workers has shown great potential to promote access to care in such contexts by providing remote information and support @xcite . one such syst... | 12784 |
14 | 2025 | Word Clouds as Common Voices: LLM-Assisted Visualization of Participant-Weighted Themes in Qualitative Interviews | Word clouds are a common way to summarize qualitative interviews, yet traditional frequency-based methods often fail in conversational contexts: they surface filler words, ignore paraphrase, and fragment semantically related ideas. This limits their usefulness in early-stage analysis, when researchers need fast, interp... | https://aclanthology.org/2025.hcinlp-1.14 | ## introduction qualitative interviews are a cornerstone of hci practice: they capture lived experience, tacit knowledge, and situated rationales that are difficult to elicit through logs or lab tasks alone @xcite . but precisely because conversational data are rich, early-stage sensemaking can be slow and brittle. tim... | 38311 |
103 | 2023 | Low-Resource Compositional Semantic Parsing with Concept Pretraining | Semantic parsing plays a key role in digital voice assistants such as Alexa, Siri, and Google Assistant by mapping natural language to structured meaning representations. When we want to improve the capabilities of a voice assistant by adding a new domain, the underlying semantic parsing model needs to be retrained usi... | https://aclanthology.org/2023.eacl-main.103 | ## introduction voice assistants such as alexa, siri, and google assistant often rely on semantic parsing to understand requests made by their users. the underlying semantic parsing model converts natural language user utterances into logical forms consisting of actions requested by the user (play music, check weather)... | 21485 |
771 | 2024 | MOSEL: 950,000 Hours of Speech Data for Open-Source Speech Foundation Model Training on EU Languages | The rise of foundation models (FMs), coupled with regulatory efforts addressing their risks and impacts, has sparked significant interest in open-source models. However, existing speech FMs (SFMs) fall short of full compliance with the open-source principles, even if claimed otherwise, as no existing SFM has model weig... | https://aclanthology.org/2024.emnlp-main.771 | ## introduction the introduction of foundation models trained on large datasets is revolutionizing the landscape of many nlp fields @xcite , particularly with the release of large language models (llms) that demonstrated impressive abilities on various tasks @xcite . the interest attracted by such models has come toget... | 30180 |
541 | 2021 | Verb Knowledge Injection for Multilingual Event Processing | Linguistic probing of pretrained Transformer-based language models (LMs) revealed that they encode a range of syntactic and semantic properties of a language. However, they are still prone to fall back on superficial cues and simple heuristics to solve downstream tasks, rather than leverage deeper linguistic informatio... | https://aclanthology.org/2021.acl-long.541 | ## introduction large transformer-based encoders, pretrained with self-supervised language modeling (lm) objectives, form the backbone of state-of-the-art models for most nlp tasks @xcite @xcite . recent probes showed that they implicitly extract a non-negligible amount of linguistic knowledge from text corpora in an u... | 7359 |
59 | 2021 | OntoGUM: Evaluating Contextualized SOTA Coreference Resolution on 12 More Genres | SOTA coreference resolution produces increasingly impressive scores on the OntoNotes benchmark. However lack of comparable data following the same scheme for more genres makes it difficult to evaluate generalizability to open domain data. This paper provides a dataset and comprehensive evaluation showing that the lates... | https://aclanthology.org/2021.acl-short.59 | ## introduction coreference resolution is the task of grouping referring expressions that point to the same entity, such as noun phrases and the pronouns that refer to them. the task entails detecting correct mention or 'markable' boundaries and creating a link with previous mentions, or antecedents. a coreference chai... | 7448 |
69 | 2023 | IUST_NLP at SemEval-2023 Task 10: Explainable Detecting Sexism with Transformers and Task-adaptive Pretraining | This paper describes our system on SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS). This work aims to design an automatic system for detecting and classifying sexist content in online spaces. We propose a set of transformer-based pre-trained models with task-adaptive pretraining and ensemble learnin... | https://aclanthology.org/2023.semeval-1.69 | ## introduction discriminatory views against women in online environments can be extremely harmful so in recent years it has become a serious problem in social networks. identifying online sexism involves many challenges because sexist discrimination and misogyny have different types and appear in different forms. ther... | 26348 |
495 | 2024 | PSST: A Benchmark for Evaluation-driven Text Public-Speaking Style Transfer | Language style is necessary for AI systems to accurately understand and generate diverse human language. However, previous text style transfer primarily focused on sentence-level data-driven approaches, limiting exploration of potential problems in large language models (LLMs) and the ability to meet complex applicatio... | https://aclanthology.org/2024.findings-emnlp.495 | ## introduction text style transfer (tst) is crucial in natural language processing (nlp), focusing on modifying text style while retaining the original content's information (hu et al., 2022; jin et al., 2022). by modeling complex human styles, including personality, habits, and mindset (jin et al., 2022; geroda et al... | 32767 |
10 | 2023 | Unsupervised Semantic Frame Induction Revisited | This paper addresses the task of semantic frame induction based on pre-trained language models (LMs). The current state of the art is to directly use contextualized embeddings from models such as BERT and to cluster them in a two step clustering process (first lemma-internal, then over all verb tokens in the data set).... | https://aclanthology.org/2023.iwcs-1.10 | ## introduction in natural language processing, semantic frame induction refers to the task of clustering target word instances, specifically verbs, in a corpus according to their semantic frames in a given context. for example, in the sentences: we would like to cluster the verbs in (a) and (b) in one group and (c) in... | 25461 |
173 | 2020 | On Faithfulness and Factuality in Abstractive Summarization | It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation. In this paper we have analyzed limitations of these models for abstractive document summariza... | https://aclanthology.org/2020.acl-main.173 | ## introduction current state of the art conditional text generation models accomplish a high level of fluency and coherence, mostly thanks to advances in sequenceto-sequence architectures with attention and copy @xcite @xcite , fully attention-based transformer architectures @xcite and more recently pretrained languag... | 1918 |
42 | 2022 | Self-Repetition in Abstractive Neural Summarizers | We provide a quantitative and qualitative analysis of self-repetition in the output of neural summarizers. We measure self-repetition as the number of n-grams of length four or longer that appear in multiple outputs of the same system. We analyze the behavior of three popular architectures (BART, T5, and Pegasus), fine... | https://aclanthology.org/2022.aacl-short.42 | ## introduction sequence-to-sequence neural models for conditional text generation such as bart @xcite , t5 @xcite , and pegasus @xcite achieve strong empirical results on abstractive summarization tasks. the summaries that such systems output often appear to be novel, in that they repeat text verbatim from inputs spar... | 12757 |
419 | 2021 | Journalistic Guidelines Aware News Image Captioning | The task of news article image captioning aims to generate descriptive and informative captions for news article images. Unlike conventional image captions that simply describe the content of the image in general terms, news image captions follow journalistic guidelines and rely heavily on named entities to describe th... | https://aclanthology.org/2021.emnlp-main.419 | ## introduction research on generating textual descriptions of images has made great progress in recent years with the introduction of encoder-decoder architectures @xcite @xcite @xcite @xcite . those models are generally trained and evaluated on image captioning datasets like coco @xcite and flickr @xcite that only co... | 9038 |
16 | 2025 | SECQUE: A Benchmark for Evaluating Real-World Financial Analysis Capabilities | We introduce SECQUE, a comprehensive benchmark for evaluating large language models (LLMs) in financial analysis tasks. SECQUE comprises 565 expert-written questions covering SEC filings analysis across four key categories: comparison analysis, ratio calculation, risk assessment, and financial insight generation. To as... | https://aclanthology.org/2025.gem-1.16 | ## introduction recent advances in large language models (llms) have demonstrated their potential across diverse domains, including law @xcite , medicine @xcite , and finance @xcite . however, as these models are increasingly adopted for specialized applications, the need for domainspecific evaluation has become more p... | 38180 |
263 | 2020 | It’s Morphin’ Time! Combating Linguistic Discrimination with Inflectional Perturbations | Training on only perfect Standard English corpora predisposes pre-trained neural networks to discriminate against minorities from non-standard linguistic backgrounds (e.g., African American Vernacular English, Colloquial Singapore English, etc.). We perturb the inflectional morphology of words to craft plausible and se... | https://aclanthology.org/2020.acl-main.263 | ## introduction in recent years, natural language processing (nlp) systems have gotten increasingly better at learning complex patterns in language by pretraining large language models like bert, gpt-2, and ctrl @xcite @xcite , and fine-tuning them on taskspecific data to achieve state of the art results has become a n... | 2008 |
393 | 2,023 | Automatic Annotation of Direct Speech in Written French Narratives | The automatic annotation of direct speech (AADS) in written text has been often used in computational narrative understanding. Methods based on either rules or deep neural networks have been explored, in particular for English or German languages. Yet, for French, our target language, not many works exist. Our goal is ... | https://aclanthology.org/2023.acl-long.393 | ## introduction prose fiction makes whole worlds emerge. authors make use of different strategies to create narratives and convey the storyworld. novels intertwine narrators' words to build the atmosphere and tell the story, with words stemming from characters inhabiting the fictive world that disclose their personalit... | 19,712
894 | 2,025 | TwT: Thinking without Tokens by Habitual Reasoning Distillation with Multi-Teachers’ Guidance | Large Language Models (LLMs) have made significant strides in problem-solving by incorporating reasoning processes. However, this enhanced reasoning capability results in an increased number of output tokens during inference, leading to higher computational costs. To address this challenge, we propose TwT (Thinking wit... | https://aclanthology.org/2025.findings-emnlp.894 | ## introduction large language models (llms) have demonstrated remarkable improvements in problem-solving by incorporating reasoning process @xcite @xcite . it enhances the reasoning capability of llms by breaking down complex tasks into intermediate steps, leading to better performance. however, reasoning capability co... | 37,570
59 | 2,022 | ProoFVer: Natural Logic Theorem Proving for Fact Verification | Fact verification systems typically rely on neural network classifiers for veracity prediction, which lack explainability. This paper proposes ProoFVer, which uses a seq2seq model to generate natural logic-based inferences as proofs. These proofs consist of lexical mutations between spans in the claim and the evidence ... | https://aclanthology.org/2022.tacl-1.59 | ## introduction fact verification systems typically comprise an evidence retrieval model followed by a textual entailment classifier @xcite . recent high-performing fact verification systems @xcite use neural models for textual entailment whose reasoning is opaque to humans despite advances in interpretability @xcite . ... | 18,874
25 | 2,025 | Improving LLMs’ Learning of Coreference Resolution | Coreference Resolution (CR) is crucial for many NLP tasks, but existing LLMs struggle with hallucination and under-performance. In this paper, we investigate the limitations of existing LLM-based approaches to CR—specifically the Question-Answering (QA) Template and Document Template methods—and propose two novel techn... | https://aclanthology.org/2025.sigdial-1.25 | ## introduction coreference resolution involves detecting and clustering different mentions that refer to the same discourse world entity. as a task that requires linguistic and extra-linguistic understanding, it plays a crucial role for many downstream natural language processing tasks, such as information extraction,... | 40,780
13 | 2,021 | Results of the Second SIGMORPHON Shared Task on Multilingual Grapheme-to-Phoneme Conversion | Grapheme-to-phoneme conversion is an important component in many speech technologies, but until recently there were no multilingual benchmarks for this task. The second iteration of the SIGMORPHON shared task on multilingual grapheme-to-phoneme conversion features many improvements from the previous year’s task (Gorman... | https://aclanthology.org/2021.sigmorphon-1.13 | ## introduction many speech technologies demand mappings between written words and their pronunciations. in open-vocabulary systems-as well as certain resource-constrained embedded systems-it is insufficient to simply list all possible pronunciations; these mappings must generalize to rare or unseen words as well. ther... | 11,976 |
818 | 2,025 | Factuality Beyond Coherence: Evaluating LLM Watermarking Methods for Medical Texts | As large language models (LLMs) are adapted to sensitive domains such as medicine, their fluency raises safety risks, particularly regarding provenance and accountability. Watermarking embeds detectable patterns to mitigate these risks, yet its reliability in medical contexts remains untested. Existing benchmarks focus... | https://aclanthology.org/2025.findings-emnlp.818 | ## introduction llms have advanced human-like text generation capability, raising concerns about potentially harmful or biased information in various use case, including in the medical domain @xcite @xcite @xcite . watermarking techniques serve as a safeguard by embedding subtle statistical patterns into generated cont... | 37,494 |
105 | 2,022 | Unified Speech-Text Pre-training for Speech Translation and Recognition | In this work, we describe a method to jointly pre-train speech and text in an encoder-decoder modeling framework for speech translation and recognition. The proposed method utilizes multi-task learning to integrate four self-supervised and supervised subtasks for cross modality learning. A self-supervised speech subtas... | https://aclanthology.org/2022.acl-long.105 | ## introduction pre-training can learn universal feature representations from a large training corpus and is beneficial for downstream tasks with limited amounts of training data @xcite @xcite . with the advancement of computational power and self-supervised pre-training approaches, large volumes of unlabeled data may ... | 12,908 |
424 | 2,020 | ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations | In order to simplify a sentence, human editors perform multiple rewriting transformations: they split it into several shorter sentences, paraphrase words (i.e. replacing complex words or phrases by simpler synonyms), reorder components, and/or delete information deemed unnecessary. Despite these varied range of possibl... | https://aclanthology.org/2020.acl-main.424 | ## introduction sentence simplification (ss) consists in modifying the content and structure of a sentence to make it easier to understand, while retaining its main idea and most of its original meaning @xcite . simplified texts can benefit non-native speakers @xcite , people suffering from aphasia @xcite , dyslexia @x... | 2,168
75 | 2,016 | Stress, charge cognitive et signal de parole : étude exploratoire auprès de pilotes de chasse. (Stress, cognitive load and speech signal : an exploratory study among fighter pilots) | This article examines the effects of cognitive load on the fundamental frequency of F-16 pilots placed in a night-flight scenario. Cognitive load was estimated using parameters related to the task (external assessment), the individual (anxiety, self-assessment of perceived stress), and the situation (simula... | https://aclanthology.org/2016.jeptalnrecital-jep.75 | ## introduction the study presented here was carried out within the biovoc project 1, whose objective is to study the effects of stress, fatigue, and cognitive overload on the subject operating a complex system. variations in the subject's state are induced via the manipulation of variables 1... | 1,118
64 | 2,020 | CLPLM: Character Level Pretrained Language Model for Extracting Support Phrases for Sentiment Labels | In this paper, we have designed a character-level pre-trained language model for extracting support phrases from tweets based on the sentiment label. We also propose a character-level ensemble model designed by properly blending Pre-trained Contextual Embeddings (PCE) models- RoBERTa, BERT, and ALBERT along with Neural... | https://aclanthology.org/2020.icon-main.64 | ## introduction sentiment analysis has been a trendy topic for the last some decades. whether it's a graphical image or textual data, all types of an entity consists of something that conveys the sentiment. with the recent development in machine-learning methods, new innovative and powerful models have developed in the f... | 5,129
55 | 2,023 | StFX NLP at SemEval-2023 Task 1: Multimodal Encoding-based Methods for Visual Word Sense Disambiguation | SemEval-2023’s Task 1, Visual Word Sense Disambiguation, is a task about text semantics and visual semantics: selecting an image, from a list of candidates, that best exhibits a given target word in a small context. We tried several methods, including the image captioning method and CLIP methods, and submitted our predicti... | https://aclanthology.org/2023.semeval-1.55 | ## introduction semeval-2023's task 1: visual word sense disambiguation (v-wsd) involves selecting an image from a list of candidates that best exhibits a given target word in a small context. in this task @xcite , each sample will contain one target word, a limited context, and ten candidate images. the ten candidate... | 26,333
466 | 2,024 | EmoKnob: Enhance Voice Cloning with Fine-Grained Emotion Control | While recent advances in Text-to-Speech (TTS) technology produce natural and expressive speech, they lack the option for users to select emotion and control intensity. We propose EmoKnob, a framework that allows fine-grained emotion control in speech synthesis with few-shot demonstrative samples of arbitrary emotion. O... | https://aclanthology.org/2024.emnlp-main.466 | ## introduction the complexity of human communication extends far beyond mere verbal exchange. vocal inflections and emotional undertones play pivotal roles in conveying meaning. while text alone can be ambiguous in meaning @xcite , different emotions in voices can articulate different messages in the same piece of tex... | 29,884
832 | 2,024 | TextGenSHAP: Scalable Post-Hoc Explanations in Text Generation with Long Documents | Large language models (LLMs) have attracted great interest in many real-world applications; however, their “black-box” nature necessitates scalable and faithful explanations. Shapley values have matured as an explainability method for deep learning, but extending them to LLMs is difficult due to long input contexts and... | https://aclanthology.org/2024.findings-acl.832 | ## introduction large language models (llms) continue to rapidly excel at different text-generation tasks alongside the continued growth of resources dedicated to training text-based models @xcite @xcite . llms' impressive capabilities have led to their widespread adoption throughout academic and commercial applicatio... | 32,147
313 | 2,023 | Zero-shot Approach to Overcome Perturbation Sensitivity of Prompts | Recent studies have demonstrated that natural-language prompts can help to leverage the knowledge learned by pre-trained language models for the binary sentence-level sentiment classification task. Specifically, these methods utilize few-shot learning settings to fine-tune the sentiment classification model using manua... | https://aclanthology.org/2023.acl-long.313 | ## introduction the recent advance of large language models such as chatgpt (chatgpt, 2022), gpt-3 @xcite , and t5 @xcite has shown an astounding ability to understand natural languages. these pre-trained models can conduct various natural language processing (nlp) tasks under the zero/few-shot settings using natural l... | 19,632 |
27 | 2,023 | Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values | Many NLP classification tasks, such as sexism/racism detection or toxicity detection, are based on human values. Yet, human values can vary under diverse cultural conditions. Therefore, we introduce a framework for value-aligned classification that performs prediction based on explicitly written human values in the com... | https://aclanthology.org/2023.trustnlp-1.27 | ## introduction the demand for responsible nlp technology - to make it more robust, inclusive and fair, as well as more explainable and trustworthy - has increased since pre-trained large-scale language models (llms) have brought about significant progress in making nlp tasks more efficient and broad-ranging @xcite @xcite ... | 26,891
152 | 2,020 | TopicBERT for Energy Efficient Document Classification | Prior research notes that BERT’s computational cost grows quadratically with sequence length thus leading to longer training times, higher GPU memory constraints and carbon emissions. While recent work seeks to address these scalability issues at pre-training, these issues are also prominent in fine-tuning especially f... | https://aclanthology.org/2020.findings-emnlp.152 | ## introduction natural language processing (nlp) has recently witnessed a series of breakthroughs by the evolution of large-scale language models (lm) such as elmo @xcite , bert @xcite , roberta @xcite , @xcite etc. due to improved capabilities for language understanding @xcite . however this massive increase in model... | 4,724
77 | 2,023 | Transformer-Based Language Models for Bulgarian | This paper presents an approach for training lightweight and robust language models for Bulgarian that mitigate gender, political, racial, and other biases in the data. Our method involves scraping content from major Bulgarian online media providers using a specialized procedure for source filtering, topic selection, a... | https://aclanthology.org/2023.ranlp-1.77 | ## introduction natural language processing has witnessed significant advancements in recent years, driven by the development of large-scale pre-trained language models (lms) such as bert and gpt-2,3,4. however, such models suffer from biases in the data, which can lead to unfair or discriminatory outputs. bulgarian la... | 26,175
90 | 2,024 | Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca | Foundational large language models (LLMs) can be instruction-tuned to perform open-domain question answering, facilitating applications like chat assistants. While such efforts are often carried out in a single language, we empirically analyze cost-efficient strategies for multilingual scenarios. Our study employs the ... | https://aclanthology.org/2024.findings-eacl.90 | ## introduction language capacity has attracted much attention in pre-trained language models. some pioneering works focused on a single language @xcite , while later works aim to cover multiple languages @xcite . in the recent blossom of open-source llms, english-centric ones include gpt-2, llama, and pythia @xcite @x... | 30,982 |
7 | 2,020 | DiDi’s Machine Translation System for WMT 2020 | This paper describes the DiDi AI Labs’ submission to the WMT2020 news translation shared task. We participate in the translation direction of Chinese->English. In this direction, we use the Transformer as our baseline model and integrate several techniques for model enhancement, including data filtering, data selection... | https://aclanthology.org/2020.wmt-1.7 | ## introduction we participate in the wmt2020 news translation shared tasks in @xmath0. for this translation direction, we train several variants of transformer @xcite models on the provided parallel data enlarged with synthetic data from monolingual data. we experiment with several techniques proposed in the past tran... | 6,603
99 | 2,024 | EmoFake: An Initial Dataset for Emotion Fake Audio Detection | To enhance the effectiveness of fake audio detection techniques, researchers have developed multiple datasets such as those for the ASVspoof and ADD challenges. These datasets typically focus on capturing non-emotional characteristics in speech, such as the identity of the speaker and the authenticity of the content. H... | https://aclanthology.org/2024.ccl-1.99 | ## introduction in the past few years, voice conversion (vc) technology has been able to generate very natural converted audio. but it is still not equipped with human-like emotions adequately. emotion, as a vital component of human communication, plays a crucial role in manifesting itself on the semantic and pragmatic... | 28,664
20 | 2,025 | Sparks of Tabular Reasoning via Text2SQL Reinforcement Learning | This work reframes the Text-to-SQL task as a pathway for teaching large language models (LLMs) to reason over and manipulate tabular data—moving beyond the traditional focus on query generation. We propose a two-stage framework that leverages SQL supervision to develop transferable table reasoning capabilities. First, ... | https://aclanthology.org/2025.trl-1.20 | ## introduction recent advancements in llms have substantially improved performance on text-to-sql tasks, translating natural language into executable sql queries over relational databases @xcite . progress has been driven primarily by supervised fine-tuning (sft) on sql-focused datasets (e.g., * equal contribution † o... | 40,942
186 | 2,025 | SafeConf: A Confidence-Calibrated Safety Self-Evaluation Method for Large Language Models | Large language models (LLMs) have achieved groundbreaking progress in Natural Language Processing (NLP). Despite the numerous advantages of LLMs, they also pose significant safety risks. Self-evaluation mechanisms have gained increasing attention as a key safeguard to ensure safe and controllable content generation. Ho... | https://aclanthology.org/2025.findings-emnlp.186 | ## introduction large language models (llms) represent a significant milestone in the evolution of artificial general intelligence, demonstrating remarkable potential across natural language processing, robotics, and computer vision @xcite . however, their considerable capabilities are accompanied by significant safety... | 36,860
36 | 2,023 | When Does Translation Require Context? A Data-driven, Multilingual Exploration | Although proper handling of discourse significantly contributes to the quality of machine translation (MT), these improvements are not adequately measured in common translation quality metrics. Recent works in context-aware MT attempt to target a small set of discourse phenomena during evaluation, however not in a full... | https://aclanthology.org/2023.acl-long.36 | ## introduction in order to properly translate discourse phenomena including anaphoric pronouns, lexical cohesion, and discourse markers, a machine translation (mt) model must use information from previous utterances @xcite @xcite . however, while generating proper translations of these phenomena is important for compr... | 19,354 |
1236 | 2,024 | RE-RAG: Improving Open-Domain QA Performance and Interpretability with Relevance Estimator in Retrieval-Augmented Generation | The Retrieval Augmented Generation (RAG) framework utilizes a combination of parametric knowledge and external knowledge to demonstrate state-of-the-art performance on open-domain question answering tasks. However, the RAG framework suffers from performance degradation when the query is accompanied by irrelevant contex... | https://aclanthology.org/2024.emnlp-main.1236 | ## introduction in recent years, the retrieval augmented generation framework has shown promising progress in natural language generation, specifically on knowledge-intensive tasks. this approach has been studied in many forms, from traditional rag @xcite , which aggregates answers from multiple contexts using document ... | 30,633
422 | 2,025 | COMI-LINGUA: Expert Annotated Large-Scale Dataset for Multitask NLP in Hindi-English Code-Mixing | We introduce COMI-LINGUA, the largest manually annotated Hindi-English code-mixed dataset, comprising 125K+ high-quality instances across five core NLP tasks: Token-level Language Identification, Matrix Language Identification, Named Entity Recognition, Part-Of-Speech Tagging and Machine Translation. Each instance is a... | https://aclanthology.org/2025.findings-emnlp.422 | ## introduction code-mixing is the blending of multiple languages within a single utterance-a pervasive phenomenon in multilingual societies, especially on social media platforms @xcite . over half of the world's population is bilingual or multilingual and frequently uses mixed-language expressions in digital communica... | 37,099
30 | 2,022 | Incorporating Causal Analysis into Diversified and Logical Response Generation | Although the Conditional Variational Auto-Encoder (CVAE) model can generate more diversified responses than the traditional Seq2Seq model, the responses often have low relevance with the input words or are illogical with the question. A causal analysis is carried out to study the reasons behind, and a methodology of se... | https://aclanthology.org/2022.coling-1.30 | ## introduction with recent advances in deep learning and readily available large-scale dialogue data, generationbased methods have become one of the most prevailing methods for building dialogue systems. based on the seq2seq framework @xcite , generation-based models learn to map the input post to its corresponding re... | 13,984 |
108 | 2,022 | Pathway2Text: Dataset and Method for Biomedical Pathway Description Generation | Biomedical pathways have been extensively used to characterize the mechanism of complex diseases. One essential step in biomedical pathway analysis is to curate the description of a pathway based on its graph structure and node features. Neural text generation could be a plausible technique to circumvent the tedious ma... | https://aclanthology.org/2022.findings-naacl.108 | ## introduction many complex diseases, such as cancer and neurodegenerative disorders, are driven by reactions among a combination of genes and metabolites instead of one single gene @xcite . these reactions, which are formally referred to as pathways @xcite @xcite , are represented as a heterogeneous graph (figure 1 )... | 16,307
3 | 2,023 | Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy | Studying data memorization in neural language models helps us understand the risks (e.g., to privacy or copyright) associated with models regurgitating training data and aids in the development of countermeasures. Many prior works—and some recently deployed defenses—focus on “verbatim memorization”, defined as a model ... | https://aclanthology.org/2023.inlg-main.3 | ## introduction the ability of neural language models to memorize their training data has been studied extensively @xcite @xcite @xcite . when language models, especially ones used in production systems, are susceptible to data extraction attacks, it can lead to practical problems ranging from privacy risks to copyrigh... | 25,367 |
14 | 2,021 | Velocidapter: Task-oriented Dialogue Comprehension Modeling Pairing Synthetic Text Generation with Domain Adaptation | We introduce a synthetic dialogue generation framework, Velocidapter, which addresses the corpus availability problem for dialogue comprehension. Velocidapter augments datasets by simulating synthetic conversations for a task-oriented dialogue domain, requiring a small amount of bootstrapping work for each new domain. ... | https://aclanthology.org/2021.sigdial-1.14 | ## introduction humans perform dialogue interactions to accomplish common tasks: work email threads, nursepatient conversations, customer service conversations, etc. (cf. a task-oriented dialogue is a form of information exchange where the system obtains user preferences (i.e. slot values for attributes) by conversatio... | 11,933 |
1 | 2,022 | On Isotropy Calibration of Transformer Models | Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic - the embeddings are distributed in a narrow cone. Meanwhile, static word representations (e.g., Word2Vec or GloVe) have been shown to benefit from isotropic spaces. Therefore... | https://aclanthology.org/2022.insights-1.1 | ## introduction the impressive performance of transformer models @xcite across almost all areas of natural language processing (nlp) has sparked in-depth investigations of these models. a remarkable finding is that the contextual representations computed by transformers are strongly anisotropic (ethayarajh, 2019), i.e., ... | 17,247
368 | 2,022 | Tiny-NewsRec: Effective and Efficient PLM-based News Recommendation | News recommendation is a widely adopted technique to provide personalized news feeds for the user. Recently, pre-trained language models (PLMs) have demonstrated the great capability of natural language understanding and benefited news recommendation via improving news modeling. However, most existing works simply fine... | https://aclanthology.org/2022.emnlp-main.368 | ## introduction with the explosion of information, massive news is published on online news platforms such as microsoft news and google news @xcite , which can easily get the users overwhelmed when they try to find the information they are interested in @xcite . many personalized news recommendation methods have been p... | 15,268
630 | 2,023 | SAMRank: Unsupervised Keyphrase Extraction using Self-Attention Map in BERT and GPT-2 | We propose a novel unsupervised keyphrase extraction approach, called SAMRank, which uses only a self-attention map in a pre-trained language model (PLM) to determine the importance of phrases. Most recent approaches for unsupervised keyphrase extraction mainly utilize contextualized embeddings to capture semantic rele... | https://aclanthology.org/2023.emnlp-main.630 | ## introduction keyphrase extraction refers to the process of identifying the words or phrases that signify the primary themes of a document. it has a wide range of applications, including document summarization, information retrieval, and topic modeling. the methodologies used for keyphrase extraction are typically catego... | 22,405
344 | 2,020 | Event-Related Bias Removal for Real-time Disaster Events | Social media has become an important tool to share information about crisis events such as natural disasters and mass attacks. Detecting actionable posts that contain useful information requires rapid analysis of huge volumes of data in real-time. This poses a complex problem due to the large amount of posts that do no... | https://aclanthology.org/2020.findings-emnlp.344 | ## introduction effective management of crisis situations like natural disasters (e.g. earthquakes, floods) or attacks (e.g. bombings, shootings) is an extremely sensitive and complex phenomenon that requires efficient coordination of people from multiple disciplines along with proper allocation of time and resources @... | 4,915 |
393 | 2,024 | Elastic Weight Removal for Faithful and Abstractive Dialogue Generation | Generating factual responses is a crucial requirement for dialogue systems. To promote more factual responses, a common strategy is to ground their responses in relevant documents that inform response generation. However, common dialogue models still often hallucinate information that was not contained in these documents ... | https://aclanthology.org/2024.naacl-long.393 | ## introduction current-day large language models (llms) impressively generate coherent, grammatical, and seemingly meaningful text, but are prone to hallucinating incorrect information. while grounding them in relevant documents can alleviate this @xcite , models still tend to generate information that conflicts these... | 34,310
17 | 2,023 | Negative documents are positive: Improving event extraction performance using overlooked negative data | The scarcity of data poses a significant challenge in closed-domain event extraction, as is common in complex NLP tasks. This limitation primarily arises from the intricate nature of the annotation process. To address this issue, we present a multi-task model structure and training approach that leverages the additiona... | https://aclanthology.org/2023.case-1.17 | ## introduction closed-domain event extraction is a specialized task in natural language processing (nlp) that focuses on automatically identifying and extracting specific events or occurrences from text within a restricted domain, such as biomedical research, financial markets, political events, or sports @xcite . it ... | 21,041 |
355 | 2,022 | Reducing Disambiguation Biases in NMT by Leveraging Explicit Word Sense Information | Recent studies have shed some light on a common pitfall of Neural Machine Translation (NMT) models, stemming from their struggle to disambiguate polysemous words without lapsing into their most frequently occurring senses in the training corpus. In this paper, we first provide a novel approach for automatically creatin... | https://aclanthology.org/2022.naacl-main.355 | ## introduction translating a sentence requires the underlying meaning to be captured and then expressed in the target language. nonetheless, only little attention has been devoted to studying the actual capabilities of neural machine translation (nmt) approaches of modeling different senses of ambiguous words, with re... | 17,931 |
256 | 2,020 | Ferryman at SemEval-2020 Task 12: BERT-Based Model with Advanced Improvement Methods for Multilingual Offensive Language Identification | Indiscriminately posting offensive remarks on social media may promote the occurrence of negative events such as violence, crime, and hatred. This paper examines different approaches and models for solving offensive tweet classification, which is a part of the OffensEval 2020 competition. The dataset is Offensive Langu... | https://aclanthology.org/2020.semeval-1.256 | ## introduction with the continuous development of society, online social network (osn) and microblog sites have attracted internet users more than any other types of websites. twitter, facebook, and instagram offer a growing variety of services that attract users from different cultures, religions, and interests aroun... | 6,166
26 | 2,023 | Scalable Performance Analysis for Vision-Language Models | Joint vision-language models have shown great performance over a diverse set of tasks. However, little is known about their limitations, as the high dimensional space learned by these models makes it difficult to identify semantic errors. Recent work has addressed this problem by designing highly controlled probing tas... | https://aclanthology.org/2023.starsem-1.26 | ## introduction recent years have witnessed an explosion of vision-language models @xcite @xcite @xcite . these models have shown great performance in a variety of tasks, such as image/video classification and text-image/video retrieval @xcite , even without leveraging task-specific or in-domain training. in addition, t... | 26,730
23 | 2,022 | Unified NMT models for the Indian subcontinent, transcending script-barriers | Highly accurate machine translation systems are very important in societies and countries where multilinguality is very common, and where English often does not suffice. The Indian subcontinent (or South Asia) is such a region, with all the Indic languages currently being under-represented in the NLP ecosystem. It is e... | https://aclanthology.org/2022.deeplo-1.23 | ## introduction the indian subcontinent is a well-studied linguistic area @xcite , known as south asian sprachbund. the region is home to around a quarter of the world's population, with a total which is projected to reach more than 2 billion in a decade. despite this, the progress in natural language processing is sig... | 14,729
459 | 2,021 | Cross-modal Memory Networks for Radiology Report Generation | Medical imaging plays a significant role in clinical practice of medical diagnosis, where the text reports of the images are essential in understanding them and facilitating later treatments. By generating the reports automatically, it is beneficial to help lighten the burden of radiologists and significantly promote c... | https://aclanthology.org/2021.acl-long.459 | ## introduction interpreting radiology images (e.g., chest x-ray) and writing diagnostic reports are essential operations in clinical practice and normally requires considerable manual workload. therefore, radiology report generation, which aims to automatically generate a free-text description based on a radiograph, i... | 7,277 |
204 | 2,024 | GKT: A Novel Guidance-Based Knowledge Transfer Framework For Efficient Cloud-edge Collaboration LLM Deployment | The burgeoning size of Large Language Models (LLMs) has led to enhanced capabilities in generating responses, albeit at the expense of increased inference times and elevated resource demands. Existing methods of acceleration, predominantly hinged on knowledge distillation, generally necessitate fine-tuning of considera... | https://aclanthology.org/2024.findings-acl.204 | ## introduction the swift advancement of large language models (llms) has dramatically pushed the frontiers of ai technology. llms, with their vast number of parameters, are exceptionally adept at comprehending human intentions, offering high-quality reasoning, and responses @xcite @xcite . however, the immense size of... | 31,534
1020 | 2,025 | Improving Word Alignment Using Semi-Supervised Learning | Word alignment plays a crucial role in various natural language processing tasks, such as serving as cross-lingual signals for sentence embedding, reducing hallucination and omission in machine translation, and facilitating the construction of training data for simultaneous speech translation. Current state-of-the-art a... | https://aclanthology.org/2025.findings-acl.1020 | ## introduction word alignment aims to identify correspondences between source and target words in a translation sentence pair, as shown in figure 1 . although word alignment was initially proposed to enhance statistical machine translation @xcite , advancements in both word alignment and deep learning techniques have ... | 36,311
1024 | 2,023 | RexUIE: A Recursive Method with Explicit Schema Instructor for Universal Information Extraction | Universal Information Extraction (UIE) is an area of interest due to the challenges posed by varying targets, heterogeneous structures, and demand-specific schemas. Previous works have achieved success by unifying a few tasks, such as Named Entity Recognition (NER) and Relation Extraction (RE), while they fall short of... | https://aclanthology.org/2023.findings-emnlp.1024 | ## introduction as a fundamental task of natural language understanding, information extraction (ie) has been widely studied, such as named entity recognition (ner), relation extraction (re), event extraction (ee), aspect-based sentiment analysis (absa), etc. however, the task-specific model structures hinder the shari... | 25,113
7 | 2,008 | Identification automatique de marques d’opinion dans des textes | Nous présentons un modèle conceptuel pour la représentation d’opinions, en analysant les éléments qui les composent et quelques propriétés. Ce modèle conceptuel est implémenté et nous en décrivons le jeu d’annotations. Le processus automatique d’annotation de textes en espagnol est effectué par application de règles co... | https://aclanthology.org/2008.jeptalnrecital-recital.7 | ## introduction dans le cadre de la recherche d'information, l'identification d'opinions correspondantes à de différents énonciateurs présente des difficultés du point de vue du traitement automatique. savoir à qui attribuer les expressions présentes dans un texte, dire si ces opinions sont favorables ou pas envers un ... | 382 |
261 | 2,022 | DeltaNet: Conditional Medical Report Generation for COVID-19 Diagnosis | Fast screening and diagnosis are critical in COVID-19 patient treatment. In addition to the gold standard RT-PCR, radiological imaging like X-ray and CT also works as an important means in patient screening and follow-up. However, due to the excessive number of patients, writing reports becomes a heavy burden for radio... | https://aclanthology.org/2022.coling-1.261 | ## introduction since december 2019, the world has been suffering from a serious health crisis: the outbreak of covid-19 @xcite . fast screening and diagnosis is critical in covid-19 patient treatment. in clinical practice, the reverse transcription polymerase chain reaction (rt-pcr) is recognized as the golden standar... | 14,217
20 | 2,021 | Prédire l’aspect linguistique en anglais au moyen de transformers (Classifying Linguistic Aspect in English with Transformers) | L’aspect du verbe décrit la manière dont une action, un événement ou un état exprimé par un verbe est lié au temps ; la télicité est la propriété d’un syntagme verbal qui présente une action ou un événement comme étant mené à son terme ; la durée distingue les verbes qui expriment une action (dynamique) ou un état (sta... | https://aclanthology.org/2021.jeptalnrecital-taln.20 | ## introduction l'aspect est une propriété temporelle des actions, des événements et des états décrits par les verbes, au-delà du temps verbal. il englobe différentes propriétés, telles que la télicité et la durée. l'action du verbe est dite télique si elle a un point final. lorsque le verbe désigne un état ou lorsque ... | 10,369
14 | 2,024 | Unknown Script: Impact of Script on Cross-Lingual Transfer | Cross-lingual transfer has become an effective way of transferring knowledge between languages. In this paper, we explore an often overlooked aspect in this domain: the influence of the source language of a language model on language transfer performance. We consider a case where the target language and its script are ... | https://aclanthology.org/2024.naacl-srw.14 | ## introduction the dominant natural language processing (nlp) approach nowadays involves cross-lingual transfer using pre-trained monolingual and multilingual language models. in line with this trend, numerous monolingual models have been released for various languages @xcite @xcite . multilingual models, which are tr... | 34,511 |
373 | 2,020 | He said “who’s gonna take care of your children when you are at ACL?”: Reported Sexist Acts are Not Sexist | In a context of offensive content mediation on social media now regulated by European laws, it is important not only to be able to automatically detect sexist content but also to identify if a message with a sexist content is really sexist or is a story of sexism experienced by a woman. We propose: (1) a new characteri... | https://aclanthology.org/2020.acl-main.373 | ## introduction sexism is prejudice or discrimination based on a person's gender. it is based on the belief that one sex or gender is superior to another. it can take several forms from sexist remarks, gestures, behaviours, practices, insults to rape or murder. sexist hate speech is a message of inferiority usually dir... | 2,119
602 | 2,023 | Towards Conceptualization of “Fair Explanation”: Disparate Impacts of anti-Asian Hate Speech Explanations on Content Moderators | Recent research at the intersection of AI explainability and fairness has focused on how explanations can improve human-plus-AI task performance as assessed by fairness measures. We propose to characterize what constitutes an explanation that is itself “fair” – an explanation that does not adversely impact specific pop... | https://aclanthology.org/2023.emnlp-main.602 | ## introduction most work at the intersection of the ai explainability and fairness focuses on how explanations can improve human-ai task performance regarding some fairness criteria. for example, on a recidivism risk assessment task, @xcite evaluate whether two global and two local explanation methods influence the pe... | 22,378
7 | 2,022 | A Dependency Treebank of Spoken Second Language English | In this paper, we introduce a dependency treebank of spoken second language (L2) English that is annotated with part of speech (Penn POS) tags and syntactic dependencies (Universal Dependencies). We then evaluate the degree to which the use of this treebank as training data affects POS and UD annotation accuracy for L1... | https://aclanthology.org/2022.bea-1.7 | ## introduction in the field of applied linguistics, natural language processing tools such as part of speech (pos) taggers and syntactic parsers have been and continue to be used to investigate characteristics of second language (l2) use at scale (e.g., @xcite @xcite . although taggers and parsers are increasingly acc... | 13,660
478 | 2,022 | Model and Data Transfer for Cross-Lingual Sequence Labelling in Zero-Resource Settings | Zero-resource cross-lingual transfer approaches aim to apply supervised models from a source language to unlabelled target languages. In this paper we perform an in-depth study of the two main techniques employed so far for cross-lingual zero-resource sequence labelling, based either on data or model transfer. Although pre... | https://aclanthology.org/2022.findings-emnlp.478 | ## introduction sequence labelling is the task of assigning a label to each token in a given input sequence. sequence labelling is a fundamental process in many downstream nlp tasks. currently, most successful approaches for this task apply supervised deep-neural networks @xcite @xcite . however, as it was the case for... | 16,930
353 | 2,023 | DeltaScore: Fine-Grained Story Evaluation with Perturbations | Numerous evaluation metrics have been developed for natural language generation tasks, but their effectiveness in evaluating stories is limited as they are not specifically tailored to assess intricate aspects of storytelling, such as fluency and interestingness. In this paper, we introduce DeltaScore, a novel methodol... | https://aclanthology.org/2023.findings-emnlp.353 | ## introduction the emergence of large pre-trained language models (plms) @xcite has empowered story generation models to generate plausible narratives @xcite @xcite . the most advanced models have achieved the ability to produce stories which are not easily distinguishable from human-authored ones @xcite @xcite . howe... | 24,441
28 | 2,025 | PresiUniv at FinCausal 2025 Shared Task: Applying Fine-tuned Language Models to Explain Financial Cause and Effect with Zero-shot Learning | Transformer-based multilingual question-answering models are used to detect causality in financial text data. This study employs BERT (CITATION) for English text and XLM-RoBERTa (CITATION) for Spanish data, which were fine-tuned on the SQuAD datasets (CITATION) (CITATION). These pre-trained models are used to extract a... | https://aclanthology.org/2025.finnlp-1.28 | ## introduction as the growing connectivity of global markets and the rising use of multiple languages in communication continue, the need for a model that can interpret text data has become increasingly important. question answering (qa) is a key component in extracting or identifying relevant data across domains. tra... | 38,108
9 | 2,020 | Multitask Learning of Negation and Speculation using Transformers | Detecting negation and speculation in language has been a task of considerable interest to the biomedical community, as it is a key component of Information Extraction systems from Biomedical documents. Prior work has individually addressed Negation Detection and Speculation Detection, and both have been addressed in t... | https://aclanthology.org/2020.louhi-1.9 | ## introduction detection of linguistic phenomena like negation and speculation are key components of biomedical information retrieval systems, as they significantly alter the meaning of a sentence. while detecting these are also useful in sentiment analysis systems, and systems used to determine the veracity of inform... | 5,532 |
600 | 2,025 | DS-MHP: Improving Chain-of-Thought through Dynamic Subgraph-Guided Multi-Hop Path | Large language models (LLMs) excel in natural language tasks, with Chain-of-Thought (CoT) prompting enhancing reasoning through step-by-step decomposition. However, CoT struggles in knowledge-intensive tasks with multiple entities and implicit multi-hop relations, failing to connect entities systematically in zero-shot... | https://aclanthology.org/2025.findings-emnlp.600 | ## introduction large language models (llms) @xcite @xcite @xcite have demonstrated remarkable capabilities across a wide range of natural language processing (nlp) tasks, such as question answering (robinson et al., 2022; li et al., 2024b; @xcite , machine translation @xcite @xcite , and inform... | 37,274
671 | 2,024 | Vanessa: Visual Connotation and Aesthetic Attributes Understanding Network for Multimodal Aspect-based Sentiment Analysis | Prevailing research concentrates on superficial features or descriptions of images, revealing a significant gap in the systematic exploration of their connotative and aesthetic attributes. Furthermore, the use of cross-modal relation detection modules to eliminate noise from comprehensive image representations leads to... | https://aclanthology.org/2024.findings-emnlp.671 | ## introduction multimodal aspect-based sentiment analysis (mabsa) marks a pivotal advancement in sentiment analysis by enhancing the machine's ability to interpret human emotions, thus attracting growing scholarly interest @xcite . mabsa aims to identify aspect-sentiment pairs within sentences given image-text pairs. ... | 32,949 |
2 | 2,021 | View Distillation with Unlabeled Data for Extracting Adverse Drug Effects from User-Generated Data | We present an algorithm based on multi-layer transformers for identifying Adverse Drug Reactions (ADR) in social media data. Our model relies on the properties of the problem and the characteristics of contextual word embeddings to extract two views from documents. Then a classifier is trained on each view to label a s... | https://aclanthology.org/2021.smm4h-1.2 | ## introduction social media has made substantial amount of data available for various applications in the financial, educational, and health domains. among these, the applications in healthcare have a particular importance. although previous studies have demonstrated that the self-reported online social data is subjec... | 12,006 |
29 | 2,023 | SAGEViz: SchemA GEneration and Visualization | Schema induction involves creating a graph representation depicting how events unfold in a scenario. We present SAGEViz, an intuitive and modular tool that utilizes human-AI collaboration to create and update complex schema graphs efficiently, where multiple annotators (humans and models) can work simultaneously on a s... | https://aclanthology.org/2023.emnlp-demo.29 | ## introduction event schemas are central to understanding and reasoning about events. they provide a way to organize and represent how complex events unfold @xcite . schema-based reasoning enables reliable and explainable prediction of next events, inference of missing events or entities @xcite @xcite @xcite @xcite @x... | 22,857
20 | 2,008 | Un sens logique pour les graphes sémantiques | Nous discutons du sens des graphes sémantiques, notamment de ceux utilisés en Théorie Sens-Texte. Nous leur donnons un sens précis, éventuellement sous-spécifié, grâce à une traduction simple vers une formule de Minimal Recursion Semantics qui couvre les cas de prédications multiples sur plusieurs entités, de prédicati... | https://aclanthology.org/2008.jeptalnrecital-court.20 | ## introduction les processus d'analyse (du texte vers le sens) et de génération (du sens vers le texte), bien qu'a priori « simplement » inverses l'un de l'autre, reposent généralement sur deux types différents de représentation du sens, respectivement les formules logiques et les graphes sémantiques. ces deux visions... | 373 |
26 | 2,024 | Reference-Based Metrics Are Biased Against Blind and Low-Vision Users’ Image Description Preferences | Image description generation models are sophisticated Vision-Language Models which promise to make visual content, such as images, non-visually accessible through linguistic descriptions. While these systems can benefit all, their primary motivation tends to lie in allowing blind and low-vision (BLV) users access to in... | https://aclanthology.org/2024.nlp4pi-1.26 | ## introduction as the internet becomes increasingly visual, longstanding accessibility issues blind and low-vision (blv) users face remain largely unresolved @xcite . visionlanguage models have enabled the automation of image-to-text description generation, which can be used to generate alt-text descriptions; this cou... | 34,720 |