Column summary (per-column dtype and value/length range):

| column | dtype | min | max |
|---|---|---|---|
| `id` | string (lengths) | 1 | 4 |
| `year` | int64 | 2.01k | 2.03k |
| `title` | string (lengths) | 12 | 519 |
| `abstract` | string (lengths) | 7 | 12.7k |
| `pdf_url` | string (lengths) | 36 | 61 |
| `content` | string (lengths) | 7 | 46.5k |
| `__index_level_0__` | int64 | 0 | 41.4k |
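Concretely, each row of this dump is one record with the seven fields above. A minimal sketch of how a row could be represented and checked in plain Python (field values copied from the first record below; the `validate` helper is illustrative, not part of any dataset library):

```python
# Minimal sketch: represent one row of the dump as a dict matching the
# column summary above. Values are copied from the first record.
ROW = {
    "id": "41",                      # string, lengths 1-4
    "year": 2021,                    # int64
    "title": ("Automatic Detection and Classification of Mental Illnesses "
              "from General Social Media Texts"),
    "abstract": "Mental health is getting more and more attention recently...",
    "pdf_url": "https://aclanthology.org/2021.ranlp-1.41",
    "content": "## introduction an analysis performed by @xcite estimates...",
    "__index_level_0__": 11477,      # int64 row index, 0-41.4k
}

def validate(row: dict) -> bool:
    """Check a row against the schema sketched above."""
    return (
        isinstance(row["id"], str) and 1 <= len(row["id"]) <= 4
        and isinstance(row["year"], int)
        and isinstance(row["title"], str)
        and row["pdf_url"].startswith("https://aclanthology.org/")
        and isinstance(row["__index_level_0__"], int)
    )

print(validate(ROW))  # True
```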
id: 41 · year: 2021 · index: 11,477
title: Automatic Detection and Classification of Mental Illnesses from General Social Media Texts
abstract: Mental health is getting more and more attention recently, depression being a very common illness nowadays, but also other disorders like anxiety, obsessive-compulsive disorders, feeding disorders, autism, or attention-deficit/hyperactivity disorders. The huge amount of data from social media and the recent advances of...
pdf_url: https://aclanthology.org/2021.ranlp-1.41
content: ## introduction an analysis performed by @xcite estimates that approximately 10% of the world's population is living with a mental illness. the global burden of disease @xcite states that depression is a very common illness and there are more than 264 million people affected by it. at its worst, the illness can lead to...
id: 534 · year: 2023 · index: 24,623
title: Beyond Candidates: Adaptive Dialogue Agent Utilizing Persona and Knowledge
abstract: To build ultimate dialogue agents, previous studies suggest models that ground both persona and knowledge. However, applying the dialogue system directly to the usual conversation is still limited because the system requires a complete sentence-formed persona and knowledge candidate sets from the given dataset. In cont...
pdf_url: https://aclanthology.org/2023.findings-emnlp.534
content: ## introduction in usual conversations, humans utilize the semantic concept in their minds in terms of the dialogue topic and the preference of the interlocutor. with the semantic level of concepts, humans communicate with each other by aggregating the concepts to convey knowledgeable and empathetic responses @xcite . it im...
id: 218 · year: 2022 · index: 16,675
title: You can’t pick your neighbors, or can you? When and How to Rely on Retrieval in the kNN-LM
abstract: Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs. One such approach, the kNN-LM, interpolates any existing LM’s predictions with the output of a k-nearest neighbo...
pdf_url: https://aclanthology.org/2022.findings-emnlp.218
content: ## introduction recently, a new class of language models (lms) that are augmented with retrieval capabilities have led to substantial improvements over standard neural lms @xcite @xcite @xcite . furthermore, lms with retrieval warrant investigation as they provide benefits for many tasks @xcite . these approaches gener...
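The interpolation this kNN-LM abstract describes, mixing a base LM's next-token distribution with one derived from retrieved nearest neighbors, can be sketched in a few lines (the mixing weight and toy distributions are invented for illustration, not taken from the paper):

```python
def interpolate(p_lm, p_knn, lam=0.25):
    """kNN-LM next-token distribution: lam * p_knn + (1 - lam) * p_lm."""
    assert abs(sum(p_lm) - 1.0) < 1e-9 and abs(sum(p_knn) - 1.0) < 1e-9
    return [lam * k + (1.0 - lam) * l for l, k in zip(p_lm, p_knn)]

# Toy 3-token vocabulary.
p_lm = [0.7, 0.2, 0.1]    # base LM prediction
p_knn = [0.1, 0.8, 0.1]   # distribution from retrieved nearest neighbors
mixed = interpolate(p_lm, p_knn)
print([round(p, 3) for p in mixed])  # [0.55, 0.35, 0.1]
```

The result is still a proper distribution, and tokens favored by retrieval gain mass in proportion to `lam`.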
id: 161 · year: 2020 · index: 3,882
title: HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training
abstract: We present HERO, a novel framework for large-scale video+language omni-representation learning. HERO encodes multimodal inputs in a hierarchical structure, where local context of a video frame is captured by a Cross-modal Transformer via multimodal fusion, and global video context is captured by a Temporal Transformer....
pdf_url: https://aclanthology.org/2020.emnlp-main.161
content: ## introduction inspired by bert @xcite , largescale multimodal pre-training has prevailed in the realm of vision-and-language research @xcite @xcite . there are many early players in the area, including vilbert @xcite , lxmert @xcite , uniter @xcite , vl-bert @xcite and unicoder-vl @xcite . however, most large-scale p...
id: 939 · year: 2023 · index: 22,714
title: Prompting with Pseudo-Code Instructions
abstract: Prompting with natural language instructions has recently emerged as a popular method of harnessing the capabilities of large language models (LLM). Given the inherent ambiguity present in natural language, it is intuitive to consider the possible advantages of prompting with less ambiguous prompt styles, like pseudo-c...
pdf_url: https://aclanthology.org/2023.emnlp-main.939
content: ## introduction prompting with natural language instructions has recently emerged as a popular method of harnessing the capabilities of large language models. in addition, models are often fine-tuned using instructions on a large collection of datasets. (listing 1: an example pseudo-code instruction for the...)
id: 17 · year: 2024 · index: 28,491
title: Can Rule-Based Insights Enhance LLMs for Radiology Report Classification? Introducing the RadPrompt Methodology.
abstract: Developing imaging models capable of detecting pathologies from chest X-rays can be cost and time-prohibitive for large datasets as it requires supervision to attain state-of-the-art performance. Instead, labels extracted from radiology reports may serve as distant supervision since these are routinely generated as par...
pdf_url: https://aclanthology.org/2024.bionlp-1.17
content: ## introduction supervised deep learning for medical imaging classification has accomplished significant milestones. in the chest x-ray (cxr) domain, such models have exhibited predictive capabilities on par with expert physicians @xcite and are being utilized in collaborative [...] annotating medical i...
id: 70 · year: 2023 · index: 24,158
title: Multi-Modal Knowledge Graph Transformer Framework for Multi-Modal Entity Alignment
abstract: Multi-Modal Entity Alignment (MMEA) is a critical task that aims to identify equivalent entity pairs across multi-modal knowledge graphs (MMKGs). However, this task faces challenges due to the presence of different types of information, including neighboring entities, multi-modal attributes, and entity types. Directly ...
pdf_url: https://aclanthology.org/2023.findings-emnlp.70
content: ## introduction multi-modal entity alignment (mmea) is a challenging task that aims to identify equivalent entity pairs across multiple knowledge graphs that feature different modalities of attributes, such as text and images. to accomplish this task, sophisticated models are required to effectively leverage informatio...
id: 21 · year: 2023 · index: 21,405
title: Triple-Hybrid Energy-based Model Makes Better Calibrated Natural Language Understanding Models
abstract: Though pre-trained language models achieve notable success in many applications, they are often criticized for over-confident predictions. Specifically, in-distribution (ID) miscalibration and out-of-distribution (OOD) detection are the main concerns. Recently, some works based on energy-based models (EBM) have shown ...
pdf_url: https://aclanthology.org/2023.eacl-main.21
content: ## introduction since many industrial applications involve safety-critical domains such as healthcare @xcite @xcite @xcite , anticipating credit card defaults @xcite and self-driving @xcite , it's essential for machine learning systems to provide not only accurate but also well-calibrated predictions @xcite , which can...
id: 39 · year: 2023 · index: 26,317
title: UMUTeam and SINAI at SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis using Multilingual Large Language Models and Data Augmentation
abstract: This work presents the participation of the UMUTeam and the SINAI research groups in the SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis. The goal of this task is to predict the intimacy of a set of tweets in 10 languages: English, Spanish, Italian, Portuguese, French, Chinese, Hindi, Arabic, Dutch and Korean...
pdf_url: https://aclanthology.org/2023.semeval-1.39
content: ## introduction in natural language processing (nlp), intimacy can be described as how people communicate their perception and willingness to share personal data and emotions to their audience @xcite . the semeval 2023 task 9, entitled multilingual tweet intimacy analysis (mtia) @xcite , consists of a regression task i...
id: 28 · year: 2024 · index: 28,143
title: SeaLLMs - Large Language Models for Southeast Asia
abstract: Despite the remarkable achievements of large language models (LLMs) in various tasks, there remains a linguistic bias that favors high-resource languages, such as English, often at the expense of low-resource and regional languages. To address this imbalance, we introduce SeaLLMs, an innovative series of language model...
pdf_url: https://aclanthology.org/2024.acl-demos.28
content: ## introduction the advent of large language models (llms) has radically transformed the field of natural language processing, demonstrating remarkable abilities in text generation, comprehension, and decision-making tasks @xcite @xcite @xcite @xcite . while the proficiencies of these models are extraordinary, the majo...
id: 7 · year: 2022 · index: 13,653
title: USST’s System for AutoSimTrans 2022
abstract: This paper describes our submitted text-to-text simultaneous translation (ST) system, which won the second place in the Chinese→English streaming translation task of AutoSimTrans 2022. Our baseline system is a BPE-based Transformer model trained with the PaddlePaddle framework. In our experiments, we employ data synthe...
pdf_url: https://aclanthology.org/2022.autosimtrans-1.7
content: ## introduction simultaneous translation @xcite consists in generating a translation before the source speaker finishes speaking. it is widely used in many real-time scenarios such as international conferences, business negotiations and legal proceedings. the challenge of simultaneous machine translation is to find a r...
id: 37 · year: 2024 · index: 31,370
title: Listen Again and Choose the Right Answer: A New Paradigm for Automatic Speech Recognition with Large Language Models
abstract: Recent advances in large language models (LLMs) have promoted generative error correction (GER) for automatic speech recognition (ASR), which aims to predict the ground-truth transcription from the decoded N-best hypotheses. Thanks to the strong language generation ability of LLMs and rich information in the N-best lis...
pdf_url: https://aclanthology.org/2024.findings-acl.37
content: ## introduction recent advances in large language models (llms) have attracted a surge of research interest thanks to their remarkable language generation and reasoning ability @xcite @xcite , which achieve a wide range of success on natural language processing (nlp) tasks @xcite @xcite . powered by llms, latest work @...
id: 46 · year: 2020 · index: 3,580
title: Understanding Linguistic Accommodation in Code-Switched Human-Machine Dialogues
abstract: Code-switching is a ubiquitous phenomenon in multilingual communities. Natural language technologies that wish to communicate like humans must therefore adaptively incorporate code-switching techniques when they are deployed in multilingual settings. To this end, we propose a Hindi-English human-machine dialogue system...
pdf_url: https://aclanthology.org/2020.conll-1.46
content: ## introduction when interlocutors share more than one language, they nearly inevitably engage in codeswitching (cs): shifting from one language to another @xcite @xcite . since most people in the world today are multilingual @xcite , cs is a ubiquitous phenomenon in multilingual communities. it goes beyond simple lexi...
id: 18 · year: 2023 · index: 26,007
title: Extracting Sign Language Articulation from Videos with MediaPipe
abstract: This paper concerns evaluating methods for extracting phonological information of Swedish Sign Language signs from video data with MediaPipe’s pose estimation. The methods involve estimating i) the articulation phase, ii) hand dominance (left vs. right), iii) the number of hands articulating (one- vs. two-handed signs)...
pdf_url: https://aclanthology.org/2023.nodalida-1.18
content: ## introduction sign languages - or signed languages - are languages produced with gestures articulated in space and perceived visually or tactilely. over 200 sign languages have been documented around the globe @xcite but they are minoritized and under-researched. one challenge for quantitative research on sign languag...
id: 7 · year: 2022 · index: 18,123
title: Part-of-Speech and Morphological Tagging of Algerian Judeo-Arabic
abstract: Most linguistic studies of Judeo-Arabic, the ensemble of dialects spoken and written by Jews in Arab lands, are qualitative in nature and rely on laborious manual annotation work, and are therefore limited in scale. In this work, we develop automatic methods for morpho-syntactic tagging of Algerian Judeo-Arabic texts p...
pdf_url: https://aclanthology.org/2022.nejlt-1.7
content: ## introduction application of natural language processing (nlp) to real-world problems has been the field's goal from its early days. as algorithms advance, the contribution of nlp to real problems has become more evident and more substantial. the present study originates from a real-world challenge faced by linguists...
id: 141 · year: 2024 · index: 27,308
title: Time is Encoded in the Weights of Finetuned Language Models
abstract: We present time vectors, a simple tool to customize language models to new time periods. Time vectors are created by finetuning a language model on data from a single time (e.g., a year or month), and then subtracting the weights of the original pretrained model. This vector specifies a direction in weight space that, ...
pdf_url: https://aclanthology.org/2024.acl-long.141
content: ## introduction temporal variation is a fundamental characteristic of language. as we show in §3, it manifests in language model development as temporal misalignment, where deviations in train and test data lead to large performance degradation across different time periods @xcite . this necessitates adaptation techniq...
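The time-vector arithmetic this abstract describes (finetune on one time period, subtract the pretrained weights, then interpolate between vectors) can be sketched with scalar stand-ins for weight tensors; all names and numbers here are illustrative, not from the paper:

```python
def time_vector(finetuned, pretrained):
    """tau_t = theta_finetuned(t) - theta_pretrained, per parameter."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_vector(pretrained, vec, alpha=1.0):
    """theta = theta_pretrained + alpha * tau."""
    return {k: pretrained[k] + alpha * vec[k] for k in pretrained}

# One-parameter "models" standing in for full state dicts.
pre = {"w": 1.0}
ft_2015 = {"w": 1.4}   # finetuned on 2015 data (made-up value)
ft_2020 = {"w": 0.6}   # finetuned on 2020 data (made-up value)
tau_2015 = time_vector(ft_2015, pre)
tau_2020 = time_vector(ft_2020, pre)
# Interpolating between two time vectors targets an intervening period.
mid = apply_vector(pre, {k: 0.5 * (tau_2015[k] + tau_2020[k]) for k in pre})
print(mid)
```

The same dict arithmetic extends directly to real state dicts of tensors.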
id: 710 · year: 2024 · index: 27,878
title: Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation
abstract: The latest generative large language models (LLMs) have found their application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models. However, more research is needed to assess how different prompts, seed data selection strategies, filtering me...
pdf_url: https://aclanthology.org/2024.acl-long.710
content: ## introduction the emergence of large language models (llms) such as gpt-4, llama, etc., has sparked interest in using them to augment textual datasets @xcite @xcite . in these scenarios, the number of samples is expanded by paraphrasing existing ones through llm prompting. the created paraphrases are then added to th...
id: 16 · year: 2021 · index: 10,065
title: Interesting cross-border news discovery using cross-lingual article linking and document similarity
abstract: Team Name: team-8. Embeddia Tool: Cross-Lingual Document Retrieval (Zosa et al.). Dataset: Estonian and Latvian news datasets. Contemporary news media face increasing amounts of available data that can be of use when prioritizing, selecting and discovering new news. In this work we propose a methodology for retrie...
pdf_url: https://aclanthology.org/2021.hackashop-1.16
content: ## introduction this paper presents our results of the participation in the hackathon, which was organised as part of the eacl 2021 hackashop on news media content analysis and automated report generation. we are addressing the embeddia hackathon challenge on identifying interesting news from neighbouring countries @xci...
id: 11 · year: 2022 · index: 17,222
title: Unmet Creativity Support Needs in Computationally Supported Creative Writing
abstract: Large language models (LLMs) enabled by the datasets and computing power of the last decade have recently gained popularity for their capacity to generate plausible natural language text from human-provided prompts. This ability makes them appealing to fiction writers as prospective co-creative agents, addressing the c...
pdf_url: https://aclanthology.org/2022.in2writing-1.11
content: ## introduction mixed-initiative co-creative @xcite creativity support tools @xcite for creative writing have recently seen a surge of interest in research communities, coinciding with the introduction of large language models (llms) such as gpt-3 @xcite that can provide coherent suggestions for the continuation of hum...
id: 114 · year: 2025 · index: 40,531
title: QUST_NLP at SemEval-2025 Task 7: A Three-Stage Retrieval Framework for Monolingual and Crosslingual Fact-Checked Claim Retrieval
abstract: This paper describes the participation of team QUST_NLP in the SemEval-2025 Task 7. We propose a three-stage retrieval framework specifically designed for fact-checked claim retrieval. Initially, we evaluate the performance of several retrieval models and select the one that yields the best results for candidate retrie...
pdf_url: https://aclanthology.org/2025.semeval-1.114
content: ## introduction semeval-2025 shared task 7 focuses on the retrieval of monolingual and crosslingual fact-checked claims, aiming to tackle the global challenge of misinformation spread @xcite . we engaged in two tracks of the semeval-2025 shared task 7: monolingual and crosslingual. the monolingual track demands methods ...
id: 15 · year: 2016 · index: 986
title: Investigating the Impact of Various Partial Diacritization Schemes on Arabic-English Statistical Machine Translation
abstract: Most diacritics in Arabic represent short vowels. In Arabic orthography, such diacritics are considered optional. The absence of these diacritics naturally leads to significant word ambiguity on top of the inherent ambiguity present in fully diacritized words. Word ambiguity is a significant impediment for machine transla...
pdf_url: https://aclanthology.org/2016.amta-researchers.15
content: ## introduction resolving natural language ambiguity is at the crux of the nlp enterprise. ambiguity refers to the problem of possibly having different interpretations for different segments (words, phrases, etc.) of a sentence. languages such as arabic, hebrew and persian are typically written in a manner that exacerb...
id: 468 · year: 2022 · index: 14,424
title: Towards Robust Neural Machine Translation with Iterative Scheduled Data-Switch Training
abstract: Most existing methods on robust neural machine translation (NMT) construct adversarial examples by injecting noise into authentic examples and indiscriminately exploit two types of examples. They require the model to translate both the authentic source sentence and its adversarial counterpart into the identical target ...
pdf_url: https://aclanthology.org/2022.coling-1.468
content: ## introduction in recent years, neural machine translation (nmt) has achieved great success @xcite @xcite . usually, the nmt models are trained on clean parallel corpus and thus achieve promising performance under clean inputs. however, small perturbations, such as replacing words in the input sentences, can mislead t...
id: 126 · year: 2024 · index: 29,554
title: An Inversion Attack Against Obfuscated Embedding Matrix in Language Model Inference
abstract: With the rapidly-growing deployment of large language model (LLM) inference services, privacy concerns have arisen regarding user input data. Recent studies are exploring transforming user inputs into obfuscated embedded vectors, so that the data will not be eavesdropped by service providers. However, in this paper...
pdf_url: https://aclanthology.org/2024.emnlp-main.126
content: ## introduction inference services of language models are now gaining popularity, with a considerable number of language models having been deployed on the cloud server. however, users might be concerned about the privacy of their data when requesting inference services, that is, that their data could be eavesdropped by malicious...
id: 5 · year: 2005 · index: 122
title: Language and Encoding Scheme Identification of Extremely Large Sets of Multilingual Text
abstract: In the paper we present an outline of our approach to identify languages and encoding schemes in extremely large sets of multi-lingual documents. The large sets we are analyzing in our Language Observatory project [1] are formed by dozens of millions of text documents. In the paper we present an approach which allows u...
pdf_url: https://aclanthology.org/2005.mtsummit-posters.5
content: ## introduction identification of written natural languages and character encoding schemes of text documents is not considered to be a difficult problem. it is true if a document is not written in many languages, is long enough, and the number of documents to be analyzed is not extremely large so that the identificatio...
id: 7 · year: 2021 · index: 10,189
title: ARGUABLY at ComMA@ICON: Detection of Multilingual Aggressive, Gender Biased, and Communally Charged Tweets Using Ensemble and Fine-Tuned IndicBERT
abstract: The proliferation of social networking has increased offensive language, aggression, and hate speech, which has drawn the focus of the NLP community to their detection. However, differences in people’s perception make it difficult to distinguish between acceptable content and aggressive/hateful content, thus making it harder to...
pdf_url: https://aclanthology.org/2021.icon-multigen.7
content: ## introduction social networking has burgeoned in the past few years. the number of platforms and users has increased by 77% from 2014 to 2021. social media, due to its easy accessibility and freedom of use, has transformed our communities and how we communicate. one of the widespread impacts can be seen ...
id: 330 · year: 2020 · index: 3,208
title: Target Word Masking for Location Metonymy Resolution
abstract: Existing metonymy resolution approaches rely on features extracted from external resources like dictionaries and hand-crafted lexical resources. In this paper, we propose an end-to-end word-level classification approach based only on BERT, without dependencies on taggers, parsers, curated dictionaries of place names, o...
pdf_url: https://aclanthology.org/2020.coling-main.330
content: ## introduction metonymy is a widespread linguistic phenomenon, in which a thing or concept is referred to by the name of something closely associated with it. it is an instance of figurative language that can be easily understood by humans through association, but is hard for machines to interpret. for example, in the...
id: 50 · year: 2021 · index: 9,598
title: Unseen Entity Handling in Complex Question Answering over Knowledge Base via Language Generation
abstract: Complex question answering over knowledge base remains a challenging task because it involves reasoning over multiple pieces of information, including intermediate entities/relations and other constraints. Previous methods simplify the SPARQL query of a question into such forms as a list or a graph, missing such con...
pdf_url: https://aclanthology.org/2021.findings-emnlp.50
content: ## introduction answering users' questions via correct relation paths over a knowledge base may facilitate machine-human interaction to understand how the machine gets the answer. the relation path of a question is defined as the sequence of relations from the topic entity mentioned in a question to its answer entity i...
id: 732 · year: 2024 · index: 30,145
title: BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models
abstract: Safety backdoor attacks in large language models (LLMs) enable harmful behaviors to be stealthily triggered while evading detection during normal interactions. The high dimensionality of the trigger search space and the diverse range of potential malicious behaviors in LLMs make this a critical open problem. This paper...
pdf_url: https://aclanthology.org/2024.emnlp-main.732
content: ## introduction the widespread deployment of instruction-tuned large language models (llms) @xcite @xcite has revolutionized various sectors, but a critical safety and security vulnerability has emerged: the deceptive impression of safety-alignment induced by backdoor attacks @xcite @xcite . as illustrated in figure 1 ...
id: 329 · year: 2020 · index: 2,074
title: GLUECoS: An Evaluation Benchmark for Code-Switched NLP
abstract: Code-switching is the use of more than one language in the same conversation or utterance. Recently, multilingual contextual embedding models, trained on multiple monolingual corpora, have shown promising results on cross-lingual and multilingual tasks. We present an evaluation benchmark, GLUECoS, for code-switched lan...
pdf_url: https://aclanthology.org/2020.acl-main.329
content: ## introduction code-switching, or code-mixing, is the use of more than one language in the same utterance or conversation and is prevalent in multilingual societies all over the world. it is a spoken phenomenon and is found most often in informal chat and social media on the internet. processing, understanding, and ge...
id: 23 · year: 2021 · index: 7,964
title: Towards Understanding the Role of Gender in Deploying Social Media-Based Mental Health Surveillance Models
abstract: Spurred by advances in machine learning and natural language processing, developing social media-based mental health surveillance models has received substantial recent attention. For these models to be maximally useful, it is necessary to understand how they perform on various subgroups, especially those defined in te...
pdf_url: https://aclanthology.org/2021.clpsych-1.23
content: ## introduction the united states centers for disease control and prevention estimates that 8% of american adults suffer from major depression at a given time @xcite . this represents a critical public health threat, as depression is associated with downstream physical health complications @xcite and an increased risk ...
id: 11 · year: 2022 · index: 18,591
title: The DialPort tools
abstract: The DialPort project ( http://dialport.org/ ), funded by the National Science Foundation (NSF), covers a group of tools and services that aim at fulfilling the needs of the dialog research community. Over the course of six years, several offerings have been created, including the DialPort Portal and DialCrowd. This pap...
pdf_url: https://aclanthology.org/2022.sigdial-1.11
content: ## introduction the dialport project has created tools and services that respond to needs voiced by many in the dialog research community during several workshops organized by the principal investigators (pis). its offerings are available at no cost to the community with the goal of helping researchers gather high qu...
id: 98 · year: 2023 · index: 21,481
title: Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?
abstract: Compositionality is a pivotal property of symbolic reasoning. However, how well recent neural models capture compositionality remains underexplored in the symbolic reasoning tasks. This study empirically addresses this question by systematically examining recently published pre-trained seq2seq models with a carefully c...
pdf_url: https://aclanthology.org/2023.eacl-main.98
content: ## introduction integrating symbolic reasoning capabilities into neural models has been a crucial goal of artificial intelligence @xcite . with this in mind, many researchers investigated how well modern neural models achieve symbolic reasoning (lake and baroni, 2018). however, recent studies have reported conflicting ...
id: 15 · year: 2024 · index: 33,772
title: Prompting Implicit Discourse Relation Annotation
abstract: Pre-trained large language models, such as ChatGPT, achieve outstanding performance in various reasoning tasks without supervised training and were found to have outperformed crowdsourcing workers. Nonetheless, ChatGPT’s performance in the task of implicit discourse relation classification, prompted by a standard multi...
pdf_url: https://aclanthology.org/2024.law-1.15
content: ## introduction pre-trained language models have demonstrated superior performance in various nlp tasks for years, and recently prompt-tuning instead of fine-tuning has become the dominant framework to make efficient use of large language models (llms). llms such as chatgpt have demonstrated human-level performance in ...
id: 278 · year: 2021 · index: 9,826
title: SentNoB: A Dataset for Analysing Sentiment on Noisy Bangla Texts
abstract: In this paper, we propose an annotated sentiment analysis dataset made of informally written Bangla texts. This dataset comprises public comments on news and videos collected from social media covering 13 different domains, including politics, education, and agriculture. These comments are labeled with one of the polar...
pdf_url: https://aclanthology.org/2021.findings-emnlp.278
content: ## introduction sentiment analysis is one of the classic problems in computational linguistics, and it has shown a massive impact on different real-life applications. the capability to quantify sentiment polarity of english texts has enabled the creation of solutions for a diverse set of problems like understanding the...
id: 754 · year: 2020 · index: 2,499
title: Balancing Training for Multilingual Neural Machine Translation
abstract: When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others. Standard practice is to up-sample less resourced languages to increase representation, and the degree of up-sampl...
pdf_url: https://aclanthology.org/2020.acl-main.754
content: ## introduction multilingual models are trained to process different languages in a single model, and have been applied to a wide variety of nlp tasks such as text classification @xcite , syntactic analysis @xcite , named-entity recognition @xcite , and machine translation (mt) @xcite . these models have two particular...
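The up-sampling this abstract starts from is commonly implemented as temperature-scaled sampling over corpus sizes; the sketch below is a generic illustration (the temperature and corpus sizes are invented; the paper's own contribution is a learned alternative to this heuristic):

```python
def temperature_sampling(sizes, T=5.0):
    """p_i proportional to (n_i / sum_n) ** (1/T); T=1 samples
    proportionally to data size, larger T flattens toward uniform."""
    total = sum(sizes.values())
    scaled = {lang: (n / total) ** (1.0 / T) for lang, n in sizes.items()}
    z = sum(scaled.values())
    return {lang: s / z for lang, s in scaled.items()}

# Made-up corpus sizes: one high-resource and one low-resource pair.
sizes = {"en-fr": 1_000_000, "en-az": 10_000}
probs = temperature_sampling(sizes, T=5.0)
# The low-resource pair is up-sampled well above its ~1% natural share.
print(round(probs["en-az"], 3))
```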
id: 7 · year: 2024 · index: 27,176
title: BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation
abstract: The upscaling of Large Language Models (LLMs) has yielded impressive advances in natural language processing, yet it also poses significant deployment challenges. Weight quantization has emerged as a widely embraced solution to reduce memory and computational demands. This paper introduces BitDistiller, a framework tha...
pdf_url: https://aclanthology.org/2024.acl-long.7
content: ## introduction scaling up model sizes has been pivotal to the success of large language models (llms), yielding unprecedented performance across diverse natural language processing tasks @xcite @xcite . however, such escalating model size poses significant challenges in deployment, particularly on resourceconstrained ...
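The weight quantization that motivates this abstract can be illustrated with basic symmetric round-to-nearest quantization (a generic sketch on toy weights, not BitDistiller's actual sub-4-bit QAT/self-distillation pipeline):

```python
def quantize(weights, bits=3):
    """Symmetric round-to-nearest quantization to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 3 for 3-bit
    scale = max(abs(w) for w in weights) / qmax      # per-tensor scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map the integer codes back to (lossy) floating-point weights."""
    return [v * scale for v in q]

w = [0.31, -0.12, 0.05, -0.29]        # toy weight values
q, s = quantize(w, bits=3)
print(q)                              # integers in [-3, 3]
print(dequantize(q, s))               # lossy reconstruction of w
```

The gap between `w` and the reconstruction is the quantization error that methods like BitDistiller aim to compensate for at very low bit-widths.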
id: 35 · year: 2025 · index: 39,757
title: AMPS: ASR with Multimodal Paraphrase Supervision
abstract: Spontaneous or conversational multilingual speech presents many challenges for state-of-the-art automatic speech recognition (ASR) systems. In this work, we present a new technique, AMPS, that augments a multilingual multimodal ASR system with paraphrase-based supervision for improved conversational ASR in multiple lang...
pdf_url: https://aclanthology.org/2025.naacl-short.35
content: ## introduction automatic speech recognition (asr) systems have shown considerable progress in recent years but still falter when subjected to spontaneous conversational speech containing disfluencies, loosely articulated sounds, and other noise factors @xcite . this degradation in asr performance could be largely attr...
id: 19 · year: 2023 · index: 23,208
title: The Larger they are, the Harder they Fail: Language Models do not Recognize Identifier Swaps in Python
abstract: Large Language Models (LLMs) have successfully been applied to code generation tasks, raising the question of how well these models understand programming. Typical programming languages have invariances and equivariances in their semantics that human programmers intuitively understand and exploit, such as the (near) in...
pdf_url: https://aclanthology.org/2023.findings-acl.19
content: ## introduction pretrained large language models (llms) are rapidly becoming one of the dominant paradigms for a large variety of language tasks @xcite , including programming code generation and completion @xcite . llms have demonstrated increasing performance with increasing model size on many practical tasks @xcite i...
id: 722 · year: 2023 · index: 24,810
title: NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
abstract: In this position paper we argue that the classical evaluation on Natural Language Processing (NLP) tasks using annotated benchmarks is in trouble. The worst kind of data contamination happens when a Large Language Model (LLM) is trained on the test split of a benchmark, and then evaluated in the same benchmark. The ext...
pdf_url: https://aclanthology.org/2023.findings-emnlp.722
content: ## introduction at the core of nlp as a discipline, there is rigorous evaluation on different tasks. the experimental protocols involve strict control over the data, especially test data, which needs to be totally unseen during development, but also over training and development data. this is essential to assess the pe...
id: 28 · year: 2024 · index: 28,265
title: Findings of the AmericasNLP 2024 Shared Task on Machine Translation into Indigenous Languages
abstract: This paper presents the findings of the third iteration of the AmericasNLP Shared Task on Machine Translation. This year’s competition features eleven Indigenous languages found across North, Central, and South America. A total of six teams participate with a total of 157 submissions across all languages and models. Tw...
pdf_url: https://aclanthology.org/2024.americasnlp-1.28
content: ## introduction though the field of natural language processing (nlp) has seen a steep increase in interest and impressive performance improvements over the past decade, a large performance gap still remains between a handful of so-called "high-resource," mostly colonial, languages and the remaining majority of the wor...
32
2,015
Utilisation des réseaux de neurones récurrents pour la projection interlingue d’étiquettes morpho-syntaxiques à partir d’un corpus parallèle
Building linguistic analysis tools for under-resourced languages is limited, among other things, by the lack of annotated corpora. In this article, we propose a method for automatically building analysis tools via the cross-lingual projection of linguistic annotations, using...
https://aclanthology.org/2015.jeptalnrecital-court.32
## introduction the linguistic annotation of resources consists of adding interpretive information to the original raw data @xcite . this information can be terminological, lexical, morphological, syntactic, or semantic in nature, and the linguistic resources can be lexicons, ...
939
3
2023
Keeping an Eye on Context: Attention Allocation over Input Partitions in Referring Expression Generation
In Referring Expression Generation, model inputs are often composed of different representations, including the visual properties of the intended referent, its relative position and size, and the visual context. Yet, the extent to which this information influences the generation process of black-box neural models is la...
https://aclanthology.org/2023.mmnlg-1.3
## introduction context is crucial in multimodal language generation tasks such as referring expression generation (reg), as descriptions for visible entities not only depend on their own appearance but also on their surroundings (e.g., @xcite). for reg, this is especially evident, as the same expression can unambiguous...
25,776
101
2024
TagDebias: Entity and Concept Tagging for Social Bias Mitigation in Pretrained Language Models
Pre-trained language models (PLMs) play a crucial role in various applications, including sensitive domains such as the hiring process. However, extensive research has unveiled that these models tend to replicate social biases present in their pre-training data, raising ethical concerns. In this study, we propose the T...
https://aclanthology.org/2024.findings-naacl.101
## introduction pre-trained language models (plms) are extensively utilized in various natural language processing tasks, acquiring a significant amount of knowledge during their pre-training phase. research has highlighted that these models often inherit substantial social biases present in their pre-training corpora,...
31,145
36
2024
That’s Optional: A Contemporary Exploration of “that” Omission in English Subordinate Clauses
The Uniform Information Density (UID) hypothesis posits that speakers optimize the communicative properties of their utterances by avoiding spikes in information, thereby maintaining a relatively uniform information profile over time. This paper investigates the impact of UID principles on syntactic reduction, specific...
https://aclanthology.org/2024.acl-short.36
## introduction exploiting the expressive richness of languages, speakers often convey the same messages in multiple ways. a body of research on uniform information density (uid) puts forward the hypothesis that speakers tend to optimize the communicative effectiveness of their utterances when faced with multiple optio...
28,066
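The UID hypothesis above is usually operationalized via surprisal, -log p(token | context): a uniform information profile means low variance of per-token surprisal. A toy sketch with hypothetical token probabilities (the probabilities would come from a language model in practice):

```python
import math

def surprisal_profile(token_probs):
    """Per-token surprisal in bits: -log2 p(token | context)."""
    return [-math.log2(p) for p in token_probs]

def surprisal_variance(token_probs):
    """Variance of the surprisal profile; lower = more uniform information density."""
    s = surprisal_profile(token_probs)
    mean = sum(s) / len(s)
    return sum((x - mean) ** 2 for x in s) / len(s)
```

Under this view, inserting "that" before an otherwise surprising clause onset spreads the information over more tokens and smooths the profile, which is why omission is predicted to correlate with predictability.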
246
2023
HEVS-TUW at SemEval-2023 Task 8: Ensemble of Language Models and Rule-based Classifiers for Claims Identification and PICO Extraction
This paper describes the HEVS-TUW team submission to the SemEval-2023 Task 8: Causal Claims. We participated in two subtasks: (1) causal claims detection and (2) PIO identification. For subtask 1, we experimented with an ensemble of weakly supervised question detection and fine-tuned Transformer-based models. For subta...
https://aclanthology.org/2023.semeval-1.246
## introduction identification and verification of causal claims from unstructured text data is essential for various decision-making processes, particularly in healthcare. the semeval-2023 task 8 @xcite aims to advance the state-of-the-art in this area by focusing on two subtasks: identification of causal claims and e...
26,524
20
2025
KIT’s Low-resource Speech Translation Systems for IWSLT 2025: System Enhancement with Synthetic Data and Model Regularization
This paper presents KIT’s submissions to the IWSLT 2025 low-resource track. We develop both cascaded systems, consisting of Automatic Speech Recognition (ASR) and Machine Translation (MT) models, and end-to-end (E2E) Speech Translation (ST) systems for three language pairs: Bemba, North Levantine Arabic, and Tunisian A...
https://aclanthology.org/2025.iwslt-1.20
## introduction in this paper, we present our submissions to the iwslt 2025 low-resource track. we participate in three language pairs, translating from bemba (iso: bem), north levantine arabic (iso: apc), and tunisian arabic (iso: aeb) into english. our approach follows the unconstrained track, reflecting practical sc...
38,516
41
2025
Can summarization approximate simplification? A gold standard comparison
This study explores the overlap between text summarization and simplification outputs. While summarization evaluation methods are streamlined, simplification lacks cohesion, prompting the question: how closely can abstractive summarization resemble gold-standard simplification? We address this by applying two BART-base...
https://aclanthology.org/2025.nodalida-1.41
## introduction text simplification can operate at various linguistic levels (semantic, syntactic, or lexical), using diverse strategies to achieve specific goals @xcite @xcite . in practice, automatic text simplification (ats) transforms complex text into simpler versions by splitting sentences, shortening length, and si...
40,212
33
2020
Mitigating Silence in Compliance Terminology during Parsing of Utterances
This paper reports on an approach to increase multi-token-term recall in a parsing task. We use a compliance-domain parser to extract, during the process of parsing raw text, terms that are unlisted in the terminology. The parser uses a similarity measure (Generalized Dice Coefficient) between listed terms and unlisted...
https://aclanthology.org/2020.fnp-1.33
## introduction the task of extracting multi-token terms 1 , i.e. terminological units which denote concepts and entities in a domain, is a core task of natural language processing (nlp). within the tax-and-regulations domain, some terms are compositional (nunberg et al., 1994; baldwin, 2006; krcmar et al., 2013; bogur...
5,048
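The similarity measure named in the record above compares listed terminology entries against unlisted candidate terms. In its plain, unweighted form the Dice coefficient over word sets is 2|A∩B| / (|A| + |B|); the generalized variant adds per-token weights, which this sketch omits:

```python
def dice(term_a: str, term_b: str) -> float:
    """Unweighted Dice coefficient over the word sets of two multi-token terms."""
    a, b = set(term_a.lower().split()), set(term_b.lower().split())
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))
```

A parser can then keep an unlisted candidate whenever its Dice score against some listed term exceeds a threshold, which is how recall on near-variants of known terms is increased.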
251
2020
ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation
We propose to train a non-autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model. In particular, we view our non-autoregressive translation system as an inference network (Tu and Gimpel, 2018) trained to minimize the autoregressive teacher energy. This contrasts wit...
https://aclanthology.org/2020.acl-main.251
## introduction the performance of non-autoregressive neural machine translation (nat) systems, which predict tokens in the target language independently of each other conditioned on the source sentence, has been improving steadily in recent years @xcite @xcite . one common ingredient in getting non-autoregressive syst...
1,994
153
2023
Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters
Chain-of-Thought (CoT) prompting can dramatically improve the multi-step reasoning abilities of large language models (LLMs). CoT explicitly encourages the LLM to generate intermediate rationales for solving a problem, by providing a series of reasoning steps in the demonstrations. Despite its success, there is still l...
https://aclanthology.org/2023.acl-long.153
## introduction large language models (llms) can perform new tasks during inference when prompted with a few demonstrations @xcite . chain-of-thought (cot) prompting @xcite (figure 1) can improve the ability of sufficiently large llms to do complex and multi-step reasoning. in addition to (query, answer) example-pair ...
19,470
8
2021
ExcavatorCovid: Extracting Events and Relations from Text Corpora for Temporal and Causal Analysis for COVID-19
Timely responses from policy makers to mitigate the impact of the COVID-19 pandemic rely on a comprehensive grasp of events, their causes, and their impacts. These events are reported at such a speed and scale as to be overwhelming. In this paper, we present ExcavatorCovid, a machine reading system that ingests open-so...
https://aclanthology.org/2021.emnlp-demo.8
## introduction timely responses from policy makers to mitigate the impact of the covid-19 pandemic rely on a comprehensive grasp of events, their causes, and their impacts. since the beginning of the covid-19 pandemic, an enormous number of articles have been published every day, reporting many events and stu...
9,473
12
2023
PROMT Systems for WMT 23 Shared General Translation Task
This paper describes the PROMT submissions for the WMT23 Shared General Translation Task. This year we participated in two directions of the Shared Translation Task: English to Russian and Russian to English. Our models are trained with the MarianNMT toolkit using the transformer-big configuration. We use BPE for text ...
https://aclanthology.org/2023.wmt-1.12
## introduction the wmt shared general translation task is an annual event where different companies and researchers build and test their systems on the test sets provided by the organizers. this year we decided to participate in two directions: english to russian and russian to english. we use the standard transformer...
27,030
415
2022
CONFIT: Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning
Factual inconsistencies in generated summaries severely limit the practical applications of abstractive dialogue summarization. Although significant progress has been achieved by using pre-trained neural language models, substantial amounts of hallucinated content are found during the human evaluation. In this work, we...
https://aclanthology.org/2022.naacl-main.415
## introduction text summarization is used to generate a concise and accurate summary of a long text while focusing on the sections that convey the most useful information @xcite . in recent years, the resurgence of dialogue summarization has attracted significant research attention @xcite @xcite @xcite @xcite @xcite ...
18,003
18
2007
Disambiguating automatic semantic annotation based on a thesaurus structure
The use/use for relationship in a thesaurus is usually more complex than the (para-) synonymy recommended in the ISO-2788 standard describing the content of these controlled vocabularies. The fact that a non preferred term can refer to multiple preferred terms (only the latter are relevant in controlled indexing) makes th...
https://aclanthology.org/2007.jeptalnrecital-long.18
## introduction thesauri are controlled vocabularies, often used for indexing and retrieving documents from collections. the standard thesauri contain two types of elements, preferred and non preferred terms, related with a link called use/use for. this link is considered as (para-)synonymy in the iso-2788 standard @xc...
212
27
2021
LOA: Logical Optimal Actions for Text-based Interaction Games
We present Logical Optimal Actions (LOA), an action decision architecture of reinforcement learning applications with a neuro-symbolic framework which is a combination of neural network and symbolic knowledge acquisition approach for natural language interaction games. The demonstration for LOA experiments consists of ...
https://aclanthology.org/2021.acl-demo.27
## introduction neuro-symbolic (ns) hybrid approaches have been proposed for overcoming the weakness of deep reinforcement learning @xcite @xcite , including less training data with generalization, external knowledge utilization, and direct explainability of what is learned. study of reinforcement learning (rl) in non-...
7,589
40
2020
English-to-Chinese Transliteration with Phonetic Auxiliary Task
Approaching named entities transliteration as a Neural Machine Translation (NMT) problem is common practice. While many have applied various NMT techniques to enhance machine transliteration models, few focus on the linguistic features particular to the relevant languages. In this paper, we investigate the effect of in...
https://aclanthology.org/2020.aacl-main.40
## introduction transliteration, the act of mapping a name from the orthographic system of one language to another, is directed by the pronunciation in the source and target languages, and often by historical reasons or conventions. it plays an important role in tasks like information retrieval and machine translation ...
1,662
602
2025
Bayelemabaga: Creating Resources for Bambara NLP
Data curation for under-resource languages enables the development of more accurate and culturally sensitive natural language processing models. However, the scarcity of well-structured multilingual datasets remains a challenge for advancing machine translation in these languages, especially for African languages. This...
https://aclanthology.org/2025.naacl-long.602
## introduction driven by the availability of massive, digitized data sets and advancements in neural architectures @xcite , state-of-the-art natural language processing (nlp) models are widely applied to the world's high-resource languages (e.g., english, french, spanish). they are employed in tasks such as machine tr...
39,686
742
2022
End-to-End Unsupervised Vision-and-Language Pre-training with Referring Expression Matching
Recently there has been an emerging interest in unsupervised vision-and-language pre-training (VLP) that learns multimodal representations without parallel image-caption data. These pioneering works significantly reduce the cost of VLP on data collection and achieve promising results compared to supervised VLP. However...
https://aclanthology.org/2022.emnlp-main.742
## introduction vision-and-language pre-training (vlp) @xcite @xcite @xcite @xcite has achieved great success on a wide range of vision-and-language tasks, e.g., visual question answering @xcite , image-text retrieval @xcite and text-to-image generation @xcite . the major challenge for vlp is how to bridge the gap betw...
15,639
3
2020
DART: A Lightweight Quality-Suggestive Data-to-Text Annotation Tool
We present a lightweight annotation tool, the Data AnnotatoR Tool (DART), for the general task of labeling structured data with textual descriptions. The tool is implemented as an interactive application that reduces human efforts in annotating large quantities of structured data, e.g. in the format of a table or tree ...
https://aclanthology.org/2020.coling-demos.3
## introduction neural data-to-text generation has been the subject of much research in recent years @xcite . traditionally, the task takes as input structured data which comes in the form of tables with attribute and value pairs, and generates free-form, human-readable text. ...
3,493
574
2022
Learning Disentangled Representations of Negation and Uncertainty
Negation and uncertainty modeling are long-standing tasks in natural language processing. Linguistic theory postulates that expressions of negation and uncertainty are semantically independent from each other and the content they modify. However, previous works on representation learning do not explicitly model this in...
https://aclanthology.org/2022.acl-long.574
## introduction in formal semantics, negation and uncertainty are operators whose semantic functions are independent of the propositional content they modify @xcite 2 . that is, it is possible to form fluent statements by varying only one of these aspects while leaving the others the same. negation, uncertainty, and co...
13,376
24
2023
Bidirectional Neural Machine Translation (NMT) using Monolingual Data for Khasi-English Pair
Due to a lack of parallel data, low-resource language machine translation has been unable to make the most of Neural Machine Translation. This paper investigates several approaches as to how low-resource Neural Machine Translation can be improved in a strictly low-resource setting, especially for bidirectional Khasi-En...
https://aclanthology.org/2023.icon-1.24
## introduction machine translation is a sub-field of natural language processing that deals with the automatic translation of human languages. the translation can be text-to-text, speech-to-speech, speech-to-text, or text-to-speech. text-based machine translation has come a long way, from rul...
25,302
315
2020
Domain Adaptation of Thai Word Segmentation Models using Stacked Ensemble
Like many Natural Language Processing tasks, Thai word segmentation is domain-dependent. Researchers have been relying on transfer learning to adapt an existing model to a new domain. However, this approach is inapplicable to cases where we can interact with only input and output layers of the models, also known as “bl...
https://aclanthology.org/2020.emnlp-main.315
## introduction word segmentation (ws) is an essential process for several natural language processing (nlp) tasks such as part-of-speech (pos) tagging and machine translation (mt). the accuracy of ws significantly affects the accuracy of these nlp tasks, as shown in experimental results from @xcite . while ws is conside...
4,036
442
2022
A Generalized Method for Automated Multilingual Loanword Detection
Loanwords are words incorporated from one language into another without translation. Suppose two words from distantly-related or unrelated languages sound similar and have a similar meaning. In that case, this is evidence of likely borrowing. This paper presents a method to automatically detect loanwords across various...
https://aclanthology.org/2022.coling-1.442
## introduction throughout history, words and phrases have been exchanged between languages around the world @xcite . this can obscure genetic relations between languages (e.g., many people erroneously believe english and french are more closely related than they are) but may also increase comprehension of foreign lang...
14,398
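A common first signal for the borrowing evidence described in the record above is surface-form similarity, e.g. a normalized edit distance between candidate word pairs. This is only a sketch of that one signal (names and thresholds are illustrative); the full detection method also requires the meanings to match:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings (iterative, two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution / match
        prev = cur
    return prev[len(b)]

def form_similarity(a: str, b: str) -> float:
    """1.0 for identical forms, approaching 0.0 for maximally different ones."""
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))
```

Pairs like "alcohol" / "alkohol" score high on form similarity; combined with a similar gloss across unrelated languages, that pairing becomes a loanword candidate.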
1
2024
AutoTemplate: A Simple Recipe for Lexically Constrained Text Generation
Lexically constrained text generation is one of the constrained text generation tasks, which aims to generate text that covers all the given constraint lexicons. While the existing approaches tackle this problem using a lexically constrained beam search algorithm or dedicated model using non-autoregressive decoding, th...
https://aclanthology.org/2024.inlg-main.1
## introduction text generation often requires lexical constraints, i.e., generating a text containing pre-specified lexicons. for example, the summarization task may require the generation of summaries that include specific people and places @xcite , and advertising text requires the inclusion of pre-specified keyword...
33,409
180
2021
Chase: A Large-Scale and Pragmatic Chinese Dataset for Cross-Database Context-Dependent Text-to-SQL
The cross-database context-dependent Text-to-SQL (XDTS) problem has attracted considerable attention in recent years due to its wide range of potential applications. However, we identify two biases in existing datasets for XDTS: (1) a high proportion of context-independent questions and (2) a high proportion of easy SQ...
https://aclanthology.org/2021.acl-long.180
## introduction the problem of mapping a natural language utterance into an executable sql query in the cross-database and context-dependent setting has attracted considerable attention due to its wide range of applications @xcite . this problem is notoriously challenging, due to the complex contextual dependencies amon...
6,996
124
2025
SymBa: Symbolic Backward Chaining for Structured Natural Language Reasoning
To improve the performance and explainability of LLM-based natural language reasoning, structured reasoning can be applied to generate explicitly structured proofs. Among different methods for structured reasoning, we specifically focus on backward chaining, where the proof goal is recursively decomposed to subgoals by...
https://aclanthology.org/2025.naacl-long.124
## introduction large language models (llms) trained with massive amounts of natural language text have shown remarkable reasoning ability in various fields, including logical and arithmetic reasoning @xcite . however, autoregressively generated explanations as in chain-of-thoughts might contain factual and logical erro...
39,226
53
2025
HateImgPrompts: Mitigating Generation of Images Spreading Hate Speech
The emergence of artificial intelligence has proven beneficial to numerous organizations, particularly in its various applications for social welfare. One notable application lies in AI-driven image generation tools. These tools produce images based on provided prompts. While this technology holds potential for constru...
https://aclanthology.org/2025.nlp4dh-1.53
## introduction in the era of rapid technological advancement, the emergence of generative ai tools such as dall-e has revolutionized the landscape of content creation @xcite . these tools harness the power of artificial intelligence to generate images based on textual prompts, offering unprecedented versatility and cr...
40,114
152
2024
DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI
Despite advancements in conversational AI, language models encounter challenges to handle diverse conversational tasks, and existing dialogue dataset collections often lack diversity and comprehensiveness. To tackle these issues, we introduce DialogStudio: the largest and most diverse collection of dialogue datasets, u...
https://aclanthology.org/2024.findings-eacl.152
## introduction recent years have seen remarkable progress in conversational ai, primarily driven by the advent of approaches and language models @xcite @xcite . despite the advancements, these models could fall short when handling various tasks in a conversation due to the lack of comprehensive and diverse training da...
31,044
23
2023
Naturalistic Causal Probing for Morpho-Syntax
Probing has become a go-to methodology for interpreting and analyzing deep neural models in natural language processing. However, there is still a lack of understanding of the limitations and weaknesses of various types of probes. In this work, we suggest a strategy for input-level intervention on naturalistic sentence...
https://aclanthology.org/2023.tacl-1.23
## introduction contextualized word representations are a byproduct of pre-trained neural language models and have led to improvements in performance on a myriad of downstream natural language processing (nlp) tasks (joshi et al., 2019; kondratyuk, 2019; zellers et al., 2019; brown et al., 2020). despite this performan...
26,774
626
2024
ORPO: Monolithic Preference Optimization without Reference Model
While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we revisit SFT in the context of preference alignment, emphasizing that a minor penalty for the disfavored style is s...
https://aclanthology.org/2024.emnlp-main.626
## introduction pre-trained language models (plms) with vast training corpora such as web texts @xcite or textbooks @xcite have shown remarkable abilities in diverse natural language processing (nlp) tasks @xcite @xcite @xcite . however, the models must undergo further tuning to be usable in downstream applications, ty...
30,041
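The "minor penalty for the disfavored style" in the ORPO abstract above is an odds-ratio term added to the ordinary SFT loss: with sequence likelihoods p_w (chosen) and p_l (rejected), the penalty is -log σ(log odds(p_w) - log odds(p_l)), where odds(p) = p / (1 - p). A scalar sketch (the weight λ and the toy probabilities are illustrative; real implementations work with per-token log-probabilities):

```python
import math

def odds(p: float) -> float:
    return p / (1.0 - p)

def orpo_penalty(p_chosen: float, p_rejected: float) -> float:
    """-log sigmoid(log odds-ratio); shrinks as the chosen answer becomes more likely."""
    log_or = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_or)))

def orpo_loss(nll_chosen: float, p_chosen: float, p_rejected: float,
              lam: float = 0.1) -> float:
    """SFT negative log-likelihood plus the weighted odds-ratio penalty."""
    return nll_chosen + lam * orpo_penalty(p_chosen, p_rejected)
```

Because the penalty is small whenever the chosen response is already much more likely than the rejected one, it only nudges the model away from the disfavored style while SFT on the chosen responses does the main work, which is why no frozen reference model is needed.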
663
2020
Multitask Learning for Cross-Lingual Transfer of Broad-coverage Semantic Dependencies
We describe a method for developing broad-coverage semantic dependency parsers for languages for which no semantically annotated resource is available. We leverage a multitask learning framework coupled with annotation projection. We use syntactic parsing as the auxiliary task in our multitask setup. Our annotation pro...
https://aclanthology.org/2020.emnlp-main.663
## introduction broad-coverage semantic dependency parsing (sdp) 1 was first introduced in the semeval shared task @xcite and aims to provide semantic analysis of sentences by capturing semantic relations between all content-bearing words in a sentence. the rich graph structure introduced by sdp allows the model to cov...
4,380
86
2025
Improved Norwegian Bokmål Translations for FLORES
FLORES+ is a collection of parallel datasets obtained by translation from originally English source texts. FLORES+ contains Norwegian translations for the two official written variants of Norwegian: Norwegian Bokmål and Norwegian Nynorsk. However, the earliest Bokmål version contained non-native-like mistakes, and even...
https://aclanthology.org/2025.wmt-1.86
## introduction this paper describes our submission to the wmt 25 open language data shared task, where participants were asked to contribute to open dataset collections such as flores+, the mt seed dataset or other parallel datasets. we have chosen to focus on the norwegian bokmål part of the flores+ dataset, as the a...
41,218
174
2020
MeisterMorxrc at SemEval-2020 Task 9: Fine-Tune Bert and Multitask Learning for Sentiment Analysis of Code-Mixed Tweets
Natural language processing (NLP) has been applied to various fields including text classification and sentiment analysis. In the shared task of sentiment analysis of code-mixed tweets, which is a part of the SemEval-2020 competition, we preprocess datasets by replacing emoji and deleting uncommon characters and so on,...
https://aclanthology.org/2020.semeval-1.174
## introduction language is an indispensable part of human daily life, and natural language is the most direct and simple tool of expression. natural language processing aims to transform the language used for human communication into a form that machines can understand. it is a mo...
6,081
16
2021
A Computational Model for Interactive Transcription
Transcribing low resource languages can be challenging in the absence of a good lexicon and trained transcribers. Accordingly, we seek a way to enable interactive transcription whereby the machine amplifies human efforts. This paper presents a data model and a system architecture for interactive transcription, supporti...
https://aclanthology.org/2021.dash-1.16
## introduction understanding the "transcription challenge" is a prerequisite to designing effective solutions, minimizing bottlenecks @xcite . we must face realities such as the lack of a good lexicon, the short supply of transcribers, and the difficulty of engaging people in arduous work. sparse transcription is an a...
8,099
392
2023
Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP
When deploying machine learning systems to the wild, it is highly desirable for them to effectively leverage prior knowledge to the unfamiliar domain while also firing alarms to anomalous inputs. In order to address these requirements, Universal Domain Adaptation (UniDA) has emerged as a novel research area in computer...
https://aclanthology.org/2023.findings-emnlp.392
## introduction deep learning models demonstrate satisfactory performance when tested on data from the training distribution. however, real-world inputs encounter novel data ceaselessly that deviate from the trained distribution, commonly known as distributional shift. when confronted with such inputs, machine learning...
24,481
8
2024
DCU-NLG-PBN at the GEM’24 Data-to-Text Task: Open-Source LLM PEFT-Tuning for Effective Data-to-Text Generation
LLMs have been used in various tasks with impressive success, including data-to-text generation. However, one concern when LLMs are compared to alternative methods is data contamination, in other words, for many datasets the data used in training these models may have included publicly available test sets. In this pape...
https://aclanthology.org/2024.inlg-genchal.8
## introduction with the advancement of large language models (llms), their capabilities have been explored in many tasks including data-to-text generation, which maps structured input data into a suitable output text containing all and only provided information. however, the datasets for many data-to-text tasks have be...
33,473
14
2021
On Releasing Annotator-Level Labels and Information in Datasets
A common practice in building NLP datasets, especially using crowd-sourced annotations, involves obtaining multiple annotator judgements on the same data instances, which are then flattened to produce a single “ground truth” label or score, through majority voting, averaging, or adjudication. While these approaches may...
https://aclanthology.org/2021.law-1.14
## introduction obtaining multiple annotator judgements on the same data instances is a common practice in nlp in order to improve the quality of final labels @xcite . cases of disagreement between annotations are often resolved through majority voting, averaging, or adjudication in order to derive a single "ground tru...
10,444
20
2023
Multimodal Hate Speech Event Detection - Shared Task 4, CASE 2023
Ensuring the moderation of hate speech and its targets emerges as a critical imperative within contemporary digital discourse. To facilitate this imperative, the shared task Multimodal Hate Speech Event Detection was organized in the sixth CASE workshop co-located at RANLP 2023. The shared task has two subtasks. The su...
https://aclanthology.org/2023.case-1.20
## introduction the rise of social media has altered the global communication and information landscape, allowing people from all walks of life to share their opinions and perspectives on a wide range of topics, including heated geopolitical events @xcite . this free-flowing exchange of ideas, however, has not been wit...
21,046
24
2025
Know-AI at TSAR 2025 Shared Task Difficulty-aware Text Simplification System
Text simplification is an active research topic with applications in multiple domains. In a simplification pipeline, the assessment of text difficulty plays a crucial role as a quality control mechanism: it acts as a critic and guides models to generate text at the difficulty level required by the user. This paper presents ...
https://aclanthology.org/2025.tsar-1.24
## introduction text simplification is a widely studied task in natural language processing (nlp), with applications in accessibility, education, and communication. it is important in many applications where users, e.g., non-native speakers, struggle to understand complex or standard language. the goal is to reduce th...
41,003
150
2022
CQR-SQL: Conversational Question Reformulation Enhanced Context-Dependent Text-to-SQL Parsers
Context-dependent text-to-SQL is the task of translating multi-turn questions into database-related SQL queries. Existing methods typically focus on making full use of history context or previously predicted SQL for currently SQL parsing, while neglecting to explicitly comprehend the schema and conversational dependenc...
https://aclanthology.org/2022.findings-emnlp.150
## introduction the text-to-sql task is one of the widely followed branches of semantic parsing, which aims to parse natural language questions with a given database into sql queries. previous works @xcite @xcite focus on context-independent text-to-sql task. however, in reality, as users tend to prefer multiple turns ...
16,602
71
2023
Exploring Zero and Few-shot Techniques for Intent Classification
Conversational NLU providers often need to scale to thousands of intent-classification models where new customers often face the cold-start problem. Scaling to so many customers puts a constraint on storage space as well. In this paper, we explore four different zero and few-shot intent classification approaches with t...
https://aclanthology.org/2023.acl-industry.71
## introduction intent classification is the primary natural language understanding task for a virtual agent or a chatbot. providing intent-utterances for training intent classification models is a laborious process. in this paper, we address this problem by exploring zero and few-shot intent identification using large...
20,554
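The zero-shot setting described in the abstract above, classifying intents with no intent-specific training data, can be illustrated with a toy similarity-based classifier. This is a hedged sketch, not the paper's system: a real provider would use LLM or sentence embeddings rather than the bag-of-words vectors below, and the intent names and descriptions are invented for illustration.

```python
import math
from collections import Counter

def bow_vector(text: str) -> Counter:
    # Toy "embedding": word counts of the lowercased text.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_intent(utterance: str, intent_descriptions: dict) -> str:
    # Score the utterance against each intent's natural-language description
    # and return the closest one; no intent-specific training data needed.
    u = bow_vector(utterance)
    return max(intent_descriptions,
               key=lambda name: cosine(u, bow_vector(intent_descriptions[name])))
```

Because only the per-intent descriptions are stored, this style of approach also eases the storage constraint the abstract mentions.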
499
2,023
Reasoning Makes Good Annotators: An Automatic Task-specific Rules Distilling Framework for Low-resource Relation Extraction
Relation extraction is often challenged by insufficient labeled data. Previous methods exploit knowledge from unlabeled data by generating pseudo labels in a self-training pipeline, which suffers a gradual drift problem. Logic rules, a transferable and explainable form of expert knowledge, have achieved promising succe...
https://aclanthology.org/2023.findings-emnlp.499
## introduction relation extraction is a fundamental task in natural language processing. training supervised models with manually annotated data is labor-intensive. this motivates methods for model learning under a low-resource setting with limited annotations. semi-supervised methods @xcite aim to explore knowledge f...
24,587
320
2,021
WER-BERT: Automatic WER Estimation with BERT in a Balanced Ordinal Classification Paradigm
Automatic Speech Recognition (ASR) systems are evaluated using Word Error Rate (WER), which is calculated by comparing the number of errors between the ground truth and the transcription of the ASR system. This calculation, however, requires manual transcription of the speech signal to obtain the ground truth. Since tr...
https://aclanthology.org/2021.eacl-main.320
## introduction asr systems are ubiquitous now. they are available across applications such as voice assistants, assisted living or hands free device usage. however, with the widespread usage of asr systems, there comes a heavy need for asr evaluation as well, to select, compare or improve alternate asr ...
8,510
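The WER computation described in the abstract above is a word-level Levenshtein distance: WER = (substitutions + deletions + insertions) / reference word count. A minimal sketch of that reference-based calculation (not the paper's WER-BERT estimator, whose whole point is to avoid needing the reference transcript):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    # WER = (S + D + I) / N, computed as word-level edit distance
    # between the ground-truth transcript and the ASR output.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / max(1, len(ref))
```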
35
2,025
Pensez: Less Data, Better Reasoning – Rethinking French LLMs
Large language models (LLMs) have demonstrated remarkable capabilities across various natural language processing tasks. However, achieving high performance in specialized domains such as mathematical reasoning and languages other than English often requires ...
https://aclanthology.org/2025.jeptalnrecital-taln.35
## training reasoning model we fine-tuned the qwen2.5 7b instruct model on pensez training data. to guide the model in producing step-by-step reasoning, we incorporated special tokens, @xmath0, into the training data to mark these reasoning sequences. the training process leveraged deepspeed zero-3 (rasley et al., 20...
38,574
1
2,025
ArabicSense: A Benchmark for Evaluating Commonsense Reasoning in Arabic with Large Language Models
Recent efforts in natural language processing (NLP) commonsense reasoning research have led to the development of numerous new datasets and benchmarks. However, these resources have predominantly been limited to English, leaving a gap in evaluating commonsense reasoning in other languages. In this paper, we introduce t...
https://aclanthology.org/2025.wacl-1.1
## arabicsense: a new benchmark dataset the aim of this work is twofold: to create a dataset for evaluating arabic commonsense reasoning in llms and to improve their performance in this area. to achieve this, we generate diverse, high-quality data specifically designed for training llms in arabic commonsense reasoning....
41,101
133
2,025
NLP-ADBench: NLP Anomaly Detection Benchmark
Anomaly detection (AD) is an important machine learning task with applications in fraud detection, content moderation, and user behavior analysis. However, AD is relatively understudied in a natural language processing (NLP) context, limiting its effectiveness in detecting harmful content, phishing attempts, and spam r...
https://aclanthology.org/2025.findings-emnlp.133
## introduction anomaly detection (ad) is a fundamental area in machine learning with diverse applications in web systems, such as fraud detection, content moderation, and user behavior analysis @xcite . substantial progress has been achieved in ad for structured data such as tabular, graph, and time series @xcite @xci...
36,810
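A common distance-based baseline for the anomaly detection task surveyed above scores each point by its distance to its k-th nearest "normal" neighbor. The sketch below operates on precomputed embedding vectors as a toy stand-in; the benchmark's actual algorithms and text encoders are not reproduced here.

```python
import math

def knn_anomaly_score(point, normal_points, k=2):
    # Anomaly score = distance to the k-th nearest "normal" point:
    # points far from all normal data receive high scores.
    dists = sorted(math.dist(point, p) for p in normal_points)
    return dists[min(k, len(dists)) - 1]
```

In an NLP setting, `point` would be the embedding of an incoming text (e.g. a message to screen for spam or phishing) and `normal_points` the embeddings of known-benign texts; a threshold on the score flags anomalies.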
10
2,019
Autism Speech Analysis using Acoustic Features
Autism speech has distinct acoustic patterns, different from normal speech. Analyzing acoustic features derived from the speech of children affected with autism spectrum disorder (ASD) can help its early detection. In this study, a comparative analysis of the discriminating acoustic characteristics is carried out betwe...
https://aclanthology.org/2019.icon-1.10
## introduction asd is a pervasive developmental disorder, defined clinically by observing the abnormalities in three areas: communication, social reciprocity, and hyperfocus or reduced behavioral flexibility @xcite @xcite . studies show that at least 50% of the asd population tends to show atypical acoustic patter...
1,520
78
2,023
Sartipi-Sedighin at SemEval-2023 Task 2: Fine-grained Named Entity Recognition with Pre-trained Contextual Language Models and Data Augmentation from Wikipedia
This paper presents the system developed by the Sartipi-Sedighin team for SemEval 2023 Task 2, which is a shared task focused on multilingual complex named entity recognition (NER), or MultiCoNER II. The goal of this task is to identify and classify complex named entities (NEs) in text across multiple languages. To tac...
https://aclanthology.org/2023.semeval-1.78
## introduction the multiconer 2023 task 2 was initiated with the purpose of developing ner systems that can accurately detect fine-grained nes across multiple languages. the shared task was organized into 13 tracks, with 12 monolingual tracks and one multilingual track, to facilitate a thorough evaluation of the parti...
26,356
6
2,023
Modelling the Reduplicating Lushootseed Morphology with an FST and LSTM
In this paper, we present an FST based approach for conducting morphological analysis, lemmatization and generation of Lushootseed words. Furthermore, we use the FST to generate training data for an LSTM based neural model and train this model to do morphological analysis. The neural model reaches a 71.9% accuracy on t...
https://aclanthology.org/2023.americasnlp-1.6
## introduction a significant proportion of the world's languages face the threat of endangerment to varying degrees. this endangered status poses certain constraints on the extent to which modern nlp research can be conducted with such languages. this is due to the fact that many endangered languages lack extensive te...
20,624
1
2,025
Are We Paying Attention to Her? Investigating Gender Disambiguation and Attention in Machine Translation
While gender bias in modern Neural Machine Translation (NMT) systems has received much attention, the traditional evaluation metrics for these systems do not fully capture the extent to which models integrate contextual gender cues. We propose a novel evaluation metric called Minimal Pair Accuracy (MPA) which measures ...
https://aclanthology.org/2025.gitt-1.1
## introduction the field of machine translation (mt) has undergone significant technological shifts over the past decades, moving from transparent rule-based systems to increasingly opaque probability-based ones such as statistical and neural mt. furthermore, the complexity and scale of current transformer-based @xcit...
38,291
2
2,024
Proc2PDDL: Open-Domain Planning Representations from Texts
Planning in a text-based environment continues to be a significant challenge for AI systems. Recent approaches have utilized language models to predict planning domain definitions (e.g., PDDL) but have only been evaluated in closed-domain simulated environments. To address this, we present Proc2PDDL, the first dataset ...
https://aclanthology.org/2024.nlrse-1.2
## introduction planning is the task of finding a sequence of actions to achieve a goal in a given environment @xcite . in real life, the environment is often described with natural language texts. to enable text-based, automated planning, recent work has used language models (lms) to generate plans @xcite . however, t...
34,786
14
2,023
SAE-NTM: Sentence-Aware Encoder for Neural Topic Modeling
Incorporating external knowledge, such as pre-trained language models (PLMs), into neural topic modeling has achieved great success in recent years. However, employing PLMs for topic modeling generally ignores the maximum sequence length of PLMs and the interaction between external knowledge and bag-of-words (BOW). To ...
https://aclanthology.org/2023.codi-1.14
## introduction topic models have been widely used to identify human-interpretable topics and learn text representations, which have been applied for various tasks in natural language processing (nlp) such as information retrieval @xcite , summarization @xcite , and semantic similarity detection @xcite . a typical topi...
21,194
10
2,024
Speaker Identification: Opening the Black Box
The explainability of deep learning systems has become a central issue in recent years, in European law as well as in the forensic domain. The BA-LR approach introduces a new modeling paradigm for speaker identification: it automatically brings out the attributes shared by ...
https://aclanthology.org/2024.jeptalnrecital-jep.10
## introduction automatic speaker recognition consists of recognizing or verifying a person's identity from a sample of their voice. voice comparison falls within this field and determines whether two speech recordings were produced by the same speaker or by two different speakers. ...
33,565
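The voice-comparison decision described in the introduction above (same speaker or different speakers) is commonly reduced to comparing fixed-size speaker embeddings. A minimal sketch, assuming the embeddings have already been extracted by some speaker model; the 0.7 threshold is an arbitrary placeholder, not a calibrated value, and this is not the BA-LR approach itself:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def same_speaker(emb_a, emb_b, threshold=0.7):
    # Decide "same speaker" when the cosine similarity of the two
    # speaker embeddings exceeds a calibrated decision threshold.
    return cosine_similarity(emb_a, emb_b) >= threshold
```

The appeal of the BA-LR paradigm mentioned above is precisely that it replaces this single opaque score with interpretable shared attributes.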
261
2,022
The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems
Conversational agents have come increasingly closer to human competence in open-domain dialogue settings; however, such models can reflect insensitive, hurtful, or entirely incoherent viewpoints that erode a user’s trust in the moral integrity of the system. Moral deviations are difficult to mitigate because moral judg...
https://aclanthology.org/2022.acl-long.261
## related work there is a long-standing interest in the moral responsibility of ai @xcite @xcite @xcite . work in human-computer interaction (hci) reveals that, before users feel they can trust a conversational agent, they will often probe it to identify the limitations which bound its abilities, competence @xcite , a...
13,062
449
2,023
End-to-End Single-Channel Speaker-Turn Aware Conversational Speech Translation
Conventional speech-to-text translation (ST) systems are trained on single-speaker utterances, and they may not generalize to real-life scenarios where the audio contains conversations by multiple speakers. In this paper, we tackle single-channel multi-speaker conversational ST with an end-to-end and multi-task trainin...
https://aclanthology.org/2023.emnlp-main.449
## introduction speech translation (st) has seen wide adoption in commercial products and the research community @xcite due to its effectiveness in bridging language barriers. st aims to translate audio of source languages into text of the target languages. this problem was tackled by a cascaded approach that pipelines...
22,225
18
2,024
Benchmarking Low-Resource Machine Translation Systems
Assessing the performance of machine translation systems is of critical value, especially to languages with lower resource availability. Due to the large evaluation effort required by the translation task, studies often compare new systems against single systems or commercial solutions. Consequently, determining the bes...
https://aclanthology.org/2024.loresmt-1.18
## introduction the machine translation (mt) task is increasingly relevant in today's connected world as accessibility enables knowledge transfer. hence, mt systems are recognized as prime tools in the natural language processing (nlp) domain @xcite . in recent years, neural machine translation (nmt) @xcite has led the...
33,797
98
2,021
Learning to Answer Psychological Questionnaire for Personality Detection
Existing text-based personality detection research mostly relies on data-driven approaches to implicitly capture personality cues in online posts, lacking the guidance of psychological knowledge. Psychological questionnaire, which contains a series of dedicated questions highly related to personality traits, plays a cr...
https://aclanthology.org/2021.findings-emnlp.98
## introduction as a psychological conception, personality aims to explain human behaviors in terms of a few stable and measurable individual characteristics @xcite . the study of personality is fundamental to psychology, and personality detection @xcite has benefited many applications such as dialogue systems @xcite ,...
9,648
23
2,024
Optimizing LLM Based Retrieval Augmented Generation Pipelines in the Financial Domain
Retrieval Augmented Generation (RAG) is a prominent approach in real-world applications for grounding large language model (LLM) generations in up to date and domain-specific knowledge. However, there is a lack of systematic investigations of the impact of each component (retrieval quality, prompts, generation models) o...
https://aclanthology.org/2024.naacl-industry.23
## introduction recent years have seen tremendous improvement in the ability of large language models (llm) such as @xcite and llama-2 @xcite to address users' questions/queries in diverse domains (medical questions, math problems, code assistants etc). despite llms acquiring immense parametric world knowledge during t...
34,554
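The RAG components listed in the abstract above (retrieval quality, prompts, generation models) can be sketched end to end minus the model call. The toy retriever below ranks passages by word overlap, a stand-in for a real dense or BM25 retriever; the final LLM call is deliberately left out, and the example corpus is invented:

```python
def retrieve(query: str, corpus: list, top_k: int = 2) -> list:
    # Toy retriever: rank passages by word overlap with the query.
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_rag_prompt(query: str, corpus: list) -> str:
    # Ground the generation step in retrieved context; the actual
    # LLM API call is out of scope for this sketch.
    context = "\n".join(retrieve(query, corpus))
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")
```

Separating retrieval from prompt assembly like this makes it possible to vary each component independently, which is exactly the kind of per-component impact study the paper describes.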
4
2,025
Beyond Paraphrasing: Analyzing Summarization Abstractiveness and Reasoning
While there have been many studies analyzing the ability of LLMs to solve problems through reasoning, their application of reasoning in summarization remains largely unexamined. This study explores whether reasoning is essential to summarization by investigating three questions: (1) Do humans frequently use reasoning t...
https://aclanthology.org/2025.newsum-main.4
## introduction in recent decades, the amount of textual information available has grown exponentially, creating a pressing need for automatic systems that can process this information and derive meaningful conclusions from it. recent advances in large language models (llms) have shown remarkable progress in handling t...
40,018
702
2,023
Multijugate Dual Learning for Low-Resource Task-Oriented Dialogue System
Dialogue data in real scenarios tend to be sparsely available, rendering data-starved end-to-end dialogue systems trained inadequately. We discover that data utilization efficiency in low-resource scenarios can be enhanced by mining alignment information between uncertain utterances and deterministic dialogue states. Therefore, ...
https://aclanthology.org/2023.findings-acl.702
## introduction with the emergence of dialogue data @xcite , and the evolution of pre-trained language models @xcite , end-to-end task-oriented dialogue (tod) systems @xcite @xcite gradually replaced the previous modular cascading dialogue systems @xcite . the end-to-end tod system adopts a uniform training objective, ...
23,891