Columns:
- bibtex_url: string (length 41–50)
- bibtext: string (length 693–2.88k)
- abstract: string (length 0–2k)
- authors: list (1–45 items)
- title: string (length 21–206)
- id: string (length 7–16)
- type: string (2 classes)
- arxiv_id: string (length 9–12)
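Each row in this chunk follows the column schema above. As a minimal sketch (the class name and Python types are assumptions; only the field names come from the schema), one row can be modeled as a record, illustrated here with the first row's values:

```python
from dataclasses import dataclass


@dataclass
class AnthologyRow:
    # Field names follow the column list above; the types are assumed.
    bibtex_url: str
    bibtext: str        # truncated BibTeX entry
    abstract: str       # truncated abstract text
    authors: list[str]  # "Last, First" strings
    title: str
    id: str             # ACL Anthology id
    type: str           # one of two classes, e.g. "Poster"
    arxiv_id: str       # versioned arXiv id


# Example: the first row in this chunk.
row = AnthologyRow(
    bibtex_url="https://aclanthology.org/2024.findings-acl.368.bib",
    bibtext="@inproceedings{cui-etal-2024-unveiling, ...}",
    abstract="Crafting an appealing heading is crucial ...",
    authors=["Cui, Shaobo", "Feng, Yiyang", "Mao, Yisong",
             "Hou, Yifan", "Faltings, Boi"],
    title=("Unveiling the Art of Heading Design: A Harmonious "
           "Blend of Summarization, Neology, and Algorithm"),
    id="findings-acl.368",
    type="Poster",
    arxiv_id="2006.03743v1",
)
```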
https://aclanthology.org/2024.findings-acl.368.bib
@inproceedings{cui-etal-2024-unveiling, title = "Unveiling the Art of Heading Design: A Harmonious Blend of Summarization, Neology, and Algorithm", author = "Cui, Shaobo and Feng, Yiyang and Mao, Yisong and Hou, Yifan and Faltings, Boi", editor = "Ku, Lun-Wei and Martins, ...
Crafting an appealing heading is crucial for attracting readers and marketing work or products. A popular way is to summarize the main idea with a refined description and a memorable acronym. However, there lacks a systematic study and a formal benchmark including datasets and metrics. Motivated by this absence, we int...
[ "Cui, Shaobo", "Feng, Yiyang", "Mao, Yisong", "Hou, Yifan", "Faltings, Boi" ]
Unveiling the Art of Heading Design: A Harmonious Blend of Summarization, Neology, and Algorithm
findings-acl.368
Poster
2006.03743v1
https://aclanthology.org/2024.findings-acl.369.bib
@inproceedings{wuehrl-etal-2024-understanding, title = "Understanding Fine-grained Distortions in Reports of Scientific Findings", author = "Wuehrl, Amelie and Wright, Dustin and Klinger, Roman and Augenstein, Isabelle", editor = "Ku, Lun-Wei and Martins, Andre and Srikuma...
Distorted science communication harms individuals and society as it can lead to unhealthy behavior change and decrease trust in scientific institutions. Given the rapidly increasing volume of science communication in recent years, a fine-grained understanding of how findings from scientific publications are reported to...
[ "Wuehrl, Amelie", "Wright, Dustin", "Klinger, Roman", "Augenstein, Isabelle" ]
Understanding Fine-grained Distortions in Reports of Scientific Findings
findings-acl.369
Poster
2402.12431v1
https://aclanthology.org/2024.findings-acl.370.bib
@inproceedings{jin-etal-2024-mm, title = "{MM}-{SOC}: Benchmarking Multimodal Large Language Models in Social Media Platforms", author = "Jin, Yiqiao and Choi, Minje and Verma, Gaurav and Wang, Jindong and Kumar, Srijan", editor = "Ku, Lun-Wei and Martins, Andre and ...
Social media platforms are hubs for multimodal information exchange, encompassing text, images, and videos, making it challenging for machines to comprehend the information or emotions associated with interactions in online spaces. Multimodal Large Language Models (MLLMs) have emerged as a promising solution to address...
[ "Jin, Yiqiao", "Choi, Minje", "Verma, Gaurav", "Wang, Jindong", "Kumar, Srijan" ]
{MM}-{SOC}: Benchmarking Multimodal Large Language Models in Social Media Platforms
findings-acl.370
Poster
2402.14154v2
https://aclanthology.org/2024.findings-acl.371.bib
@inproceedings{srivastava-etal-2024-instances, title = "Instances Need More Care: Rewriting Prompts for Instances with {LLM}s in the Loop Yields Better Zero-Shot Performance", author = "Srivastava, Saurabh and Huang, Chengyue and Fan, Weiguo and Yao, Ziyu", editor = "Ku, Lun-Wei and ...
Large language models (LLMs) have revolutionized zero-shot task performance, mitigating the need for task-specific annotations while enhancing task generalizability. Despite these advancements, current methods using trigger phrases such as {``}Let{'}s think step by step{''} remain limited. This study introduces PRomPTed,...
[ "Srivastava, Saurabh", "Huang, Chengyue", "Fan, Weiguo", "Yao, Ziyu" ]
Instances Need More Care: Rewriting Prompts for Instances with {LLM}s in the Loop Yields Better Zero-Shot Performance
findings-acl.371
Poster
2310.02107v4
https://aclanthology.org/2024.findings-acl.372.bib
@inproceedings{xiong-etal-2024-benchmarking, title = "Benchmarking Retrieval-Augmented Generation for Medicine", author = "Xiong, Guangzhi and Jin, Qiao and Lu, Zhiyong and Zhang, Aidong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Find...
While large language models (LLMs) have achieved state-of-the-art performance on a wide range of medical question answering (QA) tasks, they still face challenges with hallucinations and outdated knowledge. Retrieval-augmented generation (RAG) is a promising solution and has been widely adopted. However, a RAG system c...
[ "Xiong, Guangzhi", "Jin, Qiao", "Lu, Zhiyong", "Zhang, Aidong" ]
Benchmarking Retrieval-Augmented Generation for Medicine
findings-acl.372
Poster
2311.09774v1
https://aclanthology.org/2024.findings-acl.373.bib
@inproceedings{yuan-etal-2024-chatmusician, title = "{C}hat{M}usician: Understanding and Generating Music Intrinsically with {LLM}", author = "Yuan, Ruibin and Lin, Hanfeng and Wang, Yi and Tian, Zeyue and Wu, Shangda and Shen, Tianhao and Zhang, Ge and Wu, Yuhan...
While LLMs demonstrate impressive capabilities in musical knowledge, we find that music reasoning is still an unsolved task. We introduce ChatMusician, an open-source large language model (LLM) that integrates intrinsic musical abilities. It is based on continual pre-training and finetuning LLaMA2 on a text-compatible m...
[ "Yuan, Ruibin", "Lin, Hanfeng", "Wang, Yi", "Tian, Zeyue", "Wu, Shangda", "Shen, Tianhao", "Zhang, Ge", "Wu, Yuhang", "Liu, Cong", "Zhou, Ziya", "Xue, Liumeng", "Ma, Ziyang", "Liu, Qin", "Zheng, Tianyu", "Li, Yizhi", "Ma, Yinghao", "Liang, Yiming", "Chi, Xiaowei", "Liu, Ruibo", ...
{C}hat{M}usician: Understanding and Generating Music Intrinsically with {LLM}
findings-acl.373
Poster
2407.21531v1
https://aclanthology.org/2024.findings-acl.374.bib
@inproceedings{tan-etal-2024-towards, title = "Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop {QA} Dataset and Pseudo-Instruction Tuning", author = "Tan, Qingyu and Ng, Hwee Tou and Bing, Lidong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, ...
Knowledge in the real world is being updated constantly. However, it is costly to frequently update large language models (LLMs). Therefore, it is crucial for LLMs to understand the concept of temporal knowledge. However, prior works on temporal question answering (TQA) did not emphasize multi-answer and multi-hop type...
[ "Tan, Qingyu", "Ng, Hwee Tou", "Bing, Lidong" ]
Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop {QA} Dataset and Pseudo-Instruction Tuning
findings-acl.374
Poster
2311.09821v2
https://aclanthology.org/2024.findings-acl.375.bib
@inproceedings{voronov-etal-2024-mind, title = "Mind Your Format: Towards Consistent Evaluation of In-Context Learning Improvements", author = "Voronov, Anton and Wolf, Lena and Ryabinin, Max", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findin...
Large language models demonstrate a remarkable capability for learning to solve new tasks from a few examples. The $\textit{prompt template}$, or the way the input examples are formatted to obtain the prompt, is an important yet often overlooked aspect of in-context learning. In this work, we conduct a comprehensive stud...
[ "Voronov, Anton", "Wolf, Lena", "Ryabinin, Max" ]
Mind Your Format: Towards Consistent Evaluation of In-Context Learning Improvements
findings-acl.375
Poster
2401.06766v3
https://aclanthology.org/2024.findings-acl.376.bib
@inproceedings{liu-etal-2024-knowledge-graph, title = "Knowledge Graph-Enhanced Large Language Models via Path Selection", author = "Liu, Haochen and Wang, Song and Zhu, Yaochen and Dong, Yushun and Li, Jundong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar...
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications. However, they are known to generate factually inaccurate outputs, a.k.a. the hallucination problem. In recent years, incorporating external knowledge extracted from Knowledge Graphs (KGs) has become a promising strateg...
[ "Liu, Haochen", "Wang, Song", "Zhu, Yaochen", "Dong, Yushun", "Li, Jundong" ]
Knowledge Graph-Enhanced Large Language Models via Path Selection
findings-acl.376
Poster
2406.13862v1
https://aclanthology.org/2024.findings-acl.377.bib
@inproceedings{huang-etal-2024-ottawa, title = "{OTTAWA}: Optimal {T}ranspor{T} Adaptive Word Aligner for Hallucination and Omission Translation Errors Detection", author = "Huang, Chenyang and Ghaddar, Abbas and Kobyzev, Ivan and Rezagholizadeh, Mehdi and Zaiane, Osmar and Ch...
Recently, there has been considerable attention on detecting hallucinations and omissions in Machine Translation (MT) systems. The two dominant approaches to tackle this task involve analyzing the MT system{'}s internal states or relying on the output of external tools, such as sentence similarity or MT quality estimat...
[ "Huang, Chenyang", "Ghaddar, Abbas", "Kobyzev, Ivan", "Rezagholizadeh, Mehdi", "Zaiane, Osmar", "Chen, Boxing" ]
{OTTAWA}: Optimal {T}ranspor{T} Adaptive Word Aligner for Hallucination and Omission Translation Errors Detection
findings-acl.377
Poster
2406.01919v1
https://aclanthology.org/2024.findings-acl.378.bib
@inproceedings{yu-etal-2024-onsep, title = "{ONSEP}: A Novel Online Neural-Symbolic Framework for Event Prediction Based on Large Language Model", author = "Yu, Xuanqing and Sun, Wangtao and Li, Jingwei and Liu, Kang and Liu, Chengbao and Tan, Jie", editor = "Ku, Lun-Wei ...
In the realm of event prediction, temporal knowledge graph forecasting (TKGF) stands as a pivotal technique. Previous approaches face the challenges of not utilizing experience during testing and relying on a single short-term history, which limits adaptation to evolving data. In this paper, we introduce the Online Neu...
[ "Yu, Xuanqing", "Sun, Wangtao", "Li, Jingwei", "Liu, Kang", "Liu, Chengbao", "Tan, Jie" ]
{ONSEP}: A Novel Online Neural-Symbolic Framework for Event Prediction Based on Large Language Model
findings-acl.378
Poster
2311.17351v1
https://aclanthology.org/2024.findings-acl.379.bib
@inproceedings{sun-etal-2024-speech, title = "Speech-based Slot Filling using Large Language Models", author = "Sun, Guangzhi and Feng, Shutong and Jiang, Dongcheng and Zhang, Chao and Gasic, Milica and Woodland, Phil", editor = "Ku, Lun-Wei and Martins, Andre and ...
Recently, advancements in large language models (LLMs) have shown an unprecedented ability across various language tasks. This paper investigates the potential application of LLMs to slot filling with noisy ASR transcriptions, via both in-context learning and task-specific fine-tuning. Dedicated prompt designs and nois...
[ "Sun, Guangzhi", "Feng, Shutong", "Jiang, Dongcheng", "Zhang, Chao", "Gasic, Milica", "Woodland, Phil" ]
Speech-based Slot Filling using Large Language Models
findings-acl.379
Poster
1811.01331v2
https://aclanthology.org/2024.findings-acl.380.bib
@inproceedings{li-etal-2024-big, title = "Too Big to Fail: Larger Language Models are Disproportionately Resilient to Induction of Dementia-Related Linguistic Anomalies", author = "Li, Changye and Sheng, Zhecheng and Cohen, Trevor and Pakhomov, Serguei", editor = "Ku, Lun-Wei and ...
As artificial neural networks grow in complexity, understanding their inner workings becomes increasingly challenging, which is particularly important in healthcare applications. The intrinsic evaluation metrics of autoregressive neural language models (NLMs), perplexity (PPL), can reflect how {``}surprised{''} an NLM ...
[ "Li, Changye", "Sheng, Zhecheng", "Cohen, Trevor", "Pakhomov, Serguei" ]
Too Big to Fail: Larger Language Models are Disproportionately Resilient to Induction of Dementia-Related Linguistic Anomalies
findings-acl.380
Poster
2406.02830v1
https://aclanthology.org/2024.findings-acl.381.bib
@inproceedings{paz-argaman-etal-2024-hesum, title = "{H}e{S}um: a Novel Dataset for Abstractive Text Summarization in {H}ebrew", author = "Paz-Argaman, Tzuf and Mondshine, Itai and Achi Mordechai, Asaf and Tsarfaty, Reut", editor = "Ku, Lun-Wei and Martins, Andre and Sriku...
While large language models (LLMs) excel in various natural language tasks in English, their performance in low-resource languages like Hebrew, especially for generative tasks such as abstractive summarization, remains unclear. The high morphological richness in Hebrew adds further challenges due to the ambiguity in se...
[ "Paz-Argaman, Tzuf", "Mondshine, Itai", "Achi Mordechai, Asaf", "Tsarfaty, Reut" ]
{H}e{S}um: a Novel Dataset for Abstractive Text Summarization in {H}ebrew
findings-acl.381
Poster
2406.03897v2
https://aclanthology.org/2024.findings-acl.382.bib
@inproceedings{wang-zhao-2024-tram, title = "{TRAM}: Benchmarking Temporal Reasoning for Large Language Models", author = "Wang, Yuqing and Zhao, Yun", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of the Association for Computational Linguisti...
Reasoning about time is essential for understanding the nuances of events described in natural language. Previous research on this topic has been limited in scope, characterized by a lack of standardized benchmarks that would allow for consistent evaluations across different studies. In this paper, we introduce TRAM, a...
[ "Wang, Yuqing", "Zhao, Yun" ]
{TRAM}: Benchmarking Temporal Reasoning for Large Language Models
findings-acl.382
Poster
2406.09072v1
https://aclanthology.org/2024.findings-acl.383.bib
@inproceedings{amayuelas-etal-2024-knowledge, title = "Knowledge of Knowledge: Exploring Known-Unknowns Uncertainty with Large Language Models", author = "Amayuelas, Alfonso and Wong, Kyle and Pan, Liangming and Chen, Wenhu and Wang, William Yang", editor = "Ku, Lun-Wei and ...
This paper investigates the capabilities of Large Language Models (LLMs) in understanding their knowledge and uncertainty over questions. Specifically, we focus on addressing known-unknown questions, characterized by high uncertainty due to the absence of definitive answers. To facilitate our study, we collect a new da...
[ "Amayuelas, Alfonso", "Wong, Kyle", "Pan, Liangming", "Chen, Wenhu", "Wang, William Yang" ]
Knowledge of Knowledge: Exploring Known-Unknowns Uncertainty with Large Language Models
findings-acl.383
Poster
2003.10775v2
https://aclanthology.org/2024.findings-acl.384.bib
@inproceedings{cui-etal-2024-exploring, title = "Exploring Defeasibility in Causal Reasoning", author = "Cui, Shaobo and Milikic, Lazar and Feng, Yiyang and Ismayilzada, Mete and Paul, Debjit and Bosselut, Antoine and Faltings, Boi", editor = "Ku, Lun-Wei and ...
Defeasibility in causal reasoning implies that the causal relationship between cause and effect can be strengthened or weakened. Namely, the causal strength between cause and effect should increase or decrease with the incorporation of strengthening arguments (supporters) or weakening arguments (defeaters), respectivel...
[ "Cui, Shaobo", "Milikic, Lazar", "Feng, Yiyang", "Ismayilzada, Mete", "Paul, Debjit", "Bosselut, Antoine", "Faltings, Boi" ]
Exploring Defeasibility in Causal Reasoning
findings-acl.384
Poster
2401.03183v2
https://aclanthology.org/2024.findings-acl.385.bib
@inproceedings{gandhi-etal-2024-better, title = "Better Synthetic Data by Retrieving and Transforming Existing Datasets", author = "Gandhi, Saumya and Gala, Ritu and Viswanathan, Vijay and Wu, Tongshuang and Neubig, Graham", editor = "Ku, Lun-Wei and Martins, Andre and ...
Despite recent advances in large language models, building dependable and deployable NLP models typically requires abundant, high-quality training data. However, task-specific data is not available for many use cases, and manually curating task-specific data is labor-intensive. Recent work has studied prompt-driven syn...
[ "Gandhi, Saumya", "Gala, Ritu", "Viswanathan, Vijay", "Wu, Tongshuang", "Neubig, Graham" ]
Better Synthetic Data by Retrieving and Transforming Existing Datasets
findings-acl.385
Poster
2404.14361v3
https://aclanthology.org/2024.findings-acl.386.bib
@inproceedings{xiang-etal-2024-addressing, title = "Addressing Order Sensitivity of In-Context Demonstration Examples in Causal Language Models", author = "Xiang, Yanzheng and Yan, Hanqi and Gui, Lin and He, Yulan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vi...
In-context learning has become a popular paradigm in natural language processing. However, its performance can be significantly influenced by the order of in-context demonstration examples. In this paper, we found that causal language models (CausalLMs) are more sensitive to this order compared to prefix language model...
[ "Xiang, Yanzheng", "Yan, Hanqi", "Gui, Lin", "He, Yulan" ]
Addressing Order Sensitivity of In-Context Demonstration Examples in Causal Language Models
findings-acl.386
Poster
2402.15637v2
https://aclanthology.org/2024.findings-acl.387.bib
@inproceedings{plepi-etal-2024-perspective, title = "Perspective Taking through Generating Responses to Conflict Situations", author = "Plepi, Joan and Welch, Charles and Flek, Lucie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of the...
Although language model performance across diverse tasks continues to improve, these models still struggle to understand and explain the beliefs of other people. This skill requires perspective-taking, the process of conceptualizing the point of view of another person. Perspective taking becomes challenging when the te...
[ "Plepi, Joan", "Welch, Charles", "Flek, Lucie" ]
Perspective Taking through Generating Responses to Conflict Situations
findings-acl.387
Poster
2310.00935v1
https://aclanthology.org/2024.findings-acl.388.bib
@inproceedings{lee-etal-2024-llm2llm, title = "{LLM}2{LLM}: Boosting {LLM}s with Novel Iterative Data Enhancement", author = "Lee, Nicholas and Wattanawong, Thanakul and Kim, Sehoon and Mangalam, Karttikeya and Shen, Sheng and Anumanchipalli, Gopala and Mahoney, Michael...
Pretrained large language models (LLMs) are currently state-of-the-art for solving the vast majority of natural language processing tasks. While many real-world applications still require fine-tuning to reach satisfactory levels of performance, many of them are in the low-data regime, making fine-tuning challenging. To...
[ "Lee, Nicholas", "Wattanawong, Thanakul", "Kim, Sehoon", "Mangalam, Karttikeya", "Shen, Sheng", "Anumanchipalli, Gopala", "Mahoney, Michael", "Keutzer, Kurt", "Gholami, Amir" ]
{LLM}2{LLM}: Boosting {LLM}s with Novel Iterative Data Enhancement
findings-acl.388
Poster
2402.12146v3
https://aclanthology.org/2024.findings-acl.389.bib
@inproceedings{ernst-etal-2024-power, title = "The Power of Summary-Source Alignments", author = "Ernst, Ori and Shapira, Ori and Slobodkin, Aviv and Adar, Sharon and Bansal, Mohit and Goldberger, Jacob and Levy, Ran and Dagan, Ido", editor = "Ku, Lun-Wei an...
Multi-document summarization (MDS) is a challenging task, often decomposed into subtasks of salience and redundancy detection, followed by text generation. In this context, alignment of corresponding sentences between a reference summary and its source documents has been leveraged to generate training data for some of the...
[ "Ernst, Ori", "Shapira, Ori", "Slobodkin, Aviv", "Adar, Sharon", "Bansal, Mohit", "Goldberger, Jacob", "Levy, Ran", "Dagan, Ido" ]
The Power of Summary-Source Alignments
findings-acl.389
Poster
2303.08494v1
https://aclanthology.org/2024.findings-acl.390.bib
@inproceedings{bhatt-etal-2024-experimental, title = "An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models", author = "Bhatt, Gantavya and Chen, Yifang and Das, Arnav and Zhang, Jifan and Truong, Sang and Mussmann, Stephen and ...
Supervised finetuning (SFT) on instruction datasets has played a crucial role in achieving the remarkable zero-shot generalization capabilities observed in modern large language models (LLMs). However, the annotation efforts required to produce high quality responses for instructions are becoming prohibitively expensiv...
[ "Bhatt, Gantavya", "Chen, Yifang", "Das, Arnav", "Zhang, Jifan", "Truong, Sang", "Mussmann, Stephen", "Zhu, Yinglun", "Bilmes, Jeff", "Du, Simon", "Jamieson, Kevin", "Ash, Jordan", "Nowak, Robert" ]
An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models
findings-acl.390
Poster
2407.02770v1
https://aclanthology.org/2024.findings-acl.391.bib
@inproceedings{tian-etal-2024-learning, title = "Learning Multimodal Contrast with Cross-modal Memory and Reinforced Contrast Recognition", author = "Tian, Yuanhe and Xia, Fei and Song, Yan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings...
In many practical scenarios, contents from different modalities are not semantically aligned; for instance, visual and textual information may conflict with each other, resulting in non-compositional expression effects such as irony or humor. Effective modeling and smooth integration of multimodal information are cruci...
[ "Tian, Yuanhe", "Xia, Fei", "Song, Yan" ]
Learning Multimodal Contrast with Cross-modal Memory and Reinforced Contrast Recognition
findings-acl.391
Poster
2401.17032v2
https://aclanthology.org/2024.findings-acl.392.bib
@inproceedings{bahrainian-etal-2024-text, title = "Text Simplification via Adaptive Teaching", author = "Bahrainian, Seyed Ali and Dou, Jonathan and Eickhoff, Carsten", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of the Association for...
Text simplification is the process of rewriting a piece of text using simpler vocabulary and grammatical structure in order to make the text more accessible and understandable for a larger audience. In this paper, we introduce a new text simplification model based on the notion of adaptive teaching using a teacher netw...
[ "Bahrainian, Seyed Ali", "Dou, Jonathan", "Eickhoff, Carsten" ]
Text Simplification via Adaptive Teaching
findings-acl.392
Poster
2305.12463v1
https://aclanthology.org/2024.findings-acl.393.bib
@inproceedings{gokceoglu-etal-2024-multi, title = "A multi-level multi-label text classification dataset of 19th century Ottoman and {R}ussian literary and critical texts", author = {Gokceoglu, Gokcen and {\c{C}}avu{\c{s}}o{\u{g}}lu, Devrim and Akbas, Emre and Dolcerocca, {\"O}zen}, edi...
This paper introduces a multi-level, multi-label text classification dataset comprising over 3000 documents. The dataset features literary and critical texts from 19th-century Ottoman Turkish and Russian. It is the first study to apply large language models (LLMs) to this dataset, sourced from prominent literary period...
[ "Gokceoglu, Gokcen", "{\\c{C}}avu{\\c{s}}o{\\u{g}}lu, Devrim", "Akbas, Emre", "Dolcerocca, {\\\"O}zen" ]
A multi-level multi-label text classification dataset of 19th century Ottoman and {R}ussian literary and critical texts
findings-acl.393
Poster
2407.15136v1
https://aclanthology.org/2024.findings-acl.394.bib
@inproceedings{cabello-akujuobi-2024-simple, title = "It is Simple Sometimes: A Study On Improving Aspect-Based Sentiment Analysis Performance", author = "Cabello, Laura and Akujuobi, Uchenna", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of t...
Aspect-Based Sentiment Analysis (ABSA) involves extracting opinions from textual data about specific entities and their corresponding aspects through various complementary subtasks. Much prior research has focused on developing ad hoc designs of varying complexities for these subtasks. In this paper, we build upon t...
[ "Cabello, Laura", "Akujuobi, Uchenna" ]
It is Simple Sometimes: A Study On Improving Aspect-Based Sentiment Analysis Performance
findings-acl.394
Poster
2010.11731v2
https://aclanthology.org/2024.findings-acl.395.bib
@inproceedings{he-etal-2024-whose, title = "Whose Emotions and Moral Sentiments do Language Models Reflect?", author = "He, Zihao and Guo, Siyi and Rao, Ashwin and Lerman, Kristina", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings o...
Language models (LMs) are known to represent the perspectives of some social groups better than others, which may impact their performance, especially on subjective tasks such as content moderation and hate speech detection. To explore how LMs represent different perspectives, existing research focused on positional al...
[ "He, Zihao", "Guo, Siyi", "Rao, Ashwin", "Lerman, Kristina" ]
Whose Emotions and Moral Sentiments do Language Models Reflect?
findings-acl.395
Poster
2402.11114v2
https://aclanthology.org/2024.findings-acl.396.bib
@inproceedings{wang-etal-2024-llm-achieve, title = "{LLM} can Achieve Self-Regulation via Hyperparameter Aware Generation", author = "Wang, Siyin and Li, Shimin and Sun, Tianxiang and Fu, Jinlan and Cheng, Qinyuan and Ye, Jiasheng and Ye, Junjie and Qiu, Xipeng ...
In the realm of Large Language Models (LLMs), users commonly employ diverse decoding strategies and adjust hyperparameters to control the generated text. However, a critical question emerges: Are LLMs conscious of the existence of these decoding strategies and capable of regulating themselves? The current decoding gene...
[ "Wang, Siyin", "Li, Shimin", "Sun, Tianxiang", "Fu, Jinlan", "Cheng, Qinyuan", "Ye, Jiasheng", "Ye, Junjie", "Qiu, Xipeng", "Huang, Xuanjing" ]
{LLM} can Achieve Self-Regulation via Hyperparameter Aware Generation
findings-acl.396
Poster
2402.11251v1
https://aclanthology.org/2024.findings-acl.397.bib
@inproceedings{jiang-etal-2024-forward, title = "Forward-Backward Reasoning in Large Language Models for Mathematical Verification", author = "Jiang, Weisen and Shi, Han and Yu, Longhui and Liu, Zhengying and Zhang, Yu and Li, Zhenguo and Kwok, James", editor = "Ku,...
Self-Consistency samples diverse reasoning chains with answers and chooses the final answer by majority voting. It is based on forward reasoning and cannot further improve performance by sampling more reasoning chains when saturated. To further boost performance, we introduce backward reasoning to verify candidate answ...
[ "Jiang, Weisen", "Shi, Han", "Yu, Longhui", "Liu, Zhengying", "Zhang, Yu", "Li, Zhenguo", "Kwok, James" ]
Forward-Backward Reasoning in Large Language Models for Mathematical Verification
findings-acl.397
Poster
2405.16802v3
https://aclanthology.org/2024.findings-acl.398.bib
@inproceedings{han-etal-2024-towards, title = "Towards Uncertainty-Aware Language Agent", author = "Han, Jiuzhou and Buntine, Wray and Shareghi, Ehsan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of the Association for Computational L...
While Language Agents have achieved promising success by placing Large Language Models at the core of a more versatile design that dynamically interacts with the external world, the existing approaches neglect the notion of uncertainty during these interactions. We present the Uncertainty-Aware Language Agent (UALA), a...
[ "Han, Jiuzhou", "Buntine, Wray", "Shareghi, Ehsan" ]
Towards Uncertainty-Aware Language Agent
findings-acl.398
Poster
2404.05337v1
https://aclanthology.org/2024.findings-acl.399.bib
@inproceedings{lin-etal-2024-detection, title = "Detection and Positive Reconstruction of Cognitive Distortion Sentences: {M}andarin Dataset and Evaluation", author = "Lin, Shuya and Wang, Yuxiong and Dong, Jonathan and Ni, Shiguang", editor = "Ku, Lun-Wei and Martins, Andre and...
This research introduces a Positive Reconstruction Framework based on positive psychology theory. Overcoming negative thoughts can be challenging; our objective is to address and reframe them through a positive reinterpretation. To tackle this challenge, a two-fold approach is necessary: identifying cognitive distortio...
[ "Lin, Shuya", "Wang, Yuxiong", "Dong, Jonathan", "Ni, Shiguang" ]
Detection and Positive Reconstruction of Cognitive Distortion Sentences: {M}andarin Dataset and Evaluation
findings-acl.399
Poster
2405.15334v1
https://aclanthology.org/2024.findings-acl.400.bib
@inproceedings{han-etal-2024-pive, title = "{P}i{V}e: Prompting with Iterative Verification Improving Graph-based Generative Capability of {LLM}s", author = "Han, Jiuzhou and Collier, Nigel and Buntine, Wray and Shareghi, Ehsan", editor = "Ku, Lun-Wei and Martins, Andre and ...
Large language models (LLMs) have shown great abilities of solving various natural language tasks in different domains. Due to the training objective of LLMs and their pre-training data, LLMs are not very well equipped for tasks involving structured data generation. We propose a framework, Prompting with Iterative Veri...
[ "Han, Jiuzhou", "Collier, Nigel", "Buntine, Wray", "Shareghi, Ehsan" ]
{P}i{V}e: Prompting with Iterative Verification Improving Graph-based Generative Capability of {LLM}s
findings-acl.400
Poster
2305.12392v3
https://aclanthology.org/2024.findings-acl.401.bib
@inproceedings{gao-etal-2024-two, title = "Two-stage Generative Question Answering on Temporal Knowledge Graph Using Large Language Models", author = "Gao, Yifu and Qiao, Linbo and Kan, Zhigang and Wen, Zhihua and He, Yongquan and Li, Dongsheng", editor = "Ku, Lun-Wei and...
Temporal knowledge graph question answering (TKGQA) poses a significant challenge task, due to the temporal constraints hidden in questions and the answers sought from dynamic structured knowledge. Although large language models (LLMs) have made considerable progress in their reasoning ability over structured data, the...
[ "Gao, Yifu", "Qiao, Linbo", "Kan, Zhigang", "Wen, Zhihua", "He, Yongquan", "Li, Dongsheng" ]
Two-stage Generative Question Answering on Temporal Knowledge Graph Using Large Language Models
findings-acl.401
Poster
2402.16568v2
https://aclanthology.org/2024.findings-acl.402.bib
@inproceedings{akter-etal-2024-visreas, title = "{VISREAS}: Complex Visual Reasoning with Unanswerable Questions", author = "Akter, Syeda Nahida and Lee, Sangwu and Chang, Yingshan and Bisk, Yonatan and Nyberg, Eric", editor = "Ku, Lun-Wei and Martins, Andre and Sri...
Verifying a question{'}s validity before answering is crucial in real-world applications, where users may provide imperfect instructions. In this scenario, an ideal model should address the discrepancies in the query and convey them to the users rather than generating the best possible answer. Addressing this requireme...
[ "Akter, Syeda Nahida", "Lee, Sangwu", "Chang, Yingshan", "Bisk, Yonatan", "Nyberg, Eric" ]
{VISREAS}: Complex Visual Reasoning with Unanswerable Questions
findings-acl.402
Poster
2212.10189v2
https://aclanthology.org/2024.findings-acl.403.bib
@inproceedings{hu-etal-2024-unified, title = "A Unified Generative Framework for Bilingual Euphemism Detection and Identification", author = "Hu, Yuxue and Li, Junsong and Wang, Tongguan and Su, Dongyu and Su, Guixin and Sha, Ying", editor = "Ku, Lun-Wei and Martins...
Various euphemisms are emerging in social networks, attracting widespread attention from the natural language processing community. However, existing euphemism datasets are only domain-specific or language-specific. In addition, existing approaches to the study of euphemisms are one-sided. Either only the euphemism det...
[ "Hu, Yuxue", "Li, Junsong", "Wang, Tongguan", "Su, Dongyu", "Su, Guixin", "Sha, Ying" ]
A Unified Generative Framework for Bilingual Euphemism Detection and Identification
findings-acl.403
Poster
2103.16808v1
https://aclanthology.org/2024.findings-acl.404.bib
@inproceedings{cong-etal-2024-styledubber, title = "{S}tyle{D}ubber: Towards Multi-Scale Style Learning for Movie Dubbing", author = "Cong, Gaoxiang and Qi, Yuankai and Li, Liang and Beheshti, Amin and Zhang, Zhedong and Hengel, Anton and Yang, Ming-Hsuan and Yan...
Given a script, the challenge in Movie Dubbing (Visual Voice Cloning, V2C) is to generate speech that aligns well with the video in both time and emotion, based on the tone of a reference audio track. Existing state-of-the-art V2C models break the phonemes in the script according to the divisions between video frames, ...
[ "Cong, Gaoxiang", "Qi, Yuankai", "Li, Liang", "Beheshti, Amin", "Zhang, Zhedong", "Hengel, Anton", "Yang, Ming-Hsuan", "Yan, Chenggang", "Huang, Qingming" ]
{S}tyle{D}ubber: Towards Multi-Scale Style Learning for Movie Dubbing
findings-acl.404
Poster
2402.12636v3
https://aclanthology.org/2024.findings-acl.405.bib
@inproceedings{yang-liu-2024-etas, title = "{ETAS}: Zero-Shot Transformer Architecture Search via Network Trainability and Expressivity", author = "Yang, Jiechao and Liu, Yong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of the Association f...
Transformer Architecture Search (TAS) methods aim to automate searching for the optimal Transformer architecture configurations for a given task. However, they are impeded by the prohibitive cost of evaluating Transformer architectures. Recently, several Zero-Shot TAS methods have been proposed to mitigate this problem...
[ "Yang, Jiechao", "Liu, Yong" ]
{ETAS}: Zero-Shot Transformer Architecture Search via Network Trainability and Expressivity
findings-acl.405
Poster
1809.02209v2
https://aclanthology.org/2024.findings-acl.406.bib
@inproceedings{xu-etal-2024-reasoning, title = "Reasoning Like a Doctor: Improving Medical Dialogue Systems via Diagnostic Reasoning Process Alignment", author = "Xu, Kaishuai and Cheng, Yi and Hou, Wenjun and Tan, Qiaoyu and Li, Wenjie", editor = "Ku, Lun-Wei and Martins,...
Medical dialogue systems have attracted significant attention for their potential to act as medical assistants. Enabling these medical systems to emulate clinicians{'} diagnostic reasoning process has been the long-standing research focus. Previous studies rudimentarily realized the simulation of clinicians{'} diagnost...
[ "Xu, Kaishuai", "Cheng, Yi", "Hou, Wenjun", "Tan, Qiaoyu", "Li, Wenjie" ]
Reasoning Like a Doctor: Improving Medical Dialogue Systems via Diagnostic Reasoning Process Alignment
findings-acl.406
Poster
2406.13934v1
https://aclanthology.org/2024.findings-acl.407.bib
@inproceedings{wu-etal-2024-conceptmath, title = "{C}oncept{M}ath: A Bilingual Concept-wise Benchmark for Measuring Mathematical Reasoning of Large Language Models", author = "Wu, Yanan and Liu, Jie and Bu, Xingyuan and Liu, Jiaheng and Zhou, Zhanhui and Zhang, Yuanxing and ...
This paper introduces ConceptMath, a bilingual (English and Chinese), fine-grained benchmark that evaluates concept-wise mathematical reasoning of Large Language Models (LLMs). Unlike traditional benchmarks that evaluate general mathematical reasoning with an average accuracy, ConceptMath systemically organizes math pr...
[ "Wu, Yanan", "Liu, Jie", "Bu, Xingyuan", "Liu, Jiaheng", "Zhou, Zhanhui", "Zhang, Yuanxing", "Zhang, Chenchen", "ZhiqiBai, ZhiqiBai", "Chen, Haibin", "Ge, Tiezheng", "Ouyang, Wanli", "Su, Wenbo", "Zheng, Bo" ]
{C}oncept{M}ath: A Bilingual Concept-wise Benchmark for Measuring Mathematical Reasoning of Large Language Models
findings-acl.407
Poster
2402.14660v2
https://aclanthology.org/2024.findings-acl.408.bib
@inproceedings{chen-etal-2024-reinstruct, title = "{REI}nstruct: Building Instruction Data from Unlabeled Corpus", author = "Chen, Shu and Guan, Xinyan and Lu, Yaojie and Lin, Hongyu and Han, Xianpei and Sun, Le", editor = "Ku, Lun-Wei and Martins, Andre and ...
Manually annotating instruction data for large language models is difficult, costly, and hard to scale. Meanwhile, current automatic annotation methods typically rely on distilling synthetic data from proprietary LLMs, which not only limits the upper bound of the quality of the instruction data but also raises potentia...
[ "Chen, Shu", "Guan, Xinyan", "Lu, Yaojie", "Lin, Hongyu", "Han, Xianpei", "Sun, Le" ]
{REI}nstruct: Building Instruction Data from Unlabeled Corpus
findings-acl.408
Poster
2210.09175v1
https://aclanthology.org/2024.findings-acl.409.bib
@inproceedings{chen-etal-2024-learning-maximize, title = "Learning to Maximize Mutual Information for Chain-of-Thought Distillation", author = "Chen, Xin and Huang, Hanxian and Gao, Yanjun and Wang, Yi and Zhao, Jishen and Ding, Ke", editor = "Ku, Lun-Wei and Martin...
Knowledge distillation, the technique of transferring knowledge from large, complex models to smaller ones, marks a pivotal step towards efficient AI deployment. Distilling Step-by-Step (DSS), a novel method utilizing chain-of-thought (CoT) distillation, has demonstrated promise by imbuing smaller models with the super...
[ "Chen, Xin", "Huang, Hanxian", "Gao, Yanjun", "Wang, Yi", "Zhao, Jishen", "Ding, Ke" ]
Learning to Maximize Mutual Information for Chain-of-Thought Distillation
findings-acl.409
Poster
2403.03348v3
https://aclanthology.org/2024.findings-acl.410.bib
@inproceedings{lin-etal-2024-pemt, title = "{PEMT}: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning", author = "Lin, Zhisheng and Fu, Han and Liu, Chenghao and Li, Zhuo and Sun, Jianling", editor = "Ku, Lun-Wei and Martins, An...
Parameter-efficient fine-tuning (PEFT) has emerged as an effective method for adapting pre-trained language models to various tasks efficiently. Recently, there has been a growing interest in transferring knowledge from one or multiple tasks to the downstream target task to achieve performance improvements. However, cu...
[ "Lin, Zhisheng", "Fu, Han", "Liu, Chenghao", "Li, Zhuo", "Sun, Jianling" ]
{PEMT}: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning
findings-acl.410
Poster
2303.16154v1
https://aclanthology.org/2024.findings-acl.411.bib
@inproceedings{liu-etal-2024-mathbench, title = "{M}ath{B}ench: Evaluating the Theory and Application Proficiency of {LLM}s with a Hierarchical Mathematics Benchmark", author = "Liu, Hongwei and Zheng, Zilong and Qiao, Yuxuan and Duan, Haodong and Fei, Zhiwei and Zhou, Fengzhe...
Recent advancements in large language models (LLMs) have showcased significant improvements in mathematics. However, traditional math benchmarks like GSM8k offer a unidimensional perspective, which fall short in providing a holistic assessment of the LLMs{'} math capabilities. To address this gap, we introduce MathBenc...
[ "Liu, Hongwei", "Zheng, Zilong", "Qiao, Yuxuan", "Duan, Haodong", "Fei, Zhiwei", "Zhou, Fengzhe", "Zhang, Wenwei", "Zhang, Songyang", "Lin, Dahua", "Chen, Kai" ]
{M}ath{B}ench: Evaluating the Theory and Application Proficiency of {LLM}s with a Hierarchical Mathematics Benchmark
findings-acl.411
Poster
2405.12209v1
https://aclanthology.org/2024.findings-acl.412.bib
@inproceedings{ren-etal-2024-identifying, title = "Identifying Semantic Induction Heads to Understand In-Context Learning", author = "Ren, Jie and Guo, Qipeng and Yan, Hang and Liu, Dongrui and Zhang, Quanshi and Qiu, Xipeng and Lin, Dahua", editor = "Ku, Lun-Wei a...
Although large language models (LLMs) have demonstrated remarkable performance, the lack of transparency in their inference logic raises concerns about their trustworthiness. To gain a better understanding of LLMs, we conduct a detailed analysis of the operations of attention heads and aim to better understand the in-c...
[ "Ren, Jie", "Guo, Qipeng", "Yan, Hang", "Liu, Dongrui", "Zhang, Quanshi", "Qiu, Xipeng", "Lin, Dahua" ]
Identifying Semantic Induction Heads to Understand In-Context Learning
findings-acl.412
Poster
2402.13055v2
https://aclanthology.org/2024.findings-acl.413.bib
@inproceedings{jiang-etal-2024-chinese, title = "{C}hinese Spelling Corrector Is Just a Language Learner", author = "Jiang, Lai and Wu, Hongqiu and Zhao, Hai and Zhang, Min", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of the As...
This paper emphasizes Chinese spelling correction by means of self-supervised learning, which means there are no annotated errors within the training data. Our intuition is that humans are naturally good correctors with exposure to error-free sentences, which contrasts with current unsupervised methods that strongly re...
[ "Jiang, Lai", "Wu, Hongqiu", "Zhao, Hai", "Zhang, Min" ]
{C}hinese Spelling Corrector Is Just a Language Learner
findings-acl.413
Poster
2004.14166v2
https://aclanthology.org/2024.findings-acl.414.bib
@inproceedings{wu-etal-2024-logical, title = "Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models", author = "Wu, Junfei and Liu, Qiang and Wang, Ding and Zhang, Jinghao and Wu, Shu and Wang, Liang and Tan, Tieniu", editor = "Ku, Lu...
Object hallucination has been an Achilles{'} heel which hinders the broader applications of large vision-language models (LVLMs). Object hallucination refers to the phenomenon that the LVLMs claim non-existent objects in the image. To mitigate the object hallucinations, instruction tuning and external model-based detec...
[ "Wu, Junfei", "Liu, Qiang", "Wang, Ding", "Zhang, Jinghao", "Wu, Shu", "Wang, Liang", "Tan, Tieniu" ]
Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models
findings-acl.414
Poster
2402.11622v2
https://aclanthology.org/2024.findings-acl.415.bib
@inproceedings{zhang-etal-2024-retrievalqa, title = "{R}etrieval{QA}: Assessing Adaptive Retrieval-Augmented Generation for Short-form Open-Domain Question Answering", author = "Zhang, Zihan and Fang, Meng and Chen, Ling", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vi...
Adaptive retrieval-augmented generation (ARAG) aims to dynamically determine the necessity of retrieval for queries instead of retrieving indiscriminately to enhance the efficiency and relevance of the sourced information. However, previous works largely overlook the evaluation of ARAG approaches, leading to their effe...
[ "Zhang, Zihan", "Fang, Meng", "Chen, Ling" ]
{R}etrieval{QA}: Assessing Adaptive Retrieval-Augmented Generation for Short-form Open-Domain Question Answering
findings-acl.415
Poster
2209.11396v1
https://aclanthology.org/2024.findings-acl.416.bib
@inproceedings{chen-etal-2024-llast, title = "{LL}a{ST}: Improved End-to-end Speech Translation System Leveraged by Large Language Models", author = "Chen, Xi and Zhang, Songyang and Bai, Qibing and Chen, Kai and Nakamura, Satoshi", editor = "Ku, Lun-Wei and Martins, Andre...
We introduce ***LLaST***, a framework for building high-performance Large Language model based Speech-to-text Translation systems. We address the limitations of end-to-end speech translation (E2E ST) models by exploring model architecture design and optimization techniques tailored for LLMs. Our approach includes LLM-...
[ "Chen, Xi", "Zhang, Songyang", "Bai, Qibing", "Chen, Kai", "Nakamura, Satoshi" ]
{LL}a{ST}: Improved End-to-end Speech Translation System Leveraged by Large Language Models
findings-acl.416
Poster
2107.06959v2
https://aclanthology.org/2024.findings-acl.417.bib
@inproceedings{gu-yang-2024-plan, title = "Plan, Generate and Complicate: Improving Low-resource Dialogue State Tracking via Easy-to-Difficult Zero-shot Data Augmentation", author = "Gu, Ming and Yang, Yan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = ...
Data augmentation methods have been a promising direction to improve the performance of small models for low-resource dialogue state tracking. However, traditional methods rely on pre-defined user goals and neglect the importance of data complexity in this task. In this paper, we propose EDZ-DA, an Easy-to-Difficult Ze...
[ "Gu, Ming", "Yang, Yan" ]
Plan, Generate and Complicate: Improving Low-resource Dialogue State Tracking via Easy-to-Difficult Zero-shot Data Augmentation
findings-acl.417
Poster
2406.08860v1
https://aclanthology.org/2024.findings-acl.418.bib
@inproceedings{quan-2024-dmoerm, title = "{DM}o{ERM}: Recipes of Mixture-of-Experts for Effective Reward Modeling", author = "Quan, Shanghaoran", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of the Association for Computational Linguistics ACL 2024",...
The performance of the reward model (RM) is a critical factor in improving the effectiveness of the large language model (LLM) during alignment fine-tuning. There remain two challenges in RM training: 1) training the same RM using various categories of data may cause its generalization performance to suffer from multi-...
[ "Quan, Shanghaoran" ]
{DM}o{ERM}: Recipes of Mixture-of-Experts for Effective Reward Modeling
findings-acl.418
Poster
2407.04185v2
https://aclanthology.org/2024.findings-acl.419.bib
@inproceedings{yamada-ri-2024-leia, title = "{LEIA}: Facilitating Cross-lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation", author = "Yamada, Ikuya and Ri, Ryokan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings o...
Adapting English-based large language models (LLMs) to other languages has become increasingly popular due to the efficiency and potential of cross-lingual transfer. However, existing language adaptation methods often overlook the benefits of cross-lingual supervision. In this study, we introduce LEIA, a language adapt...
[ "Yamada, Ikuya", "Ri, Ryokan" ]
{LEIA}: Facilitating Cross-lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation
findings-acl.419
Poster
2309.12763v2
https://aclanthology.org/2024.findings-acl.420.bib
@inproceedings{chen-etal-2024-comments, title = "Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective", author = "Chen, Yijie and Liu, Yijin and Meng, Fandong and Chen, Yufeng and Xu, Jinan and Zhou, Jie", editor = "Ku, Lun-Wei and Marti...
Code generation aims to understand the problem description and generate corresponding code snippets, where existing works generally decompose such complex tasks into intermediate steps by prompting strategies, such as Chain-of-Thought and its variants. While these studies have achieved some success, their effectiveness...
[ "Chen, Yijie", "Liu, Yijin", "Meng, F", "ong", "Chen, Yufeng", "Xu, Jinan", "Zhou, Jie" ]
Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective
findings-acl.420
Poster
2404.07549v1
https://aclanthology.org/2024.findings-acl.421.bib
@inproceedings{dai-etal-2024-cocktail, title = "Cocktail: A Comprehensive Information Retrieval Benchmark with {LLM}-Generated Documents Integration", author = "Dai, Sunhao and Liu, Weihao and Zhou, Yuqi and Pang, Liang and Ruan, Rongju and Wang, Gang and Dong, Zhenhua ...
The proliferation of Large Language Models (LLMs) has led to an influx of AI-generated content (AIGC) on the internet, transforming the corpus of Information Retrieval (IR) systems from solely human-written to a coexistence with LLM-generated content. The impact of this surge in AIGC on IR systems remains an open quest...
[ "Dai, Sunhao", "Liu, Weihao", "Zhou, Yuqi", "Pang, Liang", "Ruan, Rongju", "Wang, Gang", "Dong, Zhenhua", "Xu, Jun", "Wen, Ji-Rong" ]
Cocktail: A Comprehensive Information Retrieval Benchmark with {LLM}-Generated Documents Integration
findings-acl.421
Poster
2405.16546v2
https://aclanthology.org/2024.findings-acl.422.bib
@inproceedings{feng-etal-2024-continual, title = "Continual Dialogue State Tracking via Reason-of-Select Distillation", author = "Feng, Yujie and Liu, Bo and Dong, Xiaoyu and Lu, Zexin and Zhan, Li-Ming and Wu, Xiao-Ming and Lam, Albert", editor = "Ku, Lun-Wei and ...
An ideal dialogue system requires continuous skill acquisition and adaptation to new tasks while retaining prior knowledge. Dialogue State Tracking (DST), vital in these systems, often involves learning new services, confronting catastrophic forgetting and a critical capability loss termed the {``}Value Selection Quand...
[ "Feng, Yujie", "Liu, Bo", "Dong, Xiaoyu", "Lu, Zexin", "Zhan, Li-Ming", "Wu, Xiao-Ming", "Lam, Albert" ]
Continual Dialogue State Tracking via Reason-of-Select Distillation
findings-acl.422
Poster
2302.08220v2
https://aclanthology.org/2024.findings-acl.423.bib
@inproceedings{li-etal-2024-spotting, title = "Spotting {AI}{'}s Touch: Identifying {LLM}-Paraphrased Spans in Text", author = "Li, Yafu and Wang, Zhilin and Cui, Leyang and Bi, Wei and Shi, Shuming and Zhang, Yue", editor = "Ku, Lun-Wei and Martins, Andre and ...
AI-generated text detection has attracted increasing attention as powerful language models approach human-level generation. Limited work is devoted to detecting (partially) AI-paraphrased texts. However, AI paraphrasing is commonly employed in various application scenarios for text refinement and diversity. To this end...
[ "Li, Yafu", "Wang, Zhilin", "Cui, Leyang", "Bi, Wei", "Shi, Shuming", "Zhang, Yue" ]
Spotting {AI}{'}s Touch: Identifying {LLM}-Paraphrased Spans in Text
findings-acl.423
Poster
2405.12689v2
https://aclanthology.org/2024.findings-acl.424.bib
@inproceedings{lu-etal-2024-sofa, title = "{S}o{FA}: Shielded On-the-fly Alignment via Priority Rule Following", author = "Lu, Xinyu and Yu, Bowen and Lu, Yaojie and Lin, Hongyu and Yu, Haiyang and Sun, Le and Han, Xianpei and Li, Yongbin", editor = "Ku, Lun-...
The alignment problem in Large Language Models (LLMs) involves adapting them to the broad spectrum of human values. This requirement challenges existing alignment methods due to diversity of preferences and regulatory standards. This paper introduces a novel alignment paradigm, priority rule following, which defines ru...
[ "Lu, Xinyu", "Yu, Bowen", "Lu, Yaojie", "Lin, Hongyu", "Yu, Haiyang", "Sun, Le", "Han, Xianpei", "Li, Yongbin" ]
{S}o{FA}: Shielded On-the-fly Alignment via Priority Rule Following
findings-acl.424
Poster
2402.17358v1
https://aclanthology.org/2024.findings-acl.425.bib
@inproceedings{goldstein-stanovsky-2024-zombies, title = "Do Zombies Understand? A Choose-Your-Own-Adventure Exploration of Machine Cognition", author = "Goldstein, Ariel and Stanovsky, Gabriel", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of...
Recent advances in LLMs have sparked a debate on whether they understand text. In this position paper, we argue that opponents in this debate hold different definitions for understanding, and particularly differ in their view on the role of consciousness. To substantiate this claim, we propose a thought experiment invo...
[ "Goldstein, Ariel", "Stanovsky, Gabriel" ]
Do Zombies Understand? A Choose-Your-Own-Adventure Exploration of Machine Cognition
findings-acl.425
Poster
2403.00499v2
https://aclanthology.org/2024.findings-acl.426.bib
@inproceedings{christ-etal-2024-modeling, title = "Modeling Emotional Trajectories in Written Stories Utilizing Transformers and Weakly-Supervised Learning", author = {Christ, Lukas and Amiriparian, Shahin and Milling, Manuel and Aslan, Ilhan and Schuller, Bj{\"o}rn}, editor = "K...
Telling stories is an integral part of human communication which can evoke emotions and influence the affective states of the audience. Automatically modeling emotional trajectories in stories has thus attracted considerable scholarly interest. However, as most existing works have been limited to unsupervised dictionar...
[ "Christ, Lukas", "Amiriparian, Shahin", "Milling, Manuel", "Aslan, Ilhan", "Schuller, Bj{\\\"o}rn" ]
Modeling Emotional Trajectories in Written Stories Utilizing Transformers and Weakly-Supervised Learning
findings-acl.426
Poster
2406.02251v1
https://aclanthology.org/2024.findings-acl.427.bib
@inproceedings{cao-etal-2024-rap, title = "{RAP}: Efficient Text-Video Retrieval with Sparse-and-Correlated Adapter", author = "Cao, Meng and Tang, Haoran and Huang, Jinfa and Jin, Peng and Zhang, Can and Liu, Ruyang and Chen, Long and Liang, Xiaodan and Y...
Text-Video Retrieval (TVR) aims to align relevant video content with natural language queries. To date, most of the state-of-the-art TVR methods learn image-to-video transfer learning based on the large-scale pre-trained vision-language models (e.g., CLIP). However, fully fine-tuning these pre-trained models for TVR in...
[ "Cao, Meng", "Tang, Haoran", "Huang, Jinfa", "Jin, Peng", "Zhang, Can", "Liu, Ruyang", "Chen, Long", "Liang, Xiaodan", "Yuan, Li", "Li, Ge" ]
{RAP}: Efficient Text-Video Retrieval with Sparse-and-Correlated Adapter
findings-acl.427
Poster
2303.13220v1
https://aclanthology.org/2024.findings-acl.428.bib
@inproceedings{wang-etal-2024-benchmarking, title = "Benchmarking and Improving Long-Text Translation with Large Language Models", author = "Wang, Longyue and Du, Zefeng and Jiao, Wenxiang and Lyu, Chenyang and Pang, Jianhui and Cui, Leyang and Song, Kaiqiang and ...
Recent studies have illuminated the promising capabilities of large language models (LLMs) in handling long texts. However, their performance in machine translation (MT) of long documents remains underexplored. This paper aims to shed light on how LLMs navigate this complex task, offering a comprehensive evaluation of ...
[ "Wang, Longyue", "Du, Zefeng", "Jiao, Wenxiang", "Lyu, Chenyang", "Pang, Jianhui", "Cui, Leyang", "Song, Kaiqiang", "Wong, Derek", "Shi, Shuming", "Tu, Zhaopeng" ]
Benchmarking and Improving Long-Text Translation with Large Language Models
findings-acl.428
Poster
2405.04164v1
https://aclanthology.org/2024.findings-acl.429.bib
@inproceedings{fan-etal-2024-personalized, title = "Personalized Topic Selection Model for Topic-Grounded Dialogue", author = "Fan, Shixuan and Wei, Wei and Wen, Xiaofei and Mao, Xian-Ling and Chen, Jixiong and Chen, Dangyang", editor = "Ku, Lun-Wei and Martins, And...
Recently, the topic-grounded dialogue (TGD) system has become increasingly popular as its powerful capability to actively guide users to accomplish specific tasks through topic-guided conversations. Most existing works utilize side information (e.g. topics or personas) in isolation to enhance the topic selection abilit...
[ "Fan, Shixuan", "Wei, Wei", "Wen, Xiaofei", "Mao, Xian-Ling", "Chen, Jixiong", "Chen, Dangyang" ]
Personalized Topic Selection Model for Topic-Grounded Dialogue
findings-acl.429
Poster
2406.01988v1
https://aclanthology.org/2024.findings-acl.430.bib
@inproceedings{li-etal-2024-debiasing, title = "Debiasing In-Context Learning by Instructing {LLM}s How to Follow Demonstrations", author = "Li, Lvxue and Chen, Jiaqi and Lu, Xinyu and Lu, Yaojie and Lin, Hongyu and Zhou, Shuheng and Zhu, Huijia and Wang, Weiqian...
In-context learning (ICL) has gained considerable attention due to its data efficiency and task adaptability. Unfortunately, ICL suffers from the demonstration bias, i.e., its performance and robustness are severely affected by the selection and ordering of demonstrations. In this paper, we identify that such demonstrat...
[ "Li, Lvxue", "Chen, Jiaqi", "Lu, Xinyu", "Lu, Yaojie", "Lin, Hongyu", "Zhou, Shuheng", "Zhu, Huijia", "Wang, Weiqiang", "Liu, Zhongyi", "Han, Xianpei", "Sun, Le" ]
Debiasing In-Context Learning by Instructing {LLM}s How to Follow Demonstrations
findings-acl.430
Poster
2407.02030v1
https://aclanthology.org/2024.findings-acl.431.bib
@inproceedings{vlachos-etal-2024-comparing, title = "Comparing Data Augmentation Methods for End-to-End Task-Oriented Dialog Systems", author = "Vlachos, Christos and Stafylakis, Themos and Androutsopoulos, Ion", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", ...
Creating effective and reliable task-oriented dialog systems (ToDSs) is challenging, not only because of the complex structure of these systems, but also due to the scarcity of training data, especially when several modules need to be trained separately, each one with its own input/output training examples. Data augmen...
[ "Vlachos, Christos", "Stafylakis, Themos", "Androutsopoulos, Ion" ]
Comparing Data Augmentation Methods for End-to-End Task-Oriented Dialog Systems
findings-acl.431
Poster
2310.10380v1
https://aclanthology.org/2024.findings-acl.432.bib
@inproceedings{ma-etal-2024-ms2sl, title = "{MS}2{SL}: Multimodal Spoken Data-Driven Continuous Sign Language Production", author = "Ma, Jian and Wang, Wenguan and Yang, Yi and Zheng, Feng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Fi...
Sign language understanding has made significant strides; however, there is still no viable solution for generating sign sequences directly from entire spoken content, e.g., text or speech. In this paper, we propose a unified framework for continuous sign language production, easing communication between sign and non-si...
[ "Ma, Jian", "Wang, Wenguan", "Yang, Yi", "Zheng, Feng" ]
{MS}2{SL}: Multimodal Spoken Data-Driven Continuous Sign Language Production
findings-acl.432
Poster
2407.12842v1
https://aclanthology.org/2024.findings-acl.433.bib
@inproceedings{zhao-etal-2024-bba, title = "{BBA}: Bi-Modal Behavioral Alignment for Reasoning with Large Vision-Language Models", author = "Zhao, Xueliang and Huang, Xinting and Fu, Tingchen and Li, Qintong and Gong, Shansan and Liu, Lemao and Bi, Wei and Kong, ...
Multimodal reasoning stands as a pivotal capability for large vision-language models (LVLMs). The integration with Domain-Specific Languages (DSL), offering precise visual representations, equips these models with the opportunity to execute more accurate reasoning in complex and professional domains. However, the vanil...
[ "Zhao, Xueliang", "Huang, Xinting", "Fu, Tingchen", "Li, Qintong", "Gong, Shansan", "Liu, Lemao", "Bi, Wei", "Kong, Lingpeng" ]
{BBA}: Bi-Modal Behavioral Alignment for Reasoning with Large Vision-Language Models
findings-acl.433
Poster
2311.10947v2
https://aclanthology.org/2024.findings-acl.434.bib
@inproceedings{zheng-etal-2024-partialformer, title = "{P}artial{F}ormer: Modeling Part Instead of Whole for Machine Translation", author = "Zheng, Tong and Li, Bei and Bao, Huiwen and Wang, Jiale and Shan, Weiqiao and Xiao, Tong and Zhu, JingBo", editor = "Ku, Lun-...
The design choices in Transformer feed-forward neural networks have resulted in significant computational and parameter overhead. In this work, we emphasize the importance of hidden dimensions in designing lightweight FFNs, a factor often overlooked in previous architectures. Guided by this principle, we introduce Part...
[ "Zheng, Tong", "Li, Bei", "Bao, Huiwen", "Wang, Jiale", "Shan, Weiqiao", "Xiao, Tong", "Zhu, JingBo" ]
{P}artial{F}ormer: Modeling Part Instead of Whole for Machine Translation
findings-acl.434
Poster
2310.14921v2
https://aclanthology.org/2024.findings-acl.435.bib
@inproceedings{kim-etal-2024-self-consistent, title = "Self-Consistent Reasoning-based Aspect-Sentiment Quad Prediction with Extract-Then-Assign Strategy", author = "Kim, Jieyong and Heo, Ryang and Seo, Yongsik and Kang, SeongKu and Yeo, Jinyoung and Lee, Dongha", editor =...
In the task of aspect sentiment quad prediction (ASQP), generative methods for predicting sentiment quads have shown promising results. However, they still suffer from imprecise predictions and limited interpretability, caused by data scarcity and inadequate modeling of the quadruplet composition process. In this paper,...
[ "Kim, Jieyong", "Heo, Ryang", "Seo, Yongsik", "Kang, SeongKu", "Yeo, Jinyoung", "Lee, Dongha" ]
Self-Consistent Reasoning-based Aspect-Sentiment Quad Prediction with Extract-Then-Assign Strategy
findings-acl.435
Poster
2403.00354v2
https://aclanthology.org/2024.findings-acl.436.bib
@inproceedings{dong-etal-2024-pace, title = "{PACE}: Improving Prompt with Actor-Critic Editing for Large Language Model", author = "Dong, Yihong and Luo, Kangcheng and Jiang, Xue and Jin, Zhi and Li, Ge", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek...
Large language models (LLMs) have showcased remarkable potential across various tasks by conditioning on prompts. However, the quality of different human-written prompts leads to substantial discrepancies in LLMs{'} performance, and improving prompts usually necessitates considerable human effort and expertise. To this...
[ "Dong, Yihong", "Luo, Kangcheng", "Jiang, Xue", "Jin, Zhi", "Li, Ge" ]
{PACE}: Improving Prompt with Actor-Critic Editing for Large Language Model
findings-acl.436
Poster
2308.10088v2
https://aclanthology.org/2024.findings-acl.437.bib
@inproceedings{xu-etal-2024-penetrative, title = "Penetrative {AI}: Making {LLM}s Comprehend the Physical World", author = "Xu, Huatao and Han, Liying and Yang, Qirui and Li, Mo and Srivastava, Mani", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", ...
Recent developments in Large Language Models (LLMs) have demonstrated their remarkable capabilities across a range of tasks. Questions, however, persist about the nature of LLMs and their potential to integrate common-sense human knowledge when performing tasks involving information about the real physical world. This ...
[ "Xu, Huatao", "Han, Liying", "Yang, Qirui", "Li, Mo", "Srivastava, Mani" ]
Penetrative {AI}: Making {LLM}s Comprehend the Physical World
findings-acl.437
Poster
2310.09605v3
https://aclanthology.org/2024.findings-acl.438.bib
@inproceedings{zhang-etal-2024-impact, title = "The Impact of Demonstrations on Multilingual In-Context Learning: A Multidimensional Analysis", author = "Zhang, Miaoran and Gautam, Vagrant and Wang, Mingyang and Alabi, Jesujoba and Shen, Xiaoyu and Klakow, Dietrich and ...
In-context learning is a popular inference strategy where large language models solve a task using only a few labeled demonstrations without needing any parameter updates. Although there have been extensive studies on English in-context learning, multilingual in-context learning remains under-explored, and we lack an i...
[ "Zhang, Miaoran", "Gautam, Vagrant", "Wang, Mingyang", "Alabi, Jesujoba", "Shen, Xiaoyu", "Klakow, Dietrich", "Mosbach, Marius" ]
The Impact of Demonstrations on Multilingual In-Context Learning: A Multidimensional Analysis
findings-acl.438
Poster
2402.12976v2
https://aclanthology.org/2024.findings-acl.439.bib
@inproceedings{dong-etal-2024-rich, title = "Rich Semantic Knowledge Enhanced Large Language Models for Few-shot {C}hinese Spell Checking", author = "Dong, Ming and Chen, Yujing and Zhang, Miao and Sun, Hao and He, Tingting", editor = "Ku, Lun-Wei and Martins, Andre and ...
Chinese Spell Checking (CSC) is a widely used technology, which plays a vital role in speech to text (STT) and optical character recognition (OCR). Most of the existing CSC approaches relying on BERT architecture achieve excellent performance. However, limited by the scale of the foundation model, BERT-based method doe...
[ "Dong, Ming", "Chen, Yujing", "Zhang, Miao", "Sun, Hao", "He, Tingting" ]
Rich Semantic Knowledge Enhanced Large Language Models for Few-shot {C}hinese Spell Checking
findings-acl.439
Poster
2403.08492v2
https://aclanthology.org/2024.findings-acl.440.bib
@inproceedings{chitale-etal-2024-empirical, title = "An Empirical Study of In-context Learning in {LLM}s for Machine Translation", author = "Chitale, Pranjal and Gala, Jay and Dabre, Raj", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of...
Recent interest has surged in employing Large Language Models (LLMs) for machine translation (MT) via in-context learning (ICL) (Vilar et al., 2023). Most prior studies primarily focus on optimizing translation quality, with limited attention to understanding the specific aspects of ICL that influence the said quality....
[ "Chitale, Pranjal", "Gala, Jay", "Dabre, Raj" ]
An Empirical Study of In-context Learning in {LLM}s for Machine Translation
findings-acl.440
Poster
2402.10207v5
https://aclanthology.org/2024.findings-acl.441.bib
@inproceedings{wang-etal-2024-answer-c, title = "{``}My Answer is {C}{''}: First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models", author = {Wang, Xinpeng and Ma, Bolei and Hu, Chengzhi and Weber-Genzel, Leon and R{\"o}ttger, Paul and Kreuter...
The open-ended nature of language generation makes the evaluation of autoregressive large language models (LLMs) challenging. One common evaluation approach uses multiple-choice questions to limit the response space. The model is then evaluated by ranking the candidate answers by the log probability of the first token ...
[ "Wang, Xinpeng", "Ma, Bolei", "Hu, Chengzhi", "Weber-Genzel, Leon", "R{\\\"o}ttger, Paul", "Kreuter, Frauke", "Hovy, Dirk", "Plank, Barbara" ]
{``}My Answer is {C}{''}: First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models
findings-acl.441
Poster
2404.08382v1
https://aclanthology.org/2024.findings-acl.442.bib
@inproceedings{sun-etal-2024-oda, title = "{ODA}: Observation-Driven Agent for integrating {LLM}s and Knowledge Graphs", author = "Sun, Lei and Tao, Zhengwei and Li, Youdi and Arakawa, Hiroshi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle =...
The integration of Large Language Models (LLMs) and knowledge graphs (KGs) has achieved remarkable success in various natural language processing tasks. However, existing methodologies that integrate LLMs and KGs often navigate the task-solving process solely based on the LLM{'}s analysis of the question, overlooking t...
[ "Sun, Lei", "Tao, Zhengwei", "Li, Youdi", "Arakawa, Hiroshi" ]
{ODA}: Observation-Driven Agent for integrating {LLM}s and Knowledge Graphs
findings-acl.442
Poster
2402.11163v1
https://aclanthology.org/2024.findings-acl.443.bib
@inproceedings{xu-etal-2024-comprehensive, title = "A Comprehensive Study of Jailbreak Attack versus Defense for Large Language Models", author = "Xu, Zihao and Liu, Yi and Deng, Gelei and Li, Yuekang and Picek, Stjepan", editor = "Ku, Lun-Wei and Martins, Andre and ...
Large Language Models (LLMs) have increasingly become central to generating content with potential societal impacts. Notably, these models have demonstrated capabilities for generating content that could be deemed harmful. To mitigate these risks, researchers have adopted safety training techniques to align model outpu...
[ "Xu, Zihao", "Liu, Yi", "Deng, Gelei", "Li, Yuekang", "Picek, Stjepan" ]
A Comprehensive Study of Jailbreak Attack versus Defense for Large Language Models
findings-acl.443
Poster
2401.16765v1
https://aclanthology.org/2024.findings-acl.444.bib
@inproceedings{kaliosis-etal-2024-data, title = "A Data-Driven Guided Decoding Mechanism for Diagnostic Captioning", author = "Kaliosis, Panagiotis and Pavlopoulos, John and Charalampakos, Foivos and Moschovis, Georgios and Androutsopoulos, Ion", editor = "Ku, Lun-Wei and ...
Diagnostic Captioning (DC) automatically generates a diagnostic text from one or more medical images (e.g., X-rays, MRIs) of a patient. Treated as a draft, the generated text may assist clinicians, by providing an initial estimation of the patient{'}s condition, speeding up and helping safeguard the diagnostic process....
[ "Kaliosis, Panagiotis", "Pavlopoulos, John", "Charalampakos, Foivos", "Moschovis, Georgios", "Androutsopoulos, Ion" ]
A Data-Driven Guided Decoding Mechanism for Diagnostic Captioning
findings-acl.444
Poster
2406.14164v1
https://aclanthology.org/2024.findings-acl.445.bib
@inproceedings{zhang-etal-2024-balancing, title = "Balancing Speciality and Versatility: a Coarse to Fine Framework for Supervised Fine-tuning Large Language Model", author = "Zhang, Hengyuan and Wu, Yanru and Li, Dawei and Yang, Sak and Zhao, Rui and Jiang, Yong and Ta...
Aligned Large Language Models (LLMs) showcase remarkable versatility, capable of handling diverse real-world tasks. Meanwhile, aligned LLMs are also expected to exhibit speciality, excelling in specific applications. However, fine-tuning with extra data, a common practice to gain speciality, often leads to catastrophic...
[ "Zhang, Hengyuan", "Wu, Yanru", "Li, Dawei", "Yang, Sak", "Zhao, Rui", "Jiang, Yong", "Tan, Fei" ]
Balancing Speciality and Versatility: a Coarse to Fine Framework for Supervised Fine-tuning Large Language Model
findings-acl.445
Poster
2404.10306v5
https://aclanthology.org/2024.findings-acl.446.bib
@inproceedings{xu-etal-2024-two, title = "A Two-Agent Game for Zero-shot Relation Triplet Extraction", author = "Xu, Ting and Yang, Haiqin and Zhao, Fei and Wu, Zhen and Dai, Xinyu", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Fin...
Relation triplet extraction is a fundamental task in natural language processing that aims to identify semantic relationships between entities in text. It is particularly challenging in the zero-shot setting, i.e., zero-shot relation triplet extraction (ZeroRTE), where the relation sets between training and test are di...
[ "Xu, Ting", "Yang, Haiqin", "Zhao, Fei", "Wu, Zhen", "Dai, Xinyu" ]
A Two-Agent Game for Zero-shot Relation Triplet Extraction
findings-acl.446
Poster
2010.02609v3
https://aclanthology.org/2024.findings-acl.447.bib
@inproceedings{gu-etal-2024-light, title = "Light-{PEFT}: Lightening Parameter-Efficient Fine-Tuning via Early Pruning", author = "Gu, Naibin and Fu, Peng and Liu, Xiyu and Shen, Bowen and Lin, Zheng and Wang, Weiping", editor = "Ku, Lun-Wei and Martins, Andre and ...
Parameter-efficient fine-tuning (PEFT) has emerged as the predominant technique for fine-tuning in the era of large language models. However, existing PEFT methods still have inadequate training efficiency. Firstly, the utilization of large-scale foundation models during the training process is excessively redundant fo...
[ "Gu, Naibin", "Fu, Peng", "Liu, Xiyu", "Shen, Bowen", "Lin, Zheng", "Wang, Weiping" ]
Light-{PEFT}: Lightening Parameter-Efficient Fine-Tuning via Early Pruning
findings-acl.447
Poster
2110.12007v1
https://aclanthology.org/2024.findings-acl.448.bib
@inproceedings{lardelli-etal-2024-building, title = "Building Bridges: A Dataset for Evaluating Gender-Fair Machine Translation into {G}erman", author = "Lardelli, Manuel and Attanasio, Giuseppe and Lauscher, Anne", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", ...
The translation of gender-neutral person-referring terms (e.g., the students) is often non-trivial. Translating from English into German poses an interesting case{---}in German, person-referring nouns are usually gender-specific, and if the gender of the referent(s) is unknown or diverse, the generic masculine (die Stude...
[ "Lardelli, Manuel", "Attanasio, Giuseppe", "Lauscher, Anne" ]
Building Bridges: A Dataset for Evaluating Gender-Fair Machine Translation into {G}erman
findings-acl.448
Poster
2406.06131v1
https://aclanthology.org/2024.findings-acl.449.bib
@inproceedings{sun-etal-2024-prompt, title = "Prompt Chaining or Stepwise Prompt? Refinement in Text Summarization", author = "Sun, Shichao and Yuan, Ruifeng and Cao, Ziqiang and Li, Wenjie and Liu, Pengfei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vi...
Large language models (LLMs) have demonstrated the capacity to improve summary quality by mirroring a human-like iterative process of critique and refinement starting from the initial draft. Two strategies are designed to perform this iterative process: $\textit{Prompt Chaining}$ and $\textit{Stepwise Prompt}$. Prompt ...
[ "Sun, Shichao", "Yuan, Ruifeng", "Cao, Ziqiang", "Li, Wenjie", "Liu, Pengfei" ]
Prompt Chaining or Stepwise Prompt? Refinement in Text Summarization
findings-acl.449
Poster
2406.00507v1
https://aclanthology.org/2024.findings-acl.450.bib
@inproceedings{long-etal-2024-trust, title = "Trust in Internal or External Knowledge? Generative Multi-Modal Entity Linking with Knowledge Retriever", author = "Long, Xinwei and Zeng, Jiali and Meng, Fandong and Zhou, Jie and Zhou, Bowen", editor = "Ku, Lun-Wei and Martin...
Multi-modal entity linking (MEL) is a challenging task that requires accurate prediction of entities within extensive search spaces, utilizing multi-modal contexts. Existing generative approaches struggle with the knowledge gap between visual entity information and the intrinsic parametric knowledge of LLMs. To address...
[ "Long, Xinwei", "Zeng, Jiali", "Meng, Fandong", "Zhou, Jie", "Zhou, Bowen" ]
Trust in Internal or External Knowledge? Generative Multi-Modal Entity Linking with Knowledge Retriever
findings-acl.450
Poster
1810.10004v1
https://aclanthology.org/2024.findings-acl.451.bib
@inproceedings{aida-bollegala-2024-semantic, title = "A Semantic Distance Metric Learning approach for Lexical Semantic Change Detection", author = "Aida, Taichi and Bollegala, Danushka", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of the Ass...
Detecting temporal semantic changes of words is an important task for various NLP applications that must make time-sensitive predictions. Lexical Semantic Change Detection (SCD) task involves predicting whether a given target word, $w$, changes its meaning between two different text corpora, $C_1$ and $C_2$. For this pur...
[ "Aida, Taichi", "Bollegala, Danushka" ]
A Semantic Distance Metric Learning approach for Lexical Semantic Change Detection
findings-acl.451
Poster
2403.00226v3
https://aclanthology.org/2024.findings-acl.452.bib
@inproceedings{li-etal-2024-achieved, title = "What Have We Achieved on Non-autoregressive Translation?", author = "Li, Yafu and Zhang, Huajian and Yan, Jianhao and Yin, Yongjing and Zhang, Yue", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", boo...
Recent advances have made non-autoregressive (NAT) translation comparable to autoregressive methods (AT). However, their evaluation using BLEU has been shown to weakly correlate with human annotations. Limited research compares non-autoregressive translation and autoregressive translation comprehensively, leaving uncer...
[ "Li, Yafu", "Zhang, Huajian", "Yan, Jianhao", "Yin, Yongjing", "Zhang, Yue" ]
What Have We Achieved on Non-autoregressive Translation?
findings-acl.452
Poster
2406.14267v1
https://aclanthology.org/2024.findings-acl.453.bib
@inproceedings{reiss-etal-2024-zero, title = "From Zero to Hero: Cold-Start Anomaly Detection", author = "Reiss, Tal and Kour, George and Zwerdling, Naama and Anaby Tavor, Ateret and Hoshen, Yedid", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", ...
When first deploying an anomaly detection system, e.g., to detect out-of-scope queries in chatbots, there are no observed data, making data-driven approaches ineffective. Zero-shot anomaly detection methods offer a solution to such {``}cold-start{''} cases, but unfortunately they are often not accurate enough. This pap...
[ "Reiss, Tal", "Kour, George", "Zwerdling, Naama", "Anaby Tavor, Ateret", "Hoshen, Yedid" ]
From Zero to Hero: Cold-Start Anomaly Detection
findings-acl.453
Poster
2306.09067v2
https://aclanthology.org/2024.findings-acl.454.bib
@inproceedings{zhao-etal-2024-large, title = "Large Language Models Fall Short: Understanding Complex Relationships in Detective Narratives", author = "Zhao, Runcong and Zhu, Qinglin and Xu, Hainiu and Li, Jiazheng and Zhou, Yuxiang and He, Yulan and Gui, Lin", edit...
Existing datasets for narrative understanding often fail to represent the complexity and uncertainty of relationships in real-life social scenarios. To address this gap, we introduce a new benchmark, Conan, designed for extracting and analysing intricate character relation graphs from detective narratives. Specifically...
[ "Zhao, Runcong", "Zhu, Qinglin", "Xu, Hainiu", "Li, Jiazheng", "Zhou, Yuxiang", "He, Yulan", "Gui, Lin" ]
Large Language Models Fall Short: Understanding Complex Relationships in Detective Narratives
findings-acl.454
Poster
2402.11051v1
https://aclanthology.org/2024.findings-acl.455.bib
@inproceedings{qiao-etal-2024-distillmike, title = "{D}istill{MIKE}: Editing Distillation of Massive In-Context Knowledge Editing in Large Language Models", author = "Qiao, Shanbao and Liu, Xuebing and Na, Seung-Hoon", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek"...
Among the recently emerged knowledge editing methods, in-context knowledge editing (IKE) has shown respectable abilities on knowledge editing in terms of generalization and specificity. Noting the promising advantages but unexplored issues of IKE, we propose **DistillMIKE** as a novel extension of IKE, i.e., editing **...
[ "Qiao, Shanbao", "Liu, Xuebing", "Na, Seung-Hoon" ]
{D}istill{MIKE}: Editing Distillation of Massive In-Context Knowledge Editing in Large Language Models
findings-acl.455
Poster
2310.10322v1
https://aclanthology.org/2024.findings-acl.456.bib
@inproceedings{xia-etal-2024-unlocking, title = "Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding", author = "Xia, Heming and Yang, Zhe and Dong, Qingxiu and Wang, Peiyi and Li, Yongqi and Ge, Tao and Liu, Tianyu an...
To mitigate the high inference latency stemming from autoregressive decoding in Large Language Models (LLMs), Speculative Decoding has emerged as a novel decoding paradigm for LLM inference. In each decoding step, this method first drafts several future tokens efficiently and then verifies them in parallel. Unlike auto...
[ "Xia, Heming", "Yang, Zhe", "Dong, Qingxiu", "Wang, Peiyi", "Li, Yongqi", "Ge, Tao", "Liu, Tianyu", "Li, Wenjie", "Sui, Zhifang" ]
Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding
findings-acl.456
Poster
2401.07851v3
https://aclanthology.org/2024.findings-acl.457.bib
@inproceedings{kim-etal-2024-hierarchy, title = "Hierarchy-aware Biased Bound Margin Loss Function for Hierarchical Text Classification", author = "Kim, Gibaeg and Im, SangHun and Oh, Heung-Seon", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Fin...
Hierarchical text classification (HTC) is a challenging problem with two key issues: utilizing structural information and mitigating label imbalance. Recently, the unit-based approach generating unit-based feature representations has outperformed the global approach focusing on a global feature representation. Neverthe...
[ "Kim, Gibaeg", "Im, SangHun", "Oh, Heung-Seon" ]
Hierarchy-aware Biased Bound Margin Loss Function for Hierarchical Text Classification
findings-acl.457
Poster
2306.09132v1
https://aclanthology.org/2024.findings-acl.458.bib
@inproceedings{chen-etal-2024-improving-retrieval, title = "Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts", author = "Chen, Zhuo and Wang, Xinyu and Jiang, Yong and Xie, Pengjun and Huang, Fei and Tu, Kewei", editor = "Ku, Lun-Wei a...
In the era of large language models, applying techniques such as Retrieval Augmented Generation can better address Open-Domain Question-Answering problems. Due to constraints including model sizes and computing resources, the length of context is often limited, and it becomes challenging to empower the model to cover o...
[ "Chen, Zhuo", "Wang, Xinyu", "Jiang, Yong", "Xie, Pengjun", "Huang, Fei", "Tu, Kewei" ]
Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts
findings-acl.458
Poster
2310.03184v2
https://aclanthology.org/2024.findings-acl.459.bib
@inproceedings{randl-etal-2024-cicle, title = "{CICL}e: Conformal In-Context Learning for Largescale Multi-Class Food Risk Classification", author = "Randl, Korbinian and Pavlopoulos, John and Henriksson, Aron and Lindgren, Tony", editor = "Ku, Lun-Wei and Martins, Andre and ...
Contaminated or adulterated food poses a substantial risk to human health. Given sets of labeled web texts for training, Machine Learning and Natural Language Processing can be applied to automatically detect such risks. We publish a dataset of 7,546 short texts describing public food recall announcements. Each text is...
[ "Randl, Korbinian", "Pavlopoulos, John", "Henriksson, Aron", "Lindgren, Tony" ]
{CICL}e: Conformal In-Context Learning for Largescale Multi-Class Food Risk Classification
findings-acl.459
Poster
2403.11904v3
https://aclanthology.org/2024.findings-acl.460.bib
@inproceedings{liu-etal-2024-intactkv, title = "{I}ntact{KV}: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact", author = "Liu, Ruikang and Bai, Haoli and Lin, Haokun and Li, Yuening and Gao, Han and Xu, Zhengzhuo and Hou, Lu and Yao, Ju...
Large language models (LLMs) excel in natural language processing but demand intensive computation. To mitigate this, various quantization methods have been explored, yet they compromise LLM performance. This paper unveils a previously overlooked type of outliers in LLMs. Such outliers are found to allocate most of the...
[ "Liu, Ruikang", "Bai, Haoli", "Lin, Haokun", "Li, Yuening", "Gao, Han", "Xu, Zhengzhuo", "Hou, Lu", "Yao, Jun", "Yuan, Chun" ]
{I}ntact{KV}: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact
findings-acl.460
Poster
2403.01241v2
https://aclanthology.org/2024.findings-acl.461.bib
@inproceedings{taniguchi-etal-2024-learning, title = "Learning Adverbs with Spectral Mixture Kernels", author = "Taniguchi, Tomoe and Mochihashi, Daichi and Kobayashi, Ichiro", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Findings of the Associa...
For humans and robots to collaborate more in the real world, robots need to understand human intentions from the different manner of their behaviors. In our study, we focus on the meaning of adverbs which describe human motions. We propose a topic model, Hierarchical Dirichlet Process-Spectral Mixture Latent Dirichlet ...
[ "Taniguchi, Tomoe", "Mochihashi, Daichi", "Kobayashi, Ichiro" ]
Learning Adverbs with Spectral Mixture Kernels
findings-acl.461
Poster
2309.15086v1
https://aclanthology.org/2024.findings-acl.462.bib
@inproceedings{hou-etal-2024-e, title = "{E}-{EVAL}: A Comprehensive {C}hinese K-12 Education Evaluation Benchmark for Large Language Models", author = "Hou, Jinchang and Ao, Chang and Wu, Haihong and Kong, Xiangtao and Zheng, Zhigang and Tang, Daijia and Li, Chengming ...
The rapid development of Large Language Models (LLMs) has led to their increasing utilization in Chinese K-12 education. Despite the growing integration of LLMs and education, the absence of a dedicated benchmark for evaluating LLMs within this domain presents a pressing concern. Consequently, there is an urgent need f...
[ "Hou, Jinchang", "Ao, Chang", "Wu, Haihong", "Kong, Xiangtao", "Zheng, Zhigang", "Tang, Daijia", "Li, Chengming", "Hu, Xiping", "Xu, Ruifeng", "Ni, Shiwen", "Yang, Min" ]
{E}-{EVAL}: A Comprehensive {C}hinese K-12 Education Evaluation Benchmark for Large Language Models
findings-acl.462
Poster
2401.15927v1
https://aclanthology.org/2024.findings-acl.463.bib
@inproceedings{meng-etal-2024-chartassistant, title = "{C}hart{A}ssistant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning", author = "Meng, Fanqing and Shao, Wenqi and Lu, Quanfeng and Gao, Peng and Zhang, Kaipeng and ...
Charts play a vital role in data visualization, understanding data patterns, and informed decision-making. However, their unique combination of graphical elements (e.g., bars, lines) and textual components (e.g., labels, legends) poses challenges for general-purpose multimodal models. While vision-language models train...
[ "Meng, Fanqing", "Shao, Wenqi", "Lu, Quanfeng", "Gao, Peng", "Zhang, Kaipeng", "Qiao, Yu", "Luo, Ping" ]
{C}hart{A}ssistant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning
findings-acl.463
Poster
2401.02384v3
https://aclanthology.org/2024.findings-acl.464.bib
@inproceedings{li-etal-2024-teaching, title = "Teaching Small Language Models to Reason for Knowledge-Intensive Multi-Hop Question Answering", author = "Li, Xiang and He, Shizhu and Lei, Fangyu and JunYang, JunYang and Su, Tianhuang and Liu, Kang and Zhao, Jun", edi...
Large Language Models (LLMs) can teach small language models (SLMs) to solve complex reasoning tasks (e.g., mathematical question answering) by Chain-of-thought Distillation (CoTD). Specifically, CoTD fine-tunes SLMs by utilizing rationales generated from LLMs such as ChatGPT. However, CoTD has certain limitations that...
[ "Li, Xiang", "He, Shizhu", "Lei, Fangyu", "JunYang, JunYang", "Su, Tianhuang", "Liu, Kang", "Zhao, Jun" ]
Teaching Small Language Models to Reason for Knowledge-Intensive Multi-Hop Question Answering
findings-acl.464
Poster
2305.03453v4
https://aclanthology.org/2024.findings-acl.465.bib
@inproceedings{lai-etal-2024-alarm, title = "{AL}a{RM}: Align Language Models via Hierarchical Rewards Modeling", author = "Lai, Yuhang and Wang, Siyuan and Liu, Shujun and Huang, Xuanjing and Wei, Zhongyu", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Viv...
We introduce ALaRM, the first framework modeling hierarchical rewards in reinforcement learning from human feedback (RLHF), which is designed to enhance the alignment of large language models (LLMs) with human preferences. The framework addresses the limitations of current alignment approaches, which often struggle wit...
[ "Lai, Yuhang", "Wang, Siyuan", "Liu, Shujun", "Huang, Xuanjing", "Wei, Zhongyu" ]
{AL}a{RM}: Align Language Models via Hierarchical Rewards Modeling
findings-acl.465
Poster
2403.06754v2
https://aclanthology.org/2024.findings-acl.466.bib
@inproceedings{liu-etal-2024-lstprompt, title = "{LSTP}rompt: Large Language Models as Zero-Shot Time Series Forecasters by Long-Short-Term Prompting", author = "Liu, Haoxin and Zhao, Zhiyuan and Wang, Jindong and Kamarthi, Harshavardhan and Prakash, B. Aditya", editor = "Ku, Lun...
Time-series forecasting (TSF) finds broad applications in real-world scenarios. Prompting off-the-shelf Large Language Models (LLMs) demonstrates strong zero-shot TSF capabilities while preserving computational efficiency. However, existing prompting methods oversimplify TSF as language next-token predictions, overlook...
[ "Liu, Haoxin", "Zhao, Zhiyuan", "Wang, Jindong", "Kamarthi, Harshavardhan", "Prakash, B. Aditya" ]
{LSTP}rompt: Large Language Models as Zero-Shot Time Series Forecasters by Long-Short-Term Prompting
findings-acl.466
Poster
2210.08964v5
https://aclanthology.org/2024.findings-acl.467.bib
@inproceedings{lu-etal-2024-mitigating, title = "Mitigating Boundary Ambiguity and Inherent Bias for Text Classification in the Era of Large Language Models", author = "Lu, Zhenyi and Tian, Jie and Wei, Wei and Qu, Xiaoye and Cheng, Yu and Xie, Wenfeng and Chen, Dangyan...
Text classification is a crucial task encountered frequently in practical scenarios, yet it is still under-explored in the era of large language models (LLMs). This study shows that LLMs are vulnerable to changes in the number and arrangement of options in text classification. Our extensive empirical analyses reveal th...
[ "Lu, Zhenyi", "Tian, Jie", "Wei, Wei", "Qu, Xiaoye", "Cheng, Yu", "Xie, Wenfeng", "Chen, Dangyang" ]
Mitigating Boundary Ambiguity and Inherent Bias for Text Classification in the Era of Large Language Models
findings-acl.467
Poster
2406.07001v1