Dataset schema (column statistics from the dataset viewer):

    column      type        length / values
    bibtex_url  string      41 - 50 chars
    bibtext     string      693 - 2.88k chars
    abstract    string      0 - 2k chars
    authors     list        1 - 45 entries
    title       string      21 - 206 chars
    id          string      7 - 16 chars
    type        string      2 classes (Oral / Poster)
    arxiv_id    string      9 - 12 chars
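The schema above can be modeled directly in code. A minimal sketch, assuming records are plain dicts shaped like the rows below (the `PaperRecord` type and `orals` helper are illustrative names, not part of the dataset; note that "bibtext" is the column's actual spelling):

```python
from typing import TypedDict, List

class PaperRecord(TypedDict):
    bibtex_url: str    # link to the .bib file on aclanthology.org
    bibtext: str       # raw @inproceedings entry
    abstract: str      # may be empty (min length 0 in the schema)
    authors: List[str]
    title: str
    id: str            # anthology id, e.g. "acl-long.1"
    type: str          # one of the 2 classes: "Oral" or "Poster"
    arxiv_id: str      # versioned arXiv id, e.g. "2402.04902v3"

def orals(records: List[PaperRecord]) -> List[str]:
    """Return the titles of papers presented as orals."""
    return [r["title"] for r in records if r["type"] == "Oral"]
```

For example, applied to the first two records below, `orals` would keep only the Quantized Side Tuning paper (type "Oral") and drop the multimodal clustering paper (type "Poster").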
https://aclanthology.org/2024.acl-long.1.bib
@inproceedings{zhang-etal-2024-quantized, title = "Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models", author = "Zhang, Zhengxin and Zhao, Dan and Miao, Xupeng and Oliaro, Gabriele and Zhang, Zhihao and Li, Qing and Jiang, Yong ...
Finetuning large language models (LLMs) has been empirically effective on a variety of downstream tasks. Existing approaches to finetuning an LLM either focus on parameter-efficient finetuning, which only updates a small number of trainable parameters, or attempt to reduce the memory footprint during the training phase...
[ "Zhang, Zhengxin", "Zhao, Dan", "Miao, Xupeng", "Oliaro, Gabriele", "Zhang, Zhihao", "Li, Qing", "Jiang, Yong", "Jia, Zhihao" ]
Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models
acl-long.1
Oral
2402.04902v3
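Each `bibtext` field is a standard BibTeX `@inproceedings` entry. A hedged sketch for pulling the entry type, citation key, and title out of such a string with regular expressions (the function name is illustrative; even truncated entries like those shown here still carry these leading fields):

```python
import re

def parse_bibtex_head(entry: str) -> dict:
    """Extract entry type, citation key, and title from a BibTeX string.

    Works on truncated entries as long as the key and title fields are intact.
    """
    # Entry opens as @<type>{<key>, ...
    m = re.match(r"@(\w+)\{([^,]+),", entry)
    if not m:
        raise ValueError("not a BibTeX entry")
    kind, key = m.group(1), m.group(2).strip()
    # First quoted title field; ACL titles keep brace-protected caps like {MAGE}.
    t = re.search(r'title\s*=\s*"([^"]*)"', entry)
    return {"type": kind, "key": key, "title": t.group(1) if t else None}
```

On the first record's `bibtext`, this yields key `zhang-etal-2024-quantized` and the full paper title.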
https://aclanthology.org/2024.acl-long.2.bib
@inproceedings{zhang-etal-2024-unsupervised, title = "Unsupervised Multimodal Clustering for Semantics Discovery in Multimodal Utterances", author = "Zhang, Hanlei and Xu, Hua and Long, Fei and Wang, Xin and Gao, Kai", editor = "Ku, Lun-Wei and Martins, Andre and Sr...
Discovering the semantics of multimodal utterances is essential for understanding human language and enhancing human-machine interactions. Existing methods manifest limitations in leveraging nonverbal information for discerning complex semantics in unsupervised scenarios. This paper introduces a novel unsupervised mult...
[ "Zhang, Hanlei", "Xu, Hua", "Long, Fei", "Wang, Xin", "Gao, Kai" ]
Unsupervised Multimodal Clustering for Semantics Discovery in Multimodal Utterances
acl-long.2
Poster
2405.12775v1
https://aclanthology.org/2024.acl-long.3.bib
@inproceedings{li-etal-2024-mage, title = "{MAGE}: Machine-generated Text Detection in the Wild", author = "Li, Yafu and Li, Qintong and Cui, Leyang and Bi, Wei and Wang, Zhilin and Wang, Longyue and Yang, Linyi and Shi, Shuming and Zhang, Yue", editor...
Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective deepfake text detection to mitigate risks like the spread of fake news and plagiarism. Existing research has been constrained by evaluating detection methods on specific domains or particular language models. In pr...
[ "Li, Yafu", "Li, Qintong", "Cui, Leyang", "Bi, Wei", "Wang, Zhilin", "Wang, Longyue", "Yang, Linyi", "Shi, Shuming", "Zhang, Yue" ]
{MAGE}: Machine-generated Text Detection in the Wild
acl-long.3
Poster
2210.07903v2
https://aclanthology.org/2024.acl-long.4.bib
@inproceedings{li-etal-2024-privlm, title = "{P}riv{LM}-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models", author = "Li, Haoran and Guo, Dadi and Li, Donghao and Fan, Wei and Hu, Qi and Liu, Xin and Chan, Chunkit and Yao, Duanyi and Ya...
The rapid development of language models (LMs) brings unprecedented accessibility and usage for both models and users. On the one hand, powerful LMs achieve state-of-the-art performance over numerous downstream NLP tasks. On the other hand, more and more attention is paid to unrestricted model accesses that may bring m...
[ "Li, Haoran", "Guo, Dadi", "Li, Donghao", "Fan, Wei", "Hu, Qi", "Liu, Xin", "Chan, Chunkit", "Yao, Duanyi", "Yao, Yuan", "Song, Yangqiu" ]
{P}riv{LM}-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models
acl-long.4
Oral
2212.10011v2
https://aclanthology.org/2024.acl-long.5.bib
@inproceedings{hu-etal-2024-gentranslate, title = "{G}en{T}ranslate: Large Language Models are Generative Multilingual Speech and Machine Translators", author = "Hu, Yuchen and Chen, Chen and Yang, Chao-Han and Li, Ruizhe and Zhang, Dong and Chen, Zhehuai and Chng, EngS...
Recent advances in large language models (LLMs) have pushed forward the development of multilingual speech and machine translation through reduced representation errors and incorporated external knowledge. However, both translation tasks typically utilize beam search decoding and top-1 hypothesis selection for inferenc...
[ "Hu, Yuchen", "Chen, Chen", "Yang, Chao-Han", "Li, Ruizhe", "Zhang, Dong", "Chen, Zhehuai", "Chng, EngSiong" ]
{G}en{T}ranslate: Large Language Models are Generative Multilingual Speech and Machine Translators
acl-long.5
Oral
1910.00254v2
https://aclanthology.org/2024.acl-long.6.bib
@inproceedings{xu-etal-2024-exploring, title = "Exploring Chain-of-Thought for Multi-modal Metaphor Detection", author = "Xu, Yanzhi and Hua, Yueying and Li, Shichen and Wang, Zhongqing", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proce...
Metaphors are commonly found in advertising and internet memes. However, the free form of internet memes often leads to a lack of high-quality textual data. Metaphor detection demands a deep interpretation of both textual and visual elements, requiring extensive common-sense knowledge, which poses a challenge to langua...
[ "Xu, Yanzhi", "Hua, Yueying", "Li, Shichen", "Wang, Zhongqing" ]
Exploring Chain-of-Thought for Multi-modal Metaphor Detection
acl-long.6
Poster
1508.04515v1
https://aclanthology.org/2024.acl-long.7.bib
@inproceedings{du-etal-2024-bitdistiller, title = "{B}it{D}istiller: Unleashing the Potential of Sub-4-Bit {LLM}s via Self-Distillation", author = "Du, DaYou and Zhang, Yijia and Cao, Shijie and Guo, Jiaqi and Cao, Ting and Chu, Xiaowen and Xu, Ningyi", editor = "Ku...
The upscaling of Large Language Models (LLMs) has yielded impressive advances in natural language processing, yet it also poses significant deployment challenges. Weight quantization has emerged as a widely embraced solution to reduce memory and computational demands. This paper introduces BitDistiller, a framework tha...
[ "Du, DaYou", "Zhang, Yijia", "Cao, Shijie", "Guo, Jiaqi", "Cao, Ting", "Chu, Xiaowen", "Xu, Ningyi" ]
{B}it{D}istiller: Unleashing the Potential of Sub-4-Bit {LLM}s via Self-Distillation
acl-long.7
Poster
2402.10631v1
https://aclanthology.org/2024.acl-long.8.bib
@inproceedings{chen-etal-2024-unified, title = "A Unified Temporal Knowledge Graph Reasoning Model Towards Interpolation and Extrapolation", author = "Chen, Kai and Wang, Ye and Li, Yitong and Li, Aiping and Yu, Han and Song, Xin", editor = "Ku, Lun-Wei and Martins,...
Temporal knowledge graph (TKG) reasoning has two settings: interpolation reasoning and extrapolation reasoning. Both draw plenty of research interest and have great significance. Methods of the former de-emphasize the temporal correlations among fact sequences, while methods of the latter require strict chrono...
[ "Chen, Kai", "Wang, Ye", "Li, Yitong", "Li, Aiping", "Yu, Han", "Song, Xin" ]
A Unified Temporal Knowledge Graph Reasoning Model Towards Interpolation and Extrapolation
acl-long.8
Poster
2405.18106v1
https://aclanthology.org/2024.acl-long.9.bib
@inproceedings{xu-etal-2024-unsupervised, title = "Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation", author = "Xu, Shicheng and Pang, Liang and Yu, Mo and Meng, Fandong and Shen, Huawei and Cheng, Xueqi and Zhou, ...
Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating additional information from retrieval. However, studies have shown that LLMs still face challenges in effectively using the retrieved information, sometimes even ignoring it or being misled by it. The key reason is that the training of LLMs do...
[ "Xu, Shicheng", "Pang, Liang", "Yu, Mo", "Meng, Fandong", "Shen, Huawei", "Cheng, Xueqi", "Zhou, Jie" ]
Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation
acl-long.9
Poster
2402.18150v2
https://aclanthology.org/2024.acl-long.10.bib
@inproceedings{hu-etal-2024-cscd, title = "{CSCD}-{NS}: a {C}hinese Spelling Check Dataset for Native Speakers", author = "Hu, Yong and Meng, Fandong and Zhou, Jie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Mee...
In this paper, we present CSCD-NS, the first Chinese spelling check (CSC) dataset designed for native speakers, containing 40,000 samples from a Chinese social platform. Compared with existing CSC datasets aimed at Chinese learners, CSCD-NS is ten times larger in scale and exhibits a distinct error distribution, with a...
[ "Hu, Yong", "Meng, Fandong", "Zhou, Jie" ]
{CSCD}-{NS}: a {C}hinese Spelling Check Dataset for Native Speakers
acl-long.10
Poster
2211.08788v3
https://aclanthology.org/2024.acl-long.11.bib
@inproceedings{karakkaparambil-james-etal-2024-evaluating, title = "Evaluating Dynamic Topic Models", author = "Karakkaparambil James, Charu and Nagda, Mayank and Haji Ghassemi, Nooshin and Kloft, Marius and Fellenz, Sophie", editor = "Ku, Lun-Wei and Martins, Andre and ...
There is a lack of quantitative measures to evaluate the progression of topics through time in dynamic topic models (DTMs). Filling this gap, we propose a novel evaluation measure for DTMs that analyzes the changes in the quality of each topic over time. Additionally, we propose an extension combining topic quality wit...
[ "Karakkaparambil James, Charu", "Nagda, Mayank", "Haji Ghassemi, Nooshin", "Kloft, Marius", "Fellenz, Sophie" ]
Evaluating Dynamic Topic Models
acl-long.11
Poster
2406.18907v1
https://aclanthology.org/2024.acl-long.12.bib
@inproceedings{dong-etal-2024-abilities, title = "How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition", author = "Dong, Guanting and Yuan, Hongyi and Lu, Keming and Li, Chengpeng and Xue, Mingfeng and Liu, Dayiheng and Wang, We...
Large language models (LLMs) with enormous pre-training tokens and parameters exhibit diverse abilities, including math reasoning, code generation, and instruction following. These abilities are further enhanced by supervised fine-tuning (SFT). While the open-source community has explored ad-hoc SFT for enhancing individ...
[ "Dong, Guanting", "Yuan, Hongyi", "Lu, Keming", "Li, Chengpeng", "Xue, Mingfeng", "Liu, Dayiheng", "Wang, Wei", "Yuan, Zheng", "Zhou, Chang", "Zhou, Jingren" ]
How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition
acl-long.12
Poster
2310.05492v4
https://aclanthology.org/2024.acl-long.13.bib
@inproceedings{xu-etal-2024-lens, title = "Through the Lens of Split Vote: Exploring Disagreement, Difficulty and Calibration in Legal Case Outcome Classification", author = "Xu, Shanshan and T.y.s.s, Santosh and Ichim, Oana and Plank, Barbara and Grabmair, Matthias", editor = "K...
In legal decisions, split votes (SV) occur when judges cannot reach a unanimous decision, posing a difficulty for lawyers who must navigate diverse legal arguments and opinions. In high-stakes domains, as human-AI interaction systems become increasingly important, understanding the alignment of perceived difficulty...
[ "Xu, Shanshan", "T.y.s.s, Santosh", "Ichim, Oana", "Plank, Barbara", "Grabmair, Matthias" ]
Through the Lens of Split Vote: Exploring Disagreement, Difficulty and Calibration in Legal Case Outcome Classification
acl-long.13
Oral
2402.07214v3
https://aclanthology.org/2024.acl-long.14.bib
@inproceedings{dalal-etal-2024-inference, title = "Inference to the Best Explanation in Large Language Models", author = "Dalal, Dhairya and Valentino, Marco and Freitas, Andre and Buitelaar, Paul", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktit...
While Large Language Models (LLMs) have found success in real-world applications, their underlying explanatory process is still poorly understood. This paper proposes \textit{IBE-Eval}, a framework inspired by philosophical accounts on \textit{Inference to the Best Explanation (IBE)} to advance the interpretation and e...
[ "Dalal, Dhairya", "Valentino, Marco", "Freitas, Andre", "Buitelaar, Paul" ]
Inference to the Best Explanation in Large Language Models
acl-long.14
Poster
2402.10767v1
https://aclanthology.org/2024.acl-long.15.bib
@inproceedings{poesina-etal-2024-novel, title = "A Novel Cartography-Based Curriculum Learning Method Applied on {R}o{NLI}: The First {R}omanian Natural Language Inference Corpus", author = "Poesina, Eduard and Caragea, Cornelia and Ionescu, Radu", editor = "Ku, Lun-Wei and Martins, And...
Natural language inference (NLI), the task of recognizing the entailment relationship in sentence pairs, is an actively studied topic serving as a proxy for natural language understanding. Despite the relevance of the task in building conversational agents and improving text classification, machine translation and othe...
[ "Poesina, Eduard", "Caragea, Cornelia", "Ionescu, Radu" ]
A Novel Cartography-Based Curriculum Learning Method Applied on {R}o{NLI}: The First {R}omanian Natural Language Inference Corpus
acl-long.15
Poster
2405.11877v4
https://aclanthology.org/2024.acl-long.16.bib
@inproceedings{chen-etal-2024-minprompt, title = "{M}in{P}rompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering", author = "Chen, Xiusi and Jiang, Jyun-Yu and Chang, Wei-Cheng and Hsieh, Cho-Jui and Yu, Hsiang-Fu and Wang, Wei", editor = "Ku, ...
Recent advances in few-shot question answering (QA) mostly rely on the power of pre-trained large language models (LLMs) and fine-tuning in specific settings. Although the pre-training stage has already equipped LLMs with powerful reasoning capabilities, LLMs still need to be fine-tuned to adapt to specific domains to ...
[ "Chen, Xiusi", "Jiang, Jyun-Yu", "Chang, Wei-Cheng", "Hsieh, Cho-Jui", "Yu, Hsiang-Fu", "Wang, Wei" ]
{M}in{P}rompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering
acl-long.16
Poster
2306.04101v1
https://aclanthology.org/2024.acl-long.17.bib
@inproceedings{hu-etal-2024-sportsmetrics, title = "{S}ports{M}etrics: Blending Text and Numerical Data to Understand Information Fusion in {LLM}s", author = "Hu, Yebowen and Song, Kaiqiang and Cho, Sangwoo and Wang, Xiaoyang and Foroosh, Hassan and Yu, Dong and Liu, Fe...
Large language models hold significant potential for integrating various data types, such as text documents and database records, for advanced analytics. However, blending text and numerical data presents substantial challenges. LLMs need to process and cross-reference entities and numbers, handle data inconsistencies ...
[ "Hu, Yebowen", "Song, Kaiqiang", "Cho, Sangwoo", "Wang, Xiaoyang", "Foroosh, Hassan", "Yu, Dong", "Liu, Fei" ]
{S}ports{M}etrics: Blending Text and Numerical Data to Understand Information Fusion in {LLM}s
acl-long.17
Poster
2402.10979v2
https://aclanthology.org/2024.acl-long.18.bib
@inproceedings{wang-etal-2024-scimon, title = "{S}ci{MON}: Scientific Inspiration Machines Optimized for Novelty", author = "Wang, Qingyun and Downey, Doug and Ji, Heng and Hope, Tom", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedi...
We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature. Work on literature-based hypothesis generation has traditionally focused on binary link prediction{---}severely limiting the expressivity of hypotheses. This line of work also does not focus on o...
[ "Wang, Qingyun", "Downey, Doug", "Ji, Heng", "Hope, Tom" ]
{S}ci{MON}: Scientific Inspiration Machines Optimized for Novelty
acl-long.18
Poster
2305.14259v7
https://aclanthology.org/2024.acl-long.19.bib
@inproceedings{jian-etal-2024-expedited, title = "Expedited Training of Visual Conditioned Language Generation via Redundancy Reduction", author = "Jian, Yiren and Liu, Tingkai and Tao, Yunzhe and Zhang, Chunhui and Vosoughi, Soroush and Yang, Hongxia", editor = "Ku, Lun-W...
We introduce $\text{EVL}_{\text{Gen}}$, a streamlined framework designed for the pre-training of visually conditioned language generation models with high computational demands, utilizing frozen pre-trained large language models (LLMs). The conventional approach in vision-language pre-training (VLP) typically involves ...
[ "Jian, Yiren", "Liu, Tingkai", "Tao, Yunzhe", "Zhang, Chunhui", "Vosoughi, Soroush", "Yang, Hongxia" ]
Expedited Training of Visual Conditioned Language Generation via Redundancy Reduction
acl-long.19
Oral
2310.03291v3
https://aclanthology.org/2024.acl-long.20.bib
@inproceedings{kumar-etal-2024-confidence, title = "Confidence Under the Hood: An Investigation into the Confidence-Probability Alignment in Large Language Models", author = "Kumar, Abhishek and Morabito, Robert and Umbet, Sanzhar and Kabbara, Jad and Emami, Ali", editor = "Ku, L...
As the use of Large Language Models (LLMs) becomes more widespread, understanding their self-evaluation of confidence in generated responses becomes increasingly important as it is integral to the reliability of the output of these models. We introduce the concept of Confidence-Probability Alignment, which connects an L...
[ "Kumar, Abhishek", "Morabito, Robert", "Umbet, Sanzhar", "Kabbara, Jad", "Emami, Ali" ]
Confidence Under the Hood: An Investigation into the Confidence-Probability Alignment in Large Language Models
acl-long.20
Poster
2405.16282v5
https://aclanthology.org/2024.acl-long.21.bib
@inproceedings{wang-etal-2024-retrieval, title = "Retrieval-Augmented Multilingual Knowledge Editing", author = "Wang, Weixuan and Haddow, Barry and Birch, Alexandra", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual M...
Knowledge represented in Large Language Models (LLMs) is quite often incorrect and can also become obsolete over time. Updating knowledge via fine-tuning is computationally resource-hungry and not reliable, and so knowledge editing (KE) has developed as an effective and economical alternative to inject new knowledge or...
[ "Wang, Weixuan", "Haddow, Barry", "Birch, Alexandra" ]
Retrieval-Augmented Multilingual Knowledge Editing
acl-long.21
Poster
2312.13040v1
https://aclanthology.org/2024.acl-long.22.bib
@inproceedings{park-etal-2024-picturing, title = "Picturing Ambiguity: A Visual Twist on the {W}inograd Schema Challenge", author = "Park, Brendan and Janecek, Madeline and Ezzati-Jivan, Naser and Li, Yifeng and Emami, Ali", editor = "Ku, Lun-Wei and Martins, Andre and ...
Large Language Models (LLMs) have demonstrated remarkable success in tasks like the Winograd Schema Challenge (WSC), showcasing advanced textual common-sense reasoning. However, applying this reasoning to multimodal domains, where understanding text and images together is essential, remains a substantial challenge. To ...
[ "Park, Brendan", "Janecek, Madeline", "Ezzati-Jivan, Naser", "Li, Yifeng", "Emami, Ali" ]
Picturing Ambiguity: A Visual Twist on the {W}inograd Schema Challenge
acl-long.22
Oral
2405.16277v3
https://aclanthology.org/2024.acl-long.23.bib
@inproceedings{kumar-etal-2024-subtle, title = "Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models", author = "Kumar, Abhishek and Yunusov, Sarfaroz and Emami, Ali", editor = "Ku, Lun-Wei and Martins, Andre and ...
Research on Large Language Models (LLMs) has often neglected subtle biases that, although less apparent, can significantly influence the models{'} outputs toward particular social narratives. This study addresses two such biases within LLMs: representative bias, which denotes a tendency of LLMs to generate outputs that...
[ "Kumar, Abhishek", "Yunusov, Sarfaroz", "Emami, Ali" ]
Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models
acl-long.23
Poster
2405.14555v4
https://aclanthology.org/2024.acl-long.24.bib
@inproceedings{leto-etal-2024-framing, title = "Framing in the Presence of Supporting Data: A Case Study in {U}.{S}. Economic News", author = "Leto, Alexandria and Pickens, Elliot and Needell, Coen and Rothschild, David and Pacheco, Maria", editor = "Ku, Lun-Wei and Martin...
The mainstream media has much leeway in what it chooses to cover and how it covers it. These choices have real-world consequences on what people know and their subsequent behaviors. However, the lack of objective measures to evaluate editorial choices makes research in this area particularly difficult. In this paper, w...
[ "Leto, Alexandria", "Pickens, Elliot", "Needell, Coen", "Rothschild, David", "Pacheco, Maria" ]
Framing in the Presence of Supporting Data: A Case Study in {U}.{S}. Economic News
acl-long.24
Poster
2402.14224v2
https://aclanthology.org/2024.acl-long.25.bib
@inproceedings{wang-etal-2024-mementos, title = "Mementos: A Comprehensive Benchmark for Multimodal Large Language Model Reasoning over Image Sequences", author = "Wang, Xiyao and Zhou, Yuhang and Liu, Xiaoyu and Lu, Hongjin and Xu, Yuancheng and He, Feihong and Yoon, J...
Multimodal Large Language Models (MLLMs) have demonstrated proficiency in handling a variety of visual-language tasks. However, current MLLM benchmarks are predominantly designed to evaluate reasoning based on static information about a single image, and the ability of modern MLLMs to extrapolate from image sequences, ...
[ "Wang, Xiyao", "Zhou, Yuhang", "Liu, Xiaoyu", "Lu, Hongjin", "Xu, Yuancheng", "He, Feihong", "Yoon, Jaehong", "Lu, Taixi", "Liu, Fuxiao", "Bertasius, Gedas", "Bansal, Mohit", "Yao, Huaxiu", "Huang, Furong" ]
Mementos: A Comprehensive Benchmark for Multimodal Large Language Model Reasoning over Image Sequences
acl-long.25
Poster
2401.10529v2
https://aclanthology.org/2024.acl-long.26.bib
@inproceedings{gao-etal-2024-ttm, title = "{TTM}-{RE}: Memory-Augmented Document-Level Relation Extraction", author = "Gao, Chufan and Wang, Xuan and Sun, Jimeng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeti...
Document-level relation extraction aims to categorize the association between any two entities within a document. We find that previous methods for document-level relation extraction are ineffective in exploiting the full potential of large amounts of training data with varied noise levels. For example, in the ReDocRED ...
[ "Gao, Chufan", "Wang, Xuan", "Sun, Jimeng" ]
{TTM}-{RE}: Memory-Augmented Document-Level Relation Extraction
acl-long.26
Poster
2310.09265v1
https://aclanthology.org/2024.acl-long.27.bib
@inproceedings{peng-etal-2024-answer, title = "Answer is All You Need: Instruction-following Text Embedding via Answering the Question", author = "Peng, Letian and Zhang, Yuwei and Wang, Zilong and Srinivasa, Jayanth and Liu, Gaowen and Wang, Zihan and Shang, Jingbo", ...
This work aims to build a text embedder that can capture characteristics of texts specified by user instructions clarifying the similarity criterion. While previous methods improve general task awareness by injecting the instruction information into encoding, they fail to be sensitive to clearer criteria like {``}evalu...
[ "Peng, Letian", "Zhang, Yuwei", "Wang, Zilong", "Srinivasa, Jayanth", "Liu, Gaowen", "Wang, Zihan", "Shang, Jingbo" ]
Answer is All You Need: Instruction-following Text Embedding via Answering the Question
acl-long.27
Poster
2402.09642v1
https://aclanthology.org/2024.acl-long.28.bib
@inproceedings{zhou-etal-2024-explore, title = "Explore Spurious Correlations at the Concept Level in Language Models for Text Classification", author = "Zhou, Yuhang and Xu, Paiheng and Liu, Xiaoyu and An, Bang and Ai, Wei and Huang, Furong", editor = "Ku, Lun-Wei and ...
Language models (LMs) have achieved notable success in numerous NLP tasks, employing both fine-tuning and in-context learning (ICL) methods. While language models demonstrate exceptional performance, they face robustness challenges due to spurious correlations arising from imbalanced label distributions in training dat...
[ "Zhou, Yuhang", "Xu, Paiheng", "Liu, Xiaoyu", "An, Bang", "Ai, Wei", "Huang, Furong" ]
Explore Spurious Correlations at the Concept Level in Language Models for Text Classification
acl-long.28
Poster
2311.08648v4
https://aclanthology.org/2024.acl-long.29.bib
@inproceedings{cheng-etal-2024-every, title = "Every Answer Matters: Evaluating Commonsense with Probabilistic Measures", author = "Cheng, Qi and Boratko, Michael and Yelugam, Pranay Kumar and O{'}Gorman, Tim and Singh, Nalini and McCallum, Andrew and Li, Xiang", ed...
Large language models have demonstrated impressive performance on commonsense tasks; however, these tasks are often posed as multiple-choice questions, allowing models to exploit systematic biases. Commonsense is also inherently probabilistic with multiple correct answers. The purpose of {``}boiling water{''} could be ...
[ "Cheng, Qi", "Boratko, Michael", "Yelugam, Pranay Kumar", "O{'}Gorman, Tim", "Singh, Nalini", "McCallum, Andrew", "Li, Xiang" ]
Every Answer Matters: Evaluating Commonsense with Probabilistic Measures
acl-long.29
Poster
2406.04145v1
https://aclanthology.org/2024.acl-long.30.bib
@inproceedings{xie-etal-2024-gradsafe, title = "{G}rad{S}afe: Detecting Jailbreak Prompts for {LLM}s via Safety-Critical Gradient Analysis", author = "Xie, Yueqi and Fang, Minghong and Pi, Renjie and Gong, Neil", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek...
Large Language Models (LLMs) face threats from jailbreak prompts. Existing methods for detecting jailbreak prompts are primarily online moderation APIs or finetuned LLMs. These strategies, however, often require extensive and resource-intensive data collection and training processes. In this study, we propose GradSafe,...
[ "Xie, Yueqi", "Fang, Minghong", "Pi, Renjie", "Gong, Neil" ]
{G}rad{S}afe: Detecting Jailbreak Prompts for {LLM}s via Safety-Critical Gradient Analysis
acl-long.30
Poster
2402.13494v2
https://aclanthology.org/2024.acl-long.31.bib
@inproceedings{lee-etal-2024-pouring, title = "Pouring Your Heart Out: Investigating the Role of Figurative Language in Online Expressions of Empathy", author = "Lee, Gyeongeun and Wong, Christina and Guo, Meghan and Parde, Natalie", editor = "Ku, Lun-Wei and Martins, Andre and ...
Empathy is a social mechanism used to support and strengthen emotional connection with others, including in online communities. However, little is currently known about the nature of these online expressions, nor the particular factors that may lead to their improved detection. In this work, we study the role of a spec...
[ "Lee, Gyeongeun", "Wong, Christina", "Guo, Meghan", "Parde, Natalie" ]
Pouring Your Heart Out: Investigating the Role of Figurative Language in Online Expressions of Empathy
acl-long.31
Poster
2009.08441v1
https://aclanthology.org/2024.acl-long.32.bib
@inproceedings{wang-etal-2024-information, title = "An Information-Theoretic Approach to Analyze {NLP} Classification Tasks", author = "Wang, Luran and Gales, Mark and Raina, Vatsal", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of t...
Understanding the contribution of the inputs on the output is useful across many tasks. This work provides an information-theoretic framework to analyse the influence of inputs for text classification tasks. Natural language processing (NLP) tasks take either a single or multiple text elements to predict an output vari...
[ "Wang, Luran", "Gales, Mark", "Raina, Vatsal" ]
An Information-Theoretic Approach to Analyze {NLP} Classification Tasks
acl-long.32
Poster
2402.00978v1
https://aclanthology.org/2024.acl-long.33.bib
@inproceedings{zhang-etal-2024-model, title = "Can Your Model Tell a Negation from an Implicature? Unravelling Challenges With Intent Encoders", author = "Zhang, Yuwei and Singh, Siffi and Sengupta, Sailik and Shalyminov, Igor and Su, Hang and Song, Hwanjun and Mansour,...
Conversational systems often rely on embedding models for intent classification and intent clustering tasks. The advent of Large Language Models (LLMs), which enable instructional embeddings allowing one to adjust semantics over the embedding space using prompts, is being viewed as a panacea for these downstream conve...
[ "Zhang, Yuwei", "Singh, Siffi", "Sengupta, Sailik", "Shalyminov, Igor", "Su, Hang", "Song, Hwanjun", "Mansour, Saab" ]
Can Your Model Tell a Negation from an Implicature? Unravelling Challenges With Intent Encoders
acl-long.33
Poster
2403.04314v1
https://aclanthology.org/2024.acl-long.34.bib
@inproceedings{he-etal-2024-wav2gloss, title = "{W}av2{G}loss: Generating Interlinear Glossed Text from Speech", author = "He, Taiqi and Choi, Kwanghee and Tjuatja, Lindia and Robinson, Nathaniel and Shi, Jiatong and Watanabe, Shinji and Neubig, Graham and Morten...
Thousands of the world{'}s languages are in danger of extinction{---}a tremendous threat to cultural identities and human language diversity. Interlinear Glossed Text (IGT) is a form of linguistic annotation that can support documentation and resource creation for these languages{'} communities. IGT typically consists ...
[ "He, Taiqi", "Choi, Kwanghee", "Tjuatja, Lindia", "Robinson, Nathaniel", "Shi, Jiatong", "Watanabe, Shinji", "Neubig, Graham", "Mortensen, David", "Levin, Lori" ]
{W}av2{G}loss: Generating Interlinear Glossed Text from Speech
acl-long.34
Poster
2403.13169v2
https://aclanthology.org/2024.acl-long.35.bib
@inproceedings{hu-etal-2024-leveraging, title = "Leveraging Codebook Knowledge with {NLI} and {C}hat{GPT} for Zero-Shot Political Relation Classification", author = "Hu, Yibo and Skorupa Parolin, Erick and Khan, Latifur and Brandt, Patrick and Osorio, Javier and D{'}Orazio, Vi...
Is it possible to accurately classify political relations within evolving event ontologies without extensive annotations? This study investigates zero-shot learning methods that use expert knowledge from an existing annotation codebook, and evaluates the performance of advanced ChatGPT (GPT-3.5/4) and a natural language infe...
[ "Hu, Yibo", "Skorupa Parolin, Erick", "Khan, Latifur", "Brandt, Patrick", "Osorio, Javier", "D{'}Orazio, Vito" ]
Leveraging Codebook Knowledge with {NLI} and {C}hat{GPT} for Zero-Shot Political Relation Classification
acl-long.35
Poster
2308.07876v3
https://aclanthology.org/2024.acl-long.36.bib
@inproceedings{xu-wang-2024-spor, title = "{SPOR}: A Comprehensive and Practical Evaluation Method for Compositional Generalization in Data-to-Text Generation", author = "Xu, Ziyao and Wang, Houfeng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Procee...
Compositional generalization is an important ability of language models and has many different manifestations. For data-to-text generation, previous research on this ability is limited to a single manifestation called Systematicity and lacks consideration of large language models (LLMs), which cannot fully cover practi...
[ "Xu, Ziyao", "Wang, Houfeng" ]
{SPOR}: A Comprehensive and Practical Evaluation Method for Compositional Generalization in Data-to-Text Generation
acl-long.36
Poster
2405.10650v8
https://aclanthology.org/2024.acl-long.37.bib
@inproceedings{shi-etal-2024-opex, title = "{OPE}x: A Component-Wise Analysis of {LLM}-Centric Agents in Embodied Instruction Following", author = "Shi, Haochen and Sun, Zhiyuan and Yuan, Xingdi and C{\^o}t{\'e}, Marc-Alexandre and Liu, Bang", editor = "Ku, Lun-Wei and Mar...
Embodied Instruction Following (EIF) is a crucial task in embodied learning, requiring agents to interact with their environment through egocentric observations to fulfill natural language instructions. Recent advancements have seen a surge in employing large language models (LLMs) within a framework-centric approach t...
[ "Shi, Haochen", "Sun, Zhiyuan", "Yuan, Xingdi", "C{\\^o}t{\\'e}, Marc-Alexandre", "Liu, Bang" ]
{OPE}x: A Component-Wise Analysis of {LLM}-Centric Agents in Embodied Instruction Following
acl-long.37
Poster
2310.12344v1
https://aclanthology.org/2024.acl-long.38.bib
@inproceedings{shen-etal-2024-multimodal, title = "Multimodal Instruction Tuning with Conditional Mixture of {L}o{RA}", author = "Shen, Ying and Xu, Zhiyang and Wang, Qifan and Cheng, Yu and Yin, Wenpeng and Huang, Lifu", editor = "Ku, Lun-Wei and Martins, Andre an...
Multimodal Large Language Models (MLLMs) have demonstrated remarkable proficiency in diverse tasks across different domains, with an increasing focus on improving their zero-shot generalization capabilities for unseen multimodal tasks. Multimodal instruction tuning has emerged as a successful strategy for achieving zer...
[ "Shen, Ying", "Xu, Zhiyang", "Wang, Qifan", "Cheng, Yu", "Yin, Wenpeng", "Huang, Lifu" ]
Multimodal Instruction Tuning with Conditional Mixture of {L}o{RA}
acl-long.38
Poster
2402.15896v1
https://aclanthology.org/2024.acl-long.39.bib
@inproceedings{xie-etal-2024-doclens, title = "{D}oc{L}ens: Multi-aspect Fine-grained Medical Text Evaluation", author = "Xie, Yiqing and Zhang, Sheng and Cheng, Hao and Liu, Pengfei and Gero, Zelalem and Wong, Cliff and Naumann, Tristan and Poon, Hoifung and ...
Medical text generation aims to assist with administrative work and highlight salient information to support decision-making. To reflect the specific requirements of medical text, in this paper, we propose a set of metrics to evaluate the completeness, conciseness, and attribution of the generated text at a fine-grained...
[ "Xie, Yiqing", "Zhang, Sheng", "Cheng, Hao", "Liu, Pengfei", "Gero, Zelalem", "Wong, Cliff", "Naumann, Tristan", "Poon, Hoifung", "Rose, Carolyn" ]
{D}oc{L}ens: Multi-aspect Fine-grained Medical Text Evaluation
acl-long.39
Poster
2404.07613v1
https://aclanthology.org/2024.acl-long.40.bib
@inproceedings{xia-etal-2024-fofo, title = "{FOFO}: A Benchmark to Evaluate {LLM}s{'} Format-Following Capability", author = "Xia, Congying and Xing, Chen and Du, Jiangshu and Yang, Xinyi and Feng, Yihao and Xu, Ran and Yin, Wenpeng and Xiong, Caiming", edito...
This paper presents FoFo, a pioneering benchmark for evaluating large language models{'} (LLMs) ability to follow complex, domain-specific formats, a crucial yet under-examined capability for their application as AI agents. Despite LLMs{'} advancements, existing benchmarks fail to assess their format-following proficie...
[ "Xia, Congying", "Xing, Chen", "Du, Jiangshu", "Yang, Xinyi", "Feng, Yihao", "Xu, Ran", "Yin, Wenpeng", "Xiong, Caiming" ]
{FOFO}: A Benchmark to Evaluate {LLM}s{'} Format-Following Capability
acl-long.40
Poster
2403.12316v1
https://aclanthology.org/2024.acl-long.41.bib
@inproceedings{yoo-etal-2024-hyper, title = "Hyper-{CL}: Conditioning Sentence Representations with Hypernetworks", author = "Yoo, Young and Cha, Jii and Kim, Changhyeon and Kim, Taeuk", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Procee...
While the introduction of contrastive learning frameworks in sentence representation learning has significantly contributed to advancements in the field, it still remains unclear whether state-of-the-art sentence embeddings can capture the fine-grained semantics of sentences, particularly when conditioned on specific p...
[ "Yoo, Young", "Cha, Jii", "Kim, Changhyeon", "Kim, Taeuk" ]
Hyper-{CL}: Conditioning Sentence Representations with Hypernetworks
acl-long.41
Poster
2403.09490v2
https://aclanthology.org/2024.acl-long.42.bib
@inproceedings{lim-etal-2024-analysis, title = "Analysis of Multi-Source Language Training in Cross-Lingual Transfer", author = "Lim, Seonghoon and Yun, Taejun and Kim, Jinhyeon and Choi, Jihun and Kim, Taeuk", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, ...
The successful adaptation of multilingual language models (LMs) to a specific language-task pair critically depends on the availability of data tailored for that condition. While cross-lingual transfer (XLT) methods have contributed to addressing this data scarcity problem, there still exists ongoing debate about the m...
[ "Lim, Seonghoon", "Yun, Taejun", "Kim, Jinhyeon", "Choi, Jihun", "Kim, Taeuk" ]
Analysis of Multi-Source Language Training in Cross-Lingual Transfer
acl-long.42
Poster
1712.01813v1
https://aclanthology.org/2024.acl-long.43.bib
@inproceedings{ghosh-etal-2024-abex, title = "{ABEX}: Data Augmentation for Low-Resource {NLU} via Expanding Abstract Descriptions", author = "Ghosh, Sreyan and Tyagi, Utkarsh and Kumar, Sonal and Evuru, Chandra Kiran and S, Ramaneswaran and Sakshi, S and Manocha, Dines...
We present ABEX, a novel and effective generative data augmentation methodology for low-resource Natural Language Understanding (NLU) tasks. ABEX is based on ABstract-and-EXpand, a novel paradigm for generating diverse forms of an input document {--} we first convert a document into its concise, abstract description an...
[ "Ghosh, Sreyan", "Tyagi, Utkarsh", "Kumar, Sonal", "Evuru, Chandra Kiran", "S, Ramaneswaran", "Sakshi, S", "Manocha, Dinesh" ]
{ABEX}: Data Augmentation for Low-Resource {NLU} via Expanding Abstract Descriptions
acl-long.43
Poster
2406.04286v1
https://aclanthology.org/2024.acl-long.44.bib
@inproceedings{bandarkar-etal-2024-belebele, title = "The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants", author = "Bandarkar, Lucas and Liang, Davis and Muller, Benjamin and Artetxe, Mikel and Shukla, Satya Narayan and Husa, Donald and...
We present Belebele, a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. Significantly expanding the language coverage of natural language understanding (NLU) benchmarks, this dataset enables the evaluation of text models in high-, medium-, and low-resource languages. Each ques...
[ "Bandarkar, Lucas", "Liang, Davis", "Muller, Benjamin", "Artetxe, Mikel", "Shukla, Satya Narayan", "Husa, Donald", "Goyal, Naman", "Krishnan, Abhinandan", "Zettlemoyer, Luke", "Khabsa, Madian" ]
The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants
acl-long.44
Poster
2308.16884v2
https://aclanthology.org/2024.acl-long.45.bib
@inproceedings{an-etal-2024-learn, title = "Learn from Failure: Fine-tuning {LLM}s with Trial-and-Error Data for Intuitionistic Propositional Logic Proving", author = "An, Chenyang and Chen, Zhibo and Ye, Qihao and First, Emily and Peng, Letian and Zhang, Jiayun and Wan...
Recent advances in Automated Theorem Proving have shown the effectiveness of leveraging a (large) language model that generates tactics (i.e. proof steps) to search through proof states. The current model, while trained solely on successful proof paths, faces a discrepancy at the inference stage, as it must sample and ...
[ "An, Chenyang", "Chen, Zhibo", "Ye, Qihao", "First, Emily", "Peng, Letian", "Zhang, Jiayun", "Wang, Zihan", "Lerner, Sorin", "Shang, Jingbo" ]
Learn from Failure: Fine-tuning {LLM}s with Trial-and-Error Data for Intuitionistic Propositional Logic Proving
acl-long.45
Poster
2207.07306v1
https://aclanthology.org/2024.acl-long.46.bib
@inproceedings{lee-etal-2024-interactive, title = "Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach", author = "Lee, Saehyung and Yu, Sangwon and Park, Junsung and Yi, Jihun and Yoon, Sungroh", editor = "Ku, Lun-Wei and Martins, Andr...
In this paper, we primarily address the issue of dialogue-form context query within the interactive text-to-image retrieval task. Our methodology, PlugIR, actively utilizes the general instruction-following capability of LLMs in two ways. First, by reformulating the dialogue-form context, we eliminate the necessity of ...
[ "Lee, Saehyung", "Yu, Sangwon", "Park, Junsung", "Yi, Jihun", "Yoon, Sungroh" ]
Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach
acl-long.46
Oral
2404.05825v1
https://aclanthology.org/2024.acl-long.47.bib
@inproceedings{lin-etal-2024-imbue, title = "{IMBUE}: Improving Interpersonal Effectiveness through Simulation and Just-in-time Feedback with Human-Language Model Interaction", author = "Lin, Inna and Sharma, Ashish and Rytting, Christopher and Miner, Adam and Suh, Jina and Al...
Navigating certain communication situations can be challenging due to individuals{'} lack of skills and the interference of strong emotions. However, effective learning opportunities are rarely accessible. In this work, we conduct a human-centered study that uses language models to simulate bespoke communication traini...
[ "Lin, Inna", "Sharma, Ashish", "Rytting, Christopher", "Miner, Adam", "Suh, Jina", "Althoff, Tim" ]
{IMBUE}: Improving Interpersonal Effectiveness through Simulation and Just-in-time Feedback with Human-Language Model Interaction
acl-long.47
Poster
2402.12556v1
https://aclanthology.org/2024.acl-long.48.bib
@inproceedings{lin-etal-2024-token, title = "Token-wise Influential Training Data Retrieval for Large Language Models", author = "Lin, Huawei and Long, Jikai and Xu, Zhaozhuo and Zhao, Weijie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = ...
Given a Large Language Model (LLM) generation, how can we identify which training data led to this generation? In this paper, we propose RapidIn, a scalable framework adapted to LLMs for estimating the influence of each training data item. The proposed framework consists of two stages: caching and retrieval. First, we com...
[ "Lin, Huawei", "Long, Jikai", "Xu, Zhaozhuo", "Zhao, Weijie" ]
Token-wise Influential Training Data Retrieval for Large Language Models
acl-long.48
Poster
2305.13286v2
https://aclanthology.org/2024.acl-long.49.bib
@inproceedings{weinzierl-harabagiu-2024-tree, title = "Tree-of-Counterfactual Prompting for Zero-Shot Stance Detection", author = "Weinzierl, Maxwell and Harabagiu, Sanda", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Mee...
Stance detection enables the inference of attitudes from human communications. Automatic stance identification has mostly been cast as a classification problem. However, stance decisions involve complex judgments, which can nowadays be generated by prompting Large Language Models (LLMs). In this paper we present a new metho...
[ "Weinzierl, Maxwell", "Harabagiu, Sanda" ]
Tree-of-Counterfactual Prompting for Zero-Shot Stance Detection
acl-long.49
Poster
2310.19750v1
https://aclanthology.org/2024.acl-long.50.bib
@inproceedings{koh-etal-2024-visualwebarena, title = "{V}isual{W}eb{A}rena: Evaluating Multimodal Agents on Realistic Visual Web Tasks", author = "Koh, Jing Yu and Lo, Robert and Jang, Lawrence and Duvvur, Vikram and Lim, Ming and Huang, Po-Yu and Neubig, Graham and ...
Autonomous agents capable of planning, reasoning, and executing actions on the web offer a promising avenue for automating computer tasks. However, the majority of existing benchmarks primarily focus on text-based agents, neglecting many natural tasks that require visual information to effectively solve. Given that mos...
[ "Koh, Jing Yu", "Lo, Robert", "Jang, Lawrence", "Duvvur, Vikram", "Lim, Ming", "Huang, Po-Yu", "Neubig, Graham", "Zhou, Shuyan", "Salakhutdinov, Russ", "Fried, Daniel" ]
{V}isual{W}eb{A}rena: Evaluating Multimodal Agents on Realistic Visual Web Tasks
acl-long.50
Poster
2401.13649v2
https://aclanthology.org/2024.acl-long.51.bib
@inproceedings{song-etal-2024-finesure, title = "{F}ine{S}ur{E}: Fine-grained Summarization Evaluation using {LLM}s", author = "Song, Hwanjun and Su, Hang and Shalyminov, Igor and Cai, Jason and Mansour, Saab", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, ...
Automated evaluation is crucial for streamlining text summarization benchmarking and model development, given the costly and time-consuming nature of human evaluation. Traditional methods like ROUGE do not correlate well with human judgment, while recently proposed LLM-based metrics provide only summary-level assessmen...
[ "Song, Hwanjun", "Su, Hang", "Shalyminov, Igor", "Cai, Jason", "Mansour, Saab" ]
{F}ine{S}ur{E}: Fine-grained Summarization Evaluation using {LLM}s
acl-long.51
Poster
2402.17008v1
https://aclanthology.org/2024.acl-long.52.bib
@inproceedings{ahn-etal-2024-tuning, title = "Tuning Large Multimodal Models for Videos using Reinforcement Learning from {AI} Feedback", author = "Ahn, Daechul and Choi, Yura and Yu, Youngjae and Kang, Dongyeop and Choi, Jonghyun", editor = "Ku, Lun-Wei and Martins, Andre...
Recent advancements in large language models have influenced the development of video large multimodal models (VLMMs). Previous approaches for VLMMs involve Supervised Fine-Tuning (SFT) with instruction-tuned datasets, integrating LLM with visual encoders, and additional learnable parameters. Here, aligning video with ...
[ "Ahn, Daechul", "Choi, Yura", "Yu, Youngjae", "Kang, Dongyeop", "Choi, Jonghyun" ]
Tuning Large Multimodal Models for Videos using Reinforcement Learning from {AI} Feedback
acl-long.52
Oral
2402.03746v3
https://aclanthology.org/2024.acl-long.53.bib
@inproceedings{zhan-etal-2024-prompt, title = "Prompt Refinement with Image Pivot for Text-to-Image Generation", author = "Zhan, Jingtao and Ai, Qingyao and Liu, Yiqun and Pan, Yingwei and Yao, Ting and Mao, Jiaxin and Ma, Shaoping and Mei, Tao", editor = "Ku...
For text-to-image generation, automatically refining user-provided natural language prompts into the keyword-enriched prompts favored by systems is essential for the user experience. Such a prompt refinement process is analogous to translating the prompt from {``}user languages{''} into {``}system languages{''}. Howeve...
[ "Zhan, Jingtao", "Ai, Qingyao", "Liu, Yiqun", "Pan, Yingwei", "Yao, Ting", "Mao, Jiaxin", "Ma, Shaoping", "Mei, Tao" ]
Prompt Refinement with Image Pivot for Text-to-Image Generation
acl-long.53
Poster
2407.00247v1
https://aclanthology.org/2024.acl-long.54.bib
@inproceedings{mita-etal-2024-striking, title = "Striking Gold in Advertising: Standardization and Exploration of Ad Text Generation", author = "Mita, Masato and Murakami, Soichiro and Kato, Akihiko and Zhang, Peinan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar,...
In response to the limitations of manual ad creation, significant research has been conducted in the field of automatic ad text generation (ATG). However, the lack of comprehensive benchmarks and well-defined problem sets has made comparing different methods challenging. To tackle these challenges, we standardize the t...
[ "Mita, Masato", "Murakami, Soichiro", "Kato, Akihiko", "Zhang, Peinan" ]
Striking Gold in Advertising: Standardization and Exploration of Ad Text Generation
acl-long.54
Poster
2309.12030v2
https://aclanthology.org/2024.acl-long.55.bib
@inproceedings{wang-etal-2024-absinstruct, title = "{A}bs{I}nstruct: Eliciting Abstraction Ability from {LLM}s through Explanation Tuning with Plausibility Estimation", author = "Wang, Zhaowei and Fan, Wei and Zong, Qing and Zhang, Hongming and Choi, Sehyun and Fang, Tianqing ...
Abstraction ability is crucial to human intelligence and can also benefit various tasks in NLP research. Existing work shows that LLMs are deficient in abstraction ability, and how to improve it remains unexplored. In this work, we design the framework AbsInstruct to enhance LLMs{'} abstraction ability through instruction...
[ "Wang, Zhaowei", "Fan, Wei", "Zong, Qing", "Zhang, Hongming", "Choi, Sehyun", "Fang, Tianqing", "Liu, Xin", "Song, Yangqiu", "Wong, Ginny", "See, Simon" ]
{A}bs{I}nstruct: Eliciting Abstraction Ability from {LLM}s through Explanation Tuning with Plausibility Estimation
acl-long.55
Poster
2402.10646v2
https://aclanthology.org/2024.acl-long.56.bib
@inproceedings{zhou-etal-2024-reflect, title = "Reflect-{RL}: Two-Player Online {RL} Fine-Tuning for {LM}s", author = "Zhou, Runlong and Du, Simon and Li, Beibin", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeti...
As language models (LMs) demonstrate their capabilities in various fields, their application to tasks requiring multi-round interactions has become increasingly popular. These tasks usually have complex dynamics, so supervised fine-tuning (SFT) on a limited offline dataset does not yield good performance. However, only...
[ "Zhou, Runlong", "Du, Simon", "Li, Beibin" ]
Reflect-{RL}: Two-Player Online {RL} Fine-Tuning for {LM}s
acl-long.56
Poster
2402.12621v2
https://aclanthology.org/2024.acl-long.57.bib
@inproceedings{yang-etal-2024-chatgpts, title = "Can {C}hat{GPT}{'}s Performance be Improved on Verb Metaphor Detection Tasks? Bootstrapping and Combining Tacit Knowledge", author = "Yang, Cheng and Chen, Puli and Huang, Qingbao", editor = "Ku, Lun-Wei and Martins, Andre and Srik...
Metaphor detection, an important task in the field of NLP, has received sustained academic attention in recent years. Current research focuses on supervised metaphor detection systems, which usually require large-scale, high-quality labeled data. The emergence of large language models (e.g., ChatGPT) has m...
[ "Yang, Cheng", "Chen, Puli", "Huang, Qingbao" ]
Can {C}hat{GPT}{'}s Performance be Improved on Verb Metaphor Detection Tasks? Bootstrapping and Combining Tacit Knowledge
acl-long.57
Poster
2306.17177v1
https://aclanthology.org/2024.acl-long.58.bib
@inproceedings{yang-etal-2024-self, title = "Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning", author = "Yang, Zhaorui and Pang, Tianyu and Feng, Haozhe and Wang, Han and Chen, Wei and Zhu, Minfeng and Liu, Qian", editor = "Ku, Lun-Wei and ...
The surge in Large Language Models (LLMs) has revolutionized natural language processing, but fine-tuning them for specific tasks often encounters challenges in balancing performance and preserving general instruction-following abilities. In this paper, we posit that the distribution gap between task datasets and the L...
[ "Yang, Zhaorui", "Pang, Tianyu", "Feng, Haozhe", "Wang, Han", "Chen, Wei", "Zhu, Minfeng", "Liu, Qian" ]
Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning
acl-long.58
Poster
2205.07830v1
https://aclanthology.org/2024.acl-long.59.bib
@inproceedings{zhu-etal-2024-information, title = "An Information Bottleneck Perspective for Effective Noise Filtering on Retrieval-Augmented Generation", author = "Zhu, Kun and Feng, Xiaocheng and Du, Xiyuan and Gu, Yuxuan and Yu, Weijiang and Wang, Haotian and Chen, Q...
Retrieval-augmented generation integrates the capabilities of large language models with relevant information retrieved from an extensive corpus, yet encounters challenges when confronted with real-world noisy data. One recent solution is to train a filter module to find relevant content, but this achieves only suboptimal noi...
[ "Zhu, Kun", "Feng, Xiaocheng", "Du, Xiyuan", "Gu, Yuxuan", "Yu, Weijiang", "Wang, Haotian", "Chen, Qianglong", "Chu, Zheng", "Chen, Jingchang", "Qin, Bing" ]
An Information Bottleneck Perspective for Effective Noise Filtering on Retrieval-Augmented Generation
acl-long.59
Oral
2406.01549v2
https://aclanthology.org/2024.acl-long.60.bib
@inproceedings{jiang-etal-2024-rora, title = "{RORA}: Robust Free-Text Rationale Evaluation", author = "Jiang, Zhengping and Lu, Yining and Chen, Hanjie and Khashabi, Daniel and Van Durme, Benjamin and Liu, Anqi", editor = "Ku, Lun-Wei and Martins, Andre and ...
Free-text rationales play a pivotal role in explainable NLP, bridging the knowledge and reasoning gaps behind a model{'}s decision-making. However, due to the diversity of potential reasoning paths and a corresponding lack of definitive ground truth, their evaluation remains a challenge. Existing metrics rely on the de...
[ "Jiang, Zhengping", "Lu, Yining", "Chen, Hanjie", "Khashabi, Daniel", "Van Durme, Benjamin", "Liu, Anqi" ]
{RORA}: Robust Free-Text Rationale Evaluation
acl-long.60
Poster
2010.04736v1
https://aclanthology.org/2024.acl-long.61.bib
@inproceedings{qian-etal-2024-tell, title = "Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents", author = "Qian, Cheng and He, Bingxiang and Zhuang, Zhong and Deng, Jia and Qin, Yujia and Cong, Xin and Zhang, Zhong and Zh...
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions. Although adept at devising strategies and performing tasks, these agents struggle with seeking clarification and grasping precise user intentions. To bri...
[ "Qian, Cheng", "He, Bingxiang", "Zhuang, Zhong", "Deng, Jia", "Qin, Yujia", "Cong, Xin", "Zhang, Zhong", "Zhou, Jie", "Lin, Yankai", "Liu, Zhiyuan", "Sun, Maosong" ]
Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents
acl-long.61
Poster
2309.17057v2
https://aclanthology.org/2024.acl-long.62.bib
@inproceedings{wang-etal-2024-instructprotein, title = "{I}nstruct{P}rotein: Aligning Human and Protein Language via Knowledge Instruction", author = "Wang, Zeyuan and Zhang, Qiang and Ding, Keyan and Qin, Ming and Zhuang, Xiang and Li, Xiaotong and Chen, Huajun", e...
Large Language Models (LLMs) have revolutionized the field of natural language processing, but they fall short in comprehending biological sequences such as proteins. To address this challenge, we propose InstructProtein, an innovative LLM that possesses bidirectional generation capabilities in both human and protein l...
[ "Wang, Zeyuan", "Zhang, Qiang", "Ding, Keyan", "Qin, Ming", "Zhuang, Xiang", "Li, Xiaotong", "Chen, Huajun" ]
{I}nstruct{P}rotein: Aligning Human and Protein Language via Knowledge Instruction
acl-long.62
Poster
2310.03269v1
https://aclanthology.org/2024.acl-long.63.bib
@inproceedings{elangovan-etal-2024-considers, title = "{C}on{S}i{DERS}-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models", author = "Elangovan, Aparna and Liu, Ling and Xu, Lei and Bodapati, Sravan Babu and Roth, Dan", editor = "Ku, ...
In this position paper, we argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking that draws upon the insights from disciplines such as user experience research and human behavioral psychology to ensure that the experimental design and results are reliable. The ...
[ "Elangovan, Aparna", "Liu, Ling", "Xu, Lei", "Bodapati, Sravan Babu", "Roth, Dan" ]
{C}on{S}i{DERS}-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models
acl-long.63
Poster
2310.19740v1
https://aclanthology.org/2024.acl-long.64.bib
@inproceedings{tu-etal-2024-linguistically, title = "Linguistically Conditioned Semantic Textual Similarity", author = "Tu, Jingxuan and Xu, Keer and Yue, Liulu and Ye, Bingyang and Rim, Kyeongmin and Pustejovsky, James", editor = "Ku, Lun-Wei and Martins, Andre an...
Semantic textual similarity (STS) is a fundamental NLP task that measures the semantic similarity between a pair of sentences. In order to reduce the inherent ambiguity posed by the sentences, a recent work called Conditional STS (C-STS) has been proposed to measure the sentences{'} similarity conditioned on a certai...
[ "Tu, Jingxuan", "Xu, Keer", "Yue, Liulu", "Ye, Bingyang", "Rim, Kyeongmin", "Pustejovsky, James" ]
Linguistically Conditioned Semantic Textual Similarity
acl-long.64
Poster
2305.07893v2
https://aclanthology.org/2024.acl-long.65.bib
@inproceedings{chu-etal-2024-navigate, title = "Navigate through Enigmatic Labyrinth A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future", author = "Chu, Zheng and Chen, Jingchang and Chen, Qianglong and Yu, Weijiang and He, Tao and Wang, Haotian and ...
Reasoning, a fundamental cognitive process integral to human intelligence, has garnered substantial interest within artificial intelligence. Notably, recent studies have revealed that chain-of-thought prompting significantly enhances LLM{'}s reasoning capabilities, which attracts widespread attention from both academics...
[ "Chu, Zheng", "Chen, Jingchang", "Chen, Qianglong", "Yu, Weijiang", "He, Tao", "Wang, Haotian", "Peng, Weihua", "Liu, Ming", "Qin, Bing", "Liu, Ting" ]
Navigate through Enigmatic Labyrinth A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future
acl-long.65
Poster
2309.15402v3
https://aclanthology.org/2024.acl-long.66.bib
@inproceedings{chu-etal-2024-timebench, title = "{T}ime{B}ench: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models", author = "Chu, Zheng and Chen, Jingchang and Chen, Qianglong and Yu, Weijiang and Wang, Haotian and Liu, Ming and Qin, B...
Grasping the concept of time is a fundamental facet of human cognition, indispensable for truly comprehending the intricacies of the world. Previous studies typically focus on specific aspects of time, lacking a comprehensive temporal reasoning benchmark. To address this, we propose TimeBench, a comprehensive hierarchica...
[ "Chu, Zheng", "Chen, Jingchang", "Chen, Qianglong", "Yu, Weijiang", "Wang, Haotian", "Liu, Ming", "Qin, Bing" ]
{T}ime{B}ench: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models
acl-long.66
Poster
2311.17667v2
https://aclanthology.org/2024.acl-long.67.bib
@inproceedings{chu-etal-2024-beamaggr, title = "{B}eam{A}gg{R}: Beam Aggregation Reasoning over Multi-source Knowledge for Multi-hop Question Answering", author = "Chu, Zheng and Chen, Jingchang and Chen, Qianglong and Wang, Haotian and Zhu, Kun and Du, Xiyuan and Yu, W...
Large language models (LLMs) have demonstrated strong reasoning capabilities. Nevertheless, they still suffer from factual errors when tackling knowledge-intensive tasks. Retrieval-augmented reasoning represents a promising approach. However, significant challenges still persist, including inaccurate and insufficient retr...
[ "Chu, Zheng", "Chen, Jingchang", "Chen, Qianglong", "Wang, Haotian", "Zhu, Kun", "Du, Xiyuan", "Yu, Weijiang", "Liu, Ming", "Qin, Bing" ]
{B}eam{A}gg{R}: Beam Aggregation Reasoning over Multi-source Knowledge for Multi-hop Question Answering
acl-long.67
Oral
2406.19820v1
https://aclanthology.org/2024.acl-long.68.bib
@inproceedings{yuan-etal-2024-analogykb, title = "{ANALOGYKB}: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base", author = "Yuan, Siyu and Chen, Jiangjie and Sun, Changzhi and Liang, Jiaqing and Xiao, Yanghua and Yang, Deqing", editor =...
Analogical reasoning is a fundamental cognitive ability of humans. However, current language models (LMs) still struggle to achieve human-like performance in analogical reasoning tasks due to a lack of resources for model training. In this work, we address this gap by proposing ANALOGYKB, a million-scale analogy knowle...
[ "Yuan, Siyu", "Chen, Jiangjie", "Sun, Changzhi", "Liang, Jiaqing", "Xiao, Yanghua", "Yang, Deqing" ]
{ANALOGYKB}: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base
acl-long.68
Poster
2305.05994v2
https://aclanthology.org/2024.acl-long.69.bib
@inproceedings{feng-etal-2024-tasl, title = "{T}a{SL}: Continual Dialog State Tracking via Task Skill Localization and Consolidation", author = "Feng, Yujie and Chu, Xu and Xu, Yongxin and Shi, Guangyuan and Liu, Bo and Wu, Xiao-Ming", editor = "Ku, Lun-Wei and Mart...
A practical dialogue system requires the capacity for ongoing skill acquisition and adaptability to new tasks while preserving prior knowledge. However, current methods for Continual Dialogue State Tracking (DST), a crucial function of dialogue systems, struggle with the catastrophic forgetting issue and knowledge tran...
[ "Feng, Yujie", "Chu, Xu", "Xu, Yongxin", "Shi, Guangyuan", "Liu, Bo", "Wu, Xiao-Ming" ]
{T}a{SL}: Continual Dialog State Tracking via Task Skill Localization and Consolidation
acl-long.69
Poster
2408.05200v1
https://aclanthology.org/2024.acl-long.70.bib
@inproceedings{dai-etal-2024-deepseekmoe, title = "{D}eep{S}eek{M}o{E}: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models", author = "Dai, Damai and Deng, Chengqi and Zhao, Chenggang and Xu, R.x. and Gao, Huazuo and Chen, Deli and Li, Jiashi ...
In the era of large language models, Mixture-of-Experts (MoE) is a promising architecture for managing computational costs when scaling up model parameters. However, conventional MoE architectures like GShard, which activate the top-$K$ out of $N$ experts, face challenges in ensuring expert specialization, i.e. each ex...
[ "Dai, Damai", "Deng, Chengqi", "Zhao, Chenggang", "Xu, R.x.", "Gao, Huazuo", "Chen, Deli", "Li, Jiashi", "Zeng, Wangding", "Yu, Xingkai", "Wu, Y.", "Xie, Zhenda", "Li, Y.k.", "Huang, Panpan", "Luo, Fuli", "Ruan, Chong", "Sui, Zhifang", "Liang, Wenfeng" ]
{D}eep{S}eek{M}o{E}: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
acl-long.70
Poster
2401.06066v1
https://aclanthology.org/2024.acl-long.71.bib
@inproceedings{qian-etal-2024-grounding, title = "Grounding Language Model with Chunking-Free In-Context Retrieval", author = "Qian, Hongjin and Liu, Zheng and Mao, Kelong and Zhou, Yujia and Dou, Zhicheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Viv...
This paper presents a novel Chunking-Free In-Context (CFIC) retrieval approach, specifically tailored for Retrieval-Augmented Generation (RAG) systems. Traditional RAG systems often struggle with grounding responses using precise evidence text due to the challenges of processing lengthy documents and filtering out irre...
[ "Qian, Hongjin", "Liu, Zheng", "Mao, Kelong", "Zhou, Yujia", "Dou, Zhicheng" ]
Grounding Language Model with Chunking-Free In-Context Retrieval
acl-long.71
Poster
2305.09731v1
https://aclanthology.org/2024.acl-long.72.bib
@inproceedings{bai-etal-2024-advancing, title = "Advancing Abductive Reasoning in Knowledge Graphs through Complex Logical Hypothesis Generation", author = "Bai, Jiaxin and Wang, Yicheng and Zheng, Tianshi and Guo, Yue and Liu, Xin and Song, Yangqiu", editor = "Ku, Lun-Wei...
Abductive reasoning is the process of making educated guesses to provide explanations for observations. Although many applications require the use of knowledge for explanations, the utilization of abductive reasoning in conjunction with structured knowledge, such as a knowledge graph, remains largely unexplored. To fil...
[ "Bai, Jiaxin", "Wang, Yicheng", "Zheng, Tianshi", "Guo, Yue", "Liu, Xin", "Song, Yangqiu" ]
Advancing Abductive Reasoning in Knowledge Graphs through Complex Logical Hypothesis Generation
acl-long.72
Poster
2312.15643v3
https://aclanthology.org/2024.acl-long.73.bib
@inproceedings{diao-etal-2024-active, title = "Active Prompting with Chain-of-Thought for Large Language Models", author = "Diao, Shizhe and Wang, Pengcheng and Lin, Yong and Pan, Rui and Liu, Xiang and Zhang, Tong", editor = "Ku, Lun-Wei and Martins, Andre and ...
The increasing scale of large language models (LLMs) brings emergent abilities to various complex tasks requiring reasoning, such as arithmetic and commonsense reasoning. It is known that the effective design of task-specific prompts is critical for LLMs{'} ability to produce high-quality answers. In particular, an eff...
[ "Diao, Shizhe", "Wang, Pengcheng", "Lin, Yong", "Pan, Rui", "Liu, Xiang", "Zhang, Tong" ]
Active Prompting with Chain-of-Thought for Large Language Models
acl-long.73
Poster
2402.11755v1
https://aclanthology.org/2024.acl-long.74.bib
@inproceedings{zhao-etal-2024-easygen, title = "{E}asy{G}en: Easing Multimodal Generation with {B}i{D}iffuser and {LLM}s", author = "Zhao, Xiangyu and Liu, Bo and Liu, Qijiong and Shi, Guangyuan and Wu, Xiao-Ming", editor = "Ku, Lun-Wei and Martins, Andre and Srikum...
We present EasyGen, an efficient model designed to enhance multimodal understanding and generation by harnessing the capabilities of diffusion models and large language models (LLMs). Unlike existing multimodal models that predominantly depend on encoders like CLIP or ImageBind and need ample amounts of training data t...
[ "Zhao, Xiangyu", "Liu, Bo", "Liu, Qijiong", "Shi, Guangyuan", "Wu, Xiao-Ming" ]
{E}asy{G}en: Easing Multimodal Generation with {B}i{D}iffuser and {LLM}s
acl-long.74
Poster
2310.08949v3
https://aclanthology.org/2024.acl-long.75.bib
@inproceedings{li-etal-2024-rewriting, title = "Rewriting the Code: A Simple Method for Large Language Model Augmented Code Search", author = "Li, Haochen and Zhou, Xin and Shen, Zhiqi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings o...
In code search, the Generation-Augmented Retrieval (GAR) framework, which generates exemplar code snippets to augment queries, has emerged as a promising strategy to address the principal challenge of modality misalignment between code snippets and natural language queries, particularly with the demonstrated code gener...
[ "Li, Haochen", "Zhou, Xin", "Shen, Zhiqi" ]
Rewriting the Code: A Simple Method for Large Language Model Augmented Code Search
acl-long.75
Oral
2401.04514v2
https://aclanthology.org/2024.acl-long.76.bib
@inproceedings{baes-etal-2024-multidimensional, title = "A Multidimensional Framework for Evaluating Lexical Semantic Change with Social Science Applications", author = "Baes, Naomi and Haslam, Nick and Vylomova, Ekaterina", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, ...
Historical linguists have identified multiple forms of lexical semantic change. We present a three-dimensional framework for integrating these forms and a unified computational methodology for evaluating them concurrently. The dimensions represent increases or decreases in semantic 1) sentiment (valence of a target wor...
[ "Baes, Naomi", "Haslam, Nick", "Vylomova, Ekaterina" ]
A Multidimensional Framework for Evaluating Lexical Semantic Change with Social Science Applications
acl-long.76
Poster
2406.06052v1
https://aclanthology.org/2024.acl-long.77.bib
@inproceedings{huang-etal-2024-mitigating, title = "Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal", author = "Huang, Jianheng and Cui, Leyang and Wang, Ante and Yang, Chengyi and Liao, Xinting and Song, Linfeng and Yao, Junf...
Large language models (LLMs) suffer from catastrophic forgetting during continual learning. Conventional rehearsal-based methods rely on previous training data to retain the model{'}s ability, which may not be feasible in real-world applications. When conducting continual learning based on a publicly-released LLM check...
[ "Huang, Jianheng", "Cui, Leyang", "Wang, Ante", "Yang, Chengyi", "Liao, Xinting", "Song, Linfeng", "Yao, Junfeng", "Su, Jinsong" ]
Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal
acl-long.77
Poster
2403.01244v2
https://aclanthology.org/2024.acl-long.78.bib
@inproceedings{huang-etal-2024-enhancing, title = "Enhancing Large Language Models in Coding Through Multi-Perspective Self-Consistency", author = "Huang, Baizhou and Lu, Shuai and Wan, Xiaojun and Duan, Nan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", ...
Large language models (LLMs) have exhibited remarkable ability in code generation. However, generating the correct solution in a single attempt still remains a challenge. Prior works utilize verification properties in software engineering to verify and re-rank solutions in a majority voting manner. But the assumption b...
[ "Huang, Baizhou", "Lu, Shuai", "Wan, Xiaojun", "Duan, Nan" ]
Enhancing Large Language Models in Coding Through Multi-Perspective Self-Consistency
acl-long.78
Poster
2404.13149v1
https://aclanthology.org/2024.acl-long.79.bib
@inproceedings{li-etal-2024-citation, title = "Citation-Enhanced Generation for {LLM}-based Chatbots", author = "Li, Weitao and Li, Junkai and Ma, Weizhi and Liu, Yang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd...
Large language models (LLMs) exhibit powerful general intelligence across diverse scenarios, including their integration into chatbots. However, a vital challenge of LLM-based chatbots is that they may produce hallucinated content in responses, which significantly limits their applicability. Various efforts have been m...
[ "Li, Weitao", "Li, Junkai", "Ma, Weizhi", "Liu, Yang" ]
Citation-Enhanced Generation for {LLM}-based Chatbots
acl-long.79
Poster
2104.04842v1
https://aclanthology.org/2024.acl-long.80.bib
@inproceedings{wen-etal-2024-transitive, title = "Transitive Consistency Constrained Learning for Entity-to-Entity Stance Detection", author = "Wen, Haoyang and Hovy, Eduard and Hauptmann, Alexander", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = ...
Entity-to-entity stance detection identifies the stance between a pair of entities with a directed link that indicates the source, target and polarity. It is a streamlined task without the complex dependency structure for structural sentiment analysis, while it is more informative compared to most previous work assumin...
[ "Wen, Haoyang", "Hovy, Eduard", "Hauptmann, Alex", "er" ]
Transitive Consistency Constrained Learning for Entity-to-Entity Stance Detection
acl-long.80
Poster
2405.10991v1
https://aclanthology.org/2024.acl-long.81.bib
@inproceedings{li-etal-2024-feature-adaptive, title = "Feature-Adaptive and Data-Scalable In-Context Learning", author = "Li, Jiahao and Wang, Quan and Zhang, Licheng and Jin, Guoqing and Mao, Zhendong", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek",...
In-context learning (ICL), which promotes inference with several demonstrations, has become a widespread paradigm to stimulate LLM capabilities for downstream tasks. Due to context length constraints, it cannot be further improved in spite of more training data, and general features directly from LLMs in ICL are not ad...
[ "Li, Jiahao", "Wang, Quan", "Zhang, Licheng", "Jin, Guoqing", "Mao, Zhendong" ]
Feature-Adaptive and Data-Scalable In-Context Learning
acl-long.81
Poster
2311.10609v1
https://aclanthology.org/2024.acl-long.82.bib
@inproceedings{zhang-etal-2024-probing, title = "Probing the Multi-turn Planning Capabilities of {LLM}s via 20 Question Games", author = "Zhang, Yizhe and Lu, Jiarui and Jaitly, Navdeep", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings ...
Large language models (LLMs) are effective at answering questions that are clearly asked. However, when faced with ambiguous queries they can act unpredictably and produce incorrect outputs. This underscores the need for the development of intelligent agents capable of asking clarification questions to resolve ambiguit...
[ "Zhang, Yizhe", "Lu, Jiarui", "Jaitly, Navdeep" ]
Probing the Multi-turn Planning Capabilities of {LLM}s via 20 Question Games
acl-long.82
Poster
2310.01468v3
https://aclanthology.org/2024.acl-long.83.bib
@inproceedings{tu-etal-2024-waterbench, title = "{W}ater{B}ench: Towards Holistic Evaluation of Watermarks for Large Language Models", author = "Tu, Shangqing and Sun, Yuliang and Bai, Yushi and Yu, Jifan and Hou, Lei and Li, Juanzi", editor = "Ku, Lun-Wei and Marti...
To mitigate the potential misuse of large language models (LLMs), recent research has developed watermarking algorithms, which restrict the generation process to leave an invisible trace for watermark detection. Due to the two-stage nature of the task, most studies evaluate the generation and detection separately, ther...
[ "Tu, Shangqing", "Sun, Yuliang", "Bai, Yushi", "Yu, Jifan", "Hou, Lei", "Li, Juanzi" ]
{W}ater{B}ench: Towards Holistic Evaluation of Watermarks for Large Language Models
acl-long.83
Poster
2311.07138v2
https://aclanthology.org/2024.acl-long.84.bib
@inproceedings{zhao-etal-2024-dependency, title = "Dependency Transformer Grammars: Integrating Dependency Structures into Transformer Language Models", author = "Zhao, Yida and Lou, Chao and Tu, Kewei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle...
Syntactic Transformer language models aim to achieve better generalization through simultaneously modeling syntax trees and sentences. While prior work has been focusing on adding constituency-based structures to Transformers, we introduce Dependency Transformer Grammars (DTGs), a new class of Transformer language mode...
[ "Zhao, Yida", "Lou, Chao", "Tu, Kewei" ]
Dependency Transformer Grammars: Integrating Dependency Structures into Transformer Language Models
acl-long.84
Poster
2407.17406v1
https://aclanthology.org/2024.acl-long.85.bib
@inproceedings{ma-etal-2024-non, title = "A Non-autoregressive Generation Framework for End-to-End Simultaneous Speech-to-Any Translation", author = "Ma, Zhengrui and Fang, Qingkai and Zhang, Shaolei and Guo, Shoutao and Feng, Yang and Zhang, Min", editor = "Ku, Lun-Wei a...
Simultaneous translation models play a crucial role in facilitating communication. However, existing research primarily focuses on text-to-text or speech-to-text models, necessitating additional cascade components to achieve speech-to-speech translation. These pipeline methods suffer from error propagation and accumula...
[ "Ma, Zhengrui", "Fang, Qingkai", "Zhang, Shaolei", "Guo, Shoutao", "Feng, Yang", "Zhang, Min" ]
A Non-autoregressive Generation Framework for End-to-End Simultaneous Speech-to-Any Translation
acl-long.85
Poster
1911.03154v2
https://aclanthology.org/2024.acl-long.86.bib
@inproceedings{liu-etal-2024-probing, title = "Probing Language Models for Pre-training Data Detection", author = "Liu, Zhenhua and Zhu, Tong and Tan, Chuanyuan and Liu, Bing and Lu, Haonan and Chen, Wenliang", editor = "Ku, Lun-Wei and Martins, Andre and Sri...
Large Language Models (LLMs) have shown their impressive capabilities, while also raising concerns about the data contamination problems due to privacy issues and leakage of benchmark datasets in the pre-training phase. Therefore, it is vital to detect the contamination by checking whether an LLM has been pre-trained o...
[ "Liu, Zhenhua", "Zhu, Tong", "Tan, Chuanyuan", "Liu, Bing", "Lu, Haonan", "Chen, Wenliang" ]
Probing Language Models for Pre-training Data Detection
acl-long.86
Poster
2306.16774v1
https://aclanthology.org/2024.acl-long.87.bib
@inproceedings{zhang-etal-2024-analyzing, title = "Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding", author = "Zhang, Zhihan and Cao, Yixin and Ye, Chenchen and Ma, Yunshan and Liao, Lizi and Chua, Tat-Seng...
The digital landscape is rapidly evolving with an ever-increasing volume of online news, emphasizing the need for swift and precise analysis of complex events. We refer to the complex events composed of many news articles over an extended period as Temporal Complex Event (TCE). This paper proposes a novel approach using...
[ "Zhang, Zhihan", "Cao, Yixin", "Ye, Chenchen", "Ma, Yunshan", "Liao, Lizi", "Chua, Tat-Seng" ]
Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding
acl-long.87
Poster
2406.02472v1
https://aclanthology.org/2024.acl-long.88.bib
@inproceedings{han-etal-2024-ibsen, title = "{IBSEN}: Director-Actor Agent Collaboration for Controllable and Interactive Drama Script Generation", author = "Han, Senyu and Chen, Lu and Lin, Li-Min and Xu, Zhengshan and Yu, Kai", editor = "Ku, Lun-Wei and Martins, Andre a...
Large language models have demonstrated their capabilities in storyline creation and human-like character role-playing. Current language model agents mainly focus on reasonable behaviors from the level of individuals, and their behaviors might be hard to constrain at the level of the whole storyline. In this paper we ...
[ "Han, Senyu", "Chen, Lu", "Lin, Li-Min", "Xu, Zhengshan", "Yu, Kai" ]
{IBSEN}: Director-Actor Agent Collaboration for Controllable and Interactive Drama Script Generation
acl-long.88
Poster
2407.01093v1
https://aclanthology.org/2024.acl-long.89.bib
@inproceedings{wang-etal-2024-language-model, title = "Language Model Adaption for Reinforcement Learning with Natural Language Action Space", author = "Wang, Jiangxing and Li, Jiachen and Han, Xiao and Ye, Deheng and Lu, Zongqing", editor = "Ku, Lun-Wei and Martins, Andre...
Reinforcement learning with natural language action space often suffers from the curse of dimensionality due to the combinatorial nature of the natural language. Previous research leverages pretrained language models to capture action semantics and reduce the size of the action space. However, since pretrained models a...
[ "Wang, Jiangxing", "Li, Jiachen", "Han, Xiao", "Ye, Deheng", "Lu, Zongqing" ]
Language Model Adaption for Reinforcement Learning with Natural Language Action Space
acl-long.89
Poster
1705.09906v1
https://aclanthology.org/2024.acl-long.90.bib
@inproceedings{sakurai-miyao-2024-evaluating, title = "Evaluating Intention Detection Capability of Large Language Models in Persuasive Dialogues", author = "Sakurai, Hiromasa and Miyao, Yusuke", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings...
We investigate intention detection in persuasive multi-turn dialogs employing the largest available Large Language Models (LLMs). Much of the prior research measures the intention detection capability of machine learning models without considering the conversational history. To evaluate LLMs{'} intention detection capabi...
[ "Sakurai, Hiromasa", "Miyao, Yusuke" ]
Evaluating Intention Detection Capability of Large Language Models in Persuasive Dialogues
acl-long.90
Poster
2402.04631v1
https://aclanthology.org/2024.acl-long.91.bib
@inproceedings{jiang-etal-2024-longllmlingua, title = "{L}ong{LLML}ingua: Accelerating and Enhancing {LLM}s in Long Context Scenarios via Prompt Compression", author = "Jiang, Huiqiang and Wu, Qianhui and Luo, Xufang and Li, Dongsheng and Lin, Chin-Yew and Yang, Yuqing and ...
In long context scenarios, large language models (LLMs) face three main challenges: higher computational cost, performance reduction, and position bias. Research indicates that LLM performance hinges on the density and position of key information in the input prompt. Inspired by these findings, we propose LongLLMLingua...
[ "Jiang, Huiqiang", "Wu, Qianhui", "Luo, Xufang", "Li, Dongsheng", "Lin, Chin-Yew", "Yang, Yuqing", "Qiu, Lili" ]
{L}ong{LLML}ingua: Accelerating and Enhancing {LLM}s in Long Context Scenarios via Prompt Compression
acl-long.91
Poster
2310.06839v2
https://aclanthology.org/2024.acl-long.92.bib
@inproceedings{jin-etal-2024-persuading, title = "Persuading across Diverse Domains: a Dataset and Persuasion Large Language Model", author = "Jin, Chuhao and Ren, Kening and Kong, Lingzhen and Wang, Xiting and Song, Ruihua and Chen, Huan", editor = "Ku, Lun-Wei and ...
Persuasive dialogue requires multi-turn following and planning abilities to achieve the goal of persuading users, which is still challenging even for state-of-the-art large language models (LLMs). Previous works focus on retrieval-based models or generative models in a specific domain due to a lack of data across multi...
[ "Jin, Chuhao", "Ren, Kening", "Kong, Lingzhen", "Wang, Xiting", "Song, Ruihua", "Chen, Huan" ]
Persuading across Diverse Domains: a Dataset and Persuasion Large Language Model
acl-long.92
Poster
2311.06239v1
https://aclanthology.org/2024.acl-long.93.bib
@inproceedings{xiao-etal-2024-healme, title = "{H}eal{M}e: Harnessing Cognitive Reframing in Large Language Models for Psychotherapy", author = "Xiao, Mengxi and Xie, Qianqian and Kuang, Ziyan and Liu, Zhicheng and Yang, Kailai and Peng, Min and Han, Weiguang and ...
Large Language Models (LLMs) can play a vital role in psychotherapy by adeptly handling the crucial task of cognitive reframing and overcoming challenges such as shame, distrust, therapist skill variability, and resource scarcity. Previous LLMs in cognitive reframing mainly converted negative emotions to positive ones,...
[ "Xiao, Mengxi", "Xie, Qianqian", "Kuang, Ziyan", "Liu, Zhicheng", "Yang, Kailai", "Peng, Min", "Han, Weiguang", "Huang, Jimin" ]
{H}eal{M}e: Harnessing Cognitive Reframing in Large Language Models for Psychotherapy
acl-long.93
Poster
2403.05574v3
https://aclanthology.org/2024.acl-long.94.bib
@inproceedings{guo-etal-2024-multimodal, title = "Multimodal Prompt Learning with Missing Modalities for Sentiment Analysis and Emotion Recognition", author = "Guo, Zirun and Jin, Tao and Zhao, Zhou", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = ...
The development of multimodal models has significantly advanced multimodal sentiment analysis and emotion recognition. However, in real-world applications, the presence of various missing modality cases often leads to a degradation in the model{'}s performance. In this work, we propose a novel multimodal Transformer fr...
[ "Guo, Zirun", "Jin, Tao", "Zhao, Zhou" ]
Multimodal Prompt Learning with Missing Modalities for Sentiment Analysis and Emotion Recognition
acl-long.94
Poster
2407.05374v1
https://aclanthology.org/2024.acl-long.95.bib
@inproceedings{yan-etal-2024-effective, title = "An Effective Pronunciation Assessment Approach Leveraging Hierarchical Transformers and Pre-training Strategies", author = "Yan, Bi-Cheng and Li, Jiun-Ting and Wang, Yi-Cheng and Wang, Hsin Wei and Lo, Tien-Hong and Hsu, Yung-Ch...
Automatic pronunciation assessment (APA) manages to quantify a second language (L2) learner{'}s pronunciation proficiency in a target language by providing fine-grained feedback with multiple pronunciation aspect scores at various linguistic levels. Most existing efforts on APA typically parallelize the modeling proces...
[ "Yan, Bi-Cheng", "Li, Jiun-Ting", "Wang, Yi-Cheng", "Wang, Hsin Wei", "Lo, Tien-Hong", "Hsu, Yung-Chang", "Chao, Wei-Cheng", "Chen, Berlin" ]
An Effective Pronunciation Assessment Approach Leveraging Hierarchical Transformers and Pre-training Strategies
acl-long.95
Poster
2302.10444v2
https://aclanthology.org/2024.acl-long.96.bib
@inproceedings{li-wang-2024-detection, title = "Detection-Correction Structure via General Language Model for Grammatical Error Correction", author = "Li, Wei and Wang, Houfeng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annu...
Grammatical error correction (GEC) is a task dedicated to rectifying texts with minimal edits, which can be decoupled into two components: detection and correction. However, previous works have predominantly focused on direct correction, with no prior efforts to integrate both into a single model. Moreover, the explora...
[ "Li, Wei", "Wang, Houfeng" ]
Detection-Correction Structure via General Language Model for Grammatical Error Correction
acl-long.96
Poster
2405.17804v1
https://aclanthology.org/2024.acl-long.97.bib
@inproceedings{zhu-etal-2024-generative, title = "Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer", author = "Zhu, Yongxin and Su, Dan and He, Liqiang and Xu, Linli and Yu, Dong", editor = "Ku, Lun-Wei and Martins, Andre and Srik...
While recent advancements in speech language models have achieved significant progress, they face remarkable challenges in modeling the long acoustic sequences of neural audio codecs. In this paper, we introduce \textbf{G}enerative \textbf{P}re-trained \textbf{S}peech \textbf{T}ransformer (GPST), a hierarchical transfo...
[ "Zhu, Yongxin", "Su, Dan", "He, Liqiang", "Xu, Linli", "Yu, Dong" ]
Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer
acl-long.97
Poster
2406.00976v1
https://aclanthology.org/2024.acl-long.98.bib
@inproceedings{zhang-etal-2024-selene, title = "Selene: Pioneering Automated Proof in Software Verification", author = "Zhang, Lichen and Lu, Shuai and Duan, Nan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeti...
Ensuring correctness is a pivotal aspect of software engineering. Among the various strategies available, software verification offers a definitive assurance of correctness. Nevertheless, writing verification proofs is resource-intensive and manpower-consuming, and there is a great need to automate this process. We int...
[ "Zhang, Lichen", "Lu, Shuai", "Duan, Nan" ]
Selene: Pioneering Automated Proof in Software Verification
acl-long.98
Poster
2401.07663v2
https://aclanthology.org/2024.acl-long.99.bib
@inproceedings{li-etal-2024-dissecting, title = "Dissecting Human and {LLM} Preferences", author = "Li, Junlong and Zhou, Fan and Sun, Shichao and Zhang, Yikai and Zhao, Hai and Liu, Pengfei", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", ...
As a relative quality comparison of model responses, human and Large Language Model (LLM) preferences serve as common alignment goals in model fine-tuning and criteria in evaluation. Yet, these preferences merely reflect broad tendencies, resulting in less explainable and controllable models with potential safety risks...
[ "Li, Junlong", "Zhou, Fan", "Sun, Shichao", "Zhang, Yikai", "Zhao, Hai", "Liu, Pengfei" ]
Dissecting Human and {LLM} Preferences
acl-long.99
Poster
2402.11296v1
https://aclanthology.org/2024.acl-long.100.bib
@inproceedings{sun-etal-2024-unicoder, title = "{U}ni{C}oder: Scaling Code Large Language Model via Universal Code", author = "Sun, Tao and Chai, Linzheng and Yang, Jian and Yin, Yuwei and Guo, Hongcheng and Liu, Jiaheng and Wang, Bing and Yang, Liqun and ...
Intermediate reasoning or acting steps have successfully improved large language models (LLMs) for handling various downstream natural language processing (NLP) tasks. When applying LLMs for code generation, recent works mainly focus on directing the models to articulate intermediate natural-language reasoning steps, as...
[ "Sun, Tao", "Chai, Linzheng", "Yang, Jian", "Yin, Yuwei", "Guo, Hongcheng", "Liu, Jiaheng", "Wang, Bing", "Yang, Liqun", "Li, Zhoujun" ]
{U}ni{C}oder: Scaling Code Large Language Model via Universal Code
acl-long.100
Poster
2406.16441v1