| column | type | length / values |
|---|---|---|
| bibtex_url | string | 41–50 chars |
| bibtext | string | 693–2.88k chars |
| abstract | string | 0–2k chars |
| authors | list | 1–45 entries |
| title | string | 21–206 chars |
| id | string | 7–16 chars |
| type | string | 2 classes (Poster, Oral) |
| arxiv_id | string | 9–12 chars |
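The records below follow this schema, one field per line. As a usage illustration, here is a minimal sketch of loading and querying a dump with this schema through the Hugging Face `datasets` library; the repository id is a hypothetical placeholder, and missing `arxiv_id` values are assumed to surface as `None` or the string `"null"` (as in the record for acl-long.228 below).

```python
# Minimal sketch, assuming this dump is published as a Hugging Face dataset.
# The repository id below is a hypothetical placeholder, not the actual source.
from datasets import load_dataset

ds = load_dataset("user/acl-2024-long-papers", split="train")  # hypothetical id

# Missing arxiv_id values are assumed to appear as None or the string "null".
def has_arxiv(record):
    return record["arxiv_id"] not in (None, "null")

# Keep oral presentations that link to an arXiv preprint.
orals = ds.filter(lambda r: r["type"] == "Oral" and has_arxiv(r))

for record in orals.select(range(min(5, len(orals)))):
    print(f'{record["id"]}: {record["title"]}')
    print(f'  authors: {", ".join(record["authors"])}')
    print(f'  arXiv {record["arxiv_id"]} | {record["bibtex_url"]}')
```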
https://aclanthology.org/2024.acl-long.201.bib
@inproceedings{gaido-etal-2024-sbaam, title = "{SBAAM}! Eliminating Transcript Dependency in Automatic Subtitling", author = "Gaido, Marco and Papi, Sara and Negri, Matteo and Cettolo, Mauro and Bentivogli, Luisa", editor = "Ku, Lun-Wei and Martins, Andre and Srikum...
Subtitling plays a crucial role in enhancing the accessibility of audiovisual content and encompasses three primary subtasks: translating spoken dialogue, segmenting translations into concise textual units, and estimating timestamps that govern their on-screen duration. Past attempts to automate this process rely, to v...
[ "Gaido, Marco", "Papi, Sara", "Negri, Matteo", "Cettolo, Mauro", "Bentivogli, Luisa" ]
SBAAM! Eliminating Transcript Dependency in Automatic Subtitling
acl-long.201
Poster
2002.10829v1
https://aclanthology.org/2024.acl-long.202.bib
@inproceedings{papi-etal-2024-streamatt, title = "{S}tream{A}tt: Direct Streaming Speech-to-Text Translation with Attention-based Audio History Selection", author = "Papi, Sara and Gaido, Marco and Negri, Matteo and Bentivogli, Luisa", editor = "Ku, Lun-Wei and Martins, Andre an...
Streaming speech-to-text translation (StreamST) is the task of automatically translating speech while incrementally receiving an audio stream. Unlike simultaneous ST (SimulST), which deals with pre-segmented speech, StreamST faces the challenges of handling continuous and unbounded audio streams. This requires addition...
[ "Papi, Sara", "Gaido, Marco", "Negri, Matteo", "Bentivogli, Luisa" ]
StreamAtt: Direct Streaming Speech-to-Text Translation with Attention-based Audio History Selection
acl-long.202
Poster
2406.06097v1
https://aclanthology.org/2024.acl-long.203.bib
@inproceedings{zhang-etal-2024-arl2, title = "{ARL}2: Aligning Retrievers with Black-box Large Language Models via Self-guided Adaptive Relevance Labeling", author = "Zhang, LingXi and Yu, Yue and Wang, Kuan and Zhang, Chao", editor = "Ku, Lun-Wei and Martins, Andre and Sr...
Retrieval-augmented generation enhances large language models (LLMs) by incorporating relevant information from external knowledge sources. This enables LLMs to adapt to specific domains and mitigate hallucinations in knowledge-intensive tasks. However, existing retrievers are often misaligned with LLMs due to separate...
[ "Zhang, LingXi", "Yu, Yue", "Wang, Kuan", "Zhang, Chao" ]
ARL2: Aligning Retrievers with Black-box Large Language Models via Self-guided Adaptive Relevance Labeling
acl-long.203
Poster
2402.13542v2
https://aclanthology.org/2024.acl-long.204.bib
@inproceedings{bang-etal-2024-crayon, title = "Crayon: Customized On-Device {LLM} via Instant Adapter Blending and Edge-Server Hybrid Inference", author = "Bang, Jihwan and Lee, Juntae and Shim, Kyuhong and Yang, Seunghan and Chang, Simyung", editor = "Ku, Lun-Wei and Mart...
Customizing large language models (LLMs) for user-specified tasks is becoming increasingly important. However, maintaining all the customized LLMs on cloud servers incurs substantial memory and computational overhead, and uploading user data can also lead to privacy concerns. On-device LLMs can offer a promising solution by miti...
[ "Bang, Jihwan", "Lee, Juntae", "Shim, Kyuhong", "Yang, Seunghan", "Chang, Simyung" ]
Crayon: Customized On-Device LLM via Instant Adapter Blending and Edge-Server Hybrid Inference
acl-long.204
Poster
2406.07007v1
https://aclanthology.org/2024.acl-long.205.bib
@inproceedings{lee-etal-2024-fleur, title = "{FLEUR}: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model", author = "Lee, Yebin and Park, Imseong and Kang, Myungjoo", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", ...
Most existing image captioning evaluation metrics focus on assigning a single numerical score to a caption by comparing it with reference captions. However, these methods do not provide an explanation for the assigned score. Moreover, reference captions are expensive to acquire. In this paper, we propose FLEUR, an expl...
[ "Lee, Yebin", "Park, Imseong", "Kang, Myungjoo" ]
FLEUR: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model
acl-long.205
Poster
2406.06004v1
https://aclanthology.org/2024.acl-long.206.bib
@inproceedings{wang-etal-2024-mentalmanip, title = "{M}ental{M}anip: A Dataset For Fine-grained Analysis of Mental Manipulation in Conversations", author = "Wang, Yuxin and Yang, Ivory and Hassanpour, Saeed and Vosoughi, Soroush", editor = "Ku, Lun-Wei and Martins, Andre and ...
Mental manipulation, a significant form of abuse in interpersonal conversations, presents a challenge to identify due to its context-dependent and often subtle nature. The detection of manipulative language is essential for protecting potential victims, yet the field of Natural Language Processing (NLP) currently faces...
[ "Wang, Yuxin", "Yang, Ivory", "Hassanpour, Saeed", "Vosoughi, Soroush" ]
MentalManip: A Dataset For Fine-grained Analysis of Mental Manipulation in Conversations
acl-long.206
Oral
2405.16584v1
https://aclanthology.org/2024.acl-long.207.bib
@inproceedings{dai-etal-2024-mpcoder, title = "{MPC}oder: Multi-user Personalized Code Generator with Explicit and Implicit Style Representation Learning", author = "Dai, Zhenlong and Yao, Chang and Han, WenKang and Yuanying, Yuanying and Gao, Zhipeng and Chen, Jingyuan", ...
Large Language Models (LLMs) have demonstrated great potential for assisting developers in their daily development. However, most research focuses on generating correct code; how to use LLMs to generate personalized code has seldom been investigated. To bridge this gap, we propose MPCoder (Multi-user Personalized Code...
[ "Dai, Zhenlong", "Yao, Chang", "Han, WenKang", "Yuanying, Yuanying", "Gao, Zhipeng", "Chen, Jingyuan" ]
MPCoder: Multi-user Personalized Code Generator with Explicit and Implicit Style Representation Learning
acl-long.207
Poster
2406.17255v1
https://aclanthology.org/2024.acl-long.208.bib
@inproceedings{patel-etal-2024-datadreamer, title = "{D}ata{D}reamer: A Tool for Synthetic Data Generation and Reproducible {LLM} Workflows", author = "Patel, Ajay and Raffel, Colin and Callison-Burch, Chris", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", boo...
Large language models (LLMs) have become a dominant and important tool for NLP researchers in a wide range of tasks. Today, many researchers use LLMs in synthetic data generation, task evaluation, fine-tuning, distillation, and other model-in-the-loop research workflows. However, challenges arise when using these model...
[ "Patel, Ajay", "Raffel, Colin", "Callison-Burch, Chris" ]
DataDreamer: A Tool for Synthetic Data Generation and Reproducible LLM Workflows
acl-long.208
Poster
2402.10379v2
https://aclanthology.org/2024.acl-long.209.bib
@inproceedings{shao-etal-2024-understanding, title = "Understanding and Addressing the Under-Translation Problem from the Perspective of Decoding Objective", author = "Shao, Chenze and Meng, Fandong and Zeng, Jiali and Zhou, Jie", editor = "Ku, Lun-Wei and Martins, Andre and ...
Neural Machine Translation (NMT) has made remarkable progress over the past years. However, under-translation and over-translation remain two challenging problems in state-of-the-art NMT systems. In this work, we conduct an in-depth analysis on the underlying cause of under-translation in NMT, providing an explanation ...
[ "Shao, Chenze", "Meng, F", "ong", "Zeng, Jiali", "Zhou, Jie" ]
Understanding and Addressing the Under-Translation Problem from the Perspective of Decoding Objective
acl-long.209
Poster
2405.18922v1
https://aclanthology.org/2024.acl-long.210.bib
@inproceedings{liu-etal-2024-identifying, title = "Identifying while Learning for Document Event Causality Identification", author = "Liu, Cheng and Xiang, Wei and Wang, Bang", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd...
Event Causality Identification (ECI) aims to detect whether there exists a causal relation between two events in a document. Existing studies adopt a kind of *identifying after learning* paradigm, where events' representations are first learned and then used for the identification. Furthermore, they mainly focus on t...
[ "Liu, Cheng", "Xiang, Wei", "Wang, Bang" ]
Identifying while Learning for Document Event Causality Identification
acl-long.210
Poster
2405.20608v1
https://aclanthology.org/2024.acl-long.211.bib
@inproceedings{he-etal-2024-olympiadbench, title = "{O}lympiad{B}ench: A Challenging Benchmark for Promoting {AGI} with Olympiad-Level Bilingual Multimodal Scientific Problems", author = "He, Chaoqun and Luo, Renjie and Bai, Yuzhuo and Hu, Shengding and Thai, Zhen and Shen, Ju...
Recent advancements have seen Large Language Models (LLMs) and Large Multimodal Models (LMMs) surpassing general human capabilities in various tasks, approaching the proficiency level of human experts across multiple domains. With traditional benchmarks becoming less challenging for these models, new rigorous challenge...
[ "He, Chaoqun", "Luo, Renjie", "Bai, Yuzhuo", "Hu, Shengding", "Thai, Zhen", "Shen, Junhao", "Hu, Jinyi", "Han, Xu", "Huang, Yujie", "Zhang, Yuxiang", "Liu, Jie", "Qi, Lei", "Liu, Zhiyuan", "Sun, Maosong" ]
OlympiadBench: A Challenging Benchmark for Promoting AGI with Olympiad-Level Bilingual Multimodal Scientific Problems
acl-long.211
Poster
2402.14008v2
https://aclanthology.org/2024.acl-long.212.bib
@inproceedings{xue-etal-2024-insert, title = "Insert or Attach: Taxonomy Completion via Box Embedding", author = "Xue, Wei and Shen, Yongliang and Ren, Wenqi and Guo, Jietian and Pu, Shiliang and Lu, Weiming", editor = "Ku, Lun-Wei and Martins, Andre and Srik...
Taxonomy completion, enriching existing taxonomies by inserting new concepts as parents or attaching them as children, has gained significant interest. Previous approaches embed concepts as vectors in Euclidean space, which makes it difficult to model asymmetric relations in taxonomy. In addition, they introduce pseudo...
[ "Xue, Wei", "Shen, Yongliang", "Ren, Wenqi", "Guo, Jietian", "Pu, Shiliang", "Lu, Weiming" ]
Insert or Attach: Taxonomy Completion via Box Embedding
acl-long.212
Poster
2305.11004v4
https://aclanthology.org/2024.acl-long.213.bib
@inproceedings{lee-etal-2024-semiparametric, title = "Semiparametric Token-Sequence Co-Supervision", author = "Lee, Hyunji and Kim, Doyoung and Jun, Jihoon and Joo, Se June and Jang, Joel and On, Kyoung-Woon and Seo, Minjoon", editor = "Ku, Lun-Wei and Martin...
In this work, we introduce a semiparametric token-sequence co-supervision training method. It trains a language model by simultaneously leveraging supervision from the traditional next token prediction loss, which is calculated over the parametric token embedding space, and the next sequence prediction loss, which is calc...
[ "Lee, Hyunji", "Kim, Doyoung", "Jun, Jihoon", "Joo, Se June", "Jang, Joel", "On, Kyoung-Woon", "Seo, Minjoon" ]
Semiparametric Token-Sequence Co-Supervision
acl-long.213
Poster
2402.15505v1
https://aclanthology.org/2024.acl-long.214.bib
@inproceedings{guo-etal-2024-instruction, title = "Instruction Fusion: Advancing Prompt Evolution through Hybridization", author = "Guo, Weidong and Yang, Jiuding and Yang, Kaitong and Li, Xiangyang and Rao, Zhuwei and Xu, Yu and Niu, Di", editor = "Ku, Lun-Wei and...
The fine-tuning of Large Language Models (LLMs) specialized in code generation has seen notable advancements through the use of open-domain coding queries. Despite the successes, existing methodologies like Evol-Instruct encounter performance limitations, impeding further enhancements in code generation tasks. This pap...
[ "Guo, Weidong", "Yang, Jiuding", "Yang, Kaitong", "Li, Xiangyang", "Rao, Zhuwei", "Xu, Yu", "Niu, Di" ]
Instruction Fusion: Advancing Prompt Evolution through Hybridization
acl-long.214
Poster
2312.15692v4
https://aclanthology.org/2024.acl-long.215.bib
@inproceedings{zhang-etal-2024-timearena, title = "{T}ime{A}rena: Shaping Efficient Multitasking Language Agents in a Time-Aware Simulation", author = "Zhang, Yikai and Yuan, Siyu and Hu, Caiyu and Richardson, Kyle and Xiao, Yanghua and Chen, Jiangjie", editor = "Ku, Lun-W...
Despite remarkable advancements in emulating human-like behavior through Large Language Models (LLMs), current textual simulations do not adequately address the notion of time. To this end, we introduce TimeArena, a novel textual simulated environment that incorporates complex temporal dynamics and constraints that bet...
[ "Zhang, Yikai", "Yuan, Siyu", "Hu, Caiyu", "Richardson, Kyle", "Xiao, Yanghua", "Chen, Jiangjie" ]
TimeArena: Shaping Efficient Multitasking Language Agents in a Time-Aware Simulation
acl-long.215
Poster
2402.05733v1
https://aclanthology.org/2024.acl-long.216.bib
@inproceedings{zeng-etal-2024-exploring, title = "Exploring Memorization in Fine-tuned Language Models", author = "Zeng, Shenglai and Li, Yaxin and Ren, Jie and Liu, Yiding and Xu, Han and He, Pengfei and Xing, Yue and Wang, Shuaiqiang and Tang, Jiliang a...
Large language models (LLMs) have shown great capabilities in various tasks but also exhibited memorization of training data, raising tremendous privacy and copyright concerns. While prior works have studied memorization during pre-training, the exploration of memorization during fine-tuning is rather limited. Compared...
[ "Zeng, Shenglai", "Li, Yaxin", "Ren, Jie", "Liu, Yiding", "Xu, Han", "He, Pengfei", "Xing, Yue", "Wang, Shuaiqiang", "Tang, Jiliang", "Yin, Dawei" ]
Exploring Memorization in Fine-tuned Language Models
acl-long.216
Poster
2305.04673v2
https://aclanthology.org/2024.acl-long.217.bib
@inproceedings{zhang-etal-2024-towards-real, title = "Towards Real-world Scenario: Imbalanced New Intent Discovery", author = "Zhang, Shun and Chaoran, Yan and Yang, Jian and Liu, Jiaheng and Mo, Ying and Bai, Jiaqi and Li, Tongliang and Li, Zhoujun", editor ...
New Intent Discovery (NID) aims at detecting known and previously undefined categories of user intent by utilizing limited labeled and massive unlabeled data. Prior works often operate under the unrealistic assumption that the distribution of both familiar and new intent classes is uniform, overlooking the skewed ...
[ "Zhang, Shun", "Chaoran, Yan", "Yang, Jian", "Liu, Jiaheng", "Mo, Ying", "Bai, Jiaqi", "Li, Tongliang", "Li, Zhoujun" ]
Towards Real-world Scenario: Imbalanced New Intent Discovery
acl-long.217
Poster
2310.10184v1
https://aclanthology.org/2024.acl-long.218.bib
@inproceedings{wang-etal-2024-m4gt, title = "{M}4{GT}-Bench: Evaluation Benchmark for Black-Box Machine-Generated Text Detection", author = "Wang, Yuxia and Mansurov, Jonibek and Ivanov, Petar and Su, Jinyan and Shelmanov, Artem and Tsvigun, Akim and Mohammed Afzal, Osa...
The advent of Large Language Models (LLMs) has brought an unprecedented surge in machine-generated text (MGT) across diverse channels. This raises legitimate concerns about its potential misuse and societal implications. The need to identify and differentiate such content from genuine human-generated text is critical i...
[ "Wang, Yuxia", "Mansurov, Jonibek", "Ivanov, Petar", "Su, Jinyan", "Shelmanov, Artem", "Tsvigun, Akim", "Mohammed Afzal, Osama", "Mahmoud, Tarek", "Puccetti, Giovanni", "Arnold, Thomas", "Aji, Alham", "Habash, Nizar", "Gurevych, Iryna", "Nakov, Preslav" ]
M4GT-Bench: Evaluation Benchmark for Black-Box Machine-Generated Text Detection
acl-long.218
Poster
2010.08660v4
https://aclanthology.org/2024.acl-long.219.bib
@inproceedings{wang-etal-2024-instruct, title = "Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue", author = "Wang, Jian and Leong, Chak Tou and Wang, Jiashuo and Lin, Dongding and Li, Wenjie and Wei, Xiaoyong", editor = "Ku, ...
Tuning language models for dialogue generation has been a prevalent paradigm for building capable dialogue agents. Yet, traditional tuning narrowly views dialogue generation as resembling other language generation tasks, ignoring the role disparities between two speakers and the multi-round interactive process that dia...
[ "Wang, Jian", "Leong, Chak Tou", "Wang, Jiashuo", "Lin, Dongding", "Li, Wenjie", "Wei, Xiaoyong" ]
Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue
acl-long.219
Poster
2402.06967v2
https://aclanthology.org/2024.acl-long.220.bib
@inproceedings{he-etal-2024-softdedup, title = "{S}oft{D}edup: an Efficient Data Reweighting Method for Speeding Up Language Model Pre-training", author = "He, Nan and Xiong, Weichen and Liu, Hanwen and Liao, Yi and Ding, Lei and Zhang, Kai and Tang, Guohua and H...
The effectiveness of large language models (LLMs) is often hindered by duplicated data in their extensive pre-training datasets. Current approaches primarily focus on detecting and removing duplicates, which risks the loss of valuable information and neglects the varying degrees of duplication. To address this, we prop...
[ "He, Nan", "Xiong, Weichen", "Liu, Hanwen", "Liao, Yi", "Ding, Lei", "Zhang, Kai", "Tang, Guohua", "Han, Xiao", "Wei, Yang" ]
SoftDedup: an Efficient Data Reweighting Method for Speeding Up Language Model Pre-training
acl-long.220
Poster
2407.06654v1
https://aclanthology.org/2024.acl-long.221.bib
@inproceedings{bian-etal-2024-rule, title = "Rule or Story, Which is a Better Commonsense Expression for Talking with Large Language Models?", author = "Bian, Ning and Han, Xianpei and Lin, Hongyu and Lu, Yaojie and He, Ben and Sun, Le", editor = "Ku, Lun-Wei and Ma...
Building machines with commonsense has been a longstanding challenge in NLP due to the reporting bias of commonsense rules and the exposure bias of rule-based commonsense reasoning. In contrast, humans convey and pass down commonsense implicitly through stories. This paper investigates the inherent commonsense ability ...
[ "Bian, Ning", "Han, Xianpei", "Lin, Hongyu", "Lu, Yaojie", "He, Ben", "Sun, Le" ]
Rule or Story, Which is a Better Commonsense Expression for Talking with Large Language Models?
acl-long.221
Poster
2402.14355v2
https://aclanthology.org/2024.acl-long.222.bib
@inproceedings{tan-etal-2024-learning, title = "Learning Global Controller in Latent Space for Parameter-Efficient Fine-Tuning", author = "Tan, Zeqi and Shen, Yongliang and Cheng, Xiaoxia and Zong, Chang and Zhang, Wenqi and Shao, Jian and Lu, Weiming and Zhuang,...
While large language models (LLMs) have showcased remarkable prowess in various natural language processing tasks, their training costs are exorbitant. Consequently, a plethora of parameter-efficient fine-tuning methods have emerged to tailor large models for downstream tasks, including low-rank training. Recent approa...
[ "Tan, Zeqi", "Shen, Yongliang", "Cheng, Xiaoxia", "Zong, Chang", "Zhang, Wenqi", "Shao, Jian", "Lu, Weiming", "Zhuang, Yueting" ]
Learning Global Controller in Latent Space for Parameter-Efficient Fine-Tuning
acl-long.222
Poster
2306.11378v1
https://aclanthology.org/2024.acl-long.223.bib
@inproceedings{chen-etal-2024-camml, title = "{C}a{MML}: Context-Aware Multimodal Learner for Large Models", author = "Chen, Yixin and Zhang, Shuai and Han, Boran and He, Tong and Li, Bo", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle ...
In this work, we introduce Context-Aware MultiModal Learner (CaMML), for tuning large multimodal models (LMMs). CaMML, a lightweight module, is crafted to seamlessly integrate multimodal contextual samples into large models, thereby empowering the model to derive knowledge from analogous, domain-specific, up-to-date in...
[ "Chen, Yixin", "Zhang, Shuai", "Han, Boran", "He, Tong", "Li, Bo" ]
CaMML: Context-Aware Multimodal Learner for Large Models
acl-long.223
Oral
2401.03149v3
https://aclanthology.org/2024.acl-long.224.bib
@inproceedings{wang-etal-2024-maven, title = "{MAVEN}-{ARG}: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation", author = "Wang, Xiaozhi and Peng, Hao and Guan, Yong and Zeng, Kaisheng and Chen, Jianhui and Hou, Lei and Han, ...
Understanding events in texts is a core objective of natural language understanding, which requires detecting event occurrences, extracting event arguments, and analyzing inter-event relationships. However, due to the annotation challenges brought by task complexity, a large-scale dataset covering the full process of e...
[ "Wang, Xiaozhi", "Peng, Hao", "Guan, Yong", "Zeng, Kaisheng", "Chen, Jianhui", "Hou, Lei", "Han, Xu", "Lin, Yankai", "Liu, Zhiyuan", "Xie, Ruobing", "Zhou, Jie", "Li, Juanzi" ]
MAVEN-ARG: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation
acl-long.224
Oral
2311.09105v2
https://aclanthology.org/2024.acl-long.225.bib
@inproceedings{fan-etal-2024-nphardeval, title = "{NPH}ard{E}val: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes", author = "Fan, Lizhou and Hua, Wenyue and Li, Lingyao and Ling, Haoyang and Zhang, Yongfeng", editor = "Ku, Lun-Wei and ...
Complex reasoning ability is one of the most important features of Large Language Models (LLMs). Numerous benchmarks have been established to assess the reasoning abilities of LLMs. However, they are inadequate in offering a rigorous evaluation and prone to the risk of overfitting, as these publicly accessible and stat...
[ "Fan, Lizhou", "Hua, Wenyue", "Li, Lingyao", "Ling, Haoyang", "Zhang, Yongfeng" ]
NPHardEval: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes
acl-long.225
Poster
2312.14890v4
https://aclanthology.org/2024.acl-long.226.bib
@inproceedings{he-etal-2024-watermarks, title = "Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models", author = "He, Zhiwei and Zhou, Binglin and Hao, Hongkun and Liu, Aiwei and Wang, Xing and Tu, Zhaopeng and ...
Text watermarking technology aims to tag and identify content produced by large language models (LLMs) to prevent misuse. In this study, we introduce the concept of cross-lingual consistency in text watermarking, which assesses the ability of text watermarks to maintain their effectiveness after being translated into o...
[ "He, Zhiwei", "Zhou, Binglin", "Hao, Hongkun", "Liu, Aiwei", "Wang, Xing", "Tu, Zhaopeng", "Zhang, Zhuosheng", "Wang, Rui" ]
Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models
acl-long.226
Oral
2402.14007v2
https://aclanthology.org/2024.acl-long.227.bib
@inproceedings{chaszczewicz-etal-2024-multi, title = "Multi-Level Feedback Generation with Large Language Models for Empowering Novice Peer Counselors", author = "Chaszczewicz, Alicja and Shah, Raj and Louie, Ryan and Arnow, Bruce and Kraut, Robert and Yang, Diyi", editor ...
Realistic practice and tailored feedback are key processes for training peer counselors with clinical skills. However, existing mechanisms of providing feedback largely rely on human supervision. Peer counselors often lack mechanisms to receive detailed feedback from experienced mentors, making it difficult for them to...
[ "Chaszczewicz, Alicja", "Shah, Raj", "Louie, Ryan", "Arnow, Bruce", "Kraut, Robert", "Yang, Diyi" ]
Multi-Level Feedback Generation with Large Language Models for Empowering Novice Peer Counselors
acl-long.227
Poster
2403.15482v1
https://aclanthology.org/2024.acl-long.228.bib
@inproceedings{shankar-etal-2024-context, title = "In-context Mixing ({ICM}): Code-mixed Prompts for Multilingual {LLM}s", author = "Shankar, Bhavani and Jyothi, Preethi and Bhattacharyya, Pushpak", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "P...
We introduce a simple and effective prompting technique called in-context mixing (ICM) for effective in-context learning (ICL) with multilingual large language models (MLLMs). With ICM, we modify the few-shot examples within ICL prompts to be intra-sententially code-mixed by randomly swapping content words in the targe...
[ "Shankar, Bhavani", "Jyothi, Preethi", "Bhattacharyya, Pushpak" ]
In-context Mixing (ICM): Code-mixed Prompts for Multilingual LLMs
acl-long.228
Poster
null
https://aclanthology.org/2024.acl-long.229.bib
@inproceedings{zhang-etal-2024-respond, title = "Respond in my Language: Mitigating Language Inconsistency in Response Generation based on Large Language Models", author = "Zhang, Liang and Jin, Qin and Huang, Haoyang and Zhang, Dongdong and Wei, Furu", editor = "Ku, Lun-Wei and...
Large Language Models (LLMs) show strong instruction understanding ability across multiple languages. However, they are easily biased towards English in instruction tuning, and generate English responses even given non-English instructions. In this paper, we investigate the language inconsistent generation problem in m...
[ "Zhang, Liang", "Jin, Qin", "Huang, Haoyang", "Zhang, Dongdong", "Wei, Furu" ]
Respond in my Language: Mitigating Language Inconsistency in Response Generation based on Large Language Models
acl-long.229
Poster
2309.02654v3
https://aclanthology.org/2024.acl-long.230.bib
@inproceedings{huang-etal-2024-transferable, title = "Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries", author = "Huang, Yu-Hsiang and Tsai, Yuche and Hsiao, Hsiang and Lin, Hong-Yi and Lin, Shou-De", editor = "Ku, Lun-We...
This study investigates the privacy risks associated with text embeddings, focusing on the scenario where attackers cannot access the original embedding model. Contrary to previous research requiring direct model access, we explore a more realistic threat model by developing a transfer attack method. This approach uses...
[ "Huang, Yu-Hsiang", "Tsai, Yuche", "Hsiao, Hsiang", "Lin, Hong-Yi", "Lin, Shou-De" ]
Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries
acl-long.230
Poster
2406.10280v1
https://aclanthology.org/2024.acl-long.231.bib
@inproceedings{liao-etal-2024-enhancing, title = "Enhancing Reinforcement Learning with Label-Sensitive Reward for Natural Language Understanding", author = "Liao, Kuo and Li, Shuang and Zhao, Meng and Liu, Liqun and Xue, Mengge and Hu, Zhenyu and Han, Honglin and ...
Recent strides in large language models (LLMs) have yielded remarkable performance, leveraging reinforcement learning from human feedback (RLHF) to significantly enhance generation and alignment capabilities. However, RLHF encounters numerous challenges, including the objective mismatch issue, leading to suboptimal per...
[ "Liao, Kuo", "Li, Shuang", "Zhao, Meng", "Liu, Liqun", "Xue, Mengge", "Hu, Zhenyu", "Han, Honglin", "Yin, Chengguo" ]
Enhancing Reinforcement Learning with Label-Sensitive Reward for Natural Language Understanding
acl-long.231
Poster
2405.19763v1
https://aclanthology.org/2024.acl-long.232.bib
@inproceedings{ying-etal-2024-intuitive, title = "Intuitive or Dependent? Investigating {LLM}s{'} Behavior Style to Conflicting Prompts", author = "Ying, Jiahao and Cao, Yixin and Xiong, Kai and Cui, Long and He, Yidong and Liu, Yongbin", editor = "Ku, Lun-Wei and M...
This study investigates the behaviors of Large Language Models (LLMs) when faced with conflicting prompts versus their internal memory. This will not only help to understand LLMs' decision mechanism but also benefit real-world applications, such as retrieval-augmented generation (RAG). Drawing on cognitive theory, we ...
[ "Ying, Jiahao", "Cao, Yixin", "Xiong, Kai", "Cui, Long", "He, Yidong", "Liu, Yongbin" ]
Intuitive or Dependent? Investigating LLMs' Behavior Style to Conflicting Prompts
acl-long.232
Poster
2309.17415v3
https://aclanthology.org/2024.acl-long.233.bib
@inproceedings{zhu-etal-2024-coca, title = "{C}o{CA}: Fusing Position Embedding with Collinear Constrained Attention in Transformers for Long Context Window Extending", author = "Zhu, Shiyi and Ye, Jing and Jiang, Wei and Xue, Siqiao and Zhang, Qi and Wu, Yifan and Li, ...
Self-attention and position embedding are two crucial modules in transformer-based Large Language Models (LLMs). However, the potential relationship between them is far from well studied, especially for long context window extending. In fact, anomalous behaviors that hinder long context extrapolation exist between Rota...
[ "Zhu, Shiyi", "Ye, Jing", "Jiang, Wei", "Xue, Siqiao", "Zhang, Qi", "Wu, Yifan", "Li, Jianguo" ]
CoCA: Fusing Position Embedding with Collinear Constrained Attention in Transformers for Long Context Window Extending
acl-long.233
Poster
2309.08646v3
https://aclanthology.org/2024.acl-long.234.bib
@inproceedings{trienes-etal-2024-infolossqa, title = "{I}nfo{L}oss{QA}: Characterizing and Recovering Information Loss in Text Simplification", author = {Trienes, Jan and Joseph, Sebastian and Schl{\"o}tterer, J{\"o}rg and Seifert, Christin and Lo, Kyle and Xu, Wei and ...
Text simplification aims to make technical texts more accessible to laypeople but often results in deletion of information and vagueness. This work proposes InfoLossQA, a framework to characterize and recover simplification-induced information loss in the form of question-and-answer (QA) pairs. Building on the theory of Qu...
[ "Trienes, Jan", "Joseph, Sebastian", "Schl{\\\"o}tterer, J{\\\"o}rg", "Seifert, Christin", "Lo, Kyle", "Xu, Wei", "Wallace, Byron", "Li, Junyi Jessy" ]
InfoLossQA: Characterizing and Recovering Information Loss in Text Simplification
acl-long.234
Poster
2401.16475v2
https://aclanthology.org/2024.acl-long.235.bib
@inproceedings{zhang-etal-2024-cogenesis, title = "{C}o{G}enesis: A Framework Collaborating Large and Small Language Models for Secure Context-Aware Instruction Following", author = "Zhang, Kaiyan and Wang, Jianyu and Hua, Ermo and Qi, Biqing and Ding, Ning and Zhou, Bowen", ...
With the advancement of language models (LMs), their exposure to private data is increasingly inevitable, and their deployment (especially for smaller ones) on personal devices, such as PCs and smartphones, has become a prevailing trend. In contexts laden with user information, enabling models to both safeguard user pr...
[ "Zhang, Kaiyan", "Wang, Jianyu", "Hua, Ermo", "Qi, Biqing", "Ding, Ning", "Zhou, Bowen" ]
CoGenesis: A Framework Collaborating Large and Small Language Models for Secure Context-Aware Instruction Following
acl-long.235
Poster
2311.18215v1
https://aclanthology.org/2024.acl-long.236.bib
@inproceedings{wang-etal-2024-dapr, title = "{DAPR}: A Benchmark on Document-Aware Passage Retrieval", author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meet...
Neural retrieval work so far focuses on ranking short texts and is challenged by long documents. There are many cases where users want to find a relevant passage within a long document from a huge corpus, e.g. Wikipedia articles, research papers, etc. We propose and name this task *Document-Aware Pa...
[ "Wang, Kexin", "Reimers, Nils", "Gurevych, Iryna" ]
DAPR: A Benchmark on Document-Aware Passage Retrieval
acl-long.236
Poster
1805.03797v1
https://aclanthology.org/2024.acl-long.237.bib
@inproceedings{xue-etal-2024-strengthened, title = "Strengthened Symbol Binding Makes Large Language Models Reliable Multiple-Choice Selectors", author = "Xue, Mengge and Hu, Zhenyu and Liu, Liqun and Liao, Kuo and Li, Shuang and Han, Honglin and Zhao, Meng and Y...
Multiple-Choice Questions (MCQs) constitute a critical area of research in the study of Large Language Models (LLMs). Previous works have investigated the selection bias problem in MCQs within few-shot scenarios, in which the LLM's performance may be influenced by the presentation of answer choices, leaving the selec...
[ "Xue, Mengge", "Hu, Zhenyu", "Liu, Liqun", "Liao, Kuo", "Li, Shuang", "Han, Honglin", "Zhao, Meng", "Yin, Chengguo" ]
Strengthened Symbol Binding Makes Large Language Models Reliable Multiple-Choice Selectors
acl-long.237
Poster
2406.01026v2
https://aclanthology.org/2024.acl-long.238.bib
@inproceedings{chen-etal-2024-sac, title = "{SAC}-{KG}: Exploiting Large Language Models as Skilled Automatic Constructors for Domain Knowledge Graph", author = "Chen, Hanzhu and Shen, Xu and Lv, Qitan and Wang, Jie and Ni, Xiaoqi and Ye, Jieping", editor = "Ku, Lun-Wei a...
Knowledge graphs (KGs) play a pivotal role in knowledge-intensive tasks across specialized domains, where the acquisition of precise and dependable knowledge is crucial. However, existing KG construction methods heavily rely on human intervention to attain qualified KGs, which severely hinders the practical applicabili...
[ "Chen, Hanzhu", "Shen, Xu", "Lv, Qitan", "Wang, Jie", "Ni, Xiaoqi", "Ye, Jieping" ]
SAC-KG: Exploiting Large Language Models as Skilled Automatic Constructors for Domain Knowledge Graph
acl-long.238
Poster
2102.08827v2
https://aclanthology.org/2024.acl-long.239.bib
@inproceedings{yang-etal-2024-uncertainty-guided, title = "Uncertainty-Guided Modal Rebalance for Hateful Memes Detection", author = "Yang, Chuanpeng and Liu, Yaxin and Zhu, Fuqing and Han, Jizhong and Hu, Songlin", editor = "Ku, Lun-Wei and Martins, Andre and Sriku...
Hateful memes detection is a challenging multimodal understanding task that requires comprehensive learning of vision, language, and cross-modal interactions. Previous research has focused on developing effective fusion strategies for integrating hate information from different modalities. However, these methods excess...
[ "Yang, Chuanpeng", "Liu, Yaxin", "Zhu, Fuqing", "Han, Jizhong", "Hu, Songlin" ]
Uncertainty-Guided Modal Rebalance for Hateful Memes Detection
acl-long.239
Poster
2212.06573v2
https://aclanthology.org/2024.acl-long.240.bib
@inproceedings{glockner-etal-2024-missci, title = "Missci: Reconstructing Fallacies in Misrepresented Science", author = "Glockner, Max and Hou, Yufang and Nakov, Preslav and Gurevych, Iryna", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "...
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers. Such misinformation often misrepresents scientific publications and cites them as "proof" to gain perceived credibility. To effectively counter such claims automatically, a system must explain how the claim w...
[ "Glockner, Max", "Hou, Yufang", "Nakov, Preslav", "Gurevych, Iryna" ]
Missci: Reconstructing Fallacies in Misrepresented Science
acl-long.240
Poster
2406.03181v1
https://aclanthology.org/2024.acl-long.241.bib
@inproceedings{reich-schultz-2024-uncovering, title = "Uncovering the Full Potential of Visual Grounding Methods in {VQA}", author = "Reich, Daniel and Schultz, Tanja", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting...
Visual Grounding (VG) methods in Visual Question Answering (VQA) attempt to improve VQA performance by strengthening a model's reliance on question-relevant visual information. The presence of such relevant information in the visual input is typically assumed in training and testing. This assumption, however, is inhe...
[ "Reich, Daniel", "Schultz, Tanja" ]
Uncovering the Full Potential of Visual Grounding Methods in VQA
acl-long.241
Poster
2401.07803v2
https://aclanthology.org/2024.acl-long.242.bib
@inproceedings{tan-etal-2024-small, title = "Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for {LLM}s", author = "Tan, Jiejun and Dou, Zhicheng and Zhu, Yutao and Guo, Peidong and Fang, Kun and Wen, Ji-Rong", editor = "Ku, Lun...
The integration of large language models (LLMs) and search engines represents a significant evolution in knowledge acquisition methodologies. However, determining the knowledge that an LLM already possesses and the knowledge that requires the help of a search engine remains an unresolved issue. Most existing methods so...
[ "Tan, Jiejun", "Dou, Zhicheng", "Zhu, Yutao", "Guo, Peidong", "Fang, Kun", "Wen, Ji-Rong" ]
Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs
acl-long.242
Poster
2402.12052v3
https://aclanthology.org/2024.acl-long.243.bib
@inproceedings{von-daniken-etal-2024-favi, title = "Favi-Score: A Measure for Favoritism in Automated Preference Ratings for Generative {AI} Evaluation", author = {Von D{\"a}niken, Pius and Deriu, Jan and Tuggener, Don and Cieliebak, Mark}, editor = "Ku, Lun-Wei and Martins, Andr...
Generative AI systems have become ubiquitous for all kinds of modalities, which makes the issue of the evaluation of such models more pressing. One popular approach is preference ratings, where the generated outputs of different systems are shown to evaluators who choose their preferences. In recent years the field shi...
[ "Von D{\\\"a}niken, Pius", "Deriu, Jan", "Tuggener, Don", "Cieliebak, Mark" ]
Favi-Score: A Measure for Favoritism in Automated Preference Ratings for Generative AI Evaluation
acl-long.243
Poster
2406.01131v1
https://aclanthology.org/2024.acl-long.244.bib
@inproceedings{ziegenbein-etal-2024-llm, title = "{LLM}-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback", author = "Ziegenbein, Timon and Skitalinskaya, Gabriella and Bayat Makou, Alireza and Wachsmuth, Henning", editor = "Ku, Lun-Wei a...
Ensuring that online discussions are civil and productive is a major challenge for social media platforms. Such platforms usually rely both on users and on automated detection tools to flag inappropriate arguments of other users, which moderators then review. However, this kind of post-hoc moderation is expensive and t...
[ "Ziegenbein, Timon", "Skitalinskaya, Gabriella", "Bayat Makou, Alireza", "Wachsmuth, Henning" ]
LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback
acl-long.244
Poster
2406.03363v1
https://aclanthology.org/2024.acl-long.245.bib
@inproceedings{plenz-frank-2024-graph, title = "Graph Language Models", author = "Plenz, Moritz and Frank, Anette", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Vo...
While Language Models (LMs) are the workhorses of NLP, their interplay with structured knowledge graphs (KGs) is still actively researched. Current methods for encoding such graphs typically either (i) linearize them for embedding with LMs, which underutilizes structural information, or (ii) use Graph Neural Network...
[ "Plenz, Moritz", "Frank, Anette" ]
Graph Language Models
acl-long.245
Oral
2310.08487v1
https://aclanthology.org/2024.acl-long.246.bib
@inproceedings{periti-etal-2024-analyzing, title = "Analyzing Semantic Change through Lexical Replacements", author = "Periti, Francesco and Cassotti, Pierluigi and Dubossarsky, Haim and Tahmasebi, Nina", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", b...
Modern language models are capable of contextualizing words based on their surrounding context. However, this capability is often compromised due to semantic change that leads to words being used in new, unexpected contexts not encountered during pre-training. In this paper, we model semantic change by studying the eff...
[ "Periti, Francesco", "Cassotti, Pierluigi", "Dubossarsky, Haim", "Tahmasebi, Nina" ]
Analyzing Semantic Change through Lexical Replacements
acl-long.246
Poster
2404.18570v1
https://aclanthology.org/2024.acl-long.247.bib
@inproceedings{xu-etal-2024-exploiting, title = "Exploiting Intrinsic Multilateral Logical Rules for Weakly Supervised Natural Language Video Localization", author = "Xu, Zhe and Wei, Kun and Yang, Xu and Deng, Cheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar,...
Weakly supervised natural language video localization (WS-NLVL) aims to retrieve the moment corresponding to a language query in a video with only video-language pairs utilized during training. Despite great success, existing WS-NLVL methods seldom consider the complex temporal relations enclosing the language query ...
[ "Xu, Zhe", "Wei, Kun", "Yang, Xu", "Deng, Cheng" ]
Exploiting Intrinsic Multilateral Logical Rules for Weakly Supervised Natural Language Video Localization
acl-long.247
Poster
1909.13784v2
https://aclanthology.org/2024.acl-long.248.bib
@inproceedings{weber-etal-2024-interpretability, title = "Interpretability of Language Models via Task Spaces", author = "Weber, Lucas and Jumelet, Jaap and Bruni, Elia and Hupkes, Dieuwke", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Pr...
The usual way to interpret language models (LMs) is to test their performance on different benchmarks and subsequently infer their internal processes. In this paper, we present an alternative approach, concentrating on the *quality* of LM processing, with a focus on their language abilities. To this end, we constru...
[ "Weber, Lucas", "Jumelet, Jaap", "Bruni, Elia", "Hupkes, Dieuwke" ]
Interpretability of Language Models via Task Spaces
acl-long.248
Oral
2406.06441v1
https://aclanthology.org/2024.acl-long.249.bib
@inproceedings{cassotti-etal-2024-using, title = "Using Synchronic Definitions and Semantic Relations to Classify Semantic Change Types", author = "Cassotti, Pierluigi and De Pascale, Stefano and Tahmasebi, Nina", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", ...
There is abundant evidence that the ways words change their meaning can be classified into different types of change, highlighting the relationship between the old and new meanings (among them generalisation, specialisation and co-hyponymy transfer). In this paper, we present a way of detecting these types of ...
[ "Cassotti, Pierluigi", "De Pascale, Stefano", "Tahmasebi, Nina" ]
Using Synchronic Definitions and Semantic Relations to Classify Semantic Change Types
acl-long.249
Poster
2406.03452v3
https://aclanthology.org/2024.acl-long.250.bib
@inproceedings{mahaut-etal-2024-factual, title = "Factual Confidence of {LLM}s: on Reliability and Robustness of Current Estimators", author = {Mahaut, Mat{\'e}o and Aina, Laura and Czarnowska, Paula and Hardalov, Momchil and M{\"u}ller, Thomas and Marquez, Lluis}, editor ...
Large Language Models (LLMs) tend to be unreliable on fact-based answers. To address this problem, NLP researchers have proposed a range of techniques to estimate an LLM's confidence over facts. However, due to the lack of a systematic comparison, it is not clear how the different methods compare to one another. To fill thi...
[ "Mahaut, Mat{\\'e}o", "Aina, Laura", "Czarnowska, Paula", "Hardalov, Momchil", "M{\\\"u}ller, Thomas", "Marquez, Lluis" ]
Factual Confidence of LLMs: on Reliability and Robustness of Current Estimators
acl-long.250
Poster
2406.13415v1
https://aclanthology.org/2024.acl-long.251.bib
@inproceedings{dou-etal-2024-stepcoder, title = "{S}tep{C}oder: Improving Code Generation with Reinforcement Learning from Compiler Feedback", author = "Dou, Shihan and Liu, Yan and Jia, Haoxiang and Zhou, Enyu and Xiong, Limao and Shan, Junjie and Huang, Caishuang and...
The advancement of large language models (LLMs) has significantly propelled the field of code generation. Previous work integrated reinforcement learning (RL) with compiler feedback for exploring the output space of LLMs to enhance code generation quality. However, the lengthy code generated by LLMs in response to comp...
[ "Dou, Shihan", "Liu, Yan", "Jia, Haoxiang", "Zhou, Enyu", "Xiong, Limao", "Shan, Junjie", "Huang, Caishuang", "Wang, Xiao", "Fan, Xiaoran", "Xi, Zhiheng", "Zhou, Yuhao", "Ji, Tao", "Zheng, Rui", "Zhang, Qi", "Gui, Tao", "Huang, Xuanjing" ]
StepCoder: Improving Code Generation with Reinforcement Learning from Compiler Feedback
acl-long.251
Poster
2203.05132v1
https://aclanthology.org/2024.acl-long.252.bib
@inproceedings{li-etal-2024-one, title = "One-Shot Learning as Instruction Data Prospector for Large Language Models", author = "Li, Yunshui and Hui, Binyuan and Xia, Xiaobo and Yang, Jiaxi and Yang, Min and Zhang, Lei and Si, Shuzheng and Chen, Ling-Hao and ...
Contemporary practices in instruction tuning often hinge on enlarging data scaling without a clear strategy for ensuring data quality, inadvertently introducing noise that may compromise model performance. To address this challenge, we introduce Nuggets, a novel and efficient methodology that leverages one-shot learnin...
[ "Li, Yunshui", "Hui, Binyuan", "Xia, Xiaobo", "Yang, Jiaxi", "Yang, Min", "Zhang, Lei", "Si, Shuzheng", "Chen, Ling-Hao", "Liu, Junhao", "Liu, Tongliang", "Huang, Fei", "Li, Yongbin" ]
One-Shot Learning as Instruction Data Prospector for Large Language Models
acl-long.252
Poster
2312.10302v4
https://aclanthology.org/2024.acl-long.253.bib
@inproceedings{shi-etal-2024-navigating, title = "Navigating the {O}ver{K}ill in Large Language Models", author = "Shi, Chenyu and Wang, Xiao and Ge, Qiming and Gao, Songyang and Yang, Xianjun and Gui, Tao and Zhang, Qi and Huang, Xuanjing and Zhao, Xun a...
Large language models are meticulously aligned to be both helpful and harmless. However, recent research points to potential overkill, meaning models may refuse to answer benign queries. In this paper, we investigate the factors for overkill by exploring how models handle and determine the safety of queries. Our f...
[ "Shi, Chenyu", "Wang, Xiao", "Ge, Qiming", "Gao, Songyang", "Yang, Xianjun", "Gui, Tao", "Zhang, Qi", "Huang, Xuanjing", "Zhao, Xun", "Lin, Dahua" ]
Navigating the OverKill in Large Language Models
acl-long.253
Poster
2401.17633v1
https://aclanthology.org/2024.acl-long.254.bib
@inproceedings{jacovi-etal-2024-chain, title = "A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains", author = "Jacovi, Alon and Bitton, Yonatan and Bohnet, Bernd and Herzig, Jonathan and Honovich, Or and Tseng, Michael and ...
Prompting language models to provide step-by-step answers (e.g., "Chain-of-Thought") is the prominent approach for complex reasoning tasks, where more accurate reasoning chains typically improve downstream task performance. Recent literature discusses automatic methods to verify reasoning to evaluate and improve ...
[ "Jacovi, Alon", "Bitton, Yonatan", "Bohnet, Bernd", "Herzig, Jonathan", "Honovich, Or", "Tseng, Michael", "Collins, Michael", "Aharoni, Roee", "Geva, Mor" ]
A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains
acl-long.254
Oral
2402.00559v4
https://aclanthology.org/2024.acl-long.255.bib
@inproceedings{ruan-etal-2024-re3, title = "Re3: A Holistic Framework and Dataset for Modeling Collaborative Document Revision", author = "Ruan, Qian and Kuznetsov, Ilia and Gurevych, Iryna", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedi...
Collaborative review and revision of textual documents is the core of knowledge work and a promising target for empirical analysis and NLP assistance. Yet, a holistic framework that would allow modeling complex relationships between document revisions, reviews and author responses is lacking. To address this gap, we in...
[ "Ruan, Qian", "Kuznetsov, Ilia", "Gurevych, Iryna" ]
Re3: A Holistic Framework and Dataset for Modeling Collaborative Document Revision
acl-long.255
Poster
2406.00197v1
https://aclanthology.org/2024.acl-long.256.bib
@inproceedings{czinczoll-etal-2024-nextlevelbert, title = "{N}ext{L}evel{BERT}: Masked Language Modeling with Higher-Level Representations for Long Documents", author = {Czinczoll, Tamara and H{\"o}nes, Christoph and Schall, Maximilian and De Melo, Gerard}, editor = "Ku, Lun-Wei and ...
While (large) language models have significantly improved over the last years, they still struggle to sensibly process long sequences found, e.g., in books, due to the quadratic scaling of the underlying attention mechanism. To address this, we propose NextLevelBERT, a Masked Language Model operating not on tokens, but...
[ "Czinczoll, Tamara", "H{\\\"o}nes, Christoph", "Schall, Maximilian", "De Melo, Gerard" ]
NextLevelBERT: Masked Language Modeling with Higher-Level Representations for Long Documents
acl-long.256
Poster
2106.01040v3
https://aclanthology.org/2024.acl-long.257.bib
@inproceedings{jiang-etal-2024-followbench, title = "{F}ollow{B}ench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models", author = "Jiang, Yuxin and Wang, Yufei and Zeng, Xingshan and Zhong, Wanjun and Li, Liangyou and Mi, Fei and Shan...
The ability to follow instructions is crucial for Large Language Models (LLMs) to handle various real-world applications. Existing benchmarks primarily focus on evaluating pure response quality, rather than assessing whether the response follows constraints stated in the instruction. To fill this research gap, in this ...
[ "Jiang, Yuxin", "Wang, Yufei", "Zeng, Xingshan", "Zhong, Wanjun", "Li, Liangyou", "Mi, Fei", "Shang, Lifeng", "Jiang, Xin", "Liu, Qun", "Wang, Wei" ]
FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models
acl-long.257
Poster
2406.13892v1
https://aclanthology.org/2024.acl-long.258.bib
@inproceedings{jiang-etal-2024-learning, title = "Learning to Edit: Aligning {LLM}s with Knowledge Editing", author = "Jiang, Yuxin and Wang, Yufei and Wu, Chuhan and Zhong, Wanjun and Zeng, Xingshan and Gao, Jiahui and Li, Liangyou and Jiang, Xin and Shan...
Knowledge editing techniques, aiming to efficiently modify a minor proportion of knowledge in large language models (LLMs) without negatively impacting performance across other inputs, have garnered widespread attention. However, existing methods predominantly rely on memorizing the updated knowledge, impeding LLMs fro...
[ "Jiang, Yuxin", "Wang, Yufei", "Wu, Chuhan", "Zhong, Wanjun", "Zeng, Xingshan", "Gao, Jiahui", "Li, Liangyou", "Jiang, Xin", "Shang, Lifeng", "Tang, Ruiming", "Liu, Qun", "Wang, Wei" ]
Learning to Edit: Aligning LLMs with Knowledge Editing
acl-long.258
Poster
2308.09954v1
https://aclanthology.org/2024.acl-long.259.bib
@inproceedings{wang-etal-2024-dolphcoder, title = "{D}olph{C}oder: Echo-Locating Code Large Language Models with Diverse and Multi-Objective Instruction Tuning", author = "Wang, Yejie and He, Keqing and Dong, Guanting and Wang, Pei and Zeng, Weihao and Diao, Muxi and Xu...
Code Large Language Models (Code LLMs) have demonstrated outstanding performance in code-related tasks. Various instruction finetuning approaches have been proposed to boost the code generation performance of pre-trained Code LLMs. In this paper, we introduce a diverse instruction model DolphCoder with self-evaluating ...
[ "Wang, Yejie", "He, Keqing", "Dong, Guanting", "Wang, Pei", "Zeng, Weihao", "Diao, Muxi", "Xu, Weiran", "Wang, Jingang", "Zhang, Mengdi", "Cai, Xunliang" ]
DolphCoder: Echo-Locating Code Large Language Models with Diverse and Multi-Objective Instruction Tuning
acl-long.259
Poster
2403.00338v1
https://aclanthology.org/2024.acl-long.260.bib
@inproceedings{madureira-etal-2024-time, title = "When Only Time Will Tell: Interpreting How Transformers Process Local Ambiguities Through the Lens of Restart-Incrementality", author = "Madureira, Brielen and Kahardipraja, Patrick and Schlangen, David", editor = "Ku, Lun-Wei and Martin...
Incremental models that process sentences one token at a time will sometimes encounter points where more than one interpretation is possible. Causal models are forced to output one interpretation and continue, whereas models that can revise may edit their previous output as the ambiguity is resolved. In this work, we l...
[ "Madureira, Brielen", "Kahardipraja, Patrick", "Schlangen, David" ]
When Only Time Will Tell: Interpreting How Transformers Process Local Ambiguities Through the Lens of Restart-Incrementality
acl-long.260
Poster
2402.13113v2
https://aclanthology.org/2024.acl-long.261.bib
@inproceedings{rizvi-etal-2024-sparc, title = "{S}pa{RC} and {S}pa{RP}: Spatial Reasoning Characterization and Path Generation for Understanding Spatial Reasoning Capability of Large Language Models", author = "Rizvi, Md Imbesat and Zhu, Xiaodan and Gurevych, Iryna", editor = "Ku, Lun-Wei and...
Spatial reasoning is a crucial component of both biological and artificial intelligence. In this work, we present a comprehensive study of the capability of current state-of-the-art large language models (LLMs) on spatial reasoning. To support our study, we created and contribute a novel Spatial Reasoning Characterizat...
[ "Rizvi, Md Imbesat", "Zhu, Xiaodan", "Gurevych, Iryna" ]
SpaRC and SpaRP: Spatial Reasoning Characterization and Path Generation for Understanding Spatial Reasoning Capability of Large Language Models
acl-long.261
Poster
2406.04566v1
https://aclanthology.org/2024.acl-long.262.bib
@inproceedings{he-etal-2024-planning, title = "Planning Like Human: A Dual-process Framework for Dialogue Planning", author = "He, Tao and Liao, Lizi and Cao, Yixin and Liu, Yuanxing and Liu, Ming and Chen, Zerui and Qin, Bing", editor = "Ku, Lun-Wei and Mart...
In proactive dialogue, the challenge lies not just in generating responses but in steering conversations toward predetermined goals, a task where Large Language Models (LLMs) typically struggle due to their reactive nature. Traditional approaches to enhance dialogue planning in LLMs, ranging from elaborate prompt engin...
[ "He, Tao", "Liao, Lizi", "Cao, Yixin", "Liu, Yuanxing", "Liu, Ming", "Chen, Zerui", "Qin, Bing" ]
Planning Like Human: A Dual-process Framework for Dialogue Planning
acl-long.262
Poster
2406.05374v1
https://aclanthology.org/2024.acl-long.263.bib
@inproceedings{cancedda-2024-spectral, title = "Spectral Filters, Dark Signals, and Attention Sinks", author = "Cancedda, Nicola", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguis...
Projecting intermediate representations onto the vocabulary is an increasingly popular interpretation tool for transformer-based LLMs, also known as the logit lens (Nostalgebraist). We propose a quantitative extension to this approach and define spectral filters on intermediate representations based on partitioning the...
[ "Cancedda, Nicola" ]
Spectral Filters, Dark Signals, and Attention Sinks
acl-long.263
Poster
2402.09221v1
https://aclanthology.org/2024.acl-long.264.bib
@inproceedings{gao-etal-2024-diffucomet, title = "{D}iffu{COMET}: Contextual Commonsense Knowledge Diffusion", author = "Gao, Silin and Ismayilzada, Mete and Zhao, Mengjie and Wakaki, Hiromi and Mitsufuji, Yuki and Bosselut, Antoine", editor = "Ku, Lun-Wei and Marti...
Inferring contextually-relevant and diverse commonsense to understand narratives remains challenging for knowledge models. In this work, we develop a series of knowledge models, DiffuCOMET, that leverage diffusion to learn to reconstruct the implicit semantic connections between narrative contexts and relevant commonse...
[ "Gao, Silin", "Ismayilzada, Mete", "Zhao, Mengjie", "Wakaki, Hiromi", "Mitsufuji, Yuki", "Bosselut, Antoine" ]
DiffuCOMET: Contextual Commonsense Knowledge Diffusion
acl-long.264
Poster
2402.17011v1
https://aclanthology.org/2024.acl-long.265.bib
@inproceedings{sahinuc-etal-2024-systematic, title = "Systematic Task Exploration with {LLM}s: A Study in Citation Text Generation", author = "{\c{S}}ahinu{\c{c}}, Furkan and Kuznetsov, Ilia and Hou, Yufang and Gurevych, Iryna", editor = "Ku, Lun-Wei and Martins, Andre and ...
Large language models (LLMs) bring unprecedented flexibility in defining and executing complex, creative natural language generation (NLG) tasks. Yet, this flexibility brings new challenges, as it introduces new degrees of freedom in formulating the task inputs and instructions and in evaluating model performance. To f...
[ "{\\c{S}}ahinu{\\c{c}}, Furkan", "Kuznetsov, Ilia", "Hou, Yufang", "Gurevych, Iryna" ]
Systematic Task Exploration with LLMs: A Study in Citation Text Generation
acl-long.265
Poster
2407.04046v1
https://aclanthology.org/2024.acl-long.266.bib
@inproceedings{bortoletto-etal-2024-limits, title = "Limits of Theory of Mind Modelling in Dialogue-Based Collaborative Plan Acquisition", author = "Bortoletto, Matteo and Ruhdorfer, Constantin and Abdessaied, Adnen and Shi, Lei and Bulling, Andreas", editor = "Ku, Lun-Wei and ...
Recent work on dialogue-based collaborative plan acquisition (CPA) has suggested that Theory of Mind (ToM) modelling can improve missing knowledge prediction in settings with asymmetric skill-sets and knowledge. Although ToM was claimed to be important for effective collaboration, its real impact on this novel task rem...
[ "Bortoletto, Matteo", "Ruhdorfer, Constantin", "Abdessaied, Adnen", "Shi, Lei", "Bulling, Andreas" ]
Limits of Theory of Mind Modelling in Dialogue-Based Collaborative Plan Acquisition
acl-long.266
Poster
2405.12621v2
https://aclanthology.org/2024.acl-long.267.bib
@inproceedings{chen-etal-2024-temporal, title = "Temporal Knowledge Question Answering via Abstract Reasoning Induction", author = "Chen, Ziyang and Li, Dongfang and Zhao, Xiang and Hu, Baotian and Zhang, Min", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, ...
In this study, we address the challenge of enhancing temporal knowledge reasoning in Large Language Models (LLMs). LLMs often struggle with this task, leading to the generation of inaccurate or misleading responses. This issue mainly arises from their limited ability to handle evolving factual knowledge and complex tem...
[ "Chen, Ziyang", "Li, Dongfang", "Zhao, Xiang", "Hu, Baotian", "Zhang, Min" ]
Temporal Knowledge Question Answering via Abstract Reasoning Induction
acl-long.267
Poster
2311.09149v2
https://aclanthology.org/2024.acl-long.268.bib
@inproceedings{lee-etal-2024-wrote, title = "Who Wrote this Code? Watermarking for Code Generation", author = "Lee, Taehyun and Hong, Seokhee and Ahn, Jaewoo and Hong, Ilgee and Lee, Hwaran and Yun, Sangdoo and Shin, Jamin and Kim, Gunhee", editor = "Ku, Lun-...
Since the remarkable generation performance of large language models raised ethical and legal concerns, approaches to detect machine-generated text by embedding watermarks are being developed. However, we discover that the existing works fail to function appropriately in code generation tasks due to the task{'}s nature ...
[ "Lee, Taehyun", "Hong, Seokhee", "Ahn, Jaewoo", "Hong, Ilgee", "Lee, Hwaran", "Yun, Sangdoo", "Shin, Jamin", "Kim, Gunhee" ]
Who Wrote this Code? Watermarking for Code Generation
acl-long.268
Poster
2211.11883v1
https://aclanthology.org/2024.acl-long.269.bib
@inproceedings{islam-etal-2024-mapcoder, title = "{M}ap{C}oder: Multi-Agent Code Generation for Competitive Problem Solving", author = "Islam, Md. Ashraful and Ali, Mohammed Eunus and Parvez, Md Rizwan", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle...
Code synthesis, which requires a deep understanding of complex natural language (NL) problem descriptions, generation of code instructions for complex algorithms and data structures, and the successful execution of comprehensive unit tests, presents a significant challenge. Thus, while large language models (LLMs) demo...
[ "Islam, Md. Ashraful", "Ali, Mohammed Eunus", "Parvez, Md Rizwan" ]
{M}ap{C}oder: Multi-Agent Code Generation for Competitive Problem Solving
acl-long.269
Poster
2305.10679v1
https://aclanthology.org/2024.acl-long.270.bib
@inproceedings{zhu-etal-2024-relayattention, title = "{R}elay{A}ttention for Efficient Large Language Model Serving with Long System Prompts", author = "Zhu, Lei and Wang, Xinjiang and Zhang, Wayne and Lau, Rynson", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vi...
A practical large language model (LLM) service may involve a long system prompt, which specifies the instructions, examples, and knowledge documents of the task and is reused across requests. However, the long system prompt causes throughput/latency bottlenecks as the cost of generating the next token grows w.r.t the s...
[ "Zhu, Lei", "Wang, Xinjiang", "Zhang, Wayne", "Lau, Rynson" ]
{R}elay{A}ttention for Efficient Large Language Model Serving with Long System Prompts
acl-long.270
Poster
2402.14808v3
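The bottleneck described in this abstract is often attacked by precomputing the system prompt's key-value cache once and reusing it across requests. The sketch below illustrates that generic prefix-caching baseline with Hugging Face GPT-2; it is not RelayAttention itself, and the deep copy is a conservative guard against in-place cache mutation across requests.

```python
# Generic prefix-caching sketch (NOT RelayAttention): compute the
# system prompt's KV cache once, then reuse it so each request only
# runs the forward pass over its own tokens.
import copy
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

system_prompt = "You are a concise assistant. Answer in one sentence.\n"
sys_ids = tok(system_prompt, return_tensors="pt").input_ids
with torch.no_grad():
    prefix = model(sys_ids, use_cache=True).past_key_values  # computed once

def next_token(user_text: str) -> str:
    """Greedy next-token prediction that reuses the cached prefix."""
    ids = tok(user_text, return_tensors="pt").input_ids
    with torch.no_grad():
        # deepcopy guards against the cache growing across requests
        out = model(ids, past_key_values=copy.deepcopy(prefix))
    return tok.decode(int(out.logits[0, -1].argmax()))

print(next_token("What is the capital of France?"))
```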
https://aclanthology.org/2024.acl-long.271.bib
@inproceedings{wang-etal-2024-boosting-language, title = "Boosting Language Models Reasoning with Chain-of-Knowledge Prompting", author = "Wang, Jianing and Sun, Qiushi and Li, Xiang and Gao, Ming", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktit...
Recently, Chain-of-Thought (CoT) prompting has delivered success on complex reasoning tasks, which aims at designing a simple prompt like {``}Let{'}s think step by step{''} or multiple in-context exemplars with well-designed rationales to elicit Large Language Models (LLMs) to generate intermediate reasoning steps. How...
[ "Wang, Jianing", "Sun, Qiushi", "Li, Xiang", "Gao, Ming" ]
Boosting Language Models Reasoning with Chain-of-Knowledge Prompting
acl-long.271
Poster
2304.05970v1
https://aclanthology.org/2024.acl-long.272.bib
@inproceedings{guo-etal-2024-open, title = "Open Grounded Planning: Challenges and Benchmark Construction", author = "Guo, Shiguang and Deng, Ziliang and Lin, Hongyu and Lu, Yaojie and Han, Xianpei and Sun, Le", editor = "Ku, Lun-Wei and Martins, Andre and Sr...
The emergence of large language models (LLMs) has increasingly drawn attention to the use of LLMs for human-like planning. Existing work on LLM-based planning either focuses on leveraging the inherent language generation capabilities of LLMs to produce free-style plans, or employs reinforcement learning approaches to l...
[ "Guo, Shiguang", "Deng, Ziliang", "Lin, Hongyu", "Lu, Yaojie", "Han, Xianpei", "Sun, Le" ]
Open Grounded Planning: Challenges and Benchmark Construction
acl-long.272
Poster
2406.02903v1
https://aclanthology.org/2024.acl-long.273.bib
@inproceedings{xu-etal-2024-llm, title = "{LLM} Knows Body Language, Too: Translating Speech Voices into Human Gestures", author = "Xu, Chenghao and Lyu, Guangtao and Yan, Jiexi and Yang, Muli and Deng, Cheng", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, ...
In response to the escalating demand for digital human representations, progress has been made in the generation of realistic human gestures from given speeches. Despite the remarkable achievements of recent research, the generation process frequently includes unintended, meaningless, or non-realistic gestures. To addr...
[ "Xu, Chenghao", "Lyu, Guangtao", "Yan, Jiexi", "Yang, Muli", "Deng, Cheng" ]
{LLM} Knows Body Language, Too: Translating Speech Voices into Human Gestures
acl-long.273
Poster
2405.13336v1
https://aclanthology.org/2024.acl-long.274.bib
@inproceedings{huang-etal-2024-queryagent, title = "{Q}uery{A}gent: A Reliable and Efficient Reasoning Framework with Environmental Feedback based Self-Correction", author = "Huang, Xiang and Cheng, Sitao and Huang, Shanshan and Shen, Jiayu and Xu, Yong and Zhang, Chaoyun and...
Employing Large Language Models (LLMs) for semantic parsing has achieved remarkable success. However, we find existing methods fall short in terms of reliability and efficiency when hallucinations are encountered. In this paper, we address these challenges with a framework called QueryAgent, which solves a question ste...
[ "Huang, Xiang", "Cheng, Sitao", "Huang, Shanshan", "Shen, Jiayu", "Xu, Yong", "Zhang, Chaoyun", "Qu, Yuzhong" ]
{Q}uery{A}gent: A Reliable and Efficient Reasoning Framework with Environmental Feedback based Self-Correction
acl-long.274
Oral
2403.11886v2
https://aclanthology.org/2024.acl-long.275.bib
@inproceedings{sun-etal-2024-pita, title = "{PITA}: Prompting Task Interaction for Argumentation Mining", author = "Sun, Yang and Wang, Muyi and Bao, Jianzhu and Liang, Bin and Zhao, Xiaoyan and Yang, Caihua and Yang, Min and Xu, Ruifeng", editor = "Ku, Lun-W...
Argumentation mining (AM) aims to detect the arguments and their inherent relations from argumentative textual compositions. Generally, AM comprises three key challenging subtasks, including argument component type classification (ACTC), argumentative relation identification (ARI), and argumentative relation type class...
[ "Sun, Yang", "Wang, Muyi", "Bao, Jianzhu", "Liang, Bin", "Zhao, Xiaoyan", "Yang, Caihua", "Yang, Min", "Xu, Ruifeng" ]
{PITA}: Prompting Task Interaction for Argumentation Mining
acl-long.275
Poster
2404.02529v1
https://aclanthology.org/2024.acl-long.276.bib
@inproceedings{duan-etal-2024-shifting, title = "Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models", author = "Duan, Jinhao and Cheng, Hao and Wang, Shiqi and Zavalny, Alex and Wang, Chenan and Xu, Renjing an...
Large Language Models (LLMs) show promising results in language generation and instruction following but frequently {``}hallucinate{''}, making their outputs less reliable. Despite Uncertainty Quantification{'}s (UQ) potential solutions, implementing it accurately within LLMs is challenging. Our research introduces a s...
[ "Duan, Jinhao", "Cheng, Hao", "Wang, Shiqi", "Zavalny, Alex", "Wang, Chenan", "Xu, Renjing", "Kailkhura, Bhavya", "Xu, Kaidi" ]
Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models
acl-long.276
Poster
2408.06816v1
https://aclanthology.org/2024.acl-long.277.bib
@inproceedings{geigle-etal-2024-babel, title = "Babel-{I}mage{N}et: Massively Multilingual Evaluation of Vision-and-Language Representations", author = "Geigle, Gregor and Timofte, Radu and Glava{\v{s}}, Goran", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", b...
Vision-and-language (VL) models with separate encoders for each modality (e.g., CLIP) have become the go-to models for zero-shot image classification and image-text retrieval. They are, however, mostly evaluated in English as multilingual benchmarks are limited in availability. We introduce Babel-ImageNet, a massively ...
[ "Geigle, Gregor", "Timofte, Radu", "Glava{\\v{s}}, Goran" ]
Babel-{I}mage{N}et: Massively Multilingual Evaluation of Vision-and-Language Representations
acl-long.277
Poster
2303.15697v1
https://aclanthology.org/2024.acl-long.278.bib
@inproceedings{li-etal-2024-estimating, title = "Estimating Agreement by Chance for Sequence Annotation", author = "Li, Diya and Rose, Carolyn and Yuan, Ao and Zhou, Chunxiao", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of t...
In the field of natural language processing, correction of performance assessment for chance agreement plays a crucial role in evaluating the reliability of annotations. However, there is a notable dearth of research focusing on chance correction for assessing the reliability of sequence annotation tasks, despite their...
[ "Li, Diya", "Rose, Carolyn", "Yuan, Ao", "Zhou, Chunxiao" ]
Estimating Agreement by Chance for Sequence Annotation
acl-long.278
Poster
2407.11371v1
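For readers unfamiliar with chance correction, the classical per-item case that this paper generalizes to sequence annotation is Cohen's kappa. The sketch below computes it for two annotators; it is the standard textbook formula, not the paper's sequence-level estimator.

```python
# Classical chance-corrected agreement (Cohen's kappa) for two
# annotators over per-item labels -- the standard baseline that the
# paper generalizes; this is NOT the paper's sequence-level method.
from collections import Counter

def cohens_kappa(a: list, b: list) -> float:
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_chance = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)  # agreement expected by chance
    return (p_obs - p_chance) / (1 - p_chance)

# Two annotators labeling four tokens with BIO tags:
print(cohens_kappa(["B", "I", "O", "O"], ["B", "O", "O", "O"]))  # ~0.56
```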
https://aclanthology.org/2024.acl-long.279.bib
@inproceedings{lu-etal-2024-emergent, title = "Are Emergent Abilities in Large Language Models just In-Context Learning?", author = "Lu, Sheng and Bigoulaeva, Irina and Sachdeva, Rachneet and Tayyar Madabushi, Harish and Gurevych, Iryna", editor = "Ku, Lun-Wei and Martins,...
Large language models, comprising billions of parameters and pre-trained on extensive web-scale corpora, have been claimed to acquire certain capabilities without having been specifically trained on them. These capabilities, referred to as {``}emergent abilities,{''} have been a driving force in discussions regarding t...
[ "Lu, Sheng", "Bigoulaeva, Irina", "Sachdeva, Rachneet", "Tayyar Madabushi, Harish", "Gurevych, Iryna" ]
Are Emergent Abilities in Large Language Models just In-Context Learning?
acl-long.279
Poster
2210.16433v3
https://aclanthology.org/2024.acl-long.280.bib
@inproceedings{yu-etal-2024-wavecoder, title = "{W}ave{C}oder: Widespread And Versatile Enhancement For Code Large Language Models By Instruction Tuning", author = "Yu, Zhaojian and Zhang, Xin and Shang, Ning and Huang, Yangyu and Xu, Can and Zhao, Yishujie and Hu, Wenx...
Recent work demonstrates that, after instruction tuning, Code Large Language Models (Code LLMs) can obtain impressive capabilities to address a wide range of code-related tasks. However, current instruction tuning methods for Code LLMs mainly focus on the traditional code generation task, resulting in poor performance ...
[ "Yu, Zhaojian", "Zhang, Xin", "Shang, Ning", "Huang, Yangyu", "Xu, Can", "Zhao, Yishujie", "Hu, Wenxiang", "Yin, Qiufeng" ]
{W}ave{C}oder: Widespread And Versatile Enhancement For Code Large Language Models By Instruction Tuning
acl-long.280
Poster
2312.14187v5
https://aclanthology.org/2024.acl-long.281.bib
@inproceedings{li-etal-2024-eliciting-better, title = "Eliciting Better Multilingual Structured Reasoning from {LLM}s through Code", author = "Li, Bryan and Alkhouli, Tamer and Bonadiman, Daniele and Pappas, Nikolaos and Mansour, Saab", editor = "Ku, Lun-Wei and Martins, A...
The development of large language models (LLM) has shown progress on reasoning, though studies have largely considered either English or simple reasoning tasks. To address this, we introduce a multilingual structured reasoning and explanation dataset, termed xSTREET, that covers four tasks across six languages. xSTREET...
[ "Li, Bryan", "Alkhouli, Tamer", "Bonadiman, Daniele", "Pappas, Nikolaos", "Mansour, Saab" ]
Eliciting Better Multilingual Structured Reasoning from {LLM}s through Code
acl-long.281
Poster
2403.02567v2
https://aclanthology.org/2024.acl-long.282.bib
@inproceedings{ossowski-hu-2024-olive, title = "{OLIVE}: Object Level In-Context Visual Embeddings", author = "Ossowski, Timothy and Hu, Junjie", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for...
Recent generalist vision-language models (VLMs) have demonstrated impressive reasoning capabilities across diverse multimodal tasks. However, these models still struggle with fine-grained object-level understanding and grounding. In terms of modeling, existing VLMs implicitly align text tokens with image patch tokens, ...
[ "Ossowski, Timothy", "Hu, Junjie" ]
{OLIVE}: Object Level In-Context Visual Embeddings
acl-long.282
Poster
2009.09561v1
https://aclanthology.org/2024.acl-long.283.bib
@inproceedings{chen-mueller-2024-quantifying, title = "Quantifying Uncertainty in Answers from any Language Model and Enhancing their Trustworthiness", author = "Chen, Jiuhai and Mueller, Jonas", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings...
We introduce BSDetector, a method for detecting bad and speculative answers from a pretrained Large Language Model by estimating a numeric confidence score for any output it generated. Our uncertainty quantification technique works for any LLM accessible only via a black-box API, whose training data remains unknown. By...
[ "Chen, Jiuhai", "Mueller, Jonas" ]
Quantifying Uncertainty in Answers from any Language Model and Enhancing their Trustworthiness
acl-long.283
Poster
2308.16175v2
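The black-box setting in this abstract rules out access to logits, so confidence must come from the API's outputs alone. A minimal sketch of one common ingredient, sampling-based self-consistency, follows; `query_llm` is a hypothetical stand-in for any text-in/text-out API, and this is not the BSDetector algorithm itself.

```python
# Sampling-based self-consistency scoring for a black-box LLM -- one
# common ingredient of such detectors, NOT the BSDetector algorithm.
# `query_llm` is a hypothetical stand-in for any text-in/text-out API.
from collections import Counter

def query_llm(prompt: str, temperature: float = 1.0) -> str:
    raise NotImplementedError("plug in your black-box LLM API call here")

def consistency_confidence(prompt: str, n_samples: int = 5) -> tuple[str, float]:
    """Sample n answers; confidence = fraction agreeing with the mode."""
    answers = [query_llm(prompt, temperature=1.0) for _ in range(n_samples)]
    best, count = Counter(a.strip().lower() for a in answers).most_common(1)[0]
    return best, count / n_samples
```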
https://aclanthology.org/2024.acl-long.284.bib
@inproceedings{zhang-etal-2024-marathon, title = "Marathon: A Race Through the Realm of Long Context with Large Language Models", author = "Zhang, Lei and Li, Yunshui and Liu, Ziqiang and Yang, Jiaxi and Liu, Junhao and Chen, Longze and Luo, Run and Yang, Min", ...
With the advancement of large language models (LLMs) and the expansion of their context windows, existing long-context benchmarks fall short in effectively evaluating the models{'} comprehension and reasoning abilities in extended texts. Moreover, conventional benchmarks relying on F1 metrics often inaccurately score r...
[ "Zhang, Lei", "Li, Yunshui", "Liu, Ziqiang", "Yang, Jiaxi", "Liu, Junhao", "Chen, Longze", "Luo, Run", "Yang, Min" ]
Marathon: A Race Through the Realm of Long Context with Large Language Models
acl-long.284
Poster
2312.09542v2
https://aclanthology.org/2024.acl-long.285.bib
@inproceedings{gao-etal-2024-beyond, title = "Beyond Scaling: Predicting Patent Approval with Domain-specific Fine-grained Claim Dependency Graph", author = "Gao, Xiaochen and Yao, Feng and Zhao, Kewen and He, Beilei and Kumar, Animesh and Krishnan, Vish and Shang, Jing...
Model scaling is becoming the default choice for many language tasks due to the success of large language models (LLMs). However, it can fall short in specific scenarios where simple customized methods excel. In this paper, we delve into the patent approval prediction task and unveil that simple domain-specific graph m...
[ "Gao, Xiaochen", "Yao, Feng", "Zhao, Kewen", "He, Beilei", "Kumar, Animesh", "Krishnan, Vish", "Shang, Jingbo" ]
Beyond Scaling: Predicting Patent Approval with Domain-specific Fine-grained Claim Dependency Graph
acl-long.285
Oral
2404.14372v1
https://aclanthology.org/2024.acl-long.286.bib
@inproceedings{zhuang-etal-2024-pcad, title = "{PCAD}: Towards {ASR}-Robust Spoken Language Understanding via Prototype Calibration and Asymmetric Decoupling", author = "Zhuang, Xianwei and Cheng, Xuxin and Liang, Liming and Xie, Yuxin and Wang, Zhichang and Huang, Zhiqi and ...
Spoken language understanding (SLU) inevitably suffers from error propagation from automatic speech recognition (ASR) in actual scenarios. Some recent works attempt to alleviate this issue through contrastive learning. However, they (1) sample negative pairs incorrectly in pre-training; (2) only focus on implicit metri...
[ "Zhuang, Xianwei", "Cheng, Xuxin", "Liang, Liming", "Xie, Yuxin", "Wang, Zhichang", "Huang, Zhiqi", "Zou, Yuexian" ]
{PCAD}: Towards {ASR}-Robust Spoken Language Understanding via Prototype Calibration and Asymmetric Decoupling
acl-long.286
Poster
9411028v1
https://aclanthology.org/2024.acl-long.287.bib
@inproceedings{jin-etal-2024-rethinking, title = "Rethinking the Multimodal Correlation of Multimodal Sequential Learning via Generalizable Attentional Results Alignment", author = "Jin, Tao and Lin, Wang and Wang, Ye and Li, Linjun and Cheng, Xize and Zhao, Zhou", editor ...
Transformer-based methods have gone mainstream in multimodal sequential learning. The intra- and inter-modality interactions are captured by the query-key associations of multi-head attention. In this way, the calculated multimodal contexts (attentional results) are expected to be relevant to the query modality. However...
[ "Jin, Tao", "Lin, Wang", "Wang, Ye", "Li, Linjun", "Cheng, Xize", "Zhao, Zhou" ]
Rethinking the Multimodal Correlation of Multimodal Sequential Learning via Generalizable Attentional Results Alignment
acl-long.287
Poster
2407.03836v1
https://aclanthology.org/2024.acl-long.288.bib
@inproceedings{liang-etal-2024-uhgeval, title = "{UHGE}val: Benchmarking the Hallucination of {C}hinese Large Language Models via Unconstrained Generation", author = "Liang, Xun and Song, Shichao and Niu, Simin and Li, Zhiyu and Xiong, Feiyu and Tang, Bo and Wang, Yezha...
Large language models (LLMs) produce hallucinated text, compromising their practical utility in professional contexts. To assess the reliability of LLMs, numerous initiatives have developed benchmark evaluations for hallucination phenomena. However, they often employ constrained generation techniques to produce the eva...
[ "Liang, Xun", "Song, Shichao", "Niu, Simin", "Li, Zhiyu", "Xiong, Feiyu", "Tang, Bo", "Wang, Yezhaohui", "He, Dawei", "Peng, Cheng", "Wang, Zhonghao", "Deng, Haiying" ]
{UHGE}val: Benchmarking the Hallucination of {C}hinese Large Language Models via Unconstrained Generation
acl-long.288
Poster
2311.15296v3
https://aclanthology.org/2024.acl-long.289.bib
@inproceedings{lin-etal-2024-preflmr, title = "{P}re{FLMR}: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers", author = "Lin, Weizhe and Mei, Jingbiao and Chen, Jinghong and Byrne, Bill", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", boo...
Large Multimodal Models (LMMs) excel in natural language and visual understanding but are challenged by exacting tasks such as Knowledge-based Visual Question Answering (KB-VQA) which involve the retrieval of relevant information from document collections to use in shaping answers to questions. We present an extensive ...
[ "Lin, Weizhe", "Mei, Jingbiao", "Chen, Jinghong", "Byrne, Bill" ]
{P}re{FLMR}: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers
acl-long.289
Poster
1712.09550v2
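Fine-grained late interaction, the retrieval mechanism named in this title, scores a query against a document by letting every query token pick its best-matching document token (ColBERT-style MaxSim). A generic sketch of that scoring rule, not the PreFLMR implementation:

```python
# ColBERT-style late-interaction (MaxSim) scoring: each query token
# contributes its best match over document tokens. A generic sketch,
# not the PreFLMR implementation.
import numpy as np

def maxsim_score(query_tokens: np.ndarray, doc_tokens: np.ndarray) -> float:
    """query_tokens: (q, d), doc_tokens: (k, d), rows L2-normalized."""
    sim = query_tokens @ doc_tokens.T   # (q, k) cosine similarities
    return float(sim.max(axis=1).sum())

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 32));  q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(20, 32)); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(maxsim_score(q, d))
```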
https://aclanthology.org/2024.acl-long.290.bib
@inproceedings{erker-etal-2024-triple, title = "Triple-Encoders: Representations That Fire Together, Wire Together", author = "Erker, Justus-Jonas and Mai, Florian and Reimers, Nils and Spanakis, Gerasimos and Gurevych, Iryna", editor = "Ku, Lun-Wei and Martins, Andre and...
Search-based dialog models typically re-encode the dialog history at every turn, incurring high cost. Curved Contrastive Learning, a representation learning method that encodes relative distances between utterances into the embedding space via a bi-encoder, has recently shown promising results for dialog modeling at far...
[ "Erker, Justus-Jonas", "Mai, Florian", "Reimers, Nils", "Spanakis, Gerasimos", "Gurevych, Iryna" ]
Triple-Encoders: Representations That Fire Together, Wire Together
acl-long.290
Poster
2110.08232v4
https://aclanthology.org/2024.acl-long.291.bib
@inproceedings{mei-etal-2024-improving, title = "Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning", author = "Mei, Jingbiao and Chen, Jinghong and Lin, Weizhe and Byrne, Bill and Tomalin, Marcus", editor = "Ku, Lun-Wei and Martins, Andre and ...
Hateful memes have emerged as a significant concern on the Internet. Detecting hateful memes requires the system to jointly understand the visual and textual modalities. Our investigation reveals that the embedding space of existing CLIP-based systems lacks sensitivity to subtle differences in memes that are vital for ...
[ "Mei, Jingbiao", "Chen, Jinghong", "Lin, Weizhe", "Byrne, Bill", "Tomalin, Marcus" ]
Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning
acl-long.291
Poster
2311.08110v2
https://aclanthology.org/2024.acl-long.292.bib
@inproceedings{zhang-etal-2024-agent, title = "Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization", author = "Zhang, Wenqi and Tang, Ke and Wu, Hai and Wang, Mengna and Shen, Yongliang and Hou, Guiyang and Tan, Zeqi and Li, Peng and ...
Large Language Models (LLMs) exhibit robust problem-solving capabilities for diverse tasks. However, most LLM-based agents are designed as specific task solvers with sophisticated prompt engineering, rather than agents capable of learning and evolving through interactions. These task solvers necessitate manually crafte...
[ "Zhang, Wenqi", "Tang, Ke", "Wu, Hai", "Wang, Mengna", "Shen, Yongliang", "Hou, Guiyang", "Tan, Zeqi", "Li, Peng", "Zhuang, Yueting", "Lu, Weiming" ]
Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization
acl-long.292
Poster
2402.17574v3
https://aclanthology.org/2024.acl-long.293.bib
@inproceedings{razzhigaev-etal-2024-transformer, title = "Your Transformer is Secretly Linear", author = "Razzhigaev, Anton and Mikhalchuk, Matvey and Goncharova, Elizaveta and Gerasimenko, Nikolai and Oseledets, Ivan and Dimitrov, Denis and Kuznetsov, Andrey", edit...
This paper reveals a novel linear characteristic exclusive to transformer decoders, including models like GPT, LLaMA, OPT, BLOOM and others. We analyze embedding transformations between sequential layers, uncovering an almost perfect linear relationship (Procrustes similarity score of 0.99). However, linearity decrease...
[ "Razzhigaev, Anton", "Mikhalchuk, Matvey", "Goncharova, Elizaveta", "Gerasimenko, Nikolai", "Oseledets, Ivan", "Dimitrov, Denis", "Kuznetsov, Andrey" ]
Your Transformer is Secretly Linear
acl-long.293
Poster
2405.12250v1
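The near-perfect linearity claim in this abstract can be sanity-checked with a least-squares fit between consecutive layers' hidden states. The sketch below (illustrative, with synthetic data standing in for real activations; not the authors' code) computes the R² of the best linear map, which approaches 1.0 when the relation is almost linear.

```python
# Illustrative linearity check between consecutive layers' hidden
# states (synthetic stand-ins here, NOT the authors' code): R^2 of
# the best least-squares linear map X @ W ~ Y approaches 1.0 when
# the layer-to-layer transformation is almost linear.
import numpy as np

def linearity_score(X: np.ndarray, Y: np.ndarray) -> float:
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)    # best linear map X -> Y
    ss_res = np.sum((Y - X @ W) ** 2)
    ss_tot = np.sum((Y - Y.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 64))                                  # layer-l states
Y = X @ rng.normal(size=(64, 64)) + 0.01 * rng.normal(size=(512, 64))
print(linearity_score(X, Y))                                    # ~1.0 (near-linear)
```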
https://aclanthology.org/2024.acl-long.294.bib
@inproceedings{jinadu-ding-2024-noise, title = "Noise Correction on Subjective Datasets", author = "Jinadu, Uthman and Ding, Yi", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational L...
Incorporating every annotator{'}s perspective is crucial for unbiased data modeling. Annotator fatigue and changing opinions over time can distort dataset annotations. To combat this, we propose to learn a more accurate representation of diverse opinions by utilizing multitask learning in conjunction with loss-based la...
[ "Jinadu, Uthman", "Ding, Yi" ]
Noise Correction on Subjective Datasets
acl-long.294
Poster
2206.10609v1
https://aclanthology.org/2024.acl-long.295.bib
@inproceedings{senel-etal-2024-generative, title = "Generative Explore-Exploit: Training-free Optimization of Generative Recommender Systems using {LLM} Optimizers", author = {Senel, L{\"u}tfi Kerem and Fetahu, Besnik and Yoshida, Davis and Chen, Zhiyu and Castellucci, Giuseppe and ...
Recommender systems are widely used to suggest engaging content, and Large Language Models (LLMs) have given rise to generative recommenders. Such systems can directly generate items, including for open-set tasks like question suggestion. While the world knowledge of LLMs enables good recommendations, improving the gen...
[ "Senel, L{\\\"u}tfi Kerem", "Fetahu, Besnik", "Yoshida, Davis", "Chen, Zhiyu", "Castellucci, Giuseppe", "Vedula, Nikhita", "Choi, Jason Ingyu", "Malmasi, Shervin" ]
Generative Explore-Exploit: Training-free Optimization of Generative Recommender Systems using {LLM} Optimizers
acl-long.295
Poster
2406.05255v1
https://aclanthology.org/2024.acl-long.296.bib
@inproceedings{jiang-etal-2024-instruction, title = "Instruction-tuned Language Models are Better Knowledge Learners", author = "Jiang, Zhengbao and Sun, Zhiqing and Shi, Weijia and Rodriguez, Pedro and Zhou, Chunting and Neubig, Graham and Lin, Xi and Yih, Wen-t...
In order for large language model (LLM)-based assistants to effectively adapt to evolving information needs, it must be possible to update their factual knowledge through continued training on new data. The standard recipe for doing so involves continued pre-training on new documents followed by instruction-tuning on q...
[ "Jiang, Zhengbao", "Sun, Zhiqing", "Shi, Weijia", "Rodriguez, Pedro", "Zhou, Chunting", "Neubig, Graham", "Lin, Xi", "Yih, Wen-tau", "Iyer, Srini" ]
Instruction-tuned Language Models are Better Knowledge Learners
acl-long.296
Poster
2306.14101v1
https://aclanthology.org/2024.acl-long.297.bib
@inproceedings{ngo-kim-2024-language, title = "What Do Language Models Hear? Probing for Auditory Representations in Language Models", author = "Ngo, Jerry and Kim, Yoon", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Proceedings of the 62nd Annual Meet...
This work explores whether language models encode meaningfully grounded representations of sounds of objects. We learn a linear probe that retrieves the correct text representation of an object given a snippet of audio related to that object, where the sound representation is given by a pretrained audio model. This pro...
[ "Ngo, Jerry", "Kim, Yoon" ]
What Do Language Models Hear? Probing for Auditory Representations in Language Models
acl-long.297
Poster
2402.16998v1
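The probing setup in this abstract reduces to fitting a linear map from audio embeddings to text embeddings and retrieving by similarity. The sketch below reproduces that recipe on synthetic embeddings standing in for pretrained audio and language-model representations; it illustrates the method class, not the paper's code.

```python
# Linear-probe retrieval sketch on synthetic embeddings standing in
# for pretrained audio / language-model representations (an
# illustration of the method class, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
n, d_a, d_t = 200, 128, 64
A = rng.normal(size=(n, d_a))                        # "audio" embeddings
W_true = rng.normal(size=(d_a, d_t)) / np.sqrt(d_a)
T = A @ W_true + 0.1 * rng.normal(size=(n, d_t))     # matching "text" embeddings

train, test = slice(0, 150), slice(150, 200)
W, *_ = np.linalg.lstsq(A[train], T[train], rcond=None)  # fit the linear probe

pred = A[test] @ W                                   # map audio into text space
pred /= np.linalg.norm(pred, axis=1, keepdims=True)
Tn = T / np.linalg.norm(T, axis=1, keepdims=True)
acc = np.mean(np.argmax(pred @ Tn.T, axis=1) == np.arange(150, 200))
print(f"retrieval accuracy: {acc:.2f}")              # high when the map is learned
```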
https://aclanthology.org/2024.acl-long.298.bib
@inproceedings{kim-etal-2024-threads, title = "Threads of Subtlety: Detecting Machine-Generated Texts Through Discourse Motifs", author = "Kim, Zae Myung and Lee, Kwang and Zhu, Preston and Raheja, Vipul and Kang, Dongyeop", editor = "Ku, Lun-Wei and Martins, Andre and ...
With the advent of large language models (LLM), the line between human-crafted and machine-generated texts has become increasingly blurred. This paper delves into the inquiry of identifying discernible and unique linguistic properties in texts that were written by humans, particularly uncovering the underlying discours...
[ "Kim, Zae Myung", "Lee, Kwang", "Zhu, Preston", "Raheja, Vipul", "Kang, Dongyeop" ]
Threads of Subtlety: Detecting Machine-Generated Texts Through Discourse Motifs
acl-long.298
Poster
2402.10586v2
https://aclanthology.org/2024.acl-long.299.bib
@inproceedings{zhang-etal-2024-jailbreak, title = "Jailbreak Open-Sourced Large Language Models via Enforced Decoding", author = "Zhang, Hangfan and Guo, Zhimeng and Zhu, Huaisheng and Cao, Bochuan and Lin, Lu and Jia, Jinyuan and Chen, Jinghui and Wu, Dinghao", ...
Large Language Models (LLMs) have achieved unprecedented performance in Natural Language Generation (NLG) tasks. However, many existing studies have shown that they could be misused to generate undesired content. In response, before releasing LLMs for public access, model developers usually align those language models ...
[ "Zhang, Hangfan", "Guo, Zhimeng", "Zhu, Huaisheng", "Cao, Bochuan", "Lin, Lu", "Jia, Jinyuan", "Chen, Jinghui", "Wu, Dinghao" ]
Jailbreak Open-Sourced Large Language Models via Enforced Decoding
acl-long.299
Poster
2406.09289v1
https://aclanthology.org/2024.acl-long.300.bib
@inproceedings{srivastava-etal-2024-nice, title = "{NICE}: To Optimize In-Context Examples or Not?", author = "Srivastava, Pragya and Golechha, Satvik and Deshpande, Amit and Sharma, Amit", editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek", booktitle = "Pro...
Recent work shows that in-context learning and optimization of in-context examples (ICE) can significantly improve the accuracy of large language models (LLMs) on a wide range of tasks, leading to an apparent consensus that ICE optimization is crucial for better performance. However, most of these studies assume a fixe...
[ "Srivastava, Pragya", "Golechha, Satvik", "Deshp", "e, Amit", "Sharma, Amit" ]
{NICE}: To Optimize In-Context Examples or Not?
acl-long.300
Poster
9709228v1