| paper_id | title | url | abstract | ocr_markdown |
|---|---|---|---|---|
| zhang-etal-2023-investigating | Investigating Glyph-Phonetic Information for Chinese Spell Checking: What Works and What's Next? | https://aclanthology.org/2023.findings-acl.1 | While pre-trained Chinese language models have demonstrated impressive performance on a wide range of NLP tasks, the Chinese Spell Checking (CSC) task remains a challenge. Previous research has explored using information such as glyphs and phonetics to improve the ability of CSC models to distinguish misspelled charact... | # Investigating Glyph-Phonetic Information for Chinese Spell Checking: What Works and What's Next? Xiaotian Zhang∗, Yanjun Zheng∗, Hang Yan, Xipeng Qiu† Shanghai Key Laboratory of Intelligent Information Processing, Fudan University; School of Computer Science, Fudan University {xiaotianzhang21, yanjunzheng21}@m.f... |
| jo-2023-self | A Self-Supervised Integration Method of Pretrained Language Models and Word Definitions | https://aclanthology.org/2023.findings-acl.2 | We investigate the representation of pretrained language models and humans, using the idea of word definition modeling–how well a word is represented by its definition, and vice versa. Our analysis shows that a word representation in pretrained language models does not successfully map its human-written definition a... | # A Self-Supervised Integration Method of Pretrained Language Models and Word Definitions Hwiyeol Jo, NAVER Search US, hwiyeolj@gmail.com ## Abstract We investigate the representation of pretrained language models and humans, using the idea of word definition modeling–how well a word is represented by its definition, ... |
| ravfogel-etal-2023-conformal | Conformal Nucleus Sampling | https://aclanthology.org/2023.findings-acl.3 | Language models generate text based on successively sampling the next word. A decoding procedure based on nucleus (top-$p$) sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability $p$. In this work, we assess whether a top-$p$ set is indeed aligned with its probabil... | # Conformal Nucleus Sampling Shauli Ravfogel1,2, **Yoav Goldberg**1,2, **Jacob Goldberger**1 1Bar-Ilan University 2Allen Institute for Artificial Intelligence {shauli.ravfogel, yoav.goldberg}@gmail.com, jacob.goldberger@biu.ac.il ## Abstract Language models generate text based on successively sampling the next word. ... |
| chan-etal-2023-discoprompt | DiscoPrompt: Path Prediction Prompt Tuning for Implicit Discourse Relation Recognition | https://aclanthology.org/2023.findings-acl.4 | Implicit Discourse Relation Recognition (IDRR) is a sophisticated and challenging task to recognize the discourse relations between the arguments with the absence of discourse connectives. The sense labels for each discourse relation follow a hierarchical classification scheme in the annotation process (Prasad et al., ... | # DiscoPrompt: Path Prediction Prompt Tuning for Implicit Discourse Relation Recognition Chunkit Chan∗1, Xin Liu∗1, Jiayang Cheng1, Zihan Li1, **Yangqiu Song**1, Ginny Y. Wong2, **Simon See**2 1Department of Computer Science and Engineering, HKUST, Hong Kong SAR, China 2NVIDIA AI Technology Center (NVAITC), NVIDIA, S... |
| cao-jiang-2023-modularized | Modularized Zero-shot VQA with Pre-trained Models | https://aclanthology.org/2023.findings-acl.5 | Large-scale pre-trained models (PTMs) show great zero-shot capabilities. In this paper, we study how to leverage them for zero-shot visual question answering (VQA). Our approach is motivated by a few observations. First, VQA questions often require multiple steps of reasoning, which is still a capability that most PTMs ... | # Modularized Zero-Shot VQA with Pre-Trained Models Rui Cao and **Jing Jiang** School of Computing and Information Systems, Singapore Management University ruicao.2020@phdcs.smu.edu.sg, jingjiang@smu.edu.sg ## Abstract Large-scale pre-trained models (PTMs) show great zero-shot capabilities. In this paper, we study ho... |
| tan-etal-2023-timelineqa | TimelineQA: A Benchmark for Question Answering over Timelines | https://aclanthology.org/2023.findings-acl.6 | Lifelogs are descriptions of experiences that a person had during their life. Lifelogs are created by fusing data from the multitude of digital services, such as online photos, maps, shopping and content streaming services. Question answering over lifelogs can offer personal assistants a critical resource when they try... | # TimelineQA: A Benchmark for Question Answering over Timelines Wang-Chiew Tan, Jane Dwivedi-Yu, Yuliang Li, Lambert Mathias, Marzieh Saeidi*, Jing Nathan Yan+, **and Alon Y. Halevy** Meta, Cornell University+ {wangchiew,janeyu,yuliangli,lambert,ayh}@meta.com marzieh.saeidi@googlemail.com* jy858@cornell.edu+ ## Abstra... |
| lam-etal-2023-abstractive | Abstractive Text Summarization Using the BRIO Training Paradigm | https://aclanthology.org/2023.findings-acl.7 | Summary sentences produced by abstractive summarization models may be coherent and comprehensive, but they lack control and rely heavily on reference summaries. The BRIO training paradigm assumes a non-deterministic distribution to reduce the model's dependence on reference summaries, and improve model performance du... | ## Abstractive Text Summarization Using the BRIO Training Paradigm Khang Nhut Lam, Can Tho University, Vietnam, lnkhang@ctu.edu.vn; Khang Thua Pham, Duy Tan University, Vietnam, phamthuakhang@dtu.edu.vn ## Abstract Summary sentences produced by abstractive summarization models may be coherent and comprehensive, but they... |
| wu-etal-2023-modeling | Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization | https://aclanthology.org/2023.findings-acl.8 | Models trained with empirical risk minimization (ERM) are revealed to easily rely on spurious correlations, resulting in poor generalization. Group distributionally robust optimization (group DRO) can alleviate this problem by minimizing the worst-case loss over pre-defined groups. While promising, in practice factors ... | ## Modeling the Q-Diversity in a Min-Max Play Game for Robust Optimization Ting Wu1, Rui Zheng1, Tao Gui2∗, Qi Zhang1,3, **Xuanjing Huang**1 1School of Computer Science, Fudan University 2Institute of Modern Languages and Linguistics, Fudan University 3Shanghai Key Laboratory of Intelligent Information Processing... |
| chen-etal-2023-pre | Pre-training Language Model as a Multi-perspective Course Learner | https://aclanthology.org/2023.findings-acl.9 | ELECTRA, the generator-discriminator pre-training framework, has achieved impressive semantic construction capability among various downstream tasks. Despite the convincing performance, ELECTRA still faces the challenges of monotonous training and deficient interaction. Generator with only masked language modeling (MLM... | Pre-training Language Model as a Multi-perspective Course Learner Beiduo Chen§‡∗, Shaohan Huang‡†, Zihan Zhang‡, Wu Guo§, **Zhenhua Ling**§, Haizhen Huang‡, Furu Wei‡, Weiwei Deng‡ **and Qi Zhang**‡ § National Engineering Research Center of Speech and Language Information Processing, University of Science and Technol... |
| tsymboi-etal-2023-layerwise | Layerwise universal adversarial attack on NLP models | https://aclanthology.org/2023.findings-acl.10 | In this work, we examine the vulnerability of language models to universal adversarial triggers (UATs). We propose a new white-box approach to the construction of layerwise UATs (LUATs), which searches the triggers by perturbing hidden layers of a network. On the example of three transformer models and three datasets f... | ## Layerwise Universal Adversarial Attack on NLP Models Olga Tsymboi1,2, Danil Malaev1, **Andrei Petrovskii**1, and **Ivan Oseledets**3,4 1Sber AI Lab, Moscow, Russia 2Moscow Institute of Physics and Technology, Moscow, Russia 3Skolkovo Institute of Science and Technology, Moscow, Russia 4Artificial Intelligence... |
| wang-etal-2023-scene | Scene-robust Natural Language Video Localization via Learning Domain-invariant Representations | https://aclanthology.org/2023.findings-acl.11 | Natural language video localization (NLVL) task involves the semantic matching of a text query with a moment from an untrimmed video. Previous methods primarily focus on improving performance with the assumption of independently identical data distribution while ignoring the out-of-distribution data. Therefore, these ap... | # Scene-Robust Natural Language Video Localization via Learning Domain-Invariant Representations Zehan Wang and **Yang Zhao** and **Haifeng Huang** and **Yan Xia** and **Zhou Zhao**∗ {wangzehan01,awalk,huanghaifeng,zhaozhou}@zju.edu.cn Zhejiang University ## Abstract Natural language video localization (NLVL) task i... |
| jiang-etal-2023-exploiting | Exploiting Pseudo Image Captions for Multimodal Summarization | https://aclanthology.org/2023.findings-acl.12 | Multimodal summarization with multimodal output (MSMO) faces a challenging semantic gap between visual and textual modalities due to the lack of reference images for training. Our pilot investigation indicates that image captions, which naturally connect texts and images, can significantly benefit MSMO. However, exposu... | # Exploiting Pseudo Image Captions for Multimodal Summarization Chaoya Jiang1∗, Rui Xie1∗, Wei Ye1, Jinan Sun1,2†, **Shikun Zhang**1† 1National Engineering Research Center for Software Engineering, Peking University 2BIGO Technology {sjn,zhangsk}@pku.edu.cn ## Abstract Multimodal sum... |
| parovic-etal-2023-cross | Cross-Lingual Transfer with Target Language-Ready Task Adapters | https://aclanthology.org/2023.findings-acl.13 | Adapters have emerged as a modular and parameter-efficient approach to (zero-shot) cross-lingual transfer. The established MAD-X framework employs separate language and task adapters which can be arbitrarily combined to perform the transfer of any task to any target language. Subsequently, BAD-X, an extension of the MA... | # Cross-Lingual Transfer with Target Language-Ready Task Adapters Marinela Parović1, Alan Ansell1, **Ivan Vulić**1, **Anna Korhonen**1 1Language Technology Lab, TAL, University of Cambridge {mp939,aja63,iv250,alk23}@cam.ac.uk ## Abstract Adapters have emerged as a modular and parameter-efficient approach to (zero-sh... |
| balepur-etal-2023-dynamite | DynaMiTE: Discovering Explosive Topic Evolutions with User Guidance | https://aclanthology.org/2023.findings-acl.14 | Dynamic topic models (DTMs) analyze text streams to capture the evolution of topics. Despite their popularity, existing DTMs are either fully supervised, requiring expensive human annotations, or fully unsupervised, producing topic evolutions that often do not cater to a user's needs. Further, the topic evolutions pr... | # DynaMiTE: Discovering Explosive Topic Evolutions with User Guidance Nishant Balepur‡∗, Shivam Agarwal‡∗, **Karthik Venkat Ramanan**‡, Susik Yoon‡, Jiawei Han‡, **Diyi Yang**⋆ ‡University of Illinois at Urbana-Champaign, ⋆Stanford University {balepur2,shivama2,kv16,susik,hanj}@illinois.edu, diyiy@stanford.edu ## Abst... |
| yu-etal-2023-boost | Boost Transformer-based Language Models with GPU-Friendly Sparsity and Quantization | https://aclanthology.org/2023.findings-acl.15 | Along with the performance improvement in NLP domain, the sizes of transformer-based language models (TLM) are also dramatically increased. Some prior works intend to compress TLM models into more compact forms, but do not fully consider the hardware characters may not support the efficient execution for these forms, l... | ## Boost Transformer-Based Language Models with GPU-Friendly Sparsity and Quantization Chong Yu1, Tao Chen2,∗, **Zhongxue Gan**1,∗ 1Academy for Engineering and Technology, Fudan University 2School for Information Science and Technology, Fudan University 21110860050@m.fudan.edu.cn, {eetchen, ganzhongxue}@fudan.edu.cn ... |
| he-etal-2023-rmssinger | RMSSinger: Realistic-Music-Score based Singing Voice Synthesis | https://aclanthology.org/2023.findings-acl.16 | We are interested in a challenging task, Realistic-Music-Score based Singing Voice Synthesis (RMS-SVS). RMS-SVS aims to generate high-quality singing voices given realistic music scores with different note types (grace, slur, rest, etc.). Though significant progress has been achieved, recent singing voice synthesis (SV... | # RMSSinger: Realistic-Music-Score Based Singing Voice Synthesis Jinzheng He jinzhenghe@zju.edu.cn Zhejiang University; Jinglin Liu liu.jinglin@bytedance.com ByteDance; Zhenhui Ye zhenhuiye@zju.edu.cn Zhejiang University; Rongjie Huang rongjiehuang@zju.edu.cn Zhejiang University; Huadai Liu huadailiu@zju.edu.cn Zhejiang U... |
| kuo-chen-2023-zero | Zero-Shot Prompting for Implicit Intent Prediction and Recommendation with Commonsense Reasoning | https://aclanthology.org/2023.findings-acl.17 | The current generation of intelligent assistants require explicit user requests to perform tasks or services, often leading to lengthy and complex conversations. In contrast, human assistants can infer multiple implicit intents from utterances via their commonsense knowledge, thereby simplifying interactions. To bridge... | # Zero-Shot Prompting for Implicit Intent Prediction and Recommendation with Commonsense Reasoning Hui-Chi Kuo, Yun-Nung Chen National Taiwan University, Taipei, Taiwan r09922a21@csie.ntu.edu.tw y.v.chen@ieee.org ## Abstract 1 Introduction The current generation of intelligent assistants require explicit user request... |
| liu-etal-2023-mtgp | MTGP: Multi-turn Target-oriented Dialogue Guided by Generative Global Path with Flexible Turns | https://aclanthology.org/2023.findings-acl.18 | Target-oriented dialogue guides the dialogue to a target quickly and smoothly. The latest approaches focus on global planning, which plans toward the target before the conversation instead of adopting a greedy strategy during the conversation. However, the global plan in existing works is fixed to certain turns by gene... | # MTGP: Multi-Turn Target-Oriented Dialogue Guided by Generative Global Path with Flexible Turns Anqi Liu1,2, **Bo Wang**2∗, Yue Tan2, Dongming Zhao3, **Kun Huang**3, Ruifang He2, **Yuexian Hou**2 1School of New Media and Communication, Tianjin University, Tianjin, China 2College of Intelligence and Computing, Tianji... |
| miceli-barone-etal-2023-larger | The Larger they are, the Harder they Fail: Language Models do not Recognize Identifier Swaps in Python | https://aclanthology.org/2023.findings-acl.19 | Large Language Models (LLMs) have successfully been applied to code generation tasks, raising the question of how well these models understand programming. Typical programming languages have invariances and equivariances in their semantics that human programmers intuitively understand and exploit, such as the (near) in... | # The Larger They Are, the Harder They Fail: Language Models Do Not Recognize Identifier Swaps in Python Antonio Valerio Miceli-Barone1∗ amiceli@ed.ac.uk, Fazl Barez1∗ f.barez@ed.ac.uk, Ioannis Konstas2 i.konstas@hw.ac.uk, Shay B. Cohen1 scohen@inf.ed.ac.uk 1 School of Informatics, University of Edinburgh 2 School of Mat... |
| liu-etal-2023-class | Class Lifelong Learning for Intent Detection via Structure Consolidation Networks | https://aclanthology.org/2023.findings-acl.20 | Intent detection, which estimates diverse intents behind user utterances, is an essential component of task-oriented dialogue systems. Previous intent detection models are usually trained offline, which can only handle predefined intent classes. In the real world, new intents may keep challenging deployed models. For e... | # Class Lifelong Learning for Intent Detection via Structure Consolidation Networks Qingbin Liu1, Yanchao Hao1, Xiaolong Liu1, Bo Li1, Dianbo Sui2, **Shizhu He**3,4, Kang Liu3,4, **Jun Zhao**3,4, **Xi Chen**1∗, Ningyu Zhang5, **Jiaoyan Chen**6 1 Platform and Content Group, Tencent, China 2 Harbin Institute of Technol... |
| vashishtha-etal-2023-evaluating | On Evaluating and Mitigating Gender Biases in Multilingual Settings | https://aclanthology.org/2023.findings-acl.21 | While understanding and removing gender biases in language models has been a long-standing problem in Natural Language Processing, prior research work has primarily been limited to English. In this work, we investigate some of the challenges with evaluating and mitigating biases in multilingual settings which stem from... | # On Evaluating and Mitigating Gender Biases in Multilingual Settings Aniket Vashishtha∗, Kabir Ahuja∗, **Sunayana Sitaram** Microsoft Research India {t-aniketva,t-kabirahuja,sunayana.sitaram}@microsoft.com ## Abstract While understanding and removing gender biases in language models has been a longstanding problem in... |
| zhuo-etal-2023-rethinking | Rethinking Round-Trip Translation for Machine Translation Evaluation | https://aclanthology.org/2023.findings-acl.22 | Automatic evaluation methods for translation often require model training, and thus the availability of parallel corpora limits their applicability to low-resource settings. Round-trip translation is a potential workaround, which can reframe bilingual evaluation into a much simpler monolingual task. Early results from ... | # Rethinking Round-Trip Translation for Machine Translation Evaluation Terry Yue Zhuo1 and **Qiongkai Xu**2∗ and **Xuanli He**3 and **Trevor Cohn**2† 1 Monash University, Clayton, VIC, Australia 2 The University of Melbourne, Carlton, VIC, Australia 3 University College London, London, United Kingdom terry.zhuo@monash.ed... |
| xiang-etal-2023-g | $G^3R$: A Graph-Guided Generate-and-Rerank Framework for Complex and Cross-domain Text-to-SQL Generation | https://aclanthology.org/2023.findings-acl.23 | We present a framework called G3R for complex and cross-domain Text-to-SQL generation. G3R aims to address two limitations of current approaches: (1) The structure of the abstract syntax tree (AST) is not fully explored during the decoding process which is crucial for complex SQL generation; (2) Domain knowledge is not... | # $G^3R$: A Graph-Guided Generate-and-Rerank Framework for Complex and Cross-Domain Text-to-SQL Generation Yanzheng Xiang1, Qian-Wen Zhang2, Xu Zhang1, **Zejie Liu**1, Yunbo Cao2 **and Deyu Zhou**1∗ 1School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry o... |
| ding-etal-2023-unified | A Unified Knowledge Graph Augmentation Service for Boosting Domain-specific NLP Tasks | https://aclanthology.org/2023.findings-acl.24 | By focusing the pre-training process on domain-specific corpora, some domain-specific pre-trained language models (PLMs) have achieved state-of-the-art results. However, it is under-investigated to design a unified paradigm to inject domain knowledge in the PLM fine-tuning stage. We propose KnowledgeDA, a unified domai... | ## A Unified Knowledge Graph Augmentation Service for Boosting Domain-Specific NLP Tasks Ruiqing Ding1,2, Xiao Han∗3, **Leye Wang**∗1,2 1Key Lab of High Confidence Software Technologies (Peking University), Ministry of Education, China 2School of Computer Science, Peking University, Beijing, China 3School of Informat... |
| wang-etal-2023-dialogue | Dialogue Planning via Brownian Bridge Stochastic Process for Goal-directed Proactive Dialogue | https://aclanthology.org/2023.findings-acl.25 | Goal-directed dialogue systems aim to proactively reach a pre-determined target through multi-turn conversations. The key to achieving this task lies in planning dialogue paths that smoothly and coherently direct conversations towards the target. However, this is a challenging and under-explored task. In this work, we ... | # Dialogue Planning via Brownian Bridge Stochastic Process for Goal-Directed Proactive Dialogue Jian Wang∗, Dongding Lin∗, Wenjie Li Department of Computing, The Hong Kong Polytechnic University {jian-dylan.wang, dongding88.lin}@connect.polyu.hk cswjli@comp.polyu.edu.hk ## Abstract Goal-directed dialogue systems a... |
| badathala-etal-2023-match | A Match Made in Heaven: A Multi-task Framework for Hyperbole and Metaphor Detection | https://aclanthology.org/2023.findings-acl.26 | Hyperbole and metaphor are common in day-to-day communication (e.g., "I am in deep trouble": how does trouble have depth?), which makes their detection important, especially in a conversational AI setting. Existing approaches to automatically detect metaphor and hyperbole have studied these language phenomena ind... | # A Match Made in Heaven: A Multi-Task Framework for Hyperbole and Metaphor Detection Naveen Badathala∗, Abisek Rajakumar Kalarani∗, **Tejpalsingh Siledar**∗, Pushpak Bhattacharyya Department of Computer Science and Engineering, IIT Bombay, India {naveenbadathala, abisekrk, tejpalsingh, pb}@cse.iitb.ac.in ## Abstract... |
| yang-etal-2023-prompt | Prompt Tuning for Unified Multimodal Pretrained Models | https://aclanthology.org/2023.findings-acl.27 | Prompt tuning has become a new paradigm for model tuning and it has demonstrated success in natural language pretraining and even vision pretraining. The parameter-efficient prompt tuning methods that optimize soft embeddings while keeping the pretrained model frozen demonstrate advantages in low computation costs and ... | # Prompt Tuning for Unified Multimodal Pretrained Models Hao Yang∗, Junyang Lin∗, **An Yang, Peng Wang, Chang Zhou** DAMO Academy, Alibaba Group {yh351016, junyang.ljy, ya235025, zheluo.wp, ericzhou.zc}@alibaba-inc.com ## Abstract Prompt tuning has become a new paradigm for model tuning and it has demonstrated succe... |
| gao-etal-2023-learning | Learning Joint Structural and Temporal Contextualized Knowledge Embeddings for Temporal Knowledge Graph Completion | https://aclanthology.org/2023.findings-acl.28 | Temporal knowledge graph completion that predicts missing links for incomplete temporal knowledge graphs (TKG) is gaining increasing attention. Most existing works have achieved good results by incorporating time information into static knowledge graph embedding methods. However, they ignore the contextual nature of th... | # Learning Joint Structural and Temporal Contextualized Knowledge Embeddings for Temporal Knowledge Graph Completion Yifu Gao1, Yongquan He2, Zhigang Kan1, Yi Han3, Linbo Qiao1, **Dongsheng Li**1∗ 1 National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha, Chin... |
| laskar-etal-2023-systematic | A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets | https://aclanthology.org/2023.findings-acl.29 | The development of large language models (LLMs) such as ChatGPT has brought a lot of attention recently. However, their evaluation in the benchmark academic datasets remains under-explored due to the difficulty of evaluating the generative outputs produced by this model against the ground truth. In this paper, we aim t... | # A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets Md Tahmid Rahman Laskar∗†§, M Saiful Bari∗‡, Mizanur Rahman∗†¶, Md Amran Hossen Bhuiyan†, Shafiq Joty‡$, **Jimmy Xiangji Huang**† †York University, ‡Nanyang Technological University, §Dialpad Canada Inc., ¶Royal Bank of Canada, $Salesf... |
| yu-etal-2023-generating-deep | Generating Deep Questions with Commonsense Reasoning Ability from the Text by Disentangled Adversarial Inference | https://aclanthology.org/2023.findings-acl.30 | This paper proposes a new task of commonsense question generation, which aims to yield deep-level and to-the-point questions from the text. Their answers need to reason over disjoint relevant contexts and external commonsense knowledge, such as encyclopedic facts and causality. The knowledge may not be explicitly menti... | # Generating Deep Questions with Commonsense Reasoning Ability from the Text by Disentangled Adversarial Inference Jianxing Yu, Shiqi Wang, Libin Zheng, Qinliang Su, Wei Liu, Baoquan Zhao, Jian Yin∗ School of Artificial Intelligence, Sun Yat-sen University; Guangdong Key Laboratory of Big Data Analysis and Processing, ... |
| hung-etal-2023-tada | TADA: Efficient Task-Agnostic Domain Adaptation for Transformers | https://aclanthology.org/2023.findings-acl.31 | Intermediate training of pre-trained transformer-based language models on domain-specific data leads to substantial gains for downstream tasks. To increase efficiency and prevent catastrophic forgetting alleviated from full domain-adaptive pre-training, approaches such as adapters have been developed. However, these re... | ## TADA: Efficient Task-Agnostic Domain Adaptation for Transformers Chia-Chien Hung1,2,3∗, Lukas Lange3, Jannik Strötgen3,4 1NEC Laboratories Europe GmbH, Heidelberg, Germany 2Data and Web Science Group, University of Mannheim, Germany 3Bosch Center for Artificial Intelligence, Renningen, Germany 4Karlsruhe Un... |
| wang-etal-2023-robust | Robust Natural Language Understanding with Residual Attention Debiasing | https://aclanthology.org/2023.findings-acl.32 | Natural language understanding (NLU) models often suffer from unintended dataset biases. Among bias mitigation methods, ensemble-based debiasing methods, especially product-of-experts (PoE), have stood out for their impressive empirical success. However, previous ensemble-based debiasing methods typically apply debiasi... | # Robust Natural Language Understanding with Residual Attention Debiasing Fei Wang,∗ James Y. Huang,∗ **Tianyi Yan, Wenxuan Zhou** and **Muhao Chen** University of Southern California {fwang598,huangjam,tianyiy,zhouwenx,muhaoche}@usc.edu ## Abstract Natural language understanding (NL... |
| zhang-etal-2023-monet | MoNET: Tackle State Momentum via Noise-Enhanced Training for Dialogue State Tracking | https://aclanthology.org/2023.findings-acl.33 | Dialogue state tracking (DST) aims to convert the dialogue history into dialogue states which consist of slot-value pairs. As condensed structural information memorizes all history information, the dialogue state in the previous turn is typically adopted as the input for predicting the current state by DST models. Howe... | # MoNET: Tackle State Momentum via Noise-Enhanced Training for Dialogue State Tracking Haoning Zhang1,3, Junwei Bao2∗, **Haipeng Sun**2, Youzheng Wu2, Wenye Li4,5, Shuguang Cui3,1,6, **Xiaodong He**2 1FNii, CUHK-Shenzhen 2JD AI Research 3SSE, CUHK-Shenzhen 4SDS, CUHK-Shenzhen 5SRIBD 6Pengcheng Lab haoningzhang@link.cu... |
| cheng-etal-2023-pal | PAL: Persona-Augmented Emotional Support Conversation Generation | https://aclanthology.org/2023.findings-acl.34 | Due to the lack of human resources for mental health support, there is an increasing demand for employing conversational agents for support. Recent work has demonstrated the effectiveness of dialogue models in providing emotional support. As previous studies have demonstrated that seekers' persona is an important fac... | # PAL: Persona-Augmented Emotional Support Conversation Generation Jiale Cheng∗, Sahand Sabour∗, **Hao Sun, Zhuang Chen, Minlie Huang**† The CoAI group, DCST; Institute for Artificial Intelligence; State Key Lab of Intelligent Technology and Systems; Beijing National Research Center for Information Science and Techn... |
| wang-etal-2023-farewell | Farewell to Aimless Large-scale Pretraining: Influential Subset Selection for Language Model | https://aclanthology.org/2023.findings-acl.35 | Pretrained language models have achieved remarkable success in various natural language processing tasks. However, pretraining has recently shifted toward larger models and larger data, which has resulted in significant computational and energy costs. In this paper, we propose Influence Subset Selection (ISS) for langu... | # Farewell to Aimless Large-Scale Pretraining: Influential Subset Selection for Language Model Xiao Wang⋆∗, Weikang Zhou⋆∗, Qi Zhang⋆†, Jie Zhou⋆, **Songyang Gao**⋆, Junzhe Wang⋆, Menghan Zhang♦, Xiang Gao♣, Yunwen Chen♣, **Tao Gui**♦† ⋆ School of Computer Science, Fudan University, Shanghai, China ♦ Institute of Mode... |
| yadav-bansal-2023-exclusive | Exclusive Supermask Subnetwork Training for Continual Learning | https://aclanthology.org/2023.findings-acl.36 | Continual Learning (CL) methods focus on accumulating knowledge over time while avoiding catastrophic forgetting. Recently, Wortsman et al. (2020) proposed a CL method, SupSup, which uses a randomly initialized, fixed base network (model) and finds a supermask for each new task that selectively keeps or removes each we... | # Exclusive Supermask Subnetwork Training for Continual Learning Prateek Yadav & Mohit Bansal Department of Computer Science, UNC Chapel Hill {praty,mbansal}@cs.unc.edu ## Abstract Continual Learning (CL) methods focus on accumulating knowledge over time while avoiding catastrophic forgetting. Recently, Wortsman et a... |
| lin-etal-2023-transferring | Transferring General Multimodal Pretrained Models to Text Recognition | https://aclanthology.org/2023.findings-acl.37 | This paper proposes a new method, OFA-OCR, to transfer multimodal pretrained models to text recognition. Specifically, we recast text recognition as image captioning and directly transfer a unified vision-language pretrained model to the end task. Without pretraining on large-scale annotated or synthetic text recogniti... | # Transferring General Multimodal Pretrained Models to Text Recognition Junyang Lin, Xuancheng Ren, Yichang Zhang, Gao Liu, Peng Wang, An Yang, Chang Zhou DAMO Academy, Alibaba Group junyang.ljy@alibaba-inc.com ## Abstract This paper proposes a new method, OFA-OCR, to transfer multimodal pretrained models to text re... |
| zouhar-etal-2023-formal | A Formal Perspective on Byte-Pair Encoding | https://aclanthology.org/2023.findings-acl.38 | Byte-Pair Encoding (BPE) is a popular algorithm used for tokenizing data in NLP, despite being devised initially as a compression method. BPE appears to be a greedy algorithm at face value, but the underlying optimization problem that BPE seeks to solve has not yet been laid down. We formalize BPE as a combinatorial opt... | # A Formal Perspective on Byte-Pair Encoding Vilém ZouharE, Clara MeisterE, Juan Luis GastaldiE, **Li Du**J, Tim VieiraJ, Mrinmaya SachanE, **Ryan Cotterell**E ETH ZürichE, Johns Hopkins UniversityJ {vzouhar,meistecl,gjuan,msachan,ryan.cotterell}@ethz.ch {leodu,timv}@cs.jhu.edu ## Abstract Byte-Pair Encoding (BPE) is a popu... |
| preiss-2023-automatic | Automatic Named Entity Obfuscation in Speech | https://aclanthology.org/2023.findings-acl.39 | Sharing data containing personal information often requires its anonymization, even when consent for sharing was obtained from the data originator. While approaches exist for automated anonymization of text, the area is not as thoroughly explored in speech. This work focuses on identifying, replacing and inserting repl... | # Automatic Named Entity Obfuscation in Speech Judita Preiss University of Sheffield, Information School, The Wave, 2 Whitham Road, Sheffield S10 2AH judita.preiss@sheffield.ac.uk ## Abstract Sharing data containing personal information often requires its anonymization, even when consent for sharing was obtained from ... |
| lee-kim-2023-recursion | Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models | https://aclanthology.org/2023.findings-acl.40 | Generating intermediate steps, or Chain of Thought (CoT), is an effective way to significantly improve language models' (LM) multi-step reasoning capability. However, the CoT lengths can grow rapidly with the problem complexity, easily exceeding the maximum context size. Instead of increasing the context limit, which... | # Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models Soochan Lee Seoul National University soochan.lee@vision.snu.ac.kr ## Abstract Generating intermediate steps, or Chain of Thought (CoT), is an effective way to significantly improve language models' (LM) multistep r... |
| zou-etal-2023-unis | UniS-MMC: Multimodal Classification via Unimodality-supervised Multimodal Contrastive Learning | https://aclanthology.org/2023.findings-acl.41 | Multimodal learning aims to imitate human beings to acquire complementary information from multiple modalities for various downstream tasks. However, traditional aggregation-based multimodal fusion methods ignore the inter-modality relationship, treat each modality equally, suffer sensor noise, and thus reduce multimod... | # UniS-MMC: Multimodal Classification via Unimodality-Supervised Multimodal Contrastive Learning Heqing Zou, Meng Shen, Chen Chen, Yuchen Hu, Deepu Rajan, Eng Siong Chng Nanyang Technological University, Singapore {heqing001, meng005, chen1436, yuchen005}@e.ntu.edu.sg, {asdrajan, aseschng}@ntu.edu.sg ## Abstract Mul... |
| wang-etal-2023-robustness | Robustness-Aware Word Embedding Improves Certified Robustness to Adversarial Word Substitutions | https://aclanthology.org/2023.findings-acl.42 | Natural Language Processing (NLP) models have gained great success on clean texts, but they are known to be vulnerable to adversarial examples typically crafted by synonym substitutions. In this paper, we target to solve this problem and find that word embedding is important to the certified robustness of NLP models. G... | # Robustness-Aware Word Embedding Improves Certified Robustness to Adversarial Word Substitutions Yibin Wang1∗, Yichen Yang1∗, **Di He**2 and **Kun He**1† 1School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China {yibinwang, yangyc, brooklet60}@hust.edu.cn 2School of Intell... |
| liu-etal-2023-exploring | Exploring the Compositional Generalization in Context Dependent Text-to-SQL Parsing | https://aclanthology.org/2023.findings-acl.43 | In the context-dependent Text-to-SQL task, the generated SQL statements are refined iteratively based on the user input utterance from each interaction. The input text from each interaction can be viewed as component modifications to the previous SQL statements, which could be further extracted as the modification patt... | # Exploring the Compositional Generalization in Context Dependent Text-to-SQL Parsing CQR-SQL: Conversational Question Reformulation Enhanced Context-Dependent Text-to-SQL Parsers Aiwei Liu∗, Wei Liu∗, Xuming Hu, Shu'ang Li,... |
| murzaku-etal-2023-towards | Towards Generative Event Factuality Prediction | https://aclanthology.org/2023.findings-acl.44 | We present a novel end-to-end generative task and system for predicting event factuality holders, targets, and their associated factuality values. We perform the first experiments using all sources and targets of factuality statements from the FactBank corpus. We perform multi-task learning with other tasks and event-f... | ## Towards Generative Event Factuality Prediction John Murzaku♣✸△, Tyler Osborne♠, Amittai Aviram♠, **Owen Rambow**♣✸✷ △ Department of Computer Science ✷ Department of Linguistics ✸ Institute for Advanced Computational Science ♣ Stony Brook University, Stony Brook, NY, USA ♠ Department of Computer Science, Boston Unive... |
| huang-etal-2023-language | Can Language Models Be Specific? How? | https://aclanthology.org/2023.findings-acl.45 | "He is a person", "Paris is located on the earth". Both statements are correct but meaningless - due to lack of specificity. In this paper, we propose to measure how specific the language of pre-trained language models (PLMs) is. To achieve this, we introduce a novel approach to build a benchmark for specif... | # Can Language Models Be Specific? How? Jie Huang1, Kevin Chen-Chuan Chang1, Jinjun Xiong2, **Wen-mei Hwu**1,3 1University of Illinois at Urbana-Champaign, USA 2University at Buffalo, USA 3NVIDIA, USA {jeffhj, kcchang, w-hwu}@illinois.edu jinjun@buffalo.edu ## Abstract "*He is a person*", "Paris is located on the earth... |
| li-etal-2023-web | The Web Can Be Your Oyster for Improving Language Models | https://aclanthology.org/2023.findings-acl.46 | Pretrained language models (PLMs) encode a large amount of world knowledge. However, as such knowledge is frozen at the time of model training, the models become static and limited by the training data at that time. In order to further improve the capacity of PLMs for knowledge-intensive tasks, we consider augmenting P... | ## The Web Can Be Your Oyster for Improving Language Models Junyi Li1,3,5, Tianyi Tang1, **Wayne Xin Zhao**1,5∗, **Jingyuan Wang**4, Jian-Yun Nie3 and **Ji-Rong Wen**1,2,5 1Gaoling School of Artificial Intelligence, Renmin University of China 2School of Information, Renmin University of China 3DIRO, Université de Mont... |
| kim-komachi-2023-enhancing | Enhancing Few-shot Cross-lingual Transfer with Target Language Peculiar Examples | https://aclanthology.org/2023.findings-acl.47 | Few-shot cross-lingual transfer, fine-tuning Multilingual Masked Language Model (MMLM) with source language labeled data and a small amount of target language labeled data, provides excellent performance in the target language. However, if no labeled data in the target language are available, they need to be created th... | # Enhancing Few-Shot Cross-Lingual Transfer with Target Language Peculiar Examples Hwichan Kim and **Mamoru Komachi**∗ Tokyo Metropolitan University, 6-6 Asahigaoka, Hino, Tokyo 191-0065, Japan kim-hwichan@ed.tmu.ac.jp ## Abstract Few-shot cross-lingual transfer, fine-tuning Multilin... |
| winata-etal-2023-overcoming | Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning | https://aclanthology.org/2023.findings-acl.48 | Real-life multilingual systems should be able to efficiently incorporate new languages as data distributions fed to the system evolve and shift over time. To do this, systems need to handle the issue of catastrophic forgetting, where the model performance drops for languages or tasks seen further in its past. In this p... | # Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning Genta Indra Winata1∗, Lingjue Xie1∗, Karthik Radhakrishnan1∗, **Shijie Wu**1∗, Xisen Jin2†, Pengxiang Cheng1, Mayank Kulkarni3†, **Daniel Preoţiuc-Pietro**1 1Bloomberg 2University of Southern California 3Amazon Alexa AI {gwinata,lxie91,... |
| sun-etal-2023-unifine | UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding | https://aclanthology.org/2023.findings-acl.49 | Vision-language tasks, such as VQA, SNLI-VE, and VCR are challenging because they require the model's reasoning ability to understand the semantics of the visual world and natural language. Supervised methods working for vision-language tasks have been well-studied. However, solving these tasks in a zero-shot setting... | # UniFine: A Unified and Fine-Grained Approach for Zero-Shot Vision-Language Understanding Rui Sun1∗, Zhecan Wang1∗, Haoxuan You1∗, Noel Codella2, Kai-Wei Chang3, **Shih-Fu Chang**1 1 Columbia University 2 Microsoft Research 3 University of California, Los Angeles {rs4110, zw2627, hy2612, sc250}@columbia.edu ncodel... |
| zhang-etal-2023-aligning | Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors | https://aclanthology.org/2023.findings-acl.50 | Recent work has shown that fine-tuning large language models (LLMs) on large-scale instruction-following datasets substantially improves their performance on a wide range of NLP tasks, especially in the zero-shot setting. However, even advanced instruction-tuned LLMs still fail to outperform small LMs on relation extra... | # Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors Kai Zhang, Bernal Jiménez Gutiérrez, Yu Su The Ohio State University {zhang.13253, jimenezgutierrez.1, su.809}@osu.edu ## Abstract Recent work has shown that fine-tuning large language models (LLMs) on large-scale instruction-f... |
| held-etal-2023-tada | TADA: Task Agnostic Dialect Adapters for English | https://aclanthology.org/2023.findings-acl.51 | Large Language Models, the dominant starting point for Natural Language Processing (NLP) applications, fail at a higher rate for speakers of English dialects other than Standard American English (SAE). Prior work addresses this using task specific data or synthetic data augmentation, both of which require intervention ... | # TADA: Task-Agnostic Dialect Adapters for English William Held, Caleb Ziems, **Diyi Yang** Georgia Institute of Technology, Stanford University wheld3@gatech.edu ## Abstract Large Language Models, the dominant starting point for Natural Language Processing (NLP) applications, fail at a higher rate for speakers of Eng... |
| li-etal-2023-generative | Generative Zero-Shot Prompt Learning for Cross-Domain Slot Filling with Inverse Prompting | https://aclanthology.org/2023.findings-acl.52 | Zero-shot cross-domain slot filling aims to transfer knowledge from the labeled source domain to the unlabeled target domain. Existing models either encode slot descriptions and examples or design handcrafted question templates using heuristic rules, suffering from poor generalization capability or robustness. In this ... | # Generative Zero-Shot Prompt Learning for Cross-Domain Slot Filling with Inverse Prompting Xuefeng Li1∗, Liwen Wang1∗, **Guanting Dong**1∗, Keqing He2, Jinzheng Zhao3, Hao Lei1, Jiachi Liu1, **Weiran Xu**1 1Beijing University of Posts and Telecommunications, Beijing, China 2Meituan Group, Beijing, China 3School of Co... |
| gan-etal-2023-appraising | Re-appraising the Schema Linking for Text-to-SQL | https://aclanthology.org/2023.findings-acl.53 | Most text-to-SQL models, even though based on the same grammar decoder, generate the SQL structure first and then fill in the SQL slots with the correct schema items. This second step depends on schema linking: aligning the entity references in the question with the schema columns or tables. This is generally approache... | # Re-Appraising the Schema Linking for Text-to-SQL Yujian Gan1, Xinyun Chen2, **Matthew Purver**1,3 1Queen Mary University of London 2Google DeepMind 3Jožef Stefan Institute {y.gan,m.purver}@qmul.ac.uk xinyunchen@google.com ## Abstract Most text-to-SQL models, even though based on the same grammar decoder 1, generate ... |
| scire-etal-2023-echoes | Echoes from Alexandria: A Large Resource for Multilingual Book Summarization | https://aclanthology.org/2023.findings-acl.54 | In recent years, research in text summarization has mainly focused on the news domain, where texts are typically short and have strong layout features. The task of full-book summarization presents additional challenges which are hard to tackle with current resources, due to their limited size and availability in Englis... | # Echoes from Alexandria: A Large Resource for Multilingual Book Summarization Alessandro Scirè1,2, Simone Conia2, Simone Ciciliano3∗, **Roberto Navigli**2 1Babelscape, Italy (Sapienza NLP Group) 2Sapienza University of Rome 3Free University of Bozen 1scire@babelscape.com 2{first.lastname}@uniroma1.it 3sciciliano@unibz.it ##... |
| han-etal-2023-gradient | When Gradient Descent Meets Derivative-Free Optimization: A Match Made in Black-Box Scenario | https://aclanthology.org/2023.findings-acl.55 | Large pre-trained language models (PLMs) have garnered significant attention for their versatility and potential for solving a wide spectrum of natural language processing (NLP) tasks. However, the cost of running these PLMs may be prohibitive. Furthermore, PLMs may not be open-sourced due to commercial considerations ... | # When Gradient Descent Meets Derivative-Free Optimization: A Match Made in Black-Box Scenario Chengcheng Han♢∗, Liqing Cui♢∗, **Renyu Zhu**♢♠, Jianing Wang♢, Nuo Chen♢, Qiushi Sun♢♡, Xiang Li♢, **Ming Gao**♢♣† ♢School of Data Science and Engineering‡, East China Normal University ♠NetEase Fuxi AI Lab ♡Department of Mathema... |
| wu-etal-2023-align | Align-then-Enhance: Multilingual Entailment Graph Enhancement with Soft Predicate Alignment | https://aclanthology.org/2023.findings-acl.56 | Entailment graphs (EGs) with predicates as nodes and entailment relations as edges are typically incomplete, while EGs in different languages are often complementary to each other. In this paper, we propose a new task, multilingual entailment graph enhancement, which aims to utilize the entailment information from one ... | # Align-Then-Enhance: Multilingual Entailment Graph Enhancement with Soft Predicate Alignment Yuting Wu1, Yutong Hu2,3, **Yansong Feng**2,3∗, Tianyi Li4, Mark Steedman4, **Dongyan Zhao**2,3 1School of Software Engineering, Beijing Jiaotong University, China 2Wangxuan Institute of Computer Technology, Peking University... |
| ding-etal-2023-shot | Few-shot Classification with Hypersphere Modeling of Prototypes | https://aclanthology.org/2023.findings-acl.57 | Metric-based meta-learning is one of the de facto standards in few-shot learning. It composes of representation learning and metrics calculation designs. Previous works construct class representations in different ways, varying from mean output embedding to covariance and distributions. However, using embeddings in spa... | ## Few-Shot Classification with Hypersphere Modeling of Prototypes Ning Ding1,2∗, Yulin Chen2∗, Ganqu Cui1, **Xiaobin Wang**3, Hai-Tao Zheng2,4†, **Zhiyuan Liu**1,5†, **Pengjun Xie**3 1Department of Computer Science and Technology, Tsinghua University 2Shenzhen International Graduate School, Tsinghua University, 3Ali... |
| liu-etal-2023-structured | Structured Mean-Field Variational Inference for Higher-Order Span-Based Semantic Role Labeling | https://aclanthology.org/2023.findings-acl.58 | In this work, we enhance higher-order graph-based approaches for span-based semantic role labeling (SRL) by means of structured modeling. To decrease the complexity of higher-order modeling, we decompose the edge from predicate word to argument span into three different edges, predicate-to-head (P2H), predicate-to-tail... | # Structured Mean-Field Variational Inference for Higher-Order Span-Based Semantic Role Labeling Wei Liu, Songlin Yang, Kewei Tu∗ School of Information Science and Technology, ShanghaiTech University; Shanghai Engineering Research Center of Intelligent Vision and Imaging {liuwei4, yangsl, tukw}@shanghaitech.edu.cn ## A... |
| guo-etal-2023-aqe | AQE: Argument Quadruplet Extraction via a Quad-Tagging Augmented Generative Approach | https://aclanthology.org/2023.findings-acl.59 | Argument mining involves multiple sub-tasks that automatically identify argumentative elements, such as claim detection, evidence extraction, stance classification, etc. However, each subtask alone is insufficient for a thorough understanding of the argumentative structure and reasoning process. To learn a complete vie... | ## AQE: Argument Quadruplet Extraction via a Quad-Tagging Augmented Generative Approach Jia Guo∗†1,2, Liying Cheng∗1, Wenxuan Zhang1, Stanley Kok2, Xin Li1, **Lidong Bing**1 1DAMO Academy, Alibaba Group 2School of Computing, National University of Singapore guojia@u.nus.edu, skok@comp.nus.edu.sg {liying.cheng, saike.zwx,... |
| chiesurin-etal-2023-dangers | The Dangers of trusting Stochastic Parrots: Faithfulness and Trust in Open-domain Conversational Question Answering | https://aclanthology.org/2023.findings-acl.60 | Large language models are known to produce output which sounds fluent and convincing, but is also often wrong, e.g. "unfaithful" with respect to a rationale as retrieved from a knowledge base. In this paper, we show that task-based systems which exhibit certain advanced linguistic dialog behaviors, such as lexica... | # The Dangers of Trusting Stochastic Parrots: Faithfulness and Trust in Open-Domain Conversational Question Answering Sabrina Chiesurin*, Dimitris Dimakopoulos*, Marco Antonio Sobrevilla Cabezudo, Arash Eshghi, Ioannis Papaioannou, Verena Rieser†, **Ioannis Konstas** Alana AI hello@alanaai.com ## Abstract Large language m... |
| cho-etal-2023-discrete | Discrete Prompt Optimization via Constrained Generation for Zero-shot Re-ranker | https://aclanthology.org/2023.findings-acl.61 | Re-rankers, which order retrieved documents with respect to the relevance score on the given query, have gained attention for the information retrieval (IR) task. Rather than fine-tuning the pre-trained language model (PLM), the large-scale language model (LLM) is utilized as a zero-shot re-ranker with excellent result... | # Discrete Prompt Optimization via Constrained Generation for Zero-Shot Re-Ranker Sukmin Cho, Soyeong Jeong, Jeongyeon Seo, Jong C. Park∗ School of Computing, Korea Advanced Institute of Science and Technology {nelllpic,starsuzi,yena.seo,jongpark}@kaist.ac.kr ## Abstract Re-rankers, which order retrieved documents with ... |
| misra-etal-2023-triggering | Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks | https://aclanthology.org/2023.findings-acl.62 | Despite readily memorizing world knowledge about entities, pre-trained language models (LMs) struggle to compose together two or more facts to perform multi-hop reasoning in question-answering tasks. In this work, we propose techniques that improve upon this limitation by relying on random-walks over structured knowled... | # Triggering Multi-Hop Reasoning for Question Answering in Language Models Using Soft Prompts and Random Walks Kanishka Misra, Purdue University, kmisra@purdue.edu; Cicero Nogueira dos Santos, Google Research, cicerons@google.com; Siamak Shakeri, Google DeepMind, siamaks@google.com ## Abstract Despite readily memorizing wo... |
| wang-etal-2023-multimedia | Multimedia Generative Script Learning for Task Planning | https://aclanthology.org/2023.findings-acl.63 | Goal-oriented generative script learning aims to generate subsequent steps to reach a particular goal, which is an essential task to assist robots or humans in performing stereotypical activities. An important aspect of this process is the ability to capture historical states visually, which provides detailed informati... | # Multimedia Generative Script Learning for Task Planning Qingyun Wang1, Manling Li1, Hou Pong Chan2, **Lifu Huang**3, Julia Hockenmaier1, Girish Chowdhary1, **Heng Ji**1 1 University of Illinois at Urbana-Champaign 2 University of Macau 3 Virginia Tech 1{qingyun4,manling2,juliahmr,girishc,hengji}@illinois.edu 2hpchan... |
| clarke-etal-2023-label | Label Agnostic Pre-training for Zero-shot Text Classification | https://aclanthology.org/2023.findings-acl.64 | Conventional approaches to text classification typically assume the existence of a fixed set of predefined labels to which a given text can be classified. However, in real-world applications, there exists an infinite label space for describing a given text. In addition, depending on the aspect (sentiment, topic, etc.) ... | # Label Agnostic Pre-Training for Zero-Shot Text Classification Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang, Jason Mars Computer Science & Engineering, University of Michigan, Ann Arbor, MI {csclarke, stefanhg, ypkang, manowar, lingjia, profmars}@umich.edu ## Abstract Conventional approac... |
| zheng-etal-2023-click | Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning | https://aclanthology.org/2023.findings-acl.65 | It has always been an important yet challenging problem to control language models to avoid generating texts with undesirable attributes, such as toxic language and unnatural repetition. We introduce Click for controllable text generation, which needs no modification to the model architecture and facilitates out-of-the-b... | ## Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning Chujie Zheng, Pei Ke, Zheng Zhang, Minlie Huang∗ The CoAI Group, DCST, Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Beijing National Research Center for Information Science and Technolo... |
| song-etal-2023-improving | Improving Embedding-based Unsupervised Keyphrase Extraction by Incorporating Structural Information | https://aclanthology.org/2023.findings-acl.66 | Keyphrase extraction aims to extract a set of phrases with the central idea of the source document. In a structured document, there are certain locations (e.g., the title or the first sentence) where a keyphrase is most likely to appear. However, when extracting keyphrases from the document, most existing embedding-bas... | # Improving Embedding-Based Unsupervised Keyphrase Extraction by Incorporating Structural Information Mingyang Song, Huafeng Liu∗, **Yi Feng, Liping Jing**∗ Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, China mingyang.song@bjtu.edu.cn ## Abstract Keyphrase extraction aims ... |
| huang-chang-2023-towards | Towards Reasoning in Large Language Models: A Survey | https://aclanthology.org/2023.findings-acl.67 | Reasoning is a fundamental aspect of human intelligence that plays a crucial role in activities such as problem solving, decision making, and critical thinking. In recent years, large language models (LLMs) have made significant progress in natural language processing, and there is observation that these models may exh... | # Towards Reasoning in Large Language Models: A Survey Jie Huang, Kevin Chen-Chuan Chang Department of Computer Science, University of Illinois at Urbana-Champaign {jeffhj, kcchang}@illinois.edu ## Abstract Reasoning is a fundamental aspect of human intelligence that plays a crucial role in activities such as problem... |
| tahri-etal-2023-transitioning | Transitioning from benchmarks to a real-world case of information-seeking in Scientific Publications | https://aclanthology.org/2023.findings-acl.68 | Although recent years have been marked by incredible advances in the whole development process of NLP systems, there are still blind spots in characterizing what is still hampering real-world adoption of models in knowledge-intensive settings. In this paper, we illustrate through a real-world zero-shot text search case... | # Transitioning from Benchmarks to a Real-World Case of Information-Seeking in Scientific Publications Chyrine Tahri♣,♢, Aurore Bochnakian♢, Patrick Haouat♢, **Xavier Tannier**♣ ♣ Sorbonne Université, Inserm, Université Sorbonne Paris-Nord, LIMICS, Paris, France ♢ ERDYN, Paris, France {chyrine.tahri, xavier.tannier}@... |
| qin-etal-2023-cliptext | CLIPText: A New Paradigm for Zero-shot Text Classification | https://aclanthology.org/2023.findings-acl.69 | While CLIP models are useful for zero-shot vision-and-language (VL) tasks or computer vision tasks, little attention has been paid to the application of CLIP for language tasks. Intuitively, CLIP model have a rich representation pre-trained with natural language supervision, in which we argue that it is useful for lang... | ## CLIPText: A New Paradigm for Zero-Shot Text Classification Libo Qin1, Weiyun Wang2, Qiguang Chen3, **Wanxiang Che**3 1 School of Computer Science and Engineering, Central South University, China 2 OpenGVLab, Shanghai AI Laboratory, China 3 Research Center for Social Computing and Information Retrieval, Harbin ... |
| wang-etal-2023-rethinking | Rethinking Dictionaries and Glyphs for Chinese Language Pre-training | https://aclanthology.org/2023.findings-acl.70 | We introduce CDBert, a new learning paradigm that enhances the semantics understanding ability of the Chinese PLMs with dictionary knowledge and structure of Chinese characters. We name the two core modules of CDBert as Shuowen and Jiezi, where Shuowen refers to the process of retrieving the most appropriate meaning fr... | # Shuō Wén Jiě Zì: Rethinking Dictionaries and Glyphs for Chinese Language Pre-training Yuxuan Wang1,2,3, Jianghui Wang2, Dongyan Zhao1,2,3,4†, **Zilong Zheng**2,4† 1 Wangxuan Institute of Computer Technology, Peking University 2 Beijing Institute for General Artificial Intelligence (BIGAI) 3 Center for Data ... |
| su-etal-2023-one | One Embedder, Any Task: Instruction-Finetuned Text Embeddings | https://aclanthology.org/2023.findings-acl.71 | We introduce INSTRUCTOR, a new method for computing text embeddings given task instructions: every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). Unlike encoders from prior work that are more specialized, INSTRUCTOR is a single embedder that can generate ... | # One Embedder, Any Task: Instruction-Finetuned Text Embeddings Hongjin Su♠∗, Weijia Shi♣∗, Jungo Kasai♣, Yizhong Wang♣, **Yushi Hu**♣, Mari Ostendorf♣, Wen-tau Yih♢, Noah A. Smith♣♡, Luke Zettlemoyer♣♢, **Tao Yu**♠ ♠The University of Hong Kong ♣University of Washington ♢Meta AI ♡Allen Institute for AI {hjsu,tyu}@cs.hku.hk, {y... |
shimizu-etal-2023-towards | Towards Speech Dialogue Translation Mediating Speakers of Different Languages | https://aclanthology.org/2023.findings-acl.72 | We present a new task, speech dialogue translation mediating speakers of different languages. We construct the SpeechBSD dataset for the task and conduct baseline experiments. Furthermore, we consider context to be an important aspect that needs to be addressed in this task and propose two ways of utilizing context, na... | # Towards Speech Dialogue Translation Mediating Speakers Of Different Languages
Shuichiro Shimizu1 Chenhui Chu1 Sheng Li2 **Sadao Kurohashi**1,3 1Kyoto University, Japan 2National Institute of Information and Communications Technology, Japan 3National Institute of Informatics, Japan
{sshimizu,chu,kuro}@nlp.ist.i.kyoto... |
bhardwaj-etal-2023-adaptation | Adaptation Approaches for Nearest Neighbor Language Models | https://aclanthology.org/2023.findings-acl.73 | Semi-parametric Nearest Neighbor Language Models (kNN-LMs) have produced impressive gains over purely parametric LMs, by leveraging large-scale neighborhood retrieval over external memory datastores. However, there has been little investigation into adapting such models for new domains. This work attempts to fill that ... | # Adaptation Approaches For Nearest Neighbor Language Models
Rishabh Bhardwaj1∗ **George Polovets**2 Monica Sunkara2
1Singapore University of Technology and Design, Singapore 2AWS AI Labs
rishabhbhardwaj15@gmail.com {polovg, sunkaral}@amazon.com
## Abstract

Semi-parametric Nearest Ne... |
anschutz-etal-2023-language | Language Models for {G}erman Text Simplification: Overcoming Parallel Data Scarcity through Style-specific Pre-training | https://aclanthology.org/2023.findings-acl.74 | Automatic text simplification systems help to reduce textual information barriers on the internet. However, for languages other than English, only few parallel data to train these systems exists. We propose a two-step approach to overcome this data scarcity issue. First, we fine-tuned language models on a corpus of Ger... | # Language Models For German Text Simplification: Overcoming Parallel Data Scarcity Through Style-Specific Pre-Training
Miriam Anschütz, Joshua Oehms, Thomas Wimmer, Bartłomiej Jezierski and **Georg Groh**
School for Computation, Information and Technology Technical University of Munich, Germany
{miriam.anschuetz, jos... |
kim-etal-2023-client | Client-Customized Adaptation for Parameter-Efficient Federated Learning | https://aclanthology.org/2023.findings-acl.75 | Despite the versatility of pre-trained language models (PLMs) across domains, their large memory footprints pose significant challenges in federated learning (FL), where the training model has to be distributed between a server and clients. One potential solution to bypass such constraints might be the use of parameter... | # Client-Customized Adaptation For Parameter-Efficient Federated Learning
Yeachan Kim1∗, Junho Kim1∗, Wing-Lam Mok1, Jun-Hyung Park2**, SangKeun Lee**1,3
1Department of Artificial Intelligence, Korea University, Seoul, South Korea 2BK21 FOUR R&E Center for Artificial Intelligence, Korea University, Seoul, South Kore... |
yu-etal-2023-folkscope | {F}olk{S}cope: Intention Knowledge Graph Construction for {E}-commerce Commonsense Discovery | https://aclanthology.org/2023.findings-acl.76 | Understanding users{'} intentions in e-commerce platforms requires commonsense knowledge. In this paper, we present FolkScope, an intention knowledge graph construction framework, to reveal the structure of humans{'} minds about purchasing items. As commonsense knowledge is usually ineffable and not expressed explicitl... | # FolkScope: Intention Knowledge Graph Construction For E-Commerce Commonsense Discovery
Changlong Yu1∗, Weiqi Wang1, Xin Liu1∗, Jiaxin Bai1∗**, Yangqiu Song**1†
Zheng Li2, Yifan Gao2, Tianyu Cao2, **Bing Yin**2 1The Hong Kong University of Science and Technology, Hong Kong SAR, China 2Amazon.com Inc, Palo Alto, U... |
liu-jaidka-2023-psyam | {I} am {P}sy{AM}: Modeling Happiness with Cognitive Appraisal Dimensions | https://aclanthology.org/2023.findings-acl.77 | This paper proposes and evaluates PsyAM (\url{https://anonymous.4open.science/r/BERT-PsyAM-10B9}), a framework that incorporates adaptor modules in a sequential multi-task learning setup to generate high-dimensional feature representations of hedonic well-being (momentary happiness) in terms of its psychological underp... | # I Am PsyAM: Modeling Happiness With Cognitive Appraisal Dimensions
Xuan Liu, Electrical Engineering and Computer Sciences, University of California Berkeley, USA, lxstephenlaw@gmail.com
Kokil Jaidka, Communications and New Media, National University of Singapore, Singapore, jaidka@nus.edu.sg
## Abstract
This paper propose... |
qixiang-etal-2023-value | Value type: the bridge to a better {DST} model | https://aclanthology.org/2023.findings-acl.78 | Value type of the slots can provide lots of useful information for DST tasks. However, it has been ignored in most previous works. In this paper, we propose a new framework for DST task based on these value types. Firstly, we extract the type of token from each turn. Specifically, we divide the slots in the dataset int... | # Value Type: The Bridge To A Better Dst Model
Qixiang Gao1∗**, Mingyang Sun**1∗
Yutao Mou1, Chen Zeng1**, Weiran Xu**1∗
1Beijing University of Posts and Telecommunications, Beijing, China
{gqx,mysun}@bupt.edu.cn
{myt,chenzeng,xuweiran}@bupt.edu.cn
## Abstract
Value type of the slots can provide lots of useful infor... |
li-etal-2023-hypothetical | Hypothetical Training for Robust Machine Reading Comprehension of Tabular Context | https://aclanthology.org/2023.findings-acl.79 | Machine Reading Comprehension (MRC) models easily learn spurious correlations from complex contexts such as tabular data. Counterfactual training{---}using the factual and counterfactual data by augmentation{---}has become a promising solution. However, it is costly to construct faithful counterfactual examples because... | # Hypothetical Training For Robust Machine Reading Comprehension Of Tabular Context
Moxin Li1**, Wenjie Wang**1∗, Fuli Feng2,3**, Hanwang Zhang**4, Qifan Wang5**, Tat-Seng Chua**1
1National University of Singapore, 2University of Science and Technology of China 3Institute of Dataspace, Hefei, Anhui, China, 4Nanyang ... |
kabir-etal-2023-banglabook | {B}angla{B}ook: A Large-scale {B}angla Dataset for Sentiment Analysis from Book Reviews | https://aclanthology.org/2023.findings-acl.80 | The analysis of consumer sentiment, as expressed through reviews, can provide a wealth of insight regarding the quality of a product. While the study of sentiment analysis has been widely explored in many popular languages, relatively less attention has been given to the Bangla language, mostly due to a lack of relevan... | # BanglaBook: A Large-Scale Bangla Dataset For Sentiment Analysis From Book Reviews
Mohsinul Kabir∗, Obayed Bin Mahfuz∗**, Syed Rifat Raiyan**∗,
Hasan Mahmud, Md Kamrul Hasan Systems and Software Lab (SSL)
Department of Computer Science and Engineering Islamic University of Technology, Dhaka, Bangladesh
{mohsinulk... |
haduong-etal-2023-risks | Risks and {NLP} Design: A Case Study on Procedural Document {QA} | https://aclanthology.org/2023.findings-acl.81 | As NLP systems are increasingly deployed at scale, concerns about their potential negative impacts have attracted the attention of the research community, yet discussions of risk have mostly been at an abstract level and focused on generic AI or NLP applications. We argue that clearer assessments of risks and harms to ... | # Risks And Nlp Design: A Case Study On Procedural Document Qa
Nikita Haduong1 Alice Gao1 **Noah A. Smith**1,2 1Paul G. Allen School of Computer Science & Engineering, University of Washington 2Allen Institute for Artificial Intelligence
{qu,atgao,nasmith}@cs.washington.edu
## Abstract
As NLP systems are increasingl... |
hong-etal-2023-diminishing | The Diminishing Returns of Masked Language Models to Science | https://aclanthology.org/2023.findings-acl.82 | Transformer-based masked language models such as BERT, trained on general corpora, have shown impressive performance on downstream tasks. It has also been demonstrated that the downstream task performance of such models can be improved by pretraining larger models for longer on more data. In this work, we empirically e... | # The Diminishing Returns Of Masked Language Models To Science
Zhi Hong∗, Aswathy Ajith∗, J. Gregory Pauloski∗**, Eamon Duede**†, Kyle Chard∗‡**, Ian Foster**∗‡
∗Department of Computer Science, University of Chicago, Chicago, IL 60637, USA
†Department of Philosophy and Committee on Conceptual and Historical Studie... |
zhang-etal-2023-causal-matching | Causal Matching with Text Embeddings: A Case Study in Estimating the Causal Effects of Peer Review Policies | https://aclanthology.org/2023.findings-acl.83 | A promising approach to estimate the causal effects of peer review policies is to analyze data from publication venues that shift policies from single-blind to double-blind from one year to the next. However, in these settings the content of the manuscript is a confounding variable{---}each year has a different distrib... | # Causal Matching With Text Embeddings: A Case Study In Estimating The Causal Effects Of Peer Review Policies
Raymond Z. Zhang1 Neha Nayak Kennard2 **Daniel Scott Smith**1 Daniel A. McFarland1 Andrew McCallum2 **Katherine A. Keith**3 1Stanford Graduate School of Education 2University of Massachusetts Amherst 3Williams... |
niu-etal-2023-learning | Learning to Generalize for Cross-domain {QA} | https://aclanthology.org/2023.findings-acl.84 | There have been growing concerns regarding the out-of-domain generalization ability of natural language processing (NLP) models, particularly in question-answering (QA) tasks. Current synthesized data augmentation methods for QA are hampered by increased training costs. To address this issue, we propose a novel approac... | # Learning To Generalize For Cross-Domain Qa
Yingjie Niu∗1,2, **Linyi Yang**∗3,4, **Ruihai Dong**1,2, **Yue Zhang**3,4
1 School of Computer Science, University College Dublin 2 SFI Centre for Research Training in Machine Learning 3Institute of Advanced Technology, Westlake Institute for Advanced Study 4 School of E... |
zhou-etal-2023-enhanced | Enhanced Chart Understanding via Visual Language Pre-training on Plot Table Pairs | https://aclanthology.org/2023.findings-acl.85 | Building cross-model intelligence that can understand charts and communicate the salient information hidden behind them is an appealing challenge in the vision and language (V+L) community. The capability to uncover the underlined table data of chart figures is a critical key to automatic chart understanding. We introd... | # Enhanced Chart Understanding In Vision And Language Task Via Cross-Modal Pre-Training On Plot Table Pairs
Mingyang Zhou1, Yi R. Fung2, Long Chen1**, Christopher Thomas**3, Heng Ji2**, Shih-Fu Chang**1 1Columbia University 2University of Illinois at Urbana-Champaign 3Virginia Tech
{mz2974, cl3695, sc250}@columbia.edu... |
hu-etal-2023-importance | Importance of Synthesizing High-quality Data for Text-to-{SQL} Parsing | https://aclanthology.org/2023.findings-acl.86 | There has been increasing interest in synthesizing data to improve downstream text-to-SQL tasks. In this paper, we examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data. We observed ... | # Importance of Synthesizing High-quality Data for Text-to-SQL Parsing
Yiqun Hu, Yiyun Zhao∗, Jiarong Jiang, Wuwei Lan, Henry Zhu, Anuj Chauhan, Alexander Li, Lin Pan, Jun Wang, Chung-Wei Hang, Sheng Zhang, Jiang Guo, Marvin Dong, Joe Lilien, Patrick Ng, Zhiguo Wang, Vittorio Castelli, Bing Xiang
AWS AI Labs
∗yiyunzhao@... |
li-etal-2023-exploring | Exploring Schema Generalizability of Text-to-{SQL} | https://aclanthology.org/2023.findings-acl.87 | Exploring the generalizability of a text-to-SQL parser is essential for a system to automatically adapt the real-world databases. Previous investigation works mostly focus on lexical diversity, including the influence of the synonym and perturbations in both natural language questions and databases. However, the struct... | # Exploring Schema Generalizability Of Text-To-Sql
Jieyu Li1, Lu Chen1∗, Ruisheng Cao1, Su Zhu2, Hongshen Xu1**, Zhi Chen**1 Hanchong Zhang1 **and Kai Yu**1∗
1X-LANCE Lab, Department of Computer Science and Engineering MoE Key Lab of Artificial Intelligence, AI Institute Shanghai Jiao Tong University, Shanghai, China ... |
li-etal-2023-enhancing-cross | Enhancing Cross-lingual Natural Language Inference by Soft Prompting with Multilingual Verbalizer | https://aclanthology.org/2023.findings-acl.88 | Cross-lingual natural language inference is a fundamental problem in cross-lingual language understanding. Many recent works have used prompt learning to address the lack of annotated parallel corpora in XNLI.However, these methods adopt discrete prompting by simply translating the templates to the target language and ... | # Enhancing Cross-Lingual Natural Language Inference By Soft Prompting With Multilingual Verbalizer
Shuang Li1, Xuming Hu1, Aiwei Liu1, Yawen Yang1**, Fukun Ma**1, Philip S. Yu2, **Lijie Wen**1∗
1Tsinghua University, 2University of Illinois Chicago 1{lisa18,hxm19,liuaw20,yyw19,mfk22}@mails.tsinghua.edu.cn 2psyu@uic.ed... |
xiong-etal-2023-confidence | A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition | https://aclanthology.org/2023.findings-acl.89 | Existing models for named entity recognition (NER) are mainly based on large-scale labeled datasets, which always obtain using crowdsourcing. However, it is hard to obtain a unified and correct label via majority voting from multiple annotators for NER due to the large labeling space and complexity of this task. To add... | # A Confidence-Based Partial Label Learning Model For Crowd-Annotated Named Entity Recognition
Limao Xiong1, Jie Zhou1∗, Qunxi Zhu2, Xiao Wang1, Yuanbin Wu3**, Qi Zhang**1, Tao Gui4, Xuanjing Huang1, Jin Ma5**, Ying Shan**5 1 School of Computer Science, Fudan University 2 Research Institute of Intelligent Complex Sys... |
xu-etal-2023-towards-zero | Towards Zero-Shot Persona Dialogue Generation with In-Context Learning | https://aclanthology.org/2023.findings-acl.90 | Much work has been done to improve persona consistency by finetuning a pretrained dialogue model on high-quality human-annoated persona datasets. However, these methods still face the challenges of high cost and poor scalability. To this end, we propose a simple-yet-effective approach to significantly improve zero-shot... | # Towards Zero-Shot Persona Dialogue Generation With In-Context Learning
Xinchao Xu, Zeyang Lei, Wenquan Wu, Zheng-Yu Niu, Hua Wu, Haifeng Wang Baidu Inc., Beijing, China
{xuxinchao, leizeyang, wuwenquan01, niuzhengyu, wu_hua, wanghaifeng}@baidu.com
## Abstract
Much work has been done to improve persona consistency ... |
zheng-etal-2023-grammar | Grammar-based Decoding for Improved Compositional Generalization in Semantic Parsing | https://aclanthology.org/2023.findings-acl.91 | Sequence-to-sequence (seq2seq) models have achieved great success in semantic parsing tasks, but they tend to struggle on out-of-distribution (OOD) data. Despite recent progress, robust semantic parsing on large-scale tasks with combined challenges from both compositional generalization and natural language variations ... | # Grammar-Based Decoding For Improved Compositional Generalization In Semantic Parsing
Jing Zheng and **Jyh-Herng Chow** and **Zhongnan Shen** and **Peng Xu**
Ant Technologies U.S., Inc
{jing.zheng, jyhherngchow, zhongnan.shen, peng.x}@antgroup.com
## Abstract
Sequence-to-sequence (seq2seq) models have achieved grea... |
lyu-etal-2023-exploiting | Exploiting Rich Textual User-Product Context for Improving Personalized Sentiment Analysis | https://aclanthology.org/2023.findings-acl.92 | User and product information associated with a review is useful for sentiment polarity prediction. Typical approaches incorporating such information focus on modeling users and products as implicitly learned representation vectors. Most do not exploit the potential of historical reviews, or those that currently do requ... | # Exploiting Rich Textual User-Product Context For Improving Personalized Sentiment Analysis
Chenyang Lyu† Linyi Yang‡ Yue Zhang‡ Yvette Graham¶ **Jennifer Foster**†
† School of Computing, Dublin City University, Dublin, Ireland
‡ School of Engineering, Westlake University, China
¶ School of Computer Science and Stati... |
vazhentsev-etal-2023-efficient | Efficient Out-of-Domain Detection for Sequence to Sequence Models | https://aclanthology.org/2023.findings-acl.93 | Sequence-to-sequence (seq2seq) models based on the Transformer architecture have become a ubiquitous tool applicable not only to classical text generation tasks such as machine translation and summarization but also to any other task where an answer can be represented in a form of a finite text fragment (e.g., question... | # Efficient Out-Of-Domain Detection For Sequence To Sequence Models
Artem Vazhentsev1,2 ♢, Akim Tsvigun6,7 ♢, Roman Vashurin4 ♢**, Sergey Petrakov**2, Daniil Vasilev5, Maxim Panov4, Alexander Panchenko2,1**, and Artem Shelmanov**3 1AIRI, 2Skoltech, 3MBZUAI, 4TII, 5HSE University, 6AI Center NUST MISiS, 7Semrush
{vazhe... |
xiao-etal-2023-emotion | Emotion Cause Extraction on Social Media without Human Annotation | https://aclanthology.org/2023.findings-acl.94 | In social media, there is a vast amount of information pertaining to people{'}s emotions and the corresponding causes. The emotion cause extraction (ECE) from social media data is an important research area that has not been thoroughly explored due to the lack of fine-grained annotations. Early studies referred to eith... | # Emotion Cause Extraction On Social Media Without Human Annotation
Debin Xiao, Rui Xia∗, and Jianfei Yu
School of Computer Science and Engineering, Nanjing University of Science and Technology, China
{debinxiao, rxia, jfyu}@njust.edu.cn
## Abstract
In social media, there is a vast amount of information pertaining ... |
kim-etal-2023-pseudo | Pseudo Outlier Exposure for Out-of-Distribution Detection using Pretrained Transformers | https://aclanthology.org/2023.findings-acl.95 | For real-world language applications, detecting an out-of-distribution (OOD) sample is helpful to alert users or reject such unreliable samples. However, modern over-parameterized language models often produce overconfident predictions for both in-distribution (ID) and OOD samples. In particular, language models suffer... | # Pseudo Outlier Exposure For Out-Of-Distribution Detection Using Pretrained Transformers
Jaeyoung Kim∗ (Gachon University, kimjeyoung@gachon.ac.kr), Kyuheon Jung∗ (Pukyong National University, kkyuhun94@pukyong.ac.kr), Dongbin Na (VUNO, Inc., dongbin.na@vuno.co), Sion Jang (Alchera Inc., so.jang@alcherainc.com), Eunbin Park
P... |
zhang-liu-2023-adversarial | Adversarial Multi-task Learning for End-to-end Metaphor Detection | https://aclanthology.org/2023.findings-acl.96 | Metaphor detection (MD) suffers from limited training data. In this paper, we started with a linguistic rule called Metaphor Identification Procedure and then proposed a novel multi-task learning framework to transfer knowledge in basic sense discrimination (BSD) to MD. BSD is constructed from word sense disambiguation... | # Adversarial Multi-Task Learning For End-To-End Metaphor Detection
Shenglong Zhang Ying Liu ∗
Tsinghua University, Beijing, China, 100084 zsl18@mails.tsinghua.edu.cn yingliu@mail.tsinghua.edu.cn
## Abstract
Metaphor detection (MD) suffers from limited training data. In this paper, we started with a linguistic rule ... |
adebara-etal-2023-serengeti | {SERENGETI}: Massively Multilingual Language Models for {A}frica | https://aclanthology.org/2023.findings-acl.97 | Multilingual pretrained language models (mPLMs) acquire valuable, generalizable linguistic information during pretraining and have advanced the state of the art on task-specific finetuning. To date, only {\textasciitilde}31 out of {\textasciitilde}2,000 African languages are covered in existing language models. We amel... |
## SERENGETI: Massively Multilingual Language Models For Africa
Ife Adebara1,⋆ AbdelRahim Elmadany1,⋆ Muhammad Abdul-Mageed1,2 **Alcides Alcoba**1 1Deep Learning & Natural Language Processing Group, The University of British Columbia 2Department of Natural Language Processing & Department of Machine Learning, MBZUAI
... |
do-etal-2023-prompt | Prompt- and Trait Relation-aware Cross-prompt Essay Trait Scoring | https://aclanthology.org/2023.findings-acl.98 | Automated essay scoring (AES) aims to score essays written for a given prompt, which defines the writing topic. Most existing AES systems assume to grade essays of the same prompt as used in training and assign only a holistic score. However, such settings conflict with real-education situations; pre-graded essays for ... | # Prompt- And Trait Relation-Aware Cross-Prompt Essay Trait Scoring
Heejin Do⋆, Yunsu Kim⋆†**, Gary Geunbae Lee**⋆†
⋆Graduate School of AI, POSTECH
†Department of Computer Science and Engineering, POSTECH
{heejindo, yunsu.kim, gblee}@postech.ac.kr
## Abstract

Automated essay scoring ... |
zheng-etal-2023-augesc | {A}ug{ESC}: Dialogue Augmentation with Large Language Models for Emotional Support Conversation | https://aclanthology.org/2023.findings-acl.99 | Crowdsourced dialogue corpora are usually limited in scale and topic coverage due to the expensive cost of data curation. This would hinder the generalization of downstream dialogue models to open-domain topics. In this work, we leverage large language models for dialogue augmentation in the task of emotional support c... | # AugESC: Dialogue Augmentation With Large Language Models For Emotional Support Conversation
Chujie Zheng Sahand Sabour Jiaxin Wen Zheng Zhang Minlie Huang∗
The CoAI Group, DCST, Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Beijing National Research Center for Inform... |
ahmed-etal-2023-2 | $2*n$ is better than $n^2$: Decomposing Event Coreference Resolution into Two Tractable Problems | https://aclanthology.org/2023.findings-acl.100 | Event Coreference Resolution (ECR) is the task of linking mentions of the same event either within or across documents. Most mention pairs are not coreferent, yet many that are coreferent can be identified through simple techniques such as lemma matching of the event triggers or the sentences in which they appear. Exis... | # 2∗n Is Better Than n²: Decomposing Event Coreference Resolution Into Two Tractable Problems
Shafiuddin Rehan Ahmed1 Abhijnan Nath2 James H. Martin1 **Nikhil Krishnaswamy**2 1Department of Computer Science, University of Colorado, Boulder, CO, USA
{shah7567, james.martin}@colorado.edu 2Department of Comput... |