Dataset schema (column: type, observed range):

bibtex_url: string, length 41 to 53
acl_proceedings: string, length 38 to 50
bibtext: string, length 528 to 3.02k
abstract: string, length 17 to 2.35k
authors: list, length 1 to 44
title: string, length 18 to 190
id: string, length 7 to 19
arxiv_id: string, length 10
GitHub: list, length 1
paper_page: string, 528 distinct values
n_linked_authors: int64, -1 to 15
upvotes: int64, -1 to 77
num_comments: int64, -1 to 10
n_authors: int64, -1 to 52
Models: list, length 0 to 100
Datasets: list, length 0 to 15
Spaces: list, length 0 to 46
paper_page_exists_pre_conf: int64, 0 or 1
type: string, 2 distinct values (Poster / Oral)

Records follow, one paper per block, fields in schema order:
https://aclanthology.org/2023.emnlp-main.201.bib
https://aclanthology.org/2023.emnlp-main.201/
@inproceedings{goyal-etal-2023-else, title = "What Else Do {I} Need to Know? The Effect of Background Information on Users{'} Reliance on {QA} Systems", author = "Goyal, Navita and Briakou, Eleftheria and Liu, Amanda and Baumler, Connor and Bonial, Claire and Micher, Jeffrey ...
NLP systems have shown impressive performance at answering questions by retrieving relevant context. However, as models grow increasingly large, it is impossible and often undesirable to constrain models' knowledge or reasoning to only the retrieved context. This leads to a mismatch between the information that \tex...
[ "Goyal, Navita", "Briakou, Eleftheria", "Liu, Am", "a", "Baumler, Connor", "Bonial, Claire", "Micher, Jeffrey", "Voss, Clare", "Carpuat, Marine", "Daum{\\'e} III, Hal" ]
What Else Do I Need to Know? The Effect of Background Information on Users' Reliance on QA Systems
emnlp-main.201
2305.14331
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.202.bib
https://aclanthology.org/2023.emnlp-main.202/
@inproceedings{surikuchi-etal-2023-groovist, title = "{GROOV}i{ST}: A Metric for Grounding Objects in Visual Storytelling", author = "Surikuchi, Aditya and Pezzelle, Sandro and Fern{\'a}ndez, Raquel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Pro...
A proper evaluation of stories generated for a sequence of images (the task commonly referred to as visual storytelling) must consider multiple aspects, such as coherence, grammatical correctness, and visual grounding. In this work, we focus on evaluating the degree of grounding, that is, the extent to which a st...
[ "Surikuchi, Aditya", "Pezzelle, S", "ro", "Fern{\\'a}ndez, Raquel" ]
GROOViST: A Metric for Grounding Objects in Visual Storytelling
emnlp-main.202
2310.17770
[ "https://github.com/akskuchi/groovist" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.203.bib
https://aclanthology.org/2023.emnlp-main.203/
@inproceedings{zhang-etal-2023-vibe, title = "{VIBE}: Topic-Driven Temporal Adaptation for {T}witter Classification", author = "Zhang, Yuji and Li, Jing and Li, Wenjie", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference...
Language features evolve in real-world social media, causing text classification performance to deteriorate over time. To address this challenge, we study temporal adaptation, where models trained on past data are tested in the future. Most prior work focused on continued pretraining or knowledge upd...
[ "Zhang, Yuji", "Li, Jing", "Li, Wenjie" ]
VIBE: Topic-Driven Temporal Adaptation for Twitter Classification
emnlp-main.203
2310.10191
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.204.bib
https://aclanthology.org/2023.emnlp-main.204/
@inproceedings{sohn-etal-2023-tod, title = "{TOD}-Flow: Modeling the Structure of Task-Oriented Dialogues", author = "Sohn, Sungryull and Lyu, Yiwei and Liu, Anthony and Logeswaran, Lajanugen and Kim, Dong-Ki and Shim, Dongsub and Lee, Honglak", editor = "Bouamor, H...
Task-Oriented Dialogue (TOD) systems have become crucial components in interactive artificial intelligence applications. While recent advances have capitalized on pre-trained language models (PLMs), they exhibit limitations regarding transparency and controllability. To address these challenges, we propose a novel appr...
[ "Sohn, Sungryull", "Lyu, Yiwei", "Liu, Anthony", "Logeswaran, Lajanugen", "Kim, Dong-Ki", "Shim, Dongsub", "Lee, Honglak" ]
TOD-Flow: Modeling the Structure of Task-Oriented Dialogues
emnlp-main.204
2312.04668
[ "https://github.com/srsohn/tod-flow" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.205.bib
https://aclanthology.org/2023.emnlp-main.205/
@inproceedings{pan-etal-2023-topwords, title = "{T}op{WORDS}-Poetry: Simultaneous Text Segmentation and Word Discovery for Classical {C}hinese Poetry via {B}ayesian Inference", author = "Pan, Changzai and Li, Feiyue and Deng, Ke", editor = "Bouamor, Houda and Pino, Juan and Bali,...
As a precious cultural heritage of human beings, classical Chinese poetry has a unique writing style and often contains special words that rarely appear in general Chinese texts, posing critical challenges for natural language processing. Little effort has been made in the literature for processing texts from cla...
[ "Pan, Changzai", "Li, Feiyue", "Deng, Ke" ]
TopWORDS-Poetry: Simultaneous Text Segmentation and Word Discovery for Classical Chinese Poetry via Bayesian Inference
emnlp-main.205
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.206.bib
https://aclanthology.org/2023.emnlp-main.206/
@inproceedings{yao-etal-2023-knowledge, title = "Knowledge Rumination for Pre-trained Language Models", author = "Yao, Yunzhi and Wang, Peng and Mao, Shengyu and Tan, Chuanqi and Huang, Fei and Chen, Huajun and Zhang, Ningyu", editor = "Bouamor, Houda and Pin...
Previous studies have revealed that vanilla pre-trained language models (PLMs) lack the capacity to handle knowledge-intensive NLP tasks alone; thus, several works have attempted to integrate external knowledge into PLMs. However, despite the promising outcome, we empirically observe that PLMs may have already encoded ...
[ "Yao, Yunzhi", "Wang, Peng", "Mao, Shengyu", "Tan, Chuanqi", "Huang, Fei", "Chen, Huajun", "Zhang, Ningyu" ]
Knowledge Rumination for Pre-trained Language Models
emnlp-main.206
2305.08732
[ "https://github.com/zjunlp/knowledge-rumination" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.207.bib
https://aclanthology.org/2023.emnlp-main.207/
@inproceedings{wu-lu-2023-struct, title = "Struct-{XLM}: A Structure Discovery Multilingual Language Model for Enhancing Cross-lingual Transfer through Reinforcement Learning", author = "Wu, Linjuan and Lu, Weiming", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktit...
Cross-lingual transfer learning heavily relies on well-aligned cross-lingual representations. Syntactic structure is recognized as beneficial for cross-lingual transfer, but limited research utilizes it to align representations in multilingual pre-trained language models (PLMs). Additionally, existing methods r...
[ "Wu, Linjuan", "Lu, Weiming" ]
Struct-XLM: A Structure Discovery Multilingual Language Model for Enhancing Cross-lingual Transfer through Reinforcement Learning
emnlp-main.207
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.208.bib
https://aclanthology.org/2023.emnlp-main.208/
@inproceedings{huang-etal-2023-adasent, title = "{A}da{S}ent: Efficient Domain-Adapted Sentence Embeddings for Few-Shot Classification", author = "Huang, Yongxin and Wang, Kexin and Dutta, Sourav and Patel, Raj and Glava{\v{s}}, Goran and Gurevych, Iryna", editor = "Bouamo...
Recent work has found that few-shot sentence classification based on pre-trained Sentence Encoders (SEs) is efficient, robust, and effective. In this work, we investigate strategies for domain-specialization in the context of few-shot sentence classification with SEs. We first establish that unsupervised Domain-Adaptiv...
[ "Huang, Yongxin", "Wang, Kexin", "Dutta, Sourav", "Patel, Raj", "Glava{\\v{s}}, Goran", "Gurevych, Iryna" ]
AdaSent: Efficient Domain-Adapted Sentence Embeddings for Few-Shot Classification
emnlp-main.208
2311.00408
[ "https://github.com/ukplab/adasent" ]
https://huggingface.co/papers/2311.00408
1
0
0
6
[ "yoh/distilroberta-base-sept-adapter" ]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.209.bib
https://aclanthology.org/2023.emnlp-main.209/
@inproceedings{li-etal-2023-interview, title = "Interview Evaluation: A Novel Approach for Automatic Evaluation of Conversational Question Answering Models", author = "Li, Xibo and Zou, Bowei and Fan, Yifan and Li, Yanling and Aw, Ai Ti and Hong, Yu", editor = "Bouamor, Ho...
Conversational Question Answering (CQA) aims to provide natural language answers to users in information-seeking dialogues. Existing CQA benchmarks often evaluate models using pre-collected human-human conversations. However, replacing the model-predicted dialogue history with ground truth compromises the naturalness a...
[ "Li, Xibo", "Zou, Bowei", "Fan, Yifan", "Li, Yanling", "Aw, Ai Ti", "Hong, Yu" ]
Interview Evaluation: A Novel Approach for Automatic Evaluation of Conversational Question Answering Models
emnlp-main.209
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.210.bib
https://aclanthology.org/2023.emnlp-main.210/
@inproceedings{wilkens-etal-2023-tcfle, title = "{TCFLE}-8: a Corpus of Learner Written Productions for {F}rench as a Foreign Language and its Application to Automated Essay Scoring", author = "Wilkens, Rodrigo and Pintard, Alice and Alfter, David and Folny, Vincent and Fran{\c{c}}oi...
Automated Essay Scoring (AES) aims to automatically assess the quality of essays. Automation enables large-scale assessment and improves consistency, reliability, and standardization. Those characteristics are of particular relevance in the context of language certification exams. However, a major bottleneck in the...
[ "Wilkens, Rodrigo", "Pintard, Alice", "Alfter, David", "Folny, Vincent", "Fran{\\c{c}}ois, Thomas" ]
TCFLE-8: a Corpus of Learner Written Productions for French as a Foreign Language and its Application to Automated Essay Scoring
emnlp-main.210
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.211.bib
https://aclanthology.org/2023.emnlp-main.211/
@inproceedings{heineman-etal-2023-dancing, title = "Dancing Between Success and Failure: Edit-level Simplification Evaluation using {SALSA}", author = "Heineman, David and Dou, Yao and Maddela, Mounica and Xu, Wei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika"...
Large language models (e.g., GPT-4) are uniquely capable of producing highly rated text simplification, yet current human evaluation methods fail to provide a clear understanding of systems' specific strengths and weaknesses. To address this limitation, we introduce SALSA, an edit-based human annotation framework tha...
[ "Heineman, David", "Dou, Yao", "Maddela, Mounica", "Xu, Wei" ]
Dancing Between Success and Failure: Edit-level Simplification Evaluation using SALSA
emnlp-main.211
2305.14458
[ "" ]
https://huggingface.co/papers/2305.14458
0
0
0
4
[ "davidheineman/lens-salsa" ]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.212.bib
https://aclanthology.org/2023.emnlp-main.212/
@inproceedings{casola-etal-2023-confidence, title = "Confidence-based Ensembling of Perspective-aware Models", author = "Casola, Silvia and Lo, Soda Marem and Basile, Valerio and Frenda, Simona and Cignarella, Alessandra and Patti, Viviana and Bosco, Cristina", edit...
Research in the field of NLP has recently focused on the variability that people show in selecting labels when performing an annotation task. Exploiting disagreements in annotations has been shown to offer advantages for accurate modelling and fair evaluation. In this paper, we propose a strongly perspectivist model fo...
[ "Casola, Silvia", "Lo, Soda Marem", "Basile, Valerio", "Frenda, Simona", "Cignarella, Aless", "ra", "Patti, Viviana", "Bosco, Cristina" ]
Confidence-based Ensembling of Perspective-aware Models
emnlp-main.212
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
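The Casola et al. record above describes confidence-based ensembling of perspective-aware models. A minimal sketch of one such aggregation rule, assuming each model emits a label with a confidence score (the paper's actual rule is behind the truncation; names are hypothetical):

```python
def confidence_ensemble(predictions):
    """predictions: list of (label, confidence) pairs, one per
    perspective-aware model. Returns the most confident label."""
    label, _ = max(predictions, key=lambda pair: pair[1])
    return label

# Hypothetical usage: three perspective-specific classifiers disagree.
print(confidence_ensemble([("irony", 0.62), ("not-irony", 0.91), ("irony", 0.55)]))
# -> "not-irony"
```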
https://aclanthology.org/2023.emnlp-main.213.bib
https://aclanthology.org/2023.emnlp-main.213/
@inproceedings{wang-etal-2023-tovilag, title = "{T}o{V}i{L}a{G}: Your Visual-Language Generative Model is Also An Evildoer", author = "Wang, Xinpeng and Yi, Xiaoyuan and Jiang, Han and Zhou, Shanlin and Wei, Zhihua and Xie, Xing", editor = "Bouamor, Houda and Pino, ...
Recent large-scale Visual-Language Generative Models (VLGMs) have achieved unprecedented improvement in multimodal image/text generation. However, these models might also generate toxic content, e.g., offensive text and pornographic images, raising significant ethical risks. Despite exhaustive studies on toxic degenerat...
[ "Wang, Xinpeng", "Yi, Xiaoyuan", "Jiang, Han", "Zhou, Shanlin", "Wei, Zhihua", "Xie, Xing" ]
ToViLaG: Your Visual-Language Generative Model is Also An Evildoer
emnlp-main.213
2312.11523
[ "https://github.com/victorup/ToViLaG" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.214.bib
https://aclanthology.org/2023.emnlp-main.214/
@inproceedings{wan-etal-2023-gpt, title = "{GPT}-{RE}: In-context Learning for Relation Extraction using Large Language Models", author = "Wan, Zhen and Cheng, Fei and Mao, Zhuoyuan and Liu, Qianying and Song, Haiyue and Li, Jiwei and Kurohashi, Sadao", editor = "Bo...
In spite of the potential for ground-breaking achievements offered by large language models (LLMs) (e.g., GPT-3) via in-context learning (ICL), they still lag significantly behind fully-supervised baselines (e.g., fine-tuned BERT) in relation extraction (RE). This is due to the two major shortcomings of ICL for RE: (1)...
[ "Wan, Zhen", "Cheng, Fei", "Mao, Zhuoyuan", "Liu, Qianying", "Song, Haiyue", "Li, Jiwei", "Kurohashi, Sadao" ]
GPT-RE: In-context Learning for Relation Extraction using Large Language Models
emnlp-main.214
2305.02105
[ "https://github.com/yukinowan/gpt-re" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.215.bib
https://aclanthology.org/2023.emnlp-main.215/
@inproceedings{ch-wang-etal-2023-sociocultural, title = "Sociocultural Norm Similarities and Differences via Situational Alignment and Explainable Textual Entailment", author = "CH-Wang, Sky and Saakyan, Arkadiy and Li, Oliver and Yu, Zhou and Muresan, Smaranda", editor = "Bouamo...
Designing systems that can reason across cultures requires that they are grounded in the norms of the contexts in which they operate. However, current research on developing computational models of social norms has primarily focused on American society. Here, we propose a novel approach to discover and compare descript...
[ "CH-Wang, Sky", "Saakyan, Arkadiy", "Li, Oliver", "Yu, Zhou", "Muresan, Smar", "a" ]
Sociocultural Norm Similarities and Differences via Situational Alignment and Explainable Textual Entailment
emnlp-main.215
2305.14492
[ "https://github.com/asaakyan/socnormnli" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.216.bib
https://aclanthology.org/2023.emnlp-main.216/
@inproceedings{zhou-etal-2023-inform, title = "{INFORM} : Information e{N}tropy based multi-step reasoning {FOR} large language Models", author = "Zhou, Chuyue and You, Wangjie and Li, Juntao and Ye, Jing and Chen, Kehai and Zhang, Min", editor = "Bouamor, Houda and ...
Large language models (LLMs) have demonstrated exceptional performance in reasoning tasks with dedicated Chain-of-Thought (CoT) prompts. Further enhancing CoT prompts with carefully chosen exemplars can significantly improve reasoning performance. However, the effectiveness of CoT prompts may fluctuate dramatically with differ...
[ "Zhou, Chuyue", "You, Wangjie", "Li, Juntao", "Ye, Jing", "Chen, Kehai", "Zhang, Min" ]
INFORM : Information eNtropy based multi-step reasoning FOR large language Models
emnlp-main.216
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
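The INFORM record above ties multi-step reasoning quality to information entropy. As a generic illustration only (the truncated abstract does not spell out the exact criterion), one can score a question by the Shannon entropy of final answers sampled across several chain-of-thought runs; low entropy signals consistent reasoning:

```python
from collections import Counter
import math

def answer_entropy(sampled_answers):
    """Shannon entropy (bits) of the empirical answer distribution."""
    counts = Counter(sampled_answers)
    n = len(sampled_answers)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Five sampled chain-of-thought runs on the same question:
print(answer_entropy(["42", "42", "42", "41", "42"]))  # ~0.72 bits, fairly consistent
print(answer_entropy(["42", "17", "8", "41", "23"]))   # ~2.32 bits, unstable
```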
https://aclanthology.org/2023.emnlp-main.217.bib
https://aclanthology.org/2023.emnlp-main.217/
@inproceedings{li-etal-2023-adaptive-gating, title = "Adaptive Gating in Mixture-of-Experts based Language Models", author = "Li, Jiamin and Su, Qiang and Yang, Yitao and Jiang, Yimin and Wang, Cong and Xu, Hong", editor = "Bouamor, Houda and Pino, Juan and B...
Large language models have demonstrated exceptional language understanding capabilities in many NLP tasks. Sparsely activated mixture-of-experts (MoE) has emerged as a promising solution for scaling models while maintaining a constant number of computational operations. Existing MoE models adopt a fixed gating network ...
[ "Li, Jiamin", "Su, Qiang", "Yang, Yitao", "Jiang, Yimin", "Wang, Cong", "Xu, Hong" ]
Adaptive Gating in Mixture-of-Experts based Language Models
emnlp-main.217
2310.07188
[ "" ]
https://huggingface.co/papers/2310.07188
1
2
0
6
[]
[]
[]
1
Poster
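The adaptive-gating record above contrasts fixed top-k MoE routing with routing a variable number of experts per token. A sketch of one plausible variant, assuming a threshold on the top expert's gate probability (the rule and threshold are illustrative, not the paper's):

```python
import torch
import torch.nn.functional as F

def adaptive_routing(gate_logits, threshold=0.7):
    """Route each token to 1 expert if the top gate probability clears
    `threshold`, otherwise to the top-2 experts. Returns per-token lists
    of (expert_index, weight)."""
    probs = F.softmax(gate_logits, dim=-1)
    top_p, top_i = probs.topk(2, dim=-1)
    routes = []
    for (p1, p2), (e1, e2) in zip(top_p.tolist(), top_i.tolist()):
        if p1 >= threshold:
            routes.append([(e1, 1.0)])            # confident token: one expert
        else:
            s = p1 + p2
            routes.append([(e1, p1 / s), (e2, p2 / s)])  # ambiguous: two experts
    return routes

logits = torch.tensor([[4.0, 0.5, 0.1, 0.2], [1.0, 0.9, 0.8, 0.7]])
print(adaptive_routing(logits))  # first token -> 1 expert, second -> 2
```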
https://aclanthology.org/2023.emnlp-main.218.bib
https://aclanthology.org/2023.emnlp-main.218/
@inproceedings{valentini-etal-2023-automatic, title = "On the Automatic Generation and Simplification of Children{'}s Stories", author = "Valentini, Maria and Weber, Jennifer and Salcido, Jesus and Wright, T{\'e}a and Colunga, Eliana and von der Wense, Katharina", editor =...
With recent advances in large language models (LLMs), the concept of automatically generating children's educational materials has become increasingly realistic. Working toward the goal of age-appropriate simplicity in generated educational texts, we first examine the ability of several popular LLMs to generate stori...
[ "Valentini, Maria", "Weber, Jennifer", "Salcido, Jesus", "Wright, T{\\'e}a", "Colunga, Eliana", "von der Wense, Katharina" ]
On the Automatic Generation and Simplification of Children's Stories
emnlp-main.218
2310.18502
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.219.bib
https://aclanthology.org/2023.emnlp-main.219/
@inproceedings{wei-etal-2023-decompositions, title = "When Do Decompositions Help for Machine Reading?", author = "Wei, Kangda and Lawrie, Dawn and Van Durme, Benjamin and Chen, Yunmo and Weller, Orion", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", ...
Answering complex questions often requires multi-step reasoning in order to obtain the final answer. Most research into decompositions of complex questions involves open-domain systems, which have shown success in using these decompositions for improved retrieval. In the machine reading setting, however, work to unders...
[ "Wei, Kangda", "Lawrie, Dawn", "Van Durme, Benjamin", "Chen, Yunmo", "Weller, Orion" ]
When Do Decompositions Help for Machine Reading?
emnlp-main.219
2212.10019
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.220.bib
https://aclanthology.org/2023.emnlp-main.220/
@inproceedings{slobodkin-etal-2023-curious, title = "The Curious Case of Hallucinatory (Un)answerability: Finding Truths in the Hidden States of Over-Confident Large Language Models", author = "Slobodkin, Aviv and Goldman, Omer and Caciularu, Avi and Dagan, Ido and Ravfogel, Shauli",...
Large language models (LLMs) have been shown to possess impressive capabilities, while also raising crucial concerns about the faithfulness of their responses. A primary issue arising in this context is the management of (un)answerable queries by LLMs, which often results in hallucinatory behavior due to overconfidence...
[ "Slobodkin, Aviv", "Goldman, Omer", "Caciularu, Avi", "Dagan, Ido", "Ravfogel, Shauli" ]
The Curious Case of Hallucinatory (Un)answerability: Finding Truths in the Hidden States of Over-Confident Large Language Models
emnlp-main.220
2310.11877
[ "https://github.com/lovodkin93/unanswerability" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.221.bib
https://aclanthology.org/2023.emnlp-main.221/
@inproceedings{spangher-etal-2023-identifying, title = "Identifying Informational Sources in News Articles", author = "Spangher, Alexander and Peng, Nanyun and Ferrara, Emilio and May, Jonathan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "P...
News articles are driven by the informational sources journalists use in reporting. Modeling when, how and why sources get used together in stories can help us better understand the information we consume and even help journalists with the task of producing it. In this work, we take steps toward this goal by constructi...
[ "Spangher, Alex", "er", "Peng, Nanyun", "Ferrara, Emilio", "May, Jonathan" ]
Identifying Informational Sources in News Articles
emnlp-main.221
2305.14904
[ "https://github.com/alex2awesome/source-exploration" ]
https://huggingface.co/papers/2305.14904
0
0
0
4
[ "alex2awesome/quote-detection__roberta-base-sentence", "alex2awesome/quote-detection__roberta-base-sentence-v2", "alex2awesome/quote-attribution__qa-model-v2", "alex2awesome/quote-detection__roberta-base-sentence-v3", "alex2awesome/quote-attribution__qa-model-v3" ]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.222.bib
https://aclanthology.org/2023.emnlp-main.222/
@inproceedings{shah-etal-2023-retrofitting, title = "Retrofitting Light-weight Language Models for Emotions using Supervised Contrastive Learning", author = "Shah, Sapan and Reddy, Sreedhar and Bhattacharyya, Pushpak", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", ...
We present a novel retrofitting method to induce emotion aspects into pre-trained language models (PLMs) such as BERT and RoBERTa. Our method updates pre-trained network weights using contrastive learning so that the text fragments exhibiting similar emotions are encoded nearby in the representation space, and the frag...
[ "Shah, Sapan", "Reddy, Sreedhar", "Bhattacharyya, Pushpak" ]
Retrofitting Light-weight Language Models for Emotions using Supervised Contrastive Learning
emnlp-main.222
2310.18930
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.223.bib
https://aclanthology.org/2023.emnlp-main.223/
@inproceedings{yang-etal-2023-longtriever, title = "Longtriever: a Pre-trained Long Text Encoder for Dense Document Retrieval", author = "Yang, Junhan and Liu, Zheng and Li, Chaozhuo and Sun, Guangzhong and Xie, Xing", editor = "Bouamor, Houda and Pino, Juan and Bal...
Pre-trained language models (PLMs) have achieved a preeminent position in dense retrieval due to their powerful capacity for modeling intrinsic semantics. However, most existing PLM-based retrieval models encounter substantial computational costs and are infeasible for processing long documents. In this paper, a novel...
[ "Yang, Junhan", "Liu, Zheng", "Li, Chaozhuo", "Sun, Guangzhong", "Xie, Xing" ]
Longtriever: a Pre-trained Long Text Encoder for Dense Document Retrieval
emnlp-main.223
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.224.bib
https://aclanthology.org/2023.emnlp-main.224/
@inproceedings{liu-etal-2023-revisiting-de, title = "Revisiting De-Identification of Electronic Medical Records: Evaluation of Within- and Cross-Hospital Generalization", author = "Liu, Yiyang and Li, Jinpeng and Zhu, Enwei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kali...
The de-identification task aims to detect and remove protected health information from electronic medical records (EMRs). Previous studies generally focus on the within-hospital setting and achieve great success, while the cross-hospital setting has been overlooked. This study introduces a new de-identification d...
[ "Liu, Yiyang", "Li, Jinpeng", "Zhu, Enwei" ]
Revisiting De-Identification of Electronic Medical Records: Evaluation of Within- and Cross-Hospital Generalization
emnlp-main.224
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.225.bib
https://aclanthology.org/2023.emnlp-main.225/
@inproceedings{juneja-etal-2023-small, title = "Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning", author = "Juneja, Gurusha and Dutta, Subhabrata and Chakrabarti, Soumen and Manchanda, Sunny and Chakraborty, Tanmoy", editor = "Bouam...
Large Language Models (LLMs) prompted to generate chain-of-thought (CoT) exhibit impressive reasoning capabilities. Recent attempts at prompt decomposition toward solving complex, multi-step reasoning problems depend on the ability of the LLM to simultaneously decompose and solve the problem. A significant disadvantage...
[ "Juneja, Gurusha", "Dutta, Subhabrata", "Chakrabarti, Soumen", "Manch", "a, Sunny", "Chakraborty, Tanmoy" ]
Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning
emnlp-main.225
2310.18338
[ "https://github.com/lcs2-iiitd/daslam" ]
https://huggingface.co/papers/2310.18338
0
1
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.226.bib
https://aclanthology.org/2023.emnlp-main.226/
@inproceedings{xu-etal-2023-language-representation, title = "Language Representation Projection: Can We Transfer Factual Knowledge across Languages in Multilingual Language Models?", author = "Xu, Shaoyang and Li, Junzhuo and Xiong, Deyi", editor = "Bouamor, Houda and Pino, Juan and ...
Multilingual pretrained language models serve as repositories of multilingual factual knowledge. Nevertheless, a substantial performance gap of factual knowledge probing exists between high-resource languages and low-resource languages, suggesting limited implicit factual knowledge transfer across languages in multilin...
[ "Xu, Shaoyang", "Li, Junzhuo", "Xiong, Deyi" ]
Language Representation Projection: Can We Transfer Factual Knowledge across Languages in Multilingual Language Models?
emnlp-main.226
2311.03788
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.227.bib
https://aclanthology.org/2023.emnlp-main.227/
@inproceedings{michaelov-etal-2023-structural, title = "Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models", author = "Michaelov, James and Arnett, Catherine and Chang, Tyler and Bergen, Ben", editor = "Bouamor, Houda and Pino, Ju...
Abstract grammatical knowledge (of parts of speech and grammatical patterns) is key to the capacity for linguistic generalization in humans. But how abstract is grammatical knowledge in large language models? In the human literature, compelling evidence for grammatical abstraction comes from structural priming. A...
[ "Michaelov, James", "Arnett, Catherine", "Chang, Tyler", "Bergen, Ben" ]
Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models
emnlp-main.227
2311.09194
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.228.bib
https://aclanthology.org/2023.emnlp-main.228/
@inproceedings{jiang-etal-2023-reasoninglm, title = "{R}easoning{LM}: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph", author = "Jiang, Jinhao and Zhou, Kun and Zhao, Xin and Li, Yaliang and Wen, Ji-Rong", editor ...
Question Answering over Knowledge Graph (KGQA) aims to seek answer entities for the natural language question from a large-scale Knowledge Graph (KG). To better perform reasoning on KG, recent work typically adopts a pre-trained language model (PLM) to model the question, and a graph neural network (GNN) based module t...
[ "Jiang, Jinhao", "Zhou, Kun", "Zhao, Xin", "Li, Yaliang", "Wen, Ji-Rong" ]
ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph
emnlp-main.228
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.229.bib
https://aclanthology.org/2023.emnlp-main.229/
@inproceedings{urrutia-etal-2023-deep, title = "Deep Natural Language Feature Learning for Interpretable Prediction", author = "Urrutia, Felipe and Calderon, Cristian and Barriere, Valentin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings ...
We propose a general method to break down a main complex task into a set of intermediary easier sub-tasks, which are formulated in natural language as binary questions related to the final target task. Our method allows for representing each example by a vector consisting of the answers to these questions. We call this...
[ "Urrutia, Felipe", "Calderon, Cristian", "Barriere, Valentin" ]
Deep Natural Language Feature Learning for Interpretable Prediction
emnlp-main.229
null
[ "https://github.com/furrutiav/nllf-emnlp-2023" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
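The Urrutia et al. record above represents each example by its answers to binary natural-language sub-questions. A sketch of that representation, with a stubbed yes/no scorer standing in for the small trained model the abstract implies (questions, scorer, and names are all hypothetical):

```python
BINARY_QUESTIONS = [
    "Does the text mention a number?",
    "Does the text express an opinion?",
    "Is the text a question?",
]

def nl_feature_vector(text, answer_fn):
    """Encode `text` as one score per binary sub-question."""
    return [answer_fn(q, text) for q in BINARY_QUESTIONS]

def toy_answer_fn(question, text):
    """Stub scorer; a trained yes/no model would go here."""
    if "number" in question:
        return float(any(ch.isdigit() for ch in text))
    if "question" in question:
        return float(text.rstrip().endswith("?"))
    return 0.5  # unknown -> neutral score

print(nl_feature_vector("Is 7 a lucky number?", toy_answer_fn))  # [1.0, 0.5, 1.0]
```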
https://aclanthology.org/2023.emnlp-main.230.bib
https://aclanthology.org/2023.emnlp-main.230/
@inproceedings{esiobu-etal-2023-robbie, title = "{ROBBIE}: Robust Bias Evaluation of Large Generative Language Models", author = "Esiobu, David and Tan, Xiaoqing and Hosseini, Saghar and Ung, Megan and Zhang, Yuchen and Fernandes, Jude and Dwivedi-Yu, Jane and Pr...
As generative large language models (LLMs) grow more performant and prevalent, we must develop sufficiently comprehensive tools to measure and improve their fairness. Different prompt-based datasets can be used to measure social bias across multiple text domains and demographic axes, meaning that testing LLMs on more dataset...
[ "Esiobu, David", "Tan, Xiaoqing", "Hosseini, Saghar", "Ung, Megan", "Zhang, Yuchen", "Fern", "es, Jude", "Dwivedi-Yu, Jane", "Presani, Eleonora", "Williams, Adina", "Smith, Eric" ]
ROBBIE: Robust Bias Evaluation of Large Generative Language Models
emnlp-main.230
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.231.bib
https://aclanthology.org/2023.emnlp-main.231/
@inproceedings{ohashi-higashinaka-2023-enhancing, title = "Enhancing Task-oriented Dialogue Systems with Generative Post-processing Networks", author = "Ohashi, Atsumoto and Higashinaka, Ryuichiro", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings ...
Recently, post-processing networks (PPNs), which modify the outputs of arbitrary modules including non-differentiable ones in task-oriented dialogue systems, have been proposed. PPNs have successfully improved the dialogue performance by post-processing natural language understanding (NLU), dialogue state tracking (DST...
[ "Ohashi, Atsumoto", "Higashinaka, Ryuichiro" ]
Enhancing Task-oriented Dialogue Systems with Generative Post-processing Networks
emnlp-main.231
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.232.bib
https://aclanthology.org/2023.emnlp-main.232/
@inproceedings{chevalier-etal-2023-adapting, title = "Adapting Language Models to Compress Contexts", author = "Chevalier, Alexis and Wettig, Alexander and Ajith, Anirudh and Chen, Danqi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedi...
Transformer-based language models (LMs) are powerful and widely-applicable tools, but their usefulness is constrained by a finite context window and the expensive computational cost of processing long text documents. We propose to adapt pre-trained LMs into AutoCompressors. These language models are capable of compress...
[ "Chevalier, Alexis", "Wettig, Alex", "er", "Ajith, Anirudh", "Chen, Danqi" ]
Adapting Language Models to Compress Contexts
emnlp-main.232
2305.14788
[ "https://github.com/princeton-nlp/autocompressors" ]
https://huggingface.co/papers/2305.14788
0
1
0
4
[ "princeton-nlp/RMT-2.7b-8k", "princeton-nlp/AutoCompressor-2.7b-30k", "princeton-nlp/AutoCompressor-2.7b-6k", "princeton-nlp/AutoCompressor-Llama-2-7b-6k", "princeton-nlp/RMT-1.3b-30k", "princeton-nlp/AutoCompressor-1.3b-30k", "princeton-nlp/FullAttention-2.7b-4k", "princeton-nlp/FullAttention-Llama-2...
[]
[]
1
Poster
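The AutoCompressors record above describes adapting LMs to compress long contexts; a common reading is that each segment is folded into a small set of summary vectors that condition the next segment. A structural sketch with a stand-in encode function (the interface is assumed, not taken from the paper):

```python
def autocompress(segments, encode, num_summary_vectors=50):
    """Fold a long document into at most `num_summary_vectors` vectors by
    processing segments left to right. `encode(summary, segment)` stands in
    for an adapted LM that reads prior summary vectors plus the new segment
    and emits fresh summary vectors."""
    summary = []
    for segment in segments:
        summary = encode(summary, segment)[:num_summary_vectors]
    return summary

# Toy encode: keep the last 50 "vectors" (here, just tokens) seen so far.
toy_encode = lambda summary, seg: (summary + seg.split())[-50:]
print(autocompress(["a long first chunk ...", "and a second chunk"], toy_encode))
```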
https://aclanthology.org/2023.emnlp-main.233.bib
https://aclanthology.org/2023.emnlp-main.233/
@inproceedings{zhou-etal-2023-selective, title = "Selective Labeling: How to Radically Lower Data-Labeling Costs for Document Extraction Models", author = "Zhou, Yichao and Wendt, James Bradley and Potti, Navneet and Xie, Jing and Tata, Sandeep", editor = "Bouamor, Houda and ...
Building automatic extraction models for visually rich documents like invoices, receipts, bills, tax forms, etc. has received significant attention lately. A key bottleneck in developing extraction models for new document types is the cost of acquiring the several thousand high-quality labeled documents that are needed...
[ "Zhou, Yichao", "Wendt, James Bradley", "Potti, Navneet", "Xie, Jing", "Tata, S", "eep" ]
Selective Labeling: How to Radically Lower Data-Labeling Costs for Document Extraction Models
emnlp-main.233
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.234.bib
https://aclanthology.org/2023.emnlp-main.234/
@inproceedings{chen-etal-2023-travel, title = "{TRAVEL}: Tag-Aware Conversational {FAQ} Retrieval via Reinforcement Learning", author = "Chen, Yue and Jin, Dingnan and Huang, Chen and Liu, Jia and Lei, Wenqiang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kal...
Efficiently retrieving FAQ questions that match users' intent is essential for online customer service. Existing methods aim to fully utilize the dynamic conversation context to enhance the semantic association between the user query and FAQ questions. However, the conversation context contains noise, e.g., users may...
[ "Chen, Yue", "Jin, Dingnan", "Huang, Chen", "Liu, Jia", "Lei, Wenqiang" ]
TRAVEL: Tag-Aware Conversational FAQ Retrieval via Reinforcement Learning
emnlp-main.234
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.235.bib
https://aclanthology.org/2023.emnlp-main.235/
@inproceedings{cho-etal-2023-continual, title = "Continual Dialogue State Tracking via Example-Guided Question Answering", author = "Cho, Hyundong and Madotto, Andrea and Lin, Zhaojiang and Chandu, Khyathi and Kottur, Satwik and Xu, Jing and May, Jonathan and San...
Dialogue systems are frequently updated to accommodate new services, but naively updating them by continually training on data for new services results in diminishing performance on previously learnt services. Motivated by the insight that dialogue state tracking (DST), a crucial component of dialogue systems that estimates ...
[ "Cho, Hyundong", "Madotto, Andrea", "Lin, Zhaojiang", "Ch", "u, Khyathi", "Kottur, Satwik", "Xu, Jing", "May, Jonathan", "Sankar, Chinnadhurai" ]
Continual Dialogue State Tracking via Example-Guided Question Answering
emnlp-main.235
2305.13721
[ "https://github.com/facebookresearch/dst-egqa" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.236.bib
https://aclanthology.org/2023.emnlp-main.236/
@inproceedings{mittal-etal-2023-lost, title = "Lost in Translation, Found in Spans: Identifying Claims in Multilingual Social Media", author = "Mittal, Shubham and Sundriyal, Megha and Nakov, Preslav", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Pr...
Claim span identification (CSI) is an important step in fact-checking pipelines, aiming to identify text segments that contain a check-worthy claim or assertion in a social media post. Despite its importance to journalists and human fact-checkers, it remains a severely understudied problem, and the scarce research on t...
[ "Mittal, Shubham", "Sundriyal, Megha", "Nakov, Preslav" ]
Lost in Translation, Found in Spans: Identifying Claims in Multilingual Social Media
emnlp-main.236
2310.18205
[ "https://github.com/mbzuai-nlp/x-claim" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.237.bib
https://aclanthology.org/2023.emnlp-main.237/
@inproceedings{kim-etal-2023-covid, title = "{COVID}-19 Vaccine Misinformation in Middle Income Countries", author = "Kim, Jongin and Bak, Byeo Rhee and Agrawal, Aditya and Wu, Jiaxi and Wirtz, Veronika and Hong, Traci and Wijaya, Derry", editor = "Bouamor, Houda a...
This paper introduces a multilingual dataset of COVID-19 vaccine misinformation, consisting of annotated tweets from three middle-income countries: Brazil, Indonesia, and Nigeria. The expertly curated dataset includes annotations for 5,952 tweets, assessing their relevance to COVID-19 vaccines, presence of misinformati...
[ "Kim, Jongin", "Bak, Byeo Rhee", "Agrawal, Aditya", "Wu, Jiaxi", "Wirtz, Veronika", "Hong, Traci", "Wijaya, Derry" ]
COVID-19 Vaccine Misinformation in Middle Income Countries
emnlp-main.237
2311.18195
[ "https://github.com/zzoliman/covid-vaccine-misinfo-mic" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.238.bib
https://aclanthology.org/2023.emnlp-main.238/
@inproceedings{zhang-etal-2023-contrastive-learning, title = "Contrastive Learning of Sentence Embeddings from Scratch", author = "Zhang, Junlei and Lan, Zhenzhong and He, Junxian", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 202...
Contrastive learning has been the dominant approach to train state-of-the-art sentence embeddings. Previous studies have typically learned sentence embeddings either through the use of human-annotated natural language inference (NLI) data or via large-scale unlabeled sentences in an unsupervised manner. However, even i...
[ "Zhang, Junlei", "Lan, Zhenzhong", "He, Junxian" ]
Contrastive Learning of Sentence Embeddings from Scratch
emnlp-main.238
2305.15077
[ "https://github.com/sjtu-lit/syncse" ]
https://huggingface.co/papers/2305.15077
0
0
0
3
[ "hkust-nlp/SynCSE-scratch-RoBERTa-large", "hkust-nlp/SynCSE-partial-RoBERTa-base", "hkust-nlp/SynCSE-scratch-RoBERTa-base", "hkust-nlp/SynCSE-partial-RoBERTa-large" ]
[ "hkust-nlp/SynCSE-partial-NLI", "hkust-nlp/SynCSE-scratch-NLI" ]
[]
1
Poster
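Contrastive learning of sentence embeddings, the topic of the SynCSE record above, typically optimizes an InfoNCE objective over (anchor, positive) pairs with in-batch negatives. A standard SimCSE-style sketch (the paper's exact objective may differ):

```python
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.05):
    """InfoNCE with in-batch negatives: row i's positive is column i."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    sim = a @ p.T / temperature          # (B, B) cosine similarities
    targets = torch.arange(a.size(0))    # matching pair sits on the diagonal
    return F.cross_entropy(sim, targets)

anchors, positives = torch.randn(8, 256), torch.randn(8, 256)
print(info_nce(anchors, positives).item())
```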
https://aclanthology.org/2023.emnlp-main.239.bib
https://aclanthology.org/2023.emnlp-main.239/
@inproceedings{sandoval-etal-2023-rose, title = "A Rose by Any Other Name would not Smell as Sweet: Social Bias in Names Mistranslation", author = "Sandoval, Sandra and Zhao, Jieyu and Carpuat, Marine and Daum{\'e} III, Hal", editor = "Bouamor, Houda and Pino, Juan and Bal...
We ask the question: Are there widespread disparities in machine translations of names across race/ethnicity and gender? We hypothesize that the translation quality of names and surrounding context will be lower for names associated with US racial and ethnic minorities due to these systems' tendencies to standardize...
[ "S", "oval, S", "ra", "Zhao, Jieyu", "Carpuat, Marine", "Daum{\\'e} III, Hal" ]
A Rose by Any Other Name would not Smell as Sweet: Social Bias in Names Mistranslation
emnlp-main.239
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.240.bib
https://aclanthology.org/2023.emnlp-main.240/
@inproceedings{phang-etal-2023-investigating, title = "Investigating Efficiently Extending Transformers for Long Input Summarization", author = "Phang, Jason and Zhao, Yao and Liu, Peter", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of ...
While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs still poses a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most models. Through an extensive set of experi...
[ "Phang, Jason", "Zhao, Yao", "Liu, Peter" ]
Investigating Efficiently Extending Transformers for Long Input Summarization
emnlp-main.240
2208.04347
[ "https://github.com/google-research/pegasus" ]
https://huggingface.co/papers/2208.04347
2
0
0
3
[ "UNIST-Eunchan/Research-Paper-Summarization-Pegasus-x-ArXiv" ]
[]
[ "micknikolic/pdf-abstract-summarizer", "UNIST-Eunchan/Paper-Abstract-Generator" ]
1
Oral
https://aclanthology.org/2023.emnlp-main.241.bib
https://aclanthology.org/2023.emnlp-main.241/
@inproceedings{guo-etal-2023-cs2w, title = "{CS}2{W}: A {C}hinese Spoken-to-Written Style Conversion Dataset with Multiple Conversion Types", author = "Guo, Zishan and Yu, Linhao and Xu, Minghui and Jin, Renren and Xiong, Deyi", editor = "Bouamor, Houda and Pino, Juan and...
Spoken texts (either manual or automatic transcriptions from automatic speech recognition (ASR)) often contain disfluencies and grammatical errors, which pose tremendous challenges to downstream tasks. Converting spoken into written language is hence desirable. Unfortunately, the availability of datasets for this is li...
[ "Guo, Zishan", "Yu, Linhao", "Xu, Minghui", "Jin, Renren", "Xiong, Deyi" ]
CS2W: A Chinese Spoken-to-Written Style Conversion Dataset with Multiple Conversion Types
emnlp-main.241
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.242.bib
https://aclanthology.org/2023.emnlp-main.242/
@inproceedings{ansell-etal-2023-unifying, title = "Unifying Cross-Lingual Transfer across Scenarios of Resource Scarcity", author = "Ansell, Alan and Parovi{\'c}, Marinela and Vuli{\'c}, Ivan and Korhonen, Anna and Ponti, Edoardo", editor = "Bouamor, Houda and Pino, Juan ...
The scarcity of data in many of the world's languages necessitates the transfer of knowledge from other, resource-rich languages. However, the level of scarcity varies significantly across multiple dimensions, including: i) the amount of task-specific data available in the source and target languages; ii) the amount ...
[ "Ansell, Alan", "Parovi{\\'c}, Marinela", "Vuli{\\'c}, Ivan", "Korhonen, Anna", "Ponti, Edoardo" ]
Unifying Cross-Lingual Transfer across Scenarios of Resource Scarcity
emnlp-main.242
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.243.bib
https://aclanthology.org/2023.emnlp-main.243/
@inproceedings{attanasio-etal-2023-tale, title = "A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation", author = "Attanasio, Giuseppe and Plaza del Arco, Flor Miriam and Nozza, Debora and Lauscher, Anne", editor = "Bouamor...
Recent instruction fine-tuned models can solve multiple NLP tasks when prompted to do so, with machine translation (MT) being a prominent use case. However, current research often focuses on standard performance benchmarks, leaving compelling fairness and ethical considerations behind. In MT, this might lead to misgend...
[ "Attanasio, Giuseppe", "Plaza del Arco, Flor Miriam", "Nozza, Debora", "Lauscher, Anne" ]
A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation
emnlp-main.243
2310.12127
[ "https://github.com/milanlproc/interpretability-mt-gender-bias" ]
https://huggingface.co/papers/2310.12127
1
1
0
4
[]
[ "MilaNLProc/a-tale-of-pronouns" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.244.bib
https://aclanthology.org/2023.emnlp-main.244/
@inproceedings{jiang-etal-2023-disco, title = "{D}is{C}o: Distilled Student Models Co-training for Semi-supervised Text Mining", author = "Jiang, Weifeng and Mao, Qianren and Lin, Chenghua and Li, Jianxin and Deng, Ting and Yang, Weiyi and Wang, Zheng", editor = "Bo...
Many text mining models are constructed by fine-tuning a large, deep pre-trained language model (PLM) on downstream tasks. However, a significant challenge is maintaining performance when using a lightweight model with limited labeled samples. We present DisCo, a semi-supervised learning (SSL) ...
[ "Jiang, Weifeng", "Mao, Qianren", "Lin, Chenghua", "Li, Jianxin", "Deng, Ting", "Yang, Weiyi", "Wang, Zheng" ]
DisCo: Distilled Student Models Co-training for Semi-supervised Text Mining
emnlp-main.244
2305.12074
[ "https://github.com/litesslhub/disco" ]
https://huggingface.co/papers/2305.12074
0
0
0
7
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.245.bib
https://aclanthology.org/2023.emnlp-main.245/
@inproceedings{yin-etal-2023-dynosaur, title = "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation", author = "Yin, Da and Liu, Xiao and Yin, Fan and Zhong, Ming and Bansal, Hritik and Han, Jiawei and Chang, Kai-Wei", editor = "Bouamor, Houda ...
Instruction tuning has emerged to enhance the capabilities of large language models (LLMs) to comprehend instructions and generate appropriate responses. Existing methods either manually annotate or employ LLM (e.g., GPT-series) to generate data for instruction tuning. However, they often overlook associating instructi...
[ "Yin, Da", "Liu, Xiao", "Yin, Fan", "Zhong, Ming", "Bansal, Hritik", "Han, Jiawei", "Chang, Kai-Wei" ]
Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation
emnlp-main.245
2305.14327
[ "https://github.com/wadeyin9712/dynosaur" ]
https://huggingface.co/papers/2305.14327
2
0
0
7
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.246.bib
https://aclanthology.org/2023.emnlp-main.246/
@inproceedings{wang-etal-2023-steps, title = "Are All Steps Equally Important? Benchmarking Essentiality Detection in Event Processes", author = "Wang, Haoyu and Zhang, Hongming and Wang, Yueguan and Deng, Yuqian and Chen, Muhao and Roth, Dan", editor = "Bouamor, Houda an...
Natural language often describes events at different granularities, such that more coarse-grained (goal) events can often be decomposed into fine-grained sequences of (step) events. A critical but overlooked challenge in understanding an event process lies in the fact that the step events are not equally important to t...
[ "Wang, Haoyu", "Zhang, Hongming", "Wang, Yueguan", "Deng, Yuqian", "Chen, Muhao", "Roth, Dan" ]
Are All Steps Equally Important? Benchmarking Essentiality Detection in Event Processes
emnlp-main.246
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.247.bib
https://aclanthology.org/2023.emnlp-main.247/
@inproceedings{chen-etal-2023-language, title = "Language Model is Suitable for Correction of Handwritten Mathematical Expressions Recognition", author = "Chen, Zui and Han, Jiaqi and Yang, Chaofan and Zhou, Yi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", ...
Handwritten mathematical expression recognition (HMER) is a multidisciplinary task that generates LaTeX sequences from images. Existing approaches, employing tree decoders within attention-based encoder-decoder architectures, aim to capture the hierarchical tree structure, but are limited by CFGs and pre-generated trip...
[ "Chen, Zui", "Han, Jiaqi", "Yang, Chaofan", "Zhou, Yi" ]
Language Model is Suitable for Correction of Handwritten Mathematical Expressions Recognition
emnlp-main.247
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.248.bib
https://aclanthology.org/2023.emnlp-main.248/
@inproceedings{de-la-pena-sarracen-etal-2023-vicinal, title = "Vicinal Risk Minimization for Few-Shot Cross-lingual Transfer in Abusive Language Detection", author = "De la Pe{\~n}a Sarrac{\'e}n, Gretel and Rosso, Paolo and Litschko, Robert and Glava{\v{s}}, Goran and Ponzetto, Simon...
Cross-lingual transfer learning from high-resource to medium and low-resource languages has shown encouraging results. However, the scarcity of resources in target languages remains a challenge. In this work, we resort to data augmentation and continual pre-training for domain adaptation to improve cross-lingual abusiv...
[ "De la Pe{\\~n}a Sarrac{\\'e}n, Gretel", "Rosso, Paolo", "Litschko, Robert", "Glava{\\v{s}}, Goran", "Ponzetto, Simone" ]
Vicinal Risk Minimization for Few-Shot Cross-lingual Transfer in Abusive Language Detection
emnlp-main.248
2311.02025
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.249.bib
https://aclanthology.org/2023.emnlp-main.249/
@inproceedings{jiang-etal-2023-superdialseg, title = "{S}uper{D}ialseg: A Large-scale Dataset for Supervised Dialogue Segmentation", author = "Jiang, Junfeng and Dong, Chengzhang and Kurohashi, Sadao and Aizawa, Akiko", editor = "Bouamor, Houda and Pino, Juan and Bali, Kal...
Dialogue segmentation is a crucial task for dialogue systems allowing a better understanding of conversational texts. Despite recent progress in unsupervised dialogue segmentation methods, their performances are limited by the lack of explicit supervised signals for training. Furthermore, the precise definition of segm...
[ "Jiang, Junfeng", "Dong, Chengzhang", "Kurohashi, Sadao", "Aizawa, Akiko" ]
SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation
emnlp-main.249
2305.08371
[ "https://github.com/coldog2333/superdialseg" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.250.bib
https://aclanthology.org/2023.emnlp-main.250/
@inproceedings{bai-etal-2023-atformer, title = "{ATF}ormer: A Learned Performance Model with Transfer Learning Across Devices for Deep Learning Tensor Programs", author = "Bai, Yang and Zhao, Wenqian and Yin, Shuo and Wang, Zixiao and Yu, Bei", editor = "Bouamor, Houda and ...
The training and inference efficiency of ever-larger deep neural networks highly rely on the performance of tensor operators on specific hardware platforms. Therefore, a compilation-based optimization flow with automatic tensor generation and parameter tuning is necessary for efficient model deployment. While compilati...
[ "Bai, Yang", "Zhao, Wenqian", "Yin, Shuo", "Wang, Zixiao", "Yu, Bei" ]
ATFormer: A Learned Performance Model with Transfer Learning Across Devices for Deep Learning Tensor Programs
emnlp-main.250
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.251.bib
https://aclanthology.org/2023.emnlp-main.251/
@inproceedings{overbay-etal-2023-mredditsum, title = "m{R}eddit{S}um: A Multimodal Abstractive Summarization Dataset of {R}eddit Threads with Images", author = "Overbay, Keighley and Ahn, Jaewoo and Pesaran zadeh, Fatemeh and Park, Joonsuk and Kim, Gunhee", editor = "Bouamor, Hou...
The growing number of multimodal online discussions necessitates automatic summarization to save time and reduce content overload. However, existing summarization datasets are not suitable for this purpose, as they either do not cover discussions, multiple modalities, or both. To this end, we present mRedditSum, the fi...
[ "Overbay, Keighley", "Ahn, Jaewoo", "Pesaran zadeh, Fatemeh", "Park, Joonsuk", "Kim, Gunhee" ]
mRedditSum: A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images
emnlp-main.251
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.252.bib
https://aclanthology.org/2023.emnlp-main.252/
@inproceedings{ding-etal-2023-sparse, title = "Sparse Low-rank Adaptation of Pre-trained Language Models", author = "Ding, Ning and Lv, Xingtai and Wang, Qiaosen and Chen, Yulin and Zhou, Bowen and Liu, Zhiyuan and Sun, Maosong", editor = "Bouamor, Houda and ...
Fine-tuning pre-trained large language models in a parameter-efficient manner is widely studied for its effectiveness and efficiency. The popular method of low-rank adaptation (LoRA) offers a notable approach, hypothesizing that the adaptation process is intrinsically low-dimensional. Although LoRA has demonstrated com...
[ "Ding, Ning", "Lv, Xingtai", "Wang, Qiaosen", "Chen, Yulin", "Zhou, Bowen", "Liu, Zhiyuan", "Sun, Maosong" ]
Sparse Low-rank Adaptation of Pre-trained Language Models
emnlp-main.252
null
[ "https://github.com/tsinghuac3i/sora" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
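The sparse low-rank adaptation record above extends LoRA with an adaptive rank. One way to express that idea is a learnable gate on LoRA's rank dimension that is sparsified during training; the sketch below shows only the forward pass (the gating/sparsification schedule itself is not given in the truncated abstract):

```python
import torch
import torch.nn as nn

class SparseLoRALinear(nn.Module):
    """Frozen base layer plus a gated low-rank update: W x + B diag(g) A x."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # only the adapter trains
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.gate = nn.Parameter(torch.ones(rank))   # zeroed entries shrink the rank

    def forward(self, x):
        return self.base(x) + ((x @ self.A.T) * self.gate) @ self.B.T

layer = SparseLoRALinear(nn.Linear(64, 32), rank=4)
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 32])
```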
https://aclanthology.org/2023.emnlp-main.253.bib
https://aclanthology.org/2023.emnlp-main.253/
@inproceedings{don-yehiya-etal-2023-human, title = "Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney", author = "Don-Yehiya, Shachar and Choshen, Leshem and Abend, Omri", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle...
Generating images with a Text-to-Image model often requires multiple trials, where human users iteratively update their prompt based on feedback, namely the output image. Taking inspiration from cognitive work on reference games and dialogue alignment, this paper analyzes the dynamics of the user prompts along such ite...
[ "Don-Yehiya, Shachar", "Choshen, Leshem", "Abend, Omri" ]
Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney
emnlp-main.253
2311.12131
[ "https://github.com/shachardon/mid-journey-to-alignment" ]
https://huggingface.co/papers/2311.12131
1
0
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.254.bib
https://aclanthology.org/2023.emnlp-main.254/
@inproceedings{sedova-roth-2023-ulf, title = "{ULF}: Unsupervised Labeling Function Correction using Cross-Validation for Weak Supervision", author = "Sedova, Anastasiia and Roth, Benjamin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2...
A cost-effective alternative to manual data labeling is weak supervision (WS), where data samples are automatically annotated using a predefined set of labeling functions (LFs), rule-based mechanisms that generate artificial labels for the associated classes. In this work, we investigate noise reduction techniques for ...
[ "Sedova, Anastasiia", "Roth, Benjamin" ]
ULF: Unsupervised Labeling Function Correction using Cross-Validation for Weak Supervision
emnlp-main.254
null
[ "https://github.com/knodle/knodle" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
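Weak supervision, the setting of the ULF record above, annotates data with rule-based labeling functions (LFs). Two toy LFs for a spam task (labels and rules invented for illustration; ULF's cross-validation correction step is not shown):

```python
import re

ABSTAIN, HAM, SPAM = -1, 0, 1

def lf_contains_link(text):
    """Fires SPAM when the message contains a URL."""
    return SPAM if re.search(r"https?://", text) else ABSTAIN

def lf_greeting(text):
    """Fires HAM for messages that open politely."""
    return HAM if text.lower().startswith(("hi", "hello", "thanks")) else ABSTAIN

for msg in ["hello, see you at 3", "win big at http://spam.example"]:
    votes = [lf(msg) for lf in (lf_contains_link, lf_greeting)]
    print(msg, "->", votes)
```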
https://aclanthology.org/2023.emnlp-main.255.bib
https://aclanthology.org/2023.emnlp-main.255/
@inproceedings{qi-etal-2023-art, title = "The Art of {SOCRATIC} {QUESTIONING}: Recursive Thinking with Large Language Models", author = "Qi, Jingyuan and Xu, Zhiyang and Shen, Ying and Liu, Minqian and Jin, Di and Wang, Qifan and Huang, Lifu", editor = "Bouamor, Hou...
Chain-of-Thought (CoT) prompting enables large language models to solve complex reasoning problems by generating intermediate steps. However, confined by its inherent single-pass and sequential generation process, CoT heavily relies on the initial decisions, causing errors in early steps to accumulate and impact the fi...
[ "Qi, Jingyuan", "Xu, Zhiyang", "Shen, Ying", "Liu, Minqian", "Jin, Di", "Wang, Qifan", "Huang, Lifu" ]
The Art of SOCRATIC QUESTIONING: Recursive Thinking with Large Language Models
emnlp-main.255
2305.14999
[ "https://github.com/vt-nlp/socratic-questioning" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.256.bib
https://aclanthology.org/2023.emnlp-main.256/
@inproceedings{liu-etal-2023-ideology, title = "Ideology Takes Multiple Looks: A High-Quality Dataset for Multifaceted Ideology Detection", author = "Liu, Songtao and Luo, Ziling and Xu, Minghua and Wei, Lixiao and Wei, Ziyao and Yu, Han and Xiang, Wei and Wang, ...
Ideology detection (ID) is important for gaining insights into people's opinions and stances on our world and society, which can find many applications in politics, economics and social sciences. It is not uncommon that a piece of text can contain descriptions of various issues. It is also widely accepted that a per...
[ "Liu, Songtao", "Luo, Ziling", "Xu, Minghua", "Wei, Lixiao", "Wei, Ziyao", "Yu, Han", "Xiang, Wei", "Wang, Bang" ]
Ideology Takes Multiple Looks: A High-Quality Dataset for Multifaceted Ideology Detection
emnlp-main.256
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.257.bib
https://aclanthology.org/2023.emnlp-main.257/
@inproceedings{colombo-etal-2023-transductive, title = "Transductive Learning for Textual Few-Shot Classification in {API}-based Embedding Models", author = "Colombo, Pierre and Pellegrain, Victor and Boudiaf, Malik and Tami, Myriam and Storchan, Victor and Ayed, Ismail and ...
Proprietary and closed APIs for processing natural language are becoming increasingly common and are impacting practical applications of natural language processing, including few-shot classification. Few-shot classification involves training a model to perform a new classification task with a handful of labeled data....
[ "Colombo, Pierre", "Pellegrain, Victor", "Boudiaf, Malik", "Tami, Myriam", "Storchan, Victor", "Ayed, Ismail", "Piantanida, Pablo" ]
Transductive Learning for Textual Few-Shot Classification in API-based Embedding Models
emnlp-main.257
2310.13998
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.258.bib
https://aclanthology.org/2023.emnlp-main.258/
@inproceedings{ahuja-etal-2023-mega, title = "{MEGA}: Multilingual Evaluation of Generative {AI}", author = "Ahuja, Kabir and Diddee, Harshita and Hada, Rishav and Ochieng, Millicent and Ramesh, Krithika and Jain, Prachi and Nambi, Akshay and Ganu, Tanuja and ...
Generative AI models have shown impressive performance on many Natural Language Processing tasks such as language understanding, reasoning, and language generation. An important question being asked by the AI community today is about the capabilities and limits of these models, and it is clear that evaluating generativ...
[ "Ahuja, Kabir", "Diddee, Harshita", "Hada, Rishav", "Ochieng, Millicent", "Ramesh, Krithika", "Jain, Prachi", "Nambi, Akshay", "Ganu, Tanuja", "Segal, Sameer", "Ahmed, Mohamed", "Bali, Kalika", "Sitaram, Sunayana" ]
MEGA: Multilingual Evaluation of Generative AI
emnlp-main.258
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.259.bib
https://aclanthology.org/2023.emnlp-main.259/
@inproceedings{yuan-etal-2023-support, title = "Support or Refute: Analyzing the Stance of Evidence to Detect Out-of-Context Mis- and Disinformation", author = "Yuan, Xin and Guo, Jie and Qiu, Weidong and Huang, Zheng and Li, Shujun", editor = "Bouamor, Houda and Pino, Jua...
Mis- and disinformation online have become a major societal problem and a significant source of online harms of different kinds. One common form of mis- and disinformation is out-of-context (OOC) information, where different pieces of information are falsely associated, e.g., a real image combined with a false textual caption ...
[ "Yuan, Xin", "Guo, Jie", "Qiu, Weidong", "Huang, Zheng", "Li, Shujun" ]
Support or Refute: Analyzing the Stance of Evidence to Detect Out-of-Context Mis- and Disinformation
emnlp-main.259
2311.01766
[ "https://github.com/yx3266/SEN" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.260.bib
https://aclanthology.org/2023.emnlp-main.260/
@inproceedings{li-etal-2023-video, title = "Video-Helpful Multimodal Machine Translation", author = "Li, Yihang and Shimizu, Shuichiro and Chu, Chenhui and Kurohashi, Sadao and Li, Wei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Pro...
Existing multimodal machine translation (MMT) datasets consist of images and video captions or instructional video subtitles, which rarely contain linguistic ambiguity, making visual information ineffective in generating appropriate translations. Recent work has constructed an ambiguous subtitles dataset to alleviate t...
[ "Li, Yihang", "Shimizu, Shuichiro", "Chu, Chenhui", "Kurohashi, Sadao", "Li, Wei" ]
Video-Helpful Multimodal Machine Translation
emnlp-main.260
null
[ "https://github.com/ku-nlp/video-helpful-mmt" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.261.bib
https://aclanthology.org/2023.emnlp-main.261/
@inproceedings{ko-etal-2023-large, title = "Large Language Models are Temporal and Causal Reasoners for Video Question Answering", author = "Ko, Dohwan and Lee, Ji and Kang, Woo-Young and Roh, Byungseok and Kim, Hyunwoo", editor = "Bouamor, Houda and Pino, Juan and ...
Large Language Models (LLMs) have shown remarkable performance on a wide range of natural language understanding and generation tasks. We observe that the LLMs provide effective priors in exploiting "linguistic shortcuts" for temporal and causal reasoning in Video Question Answering (VideoQA). However, such p...
[ "Ko, Dohwan", "Lee, Ji", "Kang, Woo-Young", "Roh, Byungseok", "Kim, Hyunwoo" ]
Large Language Models are Temporal and Causal Reasoners for Video Question Answering
emnlp-main.261
null
[ "https://github.com/mlvlab/Flipped-VQA" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.262.bib
https://aclanthology.org/2023.emnlp-main.262/
@inproceedings{sagirova-burtsev-2023-uncertainty, title = "Uncertainty Guided Global Memory Improves Multi-Hop Question Answering", author = "Sagirova, Alsu and Burtsev, Mikhail", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Confere...
Transformers have become the gold standard for many natural language processing tasks and, in particular, for multi-hop question answering (MHQA). This task includes processing a long document and reasoning over multiple parts of it. The landscape of MHQA approaches can be classified into two primary categories. Th...
[ "Sagirova, Alsu", "Burtsev, Mikhail" ]
Uncertainty Guided Global Memory Improves Multi-Hop Question Answering
emnlp-main.262
2311.18151
[ "https://github.com/aloriosa/gemformer" ]
https://huggingface.co/papers/2311.18151
2
0
0
2
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.263.bib
https://aclanthology.org/2023.emnlp-main.263/
@inproceedings{liang-etal-2023-prompting, title = "Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation", author = "Liang, Yuanyuan and Wang, Jianing and Zhu, Hanlun and Wang, Lei and Qian, Weining and Lan, Yunshi", editor =...
The task of Question Generation over Knowledge Bases (KBQG) aims to convert a logical form into a natural language question. Because of the expensive cost of large-scale question annotation, methods for KBQG under low-resource scenarios urgently need to be developed. However, current methods heavily rely on annotat...
[ "Liang, Yuanyuan", "Wang, Jianing", "Zhu, Hanlun", "Wang, Lei", "Qian, Weining", "Lan, Yunshi" ]
Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation
emnlp-main.263
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.264.bib
https://aclanthology.org/2023.emnlp-main.264/
@inproceedings{zhang-etal-2023-trojansql, title = "{T}rojan{SQL}: {SQL} Injection against Natural Language Interface to Database", author = "Zhang, Jinchuan and Zhou, Yan and Hui, Binyuan and Liu, Yaxin and Li, Ziming and Hu, Songlin", editor = "Bouamor, Houda and P...
The technology of text-to-SQL has significantly enhanced the efficiency of accessing and manipulating databases. However, limited research has been conducted to study its vulnerabilities emerging from malicious user interaction. By proposing TrojanSQL, a backdoor-based SQL injection framework for text-to-SQL systems, w...
[ "Zhang, Jinchuan", "Zhou, Yan", "Hui, Binyuan", "Liu, Yaxin", "Li, Ziming", "Hu, Songlin" ]
TrojanSQL: SQL Injection against Natural Language Interface to Database
emnlp-main.264
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.265.bib
https://aclanthology.org/2023.emnlp-main.265/
@inproceedings{kassem-etal-2023-preserving, title = "Preserving Privacy Through Dememorization: An Unlearning Technique For Mitigating Memorization Risks In Language Models", author = "Kassem, Aly and Mahmoud, Omar and Saad, Sherif", editor = "Bouamor, Houda and Pino, Juan and Ba...
Large language models (LLMs) are trained on vast amounts of data, including sensitive information that poses a risk to personal privacy if exposed. LLMs have shown the ability to memorize and reproduce portions of their training data when prompted by adversaries. Prior research has focused on addressing this memorizati...
[ "Kassem, Aly", "Mahmoud, Omar", "Saad, Sherif" ]
Preserving Privacy Through Dememorization: An Unlearning Technique For Mitigating Memorization Risks In Language Models
emnlp-main.265
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.266.bib
https://aclanthology.org/2023.emnlp-main.266/
@inproceedings{chen-etal-2023-mingofficial, title = "{M}ing{O}fficial: A Ming Official Career Dataset and a Historical Context-Aware Representation Learning Framework", author = "Chen, You-Jun and Hsieh, Hsin-Yi and Lin, Yu and Tian, Yingtao and Chan, Bert and Liu, Yu-Sin and...
In Chinese studies, understanding the nuanced traits of historical figures, often not explicitly evident in biographical data, has been a key interest. However, identifying these traits can be challenging due to the need for domain expertise, specialist knowledge, and context-specific insights, making the process time-...
[ "Chen, You-Jun", "Hsieh, Hsin-Yi", "Lin, Yu", "Tian, Yingtao", "Chan, Bert", "Liu, Yu-Sin", "Lin, Yi-Hsuan", "Tsai, Richard" ]
MingOfficial: A Ming Official Career Dataset and a Historical Context-Aware Representation Learning Framework
emnlp-main.266
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.267.bib
https://aclanthology.org/2023.emnlp-main.267/
@inproceedings{joo-etal-2023-dpp, title = "{DPP}-{TTS}: Diversifying prosodic features of speech via determinantal point processes", author = "Joo, Seongho and Koh, Hyukhun and Jung, Kyomin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings ...
With the rapid advancement in deep generative models, recent neural Text-To-Speech (TTS) models have succeeded in synthesizing human-like speech. There have been some efforts to generate speech with varied prosody beyond monotonous prosody patterns. However, previous works have several limitations. First, typical TTS m...
[ "Joo, Seongho", "Koh, Hyukhun", "Jung, Kyomin" ]
DPP-TTS: Diversifying prosodic features of speech via determinantal point processes
emnlp-main.267
2310.14663
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
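For readers unfamiliar with determinantal point processes, the following generic greedy MAP selection over a similarity kernel shows how a DPP favors diverse subsets (here, candidate prosody patterns); this is textbook DPP machinery, not the paper's sampler:

import numpy as np

def greedy_dpp(kernel: np.ndarray, k: int) -> list[int]:
    # kernel: (n, n) positive semi-definite similarity matrix over candidates.
    selected: list[int] = []
    for _ in range(min(k, len(kernel))):
        best, best_gain = -1, -np.inf
        for i in range(len(kernel)):
            if i in selected:
                continue
            idx = selected + [i]
            # Log-determinant of the selected submatrix: large when items are
            # individually strong AND mutually dissimilar, which drives diversity.
            gain = np.linalg.slogdet(kernel[np.ix_(idx, idx)])[1]
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected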
https://aclanthology.org/2023.emnlp-main.268.bib
https://aclanthology.org/2023.emnlp-main.268/
@inproceedings{hu-etal-2023-meta, title = "Meta-Learning Online Adaptation of Language Models", author = "Hu, Nathan and Mitchell, Eric and Manning, Christopher and Finn, Chelsea", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of t...
Large language models encode impressively broad world knowledge in their parameters. However, the knowledge in static language models falls out of date, limiting the model's effective "shelf life." While online fine-tuning can reduce this degradation, we find that naively fine-tuning on a stream of documents le...
[ "Hu, Nathan", "Mitchell, Eric", "Manning, Christopher", "Finn, Chelsea" ]
Meta-Learning Online Adaptation of Language Models
emnlp-main.268
null
[ "https://github.com/nathanhu0/CaMeLS" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.269.bib
https://aclanthology.org/2023.emnlp-main.269/
@inproceedings{leong-etal-2023-self, title = "Self-Detoxifying Language Models via Toxification Reversal", author = "Leong, Chak Tou and Cheng, Yi and Wang, Jiashuo and Wang, Jian and Li, Wenjie", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", bookti...
Language model detoxification aims to minimize the risk of generating offensive or harmful content in pretrained language models (PLMs) for safer deployment. Existing methods can be roughly categorized as finetuning-based and decoding-based. However, the former is often resource-intensive, while the latter relies on ad...
[ "Leong, Chak Tou", "Cheng, Yi", "Wang, Jiashuo", "Wang, Jian", "Li, Wenjie" ]
Self-Detoxifying Language Models via Toxification Reversal
emnlp-main.269
2310.09573
[ "https://github.com/cooperleong00/toxificationreversal" ]
https://huggingface.co/papers/2310.09573
0
0
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.270.bib
https://aclanthology.org/2023.emnlp-main.270/
@inproceedings{faltings-etal-2023-interactive, title = "Interactive Text Generation", author = "Faltings, Felix and Galley, Michel and Brantley, Kiant{\'e} and Peng, Baolin and Cai, Weixin and Zhang, Yizhe and Gao, Jianfeng and Dolan, Bill", editor = "Bouamor...
Users interact with text, image, code, or other editors on a daily basis. However, machine learning models are rarely trained in the settings that reflect the interactivity between users and their editor. This is understandable as training AI models with real users is not only slow and costly, but what these models lea...
[ "Faltings, Felix", "Galley, Michel", "Brantley, Kiant{\\'e}", "Peng, Baolin", "Cai, Weixin", "Zhang, Yizhe", "Gao, Jianfeng", "Dolan, Bill" ]
Interactive Text Generation
emnlp-main.270
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.271.bib
https://aclanthology.org/2023.emnlp-main.271/
@inproceedings{sultan-2023-knowledge, title = "Knowledge Distillation $\approx$ Label Smoothing: Fact or Fallacy?", author = "Sultan, Md", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Pro...
Originally proposed as a method for knowledge transfer from one model to another, some recent studies have suggested that knowledge distillation (KD) is in fact a form of regularization. Perhaps the strongest argument of all for this new perspective comes from its apparent similarities with label smoothing (LS). Here w...
[ "Sultan, Md" ]
Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy?
emnlp-main.271
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
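The apparent similarity the abstract refers to can be written down directly. The following identity (standard in the literature, not taken from the paper itself) shows that label smoothing is knowledge distillation with a uniform teacher:

\begin{align*}
\mathcal{L}_{\mathrm{LS}} &= -\sum_{c=1}^{C} \tilde{y}_c \log p_\theta(c),
\qquad \tilde{y}_c = (1-\varepsilon)\, y_c + \frac{\varepsilon}{C}, \\
\mathcal{L}_{\mathrm{KD}} &= (1-\lambda)\, \mathcal{L}_{\mathrm{CE}}(y, p_\theta)
  + \lambda\, \mathrm{KL}\bigl(p_T \,\big\|\, p_\theta\bigr).
\end{align*}

Setting the teacher $p_T$ to the uniform distribution $1/C$ and $\lambda = \varepsilon$ makes the two losses equal up to an additive constant, which is the regularization view of KD that the paper interrogates.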
https://aclanthology.org/2023.emnlp-main.272.bib
https://aclanthology.org/2023.emnlp-main.272/
@inproceedings{beinborn-pinter-2023-analyzing, title = "Analyzing Cognitive Plausibility of Subword Tokenization", author = "Beinborn, Lisa and Pinter, Yuval", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Met...
Subword tokenization has become the de facto standard for tokenization, although comparative evaluations of tokenizer quality across languages are scarce. Existing evaluation studies focus on the effect of a tokenization algorithm on performance in downstream tasks, or on engineering criteria such as the compression rat...
[ "Beinborn, Lisa", "Pinter, Yuval" ]
Analyzing Cognitive Plausibility of Subword Tokenization
emnlp-main.272
2310.13348
[ "" ]
https://huggingface.co/papers/2310.13348
2
0
0
2
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.273.bib
https://aclanthology.org/2023.emnlp-main.273/
@inproceedings{ma-du-2023-poe, title = "{POE}: Process of Elimination for Multiple Choice Reasoning", author = "Ma, Chenkai and Du, Xinya", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Lang...
Language models (LMs) are capable of conducting in-context learning for multiple choice reasoning tasks, but the options in these tasks are treated equally. As humans often first eliminate wrong options before picking the final correct answer, we argue a similar two-step strategy can make LMs better at these tasks. To ...
[ "Ma, Chenkai", "Du, Xinya" ]
POE: Process of Elimination for Multiple Choice Reasoning
emnlp-main.273
2310.15575
[ "https://github.com/kasmasvan/poe" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
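A minimal sketch of the two-step eliminate-then-predict strategy the abstract describes, with a hypothetical `option_score` standing in for any LM plausibility score (e.g., option log-probability given the question); the masking prompt is an assumption, not the paper's template:

from typing import Callable, Sequence

def process_of_elimination(question: str,
                           options: Sequence[str],
                           option_score: Callable[[str, str], float],
                           keep_ratio: float = 0.5) -> str:
    # Step 1: score every option and eliminate the least plausible ones.
    ranked = sorted(options, key=lambda o: option_score(question, o), reverse=True)
    survivors = ranked[: max(1, int(len(ranked) * keep_ratio))]
    eliminated = [o for o in options if o not in survivors]
    # Step 2: tell the model which options were ruled out, then pick among the rest.
    masked_q = f"{question}\n(Ruled out: {', '.join(eliminated)})"
    return max(survivors, key=lambda o: option_score(masked_q, o))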
https://aclanthology.org/2023.emnlp-main.274.bib
https://aclanthology.org/2023.emnlp-main.274/
@inproceedings{singh-etal-2023-neustip, title = "{N}eu{STIP}: A Neuro-Symbolic Model for Link and Time Prediction in Temporal Knowledge Graphs", author = "Singh, Ishaan and Kaur, Navdeep and Gaur, Garima and {Mausam}", editor = "Bouamor, Houda and Pino, Juan and Bali, Kali...
Neuro-symbolic (NS) models for knowledge graph completion (KGC) combine the benefits of symbolic models (interpretable inference) with those of distributed representations (parameter sharing, high accuracy). While several NS models exist for KGs with static facts, there is limited work on temporal KGC (TKGC) for KGs wh...
[ "Singh, Ishaan", "Kaur, Navdeep", "Gaur, Garima", "{Mausam}" ]
NeuSTIP: A Neuro-Symbolic Model for Link and Time Prediction in Temporal Knowledge Graphs
emnlp-main.274
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.275.bib
https://aclanthology.org/2023.emnlp-main.275/
@inproceedings{singh-etal-2023-standardizing, title = "Standardizing Distress Analysis: Emotion-Driven Distress Identification and Cause Extraction ({DICE}) in Multimodal Online Posts", author = "Singh, Gopendra and Ghosh, Soumitra and Verma, Atul and Painkra, Chetna and Ekbal, Asif"...
Due to its growing impact on public opinion, hate speech on social media has garnered increased attention. While automated methods for identifying hate speech have been presented in the past, they have mostly been limited to analyzing textual content. The interpretability of such models has received very little attenti...
[ "Singh, Gopendra", "Ghosh, Soumitra", "Verma, Atul", "Painkra, Chetna", "Ekbal, Asif" ]
Standardizing Distress Analysis: Emotion-Driven Distress Identification and Cause Extraction (DICE) in Multimodal Online Posts
emnlp-main.275
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.276.bib
https://aclanthology.org/2023.emnlp-main.276/
@inproceedings{yang-etal-2023-distribution, title = "Out-of-Distribution Generalization in Natural Language Processing: Past, Present, and Future", author = "Yang, Linyi and Song, Yaoxian and Ren, Xuan and Lyu, Chenyang and Wang, Yidong and Zhuo, Jingming and Liu, Lingq...
Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution. This poses important questions about the robustness of NLP models and their high accuracy, which may ...
[ "Yang, Linyi", "Song, Yaoxian", "Ren, Xuan", "Lyu, Chenyang", "Wang, Yidong", "Zhuo, Jingming", "Liu, Lingqiao", "Wang, Jindong", "Foster, Jennifer", "Zhang, Yue" ]
Out-of-Distribution Generalization in Natural Language Processing: Past, Present, and Future
emnlp-main.276
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.277.bib
https://aclanthology.org/2023.emnlp-main.277/
@inproceedings{zheng-saparov-2023-noisy, title = "Noisy Exemplars Make Large Language Models More Robust: A Domain-Agnostic Behavioral Analysis", author = "Zheng, Hongyi and Saparov, Abulhair", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of th...
Recent advances in prompt engineering enable large language models (LLMs) to solve multi-hop logical reasoning problems with impressive accuracy. However, there is little existing work investigating the robustness of LLMs with few-shot prompting techniques. Therefore, we introduce a systematic approach to test the robu...
[ "Zheng, Hongyi", "Saparov, Abulhair" ]
Noisy Exemplars Make Large Language Models More Robust: A Domain-Agnostic Behavioral Analysis
emnlp-main.277
2311.00258
[ "https://github.com/hiroki39/noisy-exemplars-make-large-language-models-more-robust" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
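A toy perturbation helper in the spirit of the robustness probes described above: inject token-level noise into few-shot exemplars before prompting, then compare accuracy against clean exemplars. The noise type and rate are illustrative choices, not the paper's exact setup:

import random

def add_noise(exemplar: str, drop_prob: float = 0.1, seed: int = 0) -> str:
    # Randomly drop tokens from an exemplar; return it unchanged if everything drops.
    rng = random.Random(seed)
    tokens = exemplar.split()
    kept = [t for t in tokens if rng.random() > drop_prob]
    return " ".join(kept) if kept else exemplar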
https://aclanthology.org/2023.emnlp-main.278.bib
https://aclanthology.org/2023.emnlp-main.278/
@inproceedings{lee-etal-2023-large, title = "Can Large Language Models Capture Dissenting Human Voices?", author = "Lee, Noah and An, Na Min and Thorne, James", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empir...
Large language models (LLMs) have shown impressive achievements in solving a broad range of tasks. Augmented by instruction fine-tuning, LLMs have also been shown to generalize in zero-shot settings as well. However, whether LLMs closely align with the human disagreement distribution has not been well-studied, especial...
[ "Lee, Noah", "An, Na Min", "Thorne, James" ]
Can Large Language Models Capture Dissenting Human Voices?
emnlp-main.278
2305.13788
[ "https://github.com/xfactlab/emnlp2023-llm-disagreement" ]
https://huggingface.co/papers/2305.13788
2
0
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.279.bib
https://aclanthology.org/2023.emnlp-main.279/
@inproceedings{puduppully-etal-2023-decomt, title = "{D}eco{MT}: Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models", author = "Puduppully, Ratish and Kunchukuttan, Anoop and Dabre, Raj and Aw, Ai Ti and Chen, Nancy", editor = "Boua...
This study investigates machine translation between related languages, i.e., languages within the same family that share linguistic characteristics such as word order and lexical similarity. Machine translation through few-shot prompting leverages a small set of translation pair examples to generate translations for tes...
[ "Puduppully, Ratish", "Kunchukuttan, Anoop", "Dabre, Raj", "Aw, Ai Ti", "Chen, Nancy" ]
DecoMT: Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models
emnlp-main.279
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.280.bib
https://aclanthology.org/2023.emnlp-main.280/
@inproceedings{zhao-etal-2023-prototype, title = "Prototype-based {H}yper{A}dapter for Sample-Efficient Multi-task Tuning", author = "Zhao, Hao and Fu, Jie and He, Zhaofeng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Confe...
Parameter-efficient fine-tuning (PEFT) has shown its effectiveness in adapting the pre-trained language models to downstream tasks while only updating a small number of parameters. Despite the success, most existing methods independently adapt to each task without considering knowledge transfer between tasks and are li...
[ "Zhao, Hao", "Fu, Jie", "He, Zhaofeng" ]
Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning
emnlp-main.280
2310.11670
[ "https://github.com/bumble666/pha" ]
https://huggingface.co/papers/2310.11670
0
1
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.281.bib
https://aclanthology.org/2023.emnlp-main.281/
@inproceedings{ma-etal-2023-towards, title = "Towards Building More Robust {NER} datasets: An Empirical Study on {NER} Dataset Bias from a Dataset Difficulty View", author = "Ma, Ruotian and Wang, Xiaolei and Zhou, Xin and Zhang, Qi and Huang, Xuanjing", editor = "Bouamor, Houda ...
Recently, many studies have illustrated the robustness problem of Named Entity Recognition (NER) systems: the NER models often rely on superficial entity patterns for predictions, without considering evidence from the context. Consequently, even state-of-the-art NER models generalize poorly to out-of-domain scenarios w...
[ "Ma, Ruotian", "Wang, Xiaolei", "Zhou, Xin", "Zhang, Qi", "Huang, Xuanjing" ]
Towards Building More Robust NER datasets: An Empirical Study on NER Dataset Bias from a Dataset Difficulty View
emnlp-main.281
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.282.bib
https://aclanthology.org/2023.emnlp-main.282/
@inproceedings{wang-etal-2023-gradsim, title = "{G}rad{S}im: Gradient-Based Language Grouping for Effective Multilingual Training", author = {Wang, Mingyang and Adel, Heike and Lange, Lukas and Str{\"o}tgen, Jannik and Schuetze, Hinrich}, editor = "Bouamor, Houda and Pino,...
Most languages of the world pose low-resource challenges to natural language processing models. With multilingual training, knowledge can be shared among languages. However, not all languages positively influence each other and it is an open research question how to select the most suitable set of languages for multili...
[ "Wang, Mingyang", "Adel, Heike", "Lange, Lukas", "Str{\\\"o}tgen, Jannik", "Schuetze, Hinrich" ]
GradSim: Gradient-Based Language Grouping for Effective Multilingual Training
emnlp-main.282
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
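A rough sketch of gradient-similarity-based language grouping, assuming one flattened gradient vector per language (e.g., from back-propagating a batch of that language's data through a shared multilingual model); the hierarchical-clustering step is an assumption, not necessarily the paper's procedure:

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def group_languages(grads: dict, n_groups: int) -> dict:
    langs = list(grads)
    g = np.stack([grads[l] / (np.linalg.norm(grads[l]) + 1e-12) for l in langs])
    sim = g @ g.T                                     # pairwise cosine similarity
    # Condensed cosine-distance vector for SciPy's hierarchical clustering.
    dist = np.clip(1.0 - sim[np.triu_indices(len(langs), k=1)], 0.0, None)
    labels = fcluster(linkage(dist, method="average"), n_groups, criterion="maxclust")
    return dict(zip(langs, labels.tolist()))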
https://aclanthology.org/2023.emnlp-main.283.bib
https://aclanthology.org/2023.emnlp-main.283/
@inproceedings{yamagiwa-etal-2023-discovering, title = "Discovering Universal Geometry in Embeddings with {ICA}", author = "Yamagiwa, Hiroaki and Oyama, Momose and Shimodaira, Hidetoshi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of t...
This study utilizes Independent Component Analysis (ICA) to unveil a consistent semantic structure within embeddings of words or images. Our approach extracts independent semantic components from the embeddings of a pre-trained model by leveraging anisotropic information that remains after the whitening process in Prin...
[ "Yamagiwa, Hiroaki", "Oyama, Momose", "Shimodaira, Hidetoshi" ]
Discovering Universal Geometry in Embeddings with ICA
emnlp-main.283
2305.13175
[ "https://github.com/shimo-lab/universal-geometry-with-ica" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
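The whitening-then-ICA pipeline the abstract describes can be reproduced in a few lines with scikit-learn; the random matrix below is a placeholder for real pre-trained embeddings:

import numpy as np
from sklearn.decomposition import FastICA

emb = np.random.randn(10000, 300)            # placeholder for word/image embeddings
ica = FastICA(n_components=100, whiten="unit-variance", random_state=0)
components = ica.fit_transform(emb)          # each column ~ one independent semantic axis
# Inspect an axis by listing the items that load most heavily on it.
top_items = np.argsort(-np.abs(components[:, 0]))[:10]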
https://aclanthology.org/2023.emnlp-main.284.bib
https://aclanthology.org/2023.emnlp-main.284/
@inproceedings{brunila-etal-2023-toward, title = "Toward a Critical Toponymy Framework for Named Entity Recognition: A Case Study of Airbnb in {N}ew {Y}ork City", author = "Brunila, Mikael and LaViolette, Jack and CH-Wang, Sky and Verma, Priyanka and F{\'e}r{\'e}, Clara and Mc...
Critical toponymy examines the dynamics of power, capital, and resistance through place names and the sites to which they refer. Studies here have traditionally focused on the semantic content of toponyms and the top-down institutional processes that produce them. However, they have generally ignored the ways in which ...
[ "Brunila, Mikael", "LaViolette, Jack", "CH-Wang, Sky", "Verma, Priyanka", "F{\\'e}r{\\'e}, Clara", "McKenzie, Grant" ]
Toward a Critical Toponymy Framework for Named Entity Recognition: A Case Study of Airbnb in New York City
emnlp-main.284
2310.15302
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.285.bib
https://aclanthology.org/2023.emnlp-main.285/
@inproceedings{qin-etal-2023-well, title = "Well Begun is Half Done: Generator-agnostic Knowledge Pre-Selection for Knowledge-Grounded Dialogue", author = "Qin, Lang and Zhang, Yao and Liang, Hongru and Wang, Jun and Yang, Zhenglu", editor = "Bouamor, Houda and Pino, Juan ...
Accurate knowledge selection is critical in knowledge-grounded dialogue systems. To take a closer look at it, we offer a novel perspective for organizing the existing literature, i.e., knowledge selection coupled with, after, and before generation. We focus on the third under-explored category of study, which can not only sel...
[ "Qin, Lang", "Zhang, Yao", "Liang, Hongru", "Wang, Jun", "Yang, Zhenglu" ]
Well Begun is Half Done: Generator-agnostic Knowledge Pre-Selection for Knowledge-Grounded Dialogue
emnlp-main.285
2310.07659
[ "https://github.com/qinlang14/gate" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.286.bib
https://aclanthology.org/2023.emnlp-main.286/
@inproceedings{zhang-etal-2023-merging, title = "Merging Generated and Retrieved Knowledge for Open-Domain {QA}", author = "Zhang, Yunxiang and Khalifa, Muhammad and Logeswaran, Lajanugen and Lee, Moontae and Lee, Honglak and Wang, Lu", editor = "Bouamor, Houda and ...
Open-domain question answering (QA) systems are often built with retrieval modules. However, retrieving passages from a given source is known to suffer from insufficient knowledge coverage. Alternatively, prompting large language models (LLMs) to generate contextual passages based on their parametric knowledge has been...
[ "Zhang, Yunxiang", "Khalifa, Muhammad", "Logeswaran, Lajanugen", "Lee, Moontae", "Lee, Honglak", "Wang, Lu" ]
Merging Generated and Retrieved Knowledge for Open-Domain QA
emnlp-main.286
2310.14393
[ "https://github.com/yunx-z/combo" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.287.bib
https://aclanthology.org/2023.emnlp-main.287/
@inproceedings{kannen-etal-2023-best, title = "Best of Both Worlds: Towards Improving Temporal Knowledge Base Question Answering via Targeted Fact Extraction", author = "Kannen, Nithish and Sharma, Udit and Neelam, Sumit and Khandelwal, Dinesh and Ikbal, Shajith and Karanam, H...
Temporal question answering (QA) is a special category of complex question answering tasks that requires reasoning over facts asserting time intervals of events. Previous works have predominantly relied on Knowledge Base Question Answering (KBQA) for temporal QA. One of the major challenges faced by these systems is the...
[ "Kannen, Nithish", "Sharma, Udit", "Neelam, Sumit", "Kh", "elwal, Dinesh", "Ikbal, Shajith", "Karanam, Hima", "Subramaniam, L" ]
Best of Both Worlds: Towards Improving Temporal Knowledge Base Question Answering via Targeted Fact Extraction
emnlp-main.287
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.288.bib
https://aclanthology.org/2023.emnlp-main.288/
@inproceedings{balepur-etal-2023-text, title = "Text Fact Transfer", author = "Balepur, Nishant and Huang, Jie and Chang, Kevin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Languag...
Text style transfer is a prominent task that aims to control the style of text without inherently changing its factual content. To cover more text modification applications, such as adapting past news for current events and repurposing educational materials, we propose the task of text fact transfer, which seeks to tra...
[ "Balepur, Nishant", "Huang, Jie", "Chang, Kevin" ]
Text Fact Transfer
emnlp-main.288
2310.14486
[ "https://github.com/nbalepur/text-fact-transfer" ]
https://huggingface.co/papers/2310.14486
0
0
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.289.bib
https://aclanthology.org/2023.emnlp-main.289/
@inproceedings{chen-etal-2023-cheaper, title = "A Cheaper and Better Diffusion Language Model with Soft-Masked Noise", author = "Chen, Jiaao and Zhang, Aston and Li, Mu and Smola, Alex and Yang, Diyi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", b...
Diffusion models that are based on iterative denoising have recently been proposed and leveraged in various generation tasks like image generation. However, as methods inherently built for continuous data, existing diffusion models still have some limitations in modeling discrete data, e.g., language. For example, the g...
[ "Chen, Jiaao", "Zhang, Aston", "Li, Mu", "Smola, Alex", "Yang, Diyi" ]
A Cheaper and Better Diffusion Language Model with Soft-Masked Noise
emnlp-main.289
2304.04746
[ "https://github.com/amazon-science/masked-diffusion-lm" ]
https://huggingface.co/papers/2304.04746
0
0
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.290.bib
https://aclanthology.org/2023.emnlp-main.290/
@inproceedings{abercrombie-etal-2023-mirages, title = "Mirages. On Anthropomorphism in Dialogue Systems", author = "Abercrombie, Gavin and Cercas Curry, Amanda and Dinkar, Tanvi and Rieser, Verena and Talat, Zeerak", editor = "Bouamor, Houda and Pino, Juan and Bali,...
Automated dialogue or conversational systems are anthropomorphised by developers and personified by users. While a degree of anthropomorphism is inevitable, conscious and unconscious design choices can guide users to personify them to varying degrees. Encouraging users to relate to automated systems as if they were hum...
[ "Abercrombie, Gavin", "Cercas Curry, Am", "a", "Dinkar, Tanvi", "Rieser, Verena", "Talat, Zeerak" ]
Mirages. On Anthropomorphism in Dialogue Systems
emnlp-main.290
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.291.bib
https://aclanthology.org/2023.emnlp-main.291/
@inproceedings{liu-etal-2023-cognitive, title = "Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness?", author = "Liu, Kevin and Casper, Stephen and Hadfield-Menell, Dylan and Andreas, Jacob", editor = "Bouamor, Houda and Pin...
Neural language models (LMs) can be used to evaluate the truth of factual statements in two ways: they can be either queried for statement probabilities, or probed for internal representations of truthfulness. Past work has found that these two procedures sometimes disagree, and that probes tend to be more accurate tha...
[ "Liu, Kevin", "Casper, Stephen", "Hadfield-Menell, Dylan", "Andreas, Jacob" ]
Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness?
emnlp-main.291
2312.03729
[ "https://github.com/lingo-mit/lm-truthfulness" ]
https://huggingface.co/papers/2312.03729
1
0
0
4
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.292.bib
https://aclanthology.org/2023.emnlp-main.292/
@inproceedings{koo-etal-2023-kebap, title = "{KEBAP}: {K}orean Error Explainable Benchmark Dataset for {ASR} and Post-processing", author = "Koo, Seonmin and Park, Chanjun and Kim, Jinsung and Seo, Jaehyung and Eo, Sugyeong and Moon, Hyeonseok and Lim, Heuiseok", ed...
Automatic Speech Recognition (ASR) systems are instrumental across various applications, with their performance being critically tied to user satisfaction. Conventional evaluation metrics for ASR systems produce a singular aggregate score, which is insufficient for understanding specific system vulnerabilities. Therefo...
[ "Koo, Seonmin", "Park, Chanjun", "Kim, Jinsung", "Seo, Jaehyung", "Eo, Sugyeong", "Moon, Hyeonseok", "Lim, Heuiseok" ]
KEBAP: Korean Error Explainable Benchmark Dataset for ASR and Post-processing
emnlp-main.292
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.293.bib
https://aclanthology.org/2023.emnlp-main.293/
@inproceedings{zhao-etal-2023-adaptive, title = "Adaptive Policy with Wait-k Model for Simultaneous Translation", author = "Zhao, Libo and Fan, Kai and Luo, Wei and Jing, Wu and Wang, Shushu and Zeng, Ziqian and Huang, Zhongqiang", editor = "Bouamor, Houda and ...
Simultaneous machine translation (SiMT) requires a robust read/write policy in conjunction with a high-quality translation model. Traditional methods rely on either a fixed wait-k policy coupled with a standalone wait-k translation model, or an adaptive policy jointly trained with the translation model. In this study, ...
[ "Zhao, Libo", "Fan, Kai", "Luo, Wei", "Jing, Wu", "Wang, Shushu", "Zeng, Ziqian", "Huang, Zhongqiang" ]
Adaptive Policy with Wait-k Model for Simultaneous Translation
emnlp-main.293
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
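For contrast with the adaptive policy studied above, here is the fixed wait-k schedule in pseudocode-level Python: read k source tokens first, then alternate one target write per additional source read. `translate_prefix` is a hypothetical incremental decoder call:

def wait_k_schedule(source_tokens, k, translate_prefix, max_len=200):
    target = []
    # While reading: after the first k reads, emit one target token per new source token.
    for t in range(len(source_tokens)):
        if t + 1 >= k:
            target.append(translate_prefix(source_tokens[: t + 1], target))
    # Once the full source is available, keep writing until end-of-sentence.
    while (not target or target[-1] != "<eos>") and len(target) < max_len:
        target.append(translate_prefix(source_tokens, target))
    return target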
https://aclanthology.org/2023.emnlp-main.294.bib
https://aclanthology.org/2023.emnlp-main.294/
@inproceedings{chen-etal-2023-cross, title = "Cross-Document Event Coreference Resolution on Discourse Structure", author = "Chen, Xinyu and Xu, Sheng and Li, Peifeng and Zhu, Qiaoming", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceeding...
Cross-document event coreference resolution (CD-ECR) is a task of clustering event mentions across multiple documents that refer to the same real-world events. Previous studies usually model the CD-ECR task as a pairwise similarity comparison problem by using different event mention features, and consider the highly si...
[ "Chen, Xinyu", "Xu, Sheng", "Li, Peifeng", "Zhu, Qiaoming" ]
Cross-Document Event Coreference Resolution on Discourse Structure
emnlp-main.294
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.295.bib
https://aclanthology.org/2023.emnlp-main.295/
@inproceedings{jang-etal-2023-post, title = "Post-hoc Utterance Refining Method by Entity Mining for Faithful Knowledge Grounded Conversations", author = "Jang, Yoonna and Son, Suhyune and Lee, Jeongwoo and Son, Junyoung and Hur, Yuna and Lim, Jungwoo and Moon, Hyeonseo...
Despite the striking advances in recent language generation performance, model-generated responses have suffered from the chronic problem of hallucinations that are either untrue or unfaithful to a given source. Especially in the task of knowledge grounded conversation, the models are required to generate informative r...
[ "Jang, Yoonna", "Son, Suhyune", "Lee, Jeongwoo", "Son, Junyoung", "Hur, Yuna", "Lim, Jungwoo", "Moon, Hyeonseok", "Yang, Kisu", "Lim, Heuiseok" ]
Post-hoc Utterance Refining Method by Entity Mining for Faithful Knowledge Grounded Conversations
emnlp-main.295
2406.10809
[ "https://github.com/yoonnajang/rem" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.296.bib
https://aclanthology.org/2023.emnlp-main.296/
@inproceedings{zheng-etal-2023-edit, title = "Can We Edit Factual Knowledge by In-Context Learning?", author = "Zheng, Ce and Li, Lei and Dong, Qingxiu and Fan, Yuxuan and Wu, Zhiyong and Xu, Jingjing and Chang, Baobao", editor = "Bouamor, Houda and Pino, Jua...
Previous studies have shown that large language models (LLMs) like GPTs store massive factual knowledge in their parameters. However, the stored knowledge could be false or outdated. Traditional knowledge editing methods refine LLMs via fine-tuning on texts containing specific knowledge. However, with the increasing sc...
[ "Zheng, Ce", "Li, Lei", "Dong, Qingxiu", "Fan, Yuxuan", "Wu, Zhiyong", "Xu, Jingjing", "Chang, Baobao" ]
Can We Edit Factual Knowledge by In-Context Learning?
emnlp-main.296
null
[ "https://github.com/zce1112zslx/ike" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
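A minimal sketch of editing knowledge purely in context, the alternative to fine-tuning that the abstract proposes: prepend the new fact, plus a few demonstrations of answering under edited facts, to the query. The prompt wording is an assumption, not the paper's exact template:

def build_edit_prompt(new_fact: str, demos: list, query: str) -> str:
    # demos: list of (edited fact, question, answer) triples showing the model
    # how to answer consistently with an in-context edit.
    lines = [f"New fact: {fact}\nQ: {q}\nA: {a}\n" for fact, q, a in demos]
    lines.append(f"New fact: {new_fact}\nQ: {query}\nA:")
    return "\n".join(lines)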
https://aclanthology.org/2023.emnlp-main.297.bib
https://aclanthology.org/2023.emnlp-main.297/
@inproceedings{liu-etal-2023-edis, title = "{EDIS}: Entity-Driven Image Search over Multimodal Web Content", author = "Liu, Siqi and Feng, Weixi and Fu, Tsu-Jui and Chen, Wenhu and Wang, William", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", bookti...
Making image retrieval methods practical for real-world search applications requires significant progress in dataset scales, entity comprehension, and multimodal information fusion. In this work, we introduce Entity-Driven Image Search (EDIS), a challenging dataset for cross-modal image search in the news domain. EDIS ...
[ "Liu, Siqi", "Feng, Weixi", "Fu, Tsu-Jui", "Chen, Wenhu", "Wang, William" ]
EDIS: Entity-Driven Image Search over Multimodal Web Content
emnlp-main.297
2305.13631
[ "https://github.com/emerisly/edis" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.298.bib
https://aclanthology.org/2023.emnlp-main.298/
@inproceedings{ainslie-etal-2023-gqa, title = "{GQA}: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints", author = "Ainslie, Joshua and Lee-Thorp, James and de Jong, Michiel and Zemlyanskiy, Yury and Lebron, Federico and Sanghai, Sumit", edito...
Multi-query attention (MQA), which only uses a single key-value head, drastically speeds up decoder inference. However, MQA can lead to quality degradation, and moreover it may not be desirable to train a separate model just for faster inference. We (1) propose a recipe for uptraining existing multi-head language model...
[ "Ainslie, Joshua", "Lee-Thorp, James", "de Jong, Michiel", "Zemlyanskiy, Yury", "Lebron, Federico", "Sanghai, Sumit" ]
GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
emnlp-main.298
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
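A toy NumPy version of the mechanism the abstract interpolates between multi-head attention (one KV head per query head) and multi-query attention (one KV head total): H query heads share G key/value heads, each KV head being repeated across its group. This is a generic illustration of grouped-query attention, not the paper's uptraining recipe:

import numpy as np

def gqa(q, k, v, n_groups):
    # q: (H, T, d); k, v: (G, T, d) with H divisible by G == n_groups.
    H, T, d = q.shape
    rep = H // n_groups
    k = np.repeat(k, rep, axis=0)                 # share each KV head within its group
    v = np.repeat(v, rep, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)     # softmax over source positions
    return weights @ v                            # (H, T, d)

With n_groups equal to H this reduces to standard multi-head attention, and with n_groups equal to 1 it reduces to multi-query attention, which is exactly the interpolation GQA exploits.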
https://aclanthology.org/2023.emnlp-main.299.bib
https://aclanthology.org/2023.emnlp-main.299/
@inproceedings{hou-etal-2023-towards, title = "Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models", author = "Hou, Yifan and Li, Jiaoda and Fei, Yu and Stolfo, Alessandro and Zhou, Wangchunshu and Zeng, Guangtao and Bosselut, An...
Recent work has shown that language models (LMs) have strong multi-step (i.e., procedural) reasoning capabilities. However, it is unclear whether LMs perform these tasks by cheating with answers memorized from the pretraining corpus, or via a multi-step reasoning mechanism. In this paper, we try to answer this question by...
[ "Hou, Yifan", "Li, Jiaoda", "Fei, Yu", "Stolfo, Aless", "ro", "Zhou, Wangchunshu", "Zeng, Guangtao", "Bosselut, Antoine", "Sachan, Mrinmaya" ]
Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models
emnlp-main.299
2310.14491
[ "https://github.com/yifan-h/mechanisticprobe" ]
https://huggingface.co/papers/2310.14491
2
0
0
8
[]
[ "yyyyifan/MechanisticProbe_ProofWriter_ARC" ]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.300.bib
https://aclanthology.org/2023.emnlp-main.300/
@inproceedings{zhang-etal-2023-biasx, title = "{B}ias{X}: {``}Thinking Slow{''} in Toxic Content Moderation with Explanations of Implied Social Biases", author = "Zhang, Yiming and Nanduri, Sravani and Jiang, Liwei and Wu, Tongshuang and Sap, Maarten", editor = "Bouamor, Houda a...
Toxicity annotators and content moderators often default to mental shortcuts when making decisions. This can lead to subtle toxicity being missed, and seemingly toxic but harmless content being over-detected. We introduce BiasX, a framework that enhances content moderation setups with free-text explanations of statemen...
[ "Zhang, Yiming", "N", "uri, Sravani", "Jiang, Liwei", "Wu, Tongshuang", "Sap, Maarten" ]
BiasX: “Thinking Slow” in Toxic Content Moderation with Explanations of Implied Social Biases
emnlp-main.300
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster