bibtex_url stringlengths 41 53 | acl_proceedings stringlengths 38 50 | bibtext stringlengths 528 3.02k | abstract stringlengths 17 2.35k | authors listlengths 1 44 | title stringlengths 18 190 | id stringlengths 7 19 | arxiv_id stringlengths 10 10 ⌀ | GitHub listlengths 1 1 | paper_page stringclasses 528 values | n_linked_authors int64 -1 15 | upvotes int64 -1 77 | num_comments int64 -1 10 | n_authors int64 -1 52 | Models listlengths 0 100 | Datasets listlengths 0 15 | Spaces listlengths 0 46 | paper_page_exists_pre_conf int64 0 1 | type stringclasses 2 values |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
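The header row above describes each column's type and value range. As a sketch of how one row could be represented in code, the following hypothetical Python dataclass mirrors the schema (field names follow the header; `PaperRecord` and the defaults are assumptions, not part of the dataset itself), populated with the first row of the table:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class PaperRecord:
    """One table row: an EMNLP 2023 Findings paper and its linked artifacts."""
    bibtex_url: str
    acl_proceedings: str
    bibtext: str                # full BibTeX entry (truncated in the dump)
    abstract: str
    authors: list[str]
    title: str
    id: str                     # e.g. "findings-emnlp.220"
    arxiv_id: Optional[str]     # None where the column holds "null"
    GitHub: list[str]           # may contain "" when no repo is linked
    paper_page: str             # Hugging Face paper page URL, or empty
    n_linked_authors: int = -1  # -1 appears to encode "no paper page"
    upvotes: int = -1
    num_comments: int = -1
    n_authors: int = -1
    Models: list[str] = field(default_factory=list)
    Datasets: list[str] = field(default_factory=list)
    Spaces: list[str] = field(default_factory=list)
    paper_page_exists_pre_conf: int = 0
    type: str = "Poster"


# First data row of the table as a record (bibtext/abstract elided).
row = PaperRecord(
    bibtex_url="https://aclanthology.org/2023.findings-emnlp.220.bib",
    acl_proceedings="https://aclanthology.org/2023.findings-emnlp.220/",
    bibtext="@inproceedings{li-etal-2023-watermarking, ...}",
    abstract="Abuse of large language models reveals high risks ...",
    authors=["Li, Linyang", "Jiang, Botian", "Wang, Pengyu",
             "Ren, Ke", "Yan, Hang", "Qiu, Xipeng"],
    title="Watermarking LLMs with Weight Quantization",
    id="findings-emnlp.220",
    arxiv_id=None,
    GitHub=["https://github.com/twilight92z/quantize-watermark"],
    paper_page="",
)
```

The -1 defaults match the sentinel values used throughout the dump for rows without a Hugging Face paper page.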
https://aclanthology.org/2023.findings-emnlp.220.bib | https://aclanthology.org/2023.findings-emnlp.220/ | @inproceedings{li-etal-2023-watermarking,
title = "Watermarking {LLM}s with Weight Quantization",
author = "Li, Linyang and
Jiang, Botian and
Wang, Pengyu and
Ren, Ke and
Yan, Hang and
Qiu, Xipeng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
... | Abuse of large language models reveals high risks as large language models are being deployed at an astonishing speed. It is important to protect the model weights to avoid malicious usage that violates licenses of open-source large language models. This paper proposes a novel watermarking strategy that plants watermar... | [
"Li, Linyang",
"Jiang, Botian",
"Wang, Pengyu",
"Ren, Ke",
"Yan, Hang",
"Qiu, Xipeng"
] | Watermarking LLMs with Weight Quantization | findings-emnlp.220 | null | [
"https://github.com/twilight92z/quantize-watermark"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.221.bib | https://aclanthology.org/2023.findings-emnlp.221/ | @inproceedings{mirzaee-kordjamshidi-2023-disentangling,
title = "Disentangling Extraction and Reasoning in Multi-hop Spatial Reasoning",
author = "Mirzaee, Roshanak and
Kordjamshidi, Parisa",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Ass... | Spatial reasoning over text is challenging as the models not only need to extract the direct spatial information from the text but also reason over those and infer implicit spatial relations. Recent studies highlight the struggles even large language models encounter when it comes to performing spatial reasoning over t... | [
"Mirzaee, Roshanak",
"Kordjamshidi, Parisa"
] | Disentangling Extraction and Reasoning in Multi-hop Spatial Reasoning | findings-emnlp.221 | 2310.16731 | [
"https://github.com/rshnk73/pistaq-sreqa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.222.bib | https://aclanthology.org/2023.findings-emnlp.222/ | @inproceedings{zhang-etal-2023-psyattention,
title = "{P}sy{A}ttention: Psychological Attention Model for Personality Detection",
author = "Zhang, Baohua and
Huang, Yongyi and
Cui, Wenyao and
Huaping, Zhang and
Shang, Jianyun",
editor = "Bouamor, Houda and
Pino, Juan and
... | Work on personality detection has tended to incorporate psychological features from different personality models, such as BigFive and MBTI. There are more than 900 psychological features, each of which is helpful for personality detection. However, when used in combination, the application of different calculation stan... | [
"Zhang, Baohua",
"Huang, Yongyi",
"Cui, Wenyao",
"Huaping, Zhang",
"Shang, Jianyun"
] | PsyAttention: Psychological Attention Model for Personality Detection | findings-emnlp.222 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.223.bib | https://aclanthology.org/2023.findings-emnlp.223/ | @inproceedings{kim-etal-2023-roast,
title = "{R}o{AST}: Robustifying Language Models via Adversarial Perturbation with Selective Training",
author = "Kim, Jaehyung and
Mao, Yuning and
Hou, Rui and
Yu, Hanchao and
Liang, Davis and
Fung, Pascale and
Wang, Qifan and
... | Fine-tuning pre-trained language models (LMs) has become the de facto standard in many NLP tasks. Nevertheless, fine-tuned LMs are still prone to robustness issues, such as adversarial robustness and model calibration. Several perspectives of robustness for LMs have been studied independently, but lacking a unified con... | [
"Kim, Jaehyung",
"Mao, Yuning",
"Hou, Rui",
"Yu, Hanchao",
"Liang, Davis",
"Fung, Pascale",
"Wang, Qifan",
"Feng, Fuli",
"Huang, Lifu",
"Khabsa, Madian"
] | RoAST: Robustifying Language Models via Adversarial Perturbation with Selective Training | findings-emnlp.223 | 2312.04032 | [
"https://github.com/bbuing9/roast"
] | https://huggingface.co/papers/2312.04032 | 1 | 1 | 1 | 10 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.224.bib | https://aclanthology.org/2023.findings-emnlp.224/ | @inproceedings{mahari-etal-2023-law,
title = "The Law and {NLP}: Bridging Disciplinary Disconnects",
author = "Mahari, Robert and
Stammbach, Dominik and
Ash, Elliott and
Pentland, Alex",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings o... | Legal practice is intrinsically rooted in the fabric of language, yet legal practitioners and scholars have been slow to adopt tools from natural language processing (NLP). At the same time, the legal system is experiencing an access to justice crisis, which could be partially alleviated with NLP. In this position pape... | [
"Mahari, Robert",
"Stammbach, Dominik",
"Ash, Elliott",
"Pentland, Alex"
] | The Law and NLP: Bridging Disciplinary Disconnects | findings-emnlp.224 | 2310.14346 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.225.bib | https://aclanthology.org/2023.findings-emnlp.225/ | @inproceedings{chen-etal-2023-symbolization,
title = "Symbolization, Prompt, and Classification: A Framework for Implicit Speaker Identification in Novels",
author = "Chen, Yue and
He, Tianwei and
Zhou, Hongbin and
Gu, Jia-Chen and
Lu, Heng and
Ling, Zhen-Hua",
editor = "B... | Speaker identification in novel dialogues can be widely applied to various downstream tasks, such as producing multi-speaker audiobooks and converting novels into scripts. However, existing state-of-the-art methods are limited to handling explicit narrative patterns like {``}Tom said, '...''', unable to thoroughly unde... | [
"Chen, Yue",
"He, Tianwei",
"Zhou, Hongbin",
"Gu, Jia-Chen",
"Lu, Heng",
"Ling, Zhen-Hua"
] | Symbolization, Prompt, and Classification: A Framework for Implicit Speaker Identification in Novels | findings-emnlp.225 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.226.bib | https://aclanthology.org/2023.findings-emnlp.226/ | @inproceedings{sarch-etal-2023-open,
title = "Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models",
author = "Sarch, Gabriel and
Wu, Yue and
Tarr, Michael and
Fragkiadaki, Katerina",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",... | Pre-trained and frozen LLMs can effectively map simple scene re-arrangement instructions to programs over a robot{'}s visuomotor functions through appropriate few-shot example prompting. To parse open-domain natural language and adapt to a user{'}s idiosyncratic procedures, not known during prompt engineering time, fix... | [
"Sarch, Gabriel",
"Wu, Yue",
"Tarr, Michael",
"Fragkiadaki, Katerina"
] | Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models | findings-emnlp.226 | 2310.15127 | [
""
] | https://huggingface.co/papers/2310.15127 | 1 | 0 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.227.bib | https://aclanthology.org/2023.findings-emnlp.227/ | @inproceedings{zhang-etal-2023-act,
title = "{ACT}-{SQL}: In-Context Learning for Text-to-{SQL} with Automatically-Generated Chain-of-Thought",
author = "Zhang, Hanchong and
Cao, Ruisheng and
Chen, Lu and
Xu, Hongshen and
Yu, Kai",
editor = "Bouamor, Houda and
Pino, Juan ... | Recently Large Language Models (LLMs) have been proven to have strong abilities in various domains and tasks. We study the problem of prompt designing in the text-to-SQL task and attempt to improve the LLMs{'} reasoning ability when generating SQL queries. Besides the trivial few-shot in-context learning setting, we de... | [
"Zhang, Hanchong",
"Cao, Ruisheng",
"Chen, Lu",
"Xu, Hongshen",
"Yu, Kai"
] | ACT-SQL: In-Context Learning for Text-to-SQL with Automatically-Generated Chain-of-Thought | findings-emnlp.227 | 2310.17342 | [
"https://github.com/x-lance/text2sql-gpt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.228.bib | https://aclanthology.org/2023.findings-emnlp.228/ | @inproceedings{sengupta-etal-2023-manifold,
title = "Manifold-Preserving Transformers are Effective for Short-Long Range Encoding",
author = "Sengupta, Ayan and
Akhtar, Md and
Chakraborty, Tanmoy",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findin... | Multi-head self-attention-based Transformers have shown promise in different learning tasks. Albeit these models exhibit significant improvement in understanding short-term and long-term contexts from sequences, encoders of Transformers and their variants fail to preserve layer-wise contextual information. Transformers... | [
"Sengupta, Ayan",
"Akhtar, Md",
"Chakraborty, Tanmoy"
] | Manifold-Preserving Transformers are Effective for Short-Long Range Encoding | findings-emnlp.228 | 2310.14206 | [
"https://github.com/victor7246/transject"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.229.bib | https://aclanthology.org/2023.findings-emnlp.229/ | @inproceedings{vejvar-fujimoto-2023-aspiro,
title = "{ASPIRO}: Any-shot Structured Parsing-error-Induced {R}epr{O}mpting for Consistent Data-to-Text Generation",
author = "Vejvar, Martin and
Fujimoto, Yasutaka",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = ... | We present ASPIRO, an approach for structured data verbalisation into short template sentences in zero to few-shot settings. Unlike previous methods, our approach prompts Large Language Models (LLMs) to directly produce entity-agnostic templates, rather than relying on LLMs to faithfully copy the given example entities... | [
"Vejvar, Martin",
"Fujimoto, Yasutaka"
] | ASPIRO: Any-shot Structured Parsing-error-Induced ReprOmpting for Consistent Data-to-Text Generation | findings-emnlp.229 | 2310.17877 | [
"https://github.com/vejvarm/aspiro"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.230.bib | https://aclanthology.org/2023.findings-emnlp.230/ | @inproceedings{hou-smith-2023-detecting,
title = "Detecting Syntactic Change with Pre-trained Transformer Models",
author = "Hou, Liwen and
Smith, David",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistic... | We investigate the ability of Transformer-based language models to find syntactic differences between the English of the early 1800s and that of the late 1900s. First, we show that a fine-tuned BERT model can distinguish between text from these two periods using syntactic information only; to show this, we employ a str... | [
"Hou, Liwen",
"Smith, David"
] | Detecting Syntactic Change with Pre-trained Transformer Models | findings-emnlp.230 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.231.bib | https://aclanthology.org/2023.findings-emnlp.231/ | @inproceedings{tang-etal-2023-word,
title = "Can Word Sense Distribution Detect Semantic Changes of Words?",
author = "Tang, Xiaohang and
Zhou, Yi and
Aida, Taichi and
Sen, Procheta and
Bollegala, Danushka",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",... | Semantic Change Detection of words is an important task for various NLP applications that must make time-sensitive predictions. Some words are used over time in novel ways to express new meanings, and these new meanings establish themselves as novel senses of existing words. On the other hand, Word Sense Disambiguation... | [
"Tang, Xiaohang",
"Zhou, Yi",
"Aida, Taichi",
"Sen, Procheta",
"Bollegala, Danushka"
] | Can Word Sense Distribution Detect Semantic Changes of Words? | findings-emnlp.231 | 2310.10400 | [
"https://github.com/LivNLP/Sense-based-Semantic-Change-Prediction"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.232.bib | https://aclanthology.org/2023.findings-emnlp.232/ | @inproceedings{deng-etal-2023-gold,
title = "Gold: A Global and Local-aware Denoising Framework for Commonsense Knowledge Graph Noise Detection",
author = "Deng, Zheye and
Wang, Weiqi and
Wang, Zhaowei and
Liu, Xin and
Song, Yangqiu",
editor = "Bouamor, Houda and
Pino, Jua... | Commonsense Knowledge Graphs (CSKGs) are crucial for commonsense reasoning, yet constructing them through human annotations can be costly. As a result, various automatic methods have been proposed to construct CSKG with larger semantic coverage. However, these unsupervised approaches introduce spurious noise that can l... | [
"Deng, Zheye",
"Wang, Weiqi",
"Wang, Zhaowei",
"Liu, Xin",
"Song, Yangqiu"
] | Gold: A Global and Local-aware Denoising Framework for Commonsense Knowledge Graph Noise Detection | findings-emnlp.232 | 2310.12011 | [
"https://github.com/hkust-knowcomp/gold"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.233.bib | https://aclanthology.org/2023.findings-emnlp.233/ | @inproceedings{wang-etal-2023-improving-conversational,
title = "Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation",
author = "Wang, Xi and
Rahmani, Hossein and
Liu, Jiqun and
Yilmaz, Emine",
editor = "Bouamor, Houda and
... | Conversational Recommendation System (CRS) is a rapidly growing research area that has gained significant attention alongside advancements in language modelling techniques. However, the current state of conversational recommendation faces numerous challenges due to its relative novelty and limited existing contribution... | [
"Wang, Xi",
"Rahmani, Hossein",
"Liu, Jiqun",
"Yilmaz, Emine"
] | Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation | findings-emnlp.233 | 2310.16738 | [
"https://github.com/wangxieric/bias-crs"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.234.bib | https://aclanthology.org/2023.findings-emnlp.234/ | @inproceedings{bao-etal-2023-exploring,
title = "Exploring Graph Pre-training for Aspect-based Sentiment Analysis",
author = "Bao, Xiaoyi and
Wang, Zhongqing and
Zhou, Guodong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Associatio... | Existing studies tend to extract the sentiment elements in a generative manner in order to avoid complex modeling. Despite their effectiveness, they ignore importance of the relationships between sentiment elements that could be crucial, making the large pre-trained generative models sub-optimal for modeling sentiment ... | [
"Bao, Xiaoyi",
"Wang, Zhongqing",
"Zhou, Guodong"
] | Exploring Graph Pre-training for Aspect-based Sentiment Analysis | findings-emnlp.234 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.235.bib | https://aclanthology.org/2023.findings-emnlp.235/ | @inproceedings{nguyen-etal-2023-demaformer,
title = "{D}ema{F}ormer: Damped Exponential Moving Average Transformer with Energy-Based Modeling for Temporal Language Grounding",
author = "Nguyen, Thong and
Wu, Xiaobao and
Dong, Xinshuai and
Nguyen, Cong-Duy and
Ng, See-Kiong and
... | Temporal Language Grounding seeks to localize video moments that semantically correspond to a natural language query. Recent advances employ the attention mechanism to learn the relations between video moments and the text query. However, naive attention might not be able to appropriately capture such relations, result... | [
"Nguyen, Thong",
"Wu, Xiaobao",
"Dong, Xinshuai",
"Nguyen, Cong-Duy",
"Ng, See-Kiong",
"Luu, Anh"
] | DemaFormer: Damped Exponential Moving Average Transformer with Energy-Based Modeling for Temporal Language Grounding | findings-emnlp.235 | 2312.02549 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.236.bib | https://aclanthology.org/2023.findings-emnlp.236/ | @inproceedings{kamoda-etal-2023-test,
title = "Test-time Augmentation for Factual Probing",
author = "Kamoda, Go and
Heinzerling, Benjamin and
Sakaguchi, Keisuke and
Inui, Kentaro",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the... | Factual probing is a method that uses prompts to test if a language model {``}knows{''} certain world knowledge facts. A problem in factual probing is that small changes to the prompt can lead to large changes in model output. Previous work aimed to alleviate this problem by optimizing prompts via text mining or fine-t... | [
"Kamoda, Go",
"Heinzerling, Benjamin",
"Sakaguchi, Keisuke",
"Inui, Kentaro"
] | Test-time Augmentation for Factual Probing | findings-emnlp.236 | 2310.17121 | [
"https://github.com/gokamoda/TTA4FactualProbing"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.237.bib | https://aclanthology.org/2023.findings-emnlp.237/ | @inproceedings{hoeken-etal-2023-methodological,
title = "Methodological Insights in Detecting Subtle Semantic Shifts with Contextualized and Static Language Models",
author = {Hoeken, Sanne and
Alacam, {\"O}zge and
Fokkens, Antske and
Sommerauer, Pia},
editor = "Bouamor, Houda and
... | In this paper, we investigate automatic detection of subtle semantic shifts between social communities of different political convictions in Dutch and English. We perform a methodological study comparing methods using static and contextualized language models. We investigate the impact of specializing contextualized mo... | [
"Hoeken, Sanne",
"Alacam, {\\\"O}zge",
"Fokkens, Antske",
"Sommerauer, Pia"
] | Methodological Insights in Detecting Subtle Semantic Shifts with Contextualized and Static Language Models | findings-emnlp.237 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.238.bib | https://aclanthology.org/2023.findings-emnlp.238/ | @inproceedings{rohanian-etal-2023-disfluent,
title = "Disfluent Cues for Enhanced Speech Understanding in Large Language Models",
author = "Rohanian, Morteza and
Nooralahzadeh, Farhad and
Rohanian, Omid and
Clifton, David and
Krauthammer, Michael",
editor = "Bouamor, Houda and
... | In computational linguistics, the common practice is to {``}clean{''} disfluent content from spontaneous speech. However, we hypothesize that these disfluencies might serve as more than mere noise, potentially acting as informative cues. We use a range of pre-trained models for a reading comprehension task involving di... | [
"Rohanian, Morteza",
"Nooralahzadeh, Farhad",
"Rohanian, Omid",
"Clifton, David",
"Krauthammer, Michael"
] | Disfluent Cues for Enhanced Speech Understanding in Large Language Models | findings-emnlp.238 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.239.bib | https://aclanthology.org/2023.findings-emnlp.239/ | @inproceedings{gu-etal-2023-watermarking,
title = "Watermarking {PLM}s on Classification Tasks by Combining Contrastive Learning with Weight Perturbation",
author = "Gu, Chenxi and
Zheng, Xiaoqing and
Xu, Jianhan and
Wu, Muling and
Zhang, Cenyuan and
Huang, Chengsong and
... | Large pre-trained language models (PLMs) have achieved remarkable success, making them highly valuable intellectual property due to their expensive training costs. Consequently, model watermarking, a method developed to protect the intellectual property of neural models, has emerged as a crucial yet underexplored techn... | [
"Gu, Chenxi",
"Zheng, Xiaoqing",
"Xu, Jianhan",
"Wu, Muling",
"Zhang, Cenyuan",
"Huang, Chengsong",
"Cai, Hua",
"Huang, Xuanjing"
] | Watermarking PLMs on Classification Tasks by Combining Contrastive Learning with Weight Perturbation | findings-emnlp.239 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.240.bib | https://aclanthology.org/2023.findings-emnlp.240/ | @inproceedings{afrin-etal-2023-banlemma,
title = "{B}an{L}emma: A Word Formation Dependent Rule and Dictionary Based {B}angla Lemmatizer",
author = "Afrin, Sadia and
Chowdhury, Md. Shahad Mahmud and
Islam, Md. and
Khan, Faisal and
Chowdhury, Labib and
Mahtab, Md. and
Ch... | Lemmatization holds significance in both natural language processing (NLP) and linguistics, as it effectively decreases data density and aids in comprehending contextual meaning. However, due to the highly inflected nature and morphological richness, lemmatization in Bangla text poses a complex challenge. In this study... | [
"Afrin, Sadia",
"Chowdhury, Md. Shahad Mahmud",
"Islam, Md.",
"Khan, Faisal",
"Chowdhury, Labib",
"Mahtab, Md.",
"Chowdhury, Nazifa",
"Forkan, Massud",
"Kundu, Neelima",
"Arif, Hakim",
"Rashid, Mohammad Mamun Or",
"Amin, Mohammad",
"Mohammed, Nabeel"
] | BanLemma: A Word Formation Dependent Rule and Dictionary Based Bangla Lemmatizer | findings-emnlp.240 | 2311.03078 | [
"https://github.com/eblict-gigatech/BanLemma"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.241.bib | https://aclanthology.org/2023.findings-emnlp.241/ | @inproceedings{loya-etal-2023-exploring,
title = "Exploring the Sensitivity of {LLM}s{'} Decision-Making Capabilities: Insights from Prompt Variations and Hyperparameters",
author = "Loya, Manikanta and
Sinha, Divya and
Futrell, Richard",
editor = "Bouamor, Houda and
Pino, Juan and
... | The advancement of Large Language Models (LLMs) has led to their widespread use across a broad spectrum of tasks, including decision-making. Prior studies have compared the decision-making abilities of LLMs with those of humans from a psychological perspective. However, these studies have not always properly accounted ... | [
"Loya, Manikanta",
"Sinha, Divya",
"Futrell, Richard"
] | Exploring the Sensitivity of LLMs' Decision-Making Capabilities: Insights from Prompt Variations and Hyperparameters | findings-emnlp.241 | 2312.17476 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.242.bib | https://aclanthology.org/2023.findings-emnlp.242/ | @inproceedings{luo-etal-2023-search,
title = "Search Augmented Instruction Learning",
author = "Luo, Hongyin and
Zhang, Tianhua and
Chuang, Yung-Sung and
Gong, Yuan and
Kim, Yoon and
Wu, Xixin and
Meng, Helen and
Glass, James",
editor = "Bouamor, Houda and
... | Large language models (LLMs) have been significantly improved by instruction fine-tuning, but still lack transparency and the ability to utilize up-to-date knowledge and information. In this work, we propose search-augmented instruction learning (SAIL), which grounds the language generation and instruction following ab... | [
"Luo, Hongyin",
"Zhang, Tianhua",
"Chuang, Yung-Sung",
"Gong, Yuan",
"Kim, Yoon",
"Wu, Xixin",
"Meng, Helen",
"Glass, James"
] | Search Augmented Instruction Learning | findings-emnlp.242 | 2305.15225 | [
""
] | https://huggingface.co/papers/2305.15225 | 1 | 2 | 0 | 9 | [
"lukasmoeller/mpt-7b-sail-ep1",
"luohy/SAIL-7b"
] | [
"lukasmoeller/sail_preprocessed"
] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.243.bib | https://aclanthology.org/2023.findings-emnlp.243/ | @inproceedings{wan-etal-2023-kelly,
title = "{``}Kelly is a Warm Person, Joseph is a Role Model{''}: Gender Biases in {LLM}-Generated Reference Letters",
author = "Wan, Yixin and
Pu, George and
Sun, Jiao and
Garimella, Aparna and
Chang, Kai-Wei and
Peng, Nanyun",
editor = ... | Large Language Models (LLMs) have recently emerged as an effective tool to assist individuals in writing various types of content, including professional documents such as recommendation letters. Though bringing convenience, this application also introduces unprecedented fairness concerns. Model-generated reference let... | [
"Wan, Yixin",
"Pu, George",
"Sun, Jiao",
"Garimella, Aparna",
"Chang, Kai-Wei",
"Peng, Nanyun"
] | “Kelly is a Warm Person, Joseph is a Role Model”: Gender Biases in LLM-Generated Reference Letters | findings-emnlp.243 | 2310.09219 | [
"https://github.com/uclanlp/biases-llm-reference-letters"
] | https://huggingface.co/papers/2310.09219 | 1 | 0 | 0 | 6 | [
"emmatliu/language-agency-classifier"
] | [
"elaine1wan/Language-Agency-Classification",
"elaine1wan/Reference-Letter-Bias-Prompts"
] | [
"emmatliu/LLMReferenceLetterBias"
] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.244.bib | https://aclanthology.org/2023.findings-emnlp.244/ | @inproceedings{zhou-etal-2023-textmixer,
title = "{T}ext{M}ixer: Mixing Multiple Inputs for Privacy-Preserving Inference",
author = "Zhou, Xin and
Lu, Yi and
Ma, Ruotian and
Gui, Tao and
Zhang, Qi and
Huang, Xuanjing",
editor = "Bouamor, Houda and
Pino, Juan and
... | Pre-trained language models (PLMs) are often deployed as cloud services, enabling users to upload textual data and perform inference remotely. However, users{'} personal text often contains sensitive information, and sharing such data directly with the service providers can lead to serious privacy leakage. To address t... | [
"Zhou, Xin",
"Lu, Yi",
"Ma, Ruotian",
"Gui, Tao",
"Zhang, Qi",
"Huang, Xuanjing"
] | TextMixer: Mixing Multiple Inputs for Privacy-Preserving Inference | findings-emnlp.244 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.245.bib | https://aclanthology.org/2023.findings-emnlp.245/ | @inproceedings{kim-etal-2023-fineprompt,
title = "{F}ine{P}rompt: Unveiling the Role of Finetuned Inductive Bias on Compositional Reasoning in {GPT}-4",
author = "Kim, Jeonghwan and
Hong, Giwon and
Myaeng, Sung-Hyon and
Whang, Joyce",
editor = "Bouamor, Houda and
Pino, Juan and
... | Compositional reasoning across texts has been a long-standing challenge in natural language processing. With large language models like GPT-4 taking over the field, prompting techniques such as chain-of-thought (CoT) were proposed to unlock compositional, multi-step reasoning capabilities of LLMs. Despite their success... | [
"Kim, Jeonghwan",
"Hong, Giwon",
"Myaeng, Sung-Hyon",
"Whang, Joyce"
] | FinePrompt: Unveiling the Role of Finetuned Inductive Bias on Compositional Reasoning in GPT-4 | findings-emnlp.245 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.246.bib | https://aclanthology.org/2023.findings-emnlp.246/ | @inproceedings{chaudhary-etal-2023-teacher,
title = "Teacher Perception of Automatically Extracted Grammar Concepts for {L}2 Language Learning",
author = "Chaudhary, Aditi and
Sampath, Arun and
Sheshadri, Ashwin and
Anastasopoulos, Antonios and
Neubig, Graham",
editor = "Bouamor,... | One of the challenges in language teaching is how best to organize rules regarding syntax, semantics, or phonology in a meaningful manner. This not only requires content creators to have pedagogical skills, but also have that language{'}s deep understanding. While comprehensive materials to develop such curricula are a... | [
"Chaudhary, Aditi",
"Sampath, Arun",
"Sheshadri, Ashwin",
"Anastasopoulos, Antonios",
"Neubig, Graham"
] | Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning | findings-emnlp.246 | 2310.18417 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.247.bib | https://aclanthology.org/2023.findings-emnlp.247/ | @inproceedings{sun-etal-2023-allies,
title = "Allies: Prompting Large Language Model with Beam Search",
author = "Sun, Hao and
Liu, Xiao and
Gong, Yeyun and
Zhang, Yan and
Jiang, Daxin and
Yang, Linjun and
Duan, Nan",
editor = "Bouamor, Houda and
Pino, Juan ... | With the advance of large language models (LLMs), the research field of LLM applications becomes more and more popular and the idea of constructing pipelines to accomplish complex tasks by stacking LLM API calls come true. However, this kind of methods face two limitations: narrow information coverage and low fault tol... | [
"Sun, Hao",
"Liu, Xiao",
"Gong, Yeyun",
"Zhang, Yan",
"Jiang, Daxin",
"Yang, Linjun",
"Duan, Nan"
] | Allies: Prompting Large Language Model with Beam Search | findings-emnlp.247 | 2305.14766 | [
"https://github.com/microsoft/simxns"
] | https://huggingface.co/papers/2305.14766 | 1 | 0 | 0 | 7 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.248.bib | https://aclanthology.org/2023.findings-emnlp.248/ | @inproceedings{pan-etal-2023-logic,
title = "Logic-{LM}: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning",
author = "Pan, Liangming and
Albalak, Alon and
Wang, Xinyi and
Wang, William",
editor = "Bouamor, Houda and
Pino, Juan and
Bali,... | Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems. This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic solvers to improve logical problem-solving. Our method first utilizes LLMs to translate a natural language probl... | [
"Pan, Liangming",
"Albalak, Alon",
"Wang, Xinyi",
"Wang, William"
] | Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning | findings-emnlp.248 | 2305.12295 | [
"https://github.com/teacherpeterpan/logic-llm"
] | https://huggingface.co/papers/2305.12295 | 1 | 0 | 0 | 4 | [] | [
"renma/ProntoQA",
"renma/ProofWriter"
] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.249.bib | https://aclanthology.org/2023.findings-emnlp.249/ | @inproceedings{liu-etal-2023-simfy,
title = "{S}i{MF}y: A Simple Yet Effective Approach for Temporal Knowledge Graph Reasoning",
author = "Liu, Zhengtao and
Tan, Lei and
Li, Mengfan and
Wan, Yao and
Jin, Hai and
Shi, Xuanhua",
editor = "Bouamor, Houda and
Pino, Juan... | Temporal Knowledge Graph (TKG) reasoning, which focuses on leveraging temporal information to infer future facts in knowledge graphs, plays a vital role in knowledge graph completion. Typically, existing works for this task design graph neural networks and recurrent neural networks to respectively capture the structura... | [
"Liu, Zhengtao",
"Tan, Lei",
"Li, Mengfan",
"Wan, Yao",
"Jin, Hai",
"Shi, Xuanhua"
] | SiMFy: A Simple Yet Effective Approach for Temporal Knowledge Graph Reasoning | findings-emnlp.249 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.250.bib | https://aclanthology.org/2023.findings-emnlp.250/ | @inproceedings{wang-etal-2023-understanding,
title = "Understanding Translationese in Cross-Lingual Summarization",
author = "Wang, Jiaan and
Meng, Fandong and
Liang, Yunlong and
Zhang, Tingyi and
Xu, Jiarong and
Li, Zhixu and
Zhou, Jie",
editor = "Bouamor, Houda a... | Given a document in a source language, cross-lingual summarization (CLS) aims at generating a concise summary in a different target language. Unlike monolingual summarization (MS), naturally occurring source-language documents paired with target-language summaries are rare. To collect large-scale CLS data, existing dat... | [
"Wang, Jiaan",
"Meng, Fandong",
"Liang, Yunlong",
"Zhang, Tingyi",
"Xu, Jiarong",
"Li, Zhixu",
"Zhou, Jie"
] | Understanding Translationese in Cross-Lingual Summarization | findings-emnlp.250 | 2212.07220 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.251.bib | https://aclanthology.org/2023.findings-emnlp.251/ | @inproceedings{hagag-tsarfaty-2023-truth,
title = "The Truth, The Whole Truth, and Nothing but the Truth: A New Benchmark Dataset for {H}ebrew Text Credibility Assessment",
author = "Hagag, Ben and
Tsarfaty, Reut",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle... | In the age of information overload, it is more important than ever to discern fact from fiction. From the internet to traditional media, we are constantly confronted with a deluge of information, much of which comes from politicians and other public figures who wield significant influence. In this paper, we introduce H... | [
"Hagag, Ben",
"Tsarfaty, Reut"
] | The Truth, The Whole Truth, and Nothing but the Truth: A New Benchmark Dataset for Hebrew Text Credibility Assessment | findings-emnlp.251 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.252.bib | https://aclanthology.org/2023.findings-emnlp.252/ | @inproceedings{kumar-etal-2023-indisocialft,
title = "{I}ndi{S}ocial{FT}: Multilingual Word Representation for {I}ndian languages in code-mixed environment",
author = "Kumar, Saurabh and
Sanasam, Ranbir and
Nandi, Sukumar",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika... | The increasing number of Indian language users on the internet necessitates the development of Indian language technologies. In response to this demand, our paper presents a generalized representation vector for diverse text characteristics, including native scripts, transliterated text, multilingual, code-mixed, and s... | [
"Kumar, Saurabh",
"Sanasam, Ranbir",
"N",
"i, Sukumar"
] | IndiSocialFT: Multilingual Word Representation for Indian languages in code-mixed environment | findings-emnlp.252 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.253.bib | https://aclanthology.org/2023.findings-emnlp.253/ | @inproceedings{wang-etal-2023-adaptive,
title = "Adaptive Hinge Balance Loss for Document-Level Relation Extraction",
author = "Wang, Jize and
Le, Xinyi and
Peng, Xiaodi and
Chen, Cailian",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Finding... | Document-Level Relation Extraction aims at predicting relations between entities from multiple sentences. A common practice is to select multi-label classification thresholds to decide whether a relation exists between an entity pair. However, in the document-level task, most entity pairs do not express any relations, ... | [
"Wang, Jize",
"Le, Xinyi",
"Peng, Xiaodi",
"Chen, Cailian"
] | Adaptive Hinge Balance Loss for Document-Level Relation Extraction | findings-emnlp.253 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.254.bib | https://aclanthology.org/2023.findings-emnlp.254/ | @inproceedings{li-etal-2023-answer,
title = "Answer-state Recurrent Relational Network ({A}s{RRN}) for Constructed Response Assessment and Feedback Grouping",
author = "Li, Zhaohui and
Lloyd, Susan and
Beckman, Matthew and
Passonneau, Rebecca",
editor = "Bouamor, Houda and
Pino, ... | STEM educators must trade off the ease of assessing selected response (SR) questions, like multiple choice, with constructed response (CR) questions, where students articulate their own reasoning. Our work addresses a CR type new to NLP but common in college STEM, consisting of multiple questions per context. To relate... | [
"Li, Zhaohui",
"Lloyd, Susan",
"Beckman, Matthew",
"Passonneau, Rebecca"
] | Answer-state Recurrent Relational Network (AsRRN) for Constructed Response Assessment and Feedback Grouping | findings-emnlp.254 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.255.bib | https://aclanthology.org/2023.findings-emnlp.255/ | @inproceedings{xu-etal-2023-low,
title = "Low-Resource Comparative Opinion Quintuple Extraction by Data Augmentation with Prompting",
author = "Xu, Qingting and
Hong, Yu and
Zhao, Fubang and
Song, Kaisong and
Kang, Yangyang and
Chen, Jiaxiang and
Zhou, Guodong",
edi... | Comparative Opinion Quintuple Extraction (COQE) aims to predict comparative opinion quintuples from comparative sentences. These quintuples include subject, object, shareable aspect, comparative opinion, and preference. The existing pipeline-based COQE method fails in error propagation. In addition, the complexity and ... | [
"Xu, Qingting",
"Hong, Yu",
"Zhao, Fubang",
"Song, Kaisong",
"Kang, Yangyang",
"Chen, Jiaxiang",
"Zhou, Guodong"
] | Low-Resource Comparative Opinion Quintuple Extraction by Data Augmentation with Prompting | findings-emnlp.255 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.256.bib | https://aclanthology.org/2023.findings-emnlp.256/ | @inproceedings{yang-etal-2023-new-benchmark,
title = "A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection",
author = "Yang, Shiping and
Sun, Renliang and
Wan, Xiaojun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = ... | Large Language Models (LLMs) have shown their ability to collaborate effectively with humans in real-world scenarios. However, LLMs are apt to generate hallucinations, i.e., makeup incorrect text and unverified information, which can cause significant damage when deployed for mission-critical tasks. In this paper, we p... | [
"Yang, Shiping",
"Sun, Renliang",
"Wan, Xiaojun"
] | A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection | findings-emnlp.256 | 2310.06498 | [
"https://github.com/maybenotime/phd"
] | https://huggingface.co/papers/2310.06498 | 1 | 1 | 0 | 3 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.257.bib | https://aclanthology.org/2023.findings-emnlp.257/ | @inproceedings{xia-etal-2023-speculative,
title = "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation",
author = "Xia, Heming and
Ge, Tao and
Wang, Peiyi and
Chen, Si-Qing and
Wei, Furu and
Sui, Zhifang",
editor = "Bouamor, Houda an... | We propose Speculative Decoding (SpecDec), for the first time ever, to formally study exploiting the idea of speculative execution to accelerate autoregressive (AR) decoding. Speculative Decoding has two innovations: Spec-Drafter {--} an independent model specially optimized for efficient and accurate drafting {--} and... | [
"Xia, Heming",
"Ge, Tao",
"Wang, Peiyi",
"Chen, Si-Qing",
"Wei, Furu",
"Sui, Zhifang"
] | Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation | findings-emnlp.257 | 2203.16487 | [
"https://github.com/hemingkx/gad"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.258.bib | https://aclanthology.org/2023.findings-emnlp.258/ | @inproceedings{wang-etal-2023-app,
title = "{APP}: Adaptive Prototypical Pseudo-Labeling for Few-shot {OOD} Detection",
author = "Wang, Pei and
He, Keqing and
Mou, Yutao and
Song, Xiaoshuai and
Wu, Yanan and
Wang, Jingang and
Xian, Yunsen and
Cai, Xunliang and
... | Detecting out-of-domain (OOD) intents from user queries is essential for a task-oriented dialogue system. Previous OOD detection studies generally work on the assumption that plenty of labeled IND intents exist. In this paper, we focus on a more practical few-shot OOD setting where there are only a few labeled IND data... | [
"Wang, Pei",
"He, Keqing",
"Mou, Yutao",
"Song, Xiaoshuai",
"Wu, Yanan",
"Wang, Jingang",
"Xian, Yunsen",
"Cai, Xunliang",
"Xu, Weiran"
] | APP: Adaptive Prototypical Pseudo-Labeling for Few-shot OOD Detection | findings-emnlp.258 | 2310.13380 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.259.bib | https://aclanthology.org/2023.findings-emnlp.259/ | @inproceedings{zhang-etal-2023-2iner,
title = "2{INER}: Instructive and In-Context Learning on Few-Shot Named Entity Recognition",
author = "Zhang, Jiasheng and
Liu, Xikai and
Lai, Xinyi and
Gao, Yan and
Wang, Shusen and
Hu, Yao and
Lin, Yiqing",
editor = "Bouamor, ... | Prompt-based learning has emerged as a powerful technique in natural language processing (NLP) due to its ability to leverage pre-training knowledge for downstream few-shot tasks. In this paper, we propose 2INER, a novel text-to-text framework for Few-Shot Named Entity Recognition (NER) tasks. Our approach employs inst... | [
"Zhang, Jiasheng",
"Liu, Xikai",
"Lai, Xinyi",
"Gao, Yan",
"Wang, Shusen",
"Hu, Yao",
"Lin, Yiqing"
] | 2INER: Instructive and In-Context Learning on Few-Shot Named Entity Recognition | findings-emnlp.259 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.260.bib | https://aclanthology.org/2023.findings-emnlp.260/ | @inproceedings{wang-etal-2023-generative-emotion,
title = "Generative Emotion Cause Triplet Extraction in Conversations with Commonsense Knowledge",
author = "Wang, Fanfan and
Yu, Jianfei and
Xia, Rui",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "F... | Emotion Cause Triplet Extraction in Conversations (ECTEC) aims to simultaneously extract emotion utterances, emotion categories, and cause utterances from conversations. However, existing studies mainly decompose the ECTEC task into multiple subtasks and solve them in a pipeline manner. Moreover, since conversations te... | [
"Wang, Fanfan",
"Yu, Jianfei",
"Xia, Rui"
] | Generative Emotion Cause Triplet Extraction in Conversations with Commonsense Knowledge | findings-emnlp.260 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.261.bib | https://aclanthology.org/2023.findings-emnlp.261/ | @inproceedings{xie-etal-2023-proto,
title = "Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models",
author = "Xie, Sean and
Vosoughi, Soroush and
Hassanpour, Saeed",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
... | Large Language Models (LLMs) have significantly advanced the field of Natural Language Processing (NLP), but their lack of interpretability has been a major concern. Current methods for interpreting LLMs are post hoc, applied after inference time, and have limitations such as their focus on low-level features and lack ... | [
"Xie, Sean",
"Vosoughi, Soroush",
"Hassanpour, Saeed"
] | Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models | findings-emnlp.261 | 2311.01732 | [
"https://github.com/yx131/proto-lm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.262.bib | https://aclanthology.org/2023.findings-emnlp.262/ | @inproceedings{wen-etal-2023-grove,
title = "{GROVE}: A Retrieval-augmented Complex Story Generation Framework with A Forest of Evidence",
author = "Wen, Zhihua and
Tian, Zhiliang and
Wu, Wei and
Yang, Yuxin and
Shi, Yanqi and
Huang, Zhen and
Li, Dongsheng",
editor ... | Conditional story generation is significant in human-machine interaction, particularly in producing stories with complex plots. While Large language models (LLMs) perform well on multiple NLP tasks, including story generation, it is challenging to generate stories with both complex and creative plots. Existing methods ... | [
"Wen, Zhihua",
"Tian, Zhiliang",
"Wu, Wei",
"Yang, Yuxin",
"Shi, Yanqi",
"Huang, Zhen",
"Li, Dongsheng"
] | GROVE: A Retrieval-augmented Complex Story Generation Framework with A Forest of Evidence | findings-emnlp.262 | 2310.05388 | [
""
] | https://huggingface.co/papers/2310.05388 | 0 | 4 | 0 | 7 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.263.bib | https://aclanthology.org/2023.findings-emnlp.263/ | @inproceedings{ma-etal-2023-kapalm,
title = "{KAPALM}: Knowledge gr{AP}h enh{A}nced Language Models for Fake News Detection",
author = "Ma, Jing and
Chen, Chen and
Hou, Chunyan and
Yuan, Xiaojie",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "... | Social media has not only facilitated news consumption, but also led to the wide spread of fake news. Because news articles in social media is usually condensed and full of knowledge entities, existing methods of fake news detection use external entity knowledge. However, majority of these methods focus on news entity ... | [
"Ma, Jing",
"Chen, Chen",
"Hou, Chunyan",
"Yuan, Xiaojie"
] | KAPALM: Knowledge grAPh enhAnced Language Models for Fake News Detection | findings-emnlp.263 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.264.bib | https://aclanthology.org/2023.findings-emnlp.264/ | @inproceedings{murthy-etal-2023-comparing,
title = "Comparing the Evaluation and Production of Loophole Behavior in Humans and Large Language Models",
author = "Murthy, Sonia and
Parece, Kiera and
Bridgers, Sophie and
Qian, Peng and
Ullman, Tomer",
editor = "Bouamor, Houda and
... | In law, lore, and everyday life, loopholes are commonplace. When people exploit a loophole, they understand the intended meaning or goal of another person, but choose to go with a different interpretation. Past and current AI research has shown that artificial intelligence engages in what seems superficially like the e... | [
"Murthy, Sonia",
"Parece, Kiera",
"Bridgers, Sophie",
"Qian, Peng",
"Ullman, Tomer"
] | Comparing the Evaluation and Production of Loophole Behavior in Humans and Large Language Models | findings-emnlp.264 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.265.bib | https://aclanthology.org/2023.findings-emnlp.265/ | @inproceedings{payan-etal-2023-instructexcel,
title = "{I}nstruct{E}xcel: A Benchmark for Natural Language Instruction in Excel",
author = "Payan, Justin and
Mishra, Swaroop and
Singh, Mukul and
Negreanu, Carina and
Poelitz, Christian and
Baral, Chitta and
Roy, Subhro ... | With the evolution of Large Language Models (LLMs) we can solve increasingly more complex NLP tasks across various domains, including spreadsheets. This work investigates whether LLMs can generate code (Excel OfficeScripts, a TypeScript API for executing many tasks in Excel) that solves Excel specific tasks provided vi... | [
"Payan, Justin",
"Mishra, Swaroop",
"Singh, Mukul",
"Negreanu, Carina",
"Poelitz, Christian",
"Baral, Chitta",
"Roy, Subhro",
"Chakravarthy, Rasika",
"Van Durme, Benjamin",
"Nouri, Elnaz"
] | InstructExcel: A Benchmark for Natural Language Instruction in Excel | findings-emnlp.265 | 2310.14495 | [
""
] | https://huggingface.co/papers/2310.14495 | 3 | 1 | 2 | 10 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.266.bib | https://aclanthology.org/2023.findings-emnlp.266/ | @inproceedings{zhao-etal-2023-hallucination,
title = "Hallucination Detection for Grounded Instruction Generation",
author = "Zhao, Lingjun and
Nguyen, Khanh and
Daum{\'e} III, Hal",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Assoc... | We investigate the problem of generating instructions to guide humans to navigate in simulated residential environments. A major issue with current models is hallucination: they generate references to actions or objects that are inconsistent with what a human follower would perform or encounter along the described path... | [
"Zhao, Lingjun",
"Nguyen, Khanh",
"Daum{\\'e} III, Hal"
] | Hallucination Detection for Grounded Instruction Generation | findings-emnlp.266 | 2310.15319 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.267.bib | https://aclanthology.org/2023.findings-emnlp.267/ | @inproceedings{peskine-etal-2023-definitions,
title = "Definitions Matter: Guiding {GPT} for Multi-label Classification",
author = "Peskine, Youri and
Koren{\v{c}}i{\'c}, Damir and
Grubisic, Ivan and
Papotti, Paolo and
Troncy, Raphael and
Rosso, Paolo",
editor = "Bouamor, ... | Large language models have recently risen in popularity due to their ability to perform many natural language tasks without requiring any fine-tuning. In this work, we focus on two novel ideas: (1) generating definitions from examples and using them for zero-shot classification, and (2) investigating how an LLM makes u... | [
"Peskine, Youri",
"Koren{\\v{c}}i{\\'c}, Damir",
"Grubisic, Ivan",
"Papotti, Paolo",
"Troncy, Raphael",
"Rosso, Paolo"
] | Definitions Matter: Guiding GPT for Multi-label Classification | findings-emnlp.267 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.268.bib | https://aclanthology.org/2023.findings-emnlp.268/ | @inproceedings{xie-etal-2023-echo,
title = "{ECH}o: A Visio-Linguistic Dataset for Event Causality Inference via Human-Centric Reasoning",
author = "Xie, Yuxi and
Li, Guanzhen and
Kan, Min-Yen",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings ... | We introduce ECHo (Event Causality Inference via Human-Centric Reasoning), a diagnostic dataset of event causality inference grounded in visio-linguistic social scenarios. ECHo employs real-world human-centric deductive information building on a television crime drama. ECHo requires the Theory-of-Mind (ToM) ability to ... | [
"Xie, Yuxi",
"Li, Guanzhen",
"Kan, Min-Yen"
] | ECHo: A Visio-Linguistic Dataset for Event Causality Inference via Human-Centric Reasoning | findings-emnlp.268 | 2305.14740 | [
"https://github.com/yuxixie/echo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.269.bib | https://aclanthology.org/2023.findings-emnlp.269/ | @inproceedings{si-etal-2023-empirical,
title = "An Empirical Study of Instruction-tuning Large Language Models in {C}hinese",
author = "Si, Qingyi and
Wang, Tong and
Lin, Zheng and
Zhang, Xu and
Cao, Yanan and
Wang, Weiping",
editor = "Bouamor, Houda and
Pino, Juan ... | The success of ChatGPT validates the potential of large language models (LLMs) in artificial general intelligence (AGI). Subsequently, the release of LLMs has sparked the open-source community{'}s interest in instruction-tuning, which is deemed to accelerate ChatGPT{'}s replication process. However, research on instruc... | [
"Si, Qingyi",
"Wang, Tong",
"Lin, Zheng",
"Zhang, Xu",
"Cao, Yanan",
"Wang, Weiping"
] | An Empirical Study of Instruction-tuning Large Language Models in Chinese | findings-emnlp.269 | 2310.07328 | [
"https://github.com/phoebussi/alpaca-cot"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.270.bib | https://aclanthology.org/2023.findings-emnlp.270/ | @inproceedings{patil-etal-2023-debiasing,
title = "Debiasing Multimodal Models via Causal Information Minimization",
author = "Patil, Vaidehi and
Maharana, Adyasha and
Bansal, Mohit",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Asso... | Most existing debiasing methods for multimodal models, including causal intervention and inference methods, utilize approximate heuristics to represent the biases, such as shallow features from early stages of training or unimodal features for multimodal tasks like VQA, etc., which may not be accurate. In this paper, w... | [
"Patil, Vaidehi",
"Maharana, Adyasha",
"Bansal, Mohit"
] | Debiasing Multimodal Models via Causal Information Minimization | findings-emnlp.270 | 2311.16941 | [
"https://github.com/vaidehi99/causalinfomin"
] | https://huggingface.co/papers/2311.16941 | 1 | 1 | 0 | 3 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.271.bib | https://aclanthology.org/2023.findings-emnlp.271/ | @inproceedings{teodorescu-mohammad-2023-evaluating,
title = "Evaluating Emotion Arcs Across Languages: Bridging the Global Divide in Sentiment Analysis",
author = "Teodorescu, Daniela and
Mohammad, Saif",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findin... | Emotion arcs capture how an individual (or a population) feels over time. They are widely used in industry and research; however, there is little work on evaluating the automatically generated arcs. This is because of the difficulty of establishing the true (gold) emotion arc. Our work, for the first time, systematical... | [
"Teodorescu, Daniela",
"Mohammad, Saif"
] | Evaluating Emotion Arcs Across Languages: Bridging the Global Divide in Sentiment Analysis | findings-emnlp.271 | 2306.02213 | [
"https://github.com/dteodore/emotionarcs"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.272.bib | https://aclanthology.org/2023.findings-emnlp.272/ | @inproceedings{li-etal-2023-multi-step,
title = "Multi-step Jailbreaking Privacy Attacks on {C}hat{GPT}",
author = "Li, Haoran and
Guo, Dadi and
Fan, Wei and
Xu, Mingshi and
Huang, Jie and
Meng, Fanpu and
Song, Yangqiu",
editor = "Bouamor, Houda and
Pino, Jua... | With the rapid progress of large language models (LLMs), many downstream NLP tasks can be well solved given appropriate prompts. Though model developers and researchers work hard on dialog safety to avoid generating harmful content from LLMs, it is still challenging to steer AI-generated content (AIGC) for the human go... | [
"Li, Haoran",
"Guo, Dadi",
"Fan, Wei",
"Xu, Mingshi",
"Huang, Jie",
"Meng, Fanpu",
"Song, Yangqiu"
] | Multi-step Jailbreaking Privacy Attacks on ChatGPT | findings-emnlp.272 | 2304.05197 | [
"https://github.com/hkust-knowcomp/llm-multistep-jailbreak"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.273.bib | https://aclanthology.org/2023.findings-emnlp.273/ | @inproceedings{gatto-etal-2023-chain,
title = "Chain-of-Thought Embeddings for Stance Detection on Social Media",
author = "Gatto, Joseph and
Sharif, Omar and
Preum, Sarah",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association fo... | Stance detection on social media is challenging for Large Language Models (LLMs), as emerging slang and colloquial language in online conversations often contain deeply implicit stance labels. Chain-of-Thought (COT) prompting has recently been shown to improve performance on stance detection tasks {---} alleviating som... | [
"Gatto, Joseph",
"Sharif, Omar",
"Preum, Sarah"
] | Chain-of-Thought Embeddings for Stance Detection on Social Media | findings-emnlp.273 | 2310.19750 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.274.bib | https://aclanthology.org/2023.findings-emnlp.274/ | @inproceedings{nakshatri-etal-2023-using,
title = "Using {LLM} for Improving Key Event Discovery: Temporal-Guided News Stream Clustering with Event Summaries",
author = "Nakshatri, Nishanth and
Liu, Siyi and
Chen, Sihao and
Roth, Dan and
Goldwasser, Dan and
Hopkins, Daniel",
... | Understanding and characterizing the discus- sions around key events in news streams is important for analyzing political discourse. In this work, we study the problem of identification of such key events and the news articles associated with those events from news streams. We propose a generic framework for news strea... | [
"Nakshatri, Nishanth",
"Liu, Siyi",
"Chen, Sihao",
"Roth, Dan",
"Goldwasser, Dan",
"Hopkins, Daniel"
] | Using LLM for Improving Key Event Discovery: Temporal-Guided News Stream Clustering with Event Summaries | findings-emnlp.274 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.275.bib | https://aclanthology.org/2023.findings-emnlp.275/ | @inproceedings{liu-etal-2023-descriptive,
title = "Descriptive Prompt Paraphrasing for Target-Oriented Multimodal Sentiment Classification",
author = "Liu, Dan and
Li, Lin and
Tao, Xiaohui and
Cui, Jian and
Xie, Qing",
editor = "Bouamor, Houda and
Pino, Juan and
Bal... | Target-Oriented Multimodal Sentiment Classification (TMSC) aims to perform sentiment polarity on a target jointly considering its corresponding multiple modalities including text, image, and others. Current researches mainly work on either of two types of targets in a decentralized manner. One type is entity, such as a... | [
"Liu, Dan",
"Li, Lin",
"Tao, Xiaohui",
"Cui, Jian",
"Xie, Qing"
] | Descriptive Prompt Paraphrasing for Target-Oriented Multimodal Sentiment Classification | findings-emnlp.275 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.276.bib | https://aclanthology.org/2023.findings-emnlp.276/ | @inproceedings{jin-etal-2023-joint,
title = "Joint Semantic and Strategy Matching for Persuasive Dialogue",
author = "Jin, Chuhao and
Zhu, Yutao and
Kong, Lingzhen and
Li, Shijie and
Zhang, Xiao and
Song, Ruihua and
Chen, Xu and
Chen, Huan and
Sun, Yuchong... | Persuasive dialogue aims to persuade users to achieve some targets by conversations. While previous persuasion models have achieved notable successes, they mostly base themselves on utterance semantic matching, and an important aspect has been ignored, that is, the strategy of the conversations, for example, the agent ... | [
"Jin, Chuhao",
"Zhu, Yutao",
"Kong, Lingzhen",
"Li, Shijie",
"Zhang, Xiao",
"Song, Ruihua",
"Chen, Xu",
"Chen, Huan",
"Sun, Yuchong",
"Chen, Yu",
"Xu, Jun"
] | Joint Semantic and Strategy Matching for Persuasive Dialogue | findings-emnlp.276 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.277.bib | https://aclanthology.org/2023.findings-emnlp.277/ | @inproceedings{bin-etal-2023-non-autoregressive,
title = "Non-Autoregressive Sentence Ordering",
author = "Bin, Yi and
Shi, Wenhao and
Ji, Bin and
Zhang, Jipeng and
Ding, Yujuan and
Yang, Yang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
b... | Existing sentence ordering approaches generally employ encoder-decoder frameworks with the pointer net to recover the coherence by recurrently predicting each sentence step-by-step. Such an autoregressive manner only leverages unilateral dependencies during decoding and cannot fully explore the semantic dependency betw... | [
"Bin, Yi",
"Shi, Wenhao",
"Ji, Bin",
"Zhang, Jipeng",
"Ding, Yujuan",
"Yang, Yang"
] | Non-Autoregressive Sentence Ordering | findings-emnlp.277 | 2310.12640 | [
"https://github.com/steven640pixel/nonautoregressive-sentence-ordering"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.278.bib | https://aclanthology.org/2023.findings-emnlp.278/ | @inproceedings{shen-etal-2023-large,
title = "Large Language Models are Not Yet Human-Level Evaluators for Abstractive Summarization",
author = "Shen, Chenhui and
Cheng, Liying and
Nguyen, Xuan-Phi and
You, Yang and
Bing, Lidong",
editor = "Bouamor, Houda and
Pino, Juan a... | With the recent undeniable advancement in reasoning abilities in large language models (LLMs) like ChatGPT and GPT-4, there is a growing trend for using LLMs on various tasks. One area where LLMs can be employed is as an alternative evaluation metric for complex generative tasks, which generally demands expensive human... | [
"Shen, Chenhui",
"Cheng, Liying",
"Nguyen, Xuan-Phi",
"You, Yang",
"Bing, Lidong"
] | Large Language Models are Not Yet Human-Level Evaluators for Abstractive Summarization | findings-emnlp.278 | 2305.13091 | [
"https://github.com/damo-nlp-sg/llm_summeval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.279.bib | https://aclanthology.org/2023.findings-emnlp.279/ | @inproceedings{sabir-padro-2023-women,
title = "Women Wearing Lipstick: Measuring the Bias Between an Object and Its Related Gender",
author = "Sabir, Ahmed and
Padr{\'o}, Llu{\'\i}s",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Associatio... | In this paper, we investigate the impact of objects on gender bias in image captioning systems. Our results show that only gender-specific objects have a strong gender bias (e.g., women-lipstick). In addition, we propose a visual semantic-based gender score that measures the degree of bias and can be used as a plug-in ... | [
"Sabir, Ahmed",
"Padr{\\'o}, Llu{\\'\\i}s"
] | Women Wearing Lipstick: Measuring the Bias Between an Object and Its Related Gender | findings-emnlp.279 | 2310.19130 | [
"https://github.com/ahmedssabir/genderscore"
] | https://huggingface.co/papers/2310.19130 | 1 | 0 | 0 | 2 | [] | [] | [
"AhmedSSabir/Demo-for-Gender-Score",
"AhmedSSabir/Demo-for-Gender-Score-jp",
"AhmedSSabir/Demo-for-Gender-Score-AR"
] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.280.bib | https://aclanthology.org/2023.findings-emnlp.280/ | @inproceedings{rennard-etal-2023-fredsum,
title = "{FREDS}um: A Dialogue Summarization Corpus for {F}rench Political Debates",
author = "Rennard, Virgile and
Shang, Guokan and
Grari, Damien and
Hunter, Julie and
Vazirgiannis, Michalis",
editor = "Bouamor, Houda and
Pino, J... | Recent advances in deep learning, and especially the invention of encoder-decoder architectures, have significantly improved the performance of abstractive summarization systems. While the majority of research has focused on written documents, we have observed an increasing interest in the summarization of dialogues an... | [
"Rennard, Virgile",
"Shang, Guokan",
"Grari, Damien",
"Hunter, Julie",
"Vazirgiannis, Michalis"
] | FREDSum: A Dialogue Summarization Corpus for French Political Debates | findings-emnlp.280 | 2312.04843 | [
"https://github.com/linto-ai/fredsum"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.281.bib | https://aclanthology.org/2023.findings-emnlp.281/ | @inproceedings{wang-shang-2023-towards,
title = "Towards Zero-shot Relation Extraction in Web Mining: A Multimodal Approach with Relative {XML} Path",
author = "Wang, Zilong and
Shang, Jingbo",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the A... | The rapid growth of web pages and the increasing complexity of their structure poses a challenge for web mining models. Web mining models are required to understand semi-structured web pages, particularly when little is known about the subject or template of a new page. Current methods migrate language models to web mi... | [
"Wang, Zilong",
"Shang, Jingbo"
] | Towards Zero-shot Relation Extraction in Web Mining: A Multimodal Approach with Relative XML Path | findings-emnlp.281 | 2305.13805 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.282.bib | https://aclanthology.org/2023.findings-emnlp.282/ | @inproceedings{ganti-etal-2023-narrative,
title = "Narrative Style and the Spread of Health Misinformation on {T}witter",
author = "Ganti, Achyutarama and
Hussein, Eslam Ali Hassan and
Wilson, Steven and
Ma, Zexin and
Zhao, Xinyan",
editor = "Bouamor, Houda and
Pino, Juan ... | Using a narrative style is an effective way to communicate health information both on and off social media. Given the amount of misinformation being spread online and its potential negative effects, it is crucial to investigate the interplay between narrative communication style and misinformative health content on use... | [
"Ganti, Achyutarama",
"Hussein, Eslam Ali Hassan",
"Wilson, Steven",
"Ma, Zexin",
"Zhao, Xinyan"
] | Narrative Style and the Spread of Health Misinformation on Twitter | findings-emnlp.282 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.283.bib | https://aclanthology.org/2023.findings-emnlp.283/ | @inproceedings{wang-etal-2023-hadskip,
title = "{H}ad{S}kip: Homotopic and Adaptive Layer Skipping of Pre-trained Language Models for Efficient Inference",
author = "Wang, Haoyu and
Wang, Yaqing and
Liu, Tianci and
Zhao, Tuo and
Gao, Jing",
editor = "Bouamor, Houda and
Pin... | Pre-trained language models (LMs) have brought remarkable performance on numerous NLP tasks. However, they require significant resources and entail high computational costs for inference, making them challenging to deploy in real-world and real-time systems. Existing early exiting methods aim to reduce computational co... | [
"Wang, Haoyu",
"Wang, Yaqing",
"Liu, Tianci",
"Zhao, Tuo",
"Gao, Jing"
] | HadSkip: Homotopic and Adaptive Layer Skipping of Pre-trained Language Models for Efficient Inference | findings-emnlp.283 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.284.bib | https://aclanthology.org/2023.findings-emnlp.284/ | @inproceedings{chen-etal-2023-empowering,
title = "Empowering Psychotherapy with Large Language Models: Cognitive Distortion Detection through Diagnosis of Thought Prompting",
author = "Chen, Zhiyu and
Lu, Yujie and
Wang, William",
editor = "Bouamor, Houda and
Pino, Juan and
Bali... | Mental illness remains one of the most critical public health issues of our time, due to the severe scarcity and accessibility limit of professionals. Psychotherapy requires high-level expertise to conduct deep, complex reasoning and analysis on the cognition modeling of the patients. In the era of Large Language Model... | [
"Chen, Zhiyu",
"Lu, Yujie",
"Wang, William"
] | Empowering Psychotherapy with Large Language Models: Cognitive Distortion Detection through Diagnosis of Thought Prompting | findings-emnlp.284 | 2310.07146 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.285.bib | https://aclanthology.org/2023.findings-emnlp.285/ | @inproceedings{kazemnejad-etal-2023-measuring,
title = "Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models",
author = "Kazemnejad, Amirhossein and
Rezagholizadeh, Mehdi and
Parthasarathi, Prasanna and
Chandar, Sarath",
editor = "Bouamor, Houda and
P... | While pre-trained language models (PLMs) have shown evidence of acquiring vast amounts of knowledge, it remains unclear how much of this parametric knowledge is actually usable in performing downstream tasks. We propose a systematic framework to measure parametric knowledge utilization in PLMs. Our framework first extr... | [
"Kazemnejad, Amirhossein",
"Rezagholizadeh, Mehdi",
"Parthasarathi, Prasanna",
"Chandar, Sarath"
] | Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models | findings-emnlp.285 | 2305.14775 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.286.bib | https://aclanthology.org/2023.findings-emnlp.286/ | @inproceedings{zhou-etal-2023-non-compositional,
title = "Non-compositional Expression Generation Based on Curriculum Learning and Continual Learning",
author = "Zhou, Jianing and
Zeng, Ziheng and
Gong, Hongyu and
Bhat, Suma",
editor = "Bouamor, Houda and
Pino, Juan and
Ba... | Non-compositional expressions, by virtue of their non-compositionality, are a classic {`}pain in the neck{'} for NLP systems. Different from the general language modeling and generation tasks that are primarily compositional, generating non-compositional expressions is more challenging for current neural models, includ... | [
"Zhou, Jianing",
"Zeng, Ziheng",
"Gong, Hongyu",
"Bhat, Suma"
] | Non-compositional Expression Generation Based on Curriculum Learning and Continual Learning | findings-emnlp.286 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.287.bib | https://aclanthology.org/2023.findings-emnlp.287/ | @inproceedings{kwak-etal-2023-information,
title = "Information Extraction from Legal Wills: How Well Does {GPT}-4 Do?",
author = "Kwak, Alice and
Jeong, Cheonkam and
Forte, Gaetano and
Bambauer, Derek and
Morrison, Clayton and
Surdeanu, Mihai",
editor = "Bouamor, Houda a... | This work presents a manually annotated dataset for Information Extraction (IE) from legal wills, and relevant in-context learning experiments on the dataset. The dataset consists of entities, binary relations between the entities (e.g., relations between testator and beneficiary), and n-ary events (e.g., bequest) extr... | [
"Kwak, Alice",
"Jeong, Cheonkam",
"Forte, Gaetano",
"Bambauer, Derek",
"Morrison, Clayton",
"Surdeanu, Mihai"
] | Information Extraction from Legal Wills: How Well Does GPT-4 Do? | findings-emnlp.287 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.288.bib | https://aclanthology.org/2023.findings-emnlp.288/ | @inproceedings{jumelet-zuidema-2023-transparency,
title = "Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution",
author = "Jumelet, Jaap and
Zuidema, Willem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitl... | We present a setup for training, evaluating and interpreting neural language models, that uses artificial, language-like data. The data is generated using a massive probabilistic grammar (based on state-split PCFGs), that is itself derived from a large natural language corpus, but also provides us complete control over... | [
"Jumelet, Jaap",
"Zuidema, Willem"
] | Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution | findings-emnlp.288 | 2310.14840 | [
"https://github.com/clclab/pcfg-lm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.289.bib | https://aclanthology.org/2023.findings-emnlp.289/ | @inproceedings{song-etal-2023-continual,
title = "Continual Generalized Intent Discovery: Marching Towards Dynamic and Open-world Intent Recognition",
author = "Song, Xiaoshuai and
Mou, Yutao and
He, Keqing and
Qiu, Yueyan and
Zhao, Jinxu and
Wang, Pei and
Xu, Weiran",
... | In a practical dialogue system, users may input out-of-domain (OOD) queries. The Generalized Intent Discovery (GID) task aims to discover OOD intents from OOD queries and extend them to the in-domain (IND) classifier. However, GID only considers one stage of OOD learning, and needs to utilize the data in all previous s... | [
"Song, Xiaoshuai",
"Mou, Yutao",
"He, Keqing",
"Qiu, Yueyan",
"Zhao, Jinxu",
"Wang, Pei",
"Xu, Weiran"
] | Continual Generalized Intent Discovery: Marching Towards Dynamic and Open-world Intent Recognition | findings-emnlp.289 | 2310.10184 | [
"https://github.com/songxiaoshuai/CGID"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.290.bib | https://aclanthology.org/2023.findings-emnlp.290/ | @inproceedings{santra-etal-2023-frugal,
title = "Frugal Prompting for Dialog Models",
author = "Santra, Bishal and
Basak, Sakya and
De, Abhinandan and
Gupta, Manish and
Goyal, Pawan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findin... | The use of large language models (LLMs) in natural language processing (NLP) tasks is rapidly increasing, leading to changes in how researchers approach problems in the field. To fully utilize these models{'} abilities, a better understanding of their behavior for different input protocols is required. With LLMs, users... | [
"Santra, Bishal",
"Basak, Sakya",
"De, Abhinandan",
"Gupta, Manish",
"Goyal, Pawan"
] | Frugal Prompting for Dialog Models | findings-emnlp.290 | 2305.14919 | [
"https://github.com/bsantraigi/frugal-prompting"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.291.bib | https://aclanthology.org/2023.findings-emnlp.291/ | @inproceedings{he-garner-2023-interpreter,
title = "The Interpreter Understands Your Meaning: End-to-end Spoken Language Understanding Aided by Speech Translation",
author = "He, Mutian and
Garner, Philip",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Find... | End-to-end spoken language understanding (SLU) remains elusive even with current large pretrained language models on text and speech, especially in multilingual cases. Machine translation has been established as a powerful pretraining objective on text as it enables the model to capture high-level semantics of the inpu... | [
"He, Mutian",
"Garner, Philip"
] | The Interpreter Understands Your Meaning: End-to-end Spoken Language Understanding Aided by Speech Translation | findings-emnlp.291 | 2305.09652 | [
"https://github.com/idiap/translation-aided-slu"
] | https://huggingface.co/papers/2305.09652 | 0 | 0 | 0 | 2 | [
"mutiann/translation-aided-slu"
] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.292.bib | https://aclanthology.org/2023.findings-emnlp.292/ | @inproceedings{ding-etal-2023-maclasa,
title = "{M}ac{L}a{S}a: Multi-Aspect Controllable Text Generation via Efficient Sampling from Compact Latent Space",
author = "Ding, Hanxing and
Pang, Liang and
Wei, Zihao and
Shen, Huawei and
Cheng, Xueqi and
Chua, Tat-Seng",
editor ... | Multi-aspect controllable text generation aims to generate fluent sentences that possess multiple desired attributes simultaneously. Traditional methods either require expensive iteration / searching within the discrete text space during the decoding stage, or train separate controllers for each aspect, resulting in a ... | [
"Ding, Hanxing",
"Pang, Liang",
"Wei, Zihao",
"Shen, Huawei",
"Cheng, Xueqi",
"Chua, Tat-Seng"
] | MacLaSa: Multi-Aspect Controllable Text Generation via Efficient Sampling from Compact Latent Space | findings-emnlp.292 | 2305.12785 | [
"https://github.com/trustedllm/maclasa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.293.bib | https://aclanthology.org/2023.findings-emnlp.293/ | @inproceedings{liu-etal-2023-hpe,
title = "{HPE}: Answering Complex Questions over Text by Hybrid Question Parsing and Execution",
author = "Liu, Ye and
Yavuz, Semih and
Meng, Rui and
Radev, Dragomir and
Xiong, Caiming and
Joty, Shafiq and
Zhou, Yingbo",
editor = "B... | The dominant paradigm of textual question answering systems is based on end-to-end neural networks, which excels at answering natural language questions but falls short on complex ones. This stands in contrast to the broad adaptation of semantic parsing approaches over structured data sources (e.g., relational database... | [
"Liu, Ye",
"Yavuz, Semih",
"Meng, Rui",
"Radev, Dragomir",
"Xiong, Caiming",
"Joty, Shafiq",
"Zhou, Yingbo"
] | HPE: Answering Complex Questions over Text by Hybrid Question Parsing and Execution | findings-emnlp.293 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.294.bib | https://aclanthology.org/2023.findings-emnlp.294/ | @inproceedings{liu-etal-2023-length,
title = "Length-Adaptive Distillation: Customizing Small Language Model for Dynamic Token Pruning",
author = "Liu, Chang and
Tao, Chongyang and
Liang, Jianxin and
Feng, Jiazhan and
Shen, Tao and
Huang, Quzhe and
Zhao, Dongyan",
e... | Pre-trained language models greatly improve the performance of various tasks but at a cost of high computation overhead. To facilitate practical applications, there are mainly two lines of research to accelerate model inference: model compression and dynamic computation (e.g., dynamic token pruning). Existing works eit... | [
"Liu, Chang",
"Tao, Chongyang",
"Liang, Jianxin",
"Feng, Jiazhan",
"Shen, Tao",
"Huang, Quzhe",
"Zhao, Dongyan"
] | Length-Adaptive Distillation: Customizing Small Language Model for Dynamic Token Pruning | findings-emnlp.294 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.295.bib | https://aclanthology.org/2023.findings-emnlp.295/ | @inproceedings{upadhyaya-etal-2023-toxicity,
title = "Toxicity, Morality, and Speech Act Guided Stance Detection",
author = "Upadhyaya, Apoorva and
Fisichella, Marco and
Nejdl, Wolfgang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the ... | In this work, we focus on the task of determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation, fake news through polarizing views. Existing literature suggests that higher levels of toxicity prevalent i... | [
"Upadhyaya, Apoorva",
"Fisichella, Marco",
"Nejdl, Wolfgang"
] | Toxicity, Morality, and Speech Act Guided Stance Detection | findings-emnlp.295 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.296.bib | https://aclanthology.org/2023.findings-emnlp.296/ | @inproceedings{schouten-etal-2023-reasoning,
title = "Reasoning about Ambiguous Definite Descriptions",
author = "Schouten, Stefan and
Bloem, Peter and
Markov, Ilia and
Vossen, Piek",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of t... | Natural language reasoning plays an increasingly important role in improving language models{'} ability to solve complex language understanding tasks. An interesting use case for reasoning is the resolution of context-dependent ambiguity. But no resources exist to evaluate how well Large Language Models can use explici... | [
"Schouten, Stefan",
"Bloem, Peter",
"Markov, Ilia",
"Vossen, Piek"
] | Reasoning about Ambiguous Definite Descriptions | findings-emnlp.296 | 2310.14657 | [
"https://github.com/sfschouten/exploiting-ambiguity"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.297.bib | https://aclanthology.org/2023.findings-emnlp.297/ | @inproceedings{canby-hockenmaier-2023-framework,
title = "A Framework for Bidirectional Decoding: Case Study in Morphological Inflection",
author = "Canby, Marc and
Hockenmaier, Julia",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Associati... | Transformer-based encoder-decoder models that generate outputs in a left-to-right fashion have become standard for sequence-to-sequence tasks. In this paper, we propose a framework for decoding that produces sequences from the {``}outside-in{''}: at each step, the model chooses to generate a token on the left, on the r... | [
"Canby, Marc",
"Hockenmaier, Julia"
] | A Framework for Bidirectional Decoding: Case Study in Morphological Inflection | findings-emnlp.297 | 2305.12580 | [
"https://github.com/marccanby/bidi_decoding"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.298.bib | https://aclanthology.org/2023.findings-emnlp.298/ | @inproceedings{fu-etal-2023-text,
title = "Text-guided 3{D} Human Generation from 2{D} Collections",
author = "Fu, Tsu-Jui and
Xiong, Wenhan and
Nie, Yixin and
Liu, Jingyu and
Oguz, Barlas and
Wang, William",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, ... | 3D human modeling has been widely used for engaging interaction in gaming, film, and animation. The customization of these characters is crucial for creativity and scalability, which highlights the importance of controllability. In this work, we introduce Text-guided 3D Human Generation (T3H), where a model is to gener... | [
"Fu, Tsu-Jui",
"Xiong, Wenhan",
"Nie, Yixin",
"Liu, Jingyu",
"Oguz, Barlas",
"Wang, William"
] | Text-guided 3D Human Generation from 2D Collections | findings-emnlp.298 | 2305.14312 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.299.bib | https://aclanthology.org/2023.findings-emnlp.299/ | @inproceedings{huang-zhu-2023-statistically,
title = "Statistically Profiling Biases in Natural Language Reasoning Datasets and Models",
author = "Huang, Shanshan and
Zhu, Kenny",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for... | Recent studies have shown that many natural language understanding and reasoning datasets contain statistical cues that can be exploited by NLP models, resulting in an overestimation of their capabilities. Existing methods, such as {``}hypothesis-only{''} tests and CheckList, are limited in identifying these cues and e... | [
"Huang, Shanshan",
"Zhu, Kenny"
] | Statistically Profiling Biases in Natural Language Reasoning Datasets and Models | findings-emnlp.299 | 2102.04632 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.300.bib | https://aclanthology.org/2023.findings-emnlp.300/ | @inproceedings{hao-linzen-2023-verb,
title = "Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number",
author = "Hao, Sophie and
Linzen, Tal",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Comput... | Deep architectures such as Transformers are sometimes criticized for having uninterpretable {``}black-box{''} representations. We use causal intervention analysis to show that, in fact, some linguistic features are represented in a linear, interpretable format. Specifically, we show that BERT{'}s ability to conjugate v... | [
"Hao, Sophie",
"Linzen, Tal"
] | Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number | findings-emnlp.300 | 2310.15151 | [
"https://github.com/yidinghao/causal-conjugation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.301.bib | https://aclanthology.org/2023.findings-emnlp.301/ | @inproceedings{murahari-etal-2023-mux-plms,
title = "{MUX}-{PLM}s: Data Multiplexing for High-throughput Language Models",
author = "Murahari, Vishvak and
Deshpande, Ameet and
Jimenez, Carlos and
Shafran, Izhak and
Wang, Mingqiu and
Cao, Yuan and
Narasimhan, Karthik",
... | The widespread adoption of large language models such as ChatGPT and Bard has led to unprecedented demand for these technologies. The burgeoning cost of inference for ever-increasing model sizes coupled with hardware shortages has limited affordable access and poses a pressing need for efficiency approaches geared towa... | [
"Murahari, Vishvak",
"Deshpande, Ameet",
"Jimenez, Carlos",
"Shafran, Izhak",
"Wang, Mingqiu",
"Cao, Yuan",
"Narasimhan, Karthik"
] | MUX-PLMs: Data Multiplexing for High-throughput Language Models | findings-emnlp.301 | 2302.12441 | [
"https://github.com/princeton-nlp/datamux-pretraining"
] | https://huggingface.co/papers/2302.12441 | 0 | 0 | 0 | 7 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.302.bib | https://aclanthology.org/2023.findings-emnlp.302/ | @inproceedings{lee-etal-2023-last,
title = "That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context?",
author = "Lee, Jaechan and
Liu, Alisa and
Ahia, Orevaoghene and
Gonen, Hila and
Smith, Noah",
editor = "Bouamor, Houda and
Pino... | The translation of ambiguous text presents a challenge for translation systems, as it requires using the surrounding context to disambiguate the intended meaning as much as possible. While prior work has studied ambiguities that result from different grammatical features of the source and target language, we study sema... | [
"Lee, Jaechan",
"Liu, Alisa",
"Ahia, Orevaoghene",
"Gonen, Hila",
"Smith, Noah"
] | That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context? | findings-emnlp.302 | 2310.14610 | [
"https://github.com/jaechan-repo/mt-ambiguity"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.303.bib | https://aclanthology.org/2023.findings-emnlp.303/ | @inproceedings{sileo-lernould-2023-mindgames,
title = "{M}ind{G}ames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic",
author = "Sileo, Damien and
Lernould, Antoine",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findin... | Theory of Mind (ToM) is a critical component of intelligence but its assessment remains the subject of heated debates. Prior research applied human ToM assessments to natural language processing models using either human-created standardized tests or rule-based templates. However, these methods primarily focus on simpl... | [
"Sileo, Damien",
"Lernould, Antoine"
] | MindGames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic | findings-emnlp.303 | 2305.03353 | [
"https://github.com/antoinelrnld/modlog"
] | https://huggingface.co/papers/2305.03353 | 1 | 0 | 0 | 2 | [] | [
"sileod/mindgames"
] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.304.bib | https://aclanthology.org/2023.findings-emnlp.304/ | @inproceedings{liu-etal-2023-latentlogic,
title = "{LATENTLOGIC}: Learning Logic Rules in Latent Space over Knowledge Graphs",
author = "Liu, Junnan and
Mao, Qianren and
Lin, Chenghua and
Song, Yangqiu and
Li, Jianxin",
editor = "Bouamor, Houda and
Pino, Juan and
Ba... | Learning logic rules for knowledge graph reasoning is essential as such rules provide interpretable explanations for reasoning and can be generalized to different domains. However, existing methods often face challenges such as searching in a vast search space (e.g., enumeration of relational paths or multiplication of... | [
"Liu, Junnan",
"Mao, Qianren",
"Lin, Chenghua",
"Song, Yangqiu",
"Li, Jianxin"
] | LATENTLOGIC: Learning Logic Rules in Latent Space over Knowledge Graphs | findings-emnlp.304 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.305.bib | https://aclanthology.org/2023.findings-emnlp.305/ | @inproceedings{asl-etal-2023-robustembed,
title = "{R}obust{E}mbed: Robust Sentence Embeddings Using Self-Supervised Contrastive Pre-Training",
author = "Asl, Javad and
Blanco, Eduardo and
Takabi, Daniel",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle =... | Pre-trained language models (PLMs) have demonstrated their exceptional performance across a wide range of natural language processing tasks. The utilization of PLM-based sentence embeddings enables the generation of contextual representations that capture rich semantic information. However, despite their success with u... | [
"Asl, Javad",
"Blanco, Eduardo",
"Takabi, Daniel"
] | RobustEmbed: Robust Sentence Embeddings Using Self-Supervised Contrastive Pre-Training | findings-emnlp.305 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.306.bib | https://aclanthology.org/2023.findings-emnlp.306/ | @inproceedings{fang-etal-2023-votes,
title = "More than Votes? Voting and Language based Partisanship in the {US} {S}upreme {C}ourt",
author = "Fang, Biaoyan and
Cohn, Trevor and
Baldwin, Timothy and
Frermann, Lea",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika"... | Understanding the prevalence and dynamics of justice partisanship and ideology in the US Supreme Court is critical in studying jurisdiction. Most research quantifies partisanship based on voting behavior, and oral arguments in the courtroom {---} the last essential procedure before the final case outcome {---} have not... | [
"Fang, Biaoyan",
"Cohn, Trevor",
"Baldwin, Timothy",
"Frermann, Lea"
] | More than Votes? Voting and Language based Partisanship in the US Supreme Court | findings-emnlp.306 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.307.bib | https://aclanthology.org/2023.findings-emnlp.307/ | @inproceedings{yue-etal-2023-automatic,
title = "Automatic Evaluation of Attribution by Large Language Models",
author = "Yue, Xiang and
Wang, Boshi and
Chen, Ziru and
Zhang, Kai and
Su, Yu and
Sun, Huan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kal... | A recent focus of large language model (LLM) development, as exemplified by generative search engines, is to incorporate external references to generate and support its claims. However, evaluating the attribution, i.e., verifying whether the generated statement is fully supported by the cited reference, remains an open... | [
"Yue, Xiang",
"Wang, Boshi",
"Chen, Ziru",
"Zhang, Kai",
"Su, Yu",
"Sun, Huan"
] | Automatic Evaluation of Attribution by Large Language Models | findings-emnlp.307 | 2305.06311 | [
"https://github.com/osu-nlp-group/attrscore"
] | https://huggingface.co/papers/2305.06311 | 2 | 0 | 0 | 6 | [] | [
"osunlp/AttrScore"
] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.308.bib | https://aclanthology.org/2023.findings-emnlp.308/ | @inproceedings{sengupta-etal-2023-modeling,
title = "Modeling Highlighting of Metaphors in Multitask Contrastive Learning Paradigms",
author = "Sengupta, Meghdut and
Alshomary, Milad and
Scharlau, Ingrid and
Wachsmuth, Henning",
editor = "Bouamor, Houda and
Pino, Juan and
... | Metaphorical language, such as {``}spending time together{''}, projects meaning from a source domain (here, $\textit{money}$) to a target domain ($\textit{time}$). Thereby, it highlights certain aspects of the target domain, such as the $\textit{effort}$ behind the time investment. Highlighting aspects with metaphors (... | [
"Sengupta, Meghdut",
"Alshomary, Milad",
"Scharlau, Ingrid",
"Wachsmuth, Henning"
] | Modeling Highlighting of Metaphors in Multitask Contrastive Learning Paradigms | findings-emnlp.308 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.309.bib | https://aclanthology.org/2023.findings-emnlp.309/ | @inproceedings{wang-etal-2023-ldm2,
title = "{LDM}$^2$: A Large Decision Model Imitating Human Cognition with Dynamic Memory Enhancement",
author = "Wang, Xingjin and
Li, Linjing and
Zeng, Daniel",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findin... | With the rapid development of large language models (LLMs), it is highly demanded that LLMs can be adopted to make decisions to enable the artificial general intelligence. Most approaches leverage manually crafted examples to prompt the LLMs to imitate the decision process of human. However, designing optimal prompts i... | [
"Wang, Xingjin",
"Li, Linjing",
"Zeng, Daniel"
] | LDM^2: A Large Decision Model Imitating Human Cognition with Dynamic Memory Enhancement | findings-emnlp.309 | 2312.08402 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.310.bib | https://aclanthology.org/2023.findings-emnlp.310/ | @inproceedings{chen-etal-2023-zara,
title = "{ZARA}: Improving Few-Shot Self-Rationalization for Small Language Models",
author = "Chen, Wei-Lin and
Yen, An-Zi and
Wu, Cheng-Kuang and
Huang, Hen-Hsen and
Chen, Hsin-Hsi",
editor = "Bouamor, Houda and
Pino, Juan and
B... | Language models (LMs) that jointly generate end-task answers as well as free-text rationales are known as self-rationalization models. Recent works demonstrate great performance gain for self-rationalization by few-shot prompting LMs with rationale-augmented exemplars. However, the ability to benefit from explanations ... | [
"Chen, Wei-Lin",
"Yen, An-Zi",
"Wu, Cheng-Kuang",
"Huang, Hen-Hsen",
"Chen, Hsin-Hsi"
] | ZARA: Improving Few-Shot Self-Rationalization for Small Language Models | findings-emnlp.310 | 2305.07355 | [
"https://github.com/ntunlplab/zara"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.311.bib | https://aclanthology.org/2023.findings-emnlp.311/ | @inproceedings{lin-etal-2023-toxicchat,
title = "{T}oxic{C}hat: Unveiling Hidden Challenges of Toxicity Detection in Real-World User-{AI} Conversation",
author = "Lin, Zi and
Wang, Zihan and
Tong, Yongqi and
Wang, Yangkun and
Guo, Yuxin and
Wang, Yujia and
Shang, Jingbo... | Despite remarkable advances that large language models have achieved in chatbots nowadays, maintaining a non-toxic user-AI interactive environment has become increasingly critical nowadays. However, previous efforts in toxicity detection have been mostly based on benchmarks derived from social media contents, leaving t... | [
"Lin, Zi",
"Wang, Zihan",
"Tong, Yongqi",
"Wang, Yangkun",
"Guo, Yuxin",
"Wang, Yujia",
"Shang, Jingbo"
] | ToxicChat: Unveiling Hidden Challenges of Toxicity Detection in Real-World User-AI Conversation | findings-emnlp.311 | 2310.17389 | [
""
] | https://huggingface.co/papers/2310.17389 | 0 | 0 | 0 | 7 | [
"google/shieldgemma-2b",
"google/shieldgemma-27b",
"google/shieldgemma-9b",
"lmsys/toxicchat-t5-large-v1.0",
"QuantFactory/shieldgemma-2b-GGUF",
"QuantFactory/shieldgemma-9b-GGUF",
"LiteLLMs/shieldgemma-2b-GGUF",
"LiteLLMs/shieldgemma-9b-GGUF"
] | [
"lmsys/toxic-chat",
"d-llm/toxic-chat"
] | [
"coium/google-shieldgemma-2b"
] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.312.bib | https://aclanthology.org/2023.findings-emnlp.312/ | @inproceedings{stahl-etal-2023-mind,
title = "Mind the Gap: Automated Corpus Creation for Enthymeme Detection and Reconstruction in Learner Arguments",
author = {Stahl, Maja and
D{\"u}sterhus, Nick and
Chen, Mei-Hua and
Wachsmuth, Henning},
editor = "Bouamor, Houda and
Pino, Juan... | Writing strong arguments can be challenging for learners. It requires to select and arrange multiple argumentative discourse units (ADUs) in a logical and coherent way as well as to decide which ADUs to leave implicit, so called enthymemes. However, when important ADUs are missing, readers might not be able to follow t... | [
"Stahl, Maja",
"D{\\\"u}sterhus, Nick",
"Chen, Mei-Hua",
"Wachsmuth, Henning"
] | Mind the Gap: Automated Corpus Creation for Enthymeme Detection and Reconstruction in Learner Arguments | findings-emnlp.312 | 2310.18098 | [
"https://github.com/webis-de/emnlp-23"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.313.bib | https://aclanthology.org/2023.findings-emnlp.313/ | @inproceedings{yang-etal-2023-dior,
title = "Dior-{CVAE}: Pre-trained Language Models and Diffusion Priors for Variational Dialog Generation",
author = "Yang, Tianyu and
Tran, Thy Thy and
Gurevych, Iryna",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle =... | Current variational dialog models have employed pre-trained language models (PLMs) to parameterize the likelihood and posterior distributions. However, the Gaussian assumption made on the prior distribution is incompatible with these distributions, thus restricting the diversity of generated responses. These models als... | [
"Yang, Tianyu",
"Tran, Thy Thy",
"Gurevych, Iryna"
] | Dior-CVAE: Pre-trained Language Models and Diffusion Priors for Variational Dialog Generation | findings-emnlp.313 | 2305.15025 | [
"https://github.com/ukplab/dior-cvae"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.314.bib | https://aclanthology.org/2023.findings-emnlp.314/ | @inproceedings{zhao-etal-2023-retrieving,
title = "Retrieving Multimodal Information for Augmented Generation: A Survey",
author = "Zhao, Ruochen and
Chen, Hailin and
Wang, Weishi and
Jiao, Fangkai and
Do, Xuan Long and
Qin, Chengwei and
Ding, Bosheng and
Guo, Xi... | As Large Language Models (LLMs) become popular, there emerged an important trend of using multimodality to augment the LLMs{'} generation ability, which enables LLMs to better interact with the world. However, there lacks a unified perception of at which stage and how to incorporate different modalities. In this survey... | [
"Zhao, Ruochen",
"Chen, Hailin",
"Wang, Weishi",
"Jiao, Fangkai",
"Do, Xuan Long",
"Qin, Chengwei",
"Ding, Bosheng",
"Guo, Xiaobao",
"Li, Minzhi",
"Li, Xingxuan",
"Joty, Shafiq"
] | Retrieving Multimodal Information for Augmented Generation: A Survey | findings-emnlp.314 | 2303.10868 | [
""
] | https://huggingface.co/papers/2303.10868 | 1 | 0 | 0 | 11 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.315.bib | https://aclanthology.org/2023.findings-emnlp.315/ | @inproceedings{hou-li-2023-improving,
title = "Improving Contrastive Learning of Sentence Embeddings with Focal {I}nfo{NCE}",
author = "Hou, Pengyue and
Li, Xingyu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational... | The recent success of SimCSE has greatly advanced state-of-the-art sentence representations. However, the original formulation of SimCSE does not fully exploit the potential of hard negative samples in contrastive learning. This study introduces an unsupervised contrastive learning framework that combines SimCSE with h... | [
"Hou, Pengyue",
"Li, Xingyu"
] | Improving Contrastive Learning of Sentence Embeddings with Focal InfoNCE | findings-emnlp.315 | 2310.06918 | [
"https://github.com/puerrrr/focal-infonce"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.316.bib | https://aclanthology.org/2023.findings-emnlp.316/ | @inproceedings{nguyen-etal-2023-vault,
title = "The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation",
author = "Nguyen, Dung and
Nam, Le and
Dau, Anh and
Nguyen, Anh and
Nghiem, Khanh and
Guo, Jin and
Bui, Nghi",
editor = ... | We present The Vault, an open-source dataset of high quality code-text pairs in multiple programming languages for training large language models to understand and generate code. We propose methods for thoroughly extracting samples that use both rules and deep learning to ensure that they contain high-quality pairs of ... | [
"Nguyen, Dung",
"Nam, Le",
"Dau, Anh",
"Nguyen, Anh",
"Nghiem, Khanh",
"Guo, Jin",
"Bui, Nghi"
] | The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation | findings-emnlp.316 | 2305.06156 | [
"https://github.com/fsoft-ai4code/thevault"
] | https://huggingface.co/papers/2305.06156 | 1 | 1 | 0 | 7 | [
"Fsoft-AIC/Codebert-docstring-inconsistency"
] | [
"Fsoft-AIC/the-vault-function",
"Fsoft-AIC/the-vault-inline",
"Fsoft-AIC/the-vault-class"
] | [
"namnh113/Code_Summarization",
"nam194/Code_Summarization"
] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.317.bib | https://aclanthology.org/2023.findings-emnlp.317/ | @inproceedings{lelkes-etal-2023-sdoh,
title = "{SDOH}-{NLI}: a Dataset for Inferring Social Determinants of Health from Clinical Notes",
author = "Lelkes, Adam and
Loreaux, Eric and
Schuster, Tal and
Chen, Ming-Jun and
Rajkomar, Alvin",
editor = "Bouamor, Houda and
Pino, J... | Social and behavioral determinants of health (SDOH) play a significant role in shaping health outcomes, and extracting these determinants from clinical notes is a first step to help healthcare providers systematically identify opportunities to provide appropriate care and address disparities. Progress on using NLP meth... | [
"Lelkes, Adam",
"Loreaux, Eric",
"Schuster, Tal",
"Chen, Ming-Jun",
"Rajkomar, Alvin"
] | SDOH-NLI: a Dataset for Inferring Social Determinants of Health from Clinical Notes | findings-emnlp.317 | 2310.18431 | [
""
] | https://huggingface.co/papers/2310.18431 | 0 | 0 | 0 | 5 | [] | [
"tasksource/SDOH-NLI",
"davanstrien/SDOH-NLI"
] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.318.bib | https://aclanthology.org/2023.findings-emnlp.318/ | @inproceedings{pu-etal-2023-zero,
title = "On the Zero-Shot Generalization of Machine-Generated Text Detectors",
author = "Pu, Xiao and
Zhang, Jingyu and
Han, Xiaochuang and
Tsvetkov, Yulia and
He, Tianxing",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika"... | The rampant proliferation of large language models, fluent enough to generate text indistinguishable from human-written language, gives unprecedented importance to the detection of machine-generated text. This work is motivated by an important research question: How will the detectors of machine-generated text perform ... | [
"Pu, Xiao",
"Zhang, Jingyu",
"Han, Xiaochuang",
"Tsvetkov, Yulia",
"He, Tianxing"
] | On the Zero-Shot Generalization of Machine-Generated Text Detectors | findings-emnlp.318 | 2312.12918 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster | |
https://aclanthology.org/2023.findings-emnlp.319.bib | https://aclanthology.org/2023.findings-emnlp.319/ | @inproceedings{hao-etal-2023-complex,
title = "Complex Event Schema Induction with Knowledge-Enriched Diffusion Model",
author = "Hao, Yupu and
Cao, Pengfei and
Chen, Yubo and
Liu, Kang and
Xu, Jiexin and
Li, Huaijun and
Jiang, Xiaojian and
Zhao, Jun",
editor... | The concept of a complex event schema pertains to the graph structure that represents real-world knowledge of events and their multi-dimensional relationships. However, previous studies on event schema induction have been hindered by challenges such as error propagation and data quality issues. To tackle these challeng... | [
"Hao, Yupu",
"Cao, Pengfei",
"Chen, Yubo",
"Liu, Kang",
"Xu, Jiexin",
"Li, Huaijun",
"Jiang, Xiaojian",
"Zhao, Jun"
] | Complex Event Schema Induction with Knowledge-Enriched Diffusion Model | findings-emnlp.319 | null | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |