Dataset schema: bibtex_url (string, 41–50 chars), bibtext (string, 693–2.88k chars), abstract (string, 0–2k chars), authors (list, 1–45 entries), title (string, 21–206 chars), id (string, 7–16 chars), type (string, 2 classes), arxiv_id (string, 9–12 chars)
https://aclanthology.org/2024.bionlp-1.49.bib
@inproceedings{liao-etal-2024-cid, title = "{CID} at {RRG}24: Attempting in a Conditionally Initiated Decoding of Radiology Report Generation with Clinical Entities", author = "Liao, Yuxiang and Liang, Yuanbang and Qin, Yipeng and Liu, Hantao and Spasic, Irena", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.49", pages = "591--596", abstract = "Radiology Report Generation (RRG) seeks to leverage deep learning techniques to automate the reporting process of radiologists. Current methods typically model RRG as an image-to-text generation task that takes X-ray images as input and generates textual reports describing the corresponding clinical observations. However, the wording of the same clinical observation can vary with the stylistic preferences of individual radiologists. Such variability can be mitigated by normalizing textual reports into structured representations such as a graph. In this study, we attempt a novel paradigm for incorporating graph-structured data into the RRG model. Our approach predicts graph labels from visual features and then initiates the decoding process through a template injection conditioned on the predicted labels. We trained and evaluated our model on the BioNLP 2024 Shared Task on Large-Scale Radiology Report Generation and submitted our results to the ViLMedic RRG leaderboard. Although our model achieved a moderate ranking on the leaderboard, the results provide preliminary evidence for the feasibility of this new paradigm, warranting further exploration and refinement.", }
Radiology Report Generation (RRG) seeks to leverage deep learning techniques to automate the reporting process of radiologists. Current methods typically model RRG as an image-to-text generation task that takes X-ray images as input and generates textual reports describing the corresponding clinical observations. However, the wording of the same clinical observation can vary with the stylistic preferences of individual radiologists. Such variability can be mitigated by normalizing textual reports into structured representations such as a graph. In this study, we attempt a novel paradigm for incorporating graph-structured data into the RRG model. Our approach predicts graph labels from visual features and then initiates the decoding process through a template injection conditioned on the predicted labels. We trained and evaluated our model on the BioNLP 2024 Shared Task on Large-Scale Radiology Report Generation and submitted our results to the ViLMedic RRG leaderboard. Although our model achieved a moderate ranking on the leaderboard, the results provide preliminary evidence for the feasibility of this new paradigm, warranting further exploration and refinement.
[ "Liao, Yuxiang", "Liang, Yuanbang", "Qin, Yipeng", "Liu, Hantao", "Spasic, Irena" ]
{CID} at {RRG}24: Attempting in a Conditionally Initiated Decoding of Radiology Report Generation with Clinical Entities
bionlp-1.49
Poster
2303.13818v3
https://aclanthology.org/2024.bionlp-1.50.bib
@inproceedings{srivastav-etal-2024-maira, title = "{MAIRA} at {RRG}24: A specialised large multimodal model for radiology report generation", author = "Srivastav, Shaury and Ranjit, Mercy and P{\'e}rez-Garc{\'\i}a, Fernando and Bouzid, Kenza and Bannur, Shruthi and Castro, Daniel C. and Schwaighofer, Anton and Sharma, Harshita and Ilse, Maximilian and Salvatelli, Valentina and Bond-Taylor, Sam and Falck, Fabian and Thieme, Anja and Richardson, Hannah and Lungren, Matthew P. and Hyland, Stephanie L. and Alvarez-Valle, Javier", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.50", pages = "597--602", abstract = "This paper discusses the participation of the MSR MAIRA team in the Large-Scale Radiology Report Generation Shared Task Challenge, as part of the BioNLP workshop at ACL 2024. We present a radiology-specific multimodal model designed to generate radiological reports from chest X-Rays (CXRs). Our proposed model combines a CXR-specific image encoder RAD-DINO with a Large Language Model (LLM) based on Vicuna-7B, via a multi-layer perceptron (MLP) adapter. Both the adapter and the LLM have been fine-tuned in a single-stage training setup to generate radiology reports. Experimental results indicate that a joint training setup with findings and impression sections improves findings prediction. Additionally, incorporating lateral images alongside frontal images when available further enhances all metrics. More information and resources about MAIRA can be found on the project website: http://aka.ms/maira.", }
This paper discusses the participation of the MSR MAIRA team in the Large-Scale Radiology Report Generation Shared Task Challenge, as part of the BioNLP workshop at ACL 2024. We present a radiology-specific multimodal model designed to generate radiological reports from chest X-Rays (CXRs). Our proposed model combines a CXR-specific image encoder RAD-DINO with a Large Language Model (LLM) based on Vicuna-7B, via a multi-layer perceptron (MLP) adapter. Both the adapter and the LLM have been fine-tuned in a single-stage training setup to generate radiology reports. Experimental results indicate that a joint training setup with findings and impression sections improves findings prediction. Additionally, incorporating lateral images alongside frontal images when available further enhances all metrics. More information and resources about MAIRA can be found on the project website: http://aka.ms/maira.
[ "Srivastav, Shaury", "Ranjit, Mercy", "P{\\'e}rez-Garc{\\'\\i}a, Fernando", "Bouzid, Kenza", "Bannur, Shruthi", "Castro, Daniel C.", "Schwaighofer, Anton", "Sharma, Harshita", "Ilse, Maximilian", "Salvatelli, Valentina", "Bond-Taylor, Sam", "Falck, Fabian", "Thieme, Anja", "Richardson, Hannah", "Lungren, Matthew P.", "Hyland, Stephanie L.", "Alvarez-Valle, Javier" ]
{MAIRA} at {RRG}24: A specialised large multimodal model for radiology report generation
bionlp-1.50
Poster
2311.13668v3
https://aclanthology.org/2024.bionlp-1.51.bib
@inproceedings{munkhoeva-etal-2024-airi, title = "{AIRI} at {RRG}24: {LL}a{V}a with specialised encoder and decoder", author = "Munkhoeva, Marina and Umerenkov, Dmitry and Samokhin, Valentin", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.51", pages = "603--607", abstract = "We present a new approach to generating the {`}Findings{'} and {`}Impression{'} sections of chest X-ray radiology reports, developed as part of the shared radiology task at BioNLP 2024. By integrating a DINOv2 vision encoder trained on medical data with a specialized biomedical large language model using the LLaVA framework, our method addresses complex medical semantics and diverse findings in imaging. We use datasets from PadChest, BIMCV-COVID19, CheXpert, OpenI, and MIMIC-CXR. The evaluation metrics demonstrate our method{'}s effectiveness and the potential for automating the generation of radiology reports.", }
We present a new approach to generating the {`}Findings{'} and {`}Impression{'} sections of chest X-ray radiology reports, developed as part of the shared radiology task at BioNLP 2024. By integrating a DINOv2 vision encoder trained on medical data with a specialized biomedical large language model using the LLaVA framework, our method addresses complex medical semantics and diverse findings in imaging. We use datasets from PadChest, BIMCV-COVID19, CheXpert, OpenI, and MIMIC-CXR. The evaluation metrics demonstrate our method{'}s effectiveness and the potential for automating the generation of radiology reports.
[ "Munkhoeva, Marina", "Umerenkov, Dmitry", "Samokhin, Valentin" ]
{AIRI} at {RRG}24: {LL}a{V}a with specialised encoder and decoder
bionlp-1.51
Poster
2307.01091v2
https://aclanthology.org/2024.bionlp-1.52.bib
@inproceedings{campanini-etal-2024-ihealth, title = "i{H}ealth-{C}hile-1 at {RRG}24: In-context Learning and Finetuning of a Large Multimodal Model for Radiology Report Generation", author = "Campanini, Diego and Loch, Oscar and Messina, Pablo and Elberg, Rafael and Parra, Denis", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.52", pages = "608--613", abstract = "This paper presents the approach of the iHealth-Chile-1 team for the shared task of Large-Scale Radiology Report Generation at the BioNLP workshop, inspired by progress in large multimodal models for processing images and text. In this work, we leverage LLaVA, a Visual-Language Model (VLM), composed of a vision encoder, a vision-language connector or adapter, and a large language model able to process text and visual embeddings. We achieve our best result by enriching the input prompt of LLaVA with the text output of a simpler report generation model. With this enriched-prompt technique, we improve our results in 4 of 5 metrics (BLEU-4, ROUGE-L, BERTScore, and F1-RadGraph), using only in-context learning. Moreover, we provide details about different architecture settings, fine-tuning strategies, and dataset configurations.", }
This paper presents the approach of the iHealth-Chile-1 team for the shared task of Large-Scale Radiology Report Generation at the BioNLP workshop, inspired by progress in large multimodal models for processing images and text. In this work, we leverage LLaVA, a Visual-Language Model (VLM), composed of a vision encoder, a vision-language connector or adapter, and a large language model able to process text and visual embeddings. We achieve our best result by enriching the input prompt of LLaVA with the text output of a simpler report generation model. With this enriched-prompt technique, we improve our results in 4 of 5 metrics (BLEU-4, ROUGE-L, BERTScore, and F1-RadGraph), using only in-context learning. Moreover, we provide details about different architecture settings, fine-tuning strategies, and dataset configurations.
[ "Campanini, Diego", "Loch, Oscar", "Messina, Pablo", "Elberg, Rafael", "Parra, Denis" ]
i{H}ealth-{C}hile-1 at {RRG}24: In-context Learning and Finetuning of a Large Multimodal Model for Radiology Report Generation
bionlp-1.52
Poster
2407.15268v1
https://aclanthology.org/2024.bionlp-1.53.bib
@inproceedings{loch-etal-2024-ihealth, title = "i{H}ealth-{C}hile-3{\&}2 at {RRG}24: Template Based Report Generation", author = "Loch, Oscar and Messina, Pablo and Elberg, Rafael and Campanini, Diego and Soto, {\'A}lvaro and Vidal, Ren{\'e} and Parra, Denis", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.53", pages = "614--623", abstract = "This paper presents the approaches of the iHealth-Chile-3 and iHealth-Chile-2 teams for the shared task of Large-Scale Radiology Report Generation at the BioNLP workshop. Inspired by prior work on template-based report generation, both teams focused on exploring various template-based strategies, using predictions from multi-label image classifiers as input. Our best approach achieved a modest F1-RadGraph score of 19.42 on the findings hidden test set, ranking 7th on the leaderboard. Notably, we consistently observed a discrepancy between our classification metrics and the F1-CheXbert metric reported on the leaderboard, which always showed lower scores. This suggests that the F1-CheXbert metric may be missing some of the labels mentioned by the templates.", }
This paper presents the approaches of the iHealth-Chile-3 and iHealth-Chile-2 teams for the shared task of Large-Scale Radiology Report Generation at the BioNLP workshop. Inspired by prior work on template-based report generation, both teams focused on exploring various template-based strategies, using predictions from multi-label image classifiers as input. Our best approach achieved a modest F1-RadGraph score of 19.42 on the findings hidden test set, ranking 7th on the leaderboard. Notably, we consistently observed a discrepancy between our classification metrics and the F1-CheXbert metric reported on the leaderboard, which always showed lower scores. This suggests that the F1-CheXbert metric may be missing some of the labels mentioned by the templates.
[ "Loch, Oscar", "Messina, Pablo", "Elberg, Rafael", "Campanini, Diego", "Soto, {\\'A}lvaro", "Vidal, Ren{\\'e}", "Parra, Denis" ]
i{H}ealth-{C}hile-3{\&}2 at {RRG}24: Template Based Report Generation
bionlp-1.53
Poster
2303.17579v2
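The template-based strategy described in the iHealth-Chile-3&2 abstract (multi-label classifier predictions mapped to canned sentences) can be sketched as follows. The label names and template wordings here are illustrative assumptions, not the team's actual templates:

```python
# Sketch of template-based report generation from multi-label classifier
# output. One template sentence per (label, prediction) pair; the labels
# and sentences are illustrative placeholders.
TEMPLATES = {
    ("cardiomegaly", True): "The cardiac silhouette is enlarged.",
    ("cardiomegaly", False): "The cardiac silhouette is within normal limits.",
    ("pleural_effusion", True): "A pleural effusion is present.",
    ("pleural_effusion", False): "No pleural effusion is seen.",
}

def generate_findings(predictions: dict) -> str:
    """Compose a findings section from multi-label classifier predictions."""
    sentences = [
        TEMPLATES[(label, positive)]
        for label, positive in predictions.items()
        if (label, positive) in TEMPLATES
    ]
    return " ".join(sentences)

report = generate_findings({"cardiomegaly": True, "pleural_effusion": False})
```

A design note the abstract hints at: because every template sentence names its label explicitly, a label-extraction metric such as F1-CheXbert should in principle recover the classifier's predictions exactly, which is why the reported discrepancy is informative.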
https://aclanthology.org/2024.bionlp-1.54.bib
@inproceedings{zhang-etal-2024-gla, title = "Gla-{AI}4{B}io{M}ed at {RRG}24: Visual Instruction-tuned Adaptation for Radiology Report Generation", author = "Zhang, Xi and Meng, Zaiqiao and Lever, Jake and Ho, Edmond S.L.", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.54", pages = "624--634", abstract = "This paper introduces a radiology-focused visual language model designed to generate radiology reports from chest X-rays. Building on previous findings that large language models can acquire multimodal capabilities when aligned with pretrained vision encoders, we demonstrate similar potential with chest X-ray images. The model combines an image encoder (CLIP) with a fine-tuned large language model (LLM) based on the Vicuna-7B architecture. The training process involves a two-stage approach: initial alignment of chest X-ray features with the LLM, followed by fine-tuning for radiology report generation. The study highlights the importance of generating both FINDINGS and IMPRESSIONS sections in radiology reports and evaluates the model{'}s performance using various metrics, achieving notable accuracy in generating high-quality medical reports. The research also addresses the need for domain-specific fine-tuning to capture the intricate details necessary for accurate medical interpretations and reports.", }
This paper introduces a radiology-focused visual language model designed to generate radiology reports from chest X-rays. Building on previous findings that large language models can acquire multimodal capabilities when aligned with pretrained vision encoders, we demonstrate similar potential with chest X-ray images. The model combines an image encoder (CLIP) with a fine-tuned large language model (LLM) based on the Vicuna-7B architecture. The training process involves a two-stage approach: initial alignment of chest X-ray features with the LLM, followed by fine-tuning for radiology report generation. The study highlights the importance of generating both FINDINGS and IMPRESSIONS sections in radiology reports and evaluates the model{'}s performance using various metrics, achieving notable accuracy in generating high-quality medical reports. The research also addresses the need for domain-specific fine-tuning to capture the intricate details necessary for accurate medical interpretations and reports.
[ "Zhang, Xi", "Meng, Zaiqiao", "Lever, Jake", "Ho, Edmond S.L." ]
Gla-{AI}4{B}io{M}ed at {RRG}24: Visual Instruction-tuned Adaptation for Radiology Report Generation
bionlp-1.54
Poster
2306.03264v1
https://aclanthology.org/2024.bionlp-1.55.bib
@inproceedings{udomlapsakul-etal-2024-sicar, title = "{SICAR} at {RRG}2024: {GPU} Poor{'}s Guide to Radiology Report Generation", author = "Udomlapsakul, Kiartnarin and Pengpun, Parinthapat and Saengja, Tossaporn and Veerakanjana, Kanyakorn and Tiankanon, Krittamate and Khlaisamniang, Pitikorn and Supholkhan, Pasit and Chinkamol, Amrest and Aussavavirojekul, Pubordee and Phimsiri, Hirunkul and Sripo, Tara and Boonnag, Chiraphat and Tongdee, Trongtum and Siriapisith, Thanongchai and Saiviroonporn, Pairash and Kinchagawat, Jiramet and Ittichaiwong, Piyalitt", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.55", pages = "635--644", abstract = "Radiology report generation (RRG) aims to create free-text radiology reports from clinical imaging. Our solution employs a lightweight multimodal language model (MLLM) enhanced with a two-stage post-processing strategy, utilizing a Large Language Model (LLM) to boost diagnostic accuracy and ensure patient safety. We introduce the {``}First, Do No Harm{''} SafetyNet, which incorporates Xraydar, an advanced X-ray classification model, to cross-verify the model outputs and specifically address false negatives from the MLLM. This comprehensive approach combines the efficiency of lightweight models with the robustness of thorough post-processing techniques, offering a reliable solution for radiology report generation. Our system achieved fourth place on the F1-Radgraph metric for findings generation in the Radiology Report Generation Shared Task (RRG24).", }
Radiology report generation (RRG) aims to create free-text radiology reports from clinical imaging. Our solution employs a lightweight multimodal language model (MLLM) enhanced with a two-stage post-processing strategy, utilizing a Large Language Model (LLM) to boost diagnostic accuracy and ensure patient safety. We introduce the {``}First, Do No Harm{''} SafetyNet, which incorporates Xraydar, an advanced X-ray classification model, to cross-verify the model outputs and specifically address false negatives from the MLLM. This comprehensive approach combines the efficiency of lightweight models with the robustness of thorough post-processing techniques, offering a reliable solution for radiology report generation. Our system achieved fourth place on the F1-Radgraph metric for findings generation in the Radiology Report Generation Shared Task (RRG24).
[ "Udomlapsakul, Kiartnarin", "Pengpun, Parinthapat", "Saengja, Tossaporn", "Veerakanjana, Kanyakorn", "Tiankanon, Krittamate", "Khlaisamniang, Pitikorn", "Supholkhan, Pasit", "Chinkamol, Amrest", "Aussavavirojekul, Pubordee", "Phimsiri, Hirunkul", "Sripo, Tara", "Boonnag, Chiraphat", "Tongdee, Trongtum", "Siriapisith, Thanongchai", "Saiviroonporn, Pairash", "Kinchagawat, Jiramet", "Ittichaiwong, Piyalitt" ]
{SICAR} at {RRG}2024: {GPU} Poor{'}s Guide to Radiology Report Generation
bionlp-1.55
Poster
2405.18112v1
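The "First, Do No Harm" SafetyNet idea above (cross-verifying MLLM output against an external X-ray classifier to catch false negatives) can be sketched in a few lines. The mention phrases and probability threshold are assumptions for illustration; the actual SICAR system uses the Xraydar classifier:

```python
# Sketch of a safety-net check: flag findings that an external classifier
# considers positive but the generated report never mentions (candidate
# false negatives of the report generator).
MENTION_PHRASES = {
    "pneumothorax": "pneumothorax",
    "pleural_effusion": "effusion",
}

def safety_net(report: str, classifier_probs: dict, threshold: float = 0.5) -> list:
    """Return classifier-positive labels absent from the report text."""
    lowered = report.lower()
    return [
        label
        for label, prob in classifier_probs.items()
        if prob >= threshold and MENTION_PHRASES[label] not in lowered
    ]

missed = safety_net(
    "No pleural effusion is seen.",
    {"pneumothorax": 0.91, "pleural_effusion": 0.10},
)
```

Here `missed` contains only `"pneumothorax"`: the effusion probability is below threshold, and the report already mentions effusion.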
https://aclanthology.org/2024.bionlp-1.56.bib
@inproceedings{he-etal-2024-shimo, title = "Shimo Lab at {``}Discharge Me!{''}: Discharge Summarization by Prompt-Driven Concatenation of Electronic Health Record Sections", author = "He, Yunzhen and Yamagiwa, Hiroaki and Shimodaira, Hidetoshi", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.56", pages = "645--657", abstract = "In this paper, we present our approach to the shared task {``}Discharge Me!{''} at the BioNLP Workshop 2024. The primary goal of this task is to reduce the time and effort clinicians spend on writing detailed notes in the electronic health record (EHR). Participants develop a pipeline to generate the {``}Brief Hospital Course{''} and {``}Discharge Instructions{''} sections from the EHR. Our approach involves a first step of extracting the relevant sections from the EHR. We then add explanatory prompts to these sections and concatenate them with separate tokens to create the input text. To train a text generation model, we perform LoRA fine-tuning on the ClinicalT5-large model. On the final test data, our approach achieved a ROUGE-1 of 0.394, which is comparable to the top solutions.", }
In this paper, we present our approach to the shared task {``}Discharge Me!{''} at the BioNLP Workshop 2024. The primary goal of this task is to reduce the time and effort clinicians spend on writing detailed notes in the electronic health record (EHR). Participants develop a pipeline to generate the {``}Brief Hospital Course{''} and {``}Discharge Instructions{''} sections from the EHR. Our approach involves a first step of extracting the relevant sections from the EHR. We then add explanatory prompts to these sections and concatenate them with separate tokens to create the input text. To train a text generation model, we perform LoRA fine-tuning on the ClinicalT5-large model. On the final test data, our approach achieved a ROUGE-1 of 0.394, which is comparable to the top solutions.
[ "He, Yunzhen", "Yamagiwa, Hiroaki", "Shimodaira, Hidetoshi" ]
Shimo Lab at {``}Discharge Me!{''}: Discharge Summarization by Prompt-Driven Concatenation of Electronic Health Record Sections
bionlp-1.56
Poster
2406.18094v1
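The Shimo Lab pipeline (extract EHR sections, prefix each with an explanatory prompt, concatenate with separator tokens) can be sketched as below. The section names, prompt wordings, and the `<sec>` separator token are illustrative assumptions, not the paper's exact choices:

```python
# Sketch of prompt-driven concatenation of EHR sections into one model
# input string. Names, prompts, and the separator token are placeholders.
SEP = "<sec>"

SECTION_PROMPTS = {
    "chief_complaint": "The patient's chief complaint was:",
    "history": "History of present illness:",
    "medications": "Medications on admission:",
}

def build_input(sections: dict) -> str:
    """Concatenate prompted, non-empty EHR sections with separator tokens."""
    parts = [
        f"{SECTION_PROMPTS[name]} {text.strip()}"
        for name, text in sections.items()
        if name in SECTION_PROMPTS and text.strip()
    ]
    return f" {SEP} ".join(parts)

text = build_input({"chief_complaint": "Chest pain.", "medications": "Aspirin."})
```

The resulting string would then be fed to the LoRA-fine-tuned generation model.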
https://aclanthology.org/2024.bionlp-1.57.bib
@inproceedings{koontz-etal-2024-ixa, title = "Ixa-{M}ed at Discharge Me! Retrieval-Assisted Generation for Streamlining Discharge Documentation", author = "Koontz, Jordan C. and Oronoz, Maite and P{\'e}rez, Alicia", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.57", pages = "658--663", abstract = "In this paper we present our system for the BioNLP ACL{'}24 {``}Discharge Me!{''} task on automating discharge summary section generation. Using Retrieval-Augmented Generation, we combine a Large Language Model (LLM) with external knowledge to guide the generation of the target sections. Our approach generates structured patient summaries from discharge notes using an instructed LLM, retrieves relevant {``}Brief Hospital Course{''} and {``}Discharge Instructions{''} examples via BM25 and SentenceBERT, and provides this context to a frozen LLM for generation. Our top system using SentenceBERT retrieval achieves an overall score of 0.183, outperforming zero-shot baselines. We analyze performance across different aspects, discussing limitations and future research directions.", }
In this paper we present our system for the BioNLP ACL{'}24 {``}Discharge Me!{''} task on automating discharge summary section generation. Using Retrieval-Augmented Generation, we combine a Large Language Model (LLM) with external knowledge to guide the generation of the target sections. Our approach generates structured patient summaries from discharge notes using an instructed LLM, retrieves relevant {``}Brief Hospital Course{''} and {``}Discharge Instructions{''} examples via BM25 and SentenceBERT, and provides this context to a frozen LLM for generation. Our top system using SentenceBERT retrieval achieves an overall score of 0.183, outperforming zero-shot baselines. We analyze performance across different aspects, discussing limitations and future research directions.
[ "Koontz, Jordan C.", "Oronoz, Maite", "P{\\'e}rez, Alicia" ]
Ixa-{M}ed at Discharge Me! Retrieval-Assisted Generation for Streamlining Discharge Documentation
bionlp-1.57
Poster
2407.02723v1
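The BM25 retrieval step in the Ixa-Med pipeline (finding similar "Brief Hospital Course" / "Discharge Instructions" examples to provide as context) can be illustrated with a minimal pure-Python BM25 scorer; a real system would use a tuned library implementation, and the tiny corpus here is invented:

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Score each tokenized document in `corpus` against `query` tokens
    with the standard BM25 formula (Okapi weighting)."""
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n
    # document frequency per term
    df = Counter(t for doc in corpus for t in set(doc))
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

corpus = [
    "patient admitted with chest pain".split(),
    "routine follow up visit".split(),
]
scores = bm25_scores("chest pain".split(), corpus)
best = scores.index(max(scores))  # index of the most similar example
```

The top-scoring examples would then be placed in the frozen LLM's context, alongside the structured patient summary.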
https://aclanthology.org/2024.bionlp-1.58.bib
@inproceedings{guo-etal-2024-qub, title = "{QUB}-Cirdan at {``}Discharge Me!{''}: Zero shot discharge letter generation by open-source {LLM}", author = "Guo, Rui and Farnan, Greg and McLaughlin, Niall and Devereux, Barry", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.58", pages = "664--674", abstract = "The BioNLP ACL{'}24 Shared Task on Streamlining Discharge Documentation aims to reduce the administrative burden on clinicians by automating the creation of critical sections of patient discharge letters. This paper presents our approach using the Llama3 8B quantized model to generate the {``}Brief Hospital Course{''} and {``}Discharge Instructions{''} sections. We employ a zero-shot method combined with Retrieval-Augmented Generation (RAG) to produce concise, contextually accurate summaries. Our contributions include the development of a curated template-based approach to ensure reliability and consistency, as well as the integration of RAG for word count prediction. We also describe several unsuccessful experiments to provide insights into our pathway for the competition. Our results demonstrate the effectiveness and efficiency of our approach, achieving high scores across multiple evaluation metrics.", }
The BioNLP ACL{'}24 Shared Task on Streamlining Discharge Documentation aims to reduce the administrative burden on clinicians by automating the creation of critical sections of patient discharge letters. This paper presents our approach using the Llama3 8B quantized model to generate the {``}Brief Hospital Course{''} and {``}Discharge Instructions{''} sections. We employ a zero-shot method combined with Retrieval-Augmented Generation (RAG) to produce concise, contextually accurate summaries. Our contributions include the development of a curated template-based approach to ensure reliability and consistency, as well as the integration of RAG for word count prediction. We also describe several unsuccessful experiments to provide insights into our pathway for the competition. Our results demonstrate the effectiveness and efficiency of our approach, achieving high scores across multiple evaluation metrics.
[ "Guo, Rui", "Farnan, Greg", "McLaughlin, Niall", "Devereux, Barry" ]
{QUB}-Cirdan at {``}Discharge Me!{''}: Zero shot discharge letter generation by open-source {LLM}
bionlp-1.58
Poster
2406.00041v2
https://aclanthology.org/2024.bionlp-1.59.bib
@inproceedings{liu-etal-2024-e, title = "e-Health {CSIRO} at {``}Discharge Me!{''} 2024: Generating Discharge Summary Sections with Fine-tuned Language Models", author = "Liu, Jinghui and Nicolson, Aaron and Dowling, Jason and Koopman, Bevan and Nguyen, Anthony", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.59", pages = "675--684", abstract = "Clinical documentation is an important aspect of clinicians{'} daily work and often demands a significant amount of time. The BioNLP 2024 Shared Task on Streamlining Discharge Documentation (Discharge Me!) aims to alleviate this documentation burden by automatically generating discharge summary sections, including brief hospital course and discharge instruction, which are often time-consuming to synthesize and write manually. We approach the generation task by fine-tuning multiple open-sourced language models (LMs), including both decoder-only and encoder-decoder LMs, with various configurations on input context. We also examine different setups for decoding algorithms, model ensembling or merging, and model specialization. Our results show that conditioning on the content of discharge summary prior to the target sections is effective for the generation task. Furthermore, we find that smaller encoder-decoder LMs can work as well or even slightly better than larger decoder-based LMs fine-tuned through LoRA. The model checkpoints from our team (aehrc) are openly available.", }
Clinical documentation is an important aspect of clinicians{'} daily work and often demands a significant amount of time. The BioNLP 2024 Shared Task on Streamlining Discharge Documentation (Discharge Me!) aims to alleviate this documentation burden by automatically generating discharge summary sections, including brief hospital course and discharge instruction, which are often time-consuming to synthesize and write manually. We approach the generation task by fine-tuning multiple open-sourced language models (LMs), including both decoder-only and encoder-decoder LMs, with various configurations on input context. We also examine different setups for decoding algorithms, model ensembling or merging, and model specialization. Our results show that conditioning on the content of discharge summary prior to the target sections is effective for the generation task. Furthermore, we find that smaller encoder-decoder LMs can work as well or even slightly better than larger decoder-based LMs fine-tuned through LoRA. The model checkpoints from our team (aehrc) are openly available.
[ "Liu, Jinghui", "Nicolson, Aaron", "Dowling, Jason", "Koopman, Bevan", "Nguyen, Anthony" ]
e-Health {CSIRO} at {``}Discharge Me!{''} 2024: Generating Discharge Summary Sections with Fine-tuned Language Models
bionlp-1.59
Poster
2407.02723v1
https://aclanthology.org/2024.bionlp-1.60.bib
@inproceedings{lyu-etal-2024-uf, title = "{UF}-{HOBI} at {``}Discharge Me!{''}: A Hybrid Solution for Discharge Summary Generation Through Prompt-based Tuning of {G}ator{T}ron{GPT} Models", author = "Lyu, Mengxian and Peng, Cheng and Paredes, Daniel and Chen, Ziyi and Chen, Aokun and Bian, Jiang and Wu, Yonghui", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.60", pages = "685--695", abstract = "Automatic generation of discharge summaries presents significant challenges due to the length of clinical documentation, the dispersed nature of patient information, and the diverse terminology used in healthcare. This paper presents a hybrid solution for generating discharge summary sections as part of our participation in the {``}Discharge Me!{''} Challenge at the BioNLP 2024 Shared Task. We developed a two-stage generation method using both extractive and abstractive techniques, in which we first apply named entity recognition (NER) to extract key clinical concepts, which are then used as input for a prompt-tuning based GatorTronGPT model to generate coherent text for two important sections including {``}Brief Hospital Course{''} and {``}Discharge Instructions{''}. Our system was ranked 5th in this challenge, achieving an overall score of 0.284. The results demonstrate the effectiveness of our hybrid solution in improving the quality of automated discharge section generation.", }
Automatic generation of discharge summaries presents significant challenges due to the length of clinical documentation, the dispersed nature of patient information, and the diverse terminology used in healthcare. This paper presents a hybrid solution for generating discharge summary sections as part of our participation in the {``}Discharge Me!{''} Challenge at the BioNLP 2024 Shared Task. We developed a two-stage generation method using both extractive and abstractive techniques, in which we first apply named entity recognition (NER) to extract key clinical concepts, which are then used as input for a prompt-tuning based GatorTronGPT model to generate coherent text for two important sections including {``}Brief Hospital Course{''} and {``}Discharge Instructions{''}. Our system was ranked 5th in this challenge, achieving an overall score of 0.284. The results demonstrate the effectiveness of our hybrid solution in improving the quality of automated discharge section generation.
[ "Lyu, Mengxian", "Peng, Cheng", "Paredes, Daniel", "Chen, Ziyi", "Chen, Aokun", "Bian, Jiang", "Wu, Yonghui" ]
{UF}-{HOBI} at {``}Discharge Me!{''}: A Hybrid Solution for Discharge Summary Generation Through Prompt-based Tuning of {G}ator{T}ron{GPT} Models
bionlp-1.60
Poster
2407.15359v1
https://aclanthology.org/2024.bionlp-1.61.bib
@inproceedings{wu-etal-2024-epfl, title = "{EPFL}-{MAKE} at {``}Discharge Me!{''}: An {LLM} System for Automatically Generating Discharge Summaries of Clinical Electronic Health Record", author = "Wu, Haotian and Boulenger, Paul and Faure, Antonin and C{\'e}spedes, Berta and Boukil, Farouk and Morel, Nastasia and Chen, Zeming and Bosselut, Antoine", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.61", pages = "696--711", abstract = "This paper presents our contribution to the Streamlining Discharge Documentation shared task organized as part of the ACL{'}24 workshop. We propose MEDISCHARGE (Meditron-7B Based Medical Summary Generation System for Discharge Me), an LLM-based system to generate Brief Hospital Course and Discharge Instruction summaries based on a patient{'}s Electronic Health Record. Our system is built on a Meditron-7B with context window extension, ensuring the system can handle cases of variable lengths with high quality. When the length of the input exceeds the system input limitation, we use a dynamic information selection framework to automatically extract important sections from the full discharge text. Then, extracted sections are removed in increasing order of importance until the input length requirement is met. We demonstrate our approach outperforms tripling the size of the context window of the model. Our system obtains a 0.289 overall score on the leaderboard, an improvement of 183{\%} compared to the baseline, and a ROUGE-1 score of 0.444, achieving a second place performance in the shared task.", }
This paper presents our contribution to the Streamlining Discharge Documentation shared task organized as part of the ACL{'}24 workshop. We propose MEDISCHARGE (Meditron-7B Based Medical Summary Generation System for Discharge Me), an LLM-based system to generate Brief Hospital Course and Discharge Instruction summaries based on a patient{'}s Electronic Health Record. Our system is built on a Meditron-7B with context window extension, ensuring the system can handle cases of variable lengths with high quality. When the length of the input exceeds the system input limitation, we use a dynamic information selection framework to automatically extract important sections from the full discharge text. Then, extracted sections are removed in increasing order of importance until the input length requirement is met. We demonstrate our approach outperforms tripling the size of the context window of the model. Our system obtains a 0.289 overall score on the leaderboard, an improvement of 183{\%} compared to the baseline, and a ROUGE-1 score of 0.444, achieving a second place performance in the shared task.
[ "Wu, Haotian", "Boulenger, Paul", "Faure, Antonin", "C{\\'e}spedes, Berta", "Boukil, Farouk", "Morel, Nastasia", "Chen, Zeming", "Bosselut, Antoine" ]
{EPFL}-{MAKE} at {``}Discharge Me!{''}: An {LLM} System for Automatically Generating Discharge Summaries of Clinical Electronic Health Record
bionlp-1.61
Poster
2407.15359v1
https://aclanthology.org/2024.bionlp-1.62.bib
@inproceedings{frayling-etal-2024-uog, title = "{U}o{G} Siephers at {``}Discharge Me!{''}: Exploring Ways to Generate Synthetic Patient Notes From Multi-Part Electronic Health Records", author = "Frayling, Erlend and Lever, Jake and McDonald, Graham", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.62", pages = "712--718", abstract = "This paper presents the UoG Siephers team participation at the Discharge Me! Shared Task on Streamlining Discharge Documentation. For our participation, we investigate appropriately selecting and encoding specific sections of Electronic Health Records (EHR) as input data for sequence-to-sequence models, to generate the discharge instructions and brief hospital course sections of a patient{'}s EHR. We found that, despite the large volume of disparate information that is often available in EHRs, selectively choosing an appropriate EHR section for training and prompting sequence-to-sequence models resulted in improved generative quality. In particular, we found that using only the history of present illness section of an EHR as input often led to better performance than using multiple EHR sections.", }
This paper presents the UoG Siephers team participation at the Discharge Me! Shared Task on Streamlining Discharge Documentation. For our participation, we investigate appropriately selecting and encoding specific sections of Electronic Health Records (EHR) as input data for sequence-to-sequence models, to generate the discharge instructions and brief hospital course sections of a patient{'}s EHR. We found that, despite the large volume of disparate information that is often available in EHRs, selectively choosing an appropriate EHR section for training and prompting sequence-to-sequence models resulted in improved generative quality. In particular, we found that using only the history of present illness section of an EHR as input often led to better performance than using multiple EHR sections.
[ "Frayling, Erlend", "Lever, Jake", "McDonald, Graham" ]
{U}o{G} Siephers at {``}Discharge Me!{''}: Exploring Ways to Generate Synthetic Patient Notes From Multi-Part Electronic Health Records
bionlp-1.62
Poster
2308.06354v2
https://aclanthology.org/2024.bionlp-1.63.bib
@inproceedings{wendelken-etal-2024-roux, title = "Roux-lette at {``}Discharge Me!{''}: Reducing {EHR} Chart Burden with a Simple, Scalable, Clinician-Driven {AI} Approach", author = "Wendelken, Suzanne and Antony, Anson and Korutla, Rajashekar and Pachipala, Bhanu and Shanahan, James and Saba, Walid", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.63", pages = "719--723", abstract = "Healthcare providers spend a significant amount of time reading and synthesizing electronic health records (EHRs), negatively impacting patient outcomes and causing provider burnout. Traditional supervised machine learning approaches using large language models (LLMs) to summarize clinical text have struggled due to hallucinations and lack of relevant training data. Here, we present a novel, simplified solution for the {``}Discharge Me!{''} shared task. Our approach mimics human clinical workflow, using pre-trained LLMs to answer specific questions and summarize the answers obtained from discharge summaries and other EHR sections. This method (i) avoids hallucinations through hybrid-RAG/zero-shot contextualized prompting; (ii) requires no extensive training or fine-tuning; and (iii) is adaptable to various clinical tasks.", }
Healthcare providers spend a significant amount of time reading and synthesizing electronic health records (EHRs), negatively impacting patient outcomes and causing provider burnout. Traditional supervised machine learning approaches using large language models (LLMs) to summarize clinical text have struggled due to hallucinations and lack of relevant training data. Here, we present a novel, simplified solution for the {``}Discharge Me!{''} shared task. Our approach mimics human clinical workflow, using pre-trained LLMs to answer specific questions and summarize the answers obtained from discharge summaries and other EHR sections. This method (i) avoids hallucinations through hybrid-RAG/zero-shot contextualized prompting; (ii) requires no extensive training or fine-tuning; and (iii) is adaptable to various clinical tasks.
[ "Wendelken, Suzanne", "Antony, Anson", "Korutla, Rajashekar", "Pachipala, Bhanu", "Shanahan, James", "Saba, Walid" ]
Roux-lette at {``}Discharge Me!{''}: Reducing {EHR} Chart Burden with a Simple, Scalable, Clinician-Driven {AI} Approach
bionlp-1.63
Poster
2407.16905v1
https://aclanthology.org/2024.bionlp-1.64.bib
@inproceedings{socrates-etal-2024-yale, title = "{Y}ale at {``}Discharge Me!{''}: Evaluating Constrained Generation of Discharge Summaries with Unstructured and Structured Information", author = "Socrates, Vimig and Huang, Thomas and Ai, Xuguang and Fereydooni, Soraya and Chen, Qingyu and Taylor, R Andrew and Chartash, David", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.64", pages = "724--730", abstract = "In this work, we propose our top-ranking (2nd place) pipeline for the generation of discharge summary subsections as a part of the BioNLP 2024 Shared Task 2: {``}Discharge Me!{''}. We evaluate both encoder-decoder and state-of-the-art decoder-only language models on the generation of two key sections of the discharge summary. To evaluate the ability of NLP methods to further alleviate the documentation burden on physicians, we also design a novel pipeline to generate the brief hospital course directly from structured information found in the EHR. Finally, we evaluate a constrained beam search approach to inject external knowledge about relevant patient problems into the text generation process. We find that a BioBART model fine-tuned on a larger fraction of the data without constrained beam search outperforms all other models.", }
In this work, we propose our top-ranking (2nd place) pipeline for the generation of discharge summary subsections as a part of the BioNLP 2024 Shared Task 2: {``}Discharge Me!{''}. We evaluate both encoder-decoder and state-of-the-art decoder-only language models on the generation of two key sections of the discharge summary. To evaluate the ability of NLP methods to further alleviate the documentation burden on physicians, we also design a novel pipeline to generate the brief hospital course directly from structured information found in the EHR. Finally, we evaluate a constrained beam search approach to inject external knowledge about relevant patient problems into the text generation process. We find that a BioBART model fine-tuned on a larger fraction of the data without constrained beam search outperforms all other models.
[ "Socrates, Vimig", "Huang, Thomas", "Ai, Xuguang", "Fereydooni, Soraya", "Chen, Qingyu", "Taylor, R Andrew", "Chartash, David" ]
{Y}ale at {``}Discharge Me!{''}: Evaluating Constrained Generation of Discharge Summaries with Unstructured and Structured Information
bionlp-1.64
Poster
2407.17636v1
https://aclanthology.org/2024.bionlp-1.65.bib
@inproceedings{tang-etal-2024-ignitioninnovators, title = "{I}gnition{I}nnovators at {``}Discharge Me!{''}: Chain-of-Thought Instruction Finetuning Large Language Models for Discharge Summaries", author = "Tang, An Quang and Zhang, Xiuzhen and Dinh, Minh Ngoc", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.65", pages = "731--739", abstract = "This paper presents our proposed approach to the Discharge Me! shared task, collocated with the 23rd Workshop on Biomedical Natural Language Processing (BioNLP). In this work, we develop an LLM-based framework for solving the Discharge Summary Documentation (DSD) task, i.e., generating the two critical target sections {`}Brief Hospital Course{'} and {`}Discharge Instructions{'} in the discharge summary. By streamlining the recent instruction-finetuning process on LLMs, we explore several prompting strategies for optimally adapting LLMs to the specific generation task of DSD. Experimental results show that providing a clear output structure, complemented by a set of comprehensive Chain-of-Thought (CoT) questions, effectively improves the model{'}s reasoning capability, thereby enhancing the structural correctness and faithfulness of clinical information in the generated text. Source code is available at: https://anonymous.4open.science/r/Discharge{\_}LLM-A233", }
This paper presents our proposed approach to the Discharge Me! shared task, collocated with the 23rd Workshop on Biomedical Natural Language Processing (BioNLP). In this work, we develop an LLM-based framework for solving the Discharge Summary Documentation (DSD) task, i.e., generating the two critical target sections {`}Brief Hospital Course{'} and {`}Discharge Instructions{'} in the discharge summary. By streamlining the recent instruction-finetuning process on LLMs, we explore several prompting strategies for optimally adapting LLMs to the specific generation task of DSD. Experimental results show that providing a clear output structure, complemented by a set of comprehensive Chain-of-Thought (CoT) questions, effectively improves the model{'}s reasoning capability, thereby enhancing the structural correctness and faithfulness of clinical information in the generated text. Source code is available at: https://anonymous.4open.science/r/Discharge{\_}LLM-A233
[ "Tang, An Quang", "Zhang, Xiuzhen", "Dinh, Minh Ngoc" ]
{I}gnition{I}nnovators at {``}Discharge Me!{''}: Chain-of-Thought Instruction Finetuning Large Language Models for Discharge Summaries
bionlp-1.65
Poster
2407.17636v1
https://aclanthology.org/2024.bionlp-1.66.bib
@inproceedings{naskar-etal-2024-mlbmikabr, title = "{MLBMIKABR} at {``}Discharge Me!{''}: Concept Based Clinical Text Description Generation", author = "Naskar, Abir and Hocking, Jane and Chondros, Patty and Boyle, Douglas and Conway, Mike", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.66", pages = "740--747", abstract = "This paper presents a method called Concept Based Description Generation, aimed at creating summaries (Brief Hospital Course and Discharge Instructions) using source (Discharge and Radiology) texts. We propose a rule-based approach for segmenting both the source and target texts. In the target text, we not only segment the content but also identify the concept of each segment based on text patterns. Our methodology involves creating a combined summarized version of each text segment, extracting important information, and then fine-tuning a Large Language Model (LLM) to generate aspects. Subsequently, we fine-tune a new LLM using a specific aspect, the combined summary, and a list of all aspects to generate detailed descriptions for each task. This approach integrates segmentation, concept identification, summarization, and language modeling to achieve accurate and informative descriptions for medical documentation tasks. Due to lack of time, we could only train on 10,000 training examples.", }
This paper presents a method called Concept Based Description Generation, aimed at creating summaries (Brief Hospital Course and Discharge Instructions) using source (Discharge and Radiology) texts. We propose a rule-based approach for segmenting both the source and target texts. In the target text, we not only segment the content but also identify the concept of each segment based on text patterns. Our methodology involves creating a combined summarized version of each text segment, extracting important information, and then fine-tuning a Large Language Model (LLM) to generate aspects. Subsequently, we fine-tune a new LLM using a specific aspect, the combined summary, and a list of all aspects to generate detailed descriptions for each task. This approach integrates segmentation, concept identification, summarization, and language modeling to achieve accurate and informative descriptions for medical documentation tasks. Due to lack of time, we could only train on 10,000 training examples.
[ "Naskar, Abir", "Hocking, Jane", "Chondros, Patty", "Boyle, Douglas", "Conway, Mike" ]
{MLBMIKABR} at {``}Discharge Me!{''}: Concept Based Clinical Text Description Generation
bionlp-1.66
Poster
2407.15359v1
https://aclanthology.org/2024.bionlp-1.67.bib
@inproceedings{to-etal-2024-deakinnlp, title = "{D}eakin{NLP} at {B}io{L}ay{S}umm: Evaluating Fine-tuning Longformer and {GPT}-4 Prompting for Biomedical Lay Summarization", author = "To, Huy Quoc and Liu, Ming and Huang, Guangyan", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.67", pages = "748--754", abstract = "This paper presents our approaches for the BioLaySumm 2024 Shared Task. We evaluate two methods for generating lay summaries based on biomedical articles: (1) fine-tuning the Longformer-Encoder-Decoder (LED) model, and (2) zero-shot and few-shot prompting on GPT-4. In the fine-tuning approach, we individually fine-tune the LED model using two datasets: PLOS and eLife. This process is conducted under two different settings: one utilizing 50{\%} of the training dataset, and the other utilizing the entire 100{\%} of the training dataset. We compare the results of both methods with GPT-4 in zero-shot and few-shot prompting. The experiment results demonstrate that fine-tuning with 100{\%} of the training data achieves better performance than prompting with GPT-4. However, under data scarcity circumstances, prompting GPT-4 seems to be a better solution.", }
This paper presents our approaches for the BioLaySumm 2024 Shared Task. We evaluate two methods for generating lay summaries based on biomedical articles: (1) fine-tuning the Longformer-Encoder-Decoder (LED) model, and (2) zero-shot and few-shot prompting on GPT-4. In the fine-tuning approach, we individually fine-tune the LED model using two datasets: PLOS and eLife. This process is conducted under two different settings: one utilizing 50{\%} of the training dataset, and the other utilizing the entire 100{\%} of the training dataset. We compare the results of both methods with GPT-4 in zero-shot and few-shot prompting. The experiment results demonstrate that fine-tuning with 100{\%} of the training data achieves better performance than prompting with GPT-4. However, under data scarcity circumstances, prompting GPT-4 seems to be a better solution.
[ "To, Huy Quoc", "Liu, Ming", "Huang, Guangyan" ]
{D}eakin{NLP} at {B}io{L}ay{S}umm: Evaluating Fine-tuning Longformer and {GPT}-4 Prompting for Biomedical Lay Summarization
bionlp-1.67
Poster
2310.10508v1
https://aclanthology.org/2024.bionlp-1.68.bib
@inproceedings{ahuir-etal-2024-elirf, title = "{EL}i{RF}-{VRAIN} at {B}io{L}ay{S}umm: Boosting Lay Summarization Systems Performance with Ranking Models", author = "Ahuir, Vicent and Torres, Diego and Segarra, Encarna and Hurtado, Llu{\'\i}s-F.", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.68", pages = "755--761", abstract = "This paper presents our contribution to the BioLaySumm 2024 shared task of the 23rd BioNLP Workshop. The task is to create a lay summary, given a biomedical research article and its technical summary. As the input to the system could be large, a Longformer Encoder-Decoder (LED) has been used. We continuously pre-trained a general domain LED model with biomedical data to adapt it to this specific domain. In the pre-training phase, several pre-training tasks were aggregated to inject linguistic knowledge and increase the abstractivity of the generated summaries. Since the distribution of samples between the two datasets, eLife and PLOS, is unbalanced, we fine-tuned two models: one for eLife and another for PLOS. To increase the quality of the lay summaries of the system, we developed a regression model that helps us rank the summaries generated by the summarization models. This regression model predicts the quality of the summary in three different aspects: Relevance, Readability, and Factuality. We present the results of our models and a study to measure the ranking capabilities of the regression model.", }
This paper presents our contribution to the BioLaySumm 2024 shared task of the 23rd BioNLP Workshop. The task is to create a lay summary, given a biomedical research article and its technical summary. As the input to the system could be large, a Longformer Encoder-Decoder (LED) has been used. We continuously pre-trained a general domain LED model with biomedical data to adapt it to this specific domain. In the pre-training phase, several pre-training tasks were aggregated to inject linguistic knowledge and increase the abstractivity of the generated summaries. Since the distribution of samples between the two datasets, eLife and PLOS, is unbalanced, we fine-tuned two models: one for eLife and another for PLOS. To increase the quality of the lay summaries of the system, we developed a regression model that helps us rank the summaries generated by the summarization models. This regression model predicts the quality of the summary in three different aspects: Relevance, Readability, and Factuality. We present the results of our models and a study to measure the ranking capabilities of the regression model.
[ "Ahuir, Vicent", "Torres, Diego", "Segarra, Encarna", "Hurtado, Llu{\\'\\i}s-F." ]
{EL}i{RF}-{VRAIN} at {B}io{L}ay{S}umm: Boosting Lay Summarization Systems Performance with Ranking Models
bionlp-1.68
Poster
2010.09252v1
https://aclanthology.org/2024.bionlp-1.69.bib
@inproceedings{karotia-susan-2024-biolay, title = "{B}io{L}ay{\_}{AK}{\_}{SS} at {B}io{L}ay{S}umm: Domain Adaptation by Two-Stage Fine-Tuning of Large Language Models used for Biomedical Lay Summary Generation", author = "Karotia, Akanksha and Susan, Seba", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.69", pages = "762--768", abstract = "Lay summarization is essential but challenging, as it simplifies scientific information for non-experts and keeps them updated with the latest scientific knowledge. In our participation in the Shared Task: Lay Summarization of Biomedical Research Articles @ BioNLP Workshop (Goldsack et al., 2024), ACL 2024, we conducted a comprehensive evaluation on abstractive summarization of biomedical literature using Large Language Models (LLMs) and assessed the performance using ten metrics across three categories: relevance, readability, and factuality, using eLife and PLOS datasets provided by the organizers. We developed a two-stage framework for lay summarization of biomedical scientific articles. In the first stage, we generated summaries using BART and PEGASUS LLMs by fine-tuning them on the given datasets. In the second stage, we combined the generated summaries and input them to BioBART, and then fine-tuned it on the same datasets. Our findings show that combining general and domain-specific LLMs enhances performance.", }
Lay summarization is essential but challenging, as it simplifies scientific information for non-experts and keeps them updated with the latest scientific knowledge. In our participation in the Shared Task: Lay Summarization of Biomedical Research Articles @ BioNLP Workshop (Goldsack et al., 2024), ACL 2024, we conducted a comprehensive evaluation on abstractive summarization of biomedical literature using Large Language Models (LLMs) and assessed the performance using ten metrics across three categories: relevance, readability, and factuality, using eLife and PLOS datasets provided by the organizers. We developed a two-stage framework for lay summarization of biomedical scientific articles. In the first stage, we generated summaries using BART and PEGASUS LLMs by fine-tuning them on the given datasets. In the second stage, we combined the generated summaries and input them to BioBART, and then fine-tuned it on the same datasets. Our findings show that combining general and domain-specific LLMs enhances performance.
[ "Karotia, Akanksha", "Susan, Seba" ]
{B}io{L}ay{\_}{AK}{\_}{SS} at {B}io{L}ay{S}umm: Domain Adaptation by Two-Stage Fine-Tuning of Large Language Models used for Biomedical Lay Summary Generation
bionlp-1.69
Poster
2309.17332v2
https://aclanthology.org/2024.bionlp-1.70.bib
@inproceedings{pakull-etal-2024-wispermed, title = "{W}is{P}er{M}ed at {B}io{L}ay{S}umm: Adapting Autoregressive Large Language Models for Lay Summarization of Scientific Articles", author = {Pakull, Tabea Margareta Grace and Damm, Hendrik and Idrissi-Yaghir, Ahmad and Sch{\"a}fer, Henning and Horn, Peter A. and Friedrich, Christoph M.}, editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.70", pages = "769--779", abstract = "This paper details the efforts of the WisPerMed team in the BioLaySumm2024 Shared Task on automatic lay summarization in the biomedical domain, aimed at making scientific publications accessible to non-specialists. Large language models (LLMs), specifically the BioMistral and Llama3 models, were fine-tuned and employed to create lay summaries from complex scientific texts. The summarization performance was enhanced through various approaches, including instruction tuning, few-shot learning, and prompt variations tailored to incorporate specific context information. The experiments demonstrated that fine-tuning generally led to the best performance across most evaluated metrics. Few-shot learning notably improved the models{'} ability to generate relevant and factually accurate texts, particularly when using a well-crafted prompt. Additionally, a Dynamic Expert Selection (DES) mechanism to optimize the selection of text outputs based on readability and factuality metrics was developed. Out of 54 participants, the WisPerMed team reached the 4th place, measured by readability, factuality, and relevance. Determined by the overall score, our approach improved upon the baseline by approx. 5.5 percentage points and was only approx. 1.5 percentage points behind the first place.", }
This paper details the efforts of the WisPerMed team in the BioLaySumm2024 Shared Task on automatic lay summarization in the biomedical domain, aimed at making scientific publications accessible to non-specialists. Large language models (LLMs), specifically the BioMistral and Llama3 models, were fine-tuned and employed to create lay summaries from complex scientific texts. The summarization performance was enhanced through various approaches, including instruction tuning, few-shot learning, and prompt variations tailored to incorporate specific context information. The experiments demonstrated that fine-tuning generally led to the best performance across most evaluated metrics. Few-shot learning notably improved the models{'} ability to generate relevant and factually accurate texts, particularly when using a well-crafted prompt. Additionally, a Dynamic Expert Selection (DES) mechanism to optimize the selection of text outputs based on readability and factuality metrics was developed. Out of 54 participants, the WisPerMed team reached the 4th place, measured by readability, factuality, and relevance. Determined by the overall score, our approach improved upon the baseline by approx. 5.5 percentage points and was only approx. 1.5 percentage points behind the first place.
[ "Pakull, Tabea Margareta Grace", "Damm, Hendrik", "Idrissi-Yaghir, Ahmad", "Sch{\\\"a}fer, Henning", "Horn, Peter A.", "Friedrich, Christoph M." ]
{W}is{P}er{M}ed at {B}io{L}ay{S}umm: Adapting Autoregressive Large Language Models for Lay Summarization of Scientific Articles
bionlp-1.70
Poster
2405.11950v1
https://aclanthology.org/2024.bionlp-1.71.bib
@inproceedings{gonzalez-sanchez-martinez-2024-hulat, title = "{HULAT}-{UC}3{M} at {B}iolay{S}umm: Adaptation of {B}io{BART} and Longformer models to summarizing biomedical documents", author = "Gonzalez Sanchez, Adrian and Mart{\'\i}nez, Paloma", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.71", pages = "780--785", abstract = "This article presents our submission to the Bio- LaySumm 2024 shared task: Lay Summarization of Biomedical Research Articles. The objective of this task is to generate summaries that are simplified in a concise and less technical way, in order to facilitate comprehension by non-experts users. A pre-trained BioBART model was employed to fine-tune the articles from the two journals, thereby generating two models, one for each journal. The submission achieved the 12th best ranking in the task, attaining a meritorious first place in the Relevance ROUGE-1 metric.", }
This article presents our submission to the BioLaySumm 2024 shared task: Lay Summarization of Biomedical Research Articles. The objective of this task is to generate summaries that are simplified in a concise and less technical way, in order to facilitate comprehension by non-expert users. A pre-trained BioBART model was fine-tuned on the articles from the two journals, thereby generating two models, one for each journal. The submission achieved the 12th best ranking in the task, attaining a meritorious first place in the Relevance ROUGE-1 metric.
[ "Gonzalez Sanchez, Adrian", "Mart{\\'\\i}nez, Paloma" ]
{HULAT}-{UC}3{M} at {B}iolay{S}umm: Adaptation of {B}io{BART} and Longformer models to summarizing biomedical documents
bionlp-1.71
Poster
2204.03905v2
https://aclanthology.org/2024.bionlp-1.72.bib
@inproceedings{kim-etal-2024-saama-technologies, title = "Saama Technologies at {B}io{L}ay{S}umm: Abstract based fine-tuned models with {L}o{RA}", author = "Kim, Hwanmun and Kanakarajan, Kamal raj and Sankarasubbu, Malaikannan", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.72", pages = "786--792", abstract = "Lay summarization of biomedical research articles is a challenging problem due to their use of technical terms and background knowledge requirements, despite the potential benefits of these research articles to the public. We worked on this problem while participating in BioLaySumm 2024. We experimented with various fine-tuning approaches to generate better lay summaries for biomedical research articles. After several experiments, we built a LoRA model with unsupervised fine-tuning based on the abstracts of the given articles, followed by a post-processing unit to remove repeated sentences. Our model was ranked 3rd overall on the BioLaySumm 2024 leaderboard. We analyzed the different approaches we experimented with and suggested several ideas to improve our model further.", }
Lay summarization of biomedical research articles is a challenging problem due to their use of technical terms and background knowledge requirements, despite the potential benefits of these research articles to the public. We worked on this problem while participating in BioLaySumm 2024. We experimented with various fine-tuning approaches to generate better lay summaries for biomedical research articles. After several experiments, we built a LoRA model with unsupervised fine-tuning based on the abstracts of the given articles, followed by a post-processing unit to remove repeated sentences. Our model was ranked 3rd overall on the BioLaySumm 2024 leaderboard. We analyzed the different approaches we experimented with and suggested several ideas to improve our model further.
[ "Kim, Hwanmun", "Kanakarajan, Kamal raj", "Sankarasubbu, Malaikannan" ]
Saama Technologies at {B}io{L}ay{S}umm: Abstract based fine-tuned models with {L}o{RA}
bionlp-1.72
Poster
2402.16843v1
https://aclanthology.org/2024.bionlp-1.73.bib
@inproceedings{stefanou-etal-2024-auth, title = "{AUTH} at {B}io{L}ay{S}umm 2024: Bringing Scientific Content to Kids", author = "Stefanou, Loukritia and Passali, Tatiana and Tsoumakas, Grigorios", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.73", pages = "793--803", abstract = "The BioLaySumm 2024 shared task at the ACL 2024 BioNLP workshop aims to transform biomedical research articles into lay summaries suitable for a broad audience, including children. We utilize the BioBART model, designed for the biomedical sector, to convert complex scientific data into clear, concise summaries. Our dataset, which includes a range of scientific abstracts, enables us to address the diverse information needs of our audience. This focus ensures that our summaries are accessible to both general and younger lay audience. Additionally, we employ specialized tokens and augmentation techniques to optimize the model{'}s performance. Our methodology proved effective, earning us the 7th rank on the final leaderboard out of 57 participants.", }
The BioLaySumm 2024 shared task at the ACL 2024 BioNLP workshop aims to transform biomedical research articles into lay summaries suitable for a broad audience, including children. We utilize the BioBART model, designed for the biomedical sector, to convert complex scientific data into clear, concise summaries. Our dataset, which includes a range of scientific abstracts, enables us to address the diverse information needs of our audience. This focus ensures that our summaries are accessible to both general and younger lay audiences. Additionally, we employ specialized tokens and augmentation techniques to optimize the model{'}s performance. Our methodology proved effective, earning us the 7th rank on the final leaderboard out of 57 participants.
[ "Stefanou, Loukritia", "Passali, Tatiana", "Tsoumakas, Grigorios" ]
{AUTH} at {B}io{L}ay{S}umm 2024: Bringing Scientific Content to Kids
bionlp-1.73
Poster
2405.13179v4
https://aclanthology.org/2024.bionlp-1.74.bib
@inproceedings{chizhikova-etal-2024-sinai, title = "{SINAI} at {B}io{L}ay{S}umm: Self-Play Fine-Tuning of Large Language Models for Biomedical Lay Summarisation", author = "Chizhikova, Mariia and D{\'\i}az-Galiano, Manuel Carlos and Ure{\~n}a-L{\'o}pez, L. Alfonso and Mart{\'\i}n-Valdivia, Mar{\'\i}a-Teresa", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.74", pages = "804--809", abstract = "Effective disclosure of scientific knowledge and advancements to the general public is often hindered by the complexity of the technical language used in research, which is often very difficult, if not impossible, for non-experts to understand. In this paper we present the approach developed by the SINAI team as the result of our participation in the BioLaySumm shared task hosted by the BioNLP workshop at ACL 2024. Our approach stems from the experimentation we performed in order to test the ability of state-of-the-art pre-trained large language models, namely GPT 3.5, GPT 4 and Llama-3, to tackle this task in a few-shot manner. In order to improve this baseline, we opted for fine-tuning Llama-3 by applying parameter-efficient methodologies. The best performing system resulted from applying the self-play fine-tuning method, which allows the model to improve by learning to distinguish its own generations from the previous step from the gold standard summaries. This approach achieved a 0.4205 ROUGE-1 score and a 0.8583 BERTScore.", }
Effective disclosure of scientific knowledge and advancements to the general public is often hindered by the complexity of the technical language used in research, which is often very difficult, if not impossible, for non-experts to understand. In this paper we present the approach developed by the SINAI team as the result of our participation in the BioLaySumm shared task hosted by the BioNLP workshop at ACL 2024. Our approach stems from the experimentation we performed in order to test the ability of state-of-the-art pre-trained large language models, namely GPT 3.5, GPT 4 and Llama-3, to tackle this task in a few-shot manner. In order to improve this baseline, we opted for fine-tuning Llama-3 by applying parameter-efficient methodologies. The best performing system resulted from applying the self-play fine-tuning method, which allows the model to improve by learning to distinguish its own generations from the previous step from the gold standard summaries. This approach achieved a 0.4205 ROUGE-1 score and a 0.8583 BERTScore.
[ "Chizhikova, Mariia", "D{\\'\\i}az-Galiano, Manuel Carlos", "Ure{\\~n}a-L{\\'o}pez, L. Alfonso", "Mart{\\'\\i}n-Valdivia, Mar{\\'\\i}a-Teresa" ]
{SINAI} at {B}io{L}ay{S}umm: Self-Play Fine-Tuning of Large Language Models for Biomedical Lay Summarisation
bionlp-1.74
Poster
2309.17332v2
https://aclanthology.org/2024.bionlp-1.75.bib
@inproceedings{ji-etal-2024-rag, title = "{RAG}-{RLRC}-{L}ay{S}um at {B}io{L}ay{S}umm: Integrating Retrieval-Augmented Generation and Readability Control for Layman Summarization of Biomedical Texts", author = "Ji, Yuelyu and Li, Zhuochun and Meng, Rui and Sivarajkumar, Sonish and Wang, Yanshan and Yu, Zeshui and Ji, Hui and Han, Yushui and Zeng, Hanyu and He, Daqing", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.75", pages = "810--817", abstract = "This paper introduces the RAG-RLRC-LaySum framework, designed to make complex biomedical research accessible to laymen through advanced Natural Language Processing (NLP) techniques. Our innovative Retrieval Augmentation Generation (RAG) solution, enhanced by a reranking method, utilizes multiple knowledge sources to ensure the precision and pertinence of lay summaries. Additionally, our Reinforcement Learning for Readability Control (RLRC) strategy improves readability, making scientific content comprehensible to non-specialists. Evaluations using the publicly accessible PLOS and eLife datasets show that our methods surpass Plain Gemini model, demonstrating a 20{\%} increase in readability scores, a 15{\%} improvement in ROUGE-2 relevance scores, and a 10{\%} enhancement in factual accuracy. The RAG-RLRC-LaySum framework effectively democratizes scientific knowledge, enhancing public engagement with biomedical discoveries.", }
This paper introduces the RAG-RLRC-LaySum framework, designed to make complex biomedical research accessible to laymen through advanced Natural Language Processing (NLP) techniques. Our innovative Retrieval Augmentation Generation (RAG) solution, enhanced by a reranking method, utilizes multiple knowledge sources to ensure the precision and pertinence of lay summaries. Additionally, our Reinforcement Learning for Readability Control (RLRC) strategy improves readability, making scientific content comprehensible to non-specialists. Evaluations using the publicly accessible PLOS and eLife datasets show that our methods surpass the plain Gemini model, demonstrating a 20{\%} increase in readability scores, a 15{\%} improvement in ROUGE-2 relevance scores, and a 10{\%} enhancement in factual accuracy. The RAG-RLRC-LaySum framework effectively democratizes scientific knowledge, enhancing public engagement with biomedical discoveries.
[ "Ji, Yuelyu", "Li, Zhuochun", "Meng, Rui", "Sivarajkumar, Sonish", "Wang, Yanshan", "Yu, Zeshui", "Ji, Hui", "Han, Yushui", "Zeng, Hanyu", "He, Daqing" ]
{RAG}-{RLRC}-{L}ay{S}um at {B}io{L}ay{S}umm: Integrating Retrieval-Augmented Generation and Readability Control for Layman Summarization of Biomedical Texts
bionlp-1.75
Poster
2405.13179v4
https://aclanthology.org/2024.bionlp-1.76.bib
@inproceedings{zhou-etal-2024-team, title = "Team {YXZ} at {B}io{L}ay{S}umm: Adapting Large Language Models for Biomedical Lay Summarization", author = "Zhou, Jieli and Ye, Cheng and Xu, Pengcheng and Xin, Hongyi", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.76", pages = "818--825", abstract = "Biomedical literature is crucial for disseminating new scientific findings. However, the complexity of these research articles often leads to misinterpretations by the public. To address this urgent issue, we participated in the BioLaySumm task at the 2024 ACL BioNLP workshop, which focuses on automatically simplifying technical biomedical articles for non-technical audiences. We conducted a systematic evaluation of SOTA large language models (LLMs) in 2024 and found that LLMs can generally achieve better readability scores than smaller models like Bart. Then we iteratively developed techniques of title infusing, K-shot prompting, LLM rewriting and instruction finetuning to further boost readability while balancing factuality and relevance. Notably, our submission achieved first place in readability at the workshop, and among the top-3 teams with the highest readability scores, we have the best overall rank. Here, we present our experiments and findings on how to effectively adapt LLMs for automatic lay summarization. Our code is available at https://github.com/zhoujieli/biolaysumm.", }
Biomedical literature is crucial for disseminating new scientific findings. However, the complexity of these research articles often leads to misinterpretations by the public. To address this urgent issue, we participated in the BioLaySumm task at the 2024 ACL BioNLP workshop, which focuses on automatically simplifying technical biomedical articles for non-technical audiences. We conducted a systematic evaluation of SOTA large language models (LLMs) in 2024 and found that LLMs can generally achieve better readability scores than smaller models like Bart. Then we iteratively developed techniques of title infusing, K-shot prompting, LLM rewriting and instruction finetuning to further boost readability while balancing factuality and relevance. Notably, our submission achieved first place in readability at the workshop, and among the top-3 teams with the highest readability scores, we have the best overall rank. Here, we present our experiments and findings on how to effectively adapt LLMs for automatic lay summarization. Our code is available at https://github.com/zhoujieli/biolaysumm.
[ "Zhou, Jieli", "Ye, Cheng", "Xu, Pengcheng", "Xin, Hongyi" ]
Team {YXZ} at {B}io{L}ay{S}umm: Adapting Large Language Models for Biomedical Lay Summarization
bionlp-1.76
Poster
2309.17332v2
https://aclanthology.org/2024.bionlp-1.77.bib
@inproceedings{modi-karthikeyan-2024-eulerian, title = "Eulerian at {B}io{L}ay{S}umm: Preprocessing Over Abstract is All You Need", author = "Modi, Satyam and Karthikeyan, T", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.77", pages = "826--830", abstract = "In this paper, we present our approach to the BioLaySumm 2024 Shared Task on Lay Summarization of Biomedical Research Articles at the BioNLP workshop 2024. The task aims to generate lay summaries from the abstract and main texts of biomedical research articles, making them understandable to lay audiences. We used some preprocessing techniques and finetuned FLAN-T5 models for the summarization task. Our method achieved an AlignScore of 0.9914 and a SummaC metric score of 0.944.", }
In this paper, we present our approach to the BioLaySumm 2024 Shared Task on Lay Summarization of Biomedical Research Articles at the BioNLP workshop 2024. The task aims to generate lay summaries from the abstract and main texts of biomedical research articles, making them understandable to lay audiences. We used some preprocessing techniques and finetuned FLAN-T5 models for the summarization task. Our method achieved an AlignScore of 0.9914 and a SummaC metric score of 0.944.
[ "Modi, Satyam", "Karthikeyan, T" ]
Eulerian at {B}io{L}ay{S}umm: Preprocessing Over Abstract is All You Need
bionlp-1.77
Poster
2305.07137v1
https://aclanthology.org/2024.bionlp-1.78.bib
@inproceedings{malik-etal-2024-hgp, title = "{HGP}-{NLP} at {B}io{L}ay{S}umm: Leveraging {L}o{RA} for Lay Summarization of Biomedical Research Articles using {S}eq2{S}eq Transformers", author = "Malik, Hemang and Pradeep, Gaurav and Seth, Pratinav", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.78", pages = "831--836", abstract = "Lay summarization aims to generate summaries of technical articles for non-experts, enabling easy comprehension for a general audience. The technical language used in research often hinders effective communication of scientific knowledge, making it difficult for non-experts to understand. Automatic lay summarization can enhance access to scientific literature, promoting interdisciplinary knowledge sharing and public understanding. This has become especially important for biomedical articles, given the current global need for clear medical information. Large Language Models (LLMs), with their remarkable language understanding capabilities, are ideal for abstractive summarization, helping to make complex information accessible to the public. This paper details our submissions to the BioLaySumm 2024 Shared Task: Lay Summarization of Biomedical Research Articles. We fine-tune and evaluate sequence-to-sequence models like T5 across various training dataset settings and optimization methods such as LoRA for lay summarization. Our submission achieved the 53rd position overall.", }
Lay summarization aims to generate summaries of technical articles for non-experts, enabling easy comprehension for a general audience. The technical language used in research often hinders effective communication of scientific knowledge, making it difficult for non-experts to understand. Automatic lay summarization can enhance access to scientific literature, promoting interdisciplinary knowledge sharing and public understanding. This has become especially important for biomedical articles, given the current global need for clear medical information. Large Language Models (LLMs), with their remarkable language understanding capabilities, are ideal for abstractive summarization, helping to make complex information accessible to the public. This paper details our submissions to the BioLaySumm 2024 Shared Task: Lay Summarization of Biomedical Research Articles. We fine-tune and evaluate sequence-to-sequence models like T5 across various training dataset settings and optimization methods such as LoRA for lay summarization. Our submission achieved the 53rd position overall.
[ "Malik, Hemang", "Pradeep, Gaurav", "Seth, Pratinav" ]
{HGP}-{NLP} at {B}io{L}ay{S}umm: Leveraging {L}o{RA} for Lay Summarization of Biomedical Research Articles using {S}eq2{S}eq Transformers
bionlp-1.78
Poster
2309.17332v2
https://aclanthology.org/2024.bionlp-1.79.bib
@inproceedings{bao-etal-2024-ctyun, title = "Ctyun {AI} at {B}io{L}ay{S}umm: Enhancing Lay Summaries of Biomedical Articles Through Large Language Models and Data Augmentation", author = "Bao, Siyu and Zhao, Ruijing and Zhang, Siqin and Zhang, Jinghui and Wang, Weiyin and Ru, Yunian", editor = "Demner-Fushman, Dina and Ananiadou, Sophia and Miwa, Makoto and Roberts, Kirk and Tsujii, Junichi", booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.bionlp-1.79", pages = "837--844", abstract = "Lay summaries play a crucial role in making scientific research accessible to a wider audience. However, generating lay summaries from lengthy articles poses significant challenges. We consider two approaches to address this issue: Hard Truncation, which preserves the most informative initial portion of the article, and Text Chunking, which segments articles into smaller, manageable chunks. Our workflow encompasses data preprocessing, augmentation, prompt engineering, and fine-tuning large language models. We explore the influence of pretrained model selection, inference prompt design, and hyperparameter tuning on summarization performance. Our methods demonstrate effectiveness in generating high-quality, informative lay summaries, achieving the second-best performance in the BioLaySumm shared task at BioNLP 2024.", }
Lay summaries play a crucial role in making scientific research accessible to a wider audience. However, generating lay summaries from lengthy articles poses significant challenges. We consider two approaches to address this issue: Hard Truncation, which preserves the most informative initial portion of the article, and Text Chunking, which segments articles into smaller, manageable chunks. Our workflow encompasses data preprocessing, augmentation, prompt engineering, and fine-tuning large language models. We explore the influence of pretrained model selection, inference prompt design, and hyperparameter tuning on summarization performance. Our methods demonstrate effectiveness in generating high-quality, informative lay summaries, achieving the second-best performance in the BioLaySumm shared task at BioNLP 2024.
[ "Bao, Siyu", "Zhao, Ruijing", "Zhang, Siqin", "Zhang, Jinghui", "Wang, Weiyin", "Ru, Yunian" ]
Ctyun {AI} at {B}io{L}ay{S}umm: Enhancing Lay Summaries of Biomedical Articles Through Large Language Models and Data Augmentation
bionlp-1.79
Poster
2309.17332v2
https://aclanthology.org/2024.c3nlp-1.1.bib
@inproceedings{wang-etal-2024-cdeval, title = "{CDE}val: A Benchmark for Measuring the Cultural Dimensions of Large Language Models", author = "Wang, Yuhang and Zhu, Yanxu and Kong, Chao and Wei, Shuyu and Yi, Xiaoyuan and Xie, Xing and Sang, Jitao", editor = "Prabhakaran, Vinodkumar and Dev, Sunipa and Benotti, Luciana and Hershcovich, Daniel and Cabello, Laura and Cao, Yong and Adebara, Ife and Zhou, Li", booktitle = "Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.c3nlp-1.1", pages = "1--16", abstract = "As the scaling of Large Language Models (LLMs) has dramatically enhanced their capabilities, there has been a growing focus on the alignment problem to ensure their responsible and ethical use. While existing alignment efforts predominantly concentrate on universal values such as the HHH principle, the aspect of culture, which is inherently pluralistic and diverse, has not received adequate attention. This work introduces a new benchmark, CDEval, aimed at evaluating the cultural dimensions of LLMs. CDEval is constructed by incorporating both GPT-4{'}s automated generation and human verification, covering six cultural dimensions across seven domains. Our comprehensive experiments provide intriguing insights into the culture of mainstream LLMs, highlighting both consistencies and variations across different dimensions and domains. The findings underscore the importance of integrating cultural considerations in LLM development, particularly for applications in diverse cultural settings. This benchmark serves as a valuable resource for cultural studies in LLMs, paving the way for more culturally aware and sensitive models.", }
As the scaling of Large Language Models (LLMs) has dramatically enhanced their capabilities, there has been a growing focus on the alignment problem to ensure their responsible and ethical use. While existing alignment efforts predominantly concentrate on universal values such as the HHH principle, the aspect of culture, which is inherently pluralistic and diverse, has not received adequate attention. This work introduces a new benchmark, CDEval, aimed at evaluating the cultural dimensions of LLMs. CDEval is constructed by incorporating both GPT-4{'}s automated generation and human verification, covering six cultural dimensions across seven domains. Our comprehensive experiments provide intriguing insights into the culture of mainstream LLMs, highlighting both consistencies and variations across different dimensions and domains. The findings underscore the importance of integrating cultural considerations in LLM development, particularly for applications in diverse cultural settings. This benchmark serves as a valuable resource for cultural studies in LLMs, paving the way for more culturally aware and sensitive models.
[ "Wang, Yuhang", "Zhu, Yanxu", "Kong, Chao", "Wei, Shuyu", "Yi, Xiaoyuan", "Xie, Xing", "Sang, Jitao" ]
{CDE}val: A Benchmark for Measuring the Cultural Dimensions of Large Language Models
c3nlp-1.1
Poster
2311.16421v3
https://aclanthology.org/2024.c3nlp-1.2.bib
@inproceedings{baltaji-etal-2024-conformity, title = "Conformity, Confabulation, and Impersonation: Persona Inconstancy in Multi-Agent {LLM} Collaboration", author = "Baltaji, Razan and Hemmatian, Babak and Varshney, Lav", editor = "Prabhakaran, Vinodkumar and Dev, Sunipa and Benotti, Luciana and Hershcovich, Daniel and Cabello, Laura and Cao, Yong and Adebara, Ife and Zhou, Li", booktitle = "Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.c3nlp-1.2", pages = "17--31", abstract = "This study explores the sources of instability in maintaining cultural personas and opinions within multi-agent LLM systems. Drawing on simulations of inter-cultural collaboration and debate, we analyze agents{'} pre- and post-discussion private responses alongside chat transcripts to assess the stability of cultural personas and the impact of opinion diversity on group outcomes. Our findings suggest that multi-agent discussions can encourage collective decisions that reflect diverse perspectives, yet this benefit is tempered by the agents{'} susceptibility to conformity due to perceived peer pressure and challenges in maintaining consistent personas and opinions. Counterintuitively, instructions that encourage debate in support of one{'}s opinions increase the rate of instability. Without addressing the factors we identify, the full potential of multi-agent frameworks for producing more culturally diverse AI outputs will remain untapped.", }
This study explores the sources of instability in maintaining cultural personas and opinions within multi-agent LLM systems. Drawing on simulations of inter-cultural collaboration and debate, we analyze agents{'} pre- and post-discussion private responses alongside chat transcripts to assess the stability of cultural personas and the impact of opinion diversity on group outcomes. Our findings suggest that multi-agent discussions can encourage collective decisions that reflect diverse perspectives, yet this benefit is tempered by the agents{'} susceptibility to conformity due to perceived peer pressure and challenges in maintaining consistent personas and opinions. Counterintuitively, instructions that encourage debate in support of one{'}s opinions increase the rate of instability. Without addressing the factors we identify, the full potential of multi-agent frameworks for producing more culturally diverse AI outputs will remain untapped.
[ "Baltaji, Razan", "Hemmatian, Babak", "Varshney, Lav" ]
Conformity, Confabulation, and Impersonation: Persona Inconstancy in Multi-Agent {LLM} Collaboration
c3nlp-1.2
Poster
2405.03862v2
https://aclanthology.org/2024.c3nlp-1.3.bib
@inproceedings{ferawati-etal-2024-synchronizing, title = "Synchronizing Approach in Designing Annotation Guidelines for Multilingual Datasets: A {COVID}-19 Case Study Using {E}nglish and {J}apanese Tweets", author = "Ferawati, Kiki and She, Wan Jou and Wakamiya, Shoko and Aramaki, Eiji", editor = "Prabhakaran, Vinodkumar and Dev, Sunipa and Benotti, Luciana and Hershcovich, Daniel and Cabello, Laura and Cao, Yong and Adebara, Ife and Zhou, Li", booktitle = "Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.c3nlp-1.3", pages = "32--41", abstract = "The difference in culture between the U.S. and Japan is a popular subject for Western vs. Eastern cultural comparison for researchers. One particular challenge is to obtain and annotate multilingual datasets. In this study, we utilized COVID-19 tweets from the two countries as a case study, focusing particularly on discussions concerning masks. The annotation task was designed to gain insights into societal attitudes toward the mask policies implemented in both countries. The aim of this study is to provide a practical approach for the annotation task by thoroughly documenting how we aligned the multilingual annotation guidelines to obtain a comparable dataset. We proceeded to document the effective practices during our annotation process to synchronize our multilingual guidelines. Furthermore, we discussed difficulties caused by differences in expression style and culture, and potential strategies that helped improve our agreement scores and reduce discrepancies between the annotation results in both languages. These findings offer an alternative method for synchronizing multilingual annotation guidelines and achieving feasible agreement scores for cross-cultural annotation tasks. This study resulted in a multilingual guideline in English and Japanese to annotate topics related to public discourses about COVID-19 masks in the U.S. and Japan.", }
The difference in culture between the U.S. and Japan is a popular subject for Western vs. Eastern cultural comparison for researchers. One particular challenge is to obtain and annotate multilingual datasets. In this study, we utilized COVID-19 tweets from the two countries as a case study, focusing particularly on discussions concerning masks. The annotation task was designed to gain insights into societal attitudes toward the mask policies implemented in both countries. The aim of this study is to provide a practical approach for the annotation task by thoroughly documenting how we aligned the multilingual annotation guidelines to obtain a comparable dataset. We proceeded to document the effective practices during our annotation process to synchronize our multilingual guidelines. Furthermore, we discussed difficulties caused by differences in expression style and culture, and potential strategies that helped improve our agreement scores and reduce discrepancies between the annotation results in both languages. These findings offer an alternative method for synchronizing multilingual annotation guidelines and achieving feasible agreement scores for cross-cultural annotation tasks. This study resulted in a multilingual guideline in English and Japanese to annotate topics related to public discourses about COVID-19 masks in the U.S. and Japan.
[ "Ferawati, Kiki", "She, Wan Jou", "Wakamiya, Shoko", "Aramaki, Eiji" ]
Synchronizing Approach in Designing Annotation Guidelines for Multilingual Datasets: A {COVID}-19 Case Study Using {E}nglish and {J}apanese Tweets
c3nlp-1.3
Poster
2108.10643v2
https://aclanthology.org/2024.c3nlp-1.4.bib
@inproceedings{wang-etal-2024-craft, title = "{CRAFT}: Extracting and Tuning Cultural Instructions from the Wild", author = "Wang, Bin and Lin, Geyu and Liu, Zhengyuan and Wei, Chengwei and Chen, Nancy", editor = "Prabhakaran, Vinodkumar and Dev, Sunipa and Benotti, Luciana and Hershcovich, Daniel and Cabello, Laura and Cao, Yong and Adebara, Ife and Zhou, Li", booktitle = "Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.c3nlp-1.4", pages = "42--47", abstract = "Large language models (LLMs) have rapidly evolved as the foundation of various natural language processing (NLP) applications. Despite their wide use cases, their understanding of culturally-related concepts and reasoning remains limited. Meantime, there is a significant need to enhance these models{'} cultural reasoning capabilities, especially concerning underrepresented regions. This paper introduces a novel pipeline for extracting high-quality, culturally-related instruction tuning datasets from vast unstructured corpora. We utilize a self-instruction generation pipeline to identify cultural concepts and trigger instruction. By integrating with a general-purpose instruction tuning dataset, our model demonstrates enhanced capabilities in recognizing and understanding regional cultural nuances, thereby enhancing its reasoning capabilities. We conduct experiments across three regions: Singapore, the Philippines, and the United States, achieving performance improvement of up to 6{\%}. Our research opens new avenues for extracting cultural instruction tuning sets directly from unstructured data, setting a precedent for future innovations in the field.", }
Large language models (LLMs) have rapidly evolved as the foundation of various natural language processing (NLP) applications. Despite their wide use cases, their understanding of culturally-related concepts and reasoning remains limited. Meantime, there is a significant need to enhance these models{'} cultural reasoning capabilities, especially concerning underrepresented regions. This paper introduces a novel pipeline for extracting high-quality, culturally-related instruction tuning datasets from vast unstructured corpora. We utilize a self-instruction generation pipeline to identify cultural concepts and trigger instruction. By integrating with a general-purpose instruction tuning dataset, our model demonstrates enhanced capabilities in recognizing and understanding regional cultural nuances, thereby enhancing its reasoning capabilities. We conduct experiments across three regions: Singapore, the Philippines, and the United States, achieving performance improvement of up to 6{\%}. Our research opens new avenues for extracting cultural instruction tuning sets directly from unstructured data, setting a precedent for future innovations in the field.
[ "Wang, Bin", "Lin, Geyu", "Liu, Zhengyuan", "Wei, Chengwei", "Chen, Nancy" ]
{CRAFT}: Extracting and Tuning Cultural Instructions from the Wild
c3nlp-1.4
Poster
2405.03138v2
https://aclanthology.org/2024.c3nlp-1.5.bib
@inproceedings{jinnai-2024-cross, title = "Does Cross-Cultural Alignment Change the Commonsense Morality of Language Models?", author = "Jinnai, Yuu", editor = "Prabhakaran, Vinodkumar and Dev, Sunipa and Benotti, Luciana and Hershcovich, Daniel and Cabello, Laura and Cao, Yong and Adebara, Ife and Zhou, Li", booktitle = "Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.c3nlp-1.5", pages = "48--64", abstract = "Alignment of the language model with human preferences is a common approach to making a language model useful to end users. However, most alignment work is done in English, and human preference datasets are dominated by English, reflecting only the preferences of English-speaking annotators. Nevertheless, it is common practice to use the English preference data, either directly or by translating it into the target language, when aligning a multilingual language model. The question is whether such an alignment strategy marginalizes the preference of non-English speaking users. To this end, we investigate the effect of aligning Japanese language models with (mostly) English resources. In particular, we focus on evaluating whether the commonsense morality of the resulting fine-tuned models is aligned with Japanese culture using the JCommonsenseMorality (JCM) and ETHICS datasets. The experimental results show that the fine-tuned model outperforms the SFT model. However, it does not demonstrate the same level of improvement as a model fine-tuned using the JCM, suggesting that while some aspects of commonsense morality are transferable, others may not be.", }
Alignment of the language model with human preferences is a common approach to making a language model useful to end users. However, most alignment work is done in English, and human preference datasets are dominated by English, reflecting only the preferences of English-speaking annotators. Nevertheless, it is common practice to use the English preference data, either directly or by translating it into the target language, when aligning a multilingual language model. The question is whether such an alignment strategy marginalizes the preference of non-English speaking users. To this end, we investigate the effect of aligning Japanese language models with (mostly) English resources. In particular, we focus on evaluating whether the commonsense morality of the resulting fine-tuned models is aligned with Japanese culture using the JCommonsenseMorality (JCM) and ETHICS datasets. The experimental results show that the fine-tuned model outperforms the SFT model. However, it does not demonstrate the same level of improvement as a model fine-tuned using the JCM, suggesting that while some aspects of commonsense morality are transferable, others may not be.
[ "Jinnai, Yuu" ]
Does Cross-Cultural Alignment Change the Commonsense Morality of Language Models?
c3nlp-1.5
Poster
2406.16316v1
https://aclanthology.org/2024.c3nlp-1.6.bib
@inproceedings{nie-etal-2024-multilingual, title = "Do Multilingual Large Language Models Mitigate Stereotype Bias?", author = {Nie, Shangrui and Fromm, Michael and Welch, Charles and G{\"o}rge, Rebekka and Karimi, Akbar and Plepi, Joan and Mowmita, Nazia and Flores-Herr, Nicolas and Ali, Mehdi and Flek, Lucie}, editor = "Prabhakaran, Vinodkumar and Dev, Sunipa and Benotti, Luciana and Hershcovich, Daniel and Cabello, Laura and Cao, Yong and Adebara, Ife and Zhou, Li", booktitle = "Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.c3nlp-1.6", pages = "65--83", abstract = "While preliminary findings indicate that multilingual LLMs exhibit reduced bias compared to monolingual ones, a comprehensive understanding of the effect of multilingual training on bias mitigation, is lacking. This study addresses this gap by systematically training six LLMs of identical size (2.6B parameters) and architecture: five monolingual models (English, German, French, Italian, and Spanish) and one multilingual model trained on an equal distribution of data across these languages, all using publicly available data. To ensure robust evaluation, standard bias benchmarks were automatically translated into the five target languages and verified for both translation quality and bias preservation by human annotators. Our results consistently demonstrate that multilingual training effectively mitigates bias. Moreover, we observe that multilingual models achieve not only lower bias but also superior prediction accuracy when compared to monolingual models with the same amount of training data, model architecture, and size.", }
While preliminary findings indicate that multilingual LLMs exhibit reduced bias compared to monolingual ones, a comprehensive understanding of the effect of multilingual training on bias mitigation, is lacking. This study addresses this gap by systematically training six LLMs of identical size (2.6B parameters) and architecture: five monolingual models (English, German, French, Italian, and Spanish) and one multilingual model trained on an equal distribution of data across these languages, all using publicly available data. To ensure robust evaluation, standard bias benchmarks were automatically translated into the five target languages and verified for both translation quality and bias preservation by human annotators. Our results consistently demonstrate that multilingual training effectively mitigates bias. Moreover, we observe that multilingual models achieve not only lower bias but also superior prediction accuracy when compared to monolingual models with the same amount of training data, model architecture, and size.
[ "Nie, Shangrui", "Fromm, Michael", "Welch, Charles", "G{\\\"o}rge, Rebekka", "Karimi, Akbar", "Plepi, Joan", "Mowmita, Nazia", "Flores-Herr, Nicolas", "Ali, Mehdi", "Flek, Lucie" ]
Do Multilingual Large Language Models Mitigate Stereotype Bias?
c3nlp-1.6
Poster
2312.07141v2
https://aclanthology.org/2024.c3nlp-1.7.bib
@inproceedings{wong-2024-sociocultural, title = "Sociocultural Considerations in Monitoring Anti-{LGBTQ}+ Content on Social Media", author = "Wong, Sidney", editor = "Prabhakaran, Vinodkumar and Dev, Sunipa and Benotti, Luciana and Hershcovich, Daniel and Cabello, Laura and Cao, Yong and Adebara, Ife and Zhou, Li", booktitle = "Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.c3nlp-1.7", pages = "84--97", abstract = "The purpose of this paper is to ascertain the influence of sociocultural factors (i.e., social, cultural, and political) in the development of hate speech detection systems. We set out to investigate the suitability of using open-source training data to monitor levels of anti-LGBTQ+ content on social media across different national-varieties of English. Our findings suggest the social and cultural alignment of open-source hate speech data sets influences the predicted outputs. Furthermore, the keyword-search approach of anti-LGBTQ+ slurs in the development of open-source training data encourages detection models to overfit on slurs; therefore, anti-LGBTQ+ content may go undetected. We recommend combining empirical outputs with qualitative insights to ensure these systems are fit for purpose.", }
The purpose of this paper is to ascertain the influence of sociocultural factors (i.e., social, cultural, and political) in the development of hate speech detection systems. We set out to investigate the suitability of using open-source training data to monitor levels of anti-LGBTQ+ content on social media across different national-varieties of English. Our findings suggest the social and cultural alignment of open-source hate speech data sets influences the predicted outputs. Furthermore, the keyword-search approach of anti-LGBTQ+ slurs in the development of open-source training data encourages detection models to overfit on slurs; therefore, anti-LGBTQ+ content may go undetected. We recommend combining empirical outputs with qualitative insights to ensure these systems are fit for purpose.
[ "Wong, Sidney" ]
Sociocultural Considerations in Monitoring Anti-{LGBTQ}+ Content on Social Media
c3nlp-1.7
Poster
2407.01149v1
https://aclanthology.org/2024.c3nlp-1.8.bib
@inproceedings{ahmad-etal-2024-generative, title = "Are Generative Language Models Multicultural? A Study on {H}ausa Culture and Emotions using {C}hat{GPT}", author = "Ahmad, Ibrahim and Dudy, Shiran and Ramachandranpillai, Resmi and Church, Kenneth", editor = "Prabhakaran, Vinodkumar and Dev, Sunipa and Benotti, Luciana and Hershcovich, Daniel and Cabello, Laura and Cao, Yong and Adebara, Ife and Zhou, Li", booktitle = "Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.c3nlp-1.8", pages = "98--106", abstract = "Large Language Models (LLMs), such as ChatGPT, are widely used to generate content for various purposes and audiences. However, these models may not reflect the cultural and emotional diversity of their users, especially for low-resource languages. In this paper, we investigate how ChatGPT represents Hausa{'}s culture and emotions. We compare responses generated by ChatGPT with those provided by native Hausa speakers on 37 culturally relevant questions. We conducted experiments using emotion analysis. We also used two similarity metrics to measure the alignment between human and ChatGPT responses. We also collect human participants ratings and feedback on ChatGPT responses. Our results show that ChatGPT has some level of similarity to human responses, but also exhibits some gaps and biases in its knowledge and awareness of Hausa culture and emotions. We discuss the implications and limitations of our methodology and analysis and suggest ways to improve the performance and evaluation of LLMs for low-resource languages.", }
Large Language Models (LLMs), such as ChatGPT, are widely used to generate content for various purposes and audiences. However, these models may not reflect the cultural and emotional diversity of their users, especially for low-resource languages. In this paper, we investigate how ChatGPT represents Hausa{'}s culture and emotions. We compare responses generated by ChatGPT with those provided by native Hausa speakers on 37 culturally relevant questions. We conducted experiments using emotion analysis. We also used two similarity metrics to measure the alignment between human and ChatGPT responses. We also collect human participants ratings and feedback on ChatGPT responses. Our results show that ChatGPT has some level of similarity to human responses, but also exhibits some gaps and biases in its knowledge and awareness of Hausa culture and emotions. We discuss the implications and limitations of our methodology and analysis and suggest ways to improve the performance and evaluation of LLMs for low-resource languages.
[ "Ahmad, Ibrahim", "Dudy, Shiran", "Ramach", "ranpillai, Resmi", "Church, Kenneth" ]
Are Generative Language Models Multicultural? A Study on {H}ausa Culture and Emotions using {C}hat{GPT}
c3nlp-1.8
Poster
2406.19504v1
https://aclanthology.org/2024.c3nlp-1.9.bib
@inproceedings{oneil-etal-2024-computational, title = "Computational Language Documentation: Designing a Modular Annotation and Data Management Tool for Cross-cultural Applicability", author = "O{'}Neil, Alexandra and Swanson, Daniel and Chelliah, Shobhana", editor = "Prabhakaran, Vinodkumar and Dev, Sunipa and Benotti, Luciana and Hershcovich, Daniel and Cabello, Laura and Cao, Yong and Adebara, Ife and Zhou, Li", booktitle = "Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.c3nlp-1.9", pages = "107--116", abstract = "While developing computational language documentation tools, researchers must center the role of language communities in the process by carefully reflecting on and designing tools to support the varying needs and priorities of different language communities. This paper provides an example of how cross-cultural considerations discussed in literature about language documentation, data sovereignty, and community-led documentation projects can motivate the design of a computational language documentation tool by reflecting on our design process as we work towards developing an annotation and data management tool. We identify three recurring themes for cross-cultural consideration in the literature - Linguistic Sovereignty, Cultural Specificity, and Reciprocity - and present eight essential features for an annotation and data management tool that reflect these themes.", }
While developing computational language documentation tools, researchers must center the role of language communities in the process by carefully reflecting on and designing tools to support the varying needs and priorities of different language communities. This paper provides an example of how cross-cultural considerations discussed in literature about language documentation, data sovereignty, and community-led documentation projects can motivate the design of a computational language documentation tool by reflecting on our design process as we work towards developing an annotation and data management tool. We identify three recurring themes for cross-cultural consideration in the literature - Linguistic Sovereignty, Cultural Specificity, and Reciprocity - and present eight essential features for an annotation and data management tool that reflect these themes.
[ "O{'}Neil, Alex", "ra", "Swanson, Daniel", "Chelliah, Shobhana" ]
Computational Language Documentation: Designing a Modular Annotation and Data Management Tool for Cross-cultural Applicability
c3nlp-1.9
Poster
2010.01080v1
https://aclanthology.org/2024.climatenlp-1.1.bib
@inproceedings{singh-etal-2024-climate, title = "Climate Policy Transformer: Utilizing {NLP} to track the coherence of Climate Policy Documents in the Context of the {P}aris Agreement", author = "Singh, Prashant and Lehmann, Erik and Tyrrell, Mark", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.1", pages = "1--11", abstract = "Climate policy implementation is pivotal in global efforts to mitigate and adapt to climate change. In this context, this paper explores the use of Natural Language Processing (NLP) as a tool for policy advisors to efficiently track and assess climate policy and strategies, such as Nationally Determined Contributions (NDCs). These documents are essential for monitoring coherence with the Paris Agreement, yet their analysis traditionally demands significant labor and time. We demonstrate how to leverage NLP on existing climate policy databases to transform this process by structuring information extracted from these otherwise unstructured policy documents and opening avenues for a more in-depth analysis of national and regional policies. Central to our approach is the creation of a machine-learning (ML) dataset {'}CPo-CD{'}, based on data provided by the International Climate Initiative (IKI) and Climate Watch (CW). The CPo-CD dataset is utilized to fine-tune Transformer Models on classifying climate targets, actions, policies, and plans, along with their sector, mitigation-adaptation, and greenhouse gas (GHG) components. We publish our model and dataset on a Hugging Face repository.", }
Climate policy implementation is pivotal in global efforts to mitigate and adapt to climate change. In this context, this paper explores the use of Natural Language Processing (NLP) as a tool for policy advisors to efficiently track and assess climate policy and strategies, such as Nationally Determined Contributions (NDCs). These documents are essential for monitoring coherence with the Paris Agreement, yet their analysis traditionally demands significant labor and time. We demonstrate how to leverage NLP on existing climate policy databases to transform this process by structuring information extracted from these otherwise unstructured policy documents and opening avenues for a more in-depth analysis of national and regional policies. Central to our approach is the creation of a machine-learning (ML) dataset {'}CPo-CD{'}, based on data provided by the International Climate Initiative (IKI) and Climate Watch (CW). The CPo-CD dataset is utilized to fine-tune Transformer Models on classifying climate targets, actions, policies, and plans, along with their sector, mitigation-adaptation, and greenhouse gas (GHG) components. We publish our model and dataset on a Hugging Face repository.
[ "Singh, Prashant", "Lehmann, Erik", "Tyrrell, Mark" ]
Climate Policy Transformer: Utilizing {NLP} to track the coherence of Climate Policy Documents in the Context of the {P}aris Agreement
climatenlp-1.1
Poster
1608.05597v1
https://aclanthology.org/2024.climatenlp-1.2.bib
@inproceedings{dimmelmeier-etal-2024-informing, title = "Informing climate risk analysis using textual information - A research agenda", author = "Dimmelmeier, Andreas and Doll, Hendrik and Schierholz, Malte and Kormanyos, Emily and Fehr, Maurice and Ma, Bolei and Beck, Jacob and Fraser, Alexander and Kreuter, Frauke", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.2", pages = "12--26", abstract = "We present a research agenda focused on efficiently extracting, assuring quality, and consolidating textual company sustainability information to address urgent climate change decision-making needs. Starting from the goal to create integrated FAIR (Findable, Accessible, Interoperable, Reusable) climate-related data, we identify research needs pertaining to the technical aspects of information extraction as well as to the design of the integrated sustainability datasets that we seek to compile. Regarding extraction, we leverage technological advancements, particularly in large language models (LLMs) and Retrieval-Augmented Generation (RAG) pipelines, to unlock the underutilized potential of unstructured textual information contained in corporate sustainability reports. In applying these techniques, we review key challenges, which include the retrieval and extraction of CO$_2$ emission values from PDF documents, especially from unstructured tables and graphs therein, and the validation of automatically extracted data through comparisons with human-annotated values. We also review how existing use cases and practices in climate risk analytics relate to choices of what textual information should be extracted and how it could be linked to existing structured data.", }
We present a research agenda focused on efficiently extracting, assuring quality, and consolidating textual company sustainability information to address urgent climate change decision-making needs. Starting from the goal to create integrated FAIR (Findable, Accessible, Interoperable, Reusable) climate-related data, we identify research needs pertaining to the technical aspects of information extraction as well as to the design of the integrated sustainability datasets that we seek to compile. Regarding extraction, we leverage technological advancements, particularly in large language models (LLMs) and Retrieval-Augmented Generation (RAG) pipelines, to unlock the underutilized potential of unstructured textual information contained in corporate sustainability reports. In applying these techniques, we review key challenges, which include the retrieval and extraction of CO$_2$ emission values from PDF documents, especially from unstructured tables and graphs therein, and the validation of automatically extracted data through comparisons with human-annotated values. We also review how existing use cases and practices in climate risk analytics relate to choices of what textual information should be extracted and how it could be linked to existing structured data.
[ "Dimmelmeier, Andreas", "Doll, Hendrik", "Schierholz, Malte", "Kormanyos, Emily", "Fehr, Maurice", "Ma, Bolei", "Beck, Jacob", "Fraser, Alex", "er", "Kreuter, Frauke" ]
Informing climate risk analysis using textual information - A research agenda
climatenlp-1.2
Poster
1608.02519v1
https://aclanthology.org/2024.climatenlp-1.3.bib
@inproceedings{nguyen-etal-2024-climate, title = "My Climate Advisor: An Application of {NLP} in Climate Adaptation for Agriculture", author = "Nguyen, Vincent and Karimi, Sarvnaz and Hallgren, Willow and Harkin, Ashley and Prakash, Mahesh", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.3", pages = "27--45", abstract = "Climate adaptation in the agricultural sector necessitates tools that equip farmers and farm advisors with relevant and trustworthy information to help increase their resiliency to climate change. We introduce \textit{My Climate Advisor}, a question-answering (QA) prototype that synthesises information from different data sources, such as peer-reviewed scientific literature and high-quality, industry-relevant grey literature to generate answers, with references, to a given user{'}s question. Our prototype uses open-source generative models for data privacy and intellectual property protection, and retrieval augmented generation for answer generation, grounding and provenance. While there are standard evaluation metrics for QA systems, no existing evaluation framework suits our LLM-based QA application in the climate adaptation domain. We design an evaluation framework with seven metrics based on the requirements of the domain experts to judge the generated answers from 12 different LLM-based models. Our initial evaluations through a user study via domain experts show promising usability results.", }
Climate adaptation in the agricultural sector necessitates tools that equip farmers and farm advisors with relevant and trustworthy information to help increase their resiliency to climate change. We introduce \textit{My Climate Advisor}, a question-answering (QA) prototype that synthesises information from different data sources, such as peer-reviewed scientific literature and high-quality, industry-relevant grey literature to generate answers, with references, to a given user{'}s question. Our prototype uses open-source generative models for data privacy and intellectual property protection, and retrieval augmented generation for answer generation, grounding and provenance. While there are standard evaluation metrics for QA systems, no existing evaluation framework suits our LLM-based QA application in the climate adaptation domain. We design an evaluation framework with seven metrics based on the requirements of the domain experts to judge the generated answers from 12 different LLM-based models. Our initial evaluations through a user study via domain experts show promising usability results.
[ "Nguyen, Vincent", "Karimi, Sarvnaz", "Hallgren, Willow", "Harkin, Ashley", "Prakash, Mahesh" ]
My Climate Advisor: An Application of {NLP} in Climate Adaptation for Agriculture
climatenlp-1.3
Poster
2105.07867v1
https://aclanthology.org/2024.climatenlp-1.4.bib
@inproceedings{zanartu-etal-2024-generative, title = "Generative Debunking of Climate Misinformation", author = "Zanartu, Francisco and Otmakhova, Yulia and Cook, John and Frermann, Lea", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.4", pages = "46--62", abstract = "Misinformation about climate change causes numerous negative impacts, necessitating corrective responses. Psychological research has offered various strategies for reducing the influence of climate misinformation, such as the fact-myth-fallacy-fact-structure. However, practically implementing corrective interventions at scale represents a challenge. Automatic detection and correction of misinformation offers a solution to the misinformation problem. This study documents the development of large language models that accept as input a climate myth and produce a debunking that adheres to the fact-myth-fallacy-fact ({``}truth sandwich{''}) structure, by incorporating contrarian claim classification and fallacy detection into an LLM prompting framework. We combine open (Mixtral, Palm2) and proprietary (GPT-4) LLMs with prompting strategies of varying complexity. Experiments reveal promising performance of GPT-4 and Mixtral if combined with structured prompts. We identify specific challenges of debunking generation and human evaluation, and map out avenues for future work. We release a dataset of high-quality truth-sandwich debunkings, source code and a demo of the debunking system.", }
Misinformation about climate change causes numerous negative impacts, necessitating corrective responses. Psychological research has offered various strategies for reducing the influence of climate misinformation, such as the fact-myth-fallacy-fact-structure. However, practically implementing corrective interventions at scale represents a challenge. Automatic detection and correction of misinformation offers a solution to the misinformation problem. This study documents the development of large language models that accept as input a climate myth and produce a debunking that adheres to the fact-myth-fallacy-fact ({``}truth sandwich{''}) structure, by incorporating contrarian claim classification and fallacy detection into an LLM prompting framework. We combine open (Mixtral, Palm2) and proprietary (GPT-4) LLMs with prompting strategies of varying complexity. Experiments reveal promising performance of GPT-4 and Mixtral if combined with structured prompts. We identify specific challenges of debunking generation and human evaluation, and map out avenues for future work. We release a dataset of high-quality truth-sandwich debunkings, source code and a demo of the debunking system.
[ "Zanartu, Francisco", "Otmakhova, Yulia", "Cook, John", "Frermann, Lea" ]
Generative Debunking of Climate Misinformation
climatenlp-1.4
Poster
2407.05599v1
https://aclanthology.org/2024.climatenlp-1.5.bib
@inproceedings{su-pierrehumbert-2024-decoding, title = "Decoding Climate Disagreement: A Graph Neural Network-Based Approach to Understanding Social Media Dynamics", author = "Su, Ruiran and Pierrehumbert, Janet", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.5", pages = "63--81", abstract = "This paper presents the ClimateSent-GAT Model, a novel approach that combines Graph Attention Networks (GATs) with natural language processing techniques to accurately identify and predict disagreements within Reddit comment-reply pairs. Our model classifies disagreements into three categories: agree, disagree, and neutral. Leveraging the inherent graph structure of Reddit comment-reply pairs, the model significantly outperforms existing benchmarks by capturing complex interaction patterns and sentiment dynamics. This research advances graph-based NLP methodologies and provides actionable insights for policymakers and educators in climate science communication.", }
This paper presents the ClimateSent-GAT Model, a novel approach that combines Graph Attention Networks (GATs) with natural language processing techniques to accurately identify and predict disagreements within Reddit comment-reply pairs. Our model classifies disagreements into three categories: agree, disagree, and neutral. Leveraging the inherent graph structure of Reddit comment-reply pairs, the model significantly outperforms existing benchmarks by capturing complex interaction patterns and sentiment dynamics. This research advances graph-based NLP methodologies and provides actionable insights for policymakers and educators in climate science communication.
[ "Su, Ruiran", "Pierrehumbert, Janet" ]
Decoding Climate Disagreement: A Graph Neural Network-Based Approach to Understanding Social Media Dynamics
climatenlp-1.5
Poster
2407.07038v1
https://aclanthology.org/2024.climatenlp-1.6.bib
@inproceedings{hsu-etal-2024-evaluating, title = "Evaluating {C}hat{N}et{Z}ero, an {LLM}-Chatbot to Demystify Climate Pledges", author = "Hsu, Angel and Laney, Mason and Zhang, Ji and Manya, Diego and Farczadi, Linda", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.6", pages = "82--92", abstract = "This paper introduces and evaluates ChatNetZero, a large-language model (LLM) chatbot developed through Retrieval-Augmented Generation (RAG), which uses generative AI to produce answers grounded in verified, climate-domain specific information. We describe ChatNetZero{'}s design, particularly the innovation of anti-hallucination and reference modules designed to enhance the accuracy and credibility of generated responses. To evaluate ChatNetZero{'}s performance against other LLMs, including GPT-4, Gemini, Coral, and ChatClimate, we conduct two types of validation: comparing LLMs{'} generated responses to original source documents to verify their factual accuracy, and employing an expert survey to evaluate the overall quality, accuracy and relevance of each response. We find that while ChatNetZero responses show higher factual accuracy when compared to original source data, experts surveyed prefer lengthier responses that provide more context. Our results highlight the importance of prioritizing information presentation in the design of domain-specific LLMs to ensure that scientific information is effectively communicated, especially as even expert audiences find it challenging to assess the credibility of AI-generated content.", }
This paper introduces and evaluates ChatNetZero, a large-language model (LLM) chatbot developed through Retrieval-Augmented Generation (RAG), which uses generative AI to produce answers grounded in verified, climate-domain specific information. We describe ChatNetZero{'}s design, particularly the innovation of anti-hallucination and reference modules designed to enhance the accuracy and credibility of generated responses. To evaluate ChatNetZero{'}s performance against other LLMs, including GPT-4, Gemini, Coral, and ChatClimate, we conduct two types of validation: comparing LLMs{'} generated responses to original source documents to verify their factual accuracy, and employing an expert survey to evaluate the overall quality, accuracy and relevance of each response. We find that while ChatNetZero responses show higher factual accuracy when compared to original source data, experts surveyed prefer lengthier responses that provide more context. Our results highlight the importance of prioritizing information presentation in the design of domain-specific LLMs to ensure that scientific information is effectively communicated, especially as even expert audiences find it challenging to assess the credibility of AI-generated content.
[ "Hsu, Angel", "Laney, Mason", "Zhang, Ji", "Manya, Diego", "Farczadi, Linda" ]
Evaluating {C}hat{N}et{Z}ero, an {LLM}-Chatbot to Demystify Climate Pledges
climatenlp-1.6
Poster
2112.11207v1
https://aclanthology.org/2024.climatenlp-1.7.bib
@inproceedings{li-etal-2024-using-llms, title = "Using {LLM}s to Build a Database of Climate Extreme Impacts", author = {Li, Ni and Zahra, Shorouq and Brito, Mariana and Flynn, Clare and G{\"o}rnerup, Olof and Worou, Koffi and Kurfali, Murathan and Meng, Chanjuan and Thiery, Wim and Zscheischler, Jakob and Messori, Gabriele and Nivre, Joakim}, editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.7", pages = "93--110", abstract = "To better understand how extreme climate events impact society, we need to increase the availability of accurate and comprehensive information about these impacts. We propose a method for building large-scale databases of climate extreme impacts from online textual sources, using LLMs for information extraction in combination with more traditional NLP techniques to improve accuracy and consistency. We evaluate the method against a small benchmark database created by human experts and find that extraction accuracy varies for different types of information. We compare three different LLMs and find that, while the commercial GPT-4 model gives the best performance overall, the open-source models Mistral and Mixtral are competitive for some types of information.", }
To better understand how extreme climate events impact society, we need to increase the availability of accurate and comprehensive information about these impacts. We propose a method for building large-scale databases of climate extreme impacts from online textual sources, using LLMs for information extraction in combination with more traditional NLP techniques to improve accuracy and consistency. We evaluate the method against a small benchmark database created by human experts and find that extraction accuracy varies for different types of information. We compare three different LLMs and find that, while the commercial GPT-4 model gives the best performance overall, the open-source models Mistral and Mixtral are competitive for some types of information.
[ "Li, Ni", "Zahra, Shorouq", "Brito, Mariana", "Flynn, Clare", "G{\\\"o}rnerup, Olof", "Worou, Koffi", "Kurfali, Murathan", "Meng, Chanjuan", "Thiery, Wim", "Zscheischler, Jakob", "Messori, Gabriele", "Nivre, Joakim" ]
Using {LLM}s to Build a Database of Climate Extreme Impacts
climatenlp-1.7
Poster
1605.01156v1
https://aclanthology.org/2024.climatenlp-1.8.bib
@inproceedings{bird-etal-2024-envisioning, title = "Envisioning {NLP} for intercultural climate communication", author = "Bird, Steven and Aquino, Angelina and Gumbula, Ian", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.8", pages = "111--122", abstract = "Climate communication is often seen by the NLP community as an opportunity for machine translation, applied to ever smaller languages. However, over 90{\%} of the world{'}s linguistic diversity comes from languages with {`}primary orality{'}, mostly spoken in non-Western oral societies. A case in point is the Aboriginal communities of Northern Australia, where we have been conducting workshops on climate communication, revealing shortcomings in existing communication practices along with new opportunities for improving intercultural communication. We present a case study of climate communication in an oral society, including the voices of many local people, and draw several lessons for the research program of NLP in the climate space.", }
Climate communication is often seen by the NLP community as an opportunity for machine translation, applied to ever smaller languages. However, over 90{\%} of the world{'}s linguistic diversity comes from languages with {`}primary orality{'}, mostly spoken in non-Western oral societies. A case in point is the Aboriginal communities of Northern Australia, where we have been conducting workshops on climate communication, revealing shortcomings in existing communication practices along with new opportunities for improving intercultural communication. We present a case study of climate communication in an oral society, including the voices of many local people, and draw several lessons for the research program of NLP in the climate space.
[ "Bird, Steven", "Aquino, Angelina", "Gumbula, Ian" ]
Envisioning {NLP} for intercultural climate communication
climatenlp-1.8
Poster
2203.03615v1
https://aclanthology.org/2024.climatenlp-1.9.bib
@inproceedings{saha-etal-2024-enclaim, title = "{E}n{C}laim: A Style Augmented Transformer Architecture for Environmental Claim Detection", author = "Saha, Diya and Sinha, Manjira and Dasgupta, Tirthankar", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.9", pages = "123--132", abstract = "Across countries, a noteworthy paradigm shift towards a more sustainable and environmentally responsible economy is underway. However, this positive transition is accompanied by an upsurge in greenwashing, where companies make exaggerated claims about their environmental commitments. To address this challenge and protect consumers, initiatives have emerged to substantiate green claims. With the proliferation of environmental and scientific assertions, a critical need arises for automated methods to detect and validate these claims at scale. In this paper, we introduce EnClaim, a transformer network architecture augmented with stylistic features for automatically detecting claims from open web documents or social media posts. The proposed model considers various linguistic stylistic features in conjunction with language models to predict whether a given statement constitutes a claim. We have rigorously evaluated the model using multiple open datasets. Our initial findings indicate that incorporating stylistic vectors alongside the BERT-based language model enhances the overall effectiveness of environmental claim detection.", }
Across countries, a noteworthy paradigm shift towards a more sustainable and environmentally responsible economy is underway. However, this positive transition is accompanied by an upsurge in greenwashing, where companies make exaggerated claims about their environmental commitments. To address this challenge and protect consumers, initiatives have emerged to substantiate green claims. With the proliferation of environmental and scientific assertions, a critical need arises for automated methods to detect and validate these claims at scale. In this paper, we introduce EnClaim, a transformer network architecture augmented with stylistic features for automatically detecting claims from open web documents or social media posts. The proposed model considers various linguistic stylistic features in conjunction with language models to predict whether a given statement constitutes a claim. We have rigorously evaluated the model using multiple open datasets. Our initial findings indicate that incorporating stylistic vectors alongside the BERT-based language model enhances the overall effectiveness of environmental claim detection.
[ "Saha, Diya", "Sinha, Manjira", "Dasgupta, Tirthankar" ]
{E}n{C}laim: A Style Augmented Transformer Architecture for Environmental Claim Detection
climatenlp-1.9
Poster
2308.14995v1
https://aclanthology.org/2024.climatenlp-1.10.bib
@inproceedings{krahmer-2024-leaf, title = "{LEAF}: Predicting the Environmental Impact of Food Products based on their Name", author = "Krahmer, Bas", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.10", pages = "133--142", abstract = "Although food consumption represents a substantial global source of greenhouse gas emissions, assessing the environmental impact of off-the-shelf products remains challenging. Currently, this information is often unavailable, hindering informed consumer decisions when grocery shopping. The present work introduces a new set of models called LEAF, which stands for Linguistic Environmental Analysis of Food Products. LEAF models predict the life-cycle environmental impact of food products based on their name. It is shown that LEAF models can accurately predict the environmental impact based on just the product name in a multilingual setting, greatly outperforming zero-shot classification methods. Models of varying sizes and capabilities are released, along with the code and dataset to fully reproduce the study.", }
Although food consumption represents a substantial global source of greenhouse gas emissions, assessing the environmental impact of off-the-shelf products remains challenging. Currently, this information is often unavailable, hindering informed consumer decisions when grocery shopping. The present work introduces a new set of models called LEAF, which stands for Linguistic Environmental Analysis of Food Products. LEAF models predict the life-cycle environmental impact of food products based on their name. It is shown that LEAF models can accurately predict the environmental impact based on just the product name in a multilingual setting, greatly outperforming zero-shot classification methods. Models of varying sizes and capabilities are released, along with the code and dataset to fully reproduce the study.
[ "Krahmer, Bas" ]
{LEAF}: Predicting the Environmental Impact of Food Products based on their Name
climatenlp-1.10
Poster
1803.05986v1
https://aclanthology.org/2024.climatenlp-1.11.bib
@inproceedings{zhou-etal-2024-large, title = "Large Scale Narrative Messaging around Climate Change: A Cross-Cultural Comparison", author = "Zhou, Haiqi and Hobson, David and Ruths, Derek and Piper, Andrew", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.11", pages = "143--155", abstract = "In this study, we explore the use of Large Language Models (LLMs) such as GPT-4 to extract and analyze the latent narrative messaging in climate change-related news articles from North American and Chinese media. By defining {``}narrative messaging{''} as the intrinsic moral or lesson of a story, we apply our model to a dataset of approximately 15,000 news articles in English and Mandarin, categorized by climate-related topics and ideological groupings. Our findings reveal distinct differences in the narrative values emphasized by different cultural and ideological contexts, with North American sources often focusing on individualistic and crisis-driven themes, while Chinese sources emphasize developmental and cooperative narratives. This work demonstrates the potential of LLMs in understanding and influencing climate communication, offering new insights into the collective belief systems that shape public discourse on climate change across different cultures.", }
In this study, we explore the use of Large Language Models (LLMs) such as GPT-4 to extract and analyze the latent narrative messaging in climate change-related news articles from North American and Chinese media. By defining {``}narrative messaging{''} as the intrinsic moral or lesson of a story, we apply our model to a dataset of approximately 15,000 news articles in English and Mandarin, categorized by climate-related topics and ideological groupings. Our findings reveal distinct differences in the narrative values emphasized by different cultural and ideological contexts, with North American sources often focusing on individualistic and crisis-driven themes, while Chinese sources emphasize developmental and cooperative narratives. This work demonstrates the potential of LLMs in understanding and influencing climate communication, offering new insights into the collective belief systems that shape public discourse on climate change across different cultures.
[ "Zhou, Haiqi", "Hobson, David", "Ruths, Derek", "Piper, Andrew" ]
Large Scale Narrative Messaging around Climate Change: A Cross-Cultural Comparison
climatenlp-1.11
Poster
2105.05621v1
https://aclanthology.org/2024.climatenlp-1.12.bib
@inproceedings{gandhi-etal-2024-challenges, title = "Challenges in End-to-End Policy Extraction from Climate Action Plans", author = "Gandhi, Nupoor and Corringham, Tom and Strubell, Emma", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.12", pages = "156--167", abstract = "Gray policy literature such as climate action plans (CAPs) provide an information-rich resource with potential to inform analysis and decision-making. However, these corpora are currently underutilized due to the substantial manual effort and expertise required to sift through long and detailed documents. Automatically structuring relevant information using information extraction (IE) would be useful for assisting policy scientists in synthesizing vast gray policy corpora to identify relevant entities, concepts and themes. LLMs have demonstrated strong performance on IE tasks in the few-shot setting, but it is unclear whether these gains transfer to gray policy literature which differs significantly to traditional benchmark datasets in several aspects, such as format of information content, length of documents, and inconsistency of document structure. We perform a case study on end-to-end IE with California CAPs, inspecting the performance of state-of-the-art tools for: (1) extracting content from CAPs into structured markup segments; (2) few-shot IE with LLMs; and (3) the utility of extracted entities for downstream analyses. We identify challenges at several points of the end-to-end IE pipeline for CAPs, and we provide recommendations for open problems centered around representing rich non-textual elements, document structure, flexible annotation schemes, and global information. Tackling these challenges would make it possible to realize the potential of LLMs for IE with gray policy literature.", }
Gray policy literature such as climate action plans (CAPs) provide an information-rich resource with potential to inform analysis and decision-making. However, these corpora are currently underutilized due to the substantial manual effort and expertise required to sift through long and detailed documents. Automatically structuring relevant information using information extraction (IE) would be useful for assisting policy scientists in synthesizing vast gray policy corpora to identify relevant entities, concepts and themes. LLMs have demonstrated strong performance on IE tasks in the few-shot setting, but it is unclear whether these gains transfer to gray policy literature which differs significantly to traditional benchmark datasets in several aspects, such as format of information content, length of documents, and inconsistency of document structure. We perform a case study on end-to-end IE with California CAPs, inspecting the performance of state-of-the-art tools for: (1) extracting content from CAPs into structured markup segments; (2) few-shot IE with LLMs; and (3) the utility of extracted entities for downstream analyses. We identify challenges at several points of the end-to-end IE pipeline for CAPs, and we provide recommendations for open problems centered around representing rich non-textual elements, document structure, flexible annotation schemes, and global information. Tackling these challenges would make it possible to realize the potential of LLMs for IE with gray policy literature.
[ "Gandhi, Nupoor", "Corringham, Tom", "Strubell, Emma" ]
Challenges in End-to-End Policy Extraction from Climate Action Plans
climatenlp-1.12
Poster
2112.11207v1
https://aclanthology.org/2024.climatenlp-1.13.bib
@inproceedings{usmanova-usbeck-2024-structuring, title = "Structuring Sustainability Reports for Environmental Standards with {LLM}s guided by Ontology", author = "Usmanova, Aida and Usbeck, Ricardo", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.13", pages = "168--177", abstract = "Following the introduction of the European Sustainability Reporting Standard (ESRS), companies will have to adapt to a new policy and provide mandatory sustainability reports. However, implementing such reports entails challenges, such as the comprehension of a large amount of textual information from various sources. This task can be accelerated by employing Large Language Models (LLMs) and ontologies to effectively model the domain knowledge. In this study, we extended an existing ontology to model the ESRS Topical Standard for disclosure. The developed ontology would enable automated reasoning over the data and assist in constructing Knowledge Graphs (KGs). Moreover, the proposed ontology extension would also help to identify gaps in companies{'} sustainability reports with regard to the ESRS requirements. Additionally, we extracted knowledge from corporate sustainability reports via LLMs guided by the proposed ontology and developed their KG representation.", }
Following the introduction of the European Sustainability Reporting Standard (ESRS), companies will have to adapt to a new policy and provide mandatory sustainability reports. However, implementing such reports entails challenges, such as the comprehension of a large amount of textual information from various sources. This task can be accelerated by employing Large Language Models (LLMs) and ontologies to effectively model the domain knowledge. In this study, we extended an existing ontology to model the ESRS Topical Standard for disclosure. The developed ontology would enable automated reasoning over the data and assist in constructing Knowledge Graphs (KGs). Moreover, the proposed ontology extension would also help to identify gaps in companies{'} sustainability reports with regard to the ESRS requirements. Additionally, we extracted knowledge from corporate sustainability reports via LLMs guided by the proposed ontology and developed their KG representation.
[ "Usmanova, Aida", "Usbeck, Ricardo" ]
Structuring Sustainability Reports for Environmental Standards with {LLM}s guided by Ontology
climatenlp-1.13
Poster
2407.17657v1
https://aclanthology.org/2024.climatenlp-1.14.bib
@inproceedings{fore-etal-2024-unlearning, title = "Unlearning Climate Misinformation in Large Language Models", author = "Fore, Michael and Singh, Simranjit and Lee, Chaehong and Pandey, Amritanshu and Anastasopoulos, Antonios and Stamoulis, Dimitrios", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.14", pages = "178--192", abstract = "Misinformation regarding climate change is a key roadblock in addressing one of the most serious threats to humanity. This paper investigates factual accuracy in large language models (LLMs) regarding climate information. Using true/false labeled Q{\&}A data for fine-tuning and evaluating LLMs on climate-related claims, we compare open-source models, assessing their ability to generate truthful responses to climate change questions. We investigate the detectability of models intentionally poisoned with false climate information, finding that such poisoning may not affect the accuracy of a model{'}s responses in other domains. Furthermore, we compare the effectiveness of unlearning algorithms, fine-tuning, and Retrieval-Augmented Generation (RAG) for factually grounding LLMs on climate change topics. Our evaluation reveals that unlearning algorithms can be effective for nuanced conceptual claims, despite previous findings suggesting their inefficacy in privacy contexts. These insights aim to guide the development of more factually reliable LLMs and highlight the need for additional work to secure LLMs against misinformation attacks.", }
Misinformation regarding climate change is a key roadblock in addressing one of the most serious threats to humanity. This paper investigates factual accuracy in large language models (LLMs) regarding climate information. Using true/false labeled Q{\&}A data for fine-tuning and evaluating LLMs on climate-related claims, we compare open-source models, assessing their ability to generate truthful responses to climate change questions. We investigate the detectability of models intentionally poisoned with false climate information, finding that such poisoning may not affect the accuracy of a model{'}s responses in other domains. Furthermore, we compare the effectiveness of unlearning algorithms, fine-tuning, and Retrieval-Augmented Generation (RAG) for factually grounding LLMs on climate change topics. Our evaluation reveals that unlearning algorithms can be effective for nuanced conceptual claims, despite previous findings suggesting their inefficacy in privacy contexts. These insights aim to guide the development of more factually reliable LLMs and highlight the need for additional work to secure LLMs against misinformation attacks.
[ "Fore, Michael", "Singh, Simranjit", "Lee, Chaehong", "Pandey, Amritanshu", "Anastasopoulos, Antonios", "Stamoulis, Dimitrios" ]
Unlearning Climate Misinformation in Large Language Models
climatenlp-1.14
Poster
2405.19563v1
https://aclanthology.org/2024.climatenlp-1.15.bib
@inproceedings{mishra-etal-2024-statements, title = "Statements: Universal Information Extraction from Tables with Large Language Models for {ESG} {KPI}s", author = "Mishra, Lokesh and Dhibi, Sohayl and Kim, Yusik and Berrospi Ramis, Cesar and Gupta, Shubham and Dolfi, Michele and Staar, Peter", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.15", pages = "193--214", abstract = "Environment, Social, and Governance (ESG) KPIs assess an organization{'}s performance on issues such as climate change, greenhouse gas emissions, water consumption, waste management, human rights, diversity, and policies. ESG reports convey this valuable quantitative information through tables. Unfortunately, extracting this information is difficult due to high variability in the table structure as well as content. We propose Statements, a novel domain agnostic data structure for extracting quantitative facts and related information. We propose translating tables to statements as a new supervised deep-learning universal information extraction task. We introduce SemTabNet - a dataset of over 100K annotated tables. Investigating a family of T5-based Statement Extraction Models, our best model generates statements which are 82{\%} similar to the ground-truth (compared to baseline of 21{\%}). We demonstrate the advantages of statements by applying our model to over 2700 tables from ESG reports. The homogeneous nature of statements permits exploratory data analysis on expansive information found in large collections of ESG reports.", }
Environment, Social, and Governance (ESG) KPIs assess an organization{'}s performance on issues such as climate change, greenhouse gas emissions, water consumption, waste management, human rights, diversity, and policies. ESG reports convey this valuable quantitative information through tables. Unfortunately, extracting this information is difficult due to high variability in the table structure as well as content. We propose Statements, a novel domain agnostic data structure for extracting quantitative facts and related information. We propose translating tables to statements as a new supervised deep-learning universal information extraction task. We introduce SemTabNet - a dataset of over 100K annotated tables. Investigating a family of T5-based Statement Extraction Models, our best model generates statements which are 82{\%} similar to the ground-truth (compared to baseline of 21{\%}). We demonstrate the advantages of statements by applying our model to over 2700 tables from ESG reports. The homogeneous nature of statements permits exploratory data analysis on expansive information found in large collections of ESG reports.
[ "Mishra, Lokesh", "Dhibi, Sohayl", "Kim, Yusik", "Berrospi Ramis, Cesar", "Gupta, Shubham", "Dolfi, Michele", "Staar, Peter" ]
Statements: Universal Information Extraction from Tables with Large Language Models for {ESG} {KPI}s
climatenlp-1.15
Poster
2406.19102v1
https://aclanthology.org/2024.climatenlp-1.16.bib
@inproceedings{zhou-etal-2024-climateli, title = "{CLIMATELI}: Evaluating Entity Linking on Climate Change Data", author = "Zhou, Shijia and Peng, Siyao and Plank, Barbara", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.16", pages = "215--222", abstract = "Climate Change (CC) is a pressing topic of global importance, attracting increasing attention across research fields, from social sciences to Natural Language Processing (NLP). CC is also discussed in various settings and communication platforms, from academic publications to social media forums. Understanding who and what is mentioned in such data is a first critical step to gaining new insights into CC. We present CLIMATELI (CLIMATe Entity LInking), the first manually annotated CC dataset that links 3,087 entity spans to Wikipedia. Using CLIMATELI (CLIMATe Entity LInking), we evaluate existing entity linking (EL) systems on the CC topic across various genres and propose automated filtering methods for CC entities. We find that the performance of EL models notably lags behind humans at both token and entity levels. Testing within the scope of retaining or excluding non-nominal and/or non-CC entities particularly impacts the models{'} performances.", }
Climate Change (CC) is a pressing topic of global importance, attracting increasing attention across research fields, from social sciences to Natural Language Processing (NLP). CC is also discussed in various settings and communication platforms, from academic publications to social media forums. Understanding who and what is mentioned in such data is a first critical step to gaining new insights into CC. We present CLIMATELI (CLIMATe Entity LInking), the first manually annotated CC dataset that links 3,087 entity spans to Wikipedia. Using CLIMATELI (CLIMATe Entity LInking), we evaluate existing entity linking (EL) systems on the CC topic across various genres and propose automated filtering methods for CC entities. We find that the performance of EL models notably lags behind humans at both token and entity levels. Testing within the scope of retaining or excluding non-nominal and/or non-CC entities particularly impacts the models{'} performances.
[ "Zhou, Shijia", "Peng, Siyao", "Plank, Barbara" ]
{CLIMATELI}: Evaluating Entity Linking on Climate Change Data
climatenlp-1.16
Poster
2406.16732v2
https://aclanthology.org/2024.climatenlp-1.17.bib
@inproceedings{spokoyny-etal-2024-aligning, title = "Aligning Unstructured {P}aris Agreement Climate Plans with Sustainable Development Goals", author = "Spokoyny, Daniel and Cai, Janelle and Corringham, Tom and Berg-Kirkpatrick, Taylor", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.17", pages = "223--232", abstract = "Aligning unstructured climate policy documents according to a particular classification taxonomy with little to no labeled examples is challenging and requires manual effort of climate policy researchers. In this work we examine whether large language models (LLMs) can act as an effective substitute or assist in the annotation process. Utilizing a large set of text spans from Paris Agreement Nationally Determined Contributions (NDCs) linked to United Nations Sustainable Development Goals (SDGs) and targets contained in the Climate Watch dataset from the World Resources Institute in combination with our own annotated data, we validate our approaches and establish a benchmark for model performance evaluation on this task. With our evaluation benchmarking we quantify the effectiveness of using zero-shot or few-shot prompted LLMs to align these documents.", }
Aligning unstructured climate policy documents according to a particular classification taxonomy with little to no labeled examples is challenging and requires manual effort of climate policy researchers. In this work we examine whether large language models (LLMs) can act as an effective substitute or assist in the annotation process. Utilizing a large set of text spans from Paris Agreement Nationally Determined Contributions (NDCs) linked to United Nations Sustainable Development Goals (SDGs) and targets contained in the Climate Watch dataset from the World Resources Institute in combination with our own annotated data, we validate our approaches and establish a benchmark for model performance evaluation on this task. With our evaluation benchmarking we quantify the effectiveness of using zero-shot or few-shot prompted LLMs to align these documents.
[ "Spokoyny, Daniel", "Cai, Janelle", "Corringham, Tom", "Berg-Kirkpatrick, Taylor" ]
Aligning Unstructured {P}aris Agreement Climate Plans with Sustainable Development Goals
climatenlp-1.17
Poster
2105.05621v1
https://aclanthology.org/2024.climatenlp-1.18.bib
@inproceedings{zhang-etal-2024-granular, title = "Granular Analysis of Social Media Users{'} Truthfulness Stances Toward Climate Change Factual Claims", author = "Zhang, Haiqi and Zhu, Zhengyuan and Zhang, Zeyu and Devasier, Jacob and Li, Chengkai", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.18", pages = "233--240", abstract = "Climate change poses an urgent global problem that requires efficient data analysis mechanisms to provide insights into climate-related discussions on social media platforms. This paper presents a framework aimed at understanding social media users{'} perceptions of various climate change topics and uncovering the insights behind these perceptions. Our framework employs a large language model to develop a taxonomy of factual claims related to climate change and build a classification model that detects the truthfulness stance of tweets toward the factual claims. The findings reveal two key conclusions: (1) The public tends to believe the claims are true, regardless of the actual claim veracity; (2) The public shows a lack of discernment between facts and misinformation across different topics, particularly in areas related to politics, economy, and environment.", }
Climate change poses an urgent global problem that requires efficient data analysis mechanisms to provide insights into climate-related discussions on social media platforms. This paper presents a framework aimed at understanding social media users{'} perceptions of various climate change topics and uncovering the insights behind these perceptions. Our framework employs a large language model to develop a taxonomy of factual claims related to climate change and build a classification model that detects the truthfulness stance of tweets toward the factual claims. The findings reveal two key conclusions: (1) The public tends to believe the claims are true, regardless of the actual claim veracity; (2) The public shows a lack of discernment between facts and misinformation across different topics, particularly in areas related to politics, economy, and environment.
[ "Zhang, Haiqi", "Zhu, Zhengyuan", "Zhang, Zeyu", "Devasier, Jacob", "Li, Chengkai" ]
Granular Analysis of Social Media Users{'} Truthfulness Stances Toward Climate Change Factual Claims
climatenlp-1.18
Poster
2211.03533v1
https://aclanthology.org/2024.climatenlp-1.19.bib
@inproceedings{garigliotti-2024-sdg, title = "{SDG} target detection in environmental reports using Retrieval-augmented Generation with {LLM}s", author = "Garigliotti, Dario", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.19", pages = "241--250", abstract = "With the consolidation of Large Language Models (LLM) as a dominant component in approaches for multiple linguistic tasks, the interest in these technologies has greatly increased within a variety of areas and domains. A particular scenario of information needs where to exploit these approaches is climate-aware NLP. Paradigmatically, the vast manual labour of inspecting long, heterogeneous documents to find environment-relevant expressions and claims suits well within a recently established Retrieval-augmented Generation (RAG) framework. In this paper, we tackle two dual problems within environment analysis dealing with the common goal of detecting a Sustainable Developmental Goal (SDG) target being addressed in a textual passage of an environmental assessment report. We develop relevant test collections, and propose and evaluate a series of methods within the general RAG pipeline, in order to assess the current capabilities of LLMs for the tasks of SDG target evidence identification and SDG target detection.", }
With the consolidation of Large Language Models (LLM) as a dominant component in approaches for multiple linguistic tasks, the interest in these technologies has greatly increased within a variety of areas and domains. A particular scenario of information needs where to exploit these approaches is climate-aware NLP. Paradigmatically, the vast manual labour of inspecting long, heterogeneous documents to find environment-relevant expressions and claims suits well within a recently established Retrieval-augmented Generation (RAG) framework. In this paper, we tackle two dual problems within environment analysis dealing with the common goal of detecting a Sustainable Developmental Goal (SDG) target being addressed in a textual passage of an environmental assessment report. We develop relevant test collections, and propose and evaluate a series of methods within the general RAG pipeline, in order to assess the current capabilities of LLMs for the tasks of SDG target evidence identification and SDG target detection.
[ "Garigliotti, Dario" ]
{SDG} target detection in environmental reports using Retrieval-augmented Generation with {LLM}s
climatenlp-1.19
Poster
2307.15425v1
https://aclanthology.org/2024.climatenlp-1.20.bib
@inproceedings{joe-etal-2024-assessing, title = "Assessing the Effectiveness of {GPT}-4o in Climate Change Evidence Synthesis and Systematic Assessments: Preliminary Insights", author = "Joe, Elphin and Koneru, Sai and Kirchhoff, Christine", editor = "Stammbach, Dominik and Ni, Jingwei and Schimanski, Tobias and Dutia, Kalyan and Singh, Alok and Bingler, Julia and Christiaen, Christophe and Kushwaha, Neetu and Muccione, Veruska and A. Vaghefi, Saeid and Leippold, Markus", booktitle = "Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.climatenlp-1.20", pages = "251--257", abstract = "In this research short, we examine the potential of using GPT-4o, a state-of-the-art large language model (LLM) to undertake evidence synthesis and systematic assessment tasks. Traditional workflows for such tasks involve large groups of domain experts who manually review and synthesize vast amounts of literature. The exponential growth of scientific literature and recent advances in LLMs provide an opportunity to complement these traditional workflows with new age tools. We assess the efficacy of GPT-4o to do these tasks on a sample from the dataset created by the Global Adaptation Mapping Initiative (GAMI) where we check the accuracy of climate change adaptation related feature extraction from the scientific literature across three levels of expertise. Our results indicate that while GPT-4o can achieve high accuracy in low-expertise tasks like geographic location identification, their performance in intermediate and high-expertise tasks, such as stakeholder identification and assessment of depth of the adaptation response, is less reliable. The findings motivate the need for designing assessment workflows that utilize the strengths of models like GPT-4o while also providing refinements to improve their performance on these tasks.", }
In this research short, we examine the potential of using GPT-4o, a state-of-the-art large language model (LLM) to undertake evidence synthesis and systematic assessment tasks. Traditional workflows for such tasks involve large groups of domain experts who manually review and synthesize vast amounts of literature. The exponential growth of scientific literature and recent advances in LLMs provide an opportunity to complement these traditional workflows with new age tools. We assess the efficacy of GPT-4o to do these tasks on a sample from the dataset created by the Global Adaptation Mapping Initiative (GAMI) where we check the accuracy of climate change adaptation related feature extraction from the scientific literature across three levels of expertise. Our results indicate that while GPT-4o can achieve high accuracy in low-expertise tasks like geographic location identification, their performance in intermediate and high-expertise tasks, such as stakeholder identification and assessment of depth of the adaptation response, is less reliable. The findings motivate the need for designing assessment workflows that utilize the strengths of models like GPT-4o while also providing refinements to improve their performance on these tasks.
[ "Joe, Elphin", "Koneru, Sai", "Kirchhoff, Christine" ]
Assessing the Effectiveness of {GPT}-4o in Climate Change Evidence Synthesis and Systematic Assessments: Preliminary Insights
climatenlp-1.20
Poster
2407.12826v1
https://aclanthology.org/2024.cmcl-1.1.bib
@inproceedings{shen-etal-2024-bambino, title = "{BAMBINO}-{LM}: (Bilingual-)Human-Inspired Continual Pre-training of {B}aby{LM}", author = "Shen, Zhewen and Joshi, Aditya and Chen, Ruey-Cheng", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.1", pages = "1--7", abstract = "Children from bilingual backgrounds benefit from interactions with parents and teachers to re-acquire their heritage language. In this paper, we investigate how this insight from behavioral study can be incorporated into the learning of small-scale language models. We introduce BAMBINO-LM, a continual pre-training strategy for BabyLM that uses a novel combination of alternation and PPO-based perplexity reward induced from a parent Italian model. Upon evaluation on zero-shot classification tasks for English and Italian, BAMBINO-LM improves the Italian language capability of a BabyLM baseline. Our ablation analysis demonstrates that employing both the alternation strategy and PPO-based modeling is key to this effectiveness gain. We also show that, as a side effect, the proposed method leads to a similar degradation in L1 effectiveness as human children would have had in an equivalent learning scenario. Through its modeling and findings, BAMBINO-LM makes a focused contribution to the pre-training of small-scale language models by first developing a human-inspired strategy for pre-training and then showing that it results in behaviours similar to that of humans.", }
Children from bilingual backgrounds benefit from interactions with parents and teachers to re-acquire their heritage language. In this paper, we investigate how this insight from behavioral study can be incorporated into the learning of small-scale language models. We introduce BAMBINO-LM, a continual pre-training strategy for BabyLM that uses a novel combination of alternation and PPO-based perplexity reward induced from a parent Italian model. Upon evaluation on zero-shot classification tasks for English and Italian, BAMBINO-LM improves the Italian language capability of a BabyLM baseline. Our ablation analysis demonstrates that employing both the alternation strategy and PPO-based modeling is key to this effectiveness gain. We also show that, as a side effect, the proposed method leads to a similar degradation in L1 effectiveness as human children would have had in an equivalent learning scenario. Through its modeling and findings, BAMBINO-LM makes a focused contribution to the pre-training of small-scale language models by first developing a human-inspired strategy for pre-training and then showing that it results in behaviours similar to that of humans.
[ "Shen, Zhewen", "Joshi, Aditya", "Chen, Ruey-Cheng" ]
{BAMBINO}-{LM}: (Bilingual-)Human-Inspired Continual Pre-training of {B}aby{LM}
cmcl-1.1
Poster
2406.11418v2
https://aclanthology.org/2024.cmcl-1.2.bib
@inproceedings{panagopoulou-etal-2024-evaluating, title = "Evaluating Vision-Language Models on Bistable Images", author = "Panagopoulou, Artemis and Melkin, Coby and Callison-Burch, Chris", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.2", pages = "8--29", abstract = "Bistable images, also known as ambiguous or reversible images, present visual stimuli that can be seen in two distinct interpretations, though not simultaneously, by the observer. In this study, we conduct the most extensive examination of vision-language models using bistable images to date. We manually gathered a dataset of 29 bistable images, along with their associated labels, and subjected them to 121 different manipulations in brightness, resolution, tint, and rotation. We evaluated twelve different models in both classification and generative tasks across six model architectures. Our findings reveal that, with the exception of models from the Idefics family and LLaVA1.5-13b, there is a pronounced preference for one interpretation over another among the models, and minimal variance under image manipulations, with few exceptions on image rotations. Additionally, we compared the models{'} preferences with humans, noting that the models do not exhibit the same continuity biases as humans and often diverge from human initial interpretations. We also investigated the influence of variations in prompts and the use of synonymous labels, discovering that these factors significantly affect model interpretations more than image manipulations showing a higher influence of the language priors on bistable image interpretations compared to image-text training data. All code and data is open sourced.", }
Bistable images, also known as ambiguous or reversible images, present visual stimuli that can be seen in two distinct interpretations, though not simultaneously, by the observer. In this study, we conduct the most extensive examination of vision-language models using bistable images to date. We manually gathered a dataset of 29 bistable images, along with their associated labels, and subjected them to 121 different manipulations in brightness, resolution, tint, and rotation. We evaluated twelve different models in both classification and generative tasks across six model architectures. Our findings reveal that, with the exception of models from the Idefics family and LLaVA1.5-13b, there is a pronounced preference for one interpretation over another among the models, and minimal variance under image manipulations, with few exceptions on image rotations. Additionally, we compared the models{'} preferences with humans, noting that the models do not exhibit the same continuity biases as humans and often diverge from human initial interpretations. We also investigated the influence of variations in prompts and the use of synonymous labels, discovering that these factors significantly affect model interpretations more than image manipulations showing a higher influence of the language priors on bistable image interpretations compared to image-text training data. All code and data is open sourced.
[ "Panagopoulou, Artemis", "Melkin, Coby", "Callison-Burch, Chris" ]
Evaluating Vision-Language Models on Bistable Images
cmcl-1.2
Poster
2402.17510v2
https://aclanthology.org/2024.cmcl-1.3.bib
@inproceedings{de-varda-marelli-2024-locally, title = "Locally Biased Transformers Better Align with Human Reading Times", author = "De Varda, Andrea and Marelli, Marco", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.3", pages = "30--36", abstract = "Recent psycholinguistic theories emphasize the interdependence between linguistic expectations and memory limitations in human language processing. We modify the self-attention mechanism of a transformer model to simulate a lossy context representation, biasing the model{'}s predictions to give additional weight to the local linguistic context. We show that surprisal estimates from our locally-biased model generally provide a better fit to human psychometric data, underscoring the sensitivity of the human parser to local linguistic information.", }
Recent psycholinguistic theories emphasize the interdependence between linguistic expectations and memory limitations in human language processing. We modify the self-attention mechanism of a transformer model to simulate a lossy context representation, biasing the model{'}s predictions to give additional weight to the local linguistic context. We show that surprisal estimates from our locally-biased model generally provide a better fit to human psychometric data, underscoring the sensitivity of the human parser to local linguistic information.
[ "De Varda, Andrea", "Marelli, Marco" ]
Locally Biased Transformers Better Align with Human Reading Times
cmcl-1.3
Poster
1303.3997v2
https://aclanthology.org/2024.cmcl-1.4.bib
@inproceedings{cai-etal-2024-large, title = "Do large language models resemble humans in language use?", author = "Cai, Zhenguang and Duan, Xufeng and Haslett, David and Wang, Shuqi and Pickering, Martin", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.4", pages = "37--56", abstract = "It is unclear whether large language models (LLMs) develop humanlike characteristics in language use. We subjected ChatGPT and Vicuna to 12 pre-registered psycholinguistic experiments ranging from sounds to dialogue. ChatGPT and Vicuna replicated the human pattern of language use in 10 and 7 out of the 12 experiments, respectively. The models associated unfamiliar words with different meanings depending on their forms, continued to access recently encountered meanings of ambiguous words, reused recent sentence structures, attributed causality as a function of verb semantics, and accessed different meanings and retrieved different words depending on an interlocutor{'}s identity. In addition, ChatGPT, but not Vicuna, nonliterally interpreted implausible sentences that were likely to have been corrupted by noise, drew reasonable inferences, and overlooked semantic fallacies in a sentence. Finally, unlike humans, neither model preferred using shorter words to convey less informative content, nor did they use context to resolve syntactic ambiguities. We discuss how these convergences and divergences may result from the transformer architecture. Overall, these experiments demonstrate that LLMs such as ChatGPT (and Vicuna to a lesser extent) are humanlike in many aspects of human language processing.", }
It is unclear whether large language models (LLMs) develop humanlike characteristics in language use. We subjected ChatGPT and Vicuna to 12 pre-registered psycholinguistic experiments ranging from sounds to dialogue. ChatGPT and Vicuna replicated the human pattern of language use in 10 and 7 out of the 12 experiments, respectively. The models associated unfamiliar words with different meanings depending on their forms, continued to access recently encountered meanings of ambiguous words, reused recent sentence structures, attributed causality as a function of verb semantics, and accessed different meanings and retrieved different words depending on an interlocutor{'}s identity. In addition, ChatGPT, but not Vicuna, nonliterally interpreted implausible sentences that were likely to have been corrupted by noise, drew reasonable inferences, and overlooked semantic fallacies in a sentence. Finally, unlike humans, neither model preferred using shorter words to convey less informative content, nor did they use context to resolve syntactic ambiguities. We discuss how these convergences and divergences may result from the transformer architecture. Overall, these experiments demonstrate that LLMs such as ChatGPT (and Vicuna to a lesser extent) are humanlike in many aspects of human language processing.
[ "Cai, Zhenguang", "Duan, Xufeng", "Haslett, David", "Wang, Shuqi", "Pickering, Martin" ]
Do large language models resemble humans in language use?
cmcl-1.4
Poster
2202.08904v5
https://aclanthology.org/2024.cmcl-1.5.bib
@inproceedings{kouwenhoven-etal-2024-curious, title = "The Curious Case of Representational Alignment: Unravelling Visio-Linguistic Tasks in Emergent Communication", author = "Kouwenhoven, Tom and Peeperkorn, Max and Van Dijk, Bram and Verhoef, Tessa", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.5", pages = "57--71", abstract = "Natural language has the universal properties of being compositional and grounded in reality. The emergence of linguistic properties is often investigated through simulations of emergent communication in referential games. However, these experiments have yielded mixed results compared to similar experiments addressing linguistic properties of human language. Here we address representational alignment as a potential contributing factor to these results. Specifically, we assess the representational alignment between agent image representations and between agent representations and input images. Doing so, we confirm that the emergent language does not appear to encode human-like conceptual visual features, since agent image representations drift away from inputs whilst inter-agent alignment increases. We moreover identify a strong relationship between inter-agent alignment and topographic similarity, a common metric for compositionality, and address its consequences. To address these issues, we introduce an alignment penalty that prevents representational drift but interestingly does not improve performance on a compositional discrimination task. Together, our findings emphasise the key role representational alignment plays in simulations of language emergence.", }
Natural language has the universal properties of being compositional and grounded in reality. The emergence of linguistic properties is often investigated through simulations of emergent communication in referential games. However, these experiments have yielded mixed results compared to similar experiments addressing linguistic properties of human language. Here we address representational alignment as a potential contributing factor to these results. Specifically, we assess the representational alignment between agent image representations and between agent representations and input images. Doing so, we confirm that the emergent language does not appear to encode human-like conceptual visual features, since agent image representations drift away from inputs whilst inter-agent alignment increases. We moreover identify a strong relationship between inter-agent alignment and topographic similarity, a common metric for compositionality, and address its consequences. To address these issues, we introduce an alignment penalty that prevents representational drift but interestingly does not improve performance on a compositional discrimination task. Together, our findings emphasise the key role representational alignment plays in simulations of language emergence.
[ "Kouwenhoven, Tom", "Peeperkorn, Max", "Van Dijk, Bram", "Verhoef, Tessa" ]
The Curious Case of Representational Alignment: Unravelling Visio-Linguistic Tasks in Emergent Communication
cmcl-1.5
Poster
2407.17960v1
https://aclanthology.org/2024.cmcl-1.6.bib
@inproceedings{wolfman-etal-2024-hierarchical, title = "Hierarchical syntactic structure in human-like language models", author = "Wolfman, Michael and Dunagan, Donald and Brennan, Jonathan and Hale, John", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.6", pages = "72--80", abstract = "Language models (LMs) are a meeting point for cognitive modeling and computational linguistics. How should they be designed to serve as adequate cognitive models? To address this question, this study contrasts two Transformer-based LMs that share the same architecture. Only one of them analyzes sentences in terms of explicit hierarchical structure. Evaluating the two LMs against fMRI time series via the surprisal complexity metric, the results implicate the superior temporal gyrus. These findings underline the need for hierarchical sentence structures in word-by-word models of human language comprehension.", }
Language models (LMs) are a meeting point for cognitive modeling and computational linguistics. How should they be designed to serve as adequate cognitive models? To address this question, this study contrasts two Transformer-based LMs that share the same architecture. Only one of them analyzes sentences in terms of explicit hierarchical structure. Evaluating the two LMs against fMRI time series via the surprisal complexity metric, the results implicate the superior temporal gyrus. These findings underline the need for hierarchical sentence structures in word-by-word models of human language comprehension.
[ "Wolfman, Michael", "Dunagan, Donald", "Brennan, Jonathan", "Hale, John" ]
Hierarchical syntactic structure in human-like language models
cmcl-1.6
Poster
2203.09397v1
https://aclanthology.org/2024.cmcl-1.7.bib
@inproceedings{miyakawa-etal-2024-llms, title = "Do {LLM}s Agree with Humans on Emotional Associations to Nonsense Words?", author = "Miyakawa, Yui and Matsuhira, Chihaya and Kato, Hirotaka and Hirayama, Takatsugu and Komamizu, Takahiro and Ide, Ichiro", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.7", pages = "81--85", abstract = "Understanding human perception of nonsense words is helpful to devise product and character names that match their characteristics. Previous studies have suggested the usefulness of Large Language Models (LLMs) for estimating such human perception, but they did not focus on its emotional aspects. Hence, this study aims to elucidate the relationship of emotions evoked by nonsense words between humans and LLMs. Using a representative LLM, GPT-4, we reproduce the procedure of an existing study to analyze evoked emotions of humans for nonsense words. A positive correlation of 0.40 was found between the emotion intensity scores reproduced by GPT-4 and those manually annotated by humans. Although the correlation is not very high, this demonstrates that GPT-4 may agree with humans on emotional associations to nonsense words. Considering that the previous study reported that the correlation among human annotators was about 0.68 on average and that between a regression model trained on the annotations for real words and humans was 0.17, GPT-4{'}s agreement with humans is notably strong.", }
Understanding human perception of nonsense words is helpful to devise product and character names that match their characteristics. Previous studies have suggested the usefulness of Large Language Models (LLMs) for estimating such human perception, but they did not focus on its emotional aspects. Hence, this study aims to elucidate the relationship of emotions evoked by nonsense words between humans and LLMs. Using a representative LLM, GPT-4, we reproduce the procedure of an existing study to analyze evoked emotions of humans for nonsense words. A positive correlation of 0.40 was found between the emotion intensity scores reproduced by GPT-4 and those manually annotated by humans. Although the correlation is not very high, this demonstrates that GPT-4 may agree with humans on emotional associations to nonsense words. Considering that the previous study reported that the correlation among human annotators was about 0.68 on average and that between a regression model trained on the annotations for real words and humans was 0.17, GPT-4{'}s agreement with humans is notably strong.
[ "Miyakawa, Yui", "Matsuhira, Chihaya", "Kato, Hirotaka", "Hirayama, Takatsugu", "Komamizu, Takahiro", "Ide, Ichiro" ]
Do {LLM}s Agree with Humans on Emotional Associations to Nonsense Words?
cmcl-1.7
Poster
2202.12132v4
https://aclanthology.org/2024.cmcl-1.8.bib
@inproceedings{kurch-etal-2024-large, title = "Large language models fail to derive atypicality inferences in a human-like manner", author = "Kurch, Charlotte and Ryzhova, Margarita and Demberg, Vera", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.8", pages = "86--100", abstract = "Recent studies have claimed that large language models (LLMs) are capable of drawing pragmatic inferences (Qiu et al., 2023; Hu et al., 2022; Barattieri di San Pietro et al., 2023). The present paper sets out to test LLM{'}s abilities on atypicality inferences, a type of pragmatic inference that is triggered through informational redundancy. We test several state-of-the-art LLMs in a zero-shot setting and find that LLMs systematically fail to derive atypicality inferences. Our robustness analysis indicates that when inferences are seemingly derived in a few-shot setting, these results can be attributed to shallow pattern matching and not pragmatic inferencing. We also analyse the performance of the LLMs at the different derivation steps required for drawing atypicality inferences {--} our results show that models have access to script knowledge and can use it to identify redundancies and accommodate the atypicality inference. The failure instead seems to stem from not reacting to the subtle maxim of quantity violations introduced by the informationally redundant utterances.", }
Recent studies have claimed that large language models (LLMs) are capable of drawing pragmatic inferences (Qiu et al., 2023; Hu et al., 2022; Barattieri di San Pietro et al., 2023). The present paper sets out to test LLM{'}s abilities on atypicality inferences, a type of pragmatic inference that is triggered through informational redundancy. We test several state-of-the-art LLMs in a zero-shot setting and find that LLMs systematically fail to derive atypicality inferences. Our robustness analysis indicates that when inferences are seemingly derived in a few-shot setting, these results can be attributed to shallow pattern matching and not pragmatic inferencing. We also analyse the performance of the LLMs at the different derivation steps required for drawing atypicality inferences {--} our results show that models have access to script knowledge and can use it to identify redundancies and accommodate the atypicality inference. The failure instead seems to stem from not reacting to the subtle maxim of quantity violations introduced by the informationally redundant utterances.
[ "Kurch, Charlotte", "Ryzhova, Margarita", "Demberg, Vera" ]
Large language models fail to derive atypicality inferences in a human-like manner
cmcl-1.8
Poster
2311.10149v1
https://aclanthology.org/2024.cmcl-1.9.bib
@inproceedings{delcaro-etal-2024-predict, title = "Predict but Also Integrate: an Analysis of Sentence Processing Models for {E}nglish and {H}indi", author = "Delcaro, Nina and Onnis, Luca and Alhama, Raquel", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.9", pages = "101--108", abstract = "Fluent speakers make implicit predictions about forthcoming linguistic items while processing sentences, possibly to increase efficiency in real-time comprehension. However, the extent to which prediction is the primary mode of processing human language is widely debated. The human language processor may also gain efficiency by integrating new linguistic information with prior knowledge and the preceding context, without actively predicting. At present, the role of probabilistic integration, as well as its computational foundation, remains relatively understudied. Here, we explored whether a Delayed Recurrent Neural Network (d-RNN, Turek et al., 2020), as an implementation of both prediction and integration, can explain patterns of human language processing over and above the contribution of a purely predictive RNN model. We found that incorporating integration contributes to explaining variability in eye-tracking data for English and Hindi.", }
Fluent speakers make implicit predictions about forthcoming linguistic items while processing sentences, possibly to increase efficiency in real-time comprehension. However, the extent to which prediction is the primary mode of processing human language is widely debated. The human language processor may also gain efficiency by integrating new linguistic information with prior knowledge and the preceding context, without actively predicting. At present, the role of probabilistic integration, as well as its computational foundation, remains relatively understudied. Here, we explored whether a Delayed Recurrent Neural Network (d-RNN, Turek et al., 2020), as an implementation of both prediction and integration, can explain patterns of human language processing over and above the contribution of a purely predictive RNN model. We found that incorporating integration contributes to explaining variability in eye-tracking data for English and Hindi.
[ "Delcaro, Nina", "Onnis, Luca", "Alhama, Raquel" ]
Predict but Also Integrate: an Analysis of Sentence Processing Models for {E}nglish and {H}indi
cmcl-1.9
Poster
2001.10340v1
https://aclanthology.org/2024.cmcl-1.10.bib
@inproceedings{kozlova-etal-2024-transformer, title = "Transformer Attention vs Human Attention in Anaphora Resolution", author = "Kozlova, Anastasia and Akhmetgareeva, Albina and Khanova, Aigul and Kudriavtsev, Semen and Fenogenova, Alena", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.10", pages = "109--122", abstract = "Motivated by human cognitive processes, the attention mechanism within the transformer architecture has been developed to assist neural networks in allocating focus to specific aspects within input data. Despite claims regarding the interpretability achieved by attention mechanisms, the extent of correlation and similarity between machine and human attention remains a subject requiring further investigation. In this paper, we conduct a quantitative analysis of human attention compared to neural attention mechanisms in the context of the anaphora resolution task. We collect an eye-tracking dataset based on the Winograd schema challenge task for the Russian language. Leveraging this dataset, we conduct an extensive analysis of the correlations between human and machine attention maps across various transformer architectures and network layers of pre-trained and fine-tuned models. Our aim is to investigate whether insights from human attention mechanisms can be used to enhance the performance of neural networks in tasks such as anaphora resolution. The results reveal distinctions in anaphora resolution processing, offering promising prospects for improving the performance of neural networks and understanding the cognitive nuances of human perception.", }
Motivated by human cognitive processes, the attention mechanism within the transformer architecture has been developed to assist neural networks in allocating focus to specific aspects within input data. Despite claims regarding the interpretability achieved by attention mechanisms, the extent of correlation and similarity between machine and human attention remains a subject requiring further investigation. In this paper, we conduct a quantitative analysis of human attention compared to neural attention mechanisms in the context of the anaphora resolution task. We collect an eye-tracking dataset based on the Winograd schema challenge task for the Russian language. Leveraging this dataset, we conduct an extensive analysis of the correlations between human and machine attention maps across various transformer architectures and network layers of pre-trained and fine-tuned models. Our aim is to investigate whether insights from human attention mechanisms can be used to enhance the performance of neural networks in tasks such as anaphora resolution. The results reveal distinctions in anaphora resolution processing, offering promising prospects for improving the performance of neural networks and understanding the cognitive nuances of human perception.
[ "Kozlova, Anastasia", "Akhmetgareeva, Albina", "Khanova, Aigul", "Kudriavtsev, Semen", "Fenogenova, Alena" ]
Transformer Attention vs Human Attention in Anaphora Resolution
cmcl-1.10
Poster
2104.05320v1
https://aclanthology.org/2024.cmcl-1.11.bib
@inproceedings{ma-2024-evaluating, title = "Evaluating Lexical Aspect with Large Language Models", author = "Ma, Bolei", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.11", pages = "123--131", abstract = "In this study, we explore the proficiency of large language models (LLMs) in understanding two key lexical aspects: duration (durative/stative) and telicity (telic/atelic). Through experiments on datasets featuring sentences, verbs, and verb positions, we prompt the LLMs to identify aspectual features of verbs in sentences. Our findings reveal that certain LLMs, particularly those closed-source ones, are able to capture information on duration and telicity, albeit with some performance variations and weaker results compared to the baseline. By employing prompts at three levels (sentence-only, sentence with verb, and sentence with verb and its position), we demonstrate that integrating verb information generally enhances performance in aspectual feature recognition, though it introduces instability. We call for future research to look deeper into methods aimed at optimizing LLMs for aspectual feature comprehension.", }
In this study, we explore the proficiency of large language models (LLMs) in understanding two key lexical aspects: duration (durative/stative) and telicity (telic/atelic). Through experiments on datasets featuring sentences, verbs, and verb positions, we prompt the LLMs to identify aspectual features of verbs in sentences. Our findings reveal that certain LLMs, particularly those closed-source ones, are able to capture information on duration and telicity, albeit with some performance variations and weaker results compared to the baseline. By employing prompts at three levels (sentence-only, sentence with verb, and sentence with verb and its position), we demonstrate that integrating verb information generally enhances performance in aspectual feature recognition, though it introduces instability. We call for future research to look deeper into methods aimed at optimizing LLMs for aspectual feature comprehension.
[ "Ma, Bolei" ]
Evaluating Lexical Aspect with Large Language Models
cmcl-1.11
Poster
9808001v1
https://aclanthology.org/2024.cmcl-1.12.bib
@inproceedings{herve-etal-2024-daily, title = "Daily auditory environments in {F}rench-speaking infants: A longitudinal dataset", author = "Herv{\'e}, Estelle and Fran{\c{c}}ois, Cl{\'e}ment and Prevot, Laurent", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.12", pages = "132--151", abstract = "Babies{'} daily auditory environment plays a crucial role in language development. Most previous research estimating the quantitative and qualitative aspects of early speech inputs has predominantly focused on English- and Spanish-speaking families. In addition, validation studies for daylong recordings{'} analysis tools are scarce on French data sets. In this paper, we present a French corpus of daylong audio recordings longitudinally collected with the LENA (Language ENvironment Analysis) system from infants aged 3 to 24 months. We conduct a thorough exploration of this data set, which serves as a quality check for both the data and the analysis tools. We evaluate the reliability of LENA metrics by systematically comparing them with those obtained from the ChildProject set of tools and by checking the known dynamics of the metrics with age. These metrics are also used to replicate, on our data set, findings from (Warlaumont et al, 2014) about the increase of infants{'} speech vocalizations and temporal contingencies between infants and caregivers with age.", }
Babies{'} daily auditory environment plays a crucial role in language development. Most previous research estimating the quantitative and qualitative aspects of early speech inputs has predominantly focused on English- and Spanish-speaking families. In addition, validation studies for daylong recordings{'} analysis tools are scarce on French data sets. In this paper, we present a French corpus of daylong audio recordings longitudinally collected with the LENA (Language ENvironment Analysis) system from infants aged 3 to 24 months. We conduct a thorough exploration of this data set, which serves as a quality check for both the data and the analysis tools. We evaluate the reliability of LENA metrics by systematically comparing them with those obtained from the ChildProject set of tools and by checking the known dynamics of the metrics with age. These metrics are also used to replicate, on our data set, findings from (Warlaumont et al, 2014) about the increase of infants{'} speech vocalizations and temporal contingencies between infants and caregivers with age.
[ "Herv{\\'e}, Estelle", "Fran{\\c{c}}ois, Cl{\\'e}ment", "Prevot, Laurent" ]
Daily auditory environments in {F}rench-speaking infants: A longitudinal dataset
cmcl-1.12
Poster
2103.02703v1
https://aclanthology.org/2024.cmcl-1.13.bib
@inproceedings{serras-etal-2024-analysing, title = "Analysing and Validating Language Complexity Metrics Across {S}outh {A}merican Indigenous Languages", author = "Serras, Felipe and Carpi, Miguel and Branco, Matheus and Finger, Marcelo", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.13", pages = "152--165", abstract = "Language complexity is an emerging concept critical for NLP and for quantitative and cognitive approaches to linguistics. In this work, we evaluate the behavior of a set of compression-based language complexity metrics when applied to a large set of native South American languages. Our goal is to validate the desirable properties of such metrics against a more diverse set of languages, guaranteeing the universality of the techniques developed on the basis of this type of theoretical artifact. Our analysis confirmed with statistical confidence most propositions about the metrics studied, affirming their robustness, despite showing less stability than when the same metrics were applied to Indo-European languages. We also observed that the trade-off between morphological and syntactic complexities is strongly related to language phylogeny.", }
Language complexity is an emerging concept critical for NLP and for quantitative and cognitive approaches to linguistics. In this work, we evaluate the behavior of a set of compression-based language complexity metrics when applied to a large set of native South American languages. Our goal is to validate the desirable properties of such metrics against a more diverse set of languages, guaranteeing the universality of the techniques developed on the basis of this type of theoretical artifact. Our analysis confirmed with statistical confidence most propositions about the metrics studied, affirming their robustness, despite showing less stability than when the same metrics were applied to Indo-European languages. We also observed that the trade-off between morphological and syntactic complexities is strongly related to language phylogeny.
[ "Serras, Felipe", "Carpi, Miguel", "Branco, Matheus", "Finger, Marcelo" ]
Analysing and Validating Language Complexity Metrics Across {S}outh {A}merican Indigenous Languages
cmcl-1.13
Poster
2205.06993v1
https://aclanthology.org/2024.cmcl-1.14.bib
@inproceedings{wang-etal-2024-large-language-models, title = "How can large language models become more human?", author = "Wang, Daphne and Sadrzadeh, Mehrnoosh and Stanojevi{\'c}, Milo{\v{s}} and Chow, Wing-Yee and Breheny, Richard", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.14", pages = "166--176", abstract = "Psycholinguistic experiments reveal that efficiency of human language use is founded on predictions at both syntactic and lexical levels. Previous models of human prediction exploiting LLMs have used an information theoretic measure called \textit{surprisal}, with success on naturalistic text in a wide variety of languages, but under-performance on challenging text such as garden path sentences. This paper introduces a novel framework that combines the lexical predictions of an LLM with the syntactic structures provided by a dependency parser. The framework gives rise to an \textit{Incompatibility Fraction}. When tested on two garden path datasets, it correlated well with human reading times, distinguished between easy and hard garden paths, and outperformed surprisal.", }
Psycholinguistic experiments reveal that efficiency of human language use is founded on predictions at both syntactic and lexical levels. Previous models of human prediction exploiting LLMs have used an information theoretic measure called \textit{surprisal}, with success on naturalistic text in a wide variety of languages, but under-performance on challenging text such as garden path sentences. This paper introduces a novel framework that combines the lexical predictions of an LLM with the syntactic structures provided by a dependency parser. The framework gives rise to an \textit{Incompatibility Fraction}. When tested on two garden path datasets, it correlated well with human reading times, distinguished between easy and hard garden paths, and outperformed surprisal.
[ "Wang, Daphne", "Sadrzadeh, Mehrnoosh", "Stanojevi{\\'c}, Milo{\\v{s}}", "Chow, Wing-Yee", "Breheny, Richard" ]
How can large language models become more human?
cmcl-1.14
Poster
2407.01878v2
https://aclanthology.org/2024.cmcl-1.15.bib
@inproceedings{anh-etal-2024-morphology, title = "Morphology Matters: Probing the Cross-linguistic Morphological Generalization Abilities of Large Language Models through a Wug Test", author = "Anh, Dang and Raviv, Limor and Galke, Lukas", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.15", pages = "177--188", abstract = "We develop a multilingual version of the Wug Test, an artificial word completion experiment that is typically used to test the morphological knowledge of children, and apply it to the GPT family of large language models (LLMs). LLMs{'} performance on this test was evaluated by native speakers of six different languages, who judged whether the inflected and derived forms generated by the models conform to the morphological rules of their language. Our results show that LLMs can generalize their morphological knowledge to new, unfamiliar words, but that their success in generating the {``}correct{''} generalization (as judged by native human speakers) is predicted by a language{'}s morphological complexity (specifically, integrative complexity). We further find that the amount of training data has surprisingly little effect on LLMs{'} morphological generalization abilities within the scope of the analyzed languages. These findings highlight that {``}morphology matters{''}, and have important implications for improving low-resource language modeling.", }
We develop a multilingual version of the Wug Test, an artificial word completion experiment that is typically used to test the morphological knowledge of children, and apply it to the GPT family of large language models (LLMs). LLMs{'} performance on this test was evaluated by native speakers of six different languages, who judged whether the inflected and derived forms generated by the models conform to the morphological rules of their language. Our results show that LLMs can generalize their morphological knowledge to new, unfamiliar words, but that their success in generating the {``}correct{''} generalization (as judged by native human speakers) is predicted by a language{'}s morphological complexity (specifically, integrative complexity). We further find that the amount of training data has surprisingly little effect on LLMs{'} morphological generalization abilities within the scope of the analyzed languages. These findings highlight that {``}morphology matters{''}, and have important implications for improving low-resource language modeling.
[ "Anh, Dang", "Raviv, Limor", "Galke, Lukas" ]
Morphology Matters: Probing the Cross-linguistic Morphological Generalization Abilities of Large Language Models through a Wug Test
cmcl-1.15
Poster
2310.15113v2
https://aclanthology.org/2024.cmcl-1.16.bib
@inproceedings{qiu-etal-2024-evaluating, title = "Evaluating Grammatical Well-Formedness in Large Language Models: A Comparative Study with Human Judgments", author = "Qiu, Zhuang and Duan, Xufeng and Cai, Zhenguang", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.16", pages = "189--198", abstract = "Research in artificial intelligence has witnessed the surge of large language models (LLMs) demonstrating improved performance in various natural language processing tasks. This has sparked significant discussions about the extent to which large language models emulate human linguistic cognition and usage. This study delves into the representation of grammatical well-formedness in LLMs, which is a critical aspect of linguistic knowledge. In three preregistered experiments, we collected grammaticality judgment data for over 2400 English sentences with varying structures from ChatGPT and Vicuna, comparing them with human judgment data. The results reveal substantial alignment in the assessment of grammatical correctness between LLMs and human judgments, albeit with LLMs often showing more conservative judgments for grammatical correctness or incorrectness.", }
Research in artificial intelligence has witnessed the surge of large language models (LLMs) demonstrating improved performance in various natural language processing tasks. This has sparked significant discussions about the extent to which large language models emulate human linguistic cognition and usage. This study delves into the representation of grammatical well-formedness in LLMs, which is a critical aspect of linguistic knowledge. In three preregistered experiments, we collected grammaticality judgment data for over 2400 English sentences with varying structures from ChatGPT and Vicuna, comparing them with human judgment data. The results reveal substantial alignment in the assessment of grammatical correctness between LLMs and human judgments, albeit with LLMs often showing more conservative judgments for grammatical correctness or incorrectness.
[ "Qiu, Zhuang", "Duan, Xufeng", "Cai, Zhenguang" ]
Evaluating Grammatical Well-Formedness in Large Language Models: A Comparative Study with Human Judgments
cmcl-1.16
Poster
2402.01676v1
https://aclanthology.org/2024.cmcl-1.17.bib
@inproceedings{verhoef-etal-2024-kiki, title = "What does Kiki look like? Cross-modal associations between speech sounds and visual shapes in vision-and-language models", author = "Verhoef, Tessa and Shahrasbi, Kiana and Kouwenhoven, Tom", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.17", pages = "199--213", abstract = "Humans have clear cross-modal preferences when matching certain novel words to visual shapes. Evidence suggests that these preferences play a prominent role in our linguistic processing, language learning, and the origins of signal-meaning mappings. With the rise of multimodal models in AI, such as vision-and-language (VLM) models, it becomes increasingly important to uncover the kinds of visio-linguistic associations these models encode and whether they align with human representations. Informed by experiments with humans, we probe and compare four VLMs for a well-known human cross-modal preference, the bouba-kiki effect. We do not find conclusive evidence for this effect but suggest that results may depend on features of the models, such as architecture design, model size, and training details. Our findings inform discussions on the origins of the bouba-kiki effect in human cognition and future developments of VLMs that align well with human cross-modal associations.", }
Humans have clear cross-modal preferences when matching certain novel words to visual shapes. Evidence suggests that these preferences play a prominent role in our linguistic processing, language learning, and the origins of signal-meaning mappings. With the rise of multimodal models in AI, such as vision-and-language (VLM) models, it becomes increasingly important to uncover the kinds of visio-linguistic associations these models encode and whether they align with human representations. Informed by experiments with humans, we probe and compare four VLMs for a well-known human cross-modal preference, the bouba-kiki effect. We do not find conclusive evidence for this effect but suggest that results may depend on features of the models, such as architecture design, model size, and training details. Our findings inform discussions on the origins of the bouba-kiki effect in human cognition and future developments of VLMs that align well with human cross-modal associations.
[ "Verhoef, Tessa", "Shahrasbi, Kiana", "Kouwenhoven, Tom" ]
What does Kiki look like? Cross-modal associations between speech sounds and visual shapes in vision-and-language models
cmcl-1.17
Poster
2407.17974v1
https://aclanthology.org/2024.cmcl-1.18.bib
@inproceedings{tater-etal-2024-evaluating, title = "Evaluating Semantic Relations in Predicting Textual Labels for Images of Abstract and Concrete Concepts", author = "Tater, Tarun and Schulte Im Walde, Sabine and Frassinelli, Diego", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.18", pages = "214--220", abstract = "This study investigates the performance of SigLIP, a state-of-the-art Vision-Language Model (VLM), in predicting labels for images depicting 1,278 concepts. Our analysis across 300 images per concept shows that the model frequently predicts the exact user-tagged labels, but similarly, it often predicts labels that are semantically related to the exact labels in various ways: synonyms, hypernyms, co-hyponyms, and associated words, particularly for abstract concepts. We then zoom into the diversity of the user tags of images and word associations for abstract versus concrete concepts. Surprisingly, not only abstract but also concrete concepts exhibit significant variability, thus challenging the traditional view that representations of concrete concepts are less diverse.", }
This study investigates the performance of SigLIP, a state-of-the-art Vision-Language Model (VLM), in predicting labels for images depicting 1,278 concepts. Our analysis across 300 images per concept shows that the model frequently predicts the exact user-tagged labels, but similarly, it often predicts labels that are semantically related to the exact labels in various ways: synonyms, hypernyms, co-hyponyms, and associated words, particularly for abstract concepts. We then zoom into the diversity of the user tags of images and word associations for abstract versus concrete concepts. Surprisingly, not only abstract but also concrete concepts exhibit significant variability, thus challenging the traditional view that representations of concrete concepts are less diverse.
[ "Tater, Tarun", "Schulte Im Walde, Sabine", "Frassinelli, Diego" ]
Evaluating Semantic Relations in Predicting Textual Labels for Images of Abstract and Concrete Concepts
cmcl-1.18
Poster
2309.14623v2
https://aclanthology.org/2024.cmcl-1.19.bib
@inproceedings{cain-ryskin-2024-diachronic, title = "Diachronic change in verb usage statistics predicts differences in sentence processing across the lifespan", author = "Cain, Ellis and Ryskin, Rachel", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.19", pages = "221--230", abstract = "Diachronic corpus analyses reveal that syntactic usage patterns change over time. Are these changes reflected in differences in language processing across the human lifespan? We use the attachment of with-prepositional phrases (PPs) as a case study for investigating this question: a with-PP can attach to a verb, describing an instrument with which to perform the action (e.g., Slice the cake [with a knife]), or to a direct object (DO), modifying the noun (e.g., Slice the cake [with the pink frosting]). The relative frequencies of the instrument and modifier constructions differ depending on the verb in the sentence {---} the {`}verb bias{'}. Using two diachronic corpora, Syntgram and CCOHA, we analyzed the co-occurrence statistics of 27 verbs and instrument vs. modifier with-PPs. Between the 1940s and the 2000s, some verbs were more instrument-biased (i.e., more likely to co-occur with with-PPs that attach to the verb than the DO) than others and co-occurrence patterns were more similar for temporally close decades, suggesting subtle diachronic changes in usage patterns. We collected sentence interpretation data probing with-PP attachment preferences in participants ranging in age from 25 to 75. Interpretations of globally ambiguous sentences (e.g., Pet the rabbit with the towel) differed depending on the verb (i.e., some verbs elicit more instrument than modifier interpretations of the PP than others and vice versa) and on the age of the participant. In particular, verbs which became less instrument-biased over time elicited more instrument interpretations among older adults than young adults, suggesting that variation in language comprehension can be in part predicted from the corpus statistics of the time periods that an individual experienced.", }
Diachronic corpus analyses reveal that syntactic usage patterns change over time. Are these changes reflected in differences in language processing across the human lifespan? We use the attachment of with-prepositional phrases (PPs) as a case study for investigating this question: a with-PP can attach to a verb, describing an instrument with which to perform the action (e.g., Slice the cake [with a knife]), or to a direct object (DO), modifying the noun (e.g., Slice the cake [with the pink frosting]). The relative frequencies of the instrument and modifier constructions differ depending on the verb in the sentence {---} the {`}verb bias{'}. Using two diachronic corpora, Syntgram and CCOHA, we analyzed the co-occurrence statistics of 27 verbs and instrument vs. modifier with-PPs. Between the 1940s and the 2000s, some verbs were more instrument-biased (i.e., more likely to co-occur with with-PPs that attach to the verb than the DO) than others and co-occurrence patterns were more similar for temporally close decades, suggesting subtle diachronic changes in usage patterns. We collected sentence interpretation data probing with-PP attachment preferences in participants ranging in age from 25 to 75. Interpretations of globally ambiguous sentences (e.g., Pet the rabbit with the towel) differed depending on the verb (i.e., some verbs elicit more instrument than modifier interpretations of the PP than others and vice versa) and on the age of the participant. In particular, verbs which became less instrument-biased over time elicited more instrument interpretations among older adults than young adults, suggesting that variation in language comprehension can be in part predicted from the corpus statistics of the time periods that an individual experienced.
[ "Cain, Ellis", "Ryskin, Rachel" ]
Diachronic change in verb usage statistics predicts differences in sentence processing across the lifespan
cmcl-1.19
Poster
2205.06321v1
https://aclanthology.org/2024.cmcl-1.20.bib
@inproceedings{sadlier-brown-etal-2024-useful, title = "How Useful is Context, Actually? Comparing {LLM}s and Humans on Discourse Marker Prediction", author = "Sadlier-Brown, Emily and Lou, Millie and Silfverberg, Miikka and Kam, Carla", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.20", pages = "231--241", abstract = "This paper investigates the adverbial discourse particle actually. We compare LLM and human performance on cloze tests involving actually on examples sourced from the Providence Corpus of speech around children. We explore the impact of utterance context on cloze test performance. We find that context is always helpful, though the extent to which additional context is helpful, and what relative placement of context (i.e. before or after the masked word) is most helpful differs for individual models and humans. The best-performing LLM, GPT-4, narrowly outperforms humans. In an additional experiment, we explore cloze performance on synthetic LLM-generated examples, and find that several models vastly outperform humans.", }
This paper investigates the adverbial discourse particle actually. We compare LLM and human performance on cloze tests involving actually on examples sourced from the Providence Corpus of speech around children. We explore the impact of utterance context on cloze test performance. We find that context is always helpful, though the extent to which additional context is helpful, and what relative placement of context (i.e. before or after the masked word) is most helpful differs for individual models and humans. The best-performing LLM, GPT-4, narrowly outperforms humans. In an additional experiment, we explore cloze performance on synthetic LLM-generated examples, and find that several models vastly outperform humans.
[ "Sadlier-Brown, Emily", "Lou, Millie", "Silfverberg, Miikka", "Kam, Carla" ]
How Useful is Context, Actually? Comparing {LLM}s and Humans on Discourse Marker Prediction
cmcl-1.20
Poster
2306.10658v1
https://aclanthology.org/2024.cmcl-1.21.bib
@inproceedings{moisio-etal-2024-llms, title = "{LLM}s{'} morphological analyses of complex {FST}-generated {F}innish words", author = "Moisio, Anssi and Creutz, Mathias and Kurimo, Mikko", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.21", pages = "242--254", abstract = "Rule-based language processing systems have been overshadowed by neural systems in terms of utility, but it remains unclear whether neural NLP systems, in practice, learn the grammar rules that humans use. This work aims to shed light on the issue by evaluating state-of-the-art LLMs in a task of morphological analysis of complex Finnish noun forms. We generate the forms using an FST tool, and they are unlikely to have occurred in the training sets of the LLMs, therefore requiring morphological generalisation capacity. We find that GPT-4-turbo has some difficulties in the task while GPT-3.5-turbo struggles and smaller models Llama2-70B and Poro-34B fail nearly completely.", }
Rule-based language processing systems have been overshadowed by neural systems in terms of utility, but it remains unclear whether neural NLP systems, in practice, learn the grammar rules that humans use. This work aims to shed light on the issue by evaluating state-of-the-art LLMs in a task of morphological analysis of complex Finnish noun forms. We generate the forms using an FST tool, and they are unlikely to have occurred in the training sets of the LLMs, therefore requiring morphological generalisation capacity. We find that GPT-4-turbo has some difficulties in the task while GPT-3.5-turbo struggles and smaller models Llama2-70B and Poro-34B fail nearly completely.
[ "Moisio, Anssi", "Creutz, Mathias", "Kurimo, Mikko" ]
{LLM}s{'} morphological analyses of complex {FST}-generated {F}innish words
cmcl-1.21
Poster
2407.08269v1
https://aclanthology.org/2024.cmcl-1.22.bib
@inproceedings{wu-etal-2024-eye, title = "An Eye Opener Regarding Task-Based Text Gradient Saliency", author = {Wu, Guojun and Bolliger, Lena and Reich, David and J{\"a}ger, Lena}, editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.22", pages = "255--263", abstract = "Eye movements in reading reveal humans{'} cognitive processes involved in language understanding. The duration a reader{'}s eyes fixate on a word has been used as a measure of the visual attention given to that word or its significance to the reader. This study investigates the correlation between the importance attributed to input tokens by language models (LMs) on the one hand and humans, in the form of fixation durations, on the other hand. While previous research on the internal processes of LMs has employed the models{'} attention weights, recent studies have argued in favor of gradient-based methods. Moreover, previous approaches to interpret LMs{'} internals with human gaze have neglected the tasks readers performed during reading, even though psycholinguistic research underlines that reading patterns are task-dependent. We therefore employ a gradient-based saliency method to measure the importance of input tokens when LMs are targeted on specific tasks, and we find that task specificity plays a crucial role in the correlation between human- and model-assigned importance. Our implementation is available at https://github.com/gjwubyron/Scan.", }
Eye movements in reading reveal humans{'} cognitive processes involved in language understanding. The duration a reader{'}s eyes fixate on a word has been used as a measure of the visual attention given to that word or its significance to the reader. This study investigates the correlation between the importance attributed to input tokens by language models (LMs) on the one hand and humans, in the form of fixation durations, on the other hand. While previous research on the internal processes of LMs has employed the models{'} attention weights, recent studies have argued in favor of gradient-based methods. Moreover, previous approaches to interpret LMs{'} internals with human gaze have neglected the tasks readers performed during reading, even though psycholinguistic research underlines that reading patterns are task-dependent. We therefore employ a gradient-based saliency method to measure the importance of input tokens when LMs are targeted on specific tasks, and we find that task specificity plays a crucial role in the correlation between human- and model-assigned importance. Our implementation is available at https://github.com/gjwubyron/Scan.
[ "Wu, Guojun", "Bolliger, Lena", "Reich, David", "J{\\\"a}ger, Lena" ]
An Eye Opener Regarding Task-Based Text Gradient Saliency
cmcl-1.22
Poster
1505.03581v1
https://aclanthology.org/2024.cmcl-1.23.bib
@inproceedings{bonard-cortal-2024-improving, title = "Improving Language Models for Emotion Analysis: Insights from Cognitive Science", author = "Bonard, Constant and Cortal, Gustave", editor = "Kuribayashi, Tatsuki and Rambelli, Giulia and Takmaz, Ece and Wicke, Philipp and Oseki, Yohei", booktitle = "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.cmcl-1.23", pages = "264--277", abstract = "We propose leveraging cognitive science research on emotions and communication to improve language models for emotion analysis. First, we present the main emotion theories in psychology and cognitive science. Then, we introduce the main methods of emotion annotation in natural language processing and their connections to psychological theories. We also present the two main types of analyses of emotional communication in cognitive pragmatics. Finally, based on the cognitive science research presented, we propose directions for improving language models for emotion analysis. We suggest that these research efforts pave the way for constructing new annotation schemes, methods, and a possible benchmark for emotional understanding, considering different facets of human emotion and communication.", }
We propose leveraging cognitive science research on emotions and communication to improve language models for emotion analysis. First, we present the main emotion theories in psychology and cognitive science. Then, we introduce the main methods of emotion annotation in natural language processing and their connections to psychological theories. We also present the two main types of analyses of emotional communication in cognitive pragmatics. Finally, based on the cognitive science research presented, we propose directions for improving language models for emotion analysis. We suggest that these research efforts pave the way for constructing new annotation schemes, methods, and a possible benchmark for emotional understanding, considering different facets of human emotion and communication.
[ "Bonard, Constant", "Cortal, Gustave" ]
Improving Language Models for Emotion Analysis: Insights from Cognitive Science
cmcl-1.23
Poster
2406.10265v1
https://aclanthology.org/2024.conda-1.1.bib
@inproceedings{liu-etal-2024-evaluating, title = "Evaluating {C}hinese Large Language Models on Discipline Knowledge Acquisition via Memorization and Robustness Assessment", author = "Liu, Chuang and Jin, Renren and Steedman, Mark and Xiong, Deyi", editor = "Sainz, Oscar and Garc{\'\i}a Ferrero, Iker and Agirre, Eneko and Ander Campos, Jon and Jacovi, Alon and Elazar, Yanai and Goldberg, Yoav", booktitle = "Proceedings of the 1st Workshop on Data Contamination (CONDA)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.conda-1.1", pages = "1--12", abstract = "Chinese LLMs demonstrate impressive performance on NLP tasks, particularly on discipline knowledge benchmarks, with some results approaching those of GPT-4. Previous research has viewed these advancements as potential outcomes of data contamination or leakage, prompting efforts to create new detection methods and address evaluation issues in LLM benchmarks. However, there has been a lack of comprehensive assessment of the evolution of Chinese LLMs. To address this gap, this paper offers a thorough investigation of Chinese LLMs on discipline knowledge evaluation, delving into the advancements of various LLMs, including a group of related models and others. Specifically, we have conducted six assessments ranging from knowledge memorization to comprehension for robustness, encompassing tasks like predicting incomplete questions and options, identifying behaviors by the contaminational fine-tuning, and answering rephrased questions. Experimental findings indicate a positive correlation between the release time of LLMs and their memorization capabilities, but they struggle with variations in original question-options pairs. Additionally, our findings suggest that question descriptions have a more significant impact on LLMs{'} performance.", }
Chinese LLMs demonstrate impressive performance on NLP tasks, particularly on discipline knowledge benchmarks, with some results approaching those of GPT-4. Previous research has viewed these advancements as potential outcomes of data contamination or leakage, prompting efforts to create new detection methods and address evaluation issues in LLM benchmarks. However, there has been a lack of comprehensive assessment of the evolution of Chinese LLMs. To address this gap, this paper offers a thorough investigation of Chinese LLMs on discipline knowledge evaluation, delving into the advancements of various LLMs, including a group of related models and others. Specifically, we have conducted six assessments ranging from knowledge memorization to comprehension for robustness, encompassing tasks like predicting incomplete questions and options, identifying behaviors by the contaminational fine-tuning, and answering rephrased questions. Experimental findings indicate a positive correlation between the release time of LLMs and their memorization capabilities, but they struggle with variations in original question-options pairs. Additionally, our findings suggest that question descriptions have a more significant impact on LLMs{'} performance.
[ "Liu, Chuang", "Jin, Renren", "Steedman, Mark", "Xiong, Deyi" ]
Evaluating {C}hinese Large Language Models on Discipline Knowledge Acquisition via Memorization and Robustness Assessment
conda-1.1
Poster
2312.16132v2
https://aclanthology.org/2024.conda-1.2.bib
@inproceedings{mehrbakhsh-etal-2024-confounders, title = "Confounders in Instance Variation for the Analysis of Data Contamination", author = "Mehrbakhsh, Behzad and Garigliotti, Dario and Mart{\'\i}nez-Plumed, Fernando and Hernandez-Orallo, Jose", editor = "Sainz, Oscar and Garc{\'\i}a Ferrero, Iker and Agirre, Eneko and Ander Campos, Jon and Jacovi, Alon and Elazar, Yanai and Goldberg, Yoav", booktitle = "Proceedings of the 1st Workshop on Data Contamination (CONDA)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.conda-1.2", pages = "13--21", abstract = "Test contamination is a serious problem for the evaluation of large language models (LLMs) because it leads to the overestimation of their performance and a quick saturation of benchmarks, even before the actual capability is achieved. One strategy to address this issue is the (adversarial) generation of variations, by including different exemplars and different rephrasings of the questions. However, these two interventions can lead to instances that can be more difficult (accumulating on the expected loss of performance by partly removing the contamination) but also to instances that can be less difficult (cancelling the expected loss of performance), which would make contamination undetectable. Understanding these two phenomena in terms of instance difficulty is critical to determine and measure contamination. In this paper we conduct a comprehensive analysis of these two interventions on an addition task with fine-tuned LLAMA-2 models.", }
Test contamination is a serious problem for the evaluation of large language models (LLMs) because it leads to the overestimation of their performance and a quick saturation of benchmarks, even before the actual capability is achieved. One strategy to address this issue is the (adversarial) generation of variations, by including different exemplars and different rephrasings of the questions. However, these two interventions can lead to instances that can be more difficult (accumulating on the expected loss of performance by partly removing the contamination) but also to instances that can be less difficult (cancelling the expected loss of performance), which would make contamination undetectable. Understanding these two phenomena in terms of instance difficulty is critical to determine and measure contamination. In this paper we conduct a comprehensive analysis of these two interventions on an addition task with fine-tuned LLAMA-2 models.
[ "Mehrbakhsh, Behzad", "Garigliotti, Dario", "Mart{\\'\\i}nez-Plumed, Fernando", "Hernandez-Orallo, Jose" ]
Confounders in Instance Variation for the Analysis of Data Contamination
conda-1.2
Poster
2311.01252v1
https://aclanthology.org/2024.conda-1.3.bib
@inproceedings{palavalli-etal-2024-taxonomy, title = "A Taxonomy for Data Contamination in Large Language Models", author = "Palavalli, Medha and Bertsch, Amanda and Gormley, Matthew", editor = "Sainz, Oscar and Garc{\'\i}a Ferrero, Iker and Agirre, Eneko and Ander Campos, Jon and Jacovi, Alon and Elazar, Yanai and Goldberg, Yoav", booktitle = "Proceedings of the 1st Workshop on Data Contamination (CONDA)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.conda-1.3", pages = "22--40", abstract = "Large language models pretrained on extensive web corpora demonstrate remarkable performance across a wide range of downstream tasks. However, a growing concern is data contamination, where evaluation datasets may unintentionally be contained in the pretraining corpus, inflating model performance. Decontamination, the process of detecting and removing such data, is a potential solution; yet these contaminants may originate from altered versions of the test set, evading detection during decontamination. How different types of contamination impact the performance of language models on downstream tasks is not fully understood. We present a taxonomy that categorizes the various types of contamination encountered by LLMs during the pretraining phase and identify which types pose the highest risk. We analyze the impact of contamination on two key NLP tasks{---}summarization and question answering{---}revealing how different types of contamination influence task performance during evaluation.", }
Large language models pretrained on extensive web corpora demonstrate remarkable performance across a wide range of downstream tasks. However, a growing concern is data contamination, where evaluation datasets may unintentionally be contained in the pretraining corpus, inflating model performance. Decontamination, the process of detecting and removing such data, is a potential solution; yet these contaminants may originate from altered versions of the test set, evading detection during decontamination. How different types of contamination impact the performance of language models on downstream tasks is not fully understood. We present a taxonomy that categorizes the various types of contamination encountered by LLMs during the pretraining phase and identify which types pose the highest risk. We analyze the impact of contamination on two key NLP tasks{---}summarization and question answering{---}revealing how different types of contamination influence task performance during evaluation.
[ "Palavalli, Medha", "Bertsch, Amanda", "Gormley, Matthew" ]
A Taxonomy for Data Contamination in Large Language Models
conda-1.3
Poster
2407.08716v1
https://aclanthology.org/2024.conda-1.4.bib
@inproceedings{sainz-etal-2024-data, title = "Data Contamination Report from the 2024 {CONDA} Shared Task", author = "Sainz, Oscar and Garc{\'\i}a-Ferrero, Iker and Jacovi, Alon and Ander Campos, Jon and Elazar, Yanai and Agirre, Eneko and Goldberg, Yoav and Chen, Wei-Lin and Chim, Jenny and Choshen, Leshem and D{'}Amico-Wong, Luca and Dell, Melissa and Fan, Run-Ze and Golchin, Shahriar and Li, Yucheng and Liu, Pengfei and Pahwa, Bhavish and Prabhu, Ameya and Sharma, Suryansh and Silcock, Emily and Solonko, Kateryna and Stap, David and Surdeanu, Mihai and Tseng, Yu-Min and Udandarao, Vishaal and Wang, Zengzhi and Xu, Ruijie and Yang, Jinglin", editor = "Sainz, Oscar and Garc{\'\i}a Ferrero, Iker and Agirre, Eneko and Ander Campos, Jon and Jacovi, Alon and Elazar, Yanai and Goldberg, Yoav", booktitle = "Proceedings of the 1st Workshop on Data Contamination (CONDA)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.conda-1.4", pages = "41--56", abstract = "The 1st Workshop on Data Contamination (CONDA 2024) focuses on all relevant aspects of data contamination in natural language processing, where data contamination is understood as situations where evaluation data is included in pre-training corpora used to train large scale models, compromising evaluation results. The workshop fostered a shared task to collect evidence on data contamination in current available datasets and models. The goal of the shared task and associated database is to assist the community in understanding the extent of the problem and to assist researchers in avoiding reporting evaluation results on known contaminated resources. The shared task provides a structured, centralized public database for the collection of contamination evidence, open to contributions from the community via GitHub pull requests. This first compilation paper is based on 566 reported entries over 91 contaminated sources from a total of 23 contributors. The details of the individual contamination events are available in the platform. The platform continues to be online, open to contributions from the community.", }
The 1st Workshop on Data Contamination (CONDA 2024) focuses on all relevant aspects of data contamination in natural language processing, where data contamination is understood as situations where evaluation data is included in pre-training corpora used to train large scale models, compromising evaluation results. The workshop fostered a shared task to collect evidence on data contamination in current available datasets and models. The goal of the shared task and associated database is to assist the community in understanding the extent of the problem and to assist researchers in avoiding reporting evaluation results on known contaminated resources. The shared task provides a structured, centralized public database for the collection of contamination evidence, open to contributions from the community via GitHub pull requests. This first compilation paper is based on 566 reported entries over 91 contaminated sources from a total of 23 contributors. The details of the individual contamination events are available in the platform. The platform continues to be online, open to contributions from the community.
[ "Sainz, Oscar", "Garc{\\'\\i}a-Ferrero, Iker", "Jacovi, Alon", "Ander Campos, Jon", "Elazar, Yanai", "Agirre, Eneko", "Goldberg, Yoav", "Chen, Wei-Lin", "Chim, Jenny", "Choshen, Leshem", "D{'}Amico-Wong, Luca", "Dell, Melissa", "Fan, Run-Ze", "Golchin, Shahriar", "Li, Yucheng", "Liu, Pengfei", "Pahwa, Bhavish", "Prabhu, Ameya", "Sharma, Suryansh", "Silcock, Emily", "Solonko, Kateryna", "Stap, David", "Surdeanu, Mihai", "Tseng, Yu-Min", "Udandarao, Vishaal", "Wang, Zengzhi", "Xu, Ruijie", "Yang, Jinglin" ]
Data Contamination Report from the 2024 {CONDA} Shared Task
conda-1.4
Poster
2407.21530v2
https://aclanthology.org/2024.fieldmatters-1.1.bib
@inproceedings{koncha-etal-2024-parallel, title = "The Parallel Corpus of {R}ussian and Ruska {R}omani Languages", author = "Koncha, Kirill and Abina, Abina and Tatiana, Kazakova and Rozovskaya, Gloria", editor = "Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Muradoglu, Saliha and Le Ferrand, Eric and Klyachko, Elena and Vylomova, Ekaterina and Shavrina, Tatiana and Tyers, Francis", booktitle = "Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.fieldmatters-1.1", pages = "1--5", abstract = "The paper presents a parallel corpus for the Ruska Romani dialect and Russian language. Ruska Romani is the dialect of Romani language attributed to Ruska Roma, the largest subgroup of Romani people in Russia. The corpus contains the translations of Russian literature into Ruska Romani dialect. The corpus creation involved manual alignment of a small part of translations with original works, fine-tuning a language model on the aligned pairs, and using the fine-tuned model to align the remaining data. Ruska Romani sentences were annotated using a morphological analyzer, with rules crafted for proper nouns and borrowings. The corpus is available in JSON and Russian National Corpus XML formats. It includes 88,742 Russian tokens and 84,635 Ruska Romani tokens, 74,291 of which were grammatically annotated. The corpus could be used for linguistic research, including comparative and diachronic studies, bilingual dictionary creation, stylometry research, and NLP/MT tool development for Ruska Romani.", }
The paper presents a parallel corpus for the Ruska Romani dialect and Russian language. Ruska Romani is the dialect of Romani language attributed to Ruska Roma, the largest subgroup of Romani people in Russia. The corpus contains the translations of Russian literature into Ruska Romani dialect. The corpus creation involved manual alignment of a small part of translations with original works, fine-tuning a language model on the aligned pairs, and using the fine-tuned model to align the remaining data. Ruska Romani sentences were annotated using a morphological analyzer, with rules crafted for proper nouns and borrowings. The corpus is available in JSON and Russian National Corpus XML formats. It includes 88,742 Russian tokens and 84,635 Ruska Romani tokens, 74,291 of which were grammatically annotated. The corpus could be used for linguistic research, including comparative and diachronic studies, bilingual dictionary creation, stylometry research, and NLP/MT tool development for Ruska Romani.
[ "Koncha, Kirill", "Abina, Abina", "Tatiana, Kazakova", "Rozovskaya, Gloria" ]
The Parallel Corpus of {R}ussian and Ruska {R}omani Languages
fieldmatters-1.1
Poster
2307.12282v1
https://aclanthology.org/2024.fieldmatters-1.2.bib
@inproceedings{seo-etal-2024-manwav, title = "{M}an{W}av: The First {M}anchu {ASR} Model", author = "Seo, Jean and Kang, Minha and Byun, SungJoo and Lee, Sangah", editor = "Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Muradoglu, Saliha and Le Ferrand, Eric and Klyachko, Elena and Vylomova, Ekaterina and Shavrina, Tatiana and Tyers, Francis", booktitle = "Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.fieldmatters-1.2", pages = "6--11", abstract = "This study addresses the widening gap in Automatic Speech Recognition (ASR) research between high resource and extremely low resource languages, with a particular focus on Manchu, a severely endangered language. Manchu exemplifies the challenges faced by marginalized linguistic communities in accessing state-of-the-art technologies. In a pioneering effort, we introduce the first-ever Manchu ASR model ManWav, leveraging Wav2Vec2-XLSR-53. The results of the first Manchu ASR are promising, especially when trained with our augmented data. Wav2Vec2-XLSR-53 fine-tuned with augmented data demonstrates a 0.02 drop in CER and 0.13 drop in WER compared to the same base model fine-tuned with original data.", }
This study addresses the widening gap in Automatic Speech Recognition (ASR) research between high resource and extremely low resource languages, with a particular focus on Manchu, a severely endangered language. Manchu exemplifies the challenges faced by marginalized linguistic communities in accessing state-of-the-art technologies. In a pioneering effort, we introduce the first-ever Manchu ASR model ManWav, leveraging Wav2Vec2-XLSR-53. The results of the first Manchu ASR are promising, especially when trained with our augmented data. Wav2Vec2-XLSR-53 fine-tuned with augmented data demonstrates a 0.02 drop in CER and 0.13 drop in WER compared to the same base model fine-tuned with original data.
[ "Seo, Jean", "Kang, Minha", "Byun, SungJoo", "Lee, Sangah" ]
{M}an{W}av: The First {M}anchu {ASR} Model
fieldmatters-1.2
Poster
2406.13502v1
https://aclanthology.org/2024.fieldmatters-1.3.bib
@inproceedings{adler-etal-2024-user, title = "User-Centered Design of Digital Tools for Sociolinguistic Studies in Under-Resourced Languages", author = "Adler, Jonas and Scholle, Carsten and Buschek, Daniel and Brandizzi, Nicolo{'} and Adnan, Muhadj", editor = "Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Muradoglu, Saliha and Le Ferrand, Eric and Klyachko, Elena and Vylomova, Ekaterina and Shavrina, Tatiana and Tyers, Francis", booktitle = "Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.fieldmatters-1.3", pages = "12--27", abstract = "Investigating language variation is a core aspect of sociolinguistics, especially through the use of linguistic corpora. Collecting and analyzing spoken language in text-based corpora can be time-consuming and error-prone, especially for under-resourced languages with limited software assistance. This paper explores the language variation research process using a User-Centered Design (UCD) approach from the field of Human-Computer Interaction (HCI), offering guidelines for the development of digital tools for sociolinguists. We interviewed four researchers, observed their workflows and software usage, and analyzed the data using Grounded Theory. This revealed key challenges in manual tasks, software assistance, and data management. Based on these insights, we identified a set of requirements that future tools should meet to be valuable for researchers in this domain. The paper concludes by proposing design concepts with sketches and prototypes based on the identified requirements. These concepts aim to guide the implementation of a fully functional, open-source tool. This work presents an interdisciplinary approach between sociolinguistics and HCI by emphasizing the practical aspects of research that are often overlooked.", }
Investigating language variation is a core aspect of sociolinguistics, especially through the use of linguistic corpora. Collecting and analyzing spoken language in text-based corpora can be time-consuming and error-prone, especially for under-resourced languages with limited software assistance. This paper explores the language variation research process using a User-Centered Design (UCD) approach from the field of Human-Computer Interaction (HCI), offering guidelines for the development of digital tools for sociolinguists. We interviewed four researchers, observed their workflows and software usage, and analyzed the data using Grounded Theory. This revealed key challenges in manual tasks, software assistance, and data management. Based on these insights, we identified a set of requirements that future tools should meet to be valuable for researchers in this domain. The paper concludes by proposing design concepts with sketches and prototypes based on the identified requirements. These concepts aim to guide the implementation of a fully functional, open-source tool. This work presents an interdisciplinary approach between sociolinguistics and HCI by emphasizing the practical aspects of research that are often overlooked.
[ "Adler, Jonas", "Scholle, Carsten", "Buschek, Daniel", "Brandizzi, Nicolo{'}", "Adnan, Muhadj" ]
User-Centered Design of Digital Tools for Sociolinguistic Studies in Under-Resourced Languages
fieldmatters-1.3
Poster
1508.07544v2
https://aclanthology.org/2024.fieldmatters-1.4.bib
@inproceedings{spencer-2024-documenting, title = "Documenting Endangered Languages with {L}ang{D}oc: A Wordlist-Based System and A Case Study on {M}oklen", author = "Spencer, Piyapath", editor = "Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Muradoglu, Saliha and Le Ferrand, Eric and Klyachko, Elena and Vylomova, Ekaterina and Shavrina, Tatiana and Tyers, Francis", booktitle = "Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.fieldmatters-1.4", pages = "28--36", abstract = "Language documentation, especially languages lacking standardised writing systems, is a laborious and time-consuming process. This paper introduces LangDoc, a comprehensive system designed to address challenges and improve the efficiency and accuracy of language documentation projects. LangDoc offers several features, including tools for managing, recording, and reviewing the collected data. It operates both online and offline, crucial for fieldwork in remote locations. The paper also presents a comparative analysis demonstrating LangDoc{'}s efficiency compared to other methods. A case study of the Moklen language documentation project demonstrates how the features address the specific challenges of working with endangered languages and remote communities. Future development areas include integrating with NLP tools for advanced linguistic analysis and emphasising its potential to support the preservation of language diversity.", }
Language documentation, especially languages lacking standardised writing systems, is a laborious and time-consuming process. This paper introduces LangDoc, a comprehensive system designed to address challenges and improve the efficiency and accuracy of language documentation projects. LangDoc offers several features, including tools for managing, recording, and reviewing the collected data. It operates both online and offline, crucial for fieldwork in remote locations. The paper also presents a comparative analysis demonstrating LangDoc{'}s efficiency compared to other methods. A case study of the Moklen language documentation project demonstrates how the features address the specific challenges of working with endangered languages and remote communities. Future development areas include integrating with NLP tools for advanced linguistic analysis and emphasising its potential to support the preservation of language diversity.
[ "Spencer, Piyapath" ]
Documenting Endangered Languages with {L}ang{D}oc: A Wordlist-Based System and A Case Study on {M}oklen
fieldmatters-1.4
Poster
2302.13410v1
https://aclanthology.org/2024.fieldmatters-1.5.bib
@inproceedings{maspong-etal-2024-leveraging, title = "Leveraging Deep Learning to Shed Light on Tones of an Endangered Language: A Case Study of {M}oklen", author = "Maspong, Sireemas and Burroni, Francesco and Sukanchanon, Teerawee and Pornpottanamas, Warunsiri and Pittayaporn, Pittayawat", editor = "Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Muradoglu, Saliha and Le Ferrand, Eric and Klyachko, Elena and Vylomova, Ekaterina and Shavrina, Tatiana and Tyers, Francis", booktitle = "Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.fieldmatters-1.5", pages = "37--42", abstract = "Moklen, a tonal Austronesian language spoken in Thailand, exhibits two tones with unbalanced distributions. We employed machine learning techniques for time-series classification to investigate its acoustic properties. Our analysis reveals that a synergy between pitch and vowel quality is crucial for tone distinction, as the model trained with these features achieved the highest accuracy.", }
Moklen, a tonal Austronesian language spoken in Thailand, exhibits two tones with unbalanced distributions. We employed machine learning techniques for time-series classification to investigate its acoustic properties. Our analysis reveals that a synergy between pitch and vowel quality is crucial for tone distinction, as the model trained with these features achieved the highest accuracy.
[ "Maspong, Sireemas", "Burroni, Francesco", "Sukanchanon, Teerawee", "Pornpottanamas, Warunsiri", "Pittayaporn, Pittayawat" ]
Leveraging Deep Learning to Shed Light on Tones of an Endangered Language: A Case Study of {M}oklen
fieldmatters-1.5
Poster
1803.02952v2
https://aclanthology.org/2024.fieldmatters-1.6.bib
@inproceedings{fischbach-2024-comparative, title = "A Comparative Analysis of Speaker Diarization Models: Creating a Dataset for {G}erman Dialectal Speech", author = "Fischbach, Lea", editor = "Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Muradoglu, Saliha and Le Ferrand, Eric and Klyachko, Elena and Vylomova, Ekaterina and Shavrina, Tatiana and Tyers, Francis", booktitle = "Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.fieldmatters-1.6", pages = "43--51", abstract = "Speaker diarization is a critical task in the field of computer science, aiming to assign timestamps and speaker labels to audio segments. The aim of the tests in this publication is to find a pretrained speaker diarization pipeline capable of distinguishing dialectal speakers from each other and from an explorer. To achieve this, three pipelines, namely Pyannote, CLEAVER and NeMo, are tested and compared across various segmentation and parameterization strategies. The study considers multiple scenarios, such as the impact of threshold values, overlap handling, and minimum duration parameters, on classification accuracy. Additionally, this study aims to create a dataset for German dialect identification (DID) based on the findings from this research.", }
Speaker diarization is a critical task in the field of computer science, aiming to assign timestamps and speaker labels to audio segments. The aim of the tests in this publication is to find a pretrained speaker diarization pipeline capable of distinguishing dialectal speakers from each other and from an explorer. To achieve this, three pipelines, namely Pyannote, CLEAVER and NeMo, are tested and compared across various segmentation and parameterization strategies. The study considers multiple scenarios, such as the impact of threshold values, overlap handling, and minimum duration parameters, on classification accuracy. Additionally, this study aims to create a dataset for German dialect identification (DID) based on the findings from this research.
[ "Fischbach, Lea" ]
A Comparative Analysis of Speaker Diarization Models: Creating a Dataset for {G}erman Dialectal Speech
fieldmatters-1.6
Poster
2205.09501v1
https://aclanthology.org/2024.fieldmatters-1.7.bib
@inproceedings{parra-2024-noise, title = "Noise Be Gone: Does Speech Enhancement Distort Linguistic Nuances?", author = "Parra, I{\~n}igo", editor = "Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Muradoglu, Saliha and Le Ferrand, Eric and Klyachko, Elena and Vylomova, Ekaterina and Shavrina, Tatiana and Tyers, Francis", booktitle = "Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.fieldmatters-1.7", pages = "52--60", abstract = "This study evaluates the impact of speech enhancement (SE) techniques on linguistic research, focusing on their ability to maintain essential acoustic characteristics in enhanced audio without introducing significant artifacts. Through a sociophonetic analysis of Peninsular and Peruvian Spanish speakers, using both original and enhanced recordings, we demonstrate that SE effectively preserves critical speech nuances such as voicing and vowel quality. This supports the use of SE in improving the quality of speech samples. This study marks an initial effort to assess SE{'}s reliability in language studies and proposes a methodology for enhancing low-quality audio corpora of under-resourced languages.", }
This study evaluates the impact of speech enhancement (SE) techniques on linguistic research, focusing on their ability to maintain essential acoustic characteristics in enhanced audio without introducing significant artifacts. Through a sociophonetic analysis of Peninsular and Peruvian Spanish speakers, using both original and enhanced recordings, we demonstrate that SE effectively preserves critical speech nuances such as voicing and vowel quality. This supports the use of SE in improving the quality of speech samples. This study marks an initial effort to assess SE{'}s reliability in language studies and proposes a methodology for enhancing low-quality audio corpora of under-resourced languages.
[ "Parra, I{\\~n}igo" ]
Noise Be Gone: Does Speech Enhancement Distort Linguistic Nuances?
fieldmatters-1.7
Poster
2305.18739v1
https://aclanthology.org/2024.fieldmatters-1.8.bib
@inproceedings{jones-etal-2024-comparing, title = "Comparing {K}aldi-Based Pipeline Elpis and Whisper for {\v{C}}akavian Transcription", author = "Jones, Austin and Zhang, Shulin and Hale, John and Renwick, Margaret and Vrzic, Zvjezdana and Langston, Keith", editor = "Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Muradoglu, Saliha and Le Ferrand, Eric and Klyachko, Elena and Vylomova, Ekaterina and Shavrina, Tatiana and Tyers, Francis", booktitle = "Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.fieldmatters-1.8", pages = "61--68", abstract = "Automatic speech recognition (ASR) has the potential to accelerate the documentation of endangered languages, but the dearth of resources poses a major obstacle. {\v{C}}akavian, an endangered variety spoken primarily in Croatia, is a case in point, lacking transcription tools that could aid documentation efforts. We compare training a new ASR model on a limited dataset using the Kaldi-based ASR pipeline Elpis to using the same dataset to adapt the transformer-based pretrained multilingual model Whisper, to determine which is more practical in the documentation context. Results show that Whisper outperformed Elpis, achieving the lowest average Word Error Rate (WER) of 57.3{\%} and median WER of 35.48{\%}. While Elpis offers a less computationally expensive model and friendlier user experience, Whisper appears better at adapting to our collected {\v{C}}akavian data.", }
Automatic speech recognition (ASR) has the potential to accelerate the documentation of endangered languages, but the dearth of resources poses a major obstacle. {\v{C}}akavian, an endangered variety spoken primarily in Croatia, is a case in point, lacking transcription tools that could aid documentation efforts. We compare training a new ASR model on a limited dataset using the Kaldi-based ASR pipeline Elpis to using the same dataset to adapt the transformer-based pretrained multilingual model Whisper, to determine which is more practical in the documentation context. Results show that Whisper outperformed Elpis, achieving the lowest average Word Error Rate (WER) of 57.3{\%} and median WER of 35.48{\%}. While Elpis offers a less computationally expensive model and friendlier user experience, Whisper appears better at adapting to our collected {\v{C}}akavian data.
[ "Jones, Austin", "Zhang, Shulin", "Hale, John", "Renwick, Margaret", "Vrzic, Zvjezdana", "Langston, Keith" ]
Comparing {K}aldi-Based Pipeline Elpis and Whisper for {\v{C}}akavian Transcription
fieldmatters-1.8
Poster
2101.03027v2
https://aclanthology.org/2024.fieldmatters-1.9.bib
@inproceedings{layacan-etal-2024-zero, title = "Zero-shot Cross-lingual {POS} Tagging for {F}ilipino", author = "Layacan, Jimson and Flores, Isaiah Edri W. and Tan, Katrina and Estuar, Ma. Regina E. and Montalan, Jann and De Leon, Marlene M.", editor = "Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Muradoglu, Saliha and Le Ferrand, Eric and Klyachko, Elena and Vylomova, Ekaterina and Shavrina, Tatiana and Tyers, Francis", booktitle = "Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.fieldmatters-1.9", pages = "69--77", abstract = "Supervised learning approaches in NLP, exemplified by POS tagging, rely heavily on the presence of large amounts of annotated data. However, acquiring such data often requires a significant amount of resources and incurs high costs. In this work, we explore zero-shot cross-lingual transfer learning to address data scarcity issues in Filipino POS tagging, particularly focusing on optimizing source language selection. Our zero-shot approach demonstrates superior performance compared to previous studies, with top-performing fine-tuned PLMs achieving F1 scores as high as 79.10{\%}. The analysis reveals moderate correlations between cross-lingual transfer performance and specific linguistic distances{--}featural, inventory, and syntactic{--}suggesting that source languages with these features closer to Filipino provide better results. We identify tokenizer optimization as a key challenge, as PLM tokenization sometimes fails to align with meaningful representations, thus hindering POS tagging performance.", }
Supervised learning approaches in NLP, exemplified by POS tagging, rely heavily on the presence of large amounts of annotated data. However, acquiring such data often requires a significant amount of resources and incurs high costs. In this work, we explore zero-shot cross-lingual transfer learning to address data scarcity issues in Filipino POS tagging, particularly focusing on optimizing source language selection. Our zero-shot approach demonstrates superior performance compared to previous studies, with top-performing fine-tuned PLMs achieving F1 scores as high as 79.10{\%}. The analysis reveals moderate correlations between cross-lingual transfer performance and specific linguistic distances{--}featural, inventory, and syntactic{--}suggesting that source languages with these features closer to Filipino provide better results. We identify tokenizer optimization as a key challenge, as PLM tokenization sometimes fails to align with meaningful representations, thus hindering POS tagging performance.
[ "Layacan, Jimson", "Flores, Isaiah Edri W.", "Tan, Katrina", "Estuar, Ma. Regina E.", "Montalan, Jann", "De Leon, Marlene M." ]
Zero-shot Cross-lingual {POS} Tagging for {F}ilipino
fieldmatters-1.9
Poster
2201.12793v1
https://aclanthology.org/2024.gebnlp-1.1.bib
@inproceedings{wang-demberg-2024-parameter, title = "A Parameter-Efficient Multi-Objective Approach to Mitigate Stereotypical Bias in Language Models", author = "Wang, Yifan and Demberg, Vera", editor = "Fale{\'n}ska, Agnieszka and Basta, Christine and Costa-juss{\`a}, Marta and Goldfarb-Tarrant, Seraphina and Nozza, Debora", booktitle = "Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.gebnlp-1.1", pages = "1--19", abstract = "Pre-trained language models have shown impressive abilities of understanding and generating natural languages. However, they typically inherit undesired human-like bias and stereotypes from training data, which raises concerns about putting these models into use in real-world scenarios. Although prior research has proposed to reduce bias using different fairness objectives, they usually fail to capture different representations of bias and, therefore, struggle with fully debiasing models. In this work, we introduce a multi-objective probability alignment approach to overcome current challenges by incorporating multiple debiasing losses to locate and penalize bias in different forms. Compared to existing methods, our proposed method can more effectively and comprehensively reduce stereotypical bias, and maintains the language ability of pre-trained models at the same time. Besides, we adopt prefix-tuning to optimize fairness objectives, and results show that it can achieve better bias removal than full fine-tuning while requiring much fewer computational resources. Our code and data are available at https://github.com/Ewanwong/debias{\_}NLG.", }
Pre-trained language models have shown impressive abilities of understanding and generating natural languages. However, they typically inherit undesired human-like bias and stereotypes from training data, which raises concerns about putting these models into use in real-world scenarios. Although prior research has proposed to reduce bias using different fairness objectives, they usually fail to capture different representations of bias and, therefore, struggle with fully debiasing models. In this work, we introduce a multi-objective probability alignment approach to overcome current challenges by incorporating multiple debiasing losses to locate and penalize bias in different forms. Compared to existing methods, our proposed method can more effectively and comprehensively reduce stereotypical bias, and maintains the language ability of pre-trained models at the same time. Besides, we adopt prefix-tuning to optimize fairness objectives, and results show that it can achieve better bias removal than full fine-tuning while requiring much fewer computational resources. Our code and data are available at https://github.com/Ewanwong/debias{\_}NLG.
[ "Wang, Yifan", "Demberg, Vera" ]
A Parameter-Efficient Multi-Objective Approach to Mitigate Stereotypical Bias in Language Models
gebnlp-1.1
Poster
2404.01768v1
https://aclanthology.org/2024.gebnlp-1.2.bib
@inproceedings{zhu-etal-2024-plms, title = "Do {PLM}s and Annotators Share the Same Gender Bias? Definition, Dataset, and Framework of Contextualized Gender Bias", author = "Zhu, Shucheng and Du, Bingjie and Zhao, Jishun and Liu, Ying and Liu, Pengyuan", editor = "Fale{\'n}ska, Agnieszka and Basta, Christine and Costa-juss{\`a}, Marta and Goldfarb-Tarrant, Seraphina and Nozza, Debora", booktitle = "Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.gebnlp-1.2", pages = "20--32", abstract = "Pre-trained language models (PLMs) have achieved success in a variety of natural language processing (NLP) tasks. However, PLMs also introduce some disquieting safety problems, such as gender bias. Gender bias is an extremely complex issue, because different individuals may hold disparate opinions on whether the same sentence expresses harmful bias, especially those seemingly neutral or positive. This paper first defines the concept of contextualized gender bias (CGB), which makes it easy to measure implicit gender bias in both PLMs and annotators. We then construct CGBDataset, which contains 20k natural sentences with gendered words, from Chinese news. Similar to the task of masked language models, gendered words are masked for PLMs and annotators to judge whether a male word or a female word is more suitable. Then, we introduce CGBFrame to measure the gender bias of annotators. By comparing the results measured by PLMs and annotators, we find that though there are differences on the choices made by PLMs and annotators, they show significant consistency in general.", }
Pre-trained language models (PLMs) have achieved success in a variety of natural language processing (NLP) tasks. However, PLMs also introduce some disquieting safety problems, such as gender bias. Gender bias is an extremely complex issue, because different individuals may hold disparate opinions on whether the same sentence expresses harmful bias, especially those seemingly neutral or positive. This paper first defines the concept of contextualized gender bias (CGB), which makes it easy to measure implicit gender bias in both PLMs and annotators. We then construct CGBDataset, which contains 20k natural sentences with gendered words, from Chinese news. Similar to the task of masked language models, gendered words are masked for PLMs and annotators to judge whether a male word or a female word is more suitable. Then, we introduce CGBFrame to measure the gender bias of annotators. By comparing the results measured by PLMs and annotators, we find that though there are differences on the choices made by PLMs and annotators, they show significant consistency in general.
[ "Zhu, Shucheng", "Du, Bingjie", "Zhao, Jishun", "Liu, Ying", "Liu, Pengyuan" ]
Do {PLM}s and Annotators Share the Same Gender Bias? Definition, Dataset, and Framework of Contextualized Gender Bias
gebnlp-1.2
Poster
1912.00578v1
https://aclanthology.org/2024.gebnlp-1.3.bib
@inproceedings{devinney-etal-2024-dont, title = "We Don{'}t Talk About That: Case Studies on Intersectional Analysis of Social Bias in Large Language Models", author = {Devinney, Hannah and Bj{\"o}rklund, Jenny and Bj{\"o}rklund, Henrik}, editor = "Fale{\'n}ska, Agnieszka and Basta, Christine and Costa-juss{\`a}, Marta and Goldfarb-Tarrant, Seraphina and Nozza, Debora", booktitle = "Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.gebnlp-1.3", pages = "33--44", abstract = "Despite concerns that Large Language Models (LLMs) are vectors for reproducing and amplifying social biases such as sexism, transphobia, islamophobia, and racism, there is a lack of work qualitatively analyzing \textit{how} such patterns of bias are generated by LLMs. We use mixed-methods approaches and apply a feminist, intersectional lens to the problem across two language domains, Swedish and English, by generating narrative texts using LLMs. We find that hegemonic norms are consistently reproduced; dominant identities are often treated as {`}default{'}; and discussion of identity itself may be considered {`}inappropriate{'} by the safety features applied to some LLMs. Due to the differing behaviors of models, depending both on their design and the language they are trained on, we observe that strategies of identifying {``}bias{''} must be adapted to individual models and their socio-cultural contexts.{\_}Content warning: This research concerns the identification of harms, including stereotyping, denigration, and erasure of minoritized groups. Examples, including transphobic and racist content, are included and discussed.{\_}", }
Despite concerns that Large Language Models (LLMs) are vectors for reproducing and amplifying social biases such as sexism, transphobia, islamophobia, and racism, there is a lack of work qualitatively analyzing \textit{how} such patterns of bias are generated by LLMs. We use mixed-methods approaches and apply a feminist, intersectional lens to the problem across two language domains, Swedish and English, by generating narrative texts using LLMs. We find that hegemonic norms are consistently reproduced; dominant identities are often treated as {`}default{'}; and discussion of identity itself may be considered {`}inappropriate{'} by the safety features applied to some LLMs. Due to the differing behaviors of models, depending both on their design and the language they are trained on, we observe that strategies of identifying {``}bias{''} must be adapted to individual models and their socio-cultural contexts.{\_}Content warning: This research concerns the identification of harms, including stereotyping, denigration, and erasure of minoritized groups. Examples, including transphobic and racist content, are included and discussed.{\_}
[ "Devinney, Hannah", "Bj{\\\"o}rklund, Jenny", "Bj{\\\"o}rklund, Henrik" ]
We Don{'}t Talk About That: Case Studies on Intersectional Analysis of Social Bias in Large Language Models
gebnlp-1.3
Poster
2403.14896v1
https://aclanthology.org/2024.gebnlp-1.4.bib
@inproceedings{jeyaraj-delany-2024-explainable, title = "An Explainable Approach to Understanding Gender Stereotype Text", author = "Jeyaraj, Manuela and Delany, Sarah", editor = "Fale{\'n}ska, Agnieszka and Basta, Christine and Costa-juss{\`a}, Marta and Goldfarb-Tarrant, Seraphina and Nozza, Debora", booktitle = "Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.gebnlp-1.4", pages = "45--59", abstract = "Gender Stereotypes refer to the widely held beliefs and assumptions about the typical traits, behaviours, and roles associated with a collective group of individuals of a particular gender in society. These typical beliefs about how people of a particular gender are described in text can cause harmful effects to individuals leading to unfair treatment. In this research, the aim is to identify the words and language constructs that can influence a text to be considered a gender stereotype. To do so, a transformer model with attention is fine-tuned for gender stereotype detection. Thereafter, words/language constructs used for the model{'}s decision are identified using a combined use of attention- and SHAP (SHapley Additive exPlanations)-based explainable approaches. Results show that adjectives and verbs were highly influential in predicting gender stereotypes. Furthermore, applying sentiment analysis showed that words describing male gender stereotypes were more positive than those used for female gender stereotypes.", }
Gender Stereotypes refer to the widely held beliefs and assumptions about the typical traits, behaviours, and roles associated with a collective group of individuals of a particular gender in society. These typical beliefs about how people of a particular gender are described in text can cause harmful effects to individuals leading to unfair treatment. In this research, the aim is to identify the words and language constructs that can influence a text to be considered a gender stereotype. To do so, a transformer model with attention is fine-tuned for gender stereotype detection. Thereafter, words/language constructs used for the model{'}s decision are identified using a combined use of attention- and SHAP (SHapley Additive exPlanations)-based explainable approaches. Results show that adjectives and verbs were highly influential in predicting gender stereotypes. Furthermore, applying sentiment analysis showed that words describing male gender stereotypes were more positive than those used for female gender stereotypes.
[ "Jeyaraj, Manuela", "Delany, Sarah" ]
An Explainable Approach to Understanding Gender Stereotype Text
gebnlp-1.4
Poster
2311.00306v1